# Fixpoint Theory - Upside Down
Paolo Baldan (Università di Padova, Italy) <EMAIL_ADDRESS>, Richard Eggert (Universität Duisburg-Essen, Germany) <EMAIL_ADDRESS>, Barbara König (Universität Duisburg-Essen, Germany) <EMAIL_ADDRESS> and Tommaso Padoan (Università di Padova, Italy) <EMAIL_ADDRESS>
###### Abstract.
Knaster-Tarski’s theorem, characterising the greatest fixpoint of a monotone
function over a complete lattice as the largest post-fixpoint, naturally leads
to the so-called coinduction proof principle for showing that some element is
below the greatest fixpoint (e.g., for providing bisimilarity witnesses). The
dual principle, used for showing that an element is above the least fixpoint,
is related to inductive invariants. In this paper we provide proof rules which
are similar in spirit but for showing that an element is above the greatest
fixpoint or, dually, below the least fixpoint. The theory is developed for
non-expansive monotone functions on suitable lattices of the form
$\mathbb{M}^{Y}$, where $Y$ is a finite set and $\mathbb{M}$ an MV-algebra,
and it is based on the construction of (finitary) approximations of the
original functions. We show that our theory applies to a wide range of
examples, including termination probabilities, metric transition systems,
behavioural distances for probabilistic automata and bisimilarity. Moreover, it allows us to derive original algorithms for solving simple stochastic games.
###### Key words and phrases:
Fixpoints, Knaster-Tarski theorem, MV-algebras, non-expansive functions,
bisimilarity, stochastic games
This work is supported by the MIUR project PRIN 2017 ASPRA, Grant No. 201784YSZ5, and the DFG projects BEMEGA and SpeQt.
## 1\. Introduction
Fixpoints are ubiquitous in computer science as they allow us to give meaning to inductive and coinductive definitions (see, e.g.,
[San:IntroBisCoind, NNH:PPA]). A monotone function $f:L\to L$ over a complete
lattice $(L,\sqsubseteq)$, by Knaster-Tarski’s theorem [t:lattice-fixed-
point], admits a least fixpoint $\mu f$ and greatest fixpoint $\nu f$ which
are characterised as the least pre-fixpoint and the greatest post-fixpoint,
respectively. This immediately gives well-known proof principles for showing
that a lattice element $l\in L$ is _below_ $\nu f$ or _above_ $\mu f$
$\frac{l\sqsubseteq f(l)}{l\sqsubseteq\nu f}\qquad\qquad\frac{f(l)\sqsubseteq
l}{\mu f\sqsubseteq l}$
On the other hand, showing that a given element $l$ is _above_ $\nu f$ or
_below_ $\mu f$ is more difficult. One can think of using the characterisation
of least and largest fixpoints via Kleene’s iteration. E.g., the largest
fixpoint is the least element of the (possibly transfinite) descending chain
obtained by iterating $f$ from $\top$. Then, by showing that $f^{i}(\top)\sqsubseteq l$ for some $i$, one concludes that $\nu f\sqsubseteq l$. This proof principle is related to the notion of ranking functions.
However, this is a less satisfying notion of witness since $f$ has to be
applied $i$ times, and this can be inefficient or unfeasible when $i$ is an
infinite ordinal.
The aim of this paper is to present an alternative proof rule for this purpose
for functions over lattices of the form $L=\mathbb{M}^{Y}$ where $Y$ is a
finite set and $\mathbb{M}$ is an MV-chain, i.e., a totally ordered complete
lattice endowed with suitable operations of sum and complement. This allows us
to capture several examples, ranging from ordinary relations for dealing with
bisimilarity to behavioural metrics, termination probabilities and simple
stochastic games.
Assume $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ monotone and consider the question
of proving that some fixpoint $a:Y\to\mathbb{M}$ is the largest fixpoint $\nu
f$. The idea is to show that there is no “slack” or “wiggle room” in the
fixpoint $a$ that would allow us to further increase it. This is done by
associating with every $a:Y\to\mathbb{M}$ a function $f^{\\#}_{a}$ on
$\mathbf{2}^{Y}$ whose greatest fixpoint gives us the elements of $Y$ where we
have a potential for increasing $a$ by adding a constant. If no such potential
exists, i.e. $\nu f^{\\#}_{a}$ is empty, we conclude that $a$ is $\nu f$. A
similar function $f_{\\#}^{a}$ (specifying decrease instead of increase)
exists for the case of least fixpoints. Note that the premise is $\nu
f_{\\#}^{a}=\emptyset$, i.e. the witness remains coinductive. The proof rules
are:
$\frac{f(a)=a\qquad\nu f^{\\#}_{a}=\emptyset}{\nu
f=a}\qquad\frac{f(a)=a\qquad\nu f_{\\#}^{a}=\emptyset}{\mu f=a}$
To apply the rule we compute a greatest fixpoint on $\mathbf{2}^{Y}$,
which is finite, instead of working on the potentially infinite
$\mathbb{M}^{Y}$. The rule does not work for all monotone functions
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$, but we show that whenever $f$ is non-
expansive the rule is valid. Actually, it is not only sound, but also
reversible, i.e., if $a=\nu f$ then $\nu f_{a}^{\\#}=\emptyset$, providing an
if-and-only-if characterisation of whether a given fixpoint corresponds to the
greatest fixpoint.
Quite interestingly, under the same assumptions on $f$, using a restricted
function $f_{a}^{*}$, the rule can be used, more generally, when $a$ is just a
_pre-fixpoint_ ($f(a)\sqsubseteq a$) and it allows us to conclude that $\nu f\sqsubseteq a$. A dual result holds for _post-fixpoints_ in the case of least
fixpoints.
$\frac{f(a)\sqsubseteq a\qquad\nu f^{*}_{a}=\emptyset}{\nu f\sqsubseteq
a}\qquad\frac{a\sqsubseteq f(a)\qquad\nu f_{*}^{a}=\emptyset}{a\sqsubseteq\mu
f}$
As already mentioned, the theory above applies to many interesting scenarios:
witnesses for non-bisimilarity, algorithms for simple stochastic games
[condon92], lower bounds for termination probabilities and behavioural metrics
in the setting of probabilistic [bblm:on-the-fly-exact-journal] and metric
transition systems [afs:linear-branching-metrics] and probabilistic automata
[bblmtv:prob-bisim-distance-automata]. In particular we were inspired by, and
generalise, the self-closed relations of Fu [f:game-metrics-markov-decision],
also used in [bblmtv:prob-bisim-distance-automata].
#### Motivating example.
Consider a Markov chain $(S,T,\eta)$ with a finite set of states $S$, where
$T\subseteq S$ are the terminal states and every state $s\in S\backslash T$ is
associated with a probability distribution $\eta(s)\in\mathcal{D}(S)$. (By $\mathcal{D}(S)$ we denote the set of all maps $p:S\to[0,1]$ such that $\sum_{s\in S}p(s)=1$.) Intuitively, $\eta(s)(s^{\prime})$ denotes the
probability of state $s$ choosing $s^{\prime}$ as its successor. Assume that,
given a fixed state $s\in S$, we want to determine the termination probability
of $s$, i.e. the probability of reaching any terminal state from $s$. As a
concrete example, take the Markov chain given in Fig. 1, where $u$ is the only
terminal state.
The termination probability arises as the least fixpoint of a function
$\mathcal{T}$ defined as in Fig. 1. The values of $\mu\mathcal{T}$ are
indicated in green (left value).
Now consider the function $t$ assigning to each state the value written in red (right value). It is not difficult to see that $t$
is another fixpoint of $\mathcal{T}$, in which states $y$ and $z$ convince
each other incorrectly that they terminate with probability $1$, resulting in
a vicious cycle that gives “wrong” results. We want to show that
$\mu\mathcal{T}\neq t$ without knowing $\mu\mathcal{T}$. Our idea is to
compute the set of states that still have some “wiggle room”, i.e., those
states which could reduce their termination probability by $\delta$ if all
their successors did the same. This definition has a coinductive flavour and
it can be computed as a greatest fixpoint on the finite powerset
$\mathbf{2}^{S}$ of states, instead of on the infinite lattice
$[{0},{1}]^{S}$.
We hence consider a function
$\mathcal{T}_{\\#}^{t}:\mathbf{2}^{[{S}]^{t}}\to\mathbf{2}^{[{S}]^{t}}$,
dependent on $t$, defined as follows. Let $[{S}]^{t}$ be the set of all states
$s$ where $t(s)>0$, i.e., a reduction is in principle possible. Then a state
$s\in[{S}]^{t}$ is in $\mathcal{T}_{\\#}^{t}(S^{\prime})$ iff $s\not\in T$ and
for all $s^{\prime}$ for which $\eta(s)(s^{\prime})>0$ it holds that
$s^{\prime}\in S^{\prime}$, i.e. all successors of $s$ are in $S^{\prime}$.
The greatest fixpoint of $\mathcal{T}_{\\#}^{t}$ is $\\{y,z\\}$. The fact that
it is not empty means that there is some “wiggle room”, i.e., the value of $t$
can be reduced on the elements $\\{y,z\\}$ and thus $t$ cannot be the least fixpoint of $\mathcal{T}$. Moreover, the intuition that $t$ can be improved on
$\\{y,z\\}$ can be made precise, leading to the possibility of performing the
improvement and search for the least fixpoint from there.
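Since $\mathbf{2}^{S}$ is finite, the greatest fixpoint of $\mathcal{T}_{\\#}^{t}$ can be computed by plain Kleene iteration from the top element. The following is a minimal Python sketch of this computation; the transition structure is our reading of Fig. 1 (a self-loop on $x$ and moves from $x$ to $u$ and $y$, each with probability $\frac{1}{3}$; $y$ and $z$ pointing to each other with probability $1$), and the names `succ`, `terminal` and `T_sharp` are ours.

```python
# A sketch of the motivating example: greatest fixpoint of T_#^t on 2^S.
# Transition structure assumed from Fig. 1 (see lead-in above).
succ = {'x': {'x', 'u', 'y'}, 'y': {'z'}, 'z': {'y'}}  # successors with positive probability
terminal = {'u'}
t = {'x': 1.0, 'u': 1.0, 'y': 1.0, 'z': 1.0}           # the candidate fixpoint (red values)

supp = {s for s in t if t[s] > 0}                      # [S]^t: a reduction is possible

def T_sharp(S_prime):
    # Keep the non-terminal states all of whose successors lie in S_prime.
    return {s for s in supp if s not in terminal and succ[s] <= S_prime}

S = supp                                               # Kleene iteration from the top
while T_sharp(S) != S:
    S = T_sharp(S)
print(S)  # {'y', 'z'}: t has "wiggle room" on y and z, so t != mu T
```

Whether the third transition of $x$ goes to $y$ or to $z$ does not affect the outcome, since only the successor sets matter here.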
#### Contributions.
In the paper we formalise the theory outlined above, showing that the proof
rules work for non-expansive monotone functions $f$ on lattices of the form
$\mathbb{M}^{Y}$, where $Y$ is a finite set and $\mathbb{M}$ a (potentially
infinite) MV-algebra (Section 3 and Section 4). Additionally, given a
decomposition of $f$ we show how to obtain the corresponding approximation
compositionally (Section 5). Then, in order to show that our approach covers a
wide range of examples and allows us to derive useful and original algorithms,
we discuss various applications: termination probability, behavioural
distances for metric transition systems and probabilistic automata,
bisimilarity (Section 6) and simple stochastic games (Section 7).
Proofs and further material can be found in the appendix.
$\displaystyle\mathcal{T}:[0,1]^{S}\to[0,1]^{S}$
$\displaystyle\mathcal{T}(t)(s)=\left\\{\begin{array}[]{ll}1&\mbox{if $s\in
T$}\\\ \sum\limits_{s^{\prime}\in S}\eta(s)(s^{\prime})\cdot
t(s^{\prime})&\mbox{otherwise}\end{array}\right.$
(The Markov chain of Fig. 1: states $x$, $u$, $y$, $z$, with $u$ terminal; each state is annotated with $\mu\mathcal{T}$/$t$, namely $\frac{1}{2}$/$1$ for $x$, $1$/$1$ for $u$, $0$/$1$ for $y$ and $0$/$1$ for $z$; the three outgoing transitions of $x$ have probability $\frac{1}{3}$ each, the remaining transitions probability $1$.)
Figure 1. Function $\mathcal{T}$ (left) and a Markov chain with two fixpoints
of $\mathcal{T}$ (right)
## 2\. Lattices and MV-algebras
In this section, we review some basic notions used in the paper, concerning
complete lattices and MV-algebras [Mun:MV].
A preordered or partially ordered set $(P,\sqsubseteq)$ is often denoted
simply as $P$, omitting the order relation. Given $x,y\in P$, with
$x\sqsubseteq y$, we denote by $[{x},{y}]$ the interval $\\{z\in P\mid
x\sqsubseteq z\sqsubseteq y\\}$. The _join_ and the _meet_ of a subset
$X\subseteq P$ (if they exist) are denoted $\bigsqcup X$ and $\bigsqcap X$,
respectively.
A _complete lattice_ is a partially ordered set $(L,\sqsubseteq)$ such that
each subset $X\subseteq L$ admits a join $\bigsqcup X$ and a meet $\bigsqcap
X$. A complete lattice $(L,\sqsubseteq)$ always has a least element
$\bot=\bigsqcup\emptyset$ and a greatest element $\top=\bigsqcap\emptyset$.
A function $f:L\to L$ is _monotone_ if for all $l,l^{\prime}\in L$, if
$l\sqsubseteq l^{\prime}$ then $f(l)\sqsubseteq f(l^{\prime})$. By Knaster-
Tarski’s theorem [t:lattice-fixed-point, Theorem 1], any monotone function on
a complete lattice has a least and a greatest fixpoint, denoted respectively
$\mu f$ and $\nu f$, characterised as the meet of all pre-fixpoints and the join of all post-fixpoints, respectively: $\mu f=\bigsqcap\\{l\mid f(l)\sqsubseteq l\\}$ and $\nu f=\bigsqcup\\{l\mid l\sqsubseteq f(l)\\}$.
Let $(C,\sqsubseteq)$, $(A,\leq)$ be complete lattices. A _Galois connection_
is a pair of monotone functions $\langle\alpha,\gamma\rangle$ such that
$\alpha:C\to A$, $\gamma:A\to C$ and for all $a\in A$ and $c\in C$:
$\alpha(c)\leq a$ iff $c\sqsubseteq\gamma(a)$.
Equivalently, for all $a\in A$ and $c\in C$, (i)
$c\sqsubseteq\gamma(\alpha(c))$ and (ii) $\alpha(\gamma(a))\leq a$. In this
case we will write $\langle\alpha,\gamma\rangle:C\to A$. For a Galois
connection $\langle\alpha,\gamma\rangle:C\to A$, the function $\alpha$ is
called the left (or lower) adjoint and $\gamma$ the right (or upper) adjoint.
Galois connections are at the heart of abstract interpretation [cc:ai-unified-
lattice-model, CC:TLA]. In particular, when $\langle\alpha,\gamma\rangle$ is a
Galois connection, given $f^{C}:C\to C$ and $f^{A}:A\to A$, monotone
functions, if $f^{C}\circ\gamma\sqsubseteq\gamma\circ f^{A}$, then $\nu
f^{C}\sqsubseteq\gamma(\nu f^{A})$. If equality holds, i.e.,
$f^{C}\circ\gamma=\gamma\circ f^{A}$, a condition sometimes referred to as
$\gamma$-completeness, then greatest fixpoints are preserved along the
connection, i.e., $\nu f^{C}=\gamma(\nu f^{A})$.
Given a set $Y$ and a complete lattice $L$, the set of functions
$L^{Y}=\\{f\mid f:Y\to L\\}$, endowed with pointwise order, i.e., for $a,b\in
L^{Y}$, $a\sqsubseteq b$ if $a(y)\sqsubseteq b(y)$ for all $y\in Y$, is a
complete lattice.
In the paper we will mostly work with lattices of the form $\mathbb{M}^{Y}$
where $\mathbb{M}$ is a special kind of lattice with a rich algebraic
structure, i.e. an MV-algebra [Mun:MV].
###### Definition 2.1 (MV-algebra).
An _MV-algebra_ is a tuple $\mathbb{M}=(M,\oplus,0,\overline{(\cdot)})$ where
$(M,\oplus,0)$ is a commutative monoid and $\overline{(\cdot)}:M\to M$ maps
each element to its _complement_, such that for all $x,y\in M$
1. (1)
$\overline{\overline{x}}=x$
2. (2)
$x\oplus\overline{0}=\overline{0}$
3. (3)
$\overline{(\overline{x}\oplus y)}\oplus y=\overline{(\overline{y}\oplus
x)}\oplus x$.
We denote $1=\overline{0}$, multiplication $x\otimes
y=\overline{\overline{x}\oplus\overline{y}}$ and subtraction $x\ominus
y=x\otimes\overline{y}$.
Note that by using the derived operations, axioms (2) and (3) above can be
written as
1. (2)
$x\oplus 1=1$
2. (3)
$(y\ominus x)\oplus x=(x\ominus y)\oplus y$
MV-algebras are endowed with a natural order.
###### Definition 2.2 (natural order).
Let $\mathbb{M}=(M,\oplus,0,\overline{(\cdot)})$ be an MV-algebra. The
_natural order_ on $\mathbb{M}$ is defined, for $x,y\in M$, by $x\sqsubseteq
y$ if $x\oplus z=y$ for some $z\in M$. When $\sqsubseteq$ is total
$\mathbb{M}$ is called an _MV-chain_.
The natural order gives an MV-algebra a lattice structure where $\bot=0$,
$\top=1$, $x\sqcup y=(x\ominus y)\oplus y$ and $x\sqcap
y=\overline{\overline{x}\sqcup\overline{y}}=x\otimes(\overline{x}\oplus y)$.
We call the MV-algebra _complete_ if it is a complete lattice. This is not true in general: e.g., $([0,1]\cap\mathbb{Q},\leq)$ is not complete.
###### Example 2.3.
A prototypical example of an MV-algebra is
$([0,1],\oplus,0,\overline{(\cdot)})$ where $x\oplus y=\min\\{x+y,1\\}$ and
$\overline{x}=1-x$ for $x,y\in[0,1]$. This means that $x\otimes
y=\max\\{x+y-1,0\\}$ and $x\ominus y=\max\\{0,x-y\\}$ (truncated subtraction).
The operators $\oplus$ and $\otimes$ are also known as strong disjunction and
conjunction in Łukasiewicz logic [m:lukasiewicz-mv]. The natural order is
$\leq$ (less or equal) on the reals.
Another example is $(\\{0,\dots,k\\},\oplus,0,\overline{(\cdot)})$ where
$n\oplus m=\min\\{n+m,k\\}$ and $\overline{n}=k-n$ for
$n,m\in\\{0,\dots,k\\}$. We are in particular interested in the case $k=1$.
Both MV-algebras are complete and MV-chains.
Boolean algebras (with disjunction and complement) also form MV-algebras that
are complete, but in general not MV-chains.
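For concreteness, the operations of the MV-algebra on $[0,1]$ and the derived ones from Definition 2.1 can be spelled out in a few lines of Python. This is merely an illustration (the function names are ours); exact rational arithmetic is used to avoid floating-point rounding artifacts.

```python
from fractions import Fraction as F

def oplus(x, y):  return min(x + y, F(1))                # truncated sum x ⊕ y
def comp(x):      return F(1) - x                        # complement of x
def otimes(x, y): return comp(oplus(comp(x), comp(y)))   # x ⊗ y = max(x + y - 1, 0)
def ominus(x, y): return otimes(x, comp(y))              # x ⊖ y = max(x - y, 0)
def join(x, y):   return oplus(ominus(x, y), y)          # x ⊔ y = (x ⊖ y) ⊕ y
def meet(x, y):   return comp(join(comp(x), comp(y)))    # x ⊓ y

x, y = F(3, 10), F(7, 10)
assert join(x, y) == y and meet(x, y) == x               # natural order is <=
assert oplus(F(4, 5), F(4, 5)) == 1 and ominus(F(1, 5), F(1, 2)) == 0
```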
MV-algebras are the algebraic semantics of Łukasiewicz logic. They can be
shown to correspond to intervals of the kind $[{0},{u}]$ in suitable groups,
i.e., abelian lattice-ordered groups with a strong unit $u$ [Mun:MV].
We next review some properties of MV-algebras. They are taken from or easy
consequences of properties in [Mun:MV] and will be used throughout the paper.
###### Proposition 2.3 (properties of MV-algebras).
Let $\mathbb{M}=(M,\oplus,0,\overline{(\cdot)})$ be an MV-algebra. For all $x,y,z\in M$:
1. (1)
$x\oplus\overline{x}=1$
2. (2)
$x\sqsubseteq y$ iff $\overline{x}\oplus y=1$ iff $x\otimes\overline{y}=0$ iff
$y=x\oplus(y\ominus x)$
3. (3)
$x\sqsubseteq y$ iff $\overline{y}\sqsubseteq\overline{x}$
4. (4)
$\oplus$, $\otimes$ are monotone in both arguments, $\ominus$ monotone in the
first and antitone in the second argument.
5. (5)
if $x\sqsubset y$ then $0\sqsubset y\ominus x$;
6. (6)
$(x\oplus y)\ominus y\sqsubseteq x$
7. (7)
$z\sqsubseteq x\oplus y$ if and only if $z\ominus x\sqsubseteq y$.
8. (8)
if $x\sqsubset y$ and $z\sqsubseteq\overline{y}$ then $x\oplus z\sqsubset
y\oplus z$;
9. (9)
$y\sqsubseteq\overline{x}$ if and only if $(x\oplus y)\ominus y=x$;
10. (10)
$x\ominus(x\ominus y)\sqsubseteq y$ and if $y\sqsubseteq x$ then
$x\ominus(x\ominus y)=y$.
11. (11)
Whenever $\mathbb{M}$ is an MV-chain, $x\sqsubset y$ and $0\sqsubset z$ imply
$(x\oplus z)\ominus y\sqsubset z$
###### Proof 2.4.
The proof of properties (1), (2), (3), (4) can be found directly in [Mun:MV].
For the rest:
1. (5)
Immediate consequence of (2). In fact, given $x,y\in M$, if we had $y\ominus
x=0$ then by (2), $y=x\oplus(y\ominus x)=x\oplus 0=x$.
2. (6)
Observe that $(x\oplus y)\ominus y=\overline{\overline{(x\oplus y)}\oplus
y}=\overline{(\overline{x}\ominus y)\oplus
y}=\overline{(y\ominus\overline{x})\oplus\overline{x}}\sqsubseteq\overline{\overline{x}}=x$,
where the last inequality is motivated by the fact that
$\overline{x}\sqsubseteq(y\ominus\overline{x})\oplus\overline{x}$ and point
(3).
3. (7)
The direction from left to right is an immediate consequence of (6). In fact,
if $z\sqsubseteq x\oplus y$ then $z\ominus x\sqsubseteq(x\oplus y)\ominus
x\sqsubseteq y$.
The other direction goes as follows: if $z\ominus x\sqsubseteq y$, then – by monotonicity (4) – $(z\ominus x)\oplus x\sqsubseteq y\oplus x=x\oplus y$. By axiom (3), the left-hand side equals $(x\ominus z)\oplus z\sqsupseteq z$, hence $z\sqsubseteq x\oplus y$.
4. (8)
Assume that $x\sqsubset y$ and $z\sqsubseteq\overline{y}$. We know, by
property (4) that $x\oplus z\sqsubseteq y\oplus z$. Assume by contradiction
that $x\oplus z=y\oplus z$. Then we have
$\displaystyle\overline{x}$ $\displaystyle\sqsubseteq\overline{(x\oplus z)\ominus z}$ [by properties (3) and (6)] $\displaystyle=\overline{(y\oplus z)\ominus z}$ [since $x\oplus z=y\oplus z$] $\displaystyle=(\overline{y}\ominus z)\oplus z$ [by definition of $\ominus$] $\displaystyle=\overline{y}$ [since $z\sqsubseteq\overline{y}$ and property (2)]
With point (3), $\overline{x}\sqsubseteq\overline{y}$ contradicts the assumption $x\sqsubset y$.
5. (9)
Assume $y\sqsubseteq\overline{x}$. We know $(x\oplus y)\ominus y\sqsubseteq
x$. If it were $(x\oplus y)\ominus y\sqsubset x$, then $((x\oplus y)\ominus
y)\oplus y\sqsubset x\oplus y$, with (8). Since the left-hand side is equal to
$(y\ominus(x\oplus y))\oplus(x\oplus y)\sqsupseteq x\oplus y$, this is a
contradiction.
For the other direction assume that $(x\oplus y)\ominus y=x$. Hence we have
$x=(x\oplus y)\ominus y=\overline{\overline{(x\oplus y)}\oplus y}$. By
complementing on both sides we obtain $\overline{x}=\overline{(x\oplus
y)}\oplus y$ which implies that $y\sqsubseteq\overline{x}$.
6. (10)
Observe that, by (7), we have
$\overline{y}\sqsubseteq\overline{x}\oplus(\overline{y}\ominus\overline{x})=\overline{x}\oplus(x\ominus
y)=\overline{x\ominus(x\ominus y)}$. Therefore, by (3), $x\ominus(x\ominus
y)\sqsubseteq y$, as desired.
For the second part, assume that $y\sqsubseteq x$ and thus, by (3),
$\overline{x}\sqsubseteq\overline{y}$. Using (2), we obtain
$\overline{y}=\overline{x}\oplus(\overline{y}\ominus\overline{x})=\overline{x}\oplus\overline{y\oplus\overline{x}}=\overline{x}\oplus(x\ominus
y)$. Hence $y=\overline{\overline{x}\oplus(x\ominus y)}=x\ominus(x\ominus y)$.
7. (11)
We first observe that $x\sqsubseteq y\oplus(x\ominus y)$. This is a direct
consequence of axiom (3) of MV-algebras and the definition of natural order.
Second, in an MV-chain if $x,y\sqsupset 0$, then $x\ominus y\sqsubset x$. In fact, if $x\sqsubseteq y$ then $x\ominus y=0\sqsubset x$. If instead,
$y\sqsubset x$ we have $0\sqsubset y$ and $x\ominus y\sqsubseteq 1\ominus
y=\overline{y}$, hence by 2.3(8) it holds that $0\oplus(x\ominus y)\sqsubset
y\oplus(x\ominus y)$. Recalling that $y\sqsubset x$ and thus by 2.3(2),
$(x\ominus y)\oplus y=x$, we conclude $x\ominus y\sqsubset x$.
Now
$\displaystyle(x\oplus z)\ominus y$
$\displaystyle\sqsubseteq(x\oplus(z\ominus(y\ominus x))\oplus(y\ominus x))\ominus y$ [by first obs. above] $\displaystyle=(y\oplus(z\ominus(y\ominus x)))\ominus y$ [since $x\sqsubseteq y$, by 2.3(2)] $\displaystyle\sqsubseteq z\ominus(y\ominus x)$ [by 2.3(6)] $\displaystyle\sqsubset z$ [by second obs. above, since $z\sqsupset 0$ and $y\ominus x\sqsupset 0$ by 2.3(5)]
Note that we adhere to the following convention: whenever brackets are
missing, we always assume that we associate from left to right. So $a\oplus
b\ominus c$ should be read as $(a\oplus b)\ominus c$ and not as
$a\oplus(b\ominus c)$, which is in general different.
## 3\. Non-expansive functions and their approximations
As mentioned in the introduction, our interest is for fixpoints of monotone
functions $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$, where $\mathbb{M}$ is an MV-
chain and $Y$ is a finite set. We will see that for non-expansive functions we
can over-approximate the sets of points in which a given $a\in\mathbb{M}^{Y}$
can be increased in a way that is preserved by the application of $f$. This
will be the core of the proof rules outlined earlier.
### 3.1. Non-expansive functions on MV-algebras.
For defining non-expansiveness it is convenient to introduce a norm.
###### Definition 3.1 (norm).
Let $\mathbb{M}$ be an MV-chain and let $Y$ be a finite set. Given
$a\in\mathbb{M}^{Y}$ we define its _norm_ as $|\\!|{a}|\\!|=\max\\{a(y)\mid
y\in Y\\}$.
Given a finite set $Y$ we extend $\oplus$ and $\otimes$ to $\mathbb{M}^{Y}$
pointwise. E.g. if $a,b\in\mathbb{M}^{Y}$, we write $a\oplus b$ for the
function defined by $(a\oplus b)(y)=a(y)\oplus b(y)$ for all $y\in Y$. Given
$Y^{\prime}\subseteq Y$ and $\delta\in\mathbb{M}$, we write
$\delta_{Y^{\prime}}$ for the function defined by
$\delta_{Y^{\prime}}(y)=\delta$ if $y\in Y^{\prime}$ and
$\delta_{Y^{\prime}}(y)=0$, otherwise. Whenever this does not generate
confusion, we write $\delta$ instead of $\delta_{Y}$. It can be seen that
$|\\!|{\cdot}|\\!|$ has the properties of a norm, i.e., for all
$a,b\in\mathbb{M}^{Y}$ and $\delta\in\mathbb{M}$, it holds that (1) $|\\!|{a\oplus b}|\\!|\sqsubseteq|\\!|{a}|\\!|\oplus|\\!|{b}|\\!|$, (2) $|\\!|{\delta\otimes a}|\\!|=\delta\otimes|\\!|{a}|\\!|$ and (3) $|\\!|{a}|\\!|=0$ implies that $a$ is the constant $0$ (see Lemma 3.2).
###### Lemma 3.2 (properties of the norm).
Let $\mathbb{M}$ be an MV-chain and let $Y$ be a finite set. Then
$|\\!|{\cdot}|\\!|:\mathbb{M}^{Y}\to\mathbb{M}$ satisfies, for all
$a,b\in\mathbb{M}^{Y}$, $\delta\in\mathbb{M}$
1. (1)
$|\\!|{a\oplus b}|\\!|\sqsubseteq|\\!|{a}|\\!|\oplus|\\!|{b}|\\!|$,
2. (2)
$|\\!|{\delta\otimes a}|\\!|=\delta\otimes|\\!|{a}|\\!|$ and
3. (3)
$|\\!|{a}|\\!|=0$ implies that $a$ is the constant $0$.
###### Proof 3.3.
Concerning (1), let $|\\!|{a\oplus b}|\\!|$ be realised on some element $y\in
Y$, i.e., $|\\!|{a\oplus b}|\\!|=a(y)\oplus b(y)$. Since
$a(y)\sqsubseteq|\\!|{a}|\\!|$ and $b(y)\sqsubseteq|\\!|{b}|\\!|$, by
monotonicity of $\oplus$ we deduce that $|\\!|{a\oplus
b}|\\!|\sqsubseteq|\\!|{a}|\\!|\oplus|\\!|{b}|\\!|$.
Concerning (2), note that
$\displaystyle|\\!|{\delta\otimes a}|\\!|$
$\displaystyle=\max\\{\overline{\overline{\delta}\oplus\overline{a(y)}}\mid
y\in Y\\}$
$\displaystyle=\overline{\min\\{\overline{\delta}\oplus\overline{a(y)}\mid
y\in Y\\}}$
$\displaystyle=\overline{\overline{\delta}\oplus\min\\{\overline{a(y)}\mid
y\in Y\\}}$
$\displaystyle=\overline{\overline{\delta}\oplus\overline{\max\\{a(y)\mid y\in
Y\\}}}$
$\displaystyle=\overline{\overline{\delta}\oplus\overline{|\\!|{a}|\\!|}}$
$\displaystyle=\delta\otimes|\\!|{a}|\\!|\ $
Finally, point (3) is straightforward, since $0$ is the bottom of
$\mathbb{M}$.
Moreover, it is clearly monotone, i.e., if $a\sqsubseteq b$ then
$|\\!|{a}|\\!|\sqsubseteq|\\!|{b}|\\!|$.
We next introduce non-expansiveness. Although we are ultimately interested in endo-functions $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$, in order to allow for compositional reasoning we work with functions where domain and codomain can differ.
###### Definition 3.4 (non-expansiveness).
Let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a function, where $\mathbb{M}$ is
an MV-chain and $Y,Z$ are finite sets. We say that it is _non-expansive_ if
for all $a,b\in\mathbb{M}^{Y}$ it holds $|\\!|{f(b)\ominus
f(a)}|\\!|\sqsubseteq|\\!|{b\ominus a}|\\!|$.
Note that $(a,b)\mapsto|\\!|{a\ominus b}|\\!|$ is the supremum lifting of a
directed version of Chang’s distance [Mun:MV]. It is easy to see that all non-expansive functions on MV-chains are monotone (see Lemma 3.5). Moreover, when $\mathbb{M}=\\{0,1\\}$, i.e., $\mathbb{M}$ is the two-point boolean algebra, the two notions coincide.
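Non-expansiveness can be sanity-checked numerically; this gives evidence, not a proof. The sketch below tests the inequality of Definition 3.4 on $\mathbb{M}=[0,1]$ for the map $f(b)=b\ominus 0.3$ that reappears in Example 3.21 (the names are ours; exact rationals avoid rounding artifacts).

```python
import random
from fractions import Fraction as F

def ominus(x, y): return max(x - y, F(0))                   # truncated subtraction
def norm(a): return max(a.values())                         # ||a|| = max_y a(y)
def diff(b, a): return {y: ominus(b[y], a[y]) for y in a}   # pointwise b ⊖ a
def f(b): return {y: ominus(b[y], F(3, 10)) for y in b}     # f(b) = b ⊖ 0.3

Y = ['y1', 'y2', 'y3', 'y4']
for _ in range(1000):
    a = {y: F(random.randrange(101), 100) for y in Y}
    b = {y: F(random.randrange(101), 100) for y in Y}
    assert norm(diff(f(b), f(a))) <= norm(diff(b, a))       # Definition 3.4
```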
###### Lemma 3.5 (non-expansiveness implies monotonicity).
Let $\mathbb{M}$ be an MV-chain and let $Y,Z$ be finite sets. Every non-expansive function $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ is monotone.
###### Proof 3.6.
Let $a,b\in\mathbb{M}^{Y}$ be such that $a\sqsubseteq b$. Therefore, by
2.3(2), $a(y)\ominus b(y)=0$ for all $y\in Y$, hence $a\ominus b=0$. Thus
$|\\!|{f(a)\ominus f(b)}|\\!|\sqsubseteq|\\!|{a\ominus b}|\\!|=0$. In turn
this implies that for all $z\in Z$, $f(a)(z)\ominus f(b)(z)=0$. Hence 2.3(2),
allows us to conclude $f(a)(z)\sqsubseteq f(b)(z)$ for all $z\in Z$, i.e.,
$f(a)\sqsubseteq f(b)$, as desired.
The next lemma provides a useful equivalent characterisation of non-
expansiveness.
###### Lemma 3.7 (characterisation of non-expansiveness).
Let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a monotone function, where
$\mathbb{M}$ is an MV-chain and $Y,Z$ are finite sets. Then $f$ is non-
expansive iff for all $a\in\mathbb{M}^{Y}$, $\theta\in\mathbb{M}$ and $z\in Z$
it holds $f(a\oplus\theta)(z)\ominus f(a)(z)\sqsubseteq\theta$.
###### Proof 3.8.
Let $f$ be non-expansive and let $a\in\mathbb{M}^{Y}$ and
$\theta\in\mathbb{M}$. We have that for all $z\in Z$
$\displaystyle f(a\oplus\theta)(z)\ominus f(a)(z)\sqsubseteq$
$\displaystyle\quad\sqsubseteq|\\!|{f(a\oplus\theta)\ominus f(a)}|\\!|$ [by
definition of norm] $\displaystyle\quad\sqsubseteq|\\!|{(a\oplus\theta)\ominus
a}|\\!|$ [by hypothesis] $\displaystyle\quad\sqsubseteq|\\!|{\lambda
y.\theta}|\\!|$ [by 2.3(6) and monotonicity of norm]
$\displaystyle\quad=\theta$ [by definition of norm]
Conversely, assume that for all $a\in\mathbb{M}^{Y}$, $\theta\in\mathbb{M}$
and $z\in Z$ it holds $f(a\oplus\theta)(z)\ominus f(a)(z)\sqsubseteq\theta$.
For $a,b\in\mathbb{M}^{Y}$, first observe that for all $y\in Y$ it holds
$b(y)\ominus a(y)\sqsubseteq|\\!|{b\ominus a}|\\!|$, hence, if we let
$\theta=|\\!|{b\ominus a}|\\!|$, we have $b\sqsubseteq a\oplus\theta$ and
thus, by monotonicity, $f(b)\ominus f(a)\sqsubseteq f(a\oplus\theta)\ominus
f(a)$. Thus
$\displaystyle|\\!|{f(b)\ominus f(a)}|\\!|\sqsubseteq$
$\displaystyle\quad\sqsubseteq|\\!|{f(a\oplus\theta)\ominus f(a)}|\\!|=$ [by the observation above and monotonicity of norm]
$\displaystyle\quad=\max\\{f(a\oplus\theta)(z)\ominus f(a)(z)\mid z\in Z\\}$ [by definition of norm] $\displaystyle\quad\sqsubseteq\theta$ [by hypothesis]
$\displaystyle\quad=|\\!|{b\ominus a}|\\!|$ [by the choice of $\theta$]
###### Lemma 3.9 (composing non-expansive functions).
Let $\mathbb{M}$ be an MV-chain and let $Y,W,Z$ be finite sets. If
$g:\mathbb{M}^{Y}\to\mathbb{M}^{W}$ and $h:\mathbb{M}^{W}\to\mathbb{M}^{Z}$
are non-expansive then $h\circ g:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ is non-
expansive.
###### Proof 3.10.
Straightforward. We have for any $a,b\in\mathbb{M}^{Y}$ that
$\displaystyle|\\!|{h(g(b))\ominus h(g(a))}|\\!|\sqsubseteq$
$\displaystyle\quad\sqsubseteq|\\!|{g(b)\ominus g(a)}|\\!|$ [by non-
expansiveness of $h$] $\displaystyle\quad\sqsubseteq|\\!|{b\ominus a}|\\!|$
[by non-expansiveness of $g$]
### 3.2. Approximating the propagation of increases.
Let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a monotone function and take
$a,b\in\mathbb{M}^{Y}$ with $a\sqsubseteq b$. We are interested in the
difference $b(y)\ominus a(y)$ for some $y\in Y$ and on how the application of
$f$ “propagates” this difference. The reason is that understanding that no increase can be propagated will be crucial for establishing when a fixpoint of a non-expansive function $f$ is actually the largest one and, more generally, when a (pre-)fixpoint of $f$ is above the largest fixpoint.
In order to formalise the above intuition, we rely on tools from abstract
interpretation. In particular, the following pair of functions, which, under a
suitable condition, form a Galois connection, will play a major role. The left
adjoint $\alpha_{a,\delta}$ takes as input a set $Y^{\prime}$ and, for $y\in
Y^{\prime}$, it increases the values $a(y)$ by $\delta$, while the right
adjoint $\gamma_{a,\delta}$ takes as input a function $b\in\mathbb{M}^{Y}$,
$b\in[{a},{a\oplus\delta}]$ and checks for which parameters $y\in Y$ the value
$b(y)$ exceeds $a(y)$ by $\delta$.
We also define $[{Y}]_{a}$, the subset of elements in $Y$ where $a(y)$ is not
$1$ and thus there is a potential to increase, and $\delta_{a}$, which gives
us the least of such increases (i.e., the largest increase that can be used on
all elements in $[{Y}]_{a}$ without “overflowing”).
###### Definition 3.11 (functions to sets, and vice versa).
Let $\mathbb{M}$ be an MV-algebra and let $Y$ be a finite set. Define the set
$[{Y}]_{a}=\\{y\in Y\mid a(y)\neq 1\\}$ (support of $\overline{a}$) and
$\delta_{a}=\min\\{\overline{a(y)}\mid y\in[{Y}]_{a}\\}$ with
$\min\emptyset=1$.
For $0\sqsubset\delta\in\mathbb{M}$ we consider the functions
$\alpha_{a,\delta}:\mathbf{2}^{[{Y}]_{a}}\to[{a},{a\oplus\delta}]$ and
$\gamma_{a,\delta}:[{a},{a\oplus\delta}]\to\mathbf{2}^{[{Y}]_{a}}$, defined,
for $Y^{\prime}\in\mathbf{2}^{[{Y}]_{a}}$ and $b\in[{a},{a\oplus\delta}]$, by
$\alpha_{a,\delta}(Y^{\prime})=a\oplus\delta_{Y^{\prime}}\qquad\gamma_{a,\delta}(b)=\\{y\in[{Y}]_{a}\mid
b(y)\ominus a(y)\sqsupseteq\delta\\}.$
###### Lemma 3.12 (well-definedness).
The functions $\alpha_{a,\delta}$, $\gamma_{a,\delta}$ from Def. 3.11 are
well-defined and monotone.
###### Proof 3.13.
The involved functions $\alpha_{a,\delta}$ and $\gamma_{a,\delta}$ are well-defined. In fact, for $Y^{\prime}\subseteq[{Y}]_{a}$, clearly $\alpha_{a,\delta}(Y^{\prime})=a\oplus\delta_{Y^{\prime}}\in[{a},{a\oplus\delta}]$.
Moreover, for $b\in[{a},{a\oplus\delta}]$ we have
$\gamma_{a,\delta}(b)\subseteq[{Y}]_{a}$. In fact, if $y\not\in[{Y}]_{a}$ then
$a(y)=1$, hence $b(y)=1$ and thus $b(y)\ominus a(y)=0\not\sqsupseteq\delta$,
and thus $y\not\in\gamma_{a,\delta}(b)$. Moreover, they are clearly monotone.
When $\delta$ is sufficiently small, the pair
$\langle\alpha_{a,\delta},\gamma_{a,\delta}\rangle$ is a Galois connection.
###### Lemma 3.13 (Galois connection).
Let $\mathbb{M}$ be an MV-algebra and let $Y$ be a finite set.
For $0\neq\delta\sqsubseteq\delta_{a}$, the pair
$\langle\alpha_{a,\delta},\gamma_{a,\delta}\rangle:\mathbf{2}^{[{Y}]_{a}}\to[{a},{a\oplus\delta}]$
is a Galois connection.
(Diagram: the Galois connection $\langle\alpha_{a,\delta},\gamma_{a,\delta}\rangle$ between $\mathbf{2}^{[{Y}]_{a}}$ and $[{a},{a\oplus\delta}]$.)
###### Proof 3.14.
For all $Y^{\prime}\in\mathbf{2}^{[{Y}]_{a}}$ it holds
$\gamma_{a,\delta}(\alpha_{a,\delta}(Y^{\prime}))=\gamma_{a,\delta}(a\oplus\delta_{Y^{\prime}})=Y^{\prime}$.
In fact, for all $y\in Y^{\prime}$, $(a\oplus\delta_{Y^{\prime}})(y)=a(y)\oplus\delta$. Moreover, by the choice of $\delta$ and the definition of $[{Y}]_{a}$, we have $\delta\sqsubseteq\delta_{a}\sqsubseteq\overline{a(y)}$, so by 2.3(9) we obtain $(a\oplus\delta_{Y^{\prime}})(y)\ominus a(y)=\delta$ and hence $y\in\gamma_{a,\delta}(\alpha_{a,\delta}(Y^{\prime}))$. Conversely, if
$y\not\in Y^{\prime}$, then $(a\oplus\delta_{Y^{\prime}})(y)=a(y)$, and thus
$(a\oplus\delta_{Y^{\prime}})(y)\ominus a(y)=0\not\sqsupseteq\delta$.
Moreover, for all $b\in[{a},{a\oplus\delta}]$ we have
$\alpha_{a,\delta}(\gamma_{a,\delta}(b))=a\oplus\delta_{\gamma_{a,\delta}(b)}\sqsubseteq
b$
In fact, for all $y\in Y$, if $y\in\gamma_{a,\delta}(b)$, i.e.,
$\delta\sqsubseteq b(y)\ominus a(y)$ then
$(a\oplus\delta_{\gamma_{a,\delta}(b)})(y)=a(y)\oplus\delta\sqsubseteq
a(y)\oplus(b(y)\ominus a(y))=b(y)$, by 2.3(2). If instead $y\not\in\gamma_{a,\delta}(b)$, then $(a\oplus\delta_{\gamma_{a,\delta}(b)})(y)=a(y)\sqsubseteq b(y)$.
Whenever $f$ is non-expansive, it is easy to see that it restricts to a
function $f:[{a},{a\oplus\delta}]\to[{f(a)},{f(a)\oplus\delta}]$ for all
$\delta\in\mathbb{M}$.
###### Lemma 3.15 (restricting non-expansive functions to intervals).
Let $\mathbb{M}$ be an MV-chain, let $Y,Z$ be finite sets and let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function. Then $f$
restricts to a function
$f_{a,\delta}:[{a},{a\oplus\delta}]\to[{f(a)},{f(a)\oplus\delta}]$, defined by
$f_{a,\delta}(b)=f(b)$.
###### Proof 3.16.
Given $b\in[{a},{a\oplus\delta}]$, by monotonicity of $f$ we have that
$f(a)\sqsubseteq f(b)$. Moreover, $f(b)\sqsubseteq f(a\oplus\delta)\sqsubseteq
f(a)\oplus\delta$, where the last step follows from 3.7.
In the following we will simply write $f$ instead of $f_{a,\delta}$.
Given an MV-chain $\mathbb{M}$ and a finite set $Y$, we first observe that
each function $b\in\mathbb{M}^{Y}$ can be expressed as a suitable sum of
functions of the shape $\delta_{Y^{\prime}}$.
###### Lemma 3.17 (standard form).
Let $\mathbb{M}$ be an MV-chain and let $Y$ be a finite set. Then for any
$b\in\mathbb{M}^{Y}$ there are $Y_{1},\ldots,Y_{n}\subseteq Y$ with
$Y_{i+1}\subseteq Y_{i}$ for $i\in\\{1,\ldots,n-1\\}$ and
$\delta^{i}\in\mathbb{M}$,
$0\neq\delta^{i}\sqsubseteq\overline{\bigoplus_{j=1}^{i-1}\delta^{j}}$ for
$i\in\\{1,\ldots,n\\}$ such that
$b=\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}$ and
$|\\!|{b}|\\!|=\bigoplus_{i=1}^{n}\delta^{i}$.
where we assume that an empty sum evaluates to $0$.
###### Proof 3.18.
Given $b\in\mathbb{M}^{Y}$, consider $V=\\{b(y)\mid y\in Y\\}$. If $V$ is
empty, then $Y$ is empty and thus $b=1_{Y}$, i.e., we can take $n=1$,
$\delta^{1}=1$ and $Y_{1}=Y$. Otherwise, if $Y\neq\emptyset$, then $V$ is a
finite non-empty set. Let $V=\\{v_{1},\ldots,v_{n}\\}$, with $v_{i}\sqsubseteq
v_{i+1}$ for $i\in\\{1,\ldots,n-1\\}$. For $i\in\\{1,\ldots,n\\}$ define
$Y_{i}=\\{y\in Y\mid v_{i}\sqsubseteq b(y)\\}$. Clearly, $Y_{1}\supseteq
Y_{2}\supseteq\ldots\supseteq Y_{n}$. Moreover let $\delta^{1}=v_{1}$ and
$\delta^{i+1}=v_{i+1}\ominus v_{i}$ for $i\in\\{1,\ldots,n-1\\}$.
Observe that for each $i$, we have $v_{i}=\bigoplus_{j=1}^{i}\delta^{j}$, as can easily be shown by induction. Hence $\delta^{i+1}=v_{i+1}\ominus v_{i}=v_{i+1}\ominus\bigoplus_{j=1}^{i}\delta^{j}\sqsubseteq 1\ominus\bigoplus_{j=1}^{i}\delta^{j}=\overline{\bigoplus_{j=1}^{i}\delta^{j}}$.
We now show that $b=\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}$ by induction on
$n$.
* •
If $n=1$ then $V=\\{v_{1}\\}$ and thus $b$ is a constant function $b(y)=v_{1}$
for all $y\in Y$. Hence $Y_{1}=Y$ and thus
$b=\delta^{1}_{Y}=\delta^{1}_{Y_{1}}$, as desired.
* •
If $n>1$, let $b^{\prime}\in\mathbb{M}^{Y}$ defined by $b^{\prime}(y)=b(y)$
for $y\in Y\backslash Y_{n}$ and $b^{\prime}(y)=v_{n-1}$ for $y\in Y_{n}$.
Note that $\\{b^{\prime}(y)\mid y\in Y\\}=\\{v_{1},\ldots,v_{n-1}\\}$. Hence, by inductive hypothesis, $b^{\prime}=\bigoplus_{i=1}^{n-1}\delta^{i}_{Y_{i}}$. Moreover, $b=b^{\prime}\oplus\delta^{n}_{Y_{n}}$, and thus we conclude.
Finally observe that the statement requires $\delta^{i}\neq 0$ for all $i$. We can ensure this property by just omitting the first summand when $v_{1}=0$.
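The construction in the proof is effectively a decomposition of $b$ into nested level sets, and it is directly executable. A minimal sketch for $\mathbb{M}=[0,1]$ (our names, exact rationals):

```python
from fractions import Fraction as F

def standard_form(b):
    # Standard form of Lemma 3.17: b = ⊕_i δ^i_{Y_i} with Y_1 ⊇ ... ⊇ Y_n,
    # built from the sorted set of non-zero values of b.
    values = sorted({v for v in b.values() if v > 0})
    form, prev = [], F(0)
    for v in values:
        Y_i = {y for y in b if b[y] >= v}   # level set {y | v_i ⊑ b(y)}
        form.append((v - prev, Y_i))        # δ^1 = v_1, δ^{i+1} = v_{i+1} ⊖ v_i
        prev = v
    return form

b = {'y1': F(1, 5), 'y2': F(2, 5), 'y3': F(2, 5), 'y4': F(9, 10)}
for delta, Y_i in standard_form(b):
    print(delta, sorted(Y_i))
# 1/5 ['y1','y2','y3','y4'], then 1/5 ['y2','y3','y4'], then 1/2 ['y4'];
# the deltas sum to ||b|| = 9/10, as in the lemma.
```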
###### Example 3.19.
We illustrate the definitions with small examples whose sole purpose is to get
a better intuition. Consider the MV-chain $\mathbb{M}=[0,1]$, a set
$Y=\\{y_{1},y_{2},y_{3},y_{4}\\}$ and a function $a\colon Y\to[0,1]$ with
$a(y_{1})=0.2$, $a(y_{2})=0.4$, $a(y_{3})=0.9$, $a(y_{4})=1$. In this case
$\delta_{a}=0.1$ and $[{Y}]_{a}=\\{y_{1},y_{2},y_{3}\\}$.
Choose $\delta=0.1$ and $Y^{\prime}=\\{y_{1},y_{3}\\}$. Then
$\alpha_{a,\delta}(Y^{\prime})$ is a function that maps $y_{1}\mapsto 0.3$,
$y_{2}\mapsto 0.4$, $y_{3}\mapsto 1$, $y_{4}\mapsto 1$.
We keep $\delta=0.1$ and consider a function $b\colon Y\to[0,1]$ with
$b(y_{1})=0.3$, $b(y_{2})=0.45$, $b(y_{3})=b(y_{4})=1$. Then
$\gamma_{a,\delta}(b)=\\{y_{1},y_{3}\\}$. (See Fig. 2 for a visual
representation.)
Figure 2. Visual representation of $\alpha_{a,\delta}$ and $\gamma_{a,\delta}$
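Definition 3.11 can be implemented directly for $\mathbb{M}=[0,1]$; the following sketch reproduces Example 3.19 (function names are ours, and exact rationals avoid rounding artifacts).

```python
from fractions import Fraction as F

def oplus(x, y): return min(x + y, F(1))   # truncated sum
def ominus(x, y): return max(x - y, F(0))  # truncated subtraction

def slack(a):                              # [Y]_a: points where a(y) != 1
    return {y for y in a if a[y] != 1}

def alpha(a, delta, Yp):                   # α_{a,δ}: raise a by δ on Y'
    return {y: (oplus(a[y], delta) if y in Yp else a[y]) for y in a}

def gamma(a, delta, b):                    # γ_{a,δ}: where b exceeds a by at least δ
    return {y for y in slack(a) if ominus(b[y], a[y]) >= delta}

a = {'y1': F(1, 5), 'y2': F(2, 5), 'y3': F(9, 10), 'y4': F(1)}
print(alpha(a, F(1, 10), {'y1', 'y3'}))    # y1 -> 3/10, y3 -> 1, rest unchanged
b = {'y1': F(3, 10), 'y2': F(9, 20), 'y3': F(1), 'y4': F(1)}
print(gamma(a, F(1, 10), b))               # {'y1', 'y3'}
```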
As mentioned before, a crucial result shows that for all non-expansive
functions, under the assumption that $Y,Z$ are finite and the order on
$\mathbb{M}$ is total, we can suitably approximate the propagation of
increases. In order to state this result, a useful tool is a notion of
approximation of a function.
###### Definition 3.20 ($(\delta,a)$-approximation).
Let $\mathbb{M}$ be an MV-chain, let $Y$, $Z$ be finite sets and let
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function. For
$a\in\mathbb{M}^{Y}$ and any $\delta\in\mathbb{M}$ we define
$f_{a,\delta}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{f(a)}}$ as
$f_{a,\delta}^{\\#}=\gamma_{f(a),\delta}\circ f\circ\alpha_{a,\delta}$.
Given $Y^{\prime}\subseteq[{Y}]_{a}$, its image
$f_{a,\delta}^{\\#}(Y^{\prime})\subseteq[{Z}]_{f(a)}$ is the set of points
$z\in[{Z}]_{f(a)}$ such that $\delta\sqsubseteq
f(a\oplus\delta_{Y^{\prime}})(z)\ominus f(a)(z)$, i.e., the points to which
$f$ propagates an increase of the function $a$ with value $\delta$ on the
subset $Y^{\prime}$.
###### Example 3.21.
We continue with Example 3.19 and consider the function
$f\colon[0,1]^{Y}\to[0,1]^{Y}$ with $f(b)=b\ominus 0.3$ for every
$b\in[0,1]^{Y}$, which can easily be seen to be non-expansive. We again
consider $a\colon Y\to[0,1]$ and $\delta=0.1$ as in Example 3.19, and
$Y^{\prime}=\\{y_{1},y_{2},y_{3}\\}$. The maps $a$,
$\alpha_{a,\delta}(Y^{\prime})$, $f(a)$ and $f(\alpha_{a,\delta}(Y^{\prime}))$
are given in the table below and we obtain
$f_{a,\delta}^{\\#}(Y^{\prime})=\gamma_{f(a),\delta}(f(\alpha_{a,\delta}(Y^{\prime})))=\\{y_{2},y_{3}\\}$,
that is, only the increase at $y_{2}$ and $y_{3}$ can be propagated, while the
value of $y_{1}$ is too low and $y_{4}$ is not even contained in $[{Y}]_{a}$,
i.e. the domain of $f_{a,\delta}^{\\#}$, since its value is already $1.0$ and
there is no slack left.
| | $y_{1}$ | $y_{2}$ | $y_{3}$ | $y_{4}$ |
|---|---|---|---|---|
| $a$ | 0.2 | 0.4 | 0.9 | 1.0 |
| $\alpha_{a,\delta}(Y^{\prime})$ | 0.3 | 0.5 | 1.0 | 1.0 |
| $f(a)$ | 0.0 | 0.1 | 0.6 | 0.7 |
| $f(\alpha_{a,\delta}(Y^{\prime}))$ | 0.0 | 0.2 | 0.7 | 0.7 |
In general we have
$f_{a,\delta}^{\\#}(Y^{\prime})=Y^{\prime}\cap\\{y_{2},y_{3}\\}$ if
$\delta\leq\delta_{a}=0.1$,
$f_{a,\delta}^{\\#}(Y^{\prime})=Y^{\prime}\cap\\{y_{2}\\}$ if $0.1<\delta\leq
0.6$ and $f_{a,\delta}^{\\#}(Y^{\prime})=\emptyset$ if $0.6<\delta$.
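The composite $f_{a,\delta}^{\\#}=\gamma_{f(a),\delta}\circ f\circ\alpha_{a,\delta}$ of Definition 3.20 can likewise be run directly. The self-contained sketch below reproduces the values of Example 3.21, including the dependence on $\delta$ (our names, exact rationals).

```python
from fractions import Fraction as F

def oplus(x, y): return min(x + y, F(1))
def ominus(x, y): return max(x - y, F(0))

def alpha(a, delta, Yp):
    return {y: (oplus(a[y], delta) if y in Yp else a[y]) for y in a}

def gamma(a, delta, b):
    return {y for y in a if a[y] != 1 and ominus(b[y], a[y]) >= delta}

def f(b):                                  # the non-expansive map of Example 3.21
    return {y: ominus(b[y], F(3, 10)) for y in b}

def f_sharp(a, delta, Yp):                 # f#_{a,δ} = γ_{f(a),δ} ∘ f ∘ α_{a,δ}
    return gamma(f(a), delta, f(alpha(a, delta, Yp)))

a = {'y1': F(1, 5), 'y2': F(2, 5), 'y3': F(9, 10), 'y4': F(1)}
Yp = {'y1', 'y2', 'y3'}
print(f_sharp(a, F(1, 10), Yp))            # {'y2', 'y3'}   (δ <= 0.1)
print(f_sharp(a, F(3, 10), Yp))            # {'y2'}         (0.1 < δ <= 0.6)
print(f_sharp(a, F(7, 10), Yp))            # set()          (0.6 < δ)
```

The shrinking outputs as $\delta$ grows also illustrate the anti-monotonicity result proved next.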
We now show that $f_{a,\delta}^{\\#}$ is antitone in the parameter $\delta$, a
non-trivial result.
###### Lemma 3.21 (anti-monotonicity).
Let $\mathbb{M}$ be an MV-chain, let $Y$, $Z$ be finite
sets, let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function and
let $a\in\mathbb{M}^{Y}$. For $\theta,\delta\in\mathbb{M}$, if
$\theta\sqsubseteq\delta$ then $f_{a,\delta}^{\\#}\subseteq
f_{a,\theta}^{\\#}$.
###### Proof 3.22.
Let $Y^{\prime}\subseteq[{Y}]_{a}$ and let us prove that
$f_{a,\delta}^{\\#}(Y^{\prime})\subseteq f_{a,\theta}^{\\#}(Y^{\prime})$. Take
$z\in f_{a,\delta}^{\\#}(Y^{\prime})$. This means that $\delta\sqsubseteq
f(a\oplus\delta_{Y^{\prime}})(z)\ominus f(a)(z)$.
We have
$\displaystyle\delta\sqsubseteq f(a\oplus\delta_{Y^{\prime}})(z)\ominus
f(a)(z)$ [by hypothesis]
$\displaystyle=f(a\oplus\theta_{Y^{\prime}}\oplus(\delta\ominus\theta)_{Y^{\prime}})(z)\ominus
f(a)(z)$
$\displaystyle=f(a\oplus\theta_{Y^{\prime}}\oplus(\delta\ominus\theta)_{Y^{\prime}})(z)\ominus
f(a\oplus\theta_{Y^{\prime}})(z)\oplus f(a\oplus\theta_{Y^{\prime}})(z)\ominus
f(a)(z)$
$\displaystyle\sqsubseteq|\\!|{f(a\oplus\theta_{Y^{\prime}}\oplus(\delta\ominus\theta)_{Y^{\prime}})\ominus
f(a\oplus\theta_{Y^{\prime}})}|\\!|\oplus
f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$ [by definition of norm and
monotonicity of $\oplus$]
$\displaystyle\sqsubseteq|\\!|{a\oplus\theta_{Y^{\prime}}\oplus(\delta\ominus\theta)_{Y^{\prime}}\ominus(a\oplus\theta_{Y^{\prime}})}|\\!|\oplus
f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$ [by non-expansiveness of $f$
and monotonicity of $\oplus$]
$\displaystyle\sqsubseteq|\\!|{(\delta\ominus\theta)_{Y^{\prime}}}|\\!|\oplus
f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$
$\displaystyle\sqsubseteq(\delta\ominus\theta)\oplus
f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$ [by definition of norm]
If we subtract $\delta\ominus\theta$ on both sides, we get $\delta\ominus(\delta\ominus\theta)\sqsubseteq f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$ and, since by 2.3(10) $\delta\ominus(\delta\ominus\theta)=\theta$, we conclude
$\theta\sqsubseteq f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)$
which means $z\in f_{a,\theta}^{\\#}(Y^{\prime})$.
Since $f_{a,\delta}^{\\#}$ increases when $\delta$ decreases and there are
finitely many such functions, there must be a value $\iota_{a}^{f}$ such that
all functions $f_{a,\delta}^{\\#}$ for
$0\sqsubset\delta\sqsubseteq\iota_{a}^{f}$ are equal. The resulting function
will be the approximation of interest.
We next show how $\iota_{a}^{f}$ can be determined. We start by observing that
for each $z\in[{Z}]_{f(a)}$ and $Y^{\prime}\subseteq[{Y}]_{a}$ there is a
largest increase $\theta$ such that $z\in f_{a,\theta}^{\\#}(Y^{\prime})$.
###### Lemma 3.22 (largest increase for a point).
Let $\mathbb{M}$ be a complete MV-chain, let
$Y$, $Z$ be finite sets, let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-
expansive function and fix $a\in\mathbb{M}^{Y}$. For all $z\in[{Z}]_{f(a)}$
and $Y^{\prime}\subseteq[{Y}]_{a}$ the set $\\{\theta\in\mathbb{M}\mid z\in
f_{a,\theta}^{\\#}(Y^{\prime})\\}$ has a maximum, that we denote by
$\iota_{a}^{f}(Y^{\prime},z)$.
###### Proof 3.23.
Let $V=\\{\theta\in\mathbb{M}\mid z\in f_{a,\theta}^{\\#}(Y^{\prime})\\}$.
Expanding the definition we have that
$V=\\{\theta\in\mathbb{M}\mid\theta\sqsubseteq
f(a\oplus\theta_{Y^{\prime}})(z)\ominus f(a)(z)\\}$.
If we let $\eta=\sup V$, for all $\theta\in V$, since
$\theta_{Y^{\prime}}\sqsubseteq\eta_{Y^{\prime}}$, clearly, by monotonicity
$\theta\sqsubseteq f(a\oplus\eta_{Y^{\prime}})(z)\ominus f(a)(z)$
and therefore, by definition of supremum, $\eta\sqsubseteq
f(a\oplus\eta_{Y^{\prime}})(z)\ominus f(a)(z)$, i.e., $\eta\in V$ is a
maximum, as desired.
###### Lemma 3.24.
Let $\mathbb{M}$ be an MV-chain, let $Y$, $Z$ be finite sets and let
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function. Let
$a\in\mathbb{M}^{Y}$. For $b\in[{a},{a\oplus\delta}]$, let $b\ominus
a=\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}$ be a standard form for $b\ominus a$.
If $\gamma_{f(a),\delta}(f(b))\not=\emptyset$ then
$Y_{n}=\gamma_{a,\delta}(b)$ and $\gamma_{f(a),\delta}(f(b))\subseteq
f_{a,\delta^{n}}^{\\#}(Y_{n})$.
###### Proof 3.25.
By hypothesis $\gamma_{f(a),\delta}(f(b))\not=\emptyset$. Let
$z\in\gamma_{f(a),\delta}(f(b))$. This means that $\delta\sqsubseteq
f(b)(z)\ominus f(a)(z)$. First observe that
$\displaystyle\delta\sqsubseteq f(b)(z)\ominus f(a)(z)$ [by hypothesis]
$\displaystyle\quad\sqsubseteq|\\!|{f(b)\ominus f(a)}|\\!|$ [by definition of
norm] $\displaystyle\quad\sqsubseteq|\\!|{b\ominus a}|\\!|$ [by non-
expansiveness of $f$] $\displaystyle\quad\sqsubseteq\delta$ [since
$b\in[{a},{a\oplus\delta}]$]
Hence
$|\\!|{f(b)\ominus f(a)}|\\!|=\delta=|\\!|{b\ominus
a}|\\!|=\bigoplus_{i=1}^{n}\delta^{i}$.
Also observe that, since $\delta^{n}\neq 0$, we have $(b\ominus a)(y)=\delta$ iff $y\in Y_{n}$. In fact, if $y\in Y_{n}$ then $y\in Y_{i}$ for all $i\in\\{1,\ldots,n\\}$ and thus $(b\ominus a)(y)=\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}(y)=\bigoplus_{i=1}^{n}\delta^{i}=\delta$. Conversely, if $y\not\in Y_{n}$, then $(b\ominus a)(y)\sqsubseteq\bigoplus_{i=1}^{n-1}\delta^{i}\sqsubset\delta$. In fact,
$0\sqsubset\delta^{n}$ and
$\bigoplus_{i=1}^{n-1}\delta^{i}\sqsubseteq\overline{\delta^{n}}$. Thus by
Lemma 2.3(8),
$\bigoplus_{i=1}^{n-1}\delta^{i}\sqsubset\delta^{n}\oplus\bigoplus_{i=1}^{n-1}\delta^{i}=\bigoplus_{i=1}^{n}\delta^{i}=\delta$.
Hence $Y_{n}=\gamma_{a,\delta}(b)$.
Let us now show that $\gamma_{f(a),\delta}(f(b))\subseteq
f_{a,\delta^{n}}^{\\#}(Y_{n})$. Given $z\in\gamma_{f(a),\delta}(f(b))$, we
show that $z\in f_{a,\delta^{n}}^{\\#}(Y_{n})$. Observe that
$\displaystyle\delta\sqsubseteq f(b)(z)\ominus f(a)(z)=$ [by hypothesis]
$\displaystyle=f(a\oplus(b\ominus a))(z)\ominus f(a)(z)=$ [by 2.3(2), since
$a\sqsubseteq b$]
$\displaystyle=f(a\oplus\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}})(z)\ominus
f(a)(z)=$ [by construction]
$\displaystyle=f(a\oplus\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}})(z)\ominus
f(a\oplus\delta^{n}_{Y_{n}})(z)\oplus f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus
f(a)(z)$ [by 2.3(2), since $f(a\oplus\delta^{n}_{Y_{n}})(z)\sqsubseteq
f(a\oplus\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}})(z)$]
$\displaystyle\sqsubseteq|\\!|{f(a\oplus\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}})\ominus
f(a\oplus\delta^{n}_{Y_{n}})}|\\!|\oplus
f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by definition of norm and
monotonicity of $\oplus$]
$\displaystyle\sqsubseteq|\\!|{a\oplus\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}\ominus(a\oplus\delta^{n}_{Y_{n}})}|\\!|\oplus
f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by non-expansiveness of $f$
and monotonicity of $\oplus$]
$\displaystyle=|\\!|{a\oplus\delta^{n}_{Y_{n}}\oplus\bigoplus_{i=1}^{n-1}\delta^{i}_{Y_{i}}\ominus(a\oplus\delta^{n}_{Y_{n}})}|\\!|\oplus
f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by algebraic manipulation]
$\displaystyle\sqsubseteq|\\!|{\bigoplus_{i=1}^{n-1}\delta^{i}_{Y_{i}}}|\\!|\oplus
f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by 2.3(6) and monotonicity of
norm] $\displaystyle\sqsubseteq\bigoplus_{i=1}^{n-1}\delta^{i}\oplus
f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by 3.2(1) and the fact that
$|\\!|{\delta^{i}_{Y_{i}}}|\\!|=\delta^{i}$]
$\displaystyle=(\delta\ominus\delta^{n})\oplus f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ [by construction, since $\delta^{n}\sqsubseteq\overline{\bigoplus_{i=1}^{n-1}\delta^{i}}$, and 2.3(9)]
If we subtract $\delta\ominus\delta^{n}$ on both sides, we get $\delta\ominus(\delta\ominus\delta^{n})\sqsubseteq f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$ and, since by 2.3(10) $\delta\ominus(\delta\ominus\delta^{n})=\delta^{n}$, we conclude $\delta^{n}\sqsubseteq f(a\oplus\delta^{n}_{Y_{n}})(z)\ominus f(a)(z)$.
Hence $z\in\gamma_{f(a),\delta^{n}}(f(\alpha_{a,\delta^{n}}(Y_{n})))=f_{a,\delta^{n}}^{\\#}(Y_{n})$, which is the desired result.
We can then provide an explicit definition of $\iota_{a}^{f}$ and of the
approximation of a function.
###### Lemma 3.25 ($a$-approximation for a function).
Let $\mathbb{M}$ be a complete MV-chain, let $Y,Z$ be finite sets and let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function. Let
$\iota_{a}^{f}=\min(\\{\iota_{a}^{f}(Y^{\prime},z)\mid Y^{\prime}\subseteq[{Y}]_{a}\ \land\ z\in[{Z}]_{f(a)}\ \land\ \iota_{a}^{f}(Y^{\prime},z)\neq 0\\}\cup\\{\delta_{a}\\})$.
Then for all $0\neq\delta\sqsubseteq\iota_{a}^{f}$ it holds that
$f_{a,\delta}^{\\#}=f_{a,\iota_{a}^{f}}^{\\#}$.
The function $f_{a,\iota_{a}^{f}}^{\\#}$ is called the _$a$-approximation_ of
$f$ and it is denoted by $f_{a}^{\\#}$.
###### Proof 3.26.
Since $\delta\sqsubseteq\iota_{a}^{f}$, by 3.21 we have
$f_{a,\delta}^{\\#}\supseteq f_{a,\iota_{a}^{f}}^{\\#}$. For the other
inclusion let $Y^{\prime}\subseteq[{Y}]_{a}$. We have
$f_{a,\delta}^{\\#}(Y^{\prime})=\\{z\in[{Z}]_{f(a)}\mid
f(a\oplus\delta_{Y^{\prime}})(z)\ominus f(a)(z)\sqsupseteq\delta\\}$
by definition. Assume that there exists $z\in f_{a,\delta}^{\\#}(Y^{\prime})$
where $f(a\oplus(\iota_{a}^{f})_{Y^{\prime}})(z)\ominus
f(a)(z)\not\sqsupseteq\iota_{a}^{f}$. But this is a contradiction, since
$\iota_{a}^{f}$ is the minimum of all such non-zero values.
We next show that indeed, for all non-expansive functions, the
$a$-approximation properly approximates the propagation of increases.
###### Theorem 3.26 (approximation of non-expansive functions).
Let $\mathbb{M}$ be a complete MV-chain, let $Y,Z$ be finite sets and let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a non-expansive function. Then for all $0\sqsubset\delta\in\mathbb{M}$:
1. (1)
$\gamma_{f(a),\delta}\circ f\subseteq f^{\\#}_{a}\circ\gamma_{a,\delta}$
2. (2)
for $\delta\sqsubseteq\delta_{a}$: $\delta\sqsubseteq\iota_{a}^{f}$ iff
$\gamma_{f(a),\delta}\circ f=f^{\\#}_{a}\circ\gamma_{a,\delta}$
(Diagram: the lax square $\gamma_{f(a),\delta}\circ f\subseteq f^{\\#}_{a}\circ\gamma_{a,\delta}$ relating $f:[{a},{a\oplus\delta}]\to[{f(a)},{f(a)\oplus\delta}]$ and $f^{\\#}_{a}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{f(a)}}$.)
###### Proof 3.27.
1. (1)
Let $b\in[{a},{a\oplus\delta}]$. First note that whenever
$\gamma_{f(a),\delta}(f(b))=\emptyset$, the desired inclusion obviously holds.
If instead $\gamma_{f(a),\delta}(f(b))\not=\emptyset$, let $b\ominus a=\bigoplus_{i=1}^{n}\delta^{i}_{Y_{i}}$ be a standard form with $\delta^{n}\neq 0$. First observe that, by 3.24, we have $Y_{n}=\gamma_{a,\delta}(b)$ and
$\gamma_{f(a),\delta}(f(b))\subseteq f_{a,\delta^{n}}^{\\#}(Y_{n}).$ (2)
For all $z\in f_{a,\delta^{n}}^{\\#}(Y_{n})$, by definition of $\iota_{a}^{f}(Y_{n},z)$ we have that $0\sqsubset\delta^{n}\sqsubseteq\iota_{a}^{f}(Y_{n},z)$, therefore $\iota_{a}^{f}\sqsubseteq\iota_{a}^{f}(Y_{n},z)$. Moreover, $z\in
f_{a,\iota_{a}^{f}(Y_{n},z)}^{\\#}(Y_{n})\subseteq
f_{a,\iota_{a}^{f}}^{\\#}(Y_{n})=f_{a}^{\\#}(Y_{n})$, where the last
inequality is motivated by 3.21 since
$\iota_{a}^{f}\sqsubseteq\iota_{a}^{f}(Y_{n},z)$. Therefore,
$f_{a,\delta^{n}}^{\\#}(Y_{n})\subseteq f_{a}^{\\#}(\gamma_{a,\delta}(b))$,
which combined with (2) gives the desired result.
2. (2)
For (2), we first show the direction from left to right. Assume that $\delta\sqsubseteq\iota_{a}^{f}$. By (1), clearly $\gamma_{f(a),\delta}\circ f(b)\subseteq f^{\\#}_{a}\circ\gamma_{a,\delta}(b)$. For the converse
inclusion, note that:
$\displaystyle f_{a}^{\\#}(\gamma_{a,\delta}(b))$ [by definition of
$f_{a}^{\\#}$]
$\displaystyle\quad=f_{a,\iota_{a}^{f}}^{\\#}(\gamma_{a,\delta}(b))\subseteq$
[by 3.21, since $\delta\sqsubseteq\iota_{a}^{f}$] $\displaystyle\quad\subseteq
f_{a,\delta}^{\\#}(\gamma_{a,\delta}(b))$ [by definition of
$f_{a,\delta}^{\\#}$]
$\displaystyle\quad=\gamma_{f(a),\delta}(f(\alpha_{a,\delta}(\gamma_{a,\delta}(b))))$
[since $\alpha_{a,\delta}\circ\gamma_{a,\delta}(b)\sqsubseteq b$]
$\displaystyle\quad\subseteq\gamma_{f(a),\delta}(f(b))$
as desired.
For the other direction, assume $\gamma_{f(a),\delta}\circ
f(b)=f_{a}^{\\#}\circ\gamma_{a,\delta}(b)$ holds for all
$b\in[a,a\oplus\delta]$. Now, for every $Y^{\prime}\subseteq[{Y}]_{a}$ we have
$f_{a,\delta}^{\\#}(Y^{\prime})=\gamma_{f(a),\delta}\circ
f\circ\alpha_{a,\delta}(Y^{\prime})=f_{a}^{\\#}\circ\gamma_{a,\delta}\circ\alpha_{a,\delta}(Y^{\prime})$.
We also have $\gamma_{a,\delta}\circ\alpha_{a,\delta}(Y^{\prime})=Y^{\prime}$
(see proof of 3.13), thus
$f_{a,\delta}^{\\#}(Y^{\prime})=f_{a}^{\\#}(Y^{\prime})$. For any $\delta$
with $\iota_{a}^{f}\sqsubset\delta\sqsubseteq\delta_{a}$ there exists
$Y^{\prime}\subseteq[{Y}]_{a}$ and $z\in[{Z}]_{f(a)}$ with $z\in
f_{a}^{\\#}(Y^{\prime})$ but $z\notin f_{a,\delta}^{\\#}(Y^{\prime})$, by
definition of $\iota_{a}^{f}$. Therefore $\delta\sqsubseteq\iota_{a}^{f}$ has
to hold.
Note that if $Y=Z$ and $a$ is a fixpoint of $f$, i.e., $a=f(a)$, then
condition (1) above corresponds exactly to soundness in the sense of abstract
interpretation [cc:ai-unified-lattice-model]. Moreover, when
$\delta\sqsubseteq\delta_{a}$ and thus
$\langle\alpha_{a,\delta},\gamma_{a,\delta}\rangle$ is a Galois connection,
$f_{a,\delta}^{\\#}=\gamma_{a,\delta}\circ f\circ\alpha_{a,\delta}$ is the
best correct approximation of $f$. In particular, when
$\delta\sqsubseteq\iota_{a}^{f}$, such a best correct approximation is
$f_{a}^{\\#}$, the $a$-approximation of $f$, i.e., it becomes independent of
$\delta$, and condition (2) corresponds to ($\gamma$-)completeness [GRS:MAIC]
(see also Section 2).
## 4\. Proof rules
In this section we formalise the proof technique outlined in the introduction
for showing that a fixpoint is the largest and, more generally, for checking
over-approximations of greatest fixpoints of non-expansive functions.
### 4.1. Proof rules for fixpoints
Consider a monotone function $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ for some
finite set $Y$. We first focus on the problem of establishing whether some
given fixpoint $a$ of $f$ coincides with $\nu f$ (without explicitly knowing
$\nu f$), and, in case it does not, finding an “improvement”, i.e., a post-
fixpoint of $f$, larger than $a$. We first prove a technical lemma.
###### Lemma 4.1.
Let $\mathbb{M}$ be a complete MV-chain, $Y$ a finite set and
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ be a non-expansive function. Let
$a\in\mathbb{M}^{Y}$ be a pre-fixpoint of $f$, let
$f_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Y}]_{f(a)}}$ be the
$a$-approximation of $f$ (3.25). Assume $\nu f\not\sqsubseteq a$ and let
$Y^{\prime}=\\{y\in[{Y}]_{a}\mid\nu f(y)\ominus a(y)=|\\!|{\nu f\ominus
a}|\\!|\\}$. Then for all $y\in Y^{\prime}$ it holds $a(y)=f(a)(y)$ and
$Y^{\prime}\subseteq f_{a}^{\\#}(Y^{\prime})$.
###### Proof 4.2.
Let $\delta=|\\!|{\nu f\ominus a}|\\!|$. Assume $\nu f\not\sqsubseteq a$,
i.e., there exists $y\in Y$ such that $\nu f(y)\not\sqsubseteq a(y)$. Since
the order is total, this means that $a(y)\sqsubset\nu f(y)$. Hence, by 2.3(5),
$\nu f(y)\ominus a(y)\sqsupset 0$. Then $\delta=|\\!|{\nu f\ominus
a}|\\!|\sqsupset 0$. Moreover, for all $y\in Y^{\prime}$,
$\overline{a(y)}=1\ominus a(y)\sqsupseteq\nu f(y)\ominus a(y)=\delta$.
First, observe that
$\nu f\sqsubseteq a\oplus\delta,$ (3)
since for all $y\in Y$ $\nu f(y)\ominus a(y)\sqsubseteq\delta$ by definition
of $\delta$ and then (3) follows from 2.3(7).
Concerning the first part, let $y\in Y^{\prime}$. Since $a$ is a pre-fixpoint,
$f(a)(y)\sqsubseteq a(y)$. Assume by contradiction that $f(a)(y)\sqsubset
a(y)$. Then we have
$\displaystyle f(a\oplus\delta)(y)$
$\displaystyle\quad=f(a)(y)\oplus(f(a\oplus\delta)(y)\ominus f(a)(y))$ [by 2.3(2), since $f$ is monotone and thus $f(a)(y)\sqsubseteq f(a\oplus\delta)(y)$]
$\displaystyle\quad\sqsubseteq f(a)(y)\oplus\delta$ [since $f$ is non-expansive, by 3.7, $f(a\oplus\delta)(y)\ominus f(a)(y)\sqsubseteq\delta$]
$\displaystyle\quad\sqsubset a(y)\oplus\delta$ [by $f(a)(y)\sqsubset a(y)$, $\delta\sqsubseteq\overline{a(y)}$ and 2.3(6)]
$\displaystyle\quad=\nu f(y)$ [by 2.3(2), since $a(y)\sqsubseteq\nu f(y)$ and $\delta=\nu f(y)\ominus a(y)$]
$\displaystyle\quad=f(\nu f)(y)$ [since $\nu f$ is a fixpoint of $f$]
$\displaystyle\quad\sqsubseteq f(a\oplus\delta)(y)$ [since $\nu f\sqsubseteq a\oplus\delta$ by (3) and $f$ is monotone]
i.e., a contradiction. Hence it must be $a(y)=f(a)(y)$.
For the second part, in order to show $Y^{\prime}\subseteq
f_{a}^{\\#}(Y^{\prime})$, we let $b=\nu f\sqcup a$. By using (3) we
immediately have that $b\in[{a},{a\oplus\delta}]$.
We next prove that
$Y^{\prime}=\gamma_{a,\delta}(b)$.
We show separately the two inclusions. If $y\in Y^{\prime}$ then
$a(y)\sqsubset\nu f(y)$, hence $b(y)=a(y)\sqcup\nu f(y)=\nu f(y)$ and
$b(y)\ominus a(y)=\nu f(y)\ominus a(y)=\delta$. Hence
$y\in\gamma_{a,\delta}(b)$. Conversely, if $y\in\gamma_{a,\delta}(b)$, then
$a(y)\sqsubset\nu f(y)$. In fact, if it were $a(y)\sqsupseteq\nu f(y)$, then,
by definition of $b$ we would have $b(y)=a(y)$ and $b(y)\ominus
a(y)=0\not\sqsupseteq\delta$. Therefore, $b(y)=\nu f(y)$ and thus $\nu
f(y)\ominus a(y)=b(y)\ominus a(y)\sqsupseteq\delta$, whence $y\in Y^{\prime}$.
We can now conclude. In fact, since $f$ is non-expansive, by 3.26(1), we have
$\gamma_{f(a),\delta}(f(b))\subseteq f^{\\#}_{a}(Y^{\prime}).$
Moreover $Y^{\prime}\subseteq\gamma_{f(a),\delta}(f(b))$. In fact, let $y\in
Y^{\prime}$, i.e., $y\in[{Y}]_{a}$ and $\delta\sqsubseteq b(y)\ominus a(y)$.
Since $a(y)=f(a)(y)$, we have that $y\in[{Y}]_{f(a)}$. In order to conclude
that $y\in\gamma_{f(a),\delta}(f(b))$ it is left to show that
$\delta\sqsubseteq f(b)(y)\ominus f(a)(y)$. We have
$\displaystyle f(b)(y)\ominus f(a)(y)$
$\displaystyle\quad=f(b)(y)\ominus a(y)$ [since $y\in Y^{\prime}$]
$\displaystyle\quad=f(\nu f\sqcup a)(y)\ominus a(y)$ [definition of $b$]
$\displaystyle\quad\sqsupseteq(f(\nu f)(y)\sqcup f(a)(y))\ominus a(y)$ [monotonicity of $f$ and properties of $\sqcup$]
$\displaystyle\quad=(\nu f(y)\sqcup a(y))\ominus a(y)$ [since $\nu f$ is a fixpoint and $y\in Y^{\prime}$]
$\displaystyle\quad=b(y)\ominus a(y)$ [definition of $b$]
$\displaystyle\quad\sqsupseteq\delta$ [since $y\in Y^{\prime}$]
Combining the two inclusions, we have $Y^{\prime}\subseteq
f_{a}^{\\#}(Y^{\prime})$, as desired.
Observe that when $a$ is a fixpoint, $[{Y}]_{a}=[{Y}]_{f(a)}$ and thus the
$a$-approximation of $f$ (3.25) is an endo-function
$f_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Y}]_{a}}$. We have the
following result, which relies on the fact that, by 3.26, $\gamma_{a,\delta}$
maps the greatest fixpoint of $f$ to the greatest fixpoint of $f_{a}^{\\#}$.
###### Theorem 4.2 (soundness and completeness for fixpoints).
Let $\mathbb{M}$ be a complete MV-chain, $Y$ a finite set and
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ be a non-expansive function. Let
$a\in\mathbb{M}^{Y}$ be a fixpoint of $f$. Then $\nu f_{a}^{\\#}=\emptyset$
if and only if $a=\nu f$.
###### Proof 4.3.
Let $a$ be a fixpoint of $f$ and assume that $a=\nu f$. For
$\delta=\iota_{a}^{f}\sqsubseteq\delta_{a}$, according to 3.13, we have a
Galois connection:
$\alpha_{a,\delta}\colon\mathbf{2}^{[{Y}]_{a}}\rightleftarrows[{a},{a\oplus\delta}]\colon\gamma_{a,\delta}$, relating $f^{\\#}_{a}$ on $\mathbf{2}^{[{Y}]_{a}}$ to $f_{a,\delta}$ on $[{a},{a\oplus\delta}]$.
Since $a$ is a fixpoint, then $[{Y}]_{f(a)}=[{Y}]_{a}$ and, by 3.26(2),
$\gamma_{a,{\delta}}\circ f=\gamma_{f(a),{\delta}}\circ
f=f_{a}^{\\#}\circ\gamma_{a,{\delta}}$.
Therefore by [CC:TLA, Proposition 14], $\nu
f^{\\#}_{a}=\gamma_{a,{\delta}}(\nu f)$. Recall that $\gamma_{a,{\delta}}(\nu
f)=\\{y\in Y\mid{\delta}\sqsubseteq\nu f(y)\ominus a(y)\\}$. Since $a=\nu f$
and ${\delta}\sqsupset 0$, we know that $\gamma_{a,{\delta}}(\nu f)=\emptyset$
and we conclude $\nu f^{\\#}_{a}=\emptyset$, as desired.
Conversely, in order to prove that if $\nu f^{\\#}_{a}=\emptyset$ then $a=\nu
f$, we prove the contrapositive. Assume that $a\neq\nu f$. Since $a$ is a
fixpoint and $\nu f$ is the largest, this means that $a\sqsubset\nu f$ and
thus $|\\!|{\nu f\ominus a}|\\!|\neq 0$. Consider
$Y^{\prime}=\\{y\in[{Y}]_{a}\mid\nu f(y)\ominus a(y)=|\\!|{\nu f\ominus
a}|\\!|\\}\neq\emptyset$. By 4.1, $Y^{\prime}$ is a post-fixpoint of
$f_{a}^{\\#}$, i.e., $Y^{\prime}\subseteq f_{a}^{\\#}(Y^{\prime})$, and thus
$\nu f^{\\#}_{a}\supseteq Y^{\prime}$ which implies $\nu
f^{\\#}_{a}\neq\emptyset$, as desired.
Whenever $a$ is a fixpoint, but not yet the largest fixpoint of $f$, the
result above gives $\nu f^{\\#}_{a}\neq\emptyset$. Intuitively, $\nu f^{\\#}_{a}$ is
the set of points where $a$ can still be “improved”. More precisely, we can
show that $a$ can be increased on the points in $\nu f^{\\#}_{a}$, producing a
post-fixpoint of $f$. In order to determine how much $a$ can be increased, we
proceed similarly to what we have done for defining $\iota_{a}^{f}$ (3.25),
but restricting the attention to $\nu f_{a}^{\\#}$ instead of
considering the full $[{Y}]_{a}$.
###### Definition 4.4 (largest increase for a subset).
Let $\mathbb{M}$ be a complete MV-chain and let
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ be a non-expansive function, where $Y$ is
a finite set and let $a\in\mathbb{M}^{Y}$. For $Y^{\prime}\subseteq Y$, we
define $\delta_{a}(Y^{\prime})=\min\\{\overline{a(y)}\mid y\in Y^{\prime}\\}$
and $\iota_{a}^{f}(Y^{\prime})=\min\\{\iota_{a}^{f}(Y^{\prime},y)\mid y\in
Y^{\prime}\\}$.
Note that the increase $\iota_{a}^{f}$, used in 3.25 for defining the
$a$-approximation $f_{a}^{\\#}$, is $\iota_{a}^{f}=\iota_{a}^{f}([{Y}]_{a})$.
###### Example 4.5.
We intuitively explain the computation of the values in the definition above.
Let $g\colon[0,1]^{Y}\to[0,1]^{Y}$ with $g(b)=b\oplus 0.1$, where the set $Y$
and the function $a\in[0,1]^{Y}$ are as in Example 3.19.
Let $Y^{\prime}=\\{y_{1},y_{2}\\}$. Then $\delta_{a}(Y^{\prime})=0.6$ and
$\iota_{a}^{g}(Y^{\prime})=0.5$, i.e., since $g$ adds $0.1$, we can propagate
an increase of at most $0.5$.
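This computation can be checked numerically, under hypothetical values for $a$ (the actual ones are fixed in Example 3.19; here we only assume values consistent with those stated above).

```python
# Numeric check of this example (Python; oplus as in the sketch from
# Section 3). We assume, purely for illustration, Y = {y1, y2} with
# a(y1) = 0.2 and a(y2) = 0.4 -- consistent with delta_a(Y') = 0.6.

a  = {"y1": 0.2, "y2": 0.4}
Ys = {"y1", "y2"}
g  = lambda b: {y: oplus(v, 0.1) for y, v in b.items()}   # g(b) = b (+) 0.1

delta = min(1.0 - a[y] for y in Ys)                       # delta_a(Y') = 0.6
# theta propagates at y iff g does not truncate, i.e. a(y) + 0.1 + theta <= 1;
# hence iota_a^g(Y', y) = min(delta_a, 0.9 - a(y)), and we take the minimum:
iota = min(min(delta, 0.9 - a[y]) for y in Ys)            # = 0.5
print(delta, iota)
```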
We next prove that when $a\in\mathbb{M}^{Y}$ is a fixpoint of $f$ and
$Y^{\prime}=\nu f_{a}^{\\#}$, the value $\iota_{a}^{f}(Y^{\prime})$ is the
largest increase $\delta$ below $\delta_{a}(Y^{\prime})$ such that
$a\oplus\delta_{Y^{\prime}}$ is a post-fixpoint of $f$.
###### Theorem 4.5 (from a fixpoint to larger post-fixpoint).
Let $\mathbb{M}$ be a complete MV-chain, $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ a
non-expansive function, $a\in\mathbb{M}^{Y}$ a fixpoint of $f$, and let
$Y^{\prime}=\nu f_{a}^{\\#}$ be the greatest fixpoint of the corresponding
$a$-approximation. Then
$\iota_{a}^{f}(Y^{\prime})\sqsubseteq\delta_{a}(Y^{\prime})$. Moreover, for
all $\theta\sqsubseteq\iota_{a}^{f}(Y^{\prime})$ the function
$a\oplus\theta_{Y^{\prime}}$ is a post-fixpoint of $f$, while for
$\iota_{a}^{f}(Y^{\prime})\sqsubset\theta\sqsubseteq\delta_{a}(Y^{\prime})$
it is not.
###### Proof 4.6.
We first prove that
$\iota_{a}^{f}(Y^{\prime})\sqsubseteq\delta_{a}(Y^{\prime})$. Observe that for
all $y\in Y^{\prime}$ and $\delta\in\mathbb{M}$, if $y\in
f_{a,\delta}^{\\#}(Y^{\prime})$, by definition of $f_{a,\delta}^{\\#}$, it
holds that $\delta\sqsubseteq f(a\oplus\delta_{Y^{\prime}})(y)\ominus
f(a)(y)=f(a\oplus\delta_{Y^{\prime}})(y)\ominus a(y)\sqsubseteq 1\ominus
a(y)=\overline{a(y)}$, where the first equality is motivated by the fact that
$a$ is a fixpoint. Therefore for all $y\in Y^{\prime}$ we have
$\max\\{\delta\in\mathbb{M}\mid y\in
f_{a,\delta}^{\\#}(Y^{\prime})\\}\sqsubseteq\overline{a(y)}$ and thus
$\iota_{a}^{f}(Y^{\prime})=\min_{y\in
Y^{\prime}}\max\\{\delta\in\mathbb{M}\mid y\in
f_{a,\delta}^{\\#}(Y^{\prime})\\}\sqsubseteq\min_{y\in
Y^{\prime}}\overline{a(y)}=\delta_{a}(Y^{\prime})$, as desired.
Given $\theta\sqsubseteq\iota_{a}^{f}(Y^{\prime})$, let us prove that
$a\oplus\theta_{Y^{\prime}}$ is a post-fixpoint of $f$, i.e.,
$a\oplus\theta_{Y^{\prime}}\sqsubseteq f(a\oplus\theta_{Y^{\prime}})$.
If $y\in Y^{\prime}$, since $\theta\sqsubseteq\iota_{a}^{f}(Y^{\prime})$, by
definition of $\iota_{a}^{f}(Y^{\prime})$, we have
$\theta\sqsubseteq\max\\{\delta\in\mathbb{M}\mid y\in
f_{a,\delta}^{\\#}(Y^{\prime})\\}$ and thus, by antimonotonicity of
$f_{a,\delta}^{\\#}$ with respect to $\delta$, we have $y\in
f_{a,\theta}^{\\#}(Y^{\prime})$. This means that $\theta\sqsubseteq
f(a\oplus\theta_{Y^{\prime}})(y)\ominus
f(a)(y)=f(a\oplus\theta_{Y^{\prime}})(y)\ominus a(y)$, where the last passage
uses the fact that $a$ is a fixpoint. Adding $a(y)$ on both sides and using
Lemma 2.3(2), we obtain
$a(y)\oplus\theta\sqsubseteq(f(a\oplus\theta_{Y^{\prime}})(y)\ominus
a(y))\oplus a(y)=f(a\oplus\theta_{Y^{\prime}})(y)$. Since $y\in Y^{\prime}$,
$(a\oplus\theta_{Y^{\prime}})(y)=a(y)\oplus\theta$ and thus
$(a\oplus\theta_{Y^{\prime}})(y)\sqsubseteq f(a\oplus\theta_{Y^{\prime}})(y)$,
as desired.
If instead, $y\not\in Y^{\prime}$, clearly
$(a\oplus\theta_{Y^{\prime}})(y)=a(y)=f(a)(y)\sqsubseteq
f(a\oplus\theta_{Y^{\prime}})(y)$, where we again use the fact that $a$ is a
fixpoint and monotonicity of $f$.
Lastly, we have to show that if
$\iota_{a}^{f}(Y^{\prime})\sqsubset\theta\sqsubseteq\delta_{a}(Y^{\prime})$,
then $a\oplus\theta_{Y^{\prime}}$ is not a post-fixpoint of $f$. By
definition of $\iota_{a}^{f}(Y^{\prime})$, from the fact that
$\iota_{a}^{f}(Y^{\prime})\sqsubset\theta$, we deduce that
$\max\\{\delta\in\mathbb{M}\mid y\in
f_{a,\delta}^{\\#}(Y^{\prime})\\}\sqsubset\theta$ for some $y\in Y^{\prime}$
and thus $y\not\in f_{a,\theta}^{\\#}(Y^{\prime})$.
By definition of $f_{a,\theta}^{\\#}$ and totality of $\sqsubseteq$, the above
means $\theta\sqsupset f(a\oplus\theta_{Y^{\prime}})(y)\ominus
f(a)(y)=f(a\oplus\theta_{Y^{\prime}})(y)\ominus a(y)$, since $a$ is a fixpoint
of $f$. Since $\theta\sqsubseteq\delta_{a}(Y^{\prime})$, we can add $a(y)$ on
both sides and, by Lemma 2.3(8), we obtain $a(y)\oplus\theta\sqsupset
f(a\oplus\theta_{Y^{\prime}})(y)$. Since $y\in Y^{\prime}$, the left-hand side
is $(a\oplus\theta_{Y^{\prime}})(y)$. Hence we conclude that indeed
$a\oplus\theta_{Y^{\prime}}$ is not a post-fixpoint.
Using these results one can perform an alternative fixpoint iteration where we
iterate to the largest fixpoint from below: start with a post-fixpoint
$a_{0}\sqsubseteq f(a_{0})$ (which is clearly below $\nu f$) and obtain, by
(possibly transfinite) iteration, an ascending chain that converges to $a$,
the least fixpoint above $a_{0}$. Now check with 4.2 whether $Y^{\prime}=\nu
f_{a}^{\\#}=\emptyset$. If so, we have reached $\nu f=a$. If not,
$\alpha_{a,\iota_{a}^{f}(Y^{\prime})}(Y^{\prime})=a\oplus(\iota_{a}^{f}(Y^{\prime}))_{Y^{\prime}}$
is again a post-fixpoint (cf. 4.5) and we continue this procedure until – for
some ordinal – we reach the largest fixpoint $\nu f$, for which we have $\nu
f_{\nu f}^{\\#}=\emptyset$.
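For illustration, the whole procedure can be rendered as the following sketch (Python; it reuses `Y_a` and `alpha` from the sketch in Section 3, while `kleene_lfp_above`, `f_sharp` and `iota` are assumed to be given, computing respectively the least fixpoint of $f$ above a post-fixpoint, the $a$-approximation $f_{a}^{\\#}$ and the increase $\iota_{a}^{f}(Y^{\prime})$).

```python
# Sketch of the iteration from below; all helper names are assumptions of
# this sketch, not primitives of the paper.

def nu_powerset(g, top):
    """Greatest fixpoint of a monotone g on the powerset of the finite set
    'top', by Kleene iteration from the top element."""
    cur = set(top)
    while True:
        nxt = g(cur) & set(top)
        if nxt == cur:
            return cur
        cur = nxt

def iterate_from_below(f, a0, kleene_lfp_above, f_sharp, iota):
    a = kleene_lfp_above(f, a0)          # least fixpoint of f above a0
    while True:
        Ys = nu_powerset(f_sharp(a), Y_a(a))
        if not Ys:                       # nu f_a^# empty: a = nu f (Thm 4.2)
            return a
        # a (+) theta_{Y'} is again a post-fixpoint (Thm 4.5); restart from it
        a = kleene_lfp_above(f, alpha(a, iota(a, Ys), Ys))
```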
In order to make the above approach as efficient as possible, the question
naturally arises whether ${\iota_{a}^{f}(Y^{\prime})}$ is the largest
possible increase such that $a\oplus(\iota_{a}^{f}(Y^{\prime}))_{Y^{\prime}}$
is again a post-fixpoint of $f$, even if we allow increases above
$\delta_{a}(Y^{\prime})$. The answer is negative: while the set of
increases below $\delta_{a}(Y^{\prime})$ which lead to post-fixpoints is
downward-closed, as proved in 4.5, this is not the case for those above
$\delta_{a}(Y^{\prime})$. This is shown later in Example 6.3, for the dual case
of least fixpoints.
We believe that a binary search bounded by $\delta_{a}(\nu f_{a}^{\\#})$ is
the most efficient way to find the largest propagation, or at least a
satisfactory one. Such a search is guaranteed to work since the set of
propagations below $\delta_{a}(\nu f_{a}^{\\#})$ is downward-closed.
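Over $\mathbb{M}=[0,1]$ such a search can be sketched as follows (Python; `alpha` is the helper from the sketch in Section 3, while `is_post_fixpoint` is our own auxiliary check, not a primitive of the paper).

```python
# Bisection for the largest sound propagation on Y'. Downward-closedness of
# the sound propagations below 'bound' makes bisection converge to their
# supremum; 'tol' controls the numeric precision of the sketch.

def is_post_fixpoint(f, b, eps=1e-12):
    fb = f(b)
    return all(b[y] <= fb[y] + eps for y in b)

def largest_propagation(f, a, Ys, bound, tol=1e-9):
    if is_post_fixpoint(f, alpha(a, bound, Ys)):
        return bound
    lo, hi = 0.0, bound      # invariant: lo is always a sound propagation
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if is_post_fixpoint(f, alpha(a, mid, Ys)):
            lo = mid
        else:
            hi = mid
    return lo
```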
### 4.2. Proof rules for pre-fixpoints
Interestingly, the soundness result in 4.2 can be generalised to the case in
which $a$ is a pre-fixpoint instead of a fixpoint. In this case, the
$a$-approximation for a function $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ is a
function $f_{a}^{\\#}:[{Y}]_{a}\to[{Y}]_{f(a)}$ where domain and codomain are
different, hence it would not be meaningful to look for fixpoints. However, as
explained below, it can be restricted to an endo-function.
###### Theorem (soundness for pre-fixpoints).
Let $\mathbb{M}$ be a complete MV-chain, $Y$ a
finite set and $f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$ be a non-expansive
function. Given a pre-fixpoint $a\in\mathbb{M}^{Y}$ of $f$, let
$[{Y}]_{a=f(a)}=\\{y\in[{Y}]_{a}\mid a(y)=f(a)(y)\\}$. Let us define
$f^{*}_{a}:[{Y}]_{a=f(a)}\to[{Y}]_{a=f(a)}$ as
$f^{*}_{a}(Y^{\prime})=f_{a}^{\\#}(Y^{\prime})\cap[{Y}]_{a=f(a)}$, where
$f_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Y}]_{f(a)}}$ is the
$a$-approximation of $f$. If $\nu f_{a}^{*}=\emptyset$ then $\nu f\sqsubseteq
a$.
###### Proof 4.7.
We prove the contrapositive, i.e., we show that $\nu f\not\sqsubseteq a$
allows us to derive that $\nu f^{*}_{a}\neq\emptyset$.
Assume $\nu f\not\sqsubseteq a$, i.e., there exists $y\in Y$ such that $\nu
f(y)\not\sqsubseteq a(y)$. Since the order is total, this means that
$a(y)\sqsubset\nu f(y)$. Hence, by 2.3(5), $\nu f(y)\ominus a(y)\sqsupset 0$.
Then $\delta=|\\!|{\nu f\ominus a}|\\!|\sqsupset 0$.
Consider $Y^{\prime}=\\{y\in[{Y}]_{a}\mid\nu f(y)\ominus a(y)=|\\!|{\nu f\ominus
a}|\\!|\\}\neq\emptyset$. By 4.1, $Y^{\prime}$ is a post-fixpoint of
$f_{a}^{\\#}$, i.e., $Y^{\prime}\subseteq f_{a}^{\\#}(Y^{\prime})$, and thus
$Y^{\prime}\subseteq\nu f^{\\#}_{a}$. Moreover, for all $y\in Y^{\prime}$,
$a(y)=f(a)(y)$, i.e., $Y^{\prime}\subseteq[{Y}]_{a=f(a)}$. Therefore we
conclude $Y^{\prime}\subseteq
f_{a}^{\\#}(Y^{\prime})\cap[{Y}]_{a=f(a)}=f_{a}^{*}(Y^{\prime})$, i.e.,
$Y^{\prime}$ is a post-fixpoint also for $f_{a}^{*}$, and thus $\nu
f_{a}^{*}\supseteq Y^{\prime}\neq\emptyset$, as desired.
The reason why we can limit our attention to the set of points where
$a(y)=f(a)(y)$ is as follows. Observe that, since $a$ is a pre-fixpoint and
$\ominus$ is antitone in the second argument, $\nu f\ominus a\sqsubseteq\nu
f\ominus f(a)$. Thus $|\\!|{\nu f\ominus a}|\\!|\sqsubseteq|\\!|{\nu f\ominus
f(a)}|\\!|=|\\!|{f(\nu f)\ominus f(a)}|\\!|\sqsubseteq|\\!|{\nu f\ominus
a}|\\!|$, where the last passage is motivated by non-expansiveness of $f$.
Therefore $|\\!|{\nu f\ominus a}|\\!|=|\\!|{\nu f\ominus f(a)}|\\!|$. From
this we can deduce that, if $\nu f$ is strictly larger than $a$ on some
points, surely some of these points are in $[{Y}]_{a=f(a)}$. In particular,
all points $y_{0}$ such that $\nu f(y_{0})\ominus a(y_{0})=|\\!|{\nu f\ominus
a}|\\!|$ are necessarily in $[{Y}]_{a=f(a)}$. Otherwise, we would have
$f(a)(y_{0})\sqsubset a(y_{0})$ and thus $|\\!|{\nu f\ominus a}|\\!|=\nu
f(y_{0})\ominus a(y_{0})\sqsubset\nu f(y_{0})\ominus
f(a)(y_{0})\sqsubseteq|\\!|{\nu f\ominus f(a)}|\\!|$, contradicting the
equality of the two norms.
Completeness does not generalise to pre-fixpoints, i.e., it is not true that
if $a$ is a pre-fixpoint of $f$ and $\nu f\sqsubseteq a$ then $\nu
f^{*}_{a}=\emptyset$. A pre-fixpoint might contain slack even though it is
above the greatest fixpoint. A counterexample is in 6.14.
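The restriction to $[{Y}]_{a=f(a)}$ translates directly into a check; the following sketch (Python; `nu_powerset` is the helper from the sketch in Section 4.1, and `f_sharp_a` is assumed to compute the $a$-approximation $f_{a}^{\\#}$) returns `True` only when the rule applies, i.e., only when $\nu f\sqsubseteq a$ is certified.

```python
# Sketch of the pre-fixpoint rule: restrict f_a^# to [Y]_{a=f(a)} and test
# whether the greatest fixpoint of the restriction f*_a is empty.

def check_upper_bound(f, a, f_sharp_a):
    fa = f(a)
    eq = {y for y in Y_a(a) if a[y] == fa[y]}    # [Y]_{a = f(a)}
    f_star = lambda Ys: f_sharp_a(Ys) & eq       # f*_a(Y') = f_a^#(Y') /\ eq
    return not nu_powerset(f_star, eq)           # True => nu f <= a
```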
### 4.3. The dual view for least fixpoints
The theory developed so far can be easily dualised to check under-
approximations of least fixpoints. Given a complete MV-algebra
$\mathbb{M}=(M,\oplus,0,\overline{(\cdot)})$ and a non-expansive function
$f:\mathbb{M}^{Y}\to\mathbb{M}^{Y}$, in order to show that a post-fixpoint
$a\in\mathbb{M}^{Y}$ is such that $a\sqsubseteq\mu f$ we can in fact simply
work in the dual MV-algebra,
${\mathbb{M}}^{\mathit{op}}=(M,\sqsupseteq,\otimes,\overline{(\cdot)},1,0)$.
Since $\oplus$ could be the “standard” operation on $\mathbb{M}$, it is
convenient to formulate the conditions using $\oplus$ and $\ominus$ and the
original order. The notation for the dual case is obtained from that of the
original (primal) case, exchanging subscripts and superscripts.
The pair of functions $\langle\alpha^{a,\theta},\gamma^{a,\theta}\rangle$ is
as follows. Let $a:Y\to\mathbb{M}$ and $0\sqsubset\theta\in\mathbb{M}$. We set
$[{Y}]^{a}=\\{y\in Y\mid a(y)\neq 0\\}$ and $\delta^{a}=\min\\{a(y)\mid
y\in[{Y}]^{a}\\}$.
The target of the approximation is $[{a},{a\otimes\theta}]$ in the reverse
order, hence $[{a\otimes\theta},{a}]$ in the original order. Recall that
$a\otimes\theta=\overline{\overline{a}\oplus\overline{\theta}}=a\ominus\overline{\theta}$.
Hence we obtain
$\alpha^{a,\theta}\colon\mathbf{2}^{[{Y}]^{a}}\rightleftarrows[{a\ominus\overline{\theta}},{a}]\colon\gamma^{a,\theta}$
For $Y^{\prime}\in\mathbf{2}^{[{Y}]^{a}}$ we define
$\alpha^{a,\theta}(Y^{\prime})=a\otimes\theta_{Y^{\prime}}=a\ominus\overline{\theta}_{Y^{\prime}}.$
Instead $\gamma^{a,\theta}(b)=\\{y\in Y\mid\theta\sqsupseteq
b(y)\ominus^{\mathit{op}}a(y)\\}$, where $\ominus^{\mathit{op}}$ denotes
the subtraction in the dual MV-algebra. Observe that
$x\ominus^{\mathit{op}}y=\overline{\overline{x}\otimes
y}=x\oplus\overline{y}=\overline{y\ominus x}$. Hence
$\theta\sqsupseteq b(y)\ominus^{\mathit{op}}a(y)$
iff $a(y)\ominus b(y)\sqsupseteq\overline{\theta}$. Thus for
$b\in[{a\ominus\overline{\theta}},{a}]$ we have
$\gamma^{a,\theta}(b)=\\{y\in Y\mid\theta\sqsupseteq b(y)\ominus^{\mathit{op}}a(y)\\}=\\{y\in Y\mid a(y)\ominus b(y)\sqsupseteq\overline{\theta}\\}.$
Let $f:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be a monotone function. The norm
becomes $|\\!|{a}|\\!|=\min\\{a(y)\mid y\in Y\\}$. Non-expansiveness (Def.
3.4) in the dual MV-algebra becomes: for all $a,b\in\mathbb{M}^{Y}$,
$|\\!|{f(b)\ominus^{\mathit{op}}f(a)}|\\!|\sqsupseteq|\\!|{b\ominus^{\mathit{op}}a}|\\!|$,
which in turn is
$\min\\{\overline{f(a)(y)\ominus f(b)(y)}\mid y\in
Y\\}\sqsupseteq\min\\{\overline{a(y)\ominus b(y)}\mid y\in Y\\}$
i.e., $|\\!|{f(a)\ominus f(b)}|\\!|\sqsubseteq|\\!|{a\ominus b}|\\!|$, which
coincides with non-expansiveness in the original MV-algebra.
Observe that, instead of taking a generic $\theta\sqsubset 1$ and then working
with $\bar{\theta}$, we can directly take $0\sqsubset\theta$ and replace
everywhere $\bar{\theta}$ with $\theta$.
While the approximations of a function in the primal sense are denoted by
$f_{a}^{\\#}$, the approximations in the dual sense will be denoted by
$f^{a}_{\\#}$.
We can also dualise 4.5 and obtain that, whenever $a$ is a fixpoint and
$Y^{\prime}=\nu f^{a}_{\\#}\neq\emptyset$, then $a\ominus\theta_{Y^{\prime}}$
is a pre-fixpoint, where $\theta=\iota_{f}^{a}(Y^{\prime})$ is suitably
defined, dualising Definition 4.4.
## 5\. (De)Composing functions and approximations
Given a non-expansive function $f$ and a (pre/post-)fixpoint $a$, it is often
non-trivial to determine the corresponding approximations. However, non-
expansive functions enjoy good closure properties (closure under composition,
and closure under disjoint union) and we will see that the same holds for the
corresponding approximations. Furthermore, it turns out that the functions
needed in the applications can be obtained from just a few templates. This
gives us a toolbox for assembling approximations with relative ease.
We start by introducing some basic functions, which will be used as the
building blocks for the functions needed in the applications.
###### Definition 5.1 (basic functions).
Let $\mathbb{M}$ be an MV-chain and let $Y$, $Z$ be finite sets.
1. (1)
_Constant:_ For a fixed $k\in\mathbb{M}^{Z}$, we define
$c_{k}:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ by
$c_{k}(a)=k$
2. (2)
_Reindexing:_ For $u:Z\to Y$, we define
$u^{*}:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ by
$u^{*}(a)=a\circ u.$
3. (3)
_Min/Max:_ For $\mathcal{R}\subseteq Y\times Z$, we define
$\min\nolimits_{\mathcal{R}},\max\nolimits_{\mathcal{R}}:\mathbb{M}^{Y}\to\mathbb{M}^{Z}$
by
$\min\nolimits_{\mathcal{R}}(a)(z)=\min\nolimits_{y\mathcal{R}z}a(y)\qquad\max\nolimits_{\mathcal{R}}(a)(z)=\max\nolimits_{y\mathcal{R}z}a(y)$
4. (4)
_Average:_ Call a function $p:Y\to\mathbb{M}$ a distribution when for all
$y\in Y$, it holds $\overline{p(y)}=\bigoplus_{y^{\prime}\in
Y\backslash\\{y\\}}p(y^{\prime})$ and let $\mathcal{D}(Y)$ be the set of
distributions. Assume that $\mathbb{M}$ is endowed with an additional
operation $\odot$ such that $(\mathbb{M},\odot,1)$ is a commutative monoid,
for $x,y\in\mathbb{M}$, $x\odot y\sqsubseteq x$, and $x\odot y=0$ iff $x=0$ or
$y=0$, and $\odot$ weakly distributes over $\oplus$, i.e., for all
$x,y,z\in\mathbb{M}$ with $y\sqsubseteq\overline{z}$, $x\odot(y\oplus
z)=x\odot y\oplus x\odot z$. For a finite set $D\subseteq\mathcal{D}(Y)$, we
define $\mathrm{av}_{D}:\mathbb{M}^{Y}\to\mathbb{M}^{D}$ by
$\mathrm{av}_{D}(a)(p)=\bigoplus_{y\in Y}p(y)\odot a(y)$
A particularly interesting subcase of (3) is when we take as relation the
_belongs to_ relation $\in\ \subseteq Y\times\mathbf{2}^{Y}$. In this way we
obtain functions for selecting the minimum and the maximum, respectively, of
an input function over a set $Y^{\prime}\subseteq Y$, that is, the functions
$\min\nolimits_{\in},\max\nolimits_{\in}:\mathbb{M}^{Y}\to\mathbb{M}^{\mathbf{2}^{Y}}$,
defined as
$\min\nolimits_{\in}(a)(Y^{\prime})=\min\limits_{y\in
Y^{\prime}}a(y)\qquad\qquad\max\nolimits_{\in}(a)(Y^{\prime})=\max\limits_{y\in
Y^{\prime}}a(y)$
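For concreteness, the basic functions can be sketched over $\mathbb{M}=[0,1]$ in the dict encoding of the earlier sketches (Python; the encodings of $u$, $\mathcal{R}$ and $D$ are of our choosing, and the additional operation $\odot$ is taken to be ordinary multiplication on $[0,1]$).

```python
# The basic functions of Def. 5.1 over M = [0,1] (a sketch; u is encoded as
# a dict Z -> Y, R as a set of pairs (y, z), D as a list of distributions
# given as dicts).

def c_k(k):
    # constant: c_k(a) = k
    return lambda a: dict(k)

def reindex(u):
    # reindexing: u*(a) = a o u
    return lambda a: {z: a[y] for z, y in u.items()}

def min_R(R, Z):
    # min_R(a)(z) = min{a(y) | y R z}; assumes every z has an R-predecessor
    return lambda a: {z: min(a[y] for (y, w) in R if w == z) for z in Z}

def max_R(R, Z):
    # max_R(a)(z) = max{a(y) | y R z}
    return lambda a: {z: max(a[y] for (y, w) in R if w == z) for z in Z}

def av_D(D):
    # av_D(a)(p) = sum_y p(y) * a(y); the sum never exceeds 1 since p is a
    # distribution, so no truncation is needed
    return lambda a: {i: sum(p[y] * a[y] for y in p) for i, p in enumerate(D)}
```

For instance, instantiating `R` with the membership relation between $Y$ and $\mathbf{2}^{Y}$ yields the functions $\min_{\in}$ and $\max_{\in}$ above.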
Also note that in the definition of $\mathit{av}_{D}$, the operation $\odot$
is necessarily monotone. In fact, if $y\sqsubseteq y^{\prime}$ then, by Lemma
2.3(2), we have $y^{\prime}=y\oplus(y^{\prime}\ominus y)$. Therefore $x\odot
y\sqsubseteq x\odot y\oplus x\odot(y^{\prime}\ominus
y)=x\odot(y\oplus(y^{\prime}\ominus y))=x\odot y^{\prime}$, where the second
passage holds by weak distributivity.
The basic functions can be shown to be non-expansive.
###### Proposition 5.2.
The basic functions from Def. 5.1 are all non-expansive.
###### Proof 5.3.
* •
_Constant functions:_ immediate.
* •
_Reindexing:_ Let $u:Z\to Y$. For all $a,b\in\mathbb{M}^{Y}$, we have
$\displaystyle|\\!|{u^{*}(b)\ominus u^{*}(a)}|\\!|$ $\displaystyle=\max_{z\in
Z}(b(u(z))\ominus a(u(z)))$ $\displaystyle\sqsubseteq\max_{y\in Y}(b(y)\ominus
a(y))$ [since $u(Z)\subseteq Y$] $\displaystyle=|\\!|{b\ominus a}|\\!|$ [by
def. of norm]
* •
_Minimum:_ Let $\mathcal{R}\subseteq Y\times Z$ be a relation. For all
$a,b\in\mathbb{M}^{Y}$, we have
$|\\!|{\min\nolimits_{\mathcal{R}}(b)\ominus\min\nolimits_{\mathcal{R}}(a)}|\\!|=\max_{z\in
Z}(\min_{y\mathcal{R}z}b(y)\ominus\min_{y\mathcal{R}z}a(y))$
Observe that
$\max_{z\in
Z}(\min_{y\mathcal{R}z}b(y)\ominus\min_{y\mathcal{R}z}a(y))=\max_{z\in
Z^{\prime}}(\min_{y\mathcal{R}z}b(y)\ominus\min_{y\mathcal{R}z}a(y))$
where $Z^{\prime}=\\{z\in Z\mid\exists\,y\in Y.\,y\mathcal{R}z\\}$, since on
every other $z\in Z\backslash Z^{\prime}$ the difference would be $0$. Now,
for every $z\in Z^{\prime}$, take $y_{z}\in Y$ such that $y_{z}\mathcal{R}z$
and $a(y_{z})=\min\limits_{y\mathcal{R}z}a(y)$. Such a $y_{z}$ is guaranteed
to exist whenever $Y$ is finite. Then, we have
$\displaystyle\max_{z\in
Z^{\prime}}(\min_{y\mathcal{R}z}b(y)\ominus\min_{y\mathcal{R}z}a(y))$
$\displaystyle\sqsubseteq\max_{z\in Z^{\prime}}(b(y_{z})\ominus a(y_{z}))$
[$\ominus$ monotone in first arg.] $\displaystyle\sqsubseteq\max_{z\in
Z^{\prime}}|\\!|{b\ominus a}|\\!|$ [by def. of norm]
$\displaystyle=|\\!|{b\ominus a}|\\!|$ [$|\\!|{b\ominus a}|\\!|$ is
independent from $z$]
* •
_Maximum:_ Let $\mathcal{R}\subseteq Y\times Z$ be a relation. For all
$a,b\in\mathbb{M}^{Y}$ we have
$\displaystyle|\\!|{\max\nolimits_{\mathcal{R}}(b)\ominus\max\nolimits_{\mathcal{R}}(a)}|\\!|$
$\displaystyle=\max_{z\in
Z}(\max_{y\mathcal{R}z}b(y)\ominus\max_{y\mathcal{R}z}a(y))$
$\displaystyle\sqsubseteq\max_{z\in Z}(\max_{y\mathcal{R}z}((b(y)\ominus
a(y))\oplus a(y))\ominus\max_{y\mathcal{R}z}a(y))$ [since $(b(y)\ominus
a(y))\oplus a(y)=a(y)\sqcup b(y)$ and $\ominus$ monotone in first arg.]
$\displaystyle\sqsubseteq\max_{z\in Z}((\max_{y\mathcal{R}z}(b(y)\ominus
a(y))\oplus\max_{y\mathcal{R}z}a(y))\ominus\max_{y\mathcal{R}z}a(y))$ [by def.
of $\max$ and monotonicity of $\oplus$]
$\displaystyle\sqsubseteq\max_{z\in Z}\max_{y\mathcal{R}z}(b(y)\ominus a(y))$ [by 2.3(6)]
$\displaystyle\sqsubseteq\max_{z\in Z}\max_{y\mathcal{R}z}|\\!|{b\ominus a}|\\!|$ [by def. of norm]
$\displaystyle=|\\!|{b\ominus a}|\\!|$ [since $|\\!|{b\ominus a}|\\!|$ is independent of $z$ and $y$]
* •
_Average:_ We first note that, when $p:Y\to\mathbb{M}$, with $Y$ finite, is a
distribution, then an inductive argument based on weak distributivity, allows
one to show that for all $x\in\mathbb{M}$, $Y^{\prime}\subseteq Y$,
$x\odot\bigoplus_{y\in Y^{\prime}}p(y)=\bigoplus_{y\in Y^{\prime}}x\odot
p(y)$.
For all $a,b\in\mathbb{M}^{Y}$ we have
$\displaystyle|\\!|{\mathrm{av}_{D}(b)\ominus\mathrm{av}_{D}(a)}|\\!|$
$\displaystyle=\max_{p\in D}(\bigoplus_{y\in Y}p(y)\odot
b(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y))$
$\displaystyle\sqsubseteq\max_{p\in D}(\bigoplus_{y\in
Y}p(y)\odot((b(y)\ominus a(y))\oplus a(y))\ominus\bigoplus_{y\in Y}p(y)\odot
a(y))$ [by monotonicity of $\odot,\oplus,\ominus$ and $(b(y)\ominus
a(y))\oplus a(y)=a(y)\sqcup b(y)$] $\displaystyle=\max_{p\in
D}(\bigoplus_{y\in Y}(p(y)\odot(b(y)\ominus a(y)))\oplus(p(y)\odot
a(y))\ominus\bigoplus_{y\in Y}p(y)\odot a(y))$ [since $b(y)\ominus
a(y)\sqsubseteq 1\ominus a(y)=\overline{a(y)}$, and $\odot$ weakly distributes
over $\oplus$] $\displaystyle=\max_{p\in D}((\bigoplus_{y\in
Y}p(y)\odot(b(y)\ominus a(y))\oplus\bigoplus_{y\in Y}p(y)\odot
a(y))\ominus\bigoplus_{y\in Y}p(y)\odot a(y))$
$\displaystyle\sqsubseteq\max_{p\in D}\bigoplus_{y\in Y}p(y)\odot(b(y)\ominus a(y))$ [by 2.3(6)]
$\displaystyle\sqsubseteq\max_{p\in D}\bigoplus_{y\in Y}p(y)\odot|\\!|{b\ominus a}|\\!|$ [by def. of norm and monotonicity of $\odot$]
$\displaystyle=\max_{p\in D}|\\!|{b\ominus a}|\\!|\odot\bigoplus_{y\in Y}p(y)$ [since $p$ is a distribution and $\odot$ weakly distributes over $\oplus$]
$\displaystyle=\max_{p\in D}(|\\!|{b\ominus a}|\\!|\odot 1)$ [since $p$ is a distribution over $Y$]
$\displaystyle=|\\!|{b\ominus a}|\\!|$ [since $|\\!|{b\ominus a}|\\!|$ is independent of $p$]
The next result determines the approximations associated with the basic
functions.
###### Proposition 5.4 (approximations of basic functions).
Let $\mathbb{M}$ be an MV-chain, $Y,Z$ be finite sets and let
$a\in\mathbb{M}^{Y}$. We define
$\mathit{Min}_{a}=\\{y\in Y\mid a(y)\text{
minimal}\\}\qquad\mathit{Max}_{a}=\\{y\in Y\mid a(y)\text{ maximal}\\}$
* •
_Constant:_ for $k\in\mathbb{M}^{Z}$, the approximations
$(c_{k})^{\\#}_{a}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{c_{k}(a)}}$,
$(c_{k})_{\\#}^{a}:\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{Z}]^{c_{k}(a)}}$ are
$(c_{k})^{\\#}_{a}(Y^{\prime})=\emptyset=(c_{k})_{\\#}^{a}(Y^{\prime})$
* •
_Reindexing:_ for $u:Z\to Y$, the approximations
$(u^{*})^{\\#}_{a}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{u^{*}(a)}}$,
$(u^{*})_{\\#}^{a}:\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{Z}]^{u^{*}(a)}}$ are
$(u^{*})^{\\#}_{a}(Y^{\prime})=(u^{*})_{\\#}^{a}(Y^{\prime})=u^{-1}(Y^{\prime})=\\{z\in[{Z}]_{u^{*}(a)}\mid
u(z)\in Y^{\prime}\\}$
* •
_Min:_ for $\mathcal{R}\subseteq Y\times Z$, the approximations
$(\min\nolimits_{\mathcal{R}})_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{\min_{\mathcal{R}}(a)}}$,
$(\min\nolimits_{\mathcal{R}})^{a}_{\\#}:\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{Z}]^{\min_{\mathcal{R}}(a)}}$
are given below, where $\mathcal{R}^{-1}(z)=\\{y\in Y\mid y\mathcal{R}z\\}$:
$(\min\nolimits_{\mathcal{R}})_{a}^{\\#}(Y^{\prime})=\\{z\in[{Z}]_{\min\nolimits_{\mathcal{R}}(a)}\mid\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq
Y^{\prime}\\}$
$(\min\nolimits_{\mathcal{R}})^{a}_{\\#}(Y^{\prime})=\\{z\in[{Z}]^{\min\nolimits_{\mathcal{R}}(a)}\mid\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\cap
Y^{\prime}\neq\emptyset\\}$
* •
_Max:_ for $\mathcal{R}\subseteq Y\times Z$, the approximations
$(\max\nolimits_{\mathcal{R}})_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{\max_{\mathcal{R}}(a)}}$,
$(\max\nolimits_{\mathcal{R}})^{a}_{\\#}:\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{Z}]^{\max_{\mathcal{R}}(a)}}$
are
$(\max\nolimits_{\mathcal{R}})_{a}^{\\#}(Y^{\prime})=\\{z\in[{Z}]_{\max\nolimits_{\mathcal{R}}(a)}\mid\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\cap
Y^{\prime}\neq\emptyset\\}$
$(\max\nolimits_{\mathcal{R}})^{a}_{\\#}(Y^{\prime})=\\{z\in[{Z}]^{\max\nolimits_{\mathcal{R}}(a)}\mid\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq
Y^{\prime}\\}$
* •
_Average:_ for a finite $D\subseteq\mathcal{D}(Y)$, the approximations
$(\mathrm{av}_{D})_{a}^{\\#}:\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{D}]_{\mathrm{av}_{D}(a)}}$,
$(\mathrm{av}_{D})^{a}_{\\#}\colon\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{D}]^{\mathrm{av}_{D}(a)}}$
are
$(\mathrm{av}_{D})_{a}^{\\#}(Y^{\prime})=\\{p\in[{D}]_{\mathrm{av}_{D}(a)}\mid\mathit{supp}(p)\subseteq
Y^{\prime}\\}$
$(\mathrm{av}_{D})^{a}_{\\#}(Y^{\prime})=\\{p\in[{D}]^{\mathrm{av}_{D}(a)}\mid\mathit{supp}(p)\subseteq
Y^{\prime}\\},$
where $\mathit{supp}(p)=\\{y\in Y\mid p(y)>0\\}$ for $p\in\mathcal{D}(Y)$.
###### Proof 5.5.
We only consider the primal cases, the dual ones are analogous.
Let $a\in\mathbb{M}^{Y}$.
* •
_Constant:_ for all $0\sqsubset\theta\sqsubseteq\delta_{a}$ and
$Y^{\prime}\subseteq[{Y}]_{a}$ we have
$\displaystyle(c_{k})_{a,\theta}^{\\#}(Y^{\prime})$
$\displaystyle=\gamma_{c_{k}(a),\theta}\circ
c_{k}\circ\alpha_{a,\theta}(Y^{\prime})$
$\displaystyle=\\{z\in[{Z}]_{c_{k}(a)}\mid\theta\sqsubseteq
c_{k}(a\oplus\theta_{Y^{\prime}})(z)\ominus c_{k}(a)(z)\\}$
$\displaystyle=\\{z\in[{Z}]_{c_{k}(a)}\mid\theta\sqsubseteq k\ominus
k\\}=\\{z\in Z\mid\theta\sqsubseteq 0\\}=\emptyset$
Hence all values $\iota_{a}^{c_{k}}(Y^{\prime},z)$ are equal to $0$ and we have
$\iota_{a}^{c_{k}}=\delta_{a}$. Replacing $\theta$ by $\iota_{a}^{c_{k}}$ we obtain
$(c_{k})_{a}^{\\#}(Y^{\prime})=\emptyset$.
* •
_Reindexing:_ for all $0\sqsubset\theta\sqsubseteq\delta_{a}$ and
$Y^{\prime}\subseteq[{Y}]_{a}$ we have
$\displaystyle(u^{*})_{a,\theta}^{\\#}(Y^{\prime})$
$\displaystyle=\gamma_{u^{*}(a),\theta}\circ
u^{*}\circ\alpha_{a,\theta}(Y^{\prime})$
$\displaystyle=\\{z\in[{Z}]_{u^{*}(a)}\mid\theta\sqsubseteq(a\oplus\theta_{Y^{\prime}})(u(z))\ominus
a(u(z))\\}.$
We show that this corresponds to $u^{-1}(Y^{\prime})=\\{z\in Z\mid u(z)\in
Y^{\prime}\\}$. It is easy to see that for all $z\in u^{-1}(Y^{\prime})$, we
have
$(a\oplus\theta_{Y^{\prime}})(u(z))\ominus
a(u(z))=\theta=a(u(z))\ominus(a\ominus\theta_{Y^{\prime}})(u(z))$
since $u(z)\in Y^{\prime}$ and $\theta\sqsubseteq\delta_{a}$. Since $u(z)\in
Y^{\prime}\subseteq[{Y}]_{a}$, we have $u^{*}(a)(z)=a(u(z))\neq 1$ and hence
$z\in[{Z}]_{u^{*}(a)}$. On the other hand, for all $z\not\in
u^{-1}(Y^{\prime})$, we have
$(a\oplus\theta_{Y^{\prime}})(u(z))=a(u(z))=(a\ominus\theta_{Y^{\prime}})(u(z))$
since $u(z)\notin Y^{\prime}$, and so
$(a\oplus\theta_{Y^{\prime}})(u(z))\ominus
a(u(z))=a(u(z))\ominus(a\ominus\theta_{Y^{\prime}})(u(z))=0\sqsubset\theta.$
Therefore $(u^{*})_{a,\theta}^{\\#}(Y^{\prime})=u^{-1}(Y^{\prime})$.
We observe that for $Y^{\prime}\subseteq[{Y}]_{a}$, $z\in[{Z}]_{u^{*}(a)}$
either $u^{*}(a\oplus\theta_{Y^{\prime}})(z)\ominus
u^{*}(a)(z)\sqsubset\theta$ for all $0\sqsubset\theta\sqsubseteq\delta_{a}$ –
and in this case $\iota_{a}^{u^{*}}(Y^{\prime},z)=0$ – or
$u^{*}(a\oplus\theta_{Y^{\prime}})(z)\ominus u^{*}(a)(z)=\theta$ for all
$0\sqsubset\theta\sqsubseteq\delta_{a}$ – and in this case
$\iota_{a}^{u^{*}}(Y^{\prime},z)=\delta_{a}$. By taking the minimum over all
non-zero values, we get $\iota_{a}^{u^{*}}=\delta_{a}$.
And finally we observe that
$(u^{*})_{a}^{\\#}(Y^{\prime})=(u^{*})_{a,\iota_{a}^{u^{*}}}^{\\#}(Y^{\prime})=u^{-1}(Y^{\prime})$.
* •
_Minimum:_ let $0\sqsubset\theta\sqsubseteq\delta_{a}$. For all
$Y^{\prime}\subseteq[{Y}]_{a}$ we have
$\displaystyle(\min\nolimits_{\mathcal{R}})_{a,\theta}^{\\#}(Y^{\prime})$
$\displaystyle=\gamma_{\min\nolimits_{\mathcal{R}}(a),\theta}\circ\min\nolimits_{\mathcal{R}}\circ\alpha_{a,\theta}(Y^{\prime})$
$\displaystyle=\\{z\in[{Z}]_{\min\nolimits_{\mathcal{R}}(a)}\mid\theta\sqsubseteq\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)\ominus\min_{y\mathcal{R}z}a(y)\\}$
We compute the value
$V=\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)\ominus\min_{y\mathcal{R}z}a(y)$
and consider the following cases:
* –
Assume that there exists $\hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$
where $\hat{y}\not\in Y^{\prime}$.
Then $(a\oplus\theta_{Y^{\prime}})(\hat{y})=a(\hat{y})\sqsubseteq
a(y)\sqsubseteq(a\oplus\theta_{Y^{\prime}})(y)$ for all
$y\in\mathcal{R}^{-1}(z)$, which implies that
$\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)=a(\hat{y})$. We also have
$\min_{y\mathcal{R}z}a(y)=a(\hat{y})$ and hence $V=0$.
* –
Assume that $\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}$ and
$\theta\sqsubseteq a(y)\ominus a(\hat{y})$ whenever
$\hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$, $y\not\in Y^{\prime}$ and
$y\mathcal{R}z$.
Since $\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}$ we observe
that
$\displaystyle\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)$
$\displaystyle=$
$\displaystyle\min\\{\min_{y\in\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}}(a(y)\oplus\theta),\min_{y\mathcal{R}z,y\not\in
Y^{\prime}}a(y)\\}$
We can omit the values of all $y$ with $y\mathcal{R}z$,
$y\not\in\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$, $y\in Y^{\prime}$, since we
will never attain the minimum there.
Now let $\hat{y}\in\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$ and $y$ with
$y\mathcal{R}z$ and $y\not\in Y^{\prime}$. Then $\theta\sqsubseteq a(y)\ominus
a(\hat{y})$ by assumption, which implies $a(\hat{y})\oplus\theta\sqsubseteq
a(y)$, since $a(\hat{y})\sqsubseteq a(y)$ and 2.3(2) holds.
From this we can deduce
$\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)=a(\hat{y})\oplus\theta$.
We also have $\min_{y\mathcal{R}z}a(y)=a(\hat{y})$ and hence – since
$a(\hat{y})\sqsubseteq\overline{\theta}$ (due to
$\theta\sqsubseteq\delta_{a}\sqsubseteq\overline{a(\hat{y})}$) and 2.3(9)
holds – $V=(a(\hat{y})\oplus\theta)\ominus a(\hat{y})=\theta$.
* –
In the remaining case $\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq
Y^{\prime}$ and there exist
$\hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$, $y\not\in Y^{\prime}$,
$y\mathcal{R}z$ such that $a(y)\ominus a(\hat{y})\sqsubset\theta$.
This implies $a(y)\sqsubseteq(a(y)\ominus a(\hat{y}))\oplus
a(\hat{y})\sqsubset\theta\oplus a(\hat{y})$ since again
$a(\hat{y})\sqsubseteq\overline{\theta}$ and 2.3(8) holds. Hence
$\min_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)\sqsubseteq a(y)$, which
means that $V\sqsubseteq a(y)\ominus a(\hat{y})\sqsubset\theta$.
Summarizing, for $\theta\sqsubseteq\delta_{a}$ we observe that $V=\theta$ if
and only if $\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}$ and
$\theta\sqsubseteq a(y)\ominus a(\hat{y})$ whenever
$\hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$, $y\not\in Y^{\prime}$ and
$y\mathcal{R}z$.
Hence if $\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}$ we have
$\iota_{a}^{\min\nolimits_{\mathcal{R}}}(Y^{\prime},z)=\min\big(\\{a(y)\ominus
a(\hat{y})\mid\hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}},y\not\in
Y^{\prime},y\mathcal{R}z\\}\cup\\{\delta_{a}\\}\big)$
otherwise $\iota_{a}^{\min\nolimits_{\mathcal{R}}}(Y^{\prime},z)=0$.
The values above are minimal whenever
$Y^{\prime}=\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}$ and thus we have:
$\iota_{a}^{\min\nolimits_{\mathcal{R}}}=\min\big(\\{a(y)\ominus a(\hat{y})\mid
z\in[{Z}]_{\min\nolimits_{\mathcal{R}}(a)},\ y\mathcal{R}z,\ \hat{y}\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}},\ y\not\in\textit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\\}\cup\\{\delta_{a}\\}\big).$
Finally we deduce that
$(\min\nolimits_{\mathcal{R}})_{a}^{\\#}(Y^{\prime})=(\min\nolimits_{\mathcal{R}})_{a,\iota_{a}^{\min\nolimits_{\mathcal{R}}}}^{\\#}(Y^{\prime})=\\{z\in[{Z}]_{\min\nolimits_{\mathcal{R}}(a)}\mid\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq
Y^{\prime}\\}.$
* •
_Maximum:_ let $0\sqsubset\theta\sqsubseteq\delta_{a}$. For all
$Y^{\prime}\subseteq[{Y}]_{a}$ we have
$\displaystyle(\max\nolimits_{\mathcal{R}})_{a,\theta}^{\\#}(Y^{\prime})$
$\displaystyle=\gamma_{\max\nolimits_{\mathcal{R}}(a),\theta}\circ\max\nolimits_{\mathcal{R}}\circ\alpha_{a,\theta}(Y^{\prime})$
$\displaystyle=\\{z\in[{Z}]_{\max\nolimits_{\mathcal{R}}(a)}\mid\theta\sqsubseteq\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)\ominus\max_{y\mathcal{R}z}a(y)\\}$
We observe that
$\displaystyle\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)$
$\displaystyle=$
$\displaystyle\max\\{\max_{y\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}}(a\oplus\theta_{Y^{\prime}})(y),\max_{\begin{subarray}{c}y\mathcal{R}z,y\in
Y^{\prime}\\\
y\not\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\end{subarray}}(a(y)\oplus\theta)\\}$
We can omit the values of all $y$ with $y\mathcal{R}z$,
$y\not\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}$, $y\not\in Y^{\prime}$,
since we will never attain the maximum there.
We now compute the value
$V=\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)\ominus\max_{y\mathcal{R}z}a(y)$
and consider the following cases:
* –
Assume that there exists $\hat{y}\in\textit{Max}_{a|_{\mathcal{R}^{-1}(z)}}$
where $\hat{y}\in Y^{\prime}$.
Then
$(a\oplus\theta_{Y^{\prime}})(\hat{y})=a(\hat{y})\oplus\theta\sqsupseteq(a\oplus\theta_{Y^{\prime}})(y)\sqsupseteq
a(y)$ for all $y\in\mathcal{R}^{-1}(z)$, which implies that
$\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)=a(\hat{y})\oplus\theta$.
We also have $\max_{y\mathcal{R}z}a(y)=a(\hat{y})$ and hence – since
$a(\hat{y})\sqsubseteq\overline{\theta}$ and 2.3(9) holds –
$V=(a(\hat{y})\oplus\theta)\ominus a(\hat{y})=\theta$.
* –
Assume that $\textit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\cap
Y^{\prime}=\emptyset$ and let
$\hat{y}\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}$. Then
$\max_{y\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}}(a\oplus\theta_{Y^{\prime}})(y)=a(\hat{y})\qquad\text{and}\qquad\max_{\begin{subarray}{c}y\mathcal{R}z,y\in Y^{\prime}\\\ y\not\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\end{subarray}}(a(y)\oplus\theta)=a(y^{\prime})\oplus\theta$
for some $y^{\prime}$ with $y^{\prime}\mathcal{R}z$, $y^{\prime}\in
Y^{\prime}$ and $y^{\prime}\not\in\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}$, that
is, $a(y^{\prime})\sqsubset a(\hat{y})$.
Then either
$\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)=a(\hat{y})$, in which case
$V=a(\hat{y})\ominus a(\hat{y})=0$, or
$\max_{y\mathcal{R}z}(a\oplus\theta_{Y^{\prime}})(y)=a(y^{\prime})\oplus\theta$,
in which case, by 2.3(11), $V=(a(y^{\prime})\oplus\theta)\ominus
a(\hat{y})\sqsubset\theta$.
Summarizing, for $\theta\sqsubseteq\delta_{a}$ we observe that $V=\theta$ if
and only if $\textit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\cap
Y^{\prime}\neq\emptyset$, where the latter condition is independent of
$\theta$.
Hence, as in the case of reindexing, we have
$\iota_{a}^{\max\nolimits_{\mathcal{R}}}=\delta_{a}$. Finally we have
$(\max\nolimits_{\mathcal{R}})_{a}^{\\#}(Y^{\prime})=(\max\nolimits_{\mathcal{R}})_{a,\iota_{a}^{\max\nolimits_{\mathcal{R}}}}^{\\#}(Y^{\prime})=\\{z\in[{Z}]_{\max\nolimits_{\mathcal{R}}(a)}\mid\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\cap
Y^{\prime}\neq\emptyset\\}.$
* •
_Average:_ for all $0\sqsubset\theta\sqsubseteq\delta_{a}$ and
$Y^{\prime}\subseteq[{Y}]_{a}$ by definition
$\displaystyle(\mathrm{av}_{D})_{a,\theta}^{\\#}(Y^{\prime})$
$\displaystyle=\gamma_{\mathrm{av}_{D}(a),\theta}\circ\mathrm{av}_{D}\circ\alpha_{a,\theta}(Y^{\prime})$
$\displaystyle=\\{p\in[{D}]_{\mathrm{av}_{D}(a)}\mid\theta\sqsubseteq\bigoplus_{y\in
Y}p(y)\odot(a\oplus\theta_{Y^{\prime}})(y)\ominus\bigoplus_{y\in Y}p(y)\odot
a(y)\\}$
We show that this set corresponds to
$\\{p\in[{D}]_{\mathrm{av}_{D}(a)}\mid\mathit{supp}(p)\subseteq
Y^{\prime}\\}$.
Consider $p\in[{D}]_{\mathrm{av}_{D}(a)}$ such that $\mathit{supp}(p)\subseteq
Y^{\prime}$. Note that clearly $\bigoplus_{y\in Y^{\prime}}p(y)=1$. Now we
have
$\displaystyle\bigoplus_{y\in
Y}p(y)\odot(a\oplus\theta_{Y^{\prime}})(y)\ominus\bigoplus_{y\in Y}p(y)\odot
a(y)$ $\displaystyle=\bigoplus_{y\in
Y^{\prime}}p(y)\odot(a(y)\oplus\theta)\oplus\bigoplus_{y\in Y\backslash
Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$
$\displaystyle=\bigoplus_{y\in Y^{\prime}}(p(y)\odot a(y)\oplus
p(y)\odot\theta)\oplus\bigoplus_{y\in Y\backslash Y^{\prime}}p(y)\odot
a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$ [by weak distributivity, since
for $y\in Y^{\prime}\subseteq[{Y}]_{a}$,
$a(y)\sqsubseteq\overline{\delta_{a}}$] $\displaystyle=\bigoplus_{y\in
Y^{\prime}}p(y)\odot\theta\oplus\bigoplus_{y\in Y^{\prime}}p(y)\odot
a(y)\oplus\bigoplus_{y\in Y\backslash Y^{\prime}}p(y)\odot
a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$ $\displaystyle=\bigoplus_{y\in
Y^{\prime}}p(y)\odot\theta\oplus\bigoplus_{y\in Y^{\prime}}p(y)\odot
a(y)\ominus\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)$ [since, for $y\not\in
Y^{\prime}\supseteq\mathit{supp}(p)$, $p(y)=0$ and thus $p(y)\odot a(y)=0$]
$\displaystyle=(\bigoplus_{y\in
Y^{\prime}}p(y))\odot\theta\oplus\bigoplus_{y\in Y^{\prime}}p(y)\odot
a(y)\ominus\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)$ [by weak distributivity,
since $p$ is a distribution] $\displaystyle=1\odot\theta\oplus\bigoplus_{y\in
Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)$
[since $p$ is a distribution] $\displaystyle=\theta\oplus\bigoplus_{y\in
Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)$
$\displaystyle=\theta$
In order to motivate the last passage, observe that for all $y\in
Y^{\prime}\subseteq[{Y}]_{a}$, we have $a(y)\sqsubseteq\overline{\delta_{a}}$,
and thus $\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)\sqsubseteq\bigoplus_{y\in
Y^{\prime}}p(y)\odot\overline{\delta_{a}}=(\bigoplus_{y\in
Y^{\prime}}p(y))\odot\overline{\delta_{a}}=1\odot\overline{\delta_{a}}=\overline{\delta_{a}}$,
where the third last passage is motivated by weak distributivity. Since
$\theta\sqsubseteq\delta_{a}$, by 2.3(3), we have
$\overline{\delta_{a}}\sqsubseteq\overline{\theta}$ and thus $\bigoplus_{y\in
Y^{\prime}}p(y)\odot a(y)\sqsubseteq\overline{\theta}$. In turn, using this
fact, 2.3(9) motivates the last equality in the chain above, i.e.,
$\theta\oplus\bigoplus_{y\in Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in
Y^{\prime}}p(y)\odot a(y)=\theta$.
On the other hand, for all $p\in[{D}]_{\mathrm{av}_{D}(a)}$ such that
$\mathit{supp}(p)\not\subseteq Y^{\prime}$, there exists $y^{\prime}\in
Y\backslash Y^{\prime}$ such that $p(y^{\prime})\neq 0$. Then, we have
$\displaystyle\bigoplus_{y\in
Y}p(y)\odot(a\oplus\theta_{Y^{\prime}})(y)\ominus\bigoplus_{y\in Y}p(y)\odot
a(y)$ $\displaystyle=\bigoplus_{y\in
Y^{\prime}}p(y)\odot(a(y)\oplus\theta)\oplus\bigoplus_{y\in Y\backslash
Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$
$\displaystyle=\bigoplus_{y\in Y^{\prime}}p(y)\odot\theta\oplus\bigoplus_{y\in
Y^{\prime}}p(y)\odot a(y)\oplus\bigoplus_{y\in Y\backslash
Y^{\prime}}p(y)\odot a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$ [by weak
distributivity, since for $y\in Y^{\prime}\subseteq[{Y}]_{a}$,
$a(y)\sqsubseteq\overline{\delta_{a}}$] $\displaystyle=\bigoplus_{y\in
Y^{\prime}}p(y)\odot\theta\oplus\bigoplus_{y\in Y}p(y)\odot
a(y)\ominus\bigoplus_{y\in Y}p(y)\odot a(y)$
$\displaystyle\sqsubseteq\bigoplus_{y\in Y^{\prime}}p(y)\odot\theta$ [by
2.3(6)] $\displaystyle=\theta\odot\bigoplus_{y\in Y^{\prime}}p(y)$ [by weak
distributivity, since $p$ is a distribution] $\displaystyle\sqsubset\theta$
In order to motivate the last inequality, we proceed as follows. We have that
$\mathit{supp}(p)\not\subseteq Y^{\prime}$. Let
$y_{0}\in\mathit{supp}(p)\backslash Y^{\prime}$. Since $Y^{\prime}\subseteq
Y\backslash\\{y_{0}\\}$, we know that
$\overline{p(y_{0})}=\bigoplus_{y\in
Y\backslash\\{y_{0}\\}}p(y)\sqsupseteq\bigoplus_{y\in Y^{\prime}}p(y)$. Therefore
$\overline{\bigoplus_{y\in Y^{\prime}}p(y)}\sqsupseteq p(y_{0})\neq 0$. Hence
$\bigoplus_{y\in Y^{\prime}}p(y)\sqsubset 1$.
The strict inequality above now follows if we further show that, for
$x\in\mathbb{M}$ with $x\neq 1$, we have $\theta\odot x\sqsubset\theta$. Note that
$\overline{x}\neq 0$. Therefore $\theta=\theta\odot
1=\theta\odot(x\oplus\overline{x})=\theta\odot
x\oplus\theta\odot\overline{x}$, where the last equality follows by weak
distributivity. Now
$\theta\odot\overline{x}\sqsubseteq\overline{x}\sqsubseteq\overline{\theta\odot
x}$, and thus, by 2.3(9), we obtain $\theta\odot x=\theta\odot
x\oplus\theta\odot\overline{x}\ominus\theta\odot\overline{x}=\theta\ominus\theta\odot\overline{x}\sqsubset\theta$,
as desired. The last passage follows by the fact that $\theta,\overline{x}\neq
0$ and thus $\theta\odot\overline{x}\neq 0$.
Since these results hold for all $\theta\sqsubseteq\delta_{a}$, we have
$\iota_{a}^{\mathrm{av}_{D}}=\delta_{a}$.
And finally
$(\mathrm{av}_{D})_{a}^{\\#}(Y^{\prime})=\\{p\in[{D}]_{\mathrm{av}_{D}(a)}\mid\mathit{supp}(p)\subseteq
Y^{\prime}\\}$.
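These closed forms translate directly into code; the following sketch (Python, reusing `min_R` and `av_D` from the sketch after Def. 5.1) implements the primal approximations of minimum and average from Table 1, with $z\in[{Z}]_{f(a)}$ encoded as $f(a)(z)\neq 1$.

```python
# Closed forms of two primal approximations from Proposition 5.4 / Table 1,
# over M = [0,1] in the dict encoding used so far (a sketch, not a library).

def Min_set(a, ys):
    # Min_{a|ys}: the arguments of the minimum of a on the finite set ys
    m = min(a[y] for y in ys)
    return {y for y in ys if a[y] == m}

def min_R_sharp(R, Z, a):
    fa = min_R(R, Z)(a)
    return lambda Ys: {z for z in Z if fa[z] < 1.0
                       and Min_set(a, {y for (y, w) in R if w == z}) <= set(Ys)}

def av_D_sharp(D, a):
    fa = av_D(D)(a)
    return lambda Ys: {i for i, p in enumerate(D) if fa[i] < 1.0
                       and {y for y in p if p[y] > 0} <= set(Ys)}
```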
When a non-expansive function arises as the composition of simpler ones (see
3.9) we can obtain the corresponding approximation by just composing the
approximations of the simpler functions.
###### Proposition 5.6 (composing approximations).
Let $g:\mathbb{M}^{Y}\to\mathbb{M}^{W}$ and
$h:\mathbb{M}^{W}\to\mathbb{M}^{Z}$ be non-expansive functions. For all
$a\in\mathbb{M}^{Y}$ we have that $(h\circ g)_{a}^{\\#}=h_{g(a)}^{\\#}\circ
g_{a}^{\\#}$. Analogously $(h\circ g)^{a}_{\\#}=h^{g(a)}_{\\#}\circ
g^{a}_{\\#}$ for the dual case.
###### Proof 5.7.
Here we only consider the primal case, the dual case for $(h\circ
g)_{\\#}^{a}$ is analogous.
Let $0\sqsubset\theta\sqsubseteq\min\\{\iota_{a}^{g},\iota_{g(a)}^{h}\\}$.
Then, by 3.26(2) we know that
$g_{a}^{\\#}=g_{a,\theta}^{\\#}=\gamma_{g(a),\theta}\circ
g\circ\alpha_{a,\theta}$
$h_{g(a)}^{\\#}=h_{g(a),\theta}^{\\#}=\gamma_{h(g(a)),\theta}\circ
h\circ\alpha_{g(a),\theta}$
Now we will prove that
$(h\circ g)_{a,\theta}^{\\#}=h_{g(a),\theta}^{\\#}\circ g_{a,\theta}^{\\#}$
First observe that
$g(\alpha_{a,\theta}(Y^{\prime}))\in[{g(a)},{g(a\oplus\theta)}]\subseteq[{g(a)},{g(a)\oplus\theta}]$
for all $Y^{\prime}\subseteq[{Y}]_{a}$ by 3.7. Applying 3.26(2) on $h$ we
obtain
$\displaystyle(h\circ g)_{a,\theta}^{\\#}(Y^{\prime})=\gamma_{h(g(a)),\theta}\circ h\circ g\circ\alpha_{a,\theta}(Y^{\prime})=h_{g(a),\theta}^{\\#}\circ\gamma_{g(a),\theta}\circ g\circ\alpha_{a,\theta}(Y^{\prime})=h_{g(a),\theta}^{\\#}\circ g_{a,\theta}^{\\#}(Y^{\prime})=h_{g(a)}^{\\#}\circ g_{a}^{\\#}(Y^{\prime})$
Hence all functions $(h\circ g)_{a,\theta}^{\\#}$ are equal and independent of
$\theta$ and so it must hold that $(h\circ g)_{a,\theta}^{\\#}=(h\circ
g)_{a}^{\\#}$. Then from 3.26 we can conclude
$\min\\{\iota_{a}^{g},\iota_{g(a)}^{h}\\}\sqsubseteq\iota_{a}^{h\circ g}$.
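In code, composing approximations is literally function composition; a one-line sketch (Python, where `g_sharp_a` and `h_sharp_ga` stand for the component approximations $g_{a}^{\\#}$ and $h_{g(a)}^{\\#}$):

```python
# Composition of approximations as in Proposition 5.6 (sketch).
def compose_sharp(h_sharp_ga, g_sharp_a):
    return lambda Ys: h_sharp_ga(g_sharp_a(Ys))
```

For instance, the approximation of $\eta^{*}\circ\mathrm{av}_{D}$ used for termination probabilities in Section 6 can be assembled this way from the approximations of its two components.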
Furthermore functions can be combined via disjoint union, preserving non-
expansiveness, as follows.
###### Proposition 5.8 (disjoint union of non-expansive functions).
Let $f_{i}:\mathbb{M}^{Y_{i}}\to\mathbb{M}^{Z_{i}}$, for $i\in I$, be non-
expansive and such that the sets $Z_{i}$ are pairwise disjoint. The function
$\biguplus\limits_{i\in I}f_{i}:\mathbb{M}^{\bigcup_{i\in
I}Y_{i}}\to\mathbb{M}^{\biguplus_{i\in I}Z_{i}}$ defined by
$\biguplus_{i\in I}f_{i}(a)(z)=f_{i}(a|_{Y_{i}})(z)\qquad\mbox{if $z\in
Z_{i}$}$
is non-expansive.
###### Proof 5.9.
For all $a,b\in\mathbb{M}^{\bigcup_{i\in I}Y_{i}}$ we have
$\displaystyle|\\!|{\biguplus_{i\in I}f_{i}(b)\ominus\biguplus_{i\in
I}f_{i}(a)}|\\!|$ $\displaystyle=\max_{z\in\biguplus_{i\in
I}Z_{i}}(\biguplus_{i\in I}f_{i}(b)(z)\ominus\biguplus_{i\in I}f_{i}(a)(z))$
$\displaystyle=\max_{i\in I}\max_{z\in Z_{i}}(f_{i}(b|_{Y_{i}})(z)\ominus
f_{i}(a|_{Y_{i}})(z))$ [since all $Z_{i}$ are disjoint]
$\displaystyle=\max_{i\in I}|\\!|{f_{i}(b|_{Y_{i}})\ominus
f_{i}(a|_{Y_{i}})}|\\!|$ [by def. of norm] $\displaystyle\sqsubseteq\max_{i\in
I}|\\!|{b|_{Y_{i}}\ominus a|_{Y_{i}}}|\\!|$ [since all $f_{i}$ are non-
expansive] $\displaystyle=\max_{i\in I}\max_{y\in Y_{i}}(b(y)\ominus a(y))$
$\displaystyle=\max_{y\in\bigcup_{i\in I}Y_{i}}(b(y)\ominus a(y))$
$\displaystyle=|\\!|{b\ominus a}|\\!|$ [by def. of norm]
Also, the corresponding approximation of a disjoint union can be conveniently
assembled from the approximations of its components.
###### Proposition 5.10 (disjoint union and approximations).
The approximations for $\biguplus\limits_{i\in I}f_{i}$, where
$f_{i}:\mathbb{M}^{Y_{i}}\to\mathbb{M}^{Z_{i}}$ are non-expansive and $Z_{i}$
are pairwise disjoint, have the following form. For all $a\colon{\bigcup_{i\in
I}Y_{i}}\to\mathbb{M}$ and $Y^{\prime}\subseteq\bigcup_{i\in I}Y_{i}$:
$\big{(}\biguplus_{i\in I}f_{i}\big{)}_{a}^{\\#}(Y^{\prime})=\biguplus_{i\in
I}(f_{i})_{a|_{Y_{i}}}^{\\#}(Y^{\prime}\cap Y_{i})\qquad\big{(}\biguplus_{i\in
I}f_{i}\big{)}^{a}_{\\#}(Y^{\prime})=\biguplus_{i\in
I}(f_{i})^{a|_{Y_{i}}}_{\\#}(Y^{\prime}\cap Y_{i})$
###### Proof 5.11.
We just show the statement for the primal case, the dual case is analogous. We
abbreviate $Y=\bigcup_{i\in I}Y_{i}$.
Let $0\sqsubset\theta\sqsubseteq\delta_{a}$. According to the definition of
$a$-approximation (3.25) we have for $Y^{\prime}\subseteq[{Y}]_{a}$:
$\big{(}\biguplus_{i\in
I}f_{i}\big{)}_{a,\theta}^{\\#}(Y^{\prime})=\gamma_{\biguplus\limits_{i\in
I}f_{i}(a),\theta}\circ\biguplus\limits_{i\in I}f_{i}\circ\alpha_{a,\theta}$
$(f_{i})_{a|_{Y_{i}},\theta}^{\\#}=\gamma_{f_{i}(a|_{Y_{i}}),\theta}\circ
f_{i}\circ\alpha_{a|_{Y_{i}},\theta}$
for all $i\in I$. Our first step is to prove that
$\gamma_{\biguplus\limits_{i\in I}f_{i}(a),\theta}\circ\biguplus\limits_{i\in
I}f_{i}\circ\alpha_{a,\theta}(Y^{\prime})=\biguplus_{i\in
I}\gamma_{f_{i}(a|_{Y_{i}}),\theta}\circ
f_{i}\circ\alpha_{a|_{Y_{i}},\theta}(Y^{\prime}\cap Y_{i})$
By simply expanding the functions we obtain
$\gamma_{\biguplus\limits_{i\in I}f_{i}(a),\theta}\circ\biguplus\limits_{i\in
I}f_{i}\circ\alpha_{a,\theta}(Y^{\prime})=\\{z\in Z_{i}\mid i\in I\ \land\
\theta\sqsubseteq f_{i}((a\oplus\theta_{Y^{\prime}})|_{Y_{i}})(z)\ominus
f_{i}(a|_{Y_{i}})(z)\\}$ $\biguplus_{i\in
I}\gamma_{f_{i}(a|_{Y_{i}}),\theta}\circ
f_{i}\circ\alpha_{a|_{Y_{i}},\theta}(Y^{\prime}\cap Y_{i})=\biguplus_{i\in
I}\\{z\in Z_{i}\mid\theta\sqsubseteq
f_{i}(a|_{Y_{i}}\oplus\theta_{Y^{\prime}\cap Y_{i}})(z)\ominus
f_{i}(a|_{Y_{i}})(z)\\}$
which are the same set, since for all $i\in I$ clearly
$(a\oplus\theta_{Y^{\prime}})|_{Y_{i}}=a|_{Y_{i}}\oplus\theta_{Y^{\prime}\cap
Y_{i}}$.
This implies
$\big{(}\biguplus_{i\in
I}f_{i}\big{)}_{a,\theta}^{\\#}(Y^{\prime})=\biguplus_{i\in
I}(f_{i})_{a|_{Y_{i}},\theta}^{\\#}(Y^{\prime}\cap Y_{i}).$
Whenever $\theta\sqsubseteq\min\limits_{i\in I}\iota_{a|_{Y_{i}}}^{f_{i}}$, this can be
rewritten to
$\big{(}\biguplus_{i\in
I}f_{i}\big{)}_{a,\theta}^{\\#}(Y^{\prime})=\biguplus_{i\in
I}(f_{i})_{a|_{Y_{i}}}^{\\#}(Y^{\prime}\cap Y_{i}).$
All functions $\big{(}\biguplus_{i\in I}f_{i}\big{)}_{a,\theta}^{\\#}$ are
equal and independent of $\theta$ and so it must hold that
$\big{(}\biguplus_{i\in I}f_{i}\big{)}_{a,\theta}^{\\#}$
$=\big{(}\biguplus_{i\in I}f_{i}\big{)}_{a}^{\\#}$. Then with 3.26 we can also
conclude $\min\limits_{i\in
I}\iota_{a|_{Y_{i}}}^{f_{i}}\sqsubseteq\iota_{a}^{\biguplus_{i\in I}f_{i}}$.
Table 1. Basic functions $f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ (constant,
reindexing, minimum, maximum, average), function composition, disjoint union
and the corresponding approximations
$f_{a}^{\\#}\colon\mathbf{2}^{[{Y}]_{a}}\to\mathbf{2}^{[{Z}]_{f(a)}}$,
$f^{a}_{\\#}\colon\mathbf{2}^{[{Y}]^{a}}\to\mathbf{2}^{[{Z}]^{f(a)}}$.
_Notation:_ $\mathcal{R}^{-1}(z)=\\{y\in Y\mid y\mathcal{R}z\\}$,
$\mathit{supp}(p)=\\{y\in Y\mid p(y)>0\\}$ for $p\in\mathcal{D}(Y)$,
$\mathit{Min}_{a}=\\{y\in Y\mid a(y)\text{ minimal}\\}$,
$\mathit{Max}_{a}=\\{y\in Y\mid a(y)\text{ maximal}\\}$, $a\colon
Y\to\mathbb{M}$
function $f$ | definition of $f$ | $f_{a}^{\\#}(Y^{\prime})$ (above), $f_{\\#}^{a}(Y^{\prime})$ (below)
---|---|---
$c_{k}$ | $f(a)=k$ | $\emptyset$
($k\in\mathbb{M}^{Z}$) | | $\emptyset$
$u^{*}$ | $f(a)=a\circ u$ | $u^{-1}(Y^{\prime})$
($u\colon Z\to Y$) | | $u^{-1}(Y^{\prime})$
$\min\nolimits_{\mathcal{R}}$ | $f(a)(z)=\min\limits_{y\mathcal{R}z}a(y)$ | $\\{z\in[{Z}]_{f(a)}\mid\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}\\}$
($\mathcal{R}\subseteq Y\times Z$) | | $\\{z\in[{Z}]^{f(a)}\mid\mathit{Min}_{a|_{\mathcal{R}^{-1}(z)}}\cap Y^{\prime}\neq\emptyset\\}$
$\max\nolimits_{\mathcal{R}}$ | $f(a)(z)=\max\limits_{y\mathcal{R}z}a(y)$ | $\\{z\in[{Z}]_{f(a)}\mid\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\cap Y^{\prime}\neq\emptyset\\}$
($\mathcal{R}\subseteq Y\times Z$) | | $\\{z\in[{Z}]^{f(a)}\mid\mathit{Max}_{a|_{\mathcal{R}^{-1}(z)}}\subseteq Y^{\prime}\\}$
$\mathrm{av}_{D}$ | $f(a)(p)=\bigoplus\limits_{y\in Y}p(y)\odot a(y)$ | $\\{p\in[{D}]_{f(a)}\mid\mathit{supp}(p)\subseteq Y^{\prime}\\}$
($Z=D\subseteq\mathcal{D}(Y)$) | | $\\{p\in[{D}]^{f(a)}\mid\mathit{supp}(p)\subseteq Y^{\prime}\\}$
$h\circ g$ | $f(a)=h(g(a))$ | $h_{g(a)}^{\\#}\circ g_{a}^{\\#}(Y^{\prime})$
($g\colon\mathbb{M}^{Y}\to\mathbb{M}^{W}$, | | $h^{g(a)}_{\\#}\circ g^{a}_{\\#}(Y^{\prime})$
$h\colon\mathbb{M}^{W}\to\mathbb{M}^{Z}$) | |
$\biguplus\limits_{i\in I}f_{i}$ $I$ finite | $f(a)(z)=f_{i}(a|_{Y_{i}})(z)$ | $\biguplus_{i\in I}(f_{i})_{a|_{Y_{i}}}^{\\#}(Y^{\prime}\cap Y_{i})$
($f_{i}\colon\mathbb{M}^{Y_{i}}\to\mathbb{M}^{Z_{i}}$, | ($z\in Z_{i}$) | $\biguplus_{i\in I}(f_{i})^{a|_{Y_{i}}}_{\\#}(Y^{\prime}\cap Y_{i})$
$Y=\bigcup\limits_{i\in I}Y_{i}$, $Z=\biguplus\limits_{i\in I}Z_{i}$) | |
We can then prove the desired results (non-expansiveness and approximation)
for the basic building blocks and their composition (all schematically
reported in Table 1).
###### Theorem (non-expansiveness and approximations).
All basic functions in Def. 5.1 are non-expansive. Furthermore non-expansive
functions are closed under composition and disjoint union. The approximations
are the ones listed in the third column of Table 1.
###### Proof 5.12.
Follows directly from Propositions 5.2, 5.4, 5.6, 5.8, 5.10 and 3.9.
We can also specify the maximal increase, respectively decrease, that is
propagated (here we are using the notation of 3.25).
###### Corollary 5.13.
Let $f\colon\mathbb{M}^{Y}\to\mathbb{M}^{Z}$ be non-expansive,
$a\in\mathbb{M}^{Y}$ and let $\iota_{a}^{f}$ be defined as in 3.25. In the
dual view, for each $z\in[{Z}]^{f(a)}$ and $Y^{\prime}\subseteq[{Y}]^{a}$ the
set $\\{\theta\sqsubseteq\delta^{a}\mid z\in
f^{a,\theta}_{\\#}(Y^{\prime})\\}$ has a maximum, which we denote by
$\iota_{f}^{a}(Y^{\prime},z)$, and we set
$\iota_{f}^{a}=\min\big(\\{\iota_{f}^{a}(Y^{\prime},z)\mid Y^{\prime}\subseteq[{Y}]^{a}\
\land\ z\in[{Z}]^{f(a)}\ \land\ \iota_{f}^{a}(Y^{\prime},z)\neq
0\\}\cup\\{\delta^{a}\\}\big)$.
We consider the basic functions from Def. 5.1, function composition as in 3.9
and disjoint union as in 5.8 and give the corresponding values for
$\iota_{a}^{f}$ and $\iota_{f}^{a}$.
For greatest fixpoints (primal case) we obtain:
* •
$\iota_{a}^{c_{k}}=\iota_{a}^{u^{*}}=\iota_{a}^{\max_{\mathcal{R}}}=\iota_{a}^{\mathrm{av}_{D}}=\delta^{a}$
* •
$\iota_{a}^{\min_{\mathcal{R}}}=\min\big(\\{a(y)\ominus a(\hat{y})\mid z\in[{Z}]_{\min_{\mathcal{R}}(a)},\ y\mathcal{R}z,\ y\notin\mathit{Min}_{a\mid_{\mathcal{R}^{-1}(z)}},\ \hat{y}\in\mathit{Min}_{a\mid_{\mathcal{R}^{-1}(z)}}\\}\cup\\{\delta^{a}\\}\big)$
* •
$\iota_{a}^{g\circ f}\sqsupseteq\min\\{\iota_{a}^{f},\iota_{f(a)}^{g}\\}$
* •
$\iota_{a}^{\biguplus_{i\in I}f_{i}}=\min_{i\in I}\iota_{a|_{Y_{i}}}^{f_{i}}$
For least fixpoints (dual case) we obtain:
* •
$\iota_{c_{k}}^{a}=\iota_{u^{*}}^{a}=\iota_{\min_{\mathcal{R}}}^{a}=\iota_{\mathrm{av}_{D}}^{a}=\delta_{a}$
* •
$\iota_{\max_{\mathcal{R}}}^{a}=\min\big(\\{a(\hat{y})\ominus a(y)\mid z\in[{Z}]^{\max_{\mathcal{R}}(a)},\ y\mathcal{R}z,\ \hat{y}\in\mathit{Max}_{a\mid_{\mathcal{R}^{-1}(z)}},\ y\notin\mathit{Max}_{a\mid_{\mathcal{R}^{-1}(z)}}\\}\cup\\{\delta_{a}\\}\big)$
* •
$\iota_{g\circ f}^{a}\sqsupseteq\min\\{\iota_{f}^{a},\iota_{g}^{f(a)}\\}$
* •
$\iota_{\biguplus_{i\in I}f_{i}}^{a}=\min_{i\in I}\iota_{f_{i}}^{a|_{Y_{i}}}$
###### Proof 5.14.
The values $\iota_{a}^{f}$ can be obtained by inspecting the proofs of
Propositions 5.4, 5.6 and 5.8.
It only remains to show that $\iota:=\iota_{a}^{\biguplus_{i\in
I}f_{i}}\sqsubseteq\min_{i\in I}\iota_{a|_{Y_{i}}}^{f_{i}}$ (cf. 5.8), which
means showing $\iota\sqsubseteq\iota_{a|_{Y_{i}}}^{f_{i}}$ for every $i\in I$.
We abbreviate $\iota_{i}:=\iota_{a|_{Y_{i}}}^{f_{i}}$.
Suppose, by way of contradiction, that $\iota\sqsupset\iota_{i}$ for some $i\in I$. Then there are $z\in[{Z_{i}}]_{f_{i}(a)}$ and $Y^{\prime}\subseteq[{Y}]_{a}$ such that $z\in(f_{i})^{\\#}_{a|_{Y_{i}},\iota_{i}}(Y^{\prime}\cap Y_{i})=(f_{i})^{\\#}_{a|_{Y_{i}}}(Y^{\prime}\cap Y_{i})$ but $z\notin(f_{i})_{a|_{Y_{i}},\iota}^{\\#}(Y^{\prime}\cap Y_{i})$ by definition (cf. 3.22). This is a contradiction since
$\displaystyle z\in\biguplus_{i\in I}(f_{i})^{\\#}_{a|_{Y_{i}}}(Y^{\prime}\cap
Y_{i})=\big{(}\biguplus_{i\in
I}f_{i}\big{)}_{a}^{\\#}(Y^{\prime})=\big{(}\biguplus_{i\in
I}f_{i}\big{)}_{a,\iota}^{\\#}(Y^{\prime})=\biguplus_{i\in
I}(f_{i})_{a|_{Y_{i}},\iota}^{\\#}(Y^{\prime}\cap Y_{i})$
and since $z\in Z_{i}$ but $z\not\in(f_{i})_{a|_{Y_{i}},\iota}^{\\#}(Y^{\prime}\cap Y_{i})$, the element $z$ cannot be contained in the union.
The arguments for the values $\iota_{f}^{a}$ in the dual case are analogous.
## 6\. Applications
### 6.1. Termination probability
We start by making the example from the introduction (Section 1) more formal.
Consider a Markov chain $(S,T,\eta)$, as defined in the introduction (Fig. 1),
where we restrict the codomain of $\eta\colon S\backslash T\to\mathcal{D}(S)$
to $D\subseteq\mathcal{D}(S)$, where $D$ is finite (to ensure that all
involved sets are finite). Furthermore let
$\mathcal{T}\colon[0,1]^{S}\to[0,1]^{S}$ be the function (Fig. 1) whose least
fixpoint $\mu\mathcal{T}$ assigns to each state its termination probability.
The function $\mathcal{T}$ can be written as
$\mathcal{T}=(\eta^{*}\circ\mathrm{av}_{D})\uplus c_{k}$
where $k\colon T\to[0,1]$ is the constant function $1$ defined only on
terminal states.
###### Proof 6.1.
Let $t\colon S\to[0,1]$. For $s\in T$ we have
$\displaystyle((\eta^{*}\circ\mathrm{av}_{D})\uplus c_{k})(t)(s)$
$\displaystyle=c_{k}(t)(s)$ [since $s\in T$] $\displaystyle=k(s)=1$ [by
definition of $c_{k}$ and $k$] $\displaystyle=\mathcal{T}(t)(s)$ [since $s\in
T$]
For $s\notin T$ we have
$\displaystyle((\eta^{*}\circ\mathrm{av}_{D})\uplus c_{k})(t)(s)$
$\displaystyle=\eta^{*}\circ\mathrm{av}_{D}(t)(s)$ [since $s\notin T$]
$\displaystyle=\mathrm{av}_{D}(t)(\eta(s))$ [by definition of reindexing]
$\displaystyle=\sum_{s^{\prime}\in S}\eta(s)(s^{\prime})\cdot t(s^{\prime})$
[by definition of $\mathrm{av}_{D}$] $\displaystyle=\mathcal{T}(t)(s)$ [since
$s\notin T$]
From this representation and the results of Section 5 it follows that $\mathcal{T}$ is non-expansive.
Given a function $t\colon S\to[0,1]$, the $t$-approximation for $\mathcal{T}$
in the dual sense is
$\mathcal{T}_{\\#}^{t}\colon\mathbf{2}^{[{S}]^{t}}\to\mathbf{2}^{[{S}]^{\mathcal{T}(t)}}$
with
$\mathcal{T}_{\\#}^{t}(S^{\prime})=\\{s\in[{S}]^{\mathcal{T}(t)}\mid s\notin
T\land\mathit{supp}(\eta(s))\subseteq S^{\prime}\\}.$
###### Proof 6.2.
In the following let $t\colon S\to[0,1]$ and $S^{\prime}\subseteq[{S}]^{t}$.
By the representation of $\mathcal{T}$ established above we know that $\mathcal{T}=(\eta^{*}\circ\mathrm{av}_{D})\uplus c_{k}$; then, by Propositions 5.10, 5.6, and 5.4, we have
$\displaystyle\mathcal{T}_{\\#}^{t}(S^{\prime})$
$\displaystyle=((\eta^{*}\circ\mathrm{av}_{D})\uplus
c_{k})_{\\#}^{t}(S^{\prime})$
$\displaystyle=(\eta^{*}\circ\mathrm{av}_{D})_{\\#}^{t}(S^{\prime})\cup(c_{k})_{\\#}^{t}(S^{\prime})$
$\displaystyle=(\eta^{*})_{\\#}^{\mathrm{av}_{D}(t)}\circ(\mathrm{av}_{D})_{\\#}^{t}(S^{\prime})\cup(c_{k})_{\\#}^{t}(S^{\prime})$
$\displaystyle=\\{s\in[{S\backslash
T}]^{\eta^{*}(\mathrm{av}_{D}(t))}\mid\eta(s)\in\\{q\in[{D}]^{\mathrm{av}_{D}(t)}\mid\mathit{supp}(q)\subseteq
S^{\prime}\\}\\}\cup\emptyset$ $\displaystyle=\\{s\in[{S\backslash
T}]^{\eta^{*}(\mathrm{av}_{D}(t))}\mid\eta(s)\in[{D}]^{\mathrm{av}_{D}(t)}\land\mathit{supp}(\eta(s))\subseteq
S^{\prime}\\}$
Observe that for all $s\in[{S\backslash T}]^{\eta^{*}(\mathrm{av}_{D}(t))}$ it holds that $\eta(s)\in[{D}]^{\mathrm{av}_{D}(t)}$. In fact, since $s\in[{S\backslash
T}]^{\eta^{*}(\mathrm{av}_{D}(t))}$ we must have that
$\eta^{*}(\mathrm{av}_{D}(t))(s)=\mathrm{av}_{D}(t)(\eta(s))\neq 0$, and thus
$\eta(s)\in\\{q\in D\mid\mathrm{av}_{D}(t)(q)\neq
0\\}=[{D}]^{\mathrm{av}_{D}(t)}$. Therefore, we have that
$\displaystyle\\{s\in[{S\backslash
T}]^{\eta^{*}(\mathrm{av}_{D}(t))}\mid\eta(s)\in[{D}]^{\mathrm{av}_{D}(t)}\land\mathit{supp}(\eta(s))\subseteq
S^{\prime}\\}$ $\displaystyle=\\{s\in[{S\backslash
T}]^{\eta^{*}(\mathrm{av}_{D}(t))}\mid\mathit{supp}(\eta(s))\subseteq
S^{\prime}\\}$
Finally, the set above is the same as
$\\{s\in[{S}]^{\mathcal{T}(t)}\mid s\notin
T\land\mathit{supp}(\eta(s))\subseteq S^{\prime}\\}=\\{s\in[{S\backslash
T}]^{\mathcal{T}(t)}\mid\mathit{supp}(\eta(s))\subseteq S^{\prime}\\}$
because, for all $s\in S\backslash T$, hence $s\notin T$, we have that
$\mathcal{T}(t)(s)=\sum_{s^{\prime}\in S}\eta(s)(s^{\prime})\cdot
t(s^{\prime})=\eta^{*}(\mathrm{av}_{D}(t))(s)$, and so $[{S\backslash
T}]^{\mathcal{T}(t)}=[{S\backslash T}]^{\eta^{*}(\mathrm{av}_{D}(t))}$.
At this point we have all the ingredients needed to formalise the application
presented in the introduction. We refrain from repeating the same example, but
rather present a new example that allows us to illustrate the question of the
largest decrease for a fixpoint that still guarantees a pre-fixpoint (the dual
problem is treated in Proposition 4.4).
###### Example 6.3.
Consider the following Markov chain, in which all states $S=\\{x_{1},x_{2},x_{3}\\}$ are non-terminal. The least fixpoint of the underlying fixpoint function $\mathcal{T}$ is clearly the constant $0$, since no state can reach a terminal state.
[Diagram: $x_{1}$ and $x_{3}$ each carry a self-loop with probability $1$, while $x_{2}$ moves to $x_{1}$ and to $x_{3}$ with probability $\frac{1}{2}$ each.]
Now consider the function $t\colon S\to[0,1]$ defined by $t(x_{1})=0.1$,
$t(x_{2})=0.5$ and $t(x_{3})=0.9$. This is also a fixpoint of $\mathcal{T}$.
Observe that $\mathcal{T}^{t}_{\\#}(S)=S$ and thus, clearly, $\nu\mathcal{T}^{t}_{\\#}=S$. According to (the dual of) Def. 4.4 we have $\delta^{t,S}=0.1$ and thus, by (the dual of) Proposition 4.5, the function
$t^{\prime}=t\ominus 0.1_{S}$, with $t^{\prime}(x_{1})=0$,
$t^{\prime}(x_{2})=0.4$, and $t^{\prime}(x_{3})=0.8$, is a pre-fixpoint.
Indeed, $\mathcal{T}(t^{\prime})(x_{1})=0$,
$\mathcal{T}(t^{\prime})(x_{2})=0.4$ and $\mathcal{T}(t^{\prime})(x_{3})=0.8$.
This is not the largest decrease producing a pre-fixpoint. In fact, we can choose $\theta=0.9$, greater than $\delta^{t,S}$, and then $t\ominus\theta_{S}$ is the constant $0$, i.e., the least fixpoint of $\mathcal{T}$. However, if we take $\theta^{\prime}=0.5\sqsubset\theta$, then $t\ominus\theta^{\prime}_{S}$ is not a pre-fixpoint. In fact $(t\ominus\theta^{\prime}_{S})(x_{2})=0$, while $\mathcal{T}(t\ominus\theta^{\prime}_{S})(x_{2})=0.2$. This means that the set of decreases (beyond $\delta^{t,S}$) producing a pre-fixpoint is not downward-closed, and hence the largest such decrease cannot be found by binary search, while, as already mentioned, binary search does work for decreases below $\delta^{t,S}$.
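The check in this example is mechanical enough to replay in a few lines of code. The following is a minimal sketch (in Python; the state names and encoding are ours, not taken from the paper) that computes the greatest fixpoint of the approximation $\mathcal{T}_{\\#}^{t}$ by Kleene iteration from the top element; a non-empty result witnesses that the fixpoint $t$ is not the least one.

```python
# Minimal sketch: greatest fixpoint of the dual approximation T_#^t for the
# Markov chain of Example 6.3 (self-loops on x1, x3; x2 branches 1/2-1/2).
states = {"x1", "x2", "x3"}
terminal = set()                       # no terminal states in this example
eta = {"x1": {"x1": 1.0},              # successor distributions
       "x2": {"x1": 0.5, "x3": 0.5},
       "x3": {"x3": 1.0}}
t = {"x1": 0.1, "x2": 0.5, "x3": 0.9}  # a fixpoint of T, but not the least one

support = {s for s in states if t[s] > 0}  # [S]^t (= [S]^{T(t)} since T(t) = t)

def approx(S_prime):
    """T_#^t(S'): non-terminal states whose successors all lie in S'."""
    return {s for s in support
            if s not in terminal and set(eta[s]) <= S_prime}

current = support                      # Kleene iteration from the top element
while approx(current) != current:
    current = approx(current)
print(current)  # {'x1', 'x2', 'x3'}: non-empty, so t is not the least fixpoint
```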
It is well-known that the function $\mathcal{T}$ can be tweaked in such a way
that it has a unique fixpoint, coinciding with $\mu\mathcal{T}$, by
determining all states which cannot reach a terminal state and setting their
value to zero [bk:principles-mc]. Hence fixpoint iteration from above does not
really bring us any added value here. It does however make sense to use the
proof rule in order to guarantee lower bounds via post-fixpoints.
Furthermore, termination probability is a special case of the considerably
more complex stochastic games that will be studied in Section 7, where the
trick of modifying the function is not applicable.
### 6.2. Branching Distances for Metric Transition Systems
In this section we consider metric transition systems (MTS) and their
(symmetrical) branching distances, as studied in [afs:linear-branching-
metrics, bcdgr:algorithms-mean-payoff-games]. In an MTS each state has a set of successors and a given weight in $[0,1]$. The behavioural distance between two states is the maximum of the difference of their weights and the Hausdorff distance between their sets of successors.
We first consider the Hausdorff lifting and the corresponding approximation.
#### Hausdorff lifting.
Given a metric on a set $X$, the Hausdorff metric is obtained by lifting the
original metric to $\mathbf{2}^{X}$. Here we define this for general distance
functions on $\mathbb{M}$, not restricting to metrics. In particular the
Hausdorff lifting is given by a function $\mathcal{H}:\mathbb{M}^{X\times
X}\to\mathbb{M}^{\mathbf{2}^{X}\times\mathbf{2}^{X}}$ where
$\mathcal{H}(d)(X_{1},X_{2})=\max\\{\max_{x_{1}\in X_{1}}\min_{x_{2}\in
X_{2}}d(x_{1},x_{2}),\max_{x_{2}\in X_{2}}\min_{x_{1}\in
X_{1}}d(x_{1},x_{2})\\}.$
An alternative characterisation of the Hausdorff lifting due to Mémoli
[m:wasserstein], also observed in [bbkk:coalgebraic-behavioral-metrics], is
more convenient for our purposes. Let $u:\mathbf{2}^{X\times
X}\to\mathbf{2}^{X}\times\mathbf{2}^{X}$ be defined by
$u(C)=(\pi_{1}[C],\pi_{2}[C])$, where $\pi_{1},\pi_{2}$ are the projections
$\pi_{i}:X\times X\to X$ and $\pi_{i}[C]=\\{\pi_{i}(c)\mid c\in C\\}$. Then
$\mathcal{H}(d)(X_{1},X_{2})=\min\\{\max_{(x_{1},x_{2})\in
C}d(x_{1},x_{2})\mid C\subseteq X\times X\ \land\ u(C)=(X_{1},X_{2})\\}$.
Relying on this characterisation, we can obtain the result below, from which
we deduce that $\mathcal{H}$ is non-expansive and construct its approximation
as the composition of the corresponding functions from Table 1.
It holds that $\mathcal{H}=\min\nolimits_{u}\circ\max\nolimits_{\in}$ where
$\max_{\in}\colon\mathbb{M}^{X\times X}\to\mathbb{M}^{\mathbf{2}^{X\times
X}}$, with $\mathrel{\in}\ \subseteq(X\times X)\times\mathbf{2}^{X\times X}$
the “is-element-of”-relation on $X\times X$, and
$\min_{u}\colon\mathbb{M}^{\mathbf{2}^{X\times
X}}\to\mathbb{M}^{\mathbf{2}^{X}\times\mathbf{2}^{X}}$.
###### Proof 6.4.
Let $d:X\times X\to\mathbb{M}$ and $X_{1},X_{2}\subseteq X$. Then we have
$\displaystyle\min\nolimits_{u}(\max\nolimits_{\in}(d))(X_{1},X_{2})$
$\displaystyle=$
$\displaystyle\min_{u(C)=(X_{1},X_{2})}(\max\nolimits_{\in}(d))(C)=\min_{u(C)=(X_{1},X_{2})}\max_{(x_{1},x_{2})\in C}d(x_{1},x_{2})$
which is exactly the definition of the Hausdorff lifting
$\mathcal{H}(d)(X_{1},X_{2})$ via couplings, due to Mémoli [m:wasserstein].
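Both characterisations are easy to compare on small instances. The following sketch (our own encoding, with an example distance function; couplings are brute-forced as the subsets of $X\times X$ with the required projections) checks that the coupling-based formula agrees with the classical max/min definition:

```python
from itertools import chain, combinations

X = [0, 1, 2]
d = lambda x, y: abs(x - y) / 2.0  # an example distance function on X

def hausdorff(d, X1, X2):
    """Classical definition: max of the two directed Hausdorff distances."""
    return max(max(min(d(a, b) for b in X2) for a in X1),
               max(min(d(a, b) for a in X1) for b in X2))

def hausdorff_couplings(d, X1, X2):
    """Memoli's characterisation: minimise max d over couplings C
    with u(C) = (X1, X2), i.e. projections exactly X1 and X2."""
    pairs = [(a, b) for a in X for b in X]
    subsets = chain.from_iterable(combinations(pairs, r)
                                  for r in range(len(pairs) + 1))
    best = None
    for C in subsets:
        if C and {p[0] for p in C} == set(X1) and {p[1] for p in C} == set(X2):
            val = max(d(a, b) for (a, b) in C)
            best = val if best is None else min(best, val)
    return best

X1, X2 = [0, 1], [1, 2]
assert hausdorff(d, X1, X2) == hausdorff_couplings(d, X1, X2)  # both 0.5
```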
We next determine the approximation of the Hausdorff lifting in the dual sense. Intuitively, given a distance function $d$ and a relation $R$ on $X$, this function characterises those pairs $(X_{1},X_{2})$ (with $X_{1},X_{2}\subseteq X$) whose distance in the Hausdorff metric decreases by a constant when we decrease the distance $d$ for all pairs in $R$ by the same constant.
The approximation for the Hausdorff lifting $\mathcal{H}$ in the dual sense is
as follows. Let $d\colon X\times X\to\mathbb{M}$, then
$\mathcal{H}_{\\#}^{d}\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}}$
with
$\displaystyle\mathcal{H}_{\\#}^{d}(R)$ $\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}\mid$
$\displaystyle\qquad\forall x_{1}\in X_{1}\big{(}\min_{x_{2}^{\prime}\in
X_{2}}d(x_{1},x_{2}^{\prime})=\mathcal{H}(d)(X_{1},X_{2})\,\Rightarrow\,\exists
x_{2}\in X_{2}\colon$ $\displaystyle\qquad\qquad\qquad\qquad(x_{1},x_{2})\in
R\land d(x_{1},x_{2})=\mathcal{H}(d)(X_{1},X_{2})\big{)}\mathop{\land}$
$\displaystyle\qquad\forall x_{2}\in X_{2}\big{(}\min_{x_{1}^{\prime}\in
X_{1}}d(x_{1}^{\prime},x_{2})=\mathcal{H}(d)(X_{1},X_{2})\,\Rightarrow\,\exists
x_{1}\in X_{1}\colon$ $\displaystyle\qquad\qquad\qquad\qquad(x_{1},x_{2})\in
R\land d(x_{1},x_{2})=\mathcal{H}(d)(X_{1},X_{2})\big{)}\\}$
###### Proof 6.5.
Let $d\colon X\times X\to\mathbb{M}$ and $R\subseteq[{X\times X}]^{d}$. Then
we have:
$\displaystyle\mathcal{H}_{\\#}^{d}(R)$ $\displaystyle=$
$\displaystyle(\min\nolimits_{u})_{\\#}^{\max_{\in}(d)}((\max\nolimits_{\in})_{\\#}^{d}(R))$
where
$\displaystyle(\max\nolimits_{\in})_{\\#}^{d}\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{\mathbf{2}^{X\times X}}]^{\max_{\in}(d)}}$
$\displaystyle(\min\nolimits_{u})_{\\#}^{\max_{\in}(d)}\colon\mathbf{2}^{[{\mathbf{2}^{X\times
X}}]^{\max_{\in}(d)}}\to\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}}$
We are using the approximations associated to non-expansive functions, given
in 5.4, and obtain:
$\displaystyle\mathcal{H}_{\\#}^{d}(R)$ $\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}\mid\mathit{Min}_{\max_{\in}(d)|_{u^{-1}(X_{1},X_{2})}}\cap(\max\nolimits_{\in})_{\\#}^{d}(R)\neq\emptyset\\}$
$\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}\mid\exists
C\subseteq X\times X,u(C)=(X_{1},X_{2}),$ $\displaystyle\qquad
C\in(\max\nolimits_{\in})_{\\#}^{d}(R),\mathrm{max}_{\in}(d)(C)=\min_{u(C^{\prime})=(X_{1},X_{2})}\mathrm{max}_{\in}(d)(C^{\prime})\\}$
$\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}\mid\exists
C\subseteq X\times X,u(C)=(X_{1},X_{2}),$ $\displaystyle\qquad
C\in(\max\nolimits_{\in})_{\\#}^{d}(R),\max
d[C]=\min_{u(C^{\prime})=(X_{1},X_{2})}\max d[C^{\prime}]\\}$ $\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}\mid\exists
C\subseteq X\times X,u(C)=(X_{1},X_{2}),$
$\displaystyle\qquad\mathit{Max}_{d|_{C}}\subseteq R,\max
d[C]=\mathcal{H}(d)(X_{1},X_{2})\\}$
We show that this is equivalent to the characterisation in the statement of
the lemma.
* •
Assume that for all $x_{1}\in X_{1}$ such that $\min_{x_{2}^{\prime}\in
X_{2}}d(x_{1},x_{2}^{\prime})=\mathcal{H}(d)(X_{1},X_{2})$, there exists
$x_{2}\in X_{2}$ such that $(x_{1},x_{2})\in R$ and
$d(x_{1},x_{2})=\mathcal{H}(d)(X_{1},X_{2})$ (and vice versa).
We define a set $C_{m}$ that contains all such pairs $(x_{1},x_{2})$, obtained
from this guarantee. Now let $x_{1}\not\in\pi_{1}[C_{m}]$. Then necessarily
$\min_{x_{2}^{\prime}\in
X_{2}}d(x_{1},x_{2}^{\prime})<\mathcal{H}(d)(X_{1},X_{2})$ (because the
minimal distance to an element of $X_{2}$ cannot exceed the Hausdorff distance
of the two sets). Construct another set $C^{\prime}$ that contains all such
$(x_{1},x_{2})$ where $x_{2}$ is an argument where the minimum is obtained.
Also add elements $x_{2}\not\in\pi_{2}[C_{m}]$ and their corresponding
partners to $C^{\prime}$.
Then $C=C_{m}\cup C^{\prime}$ is a coupling for $X_{1},X_{2}$, i.e.,
$u(C)=(X_{1},X_{2})$. Furthermore $\mathit{Max}_{d|_{C}}=C_{m}\subseteq R$ and
$\max d[C]=\max d[C_{m}]=\mathcal{H}(d)(X_{1},X_{2})$.
* •
Assume that there exists $C\subseteq X\times X$, $u(C)=(X_{1},X_{2})$,
$\mathit{Max}_{d|_{C}}\subseteq R$, $\max d[C]=\mathcal{H}(d)(X_{1},X_{2})$.
Now let $x_{1}\in X_{1}$ such that $\min_{x_{2}^{\prime}\in
X_{2}}d(x_{1},x_{2}^{\prime})=\mathcal{H}(d)(X_{1},X_{2})$. Since $C$ is a
coupling of $X_{1},X_{2}$, there exists $x_{2}\in X_{2}$ such that
$(x_{1},x_{2})\in C\subseteq R$. It is left to show that
$d(x_{1},x_{2})=\mathcal{H}(d)(X_{1},X_{2})$, which can be done as follows:
$\mathcal{H}(d)(X_{1},X_{2})=\min_{x_{2}^{\prime}\in
X_{2}}d(x_{1},x_{2}^{\prime})\leq d(x_{1},x_{2})\leq\max
d[C]=\mathcal{H}(d)(X_{1},X_{2}).$
For an $x_{2}\in X_{2}$ such that $\min_{x_{1}^{\prime}\in
X_{1}}d(x_{1}^{\prime},x_{2})=\mathcal{H}(d)(X_{1},X_{2})$ the proof is
analogous.
#### Metric transition systems.
A _metric transition system_ is a triple $(X,w,\eta)$ where $X$ is a finite
set of states, and $w:X\to[0,1]$ and $\eta:X\to\mathbf{2}^{X}$ are functions
that assign a weight $w(x)$ and a set of successors $\eta(x)$ to each $x\in
X$. The _MTS pseudo-metric_ (differently from a metric, for a pseudo-metric $d$ the fact that $d(x,y)=0$ does not necessarily imply $x=y$) is the least fixpoint of the function $\mathcal{J}\colon[0,1]^{X\times X}\to[0,1]^{X\times X}$ defined, for $d\colon X\times X\to[0,1]$ and $x_{1},x_{2}\in X$, as:
$\mathcal{J}(d)(x_{1},x_{2})=\max\\{\mathcal{H}(d)(\eta(x_{1}),\eta(x_{2})),|w(x_{1})-w(x_{2})|\\}$
where $\mathcal{H}$ is the Hausdorff lifting (for $\mathbb{M}=[0,1]$) defined
earlier. Now, let $\bar{w}\colon X\times X\to[0,1]$ be the weight distance
function defined for $x_{1},x_{2}\in X$ via
$\bar{w}(x_{1},x_{2})=|w(x_{1})-w(x_{2})|.$
The function $\mathcal{J}$ can be written as
$\mathcal{J}=\max\nolimits_{p}\circ\left((\eta\times\eta)^{*}\circ\mathcal{H}\uplus
c_{\bar{w}}\right)$
where $p\colon(X\times X)\uplus(X\times X)\to(X\times X)$ with
$p((x_{1},x_{2}),i)=(x_{1},x_{2})$. (We use $i\in\\{0,1\\}$ to distinguish the
elements in the disjoint union.)
###### Proof 6.6.
Let $d\colon X\times X\to[0,1]$ and $x_{1},x_{2}\in X$. Then we have
$\displaystyle\mathcal{J}(d)(x_{1},x_{2})$
$\displaystyle=\max\nolimits_{p}\left(((\eta\times\eta)^{*}\circ\mathcal{H}\uplus
c_{\bar{w}})(d)\right)(x_{1},x_{2})$
$\displaystyle=\max\\{((\eta\times\eta)^{*}\circ\mathcal{H}(d))(x_{1},x_{2}),\bar{w}(x_{1},x_{2})\\}$
$\displaystyle=\max\\{\mathcal{H}(d)(\eta(x_{1}),\eta(x_{2})),\bar{w}(x_{1},x_{2})\\}$
$\displaystyle=\max\\{\mathcal{H}(d)(\eta(x_{1}),\eta(x_{2})),|w(x_{1})-w(x_{2})|\\}.$
From this representation and the results of Section 5 it follows that $\mathcal{J}$ is non-expansive.
Let $d\colon X\times X\to[0,1]$. The approximation for $\mathcal{J}$ in the
dual sense is $\mathcal{J}_{\\#}^{d}\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{X\times X}]^{\mathcal{J}(d)}}$ with
$\mathcal{J}_{\\#}^{d}(Z)=\\{(x_{1},x_{2})\in[{X\times
X}]^{\mathcal{J}(d)}\mid\bar{w}(x_{1},x_{2})<\mathcal{H}(d)(\eta(x_{1}),\eta(x_{2}))\land(\eta(x_{1}),\eta(x_{2}))\in\mathcal{H}^{d}_{\\#}(Z)\\}$
###### Proof 6.7.
Let $d\colon X\times X\to[0,1]$ and $X^{\prime}\subseteq[{X\times X}]^{d}$. We abbreviate $g=(\eta\times\eta)^{*}\circ\mathcal{H}\colon[0,1]^{X\times X}\to[0,1]^{X\times X}$ and hence $\mathcal{J}=\max_{p}\circ\,(g\uplus c_{\bar{w}})$. Thus
we obtain
$\mathcal{J}_{\\#}^{d}(X^{\prime})=(\max\nolimits_{p})_{\\#}^{g(d)\uplus\bar{w}}\left(g_{\\#}^{d}(X^{\prime})\uplus(c_{\bar{w}})_{\\#}^{d}(X^{\prime})\right).$
Since $c_{\bar{w}}\colon[0,1]^{\emptyset}\to[0,1]^{X\times X}$ is a constant
function, we conclude
$(c_{\bar{w}})_{\\#}^{d}(X^{\prime})=\emptyset.$
Now
$g_{\\#}^{d}=((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(d)}\circ\mathcal{H}_{\\#}^{d}$
where
$\displaystyle\mathcal{H}_{\\#}^{d}$ $\displaystyle\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}}$
$\displaystyle((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(d)}$
$\displaystyle\colon\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(d)}}\to\mathbf{2}^{[{X\times
X}]^{g(d)}}.$
It holds that
$((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(d)}=(\eta\times\eta)^{-1}$ and
hence
$(x_{1},x_{2})\in
g_{\\#}^{d}(X^{\prime})\Leftrightarrow(\eta(x_{1}),\eta(x_{2}))\in\mathcal{H}_{\\#}^{d}(X^{\prime}).$
Lastly, we obtain
$\displaystyle\mathcal{J}_{\\#}^{d}(X^{\prime})$
$\displaystyle=(\max\nolimits_{p})_{\\#}^{g(d)\uplus\bar{w}}((g\uplus c_{\bar{w}})^{d}_{\\#}(X^{\prime}))=(\max\nolimits_{p})_{\\#}^{g(d)\uplus\bar{w}}(g_{\\#}^{d}(X^{\prime})\times\\{0\\})$
$\displaystyle=\\{(x_{1},x_{2})\in[{X\times X}]^{\mathcal{J}(d)}\mid\mathit{Max}_{(g(d)\uplus\bar{w})\mid_{p^{-1}(\\{(x_{1},x_{2})\\})}}\subseteq g_{\\#}^{d}(X^{\prime})\times\\{0\\}\\}.$
We have that
$p^{-1}(\\{(x_{1},x_{2})\\})=\\{((x_{1},x_{2}),0),((x_{1},x_{2}),1)\\}$. The
inclusion
$\mathit{Max}_{(g(d)\uplus\bar{w})\mid_{p^{-1}(\\{(x_{1},x_{2})\\})}}\subseteq
g_{\\#}^{d}(X^{\prime})\times\\{0\\}$
can only hold if $g(d)(x_{1},x_{2})>\bar{w}(x_{1},x_{2})$ (and hence the
maximum is achieved by $g(d)$ instead of $\bar{w}$) and additionally
$((x_{1},x_{2}),0)\in g_{\\#}^{d}(X^{\prime})\times\\{0\\}$. Hence
$\displaystyle\mathcal{J}_{\\#}^{d}(X^{\prime})$
$\displaystyle=\\{(x_{1},x_{2})\in[{X\times X}]^{\mathcal{J}(d)}\mid\bar{w}(x_{1},x_{2})<g(d)(x_{1},x_{2})\land(x_{1},x_{2})\in g^{d}_{\\#}(X^{\prime})\\}$
$\displaystyle=\\{(x_{1},x_{2})\in[{X\times X}]^{\mathcal{J}(d)}\mid\bar{w}(x_{1},x_{2})<\mathcal{H}(d)(\eta(x_{1}),\eta(x_{2}))\land(\eta(x_{1}),\eta(x_{2}))\in\mathcal{H}^{d}_{\\#}(X^{\prime})\\}.$
###### Example 6.8.
We consider the following MTS.
[Diagram: states $x:0.1$, $y:0.6$, $z:0.3$, with transitions given by $\eta$ as listed below.]
Here, $\eta(x)=\\{x,z\\}$, $\eta(y)=\\{x,y,z\\}$ and $\eta(z)=\\{x\\}$.
Additionally we have $w(x)=0.1$, $w(y)=0.6$ and $w(z)=0.3$ resulting in
$\bar{w}(x,y)=0.5$, $\bar{w}(x,z)=0.2$ and $\bar{w}(y,z)=0.3$. The least
fixpoint of $\mathcal{J}$ is a pseudo-metric $\mu\mathcal{J}$ given by
$\mu\mathcal{J}(x,y)=\mu\mathcal{J}(y,z)=0.5$ and $\mu\mathcal{J}(x,z)=0.3$.
(Since $\mu\mathcal{J}$ is a pseudo-metric, the remaining entries are fixed:
$\mu\mathcal{J}(u,u)=0$ and $\mu\mathcal{J}(u,v)=\mu\mathcal{J}(v,u)$ for all
$u,v\in\\{x,y,z\\}$.)
Now consider the pseudo-metric $d$ with $d(x,y)=d(x,z)=d(y,z)=0.5$. This is
also a fixpoint of $\mathcal{J}$. Note that
$\mathcal{H}(d)(\eta(x),\eta(y))=\mathcal{H}(d)(\eta(x),\eta(z))=\mathcal{H}(d)(\eta(y),\eta(z))=0.5$.
Let us use our technique in order to verify that $d$ is not the least fixpoint
of $\mathcal{J}$, by showing that $\nu\mathcal{J}_{\\#}^{d}\neq\emptyset$.
We start fixpoint iteration with the approximation $\mathcal{J}_{\\#}^{d}$ from the top element $[{X\times X}]^{d}$, which is given by the symmetric closure of $\\{(x,y),(x,z),(y,z)\\}$ (since reflexive pairs do not contain slack); we denote the symmetric closure of a relation $R$ by $S(R)$.
We first observe that
$(x,y),(y,x)\notin\mathcal{J}_{\\#}^{d}(S(\\{(x,y),(x,z),(y,z)\\}))$ since
$\bar{w}(x,y)=0.5\not<\mathcal{H}(d)(\eta(x),\eta(y))=0.5$. Next,
$(y,z),(z,y)\notin\mathcal{J}_{\\#}^{d}(S(\\{(x,z),(y,z)\\}))$ since
$(\eta(y),\eta(z))\not\in\mathcal{H}_{\\#}^{d}(S(\\{(x,z),(y,z)\\}))$. In order to see this, consider the approximation of the Hausdorff lifting given earlier in this section and note that for $y\in\eta(y)$ we have
$\min_{u\in\eta(z)}d(y,u)=0.5=\mathcal{H}(d)(\eta(y),\eta(z))$, but
$(y,x)\notin S(\\{(x,z),(y,z)\\})$ (where $x$ is the only element in
$\eta(z)$).
The pairs $(x,z),(z,x)$ on the other hand satisfy all conditions and hence
$\nu\mathcal{J}_{\\#}^{d}=S(\\{(x,z)\\})=\mathcal{J}_{\\#}^{d}(S(\\{(x,z)\\}))\neq\emptyset.$
Thus we conclude that $d$ is not the least fixpoint, but, according to
Proposition 4.5, we can decrease the value of $d$ in the positions
$(x,z),(z,x)$ and obtain a pre-fixpoint from which we can continue fixpoint
iteration.
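The whole iteration of this example can be replayed mechanically. The following sketch (names and encoding are ours; distances and weights are scaled to integer tenths so that all comparisons are exact) computes the greatest fixpoint of $\mathcal{J}_{\\#}^{d}$ from the top element and confirms that it equals $S(\\{(x,z)\\})$:

```python
# Replay of Example 6.8: iterate the dual approximation J_#^d from the top.
eta = {"x": {"x", "z"}, "y": {"x", "y", "z"}, "z": {"x"}}
w = {"x": 1, "y": 6, "z": 3}      # weights 0.1, 0.6, 0.3 encoded in tenths
X = list(eta)

def d(u, v):                      # the fixpoint d of the example: 0.5 (= 5
    return 0 if u == v else 5     # tenths) on all off-diagonal pairs

def hausdorff(X1, X2):
    return max(max(min(d(a, b) for b in X2) for a in X1),
               max(min(d(a, b) for a in X1) for b in X2))

def in_H_sharp(R, X1, X2):
    """Is (X1, X2) in H_#^d(R)? (dual approximation of the Hausdorff lifting)"""
    h = hausdorff(X1, X2)
    for a in X1:
        if min(d(a, b) for b in X2) == h and \
           not any((a, b) in R and d(a, b) == h for b in X2):
            return False
    for b in X2:
        if min(d(a, b) for a in X1) == h and \
           not any((a, b) in R and d(a, b) == h for a in X1):
            return False
    return True

def J_sharp(R):
    return {(u, v) for u in X for v in X if d(u, v) > 0
            and abs(w[u] - w[v]) < hausdorff(eta[u], eta[v])
            and in_H_sharp(R, eta[u], eta[v])}

current = {(u, v) for u in X for v in X if d(u, v) > 0}  # top element [X x X]^d
while J_sharp(current) != current:
    current = J_sharp(current)
print(current)  # {('x','z'), ('z','x')}: non-empty, d is not the least fixpoint
```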
### 6.3. Bisimilarity
In order to define standard bisimilarity we use a variant $\mathcal{G}$ of the
Hausdorff lifting $\mathcal{H}$ defined before, where $\max$ and $\min$ are
swapped. More precisely, $\mathcal{G}:\mathbb{M}^{X\times
X}\to\mathbb{M}^{\mathbf{2}^{X}\times\mathbf{2}^{X}}$ is defined, for
$d\in\mathbb{M}^{X\times X}$, by
$\mathcal{G}(d)(X_{1},X_{2})=\max\\{\min_{(x_{1},x_{2})\in
C}d(x_{1},x_{2})\mid C\subseteq X\times X\ \land\ u(C)=(X_{1},X_{2})\\}$.
###### Lemma 6.9.
The approximation for the adapted Hausdorff lifting $\mathcal{G}$ in the
primal sense is as follows. Let $a\colon X\times X\to\\{0,1\\}$, then $\mathcal{G}^{\\#}_{a}\colon\mathbf{2}^{[{X\times X}]_{a}}\to\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}}$ with
$\displaystyle\mathcal{G}_{a}^{\\#}(R)$ $\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}\mid$
$\displaystyle\qquad\qquad\quad\forall x_{1}\in X_{1}\exists x_{2}\in
X_{2}\colon\big{(}(x_{1},x_{2})\not\in[{X\times X}]_{a}\lor(x_{1},x_{2})\in
R\big{)}$ $\displaystyle\qquad\qquad\mathop{\land}\forall x_{2}\in
X_{2}\exists x_{1}\in X_{1}\colon\big{(}(x_{1},x_{2})\not\in[{X\times
X}]_{a}\lor(x_{1},x_{2})\in R\big{)}\\}$
###### Proof 6.10.
We rely on the characterisation of $\mathcal{H}_{\\#}^{a}$ (dual case) given in Section 6.2 and examine the case where $\mathbb{M}=\\{0,1\\}$. In this
case, whenever we have
$(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]^{\mathcal{H}(a)}$ it
must necessarily hold that $\mathcal{H}(a)(X_{1},X_{2})=1$. Hence, the first
part of the conjunction simplifies to:
$\forall x_{1}\in X_{1}\big{(}\min_{x_{2}^{\prime}\in
X_{2}}a(x_{1},x_{2}^{\prime})=1\,\Rightarrow\,\exists x_{2}\in
X_{2}\colon(x_{1},x_{2})\in R\land a(x_{1},x_{2})=1\big{)},$
from which we can omit $a(x_{1},x_{2})=1$ from the conclusion, since this
holds automatically. Furthermore $\min_{x^{\prime}_{2}\in
X_{2}}a(x_{1},x^{\prime}_{2})=1$ can be rewritten to $\forall x_{2}\in
X_{2}\colon a(x_{1},x_{2})=1$. This gives us:
$\displaystyle\forall x_{1}\in X_{1}\big{(}\lnot\forall x_{2}\in X_{2}\colon
a(x_{1},x_{2})=1\mathop{\lor}\exists x_{2}\in X_{2}\colon(x_{1},x_{2})\in
R\big{)}$ $\displaystyle\equiv$ $\displaystyle\forall x_{1}\in
X_{1}\big{(}\exists x_{2}\in X_{2}\colon a(x_{1},x_{2})=0\mathop{\lor}\exists
x_{2}\in X_{2}\colon(x_{1},x_{2})\in R\big{)}$ $\displaystyle\equiv$
$\displaystyle\forall x_{1}\in X_{1}\exists x_{2}\in
X_{2}\big{(}(x_{1},x_{2})\not\in[{X\times X}]^{a}\mathop{\lor}(x_{1},x_{2})\in
R\big{)}.$
Since this characterisation is independent of the order, we can replace
$[{X\times X}]^{a}$ by $[{X\times X}]_{a}$ and obtain a characterizing
condition for $\mathcal{G}_{a}^{\\#}$ (primal case).
Now we can define the fixpoint function for bisimilarity and its corresponding
approximation. For simplicity we consider unlabelled transition systems, but
it would be straightforward to handle labelled transitions.
Let $X$ be a finite set of states and $\eta:X\to\mathbf{2}^{X}$ a function
that assigns a set of successors $\eta(x)$ to a state $x\in X$. The fixpoint
function for bisimilarity $\mathcal{B}:\\{0,1\\}^{X\times
X}\to\\{0,1\\}^{X\times X}$ can be expressed by using the Hausdorff lifting
$\mathcal{G}$ with $\mathbb{M}=\\{0,1\\}$.
Bisimilarity on $\eta$ is the greatest fixpoint of
$\mathcal{B}=(\eta\times\eta)^{*}\circ\mathcal{G}$.
###### Proof 6.11.
Let $a:X\times X\to\\{0,1\\}$ and $x,y\in X$. Then we have
$\displaystyle(\eta\times\eta)^{*}\circ\mathcal{G}(a)(x,y)$ $\displaystyle=$
$\displaystyle\mathcal{G}(a)(\eta(x),\eta(y))$ $\displaystyle=$
$\displaystyle\max\nolimits_{u}(\min\nolimits_{\in}(a))(\eta(x),\eta(y))$
$\displaystyle=$
$\displaystyle\max_{u(C)=(\eta(x),\eta(y))}(\min\nolimits_{\in}^{X\times
X}(a))(C)$ $\displaystyle=$
$\displaystyle\max_{u(C)=(\eta(x),\eta(y))}\min_{(x^{\prime},y^{\prime})\in
C}a(x^{\prime},y^{\prime})$
Now we prove that this, indeed, corresponds with the standard bisimulation
function, i.e. $\max_{u(C)=(\eta(x),\eta(y))}\min_{(x^{\prime},y^{\prime})\in
C}a(x^{\prime},y^{\prime})=1$ if and only if for all $x^{\prime}\in\eta(x)$
there exists $y^{\prime}\in\eta(y)$ such that $a(x^{\prime},y^{\prime})=1$ and
vice versa. For the first implication, assume that
$\max_{u(C)=(\eta(x),\eta(y))}\min_{(x^{\prime},y^{\prime})\in
C}a(x^{\prime},y^{\prime})=1$. This means that there exists $C\subseteq
X\times X$ such that $u(C)=(\pi_{1}(C),\pi_{2}(C))=(\eta(x),\eta(y))$ and
$\min_{(x^{\prime},y^{\prime})\in C}a(x^{\prime},y^{\prime})=1$. Then we have
two cases. Either $C=\emptyset$, which means that $\eta(x)=\eta(y)=\emptyset$,
that is, $x$ and $y$ have no successors, and so the bisimulation property
vacuously holds. Otherwise, $C\neq\emptyset$, and we must have
$a(x^{\prime},y^{\prime})=1$ for all $(x^{\prime},y^{\prime})\in C$. Then,
since $(\pi_{1}(C),\pi_{2}(C))=(\eta(x),\eta(y))$, for all $x^{\prime}\in\eta(x)$ there must exist $y^{\prime}\in\eta(y)$ such that $(x^{\prime},y^{\prime})\in C$, and thus $a(x^{\prime},y^{\prime})=1$. Vice versa, for all $y^{\prime}\in\eta(y)$ there must exist $x^{\prime}\in\eta(x)$ such that $(x^{\prime},y^{\prime})\in C$, and thus $a(x^{\prime},y^{\prime})=1$. So the bisimulation property holds.
For the other implication, assume that for all $x^{\prime}\in\eta(x)$ there
exists $y^{\prime}\in\eta(y)$ such that $a(x^{\prime},y^{\prime})=1$ and call
$c_{1}(x^{\prime})$ such a $y^{\prime}$. Vice versa, assume also that for all
$y^{\prime}\in\eta(y)$ there exists $x^{\prime}\in\eta(x)$ such that
$a(x^{\prime},y^{\prime})=1$ and call $c_{2}(y^{\prime})$ such a $x^{\prime}$.
This means that for all $x^{\prime}\in\eta(x)$ and $y^{\prime}\in\eta(y)$, we
have $a(x^{\prime},c_{1}(x^{\prime}))=a(c_{2}(y^{\prime}),y^{\prime})=1$. Now
let $C^{\prime}=\\{(x^{\prime},y^{\prime})\in\eta(x)\times\eta(y)\mid
c_{1}(x^{\prime})=y^{\prime}\lor x^{\prime}=c_{2}(y^{\prime})\\}$. Since we
assumed that for all $x^{\prime}\in\eta(x)$ there exists
$y^{\prime}\in\eta(y)$ such that $c_{1}(x^{\prime})=y^{\prime}$, we must have
that $\pi_{1}(C^{\prime})=\eta(x)$. The same holds for all
$y^{\prime}\in\eta(y)$, thus $\pi_{2}(C^{\prime})=\eta(y)$. Therefore, we know
that $u(C^{\prime})=(\eta(x),\eta(y))$, and we can conclude by showing that
$a(x^{\prime},y^{\prime})=1$ for all $(x^{\prime},y^{\prime})\in C^{\prime}$,
in which case also
$\max_{u(C)=(\eta(x),\eta(y))}\min_{(x^{\prime},y^{\prime})\in
C}a(x^{\prime},y^{\prime})=1$. By definition of $C^{\prime}$ either
$c_{1}(x^{\prime})=y^{\prime}$ or $x^{\prime}=c_{2}(y^{\prime})$, or both,
must hold. Assume the first one holds, the other case is similar. Then, we can
immediately conclude since by hypothesis we know that
$a(x^{\prime},c_{1}(x^{\prime}))=1$.
Since we proved that the function $\mathcal{B}$ coincides with the standard bisimulation function, its greatest fixpoint $\nu\mathcal{B}$ is the bisimilarity on $\eta$.
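To illustrate the correspondence, the following sketch (a small Python encoding of ours, not code from the paper) computes $\nu\mathcal{B}$ by Kleene iteration from the top element, using the characterisation of $\mathcal{G}$ just proved, on a three-state system with $x\to y$, $y$ without successors and $u\to u$ (the system also used in Examples 6.13 and 6.14 below):

```python
# Bisimilarity as the greatest fixpoint of B = (eta x eta)* . G.
eta = {"x": {"y"}, "y": set(), "u": {"u"}}
X = list(eta)

def G(a, X1, X2):
    """G(a)(X1, X2) for M = {0,1}: 1 iff every element of X1 has an
    a-partner in X2 and vice versa (the empty coupling covers X1 = X2 = {})."""
    return int(all(any(a[(p, q)] for q in X2) for p in X1)
               and all(any(a[(p, q)] for p in X1) for q in X2))

def B(a):
    return {(s, t): G(a, eta[s], eta[t]) for s in X for t in X}

a = {(s, t): 1 for s in X for t in X}   # top element: everything related
while B(a) != a:                        # Kleene iteration from the top
    a = B(a)
print(sorted(p for p, v in a.items() if v == 1))
# [('u', 'u'), ('x', 'x'), ('y', 'y')]: only reflexive pairs are bisimilar
```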
Since we are interested in the greatest fixpoint, we are working in the primal
sense. Bisimulation relations are represented by their characteristic functions $a\colon X\times X\to\\{0,1\\}$; in fact, the corresponding relation can be obtained by taking the complement of $[{X\times X}]_{a}=\\{(x_{1},x_{2})\in X\times X\mid a(x_{1},x_{2})=0\\}$.
Let $a\colon X\times X\to\\{0,1\\}$. The approximation for the bisimilarity
function $\mathcal{B}$ in the primal sense is
$\mathcal{B}_{a}^{\\#}\colon\mathbf{2}^{[{X\times
X}]_{a}}\to\mathbf{2}^{[{X\times X}]_{\mathcal{B}(a)}}$ with
$\displaystyle\mathcal{B}_{a}^{\\#}(R)$ $\displaystyle=$
$\displaystyle\\{(x_{1},x_{2})\in[{X\times X}]_{\mathcal{B}(a)}\mid$
$\displaystyle\quad\forall y_{1}\in\eta(x_{1})\exists y_{2}\in\eta(x_{2})\big{(}(y_{1},y_{2})\not\in[{X\times X}]_{a}\lor(y_{1},y_{2})\in R\big{)}$
$\displaystyle\ \land\forall y_{2}\in\eta(x_{2})\exists y_{1}\in\eta(x_{1})\big{(}(y_{1},y_{2})\not\in[{X\times X}]_{a}\lor(y_{1},y_{2})\in R\big{)}\\}$
###### Proof 6.12.
From Lemma 6.9 we know that
$\displaystyle\mathcal{G}_{a}^{\\#}\colon\mathbf{2}^{[{X\times X}]_{a}}$ $\displaystyle\to$
$\displaystyle\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}}$
$\displaystyle\mathcal{G}_{a}^{\\#}(R)$ $\displaystyle=$
$\displaystyle\\{(X_{1},X_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}\mid$
$\displaystyle\quad\forall x_{1}\in X_{1}\exists x_{2}\in
X_{2}\colon\big{(}(x_{1},x_{2})\not\in[{X\times X}]_{a}\lor(x_{1},x_{2})\in
R\big{)}$ $\displaystyle\mathop{\land}\forall x_{2}\in X_{2}\exists x_{1}\in
X_{1}\colon\big{(}(x_{1},x_{2})\not\in[{X\times X}]_{a}\lor(x_{1},x_{2})\in
R\big{)}\\}.$
Furthermore
$\displaystyle((\eta\times\eta)^{*})_{\mathcal{G}(a)}^{\\#}\colon\mathbf{2}^{[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}}$ $\displaystyle\to$ $\displaystyle\mathbf{2}^{[{X\times X}]_{\mathcal{B}(a)}}$
$\displaystyle((\eta\times\eta)^{*})_{\mathcal{G}(a)}^{\\#}(R)$
$\displaystyle=$ $\displaystyle(\eta\times\eta)^{-1}(R)$
Composing these functions we obtain:
$\displaystyle\mathcal{B}_{a}^{\\#}\colon\mathbf{2}^{[{X\times X}]_{a}}$ $\displaystyle\to$
$\displaystyle\mathbf{2}^{[{X\times X}]_{\mathcal{B}(a)}}$
$\displaystyle\mathcal{B}_{a}^{\\#}(R)$ $\displaystyle=$
$\displaystyle(\eta\times\eta)^{-1}(\\{(Y_{1},Y_{2})\in[{\mathbf{2}^{X}\times\mathbf{2}^{X}}]_{\mathcal{G}(a)}\mid$
$\displaystyle\qquad\forall y_{1}\in Y_{1}\exists y_{2}\in
Y_{2}\colon\big{(}(y_{1},y_{2})\not\in[{X\times X}]_{a}\lor(y_{1},y_{2})\in
R\big{)}$ $\displaystyle\quad\,\mathop{\land}\forall y_{2}\in Y_{2}\exists
y_{1}\in Y_{1}\colon\big{(}(y_{1},y_{2})\not\in[{X\times
X}]_{a}\lor(y_{1},y_{2})\in R\big{)}\\})$ $\displaystyle=$
$\displaystyle\\{(x_{1},x_{2})\in[{X\times X}]_{\mathcal{B}(a)}\mid$
$\displaystyle\quad\forall y_{1}\in\eta(x_{1})\exists
y_{2}\in\eta(x_{2})\colon\big{(}(y_{1},y_{2})\not\in[{X\times
X}]_{a}\lor(y_{1},y_{2})\in R\big{)}$ $\displaystyle\mathop{\land}\forall
y_{2}\in\eta(x_{2})\exists
y_{1}\in\eta(x_{1})\colon\big{(}(y_{1},y_{2})\not\in[{X\times
X}]_{a}\lor(y_{1},y_{2})\in R\big{)}\\}.$
We conclude this section by discussing how this view on bisimilarity can be
useful: first, it again opens up the possibility to compute bisimilarity – a
greatest fixpoint – by iterating from below, through smaller fixpoints. This
could potentially be useful if it is easy to compute the least fixpoint of
$\mathcal{B}$ inductively and continue from there.
Furthermore, we obtain a technique for witnessing non-bisimilarity of states.
While this can also be done by exhibiting a distinguishing modal formula
[hm:hm-logic, c:automatically-explaining-bisim] or by a winning strategy for
the spoiler in the bisimulation game [s:bisim-mc-other-games], to our
knowledge there is no known method that does this directly, based on the
definition of bisimilarity.
With our technique we can witness non-bisimilarity of two states
$x_{1},x_{2}\in X$ by presenting a pre-fixpoint $a$ (i.e., $\mathcal{B}(a)\leq
a$) such that $a(x_{1},x_{2})=0$ (equivalent to $(x_{1},x_{2})\in[{X\times
X}]_{a}$) and $\nu\mathcal{B}_{a}^{\\#}=\emptyset$, since this implies
$\nu\mathcal{B}(x_{1},x_{2})\leq a(x_{1},x_{2})=0$ by our proof rule.
There are two issues to discuss: first, how can we characterise a pre-fixpoint
of $\mathcal{B}$ (which is quite unusual, since bisimulations are post-
fixpoints)? In fact, the condition $\mathcal{B}(a)\leq a$ can be rewritten to:
for all $(x_{1},x_{2})\in[{X\times X}]_{a}$ there exists $y_{1}\in\eta(x_{1})$
such that for all $y_{2}\in\eta(x_{2})$ we have $(y_{1},y_{2})\in[{X\times
X}]_{a}$ (_or_ vice versa). Second, at first sight it does not seem as if we
gained anything since we still have to do a fixpoint computation on relations.
However, the carrier set is $[{X\times X}]_{a}$, i.e., a set of non-
bisimilarity witnesses and this set can be small even though $X$ might be
large.
###### Example 6.13.
We consider the transition system depicted below.
[Diagram: transition system with transitions $x\to y$ and $u\to u$; $y$ has no successors.]
Our aim is to construct a witness showing that $x,u$ are not bisimilar. This witness is a function $a\colon X\times X\to\\{0,1\\}$ with $a(x,u)=0=a(y,u)$, where for all other pairs the value is $1$. Hence $[{X\times X}]_{a}=\\{(x,u),(y,u)\\}$ and it is easy to check that $a$ is a pre-fixpoint of $\mathcal{B}$ and that $\nu\mathcal{B}_{a}^{\\#}=\emptyset$: we iterate over $\\{(x,u),(y,u)\\}$ and first remove $(y,u)$ (since $y$ has no successors) and then $(x,u)$. This implies that $\nu\mathcal{B}\leq a$ and hence $\nu\mathcal{B}(x,u)=0$, which means that $x,u$ are not bisimilar.
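The same iteration can be scripted directly on the approximation. The following sketch (assuming the transition structure depicted above: $x\to y$, $y$ without successors, $u\to u$) iterates $\mathcal{B}_{a}^{\\#}$ from the top element $[{X\times X}]_{a}$ and confirms that the greatest fixpoint is empty:

```python
# Replay of Example 6.13: greatest fixpoint of B_a^# over [X x X]_a.
eta = {"x": {"y"}, "y": set(), "u": {"u"}}
zero_pairs = {("x", "u"), ("y", "u")}   # [X x X]_a, i.e. pairs with a = 0

def B_sharp(R):
    def ok(p, q):   # (p, q) not in [X x X]_a, or (p, q) in R
        return (p, q) not in zero_pairs or (p, q) in R
    return {(s, t) for (s, t) in R
            if all(any(ok(p, q) for q in eta[t]) for p in eta[s])
            and all(any(ok(p, q) for p in eta[s]) for q in eta[t])}

current = set(zero_pairs)               # top element of the lattice
while B_sharp(current) != current:      # first (y,u) is removed, then (x,u)
    current = B_sharp(current)
print(current)  # set(): empty, hence nu B <= a and x, u are not bisimilar
```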
###### Example 6.14.
We modify Example 6.13 and consider a function $a$ where $a(x,u)=0$ and all other values are $1$. Again $a$ is a pre-fixpoint of $\mathcal{B}$ and $\nu\mathcal{B}\leq a$ (since only reflexive pairs are in the bisimilarity). However $\nu\mathcal{B}_{a}^{\\#}\neq\emptyset$, since $\\{(x,u)\\}$ is a post-fixpoint. This is a counterexample to completeness, as discussed after 4.2.
Intuitively speaking, the states $y,u$ over-approximate and claim that they
are bisimilar, although they are not. (This is permissible for a pre-
fixpoint.) This tricks $x,u$ into thinking that there is some wiggle room and
that one can increase the value of $(x,u)$. This is true, but only because of
the limited, local view, since the “true” value of $(y,u)$ is $0$.
### 6.4. Behavioural metrics for probabilistic automata
We now consider behavioural metrics for probabilistic automata, which involve
both non-deterministic branching (as in Section 6.2) as well as probabilistic
branching. Before we start, we first consider the Kantorovich lifting and the
corresponding approximation.
#### Kantorovich lifting.
The Kantorovich (also known as Wasserstein) lifting converts a metric on $X$
to a metric on probability distributions over $X$. As for the Hausdorff
lifting, we lift distance functions that are not necessarily metrics.
Furthermore, in order to ensure finiteness of all the sets involved, we
restrict to $D\subseteq\mathcal{D}(X)$, some finite set of probability
distributions over $X$. A _coupling_ of $p,q\in D$ is a probability
distribution $c\in\mathcal{D}(X\times X)$ whose left and right marginals are
$p,q$, i.e., $p(x_{1})=m_{c}^{L}(x_{1}):=\sum_{x_{2}\in X}c(x_{1},x_{2})$ and
$q(x_{2})=m_{c}^{R}(x_{2}):=\sum_{x_{1}\in X}c(x_{1},x_{2})$. The set of all
couplings of $p,q$, denoted by $\Omega(p,q)$, forms a polytope with finitely
many vertices [pc:computational-ot]. The set of all polytope vertices that are
obtained by coupling any $p,q\in D$ is also finite and is denoted by
$\mathit{VP}_{D}\subseteq\mathcal{D}(X\times X)$.
The Kantorovich lifting is given by $\mathcal{K}:[0,1]^{X\times
X}\to[0,1]^{D\times D}$ where
$\mathcal{K}(d)(p,q)=\min_{c\in\Omega(p,q)}\sum_{(x_{1},x_{2})\in X\times
X}c(x_{1},x_{2})\cdot d(x_{1},x_{2}).$
The coupling $c$ can be interpreted as the optimal transport plan to move
goods from suppliers to customers [v:optimal-transport]. Again we provide an
alternative characterisation, which shows non-expansiveness of $\mathcal{K}$
and allows one to derive its approximations.
Let $u:\mathit{VP}_{D}\to D\times D$, $u(c)=(m_{c}^{L},m_{c}^{R})$. Then
$\mathcal{K}=\min\nolimits_{u}\circ\mathrm{av}_{\mathit{VP}_{D}}$
where $\mathrm{av}_{\mathit{VP}_{D}}\colon[0,1]^{X\times
X}\to[0,1]^{\mathit{VP}_{D}}$,
$\min_{u}\colon[0,1]^{\mathit{VP}_{D}}\to[0,1]^{D\times D}$.
###### Proof 6.15.
It holds that $u^{-1}(p,q)=\Omega(p,q)\cap\textit{VP}_{D}$ for $p,q\in D$.
Furthermore note that it is sufficient to consider as couplings only the vertices, i.e., the elements of $\mathit{VP}_{D}$, since the minimum is always attained there [pc:computational-ot].
Hence we obtain for $d\colon X\times X\to[0,1]$, $p,q\in D$:
$\displaystyle\min\nolimits_{u}(\mathrm{av}_{\mathit{VP}_{D}}(d))(p,q)$
$\displaystyle=$
$\displaystyle\min_{c\in\Omega(p,q)\cap\mathit{VP}_{D}}\mathrm{av}_{\mathit{VP}_{D}}(d)(c)$
$\displaystyle=$
$\displaystyle\min_{c\in\Omega(p,q)\cap\mathit{VP}_{D}}\sum_{(x_{1},x_{2})\in X\times X}c(x_{1},x_{2})\cdot d(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\min_{c\in\Omega(p,q)}\sum_{(x_{1},x_{2})\in X\times X}c(x_{1},x_{2})\cdot d(x_{1},x_{2})$ $\displaystyle=$
$\displaystyle\mathcal{K}(d)(p,q)$
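Since the couplings $\Omega(p,q)$ form a polytope and the objective is linear, $\mathcal{K}(d)(p,q)$ can be computed with an off-the-shelf LP solver. The following sketch (assuming NumPy and SciPy are available; the instance is ours) sets up the optimal-transport linear program directly:

```python
import numpy as np
from scipy.optimize import linprog

# Compute K(d)(p, q) = min over couplings c of sum c(x1,x2) * d(x1,x2).
X = [0, 1, 2]
d = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.5],
              [1.0, 0.5, 0.0]])     # an example distance on X
p = np.array([0.5, 0.5, 0.0])
q = np.array([0.0, 0.5, 0.5])

n = len(X)
# Marginal constraints: rows of the coupling sum to p, columns sum to q.
A_eq = []
for i in range(n):                  # left marginal
    row = np.zeros((n, n)); row[i, :] = 1; A_eq.append(row.ravel())
for j in range(n):                  # right marginal
    col = np.zeros((n, n)); col[:, j] = 1; A_eq.append(col.ravel())
b_eq = np.concatenate([p, q])

res = linprog(d.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=[(0, 1)] * (n * n))
print(res.fun)                      # K(d)(p, q) = 0.5 for this instance
```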
We now present the approximation of the Kantorovich lifting in the dual sense.
Intuitively, given a distance function $d$ and a relation $M$ on $X$ it
characterises those pairs $(p,q)$ of distributions whose distance in the
Kantorovich metric decreases by a constant when we decrease the distance $d$
for all pairs in $M$ by the same constant.
###### Lemma 6.16.
Let $d\colon X\times X\to[0,1]$. The approximation for the Kantorovich lifting
$\mathcal{K}$ in the dual sense is
$\mathcal{K}_{\\#}^{d}\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{D\times D}]^{\mathcal{K}(d)}}$ with
$\displaystyle\mathcal{K}_{\\#}^{d}(M)$ $\displaystyle=$
$\displaystyle\\{(p,q)\in[{D\times D}]^{\mathcal{K}(d)}\mid\exists
c\in\Omega(p,q),\mathit{supp}(c)\subseteq M,$ $\displaystyle\qquad\sum_{u,v\in X}d(u,v)\cdot c(u,v)=\mathcal{K}(d)(p,q)\\}.$
###### Proof 6.17.
Let $d\colon X\times X\to[0,1]$ and $M\subseteq[{X\times X}]^{d}$. Then we
have:
$\displaystyle\mathcal{K}_{\\#}^{d}(M)$ $\displaystyle=$
$\displaystyle(\min\nolimits_{u})_{\\#}^{\mathrm{av}_{\mathit{VP}_{D}}(d)}((\mathrm{av}_{\mathit{VP}_{D}})_{\\#}^{d}(M))$
where
$\displaystyle(\mathrm{av}_{\mathit{VP}_{D}})_{\\#}^{d}\colon\mathbf{2}^{[{X\times
X}]^{d}}\to\mathbf{2}^{[{\mathit{VP}_{D}}]^{\mathrm{av}_{\mathit{VP}_{D}}(d)}}$
$\displaystyle(\min\nolimits_{u})_{\\#}^{\mathrm{av}_{\mathit{VP}_{D}}^{X\times
X}(d)}\colon\mathbf{2}^{[{\mathit{VP}_{D}}]^{\mathrm{av}_{\mathit{VP}_{D}}(d)}}\to\mathbf{2}^{[{D\times
D}]^{\mathcal{K}(d)}}$
We are using the approximations associated to non-expansive functions, given
in 5.4, and obtain:
$\displaystyle\mathcal{K}_{\\#}^{d}(M)$ $\displaystyle=$
$\displaystyle\\{(p,q)\in[{D\times
D}]^{\mathcal{K}(d)}\mid\mathit{Min}_{\mathrm{av}_{\mathit{VP}_{D}}^{X\times
X}(d)|_{u^{-1}(p,q)}}\cap(\mathrm{av}_{\mathit{VP}_{D}}^{X\times
X})_{\\#}^{d}(M)\neq\emptyset\\}$ $\displaystyle=$
$\displaystyle\\{(p,q)\in[{D\times D}]^{\mathcal{K}(d)}\mid\exists
c\in\Omega(p,q),c\in(\mathrm{av}_{\mathit{VP}_{D}})_{\\#}^{d}(M),$
$\displaystyle\qquad\mathrm{av}_{\mathit{VP}_{D}}(d)(c)=\min_{c^{\prime}\in\Omega(p,q)}\mathrm{av}_{\mathit{VP}_{D}}^{X\times
X}(d)(c^{\prime})\\}$ $\displaystyle=$ $\displaystyle\\{(p,q)\in[{D\times
D}]^{\mathcal{K}(d)}\mid\exists
c\in\Omega(p,q),c\in(\mathrm{av}_{\mathit{VP}_{D}})_{\\#}^{d}(M),$
$\displaystyle\qquad\mathrm{av}_{\mathit{VP}_{D}}(d)(c)=\mathcal{K}(d)(p,q)\\}$
$\displaystyle=$ $\displaystyle\\{(p,q)\in[{D\times
D}]^{\mathcal{K}(d)}\mid\exists c\in\Omega(p,q),\mathit{supp}(c)\subseteq M,$
$\displaystyle\qquad\sum_{u,v\in X}d(u,v)\cdot c(u,v)=\mathcal{K}(d)(p,q)\\}$
#### Probabilistic automata.
We now compare our approach with [bblmtv:prob-bisim-distance-automata], which
describes the first method for computing behavioural distances for
probabilistic automata. Although the behavioural distance arises as a least
fixpoint, it is in fact better, even the only known method, to iterate from
above, in order to reach this least fixpoint. This is done by guessing and
improving couplings, similarly to what happens for strategy iteration
discussed later in Section 7. A major complication, faced in [bblmtv:prob-
bisim-distance-automata], is that the procedure can get stuck at a fixpoint
which is not the least one; in this case one has to detect the situation and decrease the current candidate. In fact, this paper was our inspiration for generalising this technique to the present, more abstract setting.
A _probabilistic automaton_ is a tuple $\mathcal{A}=(S,L,\eta,\ell)$, where
$S$ is a non-empty finite set of states, $L$ is a finite set of labels,
$\eta\colon S\to\mathbf{2}^{\mathcal{D}(S)}$ assigns finite sets of
probability distributions to states and $\ell\colon S\to L$ is a labelling
function. (In the following we again replace $\mathcal{D}(S)$ by a finite
subset $D$.)
The _probabilistic bisimilarity pseudo-metric_ is the least fixpoint of the
function $\mathcal{M}\colon$ $[0,1]^{S\times S}\to[0,1]^{S\times S}$ where for
$d\colon S\times S\to[0,1]$, $s,t\in S$:
$\mathcal{M}(d)(s,t)=\begin{cases}1&\mbox{if $\ell(s)\neq\ell(t)$}\\\
\mathcal{H}(\mathcal{K}(d))(\eta(s),\eta(t))&\mbox{otherwise}\end{cases}$
where $\mathcal{H}$ is the Hausdorff lifting (for $\mathbb{M}=[0,1]$) and
$\mathcal{K}$ is the Kantorovich lifting defined earlier. Now assume that $d$
is a fixpoint of $\mathcal{M}$, i.e., $d=\mathcal{M}(d)$. In order to check
whether $d=\mu\mathcal{M}$, [bblmtv:prob-bisim-distance-automata] adapts the notion of
a self-closed relation from [f:game-metrics-markov-decision].
###### Definition 6.18 ([bblmtv:prob-bisim-distance-automata]).
A relation $M\subseteq S\times S$ is _self-closed_ with respect to
$d=\mathcal{M}(d)$ if, whenever $s\,M\,t$, then
* •
$\ell(s)=\ell(t)$ and $d(s,t)>0$,
* •
if $p\in\eta(s)$ and
$d(s,t)=\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})$, then there
exists $q\in\eta(t)$ and $c\in\Omega(p,q)$ such that $d(s,t)=\sum_{u,v\in
S}d(u,v)\cdot c(u,v)$ and $\mathit{supp}(c)\subseteq M$,
* •
if $q\in\eta(t)$ and
$d(s,t)=\min_{p^{\prime}\in\eta(s)}\mathcal{K}(d)(p^{\prime},q)$, then there
exists $p\in\eta(s)$ and $c\in\Omega(p,q)$ such that $d(s,t)=\sum_{u,v\in
S}d(u,v)\cdot c(u,v)$ and $\mathit{supp}(c)\subseteq M$.
The largest self-closed relation, denoted by $\approx_{d}$, is empty if and only if $d=\mu\mathcal{M}$ [bblmtv:prob-bisim-distance-automata]. We now investigate
the relation between self-closed relations and post-fixpoints of
approximations. For this we will first show that $\mathcal{M}$ can be obtained
as the composition of non-expansive functions, which proves that it is indeed
non-expansive. Furthermore, this decomposition will help in the comparison.
The fixpoint function $\mathcal{M}$ characterizing probabilistic bisimilarity
pseudo-metrics can be written as:
$\mathcal{M}=\max\nolimits_{\rho}\circ(((\eta\times\eta)^{*}\circ\mathcal{H}\circ\mathcal{K})\uplus
c_{l})$
where $\rho\colon(S\times S)\uplus(S\times S)\to(S\times S)$ with
$\rho((s,t),i)=(s,t)$. Furthermore $l\colon S\times S\to[0,1]$ is defined as
$l(s,t)=0$ if $\ell(s)=\ell(t)$ and $l(s,t)=1$ if $\ell(s)\neq\ell(t)$.
###### Proof 6.19.
In fact, given $d\colon S\times S\to[0,1]$, we have
$\displaystyle\max\nolimits_{\rho}((((\eta\times\eta)^{*}\circ\mathcal{H}\circ\mathcal{K})\uplus c_{l})(d))(s,t)$ $\displaystyle=$
$\displaystyle\max\\{((\eta\times\eta)^{*}\circ\mathcal{H}\circ\mathcal{K})(d)(s,t),l(s,t)\\}$
$\displaystyle=$
$\displaystyle\max\\{\mathcal{H}(\mathcal{K}(d))(\eta(s),\eta(t)),l(s,t)\\}$
$\displaystyle=$ $\displaystyle\mathcal{M}(d)(s,t)$
Hence $\mathcal{M}$ is a composition of non-expansive functions and thus non-
expansive itself. We do not spell out $\mathcal{M}_{\\#}^{d}$ explicitly, but
instead show how it is related to self-closed relations.
Let $d\colon S\times S\to[0,1]$ where $d=\mathcal{M}(d)$. Then
$\mathcal{M}_{\\#}^{d}\colon\mathbf{2}^{[{S\times
S}]^{d}}\to\mathbf{2}^{[{S\times S}]^{d}}$, where $[{S\times
S}]^{d}=\\{(s,t)\in S\times S\mid d(s,t)>0\\}$.
Moreover, $M$ is a self-closed relation with respect to $d$ if and only if
$M\subseteq[{S\times S}]^{d}$ and $M$ is a post-fixpoint of
$\mathcal{M}_{\\#}^{d}$.
###### Proof 6.20.
First note that whenever $M$ is self-closed, it holds that $d(s,t)>0$ for all
$(s,t)\in M$ and hence $M\subseteq[{S\times S}]^{d}$.
We abbreviate
$g=(\eta\times\eta)^{*}\circ\mathcal{H}\circ\mathcal{K}\colon[0,1]^{S\times
S}\to[0,1]^{S\times S}$ and hence $\mathcal{M}=\max_{\rho}\circ(g\uplus
c_{l})$.
The approximation $g_{\\#}^{d}$ (yet to be determined) is of type
$g_{\\#}^{d}\colon\mathbf{2}^{[{S\times S}]^{d}}\to\mathbf{2}^{[{S\times
S}]^{g(d)}}$. In the following, we are using the approximations associated to
non-expansive functions, given in 5.4.
Since $c_{l}\colon[0,1]^{S\times S}\to[0,1]^{S\times S}$ is a constant
function, we have
$(c_{l})_{\\#}^{d}\colon\mathbf{2}^{[{S\times S}]^{d}}\to\mathbf{2}^{[{S\times
S}]^{l}},\quad(c_{l})_{\\#}^{d}(M)=\emptyset.$
Hence
$\displaystyle(g\uplus c_{l})_{\\#}^{d}\colon\mathbf{2}^{[{S\times S}]^{d}}$
$\displaystyle\to$ $\displaystyle\mathbf{2}^{[{S\times
S}]^{g(d)}\uplus[{S\times S}]^{l}}$ $\displaystyle(g\uplus
c_{l})_{\\#}^{d}(M)$ $\displaystyle=$ $\displaystyle
g_{\\#}^{d}(M)\times\\{0\\}\cup\emptyset\times\\{1\\}=g_{\\#}^{d}(M)\times\\{0\\}$
for $M\subseteq[{S\times S}]^{d}$.
Furthermore we obtain
$\displaystyle\mathcal{M}_{\\#}^{d}(M)$ $\displaystyle=$
$\displaystyle(\max\nolimits_{\rho})_{\\#}^{g(d)\uplus l}((g\uplus
c_{l})_{\\#}^{d}(M))$ $\displaystyle=$
$\displaystyle(\max\nolimits_{\rho})_{\\#}^{g(d)\uplus
l}(g_{\\#}^{d}(M)\times\\{0\\})$ $\displaystyle=$
$\displaystyle\\{(s,t)\in[{S\times S}]^{\mathcal{M}(d)}\mid\mathit{Max}_{(g(d)\uplus l)|_{\rho^{-1}(\\{(s,t)\\})}}\subseteq g_{\\#}^{d}(M)\times\\{0\\}\\}$
In order to proceed, we examine $\rho^{-1}(\\{(s,t)\\})=\\{((s,t),0),((s,t),1)\\}$. Whenever $\ell(s)\neq\ell(t)$, we have $l(s,t)=1\geq g(d)(s,t)$, hence $\mathit{Max}_{(g(d)\uplus l)|_{\rho^{-1}(\\{(s,t)\\})}}$ contains at least $((s,t),1)$, which is not contained in $g_{\\#}^{d}(M)\times\\{0\\}$, which means that the condition is not satisfied.
Whenever $\ell(s)=\ell(t)$, we have $l(s,t)=0<g(d)(s,t)$ (note that $(s,t)\in[{S\times S}]^{\mathcal{M}(d)}$, hence $g(d)(s,t)>0$), so $\mathit{Max}_{(g(d)\uplus l)|_{\rho^{-1}(\\{(s,t)\\})}}$ contains only $((s,t),0)$, which is contained in $g_{\\#}^{d}(M)\times\\{0\\}$ iff $(s,t)\in g_{\\#}^{d}(M)$.
Summarizing, we obtain
$\displaystyle\mathcal{M}_{\\#}^{d}(M)$ $\displaystyle=$
$\displaystyle\\{(s,t)\in[{S\times
S}]^{\mathcal{M}(d)}\mid\ell(s)=\ell(t),(s,t)\in g_{\\#}^{d}(M)\\}$
$\displaystyle=$ $\displaystyle\\{(s,t)\in S\times S\mid
d(s,t)>0,\ell(s)=\ell(t),(s,t)\in g_{\\#}^{d}(M)\\}$
For the last step observe that $d=\mathcal{M}(d)$.
It is left to characterise $g_{\\#}^{d}$, where
$g=(\eta\times\eta)^{*}\circ\mathcal{H}\circ\mathcal{K}$. We have
$g_{\\#}^{d}=((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(\mathcal{K}(d))}\circ\mathcal{H}_{\\#}^{\mathcal{K}(d)}\circ\mathcal{K}_{\\#}^{d}$
where
$\displaystyle\mathcal{K}_{\\#}^{d}\colon\mathbf{2}^{[{S\times S}]^{d}}$
$\displaystyle\to$ $\displaystyle\mathbf{2}^{[{D\times D}]^{\mathcal{K}(d)}}$
$\displaystyle\mathcal{H}_{\\#}^{\mathcal{K}(d)}\colon\mathbf{2}^{[{D\times
D}]^{\mathcal{K}(d)}}$ $\displaystyle\to$
$\displaystyle\mathbf{2}^{[{\mathbf{2}^{D}\times\mathbf{2}^{D}}]^{\mathcal{H}(\mathcal{K}(d))}}$
$\displaystyle((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(\mathcal{K}(d))}\colon\mathbf{2}^{[{\mathbf{2}^{D}\times\mathbf{2}^{D}}]^{\mathcal{H}(\mathcal{K}(d))}}$
$\displaystyle\to$ $\displaystyle\mathbf{2}^{[{S\times S}]^{g(d)}}.$
It holds that
$((\eta\times\eta)^{*})_{\\#}^{\mathcal{H}(\mathcal{K}(d))}=(\eta\times\eta)^{-1}$
and hence
$(s,t)\in
g_{\\#}^{d}(M)\iff(\eta(s),\eta(t))\in\mathcal{H}_{\\#}^{\mathcal{K}(d)}(\mathcal{K}_{\\#}^{d}(M)).$
Using the characterisation of the approximation of the Hausdorff lifting from Section 6.2, we obtain that this is equivalent to
for all $p\in\eta(s)$, whenever
$\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})=\mathcal{H}(\mathcal{K}(d))(\eta(s),\eta(t))$,
then there exists $q\in\eta(t)$ such that $(p,q)\in\mathcal{K}_{\\#}^{d}(M)$
and $\mathcal{K}(d)(p,q)=\mathcal{H}(\mathcal{K}(d))(\eta(s),\eta(t))$ (and
vice versa),
assuming that $\ell(s)=\ell(t)$ (this is a requirement in the definition of
$\mathcal{M}_{\\#}^{d}(M)$), since then we have
$\mathcal{H}(\mathcal{K}(d))(\eta(s),\eta(t))=d(s,t)>0$ and hence
$(\eta(s),\eta(t))\in[{\mathbf{2}^{D}\times\mathbf{2}^{D}}]^{\mathcal{H}(\mathcal{K}(d))}$.
Since also $d=\mathcal{M}(d)$, the condition above can be rewritten to
for all $p\in\eta(s)$, whenever
$\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})=d(s,t)$, then there
exists $q\in\eta(t)$ such that $(p,q)\in\mathcal{K}_{\\#}^{d}(M)$ and
$\mathcal{K}(d)(p,q)=d(s,t)$ (and vice versa).
From Lemma 6.16 we know that $(p,q)\in\mathcal{K}_{\\#}^{d}(M)$ iff
$\mathcal{K}(d)(p,q)>0$ and there exists $c\in\Omega(p,q)$ such that
$\mathit{supp}(c)\subseteq M$ and $\sum_{u,v\in S}c(u,v)\cdot
d(u,v)=\mathcal{K}(d)(p,q)$. We instantiate the condition above accordingly
and obtain
for all $p\in\eta(s)$, whenever
$d(s,t)=\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})$, then there
exists $q\in\eta(t)$ such that there exists $c\in\Omega(p,q)$ with
$\mathit{supp}(c)\subseteq M$, $\mathcal{K}(d)(p,q)=\sum_{u,v\in S}c(u,v)\cdot
d(u,v)$ and $\mathcal{K}(d)(p,q)=d(s,t)$ (and vice versa).
The last two equalities can be simplified to $d(s,t)=\sum_{u,v\in
S}c(u,v)\cdot d(u,v)$, since
$\mathcal{K}(d)(p,q)\leq\sum_{u,v\in S}c(u,v)\cdot
d(u,v)=d(s,t)=\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})\leq\mathcal{K}(d)(p,q)$
and hence $\mathcal{K}(d)(p,q)=d(s,t)$ can be inferred from the remaining
conditions.
We finally obtain the following equivalent characterisation:
for all $p\in\eta(s)$, whenever
$d(s,t)=\min_{q^{\prime}\in\eta(t)}\mathcal{K}(d)(p,q^{\prime})$, then there
exists $q\in\eta(t)$ such that there exists $c\in\Omega(p,q)$ with
$\mathit{supp}(c)\subseteq M$, $d(s,t)=\sum_{u,v\in S}c(u,v)\cdot d(u,v)$ (and
vice versa).
Hence we obtain that $(s,t)\in g_{\\#}^{d}(M)$ is equivalent to the second and third item of Def. 6.18 (under the assumption that $\ell(s)=\ell(t)$),
while the first item is covered by the other conditions ($d(s,t)>0$ and
$\ell(s)=\ell(t)$) in the characterisation of $\mathcal{M}_{\\#}^{d}(M)$.
## 7\. Simple stochastic games
In this section we show how our techniques can be applied to simple stochastic
games [condon92, c:algorithms-ssg]. In particular, we present two novel
algorithms based on strategy iteration and discuss some runtime results.
### 7.1. Introduction to simple stochastic games
A simple stochastic game is a state-based two-player game where the two
players, Min and Max, each own a subset of states they control, for which they
can choose the successor. The system also contains sink states with an
assigned payoff and averaging states which randomly choose their successor
based on a given probability distribution. The goal of Min is to minimise and
the goal of Max to maximise the payoff.
Simple stochastic games are an important type of games that subsume parity
games and the computation of behavioural distances for probabilistic automata
(cf. Section 6.4, [bblmtv:prob-bisim-distance-automata]). The associated
decision problem (if both players use their best strategies, is the expected
payoff of Max greater than $\frac{1}{2}$?) is known to lie in
$\mathsf{NP}\cap\mathsf{coNP}$, but it is an open question whether it is
contained in $\mathsf{P}$. There are known randomised subexponential
algorithms [bv:randomized-algorithms-games].
It has been shown that it is sufficient to consider positional strategies,
i.e., strategies where the choice of the player is only dependent on the
current state. The expected payoffs for each state form a so-called value
vector and can be obtained as the least solution of a fixpoint equation (see
below).
A _simple stochastic game_ is given by a finite set $V$ of nodes, partitioned
into $\mathit{MIN}$, $\mathit{MAX}$, $\mathit{AV}$ (average) and
$\mathit{SINK}$, and the following data:
$\eta_{\min}:\mathit{MIN}\to\mathbf{2}^{V}$,
$\eta_{\max}:\mathit{MAX}\to\mathbf{2}^{V}$ (successor functions for Min and
Max nodes), $\eta_{\mathrm{av}}:\mathit{AV}\to D$ (probability distributions,
where $D\subseteq\mathcal{D}(V)$ finite) and $w:\mathit{SINK}\to[0,1]$
(weights of sink nodes).
The fixpoint function $\mathcal{V}\colon[0,1]^{V}\to[0,1]^{V}$ is defined
below for $a\colon V\to[0,1]$ and $v\in V$:
$\displaystyle\mathcal{V}(a)(v)=\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\\
\max_{v^{\prime}\in\eta_{\max}(v)}a(v^{\prime})&v\in\mathit{MAX}\\\
\sum_{v^{\prime}\in V}\eta_{\mathrm{av}}(v)(v^{\prime})\cdot
a(v^{\prime})&v\in\mathit{AV}\\\ w(v)&v\in\mathit{SINK}\end{cases}$
The _least_ fixpoint of $\mathcal{V}$ specifies the expected payoff for each node when Min and Max play optimally; for an infinite play the payoff is $0$.
In order to avoid infinite games and guarantee uniqueness of the fixpoint,
many authors [hk:nonterminating-stochastic-games, c:algorithms-ssg,
tvk:strategy-improvement-ssg] restrict to stopping games, which are guaranteed
to terminate for every pair of Min/Max-strategies. Here we deal with general
games where more than one fixpoint may exist. Such a scenario has been studied
in [kkkw:value-iteration-ssg], which considers value iteration to under- and
over-approximate the value vector. The over-approximation faces challenges
with cyclic dependencies, similar to the vicious cycles described earlier.
Here we focus on strategy iteration, which is usually less efficient than
value iteration, but yields a precise result instead of approximating it.
###### Example 7.1.
We consider the game depicted below. Here $\min$ is a Min node with
$\eta_{\min}(\min)=\{\textbf{1},\mathrm{av}\}$, $\max$ is a Max node with
$\eta_{\max}(\max)=\{\bm{\varepsilon},\mathrm{av}\}$, 1 is a sink node with
payoff $1$, $\bm{\varepsilon}$ is a sink node with some small payoff
$\varepsilon\in(0,1)$ and $\mathrm{av}$ is an average node which transitions
to both $\min$ and $\max$ with probability $\frac{1}{2}$.
Min should choose $\mathrm{av}$ as successor, since a payoff of $1$ is bad for
Min. Given this choice of Min, Max should not choose $\mathrm{av}$ as
successor, since this would create an infinite play with payoff $0$. Therefore
Max has to choose $\bm{\varepsilon}$ and be content with a payoff of
$\varepsilon$, which is achieved from all nodes different from $\bm{1}$.
(Figure: the game graph, with sink node $\bm{1}$, Min node $\min$, average node $\mathrm{av}$, Max node $\max$ and sink node $\bm{\varepsilon}$; the node $\mathrm{av}$ moves to $\min$ and $\max$ with probability $\frac{1}{2}$ each.)
In order to determine the approximation of $\mathcal{V}$ and to apply our
techniques, we consider the following equivalent definition:
$\mathcal{V}=(\eta_{\min}^{*}\circ\min\nolimits_{\in})\uplus(\eta_{\max}^{*}\circ\max\nolimits_{\in})\uplus(\eta_{\mathrm{av}}^{*}\circ\mathrm{av}_{D})\uplus c_{w}$,
where ${\in}\subseteq V\times\mathbf{2}^{V}$ is the
“is-element-of” relation on $V$.
###### Proof 7.2.
Let $a\colon V\to[0,1]$. For $v\in\mathit{MAX}$ we have
$\mathcal{V}(a)(v)=(\eta^{*}_{\max}\circ\max\nolimits_{\in})(a)(v)=\max\nolimits_{\in}(a)(\eta_{\max}(v))=\max_{v^{\prime}\in\eta_{\max}(v)}a(v^{\prime}).$
For $v\in\mathit{MIN}$ we have
$\mathcal{V}(a)(v)=(\eta^{*}_{\min}\circ\min\nolimits_{\in})(a)(v)=\min\nolimits_{\in}(a)(\eta_{\min}(v))=\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime}).$
For $v\in\mathit{AV}$ we have
$\mathcal{V}(a)(v)=(\eta^{*}_{\mathrm{av}}\circ\mathrm{av}_{D})(a)(v)=\mathrm{av}_{D}(a)(\eta_{\mathrm{av}}(v))=\sum_{v^{\prime}\in
V}\eta_{\mathrm{av}}(v)(v^{\prime})\cdot a(v^{\prime}).$
For $v\in\mathit{SINK}$ we have $\mathcal{V}(a)(v)=c_{w}(a)(v)=w(v)$.
As a composition of non-expansive functions, $\mathcal{V}$ is non-expansive as
well. Since we are interested in the least fixpoint, we work in the dual sense
and obtain the following approximation, which intuitively says that we can
decrease the value at a node $v$ by a constant only if we decrease, in the case
of a Min node, the value of one successor where the minimum is reached; in the
case of a Max node, the values of all successors where the maximum is reached;
and in the case of an average node, the values of all successors.
Let $a\colon V\to[0,1]$. The approximation for the value iteration function
$\mathcal{V}$ in the dual sense is
$\mathcal{V}_{\#}^{a}\colon\mathbf{2}^{[{V}]^{a}}\to\mathbf{2}^{[{V}]^{\mathcal{V}(a)}}$
with

$\mathcal{V}_{\#}^{a}(V^{\prime})=\{v\in[{V}]^{\mathcal{V}(a)}\mid\big(v\in\mathit{MIN}\land\mathit{Min}_{a_{|\eta_{\min}(v)}}\cap V^{\prime}\not=\emptyset\big)\lor\big(v\in\mathit{MAX}\land\mathit{Max}_{a_{|\eta_{\max}(v)}}\subseteq V^{\prime}\big)\lor\big(v\in\mathit{AV}\land\mathit{supp}(\eta_{\mathrm{av}}(v))\subseteq V^{\prime}\big)\}$
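The following sketch spells out this approximation as a set-valued function, under the dual reading $[V]^{b}=\{v\mid b(v)\neq 0\}$. It reuses the hypothetical data structures and `apply_V` from the sketch in Section 7.1 and is only meant to illustrate the three membership conditions above.

```python
# A hedged sketch of V_#^a in the dual (least fixpoint) reading.

def approx_V(Vp, a, kind, succ_min, succ_max, dist_av, weight):
    """V_#^a(Vp): nodes of [V]^{V(a)} whose value can still be decreased
    together with the nodes in Vp (a set of nodes)."""
    Va = apply_V(a, kind, succ_min, succ_max, dist_av, weight)
    result = set()
    for v in a:
        if Va[v] == 0:                 # v must lie in [V]^{V(a)}
            continue
        if kind[v] == "MIN":
            m = min(a[w] for w in succ_min[v])
            argmins = {w for w in succ_min[v] if a[w] == m}
            if argmins & Vp:           # some minimal successor is decreased
                result.add(v)
        elif kind[v] == "MAX":
            m = max(a[w] for w in succ_max[v])
            argmaxs = {w for w in succ_max[v] if a[w] == m}
            if argmaxs <= Vp:          # all maximal successors are decreased
                result.add(v)
        elif kind[v] == "AV":
            if set(dist_av[v]) <= Vp:  # the whole support is decreased
                result.add(v)
        # SINK nodes never propagate a decrease
    return result
```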
###### Proof 7.3.
Let $a\colon V\to[0,1]$ and $V^{\prime}\subseteq[{V}]^{a}$. By Proposition
5.10 we have:
$\mathcal{V}_{\#}^{a}(V^{\prime})=\big(\mathit{MIN}\cap(\eta^{*}_{\min}\circ\min\nolimits_{\in})^{a}_{\#}(V^{\prime})\big)\cup\big(\mathit{MAX}\cap(\eta^{*}_{\max}\circ\max\nolimits_{\in})^{a}_{\#}(V^{\prime})\big)\cup\big(\mathit{AV}\cap(\eta^{*}_{\mathrm{av}}\circ\mathrm{av}_{D})^{a}_{\#}(V^{\prime})\big)\cup\big(\mathit{SINK}\cap(c_{w})_{\#}^{a}(V^{\prime})\big)$
It holds that
$({\eta^{*}_{\min}})_{\#}^{\min\nolimits_{\in}(a)}=\eta^{-1}_{\min}$,
$({\eta^{*}_{\max}})_{\#}^{\max\nolimits_{\in}(a)}=\eta^{-1}_{\max}$ and
$({\eta^{*}_{\mathrm{av}}})_{\#}^{\mathrm{av}_{D}(a)}=\eta^{-1}_{\mathrm{av}}$.
Using previous results (5.4) we deduce
$v\in(\eta^{*}_{\min}\circ\min\nolimits_{\in})^{a}_{\#}(V^{\prime})\Leftrightarrow\eta_{\min}(v)\in(\min\nolimits_{\in})^{a}_{\#}(V^{\prime})\Leftrightarrow\mathit{Min}_{a_{|\eta_{\min}(v)}}\cap V^{\prime}\not=\emptyset$
$v\in(\eta^{*}_{\max}\circ\max\nolimits_{\in})^{a}_{\#}(V^{\prime})\Leftrightarrow\eta_{\max}(v)\in(\max\nolimits_{\in})^{a}_{\#}(V^{\prime})\Leftrightarrow\mathit{Max}_{a_{|\eta_{\max}(v)}}\subseteq V^{\prime}$
$v\in(\eta^{*}_{\mathrm{av}}\circ\mathrm{av}_{D})_{\#}^{a}(V^{\prime})\Leftrightarrow\eta_{\mathrm{av}}(v)\in(\mathrm{av}_{D})^{a}_{\#}(V^{\prime})\Leftrightarrow\mathit{supp}(\eta_{\mathrm{av}}(v))\subseteq V^{\prime}$
Lastly, $(c_{w})^{a}_{\#}(V^{\prime})=\emptyset$ for any $V^{\prime}\subseteq
V$ since $c_{w}$ is a constant function, which concludes the proof.
### 7.2. Strategy iteration from above and below.
We describe two algorithms based on the idea of strategy iteration, first
introduced by Hoffman and Karp in [hk:nonterminating-stochastic-games], that
are, as far as we know, novel. The first iterates towards the least fixpoint
from above and uses the techniques described in Section 4 in order not to get
stuck at a larger fixpoint. The second iterates from below: the role of our
results is not directly visible in the code of the algorithm, but its
non-trivial correctness proof is based on the proof rule introduced earlier.
We first recap the underlying notions. A Min-strategy is a mapping
$\tau\colon\mathit{MIN}\to V$ such that $\tau(v)\in\eta_{\min}(v)$ for every
$v\in\mathit{MIN}$. Following such a strategy, Min decides to always leave a
node $v$ via $\tau(v)$. Analogously, a Max-strategy is a function
$\sigma\colon\mathit{MAX}\to V$ with $\sigma(v)\in\eta_{\max}(v)$ for every
$v\in\mathit{MAX}$. Fixing a strategy for either player induces a
modified value function. If $\tau$ is a Min-strategy, we obtain
$\mathcal{V}_{\tau}$ which is defined exactly as $\mathcal{V}$ but for
$v\in\mathit{MIN}$ where we set $\mathcal{V}_{\tau}(a)(v)=a(\tau(v))$.
Analogously, for $\sigma$ a Max-strategy, $\mathcal{V}_{\sigma}$ is obtained
by setting $\mathcal{V}_{\sigma}(a)(v)=a(\sigma(v))$ when $v\in\mathit{MAX}$.
If both players fix their strategies, the game reduces to a Markov chain.
###### Lemma 7.4.
For any pair of strategies $\sigma,\tau$ we have
$\mathcal{V}_{\sigma}\leq\mathcal{V}\leq\mathcal{V}_{\tau}$.
###### Proof 7.5.
Given any $a\colon V\to[0,1]$ and $v\in V$, we have

$\mathcal{V}_{\sigma}(a)(v)=\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\ a(\sigma(v))&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}\leq\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\ \max_{v^{\prime}\in\eta_{\max}(v)}a(v^{\prime})&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}=\mathcal{V}(a)(v)$
The same proof idea can be applied to show
$\mathcal{V}\leq\mathcal{V}_{\tau}$.
In order to describe our algorithms we also need the notion of a _switch_.
Assume that $\tau$ is a Min-strategy and let $a$ be a (pre-)fixpoint of
$\mathcal{V}_{\tau}$. Min can now potentially improve her strategy at nodes
$v\in\mathit{MIN}$ where
$\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})<a(\tau(v))$, called _switch
nodes_. This results in a Min-strategy
$\tau^{\prime}=\mathit{sw}_{\min}(\tau,a)$, where
$\tau^{\prime}(v)=\arg\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})$
for a switch node $v$, and $\tau^{\prime}$, $\tau$ agree otherwise. (If the
minimum is achieved in several nodes, Min simply chooses one of them. However,
she will only switch if this strictly improves the value.) Also,
$\mathit{sw}_{\max}(\sigma,a)$ is defined analogously for Max-strategies.
Determine $\mu\mathcal{V}$ (from above):

(1) Guess a Min-strategy $\tau^{(0)}$, $i:=0$
(2) $a^{(i)}:=\mu\mathcal{V}_{\tau^{(i)}}$
(3) $\tau^{(i+1)}:=\mathit{sw}_{\min}(\tau^{(i)},a^{(i)})$
(4) If $\tau^{(i+1)}\neq\tau^{(i)}$ then $i:=i+1$ and goto (2).
(5) Compute $V^{\prime}=\nu\mathcal{V}_{\#}^{a}$, where $a=a^{(i)}$.
(6) If $V^{\prime}=\emptyset$ then stop and return $a^{(i)}$. Otherwise set $a^{(i+1)}:=a-(\iota_{\mathcal{V}}^{a}(V^{\prime}))_{V^{\prime}}$, $\tau^{(i+2)}:=\mathit{sw}_{\min}(\tau^{(i)},a^{(i+1)})$, $i:=i+2$, goto (2).

(a) Strategy iteration from above

Determine $\mu\mathcal{V}$ (from below):

(1) Guess a Max-strategy $\sigma^{(0)}$, $i:=0$
(2) $a^{(i)}:=\mu\mathcal{V}_{\sigma^{(i)}}$
(3) $\sigma^{(i+1)}:=\mathit{sw}_{\max}(\sigma^{(i)},a^{(i)})$
(4) If $\sigma^{(i+1)}\neq\sigma^{(i)}$ then set $i:=i+1$ and goto (2). Otherwise stop and return $a^{(i)}$.

(b) Strategy iteration from below

Figure 3. Strategy iteration from above and below
Now strategy iteration from above works as described in Fig. 3(a). The
computation of $\mu\mathcal{V}_{\tau^{(i)}}$ in the second step intuitively
means that Max chooses his best answering strategy and we compute the least
fixpoint based on this answering strategy. At some point no further switches
are possible and we have reached a fixpoint $a$, which need not yet be the
least fixpoint. Hence we use the techniques from Section 4 to decrease $a$ and
obtain a new pre-fixpoint $a^{(i+1)}$, from which we can continue. The
correctness of this procedure partially follows from 4.2 and 4.5; however, we
also need to show the following: first, we can compute
$a^{(i)}=\mu\mathcal{V}_{\tau^{(i)}}$ efficiently by solving a linear program
(cf. subsection 7.2), adapting [condon92]. Second, the chain of the
$a^{(i)}$ decreases, which means that the algorithm eventually terminates
(cf. subsection 7.2).
Strategy iteration from below is given in Fig. 3(b). At first sight, the
algorithm looks simpler than strategy iteration from above, since we do not
have to check whether we have already reached $\mu\mathcal{V}$, reduce and
continue from there. However, in this case the computation of
$\mu\mathcal{V}_{\sigma^{(i)}}$ via a linear program is more involved (cf.
subsection 7.2), since we have to pre-compute (via greatest fixpoint iteration
over $\mathbf{2}^{V}$) the nodes where Min can force a cycle based on the
current strategy of Max, thus obtaining payoff $0$.
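A schematic rendering of the loop of Fig. 3(b) may help. Here `least_fixpoint_below` stands for solving the linear program discussed below (including the pre-computation of the cycle nodes $C_{\sigma}$), and `switch_max` for $\mathit{sw}_{\max}$; both are hypothetical helpers, not shown here.

```python
# A schematic rendering of strategy iteration from below (Fig. 3(b)).

def strategy_iteration_from_below(sigma0, least_fixpoint_below, switch_max):
    sigma = sigma0                        # (1) guessed initial Max-strategy
    while True:
        a = least_fixpoint_below(sigma)   # (2) a^(i) := mu V_sigma
        new_sigma = switch_max(sigma, a)  # (3) switch at improvable Max nodes
        if new_sigma == sigma:            # (4) no switch possible: done
            return a
        sigma = new_sigma
```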
This algorithm does not directly use our technique but we can use our proof
rules to prove the correctness of the algorithm (subsection 7.2). In
particular, the proof that the sequence $a^{(i)}$ increases is quite involved:
we have to show that
$a^{(i)}=\mu\mathcal{V}_{\sigma^{(i)}}\leq\mu\mathcal{V}_{\sigma^{(i+1)}}=a^{(i+1)}$.
This could be done by showing that $\mu\mathcal{V}_{\sigma^{(i+1)}}$ is a pre-
fixpoint of $\mathcal{V}_{\sigma^{(i)}}$, but there is no straightforward way
to do this. Instead, we prove this fact using our proof rules, by showing that
$a^{(i)}$ is below the least fixpoint of $\mathcal{V}_{\sigma^{(i+1)}}$.
The algorithm generalises strategy iteration by Hoffman and Karp
[hk:nonterminating-stochastic-games]. Note that we cannot simply adapt their
proof, since we do not assume that the game is stopping, which is a crucial
ingredient. In the case where the game is stopping, the two algorithms
coincide, meaning that we also provide an alternative correctness proof in
this situation, while other correctness proofs [condon92] are based on linear
algebra and inverse matrices.
The least fixpoints of $\mathcal{V}_{\tau}$ and $\mathcal{V}_{\sigma}$ can be
determined by solving linear programs.
###### Proof 7.6.
We adapt the linear programs found in the literature on simple stochastic
games (see e.g. [condon92]).
The least fixpoint $a=\mu\mathcal{V}_{\tau}$ can be determined by solving the
following linear program:
$\begin{array}{lll}\min&\sum_{v\in V}a(v)&\\ &a(v)=a(\tau(v))&v\in\mathit{MIN}\\ &a(v)\geq a(v^{\prime})&\forall v^{\prime}\in\eta_{\max}(v),\ v\in\mathit{MAX}\\ &a(v)=\sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ &a(v)=w(v)&v\in\mathit{SINK}\end{array}$
By having $a(v)\geq a(v^{\prime})$ for all $v^{\prime}\in\eta_{\max}(v)$ and
$v\in\mathit{MAX}$ we guarantee
$a(v)=\max_{v^{\prime}\in\eta_{\max}(v)}a(v^{\prime})$ since we minimise. The
minimisation also guarantees computation of the least fixpoint (in particular,
nodes that lie on a cycle will get a value of $0$). Hence, the linear program
correctly characterises $\mu\mathcal{V}_{\tau}$.
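As an illustration, this linear program can be assembled mechanically from the game data, e.g. with `scipy.optimize.linprog`. The following is a sketch under the data structures assumed in the earlier sketches, not the authors' MATLAB implementation.

```python
# A sketch of the LP for mu V_tau; variables are a(v) for v in nodes.

import numpy as np
from scipy.optimize import linprog

def mu_V_tau(nodes, kind, tau, succ_max, dist_av, weight):
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    A_eq, b_eq, A_ub, b_ub = [], [], [], []
    for v in nodes:
        if kind[v] == "MIN":               # a(v) = a(tau(v))
            row = np.zeros(n); row[idx[v]] += 1; row[idx[tau[v]]] -= 1
            A_eq.append(row); b_eq.append(0.0)
        elif kind[v] == "MAX":             # a(v) >= a(v'), i.e. a(v') - a(v) <= 0
            for w in succ_max[v]:
                row = np.zeros(n); row[idx[w]] = 1; row[idx[v]] = -1
                A_ub.append(row); b_ub.append(0.0)
        elif kind[v] == "AV":              # a(v) = sum_w eta_av(v)(w) * a(w)
            row = np.zeros(n); row[idx[v]] += 1
            for w, p in dist_av[v].items():
                row[idx[w]] -= p
            A_eq.append(row); b_eq.append(0.0)
        else:                              # SINK: a(v) = w(v)
            row = np.zeros(n); row[idx[v]] = 1
            A_eq.append(row); b_eq.append(weight[v])
    res = linprog(np.ones(n),              # minimise sum of a(v)
                  A_ub=np.array(A_ub) if A_ub else None,
                  b_ub=np.array(b_ub) if b_ub else None,
                  A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return {v: res.x[idx[v]] for v in nodes}
```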
Given a strategy $\sigma$ for Max, we can determine
$a=\mu\mathcal{V}_{\sigma}$ by solving the following linear program:
$\begin{array}{lll}\max&\sum_{v\in V}a(v)&\\ &a(v)=0&v\in C_{\sigma}\\ &a(v)\leq a(v^{\prime})&\forall v^{\prime}\in\eta_{\min}(v),\ v\in\mathit{MIN},\ v\not\in C_{\sigma}\\ &a(v)=a(\sigma(v))&v\in\mathit{MAX},\ v\not\in C_{\sigma}\\ &a(v)=\sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV},\ v\not\in C_{\sigma}\\ &a(v)=w(v)&v\in\mathit{SINK}\end{array}$
The set $C_{\sigma}$ contains those nodes which will guarantee a non-
terminating play if Min plays optimally, given the fixed Max-strategy
$\sigma$.
The set $C_{\sigma}$ can be computed as the greatest fixpoint of the function
$c_{\sigma}$ below, via Kleene iteration on $\mathbf{2}^{V}$ from above:

$c_{\sigma}\colon\mathbf{2}^{V}\to\mathbf{2}^{V}$, where
$c_{\sigma}(V^{\prime})=\{v\in V\mid(v\in\mathit{MIN}\land\eta_{\min}(v)\cap V^{\prime}\neq\emptyset)\lor(v\in\mathit{MAX}\land\sigma(v)\in V^{\prime})\lor(v\in\mathit{AV}\land\mathit{supp}(\eta_{\mathrm{av}}(v))\subseteq V^{\prime})\}$
It is easy to see that $C_{\sigma}=\nu c_{\sigma}$ contains all those nodes
from which Min can force a non-terminating play and hence achieve payoff $0$.
(Note that there are further nodes that guarantee payoff $0$ – namely sinks
with that payoff and nodes which can reach such sinks – but those will obtain
value $0$ in any case.)
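The Kleene iteration for $C_{\sigma}=\nu c_{\sigma}$ is a plain descending iteration on $\mathbf{2}^{V}$; a short sketch (with the same hypothetical data structures as before):

```python
# Compute C_sigma = nu c_sigma by Kleene iteration from the top element V.

def compute_C_sigma(nodes, kind, sigma, succ_min, dist_av):
    C = set(nodes)                                    # start at the top: all of V
    while True:
        nxt = {v for v in nodes
               if (kind[v] == "MIN" and set(succ_min[v]) & C)
               or (kind[v] == "MAX" and sigma[v] in C)
               or (kind[v] == "AV" and set(dist_av[v]) <= C)}
        if nxt == C:
            return C                                  # greatest fixpoint reached
        C = nxt                                       # descending chain, terminates
```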
We now show that this linear program computes $\mu\mathcal{V}_{\sigma}$:
first, by requiring $a(v)\leq a(v^{\prime})$ for all $v\in\mathit{MIN}$,
$v^{\prime}\in\eta_{\min}(v)$, we guarantee
$a(v)=\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})$ since we maximise.
Hence we obtain the greatest fixpoint of the following function
$\mathcal{V}^{\prime}_{\sigma}\colon[0,1]^{V}\to[0,1]^{V}$:

$\mathcal{V}^{\prime}_{\sigma}(a)(v)=\begin{cases}0&v\in C_{\sigma}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV},v\not\in C_{\sigma}\\ a(\sigma(v))&v\in\mathit{MAX},v\not\in C_{\sigma}\\ \min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN},v\not\in C_{\sigma}\\ w(v)&v\in\mathit{SINK}\end{cases}$
It is easy to show that the least fixpoints of $\mathcal{V}^{\prime}_{\sigma}$
and $\mathcal{V}_{\sigma}$ agree, i.e., $\mu\mathcal{V}^{\prime}_{\sigma}$ and
$\mu\mathcal{V}_{\sigma}$:
* $\mu\mathcal{V}^{\prime}_{\sigma}\leq\mu\mathcal{V}_{\sigma}$ can be shown by observing that $\mathcal{V}^{\prime}_{\sigma}\leq\mathcal{V}_{\sigma}$.
* $\mu\mathcal{V}_{\sigma}\leq\mu\mathcal{V}^{\prime}_{\sigma}$ can be shown by proving that $\mu\mathcal{V}^{\prime}_{\sigma}$ is a pre-fixpoint of $\mathcal{V}_{\sigma}$, which can be done via a straightforward case analysis.
We have to show
$\mathcal{V}_{\sigma}(\mu\mathcal{V}^{\prime}_{\sigma})(v)\leq\mu\mathcal{V}^{\prime}_{\sigma}(v)$
for all $v\in V$. We only spell out the case where $v\in\mathit{AV}$; the
other cases are similar. In this case either $v\not\in C_{\sigma}$, which
means that
$\mathcal{V}_{\sigma}(\mu\mathcal{V}^{\prime}_{\sigma})(v)=\mathcal{V}^{\prime}_{\sigma}(\mu\mathcal{V}^{\prime}_{\sigma})(v)=\mu\mathcal{V}^{\prime}_{\sigma}(v).$
If instead $v\in C_{\sigma}$, we have that
$\mathit{supp}(\eta_{\mathrm{av}}(v))\subseteq C_{\sigma}$ and so
$\mu\mathcal{V}^{\prime}_{\sigma}(v^{\prime})=0$ for all
$v^{\prime}\in\mathit{supp}(\eta_{\mathrm{av}}(v))$. Hence
$\mathcal{V}_{\sigma}(\mu\mathcal{V}^{\prime}_{\sigma})(v)=\sum_{v^{\prime}\in
V}\eta_{\mathrm{av}}(v)(v^{\prime})\cdot\mu\mathcal{V}^{\prime}_{\sigma}(v^{\prime})=0=\mu\mathcal{V}^{\prime}_{\sigma}(v)$
If we can now show that $\mathcal{V}^{\prime}_{\sigma}$ has a unique fixpoint,
we are done. The argument for this goes as follows: assume that this function
has another fixpoint $a^{\prime}$ different from
$\mu\mathcal{V}^{\prime}_{\sigma}$. Clearly $[{V}]^{a^{\prime}}\cap
C_{\sigma}=\emptyset$, where $[{V}]^{a^{\prime}}=\{v\in V\mid
a^{\prime}(v)\neq 0\}$. Hence, if we compare
$(\mathcal{V}^{\prime}_{\sigma})_{\#}^{a}\colon\mathbf{2}^{[{V}]^{a}}\to\mathbf{2}^{[{V}]^{\mathcal{V}^{\prime}_{\sigma}(a)}}$
(defined analogously to subsection 7.1) and $c_{\sigma}$ above, we observe
that $(\mathcal{V}^{\prime}_{\sigma})_{\#}^{a^{\prime}}\subseteq
c_{\sigma}|_{\mathbf{2}^{[{V}]^{a^{\prime}}}}$. (Both functions coincide,
apart from their treatment of nodes $v\in\mathit{MIN}$, where
$c_{\sigma}(V^{\prime})$ contains $v$ whenever one of its successors is
contained in $V^{\prime}$, whereas
$(\mathcal{V}^{\prime}_{\sigma})_{\#}^{a^{\prime}}(V^{\prime})$ additionally
requires that the value of this successor is minimal.) Since $a^{\prime}$ is
not the least fixpoint, we have by 4.2 that

$\emptyset\neq\nu(\mathcal{V}^{\prime}_{\sigma})_{\#}^{a^{\prime}}\subseteq\nu(c_{\sigma}|_{\mathbf{2}^{[{V}]^{a^{\prime}}}})\subseteq\nu c_{\sigma}=C_{\sigma}.$

This is a contradiction, since $[{V}]^{a^{\prime}}\cap C_{\sigma}=\emptyset$
as observed above.
This shows that $\mathcal{V}^{\prime}_{\sigma}$ has a unique fixpoint and
completes the proof. Note that if we did not explicitly require that the
values of all nodes in $C_{\sigma}$ are $0$, $\mathcal{V}^{\prime}_{\sigma}$
could have several fixpoints and the linear program would not characterise
the least fixpoint.
Strategy iteration from above and below both terminate and compute the least
fixpoint of $\mathcal{V}$.
###### Proof 7.7.
_Strategy iteration from above:_
We start by showing the following: given any $a^{(i)}$ and the switched
Min-strategy $\tau^{(i+1)}=\mathit{sw}_{\min}(\tau^{(i)},a^{(i)})$, the
vector $a^{(i)}$ is a pre-fixpoint of $\mathcal{V}_{\tau^{(i+1)}}$. By choice
of $\tau^{(i+1)}$ we have
have
$\mathcal{V}_{\tau^{(i+1)}}(a^{(i)})(v)=\begin{cases}a^{(i)}(\tau^{(i+1)}(v))&v\in\mathit{MIN}\\ \max_{v^{\prime}\in\eta_{\max}(v)}a^{(i)}(v^{\prime})&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a^{(i)}(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}=\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a^{(i)}(v^{\prime})&v\in\mathit{MIN}\\ \max_{v^{\prime}\in\eta_{\max}(v)}a^{(i)}(v^{\prime})&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a^{(i)}(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}=\mathcal{V}(a^{(i)})(v)$
By 7.4 $\mathcal{V}\leq\mathcal{V}_{\tau^{(i)}}$ holds and since $a^{(i)}$ is
a fixpoint of $\mathcal{V}_{\tau^{(i)}}$ we conclude
$\displaystyle\mathcal{V}_{\tau^{(i+1)}}(a^{(i)})(v)=\mathcal{V}(a^{(i)})(v)\leq\mathcal{V}_{\tau^{(i)}}(a^{(i)})(v)=a^{(i)}(v)$
Thus we have $a^{(i+1)}\leq a^{(i)}$ (by Knaster-Tarski, since $a^{(i)}$ is a
pre-fixpoint of $\mathcal{V}_{\tau^{(i+1)}}$ and $a^{(i+1)}$ is its least
fixpoint). Furthermore we know that $a^{(i)}$ is not a fixpoint of
$\mathcal{V}_{\tau^{(i+1)}}$ (otherwise we could not have performed a switch)
and hence $a^{(i+1)}$ is strictly smaller than $a^{(i)}$ for at least one
input. Since there are only finitely many strategies we will eventually stop
switching and reach a fixpoint $a=a^{(j)}$ for an index $j$.
Then, if $V^{\prime}=\nu\mathcal{V}_{\#}^{a}=\emptyset$, $a$ is the least
fixpoint and we are done.
Otherwise, we determine
$a^{(j+1)}=a-(\iota_{\mathcal{V}}^{a}(V^{\prime}))_{V^{\prime}}$. By 4.5 (dual
version), $a^{(j+1)}$ is a pre-fixpoint of $\mathcal{V}$. Now Min will choose
her best strategy $\tau=\tau^{(j+2)}=\mathit{sw}_{\min}(\tau^{(j)},a^{(j+1)})$
and we continue computing $a^{(j+2)}=\mu\mathcal{V}_{\tau^{(j+2)}}$. First,
observe that since $a^{(j+1)}$ is a pre-fixpoint of $\mathcal{V}$, it is also
a pre-fixpoint of $\mathcal{V}_{\tau^{(j+2)}}$. In fact, $\mathcal{V}$ and
$\mathcal{V}_{\tau^{(j+2)}}$ coincide on all nodes $v\not\in\mathit{MIN}$. If
$v\in\mathit{MIN}$, we have

$\mathcal{V}_{\tau^{(j+2)}}(a^{(j+1)})(v)=a^{(j+1)}(\tau^{(j+2)}(v))=\min_{v^{\prime}\in\eta_{\min}(v)}a^{(j+1)}(v^{\prime})=\mathcal{V}(a^{(j+1)})(v)\leq a^{(j+1)}(v).$

Hence it follows by Knaster-Tarski that
$a^{(j+2)}=\mu\mathcal{V}_{\tau^{(j+2)}}\leq a^{(j+1)}$. In turn,
$a^{(j+1)}<a^{(j)}$ since $V^{\prime}$ is non-empty and hence also
$a^{(j+2)}<a^{(j)}$ (where $<$ on tuples means $\leq$ in all components
and $<$ in at least one component).
This means that the chain $a^{(i)}$ is strictly descending. Hence, at each
iteration we obtain a new strategy and, since the number of strategies is
finite, the iteration will eventually stop.
Hence the algorithm terminates and stops at the least fixpoint of
$\mathcal{V}$.
_Strategy iteration from below:_
We start as follows: assume $a$ is the least fixpoint of
$\mathcal{V}_{\sigma}$, i.e., $a=\mu\mathcal{V}_{\sigma}$, and $\sigma^{\prime}$
is the new best strategy for Max obtained by switching with respect to $a$,
i.e., $\sigma^{\prime}=\mathit{sw}_{\max}(\sigma,a)$. We have to show that
$a^{\prime}=\mu\mathcal{V}_{\sigma^{\prime}}$ lies above $a$ ($a^{\prime}\geq
a$). Here we use our proof rules (see subsection 4.2) and show the following:
* First, observe that $a$ is a post-fixpoint of $\mathcal{V}_{\sigma^{\prime}}$. For any $v\in V$ we have

$a(v)=\mathcal{V}_{\sigma}(a)(v)=\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\ a(\sigma(v))&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}\leq\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\ \max_{v^{\prime}\in\eta_{\max}(v)}a(v^{\prime})&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}=\begin{cases}\min_{v^{\prime}\in\eta_{\min}(v)}a(v^{\prime})&v\in\mathit{MIN}\\ a(\sigma^{\prime}(v))&v\in\mathit{MAX}\\ \sum_{v^{\prime}\in V}a(v^{\prime})\cdot\eta_{\mathrm{av}}(v)(v^{\prime})&v\in\mathit{AV}\\ w(v)&v\in\mathit{SINK}\end{cases}=\mathcal{V}_{\sigma^{\prime}}(a)(v)$
* Next we show that $\nu(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}=\emptyset$, thus proving that $a\leq\mu\mathcal{V}_{\sigma^{\prime}}=a^{\prime}$ by subsection 4.2. Note that $(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}\colon\mathbf{2}^{[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}}\to\mathbf{2}^{[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}}$, i.e., it restricts to those elements of $a$ where $a$ and $\mathcal{V}_{\sigma^{\prime}}(a)$ coincide.
Whenever $v\in\mathit{MAX}$ is a node where the strategy has been “switched”
with respect to $a$, we have
$\mathcal{V}_{\sigma^{\prime}}(a)(v)=a(\sigma^{\prime}(v))>a(\sigma(v))=a(v).$
The first equality above is true by the definition of
$\mathcal{V}_{\sigma^{\prime}}$ and the last equality holds since $a$ is a
fixpoint of $\mathcal{V}_{\sigma}$. So if $v$ is a switch node, it holds that
$v\not\in[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}$. By contraposition, if
$v\in[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}$, then $v$ cannot be a switch node.
We next show that $(\mathcal{V}_{\sigma})_{*}^{a}$,
$(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}$ agree on
$[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}\subseteq[{V}]^{a}=[{V}]^{a=\mathcal{V}_{\sigma}(a)}$
(remember that $a$ is a fixpoint of $\mathcal{V}_{\sigma}$). It holds that
$(\mathcal{V}_{\sigma})_{*}^{a}(V^{\prime})=\gamma_{\mathcal{V}_{\sigma}(a),\iota}(\mathcal{V}_{\sigma}(\alpha_{a,\iota}(V^{\prime})))$
$(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}(V^{\prime})=\gamma_{\mathcal{V}_{\sigma^{\prime}}(a),\iota}(\mathcal{V}_{\sigma^{\prime}}(\alpha_{a,\iota}(V^{\prime})))\cap[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}$
for a suitable constant $\iota$ and if we choose $\iota$ small enough we can
use the same constant in both cases. Now let
$v\in[{V}]^{a=\mathcal{V}_{\sigma^{\prime}}(a)}$: by definition it holds that
$v\in(\mathcal{V}_{\sigma})_{*}^{a}(V^{\prime})=\gamma_{\mathcal{V}_{\sigma}(a),\iota}(\mathcal{V}_{\sigma}(\alpha_{a,\iota}(V^{\prime})))$
if and only if
$\mathcal{V}_{\sigma}(\alpha_{a,\iota}(V^{\prime}))(v)\ominus\mathcal{V}_{\sigma}(a)(v)\geq\iota$.
Since, by the considerations above, $v$ is not a switch node,
$\mathcal{V}_{\sigma}(b)(v)=\mathcal{V}_{\sigma^{\prime}}(b)(v)$ for all $b$
and we can replace $\mathcal{V}_{\sigma}$ by $\mathcal{V}_{\sigma^{\prime}}$,
resulting in the equivalent statement
$v\in\gamma_{\mathcal{V}_{\sigma^{\prime}}(a),\iota}(\mathcal{V}_{\sigma^{\prime}}(\alpha_{a,\iota}(V^{\prime})))$,
also equivalent to $v\in(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}(V^{\prime})$.
Thus
$\nu(\mathcal{V}_{\sigma^{\prime}})_{*}^{a}\subseteq\nu(\mathcal{V}_{\sigma})_{*}^{a}=\emptyset$.
Hence we obtain an ascending sequence $a^{(i)}$. Furthermore, whenever we
perform a switch, we know that $a^{(i)}$ is not a fixpoint of
$\mathcal{V}_{\sigma^{(i+1)}}$ (otherwise we could not have performed a
switch) and hence $a^{(i+1)}$ is strictly larger than $a^{(i)}$ for at least
one input. Since there are only finitely many strategies we will eventually
stop switching and reach the least fixpoint.
###### Example 7.8.
Ex. 7.1 is well suited to explain our two algorithms.
Starting with strategy iteration from above, we may guess
$\tau^{(0)}(\min)=\textbf{1}$. In this case, Max would choose $\mathrm{av}$ as
successor and we would reach a fixpoint, where each node except for
$\bm{\varepsilon}$ is associated with a payoff of $1$. Next, our algorithm
would detect the vicious cycle formed by $\min$, $\mathrm{av}$ and $\max$. We
can reduce the values in this vicious cycle and reach the correct payoff
values for each node.
For strategy iteration from below assume that
$\sigma^{(0)}(\max)=\mathrm{av}$. Given this strategy of Max, Min can force
the play to stay in a cycle formed by $\min$, $\mathrm{av}$ and $\max$. Thus,
the payoff achieved by the Max strategy $\sigma^{(0)}$ and an optimal play by
Min would be $0$ for each of these nodes. In the next iteration Max switches
and chooses $\bm{\varepsilon}$ as successor, i.e.
$\sigma^{(1)}(\max)=\bm{\varepsilon}$, which results in the correct values.
### 7.3. Runtime results
We implemented strategy iteration from above and from below – abbreviated in
the following by SIA and SIB – and classical Kleene iteration (KI) in MATLAB.
For Kleene iteration we terminate with a tolerance of $10^{-14}$, i.e., we
stop if the change from one iteration to the next is below this value.
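For reference, this termination test amounts to the following loop (a Python sketch, not the MATLAB implementation; `step` is one application of $\mathcal{V}$ as sketched in Section 7.1):

```python
# Kleene iteration with the stopping tolerance used in the experiments.

def kleene_iteration(a0, step, tol=1e-14):
    a = a0
    while True:
        b = step(a)
        if max(abs(b[v] - a[v]) for v in a) < tol:   # change below tolerance
            return b
        a = b
```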
In order to test the algorithms we created random stochastic games with $n$
nodes, where each Max, Min and average node has at most $m$ successors. For
each node we choose one of the four node types at random. Sink nodes are given
a random weight, drawn uniformly from $[0,1]$. Max and Min nodes are randomly
assigned successors, and for an average node we assign a random number to each
of its successors, followed by normalisation to obtain a probability
distribution.
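A possible reading of this generation procedure is sketched below; the exact randomisation details of the MATLAB implementation are not given here, so the following choices are assumptions.

```python
# A hedged sketch of the random-game generator described above.

import random

def random_game(n, m):
    """Random game with n nodes and at most m successors per node (m <= n)."""
    nodes = list(range(n))
    kind, succ_min, succ_max, dist_av, weight = {}, {}, {}, {}, {}
    for v in nodes:
        kind[v] = random.choice(["MIN", "MAX", "AV", "SINK"])
        if kind[v] == "SINK":
            weight[v] = random.uniform(0.0, 1.0)        # uniform weight in [0,1]
        else:
            succ = random.sample(nodes, random.randint(1, m))
            if kind[v] == "MIN":
                succ_min[v] = succ
            elif kind[v] == "MAX":
                succ_max[v] = succ
            else:                                       # normalise to a distribution
                raw = [random.random() for _ in succ]
                s = sum(raw)
                dist_av[v] = {w: r / s for w, r in zip(succ, raw)}
    return nodes, kind, succ_min, succ_max, dist_av, weight
```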
We performed 1000 runs with different randomly created systems for each value
of $n$ and $m=\frac{n}{2}$. The table below shows the runtimes in seconds and
the number of iterations. Also, for SIB, we display the number of nodes with a
payoff of $0$ (for an optimal play of Min) and the number of times SIA got
stuck at any other fixpoint which is not $\mu\mathcal{V}$ (all numbers –
runtime, iterations, etc. – are summed up over all $1000$ runs).
| $n$ | KI runtime (s) | SIA runtime (s) | SIB runtime (s) | KI iterations | SIA iterations | SIB iterations | nodes with payoff 0 | other fixpoints |
|---|---|---|---|---|---|---|---|---|
| 10 | 0.59 | 20.28 | 18.49 | 47302 | 2259 | 2152 | 2439 | 508 |
| 20 | 1.05 | 31.71 | 25.96 | 30275 | 3620 | 3018 | 4714 | 743 |
| 30 | 2.03 | 35.98 | 29.77 | 27361 | 3881 | 3275 | 7268 | 771 |
| 40 | 3.77 | 38.84 | 32.67 | 26999 | 3850 | 3296 | 9806 | 756 |
| 50 | 5.31 | 38.09 | 31.85 | 26604 | 3799 | 3215 | 12573 | 734 |
| 60 | 7.63 | 40.33 | 34.37 | 26467 | 3737 | 3218 | 15151 | 727 |
| 70 | 10.77 | 45.00 | 37.50 | 26569 | 3751 | 3154 | 17473 | 751 |
| 80 | 15.38 | 54.89 | 46.72 | 26179 | 3713 | 3105 | 20031 | 752 |
| 90 | 16.07 | 52.21 | 43.52 | 26401 | 3695 | 3083 | 22390 | 777 |
| 100 | 19.46 | 60.29 | 50.88 | 26464 | 3654 | 3062 | 25163 | 751 |
Note that SIB always performs slightly better than SIA. Moreover, KI clearly
beats both of them. Here we need to remember that KI only converges to the
solution, and it is known that the rate of convergence can be exponentially
slow [c:algorithms-ssg].
Note that the linear optimisation problems are quite costly to solve,
especially for large systems. Thus additional iterations are substantially
more costly compared to KI. Observe also that SIA has to perform more
iterations than SIB, which explains the slightly higher runtime.
The number of nodes with a payoff of 0 seems to grow linearly with the number
of nodes in the system. The number of times SIA gets stuck at a fixpoint
different from $\mu\mathcal{V}$ however seems to be independent of the system
size and comparatively small.
We performed a second comparison, where we assigned to sink nodes a value in
$\{0,1\}$, which is often done for simple stochastic games.
| $n$ | KI runtime (s) | SIA runtime (s) | SIB runtime (s) | KI iterations | SIB iterations | SIA iterations | nodes with payoff 0 | other fixpoints |
|---|---|---|---|---|---|---|---|---|
| 10 | 0.36 | 14.51 | 14.58 | 42547 | 1703 | 1702 | 5484 | 219 |
| 20 | 1.00 | 19.85 | 19.98 | 29515 | 2385 | 2478 | 8168 | 137 |
| 30 | 1.97 | 20.45 | 20.77 | 27643 | 2367 | 2469 | 11502 | 33 |
| 40 | 3.30 | 20.13 | 20.94 | 26761 | 2306 | 2383 | 14989 | 12 |
| 50 | 4.96 | 20.24 | 20.94 | 26562 | 2253 | 2306 | 18821 | 2 |
| 60 | 6.87 | 20.57 | 21.19 | 26560 | 2176 | 2227 | 22573 | 0 |
| 70 | 9.14 | 21.95 | 22.35 | 26146 | 2142 | 2186 | 26260 | 0 |
| 80 | 11.73 | 24.69 | 24.94 | 26235 | 2084 | 2131 | 30136 | 0 |
| 90 | 14.73 | 28.90 | 28.71 | 26330 | 2066 | 2091 | 33930 | 0 |
| 100 | 18.22 | 34.75 | 34.84 | 26227 | 2051 | 2068 | 37496 | 0 |
Here, SIA performs very similarly to SIB. The SIA approach seems to suffer,
since Max can easily find himself in a situation where he can never reach a
$1$-sink, since only half of the sink nodes are of this kind. Additionally, for
these systems a significantly larger number of nodes have a payoff of $0$ and
SIA is less likely to get stuck at a fixpoint different from $\mu\mathcal{V}$.
These factors seem to be correlated, since it is now “harder” for Min to
choose a bad successor (with a value greater than $0$).
## 8\. Conclusion
It is well known that several computations in the context of system
verification can be performed by various forms of fixpoint iteration, and it
is worthwhile to study such methods at a high level of abstraction, typically
in the setting of complete lattices and monotone functions. Going beyond the
classical results by Tarski [t:lattice-fixed-point], the combination of
fixpoint iteration with approximations [CC:TLA,
bkp:abstraction-up-to-games-fixpoint] and with up-to techniques
[p:complete-lattices-up-to] has proven successful. Here we treated a more
specific setting, where the carrier set consists of functions from a finite
set into an MV-chain and the fixpoint functions are non-expansive (and hence
monotone), and introduced a novel technique to obtain upper bounds for
greatest and lower bounds for least fixpoints, also providing associated
algorithms. Such techniques are applicable to a wide range of examples and so
far have been studied only in quite specific scenarios, such as in
[bblmtv:prob-bisim-distance-automata, f:game-metrics-markov-decision,
kkkw:value-iteration-ssg].
In the future we plan to lift some of the restrictions of our approach. First,
an extension to an infinite domain $Y$ would of course be desirable, but since
several of our results currently depend on finiteness, such a generalisation
does not seem to be easy. The restriction to total orders, instead, seems
easier to lift, in particular when the partially ordered MV-algebra
$\bar{\mathbb{M}}$ is of the form $\mathbb{M}^{I}$, where $I$ is a finite
index set and $\mathbb{M}$ an MV-chain. (E.g., finite Boolean algebras are of
this type.) In this case, our function space is
$\bar{\mathbb{M}}^{Y}=(\mathbb{M}^{I})^{Y}\cong\mathbb{M}^{Y\times I}$
and we have reduced to the setting presented in this paper. This will allow us
to handle featured transition systems [ccpshl:simulation-product-line-mc],
where transitions are equipped with boolean formulas.
There are several other application examples that did not fit into this paper,
but that can also be handled by our approach, for instance coalgebraic
behavioural metrics [bbkk:coalgebraic-behavioral-metrics]. We also plan to
investigate other types of games, such as energy games [bcdgr:algorithms-mean-
payoff-games]. While here we introduced strategy iteration techniques for
simple stochastic games, we also want to check whether we can provide an
improvement to value iteration techniques, combining our approach with
[kkkw:value-iteration-ssg].
We also plan to study whether some examples can be handled with other types of
Galois connections: here we used an additive variant, but looking at
multiplicative variants (multiplication by a constant factor) might also be
fruitful.
_Acknowledgements:_ We are grateful to Ichiro Hasuo for making us aware of
stochastic games as an application domain. Furthermore we would like to thank
Matthias Kuntz and Timo Matt for their help with experiments and
implementation.
## References
* [BBKK18] Paolo Baldan, Filippo Bonchi, Henning Kerstan, and Barbara König. Coalgebraic behavioral metrics. Logical Methods in Computer Science, 14(3), 2018. Selected Papers of the 6th Conference on Algebra and Coalgebra in Computer Science (CALCO 2015).
* [BBL+19] Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, Radu Mardare, Qiyi Tang, and Franck van Breugel. Computing probabilistic bisimilarity distances for probabilistic automata. In Proc. of CONCUR ’19, volume 140 of LIPIcs, pages 9:1–9:17. Schloss Dagstuhl – Leibniz Center for Informatics, 2019.
* [BBLM17] Giorgio Bacci, Giovanni Bacci, Kim G. Larsen, and Radu Mardare. On-the-fly exact computation of bisimilarity distances. Logical Methods in Computer Science, 13(2:13):1–25, 2017.
* [BCD+11] Lubos Brim, Jakub Chaloupka, Laurent Doyen, Raffaella Gentilini, and Jean-François Raskin. Faster algorithms for mean-payoff games. Formal Methods in System Design, 38(2):97–118, 2011.
* [BK08] Christel Baier and Joost-Pieter Katoen. Principles of Model Checking. MIT Press, 2008.
* [BKP20] Paolo Baldan, Barbara König, and Tommaso Padoan. Abstraction, up-to techniques and games for systems of fixpoint equations. In Proc. of CONCUR ’20, volume 171 of LIPIcs, pages 25:1–25:20. Schloss Dagstuhl – Leibniz Center for Informatics, 2020.
* [BV05] Henrik Björklund and Sergei Vorobyov. Combinatorial structure and randomized subexponential algorithms for infinite games. Theoretical Computer Science, 349(3):347–360, 2005.
* [CC77] Patrick Cousot and Radhia Cousot. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proc. of POPL ’77 (Los Angeles, California), pages 238–252. ACM, 1977.
* [CC00] Patrick Cousot and Radhia Cousot. Temporal abstract interpretation. In Mark N. Wegman and Thomas W. Reps, editors, Proc. of POPL ’00, pages 12–25. ACM, 2000.
* [CCP+12] Maxime Cordy, Andreas Classen, Gilles Perrouin, Pierre-Yves Schobbens, Patrick Heymans, and Axel Legay. Simulation-based abstractions for software product-line model checking. In Proc. of ICSE ’12 (International Conference on Software Engineering), pages 672–682. IEEE, 2012.
* [Cle90] Rance Cleaveland. On automatically explaining bisimulation inequivalence. In Proc. of CAV ’90, pages 364–372. Springer, 1990. LNCS 531.
* [Con90] Anne Condon. On algorithms for simple stochastic games. In Advances In Computational Complexity Theory, volume 13 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 51–71, 1990.
* [Con92] Anne Condon. The complexity of stochastic games. Information and Computation, 96(2):203–224, 1992.
* [dFS09] Luca de Alfaro, Marco Faella, and Mariëlle Stoelinga. Linear and branching system metrics. IEEE Transactions on Software Engineering, 35(2):258–273, 2009.
* [Fu12] Hongfei Fu. Computing game metrics on Markov decision processes. In Proc. of ICALP ’12, Part II, pages 227–238. Springer, 2012. LNCS 7392.
* [GRS00] Roberto Giacobazzi, Francesco Ranzato, and Francesca Scozzari. Making abstract interpretations complete. Journal of the ACM, 47(2):361–416, 2000.
* [HM85] Matthew Hennessy and Robin Milner. Algebraic laws for nondeterminism and concurrency. Journal of the ACM, 32:137–161, 1985.
* [KH66] Richard M. Karp and Alan J. Hoffman. On nonterminating stochastic games. Management Science, 12(5):359–370, 1966.
* [KKKW18] Edon Kelmendi, Julia Krämer, Jan Křetínský, and Maximilian Weininger. Value iteration for simple stochastic games: Stopping criterion and learning algorithm. In Proc. of CAV ’18, pages 623–642. Springer, 2018. LNCS 10981.
* [Mém11] Facundo Mémoli. Gromov-Wasserstein distances and the metric approach to object matching. Foundations of Computational Mathematics, 11(4):417–487, 2011.
* [Mun] Daniele Mundici. MV-algebras. A short tutorial. Available at http://www.matematica.uns.edu.ar/IXCongresoMonteiro/Comunicaciones/Mundici_tutorial.pdf.
* [Mun11] Daniele Mundici. Advanced Łukasiewicz calculus and MV-algebras, volume 35 of Trends in Logic. Springer, 2011.
* [NNH10] Flemming Nielson, Hanne R. Nielson, and Chris Hankin. Principles of Program Analysis. Springer, 2010.
* [PC20] Gabriel Peyré and Marco Cuturi. Computational optimal transport, 2020. arXiv:1803.00567.
* [Pou07] Damien Pous. Complete lattices and up-to techniques. In Proc. of APLAS ’07, pages 351–366. Springer, 2007. LNCS 4807.
* [RVAK11] Rahul Tripathi, Elena Valkanova, and V.S. Anil Kumar. On strategy improvement algorithms for simple stochastic games. Journal of Discrete Algorithms, 9:263–278, 2011.
* [San11] Davide Sangiorgi. Introduction to Bisimulation and Coinduction. Cambridge University Press, 2011.
* [Sti97] Colin Stirling. Bisimulation, model checking and other games. Notes for Mathfit instructional meeting on games and computation, Edinburgh, June 1997.
* [Tar55] Alfred Tarski. A lattice-theoretical theorem and its applications. Pacific Journal of Mathematics, 5:285–309, 1955.
* [Vil09] Cédric Villani. Optimal Transport – Old and New, volume 338 of A Series of Comprehensive Studies in Mathematics. Springer, 2009.
# Uncovering and displaying the coherent groups of rank data by exploratory
riffle shuffling
Vartan Choulakian and Jacques Allard Université de Moncton, Canada email:
<EMAIL_ADDRESS><EMAIL_ADDRESS>
(November 2020)
###### Abstract
Let $n$ respondents rank order $d$ items, and suppose that $d<<n$. Our main
task is to uncover and display the structure of the observed rank data by an
exploratory riffle shuffling procedure which sequentially decomposes the n
voters into a finite number of coherent groups plus a noisy group: where the
noisy group represents the outlier voters and each coherent group is composed
of a finite number of coherent clusters. We consider exploratory riffle
shuffling of a set of items to be equivalent to optimal two blocks seriation
of the items with crossing of some scores between the two blocks. A riffle
shuffled coherent cluster of voters within its coherent group is essentially
characterized by the following facts: a) Voters have identical first TCA
factor score, where TCA designates taxicab correspondence analysis, an L1
variant of correspondence analysis; b) Any preference is easily interpreted as
riffle shuffling of its items; c) The nature of different riffle shuffling of
items can be seen in the structure of the contingency table of the first-order
marginals constructed from the Borda scorings of the voters; d) The first TCA
factor scores of the items of a coherent cluster are interpreted as Borda
scale of the items. We also introduce a crossing index, which measures the
extent of crossing of scores of voters between the two blocks seriation of the
items. The novel approach is explained on the benchmarking SUSHI data set,
where we show that this data set has a very simple structure, which can also
be communicated in a tabular form.
Key words: Borda score and scale; exploratory riffle shuffle; coherent group;
coherent cluster; crossing index; taxicab correspondence analysis.
AMS 2010 subject classifications: 62H25, 62H30
## 1 Introduction
Ordering the elements of a set is a common decision-making activity, such as
voting for a political candidate or choosing a consumer product, so there is a
huge literature concerning the analysis and interpretation of preference data,
scattered across different disciplines. Often rank data are heterogeneous:
they are composed of a finite mixture of components. The traditional methods
for finding mixture components of rank data are mostly based on parametric
probability models, distance or latent class models, and are useful for sparse
data but not for diffuse data.
Rank data are sparse if there are at most a small finite number of
permutations that capture the majority of the preferences; otherwise they are
diffuse. As a running example in this paper, we will consider the famous
benchmarking SUSHI data set enumerating $n=5000$ preferences of $d=10$ sushis,
see $\left[1\right]$. The SUSHI data set is diffuse, because each observed
permutation occurs at most three times. It has been analyzed, among others,
by $\left[2,3,4\right]$.
A second data set that we shall also analyze is the APA data set of size
$n=5738$ by $d=5$, see $\left[5\right]$. The APA data set is also considered
non-sparse by $\left[2\right]$, because all 120 permutations occur with
positive probability.
For a general background on statistical methods for rank data, see the
excellent monograph by $\left[6\right]$ and the book $\left[7\right]$.
### 1.1 Riffle shuffle
The riffle shuffle, see $\left[8\right]$, is considered the most popular
method of card shuffling, in which one cuts a deck of $d$ cards (aka items)
into two piles of sizes $d_{1}$ and $d_{2}$, respectively, and successively
drops the cards, one by one, so that the piles are interleaved into one deck
again.
Let $V$, named a voting profile, be a set of $n$ preferences on $d$ items.
Based on riffle shuffling ideas, $\left[2\right]$ proposed the notion of
riffled independence to model the joint probability distribution of the
observed preferences of $V$. Independently, $\left[9\right]$ used it for
visual exploration of $V$, naming it two blocks partition of the Borda scored
items with crossing of some scores; this will be further developed here under
the following important assumption: $d<<n$, i.e., the sample size $n$ is quite
large compared to the number of items $d$. The SUSHI and APA data sets satisfy
this assumption.
The most important first step in the application of a riffle shuffling
procedure is how to partition the $d$ items into two disjoint subsets. In the
probabilistic riffle shuffling approach of $\left[2\right]$, the partitioning
step is essentially done using an ad hoc approach in the case of the SUSHI
data set, or based on background second-order information on the items in the
case of the APA data set. In the exploratory riffle shuffling approach of this
paper, an optimal partition is obtained by maximizing the cut norm of the
row-centered data, or equivalently by taxicab correspondence analysis of nega
coded data.
We compare the two formulations of riffle shuffle, probabilistic and
exploratory, in section 10.
### 1.2 Aim
Our aim is to explore and display a given voting profile $V$ by sequentially
partitioning it into $G$ coherent groups plus a noisy group; that is,
$V=\cup_{g=1}^{G}cohG(g)\cup noisyG,$ (1)
where $G$ represents the number of coherent groups and $cohG(g)$ is the $g$th
coherent group. Furthermore, each coherent group is partitioned into a finite
number of coherent clusters; that is,
$cohG(g)=\cup_{\alpha=1}^{c_{g}}cohC_{g}(\alpha)\text{ \ for }g=1,...,G,$ (2)
where $c_{g}$ represents the number of coherent clusters in the $g$th coherent
group. So the coherent clusters are the building blocks for the coherent
groups. We note the following facts:
Fact 1: The assumption $d<<n$ induces the new notion of coherency for the
clusters and consequently for the groups; it is a stronger characterization
than the notion of interpretability for groups as discussed in
$\left[9\right]$.
Fact 2: Each coherent group and its clusters have the same latent variable
summarized by the Borda scale.
Fact 3: Given that the proposed method sequentially peels the data like
Occam’s razor, the number of groups $G$ is calculated automatically.
Furthermore, outliers or uninformative voters belonging to the $noisyG$ are
easily tagged.
Fact 4: The approach is exploratory, visual, data analytic and is developed
within the framework of taxicab correspondence analysis (TCA). TCA is an L1
variant of correspondence analysis developed by $\left[10\right]$. TCA is a
dimension reduction technique similar to principal component analysis. In this
paper, we shall use only the first TCA factor scores of the items and of the
voters.
Two major advantages of our method are the following. First, we can easily
identify outliers: for the SUSHI data, our method tags 12.36% of the voters as
outliers, which form the noisy group, whereas no outliers in the SUSHI data
have been identified in $\left[3,\ 4\right]$. Second, it provides a tabular
summary of the preferences which compose a coherent group. For instance,
consider the first mixture component of the SUSHI data given in
$\left[4\right]$, where the modal ordering is almost the same as the Borda
scale ordering of the ten sushis in cohG(1) obtained by our method, see Table
14 in this paper. The sample size of their first mixture component is 27.56%,
which is much smaller than 48.36%, the sample size of our cohG(1), see Table
14. However, Table 13
of this paper provides a tabular-visual summary of the 2418 preferences which
form cohG(1). The visual summary describes different kinds of equivalent
similar riffle shufflings of the 2418 preferences, and it provides further
insight into the structure of the data. Such interesting visual summaries are
missing in $\left[3,\ 4\right]$.
### 1.3 Highlights of a coherent cluster
A coherent cluster of voters has interesting mathematical properties and is
essentially characterized by the following facts:
a) Voters have identical unique first TCA factor score.
b) Any voter preference is easily interpreted as a particular riffle shuffling
of its items.
c) The nature of riffle shuffling of the items can be observed in the
structure of the contingency table of the first-order marginals constructed
from the Borda scorings of the voters belonging to the coherent cluster.
d) The first TCA factor scores of the items of a coherent cluster are
interpreted as Borda scale of the items.
e) We also introduce the crossing index, which measures the extent of
interleaving or the crossing of scores of voters between two blocks seriation
of the items in a coherent cluster.
### 1.4 Organization
This paper has eleven sections and its contents are organized as follows:
Section 2 presents an overview of TCA; section 3 presents some preliminaries
on the Borda coding of the data and related tables and concepts; section 4
presents Theorem 1, which shows that the first principal dimension of TCA
clusters the voters into a finite number of clusters; section 5 discusses
coherent clusters and their mathematical properties; section 6 discusses
riffle shuffling in a coherent cluster; section 7 introduces the crossing
index; section 8 introduces the coherent groups; section 9 presents the
analysis of the APA data set; section 10 presents a comparison of the two
formulations of riffle shuffle, probabilistic and exploratory; and finally we
conclude in section 11.
All mathematical proofs are relegated to the appendix. Details of the
computation are shown only for the first coherent group of SUSHI data set.
## 2 An overview of taxicab correspondence analysis
Consider an $n\times p$ matrix $\mathbf{X}$ with $X_{ij}\geq 0$, and let
$X_{\ast\ast}=\sum_{j=1}^{p}\sum_{i=1}^{n}X_{ij}$. Let
$\mathbf{P=X/}X_{\ast\ast}$ be the correspondence matrix associated to
$\mathbf{X}$; as usual, we define $p_{i\ast}=\sum_{j=1}^{p}p_{ij}$ and
$p_{\ast j}=\sum_{i=1}^{n}p_{ij}$. Let $\mathbf{D}_{n}=Diag(p_{i\ast})$ be the
diagonal matrix with diagonal elements $p_{i\ast}$, and similarly
$\mathbf{D}_{p}=Diag(p_{\ast j})$. Let $k=rank(\mathbf{P})-1$.
In TCA the calculation of the dispersion measures $(\delta_{\alpha})$,
principal axes ($\mathbf{u}_{\alpha},\mathbf{v}_{\alpha}),$ principal basic
vectors $(\mathbf{a}_{\alpha},\mathbf{b}_{\alpha}),$ and principal factor
scores $(\mathbf{f}_{\alpha},\mathbf{g}_{\alpha})$ for $\alpha=1,...,k$ is
done in a stepwise manner. We put
$\mathbf{P}_{1}=(p_{ij}^{(1)}=p_{ij}-p_{i\ast}\ p_{\ast j})$. Let
$\mathbf{P_{\alpha}}$ be the residual correspondence matrix at the $\alpha$-th
iteration.
The variational definitions of the TCA at the $\alpha$-th iteration are
$\delta_{\alpha}=\max_{\mathbf{u}\in\mathbb{R}^{p}}\frac{\left|\left|\mathbf{P_{\alpha}u}\right|\right|_{1}}{\left|\left|\mathbf{u}\right|\right|_{\infty}}=\max_{\mathbf{v}\in\mathbb{R}^{n}}\frac{\left|\left|\mathbf{P_{\alpha}^{\prime}v}\right|\right|_{1}}{\left|\left|\mathbf{v}\right|\right|_{\infty}}=\max_{\mathbf{u}\in\mathbb{R}^{p},\mathbf{v}\in\mathbb{R}^{n}}\frac{\mathbf{v}^{\prime}\mathbf{P_{\alpha}u}}{\left|\left|\mathbf{u}\right|\right|_{\infty}\left|\left|\mathbf{v}\right|\right|_{\infty}}$
$=\max||\mathbf{P_{\alpha}u}||_{1}$ subject to $\mathbf{u}\in\{-1,+1\}^{p}$,
$=\max||\mathbf{P_{\alpha}^{\prime}v}||_{1}$ subject to $\mathbf{v}\in\{-1,+1\}^{n}$,
$=\max\mathbf{v}^{\prime}\mathbf{P_{\alpha}u}$ subject to $\mathbf{u}\in\{-1,+1\}^{p}$, $\mathbf{v}\in\{-1,+1\}^{n}$.
The $\alpha$-th principal axes are
$\mathbf{u}_{\alpha}=\arg\max_{\mathbf{u}\in\{-1,+1\}^{p}}\left|\left|\mathbf{P_{\alpha}u}\right|\right|_{1}\quad\text{and}\quad\mathbf{v}_{\alpha}=\arg\max_{\mathbf{v}\in\{-1,+1\}^{n}}\left|\left|\mathbf{P_{\alpha}^{\prime}v}\right|\right|_{1},$
(3)
and the $\alpha$-th basic principal vectors are
$\mathbf{a}_{\alpha}=\mathbf{P_{\alpha}u}_{\alpha}\text{ \ and \
}\mathbf{b}_{\alpha}=\mathbf{P_{\alpha}^{\prime}v}_{\alpha},$ (4)
and the $\alpha$-th principal factor scores are
$\mathbf{f}_{\alpha}=\mathbf{D}_{n}^{-1}\mathbf{a}_{\alpha}\text{ \ and \
}\mathbf{g}_{\alpha}=\mathbf{D}_{p}^{-1}\mathbf{b}_{\alpha};$ (5)
furthermore the following relations are also useful
$\mathbf{u}_{\alpha}=sgn(\mathbf{b}_{\alpha})=sgn(\mathbf{g}_{\alpha})\text{ \
and \ }\mathbf{v}_{\alpha}=sgn(\mathbf{a}_{\alpha})=sgn(\mathbf{f}_{\alpha}),$
(6)
where $sgn(.)$ is the coordinatewise sign function, $sgn(x)=1$ if $x>0,$ and
$sgn(x)=-1$ if $x\leq 0.$ The $\alpha$-th taxicab dispersion measure
$\delta_{\alpha}$ can be represented in many different ways
$\delta_{\alpha}=\left|\left|\mathbf{P_{\alpha}u}_{\alpha}\right|\right|_{1}=\left|\left|\mathbf{a}_{\alpha}\right|\right|_{1}=\mathbf{a}_{\alpha}^{\prime}\mathbf{v}_{\alpha}=\left|\left|\mathbf{D}_{n}\mathbf{f}_{\alpha}\right|\right|_{1}=\mathbf{v}_{\alpha}^{\prime}\mathbf{D}_{n}\mathbf{f}_{\alpha}=\left|\left|\mathbf{P_{\alpha}^{\prime}v}_{\alpha}\right|\right|_{1}=\left|\left|\mathbf{b}_{\alpha}\right|\right|_{1}=\mathbf{b}_{\alpha}^{\prime}\mathbf{u}_{\alpha}=\left|\left|\mathbf{D}_{p}\mathbf{g}_{\alpha}\right|\right|_{1}=\mathbf{u}_{\alpha}^{\prime}\mathbf{D}_{p}\mathbf{g}_{\alpha}.$
(7)
The $(\alpha+1)$-th residual correspondence matrix is
$\mathbf{P_{\alpha+1}}=\mathbf{P_{\alpha}-D}_{n}\mathbf{f}_{\alpha}\mathbf{g}_{\alpha}^{{}^{\prime}}\mathbf{D}_{p}/\delta_{\alpha}.$
(8)
An interpretation of the term
$\mathbf{D}_{n}\mathbf{f}_{\alpha}\mathbf{g}_{\alpha}^{\prime}\mathbf{D}_{p}/\delta_{\alpha}$
in (8) is that it represents the best rank-1 approximation of the residual
correspondence matrix $\mathbf{P_{\alpha}}$, in the sense of the taxicab norm.
In CA and TCA, the principal factor scores are centered; that is,
$\sum_{i=1}^{n}f_{\alpha}(i)p_{i\ast}=0=\sum_{j=1}^{p}g_{\alpha}(j)p_{\ast
j}\text{ \ \ \ for \ \ }\alpha=1,...,k.$ (9)
The reconstitution formula in TCA and CA is
$p_{ij}=p_{i.}p_{.j}\left[1+\sum_{\alpha=1}^{k}f_{\alpha}(i)g_{\alpha}(j)/\delta_{\alpha}\right].$
(10)
In TCA, the calculation of the principal component weights
$\mathbf{u}_{\alpha}$ and $\mathbf{v}_{\alpha}$ and the principal factor
scores $\mathbf{g}_{\alpha}$ and $\mathbf{f}_{\alpha}$ can be accomplished
by two algorithms. The first one is based on complete enumeration of
equation (3). The second one is based on iterating the transition formulae
(4,5,6). This is an ascent algorithm; that is, it increases the value of the
objective function at each iteration, see $\left[11\right]$. The iterative
algorithm could converge to a local maximum, so it should be restarted from
several initial configurations. The rows or the columns of the data can be
used as starting values.
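As an illustration, the iterative algorithm for a single TCA dimension can be sketched as follows. This is a schematic Python rendering of the transition formulae, not the authors' code; note that $sgn(0)=-1$, as defined above.

```python
# A compact sketch of the iterative (ascent) TCA algorithm based on the
# transition formulae (4)-(6), for one dimension of a residual matrix P.

import numpy as np

def tca_first_axis(P, u0, max_iter=100):
    """One TCA dimension of a residual matrix P via the transition formulae."""
    sgn = lambda x: np.where(x > 0, 1.0, -1.0)   # sgn(0) = -1, as in the text
    u = sgn(u0.astype(float))
    for _ in range(max_iter):
        a = P @ u                                # basic vector a_alpha = P u
        v = sgn(a)                               # v_alpha = sgn(a_alpha)
        b = P.T @ v                              # basic vector b_alpha = P' v
        u_new = sgn(b)                           # u_alpha = sgn(b_alpha)
        if np.array_equal(u_new, u):             # ascent stalled: (local) maximum
            break
        u = u_new
    return np.abs(a).sum(), u, v, a, b           # delta_alpha = ||a||_1
```

In practice one would restart `tca_first_axis` from each column of $\mathbf{P_{\alpha}}$ (or each row, transposed) as initial $\mathbf{u}$ and keep the solution with the largest $\delta_{\alpha}$.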
The TCA map is obtained by plotting $(\mathbf{g}_{1},\mathbf{g}_{2})$ or
$(\mathbf{f}_{1},\mathbf{f}_{2}).$
## 3 Preliminaries
In this section we review a) the Borda scoring of a voting profile $V$ into
$\mathbf{R}$ and the Borda scale; b) the contingency table of the first-order
marginals of $\mathbf{R}$; c) the coded tables $\mathbf{R}_{double}$ and
$\mathbf{R}_{nega}$.
### 3.1 Borda scorings and Borda scale
Let $A=\\{a_{1},a_{2},\ldots,a_{d}\\}$ denote a set of $d$
alternatives/candidates/items, and $V$ a set of $n$ voters/individuals/judges.
In this paper we consider the linear orderings/rankings/preferences, in which
all $d$ objects are rank-ordered according to their levels of desirability by
the $n$ voters. We denote a linear order by a sequence
$\mathbf{s}=(a_{k_{1}}\succ a_{k_{2}}\succ\ldots\succ a_{k_{d}})$, where
$a_{k_{1}}\succ a_{k_{2}}$ means that the alternative $a_{k_{1}}$ is preferred
to the alternative $a_{k_{2}}.$ The Borda scoring of $\mathbf{s}$, see
$\left[12\right],$ is the vector $b(\mathbf{s)}$ where to the element
$a_{k_{j}}$the score of $(d-j)$ is assigned, because $a_{k_{j}}$ is preferred
to $(d-j)$ other alternatives; or equivalently it is the $j$th most preferred
alternative. Let $\mathbf{R=(}r_{ij})$ be the matrix having $n$ rows and $d$
columns, where $r_{ij}$ designates the Borda score of the $i$th voter’s
preference of the $j$th alternative. We note that the $i$th row of
$\mathbf{R}$ will be an element of $S_{d}$ the set of permutations of the
elements of the set $\left\\{0,1,2,...,d-1\right\\}.$ A toy example of
$\mathbf{R}$ is presented in Table 1 for $n=4$ and $d=3$.
The Borda scale of the elements of $A$ is
$\mathbf{\beta}=\mathbf{1}_{n}^{\prime}\mathbf{R}/n,$ where $\mathbf{1}_{n}$
is a column vector of $1$’s having $n$ coordinates. The Borda scale
seriates/orders the $d$ items of the set $A$ according to their average
scores: $\mathbf{\beta}(j)>\mathbf{\beta}(i)$ means item $j$ is preferred to
item $i$, and $\mathbf{\beta}(j)=\mathbf{\beta}(i)$ means both items
$(a_{i},a_{j})$ are equally preferred. In the toy example of Table 1, the
Borda scale seriates $\\{A,B\\}\succ C$.
Similarly, we define the reverse Borda score of $\mathbf{s}$ to be the vector
$\overline{b}(\mathbf{s})$, which assigns to the element $a_{k_{j}}$ the score
$(j-1)$. We denote by $\overline{\mathbf{R}}\mathbf{=(}\overline{r}_{ij})$ the
matrix with $n$ rows and $d$ columns, where $\overline{r}_{ij}$ designates the
reverse Borda score of the $i$th judge's nonpreference for the $j$th
alternative. The reverse Borda scale of the $d$ items is
$\overline{\mathbf{\beta}}=\mathbf{1}_{n}^{\prime}\overline{\mathbf{R}}/n.$
We note that $\mathbf{R}+\overline{\mathbf{R}}=(d-1)\mathbf{1}_{n}\mathbf{1}_{d}^{\prime}$ and $\mathbf{\beta}+\overline{\mathbf{\beta}}=(d-1)\mathbf{1}_{d}^{\prime}.$
Table 1: Toy example with $n=4$ preferences of $d=3$ items. The first three columns give $\mathbf{R}$ and the last three give $\overline{\mathbf{R}}$, items ordered $(C,B,A)$.
---
preference | $C$ | $B$ | $A$ | $C$ | $B$ | $A$
$A\succ B\succ C$ | 0 | 1 | 2 | 2 | 1 | 0
$A\succ C\succ B$ | 1 | 0 | 2 | 1 | 2 | 0
$B\succ A\succ C$ | 0 | 2 | 1 | 2 | 0 | 1
$B\succ C\succ A$ | 1 | 2 | 0 | 1 | 0 | 2
Borda scale $\mathbf{\beta}$ | 0.5 | 1.25 | 1.25 | | |
reverse Borda scale $\overline{\mathbf{\beta}}$ | | | | 1.5 | 0.75 | 0.75
nega $=n\overline{\mathbf{\beta}}$ | | | | 6 | 3 | 3
### 3.2 Contingency table of first-order marginals
The contingency table of first-order marginals of an observed voting profile $V$ on $d$ items is a square $d\times d$ matrix $\mathbf{M}$, where $\mathbf{M}(i,j)$ stores the number of times that item $j$ has Borda score $i$ for $i=0,...,d-1,$ see $\left[6,\ \text{p.17}\right]$. Table 2 displays the matrix $\mathbf{M}$ for the toy example $\mathbf{R}$ displayed in Table 1. We note the following facts:
a) It has uniform row and column marginals equal to the sample size.
b) We can compute the Borda scale $\mathbf{\beta}$ from it.
c) It reveals the nature of the crossing of scores attributed to the items for a given binary partition of the items. For the toy example, consider the partition $\left\\{C\right\\}$ and $\left\\{B,A\right\\}$ with attributed scores of $\left\\{0\right\\}$ and $\left\\{1,2\right\\}$ respectively (this is the first step in a riffle shuffle). Then the highlighted cells (marked in bold) in Table 2 show that there are two crossings of scores, permutations (transpositions) of the scores 0 and 1, between the sets $\left\\{C\right\\}$ and $\left\\{B,A\right\\}$ (this is the second step in a riffle shuffle). Furthermore, the third row of Table 2 shows that the score 2 is equally attributed to both items of the set $\left\\{B,A\right\\}$ and it never crossed to $\left\\{C\right\\}$.
Table 2: The matrix of first-order marginals of $\mathbf{R}$.
---
| $C$ | $B$ | $A$ | row sum
$0$ | 2 | 1 | 1 | 4
$1$ | 2 | 1 | 1 | 4
$2$ | 0 | 2 | 2 | 4
column sum | 4 | 4 | 4 |
Borda scale $\mathbf{\beta}$ | 0.5 | 1.25 | 1.25 |
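The following minimal sketch computes $\mathbf{M}$ from the toy $\mathbf{R}$; it reproduces Table 2 and illustrates facts a) and b).

```python
import numpy as np

R = np.array([[0, 1, 2],                 # R of Table 1, columns (C, B, A)
              [1, 0, 2],
              [0, 2, 1],
              [1, 2, 0]])
n, d = R.shape

# M(i, j) counts how many times item j received Borda score i.
M = np.zeros((d, d), dtype=int)
for score in range(d):
    M[score] = (R == score).sum(axis=0)

print(M)                                 # [[2 1 1] [2 1 1] [0 2 2]], as in Table 2
print(M.sum(axis=0), M.sum(axis=1))      # uniform marginals: all equal to n = 4
print(M.T @ np.arange(d) / n)            # Borda scale from M alone: [0.5 1.25 1.25]
```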
### 3.3 Coded tables $\mathbf{R}_{double}$ and $\mathbf{R}_{nega}$
Our methodological approach is based on Benzécri's platform, see $\left[13,\ p.1113\right],$ which we quote: "the main problem inductive statistics has to face is to build tables that, through appropriate coding and eventual supplementation, give to the available data such a shape that the analysis is able to extract from it the answer to any question that we are allowed to ask". Italics are ours.
There are three elements in Benzécri's platform: a) coding, a kind of pre-processing of the data, is discussed in the following paragraphs; b) eventual supplementation consists in applying TCA and not correspondence analysis (CA), because in the CA case we do not have a result similar to Theorem 1; c) the question that we are allowed to ask is to explore and visualize rank data.
Within the CA framework, there are two codings of rank data, $\mathbf{R}_{double}$ and $\mathbf{R}_{nega}$.
#### 3.3.1 $\mathbf{R}_{double}$
The first one is the doubled table of size $(2n)\times d$
$\mathbf{R}_{double}=\binom{\mathbf{R}}{\overline{\mathbf{R}}}$
proposed independently by $\left[14,\ 15\right]$, where they showed that CA of
$\mathbf{R}_{double}$ is equivalent to the dual scaling of Nishisato coding of
rank data, see $\left[16\right]$. CA of $\mathbf{R}_{double}$ is equivalent to CA of its first residual correspondence matrix
$\mathbf{P}_{double}^{1}=\frac{1}{t}\binom{(r_{ij}-\frac{d-1}{2})}{-(r_{ij}-\frac{d-1}{2})},$
where $t=nd(d-1)$. The structure of $\mathbf{P}_{double}^{1}$ shows that each row is centered as in Carroll's multidimensional preference analysis procedure, MDPREF, presented in $\left[7,\ \text{p.15}\right]$. In TCA the objective function to maximize is a combinatorial problem, see equation (3); the first iteration in TCA of $\mathbf{R}_{double}$ corresponds to computing
$\delta_{1}^{double}=\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}||(\mathbf{v}^{t}\ |\ \mathbf{-v}^{t})\mathbf{P}_{double}^{1}||_{1}=\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}\frac{2}{t}\sum_{j=1}^{d}|\sum_{i=1}^{n}(r_{ij}-\frac{d-1}{2})v_{i}|.$ (11)
#### 3.3.2 $\mathbf{R}_{nega}$
In the second approach, we summarize $\overline{\mathbf{R}}$ by its column total; that is, we create a row named $\mathbf{nega}=n\overline{\mathbf{\beta}}=\mathbf{1}_{n}^{\prime}\overline{\mathbf{R}},$ then we vertically concatenate $\mathbf{nega}$ to $\mathbf{R}$, thus obtaining
$\mathbf{R}_{nega}=\binom{\mathbf{R}}{\mathbf{nega}}$
of size $(n+1)\times d.$
$\left[17\right]$ discussed the relationship between TCA of $\mathbf{R}_{double}$ and TCA of $\mathbf{R}_{nega}$: TCA of $\mathbf{R}_{nega}$ can be considered as constrained TCA of $\mathbf{R}_{double}$, because we constrain the vector $\mathbf{-v}^{t}=\mathbf{-1}_{n}^{t}$ in (11); that is, the objective function to maximize corresponds to computing
$\delta_{1}=\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}||(\mathbf{v}^{t}\ |\ \mathbf{-1}_{n}^{t})\mathbf{P}_{double}^{1}||_{1}=\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}||(\mathbf{v}^{t}\ \ \mathbf{-}1)\mathbf{P}_{nega}^{1}||_{1}=\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}\frac{1}{t}\sum_{j=1}^{d}|\sum_{i=1}^{n}(r_{ij}-\frac{d-1}{2})(v_{i}+1)|.$ (12)
So we see that if in (11) the optimal value is $\mathbf{v}=\mathbf{1}_{n}$, then $\delta_{1}^{double}=\delta_{1};$ otherwise $\delta_{1}^{double}>\delta_{1}$.
Let
$\mathbf{v}_{1}=\arg\max_{\mathbf{v\in}\left\\{-1,1\right\\}^{n}}\frac{1}{t}\sum_{j=1}^{d}|\sum_{i=1}^{n}(r_{ij}-\frac{d-1}{2})(v_{i}+1)|.$
Define the sets of indices $I_{+}=\left\\{i|v_{1i}=1\right\\}$ and $I_{-}=\left\\{i|v_{1i}=-1\right\\},$ where $\mathbf{v}_{1}=(v_{1i}).$ Then
$\delta_{1}=\frac{2}{t}\sum_{j=1}^{d}|\sum_{i\in I_{+}}(r_{ij}-\frac{d-1}{2})|$ (13)
shows that the summation in (13) is restricted to the subset of assessors that belong to $I_{+}$. The subset $I_{+}$ indexes the voters having the same direction in their votes. Given that we are uniquely interested in the first TCA dimension, all the necessary information is encapsulated in $I_{+}$, as discussed in $\left[17,\ 9\right]$ using other arguments.
Furthermore, $\delta_{1}$ in (13) equals four times the cut norm of $\mathbf{R}_{centered}(i,j)=\frac{1}{t}(r_{ij}-\frac{d-1}{2}),$ where the cut norm is defined to be
$\left|\left|\mathbf{R}_{centered}\right|\right|_{cut}=\max_{S,T}\frac{1}{t}\sum_{j\in S}\sum_{i\in T}(r_{ij}-\frac{d-1}{2})=\frac{1}{t}\sum_{j\in S_{+}}\sum_{i\in I_{+}}(r_{ij}-\frac{d-1}{2})=\delta_{1}/4,$
where $S\subseteq\left\\{1,...,d\right\\}$ and $T\subseteq I$; it shows that the subsets $I_{+}$ and $S_{+}$ are positively associated; for further details see, for instance, $\left[18,\ 19\right]$.
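For small profiles the constrained objective can be checked by complete enumeration. The following minimal sketch computes $\delta_{1}$ of $\mathbf{R}_{nega}$ by enumerating all $2^{n}$ sign vectors, implementing the last expression in (12); it returns $1/3$ for the toy profile of Table 1.

```python
import itertools
import numpy as np

def delta1_nega(R):
    """Exact delta_1 of R_nega: enumerate v in {-1,1}^n (small n only)."""
    n, d = R.shape
    t = n * d * (d - 1)
    C = R - (d - 1) / 2.0                          # centered Borda scores
    best = -np.inf
    for v in itertools.product([-1.0, 1.0], repeat=n):
        val = np.abs(C.T @ (np.array(v) + 1.0)).sum() / t
        best = max(best, val)
    return best

R = np.array([[0, 1, 2], [1, 0, 2], [0, 2, 1], [1, 2, 0]])
print(delta1_nega(R))                              # 0.333... = 1/3
```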
In the sequel, we will consider only the application of TCA to
$\mathbf{R}_{nega}$.
## 4 First TCA voter factor scores of $\mathbf{R}_{nega}$
We show the results on the SUSHI data set enumerating $n=5000$ preferences of $d=10$ sushis, see $\left[1\right]$. Even though our interest concerns only the first TCA voter factor scores of a voting profile $V_{1}$, it is common practice in CA circles to present the principal map of the row and column projections.
Figures 1 and 2 display the principal maps obtained from CA and TCA of $\mathbf{R}_{nega}$ of the SUSHI data, denoted by $V_{1}$. We observe that TCA clusters the voters into a finite number of discrete patterns, while CA does not: this is the main reason that we prefer the use of TCA to the use of the classical, well known dimension reduction technique CA.
We have the following theorem concerning the first TCA principal factor scores
of the voters belonging to a profile $V_{1}$, $f_{1}(i)$ for $i=1,...,n$,
where the first principal axis partitions the $d$ items into $d_{1}$ and
$d_{2}$ parts such that $d=d_{1}+d_{2}.$
(a) Figure 1: CA map of SUSHI rank data
(b) Figure 2: TCA map of SUSHI rank data
Theorem 1
a) The maximum number of distinct clusters of the $n$ voters belonging to $V_{1}$ on the first TCA principal axis (distinct $f_{1}(i)$ values for $i\in V_{1})$ is $d_{1}d_{2}+1.$
b) The maximum value that $f_{1}(i)$ can attain is $2\frac{d_{1}d_{2}}{d(d-1)}.$
c) The minimum value that $f_{1}(i)$ can attain is $-2\frac{d_{1}d_{2}}{d(d-1)}.$
d) If the number of distinct clusters is maximum, $d_{1}d_{2}+1$, then the gap between two contiguous $f_{1}(i)$ values is $\frac{4}{d(d-1)}.$
Remark 1
a) We fix $f_{1}(nega)<0$ to eliminate the sign indeterminacy of the first bilinear term in (10).
b) We partition $V_{1}$ into $d_{1}d_{2}+1$ clusters, $V_{1}=\cup_{\alpha=1}^{d_{1}d_{2}+1}V_{1,\alpha}$, where the voters of the $\alpha$th cluster are characterized by their first TCA factor score; that is,
$V_{1,\alpha}=\left\\{i\in V_{1}:f_{1}^{V_{1}}(i)=2\frac{d_{1}d_{2}}{d(d-1)}-(\alpha-1)\frac{4}{d(d-1)}\right\\}$
for $\alpha=1,...,d_{1}d_{2}+1$.
Example 1: In Figure 2, $d_{1}=4$ and $d_{2}=6,$ and we observe:
Fact 1: by Theorem 1a, the 5000 preferences are clustered into $d_{1}d_{2}+1=25$ clusters on the first TCA principal axis.
Fact 2: by Theorem 1b, the maximum value of $f_{1}(i)$ is $48/90.$
Fact 3: by Theorem 1c, the minimum value of $f_{1}(i)$ is $-48/90.$
Fact 4: by Theorem 1d, the gap separating two contiguous clusters of voters on the first TCA principal axis is $4/90.$
A cluster of voters defined in Remark 1b, $V_{1,\alpha}$ for $\alpha=1,...,d_{1}d_{2}+1,$ can be classified as coherent or incoherent; this will be discussed in the next section.
## 5 Coherent cluster
The following definition characterizes a coherent cluster.
Definition 1 (Coherency of a cluster of voters $V_{1,\alpha}$ for $\alpha=1,...,d_{1}d_{2}+1$)
A cluster of voters $V_{1,\alpha}\subseteq V_{1}$ is coherent if
$f_{1}^{V_{1,\alpha}}(v)=2\frac{d_{1}d_{2}}{d(d-1)}-(\alpha-1)\frac{4}{d(d-1)}$
for all $v\in V_{1,\alpha},$ where $f_{1}^{V_{1,\alpha}}(i)$ is the first TCA factor score of the voter $i\in V_{1,\alpha}$ obtained from TCA of the subprofile $V_{1,\alpha}.$
Remark 2:
a) It is important to distinguish between $f_{1}^{V_{1}}(i)$ for $i=1,...,|V_{1}|,$ where $n=|V_{1}|,$ and $f_{1}^{V_{1,\alpha}}(i)$ for $i=1,...,|V_{1,\alpha}|,$ where $|V_{1,\alpha}|$ represents the sample size of the cluster $V_{1,\alpha}.$
b) Definition 1 implies that a cluster $V_{1,\alpha}$ is coherent when for all voters $i\in V_{1,\alpha}$ the first TCA factor score $f_{1}^{V_{1,\alpha}}(i)$ does not depend on the voter $i$, but only on $(\alpha,d_{1},d_{2}).$
Corollary 1: It follows from Remark 1a and equation (13) that a necessary, but not sufficient, condition for a cluster $V_{1,\alpha}$ to be coherent is that its first TCA factor score obtained from TCA of $V_{1}$ is strictly positive; that is, $0<f_{1}^{V_{1}}(i)$ for $i\in V_{1,\alpha}.$
(a) Figure 3: TCA map of $V_{1,1}$.
(b) Figure 4: TCA map of $V_{1,2}$.
(c) Figure 5: TCA map of $V_{1,3}$.
(d) Figure 6: TCA map of $V_{1,4}$.
(e) Figure 7: TCA map of $V_{1,5}$.
(f) Figure 8: TCA map of $V_{1,6}$.
(g) Figure 9: TCA map of $V_{1,7}$.
(h) Figure 10: TCA map of $V_{1,8}$.
Example 2: Figures 3 through 9 show the coherency of the clusters of voters $V_{1,\alpha}$ for $\alpha=1,...,7,$ where dots represent clusters of voters, while Figure 10 shows the incoherence of the cluster $V_{1,8}.$ Further, the first three columns of Table 3 display the mathematical description of the 7 coherent clusters $cohC_{1}(\alpha)=V_{1,\alpha}$ for $\alpha=1,...,7$ as defined in Remark 1b and their sample sizes $|V_{1,\alpha}|.$
Table 3: Characteristics of $cohC_{1}(\alpha)=$ $V_{1,\alpha}$ of SUSHI data.
---
$\alpha$ | $|V_{1,\alpha}|$ | description of $V_{1,\alpha}$ | $\delta_{1}(V_{1,\alpha})$ | $T_{v\in V_{1,\alpha}}(\tau_{J_{1}}(S_{1}))$ | $Cross(V_{1,\alpha})$
$1$ | $314$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=48/90\right\\}$ | $48/90$ | $6$ | $0$
$2$ | $235$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=44/90\right\\}$ | $44/90$ | $7$ | $1/12$
$3$ | $326$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=40/90\right\\}$ | $40/90$ | $8$ | $2/12$
$4$ | $315$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=36/90\right\\}$ | $36/90$ | $9$ | $3/12$
$5$ | $452$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=32/90\right\\}$ | $32/90$ | $10$ | $4/12$
$6$ | $375$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=28/90\right\\}$ | $28/90$ | $11$ | $5/12$
$7$ | $401$ | $\left\\{i\mathbf{:}f_{1}^{V_{1}}(i\mathbf{)}=24/90\right\\}$ | $24/90$ | $12$ | $6/12$
Proposition 1: For a voting profile $V$, $\delta_{1}(V)\geq|f_{1}(nega)|$,
where $\delta_{1}(V)$ is the first TCA dispersion value obtained from TCA of
$V,$ and $f_{1}(nega)$ is the first TCA factor score of the row $nega$.
The equality in Proposition 1 is attained only for coherent clusters as shown
in the following result.
Proposition 2: The first TCA dispersion value of a coherent cluster $cohC_{1}(\alpha)$ satisfies
$\delta_{1}(cohC_{1}(\alpha))=|f_{1}^{V_{1,\alpha}}(nega)|=2\frac{d_{1}d_{2}}{d(d-1)}-(\alpha-1)\frac{4}{d(d-1)}.$
Example 3: Proposition 2 can be observed by looking at columns 3 and 4 of Table 3, which concern the 7 coherent clusters $cohC_{1}(\alpha)=V_{1,\alpha}$ for $\alpha=1,...,7$. For the incoherent cluster $V_{1,8}$ with sample size $|V_{1,8}|=335,$ we observe
$V_{1,8}=\left\\{i:f_{1}^{V_{1}}(i)=20/90=0.222\right\\},$
and, by Proposition 1, $\delta_{1}(V_{1,8})=0.2354>2/9.$ This means that the 335 voters belonging to $V_{1,8}$ form a cluster within the whole sample of 5000 voters, but taken separately as 335 voters they do not form a coherent cluster.
### 5.1 Interpretability of a coherent cluster
The following result shows that for coherent clusters the first TCA dimension can be interpreted as a Borda-scaled factor.
Proposition 3: The first TCA column factor score of item $j,$ $g_{1}(j),$ is an affine function of the Borda scale $\beta(j);$ that is, $g_{1}(j)=\frac{2}{d-1}\beta(j)-1$ for $j=1,...,d.$ Equivalently, $corr(\mathbf{g}_{1},\mathbf{\beta})=1.$
Remark 3:
The first TCA principal factor score of item $j$ for $j=1,...,d$ is bounded: $-1\leq g_{1}(j)\leq 1,$ because $0\leq\beta(j)\leq d-1.$
Example 4: Table 4 displays the Borda scales of the items (sushis) in the seven coherent clusters $cohC_{1}(\alpha)=V_{1,\alpha}$ for $\alpha=1,...,7.$ To identify the sushi type, one has to refer to Figure 2; for instance, $j10$ corresponds to 10. cucumber roll in Figure 2. We observe the following main fact: for each of the seven coherent clusters, the first TCA principal axis produced the same binary partition of the items: $J_{1}=\left\\{j10,j7,j4,j9\right\\}$ characterized by $\beta(j_{1})<4.5$ for $j_{1}\in J_{1}$, and $J_{2}=\left\\{j3,j1,j2,j6,j5,j8\right\\}$ characterized by $\beta(j_{2})>4.5$ for $j_{2}\in J_{2}.$ The six sushis in $J_{2}$ have Borda scales above the average score of $4.5=(0+9)/2$, while the four sushis in $J_{1}$ have Borda scales below the average score of $4.5.$
Table 4: Borda scales of the 10 sushis in the seven coherent clusters.
---
Borda scale | items
| j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8
$\mathbf{\beta}(cohC_{1}(1))$ | 0.66 | 1.31 | 1.87 | 2.16 | 5.55 | 5.78 | 6.03 | 6.58 | 7.31 | 7.52
$\mathbf{\beta}(cohC_{1}(2))$ | 0.69 | 1.29 | 2.44 | 2.59 | 5.47 | 5.43 | 5.50 | 6.35 | 7.38 | 7.86
$\mathbf{\beta}(cohC_{1}(3))$ | 0.65 | 1.60 | 3.04 | 2.71 | 5.25 | 5.25 | 5.39 | 6.26 | 7.17 | 7.68
$\mathbf{\beta}(cohC_{1}(4))$ | 0.83 | 1.79 | 3.10 | 3.28 | 5.30 | 4.74 | 5.22 | 6.34 | 6.76 | 7.64
$\mathbf{\beta}(cohC_{1}(5))$ | 1.12 | 2.02 | 3.26 | 3.60 | 5.70 | 4.74 | 5.27 | 5.75 | 5.99 | 7.60
$\mathbf{\beta}(cohC_{1}(6))$ | 1.12 | 2.33 | 3.62 | 3.93 | 5.68 | 4.98 | 5.21 | 5.33 | 5.25 | 7.56
$\mathbf{\beta}(cohC_{1}(7))$ | 1.42 | 2.74 | 3.84 | 4.00 | 5.45 | 4.70 | 5.02 | 5.26 | 5.20 | 7.38
Now we ask: what are the differences among the seven coherent clusters? The answer is the riffle shuffling of the scores of the items, which we discuss next.
## 6 Exploratory riffle shuffling
$\left[8\right]$ is the seminal reference on riffle shuffling of cards. $\left[2\right]$ generalized the notion of independence of two subsets of items to riffled independence to uncover the structure of rank data. Within the framework of data analysis of preferences, exploratory riffle shuffling can be described in the following way. We have two sets: $J$, a set of $d$ distinct items, and $S$, a set of $d$ Borda scores. We partition both sets into two disjoint subsets of sizes $d_{1}$ and $d_{2}=d-d_{1};$ that is, $J=J_{1}\cup J_{2}$ with $J_{1}=\left\\{j_{1},j_{2},...,j_{d_{1}}\right\\}$, and $S=S_{1}\cup S_{2}$ with $S_{1}=\left\\{0,1,...,d_{1}-1\right\\}$ and $S_{2}=\left\\{d_{1},...,d-1\right\\}.$ Riffle shuffling consists of two steps. In the first step, we attribute the scores of $S_{1}$ to $J_{1}$ and the scores of $S_{2}$ to $J_{2}.$ In the second step, we permute some scores attributed to $J_{1}$ with the same number of scores attributed to $J_{2}.$ The second step can be mathematically described as the application of a permutation $\tau$ such that $\tau_{J}(S_{1},S_{2})=(\tau_{J_{1}}(S_{1}),\tau_{J_{2}}(S_{2})).$ We interpret $\tau_{J_{1}}(S_{1})$ as the set of scores attributed to $J_{1},$ and $\tau_{J_{2}}(S_{2})$ as the set of scores attributed to $J_{2}.$
Example 5: Table 5 displays a toy example with $n=7$ voters' Borda scorings of $d=10$ items, with $J_{1}=\left\\{a,b,c,d\right\\}$ and $J_{2}=\left\\{e,f,g,h,i,j\right\\}.$ We observe the following: a) The first four voters have only done the first step in a riffle shuffle: each of them has attributed the scores in $S_{1}=\left\\{0,1,2,3\right\\}$ to the items in $J_{1}$ and the scores in $S_{2}=\left\\{4,5,6,7,8,9\right\\}$ to the items in $J_{2}.$ This can be described as $\tau_{J}(S_{1},S_{2})=(S_{1},S_{2});$ that is, the permutation $\tau$ is the identity permutation, so there is no crossing of scores between $J_{1}$ and $J_{2}$. b) Voters 5, 6 and 7 have done both steps in a riffle shuffle. Voters 5 and 6 have permuted score 3 with 5, so we have $\tau_{J_{1}}(S_{1})=\left\\{0,1,2,\mathbf{5}\right\\}$ and $\tau_{J_{2}}(S_{2})=\left\\{\mathbf{3},4,6,7,8,9\right\\}$. Voter 7 has permuted the scores $\left\\{\mathbf{2,3}\right\\}$ with $\left\\{\mathbf{4,5}\right\\}$, so we have $\tau_{J_{1}}(S_{1})=\left\\{0,1,\mathbf{4},\mathbf{5}\right\\}$ and $\tau_{J_{2}}(S_{2})=\left\\{\mathbf{2},\mathbf{3},6,7,8,9\right\\}$. Further, we denote by $|\tau_{J_{1}}(S_{1})|$ the number of voters who have done the riffle shuffle $(\tau_{J_{1}}(S_{1}),\tau_{J_{2}}(S_{2}))$. So $|\tau_{J_{1}}(S_{1})=\left\\{0,1,2,3\right\\}|=4,$ $|\left\\{0,1,2,\mathbf{5}\right\\}|=2$ and $|\left\\{0,1,\mathbf{4},\mathbf{5}\right\\}|=1.$ The permuted scores between the two blocks of items are shown in bold in Table 5.
Table 5: Borda scorings of 10 items by 7 voters.
---
voter | items
| a | b | c | d | e | f | g | h | i | j
1 | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
2 | 0 | 2 | 3 | 1 | 6 | 4 | 5 | 8 | 7 | 9
3 | 3 | 2 | 1 | 0 | 5 | 6 | 4 | 9 | 7 | 8
4 | 2 | 1 | 0 | 3 | 8 | 7 | 9 | 4 | 5 | 6
5 | 0 | 1 | 2 | 5 | 4 | 3 | 6 | 7 | 8 | 9
6 | 1 | 2 | 5 | 0 | 3 | 6 | 4 | 9 | 7 | 8
7 | 0 | 4 | 5 | 1 | 6 | 8 | 9 | 2 | 7 | 3
Remark 4: A useful observation from Example 5 is that we can concentrate our study either on $J_{1}$ or on $J_{2}$: if we know $\tau_{J_{1}}(S_{1})$, the scores attributed to $J_{1},$ we can deduce $\tau_{J_{2}}(S_{2})$, the scores attributed to $J_{2},$ because of the mutual exclusivity constraints ensuring that no two items, say $a$ and $b,$ are ever mapped to the same rank by a voter.
A simple measure of magnitude of $(d_{1},d_{2})$ riffle shuffling of a voter
$i$ is the sum of its Borda scores attributed to the items in $J_{1};$ that
is,
$T_{i}(\tau_{J_{1}}(S_{1}))=\sum_{j\in J_{1}}r_{ij},$
where $r_{ij}$ is the Borda score attributed to item $j$ by voter $i$. In Table 5, for the first four voters, $T_{i}(\tau_{J_{1}}(S_{1}))=6$ for $i=1,...,4,$ which is the minimum attainable sum of scores; it implies that for these voters there is no crossing of scores between the two blocks $J_{1}$ and $J_{2}$. For voters 5 and 6, $T_{i}(\tau_{J_{1}}(S_{1}))=8$ for $i=5,6;$ for voter 7, $T_{7}(\tau_{J_{1}}(S_{1}))=10$. These values show that the crossing of scores between the two blocks $J_{1}$ and $J_{2}$ for voters 5 and 6 is at a lower level than the crossing of scores for voter 7.
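The statistic $T_{i}$ is immediate to compute; the following minimal sketch reproduces the values quoted above from Table 5.

```python
import numpy as np

# Borda scorings of Table 5; columns are the items a, ..., j.
R = np.array([[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
              [0, 2, 3, 1, 6, 4, 5, 8, 7, 9],
              [3, 2, 1, 0, 5, 6, 4, 9, 7, 8],
              [2, 1, 0, 3, 8, 7, 9, 4, 5, 6],
              [0, 1, 2, 5, 4, 3, 6, 7, 8, 9],
              [1, 2, 5, 0, 3, 6, 4, 9, 7, 8],
              [0, 4, 5, 1, 6, 8, 9, 2, 7, 3]])
J1 = [0, 1, 2, 3]                   # column indices of items a, b, c, d

T = R[:, J1].sum(axis=1)            # magnitude of the riffle shuffle
print(T)                            # [ 6  6  6  6  8  8 10]
```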
For relatively small sample sizes, it is easy to enumerate the different types
of $(d_{1},d_{2})$ riffle shuffles. For relatively large sample sizes, we use
the contingency table of first-order marginals, that we discuss next.
### 6.1 Types of $(d_{1},d_{2})$ riffle shufflings in a coherent cluster
The contingency table of first-order marginals of an observed voting profile $V$ on $d$ items is a square $d\times d$ matrix $\mathbf{M}$, where $\mathbf{M}(i,j)$ stores the number of times that item $j$ has Borda score $i$ for $i=0,...,d-1,$ see subsection 3.2. It helps us to observe the types of $(d_{1},d_{2})$ riffle shufflings in a coherent cluster, as we explain in Example 6.
Table 6: $\mathbf{M}_{1,1}$, the contingency table of first-order marginals of $cohC_{1}(1).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 174 | 92 | 37 | 11 | 0 | 0 | 0 | 0 | 0 | 0 | 314
1 | 88 | 88 | 76 | 62 | 0 | 0 | 0 | 0 | 0 | 0 | 314
2 | 38 | 78 | 91 | 107 | 0 | 0 | 0 | 0 | 0 | 0 | 314
3 | 14 | 56 | 110 | 134 | 0 | 0 | 0 | 0 | 0 | 0 | 314
4 | 0 | 0 | 0 | 0 | 92 | 78 | 73 | 38 | 21 | 12 | 314
5 | 0 | 0 | 0 | 0 | 95 | 77 | 59 | 42 | 23 | 18 | 314
6 | 0 | 0 | 0 | 0 | 47 | 63 | 70 | 65 | 37 | 32 | 314
7 | 0 | 0 | 0 | 0 | 35 | 49 | 45 | 72 | 68 | 45 | 314
8 | 0 | 0 | 0 | 0 | 32 | 27 | 32 | 62 | 87 | 74 | 314
9 | 0 | 0 | 0 | 0 | 13 | 20 | 35 | 35 | 78 | 133 | 314
$\beta$ | 0.66 | 1.31 | 1.87 | 2.16 | 5.55 | 5.78 | 6.03 | 6.58 | 7.31 | 7.52 |
Table 7: $\mathbf{M}_{1,2}$, the contingency table of first-order marginals of $cohC_{1}(2).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 127 | 70 | 32 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 235
1 | 69 | 82 | 38 | 46 | 0 | 0 | 0 | 0 | 0 | 0 | 235
2 | 32 | 56 | 62 | 85 | 0 | 0 | 0 | 0 | 0 | 0 | 235
3 | 0 | 0 | 0 | 0 | 55 | 59 | 74 | 29 | 15 | 3 | 235
4 | 7 | 27 | 103 | 98 | 0 | 0 | 0 | 0 | 0 | 0 | 235
5 | 0 | 0 | 0 | 0 | 68 | 60 | 42 | 41 | 11 | 13 | 235
6 | 0 | 0 | 0 | 0 | 49 | 53 | 35 | 48 | 32 | 18 | 235
7 | 0 | 0 | 0 | 0 | 26 | 35 | 42 | 48 | 40 | 44 | 235
8 | 0 | 0 | 0 | 0 | 28 | 15 | 22 | 44 | 70 | 56 | 235
9 | 0 | 0 | 0 | 0 | 9 | 13 | 20 | 25 | 67 | 101 | 235
$\beta$ | 0.69 | 1.29 | 2.44 | 2.59 | 5.47 | 5.43 | 5.50 | 6.35 | 7.38 | 7.86 |
Table 8: $\mathbf{M}_{1,3}$, the contingency table of first-order marginals of $cohC_{1}(3).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 182 | 97 | 33 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 326
1 | 104 | 100 | 46 | 76 | 0 | 0 | 0 | 0 | 0 | 0 | 326
2 | 19 | 41 | 41 | 70 | 40 | 37 | 46 | 17 | 12 | 3 | 326
3 | 16 | 35 | 53 | 51 | 39 | 48 | 43 | 22 | 13 | 6 | 326
4 | 3 | 29 | 62 | 61 | 40 | 41 | 43 | 32 | 9 | 6 | 326
5 | 2 | 24 | 91 | 54 | 39 | 43 | 23 | 24 | 16 | 10 | 326
6 | 0 | 0 | 0 | 0 | 70 | 65 | 51 | 60 | 45 | 35 | 326
7 | 0 | 0 | 0 | 0 | 53 | 36 | 52 | 74 | 56 | 55 | 326
8 | 0 | 0 | 0 | 0 | 35 | 33 | 33 | 57 | 80 | 88 | 326
9 | 0 | 0 | 0 | 0 | 10 | 23 | 35 | 40 | 95 | 123 | 326
$\beta$ | 0.65 | 1.60 | 3.04 | 2.71 | 5.25 | 5.25 | 5.39 | 6.26 | 7.17 | 7.68 |
Table 9: $\mathbf{M}_{1,4}$, the contingency table of first-order marginals of $cohC_{1}(4).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 164 | 93 | 44 | 14 | 0 | 0 | 0 | 0 | 0 | 0 | 315
1 | 78 | 71 | 30 | 36 | 10 | 31 | 32 | 9 | 16 | 2 | 315
2 | 44 | 53 | 49 | 50 | 32 | 39 | 27 | 10 | 8 | 3 | 315
3 | 22 | 52 | 58 | 87 | 24 | 20 | 24 | 15 | 7 | 6 | 315
4 | 5 | 17 | 35 | 43 | 51 | 61 | 41 | 25 | 23 | 14 | 315
5 | 1 | 11 | 61 | 46 | 43 | 42 | 34 | 35 | 26 | 16 | 315
6 | 1 | 18 | 38 | 39 | 52 | 37 | 37 | 49 | 28 | 16 | 315
7 | 0 | 0 | 0 | 0 | 49 | 44 | 51 | 61 | 54 | 56 | 315
8 | 0 | 0 | 0 | 0 | 37 | 28 | 47 | 72 | 69 | 52 | 315
9 | 0 | 0 | 0 | 0 | 17 | 13 | 22 | 39 | 84 | 140 | 315
$\beta$ | 0.83 | 1.79 | 3.10 | 3.28 | 5.30 | 4.74 | 5.22 | 6.34 | 6.76 | 7.64 |
Table 10: $\mathbf{M}_{1,5}$, the contingency table of first-order marginals of $cohC_{1}(5).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 188 | 99 | 36 | 10 | 6 | 25 | 30 | 22 | 34 | 2 | 452
1 | 132 | 109 | 69 | 57 | 12 | 30 | 21 | 13 | 9 | 0 | 452
2 | 69 | 88 | 59 | 67 | 28 | 46 | 40 | 28 | 20 | 7 | 452
3 | 39 | 72 | 85 | 92 | 34 | 44 | 21 | 31 | 25 | 9 | 452
4 | 12 | 35 | 76 | 81 | 50 | 57 | 53 | 36 | 38 | 14 | 452
5 | 6 | 29 | 63 | 72 | 63 | 64 | 53 | 41 | 40 | 21 | 452
6 | 3 | 11 | 34 | 36 | 71 | 68 | 64 | 75 | 45 | 45 | 452
7 | 3 | 9 | 30 | 37 | 87 | 45 | 62 | 73 | 57 | 49 | 452
8 | 0 | 0 | 0 | 0 | 71 | 41 | 47 | 72 | 95 | 126 | 452
9 | 0 | 0 | 0 | 0 | 30 | 32 | 61 | 61 | 89 | 179 | 452
$\beta$ | 1.12 | 2.02 | 3.26 | 3.60 | 5.70 | 4.74 | 5.27 | 5.75 | 5.99 | 7.60 |
Table 11: $\mathbf{M}_{1,6}$, the contingency table of first-order marginals of $cohC_{1}(6).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 151 | 81 | 31 | 14 | 8 | 14 | 19 | 18 | 39 | 0 | 375
1 | 112 | 79 | 44 | 33 | 12 | 21 | 26 | 25 | 19 | 4 | 375
2 | 66 | 72 | 52 | 63 | 16 | 24 | 29 | 22 | 28 | 3 | 375
3 | 26 | 52 | 68 | 68 | 22 | 45 | 31 | 29 | 25 | 9 | 375
4 | 8 | 26 | 42 | 37 | 52 | 67 | 41 | 45 | 45 | 12 | 375
5 | 8 | 27 | 56 | 61 | 44 | 49 | 42 | 36 | 28 | 24 | 375
6 | 3 | 21 | 36 | 52 | 64 | 42 | 49 | 50 | 29 | 29 | 375
7 | 0 | 7 | 25 | 31 | 70 | 43 | 44 | 59 | 45 | 51 | 375
8 | 1 | 10 | 21 | 16 | 66 | 33 | 46 | 49 | 45 | 88 | 375
9 | 0 | 0 | 0 | 0 | 21 | 37 | 48 | 42 | 72 | 155 | 375
$\beta$ | 1.12 | 2.33 | 3.62 | 3.93 | 5.68 | 4.98 | 5.21 | 5.33 | 5.25 | 7.56 |
Table 12: $\mathbf{M}_{1,7}$, the contingency table of first-order marginals of $cohC_{1}(7).$
---
Borda | items
scores | j10 | j7 | j4 | j9 | j3 | j1 | j2 | j6 | j5 | j8 | sum
0 | 129 | 65 | 46 | 14 | 11 | 24 | 35 | 23 | 52 | 2 | 401
1 | 122 | 77 | 53 | 35 | 14 | 28 | 24 | 19 | 25 | 4 | 401
2 | 74 | 69 | 50 | 51 | 24 | 41 | 36 | 31 | 19 | 6 | 401
3 | 36 | 51 | 31 | 66 | 44 | 48 | 39 | 46 | 30 | 10 | 401
4 | 24 | 50 | 40 | 71 | 51 | 45 | 38 | 37 | 27 | 18 | 401
5 | 7 | 45 | 49 | 56 | 43 | 53 | 39 | 48 | 32 | 29 | 401
6 | 5 | 23 | 73 | 68 | 42 | 50 | 42 | 33 | 40 | 25 | 401
7 | 3 | 10 | 31 | 28 | 85 | 39 | 43 | 54 | 51 | 57 | 401
8 | 1 | 3 | 17 | 5 | 58 | 46 | 47 | 65 | 57 | 102 | 401
9 | 0 | 8 | 11 | 7 | 29 | 27 | 58 | 45 | 68 | 148 | 401
$\beta$ | 1.42 | 2.74 | 3.84 | 4.00 | 5.45 | 4.70 | 5.02 | 5.26 | 5.20 | 7.38 |
Example 6: Tables 6 to 12 display $\mathbf{M}_{1,\alpha}$ for $\alpha=1,...,7,$ the contingency tables of first-order marginals of the seven coherent clusters of the SUSHI data, respectively. We observe the following:
Each of them reveals the nature of the riffle shuffles of its coherent cluster, which are summarized in Table 13. The number of observed $(4,6)$ blocks of scores for the seven coherent clusters, $(\tau_{J_{1}}(S_{1}),\tau_{J_{2}}(S_{2})),$ is only 27 in Table 13 out of the possible total number of $10!/(4!6!)=210$. The counts of the observed $(4,6)$ blocks do not seem to be uniformly distributed in Table 13. Furthermore, we observe that as $\alpha$ increases from 1 to 7, the magnitude of the riffle shuffles, $T_{v}(\tau_{J_{1}}(S_{1})),$ increases in the coherent clusters from 6 to 12. Integers in bold in Table 13 are the shuffled (crossed) scores.
Table 13: Types of riffle shuffles in the 7 coherent clusters of SUSHI data.
---
$cohC_{1}(\alpha)$ | scores given to | sum of | | $cohC_{1}(\alpha)$ | scores given to | sum of |
| $\left\\{j10,j7,j4,j9\right\\}$ | scores | count | | $\left\\{j10,j7,j4,j9\right\\}$ | scores | count
$cohC_{1}(1)$ | $\left\\{0,1,2,3\right\\}$ | 6 | 314 | $cohC_{1}(6)$ | $\left\\{0,1,2,\mathbf{8}\right\\}$ | 11 | 48
$cohC_{1}(2)$ | $\left\\{0,1,2,\mathbf{4}\right\\}$ | 7 | 235 | | $\left\\{0,1,\mathbf{7},3\right\\}$ | 11 | 63
$cohC_{1}(3)$ | $\left\\{0,1,2,\mathbf{5}\right\\}$ | 8 | 171 | | $\left\\{0,\mathbf{6},2,3\right\\}$ | 11 | 53
| $\left\\{0,1,\mathbf{4},3\right\\}$ | 8 | 155 | | $\left\\{\mathbf{5},1,2,3\right\\}$ | 11 | 98
$cohC_{1}(4)$ | $\left\\{0,1,2,\mathbf{6}\right\\}$ | 9 | 96 | | $\left\\{0,1,\mathbf{4,6}\right\\}$ | 11 | 59
| $\left\\{0,1,\mathbf{5},3\right\\}$ | 9 | 119 | | $\left\\{0,\mathbf{4},2,\mathbf{5}\right\\}$ | 11 | 54
| $\left\\{0,\mathbf{4},2,3\right\\}$ | 9 | 100 | $cohC_{1}(7)$ | $\left\\{0,1,2,\mathbf{9}\right\\}$ | 12 | 26
$cohC_{1}(5)$ | $\left\\{0,1,2,\mathbf{7}\right\\}$ | 10 | 79 | | $\left\\{0,1,\mathbf{8},3\right\\}$ | 12 | 26
| $\left\\{0,1,\mathbf{6},3\right\\}$ | 10 | 84 | | $\left\\{0,\mathbf{7},2,3\right\\}$ | 12 | 33
| $\left\\{0,\mathbf{5},2,3\right\\}$ | 10 | 85 | | $\left\\{\mathbf{6},1,2,3\right\\}$ | 12 | 43
| $\left\\{\mathbf{4},1,2,3\right\\}$ | 10 | 119 | | $\left\\{0,\mathbf{4,5},3\right\\}$ | 12 | 38
| $\left\\{0,1,\mathbf{4,5}\right\\}$ | 10 | 85 | | $\left\\{0,\mathbf{4},2,\mathbf{6}\right\\}$ | 12 | 39
| | | | | $\left\\{0,1,\mathbf{4},\mathbf{7}\right\\}$ | 12 | 49
| | | | | $\left\\{0,1,\mathbf{5,6}\right\\}$ | 12 | 82
| | | | | $\left\\{\mathbf{4},1,2,\mathbf{5}\right\\}$ | 12 | 65
The counts in Table 13 are calculated from $\mathbf{M}_{1,\alpha}$ for $\alpha=1,...,7,$ by reasoning on the permutation of scores between the sets $S_{1}$ and $S_{2}$. Here are the details, where $J_{1}=\left\\{j10,j7,j4,j9\right\\}$.
a) $cohC_{1}(1)$
$|\left\\{0,1,2,3\right\\}|=314,$ which is the number of $0$s attributed to $J_{1}$ in $\mathbf{M}_{1,1}.$ Among the $\mathbf{M}_{1,\alpha}$ for $\alpha=1,...,7$, note that $\mathbf{M}_{1,1}$ is the only contingency table of first-order marginals which is block diagonal.
b) $cohC_{1}(2)$
$|\left\\{0,1,2,\mathbf{4}\right\\}|=235,$ which is the number of $4$s attributed to $J_{1}$ in $\mathbf{M}_{1,2}.$
c) $cohC_{1}(3)$
$|\left\\{0,1,2,\mathbf{5}\right\\}|=171,$ which is the number of $5$s attributed to $J_{1}$ in $\mathbf{M}_{1,3}.$
$|\left\\{0,1,\mathbf{4},3\right\\}|=155,$ which is the number of $4$s attributed to $J_{1}$ in $\mathbf{M}_{1,3}.$
d) $cohC_{1}(4)$
$|\left\\{0,1,2,\mathbf{6}\right\\}|=96,$ which is the number of $6$s attributed to $J_{1}$ in $\mathbf{M}_{1,4}.$
$|\left\\{0,1,\mathbf{5},3\right\\}|=119,$ which is the number of $5$s attributed to $J_{1}$ in $\mathbf{M}_{1,4}.$
$|\left\\{0,\mathbf{4},2,3\right\\}|=100,$ which is the number of $4$s attributed to $J_{1}$ in $\mathbf{M}_{1,4}.$
e) $cohC_{1}(5)$
$|\left\\{0,1,2,\mathbf{7}\right\\}|=79,$ which is the number of $7$s attributed to $J_{1}$ in $\mathbf{M}_{1,5}.$
$|\left\\{0,1,\mathbf{6},3\right\\}|=84,$ which is the number of $6$s attributed to $J_{1}$ in $\mathbf{M}_{1,5}.$
$|\left\\{0,\mathbf{5},2,3\right\\}|=85,$ which is the number of $1$s not attributed to $J_{1}$ in $\mathbf{M}_{1,5}.$
$|\left\\{0,1,\mathbf{4},\mathbf{5}\right\\}|+|\left\\{0,\mathbf{5},2,3\right\\}|=170,$ which is the total number of $5$s attributed to $J_{1}$ in $\mathbf{M}_{1,5};$ so $|\left\\{0,1,\mathbf{4},\mathbf{5}\right\\}|=170-85=85.$
$|\left\\{\mathbf{4},1,2,3\right\\}|=119,$ which is the number of $0$s not attributed to $J_{1}$ in $\mathbf{M}_{1,5}.$
f) $cohC_{1}(6)$
$|\left\\{0,1,2,\mathbf{8}\right\\}|=48,$ which is the number of $8$s attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
$|\left\\{0,1,\mathbf{7},3\right\\}|=63,$ which is the number of $7$s attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
$|\left\\{\mathbf{5},1,2,3\right\\}|=98,$ which is the number of $0$s not attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
$|\left\\{0,\mathbf{4},2,\mathbf{5}\right\\}|=152-98=54,$ where $152$ is the total number of $5$s attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
$|\left\\{0,1,\mathbf{4},\mathbf{6}\right\\}|=113-54=59,$ where $113$ is the total number of $4$s attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
$|\left\\{0,\mathbf{6},2,3\right\\}|=112-59=53,$ where $112$ is the total number of $6$s attributed to $J_{1}$ in $\mathbf{M}_{1,6}.$
g) $cohC_{1}(7)$
$|\left\\{0,1,2,\mathbf{9}\right\\}|=26,$ which is the number of $9$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$|\left\\{0,1,\mathbf{8},3\right\\}|=26,$ which is the number of $8$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
For the remaining counts, we have to solve the following system of 7 linear equations, where $u=|\left\\{0,\mathbf{7},2,3\right\\}|$, $t=|\left\\{0,\mathbf{4},\mathbf{5},3\right\\}|$, $s=|\left\\{0,\mathbf{4},2,\mathbf{6}\right\\}|$, $w=|\left\\{0,1,\mathbf{4,7}\right\\}|$, $z=|\left\\{0,1,\mathbf{5,6}\right\\}|$, $x=|\left\\{\mathbf{6},1,2,3\right\\}|$, and $y=|\left\\{\mathbf{4},1,2,\mathbf{5}\right\\}|$:
$x+y=147,$ which is the number of $0$s not attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$u+w=72,$ which is the number of $7$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$s+z+x=169,$ which is the number of $6$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$t+z+y=157,$ which is the number of $5$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$t+s+w+y=185,$ which is the number of $4$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}.$
$u+t+x=158,$ which is the number of $3$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}$, beyond the $26$ of $\left\\{0,1,\mathbf{8},3\right\\}.$
$u+s+x+y=218,$ which is the number of $2$s attributed to $J_{1}$ in $\mathbf{M}_{1,7}$, beyond the $26$ of $\left\\{0,1,2,\mathbf{9}\right\\}.$
## 7 Crossing index
The following $(d_{1},d_{2})$ crossing index is based on the internal dispersion of a voting profile.
Definition 3: For a voting profile $V$ we define its crossing index to be
$Cross(V)=1-\frac{\delta_{1}(V_{d_{1},d_{2}})}{\max_{V}\delta_{1}(V_{d_{1},d_{2}})}=1-\frac{\delta_{1}(V_{d_{1},d_{2}})}{2\frac{d_{1}d_{2}}{d(d-1)}}\ \ \text{by Proposition 2,}$
where $\delta_{1}(V_{d_{1},d_{2}})$ is the first taxicab dispersion obtained from TCA of $V$ and $(d_{1},d_{2})$ represents the optimal TCA binary partition of the $d$ items of $V$ such that $d=d_{1}+d_{2}.$
Proposition 4: The crossing index of a coherent cluster is
$Cross(cohC(\alpha))=\frac{2(\alpha-1)}{d_{1}d_{2}}.$
Example 7: The last column of Table 3 contains the values of the crossing indices of the seven coherent clusters of the first iteration of the SUSHI data. We observe: a) $Cross(cohC_{1}(1))=0$, because the structure of its matrix of first-order marginals, $\mathbf{M}_{1,1},$ is block diagonal; this means that the permutation $\tau$ is the identity permutation, so there is no crossing of scores between the two subsets of items $J_{1}$ and $J_{2}$ in $cohC_{1}(1).$ b) $Cross(cohC_{1}(\alpha))$ for $\alpha=1,...,7$ is a linearly increasing function of $\alpha,$ similar in spirit to the $T_{v}(\tau_{J_{1}}(S_{1}))$ statistic. c) For the incoherent cluster $V_{1,8}$, we have $\delta_{1}(V_{1,8})=0.2354$ given in Example 3, and $d_{1}=d_{2}=5$ from Figure 10. So
$Cross(V_{1,8})=1-\frac{0.2354}{2(5)(5)/(10(9))}=1-0.4237=0.5763.$
## 8 Coherent group
Our aim is to explore a given voting profile $V$ by uncovering its coherent mixture groups, see equation (1); that is, $V=\cup_{g=1}^{G}cohG(g)\cup noisyG$, where $G$ represents the number of coherent groups and $cohG(g)$ is the $g$th coherent group. The computation is done by an iterative procedure in $n_{G}$ steps, for $n_{G}\geq G$, that we describe:
For $g=1$: let $V_{1}=V;$ compute $cohG(1)$ from $V_{1},$ then partition $V_{1}=V_{2}\cup cohG(1).$
For $g=2$: compute $cohG(2)$ from $V_{2},$ then partition $V_{2}=V_{3}\cup cohG(2).$
By continuing the above procedure, after $n_{G}$ steps we get $V=\cup_{g=1}^{n_{G}}cohG(g).$
However, some of the higher-order coherent groups may have relatively small sample sizes; considering these as outliers, we lump them together, thus forming the noisy group denoted by $noisyG$ in equation (1). A sketch of this peeling loop is given below.
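The following minimal Python sketch summarizes the peeling loop; `extract_coherent_group` is a hypothetical helper standing in for one TCA pass, and the size threshold is illustrative.

```python
def peel(V, extract_coherent_group, min_size=50):
    """A sketch of the peeling procedure of Section 8.

    `extract_coherent_group` is a hypothetical helper standing in for
    one TCA pass: given the current profile V_g (a list of voter ids),
    it returns the union of the coherent clusters cohG(g) found in V_g.
    `min_size` is an illustrative threshold for outlier-sized groups.
    """
    groups, noisy = [], []
    while V:
        cohG = extract_coherent_group(V)         # cohG(g) from V_g
        if not cohG:                             # nothing coherent left
            noisy.extend(V)
            break
        if len(cohG) < min_size:                 # small higher-order group:
            noisy.extend(cohG)                   # lump it into noisyG
        else:
            groups.append(cohG)
        removed = set(cohG)
        V = [v for v in V if v not in removed]   # V_{g+1} = V_g \ cohG(g)
    return groups, noisy
```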
Let us recall the definition of a coherent group given in equation (2):
$cohG(g)=\cup_{\alpha=1}^{c_{g}}cohC_{g}(\alpha)\text{ \ for }g=1,...,G;$
that is, a coherent group is the union of its coherent clusters. This implies
that the sample size of $cohG(g)$ equals the sum of the sample sizes of its
coherent clusters
$|cohG(g)|\ =\sum_{\alpha=1}^{c_{g}}|cohC_{g}(\alpha)|.$
As an example, for the SUSHI data, from the 2nd column of Table 3 we can compute the sample size of the first coherent group:
$|cohG(1)|=\sum_{\alpha=1}^{c_{1}=7}|cohC_{1}(\alpha)|=2418.$
Furthermore, $cohG(1)$ is composed of 27 observed riffle shuffles summarized
in Table 13, which provides quite a detailed view of its inner structure.
The next result shows important characteristics of a coherent group inherited
from its coherent clusters.
Theorem 2: (Properties of a coherent group $cohG(g)$)
a) The first principal column factor score $\mathbf{g}_{1}$ of the $d$ items in a coherent group is the weighted average of the first principal column factor scores $\mathbf{g}_{1}$ of the $d$ items of its coherent clusters; that is, for $j=1,...,d$,
$g_{1}(j\in cohG(g))=\sum_{\alpha=1}^{c_{g}}\frac{|cohC_{g}(\alpha)|}{|cohG(g)|}g_{1}(j\in cohC_{g}(\alpha))=\frac{2}{d-1}\sum_{\alpha=1}^{c_{g}}\frac{|cohC_{g}(\alpha)|}{|cohG(g)|}\beta(j\in cohC_{g}(\alpha))-1\ \ \text{by Proposition 3;}$
and $corr(\mathbf{g}_{1}(cohG(g)),\mathbf{\beta}(cohG(g)))=1.$
b) The first TCA dispersion value of a coherent group is the weighted average of the first TCA dispersion values of its coherent clusters; that is,
$\delta_{1}(cohG(g))=\sum_{\alpha=1}^{c_{g}}\frac{|cohC_{g}(\alpha)|}{|cohG(g)|}\delta_{1}(cohC_{g}(\alpha)).$
c) The crossing index of a coherent group is the weighted average of the crossing indices of its coherent clusters; that is,
$Cross(cohG(g))=\sum_{\alpha=1}^{c_{g}}\frac{|cohC_{g}(\alpha)|}{|cohG(g)|}Cross(cohC_{g}(\alpha)).$
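Parts b) and c) can be verified numerically from Table 3. The following minimal sketch computes the weighted averages for $cohG(1)$ of the SUSHI data; the crossing index evaluates to approximately $0.273$, the $27.3\%$ quoted in Example 8 below.

```python
import numpy as np

# Sizes, dispersions and crossing indices of cohC_1(1..7), from Table 3.
sizes = np.array([314, 235, 326, 315, 452, 375, 401])
delta1 = np.array([48, 44, 40, 36, 32, 28, 24]) / 90
cross = np.arange(7) / 12

w = sizes / sizes.sum()            # weights |cohC_1(alpha)| / |cohG(1)|
print(sizes.sum())                 # 2418 = |cohG(1)|
print(w @ delta1)                  # delta_1(cohG(1)), by Theorem 2b
print(w @ cross)                   # Cross(cohG(1)) ~ 0.273, by Theorem 2c
```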
Example 8: Table 14 summarizes the first four coherent groups of the SUSHI data, which emerged after 5 iterations. For $g=1$, we get $cohG(1)=\cup_{\alpha=1}^{c_{1}=7}cohC_{1}(\alpha);$ that is, the first coherent group of voters, the majority, is composed of 48.36% of the sample with a crossing index of 27.3%. The standard errors of the Borda scale of the items in $cohG(1)$ in Table 14 are
$(0.046,0.051,0.042,0.042,0.053,0.047,0.037,0.034,0.037,0.025).$
We can discern the following grouped seriation (bucket ranking) of the items:
$j8\succ j5\succ j6\succ\left\\{j3,j2\right\\}\succ j1\succ\left\\{j9,j4\right\\}\succ\left\\{j7\right\\}\succ\left\\{j10\right\\}.$
The groupings are based on the standard 95% confidence intervals of the Borda scale of the items.
The 2nd coherent group $cohG(2),$ summarized by its Borda scales in Table 14, is made up of eight coherent clusters; it is composed of 19.10% of the sample with a crossing index of 35.38%. The voters in this coherent group disapprove of $\left\\{\text{uni (sea urchin)},\ \text{sake (salmon roe)}\right\\},$ which are considered more "daring" sushis.
The third coherent group $cohG(3),$ summarized by its Borda scales in Table 14, is made up of eight coherent clusters; it is composed of 13.24% of the sample with a crossing index of 31.37%. The voters in this coherent group prefer the three types of tuna sushis together with the salmon roe sushi.
The fourth coherent group $cohG(4),$ summarized by its Borda scales in Table 14, is made up of eight coherent clusters; it is composed of 6.94% of the sample with a crossing index of 35.27%. The voters disapprove of the three types of tuna sushis.
Remark 6:
a) Note that the number of preferred sushis in $cohG(1)$ and $cohG(2)$ is six; that is, $|J_{2}|=6.$ The number of preferred sushis in $cohG(3)$ and $cohG(4)$ is four.
b) The four coherent groups summarized in Table 14 can also be described as two bipolar latent factors: the only major difference between the first two coherent groups is that (5. uni (sea urchin), 6. sake (salmon roe)) are swapped with (7. tamago (egg), 4. ika (squid)), while the only major difference between the third and fourth coherent groups is that the three tunas are swapped with (4. ika (squid), 5. uni (sea urchin), 1. ebi (shrimp)).
c) We consider the fifth group as noisy (outliers not shown) composed of
12.36% of the remaining sample: it contains
$cohG(5)=\cup_{\alpha=1}^{2}cohC_{5}(\alpha)$ whose sample size is $38$, a
very small number. For the sake of completeness we also provide the sample
sizes of its two coherent clusters $|cohC_{5}(1)|=22$ and $|cohC_{5}(2)|=16$.
Table 14: The first four coherent groups of SUSHI data and related statistics.
---
$\mathbf{cohG(1)=\cup}_{\alpha=1}^{7}\mathbf{cohC}_{1}\mathbf{(\alpha)}$ | $\mathbf{\beta}$ | $\mathbf{cohG(2)=\cup}_{\alpha=1}^{8}\mathbf{cohC}_{2}\mathbf{(\alpha)}$ | $\mathbf{\beta}$
8\. toro (fatty tuna) | $\mathbf{7.62}$ | 8\. toro (fatty tuna) | $\mathbf{6.15}$
5\. uni (sea urchin) | $\mathbf{6.31}$ | 2\. anago (sea eel) | $\mathbf{5.97}$
6\. sake (salmon roe) | $\mathbf{5.92}$ | 1\. ebi (shrimp) | $\mathbf{5.92}$
3\. maguro (tuna) | $\mathbf{5.49}$ | 7\. tamago (egg) | $\mathbf{5.76}$
2\. anago (sea eel) | $\mathbf{5.35}$ | 3\. maguro (tuna) | $\mathbf{5.55}$
1\. ebi (shrimp) | $\mathbf{5.04}$ | 4\. ika (squid) | $\mathbf{5.41}$
9\. tekka-maki (tuna roll) | 3.27 | 9\. tekka-maki (tuna roll) | 3.80
4\. ika (squid) | 3.10 | 10\. kappa-maki (cucumber roll) | 2.56
7\. tamago (egg) | 1.94 | 6\. sake (salmon roe) | 2.45
10\. kappa-maki (cucumber roll) | 0.97 | 5\. uni (sea urchin) | 1.44
$Cross(cohG(1))=27.3\%$ | | $Cross(cohG(2))=35.38\%$ |
$|cohG(1)|=2418\ (48.36\%)$ | | $|cohG(2)|=955\ (19.10\%)$ |
| | |
$\mathbf{cohG(3)=\cup}_{\alpha=1}^{8}\mathbf{cohC}_{3}\mathbf{(\alpha)}$ | $\mathbf{\beta}$ | $\mathbf{cohG(4)=\cup}_{\alpha=1}^{8}\mathbf{cohC}_{4}\mathbf{(\alpha)}$ | $\mathbf{\beta}$
8\. toro (fatty tuna) | $\mathbf{7.31}$ | 4\. ika (squid) | $\mathbf{6.67}$
6\. sake (salmon roe) | $\mathbf{6.62}$ | 5\. uni (sea urchin) | $\mathbf{6.50}$
3\. maguro (tuna) | $\mathbf{6.30}$ | 6\. sake (salmon roe) | $\mathbf{6.43}$
9\. tekka-maki (tuna roll) | $\mathbf{6.00}$ | 1\. ebi (shrimp) | $\mathbf{6.16}$
7\. tamago (egg) | 3.76 | 8\. toro (fatty tuna) | 3.69
4\. ika (squid) | 3.41 | 7\. tamago (egg) | 3.39
2\. anago (sea eel) | 3.00 | 2\. anago (sea eel) | 3.21
1\. ebi (shrimp) | 2.92 | 9\. tekka-maki (tuna roll) | 3.14
10\. kappa-maki (cucumber roll) | 2.86 | 10\. kappa-maki (cucumber roll) | 2.99
5\. uni (sea urchin) | 2.80 | 3\. maguro (tuna) | 2.80
$Cross(cohG(3))=31.37\%$ | | $Cross(cohG(4))=35.27\%$ |
$|cohG(3)|\ =662\ (13.24\%)$ | | $|cohG(4)|\ =347\ (6.94\%)$ |
## 9 APA data set
The 1980 American Psychological Association (APA) presidential election had
five candidates: $\left\\{A,C\right\\}$ were research psychologists,
$\left\\{D,E\right\\}$ were clinical psychologists and $B$ was a community
psychologist. In this election, voters ranked the five candidates in order of
preference. Among the 15449 votes, 5738 votes ranked all five candidates. We
consider the data set which records the 5738 complete votes; it is available
in $\left[20,\ p.96\right]$ and $\left[5,\ Table\ 1\right]$. The winner was
candidate $C$.
(a) Figure 11: TCA map of $coh_{1}C(1)$ of APA data
(b) Figure 12: TCA map of $coh_{1}C(2)$ of APA data
Table 15 compares the results obtained by our method and the best distance-
based mixture model given in $\left[21\right]$. Distance-based models have two
parameters, a central modal ranking and a precision parameter. The precision
parameter measures the peakedness of the distribution. $\left[21\right]$ found
that the Cayley distance produced better results than the Kendall and Spearman
distances using BIC (Bayesian information criterion) and ICL (integrated
complete likelihood) criteria. Parts a and b of Table 15 are reproduced from $\left[21,\ \text{Tables 4 and 5}\right]$.
Part c of Table 15 summarizes the results of our approach, where we only describe the first four coherent groups. We find only the first two coherent groups meaningfully interpretable based on a priori knowledge of the candidates. Voters in $cohG(1)$, with a sample share of 31.0%, prefer the research-oriented psychologists $\left\\{A,C\right\\}$ over the rest. Voters in $cohG(2)$, with a sample share of 23.7%, prefer the clinical psychologists $\left\\{D,E\right\\}$ over the rest. We interpret $cohG(3)$ and $cohG(4)$ as mixed-$B$ groups with 14.2% and 12.0% of the voters, respectively. Additionally, there is a $noisyG$ making up 19.1% of the sample, which comprises the $cohG(5)$ displayed in Table 15.
$\left[5\right]$ discussed this data set in quite some detail; surprisingly, our results confirm his observations: a) There are two groups of candidates, $\left\\{A,C\right\\}$ and $\left\\{D,E\right\\}$; the voters line up behind one group or the other. b) The APA divides into academicians and clinicians who are on uneasy terms. Voters seem to choose one type or the other, and then choose within; but the group effect predominates. c) Candidate $B$ seems to fall in the middle, perhaps closer to $D$ and $E$.
The following important observation emerges from the comparison of results in Table 15. There are two distinct concepts of groups for rank data: categorical and latent-variable based. To see this, consider groups 3 and 4 in part a of Table 15: group 3 is based on the modal category $B\succ C\succ A\succ D\succ E$ and group 4 is based on the modal category $B\succ C\succ A\succ E\succ D.$ The only difference between these two modal categories is the permutation of the two least-ranked clinical psychologist candidates $\left\\{D,E\right\\};$ this difference is not important and does not appear in our approach, which is a latent-variable approach.
Table 15: A summary of results derived from three methods of analysis of APA election data. Parts a) and b) are from Murphy and Martin (2003) $\left[21\right]$.
---
a) Parameters of the best mixture model selected, Cayley-based, using BIC
Group | sample% | modal orderings | precision
$1$ | $42$ | $D\succ B\succ E\succ C\succ A$ | $0.16$
$2$ | $31$ | $C\succ D\succ E\succ A\succ B$ | $0.79$
$3$ | $12$ | $B\succ C\succ A\succ D\succ E$ | $1.52$
$4$ | $8$ | $B\succ C\succ A\succ E\succ D$ | $1.81$
$5$ | $7$ | $B\succ D\succ A\succ E\succ C$ | $1.72$
b) Parameters of the best mixture model selected, Cayley-based, using ICL
Group | sample% | modal ordering | precision
$1$ | $100$ | $B\succ C\succ A\succ E\succ D$ | $0.25$
c) The first five coherent groups, each composed of two coherent clusters.
Group | sample% | $\beta(C)$ | $\beta(A)$ | $\beta(B)$ | $\beta(E)$ | $\beta(D)$ | $Cross$
cohG(1) Research | $31.0$ | $\mathbf{3.55}$ | $\mathbf{3.15}$ | $1.31$ | $1.15$ | $0.85$ | $10.22\%$
cohG(2) Clinical | $23.7$ | $0.83$ | $1.28$ | $1.28$ | $\mathbf{3.31}$ | $\mathbf{3.30}$ | $12.90\%$
cohG(3) mixed B | $14.2$ | $0.66$ | $\mathbf{2.70}$ | $\mathbf{2.96}$ | $0.71$ | $\mathbf{2.97}$ | $12.45\%$
cohG(4) mixed B | $12.0$ | $\mathbf{2.85}$ | $0.77$ | $\mathbf{2.86}$ | $\mathbf{2.80}$ | $0.72$ | $10.22\%$
cohG(5) outlier | $8.6$ | $0.96$ | $\mathbf{3.30}$ | $1.31$ | $\mathbf{3.40}$ | $1.00$ | $9.88\%$
### 9.1 Description
The eight coherent clusters of the first four coherent groups can simply be
described as:
$coh_{1}C(1):T_{v}(\tau_{J_{2}}(S_{2})=\tau_{\left\\{A,C\right\\}}\left\\{3,4\right\\}=\left\\{3,4\right\\})=7$
for $v=1,...,1233.$
$coh_{1}C(2):T_{v}(\tau_{J_{2}}(S_{2})=\tau_{\left\\{A,C\right\\}}\left\\{3,4\right\\}=\left\\{\mathbf{2},4\right\\})=6$
for $v=1,...,545.$
$coh_{2}C(1):T_{v}(\tau_{J_{2}}(S_{2})=\tau_{\left\\{D,E\right\\}}\left\\{3,4\right\\}=\left\\{3,4\right\\})=7$
for $v=1,...,834.$
$coh_{2}C(2):T_{v}(\tau_{J_{2}}(S_{2})=\tau_{\left\\{D,E\right\\}}\left\\{3,4\right\\}=\left\\{\mathbf{2},4\right\\})=6$
for $v=1,...,526.$
$coh_{3}C(1):T_{v}(\tau_{J_{1}}(S_{1})=\tau_{\left\\{C,E\right\\}}\left\\{0,1\right\\}=\left\\{0,1\right\\})=1$
for $v=1,...,512.$
$coh_{3}C(2):T_{v}(\tau_{J_{1}}(S_{1})=\tau_{\left\\{C,E\right\\}}\left\\{0,1\right\\}=\left\\{0,\mathbf{2}\right\\})=2$
for $v=1,...,305.$
$coh_{4}C(1):T_{v}(\tau_{J_{1}}(S_{1})=\tau_{\left\\{A,D\right\\}}\left\\{0,1\right\\}=\left\\{0,1\right\\})=1$
for $v=1,...,350.$
$coh_{4}C(2):T_{v}(\tau_{J_{1}}(S_{1})=\tau_{\left\\{A,D\right\\}}\left\\{0,1\right\\}=\left\\{0,\mathbf{2}\right\\})=2$
for $v=1,...,338.$
In this case, we can also visualize all the orderings belonging to a coherent
group: Figures 11 and 12 display all the preferences belonging to the two
coherent clusters of the first coherent group. The label $CAEBD162$ in Figure
11 should be interpreted as the preference $C\succ A\succ E\succ B\succ D$
repeated 162 times.
## 10 Riffle independence model
Riffle independence is a nonparametric probabilistic modelling method of
preferences developed by $\left[2\right]$, which generalizes the independence
model. It can be described in the following way:
(a) Partition the set $J$ of $d$ distinct items into two disjoint subsets
$J_{1}$ of size $d_{1}$ and $J_{2}$ of size $d_{2}$. Then generate an ordering
of items within each subset according to a certain ranking model. This implies
that any ordering of the $d$ items can be written as a direct product of two
disconnected orderings; which in its turn implies the independence of the two
subsets $J_{1}$ and $J_{2}$. So the model complexity of this step is of order
$d_{1}!+d_{2}!.$
(b) Interleave the two independent orderings for these two subsets using a
riffle shuffle to form a combined ordering. An interleaving is a binary
mapping from the set of orderings to $\left\\{J_{1},J_{2}\right\\}$. The model
complexity of this step is of order $d!/(d_{1}!d_{2}!).$ The interleaving step
generates the riffled independence of the two subsets $J_{1}$ and $J_{2}$.
So the combined model complexity of both steps is
$d_{1}!+d_{2}!+d!/(d_{1}!d_{2}!)$ which is much smaller than
$d!=(d_{1}+d_{2})!$.
For example, consider an ordering of the items in the set $J=\left\\{A,B,C,D,E,F\right\\}$ built from its two subsets $J_{1}=\left\\{A,C\right\\}$ and $J_{2}=\left\\{B,D,E,F\right\\}.$ In the first step, relative orderings of the items in $J_{1}$ and $J_{2}$ are drawn independently. Suppose we obtain the relative ordering $\varphi(J_{1})=(C\succ A)$ in $J_{1},$ and the relative ordering $\varphi(J_{2})=(B\succ D\succ F\succ E)$ in $J_{2}.$ Then, in the second step, the two relative orderings are combined by interleaving the items in the two subsets. For instance, if the interleaving process is $\omega(J_{1},J_{2})=(J_{1},J_{2},J_{2},J_{1},J_{2},J_{2})$, where the relative ordering of the items in each subset remains unchanged, the combined ordering is then determined by the composition
$\omega(J_{1},J_{2})\ast(\varphi(J_{1}),\varphi(J_{2}))=(C\succ B\succ D\succ A\succ F\succ E)=\varphi(J).$
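The two-step construction is easy to express in code. The following minimal sketch composes an interleaving with the two relative orderings; it reproduces the combined ordering above.

```python
def interleave(omega, phi1, phi2):
    """Compose an interleaving with the relative orderings of J1 and J2.

    omega is a sequence over {1, 2} saying which subset supplies the
    next item; phi1 and phi2 are the relative orderings of J1 and J2.
    """
    it1, it2 = iter(phi1), iter(phi2)
    return [next(it1) if w == 1 else next(it2) for w in omega]

phi_J1 = ["C", "A"]                       # C > A
phi_J2 = ["B", "D", "F", "E"]             # B > D > F > E
omega = [1, 2, 2, 1, 2, 2]                # (J1, J2, J2, J1, J2, J2)
print(interleave(omega, phi_J1, phi_J2))  # ['C', 'B', 'D', 'A', 'F', 'E']
```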
Given the two subsets $J_{1}$ and $J_{2}$ with their orderings $\varphi(J_{1})$ and $\varphi(J_{2})$ and interleaving $\omega(J_{1},J_{2})$ generated from models with probability distributions $f_{J_{1}},$ $g_{J_{2}}$ and $m_{\omega}$, respectively, the probability of an observed ordering under the riffle independence model is
$P(\varphi(J))=m_{\omega}(\omega(J_{1},J_{2}))f_{J_{1}}(\varphi(J_{1}))g_{J_{2}}(\varphi(J_{2})).$
There are two formulations of riffle shuffle for rank data in statistics:
probabilistic and exploratory. In the riffled independence model, the set of
items is partitioned recursively, while in the exploratory approach the set of
voters is partitioned recursively.
## 11 Conclusion
The main contribution of this paper is the introduction of an exploratory
riffle shuffling procedure to reveal and display the structure of diffuse rank
data for large sample sizes. The new notion of a coherent cluster that we developed is simply based on the geometric notion of taxicab projection of points onto the first TCA axis, globally and locally; furthermore, it has nice mathematical properties. Coherent clusters of a coherent group represent the same latent variable, opposing preferred items to disliked items, and can easily be interpreted and displayed.
Like Occam’s razor, step by step, our procedure peels the essential structural
layers (coherent groups) of rank data.
Our method was able to discover some other aspects of the rank data, such as outliers or small groups, which are eclipsed or masked by well-established methods, such as distance-based or random-utility-based methods. The major reason for this is that in random-utility-based methods the multivariate nature of a preference is reduced to binary preferences (paired comparisons), and in Mallows distance-related methods distances between any two preferences are bounded.
We presented a new index, $Cross$, that quantifies the extent of crossing of scores between the two blocks of the optimal binary partition of the items produced by TCA. The crossing index of a group is based on the first taxicab dispersion measure; it takes values between 0 and 100%, so it is easily interpretable.
The proposed approach can easily be generalized to the analysis of rankings
with ties and partial rankings.
The R package TaxicabCA, available on CRAN, can be used to perform the calculations.
Acknowledgement: Choulakian’s research has been supported by NSERC grant
(RGPIN-2017-05092) of Canada.
References
[1] Kamishima, T. (2003). Nantonac collaborative filtering: recommendation based on order responses. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '03, 583-588. ACM, New York.
[2] Huang, J., Guestrin, C. (2012). Uncovering the riffled independence structure of ranked data. Electronic Journal of Statistics, 6, 199-230.
[3] Lu, T., Boutilier, C. (2014). Effective sampling and learning for Mallows models with pairwise preference data. Journal of Machine Learning Research, 15, 3783-3829.
[4] Vitelli, V., Sørensen, Ø., Crispino, M., Frigessi, A., Arjas, E. (2018). Probabilistic preference learning with the Mallows rank model. Journal of Machine Learning Research, 18, 1-49.
[5] Diaconis, P. (1989). A generalization of spectral analysis with application to ranked data. The Annals of Statistics, 17(3), 949-979.
[6] Marden, J.I. (1995). Analyzing and Modeling of Rank Data. Chapman & Hall, London.
[7] Alvo, M., Yu, P. (2014). Statistical Methods for Ranking Data. Springer, New York.
[8] Bayer, D., Diaconis, P. (1992). Trailing the dovetail shuffle to its lair. The Annals of Applied Probability, 2(2), 294-313.
[9] Choulakian, V. (2016). Globally homogenous mixture components and local heterogeneity of rank data. arXiv:1608.05058.
[10] Choulakian, V. (2006). Taxicab correspondence analysis. Psychometrika, 71, 333-345.
[11] Choulakian, V. (2016). Matrix factorizations based on induced norms. Statistics, Optimization and Information Computing, 4, 1-14.
[12] Borda, J. de (1781). Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences, 102, 657-665.
[13] Benzécri, J.P. (1991). Comment on Leo A. Goodman's invited paper. Journal of the American Statistical Association, 86, 1112-1115.
[14] Van de Velden, M. (2000). Dual scaling and correspondence analysis of rank order data. In: Heijmans, Pollock, Satorra (eds), Innovations in Multivariate Statistical Analysis, 36, 87-99. Kluwer Academic Publishers, Dordrecht.
[15] Torres, A., Greenacre, M. (2002). Dual scaling and correspondence analysis of preferences, paired comparisons and ratings. International Journal of Research in Marketing, 19(4), 401-405.
[16] Nishisato, S. (1980). Analysis of Categorical Data: Dual Scaling and Its Applications. University of Toronto Press, Toronto.
[17] Choulakian, V. (2014). Taxicab correspondence analysis of ratings and rankings. Journal de la Société Française de Statistique, 155(4), 1-23.
[18] Khot, S., Naor, A. (2012). Grothendieck-type inequalities in combinatorial optimization. Communications on Pure and Applied Mathematics, LXV, 992-1035.
[19] Choulakian, V., Abou-Samra, G. (2020). Mean absolute deviations about the mean, cut norm and taxicab correspondence analysis. Open Journal of Statistics, 10(1), 97-112.
[20] Diaconis, P. (1988). Group Representations in Probability and Statistics. Institute of Mathematical Statistics, Hayward, CA.
[21] Murphy, T.B., Martin, D. (2003). Mixtures of distance-based models for ranking data. Computational Statistics and Data Analysis, 41, 645-655.
Appendix
Let $\mathbf{R}=(r_{ij})$ for $i=1,...,n$ and $j=1,...,d$ represent the Borda
scorings for preferences, where $r_{ij}$ takes values $0,...,d-1.$ Similarly,
let $\overline{\mathbf{R}}$ represent the reverse Borda scorings, whose column
sums are the coordinates of the row named
$\mathbf{nega}=n\overline{\mathbf{\beta}}=\mathbf{1}_{n}^{\prime}\overline{\mathbf{R}}.$ We
consider the application of TCA to the data set
$\mathbf{R}_{nega}=\begin{pmatrix}\mathbf{R}\\ \mathbf{nega}\end{pmatrix}$
of size $(n+1)\times d.$ So let
$\mathbf{P}=\mathbf{R}_{nega}/t$
be the correspondence table associated with $\mathbf{R}_{nega},$ where
$t=2n\sum_{j=0}^{d-1}j=nd(d-1).$ We have
$p_{i\ast}=\frac{1}{2n}\quad\text{for}\quad i=1,...,n,$ (14)
$p_{i\ast}=\frac{1}{2}\quad\text{for}\quad i=n+1,$ (15)
and
$p_{\ast j}=\frac{1}{d}\text{\ \ \ \ \ for\ \ \ }j=1,...,d.$ (16)
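The marginals (14)-(16) are easy to verify numerically. The following Python sketch is an added illustration (not part of the original derivation): it builds a random Borda matrix with its nega row and checks the three marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
# each respondent assigns the Borda scores 0,...,d-1 to the d items
R = np.array([rng.permutation(d) for _ in range(n)])
R_bar = (d - 1) - R                        # reverse Borda scorings
nega = R_bar.sum(axis=0)                   # column sums of the reverse scorings
R_nega = np.vstack([R, nega])              # the (n+1) x d table R_nega
t = n * d * (d - 1)
P = R_nega / t                             # correspondence table
assert np.allclose(P[:n].sum(axis=1), 1 / (2 * n))   # (14)
assert np.isclose(P[n].sum(), 1 / 2)                 # (15)
assert np.allclose(P.sum(axis=0), 1 / d)             # (16)
```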
The first residual correspondence matrix will be
$p_{ij}^{(1)}=p_{ij}-p_{i\ast}p_{\ast j}$ (17)
$\phantom{p_{ij}^{(1)}}=\frac{r_{ij}}{t}-\frac{1}{2n}\cdot\frac{1}{d}\quad\text{for}\quad i=1,...,n$ (18)
$\phantom{p_{ij}^{(1)}}=\frac{\mathbf{nega}_{j}}{t}-\frac{1}{2}\cdot\frac{1}{d}\quad\text{for}\quad i=n+1.$ (19)
Consider the nontrivial binary partition of the set
$S=\left\\{0,1,...,d-1\right\\}$ into $S=S_{1}\cup S_{2},$ where
$|S_{1}|=d_{1},$ $|S_{2}|=d_{2}$ and $d=d_{1}+d_{2}.$ To eliminate the sign
indeterminacy in the first TCA principal axis, we fix
$\mathbf{v}_{1}(nega)=\mathbf{v}_{1}(n+1)=-1;$ and we designate by $S_{1}$ the
set of item indices such that the first TCA principal axis coordinates are
negative, that is, $\mathbf{u}_{1}(j)=-1$ for $j\in S_{1}.$ It follows that
$\mathbf{u}_{1}(j)=1$ for $j\in S_{2}$.
Now we have by (4), for $i=1,...,n$,
$\begin{array}[]{lll}a_{i1}&=&\sum_{j=1}^{d}\mathbf{u}_{1}(j)p_{ij}^{(1)}\\ &=&\sum_{j\in S_{1}}\mathbf{u}_{1}(j)p_{ij}^{(1)}+\sum_{j\in S_{2}}\mathbf{u}_{1}(j)p_{ij}^{(1)}\\ &=&-\sum_{j\in S_{1}}p_{ij}^{(1)}+\sum_{j\in S_{2}}p_{ij}^{(1)}\\ &=&-2\sum_{j\in S_{1}}p_{ij}^{(1)}\quad\text{by (17)}\\ &=&-2\sum_{j\in S_{1}}\left(\frac{r_{ij}}{t}-\frac{1}{2n}\cdot\frac{1}{d}\right)\quad\text{by (18)}\\ &=&\frac{d_{1}}{nd}-\frac{2}{t}\sum_{j\in S_{1}}r_{ij};\end{array}$ (20)
from which we deduce by (5), for $i=1,...,n$,
$\begin{array}[]{lll}f_{i1}&=&\frac{a_{i1}}{p_{i\ast}}\\\
&=&\frac{2d_{1}}{d}-\frac{4}{d(d-1)}\sum_{j\in S_{1}}r_{ij}.\end{array}$ (21)
We have the following Theorem concerning the first TCA principal factor scores
of respondents $f_{i1}$ for $i=1,...,n.$
Theorem 1:
a) The maximum number of distinct clusters of $n$ respondents on the first TCA
principal axis (distinct $f_{i1}$ values$)$ is $d_{1}d_{2}+1.$
Proof: We consider the two extreme cases of $S_{1}$ and calculate the
summation term in (21):
For $S_{1}=\left\\{0,1,...,d_{1}-1\right\\}$,$\ \sum_{j\in
S_{1}}r_{ij}=\sum_{j=0}^{d_{1}-1}j=\frac{d_{1}(d_{1}-1)}{2}.$
For $S_{1}=\left\\{d-d_{1},d-d_{1}+1,...,d-1\right\\},$ $\sum_{j\in
S_{1}}r_{ij}=\sum_{j=d-d_{1}}^{d-1}j=\sum_{j=d_{2}}^{d-1}j=\frac{d_{1}(d_{2}+d-1)}{2}$.
It follows that
$\frac{d_{1}(d_{1}-1)}{2}\leq\sum_{j\in
S_{1}}r_{ij}\leq\frac{d_{1}(d_{2}+d-1)}{2};$
so $\sum_{j\in S_{1}}r_{ij}$ can take at most
$\frac{d_{1}(d_{2}+d-1)}{2}-\frac{d_{1}(d_{1}-1)}{2}+1=d_{1}d_{2}+1$ values.
b) The maximum value that $f_{i1}$ can attain is $2\frac{d_{1}d_{2}}{d(d-1)}.$
Proof: From (21) and Part a, it follows that the maximum value that $f_{i1}$
can attain is
$(\frac{2d_{1}}{d}-\frac{4}{d(d-1)}\frac{d_{1}(d_{1}-1)}{2})=2\frac{d_{1}d_{2}}{d(d-1)}.$
c) The minimum value that $f_{i1}$ can attain is
$-2\frac{d_{1}d_{2}}{d(d-1)}.$
Proof: From (21) and Part a, it follows that the minimum value that $f_{i1}$
can attain is
$(\frac{2d_{1}}{d}-\frac{4}{d(d-1)}\frac{d_{1}(d_{2}+d-1)}{2})=-2\frac{d_{1}d_{2}}{d(d-1)}.$
d) If the number of distinct clusters is maximum, $d_{1}d_{2}+1$, then the gap
between two contiguous $f_{i1}$ values is $\frac{4}{d(d-1)}.$
Proof: Suppose that the number of distinct clusters is maximum,
$d_{1}d_{2}+1$. We consider the first TCA factor score
$f_{i1}=\frac{2d_{1}}{d}-\frac{4}{d(d-1)}\sum_{j\in S_{1}}r_{ij}$ which is
different in value from the two extreme values $\pm
2\frac{d_{1}d_{2}}{d(d-1)}.$ Then
$f_{i_{1}1}=\frac{2d_{1}}{d}-\frac{4}{d(d-1)}(-1+\sum_{j\in S_{1}}r_{ij})$
will be the contiguous higher value to $f_{i1};$ and similarly
$f_{i_{2}1}=\frac{2d_{1}}{d}-\frac{4}{d(d-1)}(1+\sum_{j\in S_{1}}r_{ij})$ will
be the contiguous lower value to $f_{i1};$ and the required result follows.
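Theorem 1 can be verified exhaustively for small $d$. The sketch below is an added illustration with an arbitrarily chosen $S_{1}$: it enumerates all $d!$ rankings, computes $f_{i1}$ from (21), and checks the cluster count, the two extreme values and the gap.

```python
from itertools import permutations

d, S1 = 5, {0, 2}                  # d items; an arbitrary d1-subset S1
d1 = len(S1); d2 = d - d1
values = set()
for perm in permutations(range(d)):        # perm[j] = Borda score of item j
    s = sum(perm[j] for j in S1)
    values.add(round(2 * d1 / d - 4 / (d * (d - 1)) * s, 12))   # eq. (21)
assert len(values) == d1 * d2 + 1                               # part a (attained)
assert max(values) == round(2 * d1 * d2 / (d * (d - 1)), 12)    # part b
assert min(values) == round(-2 * d1 * d2 / (d * (d - 1)), 12)   # part c
vs = sorted(values)
assert {round(b - a, 12) for a, b in zip(vs, vs[1:])} == {round(4 / (d * (d - 1)), 12)}  # part d
```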
Proposition 1: For a voting profile $V$,
$\delta_{1}\geq|f_{1}(\mathbf{nega})|$.
Proof: Let $\mathbf{a}_{1}=\begin{pmatrix}\mathbf{a}_{11}\\ a_{1}(nega)\end{pmatrix}.$ We need the
following three observations.
First, it is well known that $\mathbf{a}_{1}$ is centered, by (5) and (9):
$0=\mathbf{1}_{n+1}^{\prime}\mathbf{a}_{1}=\mathbf{1}_{n}^{\prime}\mathbf{a}_{11}+a_{1}(nega);$
from which we get
$|\mathbf{1}_{n}^{\prime}\mathbf{a}_{11}|=|a_{1}(nega)|.$ (22)
Second, by the triangle inequality for the $L_{1}$ norm we have
$||\mathbf{a}_{11}||_{1}\geq|\mathbf{1}_{n}^{\prime}\mathbf{a}_{11}|.$ (23)
Third, the marginal relative frequency of the nega row is $p_{nega\ast}=1/2$
by (15), and $f_{i1}=a_{i1}/p_{i\ast}$ for $i=1,...,n+1$ by (5); so we have
$f_{1}(nega)=2a_{1}(nega).$ (24)
Now we have by (7)
$\begin{array}[]{lll}\delta_{1}&=&||\mathbf{a}_{1}||_{1}\\ &=&||\mathbf{a}_{11}||_{1}+|a_{1}(nega)|\\ &\geq&|\mathbf{1}_{n}^{\prime}\mathbf{a}_{11}|+|a_{1}(nega)|\quad\text{by (23)}\\ &=&2|a_{1}(nega)|\quad\text{by (22)}\\ &=&|f_{1}(nega)|\quad\text{by (24)}.\end{array}$
Proposition 2: Let $cohC_{m}(\alpha)=V_{m,\alpha}$ be the $\alpha$th coherent
cluster of the $m$th coherent group, characterized by
$f_{1}^{V_{m,\alpha}}(\sigma)=f_{\alpha}^{V_{m}}$ for all
$\sigma\in cohC_{m}(\alpha)$. Then
$\delta_{1}=f_{\alpha}^{V_{m}}=-f_{1}(\mathbf{nega}).$
Proof: By Definition 1 of the coherency of the cluster $V_{m,\alpha},$ we have
$0<f_{1}^{V_{m,\alpha}}(i)=f_{\alpha}^{V_{m}}$ for
$i=1,...,|cohC_{m}(\alpha)|$; by (5) and (14) it follows that
$0<a_{i1}=f_{\alpha}^{V_{m}}/(2n)$ for $i=1,...,|cohC_{m}(\alpha)|$; so (23)
becomes an equality,
$||\mathbf{a}_{11}||_{1}=\sum_{i=1}^{n}a_{i1}=|\mathbf{1}_{n}^{\prime}\mathbf{a}_{11}|$,
and the required result follows.
Proposition 3 is a corollary to the following general result.
Theorem 3: If the first TCA principal axis of the columns of
$\mathbf{R}_{nega}$ is $\mathbf{v}_{1}=\begin{pmatrix}\mathbf{1}_{n}\\ -1\end{pmatrix}$, then
the first principal column factor score $\mathbf{g}_{1}$ of the $d$ items is
an affine function of the Borda scale $\mathbf{\beta}$; that is,
$g_{1}(j)=\frac{2}{d-1}\beta(j)-1$, or equivalently $corr(\mathbf{g}_{1},\mathbf{\beta})=1.$
Proof: Suppose that $\mathbf{v}_{1}=\begin{pmatrix}\mathbf{1}_{n}\\ -1\end{pmatrix};$ then by (4), for $j=1,...,d$,
$\begin{array}[]{lll}b_{1}(j)&=&\sum_{i=1}^{n+1}v_{1}(i)p_{ij}^{(1)}\\ &=&\sum_{i=1}^{n}p_{ij}^{(1)}-p_{(n+1)j}^{(1)}\\ &=&2\sum_{i=1}^{n}p_{ij}^{(1)}\quad\text{by (17)}\\ &=&2\sum_{i=1}^{n}(p_{ij}-p_{i\ast}p_{\ast j})\\ &=&2\sum_{i=1}^{n}r_{ij}/t-p_{\ast j}\quad\text{by (14)}\\ &=&2n\beta(j)/t-p_{\ast j}.\end{array}$
Thus by (5), for $j=1,...,d$,
$\begin{array}[]{lll}g_{1}(j)&=&b_{1}(j)/p_{\ast j}\\ &=&\frac{2n\beta(j)/t-p_{\ast j}}{p_{\ast j}}\\ &=&\frac{2\beta(j)}{d-1}-1.\end{array}$
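Theorem 3 is likewise easy to check numerically. Continuing the earlier NumPy sketch (again an added illustration, not part of the original paper), one builds the residual matrix (17), applies $\mathbf{v}_{1}$, and confirms the affine relation:

```python
# continuing the marginals sketch above (R, P, n, d as defined there)
p_row = P.sum(axis=1)                     # p_{i*}
p_col = P.sum(axis=0)                     # p_{*j}
P1 = P - np.outer(p_row, p_col)           # residual matrix, eq. (17)
v1 = np.append(np.ones(n), -1.0)          # v1 = (1_n', -1)'
g1 = (v1 @ P1) / p_col                    # g_1(j) = b_1(j)/p_{*j}, eq. (5)
beta = R.mean(axis=0)                     # Borda scale beta(j)
assert np.allclose(g1, 2 * beta / (d - 1) - 1)   # Theorem 3
```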
Proposition 4: The crossing index of a coherent cluster is
$Cross(cohC(\alpha))=\frac{2(\alpha-1)}{d_{1}d_{2}}.$
Proof: Easily shown by using Definition 3 and Proposition 2.
The proof of Theorem 2a easily follows from Theorem 3. The proof of Theorem 2b
is similar to the proof of Proposition 1. The proof of Theorem 2c is similar to
the proof of Proposition 4.
# Existence of Primitive Normal Pairs with One Prescribed Trace over Finite
Fields
Hariom Sharma, R. K. Sharma
###### Abstract
Given $m,n,q\in\mathbb{N}$ such that $q$ is a prime power and $m\geq 3$,
$a\in\mathbb{F}_{q}$, we establish a sufficient condition for the existence of
primitive pair $(\alpha,f(\alpha))$ in $\mathbb{F}_{q^{m}}$ such that $\alpha$
is normal over $\mathbb{F}_{q}$ and
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=a$, where
$f(x)\in\mathbb{F}_{q^{m}}(x)$ is a rational function of degree sum $n$.
Further, when $n=2$ and $q=5^{k}$ for some $k\in\mathbb{N}$, such a pair
definitely exists for all $(q,m)$ apart from at most $20$ choices.
Department of Mathematics, Indian Institute of Technology Delhi, New Delhi,
110016, India
Keywords: Finite Fields, Characters, Primitive element, Normal element
2010 Math. Sub. Classification: 12E20, 11T23
emails:<EMAIL_ADDRESS>(Hariom),<EMAIL_ADDRESS>(Rajendra)
## 1 Introduction
Given positive integers $m$ and $q$ such that $q$ is a prime power,
$\mathbb{F}_{q}$ denotes the finite field of order $q$ and
$\mathbb{F}_{q^{m}}$ denotes the extension of $\mathbb{F}_{q}$ of degree $m$. A
generator of the cyclic multiplicative group $\mathbb{F}_{q^{m}}^{*}$ is known
as a primitive element of $\mathbb{F}_{q^{m}}$. For a rational function
$f(x)\in\mathbb{F}_{q^{m}}(x)$ and $\alpha\in\mathbb{F}_{q^{m}}$, we call a
pair $(\alpha,f(\alpha))$ a primitive pair in $\mathbb{F}_{q^{m}}$ if both
$\alpha$ and $f(\alpha)$ are primitive elements of $\mathbb{F}_{q^{m}}$.
Further, $\alpha$ is normal over $\mathbb{F}_{q}$ if the set
$\\{\alpha,\alpha^{q},\alpha^{q^{2}},\cdots,\alpha^{q^{m-1}}\\}$ forms a basis
of $\mathbb{F}_{q^{m}}$ over $\mathbb{F}_{q}$. Also, the trace of $\alpha$
over $\mathbb{F}_{q}$, denoted by
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha)$ is given by
$\alpha+\alpha^{q}+\alpha^{q^{2}}+\cdots+\alpha^{q^{m-1}}$.
Primitive normal elements play a vital role in coding theory and cryptography
[1]. Therefore, the study of the existence of such elements is an active area of
research. We refer to [12] for the existence of primitive and normal elements
in finite fields. The existence of elements that are simultaneously primitive
and normal was first established by Lenstra and Schoof in [11]. Later on,
by using sieving techniques, Cohen and Huczynska [7] provided a computer-free
proof of it. In 1985, Cohen studied the existence of primitive pairs
$(\alpha,f(\alpha))$ in $\mathbb{F}_{q}$ for the rational function
$f(x)=x+a,a\in\mathbb{F}_{q}$. Many more researchers worked in this direction
and proved the existence of primitive pairs for more general rational functions
[8, 2, 14, 3]. Additionally, in fields of even order, Cohen [5] established
the existence of a primitive pair $(\alpha,f(\alpha))$ in $\mathbb{F}_{q^{n}}$
such that $\alpha$ is normal over $\mathbb{F}_{q}$, where
$f(x)=\frac{x^{2}+1}{x}$. A similar result has been obtained in [2] for the
rational function $f(x)=\frac{ax^{2}+bx+c}{dx+e}$. Another interesting problem
is to prove the existence of primitive pairs with prescribed traces, which has
been discussed in [13, 10, 15].
In this article, we consider all these conditions simultaneously and prove the
existence of a primitive pair $(\alpha,f(\alpha))$ in $\mathbb{F}_{q^{m}}$ such
that $\alpha$ is normal over $\mathbb{F}_{q}$ and, for prescribed
$a\in\mathbb{F}_{q}$,
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=a$, where $f(x)$
is a more general rational function. To proceed further, we shall use some basic
terminology and conventions from [8]. To say that a nonzero polynomial
$f(x)\in\mathbb{F}_{q^{m}}[x]$ has degree $n\geq 0$ we mean that
$f(x)=a_{n}x^{n}+\cdots+a_{0}$, where $a_{n}\neq 0$, and we write
$\deg(f)=n$. Next, for a rational function
$f(x)=f_{1}(x)/f_{2}(x)\in\mathbb{F}_{q^{m}}(x)$, we always assume that
$f_{1}$ and $f_{2}$ are coprime, and the degree sum of $f$ is $\deg(f_{1})+\deg(f_{2})$.
Also, dividing each of $f_{1}$ and $f_{2}$ by the leading coefficient of
$f_{2}$, we may suppose that $f_{2}$ is monic. Further, we say that a rational
function $f\in\mathbb{F}_{q^{m}}(x)$ is exceptional if $f=cx^{i}g^{d}$ for
some $c\in\mathbb{F}_{q^{m}}$, $i\in\mathbb{Z}$ (the set of integers) and some $d>1$
dividing $q^{m}-1$, or if $f(x)=x^{i}$ for some $i\in\mathbb{Z}$ such that
$\gcd(q^{m}-1,i)\neq 1.$
Finally, we introduce some sets which play an important role in this article.
For $n_{1},n_{2}\in\mathbb{N}$, $S_{q,m}(n_{1},n_{2})$ will be used to denote
the set of non-exceptional rational functions
$f=f_{1}/f_{2}\in\mathbb{F}_{q^{m}}(x)$ with $\deg(f_{1})\leq n_{1}$ and
$\deg(f_{2})\leq n_{2}$, and $T_{n_{1},n_{2}}$ as the set of pairs
$(q,m)\in\mathbb{N}\times\mathbb{N}$ such that for any given $f\in
S_{q,m}(n_{1},n_{2})$ and prescribed $a\in\mathbb{F}_{q}$,
$\mathbb{F}_{q^{m}}$ contains a normal element $\alpha$ with
$(\alpha,f(\alpha))$ a primitive pair and
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=a$. Define
$S_{q,m}(n)=\bigcup\limits_{n_{1}+n_{2}=n}S_{q,m}(n_{1},n_{2})$ and
$T_{n}=\bigcap\limits_{n_{1}+n_{2}=n}T_{n_{1},n_{2}}$. By [4], for $m\leq 2$,
there does not exist any primitive element $\alpha$ such that
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=0$. Therefore, we
shall assume $m\geq 3$ throughout the article.
In this paper, for $n\in\mathbb{N}$, we take $f(x)\in S_{q,m}(n)$, a general
rational function of degree sum $n$, and $a\in\mathbb{F}_{q}$, and prove the
existence of a normal element $\alpha$ such that $(\alpha,f(\alpha))$ is a
primitive pair in $\mathbb{F}_{q^{m}}$ and
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=a$. To be more
precise, in section $3$, we obtain a sufficient condition for the existence of
such elements in $\mathbb{F}_{q^{m}}$. In section $4$, we further improve the
condition by proving a generalization of the sieving technique due to Anju and
Cohen [6]. In section $5$, we demonstrate the application of the results of
sections $3$ and $4$ by working with finite fields of characteristic $5$ and
$n=2$. More precisely, we obtain a subset of $T_{2}$.
## 2 Preliminaries
In this section, we provide some preliminary notations, definitions and
results which are required further in this article. Throughout this article,
$m\geq 3$ is an integer, $q$ is an arbitrary prime power and $\mathbb{F}_{q}$
is a finite field of order $q$. For each $k(>1)\in\mathbb{N}$, $\omega(k)$
denotes the number of prime divisors of $k$ and $W(k)$ denotes the number of
square free divisors of $k$. Also, for $g(x)\in\mathbb{F}_{q}[x]$,
$\Omega_{q}(g)$ and $W(g)$ denote the number of distinct monic irreducible
(over $\mathbb{F}_{q}$) divisors of $g$ and the number of square free divisors
of $g$, respectively; i.e., $W(k)=2^{\omega(k)}$ and $W(g)=2^{\Omega_{q}(g)}$.
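The quantities $\omega(k)$, $W(k)$ and $W(g)$ are used repeatedly in the computations later in the paper. A small Python helper using SymPy is sketched here as an added illustration; note that SymPy factors polynomials only over prime fields, so for $q=5^{k}$ with $k>1$ a dedicated finite-field library (for instance the galois package) would be needed instead.

```python
from sympy import Poly, factorint, symbols

def omega(k):
    """Number of distinct prime divisors of k."""
    return len(factorint(k))

def W(k):
    """Number of squarefree divisors of k: W(k) = 2^omega(k)."""
    return 2 ** omega(k)

x = symbols('x')

def W_poly(g, p):
    """2^(number of distinct monic irreducible factors of g over F_p), p prime."""
    return 2 ** len(Poly(g, x, modulus=p).factor_list()[1])

# 3124 = 2^2 * 11 * 71, so W = 8; x^5 - 1 = (x-1)^5 over F_5, so W = 2
print(W(5**5 - 1), W_poly(x**5 - 1, 5))
```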
For a finite abelian group $G$, a homomorphism $\chi$ from $G$ into the
multiplicative group $S^{1}=\\{z\in\mathbb{C}:|z|=1\\}$ is known as a
character of $G$. The set of all characters of $G$ forms a group under
multiplication, which is isomorphic to $G$ and is denoted by $\widehat{G}$.
Further, the character $\chi_{0}$, defined as $\chi_{0}(g)=1$ for all $g\in G$
is called the trivial character of $G$. The order of a character $\chi$ is the
smallest positive integer $r$ such that $\chi^{r}=\chi_{0}$. For a finite
field $\mathbb{F}_{q^{m}}$, the characters of the additive group
$\mathbb{F}_{q^{m}}$ and the multiplicative group $\mathbb{F}^{*}_{q^{m}}$ are
called additive characters and multiplicative characters respectively. A
multiplicative character $\chi\in\widehat{\mathbb{F}}_{q^{m}}^{*}$ is extended
from $\mathbb{F}^{*}_{q^{m}}$ to $\mathbb{F}_{q^{m}}$ by the rule
$\chi(0)=\begin{cases}0\leavevmode\nobreak\ \leavevmode\nobreak\ \text{ if
}\chi\neq\chi_{0}\\\ 1\leavevmode\nobreak\ \leavevmode\nobreak\ \text{ if
}\chi=\chi_{0}\end{cases}.$ For more fundamentals on characters, primitive
elements and finite fields, we refer the reader to [12].
For a divisor $u$ of $q^{m}-1$, an element $w\in\mathbb{F}_{q^{m}}^{*}$ is
$\mathop{\mbox{$u$-$\mathit{free}$}}$, if $w=v^{d}$, where
$v\in\mathbb{F}_{q^{m}}$ and $d|u$ implies $d=1$. It is easy to observe that
an element in $\mathbb{F}_{q^{m}}^{*}$ is
$\mathop{\mbox{$(q^{m}-1$)-$\mathit{free}$}}$ if and only if it is primitive.
A special case of [16, Lemma 10] provides an interesting result.
###### Lemma 2.1.
Let $u$ be a divisor of $q^{m}-1$, $\xi\in\mathbb{F}_{q^{m}}^{*}$. Then
$\sum_{d|u}\frac{\mu(d)}{\phi(d)}\sum_{\chi_{d}}\chi_{d}(\xi)=\begin{cases}\frac{u}{\phi(u)}&\quad\text{
if }\xi\text{ is }\mathop{\mbox{$u$-$\mathit{free}$}},\\\
0&\quad\text{otherwise.}\end{cases}$
where $\mu(\cdot)$ is the Möbius function, $\phi(\cdot)$ is the Euler totient
function, and $\chi_{d}$ runs through all the $\phi(d)$ multiplicative
characters of $\mathbb{F}_{q^{m}}^{*}$ of order $d$.
Therefore, for each divisor $u$ of $q^{m}-1$,
$\rho_{u}:\alpha\mapsto\theta(u)\sum_{d|u}\frac{\mu(d)}{\phi(d)}\sum_{\chi_{d}}\chi_{d}(\alpha),$
(2.1)
gives a characteristic function for the subset of
$\mathop{\mbox{$u$-$\mathit{free}$}}$ elements of $\mathbb{F}_{q^{m}}^{*}$,
where $\theta(u)=\frac{\phi(u)}{u}$.
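As a sanity check (an added illustration, not from the paper), one can evaluate $\rho_{u}$ directly in a small field, say $\mathbb{F}_{7}^{*}$ with $u=q-1=6$, where the $6$-free elements are exactly the primitive roots $3$ and $5$:

```python
import cmath
from math import gcd
from sympy import divisors, factorint, totient

q, g = 7, 3                                    # F_7^* with primitive root g = 3

def mu(d):                                     # Moebius function via factorization
    f = factorint(d)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def rho(u, xi):
    a = next(a for a in range(q - 1) if pow(g, a, q) == xi)   # discrete log of xi
    total = 0.0
    for d in divisors(u):
        # the characters of order exactly d are the chi_k with (q-1)/gcd(k,q-1) = d,
        # where chi_k(g^a) = exp(2*pi*i*k*a/(q-1))
        for k in range(q - 1):
            if (q - 1) // gcd(k, q - 1) == d:
                total += mu(d) / totient(d) * cmath.exp(2j * cmath.pi * k * a / (q - 1)).real
    return totient(u) / u * total

for xi in range(1, q):
    print(xi, round(rho(q - 1, xi), 6))        # 1.0 exactly at the primitive roots 3 and 5
```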
Also, for each $a\in\mathbb{F}_{q}$,
$\tau_{a}:\alpha\mapsto\frac{1}{q}\sum\limits_{\psi\in\widehat{\mathbb{F}}_{q}}\psi(\text{
Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha)-a)$
is a characteristic function for the subset of $\mathbb{F}_{q^{m}}$ consisting
of elements with $\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha)=a$. From
[12, Theorem 5.7] every additive character $\psi$ of $\mathbb{F}_{q}$ can be
obtained by $\psi(a)=\psi_{0}(ua)$, where $\psi_{0}$ is the canonical additive
character of $\mathbb{F}_{q}$ and $u$ is an element of $\mathbb{F}_{q}$
corresponding to $\psi$. Thus
$\tau_{a}(\alpha)=\frac{1}{q}\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(\text{
Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(u\alpha)-ua)$
$=\frac{1}{q}\sum\limits_{u\in\mathbb{F}_{q}}\hat{\psi_{0}}(u\alpha)\psi_{0}(-ua),$
(2.2)
where $\hat{\psi_{0}}$ is the additive character of $\mathbb{F}_{q^{m}}$
defined by $\hat{\psi_{0}}(\alpha)=\psi_{0}(\text{
Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha))$. In particular,
$\hat{\psi_{0}}$ is the canonical additive character of $\mathbb{F}_{q^{m}}$.
The additive group of $\mathbb{F}_{q^{m}}$ is an $\mathbb{F}_{q}[x]$-module
under the rule $f\circ\alpha=\sum\limits_{i=0}^{k}a_{i}\alpha^{q^{i}}$, for
$\alpha\in\mathbb{F}_{q^{m}}$ and
$f(x)=\sum\limits_{i=0}^{k}a_{i}x^{i}\in\mathbb{F}_{q}[x]$. For
$\alpha\in\mathbb{F}_{q^{m}}$, the $\mathbb{F}_{q}$-order of $\alpha$ is the
unique monic polynomial $g$ of least degree such that $g\circ\alpha=0$.
Observe that $g$ is a factor of $x^{m}-1$.
Similarly, by defining the action of $\mathbb{F}_{q}[x]$ on
$\widehat{\mathbb{F}}_{q^{m}}$ by the rule $(\psi\circ f)(\alpha)=\psi(f\circ\alpha)$,
where $\psi\in\widehat{\mathbb{F}}_{q^{m}},\alpha\in\mathbb{F}_{q^{m}}$ and
$f\in\mathbb{F}_{q}[x]$, $\widehat{\mathbb{F}}_{q^{m}}$ becomes an
$\mathbb{F}_{q}[x]$-module, and the unique monic polynomial $g$ of least
degree such that $\psi\circ g=\chi_{0}$
is called the $\mathbb{F}_{q}$-order of $\psi$. Further, there are
$\Phi_{q}(g)$ characters of $\mathbb{F}_{q}$-order $g$, where $\Phi_{q}(g)$ is
the analogue of Euler’s phi-function on $\mathbb{F}_{q}[x]$ (see [12]).
Similar to the above, for $g|x^{m}-1$ an element $\alpha\in\mathbb{F}_{q^{m}}$ is
$g$-$free$ if $\alpha=h\circ\beta$,
where $\beta\in\mathbb{F}_{q^{m}}$ and $h|g$, implies $h=1.$ It is
straightforward that an element in $\mathbb{F}_{q^{m}}$ is $(x^{m}-1)$-$free$
if and only if it is normal. Also, for $g|x^{m}-1$ an expression for the
characteristic function for $g$-$free$ elements is given by
$\kappa_{g}:\alpha\mapsto\Theta(g)\sum_{h|g}\frac{\mu^{\prime}(h)}{\Phi_{q}(h)}\sum_{\psi_{h}}\psi_{h}(\alpha),$
(2.3)
where $\Theta(g)=\frac{\Phi_{q}(g)}{q^{\deg(g)}}$, the internal sum runs over
all characters $\psi_{h}$ of $\mathbb{F}_{q}$-order $h$, and $\mu^{\prime}$ is
the analogue of the Möbius function, defined as
$\mu^{\prime}(g)=\begin{cases}(-1)^{s}&\text{if }g\text{ is a product of }s\text{ distinct monic irreducible polynomials},\\ 0&\text{otherwise.}\end{cases}$
The following results of L. Fu and D. Wan will play a vital role in the next
section.
###### Lemma 2.2.
[9, Theorem 4.5] Let $f(x)\in\mathbb{F}_{{q}^{d}}(x)$ be a rational function.
Write $f(x)=\prod_{j=1}^{k}f_{j}(x)^{n_{j}}$, where
$f_{j}(x)\in\mathbb{F}_{{q}^{d}}[x]$ are irreducible polynomials and $n_{j}$
are non zero integers. Let $\chi$ be a multiplicative character of
$\mathbb{F}_{q^{d}}$. Suppose that the rational function
$\prod_{i=0}^{d-1}f(x^{q^{i}})$ is not of the form $h(x)^{\text{ord}(\chi)}$
in $\mathbb{F}_{q^{d}}(x),$ where ord$(\chi)$ is the order of $\chi$, then we
have
$\big{|}\sum_{\alpha\in\mathbb{F}_{q},f(\alpha)\neq
0,f(\alpha)\neq\infty}\chi(f(\alpha))\big{|}\leq(d\sum_{j=1}^{k}\deg(f_{j})-1)q^{\frac{1}{2}}.$
###### Lemma 2.3.
[9, Theorem 4.6] Let $f(x),g(x)\in\mathbb{F}_{q^{m}}(x)$ be rational
functions. Write $f(x)=\prod_{j=1}^{k}f_{j}(x)^{n_{j}}$, where
$f_{j}(x)\in\mathbb{F}_{{q}^{m}}[x]$ are irreducible polynomials and $n_{j}$
are nonzero integers. Let $D_{1}=\sum_{j=1}^{k}\deg(f_{j})$, let
$D_{2}=\max(\deg(g),0)$, let $D_{3}$ be the degree of the denominator of $g(x)$,
and let $D_{4}$ be the sum of the degrees of those irreducible polynomials
dividing the denominator of $g$ but distinct from $f_{j}(x)\ (j=1,2,\cdots,k)$. Let
$\chi$ be a multiplicative character of $\mathbb{F}_{q^{m}}$, and let $\psi$
be a non-trivial additive character of $\mathbb{F}_{q^{m}}$. Suppose $g(x)$ is
not of the form $r(x)^{q^{m}}-r(x)$ in $\mathbb{F}_{q^{m}}(x)$. Then we have
the estimate
$\big{|}\sum_{\begin{subarray}{c}\alpha\in\mathbb{F}_{q^{m}},\ f(\alpha)\neq 0,\infty,\\ g(\alpha)\neq\infty\end{subarray}}\chi(f(\alpha))\psi(g(\alpha))\big{|}\leq(D_{1}+D_{2}+D_{3}+D_{4}-1)q^{\frac{m}{2}}.$
## 3 Sufficient condition
Let $l_{1},l_{2}\in\mathbb{N}$ be such that $l_{1},l_{2}|q^{m}-1$. Also, for
$a\in\mathbb{F}_{q}$, $f(x)\in S_{q,m}(n)$ and $g|x^{m}-1$, let
$N_{f,a,n}(l_{1},l_{2},g)$ denote the number of elements
$\alpha\in\mathbb{F}_{q^{m}}$ such that $\alpha$ is both
$\mathop{\mbox{$l_{1}$-$\mathit{free}$}}$ and $g$-$free$, $f(\alpha)$ is
$\mathop{\mbox{$l_{2}$-$\mathit{free}$}}$, and
$\text{Tr}_{\mathbb{F}_{q^{m}}/\mathbb{F}_{q}}(\alpha^{-1})=a$.
We now prove a sufficient condition as follows.
###### Theorem 3.1.
Let $m,n\text{ and }q\in\mathbb{N}$ be such that $q$ is a prime power and $m\geq
3$. Suppose that
$q^{\frac{m}{2}-1}>(n+2)W(q^{m}-1)^{2}W(x^{m}-1).$ (3.1)
Then $(q,m)\in T_{n}$.
###### Proof.
To prove the result, it is enough to show that
$N_{f,a,n}(q^{m}-1,q^{m}-1,x^{m}-1)>0$ for every $f(x)\in S_{q,m}(n)$ and
prescribed $a\in\mathbb{F}_{q}$. Let $f(x)\in S_{q,m}(n)$ be any rational
function and $a\in\mathbb{F}_{q}$. Let $U_{1}$ be the set of zeros and poles
of $f(x)$ in $\mathbb{F}_{q^{m}}$ and $U=U_{1}\cup\\{0\\}$. Let
$l_{1},l_{2}$ be divisors of $q^{m}-1$ and $g$ be a divisor of $x^{m}-1$. Then
by definition
$N_{f,a,n}(l_{1},l_{2},g)=\sum_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\rho_{l_{1}}(\alpha)\rho_{l_{2}}(f(\alpha))\tau_{a}(\alpha^{-1})\kappa_{g}(\alpha)$
Now, using (2.1), (2.2) and (2.3),
$\displaystyle
N_{f,a,n}(l_{1},l_{2},g)=\frac{\theta(l_{1})\theta(l_{2})\Theta(g)}{q}\sum\limits_{\begin{subarray}{c}d_{1}|l_{1},d_{2}|l_{2}\\\
h|g\end{subarray}}\frac{\mu(d_{1})}{\phi(d_{1})}\frac{\mu(d_{2})}{\phi(d_{2})}\frac{\mu^{\prime}(h)}{\Phi_{q}(h)}\sum\limits_{\chi_{d_{1}},\chi_{d_{2}},\psi_{h}}\chi_{f,a}(d_{1},d_{2},h),$
(3.2)
where
$\chi_{f,a}(d_{1},d_{2},h)=\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{d_{1}}(\alpha)\chi_{d_{2}}(f(\alpha))\psi_{h}(\alpha)\hat{\psi_{0}}(u\alpha^{-1})$.
Since $\psi_{h}$ is an additive character of $\mathbb{F}_{q^{m}}$ and
$\hat{\psi_{0}}$ is canonical additive character of $\mathbb{F}_{q^{m}}$,
therefore there exists $v\in\mathbb{F}_{q^{m}}$ such that
$\psi_{h}(\alpha)=\hat{\psi_{0}}(v\alpha)$. Hence
$\chi_{f,a}(d_{1},d_{2},h)=\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{d_{1}}(\alpha)\chi_{d_{2}}(f(\alpha))\hat{\psi_{0}}(v\alpha+u\alpha^{-1})$.
At this point, we claim that if $(d_{1},d_{2},h)\neq(1,1,1)$, where the third $1$
denotes the unity of $\mathbb{F}_{q}[x]$, then
$|\chi_{f,a}(d_{1},d_{2},h)|\leq(n+2)q^{\frac{m}{2}+1}$. To see the claim,
first suppose $d_{2}=1$, then
$\chi_{f,a}(d_{1},d_{2},h)=\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{d_{1}}(\alpha)\hat{\psi_{0}}(v\alpha+u\alpha^{-1})$. Here, if
$vx+ux^{-1}\neq r(x)^{q^{m}}-r(x)$ for any $r(x)\in\mathbb{F}_{q^{m}}(x)$ then
by Lemma 2.3 $|\chi_{f,a}(d_{1},d_{2},h)|\leq
2q^{\frac{m}{2}+1}+(|U|-1)q\leq(n+2)q^{\frac{m}{2}+1}$. Also, if
$vx+ux^{-1}=r(x)^{q^{m}}-r(x)$ for some $r(x)\in\mathbb{F}_{q^{m}}(x)$ then,
following [15], this is possible only when $u=v=0$, which implies
$|\chi_{f,a}(d_{1},d_{2},h)|\leq|U|q<(n+2)q^{\frac{m}{2}+1}$.
Now suppose $d_{2}>1$. Let $d$ be the least common multiple of $d_{1}$ and
$d_{2}$. Then [12] implies that there exists a character $\chi_{d}$ of order
$d$ such that $\chi_{d_{2}}=\chi_{d}^{d/d_{2}}$. Also, there is an integer
$0\leq k<q^{m}-1$ such that $\chi_{d_{1}}=\chi_{d}^{k}$. Consequently,
$\chi_{f,a}(d_{1},d_{2},h)=\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{d}(\alpha^{k}f(\alpha)^{d/d_{2}})\hat{\psi_{0}}(v\alpha+u\alpha^{-1})$.
At this moment, first suppose $vx+ux^{-1}\neq r(x)^{q^{m}}-r(x)$ for any
$r(x)\in\mathbb{F}_{q^{m}}(x)$. Then Lemma 2.3 implies that
$|\chi_{f,a}(d_{1},d_{2},h)|\leq(n+2)q^{\frac{m}{2}+1}$. Also, if
$vx+ux^{-1}=r(x)^{q^{m}}-r(x)$ for some $r(x)\in\mathbb{F}_{q^{m}}(x)$, then
following [15] we get $u=v=0$. Therefore,
$\chi_{f,a}(d_{1},d_{2},h)=\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{d}(\alpha^{k}f(\alpha)^{d/d_{2}})$. Here, if $x^{k}f(x)^{d/d_{2}}\neq
r(x)^{d}$ for any $r(x)\in\mathbb{F}_{q^{m}}(x)$, then using Lemma 2.2 we get
$|\chi_{f,a}(d_{1},d_{2},h)|\leq nq^{\frac{m}{2}+1}<(n+2)q^{\frac{m}{2}+1}$.
However, $x^{k}f(x)^{d/d_{2}}=r(x)^{d}$ for some
$r(x)\in\mathbb{F}_{q^{m}}(x)$ would imply that $f$ is exceptional (see [8]), a contradiction.
Hence, from the above discussion along with (3.2), we get
$\begin{array}[]{l}N_{f,a,n}(l_{1},l_{2},g)\geq\frac{\theta(l_{1})\theta(l_{2})\Theta(g)}{q}\big{(}q^{m}-|U|-(n+2)q^{\frac{m}{2}+1}(W(l_{1})W(l_{2})W(g)-1)\big{)}\\ \qquad\geq\frac{\theta(l_{1})\theta(l_{2})\Theta(g)}{q}\big{(}q^{m}-(n+1)-(n+2)q^{\frac{m}{2}+1}(W(l_{1})W(l_{2})W(g)-1)\big{)}\\ \qquad\geq\frac{\theta(l_{1})\theta(l_{2})\Theta(g)}{q}\big{(}q^{m}-(n+2)q^{\frac{m}{2}+1}W(l_{1})W(l_{2})W(g)\big{)}.\end{array}$ (3.3)
Thus, if $q^{\frac{m}{2}-1}>(n+2)W(l_{1})W(l_{2})W(g)$, then
$N_{f,a,n}(l_{1},l_{2},g)>0$ for all $f(x)\in S_{q,m}(n)$ and prescribed
$a\in\mathbb{F}_{q}$. The result now follows by taking $l_{1}=l_{2}=q^{m}-1$
and $g=x^{m}-1$. ∎
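Condition (3.1) is completely effective and can be tested mechanically. The sketch below is an added illustration (restricted to prime $q$ for the polynomial factorization, as noted earlier) for a few pairs $(q,m)$ with $n=2$:

```python
from sympy import Poly, factorint, symbols

x = symbols('x')

def condition_31(q, m, n):
    """Test (3.1): q^(m/2-1) > (n+2) * W(q^m-1)^2 * W(x^m-1), q prime."""
    W_int = 2 ** len(factorint(q**m - 1))                           # W(q^m - 1)
    W_pol = 2 ** len(Poly(x**m - 1, x, modulus=q).factor_list()[1]) # W(x^m - 1)
    return q ** (m / 2 - 1) > (n + 2) * W_int**2 * W_pol

for m in (7, 13, 19, 25):
    print(m, condition_31(5, m, 2))   # True once the exponent outgrows the divisor counts
```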
## 4 Sieving Results
Here we state some results whose proofs have been omitted, as they follow along
the lines of the results in [10] and have been used frequently in [13, 8, 10,
14, 2].
###### Lemma 4.1.
Let $k\text{ and }P$ be co-prime positive integers and
$g,G\in\mathbb{F}_{q}[x]$ be co-prime polynomials. Also, let
$\\{p_{1},p_{2},\cdots,p_{r}\\}$ be the collection of all prime divisors of
$P$, and $\\{g_{1},g_{2},\cdots,g_{s}\\}$ contains all the irreducible factors
of $G$. Then
$\displaystyle
N_{f,a,n}(kP,kP,gG)\geq\sum\limits_{i=1}^{r}N_{f,a,n}(kp_{i},k,g)+\sum\limits_{i=1}^{r}N_{f,a,n}(k,kp_{i},g)$
$\displaystyle+\sum\limits_{i=1}^{s}N_{f,a,n}(k,k,gg_{i})-(2r+s-1)N_{f,a,n}(k,k,g).$
###### Lemma 4.2.
Let $l,m,q\in\mathbb{N}$, $g\in\mathbb{F}_{q}[x]$ be such that $q$ is a prime
power, $m\geq 3$ and $l|q^{m}-1$, $g|x^{m}-1$. Let $c$ be a prime number which
divides $q^{m}-1$ but not $l$, and let $e$ be an irreducible polynomial dividing
$x^{m}-1$ but not $g$. Then
$\displaystyle|N_{f,a,n}(cl,l,g)-\theta(c)N_{f,a,n}(l,l,g)|\leq(n+2)\theta(c)\theta(l)^{2}\Theta(g)W(l)^{2}W(g)q^{\frac{m}{2}},$
$\displaystyle|N_{f,a,n}(l,cl,g)-\theta(c)N_{f,a,n}(l,l,g)|\leq(n+2)\theta(c)\theta(l)^{2}\Theta(g)W(l)^{2}W(g)q^{\frac{m}{2}}$
and
$\displaystyle|N_{f,a,n}(l,l,eg)-\Theta(e)N_{f,a,n}(l,l,g)|\leq(n+2)\theta(l)^{2}\Theta(e)\Theta(g)W(l)^{2}W(g)q^{\frac{m}{2}}.$
###### Theorem 4.1.
Let $l,m,q\in\mathbb{N}$, $g\in\mathbb{F}_{q}[x]$ be such that $q$ is a prime
power, $m\geq 3$ and $l|q^{m}-1$, $g|x^{m}-1$. Also, let
$\\{p_{1},p_{2},\cdots,p_{r}\\}$ be the collection of primes which divide
$q^{m}-1$ but not $l$, and let $\\{g_{1},g_{2},\cdots,g_{s}\\}$ be the irreducible
polynomials dividing $x^{m}-1$ but not $g$. Suppose
$\delta=1-2\sum\limits_{i=1}^{r}\frac{1}{p_{i}}-\sum\limits_{i=1}^{s}\frac{1}{q^{\deg(g_{i})}}>0$
and $\Delta=\frac{2r+s-1}{\delta}+2$. If $q^{\frac{m}{2}-1}>(n+2)\Delta
W(l)^{2}W(g)$ then $(q,m)\in T_{n}.$
Now, we present a more effective sieving technique than Theorem 4.1, which is
an extension of the result in [6]. For this, we adopt some notations and
conventions from [6], as follows. Let $\text{Rad}(q^{m}-1)=kPL$, where $k$ is
the product of the smallest prime divisors of $q^{m}-1$, $L$ is the product of
the large prime divisors of $q^{m}-1$, written $L=l_{1}\cdot l_{2}\cdots l_{t}$,
and the rest of the prime divisors of $q^{m}-1$ lie in $P$ and are denoted by
$p_{1},p_{2},\cdots,p_{r}$. Similarly, $\text{Rad}(x^{m}-1)=gGH$, where $g$ is
the product of the irreducible factors of $x^{m}-1$ of least degree, the
irreducible factors of large degree are factors of $H$ and are denoted by
$h_{1},h_{2},\cdots,h_{u}$, and the rest lie in $G$ and are denoted by
$g_{1},g_{2},\cdots,g_{s}$.
###### Theorem 4.2.
Let $m,q\in\mathbb{N}$ be such that $q$ is a prime power and $m\geq 3$. Using the
above notations, let $\text{Rad}(q^{m}-1)=kPL$, $\text{Rad}(x^{m}-1)=gGH$,
$\delta=1-2\sum\limits_{i=1}^{r}\frac{1}{p_{i}}-\sum\limits_{i=1}^{s}\frac{1}{q^{\deg(g_{i})}},\epsilon_{1}=\sum\limits_{i=1}^{t}\frac{1}{l_{i}},\leavevmode\nobreak\
\epsilon_{2}=\sum\limits_{i=1}^{u}\frac{1}{q^{\deg(h_{i})}}\text{ and
}\delta\theta(k)^{2}\Theta(g)-(2\epsilon_{1}+\epsilon_{2})>0$. Then
$q^{\frac{m}{2}-1}>(n+2)[\theta(k)^{2}\Theta(g)W(k)^{2}W(g)(2r+s-1+2\delta)+(t-\epsilon_{1})+(2/(n+2))(u-\epsilon_{2})\\\
+(n/(n+2))(1/q^{m/2})(t+u-\epsilon_{1}-\epsilon_{2})]/[\delta\theta(k)^{2}\Theta(g)-(2\epsilon_{1}+\epsilon_{2})]$
(4.1)
implies $(q,m)\in T_{n}$.
###### Proof.
Clearly,
$N_{f,a,n}(q^{m}-1,q^{m}-1,x^{m}-1)=N_{f,a,n}(kPL,kPL,gGH)\geq
N_{f,a,n}(kP,kP,gG)\\\ +N_{f,a,n}(L,L,H)-N_{f,a,n}(1,1,1).$ (4.2)
Further, by Lemma 4.1
$N_{f,a,n}(kP,kP,gG)\geq\delta N_{f,a,n}(k,k,g)+\sum\limits_{i=1}^{r}\\{N_{f,a,n}(kp_{i},k,g)-\theta(p_{i})N_{f,a,n}(k,k,g)\\}\\ +\sum\limits_{i=1}^{r}\\{N_{f,a,n}(k,kp_{i},g)-\theta(p_{i})N_{f,a,n}(k,k,g)\\}+\sum\limits_{i=1}^{s}\\{N_{f,a,n}(k,k,gg_{i})-\Theta(g_{i})N_{f,a,n}(k,k,g)\\}.$
Using (3.3) and Lemma 4.2, we get
$\displaystyle
N_{f,a,n}(kP,kP,gG)\geq\delta\theta(k)^{2}\Theta(g)\big{(}q^{m-1}-(n+2)W(k)^{2}W(g)q^{\frac{m}{2}}\big{)}$
$\displaystyle-(n+2)\theta(k)^{2}\Theta(g)W(k)^{2}W(g)\big{(}\sum\limits_{i=1}^{r}2\theta(p_{i})+\sum\limits_{i=1}^{s}\Theta(g_{i})\big{)}q^{\frac{m}{2}}$
$\displaystyle=\theta(k)^{2}\Theta(g)\big{(}\delta
q^{m-1}-(n+2)(2r+s-1+2\delta)W(k)^{2}W(g)q^{\frac{m}{2}}\big{)}.$ (4.3)
Again, by Lemma 4.1
$N_{f,a,n}(L,L,H)-N_{f,a,n}(1,1,1)\geq\sum\limits_{i=1}^{t}N_{f,a,n}(l_{i},1,1)+\sum\limits_{i=1}^{t}N_{f,a,n}(1,l_{i},1)\\\
+\sum\limits_{i=1}^{u}N_{f,a,n}(1,1,h_{i})-(2t+u)N_{f,a,n}(1,1,1)$
$=\sum\limits_{i=1}^{t}\\{N_{f,a,n}(l_{i},1,1)-\theta(l_{i})N_{f,a,n}(1,1,1)\\}+\sum\limits_{i=1}^{t}\\{N_{f,a,n}(1,l_{i},1)-\theta(l_{i})N_{f,a,n}(1,1,1)\\}\\\
+\sum\limits_{i=1}^{u}\\{N_{f,a,n}(1,1,h_{i})-\Theta(h_{i})N_{f,a,n}(1,1,1)\\}-(2\epsilon_{1}+\epsilon_{2})N_{f,a,n}(1,1,1)$
(4.4)
By (3.2), for a prime divisor $l$ of $q^{m}-1$,
$|N_{f,a,n}(l,1,1)-\theta(l)N_{f,a,n}(1,1,1)|=\frac{\theta(l)}{\phi(l)q}|\sum\limits_{\chi_{l}}\chi_{f,a}(l,1,1)|,$
where
$\displaystyle|\chi_{f,a}(l,1,1)|=|\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{l}(\alpha)\hat{\psi_{0}}(u\alpha^{-1})|\leq q^{\frac{m}{2}+1}+nq.$
Hence,
$|N_{f,a,n}(l,1,1)-\theta(l)N_{f,a,n}(1,1,1)|\leq\theta(l)(q^{\frac{m}{2}}+n).$
Similarly,
$\displaystyle|\chi_{f,a}(1,l,1)|=|\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\chi_{l}(f(\alpha))\hat{\psi_{0}}(u\alpha^{-1})|\leq(n+1)q^{\frac{m}{2}+1},$
which further implies
$|N_{f,a,n}(1,l,1)-\theta(l)N_{f,a,n}(1,1,1)|\leq\theta(l)(n+1)q^{\frac{m}{2}}$.
Also, for an irreducible divisor $h$ of $x^{m}-1$,
$|\chi_{f,a}(1,1,h)|=|\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\psi_{h}(\alpha)\hat{\psi_{0}}(u\alpha^{-1})|\\
=|\sum\limits_{u\in\mathbb{F}_{q}}\psi_{0}(-au)\sum\limits_{\alpha\in\mathbb{F}_{q^{m}}\setminus
U}\hat{\psi_{0}}(v\alpha+u\alpha^{-1})|\leq 2q^{\frac{m}{2}+1}+nq.$
Therefore,
$|N_{f,a,n}(1,1,h)-\Theta(h)N_{f,a,n}(1,1,1)|\leq\Theta(h)(q^{\frac{m}{2}}+n)$.
Using these bounds in (4.4), we have
$N_{f,a,n}(L,L,H)-N_{f,a,n}(1,1,1)\geq-\sum\limits_{i=1}^{t}\theta(l_{i})(q^{\frac{m}{2}}+n)-\sum\limits_{i=1}^{t}\theta(l_{i})(n+1)q^{\frac{m}{2}}-\sum\limits_{i=1}^{u}\Theta(h_{i})(2q^{\frac{m}{2}}+n)-(2\epsilon_{1}+\epsilon_{2})N_{f,a,n}(1,1,1).$
Now, $N_{f,a,n}(1,1,1)\leq q^{m-1}$ together with
$\sum\limits_{i=1}^{t}\theta(l_{i})=(t-\epsilon_{1})$ and
$\sum\limits_{i=1}^{u}\Theta(h_{i})=(u-\epsilon_{2})$ implies
$N_{f,a,n}(L,L,H)-N_{f,a,n}(1,1,1)\geq-\\{(n+2)(t-\epsilon_{1})+2(u-\epsilon_{2})\\}q^{\frac{m}{2}}\\\
-n(t+u-\epsilon_{1}-\epsilon_{2})-(2\epsilon_{1}+\epsilon_{2})q^{m-1}.$ (4.5)
Now, using (4.3) and (4.5) in (4.2), we get
$N_{f,a,n}(q^{m}-1,q^{m}-1,x^{m}-1)\geq\\{\delta\theta(k)^{2}\Theta(g)-(2\epsilon_{1}+\epsilon_{2})\\}q^{m-1}-(n+2)\theta(k)^{2}\Theta(g)(2r+s-1+2\delta)W(k)^{2}W(g)q^{\frac{m}{2}}-\\{(n+2)(t-\epsilon_{1})+2(u-\epsilon_{2})\\}q^{\frac{m}{2}}-n(t+u-\epsilon_{1}-\epsilon_{2})\\ =q^{\frac{m}{2}}\big{[}\big{(}\delta\theta(k)^{2}\Theta(g)-(2\epsilon_{1}+\epsilon_{2})\big{)}q^{\frac{m}{2}-1}-(n+2)\\{\theta(k)^{2}\Theta(g)(2r+s-1+2\delta)W(k)^{2}W(g)+(t-\epsilon_{1})+(2/(n+2))(u-\epsilon_{2})+(n/(n+2))(1/q^{m/2})(t+u-\epsilon_{1}-\epsilon_{2})\\}\big{]}.$
Thus
$q^{\frac{m}{2}-1}>(n+2)[\theta(k)^{2}\Theta(g)W(k)^{2}W(g)(2r+s-1+2\delta)+(t-\epsilon_{1})+(2/(n+2))(u-\epsilon_{2})\\\
+(n/(n+2))(1/q^{m/2})(t+u-\epsilon_{1}-\epsilon_{2})]/[\delta\theta(k)^{2}\Theta(g)-(2\epsilon_{1}+\epsilon_{2})]$
implies $N_{f,a,n}(q^{m}-1,q^{m}-1,x^{m}-1)>0$ i.e., $(q,m)\in T_{n}$.
∎
It is easy to observe that Theorem 4.1 is a special case of Theorem 4.2,
obtained by setting $t=u=\epsilon_{1}=\epsilon_{2}=0$.
## 5 Working Example
The results discussed above are applicable for an arbitrary natural number $n$
and finite fields $\mathbb{F}_{q^{m}}$ of any prime characteristic. However, to
demonstrate the application of the above results and keep the calculations
uncomplicated, we assume that $q=5^{k}$ for some $k\in\mathbb{N}$ and $n=2$,
and work on the set $T_{2}$. Precisely, in this section, we prove the following
result.
###### Theorem 5.1.
Let $q=5^{k}$ for some $k\in\mathbb{N}$ and let $m\geq 3$ be an integer. Then
$(q,m)\in T_{2}$ unless one of the following holds:
1. $q=5,5^{2},5^{3},5^{4},5^{5},5^{6},5^{8},5^{10}$ and $m=3$;
2. $q=5,5^{2},5^{3},5^{4}$ and $m=4$;
3. $q=5,5^{2}$ and $m=5,6;$
4. $q=5$ and $m=7,8,10,12.$
We shall divide the proof into two parts: in the first part we work on $m\geq 5$,
and in the second we consider $m=3,4$. For the further calculations and to apply
the previous results, we shall need the following lemma, which can be developed
from [5, Lemma 6.2].
###### Lemma 5.1.
Let $M$ be a positive integer, then $W(M)<4515\times M^{1/8}.$
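Lemma 5.1 can be spot-checked numerically over a small range (an added illustration, of course not a proof of the bound):

```python
from sympy import factorint

# spot-check W(M) < 4515 * M^(1/8) for M up to 10^5 (illustration, not a proof)
assert all(2 ** len(factorint(M)) < 4515 * M ** 0.125 for M in range(2, 10**5))
```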
### 5.1 Part 1.
In this part, we assume $m\geq 5$ and write $m=m^{\prime}5^{j}$, where $j\geq
1$ is an integer and $5\nmid m^{\prime}$. Then
$\Omega_{q}(x^{m}-1)=\Omega_{q}(x^{m^{\prime}}-1)$ which further implies
$W(x^{m}-1)=W(x^{m^{\prime}}-1)$. Further, we shall divide the discussion in
two cases.
$\bullet$ $m^{\prime}|q-1$
$\bullet$ $m^{\prime}\nmid q-1$
Case 1. $m^{\prime}|q-1$.
Clearly [12, Theorem 2.47] implies that
$\Omega_{q}(x^{m^{\prime}}-1)=m^{\prime}$. Let $l=q^{m}-1\text{ and }g=1$ in
Theorem 4.1 then $\Delta=\frac{q^{2}+(a-3)q+2}{(a-1)q+1}$, where
$a=\frac{q-1}{m^{\prime}},$ which further implies $\Delta<q^{2}$. Hence
$(q,m)\in T_{2}$ if $q^{\frac{m}{2}-3}>4W(q^{m}-1)^{2}.$ Further, by Lemma
5.1, it is sufficient if $q^{\frac{m}{4}-3}>4\cdot(4515)^{2},$ which holds for
$q\geq 125$ and for all $m\geq 28$. In particular, for $q\geq 125$ and for all
$m^{\prime}\geq 28$. Next, we examine all the cases where $m^{\prime}\leq 27$.
For this we set $l=q^{m}-1$ and $g=1$ in Theorem 4.1 unless mentioned otherwise. Then
$\delta=1-\frac{m^{\prime}}{q}$ and
$\Delta=2+\frac{(m^{\prime}-1)q}{q-m^{\prime}}$
1\. $m^{\prime}=1.$ Here $m=5^{j}$ for some integer $j\geq 1$ and $\Delta=2$.
Then by Theorem 4.1 it is sufficient if $q^{\frac{m}{2}-1}>4\cdot 2\cdot
W(q^{m}-1)^{2}$. Again Lemma 5.1 implies $(q,m)\in T_{2}$ if
$q^{\frac{m}{4}-1}>8\cdot(4515)^{2}$ i.e.,
$q^{\frac{5^{j}}{4}-1}>8\cdot(4515)^{2}$, which holds for all choices of
$(q,m)$ except
$(5,5),(5,5^{2}),(5^{2},5),(5^{2},5^{2}),(5^{3},5),(5^{4},5),\cdots,(5^{46},5),$
which are $48$ in number. For these, we checked
$q^{\frac{m}{2}-1}>4\cdot 2\cdot W(q^{m}-1)^{2}$ directly by factoring
$q^{m}-1$ and got it verified except for the pairs
$(5,5),(5^{2},5),(5^{3},5),(5^{4},5)$ and $(5^{6},5)$.
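These finitely many verifications amount to factoring $q^{m}-1$. The following sketch (an added illustration) performs the check for a few of the smaller pairs; the largest values, such as $5^{230}-1$, would require precomputed factor tables rather than direct factoring.

```python
from sympy import factorint

# the m' = 1 check: is q^(m/2-1) > 8 * W(q^m-1)^2, with W(k) = 2^omega(k)?
for q, m in [(5, 5), (5**2, 5), (5**3, 5), (5**4, 5), (5, 25)]:
    W2 = (2 ** len(factorint(q**m - 1))) ** 2
    print((q, m), q ** (m / 2 - 1) > 8 * W2)   # False marks a possible exception
```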
2\. $m^{\prime}=2$. In this case, $m=2\cdot 5^{j}$ for some $j\geq 1$ and
$\Delta=2+\frac{q}{q-2}<4$. Similar to the above case, it is sufficient if
$q^{\frac{2\cdot 5^{j}}{4}-1}>16\cdot(4515)^{2}$, which is true except for the $9$
pairs $(5,10),(5,50),(5^{2},10),(5^{3},10),\cdots,(5^{8},10)$; the
verification of $q^{\frac{m}{2}-1}>4\cdot 4\cdot W(q^{m}-1)^{2}$ for these
pairs yields the only possible exceptions $(5,10)$ and $(5^{2},10)$.
Following similar steps for the rest of the values of $m^{\prime}\leq 27$,
we find that there is no exception for many values of $m^{\prime}$. The values of
$m^{\prime}$ with possible exceptional pairs are as below.
3\. $m^{\prime}=4.$ $(5,20)$.
4\. $m^{\prime}=6.$ $(5^{2},6),(5^{4},6)\text{ and }(5^{6},6).$
5\. $m^{\prime}=8.$ $(5^{2},8)$.
Furthermore, for the pairs
$(5^{3},5),(5^{4},5),(5^{6},5),(5^{2},10),(5,20),(5^{4},6),(5^{6},6)$ and
$(5^{2},8)$ Theorem 4.1 holds for some choice of $l$ and $g$ (see Table 1).
Hence, the only remaining possible exceptions in this case are $(5,5),(5^{2},5),(5,10)$
and $(5^{2},6)$.
Table 1
Sr. No. | $(q,m)$ | $l$ | $r$ | $g$ | $s$ | $\delta>$ | $\Delta<$ | $4\Delta W(g)W(l)^{2}<$
---|---|---|---|---|---|---|---|---
1 | $(5^{3},5)$ | 2 | 5 | $1$ | 1 | 0.705298 | 16.178405 | 518
2 | $(5^{4},5)$ | 6 | 6 | $1$ | 1 | 0.581729 | 22.628164 | 2897
3 | $(5^{6},5)$ | 6 | 9 | $1$ | 1 | 0.390631 | 48.079201 | 6155
4 | $(5^{2},10)$ | 6 | 6 | $1$ | 2 | 0.503329 | 27.828038 | 3562
5 | $(5,20)$ | 6 | 6 | $x^{2}+\beta^{3}x+\beta$ | 2 | 0.183329 | 72.910743 | 18666
6 | $(5^{4},6)$ | 6 | 6 | $1$ | 6 | 0.476599 | 37.669274 | 4822
7 | $(5^{6},6)$ | 6 | 9 | $1$ | 6 | 0.330094 | 71.677019 | 9175
8 | $(5^{2},8)$ | 6 | 4 | $1$ | 8 | 0.401942 | 39.318735 | 5033
where $\beta$ is a primitive element of $\mathbb{F}_{5}$.
Case 2. $m^{\prime}\nmid q-1$.
Let the order of $q\bmod m^{\prime}$ be denoted by $b$. Then $b\geq 2$ and the
degree of each irreducible factor of $x^{m^{\prime}}-1$ over $\mathbb{F}_{q}$ is
less than or equal to $b$. Let $M$ denote the number of distinct irreducible
factors of $x^{m}-1$ over $\mathbb{F}_{q}$ of degree less than $b$, and let
$\nu(q,m)$ denote the ratio $\nu(q,m)=\frac{M}{m}$. Then
$m\nu(q,m)=m^{\prime}\nu(q,m^{\prime})$.
For further progress, we need the following two results, which are directly
implied by Proposition $5.3$ of [7] and Lemma 7.2 of [5], respectively.
###### Lemma 5.2.
Let $k,m,q\in\mathbb{N}$ be such that $q=5^{k}$ and $m^{\prime}\nmid q-1.$ In
the notation of Theorem 4.1, let $l=q^{m}-1$ and let $g$ be the product of the
irreducible factors of $x^{m}-1$ of degree less than $b$; then
$\Delta<m^{\prime}$.
###### Lemma 5.3.
Let $m^{\prime}>4$ and $m_{1}=\gcd(q-1,m^{\prime})$. Then the following bounds
hold.
1. For $m^{\prime}=2m_{1}$, $\nu(q,m^{\prime})=\frac{1}{2};$
2. for $m^{\prime}=4m_{1}$, $\nu(q,m^{\prime})=\frac{3}{8};$
3. for $m^{\prime}=6m_{1}$, $\nu(q,m^{\prime})=\frac{13}{36};$
4. otherwise, $\nu(q,m^{\prime})\leq\frac{1}{3}$.
At this point we note that $m^{\prime}=1,2$ and $4$ divide $q-1$ for any
$q=5^{k}$ and have been discussed in the above case, whereas $m^{\prime}=5$ is not
possible. Therefore, in this case we need to discuss $m^{\prime}=3$ and
$m^{\prime}\geq 6$.
First consider $m^{\prime}=3$. Then $m=3\cdot 5^{j}$ for some integer $j\geq
1$. Also, $m^{\prime}\nmid q-1$ implies that if $q=5^{k}$ then $k$ is odd and
$x^{m^{\prime}}-1$ is the product of a linear factor and a quadratic factor.
Thus, $W(x^{m}-1)=W(x^{m^{\prime}}-1)=2^{2}=4$ and (3.1) implies
$(q,m)\in T_{2}$ if $q^{\frac{m}{2}-1}>16\cdot W(q^{m}-1)^{2}.$ By Lemma 5.1,
it is sufficient if $q^{\frac{m}{4}-1}>16\cdot(4515)^{2}$, which holds for
$q=5$ and $m\geq 53$, for $q=125$ and $m\geq 21$, and for $q\geq 5^{5}$ and $m\geq 14$.
Thus, the only possible exceptions are $(5,15)$ and $(125,15)$. For these two
possible exceptions we checked $q^{\frac{m}{2}-1}>16\cdot W(q^{m}-1)^{2}$
directly by factoring $q^{m}-1$ and got it verified for $(125,15)$. Hence the only
possible exception for $m^{\prime}=3$ is $(5,15)$.
Now suppose $m^{\prime}\geq 6$. At this point, in Theorem 4.1 let $l=q^{m}-1$
and $g$ be the product of irreducible factors of $x^{m}-1$ of degree less than
$b$. Therefore, Lemma 5.2 along with Theorem 4.1 implies $(q,m)\in T_{2}$ if
$q^{\frac{m}{2}-1}>4\cdot m^{\prime}\cdot W(q^{m}-1)^{2}\cdot
2^{m^{\prime}\nu(q,m^{\prime})}$. By Lemma 5.1, it is sufficient if
$\displaystyle q^{\frac{m}{4}-1}>4\cdot m\cdot(4515)^{2}\cdot
2^{m\nu(q,m^{\prime})}.$ (5.1)
Further, we shall discuss it in four cases as follows.
1. $m^{\prime}\neq 2m_{1},4m_{1},6m_{1}.$
Here, Lemma 5.3 implies $\nu(q,m^{\prime})\leq\frac{1}{3}$. Using this in (5.1)
we get $(q,m)\in T_{2}$ if $q^{\frac{m}{4}-1}>4\cdot m\cdot(4515)^{2}\cdot
2^{\frac{m}{3}}$, which holds for $q^{m}\geq 5^{145}$. Next, for $q^{m}\leq
5^{144}$, we verified $q^{\frac{m}{2}-1}>4\cdot m\cdot W(q^{m}-1)^{2}\cdot
2^{\frac{m}{3}}$ by factoring $q^{m}-1$ and got a list of $20$ possible
exceptions as follows:
$(5,6),(5,7),(5,9),(5,11),(5,12),(5,13),(5,14),(5,17),(5,18),(5,19),(5,21),\\
(5,22),(5,27),(5,30),(5,36),(5^{2},7),(5^{2},9),(5^{2},11),(5^{3},6),(5^{5},6)$.
2. $m^{\prime}=2m_{1}.$
In this case, $\nu(q,m^{\prime})=\frac{1}{2}$. Therefore, (5.1) implies $(q,m)\in
T_{2}$ if $q^{\frac{m}{4}-1}>4\cdot m\cdot(4515)^{2}\cdot 2^{\frac{m}{2}}$,
which holds for $q=5$ and $m\geq 466$, while for $q\geq 25$ it is sufficient
that $m\geq 56$. Here, for $q=5$, we have $m^{\prime}=8$ only. Thus the possible
exceptions for $q=5$ are $(5,8),(5,40)$ and $(5,200)$. On the other hand, for
$q\geq 25$ and $q^{m}<25^{56}$, along with the above three possible exceptions, we
checked $q^{\frac{m}{2}-1}>4\cdot m\cdot W(q^{m}-1)^{2}\cdot 2^{\frac{m}{2}}$
and got it verified except for $(5,8),(5,40)$ and $(5^{3},8)$.
3. $m^{\prime}=4m_{1}.$
Here, $\nu(q,m^{\prime})=\frac{3}{8}$. Again, (5.1) gives $(q,m)\in T_{2}$ if
$q^{\frac{m}{4}-1}>4\cdot m\cdot(4515)^{2}\cdot 2^{\frac{3m}{8}}$, which is
true for $q^{m}\geq 5^{176}$. On the other side, verification of
$q^{\frac{m}{2}-1}>4\cdot m\cdot W(q^{m}-1)^{2}\cdot 2^{\frac{3m}{8}}$ for
$q^{m}<5^{176}$ provides the only possible exception $(5,16)$.
4. $m^{\prime}=6m_{1}.$
Similar to the above case, we have $\nu(q,m^{\prime})=\frac{13}{36}$, and
$q^{\frac{m}{4}-1}>4\cdot m\cdot(4515)^{2}\cdot 2^{\frac{13m}{36}}$ holds for
$q^{m}\geq 5^{164}$. Also, for $q^{m}<5^{164}$, $q^{\frac{m}{2}-1}>4\cdot
m\cdot W(q^{m}-1)^{2}\cdot 2^{\frac{13m}{36}}$ holds for all $(q,m)$ except
$(5,24)$.
Table 2
Sr. No. | $(q,m)$ | $l$ | $r$ | $g$ | $s$ | $\delta>$ | $\Delta<$ | $4\Delta W(g)W(l)^{2}<$
---|---|---|---|---|---|---|---|---
1 | $(5,11)$ | 2 | 1 | $1$ | 3 | 0.799359 | 7.004009 | 225
2 | $(5,13)$ | 2 | 1 | $1$ | 4 | 0.795199 | 8.287731 | 266
3 | $(5,14)$ | 2 | 4 | $x+1$ | 3 | 0.059683 | 169.55170 | 5426
4 | $(5,17)$ | 2 | 2 | $1$ | 2 | 0.795110 | 8.288442 | 266
5 | $(5,18)$ | 6 | 5 | $1$ | 6 | 0.061578 | 245.59029 | 31436
6 | $(5,19)$ | 2 | 3 | $1$ | 3 | 0.789208 | 12.136745 | 389
7 | $(5,21)$ | 2 | 4 | $1$ | 5 | 0.689908 | 19.393614 | 621
8 | $(5,22)$ | 2 | 5 | $x+1$ | 5 | 0.014867 | 943.67119 | 30198
9 | $(5,27)$ | 2 | 7 | $1$ | 4 | 0.561470 | 32.277659 | 1033
10 | $(5,30)$ | 6 | 9 | $x+1$ | 3 | 0.110695 | 182.67531 | 23383
11 | $(5,36)$ | 6 | 9 | $x^{4}-1$ | 8 | 0.170222 | 148.86660 | 152440
12 | $(5^{2},7)$ | 2 | 4 | 1 | 3 | 0.219683 | 47.520125 | 1521
13 | $(5^{2},9)$ | 6 | 5 | 1 | 5 | 0.421578 | 35.208505 | 4507
14 | $(5^{2},11)$ | 2 | 5 | 1 | 3 | 0.176146 | 70.124930 | 2244
15 | $(5^{3},6)$ | 6 | 5 | 1 | 4 | 0.525578 | 26.734639 | 3423
16 | $(5^{5},6)$ | 6 | 9 | 10 | 4 | 0.390055 | 55.838482 | 7148
17 | $(5,15)$ | 2 | 5 | 1 | 2 | 0.473298 | 25.241167 | 808
18 | $(5,40)$ | 6 | 9 | $x^{2}+\beta^{3}x+\beta$ | 4 | 0.088640 | 238.91192 | 61162
19 | $(5^{3},8)$ | 6 | 6 | 1 | 6 | 0.454072 | 39.438940 | 5049
20 | $(5,16)$ | 6 | 4 | $x+1$ | 7 | 0.038742 | 363.35624 | 46510
21 | $(5,24)$ | 6 | 6 | $x^{4}-1$ | 10 | 0.086200 | 245.61740 | 251513
Next, we refer to Table 2 to note that Theorem 4.1 holds for the pairs
$(5,11)$, $(5,13)$, $(5,14)$, $(5,15)$, $(5,16)$, $(5,17)$, $(5,18)$,
$(5,19)$, $(5,21)$, $(5,22)$, $(5,24)$, $(5,27)$, $(5,30)$, $(5,36)$,
$(5,40)$, $(5^{2},7)$, $(5^{2},9)$, $(5^{2},11)$, $(5^{3},6)$, $(5^{3},8)$,
$(5^{5},6)$. Thus, the only remaining possible exceptions in the case $m^{\prime}\nmid
q-1$ are $(5,6)$, $(5,7)$, $(5,8)$, $(5,9)$, and $(5,12).$
### 5.2 Part 2.
In this part we shall consider $m=3,4.$ The following result, which follows
along the lines of [6, Lemma 51], will be required for further calculation.
###### Lemma 5.4.
Let $k\in\mathbb{N}$ such that $\omega(k)\geq 2828$. Then
$W(k)<k^{\frac{1}{13}}.$
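The constant $2828$ is extremal here: the smallest $k$ with $\omega(k)=2828$ is the product of the $2828$ smallest primes, and the inequality is tight in that case. The sketch below (an added illustration) compares the two sides numerically in this worst case.

```python
import math
from sympy import prime

# extremal case: k = product of the 2828 smallest primes, the minimal k with omega(k) = 2828
log_k = sum(math.log(prime(i)) for i in range(1, 2829))
print(2828 * math.log(2), log_k / 13)   # W(k) < k^(1/13) iff the first value is smaller
# every later prime exceeds 2^13, so log(k)/13 gains more than log(2) per extra
# prime factor and the inequality persists as omega(k) grows beyond 2828
```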
Also, $W(x^{m}-1)\leq 16$. Now, first assume $\omega(q^{m}-1)\geq 2828$; then
(3.1) and Lemma 5.4 together imply $(q,m)\in T_{2}$ if
$q^{\frac{m}{2}-1}>64\cdot q^{\frac{2m}{13}}$, i.e., $q^{\frac{9m}{26}-1}>64$
or $q^{m}>64^{\frac{26m}{9m-26}}$; this is sufficient if $q^{m}>64^{78}$, which is
true whenever $\omega(q^{m}-1)\geq 2828$. To make further progress we follow [13].
Next, assume $88\leq\omega(q^{m}-1)\leq 2827$. In Theorem 4.1, let $g=x^{m}-1$
and let $l$ be the product of the $88$ smallest primes dividing $q^{m}-1$, i.e.,
$W(l)=2^{88}$. Then $r\leq 2739$, and $\delta$ is at least its value when
$\\{p_{1},p_{2},\cdots,p_{2739}\\}=\\{461,463,\cdots,25667\\}$. This gives
$\delta>0.0041806$ and $\Delta<1.3101\times 10^{6}$, hence $4\Delta
W(g)W(l)^{2}<8.0309\times 10^{60}=R$ (say). By Theorem 4.1, $(q,m)\in T_{2}$ if
$q^{\frac{m}{2}-1}>R$ or $q^{m}>R^{\frac{2m}{m-2}}$. But $m\geq 3$ implies
$\frac{2m}{m-2}\leq 6$. Therefore, if $q^{m}>R^{6}$, i.e., $q^{m}>2.6828\times
10^{365}$, then $(q,m)\in T_{2}$. Hence, $\omega(q^{m}-1)\geq 152$ gives
$(q,m)\in T_{2}$. Repeating this process of Theorem 4.1 for the values in
Table 3 implies $(q,m)\in T_{2}$ if $q^{\frac{m}{2}-1}>889903387$. Thus, for
$m=3$ it is sufficient if $q>(889903387)^{2}$, and for $m=4$ we need
$q>889903387$. Hence, the only possible exceptions are
$(5,3),(5^{2},3),\cdots,(5^{25},3)$ and $(5,4),(5^{2},4),\cdots,(5^{12},4)$.
However, Table 4 implies that Theorem 4.1 holds for
$(5^{9},3),(5^{11},3),(5^{12},3),(5^{13},3),\cdots,(5^{25},3)$ and
$(5^{6},4),(5^{7},4),\cdots,(5^{12},4)$. Thus, only possible exceptions here
are $(5,3),(5^{2},3),\cdots,(5^{8},3)$ and $(5^{10},3)$, and
$(5,4),(5^{2},4),\cdots,(5^{5},4)$.
Table 3
Sr. No. | $a\leq\omega(q^{m}-1)\leq b$ | $W(l)$ | $\delta>$ | $\Delta<$ | $4\Delta W(g)W(l)^{2}<$
---|---|---|---|---|---
1 | $a=17,\leavevmode\nobreak\ \leavevmode\nobreak\ b=151$ | $2^{17}$ | $0.0347407$ | $7687.5008$ | $8.4526\times 10^{15}$
2 | $a=9,\leavevmode\nobreak\ \leavevmode\nobreak\ b=51$ | $2^{9}$ | $0.0550187$ | $1510.5788$ | $2.5344\times 10^{10}$
3 | $a=7,\leavevmode\nobreak\ \leavevmode\nobreak\ b=37$ | $2^{7}$ | $0.0064402$ | $9163.1796$ | $9608289244$
4 | $a=7,\leavevmode\nobreak\ \leavevmode\nobreak\ b=36$ | $2^{7}$ | $0.0191790$ | $2973.9903$ | $3118453847$
5 | $a=7,\leavevmode\nobreak\ \leavevmode\nobreak\ b=34$ | $2^{7}$ | $0.0458469$ | $1158.0218$ | $1214272852$
6 | $a=7,\leavevmode\nobreak\ \leavevmode\nobreak\ b=33$ | $2^{7}$ | $0.0602354$ | $848.6790$ | $889903387$
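The first sieving step above is easy to reproduce numerically. In the sketch below (an added illustration), the $88$ smallest primes are absorbed into $l$, so the worst-case primes $p_{1},\ldots,p_{2739}$ are the $89$th through $2827$th primes:

```python
from sympy import prime

ps = [prime(i) for i in range(89, 2828)]   # per the paper: ps[0] = 461, ps[-1] = 25667
r, s = len(ps), 0                          # r = 2739; g = x^m - 1 leaves s = 0
delta = 1 - 2 * sum(1 / p for p in ps)
Delta = (2 * r + s - 1) / delta + 2
R = 4 * Delta * 16 * 2**176                # 4 * Delta * W(g) * W(l)^2 with W(g) = 16, W(l) = 2^88
print(delta, Delta, R)                     # the paper reports delta > 0.0041806,
                                           # Delta < 1.3101e6 and R < 8.0309e60
```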
Table 4
Sr. No. | $(q,m)$ | $l$ | $r$ | $g$ | $s$ | $\delta>$ | $\Delta<$ | $4\Delta W(g)W(l)^{2}<$
---|---|---|---|---|---|---|---|---
1 | $(5^{9},3)$ | 2 | 7 | 1 | 2 | 0.801533 | 20.714128 | 663
2 | $(5^{11},3)$ | 2 | 4 | 1 | 2 | 0.925433 | 11.725177 | 376
3 | $(5^{12},3)$ | 6 | 9 | 1 | 3 | 0.330478 | 62.518314 | 8003
4 | $(5^{13},3)$ | 2 | 4 | 1 | 2 | 0.910167 | 11.888295 | 381
5 | $(5^{14},3)$ | 6 | 10 | 1 | 3 | 0.508443 | 45.269297 | 5795
6 | $(5^{15},3)$ | 2 | 10 | 1 | 2 | 0.603902 | 36.773815 | 1177
7 | $(5^{16},3)$ | 6 | 9 | 1 | 3 | 0.368379 | 56.291827 | 7206
8 | $(5^{17},3)$ | 2 | 6 | 1 | 2 | 0.930565 | 15.970005 | 512
9 | $(5^{18},3)$ | 6 | 12 | 1 | 3 | 0.499055 | 54.098369 | 6925
10 | $(5^{19},3)$ | 2 | 5 | 1 | 2 | 0.924693 | 13.895837 | 445
11 | $(5^{20},3)$ | 6 | 15 | 1 | 3 | 0.183646 | 176.24807 | 22560
12 | $(5^{21},3)$ | 2 | 9 | 1 | 2 | 0.822416 | 25.102645 | 804
13 | $(5^{22},3)$ | 6 | 10 | 1 | 3 | 0.522529 | 44.102865 | 5646
14 | $(5^{23},3)$ | 2 | 7 | 1 | 2 | 0.920550 | 18.294603 | 586
15 | $(5^{24},3)$ | 6 | 14 | 1 | 3 | 0.296682 | 103.11815 | 13200
16 | $(5^{25},3)$ | 2 | 14 | 1 | 2 | 0.666688 | 45.498589 | 1456
17 | $(5^{6},4)$ | 6 | 6 | 1 | 4 | 0.485944 | 32.867712 | 4208
18 | $(5^{7},4)$ | 2 | 6 | 1 | 4 | 0.105913 | 143.62473 | 4596
19 | $(5^{8},4)$ | 2 | 7 | 1 | 4 | 0.054494 | 313.95724 | 10047
20 | $(5^{9},4)$ | 6 | 9 | 1 | 4 | 0.330476 | 65.544620 | 8390
21 | $(5^{10},4)$ | 6 | 9 | 1 | 4 | 0.568640 | 38.930216 | 4984
22 | $(5^{11},4)$ | 2 | 8 | 1 | 4 | 0.039829 | 479.03888 | 15330
23 | $(5^{12},4)$ | 6 | 9 | 1 | 4 | 0.368379 | 59.006421 | 7553
Further, for all the remaining possible exceptions we checked Theorem 4.2, and
it is verified for $(5^{7},3),(5^{5},4)$ and $(5,9)$ with the values in
Table 5.
Table 5
Sr. No. | $(q,m)$ | $k$ | $P$ | $L$ | $g$ | $G$ | $H$ | $R^{\prime}<$
---|---|---|---|---|---|---|---|---
1 | $(5,9)$ | 2 | 589 | 829 | $x-1$ | $x^{2}+x+1$ | $x^{6}+x^{3}+1$ | 269
2 | $(5^{7},3)$ | 2 | 229469719 | 519499 | $x-1$ | 1 | $x^{2}+x+1$ | 262
3 | $(5^{5},4)$ | 6 | 216878233 | 9161 | $x+1$ | $x^{2}+x+\beta^{3}$ | $x+\beta^{3}$ | 2788
where $R^{\prime}$ represents the right-hand-side value of (4.1). Hence, all
the results from Part 1 and Part 2 collectively imply Theorem 5.1.
## References
* [1] G. B. Agnew, R. C. Mullin, I. M. Onyszchuk, and S. A. Vanstone. An implementation for a fast public-key cryptosystem. J. Cryptology, 3(2):63–79, 1991.
* [2] Anju and R. K. Sharma. Existence of some special primitive normal elements over finite fields. Finite Fields Appl., 46:280–303, 2017.
* [3] A. Booker, S. D. Cohen, N. Sutherland, and T. Trudgian. Primitive values of quadratic polynomials in a finite field. Math. Comp., 88(318):1903–1912, 2019.
* [4] W. S. Chou and S. D. Cohen. Primitive elements with zero traces. Finite Fields Appl., 7(1):125–141, 2001.
* [5] S. D. Cohen. Pairs of primitive elements in fields of even order. Finite Fields Appl., 28:22–42, 2014.
* [6] S. D. Cohen and A. Gupta. Primitive element pairs with a prescribed trace in the quartic extension of a finite field. J. Algebra Appl., 2020, DOI: https://doi.org/10.1142/S0219498821501681.
* [7] S. D. Cohen and S. Huczynska. The primitive normal basis theorem–without a computer. Lond. Math. Soc., 67(2):41–56, 2003.
* [8] S. D. Cohen, H. Sharma, and R. Sharma. Primitive values of rational functions at primitive elements of a finite field. J. Number Theory, 219:237–246, 2021.
* [9] L. Fu and D. Wan. A class of incomplete character sums. Q. J. Math., (4):1195–1211, 2018.
* [10] A. Gupta, R. K. Sharma, and S. D. Cohen. Primitive element pairs with one prescribed trace over a finite field. Finite Fields Appl., 54:1–14, 2018.
* [11] H. W. Lenstra Jr. and R. J. Schoof. Primitive normal bases for finite fields. Math. Comp., 48(177):217–231, 1987.
* [12] R. Lidl and H. Niederreiter. Finite fields, volume 20. Cambridge Univ. Press, Cambridge (UK), 1997.
* [13] H. Sharma and R. K. Sharma. Existence of primitive pairs with prescribed traces over finite fields. Comm. Algebra, 2020, DOI: https://doi.org/10.1080/00927872.2020.1852243.
* [14] R. K. Sharma, A. Awasthi, and A. Gupta. Existence of pair of primitive elements over finite fields of characteristic 2. J. Number Theory, 193:386–394, 2018.
* [15] R. K. Sharma and A. Gupta. Pair of primitive elements with prescribed traces over finite fields. Comm. Algebra, 47:1278–1286, 2017.
* [16] F. Shuqin and H. Wenbao. Character sums over galois rings and primitive polynomials over finite fields. Finite Fields Appl., 10(1):36–52, 2004.
# Mathematical foundations of moral preferences
Valerio Capraro<EMAIL_ADDRESS>Department of Economics, Middlesex
University, The Burroughs, London NW4 4BT, U.K. Matjaž Perc
<EMAIL_ADDRESS>Faculty of Natural Sciences and Mathematics, University
of Maribor, Koroška cesta 160, 2000 Maribor, Slovenia Department of Medical
Research, China Medical University Hospital, China Medical University,
Taichung 404332, Taiwan Alma Mater Europaea ECM, Slovenska ulica 17, 2000
Maribor, Slovenia Complexity Science Hub Vienna, Josefstädterstraße 39, 1080
Vienna, Austria
###### Abstract
One-shot anonymous unselfishness in economic games is commonly explained by
social preferences, which assume that people care about the monetary payoffs
of others. However, during the last ten years, research has shown that
different types of unselfish behaviour, including cooperation, altruism,
truth-telling, altruistic punishment, and trustworthiness are in fact better
explained by preferences for following one’s own personal norms – internal
standards about what is right or wrong in a given situation. Beyond better
organising various forms of unselfish behaviour, this moral preference
hypothesis has recently also been used to increase charitable donations,
simply by means of interventions that make the morality of an action salient.
Here we review experimental and theoretical work dedicated to this rapidly
growing field of research, and in doing so we outline mathematical foundations
for moral preferences that can be used in future models to better understand
selfless human actions and to adjust policies accordingly. These foundations
can also be used by artificial intelligence to better navigate the complex
landscape of human morality.
## I Introduction
Most people are not completely selfish. Given the right circumstances, they
are happy to give up a part of their benefit to help other people or the
society as a whole. Psychologists and economists have long observed that some
people act unselfishly even in one-shot anonymous interactions, when there are
no direct or indirect benefits for doing so rapoport1965prisoner ; engel_ee11
. The question is why? Understanding what motivates people to act unselfishly
in one-shot, anonymous interactions is of great theoretical and practical
importance. Theoretically, it may lead to a more complete and precise
mathematical framework to formalise human decision-making, while practically,
it may suggest policies and interventions to promote unselfish behaviour, with
the ultimate goal of improving our societies.
To study one-shot unselfishness, behavioural scientists usually turn to
laboratory experiments using economic games, in which experimental subjects
have to make monetary decisions that involve various forms of other-regarding
behaviour. In this context, and throughout this review, selfishness and other-
regarding behaviour are defined with respect to monetary payoffs. Clearly, a
behaviour that is unselfish from the point of view of monetary outcomes may
turn out to be selfish from a more general perspective that takes into account
also psychological benefits and costs. For example, some people may engage in
unselfish behaviour to decrease negative mood cialdini1973transgression or
increase positive feelings andreoni1990impure . Therefore, in the last
decades, behavioural scientists have been trying to mathematically explain
unselfish behaviour by means of a utility function that depends on factors
other than solely the monetary payoff of the decision maker. Based on
empirical data scholars have initially advanced the explanation that human
unselfishness in one-shot anonymous interactions is primarily driven by people
not caring only about their own monetary payoff, but caring, at least to some
extent, also about the monetary payoffs of the other people involved in the
interaction ledyard1994public ; levine1998modeling ; fehr1999theory ;
bolton2000erc ; andreoni2002giving ; charness2002understanding .
However, about fifteen years ago, this _social preference hypothesis_ came
under critique because some experiments showed that two particular forms of
unselfish behaviour, altruistic punishment and altruism (see Table 1 for these
definitions), could not be entirely explained by preferences defined solely
over monetary outcomes. In 2010, building on work on the effect of social
norms on people’s behaviour smith2010theory ; durkheim2017regles ;
parsons1937structure ; geertz1973interpretation ; schwartz1977normative ;
elster1989social ; cialdini1990focus ; bicchieri2005grammar ;
bicchieri2016norms ; hawkins2019emergence , Bicchieri and Chavez
bicchieri2010behaving proposed to explain altruistic punishment assuming that
people have preferences for following their “personal norms” (what they
personally believe to be the right thing to do) beyond the monetary
consequences that this action brings about. Subsequently, Krupka and Weber
krupka2013identifying proposed to explain altruism using “injunctive norms”
(what one believes others would approve/disapprove); however, in their
analysis, they did not consider a potential role of personal norms. In the
last five years, numerous other experiments challenged social preference
models in several behavioural domains, other than altruistic punishment and
altruism biziou2015does ; schram2015inducing ; kimbrough2016norms ;
eriksson2017costly ; capraro2018right ; tappin2018doing ; capraro2019power ;
huang2019choosing ; moreover, the best interpretation of these results turns
out to be in terms of personal norms, rather than other types of norms.
Namely, the best way to organise these results is through the moral preference
hypothesis, according to which people have preferences for following their
personal norms, beyond the economic consequences that these actions bring
about. This framework organises several forms of one-shot, anonymous unselfish
behaviour, including cooperation, altruism, altruistic punishment,
trustworthiness, honesty, and the equality-efficiency trade-off. We note at
this stage that personal norms are not universally given. They certainly
depend on the culture; for example, they can come from the internalisation of
cultural values schwartz1977normative . But they can also depend on the
individual; anecdotal evidence suggests that, even within the same family,
there might be people with different beliefs about what is right or wrong in a given situation. We will discuss this in more detail in Section VII.6.
The moral preference hypothesis also holds promise of being very useful in
practice. The idea is simple. If people care about doing the right thing, then
simply providing cues that make the rightness of an action salient should promote desirable behaviour. In fact, research has already
demonstrated the applicability of this approach outside of the laboratory,
showing in particular that nudges towards doing the right thing can increase
charitable donations capraro2019increasing .
In the light of ample empirical research supporting the moral preference
hypothesis, theoretical research aiming to formalise human decision-making by
means of a mathematical framework is also at a crossroads. On the one hand,
the traditional approach involving monetary payoffs has worked well in
explaining many challenging aspects of pro-social behaviour. But on the other,
experiments indicate that there are likely hard boundaries to this simplistic
approach, which will thus have to be amended by more avant-garde concepts,
including formalising the intangibles of moral psychology and philosophy.
Here we review this rapidly growing field of research within the following
sections. Section II reviews the main economic games that have been developed
to study one-shot unselfishness. Section III reviews social preference models,
as earlier attempts to explain unselfishness in one-shot economic games within
a unified theoretical framework. This section also describes a number of
experiments that violate social preference models. Section IV shows how these
experiments can be organised by general moral preferences for doing what one
believes to be the right thing. Section V focuses on practical applications of
the moral preference hypothesis. Section VI reviews the models of moral
preferences that have been introduced so far and proposes a new model that
explicitly takes into account the importance of personal norms. Lastly,
Section VII outlines a number of key questions for future work, while Section
VIII summarises the main conclusions.
Taken together, this review thus outlines a mathematical formalism for
morality, which shall inform future models aimed at better understanding
selfless actions as well as artificial intelligence that strives to emulate
counterintuitive human decision-making.
## II Measures of unselfish behaviour
There are various forms of unselfish behaviour. For example, giving money to a
homeless person on the street is, in principle, quite different from
collaborating with a colleague on a common project, or from telling the truth
when one is tempted to lie. To take this source of heterogeneity into account,
scholars have developed a series of simple games and decision problems that
are meant to prototypically represent different types of unselfish behaviour.
These are simple scenarios in which experimental subjects can make decisions
that have real consequences. To incentivise these decisions, behavioural
scientists usually use monetary payoffs (at least among adult subjects,
whereas other forms of remuneration, such as stickers, might be more effective
among children).
In this review, we will be mainly focused on one-shot decisions that are
_purely_ unselfish, meaning that they bring no monetary benefit to the
decision maker (and possibly bring a cost), no matter the beliefs of the
decision maker regarding the behaviour of other people involved in the
interaction. Specifically, we measure altruistic behaviour using the dictator
game (see Table 1 for all the definitions), cooperative behaviour in pairwise
interactions using the prisoner’s dilemma, truth-telling using the sender-
receiver game, the tradeoff between equality and efficiency using the trade-
off game, trustworthiness using player 2 in the trust game, and altruistic
punishment using player 2 in the ultimatum game. In the last section we will
also briefly consider decisions that are _strategically_ unselfish, such as
trust (player 1 in the trust game) and strategic fairness (player 1 in the
ultimatum game), which might actually maximise the payoff of the decision
maker, depending on their beliefs about the behaviour of the second player.
The distinction between pure unselfishness and strategic unselfishness
generalises the distinction between pure cooperation and strategic
cooperation, introduced by Rand in his meta-analysis rand2016cooperation ,
where “pure cooperation” was defined as paying a cost to benefit another person, regardless of the behaviour of the other person, as opposed to “strategic cooperation”, which might maximise the cooperator’s payoff, depending on the other person’s behaviour.
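For concreteness, the payoff structures of two of these games can be written down in a few lines of code. The following minimal Python sketch is ours and is purely illustrative; the endowment, benefit, and cost values are placeholders, not parameters from any of the cited experiments.

```python
# Minimal sketch of two canonical payoff structures (illustrative numbers).

def dictator_game(endowment: float, give: float) -> tuple[float, float]:
    """The dictator keeps endowment - give; the recipient gets give."""
    return endowment - give, give

def prisoners_dilemma(coop1: bool, coop2: bool,
                      b: float = 2.0, c: float = 1.0) -> tuple[float, float]:
    """Cooperating pays a cost c and grants a benefit b to the partner."""
    return b * coop2 - c * coop1, b * coop1 - c * coop2

print(dictator_game(10, 3))            # (7, 3): an unselfish transfer
print(prisoners_dilemma(True, False))  # (-1.0, 2.0): the cooperator is exploited
```

In both games, the purely selfish choice (give nothing, defect) maximises the decision maker's monetary payoff regardless of what the other player does, which is exactly what makes any deviation from it a measure of unselfishness.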
## III Social preferences and their limitations
Behavioural scientists have long recognised that some people do act
unselfishly even in one-shot anonymous interactions. For example, the first
comprehensive empirical work on the one-shot prisoner’s dilemma dates back to
1965 rapoport1965prisoner . Formal frameworks to explain one-shot
unselfishness have a more recent history, starting in 1994, when Ledyard
observed that cooperation, altruism, and altruistic punishment could be
explained by assuming that people maximise a utility function that depends not
only on their own monetary payoff, but also on the total monetary payoff of
the other people that are involved in the interaction ledyard1994public . See
Table 2 for the exact mathematical definition. Since then, several models have
been introduced. In 1998, Levine levine1998modeling proposed a utility
function in which the level of altruism depends on the level of altruism of
the other players. Subsequently, in 1999, Fehr and Schmidt fehr1999theory
proposed a framework according to which players care about minimising
inequities. In 2000, Bolton and Ockenfels bolton2000erc followed a similar
idea and introduced a general inequity aversion model, in which the utility of
an action depends negatively on the distance between the amount of money the
decision maker gets if that action is implemented and the amount of money the
decision maker would get if the equal allocation were implemented. The authors
proposed an explicit mathematical formula only for the case of $n=2$ players.
In 2002, Andreoni and Miller andreoni2002giving estimated the behaviour of
experimental subjects in a number of dictator game choices using a specific
utility function taking into account altruistic tendencies as well as
potential convexity in the preferences. In the same year, Charness and Rabin
charness2002understanding introduced a general utility function which,
depending on the relative relationship between its two parameters, can cover
several cases, including competitive preferences, inequity aversion
preferences, and social efficiency preferences. We refer to Table 2 for the
exact functional forms. (Besides these models, scholars have also studied
models that can be applied to specific subsets of one-shot anonymous
interactions (e.g., andreoni1990impure ). In this review, we focus on models
that can be applied to any one-shot anonymous interaction involving unselfish
behaviour.)
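As one concrete illustration of the functional forms collected in Table 2, the two-player inequity aversion model of Fehr and Schmidt fehr1999theory assigns to player $i$, facing monetary payoffs $x_{i}$ and $x_{j}$, the utility
$u_{i}(x_{i},x_{j})=x_{i}-\alpha_{i}\max\{x_{j}-x_{i},0\}-\beta_{i}\max\{x_{i}-x_{j},0\},$
where $\alpha_{i}\geq\beta_{i}$ weighs disadvantageous inequity (envy) and $0\leq\beta_{i}<1$ weighs advantageous inequity (guilt). Note that, like all models in this section, this utility depends only on the monetary payoffs of the players.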
While differing in many details, all social preference models share one common
property: they assume that the utility of a decision maker is a function of
the monetary payoffs of the available actions. This assumption came under
considerable criticism for the first time in 2003 when Falk, Fehr and
Fischbacher falk2003nature showed that rejection rates in the ultimatum game
depend on the choice set available to the proposer. Specifically, the split
(8,2) — 8 to the proposer and 2 to the responder — is more likely to be
accepted in ultimatum games in which the only other choice available to the
proposer is (10,0), compared to ultimatum games in which the only other choice
available to the proposer is (5,5). Therefore, responders prefer accepting
(8,2) over rejecting it in the former case, but they prefer rejecting (8,2)
over accepting it in the latter one, despite the fact that these choices have
the same monetary consequences in the two cases. Clearly, this cannot be
explained by any model of social preferences. See bicchieri2010behaving ;
eriksson2017costly for conceptual replications.
Shortly after, in 2005, Uri Gneezy introduced the sender-receiver game
gneezy2005deception . In his experiments, decision makers were less likely to
implement an allocation of money when implementing this allocation also
required misreporting private information. This finding, too, cannot be explained by any model of social preferences nor, more generally, by any utility function that depends only on the monetary payoffs associated with the available actions. This indicates that (some) people
have an intrinsic cost of lying, which goes beyond their preferences toward
monetary outcomes. To further support this interpretation, several scholars
have independently studied the sender-receiver game in contexts in which lying
would benefit both the sender and the receiver to the same extent. This case
is particularly important because, when the benefit for the sender is equal to
the benefit for the receiver, all social preference models predict that everyone would lie. However, this prediction turned out to be
violated in experiments, which showed that a significant proportion of people
tell the truth cappelen2013we ; erat2012white ; biziou2015does .
Subsequently, social preference models came under critique also in one of the
behavioural domains in which they had been most successful, namely in research
involving the dictator game. Dana, Cain and Dawes dana2006you and Lazear,
Malmendier and Weber lazear2012sorting observed that some dictator game
givers would prefer to altogether avoid the dictator game interaction if given
the chance. These people thus preferred giving over keeping in a context in
which they were forced to play the dictator game, but preferred keeping over
giving in a context in which they could choose whether to play the dictator
game or not. This finding, as in the preceding examples, cannot be explained
by any utility function that is based solely on monetary outcomes.
For the same game, and along similar lines, List list2007interpretation ,
Bardsley bardsley2008dictator , and Cappelen et al. cappelen2013give found
that extending the choice set of the dictator by adding the possibility to
take money from the recipient has the effect of making some dictators less
likely to give. Therefore, these dictators preferred giving over keeping, when
the taking option was not available, but preferred keeping over giving, when
the taking option was available. This finding likewise cannot be explained by
any preference over monetary payoffs. A conceptually similar point was also
made by Krupka and Weber krupka2013identifying and Capraro and Vanzo
capraro2019power , who found that even minor changes in the instructions of
the dictator game can notably impact people’s behaviour.
In the years after 2013, the inability of purely monetary-based models to
explain empirically observed behaviour engulfed many other games and decision
problems, whose experimental regularities had been previously thought to be
explainable in terms of social preferences. Examples included the prisoner’s
dilemma kimbrough2016norms ; capraro2018right , the trust game
kimbrough2016norms , as well as different variants of the trade-off game
capraro2018right ; tappin2018doing ; huang2019choosing , thus resulting in a
crisis of the social preference hypothesis.
## IV The rise of the moral preference hypothesis
To solve a crisis, one needs a paradigm shift. The shift started in 2010, when
Bicchieri and Chavez bicchieri2010behaving proposed an elegant solution for
one of the aforementioned empirical observations. This solution builds on
classic work suggesting that, in everyday life, people’s behaviour is partly
determined by what they believe to be the norms in a given context
smith2010theory ; durkheim2017regles ; parsons1937structure ;
geertz1973interpretation ; schwartz1977normative ; elster1989social ;
cialdini1990focus ; bicchieri2005grammar ; bicchieri2016norms ;
hawkins2019emergence . This observation led behavioural scientists to propose
several classifications of norms. Particularly relevant for the thesis of this review is the distinction between personal and social norms schwartz1977normative and, among the social norms, the distinction between injunctive and descriptive norms cialdini1990focus . Personal norms
refer to internal standards about what is right or wrong in a given situation;
injunctive norms refer to what other people approve or disapprove of in that
situation; descriptive norms refer to what other people actually do. In one-
shot anonymous games, like the games considered in this review, the
distinction among personal, descriptive, and injunctive norms roughly
corresponds to Bicchieri’s personal normative beliefs, empirical expectations,
and normative expectations bicchieri2005grammar . See Table 3 for precise
definitions.
The groundbreaking intuition of Bicchieri and Chavez bicchieri2010behaving
was to apply the theory of norms to deviations from monetary-based social
preferences in the ultimatum game. Specifically, Bicchieri and Chavez showed
that the ultimatum game offer that is considered to be fair by responders
depends on the choice set available to the proposer; moreover, responders tend
to reject offers that they consider unfair. This suggests that altruistic
punishment is driven by responders following their personal norms, beyond the
monetary consequences that these actions bring about. In particular, this
explains the aforementioned results of Falk, Fehr, and Fischbacher
falk2003nature , that responders reject the same offer at different rates
depending on the other offers available to the proposer.
Shortly after, in 2013, Krupka and Weber krupka2013identifying applied a
similar approach to several variants of the dictator game. However, instead of
focusing on personal norms, they focused on injunctive norms. For each of the
available actions, subjects were asked to declare whether they found the
corresponding action to be “very socially inappropriate”, “somewhat socially
inappropriate”, “somewhat socially appropriate”, or “very socially
appropriate”. Subjects were given a monetary prize if they matched the modal
choice made by other participants. Observe that, in this way, Krupka and Weber
incentivised the elicitation of the injunctive norm. (The elicitation of
personal norms cannot be incentivised.) In doing so, Krupka and Weber found
that people believe that others think that avoiding a dictator game
interaction is far less socially inappropriate than keeping the whole amount
of money in a dictator game that one is obliged to play. Therefore, the
empirical results summarised above regarding dictator games with an exit
option dana2006you ; lazear2012sorting can be explained simply by a change in
the perception of what is the injunctive norm in that context. Similarly,
Krupka and Weber found that people believe that others think that keeping the
money in a dictator game with a taking option is far less socially
inappropriate than keeping the money in the dictator game without the taking
option. In this way, they could explain also the results of List
list2007interpretation , Bardsley bardsley2008dictator , and Cappelen et al.
cappelen2013give in terms of a change in the perception of the injunctive
norm. Finally, Krupka and Weber presented a novel experiment in which subjects
played the dictator game in either of two variants: in the Standard variant,
dictators started with $10 and had to decide how much of it, if any, to give
to the recipient; in the Bully variant, the money was initially split equally between the dictator and the recipient, and the dictator could either give,
take, or do nothing. The authors found that people were more altruistic in the
Bully variant compared to the Standard variant, and this was driven by the
fact that people rated “taking from the recipient” far less socially
appropriate than “not giving to the recipient”.
The work of Krupka and Weber suggests that taking into account injunctive
norms is important to explain deviations from social preference models in the
dictator game. But are the injunctive norms really the main force behind the
observed behavioural changes, or are there also other norms playing more
primary roles? In the last five years, a set of empirical studies tried to
address this question. Schram and Charness schram2015inducing analysed the
behaviour of dictators who were given advice from third parties about the
injunctive norm. They observed that dictators became more pro-social only when
their choices were made public. By contrast, when their choices remained
private, they found no significant increase in pro-sociality, compared to the
case in which they did not receive any information about the injunctive norm.
These results indicate that, although injunctive norms might correlationally
explain behavioural changes in anonymous (and thus private) dictator game
experiments, they are unlikely to be the primary motivation. In fact, being
that these games were played anonymously, in front of the screen of a
computer, the intuition suggests that the norms primarily at play are the
personal norms. Two recent works provide evidence for this hypothesis. Capraro
and Vanzo capraro2019power found that framing effects in the dictator game
generated by morally loaded instructions can be explained by changes in the
perception of what people “personally think to be the right thing” in the
given context (i.e., their personal norms). Capraro et al.
capraro2019increasing showed that making personal norms salient prior to
playing the dictator game (by asking subjects to state what they personally
think to be the morally right thing to do) has a strong effect on subsequent
dictator game donations, even persisting to a second-stage prisoner’s dilemma
interaction.
This set of works thus suggests that dictator game giving is driven by
personal norms. Putting this together with the results of Bicchieri and
Chavez, we obtain that both altruism and altruistic punishment can be
explained by people following their personal norms.
More recently, this finding has been not only replicated, but, more
importantly, also extended to explain several other forms of unselfish
behaviour. In 2016, Kimbrough and Vostroknutov kimbrough2016norms introduced
a task “that measures subjects’ preferences for following rules and norms, in
a context that has nothing to do with social interaction or distributional
concerns”. They found that this measure of norm-sensitivity predicts dictator
game altruism, trust game trustworthiness (but not trust), and ultimatum game
rejection thresholds (but not offers). Taken together, this indicates that
altruism, trustworthiness, and altruistic punishment are driven by a common
desire to adhere to a personal norm. In 2017, Eriksson et al.
eriksson2017costly conducted an ultimatum game experiment under two different
conditions. The difference, however, was only in the labels that were used to
describe the action of refusing the proposer’s offer. In one treatment, this
action was labeled “rejecting the proposer’s offer”, while in the other
treatment, the same action was labeled “reducing the proposer’s payoff”. Since
these two options are monetarily equivalent, any utility function depending
only on the monetary payoffs of the available actions predict that responders
should behave the same way in both cases. But contrary to this prediction,
Eriksson et al. found that responders displayed higher rejection thresholds in
the “rejection frame” than in the “reduction frame”. Moreover, they showed
that the observed framing effect could be explained by a change in what people
think to be the right thing to do. Specifically, subjects tended to rate the action of reducing the proposer’s payoff as morally worse than the action of rejecting the proposer’s offer, in spite of the fact that these two actions
had the same monetary consequences. In 2018, Capraro and Rand capraro2018right
showed that behaviour in the trade-off game is highly sensitive to the labels
used to describe the available actions. In line with Eriksson et al.
eriksson2017costly , Capraro and Rand also found that their framing effects
could be explained by a change in what people think to be the right thing to
do. Notably, framing effects in the trade-off game have been replicated
several times tappin2018doing ; huang2019choosing ; capraro2019preferences ;
capraro2020gender ; capraro2020does and a recent work has shown that these
moral framings tap into relatively internalised moral preferences
capraro2020does . Moreover, Capraro and Rand also considered a situation in
which the personal norm conflicted with the descriptive norm, and found that
people tend to follow the personal norm, rather than the descriptive norm. The
same research also revealed a correlation between the framing effect in the trade-off game, giving in the dictator game, and cooperation in the prisoner’s dilemma, thus indicating that not only are trade-off decisions driven by personal norms, but altruism and cooperation are also subject to the same driving force. Cooperative behaviour is also typically correlated with altruistic behaviour capraro2014heuristics ; peysakhovich2014humans ;
reigstad2017extending , suggesting that they are driven by a common underlying
motivation.
To the best of our knowledge, there are no works directly exploring the role of personal norms in truth-telling in the sender-receiver game. However,
Biziou-van-Pol et al. biziou2015does have shown that there is a positive
correlation between truth-telling in the sender-receiver game (in the Pareto
white lie condition), giving in the dictator game, and cooperation in the
prisoner’s dilemma, suggesting that these types of behaviours are driven by a
common motivation. Since the aforementioned research suggests that altruism and cooperation are driven by personal norms, this correlation suggests that lying aversion is driven by personal norms as well.
In sum, research accumulated in the last ten years suggests that several forms
of one-shot, anonymous unselfishness, including altruism, altruistic
punishment, truth-telling, cooperation, trustworthiness, and the equality-
efficiency trade-off, can be explained using a unified theoretical framework,
whereby people have moral preferences for following their personal norms,
beyond the monetary payoff that these actions bring about. Of course, this is
not meant to imply that monetary payoffs do not play any role in explaining
one-shot unselfishness, but simply that something else, in addition to
monetary payoffs, should be taken into account. The thesis is that this ‘something else’ is personal norms, which gives rise to the moral preference hypothesis as described in Table 4. Also, this is not meant to
imply that other types of norms play no role in these forms of one-shot
selfless behaviour. For example, nudging the injunctive norm in the prisoner’s
dilemma capraro2019increasing and in the trade-off game human2020effect has a similar effect to nudging the personal norm. Moreover, it is possible that
social norms ultimately drive personal norms, because following them helps to enhance or preserve one’s sense of self-worth and avoid self-concept distress, resulting
in a self-reinforcing behaviour that eventually benefits one’s own self-image
schwartz1977normative . However, the aforementioned literature suggests that,
at a proximate level, personal norms have a greater explanatory power, in the
sense that they consistently explain people’s behaviour also in games where
injunctive norms have been shown to play a limited role (e.g., dictator game)
or where descriptive norms play a limited role (e.g., the trade-off game).
## V Practical applications
Behavioural scientists and policy makers have been using norm-based
interventions to foster pro-sociality in real life for decades
bicchieri2009right ; krupka2009focusing ; zafar2011experimental ; raihani2014dictator ; d2017push ; frey2004social ; croson2010gendered ; cialdini1991focus ; ferraro2013using ; agerstrom2016using ; goldstein2008room ; hallsworth2017behavioralist . Although these
paternalistic interventions have been criticised because they subtly violate
people’s freedom of choice hausman2010debate and can be exploited by
malicious institutions glaeser2005paternalism (see sunstein2014nudge for a
response to these critiques), they are well-studied because, compared to
standard procedures to foster pro-sociality (punishment and rewards), they
save the monitoring cost that the institution would otherwise need to pay in order to know whom to punish or reward.
Norm-based interventions typically manipulate the descriptive or the
injunctive norm in a given context, and show that this has an effect on
people’s behaviour in that same context. The more recent works reviewed in the
previous section, showing that unselfish behaviour in one-shot, anonymous
economic games is primarily driven by a desire to follow the personal norms,
suggest that a more effective mechanism to increase pro-sociality might be to
use norm-based interventions that target personal norms, rather than social
norms. A further appeal of targeting personal norms is that it is potentially cheaper than other mechanisms to promote pro-sociality. Clearly, it is cheaper than punishment and
rewards because it avoids the monitoring cost. Additionally, it saves the cost
of collecting information about the behaviour or the moral judgments of other
people, which forms the basis of interventions targeting social norms.
In recent years, there has been a growing body of research exploring the
effect of nudging personal norms on various forms of unselfish behaviour. Some
works using economic games found that making personal norms salient increases
donations in the dictator game branas2007promoting ; capraro2019increasing ,
cooperation in the prisoner’s dilemma dal2014right ; capraro2019increasing ,
and decreases in-group favouritism, at least on average bilancini2019right . This suggests that nudging personal norms might be
effective to increase pro-sociality in one-shot anonymous decisions that have
consequences outside the laboratory. Along these lines, Capraro et al.
capraro2019increasing found that asking people to report what they personally
think is the morally right thing to do increases crowdsourced charitable
donations by 44%.
## VI Models of moral preferences
We have thus seen that several forms of unselfish behaviour can be organised
by moral preferences for following the personal norms. The question is, can we
model this using a formal utility function?
There have been some attempts to formalise people’s tendency to follow a norm
benabou2006incentives ; levitt2007laboratory ; lopez2008aversion ;
andreoni2009social ; dellavigna2012testing ; kessler2012norms ; alger2013homo
; krupka2013identifying ; kimbrough2016norms ; kimbrough2020injunctive ;
kimbrough2020theory . Most of these models, however, are either very specific
in the sense that they can be applied only to certain games, or do not
distinguish among different types of norms. Three models can be applied to
every game of interest in this review (and, more generally, to every one-shot
game) and distinguish among different types of norms.
Levitt and List levitt2007laboratory introduced a model where the utility of
an action $a$ depends on the monetary payoff associated to that action,
$v_{i}(\pi_{i}(a))$, as well as on the moral cost (or benefit), $m(a)$,
associated to that action:
$u_{i}(a)=v_{i}(\pi_{i}(a))+m(a).$
Levitt and List assumed that the moral cost (or benefit) depends primarily on
three factors: whether the action is recorded or performed in the presence of
an observer, whether the action has negative consequences on other players,
and whether the action is in line with “social norms or legal rules that
govern behavior in a particular society”. Therefore, Levitt and List’s model, although useful in many circumstances, mentions only social norms, while ignoring the effect of personal norms.
A similar model was considered by Krupka and Weber krupka2013identifying ,
with the key difference that they focused on injunctive norms specifically.
Krupka and Weber introduced a function $N$ defined over the set of available
actions that, given an action $a$, returns a number $N(a)$ representing the
extent to which society views $a$ as socially appropriate. They also assumed
that people are heterogeneous in the extent to which they care about doing
what society considers to be appropriate. In doing so, they obtain the utility
function:
$u_{i}(a)=v_{i}(\pi_{i}(a))+\gamma_{i}N(a).$
As mentioned above, one of the main contributions of Krupka and Weber was to
introduce an experimental technique to elicit the injunctive norm. To this
end, they asked participants to rate each of the available actions in terms of
their social appropriateness. Participants were incentivised to match the
modal choice of the other participants.
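As a sketch of how such elicited ratings can be turned into a numerical norm function, the snippet below maps the four response categories onto evenly spaced scores and averages them per action. The coding values are an illustrative convention of ours, not necessarily the exact procedure of krupka2013identifying .

```python
# Sketch: aggregating appropriateness ratings into a norm score N(a).
# The evenly spaced four-point coding below is an illustrative convention.
RATING_TO_SCORE = {
    "very socially inappropriate": -1.0,
    "somewhat socially inappropriate": -1 / 3,
    "somewhat socially appropriate": 1 / 3,
    "very socially appropriate": 1.0,
}

def norm_score(ratings: list[str]) -> float:
    """Mean coded rating across subjects for a single action a."""
    return sum(RATING_TO_SCORE[r] for r in ratings) / len(ratings)

# Hypothetical ratings of "keep everything" in a standard dictator game.
ratings = ["very socially inappropriate",
           "somewhat socially inappropriate",
           "somewhat socially appropriate"]
print(norm_score(ratings))  # -> approximately -0.33
```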
Very recently, in 2020, Kimbrough and Vostroknutov presented a different
approach, but still based on injunctive norms kimbrough2020theory .
Specifically, they introduced the utility function
$u_{i}(a)=v_{i}(\pi_{i}(a))+\phi_{i}\eta(a),$
where $\phi_{i}$ represents the extent to which $i$ cares about following the
injunctive norm, and $\eta(a)$ represents a measure of whether the society
thinks that $a$ is socially appropriate. Although this utility function looks
very similar to the one proposed by Krupka and Weber, it differs from it in
one important dimension. While Krupka and Weber’s social appropriateness,
$N(a)$, is computed by asking participants what they think others would
approve or disapprove (and therefore it need not depend only on the monetary
consequences of the available actions), Kimbrough and Vostroknutov’s
injunctive norm, $\eta$, is built axiomatically from the game and it is
assumed to be inversely proportional to the overall dissatisfaction of the
players, defined as the difference between what they get in a given scenario
and what they could have gotten in others. One limitation of this approach is that it implies that people always prefer Pareto-dominant allocations over Pareto-dominated ones. But, in experiments, this property is not always
satisfied. For example, when lying is Pareto dominant, some people still tell
the truth, and these people tend to cooperate in a subsequent prisoner’s
dilemma and give in a subsequent dictator game biziou2015does . Moreover, in
trade-off games framed in such a way that the Pareto dominant allocation is
presented as morally wrong, people tend to choose the Pareto dominated option
capraro2018right ; tappin2018doing .
In sum, previous formal models consider only social norms or, more
specifically, injunctive norms. But, as we have seen in the previous sections,
unselfish behaviour in one-shot anonymous interactions is often driven by
personal norms, rather than by social norms. Taking inspiration from the above
models, one can formalise this using the utility function:
$u_{i}(a)=v_{i}(\pi_{i}(a))+\mu_{i}P_{i}(a),$
where $\mu_{i}$ represents the extent to which player $i$ cares about doing
what s/he personally thinks to be the morally right thing to do and $P_{i}(a)$
represents the extent to which $i$ personally thinks that $a$ is morally
right. This functional form might superficially seem similar to the ones discussed earlier, but it differs from those in two important respects. The first is that the personal norm $P_{i}(a)$ typically depends on the individual $i$, whereas the injunctive norm depends on the society and the culture in which the individual is embedded. The second is that $P_{i}$ represents the extent to which $i$ thinks that $a$ is the morally right thing to do, whereas $m(a)$, $N(a)$, and $\eta(a)$ represent social norms.
In general, the personal norm might not be aligned with the social norms. In
practice, $P_{i}(a)$ can be estimated using a suitable experiment, whereas
$\mu_{i}$ and $v_{i}$ can be estimated, on average, using statistical
techniques, following a method similar to the one developed by Krupka and Weber for injunctive norms krupka2013identifying . Specifically, one can estimate $P_{i}(a)$ by asking subjects to self-report the extent to which they personally think that action $a$ is the morally right thing to do. Then one can use these ratings to predict behaviour, using a simple regression. The coefficient of this regression gives the average of the $\mu_{i}$’s. Including the monetary payoffs in the regression also yields an estimate of the average of the $v_{i}$’s.
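A minimal sketch of this estimation strategy, assuming entirely made-up data and a linear probability model in place of whatever discrete-choice specification one would use in practice, is the following; the fitted coefficients play the role of the population averages of the $\mu_{i}$’s and of the weight placed on the monetary payoff.

```python
# Sketch: estimating average norm and payoff weights by regressing
# observed choices on norm ratings P(a) and monetary payoffs pi(a).
import numpy as np

# Made-up data: one row per (subject, action) observation.
P = np.array([1.0, 0.5, -0.5, 1.0, 0.0, -1.0])     # self-reported P_i(a)
pi = np.array([0.0, 5.0, 10.0, 0.0, 5.0, 10.0])    # monetary payoff pi_i(a)
chosen = np.array([1.0, 1.0, 0.0, 1.0, 0.0, 0.0])  # 1 if action a was chosen

# Linear probability model: chosen ~ const + mu * P + v * pi.
X = np.column_stack([np.ones_like(P), P, pi])
coef, *_ = np.linalg.lstsq(X, chosen, rcond=None)
print(f"average mu ~ {coef[1]:.2f}, average payoff weight ~ {coef[2]:.2f}")
```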
This utility function based on personal norms has greater predictive power than its counterparts based only on social norms, in the sense that it explains behaviour in a larger set of games. We have seen earlier that Schram and Charness
schram2015inducing found that making the injunctive norm salient does _not_ increase altruistic behaviour in the anonymous dictator game. D’Adda et al.
d2017push found that making the descriptive norm salient has only a
marginally significant effect on anonymous dictator game giving; this effect
also vanishes in a second interaction, played immediately after. Along the
same lines, Dimant, van Kleef and Shalvi dimant2019requiem found that neither promoting the injunctive norm nor promoting the descriptive norm has any effect on people’s honesty in a deception game in which subjects can
lie for their benefit. On the other hand, numerous works have shown that
nudging personal norms impacts several forms of unselfish behaviour, including altruism branas2007promoting ; capraro2019increasing , altruistic punishment eriksson2017costly , cooperation dal2014right ; capraro2019increasing , and the equality-efficiency trade-off capraro2018right . Moreover, the effect typically persists for at least another interaction and
even spills across contexts capraro2019increasing . All these results are
consistent with a utility function based on personal norms and are not
consistent with a utility function based only on social norms.
We present a summary of all above-discussed moral preference models in Table
5.
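For readers who prefer code to tables, the sketch below restates the four utility functions discussed in this section side by side; taking $v$ to be linear and the weights to default to one is a simplification of ours, not part of the original models.

```python
# Side-by-side sketch of the four utility functions (v taken as linear).

def u_levitt_list(pi_a, m_a):                        # v(pi(a)) + m(a)
    return pi_a + m_a

def u_krupka_weber(pi_a, N_a, gamma=1.0):            # v(pi(a)) + gamma * N(a)
    return pi_a + gamma * N_a

def u_kimbrough_vostroknutov(pi_a, eta_a, phi=1.0):  # v(pi(a)) + phi * eta(a)
    return pi_a + phi * eta_a

def u_personal_norms(pi_a, P_a, mu=1.0):             # v(pi(a)) + mu * P(a)
    return pi_a + mu * P_a
```

The structural similarity is deliberate: the models differ not in the shape of the utility function but in which norm enters it and in how that norm is measured.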
## VII Future work
This is an exciting field of research, which provides a unified view of human
choices in several contexts of decision-making, while having, at the same
time, significant practical implications. Nonetheless, there are several
questions that need to be explored in future research, as detailed in what
follows and summarised in Table 6.
### VII.1 The utility function
From a mechanistic perspective, the moral preference hypothesis raises the question of how we can express the utility function of a decision maker.
Scholars have tried to give mathematical sense to people’s morality since the
foundation of mathematical economics jevons1879theory ; bentham1996collected .
About two centuries later, the question is still open, even in the simple
setting of one-shot anonymous interactions. One simple way to do so is to
assume that people are torn between maximising their monetary payoff and doing
what they personally think to be the morally right thing. This can be done
with a utility function of the shape
$u_{i}(a)=v_{i}(\pi_{i}(a))+\mu_{i}P_{i}(a)$. Although this utility function outperforms its counterparts based on social norms, as well as social preference models, it undoubtedly represents just a first candidate. Future work
should explore other ways to formalise moral preferences, through finer
experiments with the power to detect small variations in how people weight
their personal norm against monetary incentives. Future work should also find
ways to estimate what people think to be the right thing in a given context, without asking participants directly in a separate experiment. The literature
reviewed above shows that, in many cases, it is enough to change only one word
in the instructions of a decision problem to change people’s perception of
what is the right thing to do in a given context. This suggests that
$P_{i}(a)$ partly depends on the language in which the action $a$ is
presented. Exploring this dependence can greatly improve the predictive power
of the utility function. How can one do so? Recent work shows that emotional
content in messages increases their diffusion in social media brady2017emotion
; brady2019ideological ; brady2019mad . Translated to the context of one-shot games, this finding suggests that the emotions carried by the instructions of the decision problem might contribute to the computation of $P_{i}$. Along these lines, it is possible that one can use sentiment analysis to better estimate $P_{i}$. Sentiment analysis is a technique developed by computational linguists that assigns a polarity to any given piece of text pang2002thumbs . In principle, this polarity could enter the utility
function of a decision maker and work as an additional motivation or obstacle
for choosing an action, beyond its monetary consequences. In any case,
mathematically describing or at least quantifying the seemingly intangible
moral preferences, and in doing so building bridges between computational
linguistics, behavioural economics, and moral psychology, is a fascinating
direction for future work.
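The following speculative sketch illustrates the idea, using the off-the-shelf VADER analyser shipped with NLTK as a crude stand-in for a moral-polarity estimator. Whether such a polarity actually tracks $P_{i}$ is an open empirical question, not an established result.

```python
# Speculative sketch: sentiment polarity of action labels as one crude
# proxy for how wording might shift the perceived personal norm P_i(a).
import nltk
nltk.download("vader_lexicon", quiet=True)  # one-off lexicon download
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
labels = ["reject the proposer's offer", "reduce the proposer's payoff"]
for label in labels:
    polarity = sia.polarity_scores(label)["compound"]  # in [-1, 1]
    print(f"{label!r}: polarity {polarity:+.2f}")
```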
### VII.2 Evolution of norms
Where do personal norms come from? One explanation is that they come from the
internalisation of behaviours that, although not individually optimal in the
short term, are optimal in the long run. It is therefore important to
understand which unselfish behaviours can be selected in the long term, and
under which conditions. A promising line of research uses evolutionary game
theory and statistical physics to find the conditions that promote the
evolution of cooperation on networks perc_pr17 . More recently, scholars have
started applying similar techniques also to study the evolution of other forms
of unselfish behaviour capraro_fp18 , such as truth-telling in the sender-
receiver game capraro2019evolution ; capraro_pre20 and trustworthiness in the
trust game kumar2020evolution . Some works along this line have also looked at
the evolution of choices in the ultimatum game page_prsb00 ;
killingback_prsb01 ; iranzo_pone12 ; szolnoki_prl12 . Future work should
extend the same techniques to other forms of unselfish behaviour.
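As a baseline example of the evolutionary approach, the sketch below iterates the standard replicator dynamics for cooperation in a well-mixed one-shot prisoner's dilemma; the parameters are illustrative. Cooperation dies out in this baseline, which is precisely why the literature cited above turns to networks and related mechanisms that can sustain it.

```python
# Sketch: replicator dynamics for cooperation in a well-mixed one-shot
# prisoner's dilemma with benefit b and cost c (illustrative values).
b, c = 3.0, 1.0      # benefit conferred and cost paid by a cooperator
x, dt = 0.5, 0.01    # initial cooperator fraction and time step

for _ in range(100_000):
    payoff_c = b * x - c            # expected payoff of a cooperator
    payoff_d = b * x                # expected payoff of a defector
    avg = x * payoff_c + (1 - x) * payoff_d
    x += dt * x * (payoff_c - avg)  # replicator equation: dx = x (f_C - f_avg)

print(f"long-run cooperator share: {x:.4f}")  # -> 0: defection takes over
```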
### VII.3 Personal norms versus social norms
The experimental literature reviewed in the previous sections suggests that
several forms of one-shot, anonymous unselfishness can be unified under a
framework according to which people have preferences for following their
personal norms. Moreover, preliminary evidence suggests that nudging personal
norms can be an effective tool for fostering pro-sociality: making personal
norms salient affects altruism, cooperation, altruistic punishment, and trade-
off decisions between equality and efficiency branas2007promoting ;
capraro2019increasing ; eriksson2017costly ; dal2014right .
This, of course, does not mean that the social norms play no role at all. For
example, nudging injunctive norms has a significant effect on the one-shot,
anonymous, prisoner’s dilemma capraro2019increasing and the trade-off game
human2020effect . One question that is still open, however, is whether these
effects are fundamentally distinct from the effect of nudging personal norms.
It is indeed possible that nudging injunctive norms in these games also nudges personal norms, and that this is what makes people change their behaviour. A
working paper suggests that people who follow injunctive norms in the trade-
off game are different from those who follow personal norms human2020effect .
Therefore, it is possible that a larger model taking into account both
personal and injunctive norms might have an even greater predictive power, at
least in some contexts, than a model based exclusively on personal norms.
Further experiments comparing the effect of nudging different norms are needed
to clarify this point. The evidence in this case is indeed still incomplete. One
study compared the relative effect of the descriptive and the injunctive norms
in the dictator game, and found that people tend to follow the descriptive
norm bicchieri2009right . Another study compared the relative effect of
nudging personal norms and the descriptive norms in the trade-off game, and
found that people tend to follow the personal norms capraro2018right . The
aforementioned working paper compared the effect of nudging the personal and
the injunctive norm in the trade-off game and found that they have a similar
effect; moreover, when the two norms are in conflict, some people follow the personal norm and others follow the injunctive norm human2020effect . This
suggests that people’s behaviour depends on their focus of attention within an
interconnected matrix of norms. Therefore, future work should explore norm
salience, also in cases where more than one type of norm is simultaneously
made salient.
Research should also go beyond anonymous decisions and investigate what
happens when choices are observable. Intuition suggests that when choices are observable, social norms may play a bigger role compared to when they
remain private; in line with this intuition, Schram and Charness
schram2015inducing showed that nudging the injunctive norms impacts public
but not private dictator game giving. However, no studies have compared the relative effectiveness of targeting different norms in public decisions.
### VII.4 Boundary conditions of interventions based on personal norms
Having in mind potential practical applications, another important question
concerns the boundary conditions of interventions based on personal norms.
From a temporal perspective, previous research suggests that interventions
targeting personal norms can last for several interactions within the same
experiment dal2014right ; capraro2019increasing . However, it seems
unrealistic to expect that their effect will last indefinitely. For example, a
recent field experiment targeting injunctive norms found an effect that
diminishes after repeated interventions, although it can be restored after
waiting a sufficient amount of time between interventions ito2018moral . From
the decisional context point of view, there are likely behavioural domains in which targeting personal norms is less effective. For
example, a recent work suggests that risky cooperation in the stag-hunt game
is primarily driven by preferences for efficiency, rather than by preferences
for following personal norms capraro2019preferences .
### VII.5 External validity of interventions based on personal norms
Given the potential relevance of this line of work for the society at large,
future studies should explore the external validity of interventions based on
personal norms. At the time of this writing, only one study has investigated the
effect of nudging personal norms in contexts in which decisions have
consequences outside the laboratory. This study found that nudging personal
norms increases crowdsourced charitable donations to real humanitarian
organisations by 44% capraro2019increasing .
### VII.6 The moral phenotype and its topology
We have seen that different forms of unselfish behaviour can be explained by a
general tendency to do the right thing. We are tempted to call this tendency
“moral phenotype”, extending the notion of “cooperative phenotype” introduced
by Peysakhovich, Nowak, and Rand peysakhovich2014humans . See also
reigstad2017extending . In their work, Peysakhovich and colleagues observed
that pro-social behaviours in the dictator game, the public goods game (a
variant of the prisoner’s dilemma with more than two players), and the trust
game (both players) were all correlated, and they termed this general pro-social tendency the “cooperative phenotype”. Therefore, the cooperative phenotype is
uni-dimensional. On the other hand, the moral phenotype is likely to be multi-
dimensional. For example, we have seen earlier that both altruistic punishment
and altruistic giving are driven by preferences for doing the right thing.
However, Peysakhovich, Nowak, and Rand peysakhovich2014humans found that they
are not correlated. It is possible that they are not correlated because they
come from different personal norms. The multi-dimensionality of morality is
not a new idea, and several authors have suggested it in recent decades via different routes. For example, Haidt and colleagues argue that
differences in people’s moral concerns can be explained by individual
differences across six “foundations” haidt2004intuitive ; graham2009liberals ;
haidt2012righteous . Kahane, Everett and colleagues have shown that (act) utilitarianism decomposes into at least two dimensions kahane2018beyond ;
everett2020switching . Curry, Mullins, and Whitehouse curry2019good have
reported that seven moral rules are universal across societies, but societies
vary on how they rank them. However, we are not aware of any work exploring
how different personal norms link to different forms of one-shot
unselfishness.
Another topological property of the moral phenotype that deserves further scrutiny is its boundary. Does, for example, the moral phenotype include
decisions that are strategically unselfish, such as strategic fairness
(ultimatum game offers) and trust (trust game transfers), both of which
maximise the decision maker’s payoff depending on the decision maker’s beliefs
about the behaviour of the other player? Previous evidence is limited and
mixed. Bicchieri and Chavez bicchieri2010behaving showed that ultimatum game
offers are partly driven by normative beliefs; Peysakhovich, Nowak, and Rand
peysakhovich2014humans found that trustees’ decisions correlate with dictator
game and public goods game decisions. By contrast, Kimbrough and Vostroknutov
kimbrough2016norms found that trustees’ and proposers’ decisions are not
correlated to their measure of norm-sensitivity.
### VII.7 A dual-process approach to personal norms
Do personal norms come out automatically, or do they require deliberation?
Research recently explored the cognitive basis of unselfish behaviour, by
using cognitive process manipulations, such as time pressure and cognitive load, in order to favour instinctive responses rand_n12 ; andersen2018allowing
; bereby2018honesty ; bouwmeester2017registered ; capraro2019time ;
capraro2017deliberation ; chen2019cognitive ; chuan2018field ;
everett2017deliberation ; holbein2019insufficient . It has been shown that
promoting intuition favours cooperation rand2016cooperation and altruistic
punishment hallsson2018fairness . The evidence regarding altruism is instead
more mixed rand2016social ; fromell2020altruism . Meanwhile, a meta-analysis suggests that intuition decreases truth-telling when lying harms abstract others, while leaving it unaffected when lying harms concrete others
kobis2019intuitive . Furthermore, results are inconclusive in the context of
trustworthiness and the equality-efficiency trade-off (see capraro2019dual
for a review). This line of work suggests that whether personal norms come out
automatically or require deliberation may not have a general answer, but might
depend on the specific behavioural context, and possibly also on the
individual characteristics of the decision maker. More work is needed to
understand which personal norms, in which context, and for which people,
become internalised as automatic reactions.
## VIII Conclusions
The moral preference hypothesis is emerging as a unified framework to explain
a wide range of one-shot, anonymous unselfish behaviours, including
cooperation, altruism, altruistic punishment, truth-telling, trustworthiness,
and the equality-efficiency trade-off. This framework has promising practical
implications, given that interventions making personal norms salient have been
shown to be effective at increasing charitable donations. Future work should
explore further mathematical formalisations of moral preferences in terms of a
utility function, investigate the evolution and internalisation of personal
norms, study the external validity and the boundary conditions of policy
interventions based on personal norms, compare the relative effectiveness of
targeting different types of norms, examine the topology of the moral
phenotype, and analyse the cognitive foundations of morality, possibly using a
dual-process perspective.
Overall, the goal of this line of research should be to build bridges between
different scientific disciplines to arrive at a better, perhaps more
mechanistic, explanation of human decision-making. The outlined mathematical
formalism for morality should be used to inform future models aimed at better
understanding selfless actions, and it should also be used in artificial
intelligence to better navigate the complex landscape of human morality and to
better emulate human decision-making. Ultimately, the goal is to use the
obtained insights to develop more efficient policies and interventions to strengthen virtuous behaviour and curb harmful behaviour, and to collectively strive towards better human societies.
The past century has seen strict compartmentalisation of different scientific disciplines, leading to groundbreaking and important discoveries that might have been impossible without it. But while technology and industry might fare well on idiosyncratic breakthroughs, human societies do not. The grandest challenges of today remind us that sustainable social welfare and organisation require a holistic interdisciplinary and cross-disciplinary approach, and we hope this review will be an inspiration towards this goal.
###### Acknowledgements.
This work was supported by the Slovenian Research Agency (Grant Nos. P1-0403,
J1-2457, J4-9302, and J1-9112).
## References
* (1) Rapoport, A., Chammah, A. M., and Orwant, C. J. Prisoner’s dilemma: A study in conflict and cooperation, University of Michigan Press, (1965).
* (2) Engel, C. Dictator games: A meta study. Experimental Economics 14, 583–610 (2011).
* (3) Cialdini, R. B., Darby, B. L., and Vincent, J. E. Transgression and altruism: A case for hedonism. Journal of Experimental Social Psychology 9(6), 502–516 (1973).
* (4) Andreoni, J. Impure altruism and donations to public goods: A theory of warm-glow giving. The Economic Journal 100(401), 464–477 (1990).
* (5) Ledyard, J. O. Public goods: A survey of experimental research. In Handbook of Experimental Economics, Kagel, J. H. and Roth, A. E., editors. Princeton Univ. Press, Princeton, NJ (1995).
* (6) Levine, D. K. Modeling altruism and spitefulness in experiments. Review of Economic Dynamics 1(3), 593–622 (1998).
* (7) Fehr, E. and Schmidt, K. M. A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics 114(3), 817–868 (1999).
* (8) Bolton, G. E. and Ockenfels, A. ERC: A theory of equity, reciprocity, and competition. The American Economic Review 90(1), 166–193 (2000).
* (9) Andreoni, J. and Miller, J. Giving according to GARP: An experimental test of the consistency of preferences for altruism. Econometrica 70(2), 737–753 (2002).
* (10) Charness, G. and Rabin, M. Understanding social preferences with simple tests. The Quarterly Journal of Economics 117(3), 817–869 (2002).
* (11) Smith, A. The theory of moral sentiments. Penguin, (2010).
* (12) Durkheim, É. Les règles de la méthode sociologique. Flammarion, (2017).
* (13) Parsons, T. The Structure of Social Action: A Study in Social Theory with Special Reference to a Group of Recent European Writers. Free Press, New York: London, (1937).
* (14) Geertz, C. The interpretation of cultures, volume 5019. Basic Books, (1973).
* (15) Schwartz, S. H. Normative influences on altruism. In Advances in experimental social psychology, volume 10, 221–279. Elsevier (1977).
* (16) Elster, J. Social norms and economic theory. Journal of Economic Perspectives 3(4), 99–117 (1989).
* (17) Cialdini, R. B., Reno, R. R., and Kallgren, C. A. A focus theory of normative conduct: recycling the concept of norms to reduce littering in public places. Journal of Personality and Social Psychology 58(6), 1015–1026 (1990).
* (18) Bicchieri, C. The grammar of society: The nature and dynamics of social norms. Cambridge University Press, (2005).
* (19) Bicchieri, C. Norms in the wild: How to diagnose, measure, and change social norms. Oxford University Press, (2016).
* (20) Hawkins, R. X., Goodman, N. D., and Goldstone, R. L. The emergence of social norms and conventions. Trends in cognitive sciences 23(2), 158–169 (2019).
* (21) Bicchieri, C. and Chavez, A. Behaving as expected: Public information and fairness norms. Journal of Behavioral Decision Making 23(2), 161–178 (2010).
* (22) Krupka, E. L. and Weber, R. A. Identifying social norms using coordination games: Why does dictator game sharing vary? Journal of the European Economic Association 11(3), 495–524 (2013).
* (23) Biziou-van Pol, L., Haenen, J., Novaro, A., Occhipinti Liberman, A., and Capraro, V. Does telling white lies signal pro-social preferences? Judgment and Decision Making 10, 538–548 (2015).
* (24) Schram, A. and Charness, G. Inducing social norms in laboratory allocation choices. Management Science 61(7), 1531–1546 (2015).
* (25) Kimbrough, E. O. and Vostroknutov, A. Norms make preferences social. Journal of the European Economic Association 14(3), 608–638 (2016).
* (26) Eriksson, K., Strimling, P., Andersson, P. A., and Lindholm, T. Costly punishment in the ultimatum game evokes moral concern, in particular when framed as payoff reduction. Journal of Experimental Social Psychology 69, 59–64 (2017).
* (27) Capraro, V. and Rand, D. G. Do the right thing: Experimental evidence that preferences for moral behavior, rather than equity or efficiency per se, drive human prosociality. Judgment and Decision Making 13, 99–111 (2018).
* (28) Tappin, B. M. and Capraro, V. Doing good vs. avoiding bad in prosocial choice: A refined test and extension of the morality preference hypothesis. Journal of Experimental Social Psychology 79, 64–70 (2018).
* (29) Capraro, V. and Vanzo, A. The power of moral words: Loaded language generates framing effects in the extreme dictator game. Judgment and Decision Making 14, 309–317 (2019).
* (30) Huang, L., Lei, W., Xu, F., Yu, L., and Shi, F. Choosing an equitable or efficient option: A distribution dilemma. Social Behavior and Personality: An international journal 47(10), 1–10 (2019).
* (31) Capraro, V., Jagfeld, G., Klein, R., Mul, M., and van de Pol, I. Increasing altruistic and cooperative behaviour with simple moral nudges. Scientific Reports 9(1), 1–11 (2019).
* (32) Rand, D. G. Cooperation, fast and slow: Meta-analytic evidence for a theory of social heuristics and self-interested deliberation. Psychological Science 27(9), 1192–1206 (2016).
* (33) Falk, A., Fehr, E., and Fischbacher, U. On the nature of fair behavior. Economic inquiry 41(1), 20–26 (2003).
* (34) Gneezy, U. Deception: The role of consequences. American Economic Review 95(1), 384–394 (2005).
* (35) Cappelen, A. W., Sørensen, E. Ø., and Tungodden, B. When do we lie? Journal of Economic Behavior & Organization 93, 258–265 (2013).
* (36) Erat, S. and Gneezy, U. White lies. Management Science 58(4), 723–733 (2012).
* (37) Dana, J., Cain, D. M., and Dawes, R. M. What you don’t know won’t hurt me: Costly (but quiet) exit in dictator games. Organizational Behavior and Human Decision Processes 100(2), 193–201 (2006).
* (38) Lazear, E. P., Malmendier, U., and Weber, R. A. Sorting in experiments with application to social preferences. American Economic Journal: Applied Economics 4(1), 136–63 (2012).
* (39) List, J. A. On the interpretation of giving in dictator games. Journal of Political Economy 115(3), 482–493 (2007).
* (40) Bardsley, N. Dictator game giving: altruism or artefact? Experimental Economics 11(2), 122–133 (2008).
* (41) Cappelen, A. W., Nielsen, U. H., Sørensen, E. Ø., Tungodden, B., and Tyran, J.-R. Give and take in dictator games. Economics Letters 118(2), 280–283 (2013).
* (42) Capraro, V., Rodriguez-Lara, I., and Ruiz-Martos, M. J. Preferences for efficiency, rather than preferences for morality, drive cooperation in the one-shot stag-hunt game. Journal of Behavioral and Experimental Economics (2020).
* (43) Capraro, V. Gender differences in the trade-off between objective equality and efficiency. Judgment and Decision Making 15(4), 534–544 (2020).
* (44) Capraro, V., Jordan, J. J., and Tappin, B. M. Does observability amplify sensitivity to moral frames? Evaluating a reputation-based account of moral preferences. Journal of Experimental Social Psychology (2021).
* (45) Capraro, V., Jordan, J. J., and Rand, D. G. Heuristics guide the implementation of social preferences in one-shot prisoner’s dilemma experiments. Scientific Reports 4, 6790 (2014).
* (46) Peysakhovich, A., Nowak, M. A., and Rand, D. G. Humans display a ‘cooperative phenotype’ that is domain general and temporally stable. Nature Communications 5(1), 1–8 (2014).
* (47) Reigstad, A. G., Strømland, E. A., and Tinghög, G. Extending the cooperative phenotype: Assessing the stability of cooperation across countries. Frontiers in Psychology 8, 1990 (2017).
* (48) Human, S. J. and Capraro, V. The effect of nudging personal and injunctive norms on the trade-off between objective equality and efficiency. Available at https://psyarxiv.com/mx27g/ (2020).
* (49) Bicchieri, C. and Xiao, E. Do the right thing: but only if others do so. Journal of Behavioral Decision Making 22(2), 191–208 (2009).
* (50) Krupka, E. and Weber, R. A. The focusing and informational effects of norms on pro-social behavior. Journal of Economic Psychology 30(3), 307–320 (2009).
* (51) Zafar, B. An experimental investigation of why individuals conform. European Economic Review 55(6), 774–798 (2011).
* (52) Raihani, N. J. and McAuliffe, K. Dictator game giving: The importance of descriptive versus injunctive norms. PLoS ONE 9(12), e113826 (2014).
* (53) d’Adda, G., Capraro, V., and Tavoni, M. Push, don’t nudge: Behavioral spillovers and policy instruments. Economics Letters 154, 92–95 (2017).
* (54) Frey, B. S. and Meier, S. Social comparisons and pro-social behavior: Testing “conditional cooperation” in a field experiment. American Economic Review 94(5), 1717–1722 (2004).
* (55) Croson, R. T., Handy, F., and Shang, J. Gendered giving: The influence of social norms on the donation behavior of men and women. International Journal of Nonprofit and Voluntary Sector Marketing 15(2), 199–213 (2010).
* (56) Cialdini, R. B., Kallgren, C. A., and Reno, R. R. A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. In Advances in experimental social psychology, volume 24, 201–234. Elsevier (1991).
* (57) Ferraro, P. J. and Price, M. K. Using nonpecuniary strategies to influence behavior: evidence from a large-scale field experiment. Review of Economics and Statistics 95(1), 64–73 (2013).
* (58) Agerström, J., Carlsson, R., Nicklasson, L., and Guntell, L. Using descriptive social norms to increase charitable giving: The power of local norms. Journal of Economic Psychology 52, 147–153 (2016).
* (59) Goldstein, N. J., Cialdini, R. B., and Griskevicius, V. A room with a viewpoint: Using social norms to motivate environmental conservation in hotels. Journal of Consumer Research 35(3), 472–482 (2008).
* (60) Hallsworth, M., List, J. A., Metcalfe, R. D., and Vlaev, I. The behavioralist as tax collector: Using natural field experiments to enhance tax compliance. Journal of Public Economics 148, 14–31 (2017).
* (61) Hausman, D. M. and Welch, B. Debate: To nudge or not to nudge. Journal of Political Philosophy 18(1), 123–136 (2010).
* (62) Glaeser, E. L. Paternalism and psychology. Technical report, National Bureau of Economic Research, (2005).
* (63) Sunstein, C. R. Why nudge?: The politics of libertarian paternalism. Yale University Press, (2014).
* (64) Brañas-Garza, P. Promoting helping behavior with framing in dictator games. Journal of Economic Psychology 28(4), 477–486 (2007).
* (65) Dal Bó, E. and Dal Bó, P. “Do the right thing:” The effects of moral suasion on cooperation. Journal of Public Economics 117, 28–38 (2014).
* (66) Bilancini, E., Boncinelli, L., Capraro, V., Celadin, T., and Di Paolo, R. “Do the right thing” for whom? An experiment on ingroup favouritism, group assortativity and moral suasion. Judgment and Decision Making 15, 182–192 (2020).
* (67) Bénabou, R. and Tirole, J. Incentives and prosocial behavior. American Economic Review 96(5), 1652–1678 (2006).
* (68) Levitt, S. D. and List, J. A. What do laboratory experiments measuring social preferences reveal about the real world? Journal of Economic Perspectives 21(2), 153–174 (2007).
* (69) López-Pérez, R. Aversion to norm-breaking: A model. Games and Economic Behavior 64(1), 237–267 (2008).
* (70) Andreoni, J. and Bernheim, B. D. Social image and the 50–50 norm: A theoretical and experimental analysis of audience effects. Econometrica 77(5), 1607–1636 (2009).
* (71) Della Vigna, S., List, J. A., and Malmendier, U. Testing for altruism and social pressure in charitable giving. Quarterly Journal of Economics 127(1), 1–56 (2012).
* (72) Kessler, J. B. and Leider, S. Norms and contracting. Management Science 58(1), 62–77 (2012).
* (73) Alger, I. and Weibull, J. W. Homo moralis—preference evolution under incomplete information and assortative matching. Econometrica 81(6), 2269–2302 (2013).
* (74) Kimbrough, E. and Vostroknutov, A. Injunctive norms and moral rules. Technical report, mimeo, Chapman University and Maastricht University, (2020).
* (75) Kimbrough, E. O. and Vostroknutov, A. A theory of injunctive norms. Technical report, mimeo, Chapman University and Maastricht University, (2020).
* (76) Dimant, E., van Kleef, G. A., and Shalvi, S. Requiem for a nudge: Framing effects in nudging honesty. Journal of Economic Behavior and Organization 172, 247–266 (2020).
* (77) Jevons, W. S. The theory of political economy. Macmillan, (1879).
* (78) Bentham, J. The collected works of Jeremy Bentham: An introduction to the principles of morals and legislation. Clarendon Press, (1996).
* (79) Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., and Van Bavel, J. J. Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences 114(28), 7313–7318 (2017).
* (80) Brady, W. J., Wills, J. A., Burkart, D., Jost, J. T., and Van Bavel, J. J. An ideological asymmetry in the diffusion of moralized content on social media among political leaders. Journal of Experimental Psychology: General 148(10), 1802 (2019).
* (81) Brady, W. J., Crockett, M., and Van Bavel, J. J. The MAD model of moral contagion: The role of motivation, attention and design in the spread of moralized content online. Perspectives on Psychological Science 15(4), 978–1010 (2020).
* (82) Pang, B., Lee, L., and Vaithyanathan, S. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, 79–86. Association for Computational Linguistics, (2002).
* (83) Perc, M., Jordan, J. J., Rand, D. G., Wang, Z., Boccaletti, S., and Szolnoki, A. Statistical physics of human cooperation. Phys. Rep. 687, 1–51 (2017).
* (84) Capraro, V. and Perc, M. Grand challenges in social physics: In pursuit of moral behavior. Front. Phys. 6, 107 (2018).
* (85) Capraro, V., Perc, M., and Vilone, D. The evolution of lying in well-mixed populations. Journal of the Royal Society Interface 16(156), 20190211 (2019).
* (86) Capraro, V., Perc, M., and Vilone, D. Lying on networks: The role of structure and topology in promoting honesty. Phys. Rev. E 101, 032305 (2020).
* (87) Kumar, A., Capraro, V., and Perc, M. The evolution of trust and trustworthiness. Journal of the Royal Society Interface 17(169), 20200491 (2020).
* (88) Page, K. M., Nowak, M. A., and Sigmund, K. The spatial ultimatum game. Proc. R. Soc. Lond. B 267, 2177–2182 (2000).
* (89) Killingback, T. and Studer, E. Spatial ultimatum games, collaborations and the evolution of fairness. Proc. R. Soc. Lond. B 268, 1797–1801 (2001).
* (90) Iranzo, J., Floría, L., Moreno, Y., and Sánchez, A. Empathy emerges spontaneously in the ultimatum game: Small groups and networks. PLoS ONE 7, e43781 (2011).
* (91) Szolnoki, A., Perc, M., and Szabó, G. Defense mechanisms of empathetic players in the spatial ultimatum game. Phys. Rev. Lett. 109, 078701 (2012).
* (92) Ito, K., Ida, T., and Tanaka, M. Moral suasion and economic incentives: Field experimental evidence from energy demand. American Economic Journal: Economic Policy 10(1), 240–67 (2018).
* (93) Haidt, J. and Joseph, C. Intuitive ethics: How innately prepared intuitions generate culturally variable virtues. Daedalus 133(4), 55–66 (2004).
* (94) Graham, J., Haidt, J., and Nosek, B. A. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96(5), 1029–1046 (2009).
* (95) Haidt, J. The righteous mind: Why good people are divided by politics and religion. Vintage, (2012).
* (96) Kahane, G., Everett, J. A., Earp, B. D., Caviola, L., Faber, N. S., Crockett, M. J., and Savulescu, J. Beyond sacrificial harm: A two-dimensional model of utilitarian psychology. Psychological Review 125(2), 131–164 (2018).
* (97) Everett, J. A. and Kahane, G. Switching tracks? Towards a multidimensional model of utilitarian psychology. Trends in Cognitive Sciences 24(2), 124–134 (2020).
* (98) Curry, O. S., Mullins, D. A., and Whitehouse, H. Is it good to cooperate? Current Anthropology 60(1), 47–69 (2019).
* (99) Rand, D., Greene, J., and Nowak, M. Spontaneous giving and calculated greed. Nature 489, 427–430 (2012).
* (100) Andersen, S., Gneezy, U., Kajackaite, A., and Marx, J. Allowing for reflection time does not change behavior in dictator and cheating games. Journal of Economic Behavior & Organization 145, 24–33 (2018).
* (101) Bereby-Meyer, Y., Hayakawa, S., Shalvi, S., Corey, J. D., Costa, A., and Keysar, B. Honesty speaks a second language. Topics in Cognitive Science, 1–12 (2018).
* (102) Bouwmeester, S., Verkoeijen, P. P., Aczel, B., Barbosa, F., Bègue, L., Brañas-Garza, P., Chmura, T. G., Cornelissen, G., Døssing, F. S., Espín, A. M., et al. Registered replication report: Rand, Greene, and Nowak (2012). Perspectives on Psychological Science 12(3), 527–542 (2017).
* (103) Capraro, V., Schulz, J., and Rand, D. G. Time pressure and honesty in a deception game. Journal of Behavioral and Experimental Economics 79, 93–99 (2019).
* (104) Capraro, V., Corgnet, B., Espín, A. M., and Hernán-González, R. Deliberation favours social efficiency by making people disregard their relative shares: Evidence from the USA and India. Royal Society Open Science 4(2), 160605 (2017).
* (105) Chen, F. and Fischbacher, U. Cognitive processes underlying distributional preferences: A response time study. Experimental Economics, 1–26 (2019).
* (106) Chuan, A., Kessler, J. B., and Milkman, K. L. Field study of charitable giving reveals that reciprocity decays over time. Proceedings of the National Academy of Sciences 115(8), 1766–1771 (2018).
* (107) Everett, J. A., Ingbretsen, Z., Cushman, F., and Cikara, M. Deliberation erodes cooperative behavior—even towards competitive out-groups, even when using a control condition, and even when eliminating selection bias. Journal of Experimental Social Psychology 73, 76–81 (2017).
* (108) Holbein, J. B., Schafer, J. P., and Dickinson, D. L. Insufficient sleep reduces voting and other prosocial behaviours. Nature Human Behaviour 3(5), 492 (2019).
* (109) Hallsson, B. G., Siebner, H. R., and Hulme, O. J. Fairness, fast and slow: A review of dual process models of fairness. Neuroscience & Biobehavioral Reviews 89, 49–60 (2018).
* (110) Rand, D. G., Brescoll, V. L., Everett, J. A., Capraro, V., and Barcelo, H. Social heuristics and social roles: Intuition favors altruism for women but not for men. Journal of Experimental Psychology: General 145(4), 389 (2016).
* (111) Fromell, H., Nosenzo, D., and Owens, T. Altruism, fast and slow? evidence from a meta-analysis and a new experiment. Experimental Economics 23, 979–1001 (2020).
* (112) Köbis, N. C., Verschuere, B., Bereby-Meyer, Y., Rand, D., and Shalvi, S. Intuitive honesty versus dishonesty: Meta-analytic evidence. Perspectives on Psychological Science 14(5), 778–796 (2019).
* (113) Capraro, V. The dual-process approach to human sociality: A review. Available at SSRN 3409146 (2019).
* (114) Gneezy, U., Rockenbach, B., and Serra-Garcia, M. Measuring lying aversion. Journal of Economic Behavior & Organization 93, 293–300 (2013).
Table 1: Glossary of games and unselfish behaviours

Dictator game: We measure altruistic behaviour using the dictator game. The _dictator_ is given a certain amount of money and has to decide how much of it, if any, to give to the _recipient_, who starts with nothing. The recipient is passive.

Prisoner’s dilemma: We measure cooperative behaviour using the prisoner’s dilemma. Two players simultaneously decide whether to cooperate or to defect. Cooperating means paying a cost $c$ to give a benefit $b>c$ to the other player; defecting means doing nothing.

Sender-Receiver game: We measure lying aversion using the sender-receiver game. The _sender_ is given private information and has to report it to the _receiver_. In some experiments the receiver is passive [114, 23]; in others, the receiver is active [34, 36]. Here we focus on the case in which the receiver is passive. In this case, if the sender reports the truthful information, the sender and the receiver are paid according to Option A; if the sender reports untruthful information, they are paid according to Option B. Only the sender knows the exact payoffs associated with the two options. Depending on these payoffs, one can classify lies into four main classes: black lies benefit the sender at a cost to the receiver; altruistic white lies benefit the receiver at a cost to the sender; Pareto white lies benefit both the sender and the receiver; spiteful lies harm both the sender and the receiver.

Trade-Off game: We measure the trade-off between equality and efficiency using the trade-off game. A decision-maker has to choose between two possible allocations of money that affect people other than the decision-maker. One allocation is equal (all people involved in the interaction receive the same monetary payoff); the other is efficient (the sum of the monetary payoffs of all people is greater than in the equal allocation).

Trust game: We measure trustworthiness using the second player in the trust game. The _truster_ is given a certain amount of money and has to decide how much of it, if any, to transfer to the _trustee_. The amount sent to the trustee is multiplied by a constant (usually equal to 3) and given to the trustee. The trustee then decides how much of the amount s/he received to return to the truster.

Ultimatum game: We measure altruistic punishment using the second player in the ultimatum game. The _proposer_ makes an offer about how to split a sum of money between him/herself and the _responder_. The responder decides whether to accept or reject the offer. If the offer is accepted, the proposer and the responder are paid according to the agreed offer; if the offer is rejected, neither the proposer nor the responder gets any money. Rejecting a low offer is considered a measure of altruistic punishment.
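To make the payoff structures above concrete, here is a minimal Python sketch of two of the games; the endowment, benefit, and cost values are illustrative and not taken from any specific experiment.

```python
# Illustrative payoff functions for two of the games in Table 1.
# The monetary amounts below are invented for the example.

def dictator_game(endowment: float, given: float):
    """The dictator keeps endowment - given; the passive recipient gets `given`."""
    assert 0 <= given <= endowment
    return endowment - given, given

def prisoners_dilemma(coop1: bool, coop2: bool, b: float = 3.0, c: float = 1.0):
    """Cooperating pays a cost c to give a benefit b > c to the other player."""
    assert b > c
    p1 = (b if coop2 else 0.0) - (c if coop1 else 0.0)
    p2 = (b if coop1 else 0.0) - (c if coop2 else 0.0)
    return p1, p2

print(dictator_game(10.0, 4.0))        # (6.0, 4.0)
print(prisoners_dilemma(True, False))  # (-1.0, 3.0): the lone cooperator is exploited
```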
Table 2: Social preference models

Let $x_{i}$ be the monetary payoff of player $i$. Social preference models assume that the utility function of player $i$, $u_{i}$, is defined over the monetary payoffs that are associated with the available actions. The main functional forms that have been proposed are the following.

Ledyard (1995): $u_{i}(x_{1},\ldots,x_{n})=x_{i}+\alpha_{i}\sum_{j\neq i}x_{j}$, where $\alpha_{i}$ is an individual parameter representing $i$’s level of altruism. People with $\alpha_{i}=0$ maximise their monetary payoff; people with $\alpha_{i}>0$ are altruistic; people with $\alpha_{i}<0$ are spiteful.

Levine (1998): $u_{i}(x_{1},\ldots,x_{n})=x_{i}+\sum_{j\neq i}\frac{\alpha_{i}+\lambda\alpha_{j}}{1+\lambda}x_{j}$, where $\alpha_{i}$ is an individual parameter representing $i$’s level of altruism, whereas $\lambda\in[0,1]$ is a parameter representing how sensitive players are to the level of altruism of the other players.

Fehr and Schmidt (1999): $u_{i}(x_{1},\ldots,x_{n})=x_{i}-\frac{\alpha_{i}}{n-1}\sum_{j\neq i}\max(x_{j}-x_{i},0)-\frac{\beta_{i}}{n-1}\sum_{j\neq i}\max(x_{i}-x_{j},0)$, where $\alpha_{i},\beta_{i}$ are individual parameters representing the extent to which player $i$ cares about disadvantageous and advantageous inequities, respectively.

Bolton and Ockenfels (2000): $u_{i}(x_{1},x_{2})=\alpha_{i}x_{i}-\frac{\beta_{i}}{2}\left(\sigma_{i}-\frac{1}{2}\right)^{2}$, where $\sigma_{i}=\frac{x_{i}}{x_{1}+x_{2}}$, with $\sigma_{i}=\frac{1}{2}$ if $x_{1}+x_{2}=0$; $\alpha_{i}>0$ is an individual parameter representing the extent to which player $i$ cares about their own monetary payoff, and $\beta_{i}>0$ is an individual parameter representing the extent to which player $i$ cares about minimising the distance between their share and the fair share.

Andreoni and Miller (2002): $u_{1}(x_{1},x_{2})=\left(\alpha_{1}x_{1}^{\rho_{1}}+(1-\alpha_{1})x_{2}^{\rho_{1}}\right)^{1/\rho_{1}}$, where $\alpha_{1}$ represents the extent to which the dictator cares about their own payoff, whereas $\rho_{1}$ takes into account a potential convexity in the preferences.

Charness and Rabin (2002): $u_{2}(x_{1},x_{2})=(\rho_{2}r+\sigma_{2}s)x_{1}+(1-\rho_{2}r-\sigma_{2}s)x_{2}$. Depending on the relative relationship between $\rho_{2}$ and $\sigma_{2}$, this utility function can cover several cases, including competitive preferences, inequity aversion preferences, and social efficiency preferences.
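As a concrete illustration, the following Python sketch evaluates two of the utility functions above on a dictator-game allocation; the parameter values are invented for the example rather than estimated from data.

```python
# Ledyard-style and Fehr-Schmidt utilities from Table 2, with made-up parameters.

def u_ledyard(payoffs, i, alpha_i):
    # u_i = x_i + alpha_i * sum_{j != i} x_j
    return payoffs[i] + alpha_i * (sum(payoffs) - payoffs[i])

def u_fehr_schmidt(payoffs, i, alpha_i, beta_i):
    # u_i = x_i - alpha_i/(n-1) * sum_j max(x_j - x_i, 0)
    #           - beta_i /(n-1) * sum_j max(x_i - x_j, 0)
    n, x_i = len(payoffs), payoffs[i]
    disadv = sum(max(x_j - x_i, 0) for j, x_j in enumerate(payoffs) if j != i)
    adv    = sum(max(x_i - x_j, 0) for j, x_j in enumerate(payoffs) if j != i)
    return x_i - alpha_i / (n - 1) * disadv - beta_i / (n - 1) * adv

allocation = [6.0, 4.0]                                        # dictator keeps 6, gives 4
print(u_ledyard(allocation, 0, alpha_i=0.5))                   # 6 + 0.5*4 = 8.0
print(u_fehr_schmidt(allocation, 0, alpha_i=0.8, beta_i=0.3))  # 6 - 0.3*2 = 5.4
```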
Table 3: The classification of norms

Behavioural scientists have long been aware of the fact that people’s behaviour in a given context is influenced by what are perceived to be the norms in that context. In the same context, multiple norms might be at play, and scholars have proposed several norm classifications. In this review, we are mainly concerned with the following three.

Schwartz [15] classified norms into two main categories, namely _personal norms_ and _social norms_. Personal norms refer to internal standards about what is right and what is wrong in a given context. Social norms refer to rules and standards of behaviour that affect the choices of individuals without the force of law; they are typically externally motivated.

Cialdini, Reno and Kallgren [17] focused on social norms and classified them into two main categories, namely _injunctive norms_ and _descriptive norms_. Injunctive norms refer to what people think others would approve or disapprove of. Descriptive norms refer to what others actually do.

Bicchieri [18] proposed a classification in three main categories, namely _personal normative beliefs_, _empirical expectations_, and _normative expectations_. Personal normative beliefs refer to personal beliefs about what should happen in a given situation. Empirical expectations refer to personal beliefs about how others would behave in a given situation. Normative expectations refer to personal beliefs about what others think one should do.

Therefore, to the extent to which people believe that what should (or should not) happen in a given situation corresponds to their internal standards about what is right (or wrong), Bicchieri’s personal normative beliefs correspond to Schwartz’s personal norms. In one-shot anonymous games (where decision makers receive no information about the behaviour of other people playing in the same role), descriptive norms correspond to empirical expectations (we replace the actual behaviour of others with beliefs about it). Finally, normative expectations correspond to injunctive norms. Therefore, at least for the games and decision problems considered in this review, Bicchieri’s classification can be interpreted as a synthesis of the previous two.
Table 4: The moral preference hypothesis

Previous work explained unselfish behaviour in one-shot, anonymous economic games using social preferences defined over monetary outcomes. According to this “social preference hypothesis”, some people act unselfishly because they care not only about their own monetary payoff but also about the monetary payoffs of other people. However, especially in the last five years, numerous experiments have challenged social preference models. The best way to organise these results is through the moral preference hypothesis, according to which people have preferences for following their own personal norms – what they think to be the right thing to do – beyond the monetary consequences that these actions bring about. This framework outperforms the social preference hypothesis at organising cooperation in the prisoner’s dilemma, altruism in the dictator game, altruistic punishment in the ultimatum game, trustworthiness in the trust game, truth-telling in the sender-receiver game, and trade-off decisions between equality and efficiency in the trade-off game.
Table 5: Moral preference models

Let $a$ be an action for player $i$. Moral preference models assume that the utility function of player $i$, $u_{i}$, describes a tension between the material payoff associated with $a$, $v_{i}(\pi_{i}(a))$, and the moral utility. The main functional forms that have been proposed are the following.

Levitt and List (2007): $u_{i}(a)=v_{i}(\pi_{i}(a))+m(a)$. The moral cost or benefit associated with $a$, $m(a)$, is assumed to depend on whether the action is observable, on the material consequences of that action, and on the set of _social norms_ and rules in place in the society where the decision maker lives.

Krupka and Weber (2013): $u_{i}(a)=v_{i}(\pi_{i}(a))+\gamma_{i}N(a)$, where $\gamma_{i}$ is the extent to which $i$ cares about following the _injunctive norm_ and $N(a)$ represents the extent to which society views $a$ as socially appropriate.

Kimbrough and Vostroknutov (2020): $u_{i}(a)=v_{i}(\pi_{i}(a))+\phi_{i}\eta(a)$, where $\phi_{i}$ is the extent to which $i$ cares about following the _injunctive norm_ and $\eta(a)$ represents the extent to which society views $a$ as socially appropriate. (The main difference between $\eta(a)$ and $N(a)$ regards the way they are computed.)

Our proposal: $u_{i}(a)=v_{i}(\pi_{i}(a))+\mu_{i}P_{i}(a)$, where $\mu_{i}$ represents the extent to which $i$ cares about following their own _personal norms_ and $P_{i}(a)$ represents the extent to which $i$ personally thinks that $a$ is the right thing to do.
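The following sketch illustrates how the proposed form trades off money against personal norms; the payoffs, the norm ratings $P_{i}(a)$, and the values of $\mu_{i}$ are invented for illustration, with $v_{i}$ taken as the identity.

```python
# u_i(a) = v_i(pi_i(a)) + mu_i * P_i(a), with v_i the identity.
# All numbers below are hypothetical.

def moral_utility(material_payoff, norm_rating, mu):
    return material_payoff + mu * norm_rating

# A dictator choosing between keeping $10 (personally rated -0.5)
# and an equal split paying $5 (personally rated +1.0):
for mu in (1.0, 6.0):
    keep  = moral_utility(10.0, -0.5, mu)
    split = moral_utility(5.0, +1.0, mu)
    print(mu, "split" if split > keep else "keep")
# mu = 1.0 -> keep; mu = 6.0 -> split: a strong enough norm weight flips the choice.
```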
Table 6: Outstanding challenges

* • Exploring in which contexts interventions targeting personal norms are more effective at promoting one-shot unselfish behaviour than interventions targeting social norms.
* • Finding the boundary conditions of interventions targeting personal norms.
* • Investigating the dimension and the boundary of the “moral phenotype”, to understand how different personal norms can drive different forms of unselfish behaviour and whether the moral phenotype includes behaviours that are strategically unselfish, such as strategic fairness and trust.
* • Building bridges between computational linguistics, moral psychology, and behavioural economics, with the goal of understanding how to express people’s utility function also in terms of the instructions of a decision problem.
* • Using techniques from evolutionary game theory, applied mathematics, network science, and statistical physics to explore which types of unselfish behaviour are more likely to evolve, in order to understand which personal norms are more likely to be internalised.
* • Exploring the cognitive basis of personal norms using a dual-process perspective.
# Variational manifold learning from incomplete data: application to
multislice dynamic MRI
Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Sarv Priya, Rolf F Schulte, Mathews Jacob Qing Zou and Mathews Jacob are with the Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA, USA (e-mail: <EMAIL_ADDRESS>and <EMAIL_ADDRESS>). Abdul Haseeb Ahmed is with Philips Healthcare, Rochester, MN, USA (e-mail: <EMAIL_ADDRESS>). Prashant Nagpal is with the Department of Radiology, University of Wisconsin-Madison, Madison, WI, USA (e-mail: <EMAIL_ADDRESS>). Sarv Priya is with the Department of Radiology, The University of Iowa, Iowa City, IA, USA (e-mail: <EMAIL_ADDRESS>). Rolf F Schulte is with General Electric Healthcare, Munich, Germany (e-mail: <EMAIL_ADDRESS>). This work is supported by NIH under Grants R01EB019961 and R01AG067078-01A1. This work was conducted on an MRI instrument funded by 1S10OD025025-01.
###### Abstract
Current deep learning-based manifold learning algorithms such as the
variational autoencoder (VAE) require fully sampled data to learn the
probability density of real-world datasets. Once learned, the density can be
used for a variety of tasks, including data imputation. However, fully sampled
data is often unavailable in a variety of problems, including the recovery of
dynamic and high-resolution MRI data considered in this work. To overcome this
problem, we introduce a novel variational approach to learn a manifold from
undersampled data. The VAE uses a decoder fed by latent vectors, drawn from a
conditional density estimated from the fully sampled images using an encoder.
Since fully sampled images are not available in our setting, we approximate
the conditional density of the latent vectors by a parametric model whose
parameters are estimated from the undersampled measurements using back-
propagation. We use the framework for the joint alignment and recovery of
multislice free breathing and ungated cardiac MRI data from highly
undersampled measurements. Most of the current self-gating and manifold
cardiac MRI approaches consider the independent recovery of images from each
slice; these methods are not capable of exploiting the inter-slice
redundancies in the datasets and require sophisticated post-processing or
manual approaches to align the images from different slices. By contrast, the
proposed scheme is able to align the multislice data and exploit the
redundancies. Experimental results demonstrate the utility of the proposed
scheme in dynamic imaging alignment and reconstructions.
###### Index Terms:
Variational autoencoder; Generative model; CNN; Manifold approach;
Unsupervised learning; Free-breathing cardiac MRI; Image reconstruction
## I Introduction
Deep generative models [1] that rely on convolutional neural networks (CNNs)
are now widely used to represent data living on nonlinear manifolds. For
instance, the variational autoencoder (VAE) [2] represents the data points as
CNN mappings of the latent vectors, whose parameters are learned using the
maximum likelihood formulation. Since the exact log-likelihood of the data
points is intractable, VAE relies on the maximization of a lower bound of the
likelihood, involving an approximation for the conditional density of the
latent variable represented by an encoder neural network. The VAE framework
offers several benefits over the vanilla autoencoder [1], including improved
generalization [3] and ability to disentangle the important latent factors [4,
5]. However, most of the current generative models are learned from fully sampled datasets. Once learned, the probability density of the data can be used as a prior for various applications, including data imputation [6, 7]. Unfortunately, fully sampled datasets for training autoencoder networks are often not available in many high-resolution structural and dynamic imaging applications.
The main focus of this paper is to introduce a variational framework to learn
a deep generative manifold directly from undersampled/incomplete measurements.
The main application motivating this work is the multislice free-breathing and
ungated cardiac MRI. Breath-held CINE imaging, which provides valuable
indicators of abnormal structure and function, is an integral part of cardiac
MRI exams. Compressed sensing [8, 9, 10, 11] and deep learning methods have
emerged as powerful options to reduce the breath-hold duration, with excellent
performance [12, 13, 14, 15, 16]. Despite these advances, breath-held CINE
imaging is challenging for several subject groups, including pediatric and
chronic obstructive pulmonary disease (COPD) subjects. Several authors have
introduced self-gating [17, 18, 19, 20, 21, 22] and manifold approaches [23,
24, 25, 26, 27] to enable free-breathing and ungated single-slice cardiac MRI.
For instance, the smoothness regularization on manifolds (SToRM) approach [28,
29, 30] models the images as points on a low-dimensional manifold whose
structure is exploited using a kernel low-rank formulation [29, 30] to recover
the images from highly undersampled measurements. Recently, deep learning-
based manifold models were introduced [31, 32, 33] to further improve the
performance; these schemes learn a deep generative network and its latent
variables directly from the measured k-space data using a non-probabilistic
formulation.
All of the previously described free-breathing cardiac MRI reconstruction
approaches (e.g., compressed sensing-based approaches, manifold approaches,
and deep learning-based approaches) independently recover the data from each
slice. Cardiac MRI often relies on slice-by-slice acquisition to preserve
myocardium-to-blood-pool contrast, resulting from the in-flow of blood from
unexcited regions to the slice of interest; the improved contrast facilitates
segmentation. The above-mentioned 2D self-gating and manifold methods are thus
unable to exploit the extensive redundancies between adjacent slices, which
could offer improved performance. Note that the respiratory and cardiac motion
during the acquisition of the different slices is often very different; this
makes the direct 3D extension of the 2D self-gating and manifold methods
impossible. Another challenge with the approaches mentioned above is the need
for post-processing methods to determine matching slices at specific
cardiac/respiratory phases for estimation of cardiac parameters (e.g.,
ejection fraction, strain). Several post-processing methods have been
introduced to align the data post reconstruction [24, 34, 35, 36, 37]. Because
these methods require fully sampled data, they will not facilitate the
exploitation of the inter-slice redundancies during image recovery.
We introduce a novel variational framework for the joint recovery and
alignment of multislice data from the entire heart. This approach combines the
undersampled k-t space data from different slices, possibly acquired with
multiple cardiac and respiratory motion patterns, to recover the 3D dynamic
MRI dataset. We use a 3D CNN generative model, which takes in a latent vector
and outputs a 3D image volume. The time-varying latent vectors capture the
intrinsic variability in the dataset, including cardiac and respiratory
motion. The latent variables and the parameters of the 3D CNN are jointly
learned from the multislice k-t space data using a maximum likelihood
formulation. Since the likelihood is not tractable, we maximize its
variational lower bound involving a model for the conditional distribution of
the latent variables, which is conceptually similar to the VAE approach [2].
The VAE scheme uses an encoder network to derive the conditional probabilities
of the latent vectors from fully sampled data [2]. This approach is not
directly applicable in our setting because each data sample is measured using
a different measurement operator. We hence model the conditional densities as
a Gaussian distribution whose parameters are learned from the undersampled
data directly using back-propagation. We use a Gaussian prior on the latent
variables while deriving the evidence lower bound (ELBO); the Gaussian
prior ensures that the latent variables from different slices have similar
distributions, facilitating the alignment of the slices. We note that the
direct extension of our previous generative manifold model [31, 32] to the 3D
setting does not have any constraint on the latent variables; this extension
results in poor alignment of the slices and degradation in image quality in
the 3D setting. We also use smoothness priors on the latent variables to
further improve the performance. Once learned, the representation can be used
to generate matching 3D volumes with any desired cardiac/respiratory phase by
exciting the generator with appropriate latent vectors. This approach of
learning a generative model of the entire heart may thus be viewed as a
paradigm shift from conventional slice-by-slice image-recovery algorithms[17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30].
## II Background on dynamic MRI
### II-A multislice free-breathing MRI: problem statement
The main application considered in this paper is the recovery of 3D cardiac
volumes of the heart from undersampled 2D multislice k-t space data acquired
in the free-breathing and ungated setting. In particular, we consider the
recovery of the time series $\mathbf{x}(\mathbf{r},t_{z})$, where
$\mathbf{r}=(x,y,z)$ represents the spatial coordinates and $t_{z}$ denotes
the time frame during the acquisition of the $z^{\rm th}$ slice. We model the
acquisition of the data as
$\mathbf{b}(t_{z})=\mathcal{A}_{t_{z}}\big(\mathbf{x}(\mathbf{r},t_{z})\big)+\mathbf{n}_{t_{z}},$ (1)
where $\mathbf{b}(t_{z})$ is the k-t space data of the $z^{\rm th}$ slice at
the $t^{\rm th}$ time frame. Here, $\mathcal{A}_{t_{z}}$ are the time-
dependent measurement operators, which evaluate the multi-channel single-slice
Fourier measurements of the 3D volume $\mathbf{x}(\mathbf{r},t_{z})$ on the
trajectory $k_{t_{z}}$ corresponding to the time point $t$. Specifically,
$\mathcal{A}_{t_{z}}$ extracts the $z^{\rm th}$ slice from the volume
$\mathbf{x}(\mathbf{r},t_{z})$ and evaluates its single-slice measurements.
$\mathbf{n}_{t_{z}}$ represents the noise in the measurements.
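For concreteness, the following is a minimal single-coil Python sketch of the measurement model in (1), using Cartesian undersampling of the 2D Fourier transform. The actual acquisition uses multi-channel non-uniform (spiral) sampling, so this should be read as a conceptual stand-in rather than the operator used in this work.

```python
import numpy as np

def forward(volume, z, mask):
    """A_{t_z}: extract slice z from the 3D volume and sample its 2D k-space."""
    slice_z = volume[z]                             # slice extraction
    kspace = np.fft.fft2(slice_z, norm="ortho")     # orthonormal Fourier transform
    return kspace[mask]                             # keep only the measured locations

rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 64, 64))              # toy stand-in for x(r, t_z)
mask = rng.random((64, 64)) < 0.2                   # ~20% of k-space retained
noise = 0.01 * (rng.standard_normal(mask.sum()) + 1j * rng.standard_normal(mask.sum()))
b = forward(vol, z=3, mask=mask) + noise            # b(t_z) = A_{t_z}(x) + n_{t_z}
```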
### II-B CNN-based generative manifold models in dynamic MRI
CNN-based generative models were recently introduced for single-slice dynamic
MRI [31]. This scheme models the 2-D images in the time series as the output
of a CNN generator $\mathcal{D}_{\theta}$:
$\mathbf{x}_{i}=\mathcal{D}_{\theta}(\mathbf{c}_{i}),\quad i=1,\cdots,N.$
The input $\mathbf{c}_{i}$ is the latent vector, which lives in a low-
dimensional subspace. The recovery of the images in the time series involves
the minimization of the criterion
$\mathcal{C}(\mathbf{c},\theta)=\underbrace{\sum_{i=1}^{N}\|\mathcal{A}_{i}\left(\mathcal{D}_{\theta}(\mathbf{c}_{i})\right)-\mathbf{b}_{i}\|^{2}}_{\text{data term}}+\lambda_{1}\underbrace{\|J_{\mathbf{c}}\mathcal{D}_{\theta}(\mathbf{c})\|^{2}}_{\text{net reg.}}+\lambda_{2}\underbrace{\|\nabla_{i}\mathbf{c}_{i}\|^{2}}_{\text{latent reg.}}.$ (2)
The first term in the cost function is a measure of data consistency, while
the second term is a network regularization term that controls the smoothness
of the generated manifold [31]. The last term is the temporal smoothness of
the latent variables, which is used to further improve the performance.
## III Variational manifold learning
We now introduce a novel variational formulation to learn a manifold from
undersampled measurements, which is the generalization of the seminal VAE
approach [2] to the undersampled setting. We will first present the proposed
approach in a simple and general setting for simplicity and ease of
understanding. The use of this variational manifold model for the joint
alignment and recovery of 3D images from 2-D multislice MRI data will be
described in Section IV.
### III-A General problem statement and intuition
We assume that the images in the time series, indexed by $i$, live on a smooth
manifold $\mathcal{M}$ and hence can be modeled as the output of a CNN-based
generator:
$\mathbf{x}_{i}=\mathcal{D}_{\theta}(\mathbf{c}_{i}),$ (3)
where $\mathbf{c}_{i}$ is the low-dimensional latent variable corresponding to
$\mathbf{x}_{i}$. Here, $\theta$ denotes the weights of the generator, which
is shared for all the images.
Most generative models consider the learning of the above model from fully
sampled data. By contrast, we consider the recovery from incomplete
measurements
$\mathbf{b}_{i}=\mathcal{A}_{i}(\mathbf{x}_{i})+\mathbf{n}_{i}.$ (4)
Here, $\mathcal{A}_{i}$ is an undersampled measurement operator corresponding
to the $i^{\rm th}$ image frame. Here,
$\mathbf{n}_{i}\in\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I})$ are noise
vectors. Note that the measurement operators for each $\mathbf{x}_{i}$ are
different. If the same sampling operators are used for all the data points, it
is impossible to recover the images without additional prior information. We
assume that the sampling operators satisfy the following properties:
1. We assume $\mathcal{A}_{i}$ to be a rectangular sub-matrix, obtained by picking specific rows of an orthonormal measurement operator (e.g., Fourier transform).
2. We assume that the measurement operators $\mathcal{A}\sim\mathcal{S}$ are drawn from a distribution and satisfy
$\mathbb{E}_{\mathcal{A}\sim\mathcal{S}}[\mathcal{A}^{H}\mathcal{A}]=\mathcal{I},$ (5)
where $\mathcal{I}$ is the identity operator. This condition guarantees diversity in the measurement operators.
We now provide some intuition about why learning the model succeeds under the above assumptions on the measurement operators. In the noiseless setting, we consider
the learning of the latent variables $\mathbf{c}_{i}$ and the weights $\theta$
by minimizing the empirical error:
$\left\\{\theta^{*},\mathbf{c}_{i}^{*}\right\\}=\arg\min_{\theta,\mathbf{c}_{i}}\underbrace{\sum_{i}\|\mathcal{A}_{i}\left(\mathbf{x}_{i}-\mathcal{D}_{\theta}(\mathbf{c}_{i})\right)\|^{2}}_{\mathcal{L}}.$
(6)
Here, $\mathbf{x}_{i}$ are the fully sampled data points. When
$\mathcal{A}\sim\mathcal{S}$, this empirical sum approximates
$\displaystyle\mathcal{L}\approx\mathbb{E}_{\mathbf{x}\sim\mathcal{M}}~\mathbb{E}_{\mathcal{A}\sim\mathcal{S}}\|\mathcal{A}\left(\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c})\right)\|^{2}=\mathbb{E}_{\mathbf{x}\sim\mathcal{M}}~\mathbb{E}_{\mathcal{A}\sim\mathcal{S}}\left\langle\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c}),\mathcal{A}^{H}\mathcal{A}\left(\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c})\right)\right\rangle=\mathbb{E}_{\mathbf{x}\sim\mathcal{M}}\left\langle\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c}),\underbrace{\mathbb{E}_{\mathcal{A}\sim\mathcal{S}}[\mathcal{A}^{H}\mathcal{A}]}_{\mathcal{I}}\left(\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c})\right)\right\rangle=\mathbb{E}_{\mathbf{x}\sim\mathcal{M}}\|\mathbf{x}-\mathcal{D}_{\theta}(\mathbf{c})\|^{2}.$
The above result follows from (5) and the orthonormality of the full
measurement operator. This result shows that the recovery of the true manifold
is feasible from undersampled data when the sampling operators satisfy the
properties listed above.
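The following short numerical experiment (a sanity check of ours, not code from the paper) verifies property (5) for randomly row-subsampled orthonormal operators: when each row of an orthonormal DFT matrix is retained with probability $p$ and rescaled by $1/\sqrt{p}$, the empirical average of $\mathcal{A}^{H}\mathcal{A}$ approaches the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 32, 0.25, 20000
F = np.fft.fft(np.eye(n), norm="ortho")        # orthonormal DFT matrix

acc = np.zeros((n, n), dtype=complex)
for _ in range(trials):
    keep = rng.random(n) < p                   # each row kept with probability p
    A = F[keep] / np.sqrt(p)                   # rescaling gives E[A^H A] = I
    acc += A.conj().T @ A
print(np.abs(acc / trials - np.eye(n)).max())  # small, on the order of 1e-2
```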
### III-B Proposed algorithm
We consider the recovery of the images $\mathbf{x}_{i}$ from their
measurements (4) by maximizing their likelihood, specified by
$p(\mathbf{b}_{i})=\frac{p(\mathbf{b}_{i},\mathbf{c}_{i})}{p(\mathbf{c}_{i}|\mathbf{b}_{i})}.$ (7)
We note that the posterior $p(\mathbf{c}_{i}|\mathbf{b}_{i})$ is not
tractable. Following the VAE approach in [2], we use a surrogate distribution
to approximate $p(\mathbf{c}_{i}|\mathbf{b}_{i})$. The VAE formulation uses an
encoder network to model $p(\mathbf{c}_{i}|\mathbf{x}_{i})$ from the fully
sampled data ($\mathbf{b}_{i}=\mathbf{x}_{i}$). Unfortunately, this approach
is not directly applicable in our setting since $\mathbf{b}_{i}$ is the
undersampled data, measured using $\mathcal{A}_{i}$ that vary with $i$.
We propose to use a Gaussian model $q_{i}(\mathbf{c}_{i})\approx
p(\mathbf{c}_{i}|\mathbf{b}_{i})$, parameterized by its mean $\bm{\mu}_{i}$
and diagonal covariance matrix $\bm{\Sigma}_{i}$, and to estimate these
parameters using back-propagation. Following a similar argument as in [2], we
show in the Appendix that the likelihood term in (7) can be lower-bounded as
$\log p(\mathbf{b}_{i})\geq\underbrace{-\frac{1}{2\sigma^{2}}\mathbb{E}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\left[\|\mathcal{A}_{i}\,\mathcal{D}_{\theta}(\mathbf{c}_{i})-\mathbf{b}_{i}\|^{2}\right]}_{\text{data term}}-\underbrace{KL[q_{i}(\mathbf{c}_{i})\,||\,p(\mathbf{c}_{i})]}_{L(q_{i}):~\text{latent regularization}}.$ (8)
Here, $p(\mathbf{c}_{i})$ is a prior on the latent variables. In this work, we
assume $p(\mathbf{c}_{i})=\mathcal{N}(\mathbf{0},\mathbf{I})$, where
$\mathbf{I}$ is the identity matrix. In this case, the KL divergence can be
explicitly evaluated as
$L(q_{i})=\frac{-\log[\det(\bm{\Sigma}_{i})]-n+\mathrm{trace}(\bm{\Sigma}_{i})+\bm{\mu}_{i}^{T}\bm{\mu}_{i}}{2},$
where we assume a latent space of dimension $n$.
We hence solve for the unknown weights of the generator $\theta$ as well as
the parameters of $q_{i}$ denoted by $\bm{\mu}_{i}$ and $\bm{\Sigma}_{i}$ by
minimizing the negative of the lower bound in (8).
Following [2], we use a Monte-Carlo approach to approximate the expectation in
the data term. In particular, at each epoch of the training loop, we derive
the samples $\mathbf{c}_{i}$ as
$\mathbf{c}_{i}=\bm{\mu}_{i}+\bm{\Sigma}_{i}^{1/2}\,\bm{\epsilon},$ (9)
where $\bm{\epsilon}$ is a zero-mean, unit-variance Gaussian random vector.
At each iteration, the estimation process thus involves the minimization of
the criterion
$\mathcal{C}\left(\theta,\{\bm{\mu}_{i},\bm{\Sigma}_{i}\}\right)=\sum_{i=1}^{N_{\rm data}}\left(\|\mathcal{A}_{i}\,\mathcal{D}_{\theta}(\mathbf{c}_{i})-\mathbf{b}_{i}\|^{2}+\sigma^{2}\,L(q_{i})\right),$ (10)
with respect to the unknowns $\theta,\bm{\mu}_{i}$ and $\bm{\Sigma}_{i}$.
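A hedged PyTorch sketch of one descent step on (10) is shown below. The generator architecture, measurement operators, and hyper-parameter values are placeholders rather than those used in this work, and we parameterize the diagonal covariance $\bm{\Sigma}_{i}$ through its log-variance, which makes the closed-form KL term a single line.

```python
import torch

n_frames, n_latent, sigma2 = 100, 2, 1e-2                      # illustrative values
mu = torch.zeros(n_frames, n_latent, requires_grad=True)       # mu_i
log_var = torch.zeros(n_frames, n_latent, requires_grad=True)  # log diag(Sigma_i)
generator = torch.nn.Sequential(                               # stand-in for D_theta
    torch.nn.Linear(n_latent, 64), torch.nn.LeakyReLU(),
    torch.nn.Linear(64, 32 * 32), torch.nn.Tanh())
opt = torch.optim.Adam([mu, log_var, *generator.parameters()], lr=1e-3)

def step(A_list, b_list):
    """One iteration; A_list[i] implements A_i, b_list[i] holds the measurements."""
    eps = torch.randn_like(mu)
    c = mu + torch.exp(0.5 * log_var) * eps                    # reparameterization (9)
    x = generator(c)                                           # x_i = D_theta(c_i)
    data = sum(((A(x[i]) - b) ** 2).sum()
               for i, (A, b) in enumerate(zip(A_list, b_list)))
    # KL(q_i || N(0, I)) summed over i, in closed form:
    kl = 0.5 * (log_var.exp() + mu ** 2 - 1 - log_var).sum()
    loss = data + sigma2 * kl
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Example A_i for a toy pixel-sampling setup: A = lambda x, idx=idx: x[idx]
```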
Figure 1: Illustration of variational manifold learning in the context of
learning the digit 1 from the MNIST dataset. We first trained the variational
model from the fully sampled data. (a) shows several of the original images,
and (b) shows the corresponding output of the generator (reconstructions). (c)
illustrates the learned manifold; we sample the latent vectors on a uniform
grid in the range $[-3,3]^{2}$ and show the corresponding reconstructions.
Note that the latent vectors capture the intrinsic variability in the dataset.
The second row shows the results from the variational model, which are trained
with undersampled noisy measurements. In this setting, 70% of the pixel values
are missing, and the remaining 30% of the pixel values are corrupted with
Gaussian white noise with 0 mean and 0.05 standard deviation. The zero-filled
images are shown in (d). In (e), we show the reconstructions from the
undersampled measurements. Note that the reconstructions closely resemble the
original digits in (a). (f) illustrates the learned manifold; note that it captures the same variability as the manifold in (c).
### III-C Illustration using MNIST data
We provide a simple example illustrating the above variational model using undersampled images of the digit 1 from the MNIST dataset [38]. The images are scaled to the range $[-1,1]$.
The generator used here is a simple three-layer CNN; the ReLU activation function is used for the first two layers, and tanh is used for the last layer. The dimension of the latent space is chosen as 2. In this example, all the trainable parameters are initialized as small random numbers, and the hyper-parameter for the latent regularization $L(q_{i})$ is chosen as 1. We used 1,000 epochs to train the CNN generator.
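A plausible PyTorch realization of this three-layer generator is sketched below; only the depth, the activations, and the latent dimension are specified above, so the kernel sizes and channel counts here are our assumptions.

```python
import torch

class Generator(torch.nn.Module):
    """Three-layer CNN generator: ReLU, ReLU, tanh; latent dimension 2."""
    def __init__(self, n_latent=2):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(n_latent, 32, 7),                 # 1x1 -> 7x7
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 7x7 -> 14x14
            torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 14x14 -> 28x28
            torch.nn.Tanh())                                           # range [-1, 1]

    def forward(self, c):                    # c: (batch, n_latent)
        return self.net(c[:, :, None, None])

x = Generator()(torch.randn(5, 2))           # -> (5, 1, 28, 28)
```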
We first trained the model from the fully sampled data
($\mathcal{A}_{i}=\mathcal{I}$), whose results are shown in the first row of
Fig. 1. Then we trained the model from undersampled noisy data. In the
example, 70% of the pixel values in each image are missing, while Gaussian white noise with standard deviation 0.05 is added to the remaining 30% of the pixel values. The recovered images are shown in the second row of Fig. 1. We report
the peak signal-to-noise ratio (PSNR) and the structural similarity index
measure (SSIM) for the results.
## IV Application to dynamic MRI
We first describe the application of the algorithm in the single-slice free-
breathing and ungated data, which is the setting considered in [31]. We then
generalize the approach to the novel setting of the joint alignment and
recovery of 3D MRI from multislice free-breathing data in Section IV-C.
### IV-A Acquisition scheme and pre-processing of data
The datasets used in this work are acquired using a 2D gradient echo (GRE) sequence with golden angle spiral readouts in the free-breathing and ungated setting on an MR750W scanner (GE Healthcare, Waukesha, WI, USA). The sequence parameters for the datasets are: FOV = 320 mm $\times$ 320 mm, flip angle = 18∘, slice
thickness = 8 mm. The datasets were acquired using a cardiac multi-channel
array with 34 channels. The Institutional Review Board at the University of
Iowa approved the acquisition of the data, and written consents were obtained
from the subjects. The number of slices acquired for different subjects
varies.
We used an algorithm developed in-house to pre-select the coils that provide
the best signal-to-noise ratio in the region of interest. A PCA-based coil
combination scheme was then used such that the approximation error was less
than $5\%$. We then estimated the coil sensitivity maps based on these virtual
channels using ESPIRiT [39] and assumed them to be constant over time.
A total of 3,192 spirals were acquired for each slice in the subjects with
TR=8.4 ms, which corresponds to an acquisition time of 27 seconds. Among the
3,192 spirals, every sixth spiral was acquired with the same angle; these
spirals were used for self-navigation in the reconstruction methods that
require self-navigation. We binned the data from six spiral interleaves
corresponding to 50 ms temporal resolution for each frame.
(a) V-SToRM: SS
(b) V-SToRM: MS
Figure 2: Illustration of the proposed variational SToRM (V-SToRM) scheme. (a)
single-slice setting: The 2D network $\mathcal{D}$ receives the latent vectors
sampled from their respective latent distributions using (9). The measurements
of the 2D-generated images obtained by the respective sampling operators
$\mathcal{A}_{i}$ are compared to the acquired multi-channel measurements
using the cost function specified by (11). (b) the multislice 3D setting:
Similar to the single-slice setting, the inputs to the 3D network are samples
from the respective latent distributions. The 3D volumes are sampled by the
respective sampling operators $\mathcal{A}_{z,t}$, which extract the $z^{\rm
th}$ slice and compare it to the measured data. The optimization criterion
specified by (12) is minimized in this case.
### IV-B Single-slice Variational SToRM algorithm
Based on the analysis in the previous sections, we use the following scheme
for the recovery of single-slice dynamic MRI. We use a re-parameterization
layer to obtain the latent variables $\mathbf{c}(t)$ from the time-varying
probability distributions $q(\mathbf{c}(t))$ with parameters $\bm{\mu}_{t}$
and $\bm{\Sigma}_{t}$. These latent variables are fed to the CNN generator
$\mathcal{D}_{\theta}$, which generates the reconstructed images
$\mathbf{x}(t)=\mathcal{D}_{\theta}(\mathbf{c}(t))$. The multi-channel, non-
uniform, Fourier transform-based forward operators are applied on the
reconstructed images, which are then compared to the actual noisy measurements
$\mathbf{b}_{i}$. The illustration of this scheme is shown in Fig. 2 (a). The
parameters in the generator and the $\bm{\mu}_{i}$ and the $\bm{\Sigma}_{i}$
are updated based on the loss function
$\mathcal{L}(\theta,\{\bm{\mu}_{t},\bm{\Sigma}_{t}\})=\mathcal{C}(\theta,\{\bm{\mu}_{t},\bm{\Sigma}_{t}\})+\lambda_{1}||\theta||_{1}^{2}+\lambda_{2}||\nabla\bm{\mu}_{t}||^{2}.$
(11)
Here, $\mathcal{C}(\theta,\{\bm{\mu}_{t},\bm{\Sigma}_{t}\})$ is defined in
(10), which is the lower bound for maximum likelihood estimation. The second
term in (11) is a regularization penalty on the generator weights. It has been
shown in [31] that adding this term makes the training of the decoder more
stable. The third term involves the temporal gradients of the latent vectors,
which enforces the latent vectors to capture the smooth nature of motion
patterns in the dynamic images. We use the ADAM optimizer to determine the optimal parameters. We also adopt the progressive-in-time training strategy introduced in [31] to realize a computationally efficient reconstruction. We term this dynamic MRI reconstruction scheme single-slice variational SToRM.
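The two penalties added in (11) take only a few lines; the sketch below uses illustrative weights and assumes the data/KL term of (10) has already been computed (e.g., as in the sketch of Section III-B).

```python
import torch

def vstorm_loss(data_kl_term, generator_params, mu, lam1=1e-4, lam2=1e-2):
    """Loss (11): data/KL term + lam1*||theta||_1^2 + lam2*||grad_t mu_t||^2."""
    net_reg = sum(p.abs().sum() for p in generator_params) ** 2   # ||theta||_1^2
    latent_reg = ((mu[1:] - mu[:-1]) ** 2).sum()                  # finite-difference gradient
    return data_kl_term + lam1 * net_reg + lam2 * latent_reg
```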
### IV-C Multislice Variational SToRM algorithm
We now generalize the single-slice variational SToRM scheme for the joint
alignment and recovery of multislice dynamic MRI. We assume that the image
volume at the time point $t$ during the acquisition of the $z^{\rm th}$ slice,
denoted by $\mathbf{x}(\mathbf{r},t_{z})$, as the output of the generator:
$\mathbf{x}(\mathbf{r},t_{z})=\mathcal{D}_{\theta}\left(\mathbf{c}(t_{z})\right).$
Here, $\mathbf{c}(t_{z})$ are the low-dimensional latent vectors corresponding
to slice $z$ at the time point $t$, which is formed by the re-parameterization
layer. We note that the generator $\mathcal{D}_{\theta}$ is shared across all
slices and time points; this approach facilitates the exploitation of the
spatial redundancies between the slices and time points.
We propose to jointly align and reconstruct the multislice MRI by jointly
estimating the parameters $\theta$, $\bm{\mu}(t_{z})$ and $\bm{\Sigma}(t_{z})$
from the measured multislice data by minimizing the following cost function:
$\mathcal{L}_{MS}(\theta,\bm{\mu}(t_{z}),\bm{\Sigma}(t_{z}))=\mathcal{C}_{MS}(\theta,\bm{\mu}(t_{z}),\bm{\Sigma}(t_{z}))+\lambda_{1}||\theta||_{1}^{2}+\lambda_{2}\sum_{z}||\nabla_{t_{z}}\bm{\mu}(t_{z})||^{2},$ (12)
where
$\mathcal{C}_{MS}=\displaystyle\sum_{z=1}^{N_{\rm slice}}\sum_{t=1}^{N_{\rm
data}}\|\mathcal{A}_{t_{z}}\left[\mathcal{D}_{\theta}(\mathbf{c}(t_{z}))\right]-\mathbf{b}_{t_{z}}\|^{2}+\sigma^{2}~{}L(q(t_{z}))$
is the lower bound for maximum likelihood, analogous to the first term in (11). The
illustration of this scheme is given in Fig. 2(b). The parameters of the
shared 3D generator $\mathcal{D}_{\theta}$ are jointly learned in an
unsupervised fashion from the measured k-t space data using the ADAM
optimization algorithm. Once training is complete, we generate the image time series by feeding the generator the latent variables of any specific slice. Following successful learning, we expect the
volumes of the multislice reconstructions to have the same motion patterns
characterized by the latent variables of that particular slice. We refer to
this dynamic MRI reconstruction scheme as multislice variational SToRM, or
V-SToRM.
### IV-D Comparison with state-of-the-art (SOTA) methods
We compare the proposed V-SToRM approach with the following existing methods.
* •
Analysis SToRM [28]: The analysis SToRM model uses a kernel low-rank
formulation, which involves the estimation of the manifold Laplacian matrix
from the k-space navigators using kernel low-rank regularization. This
Laplacian is then used to solve for the images. We note that the analysis
SToRM approach has been demonstrated to yield improved performance over state-
of-the-art self-gated methods, as shown in our prior work [28, 30]. We refer
to this approach as A-SToRM.
* •
Single-slice generative SToRM [31]: The single-slice generative SToRM approach
uses a CNN generator to generate the single-slice image series from the highly
undersampled k-t space data. This scheme does not rely on a variational
formulation. It performs the independent recovery of each slice and hence
fails to exploit the inter-slice redundancies. We refer to this approach as
G-SToRM:SS.
* •
Multislice generative SToRM: We extended the single-slice generative SToRM
approach without the variational framework to the multislice setting. In
particular, we use the CNN generator to produce the image volume; the
generator parameters and the latent vectors for each slice are jointly
learned. Finally, we feed the latent variables of a particular slice into the
generator to obtain the aligned multislice reconstruction. We refer to this
approach as G-SToRM:MS.
For the quantitative comparisons, in addition to the SSIM metric, we also use
the signal-to-error ratio (SER) defined as
$\mathrm{SER}=20\cdot\log_{10}\frac{||\mathbf{x}_{\rm
ref}||}{||\mathbf{x}_{\rm ref}-\mathbf{x}_{\rm recon}||}.$
Here, $\mathbf{x}_{\rm ref}$ and $\mathbf{x}_{\rm recon}$ represent the
reference and the reconstructed images, respectively. The unit of SER is decibels (dB). In our free-breathing and ungated cardiac MRI setting, we usually do not have access to the ground truth. Therefore, we use the A-SToRM reconstruction obtained from 25 seconds of data as the simulated ground truth.
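The SER metric translates directly into code; a short NumPy implementation is given below.

```python
import numpy as np

def ser_db(x_ref, x_recon):
    """Signal-to-error ratio in dB between a reference and a reconstruction."""
    return 20 * np.log10(np.linalg.norm(x_ref) / np.linalg.norm(x_ref - x_recon))
```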
## V Experiments and results
### V-A Implementation details
In this work, we use a deep CNN to build the generator. The number of generator output channels depends on the specific dataset. For the experiments using the MNIST dataset, a single output channel is used. By contrast, a two-
channel output corresponding to the real and imaginary parts of the MR images
is used for the rest of the experiments. In the MRI setting, we use a
generator of 10 layers. The total number of trainable parameters is about 6
times the size of the image volume. For the convolutional layers in the
generator, the activation function is chosen as leaky ReLU [40] except for the
final layer, where $\tanh$ is used as the activation function. Random
initialization is used to initialize the generator network.
The algorithm has three free parameters, $\sigma^{2}$, $\lambda_{1}$, and
$\lambda_{2}$. For each method, we optimize these parameters as well as the
architecture of the generator on a single dataset such that the
reconstructions closely match the 25-second A-SToRM reconstructions. Once the
optimal parameters are determined, they are kept fixed for the remaining
datasets. Our experiments showed that two latent vectors, corresponding to the
cardiac and respiratory phases, were sufficient for good recovery of the
single-slice datasets. In the multislice case, three latent vectors were
required to obtain good reconstructions. In this case, two of the three latent vectors
captured cardiac and respiratory motion, respectively. The third latent vector
seemed to capture a harmonic of the respiratory motion.
Figure 3: Showcase of the single-slice V-SToRM. We trained the variational
model using the data of one slice. We show four different phases in the time
series: diastole in End-Inspiration (E-I), diastole in End-Expiration (E-E),
systole in End-Inspiration (E-I), and systole in End-Expiration (E-E),
obtained from single-slice V-SToRM. The plot of the latent vectors is shown
at the bottom of the figure, and the latent vectors corresponding to the four
phases are indicated on the plot.
### V-B Single-slice V-SToRM and comparisons
In this section, we focus on single-slice V-SToRM; the reconstructions of a
dataset and its latent vectors are shown in Fig. 3. We trained the variational
model using the data of one slice. The latent vectors we obtained are shown at
the bottom of Fig. 3. Four different phases in the time series are shown in
the figure, and their corresponding latent vectors are indicated in the plot
of the latent vectors.
The comparisons between the single-slice V-SToRM and the state-of-the-art
methods on a different dataset are shown in Fig. 4. In these experiments, we
compare the region of interest for A-SToRM, G-SToRM, and V-SToRM
reconstructions using 7.5 seconds of data. We use A-SToRM reconstructions
from 25 seconds of data as the reference. From Fig. 4, we see that G-SToRM
(7.5 s) and V-SToRM (7.5 s) are able to reduce errors and noise in the images
when compared to A-SToRM (7.5 s). The proposed V-SToRM (7.5 s) is able to
provide sharper edges than G-SToRM (7.5 s). These observations are further
confirmed by the quantitative results shown at the bottom of the figure.
Figure 4: Comparisons with the state-of-the-art methods for single-slice
results. The figure shows the visual comparison of three phases: the diastole
phase (top row), the systole phase (third row), and the phase that is in
between the diastole and systole phases (second row). The first three columns
correspond to the reconstructions using the A-SToRM, G-SToRM, and V-SToRM
approaches based on 7.5 seconds of data. The last column shows the
reconstructions from A-SToRM based on 25 seconds of data; we use these
reconstructions as references for quantitative comparisons. We also report the
quantitative results at the bottom of the figure.
(a) Alignment and recovery of eight slices using V-SToRM
(b) Alignment and recovery of eight slices using G-SToRM
(c) Latent vectors obtained by V-SToRM:MS
(d) Latent vectors obtained by G-SToRM:MS
Figure 5: Alignment and joint recovery of multislice data. In (a), we show the
alignment and recovery of the eight slices obtained from the proposed
multislice V-SToRM scheme. Four different phases in the time series for each
slice are displayed. From (a), we see that all the slices have the same
cardiac phase and respiratory phase, indicating that the multislice V-SToRM is
able to align the slices. In (b), we show the alignment and recovery of the
eight slices obtained from the generalization of single-slice G-SToRM to the
multislice setting. We also use four different phases in the time series for
each slice to illustrate the alignment of the multislice data. From (b), we
see that some of the phases for some of the slices have poor image quality. In
particular, the details in the cardiac regions are poorly captured, and in
some cases the boundaries of the heart are not visible. These issues can be
understood from the plots of the distributions of the latent vectors obtained
by multislice V-SToRM and G-SToRM:MS, shown in (c) and (d), respectively. We also
plot the latent vectors for two of the slices for each method. Note that we
generated the results in (a) and (b) by feeding the latent vectors
corresponding to the second slice into the generators. The corresponding
latent vectors used to generate the four different phases in (a) and (b) are
indicated in the plot of the latent vectors in (c) and (d). From (c) and (d),
we see that the latent vectors obtained from the proposed multislice V-SToRM
scheme have similar distributions, whereas the distributions for the latent
vectors obtained from G-SToRM:MS are very different.
(a) Comparisons based on slice #3
(b) Comparisons based on slice #4
Figure 6: Comparisons of the image quality of the reconstructions. We compare
the image quality of the multislice V-SToRM reconstructions with the image
quality of the reconstructions from A-SToRM, G-SToRM:SS, and G-SToRM:MS. The
multislice dataset used in this example has four slices, and we show two of
the slices in the figure for the comparisons. For each slice, we show three
different phases: the diastole phase, the systole phase, and the phase that is
in between the diastole and systole phases. For each sub-figure, the first
four columns represent the reconstruction from A-SToRM, G-SToRM:SS,
G-SToRM:MS, and the proposed multislice V-SToRM based on 6 seconds of data.
The last column shows the reconstructions using A-SToRM based on 25 seconds of
data; they are used as simulated references for the quantitative results,
which are shown at the bottom of each sub-figure. From both the visual
comparisons and the quantitative results, we see that the multislice V-SToRM
scheme provides reconstructions comparable to those of the competing methods.
We also highlight some of the phases in the multislice G-SToRM reconstruction,
which show that G-SToRM:MS has issues in generating some of the image frames.
### V-C Joint alignment and recovery of multislice data
In this section, we show the results of the joint alignment and recovery of
multislice data using the proposed multislice V-SToRM scheme. We also compare
the alignment results obtained from the straightforward multislice extension
of the G-SToRM scheme. The results are shown in Fig. 5. More results are shown
in the supplementary material.
The dataset used in Fig. 5 was acquired with eight slices that covered the
whole heart. We trained the variational model based on the undersampled k-t
space data and fed the latent vectors corresponding to the second slice to the
generator, which produces the aligned multislice reconstructions. Shown in the
figures are four time points based on the different phases identified by the
latent variables. The rows in Fig. 5 (a) correspond to diastole in End-
Inspiration, diastole in End-Expiration, systole in End-Inspiration, and
systole in End-Expiration for each slice obtained using the proposed
multislice V-SToRM scheme. From Fig. 5 (a), we see that the proposed
multislice V-SToRM scheme is able to jointly reconstruct and align the
multislice free-breathing and ungated cardiac MRI. We note that all the slices
in each row have the same cardiac phase and respiratory phase.
In Fig. 5 (b), we show the corresponding results for the direct extension of
the multislice G-SToRM approach. In particular, we trained the model using the
undersampled k-t space data and fed the latent vectors corresponding to the
second slice into the generator to produce the aligned multislice
reconstructions. From Fig. 5 (b), we see that the multislice G-SToRM approach
has some ability to align the multislice reconstructions. However, we find
that the image quality for some of the frames (e.g., slices 5-8) is poor. For
example, the diastole phases for the G-SToRM:MS reconstructions are blurred
and the cardiac boundaries are missing.
The reason for the poor reconstructions offered by multislice G-SToRM and the
improved performance of V-SToRM can be easily appreciated from the
distribution of the latent vectors shown in Fig. 5 (c) and Fig. 5 (d),
respectively. The use of the variational formulation in V-SToRM encourages the
latent variables of the slices to approximate a Gaussian distribution. We also
report the KL divergence from $\mathcal{N}(\mathbf{0},\mathbf{I})$ for each set
of latent vectors in the figure. We note that the V-SToRM scheme offers low KL
divergence values, indicating that the latent distributions of all the slices
are roughly similar to a unit Gaussian. By contrast, the G-SToRM scheme cannot guarantee that the
latent variables follow any distribution. We note from the top rows of (d)
that the distribution of the latent variables of the second slice is very
different from that of the other slices. When we feed the latent vectors of
the second slice into the generator, the generator is only able to generate
reasonable results for the second slice.
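The reported KL divergence to $\mathcal{N}(\mathbf{0},\mathbf{I})$ has the
standard closed form for diagonal Gaussians. A small sketch, assuming the
latent distribution of a slice is summarized by per-frame means and
log-variances (variable names are ours), is:

```python
import numpy as np

def kl_to_unit_gaussian(mu, logvar):
    # KL[ N(mu, diag(exp(logvar))) || N(0, I) ]
    #   = 0.5 * sum( exp(logvar) + mu^2 - 1 - logvar )
    # mu, logvar: arrays of shape (n_frames, n_latent); the per-frame
    # KL values are averaged to a single score for the slice.
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=-1)
    return float(np.mean(kl))
```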
### V-D Comparison of image quality with state-of-the-art methods
We compare the image quality of the multislice V-SToRM reconstructions with
the image quality of the reconstructions from the state-of-the-art methods,
including single-slice methods, in Fig. 6. Note that the motion patterns of
the slices recovered by the single-slice methods may be very different. For
comparison, we manually matched the images of the slices of the single-slice
and multislice methods by their cardiac and respiratory phases. The
quantitative comparisons of the slices are shown at the bottom of each sub-
figure. We also show more results using another dataset in the supplementary
material.
The single-slice A-SToRM and G-SToRM:SS comparisons roughly match the
observations in Fig. 4 and the results in [31]. The results show that the
multislice V-SToRM approach is able to offer reconstructions that are less
blurred and have fewer alias artifacts when compared to the single-slice
methods (A-SToRM and G-SToRM:SS). The improved performance is also evidenced
by the higher SER and SSIM values. We attribute the improved performance to
the exploitation of the redundancies across slices, enabled by V-SToRM. We
also note that the G-SToRM:MS method offers poor performance, evidenced by
image blurring and missing details on the myocardium. The poor performance of
G-SToRM:MS can be understood in terms of the differences in distribution of
the latent vectors, shown in Fig. 5.
## VI Discussion and Conclusion
In this work, we introduced an approach for the variational learning of a CNN
manifold model from undersampled measurements. This work generalized the
traditional VAE scheme to the undersampled setting. Unlike the traditional VAE
scheme that uses an encoder to learn the conditional distribution from the
images, we propose to learn the parameters of the distribution from the
measurements using back-propagation. The application of the framework to
multislice cardiac MRI data enabled the joint alignment and recovery from
highly undersampled measurements. Unlike current single-slice methods that
perform independent recovery of the slices, the proposed approach aligns the
acquisitions and jointly recovers the images from the undersampled k-t space
data. In addition to facilitating the exploitation of inter-slice
redundancies, this approach also eliminates the need for post-processing
schemes to match the phases of the slices.
Our results show that the joint alignment and recovery of the slices offer
reduced blurring and reduction of artifacts compared to the direct
generalization of G-SToRM to the multislice setting. In particular, the
variational framework encourages the latent variables of different slices to
have the same distribution. By contrast, the G-SToRM framework cannot
guarantee the similarity of the probability distributions; the improper
alignment translates to image blurring and other artifacts. Similarly, the use
of the CNN generator offers implicit spatial regularization, resulting in
improved recovery over A-SToRM.
A benefit of the proposed scheme is that it does not require fully sampled
data to train the CNN. The subject-specific CNN parameters and the latent
vectors are learned directly from the undersampled data. We note that the
acquisition of fully sampled data to train neural networks is not always
possible, especially in the high-resolution and dynamic MRI settings
considered in this work. In this context, direct learning from undersampled
data is desirable. However, a challenge of the proposed scheme when compared
to pretrained deep learning methods that offer super-fast inference is the
higher computational complexity. We will explore training strategies,
including transfer learning and meta-learning, to reduce the run time in the
future.
## VII Appendix
In this appendix, we show that the likelihood term in (7) can be lower-bounded
by (8).
According to (7) and the definition of conditional probability, we obtain
$\displaystyle p(\mathbf{b}_{i})=\frac{p(\mathbf{b}_{i},\mathbf{c}_{i})}{q_{i}(\mathbf{c}_{i})}\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i}|\mathbf{b}_{i})}=\underbrace{\frac{p(\mathbf{b}_{i},\mathbf{c}_{i})}{p(\mathbf{c}_{i})}}_{p(\mathbf{b}_{i}|\mathbf{c}_{i})}\frac{p(\mathbf{c}_{i})}{q_{i}(\mathbf{c}_{i})}\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i}|\mathbf{b}_{i})}.\qquad(13)$
Taking the logarithm on both sides of (13), we have
$\log p(\mathbf{b}_{i})=\log p(\mathbf{b}_{i}|\mathbf{c}_{i})-\log\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i})}+\log\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i}|\mathbf{b}_{i})}.\qquad(14)$
Next, we take the expectation with respect to $\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})$ of both sides of (14); realizing that $\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log p(\mathbf{b}_{i})=\log p(\mathbf{b}_{i})$, we obtain
$\log p(\mathbf{b}_{i})=\underbrace{\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log p(\mathbf{b}_{i}|\mathbf{c}_{i})}_{\text{data term}}-\underbrace{\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i})}}_{KL[q_{i}(\mathbf{c}_{i})||p(\mathbf{c}_{i})]}+\underbrace{\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log\frac{q_{i}(\mathbf{c}_{i})}{p(\mathbf{c}_{i}|\mathbf{b}_{i})}}_{KL[q_{i}(\mathbf{c}_{i})||p(\mathbf{c}_{i}|\mathbf{b}_{i})]\geq 0}.\qquad(15)$
The last term, being a KL divergence, is always nonnegative. The first term is
the expected log conditional density of the measurements $\mathbf{b}_{i}$ given
the images $\mathbf{x}_{i}=\mathcal{D}_{\theta}(\mathbf{c}_{i})$. With the
measurement model specified by (4), we obtain
$\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log p(\mathbf{b}_{i}|\mathbf{c}_{i})=-\frac{1}{2\sigma^{2}}\mathop{\mathbb{E}}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\|\mathcal{A}_{i}\,\mathcal{D}_{\theta}(\mathbf{c}_{i})-\mathbf{b}_{i}\|^{2}+c,$
where $c$ is a constant independent of the parameters of interest. Ignoring
the constant $c$ and plugging $\mathbb{E}_{\mathbf{c}_{i}\sim q_{i}(\mathbf{c}_{i})}\log p(\mathbf{b}_{i}|\mathbf{c}_{i})$ back into (15),
we obtain the desired lower bound (8).
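Putting the pieces together, the training objective is the negative of the
lower bound (8): the data term from the measurement model (4) plus the KL
penalty. A one-sample Monte-Carlo sketch, assuming a PyTorch-style
implementation with illustrative names (A_i for the forward operator
$\mathcal{A}_{i}$, D_theta for the generator $\mathcal{D}_{\theta}$), is:

```python
import torch

def neg_lower_bound(A_i, D_theta, b_i, mu_i, logvar_i, sigma2, lam):
    # Reparameterization: draw one sample c_i ~ q_i = N(mu_i, exp(logvar_i)).
    eps = torch.randn_like(mu_i)
    c_i = mu_i + torch.exp(0.5 * logvar_i) * eps
    # Data term: -E log p(b_i|c_i) ~ ||A_i(D_theta(c_i)) - b_i||^2 / (2 sigma^2).
    data = (A_i(D_theta(c_i)) - b_i).abs().pow(2).sum() / (2.0 * sigma2)
    # KL term in closed form for diagonal Gaussians.
    kl = 0.5 * torch.sum(torch.exp(logvar_i) + mu_i**2 - 1.0 - logvar_i)
    return data + lam * kl
```

Minimizing this quantity over the generator parameters and the per-frame
(mu_i, logvar_i) with ADAM corresponds to maximizing the lower bound on
$\log p(\mathbf{b}_{i})$.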
## Acknowledgments
The authors would like to thank Ms. Melanie Laverman from the University of
Iowa for making editorial corrections to refine this paper. Financial support
for this study was provided by grants NIH 1R01EB019961 and NIH
R01AG067078-01A1. This work was conducted on MRI instruments funded by
1S10OD025025-01.
## References
* [1] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” Science, vol. 313, no. 5786, pp. 504–507, 2006.
* [2] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” arXiv preprint arXiv:1312.6114, 2013.
* [3] I. Higgins et al., “Early visual concept learning with unsupervised deep learning,” arXiv preprint arXiv:1606.05579, 2016.
* [4] I. Higgins et al., “beta-VAE: Learning basic visual concepts with a constrained variational framework,” 2016.
* [5] J. Chung et al., “A recurrent latent variable model for sequential data,” Advances in Neural Information Processing Systems, vol. 28, pp. 2980–2988, 2015.
* [6] Y. Li, Z. Wang, R. Sun, and F. Lam, “Separation of metabolites and macromolecules for short-TE 1H-MRSI using learned component-specific representations,” IEEE Transactions on Medical Imaging, vol. 40, no. 4, pp. 1157–1167, 2021.
* [7] M. Mani, V. A. Magnotta, and M. Jacob, “qModeL: A plug-and-play model-based reconstruction for highly accelerated multi-shot diffusion MRI using learned priors,” Magnetic Resonance in Medicine, vol. 86, no. 2, pp. 835–851, 2021.
* [8] T. Kido et al., “Compressed sensing real-time cine cardiovascular magnetic resonance: accurate assessment of left ventricular function in a single-breath-hold,” Journal of Cardiovascular Magnetic Resonance, vol. 18, no. 1, pp. 1–11, 2016.
* [9] L. Axel and R. Otazo, “Accelerated MRI for the assessment of cardiac function,” The British Journal of Radiology, vol. 89, no. 1063, p. 20150655, 2016.
* [10] M. Lustig, D. Donoho, and J. M. Pauly, “Sparse MRI: The application of compressed sensing for rapid MR imaging,” Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
* [11] T. Küstner et al., “Isotropic 3D Cartesian single breath-hold CINE MRI with multi-bin patch-based low-rank reconstruction,” Magnetic Resonance in Medicine, vol. 84, no. 4, 2020.
* [12] C. Qin et al., “Convolutional Recurrent Neural Networks for Dynamic MR Image Reconstruction,” IEEE Transactions on Medical Imaging, vol. 38, no. 1, pp. 280–290, 2019.
* [13] A. Bustin, N. Fuin, R. M. Botnar, and C. Prieto, “From compressed-sensing to artificial intelligence-based cardiac MRI reconstruction,” Frontiers in Cardiovascular Medicine, vol. 7, no. 17, 2020.
* [14] T. Küstner et al., “CineNet: deep learning-based 3D cardiac cine MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions,” Scientific Reports, vol. 10, no. 1, pp. 1–13, 2020.
* [15] C. M. Sandino, P. Lai, S. S. Vasanawala, and J. Y. Cheng, “Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction,” Magnetic Resonance in Medicine, vol. 85, no. 1, pp. 152–167, 2021.
* [16] T. Wang et al., “ICA-UNet: ICA inspired statistical UNet for real-time 3D cardiac cine MRI segmentation,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2020, pp. 447–457.
* [17] A. G. Christodoulou et al., “Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging,” Nature Biomedical Engineering, vol. 2, no. 4, pp. 215–226, 2018.
* [18] L. Feng et al., “XD-GRASP: golden-angle radial MRI with reconstruction of extra motion-state dimensions using compressed sensing,” Magnetic Resonance in Medicine, vol. 75, no. 2, pp. 775–788, 2016.
* [19] L. Feng et al., “Golden-angle radial sparse parallel MRI: combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI,” Magnetic Resonance in Medicine, vol. 72, no. 3, pp. 707–717, 2014.
* [20] Z. Deng et al., “Four-dimensional MRI using three-dimensional radial sampling with respiratory self-gating to characterize temporal phase-resolved respiratory motion in the abdomen,” Magnetic Resonance in Medicine, vol. 75, no. 4, pp. 1574–1585, 2016.
* [21] S. Rosenzweig, N. Scholand, H. C. M. Holme, and M. Uecker, “Cardiac and respiratory self-gating in radial MRI using an adapted singular spectrum analysis (SSA-FARY),” IEEE Transactions on Medical Imaging, vol. 39, no. 10, pp. 3029–3041, 2020.
* [22] R. Zhou et al., “Free-breathing cine imaging with motion-corrected reconstruction at 3T using SPiral Acquisition with Respiratory correction and Cardiac Self-gating (SPARCS),” Magnetic Resonance in Medicine, 2019.
* [23] M. Usman, D. Atkinson, C. Kolbitsch, T. Schaeffter, and C. Prieto, “Manifold learning based ECG-free free-breathing cardiac CINE MRI,” Journal of Magnetic Resonance Imaging, vol. 41, no. 6, 2015.
* [24] X. Chen et al., “High-Resolution Self-Gated Dynamic Abdominal MRI Using Manifold Alignment,” IEEE Transactions on Medical Imaging, vol. 36, no. 4, pp. 960–971, 2017.
* [25] U. Nakarmi, K. Slavakis, and L. Ying, “MLS: Joint manifold-learning and sparsity-aware framework for highly accelerated dynamic magnetic resonance imaging,” in 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018). IEEE, 2018, pp. 1213–1216.
* [26] G. N. Shetty et al., “Bi-linear modeling of data manifolds for dynamic-MRI recovery,” IEEE Transactions on Medical Imaging, vol. 39, no. 3, pp. 688–702, 2019.
* [27] U. Nakarmi, Y. Wang, J. Lyu, D. Liang, and L. Ying, “A kernel-based low-rank (KLR) model for low-dimensional manifold recovery in highly accelerated dynamic MRI,” IEEE Transactions on Medical Imaging, vol. 36, no. 11, pp. 2297–2307, 2017.
* [28] A. H. Ahmed et al., “Free-breathing and ungated dynamic MRI using navigator-less spiral SToRM,” IEEE Transactions on Medical Imaging, 2020.
* [29] S. Poddar and M. Jacob, “Dynamic MRI using smoothness regularization on manifolds (SToRM),” IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 1106–1115, 2015.
* [30] S. Poddar et al., “Manifold recovery using kernel low-rank regularization: Application to dynamic imaging,” IEEE Transactions on Computational Imaging, vol. 5, no. 3, pp. 478–491, 2019.
* [31] Q. Zou, A. H. Ahmed, P. Nagpal, S. Kruger, and M. Jacob, “Dynamic imaging using deep generative SToRM (Gen-SToRM) model,” IEEE Transactions on Medical Imaging, 2021.
* [32] Q. Zou, A. H. Ahmed, P. Nagpal, S. Kruger, and M. Jacob, “Deep generative SToRM model for dynamic imaging,” in 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021, pp. 114–117.
* [33] J. Yoo et al., “Time-dependent deep image prior for dynamic MRI,” IEEE Transactions on Medical Imaging, 2021.
* [34] M. Gori, M. Maggini, and L. Sarti, “Exact and approximate graph matching using random walks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 7, pp. 1100–1111, 2005.
* [35] C. F. Baumgartner et al., “High-resolution dynamic mr imaging of the thorax for respiratory motion correction of pet using groupwise manifold alignment,” Medical Image Analysis, vol. 18, no. 7, pp. 939–952, 2014.
* [36] C. F. Baumgartner et al., “Self-aligning manifolds for matching disparate medical image datasets,” in International Conference on Information Processing in Medical Imaging. Springer, 2015, pp. 363–374.
* [37] X. Chen et al., “Dynamic volume reconstruction from multi-slice abdominal MRI using manifold alignment,” in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2016, pp. 493–501.
* [38] Y. LeCun, “The MNIST database of handwritten digits,” http://yann.lecun.com/exdb/mnist/, 1998.
* [39] M. Uecker et al., “ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA,” Magnetic Resonance in Medicine, vol. 71, no. 3, pp. 990–1001, 2014.
* [40] B. Xu, N. Wang, T. Chen, and M. Li, “Empirical evaluation of rectified activations in convolutional network,” arXiv preprint arXiv:1505.00853, 2015.
# secureTF: A Secure TensorFlow Framework
Do Le Quoc, Franz Gregor, Sergei Arnautov (TU Dresden, Scontain UG), Roland
Kunkel (TU Dresden), Pramod Bhatotia (TU Munich), and Christof Fetzer (TU
Dresden, Scontain UG)
###### Abstract.
Data-driven intelligent applications in modern online services have become
ubiquitous. These applications are usually hosted in the untrusted cloud
computing infrastructure. This poses significant security risks since these
applications rely on applying machine learning algorithms on large datasets
which may contain private and sensitive information.
To tackle this challenge, we designed secureTF, a distributed secure machine
learning framework based on TensorFlow for the untrusted cloud infrastructure.
secureTF is a generic platform to support unmodified TensorFlow applications,
while providing end-to-end security for the input data, ML model, and
application code. secureTF is built from the ground up based on the security
properties provided by Trusted Execution Environments (TEEs). However, it
extends the trust of a volatile memory region (or secure enclave) provided by
the single node TEE to secure a distributed infrastructure required for
supporting unmodified stateful machine learning applications running in the
cloud.
The paper reports on our experiences with the system design choices and the
system deployment in production use-cases. We conclude with the lessons
learned based on the limitations of our commercially available platform, and
discuss open research problems for future work.
secure machine learning, confidential computing, Intel Software Guard
Extensions (Intel SGX), TensorFlow
## 1\. Introduction
Machine learning has become an increasingly popular approach for solving
various practical problems in data-driven online services
(taigman2014deepface, ; bennett2007netflix, ; foster2014machine, ;
deepmind_health, ). While these learning techniques based on private data
arguably provide useful online services, they also pose serious security
threats for the users, especially when these modern online services use the
untrusted third-party cloud infrastructure for deploying these computations.
In the untrusted computing infrastructure, an attacker can compromise the
confidentiality and integrity of the computation. Therefore, the risk of
security violations has increased significantly in the third-party cloud
computing infrastructure (Santos2009, ). In fact, many
studies show that software bugs, configuration errors, and security
vulnerabilities pose a serious threat to computations in the cloud systems
(Gunawi_bugs-in-the-cloud, ; Baumann2014, ; Santos2012, ). Furthermore, since
the data is stored outside the control of the data owner, the third-party
cloud platform provides an additional attack vector. The clients currently
have limited support to verify whether the third-party operator, even with
good intentions, can handle the data with the stated security guarantees
(pesos, ; Vahldiek-Oberwagner2015, ).
To overcome the security risks in the cloud, our work focuses on securing
machine learning computations in the untrusted computing infrastructure. In
this context, the existing techniques to secure machine learning applications
are limiting in performance (graepel2012ml, ), trade accuracy for security
(du2003using, ) or support only data classification (bost2015machine, ).
Therefore, we want to build a secure machine learning framework that supports
existing applications while retaining accuracy, supporting both training and
classification, and without compromising the performance. Furthermore, our
work strives to provide end-to-end security properties for the input data, ML
models, and application code.
To achieve our design goals, trusted execution environments (TEEs), such as
Intel SGX (intel-sgx, ) or ARM TrustZone (arm-trustzone, ), provide an
appealing way to build a secure machine learning system. In fact, given the
importance of security threats in the cloud, there is a recent surge in
leveraging TEEs for shielded execution of applications in the untrusted
infrastructure (Baumann2014, ; arnautov2016scone, ; tsai2017graphene, ;
shinde2017panoply, ; Orenbach2017, ). Shielded execution aims to provide
strong confidentiality and integrity properties for applications using a
hardware-protected secure memory region or enclave.
While these shielded execution frameworks provide strong security guarantees
against a powerful adversary, the TEEs have been designed to secure single-
node in-memory (volatile) computations. Unfortunately, the trust of TEEs does
not naturally extend to support distributed stateful applications running in
the cloud. To build a secure machine learning framework that supports both
training and classification phases, while providing all three important design
properties: transparency, accuracy, and performance, we need to address
several architectural challenges presented by TEEs, specifically Intel SGX,
which acts as the root of trust.
More specifically, in addition to the conventional architectural challenges
posed by the SGX architecture in the single node setting, such as limited
enclave memory and I/O bottlenecks, we need to address the following three
important challenges in the context of distributed cloud computing:
Firstly, we need to extend the trust of SGX to support the distributed
TensorFlow framework, where the worker nodes are running in the remote
distributed enclaves while ensuring that they execute correct
code/computations and data. However, this is a challenging task since Intel
SGX is designed for securing single machine computations.
Secondly, we need to support practical features offered by the virtualized
platforms in the public cloud service to enable elastic and fault-tolerant
computing, i.e., scaling-up/down based on the workloads, and dealing with
failures/migrations. To support these important requirements, we need to
ensure that a new worker node running in a container preserves the integrity
and confidentiality of the data, ML models, and application code. However, the
traditional remote attestation using the Intel Attestation Service (IAS)
(costan2016intel, ) is impractical for supporting elastic and fault-tolerant
computing. Therefore, we need to redesign the mechanism to ensure an elastic
trust establishment through a configuration and attestation service.
Lastly, we need to support stateful machine learning applications that rely on
reading input data from, and writing computation results to, a file system or
storage as well as the network. Unfortunately, Intel SGX is designed to
protect only the data and computation residing in the volatile enclave memory.
It does not provide any security guarantees for stateful machine learning
computations across multiple machines.
To overcome these design challenges, we present secureTF, a secure machine
learning framework for the untrusted infrastructure. More specifically, we
make the following contributions.
* •
We have designed and implemented secureTF as an end-to-end system based on
TensorFlow that allows secure execution of existing unmodified TensorFlow
applications without compromising accuracy.
* •
We optimized the performance to overcome the architectural limitations of
Intel SGX in the context of machine learning workloads for distributed
untrusted cloud computing environments.
* •
We report an extensive evaluation of secureTF based on microbenchmarks and
production use-cases. Our evaluation shows that secureTF achieves reasonable
performance overheads, while providing strong security with low TCB.
secureTF is a commercially available platform, and it is currently used in
production by four major customers. In this paper, we report on our
experiences on building secureTF and deploying it in two production use-cases.
We conclude the paper with the lessons learned based on the limitations of our
system design, and a discussion on open research problems for the future work.
## 2\. Background and Threat Model
### 2.1. Machine Learning using TensorFlow
Machine learning aims to automatically extract useful patterns in large-scale
data by building probabilistic models (simeone2017brief, ). Machine learning
approaches are often categorized into supervised, unsupervised and
reinforcement learning. All forms have in common that they require datasets, a
defined objective, a model and a mechanism to update the model according to
new inputs.
To generalize the machine learning approach for the masses, Google proposed
TensorFlow (abadi2016tensorflow, ) as a machine learning framework designed
for heterogeneous distributed systems. TensorFlow requires the user first to
define a directed graph consisting of nodes representing operations on
incoming data. Nodes have zero or more inputs and outputs and perform
operations on different levels of abstraction such as matrix multiplication,
pooling or reading data from disk. Nodes can also have an internal state,
depending on their type. Thus the whole graph can be stateful as well.
After defining the graph, the user can perform calculations by starting a
session and running the previously defined operations. TensorFlow uses a flow
model for calculations.
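A minimal TensorFlow 1.x example of this define-then-run model (op names and
values are ours):

```python
import tensorflow as tf  # TensorFlow 1.x graph/session API

# Define the dataflow graph: nodes are operations, edges carry tensors.
x = tf.placeholder(tf.float32, shape=[None, 3], name="input")
w = tf.Variable(tf.ones([3, 1]), name="weights")   # stateful node
y = tf.matmul(x, w, name="output")                 # matrix-multiply node

# Run the previously defined operations inside a session.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))  # [[6.0]]
```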
Through the division of the calculation in the graph into nodes, TensorFlow
makes it easy to distribute the execution across different devices. Therefore,
TensorFlow can be deployed on mobile devices, single personal computers, as
well as computer clusters, by mapping the computation graph on available
hardware.
TensorFlow Lite (tensorflow-lite, ) is a feature-reduced version of
TensorFlow, designed for mobile and embedded devices. Optimization for mobile
devices is achieved by running a mobile-optimized interpreter that keeps the
load at a lower level and having the overall binary size smaller when compared
to full TensorFlow. The number of available operations for defining a graph is
reduced to achieve a smaller memory footprint of the resulting binary. This
comes at the cost of trainability of the graph, because TensorFlow Lite can
only perform forward passes in graphs. Instead, a model must first be trained
with the full version of TensorFlow and then exported and converted to a
special TensorFlow Lite model format. This format can then be used from the
TensorFlow Lite API for inference.
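As an illustration of this workflow, a frozen TensorFlow graph can be converted
with the TensorFlow Lite converter. The exact entry point varies across
TensorFlow 1.x releases (e.g., it lived under tf.contrib.lite in 1.9); the
sketch below assumes the later tf.lite.TFLiteConverter interface, and the file
and tensor names are placeholders.

```python
import tensorflow as tf

# Convert a frozen TensorFlow graph into the TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="frozen_graph.pb",
    input_arrays=["input"],        # tensor names fixed when freezing
    output_arrays=["output"])
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```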
### 2.2. Intel SGX and Shielded Execution
Intel Software Guard Extensions (SGX) is a set of x86 ISA extensions for a
Trusted Execution Environment (TEE) (costan2016intel, ). SGX provides an
abstraction of a secure _enclave_ —a hardware-protected memory region for
which the CPU guarantees the confidentiality and integrity of the data and
code residing in the enclave memory. The enclave memory is located in the
Enclave Page Cache (EPC)—a dedicated memory region protected by an on-chip
Memory Encryption Engine (MEE). The MEE encrypts and decrypts cache lines that
are written to and read from the EPC, respectively. Intel SGX supports a call-gate
mechanism to control entry and exit into the TEE.
Shielded execution based on Intel SGX aims to provide strong confidentiality
and integrity guarantees for applications deployed on an untrusted computing
infrastructure (Baumann2014, ; arnautov2016scone, ; tsai2017graphene, ;
shinde2017panoply, ; Orenbach2017, ). Our work builds on the SCONE
(arnautov2016scone, ) shielded execution framework. In the SCONE framework,
the applications are linked against a modified standard C library (SCONE
libc). In this model, the application’s address space is confined to the
enclave memory, and interaction with the untrusted memory is performed via the
system call interface. In particular, SCONE runtime provides an asynchronous
system call mechanism (flexsc, ) in which threads outside the enclave
asynchronously execute the system calls. Lastly, SCONE provides an integration
to Docker for seamlessly deploying container images.
### 2.3. Threat Model
We aim to protect against a very powerful adversary even in the presence of
complex virtualization stacks in the cloud computing infrastructure
(Baumann2014, ). In this setting, the adversary can control the entire system
software stack, including the OS or the hypervisor, and is able to launch
physical attacks, such as performing memory probes. In addition, we consider
an untrusted network in the cloud environment, i.e., the adversary can drop,
inject, replay, or alter packets, or manipulate the routing of packets. This
network model is consistent with the classic Dolev-Yao adversary model
(DolevYao, ). Even under this extreme threat model, our goal is to guarantee
the integrity, confidentiality, and freshness of data, code (e.g., Python
code), and models of machine learning computation. We also provide bindings
with Pesos (pesos, ), a secure storage system to protect against rollback
attacks (Parno2011, ) on the data stored beyond the secure enclave memory. Our
system is adaptable with SGXBounds (kuvaiskii2017sgxbounds, ); therefore,
secureTF is resilient to memory safety vulnerabilities (intel-mpx, ).
However, we do not protect against side-channel attacks based on cache timing
and speculative execution (foreshadow, ), and memory access patterns
(xu2015controlled, ; hahnel2017high, ). Mitigating side-channel attacks is an
active area of research (varys, ). We do not consider denial of service
attacks since these attacks are trivial for a third-party operator controlling
the underlying infrastructure (Baumann2014, ), e.g., operating system (OS),
and hypervisor. Lastly, we assume that the CPU hardware (including its
implementation of SGX) is trusted and that the adversary cannot physically open
the processor packaging to extract secrets or corrupt the CPU system state.
## 3\. Design
In this section, we present the design of secureTF.
### 3.1. System Overview
secureTF is designed for secure distributed machine learning computations
using the hardware-assisted trusted execution environment (TEE) technologies
such as Intel SGX. Figure 1 depicts the high-level architecture of secureTF.
Our system ensures not only the confidentiality, integrity, and freshness of
the executions (e.g., training and classification computations) but also of
the input data and machine learning models. At a high level, the system works
as follows: in the first step, when a user deploys a machine learning
computation on a remote host (e.g., a public cloud), the user needs to
establish trust in the secureTF instance running in the untrusted environment.
To do so, the user performs the remote attestation mechanism provided by the
TEE technology to ensure that the computation and the input data deployed in
the remote environment are correct and have not been modified by anyone, e.g.,
an attacker. After establishing trust in the secureTF instance running in the
remote environment, the user provides secrets, including keys for
encrypting/decrypting input and output data (e.g., input images and models)
and certificates for TLS connections, to the machine learning platform. After
finishing the computation, secureTF returns the results to the user via a TLS
connection.
Figure 1. System overview.
Design goals. Our primary design goal is to achieve strong confidentiality and
integrity properties. By confidentiality, we mean that all data including
models handled by the machine learning framework and the machine learning
framework code itself may not be disclosed to or obtainable by an unauthorized
party. By integrity, we mean that modifications of the data handled by
secureTF that were done by an unauthorized party must be detectable and should
not compromise the internal state and functioning. In addition, while
designing a practical system, we aim to achieve the following goals.
* •
Transparency: The secure framework must offer the same interface as the
unprotected framework, and should run unmodified existing applications based
on TensorFlow.
* •
Performance: We aim to impose as little overhead as possible when adding
security to the machine learning framework.
* •
Accuracy: We do not aim to trade off accuracy for security. Accuracy will be
the same as in the native TensorFlow framework without any security
protection.
### 3.2. Design Challenges
Building a practical secure distributed machine learning system using TEEs
such as Intel SGX is not straightforward; in fact, we need to handle several
challenges.
➊ Code modification. Intel SGX requires users to heavily modify the source
code of their application to run inside enclaves. Thus, transparently
supporting an unmodified machine learning framework to run inside enclaves is
not a trivial task.
➋ Limited EPC size. Currently, Intel SGX supports only a limited memory space
($\sim 94$MB) for applications running inside enclaves. However, most machine
learning computations, especially training, are extremely memory-intensive.
➌ Establishing trust in a distributed system. Trust has to be established
in the remote distributed enclaves to ensure that they execute correct code
and data. However, this is a challenging task since Intel SGX is originally
designed for a single machine.
➍ Elastic and fault tolerant computing support. Typically, public cloud
services support elastic computing, i.e., when the input workload increases,
the framework automatically spawns new service containers or instances to
handle the growth of requests. However, whenever a new container is spawned,
remote attestation must be performed to ensure the integrity and
confidentiality of the machine learning application in that container before
communicating with it. Unfortunately, the traditional attestation mechanism
using the Intel Attestation Service (IAS) (costan2016intel, ) incurs
significant overhead and is thus impractical in this setting.
➎ Stateful computing: security of network and file system. Machine learning
applications running inside SGX enclaves need to read input data from, and
write results to, file systems, storage systems, or the network. Unfortunately,
Intel SGX is designed to protect only the stateless in-memory data and
computation residing inside enclaves. It does not provide any security
guarantees for stateful machine learning computations across multiple
machines.
### 3.3. System Design
In this section, we present the detailed design of the distributed secureTF
framework, which handles the aforementioned challenges ($\S$3.2).
#### 3.3.1. System Components
To overcome challenge ➊ (see $\S$3.2), we built secureTF based on the SCONE
shielded execution framework (arnautov2016scone, ). SCONE enables legacy
applications to be executed in Intel SGX enclaves without source code changes.
While there are other options available, we choose SCONE, because of the
relatively small extra work required to run an application and comparatively
small overhead compared to other available options. We leverage SCONE’s Docker
container support to design secure distributed secureTF which allows users to
perform machine learning computations in a secure manner on an untrusted
environment such as a public cloud. Figure 2 shows the distributed
architecture of secureTF. At a high level, our system consists of four core
components: the Configuration and Remote Attestation Service (CAS); secure
machine learning containers, including TensorFlow parameter servers and
TensorFlow workers; the network shield and file system shield; and the adapted
TensorFlow library.
Figure 2. The distributed architecture of secureTF.
We design the CAS component to handle the challenges ➌ and ➍. This component
takes an important role in the distributed architecture of secureTF which
transparently and automatically performs the remote attestation for secure
machine learning containers before transferring secrets and configuration
needed to run them. The CAS component is deployed inside an SGX enclave of an
untrusted server in the cloud or on a trusted server under the control of the
user. When a secure machine learning container is launched, it receives the
necessary secrets and configuration from CAS after the remote attestation
process, to run machine learning computations using the adapted TensorFlow
library running inside an enclave.
We design the network shield and the file system shield components to address
the challenge ➎. All communications between secure machine learning containers
and the CAS component are protected using the network shield component.
Next, we provide the detailed design of each component.
#### 3.3.2. Configuration and Remote Attestation Service
CAS enhances the Intel attestation service (costan2016intel, ) to bootstrap
and establish trust across the secureTF containers and maintain a secure
configuration of the distributed secureTF framework. CAS itself is deployed in
an Intel SGX enclave. In the case that CAS is deployed in an enclave on an
untrusted server, the user of secureTF needs to establish trust into the CAS
instance, i.e., he/she needs to perform remote attestation of CAS before
transferring encryption keys and certificates to process the encrypted input
data and machine learning models. By using CAS, we can maintain the original
distributed architecture of the TensorFlow machine learning framework.
In addition, to guarantee the freshness of data during runtime, we design and
implement an auditing service in CAS to keep track of data modifications
during the machine learning computation. This mechanism allows secureTF to protect
against rollback attacks.
#### 3.3.3. Secure Machine Learning Containers
To build secure machine learning containers, we make use of TensorFlow and
TensorFlow Lite. TensorFlow Lite has the additional advantage of having a
smaller memory footprint which helps us to handle the design challenge ➋. We
use SCONE (arnautov2016scone, ) as an additional layer that allows access to
SGX features with fewer changes to application code. Figure 3 presents the
general architecture of a secure TensorFlow container using SCONE.
Figure 3. The architecture of a secure machine learning container in
secureTF.
Since we build secure machine learning containers based on SCONE in secureTF,
we use Docker (merkel2014docker, ) to conveniently deploy our system. No
changes to the Docker engine are required. The design of a secure machine
learning container in secureTF is composed of two components: (a) the secureTF
controller that provides the necessary runtime environment for securing the
TensorFlow library, and (b) secureTF TensorFlow library that allows deploying
unmodified existing TensorFlow applications. We next describe these two
components in detail.
secureTF Controller. The secureTF controller is based on the SCONE runtime.
Inside the SGX enclave, the controller provides a runtime environment for
TensorFlow, which includes the network shield, the file system shield, and the
user-level threading. Data that is handled through file descriptors is
transparently encrypted and authenticated through the shields. The shields
apply at each location where an application would usually trust the operating
system, such as when using sockets or writing files to disk. The shields
perform sanity checks on data passed from the operating system to the enclave
to prevent Iago attacks (Checkoway2013, ). More specifically, these checks
include bound checks and checking for manipulated pointers. This protection is
required to fulfill the goal of not requiring the application to deal with
untrusted systems (see challenge ➊ in $\S$3.2).
File system shield. The file system shield protects confidentiality and
integrity of data files. Whenever the application would write a file, the
shield either encrypts and authenticates, simply authenticates or passes the
file as is. The choice depends on user-defined path prefixes, which are part
of the configuration of an enclave. The shield splits files into chunks that
are then handled separately. Metadata for these chunks is kept inside the
enclave, meaning it is protected from manipulation. The secrets used for these
operations are different from the secrets used by the SGX implementation. They
are instead configuration parameters at the startup time of the enclave.
Network shield. TensorFlow applications do not inherently include end-to-end
encryption for network traffic. Users who want to add security must apply
other means to secure the traffic, such as a proxy for the Transport Layer
Security (TLS) protocol. According to the threat model, however, data may not
leave the enclave unprotected, because the system software is not trusted.
Network communication must therefore always be end-to-end protected. Our
network shield wraps sockets, and all data passed to a socket will be
processed by the network shield before being given to the system software. The
shield then transparently wraps the communication channel in a TLS connection on
behalf of the user application. The keys for TLS are saved in files and
protected by the file system shield.
User-level threading. Enclave transitions are costly and should therefore be
avoided when possible. Many system calls require a thread to exit userspace
and enter kernel space for processing. To avoid thread transitions out of
enclaves as much as possible, the controller implements user space threading.
When the OS assigns a thread to an enclave, it first executes an internal
scheduler to decide which application thread to execute. These application
threads are then mapped to SGX thread control structures. When an application
thread blocks, the controller is run again to assign the OS thread to a
different application thread instead of passing control back to the operating
system. In this way, the number of costly enclave transitions is reduced. When
no application thread is ready for execution, the OS either backs off and
waits inside the enclave, or outside, depending on the time required for an
enclave transition. A side effect of this user-level threading scheme is that
the controller does not require more OS threads than CPUs available to achieve
full CPU utilization, which is usually the case for applications running under
a conventional OS.
#### 3.3.4. secureTF TensorFlow Library
Machine learning applications consist of two major steps. In the first step,
the model is trained, and thereafter, the model is employed for classification
or inference tasks. Next, we explain the detailed design to run both training
process and classification process with Intel SGX.
Training process. For the training process, we use the full version of
TensorFlow. Training in TensorFlow is usually performed on acceleration
hardware such as GPUs and distributed across multiple machines. However, the
secureTF controller requires SGX which is only available for CPUs. We
therefore only support training on CPU. This limitation reduces the
performance of the training process, but it is required to achieve the
security goals.
The secureTF controller allows easy distribution of the application in the
form of Docker images. The training instances of secureTF can be distributed
on multiple nodes, each running separate SGX hardware. The network shield
applies transparent protection of the communication channel between instances.
Scaling on the same instance, that is, on the same CPU is possible, but does
decrease relative performance, because the limiting factor in our environment
is EPC size, which is fixed for each CPU. Only horizontal scaling with more
nodes can substantially increase performance.
Classification process. The main reason for dividing the classification and
training process in our design is that we can use different TensorFlow
variants for each step. Running with Intel SGX imposes less overhead, if
applications have a smaller memory footprint, because the limited EPC size is
the major bottleneck (see challenge ➋ in $\S$3.2). TensorFlow Lite has a smaller
memory footprint because it targets mobile devices. The drawback is however
that it cannot perform training by design. Therefore, we can only use it for
classification or inference. When protecting TensorFlow Lite with SCONE, the
framework uses the SCONE C library instead of the common system library. The
internals of TensorFlow Lite do otherwise not require change, as the interface
of the SCONE C library is compatible. The interface for using the
classification method of secureTF is the same as for TensorFlow Lite. Graph
definitions created for TensorFlow Lite are compatible.
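For reference, the classification path corresponds to a plain TensorFlow Lite
forward pass. Our service implements this in C++; the equivalent Python sketch
below (file and tensor names are placeholders) shows the interpreter workflow.

```python
import numpy as np
import tensorflow as tf

# Load a TensorFlow Lite model and run a single forward pass.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed a dummy input of the expected shape, then classify.
interpreter.set_tensor(inp["index"],
                       np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
scores = interpreter.get_tensor(out["index"])
print(scores.argmax())  # predicted class index
```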
## 4\. Implementation
We implement secureTF based on TensorFlow version $1.9.0$ and the SCONE
framework (arnautov2016scone, ) to run machine learning computations within
Intel SGX enclaves. We also considered other TEE technologies such as ARM
TrustZone (arm-trustzone, ) and AMD’s TEE, SME/SEV (amd_secure_technology, ).
However, they have several limitations: e.g., ARM TrustZone supports only a
single secure zone and does not provide a remote attestation mechanism, while
AMD’s TEE does not support integrity protection (mofrad2018comparison, ).
We rely on SCONE to implement features such as the file system shield and the
network shield. However, it is not straightforward to use these features
out-of-the-box to build secureTF. For example, SCONE does not support TLS
connections over UDP, which TensorFlow requires. SCONE’s network and storage
shields provide only confidentiality and integrity, whereas secureTF also
ensures the freshness of the data, code, and models of the machine learning
computation. In addition, we needed to adapt and extend SCONE’s memory
management and user-level multithreading to fit the custom scheduler and
memory management of the TensorFlow framework. Thus, we developed these
missing parts to implement the design of secureTF.
In this section, we describe several challenges we faced during implementing
secureTF and how we addressed them. We first present how to enable the
security features in secureTF during the training process ($\S$4.1) and
classifying process ($\S$4.2). Thereafter, we describe the implementation of
the CAS component in $\S$4.3.
### 4.1. Training Process
The typical user of TensorFlow uses the Python API for defining and training
graphs, because it is the richest API. Using Python with SCONE would impose
additional complexity because it requires dynamic library loading via dlopen
for imports. As the name implies, dlopen dynamically loads libraries during
the runtime of a program. However, SGX does not allow an enclave
to be entered by a thread, unless it has been finalized according to the
procedures of enclave creation. A library that is dynamically loaded would
therefore not be represented in the enclave’s attestation hash. Consequently,
dlopen is disabled by default for SCONE applications. To allow dlopen, we need
to change the SCONE environment accordingly (i.e., SCONE_ALLOW_DLOPEN=yes). To
ensure the security guarantee, we need to authenticate loaded libraries during
runtime using the file system shield (see $\S$3.3).
We support not only the Python API but also the C++ API of the native
TensorFlow framework. In the previous version of secureTF, we did not support
the Python API since, at that time, SCONE did not support the fork system
call, which is required by the Python package (tensorscone-tech, ). The C++
version covers the low-level API
of TensorFlow, meaning many convenience features such as estimators or
monitored training are not available. However, implementation using C++ API
provides much better performance compared to using the Python API. There is,
however, an approach that lets us use the convenience of the Python API for the definition
of the computation graph. TensorFlow allows exporting graphs and parameters,
such as learned biases that were created in the current session. Graph
definitions and checkpoints containing the parameters can later be imported by
another program. Importing and exporting are available in both the C++ and the
Python API, and they use interchangeable formats. The user can therefore
define a graph with the more high level Python API, including data inputs, and
later import and run it with C++. If the application does not by default
already export its model with a named interface, changes are required to the
original program, so that either the name of operations in the graph can be
known, or an interface is defined.
For the training process, we used the full version of TensorFlow, not to be
confused with TensorFlow Lite. A graph definition must be provided by the user
in the form of a graph frozen by a script packaged together with TensorFlow, when
using either the Python or C++ API. If the user has used the C++ API for the
definition, the full source definition of the graph can also be used.
A frozen graph can be created from a graph definition exported from the Python
script that defines the graph in the Protocol Buffers (protobuf, ) exchange
format. A checkpoint file containing all values of a graph that are not part
of the graph definition, such as weights, biases and counters can be exported
as well.
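A sketch of this export-and-freeze workflow is shown below, using a toy graph
with an illustrative named interface; the file names are ours, and the
freeze_graph argument list follows the TF 1.x tool but may vary across
releases.

```python
import tensorflow as tf
from tensorflow.python.tools import freeze_graph

# Define a toy graph with a named input/output interface.
x = tf.placeholder(tf.float32, [None, 3], name="input")
w = tf.Variable(tf.ones([3, 1]))
y = tf.identity(tf.matmul(x, w), name="output")

# Export the graph definition and a checkpoint with the parameters.
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.write_graph(sess.graph_def, "export", "graph.pbtxt")
    tf.train.Saver().save(sess, "export/model.ckpt")

# Fold the checkpointed parameters into the graph definition; the
# resulting frozen graph can later be imported from the C++ API.
freeze_graph.freeze_graph(
    input_graph="export/graph.pbtxt", input_saver="",
    input_binary=False, input_checkpoint="export/model.ckpt",
    output_node_names="output", restore_op_name="",
    filename_tensor_name="", output_graph="export/frozen_graph.pb",
    clear_devices=True, initializer_nodes="")
```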
Alternatively, the graph can also be exported as a blank slate without any
initialized internal values. The initialization can then be done inside the
secureTF environment, which is useful if a user wants to train the graph
protected by SGX for the entire training process. The initialization
operations are required when using the Python API and are therefore usually
part of the exported graph.
The user must also provide the inputs for training, such as a set of annotated
images. secureTF protects the input data and code by activating the file
system shield (see $\S$3.3).
### 4.2. Classification/Inference Process
We implemented our design for inference/classification computations in
secureTF by integrating the full TensorFlow with SCONE, as we did for the
training computations. In addition, we provide a lightweight version for
inference by adapting the TensorFlow Lite (tensorflow-lite, ) framework to run
with SCONE. We first ensured that TensorFlow and TensorFlow Lite compile with
the musl C library (alpine_faq, ) on Alpine Linux (alpine_linux, ), because
SCONE enhanced the musl library to support legacy applications running with
Intel SGX. The musl libc is designed to be compatible with the GNU C Library
(glibc) but more secure, with a smaller code base. The issue we faced is that
TensorFlow currently uses identical code folding (ICF) (ICF, ), a compiler or
linker feature that eliminates identical function bodies at compile or link
time in order to reduce the binary size. ICF is currently supported by gcc and
the gold linker, but not by the musl linker or the compiler wrapper for musl.
We therefore removed the ICF option for the binary targets in the TensorFlow
source tree. Compiling the TensorFlow framework with and without ICF yields
similar binary sizes; the performance cost of deactivating ICF is therefore
also minimal.
The next issue is that TensorFlow also uses backtrace by default. This library
is specific to glibc, so we could not use it directly with musl. To solve this
issue, we either recompiled dependencies against the musl libc or disabled
backtrace in the build configuration of TensorFlow.
After adapting the TensorFlow source code, compiling it with SCONE is quite
straightforward: we merely set the environment variables CC and CXX to the
SCONE C and C++ compilers (i.e., scone-gcc and scone-g++).
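In script form, the build step then reduces to something like the following sketch (the Bazel target name is an assumption; it differs across TensorFlow versions):

```python
import os
import subprocess

# Point the build at the SCONE cross compilers and invoke Bazel as usual.
env = dict(os.environ, CC="scone-gcc", CXX="scone-g++")
subprocess.run(
    ["bazel", "build", "//tensorflow/lite:libtensorflowlite.so"],
    env=env,
    check=True,
)
```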
Note that there is no standalone version of TensorFlow Lite available, meaning
a user of TensorFlow Lite needs to build their application inside the
TensorFlow source folder, with dependency targets set to TensorFlow Lite.
TensorFlow uses Bazel as a build tool (bazel, ); however, Bazel does not link
library targets unless a binary target is created, which means TensorFlow Lite
cannot easily be released from the source tree by compiling all libraries and
moving them to the system’s include directories. Thus, we added compile
targets that force linking as a workaround. The libraries could then be moved
to other projects along with the header files, and used as third-party
dependencies.
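A hypothetical sketch of such a forcing target, in Bazel’s Python-syntax BUILD (Starlark) format, is shown below; the target and dependency names are assumptions:

```python
# BUILD file: a shared-library target whose only purpose is to force Bazel
# to link the TensorFlow Lite libraries so they can be copied elsewhere.
cc_binary(
    name = "libtflite_bundle.so",
    linkshared = True,
    deps = ["//tensorflow/lite:framework"],
)
```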
With this in place, we developed a classifier service from scratch. The
service takes classification requests via the network and uses TensorFlow
Lite for inference/classification. For the evaluation, we used an example
available in the TensorFlow Lite source tree, which takes its inputs from the
hard drive and prints the classification results to the console.
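The service itself is written in C++, but the inference flow it implements corresponds to the following Python sketch using the TF Lite interpreter (model path and input handling are placeholders):

```python
import numpy as np
import tensorflow as tf

# Load the TF Lite model and run one classification request.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a real bitmap
interpreter.set_tensor(inp["index"], image)
interpreter.invoke()

scores = interpreter.get_tensor(out["index"])[0]
print("predicted class:", int(np.argmax(scores)))
```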
### 4.3. Configuration and Remote Attestation Service
For large-scale deployment of secureTF, we design the Configuration and Remote
Attestation Service (CAS) component to transparently perform remote
attestation and transfer keys to distributed secureTF containers (see
$\S$3.3). We implement the CAS component in the Rust (rust, ) programming
language since it provides strong type safety. To run CAS with Intel SGX, we
use the SCONE cross compiler to compile our implementation of CAS. We make use
of an encrypted embedded SQLite (sqlite, ) database to store encryption keys,
certificates, and other secrets for TensorFlow computations (see $\S$3.3).
This database itself also runs inside an enclave with the help of the SCONE
framework.
To allow a user of CAS to verify that the code of CAS was not modified and
indeed runs inside a valid SGX enclave, we not only run CAS with SCONE but
also implement CAS in such a way that it has zero configuration parameters
that could control its behavior. Thus, an attacker with root/privileged access
cannot break the trust the user places in CAS. A detailed description of CAS
regarding protection against rollback attacks and key management is provided
in (palaemon, ).
## 5\. Evaluation
In this section, we present the evaluation results of secureTF based on both
microbenchmarks and macrobenchmarks with real-world deployments.
Figure 4. Attestation and key-transfer latency: secureTF’s CAS compared with
the traditional approach using IAS.
### 5.1. Experimental Setup
Cluster setup. We used three servers with SGXv1 support running Ubuntu Linux
with a 4.4.0 Linux kernel, equipped with an Intel Xeon CPU E3-1280 v6 at
3.90GHz with 32 KB L1, 256 KB L2, and 8 MB L3 caches, and 64 GB main memory.
These machines are connected with each other via a 1 Gb/s switched network.
The CPUs run the latest microcode patch level.
In addition, we used a Fujitsu ESPRIMO P957/E90+ desktop machine with an Intel
Core i7-6700 CPU with 4 cores at 3.40GHz and 8 hyper-threads (2 per core).
Each core has a private 32KB L1 cache and a 256KB L2 cache, while all cores
share an 8MB L3 cache.
Datasets. We used two real-world datasets: (i) the Cifar-10 image dataset
(krizhevsky2009learning, ) and (ii) the MNIST handwritten digit dataset
(mnist-dataset, ).
Figure 5. Comparison between secureTF, native versions and the state-of-the-
art Graphene system in terms of latency with different model sizes, (a)
Densenet (42MB), (b) Inception_v3 (91MB), and (c) Inception_v4 (163MB).
#1: Cifar-10. This dataset contains a labeled subset of a much larger set of
small pictures of size 32x32 pixels collected from the Internet. It contains a
total of 60,000 pictures. Each picture belongs to one of ten classes, which
are evenly distributed, making a total of 6,000 images per class. All labels
were manually set by human labelers. Cifar-10 has the distinct advantage that
a reasonably good model can be trained in a relatively short time. The set is
freely available for research purposes and has been extensively used for
benchmarking machine learning techniques (xu2015empirical, ; he2016deep, ).
#2: MNIST. The MNIST handwritten digit dataset (mnist-dataset, ) consists of
$60{,}000$ images of size $28\times 28$ pixels for training, and $10{,}000$
examples for testing.
Methodology. Before the actual measurements, we warmed up the machine by
running it at full load with I/O-heavy operations that require swapping of EPC
pages. We performed measurements for classification and training both with and
without the file system shield. For full end-to-end protection, the file
system shield is required. We evaluate secureTF in two modes: (i) hardware
mode (HW), which runs with the TEE hardware activated, and (ii) simulation
mode (SIM), which runs in simulation without Intel SGX hardware. We make use
of the SIM mode to evaluate the performance overhead of Intel SGX and to
estimate the behavior of secureTF once the EPC becomes large enough in future
CPU hardware.
### 5.2. Micro-benchmark: Remote Attestation and Key Management
In secureTF, we need to securely transfer certificates and keys to
encrypt/decrypt the input data, the models, and the communication between
worker nodes (in the distributed training process). To achieve this security
goal, we make use of the CAS component (see $\S$3.3), which attests TensorFlow
processes running inside enclaves before transparently providing the keys and
certificates used to encrypt/decrypt input data, models, and TLS
communications. Note that the advantage of CAS over the traditional
attestation approach using IAS is that the CAS component is deployed on the
local cluster where we deploy secureTF.
Figure 4 breaks down the attestation and key-transfer latency of our CAS
component and of the IAS-based method. The quote verification process in our
CAS takes less than $1$ms, whereas in the IAS-based method it takes $\sim
280$ms. In total, our attestation using CAS ($\sim 17$ms) is roughly
$19\times$ faster than the traditional attestation using IAS ($\sim 325$ms).
This is because attestation using IAS requires providing and verifying the
measured information contained in the quotes (costan2016intel, ), which needs
several WAN round trips to the IAS service.
Figure 6. The effect of the file system shield on the classification latency
with different model sizes, (a) Densenet (42MB), (b) Inception_v3 (91MB), and
(c) Inception_v4 (163MB).
### 5.3. Macrobenchmark: Classifying Process
We evaluate the performance of secureTF in real-world deployments. First, we
present the evaluation results of secureTF in detecting objects in images and
classifying images using pre-trained deep learning models. Thereafter, in the
next section, we report the performance results of secureTF in training deep
learning models (see $\S$5.4).
In the first experiment, we analyze the latency of secureTF in Sim mode and HW
mode, and compare it with native versions using glibc and musl libc (i.e.,
running TensorFlow Lite on Ubuntu and Alpine Linux) and with a system
(graphene-tesorflow-lite, ) provided by Intel using Graphene
(tsai2017graphene, ). Graphene-SGX is an open-source SGX port of the original
Graphene library OS. It follows a similar principle to Haven (Baumann2014, )
by running a complete library OS inside SGX enclaves. Similar to SCONE
(arnautov2016scone, ), Graphene offers developers the option to run their
applications with Intel SGX without requiring code modifications. All
evaluated systems except the Graphene-based system run inside a Docker
container.
To conduct this experiment, we use the desktop machine (see $\S$5.1) with
Ubuntu 16.04, since the Graphene-based system does not work with Ubuntu 18.04.
The evaluated systems run with a single thread because the current version of
the Graphene-based system does not support multiple threads. To run the
classification process, we use the same command-line arguments for all
systems: label_image -m model.tflite -i input.bmp -t 1. For the latency
measurement, we calculate the average over $1,000$ runs. We use a single
bitmap image from the Cifar-10 dataset as input for all evaluated systems.
Models. For classifying images, we use several pre-trained deep learning
models of different sizes, including Inception-v3 (szegedy2016rethinking, )
with a size of $91$MB, Inception-v4 (szegedy2017inception, ) with a size of
$163$MB, and Densenet (huang2017densely, ) with a size of $42$MB. We manually
checked the correctness of a single classification by classifying the image
with the TensorFlow label_image application, involving no self-written code
and running directly on the host without containerization. We then compared
the results with those produced by secureTF and the other evaluated systems,
and confirmed that all systems indeed produce the same classification result.
#1: Effect of input model sizes. Figure 5 shows the latency comparison between
secureTF in Sim and HW mode, native TensorFlow Lite with glibc, native
TensorFlow Lite with musl libc, and the Graphene-based system. secureTF in Sim
mode incurs only $\sim 5\%$ overhead compared to the native versions across
the different model sizes. In addition, secureTF in Sim mode achieves a
latency $1.39\times$, $1.14\times$, and $1.12\times$ lower than secureTF in HW
mode for the model sizes of $42$MB, $91$MB, and $163$MB, respectively. This
means that the libc operations of secureTF introduce only a lightweight
overhead, because secureTF handles certain system calls inside the enclave and
does not need to exit to the kernel. In Sim mode, the execution is not
performed inside hardware SGX enclaves, but secureTF still handles some system
calls in userspace, which can positively affect performance. We performed an
analysis using the strace tool to confirm that some of the most costly system
calls of secureTF are indeed system calls that are handled internally by the
SCONE runtime.
Interestingly, native TensorFlow Lite running with glibc is as fast as or
slightly faster than the version with musl libc. The reason is that both C
libraries excel in different areas, but glibc has the edge over musl in most
areas according to microbenchmarks (clib_compare, ), because glibc is tailored
for performance, whereas musl is geared towards small size. Because of this
difference in goals, an application may be faster with musl or with glibc,
depending on the performance bottlenecks that limit the application.
Differences in the performance of the two C libraries are therefore to be
expected.
In comparison to the Graphene-based system, secureTF in HW mode becomes
increasingly faster as the size of the input model grows, especially when it
exceeds the limit of the Intel SGX EPC size ($\sim 94$MB). In particular, with
the model size of $42$MB, secureTF in HW mode is only $1.03\times$ faster than
the Graphene-based system; however, with the model size of $163$MB, secureTF
in HW mode is $\sim 1.4\times$ faster than the Graphene-based system. The
reason is that when the application allocates more memory than the EPC size
limit, the performance of reads and writes degrades severely because of costly
data encryption and paging operations. To reduce this overhead, we reduce the
size of the libraries loaded into SGX enclaves. Instead of adding the whole OS
libc into SGX enclaves as Graphene does, we make use of the SCONE libc
(arnautov2016scone, ), a much smaller modification of musl libc. In this
library, system calls are not executed directly but are instead forwarded to
the outside of an enclave via the asynchronous system call interface (see
$\S$3.3). This interface, together with user-level scheduling, allows secureTF
to mask system call latency by switching to other application threads. Thus,
we expect this speedup of secureTF over the Graphene-based system to grow
further as the input model size increases and when the application runs with
multiple threads.
#2: Effect of the file system shield. One real-world use case of secureTF is a
user who not only wants to obtain classification results but also wants to
ensure the confidentiality of the input images, since they may contain
sensitive information, e.g., handwritten document images. At the same time,
the user wants to protect their machine learning models, since training them
required considerable time and cost. To achieve this level of security, the
user activates the file system shield of secureTF, which allows them to
encrypt the input, including images and models, and to decrypt and process it
within an SGX enclave (see $\S$3.3).
Figure 7. The latency comparison in classifying Cifar-10 images with different
numbers of CPU cores and nodes.
In this experiment, we evaluate the effect of the file system shield on the
overall performance of secureTF. As in the previous experiments, we use the
same input Cifar-10 images. Figure 6 shows the latency of secureTF when
running with and without the file system shield for different models. The file
system shield incurs a very small overhead on the performance of the
classification process. secureTF in Sim mode with the file system shield is
$0.12\%$ slower than without it, whereas in HW mode the overhead is $0.9\%$.
This lightweight overhead comes from the fact that our file system shield uses
Intel-CPU-specific hardware instructions to perform cryptographic operations;
these instructions can reach a throughput of up to 4 GB/s, while the model is
only about 163 MB in size. This leads to a negligible overhead, and only at
application startup.
Figure 8. The training latency comparison between secureTF in different modes
and native TensorFlow.
#3: Scalability. To evaluate the scalability of secureTF, we measure its
latency in classifying $800$ Cifar-10 images with different numbers of CPU
cores (scale-up) and different numbers of physical nodes (scale-out). Figure 7
shows that secureTF in both Sim and HW mode scales quite well from $1$ CPU
core to $4$ CPU cores. However, secureTF in HW mode does not scale from $4$
CPU cores to $8$ CPU cores. The reason is that the EPC size is limited to
$\sim 94$MB: when secureTF runs with $8$ cores, it requires more memory than
the current version of Intel SGX provides and must therefore fall back on the
very expensive paging mechanism. For the scale-out evaluation, each node runs
with $4$ CPU cores. As expected, secureTF in both Sim and HW mode scales well
with the number of physical nodes. The latency of secureTF in HW mode with $1$
node is $1180$s, whereas with $3$ nodes the latency is $403$s.
#4: TensorFlow and TensorFlow Lite comparison. To show the advantage of using
TensorFlow Lite instead of TensorFlow for inference/classification in
secureTF, we compare the two. In this experiment, we use the same input model
(the Inception_v3 model) and input image to evaluate the performance of
secureTF with TensorFlow and with TensorFlow Lite in HW mode. secureTF with
TensorFlow Lite achieves a $\sim 71\times$ lower latency ($0.697$s) compared
to secureTF with TensorFlow ($49.782$s). The reason is that the binary size of
secureTF with TensorFlow Lite is only $1.9$MB, whereas the binary size of
secureTF with TensorFlow is $87.4$MB, while the Intel SGX enclave EPC size is
limited to $\sim 94$MB.
### 5.4. Macrobenchmark: Distributed Training
In this section, we evaluate the performance of secureTF in training
distributed deep learning models at scale. In these experiments, we use the
MNIST handwritten digit dataset (see $\S$5.1) and three physical servers with
the configuration described in $\S$5.1. We keep a batch size of $100$ and a
learning rate of $0.0005$, and measure the end-to-end latency of secureTF in
different modes, including HW mode and Sim mode, with and without the network
shield, as well as of a native version of TensorFlow.
Figure 8 shows that secureTF in its different modes scales almost linearly
with the number of workers. secureTF with full features running in HW mode
achieves a speedup of $1.96\times$ and $2.57\times$ when it runs with 2 and 3
workers, respectively. Unsurprisingly, this mode of secureTF is roughly
$14\times$ slower than the native version, due to the fact that the training
process requires memory-intensive computations while the enclave EPC size is
limited to $\sim 94$MB. However, we believe that Intel will release a new
generation of its hardware supporting much larger EPC sizes; we therefore also
evaluated secureTF in SIM mode to see its overhead in the case that the EPC
size is sufficient for the training process. The slowdown factor in comparison
to the native version is reduced to $6\times$ and $2.3\times$ for secureTF in
SIM mode with and without the network shield, respectively. This indicates
that the main overhead of the current implementation is the network shield. In
addition, note that the remaining slowdown in SIM mode is caused by a
scheduling issue in SCONE; we reported this issue, and it is fixed in the
current version of SCONE.
From these experiments, we learn that with the current Intel SGX hardware
capacity, performing inference/classification securely inside Intel SGX is
practical, whereas securely training deep learning models is not yet feasible
(see $\S$7.1).
## 6\. Real-World Deployments
secureTF is a commercial platform, and it is actively used by four customers
(names omitted) in production. We next present the secureTF deployment for two
use cases.
Figure 9. Deployment #1: secure document digitization.
### 6.1. Secure Handwritten Document Analysis
The first use case of secureTF is secure handwritten document analytics (see
Figure 9). A company (name omitted) is using a public cloud to automatically
translate handwritten documents into digital format using machine learning.
Customers of this company not only want to obtain the inference results, but
also want to ensure the confidentiality of the input, since the handwritten
document images contain sensitive information. At the same time, the company
wants to protect its Python code for the inference engine as well as its
machine learning models. To achieve this level of security, the company has
deployed our framework, secureTF. The company uses the file system shield to
encrypt the Python code and the models used for inference. Meanwhile, the
customers make use of the attestation mechanism of secureTF to attest the
enclave running the service, and then send their handwritten document images
via TLS connections to this service to convert them into digital text
documents.
Figure 10. Deployment #2: secure federated learning.
### 6.2. Secure Federated Learning: Medical Use-case
The second use case of secureTF is secure federated learning (FL) (federated-
learning, ) (see Figure 10). FL allows multiple parties to jointly train a
model that benefits from the parties’ diverse datasets. In our second use
case, several hospitals are actively collaborating to train a model for
diagnosing brain tumors. At the same time, however, they want to protect the
privacy of their patients’ data. Thus, each hospital performs the training
locally using its local data and thereafter shares the model parameters with
the global training computation, without sharing its actual data.
Unfortunately, even a local model may reveal private information
(deeplearning-DP, ); such local models have been demonstrated to be vulnerable
to several privacy attacks (federated-learning-privacy, ). In addition, there
is empirical evidence of the risks posed by machine learning models
themselves: e.g., the work by Fredrikson et al. (model-inversion-attacks, )
demonstrates that images extracted from a face recognition system look similar
to images from the underlying training dataset. To handle this issue, the
hospitals make use of secureTF to run the global training inside Intel SGX
enclaves. They only share their local models after performing attestation of
the enclaves. The communication with the global training enclaves is performed
via TLS connections.
## 7\. Discussion and Lessons Learned
In this section, we discuss the lessons learned from the limitations of our
commercially available platform, and present open research problems for future
work.
### 7.1. Training vs. Classification
The limited EPC size has different implications for training and
classification. As shown in $\S$5, training deep learning models on larger
datasets inside enclaves is limited performance-wise by EPC paging. However,
the EPC size is quite practical for classification/inference, since the size
of a deployed ML model is usually much smaller than the original training
data. As discussed in $\S$6, we are effectively using secureTF for an image
classification use case ($\S$6.1) and a federated machine learning use case
(see $\S$6.2).
To improve the performance of the training phase within the limited enclave
memory regions, we are exploring two avenues: (1) data normalization: we can
further improve the training performance by normalizing input data, e.g., in
image recognition services all input images can be normalized to a size of
$32\times 32$; and (2) Ice Lake CPUs: Intel announced its next-generation
processors, called Ice Lake, which support a larger EPC size (icelake, ).
### 7.2. ML Model Optimizations
To further improve performance, we are exploring optimizations of the ML
models leveraging pruning and quantization tools, such as the Intel OpenVINO
Toolkit (openvino, ). TensorFlow models are typically abstracted as directed
computation graphs (see $\S$2), where nodes are operations and edges represent
the communication between operations. By optimizing the model graphs, e.g.,
pruning unnecessary edges and nodes, we can significantly improve the
performance of classification/inference computations. The optimization also
provides an opportunity to deploy the ML inference service on SGX-capable edge
devices (nuc, ) in edge computing. In fact, we have been working with an
IoT-based company to use secureTF for securely deploying the latest trained
models at the edge while achieving high performance.
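As a sketch of this direction, TF 1.x ships a graph-transform tool that can prune a frozen graph; the node names below are placeholders, and OpenVINO offers analogous but more extensive optimizations:

```python
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load a frozen graph and prune everything not needed for the output node.
graph_def = tf.GraphDef()
with open("frozen.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

optimized = TransformGraph(
    graph_def,
    inputs=["input"],
    outputs=["output"],
    transforms=[
        "strip_unused_nodes",
        "remove_nodes(op=Identity)",
        "fold_constants(ignore_errors=true)",
        "fold_batch_norms",
    ],
)

with open("optimized.pb", "wb") as f:
    f.write(optimized.SerializeToString())
```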
### 7.3. Security Analysis and Properties
secureTF protects machine learning computations against attackers with
privileged access by securely executing these computations inside Intel SGX
enclaves. All data (input training/inference data, models, and Python code)
and all communication outside enclaves are always encrypted. The encrypted
data is only decrypted inside enclaves. The keys and secrets to decrypt the
data are protected inside the CAS component, which also runs inside an
enclave. The CAS component only provides these secrets, via TLS connections,
to the machine learning enclaves after attesting them. A detailed security
analysis of CAS is provided in (palaemon, ).
Intel SGX is typically vulnerable to side-channel attacks (sidechannel-
attack1, ; sidechannel-attack2, ; gotzfried2017cache, ;
vanbulck2018foreshadow, ; weisse2018foreshadowNG, ; spectre1, ; spectre2, ).
Although this type of attack is out of scope for our work, it is worth
mentioning that the version of SCONE integrated into secureTF protects not
only against L1-based side-channel attacks (varys, ) but also against Iago
attacks (checkoway2013iago, ). We can also make use of LLVM extensions, e.g.,
speculative load hardening (SpecLH2019, ), to prevent exploitable speculation,
which helps us defend against variants of Spectre attacks (spectre1, ;
spectre2, ). In addition, the next generation of Intel CPUs (icelake, ) seems
to provide hardware-based solutions against several types of side-channel
attacks.
secureTF supports only TLS-based communication to protect against
eavesdropping on any communication between the CAS and the computation nodes
in a distributed setting. In secureTF, the TLS certificates are generated
inside the SGX enclave running CAS, and thus they cannot be seen by any human.
This mechanism allows secureTF to defend against man-in-the-middle attacks.
However, TLS and its predecessor are also vulnerable to side-channel attacks,
e.g., attacks on RSA (drown-attack, ; robot-attack, ). Thus, in secureTF, we
recommend completely disabling RSA key exchange and replacing it with
forward-secret key exchanges, e.g., elliptic-curve Diffie–Hellman (ECDHE)
(tls-book, ).
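In OpenSSL-style cipher configuration this recommendation amounts to something like the following sketch, shown here with Python’s ssl module for illustration (certificate paths are placeholders):

```python
import ssl

# Accept only TLS with forward-secret ECDHE key exchange; RSA key exchange
# is excluded because only ECDHE cipher suites are enabled.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
ctx.set_ciphers("ECDHE+AESGCM:!aNULL")
ctx.load_cert_chain(certfile="service.crt", keyfile="service.key")
```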
### 7.4. GPUs Support
Graphics Processing Units (GPUs) have become popular and essential
accelerators for machine learning (bekkerman2011scaling, ). Unfortunately,
trusted computing on GPUs is not commercially available, except for research
prototypes such as Graviton (graviton, ). Therefore, secureTF provides its
security properties by relying on Intel SGX, which is available only on CPUs.
Technically, secureTF could also offer GPU support; however, this requires
weakening the threat model, i.e., we would need to assume that the GPU
computations and the communication between GPU and CPU are secure. This
relaxation of the threat model may be acceptable in practice for several use
cases, e.g., when users just want to protect their Python code and models for
machine learning computations; secureTF can still ensure that the code and
models are encrypted. However, this extension may not be practical for many
other use cases (graviton, ). Therefore, we are currently investigating GPU
enclave research proposals, e.g., Graviton (graviton, ) and HIX (hix, ), which
propose hardware extensions to provide a secure environment on GPUs.
## 8\. Related Work
In this section, we summarize the related work about secure machine learning
and shielded execution using Intel SGX.
Early works on privacy-preserving data mining techniques relied on randomizing
user data (du2003using, ; bost2015machine, ; PrivApprox2017, ). These
approaches trade accuracy for privacy: they include a parameter that allows a
trade-off between the two. The proposed algorithms aim to provide privacy of
computation, but they neither protect the results themselves in the cloud nor
secure the classification phase. While this can protect the users’ privacy, it
does not cover training as secureTF does. Further, we aim to provide the same
accuracy as native execution.
Graepel et al. (graepel2012ml, ) developed machine learning algorithms that
perform both training and classification on encrypted data. Their solution is
based on the properties of homomorphic encryption. However, homomorphic
encryption schemes support only restricted computing operations and incur high
performance overheads. A series of recent works (secureml, ; gazelle, ;
cryptflow, ; delphi, ) aims to provide secure machine learning platforms based
on secure multiparty computation (MPC). In particular, Delphi (delphi, ) and
CrypTFlow (cryptflow, ) demonstrated that they outperform previous works.
However, these systems were also designed only for securing inference.
secureTF is instead based on a hardware-based approach (i.e., Intel SGX) and
supports both training and inference computations.
Shielded execution provides strong security guarantees for legacy applications
running on untrusted platforms (Baumann2014, ). Prominent examples include
Haven (Baumann2014, ), SCONE (arnautov2016scone, ), and Graphene-SGX
(tsai2017graphene, ). Our work builds on the SCONE framework. Intel SGX has
become available in clouds (IBMCloudSGX, ; AzureSGX, ), unleashing a plethora
of services to be ported, including Web search (sgx-websearch, ), actor
framework (eactors, ), storage (pesos, ; speicher, ), leases (t-lease, ),
monitoring and profilers (tee-perf, ; teemon, ), software update (TSR, ), FaaS
(clemmys, ), networking (shieldbox, ; slick, ), and data analytics systems
(sgx-pyspark, ; opaque, ; Schuster2015, ).
Recently, several secure machine learning systems have been proposed that rely
on Intel SGX (privado, ; slalom, ; chiron, ; ohrimenko, ). Privado (privado, )
proposes a mechanism to obtain oblivious neural networks, which it then
executes inside SGX enclaves for secure inference. Slalom (slalom, ) makes use
of a combination of Intel SGX and untrusted GPUs to secure Deep Neural Network
(DNN) computations. The idea of Slalom is to split the DNN computations:
linear operations (e.g., matrix multiplications) run on GPUs, whereas
non-linear operations (e.g., ReLU operations) are performed inside Intel SGX
enclaves. This approach achieves much better performance since the intensive
computation is performed on GPUs. Unfortunately, Slalom still has several
limitations. First, like Privado, it focuses only on secure inference; it
refers to secure training computations as a research challenge. Second, it
requires TensorFlow users to heavily modify or redevelop their existing code.
Third, it does not support distributed settings, i.e., it does not support
secure connections between SGX enclaves. Finally, Slalom is not
production-ready; in fact, its authors indicate that it can be used only for
testing. Chiron (chiron, ) is the work most relevant to secureTF; it leverages
Intel SGX for privacy-preserving machine learning services. Unfortunately,
Chiron is a single-threaded system within an enclave. In addition, Chiron
requires adding an interpreter and a model compiler into enclaves, which
introduces significant runtime overhead given the limited EPC size. The work
by Ohrimenko et al. (ohrimenko, ) also uses Intel SGX to secure machine
learning computations; however, it supports only a limited number of
operators. In contrast, we propose secureTF, a practical distributed machine
learning framework securing both training and inference computations.
## 9\. Conclusion
In this paper, we report on our experience with building and deploying
secureTF, a secure TensorFlow-based machine learning framework leveraging
hardware-assisted TEEs, specifically Intel SGX. secureTF extends the security
properties of a secure stateless enclave on a single node to secure unmodified
distributed stateful machine learning applications. Thereby, it provides a
generic platform for end-to-end security of the input data, the ML model, and
the application code. Moreover, it supports both the training and the
classification phase while providing all three important design properties
for a secure machine learning workflow: transparency, accuracy, and
performance. secureTF is a commercially available platform and is currently
being used in production by four major customers. While there are several open
challenges and limitations of our system, our experience shows that secureTF
is a promising approach: it incurs reasonable performance overheads,
especially in the classification/inference process, while providing strong
security properties against a powerful adversary. Lastly, we also discussed
several open challenges and ongoing extensions to the system.
Acknowledgements. We thank our shepherd Professor Sara Bouchenak and the
anonymous reviewers for their insightful comments and suggestions. This work
has received funding from the Cloud-KRITIS Project and the European Union’s
Horizon 2020 research and innovation programme under the LEGaTO Project
(legato-project.eu), grant agreement No 780681.
## References
* [1] Alpine Linux. https://alpinelinux.org/. Accessed: May, 2020.
* [2] Alpine Linux FAQ. https://wiki.musl-libc.org/faq.html. Accessed: May, 2020.
* [3] AMD Secure Technology. https://www.amd.com/en/technologies/security. Accessed: May, 2020.
* [4] Comparison of C/POSIX standard library implementations for Linux. http://www.etalabs.net/compare_libcs.html. Accessed: May, 2020.
* [5] Deepmind health and research collaborations. https://deepmind.com/applied/deepmind-health/working-partners/health-research-tomorrow/. Accessed: May, 2020.
* [6] Graphene Tensorflow Lite benchmark. https://github.com/oscarlab/graphene-tests/tree/master/tensorflow/. Accessed: May, 2020.
* [7] Tensorflow lite. https://www.tensorflow.org/lite. Accessed: Jan, 2020.
* [8] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, et al. TensorFlow: A System for Large-Scale Machine Learning. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
* [9] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2016.
* [10] G. Allen and M. Owens. The Definitive Guide to SQLite. Apress, 2010.
* [11] ARM. Building a secure system using TrustZone technology. http://infocenter.arm.com/help/topic/com.arm.doc.prd29-genc-009492c/PRD29-GENC-009492C_trustzone_security_whitepaper.pdf, 2009. Accessed: May, 2020.
* [12] S. Arnautov, B. Trach, F. Gregor, T. Knauth, A. Martin, C. Priebe, J. Lind, D. Muthukumaran, et al. SCONE: Secure Linux Containers with Intel SGX. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.
* [13] N. Aviram, S. Schinzel, J. Somorovsky, N. Heninger, M. Dankel, J. Steube, L. Valenta, D. Adrian, J. A. Halderman, V. Dukhovni, E. Käsper, S. Cohney, S. Engels, C. Paar, and Y. Shavitt. DROWN: Breaking TLS using SSLv2. In 25th USENIX Security Symposium (USENIX Security), 2016.
* [14] M. Bailleu, D. Dragoti, P. Bhatotia, and C. Fetzer. Tee-perf: A profiler for trusted execution environments. In 2019 49th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2019.
* [15] M. Bailleu, J. Thalheim, P. Bhatotia, C. Fetzer, M. Honda, and K. Vaswani. SPEICHER: Securing lsm-based key-value stores using shielded execution. In 17th USENIX Conference on File and Storage Technologies (FAST ), 2019.
* [16] A. Baumann, M. Peinado, and G. Hunt. Shielding Applications from an Untrusted Cloud with Haven. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2014.
* [17] Bazel. The Bazel project. https://bazel.build/. Accessed: May, 2020.
* [18] R. Bekkerman, M. Bilenko, and J. Langford. Scaling up machine learning: Parallel and distributed approaches. Cambridge University Press, 2011.
* [19] J. Bennett, S. Lanning, et al. The netflix prize. In Proceedings of KDD cup and workshop, 2007.
* [20] H. Böck, J. Somorovsky, and C. Young. Return of bleichenbacher’s oracle threat (ROBOT). In 27th USENIX Security Symposium (USENIX Security), 2018.
* [21] R. Bost, R. A. Popa, S. Tu, and S. Goldwasser. Machine learning classification over encrypted data. In Proceedings of the Annual Network and Distributed System Security Symposium (NDSS), 2015.
* [22] F. Brasser, U. Müller, A. Dmitrienko, K. Kostiainen, S. Capkun, and A.-R. Sadeghi. Software grand exposure: SGX cache attacks are practical. In 11th USENIX Workshop on Offensive Technologies (WOOT), 2017.
* [23] C. Carruth. Speculative load hardening. https://llvm.org/docs/SpeculativeLoadHardening.html, 2019.
* [24] S. Checkoway and H. Shacham. Iago Attacks: Why the System Call API is a Bad Untrusted RPC Interface. In Proceedings of the 18th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2013.
* [26] G. Chen, S. Chen, Y. Xiao, Y. Zhang, Z. Lin, and T. H. Lai. Sgxpectre attacks: Stealing intel secrets from sgx enclaves via speculative execution. arXiv e-prints, 2018.
* [27] I. Corp. 10th Generation Intel Processors Core Families. https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/10th-gen-core-families-datasheet-vol-1-datasheet.pdf. Accessed: May, 2020.
* [28] I. Corp. Intel Software Guard Extensions (Intel SGX). https://software.intel.com/en-us/sgx. Accessed: May, 2020.
* [29] I. Corporation. Intel nuc kits. Accessed: 28 May 2020.
* [30] V. Costan and S. Devadas. Intel SGX Explained. IACR Cryptology ePrint Archive, 2016.
* [31] D. Dolev and A. C. Yao. On the security of public key protocols. In Proceedings of the 22nd Annual Symposium on Foundations of Computer Science (SFCS), pages 350–357, 1981.
* [32] W. Du and Z. Zhan. Using randomized response techniques for privacy-preserving data mining. In Proceedings of the ninth international conference on Knowledge discovery and data mining (SIGKDD), 2003.
* [33] K. R. Foster, R. Koprowski, and J. D. Skufca. Machine learning, medical diagnosis, and biomedical engineering research-commentary. Biomedical engineering online, 2014.
* [34] M. Fredrikson, S. Jha, and T. Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (CCS), 2015.
* [35] Google. Google protocol buffers. https://developers.google.com/protocol-buffers/. Accessed: May, 2020.
* [36] J. C. Gordon. Microsoft azure confidential computing with intel sgx. Accessed: 28 May 2020.
* [37] J. Götzfried, M. Eckert, S. Schinzel, and T. Müller. Cache attacks on intel sgx. In Proceedings of the 10th European Workshop on Systems Security, 2017.
* [38] T. Graepel, K. Lauter, and M. Naehrig. Ml confidential: Machine learning on encrypted data. In Proceedings of the International Conference on Information Security and Cryptology, 2012.
* [39] F. Gregor, W. Ozga, S. Vaucher, R. Pires, D. L. Quoc, S. Arnautov, A. Martin, V. Schiavoni, P. Felber, and C. Fetzer. Trust Management as a Service: Enabling Trusted Execution in the Face of Byzantine Stakeholders. In Proceedings of the IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), 2020.
* [40] K. Grover, S. Tople, S. Shinde, R. Bhagwan, and R. Ramjee. Privado: Practical and secure DNN inference with enclaves. 2018.
* [41] H. S. Gunawi, M. Hao, T. Leesatapornwongsa, T. Patana-anake, T. Do, J. Adityatama, K. J. Eliazar, A. Laksono, J. F. Lukman, V. Martin, and A. D. Satria. What Bugs Live in the Cloud? A Study of 3000+ Issues in Cloud Systems. In Proceedings of the ACM Symposium on Cloud Computing (SoCC), 2014.
* [42] M. Hähnel, W. Cui, and M. Peinado. High-resolution side channels for untrusted operating systems. In Proceedings of the USENIX Annual Technical Conference (ATC), 2017.
* [43] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.
* [44] B. Hitaj, G. Ateniese, and F. Perez-Cruz. Deep models under the gan: Information leakage from collaborative deep learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2017.
* [45] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2017.
* [46] T. Hunt, C. Song, R. Shokri, V. Shmatikov, and E. Witchel. Chiron: Privacy-preserving machine learning as a service. CoRR, 2018.
* [47] I. Jang, A. Tang, T. Kim, S. Sethumadhavan, and J. Huh. Heterogeneous isolated execution for commodity gpus. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2019.
* [48] C. Juvekar, V. Vaikuntanathan, and A. Chandrakasan. Gazelle: A low latency framework for secure neural network inference. In Proceedings of the 27th USENIX Conference on Security Symposium (USENIX Security), 2018.
* [49] P. Karnati. Data-in-use protection on ibm cloud using intel sgx. Accessed: 28 May 2020.
* [50] P. Kocher, J. Horn, A. Fogh, , D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher, M. Schwarz, and Y. Yarom. Spectre Attacks: Exploiting Speculative Execution. In 40th IEEE Symposium on Security and Privacy (S&P), 2019.
* [51] R. Krahn, D. Dragoti, F. Gregor, D. Le Quoc, V. Schiavoni, P. Felber, C. Souza, A. Brito, and C. Fetzer. TEEMon: A continuous performance monitoring framework for TEEs. In Proceedings of the 21th International Middleware Conference (Middleware), 2020.
* [52] R. Krahn, B. Trach, A. Vahldiek-Oberwagner, T. Knauth, P. Bhatotia, and C. Fetzer. Pesos: Policy enhanced secure object store. In Proceedings of the Thirteenth EuroSys Conference (EuroSys), 2018\.
* [53] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
* [54] N. Kumar, M. Rathee, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma. CrypTFlow: Secure TensorFlow Inference. In IEEE Symposium on Security and Privacy (S&P), 2020.
* [55] R. Kunkel, D. L. Quoc, F. Gregor, S. Arnautov, P. Bhatotia, and C. Fetzer. TensorSCONE: A Secure TensorFlow Framework using Intel SGX. arXiv preprint arXiv:1902.04413, 2019.
* [56] D. Kuvaiskii, O. Oleksenko, S. Arnautov, B. Trach, P. Bhatotia, P. Felber, and C. Fetzer. SGXBOUNDS: Memory Safety for Shielded Execution. In Proceedings of the 12th ACM European Conference on Computer Systems (EuroSys), 2017.
* [57] D. Le Quoc, F. Gregor, J. Singh, and C. Fetzer. Sgx-pyspark: Secure distributed data analytics. In Proceedings of the World Wide Web Conference (WWW), 2019.
* [58] Y. LeCun and C. Cortes. MNIST handwritten digit database. 2010.
* [59] N. D. Matsakis and F. S. Klock, II. The rust language. In Proceedings of the 2014 ACM SIGAda Annual Conference on High Integrity Language Technology, HILT ’14, 2014.
* [60] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017.
* [61] D. Merkel. Docker: lightweight linux containers for consistent development and deployment. Linux Journal, 2014.
* [62] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R. A. Popa. Delphi: A cryptographic inference service for neural networks. In 29th USENIX Security Symposium ( USENIXSecurity), 2020.
* [63] S. Mofrad, F. Zhang, S. Lu, and W. Shi. A comparison study of Intel SGX and AMD memory encryption technology. In Proceedings of the 7th International Workshop on Hardware and Architectural Support for Security and Privacy, 2018.
* [64] P. Mohassel and Y. Zhang. Secureml: A system for scalable privacy-preserving machine learning. In 2017 IEEE Symposium on Security and Privacy (S&P), 2017.
* [65] O. Ohrimenko, F. Schuster, C. Fournet, A. Mehta, S. Nowozin, K. Vaswani, and M. Costa. Oblivious multi-party machine learning on trusted processors. In Proceedings of the 25th USENIX Security Symposium (USENIX Security), 2016.
* [66] O. Oleksenko, D. Kuvaiskii, P. Bhatotia, P. Felber, and C. Fetzer. Intel MPX Explained: A Cross-layer Analysis of the Intel MPX System Stack. Proceedings of the ACM on Measurement and Analysis of Computing Systems, 2018.
* [67] O. Oleksenko, B. Trach, R. Krahn, M. Silberstein, and C. Fetzer. Varys: Protecting SGX enclaves from practical side-channel attacks. In Proceedings of the USENIX Annual Technical Conference (USENIX ATC), 2018.
* [68] R. Oppliger. SSL and TLS: Theory and Practice. Artech House, 2016.
* [69] M. Orenbach, M. Minkin, P. Lifshits, and M. Silberstein. Eleos: ExitLess OS services for SGX enclaves. In Proceedings of the 12th ACM European ACM Conference in Computer Systems (EuroSys), 2017.
* [70] W. Ozga, D. Le Quoc, and C. Fetzer. A practical approach for updating an integrity-enforced operating system. In Proceedings of the 21th International Middleware Conference (Middleware), 2020.
* [71] B. Parno, J. R. Lorch, J. R. Douceur, J. Mickens, and J. M. McCune. Memoir: Practical state continuity for protected modules. In Proceedings of the 32nd IEEE Symposium on Security and Privacy (S&P), 2011.
* [72] R. Pires, D. Goltzsche, S. B. Mokhtar, S. Bouchenak, A. Boutet, P. Felber, R. Kapitza, M. Pasin, and V. Schiavoni. CYCLOSA: decentralizing private web search through sgx-based browser extensions. In 38th IEEE International Conference on Distributed Computing Systems(ICDCS), 2018.
* [73] D. L. Quoc, M. Beck, P. Bhatotia, R. Chen, C. Fetzer, and T. Strufe. PrivApprox: Privacy-Preserving Stream Analytics. In Proceedings of the 2017 USENIX Annual Technical Conference (USENIX ATC), 2017.
* [74] N. Santos, K. P. Gummadi, and R. Rodrigues. Towards Trusted Cloud Computing. In Proceedings of the 1st USENIX Workshop on Hot Topics in Cloud Computing (HotCloud), 2009.
* [75] N. Santos, R. Rodrigues, K. P. Gummadi, and S. Saroiu. Policy-sealed data: A new abstraction for building trusted cloud services. In Proceedings of the 21st USENIX Security Symposium, 2012.
* [76] V. A. Sartakov, S. Brenner, S. Ben Mokhtar, S. Bouchenak, G. Thomas, and R. Kapitza. Eactors: Fast and flexible trusted computing using sgx. In Proceedings of the 19th International Middleware Conference (Middleware), 2018.
* [77] F. Schuster, M. Costa, C. Gkantsidis, M. Peinado, G. Mainar-ruiz, and M. Russinovich. VC3 : Trustworthy Data Analytics in the Cloud using SGX. In Proceedings of the 36th IEEE Symposium on Security and Privacy (S&P), 2015.
* [78] S. Shinde, D. Tien, S. Tople, and P. Saxena. Panoply: Low-tcb linux applications with sgx enclaves. In Proceedings of the Annual Network and Distributed System Security Symposium (NDSS), page 12, 2017.
* [79] O. Simeone. A brief introduction to machine learning for engineers. arXiv preprint arXiv:1709.02840, 2017.
* [80] L. Soares and M. Stumm. FlexSC: Flexible System Call Scheduling with Exception-less System Calls. In Proceedings of the 9th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2010.
* [81] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the 31th AAAI Conference on Artificial Intelligence (AAAI), 2017.
* [82] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2016.
* [83] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2014.
* [84] S. Tallam, C. Coutant, I. L. Taylor, X. D. Li, and C. Demetriou. Safe icf: Pointer safe and unwinding aware identical code folding in gold. In GCC Developers Summit, 2010.
* [85] B. Trach, R. Faqeh, O. Oleksenko, W. Ozga, P. Bhatotia, and C. Fetzer. T-lease: A trusted lease primitive for distributed systems. In ACM Symposium on Cloud Computing 2020 (SoCC), 2020.
* [86] B. Trach, A. Krohmer, S. Arnautov, F. Gregor, P. Bhatotia, and C. Fetzer. Slick: Secure Middleboxes using Shielded Execution. 2017.
* [87] B. Trach, A. Krohmer, F. Gregor, S. Arnautov, P. Bhatotia, and C. Fetzer. ShieldBox: Secure Middleboxes using Shielded Execution. In Proceedings of the ACM SIGCOMM Symposium on SDN Research (SOSR), 2018.
* [88] B. Trach, O. Oleksenko, F. Gregor, P. Bhatotia, and C. Fetzer. Clemmys: Towards secure remote execution in faas. In 12th ACM International Conference on Systems and Storage (SYSTOR), 2019.
* [89] F. Tramèr and D. Boneh. Slalom: Fast, verifiable and private execution of neural networks in trusted hardware. In 7th International Conference on Learning Representations (ICLR), 2019.
* [90] C.-C. Tsai, D. E. Porter, and M. Vij. Graphene-SGX: A practical library OS for unmodified applications on SGX. In Proceedings of the USENIX Annual Technical Conference (USENIX ATC), 2017.
* [91] A. Vahldiek-Oberwagner, E. Elnikety, A. Mehta, D. Garg, P. Druschel, R. Rodrigues, J. Gehrke, and A. Post. Guardat: Enforcing data policies at the storage layer. In Proceedings of the 10th ACM European Conference on Computer Systems (EuroSys), 2015.
* [93] J. Van Bulck, M. Minkin, O. Weisse, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, T. F. Wenisch, Y. Yarom, and R. Strackx. Foreshadow: Extracting the keys to the intel sgx kingdom with transient out-of-order execution. In Proceedings of the 27th USENIX Security Symposium (USENIX Security), 2018.
* [94] S. Volos, K. Vaswani, and R. Bruno. Graviton: Trusted execution environments on gpus. In Proceedings of the 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2018.
* [95] W. Wang, G. Chen, X. Pan, Y. Zhang, X. Wang, V. Bindschaedler, H. Tang, and C. A. Gunter. Leaky cauldron on the dark land: Understanding memory side-channel hazards in sgx. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2017.
* [96] O. Weisse, J. Van Bulck, M. Minkin, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, R. Strackx, T. F. Wenisch, and Y. Yarom. Foreshadow-NG: Breaking the virtual memory abstraction with transient out-of-order execution. Technical report, 2018. See also USENIX Security paper Foreshadow [93].
* [97] B. Xu, N. Wang, T. Chen, and M. Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015.
* [98] Y. Xu, W. Cui, and M. Peinado. Controlled-channel attacks: Deterministic side channels for untrusted operating systems. In Proceedings of the 36th IEEE Symposium on Security and Privacy (S&P), 2015.
* [99] A. Zaytsev and A. Zaytsev. OpenVINO toolkit. https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html. Accessed: 28 May 2020.
* [100] W. Zheng, A. Dave, J. G. Beekman, R. A. Popa, J. E. Gonzalez, and I. Stoica. Opaque: An Oblivious and Encrypted Distributed Analytics Platform. In Proceedings of the 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI), 2017.
# Operation comfort vs. the importance of system components
Krzysztof J. Szajowski, Wrocław University of Science and Technology, Faculty
of Pure and Applied Mathematics, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław,
Poland<EMAIL_ADDRESS>(http://szajowski.wordpress.com/), and Małgorzata
Średnicka, Wrocław University of Science and Technology<EMAIL_ADDRESS>
###### Abstract
The paper focuses on portraying importance measures that are reasonably
helpful in analyzing system reliability and its development. The presented
measures concern coherent binary and multistate systems and help to
distinguish the most influential elements of the system, which require more
attention. Importance measures are presented for systems with known structure
(e.g., parallel, series) and for repairable or nonrepairable components.
Subject Classification: MSC 90B25 $\cdot$ (62N05; 60K10)
###### Keywords:
component importance $\cdot$ coherent system $\cdot$ binary system $\cdot$
multistate system $\cdot$ Barlow-Proschan measure $\cdot$ Birnbaum measure
$\cdot$ Natvig measure $\cdot$ universal generating function.
## 1 Introduction
### 1.1 Preliminaries
Let’s consider a system111System (in Ancient Greek: ςυςτηµα –romanized:
systema – a complex thing) – a set of interrelated elements realizing the
assumed goals as a whole., a complex structure with specific functionality.
Contemporary systems are characterized by their structural complexity. In the
process of designing the system, the most important thing is its preparation
for the implementation of the assumed goals. The mathematical model of the
system is based on the set theory as the family of subsets of given set
$\mathbb{C}=\\{c_{1},\ldots,c_{n}\\}$ having some properties. An example is
technical devices whose design is dictated by the need to perform specific
functions. The constructed system should function in a planned and predictable
manner. This property is a requirement that should also be considered in the
design and construction (fabrication) process. The goal is therefore to reduce
the risk222It is difficult to define _risk_ in general. In short, when we
think about risk, we mean the possibility of an unexpected loss caused by an
unpredictable event or harmful behavior (human, machine, animal, nature). One
can think about possibility of loss or injury. From the other side, the risk
is the chance or probability that a person (a system) will be harmed or
experience an adverse health (functioning) effect if exposed to a hazard. It
may also apply to situations with property or equipment loss, or harmful
effects on the environment. Therefore, we are talking about reducing ownership
and loss as a result of a random event. Risk reduction means minimizing the
chance of a loss occurring or limiting its size. In order to better understand
the risk and possibilities of risk management, the task of measuring risk has
been set. The task is not formulated so that its solution is universal. This
allowed to determine the desired properties of such measures [3]. of a break
in the planned operation of the system. Therefore, ensuring reliability and
proper operation is of great importance in system analysis and management of
its operation. One of the measures to assess the quality of a solution is
system performance. Correct and expected operation gives the expected results
- both in terms of size, time of achievement and costs (outlays) of receiving
them. These expectations are achieved by ensuring reliable system operation.
The performance of the system is therefore affected by the reliability of its
components and its structure. At the same time, not only the reliability of
the system depends on these factors. In the event of a failure, it is
important to be able to _localize the damage_ more easily and to remove it
(repair it). Therefore, it is obvious that not all elements have the same
effect on the functioning of the system. To improve system reliability and
readiness, as well as streamline maintenance and repair activities, the
importance of system components should be explored for both reliability and
maintenance - including diagnostics, maintenance and repairs. Without proper
analysis, it is impossible to predict the significance of individual elements
for these features. Individual elements may affect each of them to a different
degree. There are known results on evaluating the weight of components in the reliability of a system. The measures of the significance of elements for reliability introduced below will be the basis for diagnostic algorithms, a possibility noted at the end of Birnbaum's seminal papers (1968, 1969) (v. Barlow et al. (1975)). Indicating such algorithms is the subject of this study.
In order to determine the significance of the reliability of individual system
components to the reliability of the whole system, measures are constructed
that are sensitive to modifications in the system. This allows the
rationalization of the structure and subsequent planning for optimal
maintenance. The issues are complex due to the fact that it is necessary to
take into account both the effective reliability of the constructed system and
the cost of maintaining it in readiness in a given period. Profitability
analysis is of great importance. It is natural to formulate the problem by
defining the primary goal of minimizing the cost while guaranteeing the
expected levels of reliability. With this approach, it is possible to define
weights for the cost of individual elements in a given time horizon, while
ensuring a certain level of security or readiness. This approach can be found in the paper by Wu and Coolen (2013). At the same time, one should not forget about
the other key goals and parameters in system analysis. Their inclusion in the
balanced model is possible with the use of natural methods of analysis in the
formulation of many criteria, based on elements of game theory.
We are trying to present the issue comprehensively, although there is
currently no consistent approach on the way of determining the importance of
elements in the system. This is because the loss of functionality of an
element often does not clearly affect the system’s ability to perform tasks.
This aspect is highlighted by numerous examples presented in the literature,
which show a significant impact of the state of the environment in which the
systems are operated (time of day, weather conditions, environmental
pollution). In addition, attention should be paid to the cause-effect
relationships in the operation of the elements. We often deal with a sequential progression of damage and degradation of elements, which means that one can propose a modeling method, but not a universal model whose calibration would properly describe every analyzed problem. The methodological limitations mentioned here mean that the proposed methods develop the problems that we mention, but we do not exclude that the approach may also enrich other analyses based on other
premises and conditions. In order to organize the methodology, we will use the
systems classification, which will allow us to formulate assumptions. Wherever
possible, we provide the source of inspiration (a description of an issue in
which the proposed approach can be modeled, or we cite sources in the
literature that use an analogous model), although we realize that getting to
the original formulation of a concept or approach does not have to be the best
justification and motivation for the proposed approach.
### 1.2 Availability for planned tasks determines reliability.
We analyze the system (layout, structure) as one whole, carrying out a
specific simple task. We consider systems that are made up of elements. The
system is operational if it can accomplish the task for which it was created.
With this formulation, we assume that the task execution time is infinitely
short, so the possibility of failure in the course of the task can be
neglected. The analysis of the role of the components in such a system comes
down to the assessment of the impact of the reliability of a specific element
on the reliability of the whole. For this category of tests, measures of the
importance of elements will be helpful, which allow for the assessment
(measure) of the improvement in the system reliability resulting from the
improvement of the reliability of a given component. Such measures are useful
in determining the components whose reliability should be improved to obtain
the maximum improvement in system reliability. Examples of such measures can
be found in the works Birnbaum et al. (1961), Birnbaum (1969), Natvig (1985),
Boland et al. (1988).
We want, at this level of generality, to measure the weight of an element
related to its place in the structure, and structure has a role when the
system is intentionally designed. This analysis is also performed when the reliability of the components is unknown. Hence we say that we are looking for a measure of the significance of an element's position in the structure (a structural importance measure).
The factor that we want to include in the analysis is not only the position,
but also the reliability of the element. While still maintaining the
assumption that the system takes an infinitely short time to complete a task,
we introduce a measure of the element’s significance for reliability reasons
(reliability importance measure).
If the time needed to perform the task cannot be omitted, or the tasks are
repeated, and we know the reliability functions of the elements, the element
significance measure should also take into account the changes in the
reliability of elements over time. The inclusion of the reliability function
in the element significance analysis can be performed in various ways: globally, locally, or for a fixed time period (various lifetime importance measures).
The aspects presented so far relate to readiness to perform the task, leaving aside the need for maintenance and repair and the costs of these activities (cost of parts, repair and maintenance time, penalties for non-availability). In system maintenance tasks, when determining component importance, issues such as detecting the damaged components at system shutdown are important. The element that should be checked first (because it is most suspected of causing the failure) can be treated as important for the efficient conduct of maintenance or repair (v. e.g. Ping (2004)).
### 1.3 The role of the element in a failure.
At the time of failure (as opposed to analysis at the design stage), it is the maintenance team that analyzes the system. The team can monitor the state of the system and wants to find out what role the elements played in the observed state. To facilitate this analysis, we determine the posterior weights of the elements.
In other words, in these considerations the measure of importance of a component (group of components) in a given system is based on quantifying the "role" of that component (group of components) in the failure of the system.
Examples of such measures can be found in Fussell and Vesely (1972), Barlow
and Proschan (1975), El-Neweihi et al. (1978), El-Neweihi and Sethuraman
(1991) and Abouammoh et al. (1994). Defined measures (indices) of significance
allow us to identify the components (groups) that are probably responsible for
"causing" a system failure. Establishing these indexes, in turn, leads to an
effective control and maintenance principle, as well as optimizing the storage
of spare parts and optimal allocation of repairs to the appropriate
maintenance technicians of the relevant system components.
The purpose of such research is to propose new importance measures for
degrading components (v. Cao et al. (2019)). The motivation is based on
Shapley value, which can provide answers about how important players are to
the whole cooperative game and what payoff each player can reasonably expect.
The proposed importance measure characterizes how a specific degrading
component contributes to the degradation of system reliability by using
Shapley value. Degradation models are also introduced to assess the
reliability of degrading components. The reliability of a system consisting of independent degrading components is obtained by using structure functions, while the reliability of a system comprising correlated degrading components is evaluated with a multivariate distribution. The ranking of degrading components according to this importance measure depends on the degradation parameters of the components, the system structure and the parameters characterizing the association of components. The reliability degradation of engineering systems and equipment is often attributed to the degradation of a particular component, or set of components, characterized by degrading features. This approach
reflects the responsibility of each degrading component for the deterioration
of system reliability. The results are also able to give timely feedback of
the expected contribution of each degrading component to system reliability
degradation.
### 1.4 General systems classification
The systems can be split into two categories:
1. (i)
Binary systems (BS)
2. (ii)
Multistate systems (MSS)
A binary system (i) is a system comprised of $n$ elements. It has precisely two states: $0$, when the system is failed, and $1$, when the system is functioning. The term "binary" also pertains to the components that define the system: each component may be in only one of two states, $1$ when the component is functioning perfectly and $0$ when the element is completely damaged. Nevertheless, binary systems do not always fit real-life problems. Frequently we have to reckon with elements that undergo only a partial failure, but do not cease to perform their operation and do not cause the entire system to stop functioning. This is the case of the multistate systems (ii), which have the same properties as (i) except for the states of the components. Binary systems are discussed in chapter 2, while the discussion of multistate systems is deferred to the next paper.
The literature distinguishes three main classes of importance measures (v. Birnbaum (1969), Amrutkar and Kamalja (2017)), to which we add a fourth:
1. (i)
Reliability importance measure
2. (ii)
Structural importance measure
3. (iii)
Lifetime importance measure
4. (iv)
Failure and its recovery costs importance measure
The reliability importance measure (i) focuses on the change in the reliability of the system due to the reliability change of a particular component. The measure is evaluated with respect to a specific finite period of time and depends on the component reliabilities and on the system structure. If the reliabilities of the components are unknown, we consider instead the structural importance measure (ii); to apply it we need to know the structure of the system. This measure indicates the importance of components by checking the significance of the positions occupied by individual components. The lifetime importance measure (iii) depends on the lifetime distribution of a component and also on the component's position in the system. This measure can be divided into two categories with respect to being a function of time: time-independent lifetime importance and time-dependent lifetime importance. Last but not least, the failure and recovery cost importance measure (iv) depends on the lifetime distribution of a component, its position in the system, and the losses related to non-availability of the system, diagnosis and repair. It is a new look at the importance of the components of a complex system. The analysis and significance measure proposed in this paper are based on the possibility of observing the components and on a rational system maintenance policy, which consists in stopping the system for maintenance and repairs at a time when it pays off for a sufficient number of components. The details are based on a cooperative analysis of costs and losses in the operation of such a system (v. Section 2.7, Szajowski and Yasuda (1997)).
### 1.5 Review of importance measure concepts
Since Birnbaum (1968, 1969), importance measures have been investigated and extended in various directions (v. Amrutkar and Kamalja (2017)). Ramamurthy (1990) shows the relation of these ideas to research on cooperative games. These relationships can be helpful in determining the importance of elements for the reliability of the system and, at the same time, their role in efficient diagnosis in the event of a failure, as well as in determining the rules of procedure for removing a failure. Repair restores the features of the failed element and of the repaired module. However, it should be remembered that the method of repair and the quality of the elements used restore the original features to varying degrees (v. e.g. Navarro et al. (2019)). This has an impact on further operation, diagnosis and maintenance. Rules are easier to set when they are associated with objective measures of the features of components, modules and the system. Analysis of significance measures in the context of repairs helps to understand such relationships. Let us therefore establish these relationships (v. Do and Bérenguer (2020)).
###### Definition 1 (The structure)
For a non-empty and finite set $\mathbf{N}$ (the list of symbols and abbreviations used in the work is collected in the abbreviations section), we denote by $\mathcal{P}$ the family of subsets of $\mathbf{N}$ having the following properties:
1. (1)
$\emptyset\in{\mathcal{P}}$;
2. (2)
${\mathbf{N}}\in{\mathcal{P}}$;
3. (3)
$S\subseteq T\subseteq{\mathbf{N}}$ and $S\in{\mathcal{P}}$ imply
$T\in{\mathcal{P}}$.
The family $\mathcal{P}$ is called a structure.
This basic structure has been studied in many areas under a variety of names.
The monograph by Ramamurthy(1990) unified the definitions and concepts in two
main fields of application, that is cooperative game theory (simple games) (v.
Appendix 0.B, Chapt. 10 in Tijs(2003)) and reliability theory (semi-coherent
and coherent structures, v. Esary and Proschan (1963), Barlow and Wu (1978),
Ohi (2010)).
In reliability theory, consider the set ${\mathbf{N}}=\{1,2,\ldots,n\}$ of components with which a system $g$ has been built. The state of the system, as well as of any component, can either be $0$ (a failed state) or $1$ (a functioning state). The knowledge of the system is represented by the knowledge of its structure function, defined as a switching (Boolean) function $g:\{0,1\}^{n}\rightarrow\{0,1\}$ of $n$ variables (or of the $n$-dimensional vector $\vec{x}$). (With the same symbol we denote both the system and its analytical description via the structure function wherever this does not lead to misunderstandings.) The structure function $g$ (simply, the structure $g$) is called semi-coherent if
1. (1)
$g$ is monotone, i.e. $\overrightarrow{x}\preceq\overrightarrow{y}$ implies
$g(\overrightarrow{x})\leq g(\overrightarrow{y})$;
2. (2)
$g(\vec{0})=0$ and $g(\vec{1})=1$.
The semi-coherent structure is called coherent when all its elements are significant. A subset $A\subseteq{\mathbf{N}}$ is called a path set of $g$ if $g(\vec{1}^{A},\vec{0}^{{\mathbf{N}}\setminus A})=1$, i.e. the system works when the items forming the set $A$ [resp. ${\mathbf{N}}\setminus A$] are working [resp. failed]. Similarly, $A\subseteq{\mathbf{N}}$ is called a cut set of $g$ if $g(\vec{0}^{A},\vec{1}^{{\mathbf{N}}\setminus A})=0$. Obviously, the family of path [cut] sets of a semi-coherent structure $g$ satisfies the three properties of the basic structure mentioned at the beginning.
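For small systems, the two conditions of semi-coherence can be checked mechanically. The following Python sketch (our own illustration; the 2-out-of-3 structure is an arbitrary example, not taken from the cited literature) verifies the boundary conditions and monotonicity by brute-force enumeration.

```python
from itertools import product

def is_semi_coherent(g, n):
    """Check g(0,...,0)=0, g(1,...,1)=1 and monotonicity of the structure
    function g over all 2^n state vectors (brute force, suitable for small n)."""
    states = list(product((0, 1), repeat=n))
    if g((0,) * n) != 0 or g((1,) * n) != 1:
        return False
    # monotone: x <= y componentwise implies g(x) <= g(y)
    return all(g(x) <= g(y)
               for x in states for y in states
               if all(a <= b for a, b in zip(x, y)))

# illustration: a 2-out-of-3 structure (the system works iff at least 2 components work)
g = lambda x: int(sum(x) >= 2)
print(is_semi_coherent(g, 3))  # True
```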
### 1.6 Cooperative games vs. semi-coherent systems
[30, Sec. 2] indicates the correspondence between the terminology of
cooperative game theory and reliability by means of a list of equivalent
notions: player or component; simple game or semi-coherent structure;
characteristic function or structure function; winning [blocking] coalition or
path [cut] set; minimal winning [blocking] coalition or minimal path [cut]
set. A review of the various types of simple games and semi-coherent structures encountered in the literature is given there. Most interesting is [30, Ch. 3], where a detailed study of the problem of assessing the importance [power] of components [players] comprising the system [game] is presented. The emphasis is on the probabilistic approach to the quantification
of relative importance.
## 2 Binary systems
### 2.1 Preliminary remarks
Importance measures are helpful in deciding which element's development to emphasize in order to improve the functioning of the system, by indicating the more meaningful ones. The system yield function, first introduced by Birnbaum (1968), was a concept of a general measure of importance; his idea takes into account the structure of the system only. Later research on the topic went in various directions (cf. Xie (1987)).
New variants of importance measures can be found in Dui et al.(2017), Wu and
Coolen(2013), Dutuit and Rauzy(2015). Importance measures have been widely
used as decision-aiding indicators for various purposes, such as reliability studies, risk analysis and maintenance optimization. A novel time-dependent importance measure for systems composed of multiple non-repairable components is proposed by Do and Bérenguer (2020). The proposed importance measure of a component (or module of components) is defined as its ability to improve the system reliability during a mission given the current conditions (states or degradation levels) of its components. To take into account economic aspects, such as maintenance costs, economic dependence between components and the cost benefit of maintenance operations, an extension of the proposed importance measure is then investigated. Thanks to these proposed importance measures, the component (or group of components) can be _rationally_ selected for preventive maintenance with regard to reliability criteria or financial issues. A new treatment of this topic is the subject of Section 2.7.
### 2.2 Coherence and system structure
In this paper, we consider coherent structures, i.e. structures with nondecreasing structure functions; we call such structures monotonic. We will not consider structures whose state does not depend on the states of their elements.
###### Definition 2
The structure $\phi$ is called semi-coherent if for all state vectors $\overrightarrow{x}$ and $\overrightarrow{y}$, $\overrightarrow{x}\preceq\overrightarrow{y}$ implies
$\phi(\overrightarrow{x})\leq\phi(\overrightarrow{y}),$
and coherent if additionally it satisfies $\phi(\vec{1})=1$ and $\phi(\vec{0})=0$.
To classify a structure of a multi-component system as coherent, we have to introduce more notation and some properties [11], [10]. At the very beginning we assume that the system comprises $n$ components, denoted by $\vec{c}=(c_{1},c_{2},\dots,c_{n})$. Each component is in exactly one of two states, failed (F) or functioning (working, W), which is encoded by a binary indicator variable $x_{i}=\mathbb{I}_{\textbf{W}}(c_{i})$, $c_{i}\in\{\textbf{F},\textbf{W}\}$, for every $i=1,2,\ldots,n$. Together these indicators form the state vector (vector of component states) $\overrightarrow{x}=(x_{1},x_{2},\ldots,x_{n})$. State vectors are compared with the following notation [10], where the conditions run over $i=1,\ldots,n$:
$\overrightarrow{x}=\overrightarrow{y}$ if $x_{i}=y_{i}$ for all $i$;$\qquad$ $\overrightarrow{x}\preceq\overrightarrow{y}$ if $x_{i}\leq y_{i}$ for all $i$;$\qquad$ $\overrightarrow{x}\prec\overrightarrow{y}$ if $\overrightarrow{x}\preceq\overrightarrow{y}$ and $\overrightarrow{x}\neq\overrightarrow{y}$.
Moreover, we assume that a system composed of $n$ binary elements also has only two possible states, failed or functioning. Let $\phi:\{0,1\}^{n}\rightarrow\{0,1\}$ be the structure function. If $\phi$ fulfills the conditions of Definition 2 and the structure is monotonic and irreducible, then the structure function $\phi$ is called coherent.
The structure function $\phi$ may, for every $j=1,2,\ldots,n$, be presented in the form
$\phi(\overrightarrow{x})=x_{j}\cdot\delta_{j}(\overrightarrow{x})+\mu_{j}(\overrightarrow{x})$ (1)
where
$\delta_{j}(\overrightarrow{x})=\phi(1_{j},\overrightarrow{x}_{-j})-\phi(0_{j},\overrightarrow{x}_{-j})$ (2)
$\mu_{j}(\overrightarrow{x})=\phi(0_{j},\overrightarrow{x}_{-j}).$ (3)
Hence, the component $c_{j}$ with the state $x_{j}$ does not influence $\delta_{j}(\overrightarrow{x})$ and $\mu_{j}(\overrightarrow{x})$.
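The decomposition (1)–(3) is easy to verify by enumeration. A minimal Python sketch (our own illustration; the series structure of three components is an assumed example) computes $\delta_{j}$ and $\mu_{j}$ and checks identity (1) at every state vector:

```python
from itertools import product

def pivot(phi, x, j):
    """Pivotal decomposition of the structure function phi at component j:
    delta_j(x) = phi(1_j, x_{-j}) - phi(0_j, x_{-j}), mu_j(x) = phi(0_j, x_{-j})."""
    x1, x0 = list(x), list(x)
    x1[j], x0[j] = 1, 0
    return phi(tuple(x1)) - phi(tuple(x0)), phi(tuple(x0))

phi = lambda x: x[0] * x[1] * x[2]      # series structure of three components
for x in product((0, 1), repeat=3):
    for j in range(3):
        delta, mu = pivot(phi, x, j)
        assert phi(x) == x[j] * delta + mu   # identity (1)
```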
### 2.3 Reliability importance.
If for $i=1,\ldots,n$ we consider independent elements $X_{i}$, then the system reliability is defined as a function of the reliability of its components, equal to the probability that the whole system keeps functioning. Assume a coherent system with a vector $\overrightarrow{\textbf{p}}=(p_{1},\dots,p_{n})$ of component reliabilities; then the reliability function is expressed as
$h(\overrightarrow{\textbf{p}})=\textbf{P}\{\omega:\phi(\overrightarrow{X}(\omega))=1|\overrightarrow{\textbf{p}}\}=\textbf{E}[\phi(\overrightarrow{X})|\overrightarrow{\textbf{p}}],$ (4)
where $h(\overrightarrow{\textbf{p}})$ is the reliability of the structure $\phi$ as a function of the reliability of its components. From equations (1) and (4) we have
$h(\overrightarrow{\textbf{p}})=p_{i}\cdot\textbf{E}[\delta_{i}(X)]+\textbf{E}[\mu_{i}(X)]$ (5)
for every $i=1,\ldots,n$, and from (1) and (5) we obtain the _reliability importance_ of the component $c_{i}$ in the system $\phi$
$I_{\phi}(i;\overrightarrow{\textbf{p}})=I_{h}(i;\overrightarrow{\textbf{p}})=\textbf{D}_{p_{i}}h(\overrightarrow{\textbf{p}})=\frac{\partial h(\overrightarrow{\textbf{p}})}{\partial p_{i}}\stackrel{{\scriptstyle(5)}}{{=}}\textbf{E}[\delta_{i}(\overrightarrow{X})],$ (6)
which was first introduced by Birnbaum (1969). These _importance measures_ are collected in the vector $\vec{B}(\overrightarrow{\textbf{p}})$ with coordinates
$B(i|\overrightarrow{\textbf{p}})=\textbf{D}_{p_{i}}h(\overrightarrow{\textbf{p}})=\textbf{D}_{1-p_{i}}(1-h(\overrightarrow{\textbf{p}})),\qquad i=1,2,\ldots,n,$ (7)
where $B(i|\overrightarrow{\textbf{p}})$ depends on $\overrightarrow{\textbf{p}}$. If the reliabilities $\overrightarrow{\textbf{p}}$ are unknown, we obtain the _structural importance_, defined as
$B(i)=\textbf{D}_{p_{i}}h(\overrightarrow{\textbf{p}})\Big{\rvert}_{p_{1}=\ldots=p_{n}=\frac{1}{2}},\qquad i=1,2,\ldots,n,$ (8)
which will be discussed in Section 2.6. The _reliability importance_ (v. [10]) of a component $c_{i}$ is defined as
$I_{\phi}(i,r;\overrightarrow{\textbf{p}})=\textbf{P}\{\phi(\overrightarrow{X})=r|X_{i}=r;\overrightarrow{\textbf{p}}\}-\textbf{P}\{\phi(\overrightarrow{X})=r|\overrightarrow{\textbf{p}}\}=\textbf{P}\{\phi(\overrightarrow{X})=r|(r,\overrightarrow{\textbf{p}}_{-i})\}-\textbf{P}\{\phi(\overrightarrow{X})=r|\overrightarrow{\textbf{p}}\},$
with $r=1$ for the functioning and $r=0$ for the failure of the structure $\phi$. Hence, the _compound reliability importance_ of the component $c_{i}$ for the structure $\phi$ is
$I_{\phi}(i;\overrightarrow{\textbf{p}})=I_{\phi}(i,1;\overrightarrow{\textbf{p}})+I_{\phi}(i,0;\overrightarrow{\textbf{p}}),$ (9)
which is exactly equal to
$I_{\phi}(i;\overrightarrow{\textbf{p}})=\frac{\partial h(\overrightarrow{\textbf{p}})}{\partial p_{i}}=\textbf{E}[\delta_{i}(\overrightarrow{X})].$ (10)
We can easily get (10) from (9)
$\displaystyle I_{\phi}(i;\overrightarrow{\textbf{p}})$
$\displaystyle=I_{\phi}(i,1;\overrightarrow{\textbf{p}})+I_{\phi}(i,0;\overrightarrow{\textbf{p}})$
$\displaystyle=\textbf{P}\\{\phi(\overrightarrow{X})=1|X_{i}=1;\overrightarrow{\textbf{p}}\\}-\textbf{P}\\{\phi(\overrightarrow{X})=1;\overrightarrow{\textbf{p}}\\}$
$\displaystyle\quad+\textbf{P}\\{\phi(\overrightarrow{X})=0|X_{i}=0;\overrightarrow{\textbf{p}}\\}-\textbf{P}\\{\phi(\overrightarrow{X})=0;\overrightarrow{\textbf{p}}\\}$
$\displaystyle=\textbf{P}\\{\phi(\overrightarrow{X})=1|X_{i}=1;\overrightarrow{\textbf{p}}\\}-(1-\textbf{P}\\{\phi(\overrightarrow{X})=0|X_{i}=0;\overrightarrow{\textbf{p}}\\})$
$\displaystyle=\textbf{P}\\{\phi(\overrightarrow{X})=1|X_{i}=1;\overrightarrow{\textbf{p}}\\}-\textbf{P}\\{\phi(\overrightarrow{X})=1|X_{i}=0;\overrightarrow{\textbf{p}}\\}.$
Hence, the equivalent definition of the reliability importance [43] is
$I_{h}(i;\overrightarrow{\textbf{p}})=h(1,\overrightarrow{\textbf{p}}_{-i})-h(0,\overrightarrow{\textbf{p}}_{-i})=\textbf{E}\big{[}\phi(1_{i},\overrightarrow{X})-\phi(0_{i},\overrightarrow{X})\big{]}=\textbf{E}\delta_{i}(\overrightarrow{X}).$
(11)
For a coherent system, the reliability of each element and the reliability importance belong to the interval $(0,1)$. From (11) we obtain
$I_{h}(i;\overrightarrow{\textbf{p}})=\textbf{P}\\{\phi(1_{i},\overrightarrow{X})-\phi(0_{i},\overrightarrow{X})=1\\}.$
(12)
From equations (11) and (12) we conclude that $I_{h}(i)$ can be interpreted as the probability that the system is in a state in which it fails precisely when the $i$-th element fails, i.e. a state in which the $i$-th element is critical.
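Relations (6) and (11) can be illustrated numerically. The sketch below (our own illustration with an assumed structure and assumed reliabilities) computes $h(\overrightarrow{\textbf{p}})$ by exact enumeration, evaluates the reliability importance as the difference in (11), and cross-checks it against a finite-difference approximation of the partial derivative in (6); the two agree because $h$ is multilinear in $\overrightarrow{\textbf{p}}$.

```python
from itertools import product

def reliability(phi, p):
    """h(p) = E[phi(X)] for independent binary components with P(X_i = 1) = p_i."""
    h = 0.0
    for x in product((0, 1), repeat=len(p)):
        w = 1.0
        for xi, pi in zip(x, p):
            w *= pi if xi else 1.0 - pi
        h += w * phi(x)
    return h

def birnbaum(phi, p, i):
    """I_h(i; p) = h(1_i, p_{-i}) - h(0_i, p_{-i}), cf. (11)."""
    p1, p0 = list(p), list(p)
    p1[i], p0[i] = 1.0, 0.0
    return reliability(phi, p1) - reliability(phi, p0)

phi = lambda x: x[0] * (1 - (1 - x[1]) * (1 - x[2]))  # c1 in series with (c2 | c3)
p, eps = [0.9, 0.8, 0.7], 1e-6
for i in range(3):
    pp, pm = list(p), list(p)
    pp[i] += eps; pm[i] -= eps
    derivative = (reliability(phi, pp) - reliability(phi, pm)) / (2 * eps)  # (6)
    print(i + 1, round(birnbaum(phi, p, i), 6), round(derivative, 6))
```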
###### Example 1 (Birnbaum reliability importance - series)
Let us consider the series structure presented in Fig. 1 composed of three
independent components, where each component $c_{1},c_{2},c_{3}$ has a
corresponding reliability $\overrightarrow{\textbf{p}}=(0.95,0.99,0.96)$.
Figure 1: Series structure of the components $c_{1},c_{2},c_{3}$.
Then the system reliability equals $h(\overrightarrow{\textbf{p}})=\prod^{3}_{i=1}p_{i}=0.90288$ and the Birnbaum reliability importance (7) of the components $c_{1},c_{2},c_{3}$ is
$(B(1|\overrightarrow{\textbf{p}}),B(2|\overrightarrow{\textbf{p}}),B(3|\overrightarrow{\textbf{p}}))=\Big(\prod^{3}_{\substack{i=1\\ i\neq 1}}p_{i},\;\prod^{3}_{\substack{i=1\\ i\neq 2}}p_{i},\;\prod^{3}_{\substack{i=1\\ i\neq 3}}p_{i}\Big)=(0.9504,\,0.912,\,0.9405).$
In the series system we may see that the component having the smallest
reliability is the most meaningful for the system.
###### Example 2 (Birnbaum reliability importance - parallel)
Let us consider the parallel structure presented in Fig. 2 composed of three
independent components
Figure 2: Parallel structure of the components $c_{1},c_{2},c_{3}$.
where the components $c_{1},c_{2},c_{3}$ have the same reliabilities as in Example 1. Then the system reliability equals
$h(\overrightarrow{\textbf{p}})=\coprod^{3}_{i=1}p_{i}=1-\prod^{3}_{i=1}(1-p_{i})=0.99998$
and the Birnbaum reliability importance (7) of the components $c_{1},c_{2},c_{3}$ is
$(B(1|\overrightarrow{\textbf{p}}),B(2|\overrightarrow{\textbf{p}}),B(3|\overrightarrow{\textbf{p}}))=\Big(\prod^{3}_{\substack{i=1\\ i\neq 1}}(1-p_{i}),\;\prod^{3}_{\substack{i=1\\ i\neq 2}}(1-p_{i}),\;\prod^{3}_{\substack{i=1\\ i\neq 3}}(1-p_{i})\Big)=(0.0004,\,0.002,\,0.0005).$
In the parallel system we may see that the component having the greatest
reliability is the most relevant for the system.
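Both examples can be reproduced in a few lines; the sketch below simply evaluates the closed-form products derived above at $\overrightarrow{\textbf{p}}=(0.95,0.99,0.96)$.

```python
p = (0.95, 0.99, 0.96)

# Example 1: series structure
h_series = p[0] * p[1] * p[2]                      # system reliability, 0.90288
B_series = [h_series / pi for pi in p]             # product of the other two reliabilities
print(round(h_series, 5), [round(b, 4) for b in B_series])      # [0.9504, 0.912, 0.9405]

# Example 2: parallel structure
q = [1 - pi for pi in p]                           # component unreliabilities
h_parallel = 1 - q[0] * q[1] * q[2]                # system reliability, 0.99998
B_parallel = [q[0] * q[1] * q[2] / qi for qi in q]  # product of the other two unreliabilities
print(round(h_parallel, 5), [round(b, 4) for b in B_parallel])  # [0.0004, 0.002, 0.0005]
```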
### 2.4 Lifetime importance measure.
If $n$ components comprise the system, then we assume that for $t\geq 0$ and $i=1,2,\ldots,n$ a stochastic process $X_{i}(\omega,t)$ describes the state of the $i$-th component, with $X_{i}(\omega,t)$ equal to $0$ or $1$ depending on whether the component is failed or functioning at the moment $t$. Let $\xi_{i}(\omega)=\inf\{t\in\Re^{+}:X_{i}(\omega,t)=0\}$ be the lifetime of the $i$-th element and denote $Q_{i}(s)=\textbf{P}\{\omega:\xi_{i}(\omega)\geq s\}$. Assuming a continuous life distribution of the $i$-th component, $Q_{i}(t)=\textbf{P}\{\omega:X_{i}(\omega,t)=1\}$, and the structure $\phi$, at each moment $t$ the reliability of the structure is given by the function $h(\overrightarrow{Q}(t))$ (see (4)). With this notation, the process of system states is $X(\omega,t)=\phi(\overrightarrow{X}(\omega,t))$ and the system’s reliability function $Q(t)$ can be derived. Hence,
$Q(t)=h(\overrightarrow{Q}(t))=h\big{(}{Q}_{1}(t),{Q}_{2}(t),\ldots,{Q}_{n}(t)\big{)}$ (13)
$=\textbf{P}\{\omega:\phi(\overrightarrow{X}(\omega,t))=1\}=\textbf{E}\big{[}\phi(\overrightarrow{X}(\omega,t))\big{]}.$ (14)
Let us calculate the density function $f(t)=-Q^{\prime}(t)$ of the survival time distribution of the structure with reliability function $h$:
$f(t)=-\frac{d}{dt}Q(t)\stackrel{{\scriptstyle(13)}}{{=}}-\left\langle\nabla h(\overrightarrow{Q}(t)),\overrightarrow{\textbf{q}}(t)\right\rangle\stackrel{{\scriptstyle(7)}}{{=}}-\left\langle\overrightarrow{\textbf{I}}_{h}(\overrightarrow{Q}(t)),\overrightarrow{\textbf{q}}(t)\right\rangle=-\left\langle\textbf{E}[\overrightarrow{\delta}(\overrightarrow{X})],\overrightarrow{\textbf{q}}(t)\right\rangle,$ (15)
where $\overrightarrow{\textbf{q}}(t)$ denotes the vector of derivatives $Q_{i}^{\prime}(t)$. Birnbaum (1968) first introduced importance measures for a fixed moment of time $t$, while Barlow and Proschan (1975) freed the measure from the time dependence: they proposed the probability that the system failure coincides with the failure of the $i$-th component, which means that the $i$-th component impaired the system.
###### Fact 2.1
For $i=1,2,\ldots,n$ let the $i$-th component have distribution $F_{i}$, reliability $Q_{i}$ and density $f_{i}$. Then the conditional probability that a system failure occurring at time $t$ was caused by component $i$ is
$\frac{f_{i}(t)\cdot[h(1_{i},\overrightarrow{Q}_{-i}(t))-h(0_{i},\overrightarrow{Q}_{-i}(t))]}{\left\langle\nabla h(\overrightarrow{Q}(t)),\overrightarrow{\textbf{f}}(t)\right\rangle}=\frac{f_{i}(t)\cdot I_{h}(i;\overrightarrow{Q}(t))}{\sum_{k=1}^{n}f_{k}(t)\cdot I_{h}(k;\overrightarrow{Q}(t))}.$ (16)
###### Proof
The probability that the system is in a state in which it functions at time $t$ if the $i$-th element functions and does not function at time $t$ if the $i$-th element does not function is
$P\big{[}\phi(1_{i},\overrightarrow{X}_{-i}(t))-\phi(0_{i},\overrightarrow{X}_{-i}(t))=1\big{]}=h(1_{i},\overrightarrow{Q}_{-i}(t))-h(0_{i},\overrightarrow{Q}_{-i}(t))=I_{h}(i;\overrightarrow{Q}(t)).$ (17)
Therefore, the numerator in (16) multiplied by $dt$ represents the probability that in the interval $[t,t+dt]$ the $i$-th component led to the failure of the system, and the denominator multiplied by $dt$ stands for the probability that the system failed in the given interval [5].
###### Fact 2.2
As a consequence of equation (17), the probability that component $i$ causes the system failure in the time interval $[0,t]$, given that the system failure occurs in the same period $[0,t]$, is
$\frac{\int_{0}^{t}I_{h}(i;\overrightarrow{Q}(u))\,dF_{i}(u)}{\sum_{k=1}^{n}\int_{0}^{t}I_{h}(k;\overrightarrow{Q}(u))\,dF_{k}(u)}=\frac{\int_{0}^{t}[h(1_{i},\overrightarrow{Q}_{-i}(u))-h(0_{i},\overrightarrow{Q}_{-i}(u))]\,dF_{i}(u)}{\int_{0}^{t}\sum_{k=1}^{n}[h(1_{k},\overrightarrow{Q}_{-k}(u))-h(0_{k},\overrightarrow{Q}_{-k}(u))]\,dF_{k}(u)}.$ (18)
Letting $t\to\infty$ in equation (18), we obtain the probability that component $i$ leads the system to total failure; in this limit the denominator equals $1$. We take this limit as the definition of component importance.
###### Definition 3
As a consequence of (18), the probability that component $i$ causes the system failure is
$I_{h}(i;\overrightarrow{Q})=\int\limits_{0}^{\infty}[h(1_{i},\overrightarrow{Q}_{-i}(t))-h(0_{i},\overrightarrow{Q}_{-i}(t))]\,dF_{i}(t),$ (19)
where $I_{h}(i;\overrightarrow{Q})$ is precisely the lifetime importance measure of the $i$-th component.
###### Fact 2.3
The importance measure has the following properties (illustrated numerically in the sketch after the list):
1. 1.
$I_{h}(i;\overrightarrow{Q})\in[0,1]$
2. 2.
$I_{h}(i;\overrightarrow{Q})\in(0,1)$ if $n\geq 2$
3. 3.
$\sum_{i=1}^{n}I_{h}(i;\overrightarrow{Q})=1$
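A numerical sketch of Definition 3 and Fact 2.3 (the structure, the exponential lifetimes and their failure rates are assumptions made for illustration only): the integral (19) is evaluated by midpoint quadrature for $c_{1}$ in series with the parallel pair $(c_{2},c_{3})$, and the computed importances sum to $1$.

```python
import math

rates = (1.0, 2.0, 3.0)                              # assumed exponential failure rates
Q = lambda i, t: math.exp(-rates[i] * t)             # component reliability Q_i(t)
f = lambda i, t: rates[i] * math.exp(-rates[i] * t)  # lifetime density f_i(t)
h = lambda q: q[0] * (1 - (1 - q[1]) * (1 - q[2]))   # c1 in series with (c2 | c3)

def bp_importance(i, T=40.0, steps=100_000):
    """Midpoint-rule evaluation of (19): the probability that component i
    causes the system failure."""
    dt, total = T / steps, 0.0
    for k in range(steps):
        t = (k + 0.5) * dt
        q = [Q(j, t) for j in range(3)]
        q1, q0 = q[:], q[:]
        q1[i], q0[i] = 1.0, 0.0
        total += (h(q1) - h(q0)) * f(i, t) * dt
    return total

I = [bp_importance(i) for i in range(3)]
print([round(v, 4) for v in I], "sum =", round(sum(I), 4))  # sum = 1.0 (Fact 2.3)
```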
Birnbaum (1969) extended the reliability importance of components, although he was not able to free the measure from time dependence. The probability distribution $F_{i}(t)=P\{\xi_{i}\leq t\}$ was considered under the assumption that each $i$-th component has a life length $\xi_{i}$ [10]. Therefore, using this assumption and those from the beginning of this section, we have the lifetime importance measure given by
$I_{h}^{i}(t)=\textbf{P}\Big{[}\phi\big{(}1_{i},\overrightarrow{X}_{-i}(t)\big{)}-\phi\big{(}0_{i},\overrightarrow{X}_{-i}(t)\big{)}=1\Big{]}$ (20)
$=h(1_{i},\overrightarrow{Q}_{-i}(t))-h(0_{i},\overrightarrow{Q}_{-i}(t)),$ (21)
which describes the probability that at time $t$ the system is in a state in which the $i$-th component is crucial for the system. If the $i$-th component is in series or in parallel with the system, the formula specializes as in the structural importance case [41].
### 2.5 Module importance
A multi-component coherent system may be partitioned into modules, i.e. sub-systems consisting of different components. As Birnbaum [10] proposed, for a fixed time, given the coherent structure $\phi$ written as
$\phi(x)=\phi(x_{1},x_{2},\ldots,x_{n})=x_{1}\cdot\delta_{x_{1}}(\phi;x)+\mu_{x_{1}}(\phi;x)$ (22)
and a coherent structure $\Psi(y)$ denoted as
$\Psi(y)=\Psi(y_{1},y_{2},\ldots,y_{m}),$ (23)
we may obtain the structure $\chi$ by substituting the coherent module $\Psi(y)$ for the element $x_{1}$ in $\phi(x)$:
$\chi(y_{1},\ldots,y_{m},x_{2},\ldots,x_{n})=\phi[\Psi(y_{1},\ldots,y_{m}),x_{2},\ldots,x_{n}]=\phi[\Psi(y),x]$ (24)
$=\Psi(y)\cdot\delta_{x_{1}}(\phi;x)+\mu_{x_{1}}(\phi;x).$ (25)
From (25) we deduce that
$\begin{gathered}\delta_{y_{1}}(\chi;y_{1},\ldots,y_{m},x_{2},\ldots,x_{n})=\chi(1,y_{2},\ldots,y_{m},x_{2},\ldots,x_{n})-\chi(0,y_{2},\ldots,y_{m},x_{2},\ldots,x_{n})\\ =\delta_{x_{1}}(\phi;x)\cdot[\Psi(1_{1},y)-\Psi(0_{1},y)]=\delta_{y_{1}}(\Psi;y)\cdot\delta_{x_{1}}(\phi;x).\end{gathered}$ (26)
Combining equations (10) and (26), we obtain the importance of a component $y_{1}$ of the module in the system $\chi$:
$I_{y_{1}}(\chi;y_{1},\ldots,y_{m},x_{2},\ldots,x_{n})=I_{x_{1}}(\phi;x)\cdot I_{y_{1}}(\Psi;y).$ (27)
For the system $\chi$ we may derive the importance of every component
comprising the module $\Psi$ by repeating the procedure of substituting
modules for components till none is left [10].
A different definition of the module importance of a coherent system was proposed by Barlow and Proschan in [5] (cf. [8]).
###### Definition 4
For $n$ components, let us introduce a coherent structure $\phi$, a subset $M$ of $\{1,2,\ldots,n\}$ with complement $M^{C}$, and a coherent system $\chi$ comprised of the components in $M$. Then $(M,\chi)$ is a module of the coherent system $\phi$ if
$\phi(x)=\Psi[\chi(x^{M}),x^{M^{C}}],$ (28)
where $x^{M}$ [$x^{M^{C}}$] denotes the states of the components in $M$ [in its complement]. The module importance $I_{h}(M)$ is the probability of the module causing the system failure.
###### Theorem 2.4
If $i\in M$ and $f$ denotes the module's reliability function, then
$I_{h}(i)=\int\limits_{0}^{\infty}\big{[}h(1^{M},\bar{F}(t))-h(0^{M},\bar{F}(t))\big{]}\cdot\big{[}f(1_{i},\bar{F}(t))-f(0_{i},\bar{F}(t))\big{]}\,dF_{i}(t)$ (29)
and
$I_{h}(M)=\sum_{i\in M}I_{h}(i)$ (30)
###### Proof
(29) The probability that, at time $t$, the system is in a state in which it functions if and only if the module functions is represented by
$h(1^{M},\bar{F}(t))-h(0^{M},\bar{F}(t))=P\big{[}\phi(1^{M},X(t))-\phi(0^{M},X(t))=1\big{]},$
while the probability that the module is in a state in which it functions if and only if component $i$ functions is
$f(1_{i},\bar{F}(t))-f(0_{i},\bar{F}(t))=P\big{[}\chi(1_{i},X(t))-\chi(0_{i},X(t))=1\big{]}.$
In a system with modules, component $i$ may cause system failure only through module failure; hence
$\sum_{i\in M}I_{h}(i)=\int\limits_{0}^{\infty}\big{[}h(1^{M},\bar{F}(t))-h(0^{M},\bar{F}(t))\big{]}\cdot\sum_{i\in M}\big{[}f(1_{i},\bar{F}(t))-f(0_{i},\bar{F}(t))\big{]}\,dF_{i}(t)$
$=-\int\limits_{0}^{\infty}\big{[}h(1^{M},\bar{F}(t))-h(0^{M},\bar{F}(t))\big{]}\frac{d}{dt}f(\bar{F}(t))\,dt=I_{h}(M).$
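Theorem 2.4 can be checked numerically under assumed exponential lifetimes (a sketch with arbitrary failure rates, mirroring the quadrature used earlier): for the module $M=\{c_{2},c_{3}\}$ of the structure with $c_{1}$ in series with the parallel pair $(c_{2},c_{3})$, the module importance computed directly coincides with $\sum_{i\in M}I_{h}(i)$.

```python
import math

rates = (1.0, 2.0, 3.0)                              # assumed failure rates
Q = lambda i, t: math.exp(-rates[i] * t)             # reliabilities Q_i(t)
f = lambda i, t: rates[i] * math.exp(-rates[i] * t)  # densities f_i(t)
h = lambda q: q[0] * (1 - (1 - q[1]) * (1 - q[2]))   # c1 in series with (c2 | c3)

def integrate(g, T=40.0, steps=100_000):             # midpoint quadrature
    dt = T / steps
    return sum(g((k + 0.5) * dt) * dt for k in range(steps))

def I_h(i):                                          # component importance, (19)
    def g(t):
        q = [Q(j, t) for j in range(3)]
        q1, q0 = q[:], q[:]
        q1[i], q0[i] = 1.0, 0.0
        return (h(q1) - h(q0)) * f(i, t)
    return integrate(g)

def I_h_M():                                         # module M = {c2, c3} directly
    def g(t):
        # density of the module lifetime: -d/dt [1 - (1-Q2)(1-Q3)]
        dens = (1 - Q(1, t)) * f(2, t) + (1 - Q(2, t)) * f(1, t)
        return Q(0, t) * dens                        # h(1^M, .) - h(0^M, .) = Q_1(t)
    return integrate(g)

print(round(I_h_M(), 4), round(I_h(1) + I_h(2), 4))  # the two values coincide
```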
###### Note 1
Birnbaum's definition of module importance is slightly different from the one introduced by Barlow and Proschan. In Birnbaum's definition, the importance of a module's component equals the component's importance for the module multiplied by the importance of the module for the system. This is not consistent with the Barlow and Proschan definition, because $r(x)=s(x)\cdot u(x)$ for each $x$ does not imply
$\int_{a}^{b}r(x)dx=\int_{a}^{b}s(x)dx\cdot\int_{a}^{b}u(x)dx.$
###### Lemma 1
If a component $i$ is in series with the system, then the importance $I_{h}(i)$ is increasing in $F_{i}(t)$ and in $\bar{F}_{j}(t)$ for $i\neq j$. Conversely, if component $i$ is in parallel with the system, then the importance $I_{h}(i)$ is decreasing.
###### Proof
Assuming that component $i$ is in series with the system, we obtain
$I_{h}(i)=\int_{0}^{\infty}h(1_{i},\bar{F}(t))\,dF_{i}(t),$
since $h(0_{i},\bar{F}(t))=0$ by the hypothesis. Since $h(1_{i},p)$ increases in each $p$, $I_{h}(i)$ increases in $\bar{F}_{j}(t)$ for $i\neq j$. Moreover, $h(1_{i},\bar{F}(t))$ decreases in $t$, therefore $I_{h}(i)$ increases in $F_{i}(t)$.
###### Lemma 2
If component $i$ is in series or in parallel with the system and all components have the same distribution $F$, then for $i\neq j$ we have $I_{h}(i)\geq I_{h}(j)$.
###### Proof
Assume that $i$ is in series with the system. Since the components are stochastically alike, $I_{h}(k)$ may be treated as the proportion of permutations of $1,\ldots,n$ (orderings in which the components fail) in which the system failure is caused by $k$. $I_{h}(j)$ is then computed by interchanging $j$ and $i$ in each permutation. This way of counting shows that the number of permutations in which the failure is caused by $i$ is not smaller than the number of permutations in which the failure is caused by $j$.
Lemmas 1 and 2 yield the following theorem.
###### Theorem 2.5
Assume that the $i$-th component is in series or in parallel with the system. If $F_{j}(t)\leq F_{i}(t)$ for all $t\geq 0$ and $j\neq i$, then $I_{h}(j)\leq I_{h}(i)$.
### 2.6 Structural importance
At times we face the situation when information about component reliabilities is missing. In that case we consider the impact that the positions of the various components have on the system, and so we define the structural importance. The measure introduced by Birnbaum [10] uses the structure-function identities (1), (2), (3).
###### Definition 5
1. a)
A component $c_{j}$ is indispensable for the structure $\phi$ at the vector of states $x$ when
$\phi(1_{j},x)-\phi(0_{j},x)=\delta_{j}(x)=1$ (31)
2. b)
A component $c_{j}$ is indispensable at the vector of states $x$ for the functioning of the structure $\phi$ when
$(1-x_{j})\cdot\delta_{j}(x)=1$ (32)
3. c)
A component $c_{j}$ is indispensable at the vector of states $x$ for the failure of the structure $\phi$ when
$x_{j}\cdot\delta_{j}(x)=1$ (33)
To clarify, if $c_{j}$ is indispensable at the state vector $x$, then it is indispensable for functioning when $x_{j}=0$ and for failure when $x_{j}=1$.
Hence, the structural importance of a component $c_{j}$ for the functioning of $\phi$ is defined as
$I_{\phi}(j,1)=2^{-n}\sum_{(x)}(1-x_{j})\cdot\delta_{j}(x),$ (34)
where the sum ranges over all $2^{n}$ vertices of the unit cube. The structural importance of a component $c_{j}$ for the failure of the structure $\phi$ is defined as
$I_{\phi}(j,0)=2^{-n}\sum_{(x)}x_{j}\cdot\delta_{j}(x)$ (35)
and the structural importance of a component $c_{j}$ for the structure $\phi$ is defined as
$I_{\phi}(j)=I_{\phi}(j,1)+I_{\phi}(j,0)=2^{-n}\sum_{(x)}\delta_{j}(x).$ (36)
To conclude, if a component $c_{j}$ is indispensable at the state vector $\overrightarrow{x}$ for the functioning of the structure $\phi$, then $c_{j}$ is indispensable at $(1_{j},\overrightarrow{x}_{-j})$ for failure; meanwhile, if $c_{j}$ is indispensable at the state vector $x$ for the failure of $\phi$, then $c_{j}$ is indispensable at $(0_{j},\overrightarrow{x}_{-j})$ for functioning. Because of this one-to-one correspondence between the vertices at which $c_{j}$ is responsible for failure and those at which it is responsible for functioning, the number of vertices of each type is the same. Hence, from equalities (34), (35) and (36) it follows that
$I_{\phi}(j,1)=I_{\phi}(j,0)=\frac{1}{2}I_{\phi}(j).$ (37)
From (37) we deduce that, unlike for the reliability importance, there is no purpose in dividing the structural importance into one for failure and one for functioning.
When we consider continuous life distributions of the components, we use the structural importance measure introduced by Barlow and Proschan. If, in the importance of component $c_{i}$ proposed in Fact 2.1, all components have the same life distribution $F_{1}=F_{2}=\ldots=F_{n}$, then in the structure $\phi$ this importance becomes the structural importance of component $c_{i}$, denoted by $I_{\phi}(i)$. Substituting $p$ for $\bar{F}_{i}(t)$, $i=1,\ldots,n$, we obtain
$I_{\phi}(i)=\int\limits_{0}^{1}[h(1_{i},p)-h(0_{i},p)]dp,$ (38)
where the vector $(1_{i},p)$ has $1$ in the $i$-th position and $p$ everywhere else.
To compute the structural importance presented by Barlow and Proschan (1975),
first we need to introduce some definitions.
###### Definition 6
1. a)
A set of elements whose functioning ensures the proper operation of the system is called a path set. If the path set is irreducible, then it is called a minimal path set.
2. b)
Likewise, a set of elements whose joint failure causes the failure of the system is called a cut set. If the cut set is irreducible, then it is called a minimal cut set.
3. c)
A vector $(1_{i},x)$ which fulfills the conditions $\phi(0_{i},x)=0$ and $\phi(1_{i},x)=1$ is called a critical path vector for the $i$-th component. The corresponding critical path set of the $i$-th component is
$\{i\}\cup\{j\,|\,x_{j}=1,\,j\neq i\}.$
It means that the functioning of the system or its failure is determined by the component $c_{i}$. A critical path vector for a component $c_{i}$ of size $r$ satisfies
$1+\sum_{j\neq i}x_{j}=r,\qquad r=1,2,\ldots,n.$
Hence we may introduce the number of critical path vectors of size $r$ for the $i$-th component,
$n_{r}(i)=\sum_{\sum_{j\neq i}x_{j}=r-1}[\phi(1_{i},\overrightarrow{x}_{-i})-\phi(0_{i},\overrightarrow{x}_{-i})].$
The structural importance $I_{\phi}(i)$ may then be expressed through the numbers of critical path vectors $n_{r}(i)$.
###### Theorem 2.6
$I_{\phi}(i)=\sum_{r=1}^{n}n_{r}(i)\cdot\frac{(n-r)!(r-1)!}{n!}$ (39)
###### Proof
Starting from (38) and grouping the state vectors by the number of functioning components, we obtain
$\displaystyle I_{\phi}(i)=\int\limits_{0}^{1}[h(1_{i},p)-h(0_{i},p)]dp=$ (40)
$\displaystyle=\int\limits_{0}^{1}\Big{[}\sum_{x}[\phi(1_{i},x)-\phi(0_{i},x)]\cdot
p^{\sum_{j\neq i}x_{j}}\cdot(1-p)^{n-1-\sum_{j\neq i}x_{j}}\Big{]}dp=$ (41)
$\displaystyle=\int\limits_{0}^{1}\sum_{r=1}^{n}n_{r}(i)\cdot
p^{r-1}(1-p)^{n-r}dp=\sum_{r=1}^{n}n_{r}(i)\cdot\frac{(n-r)!(r-1)!}{n!}.$ (42)
Equation (39) can be rewritten as
$I_{\phi}(i)=\frac{1}{n}\sum_{r=1}^{n}n_{r}(i)\tbinom{n-1}{r-1}^{-1},$ (43)
where the numerator $n_{r}(i)$ counts the critical path vectors of size $r$ and the binomial coefficient counts the ways in which exactly $r-1$ of the $n-1$ components other than $i$ can function. Thus the structural importance of the $i$-th component is the average probability that the state vector is a critical path vector for the $i$-th component.
Expression (42) can also be rewritten as
$I_{\phi}(i)=\int\limits_{0}^{1}\Big{[}\sum_{r=1}^{n}n_{r}(i)\cdot\tbinom{n-1}{r-1}^{-1}\tbinom{n-1}{r-1}\cdot(1-p)^{n-r}\cdot p^{r-1}\Big{]}dp,$ (44)
where $\tbinom{n-1}{r-1}\cdot(1-p)^{n-r}\cdot p^{r-1}$ is the probability that exactly $r-1$ among the $n-1$ elements other than the $i$-th one function, and $n_{r}(i)\cdot\tbinom{n-1}{r-1}^{-1}$ is the conditional probability that component $i$ together with the $r-1$ functioning components forms a critical path set for $i$. Hence, equation (44) stands for the probability that $i$ causes the system failure, and integrating over $p$ amounts to assuming that the common component reliability has the uniform distribution $p\sim\mathcal{U}(0,1)$.
If we compare the Barlow-Proschan structural importance
$I_{\phi}(i)=\int\limits_{0}^{1}[h(1,\vec{p}_{-i})-h(0,\vec{p}_{-i})]dp,\qquad p_{1}=\ldots=p_{n}=p,$ (45)
with the Birnbaum structural importance
$B(i;\overrightarrow{.5})=I_{\phi}(i;\overrightarrow{.5})=\frac{\partial h(p)}{\partial p_{i}}\Bigg{\rvert}_{p_{1}=\ldots=p_{n}=\frac{1}{2}}=h\big{(}1,\overrightarrow{.5}_{-i}\big{)}-h\big{(}0,\overrightarrow{.5}_{-i}\big{)},$ (46)
we see that Birnbaum sets $p=\tfrac{1}{2}$ to compute the difference $h(1,\overrightarrow{\textbf{p}}_{-i})-h(0,\overrightarrow{\textbf{p}}_{-i})$, while Barlow and Proschan average this difference over $p\in[0,1]$.
Moreover, from equation (46) we can deduce that
$B(i;\overrightarrow{.5})=\sum_{\overrightarrow{x}_{-i}}\frac{1}{2^{n-1}}\cdot[\phi(1_{i},\overrightarrow{x}_{-i})-\phi(0_{i},\overrightarrow{x}_{-i})].$
Hence, the Birnbaum structural importance can be given as
$I_{\phi}(i;\overrightarrow{.5})=\sum_{r=1}^{n}\frac{n_{r}(i)}{2^{n-1}}.$ (47)
If we compare expressions (39) and (47), we see that in $I_{\phi}(i)$ the number of critical path vectors $n_{r}(i)$ has the weight $(n-r)!(r-1)!/n!$, while Birnbaum uses the same weight $1/2^{n-1}$ everywhere. Since $(n-r)!(r-1)!/n!$ is largest for very small and very large $r$, only very short or very long critical path vectors receive the greatest weight [5].
###### Example 3 (Structural importance - series / parallel structure)
Let us consider a structure of $n$ elements, of which $k$ are in series and $n-k$ in parallel. This example concerns a structure of five components $c_{1},c_{2},c_{3},c_{4},c_{5}$, with $n=5$ and $k=2$, presented in Figure 3.
Figure 3: Series and parallel structure ($c_{1},c_{2}$ in series with the parallel block $c_{3},c_{4},c_{5}$)
The reliability of each component $c_{i}$ is unknown; however, we may express the system reliability for given $p_{i}$ as
$h(\overrightarrow{\textbf{p}})=p_{1}\cdot p_{2}\cdot\big{[}1-(1-p_{3})\cdot(1-p_{4})\cdot(1-p_{5})\big{]}.$
The structural importance assumes identical reliabilities $p_{1}=p_{2}=\ldots=p_{n}=p$. Since $I_{B}(j;\overrightarrow{\textbf{p}})=\textbf{D}_{p_{j}}h(\overrightarrow{\textbf{p}})$, we may derive formulas for the structural importance of each component:
$I_{B}(j)=\prod^{k}_{\substack{i=1\\ i\neq j}}p_{i}\Big{[}1-\prod^{n}_{m=k+1}(1-p_{m})\Big{]}\quad\text{for }j=1,\ldots,k,$ (48)
$I_{B}(j)=\prod^{k}_{i=1}p_{i}\prod^{n}_{\substack{m=k+1\\ m\neq j}}(1-p_{m})\quad\text{for }j=k+1,\ldots,n.$ (49)
In the Birnbaum case each reliability equals $p_{i}=\frac{1}{2}$, hence from (48) and (49) we obtain
$I_{B}(j;\overrightarrow{.5})=2^{-(k-1)}-2^{-(n-1)}\quad\text{for }j=1,\ldots,k,$
$I_{B}(j;\overrightarrow{.5})=2^{-(n-1)}\quad\text{for }j=k+1,\ldots,n.$
Therefore, for the structure in Figure 3 with $k=2$ and $n=5$, we have
$\displaystyle
I_{B}(1;\overrightarrow{.5})=I_{B}(2;\overrightarrow{.5})=2^{-1}-2^{-4}=0.4375$
$\displaystyle
I_{B}(3;\overrightarrow{.5})=I_{B}(4;\overrightarrow{.5})=I_{B}(5;\overrightarrow{.5})=2^{-4}=0.0625$
We can see that the components in series have much greater importance than the
components in parallel.
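The counts $n_{r}(i)$ and both structural importance measures can be obtained by enumerating all $2^{n}$ state vectors. The Python sketch below (our own verification) does this for the structure of Example 3, reproducing the Birnbaum values $0.4375$ and $0.0625$ via (47) and computing the Barlow-Proschan values via (39).

```python
from itertools import product
from math import factorial

n = 5
# structure of Example 3: c1, c2 in series with the parallel block (c3, c4, c5)
phi = lambda x: x[0] * x[1] * (1 - (1 - x[2]) * (1 - x[3]) * (1 - x[4]))

def n_r(i):
    """Number of critical path vectors of size r for component i, r = 1..n."""
    counts = [0] * (n + 1)
    for x in product((0, 1), repeat=n):
        if x[i] == 1:
            x0 = list(x); x0[i] = 0
            if phi(x) == 1 and phi(tuple(x0)) == 0:  # component i is critical at x
                counts[sum(x)] += 1                  # size r = 1 + sum of the others
    return counts

for i in range(n):
    c = n_r(i)
    bp = sum(c[r] * factorial(n - r) * factorial(r - 1) / factorial(n)
             for r in range(1, n + 1))               # Barlow-Proschan weights, (39)
    birnbaum = sum(c[1:]) / 2 ** (n - 1)             # Birnbaum weight 1/2^{n-1}, (47)
    print("c%d" % (i + 1), round(bp, 4), round(birnbaum, 4))
# the last column prints 0.4375 for c1, c2 and 0.0625 for c3, c4, c5
```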
###### Example 4 (Minimal path and cut sets)
Figure 4: Graph $G_{A,B}$ (two nodes $A$ and $B$ connected by a network whose edges are labeled $1,\ldots,9$).
Let us consider the structure of order $9$ whose elements are the edges of the graph $G_{A,B}$. Every set of edges connecting the vertices $A$ and $B$ is a path, and every set of edges whose removal disconnects the vertices $A$ and $B$ is a cut [22]. The directed graph $G_{A,B}$ represents a simple example of a network connecting two nodes ($A$ and $B$) that is often used in examining the reliability of computer networks. In order to define the structure function $\phi(x)$, we must determine the minimal path and cut sets.
Table 1: Minimal path sets

Path | Elements
---|---
1 | 1 3 8
2 | 1 4 7 8
3 | 2 5 7 8
4 | 2 6 9
The minimal path and cut sets, presented in Tables 1 and 2 respectively, allow us to determine the structure function. The graph $G_{A,B}$ is described by four minimal path series structures
$\rho_{1}(\overrightarrow{x})=\prod_{i\in\{1,3,8\}}x_{i},\qquad\rho_{2}(\overrightarrow{x})=\prod_{i\in\{1,4,7,8\}}x_{i},\qquad\rho_{3}(\overrightarrow{x})=\prod_{i\in\{2,5,7,8\}}x_{i},\qquad\rho_{4}(\overrightarrow{x})=\prod_{i\in\{2,6,9\}}x_{i}$
Table 2: Minimal cut sets

Cut | Elements
---|---
1 | 1 2
2 | 1 5 6
3 | 1 5 9
4 | 1 6 7
5 | 1 6 8
6 | 1 7 9
7 | 2 3 4
8 | 2 3 7
9 | 2 8
10 | 3 4 5 6
11 | 3 4 5 9
12 | 3 6 7
13 | 3 7 9
14 | 6 8
15 | 8 9
and by fifteen minimal cut parallel structures
$\kappa_{1}(\overrightarrow{x})=x_{1}\amalg x_{2},\qquad\kappa_{2}(\overrightarrow{x})=\coprod_{i\in\{1,5,6\}}x_{i},\qquad\kappa_{3}(\overrightarrow{x})=\coprod_{i\in\{1,5,9\}}x_{i},\qquad\kappa_{4}(\overrightarrow{x})=\coprod_{i\in\{1,6,7\}}x_{i},$
$\kappa_{5}(\overrightarrow{x})=\coprod_{i\in\{1,6,8\}}x_{i},\qquad\kappa_{6}(\overrightarrow{x})=\coprod_{i\in\{1,7,9\}}x_{i},\qquad\kappa_{7}(\overrightarrow{x})=\coprod_{i\in\{2,3,4\}}x_{i},\qquad\kappa_{8}(\overrightarrow{x})=\coprod_{i\in\{2,3,7\}}x_{i},$
$\kappa_{9}(\overrightarrow{x})=x_{2}\amalg x_{8},\qquad\kappa_{10}(\overrightarrow{x})=\coprod_{i\in\{3,4,5,6\}}x_{i},\qquad\kappa_{11}(\overrightarrow{x})=\coprod_{i\in\{3,4,5,9\}}x_{i},\qquad\kappa_{12}(\overrightarrow{x})=\coprod_{i\in\{3,6,7\}}x_{i},$
$\kappa_{13}(\overrightarrow{x})=x_{3}\amalg x_{7}\amalg x_{9},\qquad\kappa_{14}(\overrightarrow{x})=x_{6}\amalg x_{8},\qquad\kappa_{15}(\overrightarrow{x})=x_{8}\amalg x_{9}.$
The structure function, which equals $1$ if at least one of the minimal paths functions, can be presented as a parallel arrangement of the minimal path series structures
$\phi(\overrightarrow{x})=\coprod_{i=1}^{r}\rho_{i}(\overrightarrow{x})=1-\prod_{i=1}^{r}\big{[}1-\rho_{i}(\overrightarrow{x})\big{]},$
where $r$ is the number of minimal paths of the graph $G_{A,B}$. Hence, the structure function of the graph $G_{A,B}$ can be written in the form
$\displaystyle\phi(\overrightarrow{x})$
$\displaystyle=\coprod_{i=1}^{4}\rho_{i}(\overrightarrow{x})=1-\prod_{j=1}^{4}(1-\rho_{j}(\overrightarrow{x}))$
$\displaystyle=1-(1-\prod_{i\in\\{1,3,8\\}}x_{i})(1-\prod_{i\in\\{1,4,7,8\\}}x_{i})(1-\prod_{i\in\\{2,5,7,8\\}}x_{i})(1-\prod_{i\in\\{2,6,9\\}}x_{i})$
and, after expanding the product, the structure function $\phi(\overrightarrow{x})$ equals
$\begin{split}\phi(\overrightarrow{x})&=x_{1}^{2}x_{2}x_{3}x_{4}x_{5}x_{7}^{2}x_{8}^{3}-x_{1}^{2}x_{2}^{2}x_{3}x_{4}x_{5}x_{6}x_{7}^{2}x_{8}^{3}x_{9}-x_{1}x_{2}x_{4}x_{5}x_{7}^{2}x_{8}^{2}\\ &\quad-x_{1}^{2}x_{3}x_{4}x_{7}x_{8}^{2}-x_{1}x_{2}x_{3}x_{5}x_{7}x_{8}^{2}+x_{1}x_{2}^{2}x_{4}x_{5}x_{6}x_{7}^{2}x_{8}^{2}x_{9}\\ &\quad+x_{1}^{2}x_{2}x_{3}x_{4}x_{6}x_{7}x_{8}^{2}x_{9}+x_{1}x_{2}^{2}x_{3}x_{5}x_{6}x_{7}x_{8}^{2}x_{9}+x_{1}x_{3}x_{8}+x_{1}x_{4}x_{7}x_{8}\\ &\quad+x_{2}x_{5}x_{7}x_{8}-x_{1}x_{2}x_{3}x_{6}x_{8}x_{9}-x_{1}x_{2}x_{4}x_{6}x_{7}x_{8}x_{9}+x_{2}x_{6}x_{9}-x_{2}^{2}x_{5}x_{6}x_{7}x_{8}x_{9},\end{split}$
where, since $x_{i}\in\{0,1\}$, all powers may be reduced ($x_{i}^{k}=x_{i}$).
Moreover, the structure function can also be presented as a series arrangement of the minimal cut parallel structures
$\phi(\overrightarrow{x})=\prod_{i=1}^{c}\kappa_{i}(\overrightarrow{x}),$
where $c$ is the number of minimal cuts of the graph $G_{A,B}$: if at least one of the minimal cuts fails completely, the structure fails as well. The structure function of the graph $G_{A,B}$ can thus be written in the short form
$\phi(\overrightarrow{x})=\prod_{i=1}^{15}\kappa_{i}(\overrightarrow{x}).$
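The agreement of the two representations can be verified by brute force. The following Python sketch (our own consistency check) builds $\phi$ once from the minimal path sets of Table 1 and once from the cut sets of Table 2, and confirms that both functions coincide on all $2^{9}$ state vectors.

```python
from itertools import product

# minimal path and cut sets of the graph G_{A,B} (Tables 1 and 2, elements 1..9)
paths = [{1, 3, 8}, {1, 4, 7, 8}, {2, 5, 7, 8}, {2, 6, 9}]
cuts = [{1, 2}, {1, 5, 6}, {1, 5, 9}, {1, 6, 7}, {1, 6, 8}, {1, 7, 9},
        {2, 3, 4}, {2, 3, 7}, {2, 8}, {3, 4, 5, 6}, {3, 4, 5, 9},
        {3, 6, 7}, {3, 7, 9}, {6, 8}, {8, 9}]

def phi_paths(x):   # parallel arrangement of the minimal path series structures
    return int(any(all(x[i - 1] for i in P) for P in paths))

def phi_cuts(x):    # series arrangement of the minimal cut parallel structures
    return int(all(any(x[i - 1] for i in C) for C in cuts))

assert all(phi_paths(x) == phi_cuts(x) for x in product((0, 1), repeat=9))
print("path and cut representations coincide on all 2^9 state vectors")
```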
The structural importance measures presented by Birnbaum in 1968 and by Barlow and Proschan in 1975 had been independently developed in the field of game theory (v. Appendix 0.C) by Banzhaf (1965) and by Shapley and Shubik in 1954 (v. Shapley (1953)), respectively (v. Ramamurthy (1990)).
### 2.7 Importance measure based on multilateral stopping problem.
The basis for the description of binary systems is the structure function described in appendix xxx. We consider semi-coherent structures, which means that the structure function has the same properties as the function aggregating players' decisions in the multi-person decision problems considered in the work of Szajowski and Yasuda (1997). Multi-player decision problems assume that each game participant has a preference function based on a scalar function defined on the states of a certain process. If the elements of the structure are assigned to conservators (hypothetical players) who take care of the condition of these elements so that they fulfill their functions properly, this function can estimate the profits and losses resulting from the state of the element. In principle, this condition should be either good, allowing the element to function, or bad, excluding the element from functioning. In reality, however, it is the diagnostician who decides when to perform maintenance or replacement (and bear its cost), and only sometimes does a failure force a repair. An element in a system usually lowers its efficiency gradually (e.g., mating components in a driveline may need lubrication to reduce friction, which results in increased energy expenditure and lower system efficiency), but maintenance downtime is wasted and cannot always be managed. The operating conditions of the system make it possible to determine the correct payment (cost) function for each maintenance technician. Each of the $n$ conservators, observing the states on which their payment depends, decides whether to order a maintenance break or to continue uninterrupted operation. Safety considerations and the structure of the system determine when such a decision of a single observer is effective: maintenance can start only when the system is stopped, and the stoppage requires the consensus of the conservators of some critical path.
To analyze the effects of these actions, we will use the following model of an antagonistic game with elements of cooperation defined by the structure function.
Following the results of Szajowski and Yasuda [37], the multilateral stopping of a Markov chain problem can be described in terms of the notation used in non-cooperative game theory (see [24], [15], [23], [28]). To this end, the process and the utilities of its states should be specified.
###### Definition 7 (ISS-Individual Stopping Strategies)
Let $(\overrightarrow{X}_{n},{\mathcal{F}}_{n},{\textbf{P}}_{x})$,
$n=0,1,2,\ldots,N$, be a homogeneous Markov chain with the state space
$(\mathbb{E},{\mathcal{B}})$.
* •
The players are able to observe the Markov chain sequentially. The horizon can
be finite or infinite: $N\in\mathbb{N}\cup\\{\infty\\}$.
* •
Each player has their utility function $f_{i}:\mathbb{E}\rightarrow\Re$,
$i=1,2,\ldots,p$, such that
${\textbf{E}}_{x}|f_{i}(\overrightarrow{X}_{1})|<\infty$ and the cost function
$c_{i}:\mathbb{E}\rightarrow\Re$, $i=1,2,\ldots,p$.
* •
If the process is not stopped at moment $n$, then each player, based on
${\mathcal{F}}_{n},$ can declare independently their willingness to stop the
observation of the process.
###### Definition 8 (see [42])
An individual stopping strategy of the player $i$ (ISS) is the sequence of
random variables $\\{\sigma_{n}^{i}\\}_{n=1}^{N}$, where
$\sigma_{n}^{i}:\Omega\rightarrow\\{0,1\\}$, such that $\sigma_{n}^{i}$ is
${\mathcal{F}}_{n}$-measurable.
The interpretation of the strategy is as follows: if $\sigma_{n}^{i}=1$, then player $i$ declares that they would like to stop the process and accept the realization of $X_{n}$.
###### Definition 9 (SS–Stopping Strategy (the aggregate function).)
Denote
$\sigma^{i}=(\sigma_{1}^{i},\sigma_{2}^{i},\ldots,\sigma_{N}^{i})$
and let ${\mathscr{S}}^{i}$ be the set of ISSs of player $i$,
$i=1,2,\ldots,p$. Define
${\mathscr{S}}={\mathscr{S}}^{1}\times{\mathscr{S}}^{2}\times\ldots\times{\mathscr{S}}^{p}$.
The element
$\sigma=(\sigma^{1},\sigma^{2},\ldots,\sigma^{p})^{T}\in{\mathscr{S}}$ will be
called the stopping strategy (SS).
The stopping strategy $\sigma\in{\mathscr{S}}$ is a random matrix: its rows are the ISSs and its columns are the players' decisions at successive moments. The actual stopping of the observation process, and hence the players' realization of their payoffs, is determined by the stopping strategy through a $p$-variate logical function.
Let $\delta:\\{0,1\\}^{p}\rightarrow\\{0,1\\}$ be the aggregation function. In
this stopping game model the stopping strategy is the list of declarations of
the individual players. The aggregate function $\delta$ converts the
declarations to an effective stopping time.
###### Definition 10 (An aggregated SS)
A stopping time $\tau_{\delta}(\sigma)$ generated by the SS
$\sigma\in{\mathscr{S}}$ and the aggregate function $\delta$ is defined by
$\tau_{\delta}(\sigma)=\inf\\{1\leq n\leq
N:\delta(\sigma_{n}^{1},\sigma_{n}^{2},\ldots,\sigma_{n}^{p})=1\\}$
$(\inf(\emptyset)=\infty)$. Since $\delta$ is fixed during the analysis we
skip index $\delta$ and write $\tau(\sigma)=\tau_{\delta}(\sigma)$.
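To make the mechanism concrete, the following is a minimal sketch of how an aggregated stopping time can be computed from a matrix of individual declarations; the function names and the example aggregation rule are illustrative, not part of the model in [37].

```python
import numpy as np

def aggregated_stopping_time(sigma, delta):
    """Return tau_delta(sigma): the first moment n (1-based) at which the
    aggregation function delta applied to the players' declarations equals 1;
    returns infinity when no such moment exists (inf of the empty set)."""
    p, N = sigma.shape
    for n in range(N):
        if delta(sigma[:, n]) == 1:
            return n + 1
    return float("inf")

# hypothetical rule: stop only when both conservators on a critical path
# (players 1 and 3, i.e., rows 0 and 2) declare a maintenance break
delta = lambda col: int(col[0] == 1 and col[2] == 1)
sigma = np.array([[0, 1, 1],    # ISS of player 1
                  [1, 1, 0],    # ISS of player 2
                  [0, 0, 1]])   # ISS of player 3
print(aggregated_stopping_time(sigma, delta))  # 3
```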
###### Definition 11 (Properties of the aggregated stopping time and payoffs)
* •
$\\{\omega\in\Omega:\tau_{\delta}(\sigma)=n\\}=\bigcap\nolimits_{k=1}^{n-1}\\{\omega\in\Omega:\delta(\sigma_{k}^{1},\sigma_{k}^{2},\ldots,\sigma_{k}^{p})=0\\}\cap\\{\omega\in\Omega:\delta(\sigma_{n}^{1},\sigma_{n}^{2},\ldots,\sigma_{n}^{p})=1\\}\in{\mathcal{F}}_{n}$;
* •
$\tau_{\delta}(\sigma)$ is a stopping time with respect to
$\\{{\mathcal{F}}_{n}\\}_{n=1}^{N}$.
* •
For any stopping time $\tau_{\delta}(\sigma)$ and
$\mathfrak{i}\in\\{1,2,\ldots,p\\}$ the payoff of player $\mathfrak{i}$ is
defined as follows (cf. [35]):
$f_{i}(X_{\tau_{\delta}(\sigma)})=\sum_{n=1}^{N}f_{i}(X_{n})\mathbb{I}_{\{\tau_{\delta}(\sigma)=n\}}+\limsup_{n\rightarrow\infty}f_{i}(X_{n})\mathbb{I}_{\{\tau_{\delta}(\sigma)=\infty\}}.$
###### Definition 12 (An equilibrium strategy (cf. [37]))
Let the aggregate rule $\delta$ be fixed. The strategy ${}^{*}\!\sigma=({}^{*}\!\sigma^{1},{}^{*}\!\sigma^{2},\ldots,{}^{*}\!\sigma^{p})^{T}\in{\mathscr{S}}$ is an equilibrium strategy with respect to $\delta$ if for each $i\in\{1,2,\ldots,p\}$ and any $\sigma^{i}\in{\mathscr{S}}^{i}$ we have
$v_{i}(\overrightarrow{x})={\textbf{E}}_{x}[f_{i}(\overrightarrow{X}_{\tau_{\delta}({}^{*}\!\sigma)})+\sum_{k=1}^{\tau_{\delta}({}^{*}\!\sigma)}c_{i}(\overrightarrow{X}_{k-1})]\leq{\textbf{E}}_{x}[f_{i}(\overrightarrow{X}_{\tau_{\delta}({}^{*}\!\sigma(i))})+\sum_{k=1}^{\tau_{\delta}({}^{*}\!\sigma(i))}c_{i}(\overrightarrow{X}_{k-1})],$
where ${}^{*}\!\sigma(i)$ denotes the profile ${}^{*}\!\sigma$ with its $i$-th coordinate replaced by $\sigma^{i}$.
###### Definition 13 (Voting Game Importance)
Let the aggregate rule $\delta=h$ be fixed and let the strategy ${}^{*}\!\sigma=({}^{*}\!\sigma^{1},{}^{*}\!\sigma^{2},\ldots,{}^{*}\!\sigma^{p})^{T}\in{\mathscr{S}}$ be an equilibrium strategy with respect to $\delta$. The voting game importance of the elements is given by the components of
$\textbf{VGI}=\frac{\textbf{E}_{\overrightarrow{Q}^{0}}\overrightarrow{\textbf{v}}(\overrightarrow{X})}{\textbf{E}\langle\overrightarrow{\textbf{v}}(\overrightarrow{X}),\overrightarrow{Q}^{0}\rangle}.$
The measure of significance of a structure element introduced in this way takes into account the element's role in the structure through the aggregation function $h$; it is normalized in the sense that the measures of all elements sum up to $1$; and it accounts for the external loads on the elements and the costs of maintenance and repairs. Its use requires in-depth knowledge of the system and its components, which is a significant obstacle to introducing it into diagnostic practice. The most difficult part is determining the payoff functions (cost, risk, profit). A simplified version of the method may include in the payoff functions only the operating risk of components in a condition requiring maintenance or repair, which is usually associated with reduced safety.
## 3 Concluding remarks
### 3.1 Summary
Ensuring the reliability and secure performance of simple as well as complex systems has indisputable significance in system analysis. Accordingly, the aim of this research was to answer the question of how to recognize the most influential elements of a system so as to improve its reliability. This paper has demonstrated several approaches to the concept of an importance measure, depending on the parameters and assumptions characterizing the system. A new approach is proposed in Section 2.7.
In this paper we have considered binary systems; their extension to multistate systems is the subject of another paper. In addition, coherence of the systems was assumed, and the limitations and assumptions of coherent binary systems have been presented. For two-state systems, various importance measures have been introduced, along with the concept of module importance, which can be applied to any more complex system. We have examined the case in which only the structure of the system is known (structural importance measures), the case in which the measure depends on both the reliability of the components and the structure of the system (reliability importance measures), and the case in which the measure depends on the lifetime distributions of the components and the system structure (lifetime importance measures). The measures of importance have been based on the studies of Barlow and Proschan and of Birnbaum. The problem of choosing a proper importance measure was illustrated, e.g., by the inconsistent behavior of some measures across different system structures. In addition, the relationship between importance measures in reliability theory and power indices in game theory has been discussed.
This analysis showed that the importance measures first introduced by Birnbaum in 1968 became the foundation for further searches for more convenient and versatile definitions of the importance of components in system reliability. Since then, research has expanded in different directions, but to this day the importance evaluation of highly complex structures such as networks may pose many computational problems. Moreover, restrictions regarding coherence may exclude the examination of certain systems. Therefore, this subject remains under constant exploration.
### 3.2 Exploratory importance measure research.
There are many quantities estimated in probabilistic risk assessments (PRAs)
to index the level of plant safety. If the PRA is to be used as a risk
management tool to assist in the safe operation of the plant, it is essential
that those elements of the plant design and its mode of operation that have
the greatest impact on plant safety be identified. These elements may be
identified by performing importance calculations. Certain decisions must be made before an importance calculation is carried out. The first is the definition of the events for which importance is to be evaluated, that is, the level of resolution at which the analysis is to be performed. The second decision, and the major subject of the study by Schmidt et al. (1985), is the choice of importance measure. Many measures of importance have been proposed; the discussion there is restricted to three: the risk achievement (or degradation) worth, the risk reduction worth, and criticality importance. In that work these measures of importance are defined, their interrelationships are discussed, and a generalized importance measure is introduced. The use of the three measures is compared and their advantages and disadvantages are discussed.
### 3.3 Important directions for further investigation.
When interpreting component importance, Wu and Coolen (2013) concluded that the importance of a component should depend on the following factors:
1. 1.
The location of the component in the system.
2. 2.
The reliability of the component.
3. 3.
The uncertainty in the estimate of the component reliability and related cost.
4. 4.
The costs of maintaining this component in a given time interval $(0,t)$.
(v. also Rausand et al. (2021)). Factor (3) depends strongly on the statistical methods implemented in exploratory data analysis. Given the sources of the data and the role of the system structure in its reliability, the importance measure should take these elements into account. We do not observe the hidden state of the system directly; the information taken from the sensors should be interpreted and evaluated to infer the hidden state of the elements and of the system. The details of the required construction, based on the results of Szajowski (2020), are the subject of a paper under editorial process.
Author Contributions: Both authors equally contributed to the
conceptualization, methodology, formal analysis, investigation and
writing–original draft preparation. Małgorzata Średnicka is responsible for
the description of the importance measure concepts, examples, visualisation
(v. [32]) and Krzysztof J. Szajowski is responsible for the project
conceptualization and its administration.
Funding: This research received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
Appendices
## Appendix 0.A Structure functions.
To study the relationship between the reliability of the components of a
structure and the reliability of the structure itself, one has to know how the
performance or failure of various components affect the performance or failure
of the structure. We do this with the help of Boolean functions. In the
reliability literature, Boolean functions are called structure functions.
Structure functions serve as a conceptual model on which the theory of
reliability is largely based. The state of the system is assumed to depend
only on the states of the components. We shall distinguish between only two
states – a functioning state and a failed state. This dichotomy applies to the
structure as well as to each component. The assumption that the state of the
system is completely determined by the states of its components implies the
existence of a Boolean function $\varphi:B^{n}\rightarrow B$.
A Boolean function of $n$ variables is a function on $B^{n}$ taking values in $B=\{0,1\}$. A system or structure is assumed to consist of the elements of $\textbf{N}=\{1,2,\ldots,n\}$, the set of $n$ components. Let us consider the state of the system at a fixed moment of time.
###### Definition 14
The structure function of a system consisting of $n$ components is a Boolean
function of $n$ variables.
Let $\varphi$ be a structure on N and $i\in\textbf{N}$. The component $i$ is
_irrelevant to the structure_ $\varphi$ if
$\varphi(1,\overrightarrow{x}_{-i})=\varphi(0,\overrightarrow{x}_{-i})$ for
all $\overrightarrow{x}\in B^{n}$ and relevant otherwise. The number of
relevant components is called the order of the structure $\varphi$. The
structure with no relevant components is a _degenerate structure_ , i.e.,
$\varphi(\overrightarrow{x})=1$ or $\varphi(\overrightarrow{x})=0$ for all
$\overrightarrow{x}\in B^{n}$.
Let $\varphi_{i}$, $i=1,2$, be two structures on $\textbf{N}=\{1,2,\ldots,n\}$. The linear composition of these two structures is the structure
$h(\overrightarrow{x},x_{n+1})=x_{n+1}\varphi_{1}(\overrightarrow{x})+(1-x_{n+1})\varphi_{2}(\overrightarrow{x})$
on $\textbf{B}^{n+1}$.
###### Corollary 1
Any structure $\varphi$ of order $n$ is a linear composition of two structures
of at most order $n-1$:
$\varphi(\overrightarrow{x})=x_{i}\varphi(1,\overrightarrow{x}_{-i})+(1-x_{i})\varphi(0,\overrightarrow{x}_{-i}),\text{for
every $\overrightarrow{x}\in B^{n}$, $i\in\textbf{N}$.}$ (50)
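As a quick illustration of the decomposition (50), the following is a minimal sketch that verifies it by exhaustive enumeration for a structure function given as a Python callable; the example structure (a series-parallel system) is hypothetical.

```python
from itertools import product

def pivot(phi, i, x):
    # right-hand side of (50): x_i*phi(1_i, x_{-i}) + (1-x_i)*phi(0_i, x_{-i})
    hi = phi(x[:i] + (1,) + x[i + 1:])
    lo = phi(x[:i] + (0,) + x[i + 1:])
    return x[i] * hi + (1 - x[i]) * lo

# example: component 1 in series with the parallel pair {2, 3}
phi = lambda x: x[0] * (1 - (1 - x[1]) * (1 - x[2]))

assert all(phi(x) == pivot(phi, i, x)
           for x in product((0, 1), repeat=3) for i in range(3))
```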
###### Definition 15
Let $\varphi$ be a structure on N, $A\subset\textbf{N}$ and $J=\textbf{N}\setminus A$. The collection $A$ of components forms a path (cut) set of $\varphi$ if $\varphi(\vec{1}^{A},\vec{0}^{J})=1$ ($\varphi(\vec{0}^{A},\vec{1}^{J})=0$).
###### Definition 16
Let $\varphi$ be a structure on N. Its dual $\varphi^{\mathcal{D}}$ is another
structure on N defined by
$\varphi^{\mathcal{D}}(\overrightarrow{x})=1-\varphi(\vec{1}-\overrightarrow{x})$
for every $\overrightarrow{x}\in\textbf{B}^{n}$.
###### Definition 17
Let $\varphi$ be a structure on N. A path (cut) set $S$ of $\varphi$ is called a minimal path (cut) set of $\varphi$ if no proper subset $T\subset S$ is a path (cut) set of the structure $\varphi$.
The family $\alpha(\varphi)$ ($\beta(\varphi)$) denotes the collection of minimal path (cut) sets of the structure $\varphi$.
###### Proposition 1
For every semicoherent structure, $\varphi$ on N
$\varphi(\overrightarrow{x})=1-\prod_{S\in\alpha(\varphi)}(1-\prod_{i\in
S}x_{i})=\prod_{S\in\beta(\varphi)}(1-\prod_{i\in S}(1-x_{i}))\text{ for all
$\overrightarrow{x}\in\textbf{B}^{n}$.}$ (51)
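The representation (51) translates directly into code. The sketch below evaluates a structure from its minimal path sets and checks the result against the minimal cut set form; the 2-out-of-3 example structure is hypothetical.

```python
from itertools import product

def phi_from_paths(alpha, x):
    # phi(x) = 1 - prod_{S in alpha} (1 - prod_{i in S} x_i)
    out = 1
    for S in alpha:
        out *= 1 - int(all(x[i] for i in S))
    return 1 - out

def phi_from_cuts(beta, x):
    # phi(x) = prod_{S in beta} (1 - prod_{i in S} (1 - x_i))
    out = 1
    for S in beta:
        out *= 1 - int(all(1 - x[i] for i in S))
    return out

# 2-out-of-3 structure: minimal path sets and minimal cut sets coincide
alpha = beta = [{0, 1}, {0, 2}, {1, 2}]
assert all(phi_from_paths(alpha, x) == phi_from_cuts(beta, x)
           for x in product((0, 1), repeat=3))
```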
###### Remark 1 (A simple form of $\varphi$)
Expanding either of the two terms on the right-hand side of the expression in Proposition 1 (putting $x_{i}^{r}=x_{i}$ for $r\geq 1$) we get a structure function of the form
$\varphi(\overrightarrow{x})=\sum_{T\subseteq\textbf{N}}b_{T}\prod_{j\in T}x_{j}\text{ for all }\overrightarrow{x}\in\textbf{B}^{n}$ (52)
where the $b_{T}$ are integer constants (with the convention $\prod_{j\in T}x_{j}=1$ for $T=\emptyset$).
For any structure there always exists at least one simple form, and the simple form of a structure is unique.
## Appendix 0.B The simple game
Game theory considers the set $N=\{1,2,\ldots,n\}$ of players and the power set $2^{N}$ of coalitions. A function $\lambda:2^{N}\rightarrow\{0,1\}$ is called a simple game on $N$ in characteristic function form if
1. (1)
$\lambda(\emptyset)=0$;
2. (2)
$\lambda(N)=1$;
3. (3)
$S\subseteq T\subseteq N$ implies $\lambda(S)\leq\lambda(T)$.
A coalition $S\subset N$ is called winning if $\lambda(S)=1$, and it is called blocking if $\lambda(N\setminus S)=0$. Indeed, the collection of winning (or blocking) coalitions in a simple game satisfies the three properties of the basic structure mentioned at the beginning; a minimal sketch of checking these axioms is given below.
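The following sketch checks axioms (1)–(3) and lists the winning coalitions for a hypothetical simple majority game on three players.

```python
from itertools import combinations

def is_simple_game(lam, n):
    """Check that lam: frozenset -> {0,1} satisfies axioms (1)-(3)."""
    players = range(n)
    coalitions = [frozenset(c) for r in range(n + 1)
                  for c in combinations(players, r)]
    boundary = lam[frozenset()] == 0 and lam[frozenset(players)] == 1
    # monotonicity: adding a player never turns a winning coalition losing
    monotone = all(lam[S] <= lam[S | {j}] for S in coalitions for j in players)
    return boundary and monotone

n = 3
coalitions = [frozenset(c) for r in range(n + 1)
              for c in combinations(range(n), r)]
lam = {S: int(len(S) >= 2) for S in coalitions}     # simple majority game
print(is_simple_game(lam, n))                       # True
print([set(S) for S in coalitions if lam[S] == 1])  # winning coalitions
```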
## Appendix 0.C Power indexes
In the field of game theory, Shapley (1953) and Banzhaf (1965) considered the role of the players in a cooperative game in order to provide a notion of division of the gain (v. Ramamurthy (1990)). First, Shapley and Shubik examined $n$-player games, which let them formulate a characteristic value applicable to simple games. Hence, originally the Shapley-Shubik index measured the power of players in voting games. Their measure is a natural consequence of the influence of a given voter on the result.
###### Definition 18
The Banzhaf index of the $i$-th player [component], denoted $\psi_{i}(g)$, is applicable to a semicoherent structure $g$ on $N$ and is defined by
$\psi_{i}(g)=\frac{\eta_{i}(g)}{2^{n-1}},$ (53)
where $i\in N$, $\eta_{i}(r,g)$ denotes the number of critical path vectors of size $r$ for component $i$ in $g$, and $\eta_{i}(g)=\sum_{r=1}^{n}\eta_{i}(r,g)$ is the total number of such vectors.
Definition 18 is identical to the structural importance (47) presented by
Birnbaum.
###### Definition 19
For a semicoherent structure $g$ the Shapley-Shubik index of the $i$-th player [component], denoted $\phi_{i}(g)$, is given by
$\phi_{i}(g)=\sum_{r=1}^{n}\eta_{i}(r,g)\cdot\frac{(n-r)!(r-1)!}{n!},$ (54)
where $i\in N$ and $\eta_{i}(r,g)$ stands for the number of critical path vectors of size $r$ for component $i$ in $g$.
Definition 19 is identical to the structural importance (39) presented by Barlow and Proschan. A minimal computational sketch of both indices follows.
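The sketch enumerates the critical path vectors of a structure function and computes (53) and (54); the 2-out-of-3 example structure is hypothetical.

```python
from itertools import product
from math import factorial

def power_indices(g, n):
    """Banzhaf (53) and Shapley-Shubik (54) indices of each component of a
    structure function g: {0,1}^n -> {0,1}, via the counts eta_i(r, g)."""
    banzhaf, shapley = [], []
    for i in range(n):
        eta = {}  # r -> number of critical path vectors of size r for i
        for x in product((0, 1), repeat=n):
            if x[i] == 1 and g(x) == 1 and g(x[:i] + (0,) + x[i + 1:]) == 0:
                r = sum(x)
                eta[r] = eta.get(r, 0) + 1
        banzhaf.append(sum(eta.values()) / 2 ** (n - 1))
        shapley.append(sum(c * factorial(n - r) * factorial(r - 1) / factorial(n)
                           for r, c in eta.items()))
    return banzhaf, shapley

g = lambda x: int(sum(x) >= 2)       # 2-out-of-3 structure
print(power_indices(g, 3))           # ([0.5, 0.5, 0.5], [1/3, 1/3, 1/3])
```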
###### Fact 0.C.1
Comparing expressions (53) and (54), we see that the index introduced by Shapley and Shubik attaches the weight $(n-r)!(r-1)!/n!$ to $\eta_{i}(r,g)$, whereas the Banzhaf index is independent of $r$ and always attaches the weight $1/2^{n-1}$ to $\eta_{i}(r,g)$. Since $(n-r)!(r-1)!/n!$ is largest when $r$ is close to $1$ or to $n$, only very small or very large critical path vectors receive the greatest weight in the Shapley-Shubik index.
Dubey (1975) derived the Shapley-Shubik index as a logical consequence of certain axioms. Using another set of axioms, Dubey and Shapley (1979) derived the Banzhaf index. Straffin (1976), using a probabilistic model, provided a unified framework for power indices.
## References
* Abouammoh et al. [1994] A. M. Abouammoh, E. El-Neweihi, and J. Sethuraman. The role of a group of modules in the failure of systems. _Probability in the Engineering and Informational Sciences_ , 8(1):89–101, 1994. doi: 10.1017/S0269964800003223.
* Amrutkar and Kamalja [2017] K. P. Amrutkar and K. K. Kamalja. An overview of various importance measures of reliability system. _International Journal of Mathematical, Engineering and Management Sciences_ , 2(3):150–171, 2017.
* Artzner et al. [1999] P. Artzner, F. Delbaen, J.-M. Eber, and D. Heath. Coherent measures of risk. _Math. Finance_ , 9(3):203–228, 1999. ISSN 0960-1627; 1467-9965/e. doi: 10.1111/1467-9965.00068.
* Banzhaf [1965] J. Banzhaf. Weighted voting doesn’t work: a game theoretic approach. _Rutgers Law Rev._ , 19:317–343, 1965.
* Barlow and Proschan [1975] R. E. Barlow and F. Proschan. Importance of system components and fault tree events. _Stochastic Processes and their Applications_ , 3(2):153–173, 1975. ISSN 0304-4149. doi: 10.1016/0304-4149(75)90013-7.
* Barlow and Wu [1978] R. E. Barlow and A. S. Wu. Coherent systems with multi-state components. _Mathematics of Operations Research_ , 3(4):275–281, 1978. ISSN 0364-765X. doi: 10.1287/moor.3.4.275.
* Barlow et al. [1975] R. E. Barlow, J. B. Fussell, and N. D. Singpurwalla, editors. _Reliability and fault tree analysis_. Society for Industrial and Applied Mathematics, Philadelphia, Pa., 1975. Theoretical and applied aspects of system reliability and safety assessment, Conference held at the University of California, Berkeley, Calif., Sept. 3–7, 1974, Dedicated to Professor Z. W. Birnbaum.
* Bergman [1985] B. Bergman. On some new reliability importance measure. In W. Quirk, editor, _Safety of computer control systems 1985 (SAFECOMP’85). Proc. of the Fourth IFAC Workshop (Como, Italy, 1-3 October 1985)_ , volume 4: achieving safe real time computer systems, pages 61–64. Elsevier, 1985.
* Birnbaum [1968] Z. W. Birnbaum. On the importance of components in a system. Eur. Meet. 1968, Sel. Stat. Pap. 2, 83-95 (1968)., 1968.
* Birnbaum [1969] Z. W. Birnbaum. On the importance of different components in a multicomponent system. In P. Krishnaiah, editor, _Multivariate Analysis, II (Proc. Second Internat. Sympos., Dayton, Ohio, 1968)_ , pages 581–592. Academic Press, New York, 1969.
* Birnbaum et al. [1961] Z. W. Birnbaum, J. D. Esary, and S. C. Saunders. Multi-component systems and structures and their reliability. _Technometrics_ , 3(1):55–77, 1961. ISSN 0040-1706.
* Boland et al. [1988] P. J. Boland, E. El-Neweihi, and F. Proschan. Active redundancy allocation in coherent systems. _Probab. Eng. Inf. Sci._ , 2(3):343–353, 1988. ISSN 0269-9648; 1469-8951/e.
* Cao et al. [2019] Y. Cao, S. Liu, and Z. Fang. Importance measures for degrading components based on cooperative game theory. _International Journal of Quality & Reliability Management_, 37(2):189–206, 2019. doi: 10.1108/IJQRM-10-2018-0278.
* Do and Bérenguer [2020] P. Do and C. Bérenguer. Conditional reliability-based importance measures. _Reliability Engineering & System Safety_, 193:106633, 2020. ISSN 0951-8320. doi: 10.1016/j.ress.2019.106633.
* Dresher [1981] M. Dresher. _The mathematics of games of strategy_. Dover Publications, Inc., New York, 1981. ISBN 0-486-64216-X. Theory and applications, Reprint of the 1961 original. MR 671740.
* Dui et al. [2017] H. Dui, S. Si, S. Wu, and R. Yam. An importance measure for multistate systems with external factors. _Reliability Engineering & System Safety_, 167:49 – 57, 2017. ISSN 0951-8320. doi: 10.1016/j.ress.2017.05.016. Special Section: Applications of Probabilistic Graphical Models in Dependability, Diagnosis and Prognosis.
* Dutuit and Rauzy [2015] Y. Dutuit and A. Rauzy. On the extension of importance measures to complex components. _Reliability Engineering & System Safety_, 142:161 – 168, 2015. ISSN 0951-8320. doi: 10.1016/j.ress.2015.04.016.
* El-Neweihi and Sethuraman [1991] E. El-Neweihi and J. Sethuraman. A study of the role of modules in the failure of systems. _Probab. Eng. Inf. Sci._ , 5(2):215–227, 1991. ISSN 0269-9648; 1469-8951/e.
* El-Neweihi et al. [1978] E. El-Neweihi, F. Proschan, and J. Sethuraman. A simple model with applications in structural reliability, extinction of species, inventory depletion and urn sampling. _Adv. Appl. Probab._ , 10:232–254, 1978. ISSN 0001-8678.
* Esary and Proschan [1963] J. D. Esary and F. Proschan. Coherent structures of non-identical components. _Technometrics_ , 5:191–209, 1963. ISSN 0040-1706; 1537-2723/e.
* Fussell and Vesely [1972] J. Fussell and W. Vesely. New methodology for obtaining cut sets for fault trees. _Trans. Amer. Nucl. Soc._ , 15(1):262–263, 1972. 18th annual American Nuclear Society conference, Las Vegas, Nev., 18 Jun 1972.
* Kordecki [2002] W. Kordecki. Oszacowania niezawodności systemów. _Mathematica Applicanda_ , 30(44/03), 2002. doi: 10.14708/ma.v30i44/03.1902.
* Moulin [1982] H. Moulin. _Game theory for the social sciences._. Studies in Game Theory and Mathematical Economics. New York University Press, New York, 1982. ISBN 0-8147-5386-8/hbk; 0-8147-5387-6/pbk. Transl. from the French by the author. Zbl 0626.90095.
* Nash [1951] J. Nash. Non-cooperative games. _Ann. of Math. (2)_ , 54:286–295, 1951. ISSN 0003-486X. doi: 10.2307/1969529. MR 43432.
* Natvig [1985] B. Natvig. New light on measures of importance of system components. _Scand. J. Statist._ , 12(1):43–54, 1985. ISSN 0303-6898. MR 804224.
* Navarro et al. [2019] J. Navarro, A. Arriaza, and A. Suárez-Llorens. Minimal repair of failed components in coherent systems. _Eur. J. Oper. Res._ , 279(3):951–964, 2019. ISSN 0377-2217. Zbl 1430.90217.
* Ohi [2010] F. Ohi. Multistate coherent systems. In _Stochastic Reliability Modeling, Optimization And Applications_ , pages 3–34. World Scientific Publishing Co. Pte. Ltd., 2010. ISBN 9789814277440.
* Owen [2013] G. Owen. _Game theory_. Emerald Group Publishing Limited, Bingley, fourth edition, 2013. ISBN 978-1-7819-0507-4. MR 3443071.
* Ping [2004] Z. Ping. _Measures of importance with applications to inspection policies_. ProQuest LLC, Ann Arbor, MI, 2004. ISBN 978-0496-73752-9. URL https://www.proquest.com/docview/305074799. Thesis (Ph.D.)–University of Illinois at Chicago. MR 2705807.
* Ramamurthy [1990] K. G. Ramamurthy. _Coherent structures and simple games_ , volume 6 of _Theory and Decision Library. Series C: Game Theory, Mathematical Programming and Operations Research_. Kluwer Academic Publishers Group, Dordrecht, 1990. ISBN 0-7923-0869-7. doi: 10.1007/978-94-009-2099-6.
* Rausand et al. [2021] M. Rausand, A. Barros, and A. Høyland. _System reliability theory. Models, statistical methods, and applications._ Hoboken, NJ: John Wiley & Sons, 3rd edition edition, 2021. ISBN 978-1-119-37352-0. 2nd edition: ISBN 0-471-47133-X. xix, 636 p. (2004).
* Średnicka [2019] M. Średnicka. Importance measure in multi-state systems reliability. Master’s thesis, Wrocław University of Science and Technology, Wrocław, Poland, 2019. 40 p.
* Schmidt et al. [1985] E. Schmidt, K. Jamali, G. Parry, and S. Gibbon. Importance measures for use in PRAs and risk management. Technical Report 6, Electric Power Research Inst., Palo Alto, CA (USA), 1985. Proceedings EPRI-NP–3912-SR-Vol2. Sessions 9-16. p. 83.1–83.11.
* Shapley [1953] L. S. Shapley. A value for n–person games. In H. Kuhn and A. Tucker, editors, _Contrib. Theory of Games_ , Ann. Math. Stud. no 28, pages 307–317. Princeton University Press, Princeton, 1953.
* Shiryayev [1978] A. N. Shiryayev. _Optimal Stopping Rules._ Springer, New York, 1978. English translation of _Statisticheskiĭ posledovatelnyĭ analiz_ by A. B. Aries.
* Spivak [1965] M. Spivak. _Calculus On Manifolds: A Modern Approach To Classical Theorems Of Advanced Calculus_. Avalon Publishing, 1965. ISBN 9780805390216. URL https://books.google.com.vc/books?id=PBcbMQAACAAJ.
* Szajowski and Yasuda [1997] K. Szajowski and M. Yasuda. Voting procedure on stopping games of Markov chain. In S. O. Anthony H. Christer and L. C. Thomas, editors, _UK-Japanese Research Workshop on Stochastic Modelling in Innovative Manufacturing, July 21-22, 1995_ , volume 445 of _Lecture Notes in Economics and Mathematical Systems_ , pages 68–80. Moller Centre, Churchill College, Univ. Cambridge, UK, Springer, 1997. doi: 10.1007/978-3-642-59105-1_6. MR 98a:90159; Zbl 0878.90112.
* Szajowski [2020] K. J. Szajowski. Rationalization of detection of the multiple disorders. _Stat. Pap._ , 61(4):1545–1563, 2020. ISSN 0932-5026; 1613-9798/e. doi: https://doi.org/10.1007/s00362-020-01168-2. Zbl 1448.91017.
* Tijs [2003] S. Tijs. _Introduction to game theory_ , volume 23 of _Texts and Readings in Mathematics_. Hindustan Book Agency, New Delhi, 2003. ISBN 81-85931-37-2.
* Wu and Coolen [2013] S. Wu and F. P. Coolen. A cost-based importance measure for system components: An extension of the Birnbaum importance. _European Journal of Operational Research_ , 225(1):189 – 195, 2013. ISSN 0377-2217. doi: 10.1016/j.ejor.2012.09.034.
* Xie [1987] M. Xie. On some importance measures of system components. _Stochastic Processes Appl._ , 25:273–280, 1987. ISSN 0304-4149. doi: 10.1016/0304-4149(87)90205-5.
* Yasuda et al. [1982] M. Yasuda, J. Nakagami, and M. Kurano. Multivariate stopping problems with a monotone rule. _J. Oper. Res. Soc. Japan_ , 25(4):334–350, 1982. ISSN 0453-4514. doi: 10.15807/jorsj.25.334. MR 692543.
* Załęska-Fornal [2006] A. Załęska-Fornal. Miary niezawodnościowej i strukturalnej istotności elementów. _Zeszyty Naukowe Akademii Marynarki Wojennej_ , R. 47(3 (166)):137–150, 2006.
# VoterFraud2020: a Multi-modal Dataset of Election Fraud Claims on Twitter
Anton Abilov1,2, Yiqing Hua1,2, Hana Matatov3, Ofra Amir3, Mor Naaman1,2
###### Abstract
The wide spread of unfounded election fraud claims surrounding the U.S. 2020 election resulted in the undermining of trust in the election, culminating in violence inside the U.S. Capitol. Under these circumstances, it is critical to understand the discussions surrounding these claims on Twitter, a major platform where the claims disseminated. To this end, we collected and release VoterFraud2020, a multi-modal dataset with 7.6M tweets and 25.6M retweets from 2.6M users related to voter fraud claims. To make this data immediately useful for a wide range of researchers, we further enhance the data with cluster labels computed from the retweet graph, user suspension status, and perceptual hashes of tweeted images. We also include in the dataset aggregated information for all external links and YouTube videos that appear in the tweets. Preliminary analyses of the data show that Twitter's ban actions mostly affected a specific community of voter fraud claim promoters, and expose the most common URLs, images and YouTube videos shared in the data.
## 1 Introduction
Free and fair elections are the foundation of every democracy. The 2020
presidential election in the United States was probably one of the most
consequential and contentious such events. Two-thirds of the voting-eligible
population voted, resulting in the highest turnout in the past 120 years
(Schaul, Rabinowitz, and Mellnik 2020). The Democratic Party candidate Joe
Biden was elected president.
Unfortunately, efforts to delegitimize the election process and its results were carried out before, throughout, and after the election. Mostly unfounded claims of voter fraud (Frenkel 2020) were spread both through public statements by politicians and on social media platforms. As a result, 34% of Americans said that they did not trust the election results as of December 2020 (NPR 2020). Voter fraud claims without credible evidence have great ramifications for both the integrity of the election and the stability of U.S. democracy. On January 6th, 2021, believing that the election had been 'stolen', mobs breached the U.S. Capitol while Congress voted to certify Biden as the winner of the election.
Social media platforms like Facebook, Twitter, YouTube and Reddit play a
significant role in political events (Vitak et al. 2011; Allcott and Gentzkow
2017), and the 2020 election was no exception (Ferrara et al. 2020). In
particular, Twitter has been the focus of public and media attention as a
prominent public square where ideas are adopted and claims – false or true –
are spread (Vosoughi, Roy, and Aral 2018; Grinberg et al. 2019). It is thus
important to understand the participants, discussions, narratives, and
allegations around voter fraud claims on this specific platform.
In this work, we release VoterFraud2020, a multi-modal Twitter dataset of 7.6M tweets and 25.6M retweets that are related to voter fraud claims. Using a manually curated set of keywords (e.g., "voter fraud" and "#stopthesteal") that was further expanded using a data-driven approach, we streamed Twitter activities between October 23rd and December 16th, 2020. We performed various validations of the limits of our stream, given Twitter's API constraints (Morstatter et al. 2013), and estimate that we were able to retrieve around 60% of the data containing our crawled keywords.
We further enhanced the dataset in order to make it accessible to a broader set of researchers and future research: (1) we cluster users according to their retweeting dynamics and release the cluster labels; (2) given Twitter's widespread post-election suspension action, we crawl and include the user status as of January 10th, 2021; (3) we compute and share the perceptual hashes of 168K images that appeared in the data; (4) we aggregate and share metadata about 138K external links that appeared in the tweets, including 12K unique YouTube videos. Our dataset also allows researchers to calculate the amount of Twitter interaction with the collected tweets, users, and media items, including the number of retweets and quotes from various clusters or from suspended users.
A preliminary analysis finds a significant cluster of users who were promoting
the election fraud related claims, with nearly 7.8% of them suspended in
January. The suspensions focused on a specific community within the cluster. A
simple analysis of the distribution of images, based on visual similarity,
exposes that the most broadly shared (by number of tweets) and the most
retweeted images are different. Although recent research has shown that voter
fraud claims are pushed mainly by mass media (Benkler et al. 2020), we also
find that external links referenced by promoters of the claims point mostly to
low-quality news websites, streaming services, and YouTube videos. Some of the
widespread videos claiming ‘evidence’ of voter fraud were published by
surprisingly small channels. Most strikingly, all of the top ten channels and
videos spreading voter fraud claims were still available on YouTube as of
January 11th, 2021.
We believe that the release of VoterFraud2020, the largest public dataset of Twitter discussions around the voter fraud claims, with the enhanced labels and data, will help the broad research community better understand this important topic at a critical time.
## 2 Data Collection
Our data collection process involved streaming Twitter data using a data-driven, manually curated set of keywords and hashtags. We report on the span and volume of the collected data, as well as on analyses estimating its coverage.
### 2.1 Streaming Twitter data
We used a data-driven approach to generate a list of keywords and hashtags
related to election fraud claims in an iterative manner. We started with a
single phrase and two derived keywords: voter fraud and #voterfraud. We first
used a convenience sample of 11M political tweets consisting of the tweets of
2,262 U.S. political candidates and the replies to those tweets, collected
between July 21st and Oct 22nd, 2020 using the Twitter Streaming API (Twitter
2019). We then identified hashtags that co-occur with our meta-seed keywords,
voter fraud and voterfraud. We selected all hashtags that appeared in at least
10 tweets and co-occurred with either of the meta-seed keywords at least 50%
of the time. From the resulting set, we manually filtered out those that were
not directly relevant to voter fraud. To this end, two members of the research
team reviewed the hashtags, including, if needed, searching for them on
Twitter to see whether they produce relevant results. Only the hashtags that
were agreed on by both evaluators were added, resulting in an initial set of
hashtags that was added to the two original keywords.
We computed the Jaccard coefficient between each of our seed hashtags and all
other hashtags that appeared in the new stream. We added to our set all
hashtags that had a Jaccard coefficient greater than 0.001 with any of the
seed hashtags. Three members of the team again reviewed this list by 1)
excluding hashtags that were not related to voter fraud, 2) adding
corresponding keywords of the hashtags (e.g. #discardedballots corresponds to
discarded ballots), and 3) adding relevant hashtags or keywords that the
researchers observed while searching for hashtags from the generated list.
Both the seed list and the final list of keywords and hashtags we used for
streaming are included in Appendix A (Table 3).
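A minimal sketch of the co-occurrence computation described above, assuming tweets are represented as sets of lowercase hashtags (the variable names and example data are illustrative):

```python
from collections import defaultdict

def jaccard_candidates(tweets, seeds, threshold=0.001):
    """tweets: list of sets of hashtags; seeds: set of seed hashtags.
    Returns hashtags whose Jaccard coefficient with any seed exceeds the
    threshold; in our pipeline these were then reviewed manually."""
    occurrences = defaultdict(set)  # hashtag -> indices of tweets using it
    for idx, tags in enumerate(tweets):
        for tag in tags:
            occurrences[tag].add(idx)
    candidates = set()
    for seed in seeds & set(occurrences):
        for tag, ids in occurrences.items():
            if tag in seeds:
                continue
            jaccard = len(ids & occurrences[seed]) / len(ids | occurrences[seed])
            if jaccard > threshold:
                candidates.add(tag)
    return candidates

tweets = [{"#voterfraud", "#stopthesteal"}, {"#voterfraud"}, {"#election2020"}]
print(jaccard_candidates(tweets, {"#voterfraud"}))  # {'#stopthesteal'}
```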
We collected data using the Twitter streaming API (Twitter 2019). The dataset
includes tweets from 17:00, October 23rd, 2020 to 13:00 December 16th, 2020.
We expanded the keywords list on Oct. 31st with additional keywords, and added
#stopthesteal as it started trending on November 3rd. While streaming, we
stored each tweet’s metadata (e.g., user ID, text, timestamp). We also
downloaded all image media items included in the tweets. In total, we
collected 3,781,524 original tweets, 25,566,698 retweets, and 3,821,579 quote
tweets (i.e. tweets that include a reference to another tweet) discussing
election fraud claims. Note that quote tweets are included in the Twitter
stream when either the new tweet or the referenced (quoted) tweet include one
of the keywords or hashtags on the list. In total, we collected tweets from
2,559,018 users who posted, shared or quoted one or more tweets with these
keywords.
### 2.2 Coverage Analysis
Since the Twitter streaming API provides only a sample of the tweets,
especially for large-volume keywords (Morstatter et al. 2013), we performed
multiple tests to evaluate and estimate the coverage of the dataset. This
analysis suggests that the dataset covers over 60% of the content shared on
Twitter using the keywords we tracked.
##### Retweet and quote coverage.
We evaluated the coverage of retweet and quote tweets by comparing the counts
of these objects in the stream to Twitter’s metadata. When a new retweet for
an original tweet appears in the stream, the API returns the tweet’s metadata
including the current retweet count and quote count of the original tweet. In
other words, if an original tweet $t_{i}$ is retweeted, it will appear in the
stream as a retweet $rt_{j}$, and the metadata for $rt_{j}$ will include the
total number of retweets of $t_{i}$ so far. From this metadata, it is easy to
define the retweet coverage as the ratio of the total number of retweets ($rt$
objects) streamed and stored in our dataset, over the sum of all retweet
counts of the original $t$ tweets, returned by the API in the latest $rt$
retweet of each original tweet. The quote coverage is defined analogously.
According to this analysis, the dataset captured 63.2% of the retweets and
62.6% of the quote tweets. These findings compare favorably with previous work
that shows a single API client captures only 37.6% of the retweets through the
Streaming API (Morstatter et al. 2013).
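For concreteness, the coverage computation can be sketched as follows; the variable names are illustrative, and the counts come from the streamed objects as described above.

```python
def retweet_coverage(streamed, latest_count):
    """streamed: list of (retweet_id, original_tweet_id) pairs captured from
    the stream; latest_count: dict mapping each original tweet id to the
    retweet_count metadata carried by its most recent streamed retweet."""
    captured = len(streamed)
    reported = sum(latest_count.values())
    return captured / reported if reported else 0.0

# toy example: we streamed 5 retweets of t1 whose final metadata reports 8,
# and 2 retweets of t2 whose final metadata reports 3
streamed = [("rt%d" % k, "t1") for k in range(5)] + [("rt5", "t2"), ("rt6", "t2")]
print(retweet_coverage(streamed, {"t1": 8, "t2": 3}))  # 7/11, about 0.64
```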
##### Comparison with #Election2020.
To further evaluate the coverage on the voter fraud tweets, we compared our
dataset with a previously published Twitter dataset of the U.S. 2020 election
(Chen, Deb, and Ferrara 2020). The creators of the #Election2020 dataset used
the streaming API to track 168 keywords that are broadly related to the
election and 57 accounts that are tied to candidates running for president.
As in VoterFraud2020, the keyword 'voter fraud' was also used to collect data for #Election2020. We used this overlap to estimate our coverage. First, we can directly compare the relative volume and overlap between the 'voter fraud' tweets in both datasets. We expect VoterFraud2020 to have a higher volume of such tweets because of its more focused set of keywords. Second, if we assume that sampling for both streams is independent and random, we can estimate the coverage of VoterFraud2020 by looking at the proportion of #Election2020 tweets that also appear in our data.
To this end, we extracted from both datasets all tweets and retweets containing this keyword that were posted on two days following the November 3rd election date: November 6th and November 13th. The analysis, performed on December
17th, was limited to two days as we had to obtain the content of the tweets of
the #Election2020 dataset by “hydrating” them (i.e. using the tweet IDs to get
the full tweet text using the Twitter API). We were unable to hydrate the full
data, presumably due to inactive accounts and deleted tweets. The hydration
yielded 92.4% of the #Election2020 data on November 6th (a total of 1.4M
tweets/3.5M retweets), and 91.1% of the data on November 13th (1.3M tweets/3M
retweets).
In total, our data includes 45,322 'voter fraud' related tweets on November 6th, 2.3 times as many as recorded in #Election2020. The ratio is even higher on November 13th, when we obtained 47,313 tweets, 3.1 times as many as in #Election2020. Figure 1 breaks down the coverage by dates (separated by rows),
in the two datasets (by different colors). From left to right, the bars show
the percentages of tweets that are available only in our dataset (dark blue),
that are available in both datasets (light blue), and that are available only
in #Election2020 (yellow). On any given day, the VoterFraud2020 dataset contains substantially more tweets related to voter fraud than #Election2020, especially when the estimated total volume is lower. On November 13th (second row), VoterFraud2020 contained 95.7% of the combined data (left two bars) while #Election2020 collected only 30.7% (right two bars) of the tweets. These numbers also indicate that VoterFraud2020's sample includes 32.1% of the related samples in #Election2020 on November 6th and 85.9% on November 13th. We acknowledge that these two numbers are not consistent, presumably because of November 6th's much higher volume of activity. If these samples are indeed independent, though, it means that our lower bound of coverage is November 6th's 32.1%.
Figure 1: Coverage comparison between our dataset and #Election2020 for tweets
containing ‘voter fraud’.
Based on these coverage analyses, we conclude that VoterFraud2020 is, at the time of submission, the largest known public Twitter dataset of voter fraud claims and discussions.
## 3 Data Enhancement
To ensure the reusability of our data, we took the following steps to enhance the dataset. In addition to the raw streaming data, we clustered users according to their retweet dynamics and release the cluster labels. We also queried Twitter for the status of each user on January 10th, and share the user status as active/not-found/suspended. Furthermore, to enable research on visual misinformation, we encode all images shared in the tweets with perceptual hashes. Finally, we release the URLs and the metadata of the YouTube videos that appeared in our dataset.
Community | Users | Relative size | % of users suspended
---|---|---|---
0 | 860,976 | 45.6% | 1%
1 | 437,783 | 23.2% | 4.6%
2 | 342,184 | 18.1% | 14.1%
3 | 33,857 | 1.8% | 1.5%
4 | 23,414 | 1.2% | 1.6%
Figure 2: (a) Community statistics. (b) Retweet graph colored by communities. (c) Suspension status (orange: suspended users).
##### Retweet Graph Communities.
Retweet networks have been frequently analyzed in previous works in order to
understand political conversations on Twitter (Arif, Stewart, and Starbird
2018; Cherepnalkoski and Mozetič 2016). Using community detection algorithms,
researchers are able to study key players, sharing patterns and content on
different sides of a discussion surrounding a heated political topic.
We first constructed a retweet graph of the dataset, where nodes represent
users and directed edges correspond to retweets between the users. The
direction of an edge corresponds to the direction of the information spreading
in the retweet relation. Edges are weighted according to the number of times
the corresponding source user has been retweeted. The resulting network
consists of 1,887,736 nodes and 16,718,884 edges.
To detect communities within the graph, we used the Infomap community detection algorithm (Bohlin et al. 2014), which captures the flow of information in directed networks. Using the default parameters, the algorithm produces thousands of clusters. By excluding all clusters that contain fewer than 1% of the nodes, we are left with 90% of all nodes (since the graph only includes retweeting and retweeted users, this number corresponds to 73.8% of all users in our dataset), which are clustered into five communities (see Figure 1(a)).
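A minimal sketch of this step, assuming the `infomap` Python package (v1 API) and integer-encoded user IDs; the toy edge list is illustrative.

```python
from infomap import Infomap

# edges follow the direction of information flow: (retweeted user,
# retweeting user, number of retweets); toy example values
retweet_edges = [(0, 1, 3.0), (0, 2, 1.0), (3, 4, 2.0), (3, 5, 1.0)]

im = Infomap("--directed")
for source, target, weight in retweet_edges:
    im.add_link(source, target, weight)
im.run()

communities = im.get_modules()  # node id -> module (community) id
print(communities)
```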
In Figure 2(a), we visualize the retweet network using the Force Atlas 2
layout in Gephi (Bastian, Heymann, and Jacomy 2009), using a random sample of
44,474 nodes and 456,372 edges. The nodes are colored according to their
computed community as described in Figure 1(a). Edges are colored by their
source node. The visualization indicates that the nodes are split between two
distinct clusters - community 0 (blue) on the left and communities 1, 2, 3 and
4 on the right. By examining the top users in each community, we conclude that
community 0 mostly consists of accounts that detract the voter fraud claims,
while the communities on the right consist of accounts that promote the voter
fraud claims. Most of the tweets from these users are written in English,
except for users in Community 3 who mainly post tweets in Japanese and users
in Community 4 who write in Spanish. Community 2 is more deeply embedded in
the promoter cluster compared to Community 1, as we observe tweets from
Community 1 being retweeted by Community 0 on the left, but not from Community
2. We include the list of top 5 Twitter accounts in each community by the
number of community retweets in the Appendix.
For brevity, in the following analyses we refer to the cluster on the left as the detractor cluster, and to the cluster comprising communities 1, 2, 3 and 4 on the right as the promoter cluster. Note that due to the partisan nature of U.S. politics, most of the promoter users are likely aligned with right-leaning politics, and detractor users with left-leaning politics.
To identify users that are prominent within each of these two clusters, we
calculate the closeness centrality of the user nodes in each cluster. In a
retweet network this metric can be interpreted as a user’s ability to spread
information to other users in the network (Okamoto, Chen, and Li 2008). We
compute the top-k closeness centrality to find the 10,000 most central nodes
within the detractor and promoter clusters (Bisenius et al. 2017).
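A sketch of this computation using networkx (rather than the exact top-k algorithm of Bisenius et al. (2017)); since networkx measures closeness along incoming paths, we reverse the graph so that the score reflects a user's ability to reach, i.e., spread to, others. The toy edge list is illustrative.

```python
import networkx as nx

retweet_edges = [(0, 1, 3.0), (0, 2, 1.0), (1, 2, 2.0)]  # toy example
G = nx.DiGraph()
G.add_weighted_edges_from(retweet_edges)  # direction = information flow

# closeness on the reversed graph ranks users by outward reachability
spread = nx.closeness_centrality(G.reverse())
top = sorted(spread, key=spread.get, reverse=True)[:10000]
print(top[:3])
```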
We release the author’s community label of each tweet, the community label of
each user, and a user’s closeness centrality in the detractor and promoter
clusters. We also include two additional metrics - retweet count by community
$X$ and quote count by community $X$. For a tweet $t_{i}$, the retweet count by community $X$ is the total number of retweets of $t_{i}$ received from users in community $X$. The metric is computed analogously for quotes.
##### Labeling Suspended and Deleted Users
When Congress convened to certify the electoral college results on January 6th, 2021, the allegations of voter fraud took a dramatic turn, culminating in the storming of the U.S. Capitol. Subsequently, Twitter took a harder stance on moderating content on its platform and suspended at least 70,000 accounts that were engaged in propagating conspiracy theories and sharing QAnon content (Twitter 2021). This ban has substantial implications for researchers seeking to understand the spread of voter fraud allegations on Twitter, since the Twitter API does not allow the "hydration" of tweets from suspended users. In order to understand the distribution of suspensions within our dataset, we queried the updated status of all users in our dataset on January 10th, a few days following the ban. The Twitter API returns a user status that indicates whether the user is active, suspended or not found (presumably deleted). In total, 3.9% of the accounts (99,884 accounts) in our data were suspended.
We enhance the dataset by labeling tweets and users that were suspended. This metadata will both enable research and ease hydration, by allowing hydraters to skip content that is no longer available. We also include two additional metrics for each tweet: retweet count by suspended users and quote count by suspended users.
Due to the immense public interest in this data, we have retained the full data we retrieved from the 99,884 suspended users, including 1,240,405 tweets and 6,246,245 retweets. This detailed data is not part of VoterFraud2020. However, we will distribute an anonymized version of this data to published academic researchers upon request.
##### Images.
Because of their persuasive power and ease of spread, there is a growing interest in analyzing how visual misinformation spreads both within a platform and across platforms (Zannettou et al. 2018; Highfield and Leaver 2016; Paris and Donovan 2019; Moreira et al. 2018; Zannettou et al. 2020). However, visual information such as images or videos is difficult for many researchers to study due to computational and storage costs. Here, we make the information about image content shared in VoterFraud2020 easier to use by sharing perceptual hash values for these images. With these numeric hash values, researchers can easily find duplicate and near-duplicate images in tweets, without working directly with cumbersome image content. To this end, we downloaded all image media items that were posted in the tweets in the streaming data, and encoded them with three different types of perceptual hashes.
Common perceptual hashes are binary strings designed such that the Hamming distance (Zauner, Steinebach, and Hermann 2011) between two hashes is small if and only if the two corresponding images are perceptually similar. In other words, an image that is only slightly transformed, for example by re-sizing, cropping, or rotation, will have a hash value similar to that of the original image. However, as the definition of perceptual similarity is often subjective and the underlying algorithms differ, the various hash functions have different performance characteristics under different types of image transformations. Therefore, we encode the images in our dataset with three perceptual hash functions: the Perceptive Hash (pHash), the Average Hash (aHash), and the Wavelet Hash (wHash) (Petrov 2017; Zauner, Steinebach, and Hermann 2011).
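A minimal sketch of the encoding step, using the `imagehash` library; the file names are hypothetical, and the distance threshold is an illustrative choice rather than one used to build the dataset.

```python
from PIL import Image
import imagehash

img = Image.open("tweet_image.jpg")
a_hash = imagehash.average_hash(img)  # aHash
p_hash = imagehash.phash(img)         # pHash
w_hash = imagehash.whash(img)         # wHash

# perceptually similar images yield hashes at a small Hamming distance;
# subtracting two imagehash objects returns that distance
other = imagehash.phash(Image.open("tweet_image_cropped.jpg"))
if p_hash - other <= 8:
    print("near-duplicate")
```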
In total, our streamed tweets included 201,259 image URLs, 167,696 of which were retrieved during streaming. We provide more details about the distribution of these images in Section 5.
##### External links.
Misinformation campaigns are known to spread information broadly across platforms, often via links to other sites (Wilson and Starbird 2020; Golovchenko et al. 2020). Hence, we extracted and publish the set of external (non-Twitter) URLs that were referenced in the tweets. For ease of use, we resolved URLs that point to a redirected location (e.g., bit.ly URLs) to their final destination URL. Our streamed tweets included references to a total of 138,411 unique URLs, appearing in 609,513 tweets.
Since a large portion (over 12%) of all URLs in the data were YouTube links, we further enhanced the data with YouTube-specific metadata. A key motivation for this specific focus was the known role YouTube plays in spreading misinformation generally (Hussein, Juneja, and Mitra 2020; Papadamou et al. 2020), and its role in the 2020 election and voter fraud claims specifically (Kaplan 2020; Wakabayashi 2020). For each YouTube video that was shared in the collected tweets, we used YouTube's Data API (YouTube 2021) to retrieve the video's title and description, as well as the ID and title of the channel that posted it. We retrieved the YouTube metadata on January 1st, 2021. On that date, out of the 13,611 unique video IDs that we queried, 1,608 were no longer available, resulting in 12,003 YouTube URLs with full additional metadata.
## 4 Data Sharing and Format
Our dataset is available for download under FAIR principles (Wilkinson et al. 2016) in CSV format (https://figshare.com/account/projects/96518/articles/13571084). The data includes "item data" tables for tweets, retweets, and users, keyed by Twitter-assigned IDs and augmented with additional metadata as described below. The data also includes the images that appear in the dataset, indexed by randomly generated unique IDs. Finally, the data includes aggregated tables for URLs and for YouTube videos, including the information described in Section 3. The dataset tables and the fields they contain are summarized on GitHub (https://github.com/sTechLab/VoterFraud2020).
The dataset conforms with the FAIR principles. It is _Findable_ as it is publicly available on Figshare, with a digital object identifier (DOI): 10.6084/m9.figshare.13571084. It is _Accessible_ since it can be accessed by anyone in the world through the link. The dataset is in CSV format, hence it is _Interoperable_. We release the full dataset with the descriptions detailed in this paper, as well as an online tool for exploring the dataset at http://voterfraud2020.io, making the dataset _Reusable_ for the research community.
The tables for Tweets and Retweets contain the full set of items that were
collected, including from suspended users. These tables do not include raw
tweet data beyond the ID, according to Twitter’s ToS. However, to support use
of the data without being required to download (“hydrate”) the full set of
tweets, we augment the Tweets table with several key properties. For each
tweet we provide the number of total retweets as computed by Twitter
(retweet/quote_count_metadata), as well as the number of retweets and quotes
we streamed for this tweet from users in each of the five main communities
(retweet/quote_count_community_X, X ranging from 0 to 4). Note that the latter
do not add up to the Twitter metadata due to the coverage issues listed in
Section 2.2. The Tweet table properties also include the user_community (0–4)
for the user who posted the tweet, computed using methods listed in Section 3.
Some of the Twitter accounts were not clustered into one of the five main communities; in this case, the user_community label is null. With this augmentation, researchers using this dataset can very quickly, for example, select and then hydrate a subset of the most retweeted tweets from non-suspended users in Community 2 (see the sketch below). As the tweet itself and the ID of the user who tweeted it are not available until hydration, Twitter users' privacy is preserved.
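A selection like the one just described might look as follows; the column names follow the field descriptions above, but the exact names (in particular, the suspension flag on the Tweets table, here called `user_suspended`) are assumptions that should be checked against the released schema.

```python
import pandas as pd

tweets = pd.read_csv("tweets.csv")

# most retweeted tweets posted by non-suspended users in Community 2;
# 'user_suspended' is a hypothetical name for the suspension label
subset = tweets[(tweets["user_community"] == 2)
                & (tweets["user_suspended"] == 0)]
top = subset.sort_values("retweet_count_metadata", ascending=False).head(1000)

# top["tweet_id"] can now be hydrated via the Twitter API
print(top["tweet_id"].tolist()[:5])
```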
The Users table is similarly augmented with aggregate information about the
importance of the user in the dataset, including the community that they
belong to, their centrality in the two meta-clusters, detractor and promoter
(closeness_centrality_detractor_cluster and
closeness_centrality_promoter_cluster), and the amount of attention (retweets
and quotes) they received from other users in the different communities. We
also note whether, according to the data we collected, the user had been
suspended. With this data, interested researchers can quickly focus their
attention and research on the main actors in each community.
The Images table includes all the image media items retrieved in the stream,
their unique media ID, and the ID of the tweet in which the image was shared.
We augment this table with the image hash using three types of perceptual hash
functions – aHash, pHash and wHash, as detailed in Section 3. This
augmentation, together with the link to the Tweet ID, will allow researchers
to quickly identify and hydrate popular images using the tweet metadata. They
can also quickly identify and get information for images that are similar to
any other arbitrary image, by computing and comparing the perceptual hash
values.
The two aggregate tables, the URLs table and the YouTube Videos table again
provide information about the popularity of the object in the dataset:
aggregate retweet and quote counts, both using the Twitter metadata and the
count of objects in our stream from the various communities. In addition,
these tables are augmented with metadata about the item (URL or YouTube video)
as noted in Section 3.
## 5 Data Analysis
We performed a preliminary analysis of our dataset and its different
modalities – tweets and users, images, external links – to demonstrate its
potential interest and provide some initial guiding insights about the data.
Figure 3: Temporal overview of the dataset showing the number of streamed tweets, quotes and retweets per day. The shaded regions mark the expansions of the keyword set.

Image | Tweets | Retweets (total) | Retweets (in cluster)
---|---|---|---
(a) | 15 | 24,399 | 18,020
(b) | 11 | 20,104 | 10,424
(c) | 34 | 28,833 | 10,250

Figure 4: Top three most retweeted images in the promoter cluster, (a)–(c), with the number of tweets, retweets as reported in the metadata, and retweets by users in the cluster. Image (c) was cropped to fit the figure.
##### Tweets and users.
Figure 3 shows the number of retweets (green), original tweets (blue) and quote tweets (yellow) in the dataset over the time span of the data collection (X-axis). The shaded regions mark the expansions of our keyword set on October 31st (light blue, region b) and November 3rd (light green, region c). The Y-axis gives the daily count. In general, except for the large increase after the election date (November 3rd, dotted vertical line), the volume of the stream remains roughly constant. On average, there are 170,938 tweets, 576,136 retweets, and 85,488 quote tweets per day after the election.
Our manual inspection shows that the top tweets retweeted by the detractor cluster often condemn the alleged voter fraud claims, while the top tweets in the promoter cluster indeed make voter fraud claims. Not surprisingly, among the ten most retweeted tweets in the promoter cluster, nine were tweeted by President Trump. We refer readers to our project website for more details about popular tweets.
While the promoter cluster seems rather homogeneous (Figure 2(a)), users in Community 2 (yellow) stand out in both their level of activity and the rate at which they were suspended. Community 2 was highly active in our dataset: it comprises 18.1% of the users, but contributed 68% of the tweets and 74% of the retweets. Moreover, 14% of Community 2's users had been suspended by Twitter by the time we collected the account status data described above, a much higher rate than in the other communities, as shown in Figure 1(a). In total, Community 2 was responsible for 46.1% of all suspensions among the users we associated with the top five communities. The suspension effect, and its focus on Community 2, can also be observed in Figure 2(b).
A full analysis of the suspended accounts and their network communities, and of the potential impact of the suspension, is out of scope for this dataset paper, but can easily be performed using the data we share in VoterFraud2020. For example, the data shows that 35% of the promoter cluster users that were retweeted more than 1,000 times (1,596 in total) were suspended.
To conclude, our preliminary analysis shows that alleged election fraud claims mostly circulate in the promoter cluster, and in particular in Community 2 within that cluster. The most popular tweets (by retweet count) supporting such claims often come from prominent accounts. The recent moderation efforts by Twitter seem to have affected the most active community engaged in fraud-related misinformation, and did not broadly target all accounts involved in promoting such claims.
##### Images.
We conducted a preliminary examination of matching and repeated images in VoterFraud2020 to analyze the distribution of images related to voter fraud claims. Our data, using the perceptual hash functions described in Section 3, allows tracking of duplicate and near-duplicate images that were posted in multiple tweets. In this analysis, we experimented with the three perceptual hash functions and refer to two images as matching if they have an identical perceptual hash value.
In Figure 5(a), we show the cumulative distribution of the number of unique perceptual hashes in VoterFraud2020 (Y-axis), with hash values sorted by the number of unique tweets in which they appear, from highest to lowest (X-axis). For example, according to pHash, the 1,000 images shared in the largest number of unique tweets appeared together in 25,019 different tweets (not including retweets). Although the results are generally similar across the hash functions, pHash is the most "conservative" in terms of assigning matches.
Overall, our results are similar when using the different hash functions. For
example, there are 109,312 unique pHash values among the 167,696 images. Of
these, 17,831 were shared in more than one tweet, an average of 4.27 times
each. In other words, 34% of the image instances in tweets appear in more than
one tweet. Figure 5(b) presents the image that appeared in the largest number of unique
tweets: the same perceptual hash value appeared in over 1,000 tweets,
according to all three hash functions.
We further investigate the popularity of images, defined by the number of
retweets, in particular within the promoter and detractor clusters. After
grouping images by the same pHash value, we present in Figure 4 the top three
images that have been retweeted in the promoter cluster. Also note that
despite the high values of metadata retweets and cluster retweets, all these
“popular” images appeared in only a few original tweets in our data. For
example, image (a) appeared in 15 tweets, whose metadata retweet (as returned
from the API) counts add up to 24,399 in total, and was retweeted (as recorded
in our dataset) from users in the promoter cluster 18,020 times. We note that
images (a) and (b) were also the top two images retweeted by users in the
suspended users set, with 5,547 and 3,122 retweets in that set, respectively
(recall that almost all suspended users belong to the promoter cluster). As
expected, the most retweeted images in the two clusters are quite different.
The three most retweeted images in the detractor cluster (not included for
lack of space) have somewhat lower spread, appearing in tweets that were
retweeted 10,743, 6,425, and 3,411 times (based on metadata). The top image is
a screenshot of the NY Times front page of Nov 11th, 2020, reporting that top
election officials across the country have not identified any fraud.
The analysis presented above can be easily extended with less-strict image
similarity matching by calculating the Hamming distance between a pair of
perceptual hash values. In this initial analysis, we used a strict sense of
similarity, treating images as similar only when they share the same
perceptual hash values.
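For illustration, such matching could be reproduced along the following lines with the open-source Python imagehash package; the package choice, the file names, and the Hamming-distance threshold below are assumptions for illustration, not the exact implementation described in Section 3.

```python
import imagehash
from PIL import Image

# Compute perceptual hashes for two images (paths are placeholders).
h1 = imagehash.phash(Image.open("image_a.jpg"))
h2 = imagehash.phash(Image.open("image_b.jpg"))

# Strict matching, as used in our analysis: identical hash values.
strict_match = (h1 == h2)

# Relaxed matching: Hamming distance between the 64-bit hashes.
# The imagehash package overloads subtraction to return this distance.
distance = h1 - h2
relaxed_match = distance <= 6  # illustrative threshold, not from the paper

print(strict_match, distance, relaxed_match)
```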
Figure 5: (a) The cumulative number of repeated images by hash matches. (b) The most tweeted image.
promoter cluster | | detractor cluster |
---|---|---|---
Domain | Retweets | Domain | Retweets
pscp.tv | 51,822 | washingtonpost.com | 11,220
youtube.com | 44,031 | rawstory.com | 9,267
thegatewaypundit.com | 35,967 | cnn.com | 4,139
davidharrisjr.com | 18,793 | independent.co.uk | 3,882
foxnews.com | 17,332 | nytimes.com | 3,746
theepochtimes.com | 15,297 | newsweek.com | 3,496
thedcpatriot.com | 14,958 | news.yahoo.com | 2,899
thefederalist.com | 13,288 | deadstate.org | 2,409
djhjmedia.com | 11,816 | theguardian.com | 2,232
justthenews.com | 11,149 | politicususa.com | 2,032
Table 1: Top 10 domains being retweeted in the promoter and the detractor
clusters respectively, as well as the number of retweets by users in these
clusters.
##### URLs.
We conduct preliminary analyses of the external links that have been included
in the tweets. Table 1 lists the top 10 domains that have been shared inside
the detractor and promoter clusters, respectively. Most of the links shared by
users in the detractor cluster are to mainstream news media, such as the
Washington Post, CNN, and the New York Times. The rest are other news-related
websites. The links shared by users in the promoter cluster mostly point to
low-quality news-related websites.
The most shared domain in the promoter cluster is pscp.tv, a live video
streaming app that is owned by Twitter. YouTube stands out as the second most
retweeted domain among the promoter users. This trend is reflected in multiple
news reports, warning of the significant role that YouTube plays in spreading
false information related to voter fraud claims (Frenkel 2020). The majority
of the top 10 most retweeted videos by the promoter users falsely claim
evidence of widespread election fraud. The users spreading these videos had
significant overlap with the January (or earlier) suspension action by
Twitter. For eight of these videos, around $30\%$ of the retweets of tweets
sharing those videos were by accounts later suspended by Twitter.
A scan of the top 10 YouTube channels retweeted in the promoter cluster shows
that most of them were relatively large (millions of subscribers), though
there are also several smaller channels. For example, the most retweeted
channel, Precinct 13, has only 3.67K subscribers, yet has a video that
appeared in 88 tweets and was retweeted over 9K times.
Despite YouTube’s announcement that it will take actions against content
creators who falsely claim the existence of widespread voter fraud (see
https://twitter.com/YouTubeInsider/status/1347231471212371970), as of Jan
11th, the top 10 channels and videos listed in our tables are still available
on YouTube.
## 6 Related Work and Datasets
We review prior work using Twitter data analysing politically related events,
with an emphasis on those that have released a public dataset.
In particular, previous work has used and published Twitter data to study
U.S. elections. Using tweets collected during the 2016 U.S. election,
researchers have analysed information operations run by social bots (Rizoiu et
al. 2018), characterized the dissemination of misinformation (Vosoughi, Roy,
and Aral 2018) and its exposure to American voters (Grinberg et al. 2019).
Work in Hua, Naaman, and Ristenpart (2020); Hua, Ristenpart, and Naaman (2020)
characterized adversarial interaction against political candidates during the
2018 U.S. general election and shared 1.7M tweets interacting with political
candidates.
Focusing on the U.S. 2020 election, research studied false claims regarding
mail-in ballots (Benkler et al. 2020) before the election as the COVID-19
pandemic made it hard to vote in person. Closest to our work is the
#Election2020 dataset (Chen, Deb, and Ferrara 2020), which streamed a broad
set of Twitter data for both political candidates’ tweets and related
keywords. As discussed above, although some of the voter fraud related
keywords were included in their data collection process, our dataset contains
more than 2.3 times as much of the related data in #Election2020, for
overlapping streaming keywords, presumably because of our more focused stream.
Our stream also included a broader set of fraud-claim related keywords.
In order to help understand the dissemination of misinformation across
platforms, Brena et al. (2019); Hui et al. (2018) used news articles as
queries and released the tweets pointing to these articles. In 2018, Twitter
published a list of accounts that the platform suspects to be related to
Russia’s government-controlled Internet Research Agency (Twitter 2018). This
release enabled a number of studies that deepened our understanding of foreign
information manipulation in the U.S. (Arif, Stewart, and Starbird 2018; Im et
al. 2020; Badawy, Ferrara, and Lerman 2018).
Most of the previous works that released Twitter datasets only included the
tweet IDs, in accordance with Twitter’s Terms of Service. We keep to that
practice, and augment the data without sharing tweet content, as detailed
above, making our multi-modal dataset more accessible and useful to the
research community.
## 7 Discussion and Conclusions
The voter fraud allegations to discredit the U.S. 2020 presidential elections
are likely to form one of the most consequential misinformation campaigns in
modern history. It is critical to allow a diverse set of researchers to
provide a deeper understanding of this effort, which will continue to have
national and global impact for years to come. To enable that contribution, it
is important to provide a public and accessible archive of this campaign on
various social media platforms, including Twitter, as we do here.
The dataset has the potential to benefit the research community, and to
further inform the public regarding both Twitter activities around the voter
fraud claims, as well as Twitter’s response. Yet, the data has some
limitations. We could not possibly capture the full extent of the voter fraud
claims on Twitter, as our dataset was constructed by using matching keywords.
Further, as discussed above, we do not have full coverage even for the
keywords we tracked, though we estimate that we have a majority of the tweets
with those keywords. Nevertheless, the breadth of the data enables various
types of investigation using both the tweet data, as well as the aggregated
data of URLs, videos and images used in the campaign. We propose three major
categories of such investigation.
First, researchers can use the dataset to study the spread, reach, and
dynamics of the voter fraud campaign on Twitter. Researchers can describe and
analyze the participants, including the activities of political candidates
using information from orthogonal data sets of candidate accounts
555https://github.com/vegetable68/Midterm-2020-candidates, and the interaction
between public figures and other accounts spreading claims and promoting
certain narratives. Further, the data can help expose how different public
figures spread different claims, for example the claims regarding the Dominion
voting machines, and what kind of engagement such narratives received. The data
can also be used to understand the role of bots and other coordinated
activities and campaigns in spreading this information. In general, the
dataset can provide for analysis of the distribution of attention to these
claims and how it spreads – via images, tweets, URLs – including comparison
among different pre-computed communities and clusters.
Second, we include auxiliary data – URLs including YouTube links, and image
hashes – that can help researchers examine other sources of information and
their roles in spreading these claims. For example, using the image hash
values that were encoded using publicly available algorithms, researchers can
easily map images not just within the Twitter data, but also within the larger
web ecosystem. Researchers may combine our dataset with datasets that are
collected from other social media platforms to examine how visual
misinformation spreads across platforms (e.g., (Zannettou et al. 2018; Moreira
et al. 2018)).
A third potential area of investigation is Twitter’s response to the voter
fraud claims. A specific question is the characterization of the suspended
users, who are primarily part of a specific community even within the
group promoting voter fraud claims as shown above. Researchers can use the
data to both understand Twitter’s non-public response, and its potential
effectiveness, or even simulate the effectiveness of hypothetical earlier bans
of the same population. As noted above, while Twitter’s terms forbid us from
publicly sharing full data for these suspended users – the tweets for these
users are no longer available on Twitter by their ID – we will make these
tweets available privately to published academic researchers, as we believe
these tweets are of immense and justified public interest.
The publicly released data was collected and made available according to
Twitter’s Terms of Service for academic researchers, following established
guidelines for ethical Twitter data use (Rivers and Lewis 2014). By limiting
to the Tweet IDs as the main data element, the dataset does not expose
information about users whose data had been removed from the service. The only
content in our data that is directly tied to a Tweet ID is the hash of the
images for tweets that included them. Even though that hash, theoretically,
can be tied to an image from another source, in the absence of the original tweet
the image will not be associated with any user account. We believe that this
minor disclosure risk is justified given the potential benefits of this data.
## References
* Allcott and Gentzkow (2017) Allcott, H.; and Gentzkow, M. 2017. Social media and fake news in the 2016 election. _Journal of economic perspectives_ 31(2): 211–36.
* Arif, Stewart, and Starbird (2018) Arif, A.; Stewart, L. G.; and Starbird, K. 2018. Acting the part: Examining information operations within# BlackLivesMatter discourse. _Proceedings of the ACM on Human-Computer Interaction_ 2(CSCW): 1–27.
* Badawy, Ferrara, and Lerman (2018) Badawy, A.; Ferrara, E.; and Lerman, K. 2018. Analyzing the digital traces of political manipulation: The 2016 russian interference twitter campaign. In _2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)_ , 258–265. IEEE.
* Bastian, Heymann, and Jacomy (2009) Bastian, M.; Heymann, S.; and Jacomy, M. 2009. Gephi: an open source software for exploring and manipulating networks. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 3.
* Benkler et al. (2020) Benkler, Y.; Tilton, C.; Etling, B.; Roberts, H.; Clark, J.; Faris, R.; Kaiser, J.; and Schmitt, C. 2020. Mail-In Voter Fraud: Anatomy of a Disinformation Campaign. _Available at SSRN_ .
* Bisenius et al. (2017) Bisenius, P.; Bergamini, E.; Angriman, E.; and Meyerhenke, H. 2017. Computing Top-k Closeness Centrality in Fully-dynamic Graphs. 1–26.
* Bohlin et al. (2014) Bohlin, L.; Edler, D.; Lancichinetti, A.; and Rosvall, M. 2014. _Community Detection and Visualization of Networks with the Map Equation Framework_ , 3–34. Cham: Springer International Publishing. ISBN 978-3-319-10377-8. doi:10.1007/978-3-319-10377-8_1.
* Brena et al. (2019) Brena, G.; Brambilla, M.; Ceri, S.; Di Giovanni, M.; Pierri, F.; and Ramponi, G. 2019. News sharing user behaviour on twitter: A comprehensive data collection of news articles and social interactions. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 13, 592–597.
* Chen, Deb, and Ferrara (2020) Chen, E.; Deb, A.; and Ferrara, E. 2020. # Election2020: The first public Twitter dataset on the 2020 US presidential election. _arXiv preprint arXiv:2010.00600_ .
* Cherepnalkoski and Mozetič (2016) Cherepnalkoski, D.; and Mozetič, I. 2016. Retweet networks of the European Parliament: evaluation of the community structure. _Applied Network Science_ 1(1): 1–20. ISSN 23648228. doi:10.1007/s41109-016-0001-4. URL http://dx.doi.org/10.1007/s41109-016-0001-4.
* Ferrara et al. (2020) Ferrara, E.; Chang, H.; Chen, E.; Muric, G.; and Patel, J. 2020. Characterizing social media manipulation in the 2020 US presidential election. _First Monday_ .
* Frenkel (2020) Frenkel, S. 2020. How Misinformation ‘Superspreaders’ Seed False Election Theories. https://www.nytimes.com/2020/11/23/technology/election-misinformation-facebook-twitter.html. [Online; accessed 4-Jan-2021].
* Golovchenko et al. (2020) Golovchenko, Y.; Buntain, C.; Eady, G.; Brown, M. A.; and Tucker, J. A. 2020. Cross-Platform State Propaganda: Russian Trolls on Twitter and YouTube During the 2016 US Presidential Election. _The International Journal of Press/Politics_ 1940161220912682.
* Grinberg et al. (2019) Grinberg, N.; Joseph, K.; Friedland, L.; Swire-Thompson, B.; and Lazer, D. 2019\. Fake news on Twitter during the 2016 US presidential election. _Science_ 363(6425): 374–378.
* Highfield and Leaver (2016) Highfield, T.; and Leaver, T. 2016. Instagrammatics and digital methods: Studying visual social media, from selfies and GIFs to memes and emoji. _Communication Research and Practice_ 2(1): 47–62.
* Hua, Naaman, and Ristenpart (2020) Hua, Y.; Naaman, M.; and Ristenpart, T. 2020. Characterizing twitter users who engage in adversarial interactions against political candidates. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ , 1–13.
* Hua, Ristenpart, and Naaman (2020) Hua, Y.; Ristenpart, T.; and Naaman, M. 2020. Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 14, 272–282.
* Hui et al. (2018) Hui, P.-M.; Shao, C.; Flammini, A.; Menczer, F.; and Ciampaglia, G. L. 2018. The Hoaxy misinformation and fact-checking diffusion network. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 12.
* Hussein, Juneja, and Mitra (2020) Hussein, E.; Juneja, P.; and Mitra, T. 2020. Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. _Proceedings of the ACM on Human-Computer Interaction_ 4(CSCW1): 1–27.
* Im et al. (2020) Im, J.; Chandrasekharan, E.; Sargent, J.; Lighthammer, P.; Denby, T.; Bhargava, A.; Hemphill, L.; Jurgens, D.; and Gilbert, E. 2020. Still out there: Modeling and identifying russian troll accounts on twitter. In _12th ACM Conference on Web Science_ , 1–10.
* Kaplan (2020) Kaplan, A. 2020. YouTube has allowed conspiracy theories about interference with voting machines to go viral. https://www.mediamatters.org/google/youtube-has-allowed-conspiracy-theories-about-interference-voting-machines-go-viral. [Online; accessed 5-Jan-2021].
* Moreira et al. (2018) Moreira, D.; Bharati, A.; Brogan, J.; Pinto, A.; Parowski, M.; Bowyer, K. W.; Flynn, P. J.; Rocha, A.; and Scheirer, W. J. 2018. Image provenance analysis at scale. _IEEE Transactions on Image Processing_ 27(12): 6109–6123.
* Morstatter et al. (2013) Morstatter, F.; Pfeffer, J.; Liu, H.; and Carley, K. M. 2013. Is the sample good enough? Comparing data from Twitter’s streaming API with Twitter’s firehose. _Proceedings of the 7th International Conference on Weblogs and Social Media, ICWSM 2013_ 400–408.
* NPR (2020) NPR. 2020. Poll: Just A Quarter Of Republicans Accept Election Outcome. https://www.npr.org/2020/12/09/944385798/poll-just-a-quarter-of-republicans-accept-election-outcome. [Online; accessed 5-Jan-2021].
* Okamoto, Chen, and Li (2008) Okamoto, K.; Chen, W.; and Li, X.-y. 2008. Ranking of Closeness Centrality for Large-Scale Social Networks (June). doi:10.1007/978-3-540-69311-6.
* Papadamou et al. (2020) Papadamou, K.; Zannettou, S.; Blackburn, J.; De Cristofaro, E.; Stringhini, G.; and Sirivianos, M. 2020. “It is just a flu”: Assessing the Effect of Watch History on YouTube’s Pseudoscientific Video Recommendations. _arXiv preprint arXiv:2010.11638_ .
* Paris and Donovan (2019) Paris, B.; and Donovan, J. 2019. Deepfakes and Cheap Fakes. _United States of America: Data & Society_ .
* Petrov (2017) Petrov, D. 2017. Wavelet image hash in Python. https://fullstackml.com/wavelet-image-hash-in-python-3504fdd282b5. [Online; accessed 15-Jan-2021].
* Rivers and Lewis (2014) Rivers, C. M.; and Lewis, B. L. 2014. Ethical research standards in a world of big data. _F1000Research_ 3\.
* Rizoiu et al. (2018) Rizoiu, M.-A.; Graham, T.; Zhang, R.; Zhang, Y.; Ackland, R.; and Xie, L. 2018. # debatenight: The role and influence of socialbots on twitter during the 1st 2016 us presidential debate. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 12.
* Schaul, Rabinowitz, and Mellnik (2020) Schaul, K.; Rabinowitz, K.; and Mellnik, T. 2020. 2020 turnout is the highest in over a century. https://www.washingtonpost.com/graphics/2020/elections/voter-turnout/. [Online; accessed 5-Jan-2021].
* Twitter (2018) Twitter. 2018. Update on Twitter’s review of the 2016 US election. https://blog.twitter.com/official/en_us/topics/company/2018/2016-election-update.html. [Online; accessed 7-Jan-2021].
* Twitter (2019) Twitter. 2019. Twitter Standard API. https://developer.twitter.com/en/docs/tweets/filter-realtime/overview. [Online; accessed 15-Jan-2019].
* Twitter (2021) Twitter. 2021. An update, following the riots in Washington, DC. https://blog.twitter.com/en_us/topics/company/2021/protecting--the-conversation-following-the-riots-in-washington--.html. [Online; accessed 12-Jan-2021].
* Vitak et al. (2011) Vitak, J.; Zube, P.; Smock, A.; Carr, C. T.; Ellison, N.; and Lampe, C. 2011. It’s complicated: Facebook users’ political participation in the 2008 election. _CyberPsychology, behavior, and social networking_ 14(3): 107–114.
* Vosoughi, Roy, and Aral (2018) Vosoughi, S.; Roy, D.; and Aral, S. 2018. The spread of true and false news online. _Science_ 359(6380): 1146–1151.
* Wakabayashi (2020) Wakabayashi, D. 2020. Election misinformation continues staying up on YouTube. https://www.nytimes.com/2020/11/10/technology/election-misinformation-continues-staying-up-on-youtube.html. [Online; accessed 5-Jan-2021].
* Wilkinson et al. (2016) Wilkinson, M. D.; Dumontier, M.; Aalbersberg, I. J.; Appleton, G.; Axton, M.; Baak, A.; Blomberg, N.; Boiten, J.-W.; da Silva Santos, L. B.; Bourne, P. E.; et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. _Scientific data_ 3(1): 1–9.
* Wilson and Starbird (2020) Wilson, T.; and Starbird, K. 2020. Cross-platform disinformation campaigns: lessons learned and next steps. _Harvard Kennedy School Misinformation Review_ 1(1).
* YouTube (2021) YouTube. 2021. YouTube Data API — Google Developers. https://developers.google.com/youtube/v3. [Online; accessed 8-Jan-2021].
* Zannettou et al. (2018) Zannettou, S.; Caulfield, T.; Blackburn, J.; De Cristofaro, E.; Sirivianos, M.; Stringhini, G.; and Suarez-Tangil, G. 2018. On the origins of memes by means of fringe web communities. In _Proceedings of the Internet Measurement Conference 2018_ , 188–202.
* Zannettou et al. (2020) Zannettou, S.; Caulfield, T.; Bradlyn, B.; De Cristofaro, E.; Stringhini, G.; and Blackburn, J. 2020. Characterizing the Use of Images in State-Sponsored Information Warfare Operations by Russian Trolls on Twitter. In _Proceedings of the International AAAI Conference on Web and Social Media_ , volume 14, 774–785.
* Zauner, Steinebach, and Hermann (2011) Zauner, C.; Steinebach, M.; and Hermann, E. 2011. Rihamark: perceptual image hash benchmarking. In _Media watermarking, security, and forensics III_ , volume 7880, 78800X. International Society for Optics and Photonics.
## Appendix A Appendix
a) Community 0
ID | Handle | Active Status | Retweets
---|---|---|---
32871086 | kylegriffin1 | active | 76,302
1640929196 | mmpadellan | active | 74,393
255812611 | donwinslow | active | 69,796
216776631 | BernieSanders | active | 60,961
15952856 | AriBerman | active | 58,222
b) Community 1
ID | Handle | Active Status | Retweets
---|---|---|---
25073877 | realDonaldTrump | suspended | 1,560,373
187680645 | LLinWood | suspended | 1,057,805
586707638 | SidneyPowell1 | suspended | 633,273
240454812 | GenFlynn | suspended | 334,197
1812055789 | CodeMonkeyZ | suspended | 274,210
c) Community 2
ID | Handle | Active Status | Retweets
---|---|---|---
2922345639 | DonnaWR8 | suspended | 38,388
259260816 | zeusFanHouse | suspended | 36,347
393190233 | LeahR77 | suspended | 33,352
951302891708583936 | TheRISEofROD | suspended | 32,992
32804484 | Bubblebathgirl | active | 27,787
d) Community 3
ID | Handle | Active Status | Retweets
---|---|---|---
835040085573689346 | ganaha_masako | active | 12,480
1128981340848656384 | KadotaRyusho | active | 6,890
796450109986902016 | yamatogokorous | active | 5,716
1166577240601239552 | mei98862477 | active | 5,347
109458204 | kohyu1952 | active | 5,244
e) Community 4
ID | Handle | Active Status | Retweets
---|---|---|---
3393186119 | FernandoAmandi | active | 4,217
1126414392080232449 | POTUS_Trump_ESP | active | 2,981
1195348350620622850 | TDN_NOTICIAS | active | 2,459
98294131 | 1VAFI | active | 1,802
1068238181282267137 | Gamusina77 | active | 1,638
Table 2: Top 5 Users in each community sorted by retweets from other users.
Category | Hashtags and keywords
---|---
Seed list | #abolishdemocratparty #ballotharvasting #ballotvoterfraud #cheatingdemocrats #democratvoterfraud #gopvoterfraud #ilhanballotharvesting #ilhanomarballotharvesting #ilhanomarvoterfraud #mailinvoterfraud #stopvoterfraud #voterfraud #voterfraudbymail #voterfraudisreal
Filtered | #abolishdemocratparty
Generated from the seed list | #ballotharvesting #voterid #ilhanomarforprison #stopgopvoterfraud #ilhanomar #nancypelosiabusingpower #nancypelosimustresign #junkmailballots #traresforcongress #immigrationfraud #votebymailfraud #ballotfraud #exposed #votersuppression #ilhanresign #voteinperson #votebymail #video #lockherup #nomailinvoting #ilhanomarelectionfraud #taxfraud #ballotharvesting #massivemailinballots #arrestilhanomar #obamagate #ilhanomarlockherup #buyingvotes #2020election #campaignfraud #homewrecker #voteinperson #minneapolis #absenteeballots #trump2020 #arrestilhanomar #absenteeballot #darktolight #wwg1wga #terrorist #daveygravyspirualsavage #trump #fraud #liar #pizzagate #republicans #qproof #theawakening #voteatthepolls #marriedherbrother #glasshouses #sheepnomore #voteyouout #cheater #georgesoros #georgia #vote #walkaway #thegreatawakening #qanon #evil #savethechildren
Keywords list 10/24 | #ballotfraud #ballotharvesting #ballotvoterfraud #cheatingdemocrats #democratvoterfraud #ilhanomarballotharvesting #ilhanomarvoterfraud #mailinvoterfraud #nomailinvoting #stopgopvoterfraud #stopvoterfraud #votebymailfraud #voterfraud #voterfraudisreal
Added on 10/31 | #discardedballots #electionfraud #electioninterference #electiontampering #gopvoterfraud #hackedvotingmachines ‘destroyed ballots’ ‘discarded ballots’ ‘election fraud’ ‘election interference’ ‘election tampering’ ‘hacked voting machine’ ‘pre-filled ballot’ ‘stolen ballots’ ‘ballot fraud’ ‘ballot harvesting’ ‘cheating democrats’ ‘democrats cheat’ ‘harvest ballot’ ‘vote by mail fraud’ ‘voter fraud’
Added on 11/03 | #stopthesteal
Table 3: Hashtags and keywords related to election fraud.
# PiChu: Accelerating Block Broadcasting in Blockchain Networks with
Pipelining and Chunking
Kaushik Ayinala Baek-Young Choi Sejun Song
University of Missouri-Kansas City, Kansas City, MO, USA
Email: {kapnb, choiby<EMAIL_ADDRESS>
###### Abstract
Blockchain technologies have been rapidly enhanced in recent years. However,
its scalability still has limitations in terms of throughput and broadcast
delay as the network and the amount of transaction data increase. To improve
scalability of blockchain networks, we propose a novel approach named PiChu
that accelerates block propagation in blockchain networks by pipelining and
verifying chunks of a block in parallel. Accelerating block propagation
reduces the mining interval and chance of fork occurring, which in turn
increases throughput. Our approach can be applied to the blockchain networks
either directly or with a minor modification to the consensus. Through
extensive and large-scale simulations, we validate that the proposed PiChu
scheme significantly enhances the scalability of blockchain networks. For
instance, a 64 MB block can be broadcast in just 80 seconds in a blockchain
network with a million nodes. The efficiency of PiChu broadcasting increases
with bigger block sizes and a larger number of nodes in the network.
###### Index Terms:
blockchain, block propagation, chunking, pipelining, simulator, P2P network,
scalability
©2020 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
## I Introduction
Blockchain maintains a distributed ledger of completed transactions as blocks
and chains them sequentially using the previous block’s hash to preserve the
order of the transactions. Nodes in a blockchain are connected to each other
on a peer-to-peer (P2P) network. A consensus protocol running at every node
follows an agreed policy to add a block to the chain. There are several
consensus schemes such as Proof of Work (PoW), Proof of Stake (PoS), Delegated
Proof of Stake (DPoS), Practical Byzantine Fault Tolerance (PBFT), and Hybrid
Consensus. For instance, Proof of Work (PoW) is one of the most commonly used
consensus algorithms, introduced in Bitcoin [1]. In the PoW consensus
algorithm, each block contains a timestamp, a nonce, the hash of the block,
and a difficulty target. Proof of Stake (PoS) is another well-known consensus
algorithm, introduced in PPCoin [2].
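As a rough illustration of this structure, the sketch below models a PoW-style block; the field layout and the SHA-256-based hashing are simplifying assumptions for illustration, not the exact Bitcoin or PPCoin formats.

```python
from dataclasses import dataclass, field
from typing import List
import hashlib
import json
import time

@dataclass
class Block:
    prev_hash: str           # hash of the previous block, chaining the ledger
    transactions: List[str]  # completed transactions recorded in this block
    difficulty_target: int   # consensus threshold the block hash must satisfy
    timestamp: float = field(default_factory=time.time)
    nonce: int = 0

    def hash(self) -> str:
        # Hash a simplified serialization of the block contents.
        payload = json.dumps(
            [self.prev_hash, self.transactions, self.difficulty_target,
             self.timestamp, self.nonce]).encode()
        return hashlib.sha256(payload).hexdigest()
```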
After validating a block, the node broadcasts or propagates it to the rest of
the network. The time it takes to propagate a block depends on many factors,
such as the size of a block, the average bandwidth of the nodes, and the
maximum hop count or diameter of a network. Those factors have intricate
relationships. When the number of nodes in the network increases, the network
diameter increases, and with it the block broadcast time. Also, when
throughput is increased via a larger block size, the block broadcast time
increases, raising the chance of undesirable forks. The blockchain network
becomes unstable when there are too many forks, or forks do not resolve.
Therefore, if we increase the throughput or capacity of the blockchain
network, then it may become unstable. This causes the scalability problem
[3, 4, 5] in blockchains.
In this paper, we propose a Pipelining and Chunking scheme for blockchain
networks, named PiChu, which expedites block propagation by verifying
consensus with the block header and incrementally forwarding the body of a
block in small chunks over the P2P network, instead of a whole block at once.
After receiving a chunk, a node verifies and forwards the chunk. Accelerating
block broadcast improves the scalability of the blockchain, as the block
interval can be reduced, the block size can be increased, and forks in the
chain are reduced.
Since PiChu takes advantage of network pipelining, its efficiency is far
better than the traditional approach. Our experimental results show, on
average, an order of magnitude ($\approx$ 13.6 times) lower block broadcast
time than the traditional method in a blockchain network with 65,536 nodes.
The PiChu technique can be applied to existing consensus protocols with
minimal change to the blockchain network. It can be used directly with
existing consensus algorithms such as PoS, DPoS and PBFT that use only a
header to verify a block. As for PoW, which uses an entire block for
verification, PiChu can be employed with a minor modification to the
consensus.
Our contributions in this paper include i) proposing PiChu, a novel block
broadcasting technique, ii) developing a versatile blockchain simulator, and
iii) analyzing and extensively evaluating the efficiency of the proposed
scheme.
The rest of the paper is organized as follows. We discuss the existing works
on blockchain scalability in Section II. Section III describes the proposed
scheme in detail. The efficiency and the pseudo-code of our scheme are given
in Section IV. Section V discusses the potential attacks and proposes
countermeasures. Section VI explains the experiment environment and results.
We conclude the paper in Section VII.
## II Related Work
There are a number of studies aiming to improve the scalability of blockchain
networks. They follow approaches such as using multiple chains, sharding, or
exploiting network topology information.
Monoxide [6] uses multiple chains to linearly scale the blockchain. It
proposes Chu-ko-nu mining to maintain the same difficulty across all the
chains, and proposes a protocol to handle inter-chain transactions. A node can
mine a block in multiple or all chains by solving a single problem. Miners can
choose the chains they want to work on. This may cause a chain to be abandoned
if there are too many chains. Elastico [3] also linearly scales the blockchain
by sharding but uses a single chain. Sharding involves dividing the network
into groups or shards for a given amount of time. Each group will work on a
different set of transactions. The size of the block increases with the number
of nodes, which in turn increases the broadcast time. The size of the block is
limited by bandwidth and latency.
A scheme to speed up block propagation by choosing the closest neighbors as
peers was proposed in [7], where the closest neighbor is determined by
transmission latency. Another study [8] also improves the scalability by
maintaining the network topology using a tree structure for a broadcast
routing. Tree cluster routing is proposed to handle routing during node
failures. However, it does not address adapting to dynamic network conditions
such as a new node joining, or handling a node or cluster failure. Velocity [9] improves
block broadcasting by downloading the parts of a block from multiple
neighbors. In the scheme, a block is converted into so-called fountain codes.
The node that wants to receive a block sends a request message to all of its
neighbors. The neighbors having the block sends a fountain code continuously.
After receiving sufficient codes, the node rebuilds the block. Graphene [10]
improves block propagation by reducing the transmission delay between the
nodes. Graphene uses Bloom filters and Invertible Bloom Lookup Table (IBLT) to
synchronize the block between peers. Bitcoin-NG [11] indirectly selects a
leader for a given time frame, and the leader transmits micro blocks
throughout the time frame. The chain contains two types of blocks: key blocks
and micro blocks. The node that mines a key block becomes the leader, and the
consensus protocol for the key block is PoW. However, Bitcoin-NG is designed
for a specific type of consensus protocol and cannot be used with other
existing consensus protocols.
To the best of our knowledge, this paper is the first work that uses the
unique approach of pipelining and chunking for accelerating block propagation
in blockchain networks. The proposed scheme can be used along with existing
scaling and acceleration techniques in a complementary manner.
## III PiChu: The Proposed Pipelining and Chunking Approach
This section explains the proposed PiChu scheme. The PiChu scheme involves
first sending a header as an invitation, then dividing the body of the block
into chunks, and finally forwarding the chunks in a pipeline.
### III-A Verification of a Block for a Consensus
PiChu performs block verification for consensus using only the header rather
than the whole block.
Most consensus algorithms, including Proof of Stake, Delegated Proof of Stake,
Proof of Activity, Proof of Burn, Proof of Elapsed Time, and Leased Proof of
Stake ([2, 12, 13, 14, 15]), require only the header to verify a block for the
consensus, as shown in Equation (1), which states the consensus condition
between the nodes in PoS. Thus PiChu can be readily used on those blockchains.
$Hash(Header)<C_{w}\times\textit{DifficultyTarget}$ (1)
where $C_{w}$ is the coin day weight.
On the other hand, Proof of Work is a consensus protocol that requires an
entire block for its verification. However, it can be made PiChu-capable with
minor modifications. Note that nodes in a PoW blockchain follow Equation (2)
to add a block to their chain.
$Hash(Block)<\textit{DifficultyTarget}$ (2)
To verify the consensus with the header alone, the hash of all the
transactions, as well as the reward transaction, should be included in the
header. In Bitcoin, the size of the nonce is 32 bits, but the number of hash
attempts needed to meet the difficulty target can exceed $2^{32}$; miners
iterate over the nonce, and if no nonce satisfies the consensus, they shuffle
the transactions and iterate over the nonce again. Since the consensus now has
to be verified with the header, there are no transactions for the miners to
shuffle, so the size of the nonce has to be increased. With the header
modified in this way, the PoW consensus can be verified using Equation (3),
and PiChu can be used on the modified PoW consensus.
$Hash(Header^{\prime})<\textit{DifficultyTarget}$ (3)
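A minimal sketch of these header-only checks, assuming SHA-256 as the hash function and integer-encoded targets (neither of which is fixed by the scheme itself), could look as follows.

```python
import hashlib

def hash_int(data: bytes) -> int:
    # Interpret the SHA-256 digest as a big-endian integer.
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def verify_pos_header(header: bytes, coin_day_weight: int,
                      difficulty_target: int) -> bool:
    # Equation (1): Hash(Header) < C_w * DifficultyTarget
    return hash_int(header) < coin_day_weight * difficulty_target

def verify_pow_header(modified_header: bytes, difficulty_target: int) -> bool:
    # Equation (3): Hash(Header') < DifficultyTarget, where the modified
    # header carries the transactions hash, the reward, and a wider nonce.
    return hash_int(modified_header) < difficulty_target
```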
### III-B Chunking and Propagation Scheme
Chunking involves dividing the body of the block into multiple chunks of the
same size. Each chunk should contain only complete transactions, and the
remaining space in a chunk is padded.
A miner appends some information about the chunking to the header and signs
it. The miner cannot use an arbitrary key to sign the header with this
metadata; he has to sign with the key that is used to claim the reward of the
block. All blockchains give rewards to the miners, and the reward is included
in the block. The reward should contain the public key of the miner. In
consensus algorithms that need a whole block, the reward is included in the
body of the block; we have to modify such a whole-block consensus protocol so
that the reward for mining the block is included in the header. Thus, a miner
signs the header with metadata by using its reward private key and sends the
signature along with the header to its connections. A receiving node retrieves
the header from the invitation and verifies the consensus. If the consensus is
correct, then the node retrieves the miner’s reward public key and verifies
the signature of the invitation. If it is correct, then the node retrieves the
information about the chunks from the invitation and uses it to receive the
chunks. After dividing the block into chunks, the miner prepends the chunk
number to each chunk to identify the order of the chunks, and then signs each
chunk with its metadata using the reward private key, preventing an
intermediate node from tampering with the data. When a node receives a chunk,
it checks the integrity by using the reward public key. We discuss the optimal
chunk size in Section IV.
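The sketch below illustrates the chunking step. It ignores transaction boundaries for brevity, and a keyed SHA-256 MAC stands in for the miner's ECDSA reward-key signature of Table I, so the primitives here are illustrative assumptions rather than the scheme's exact cryptography.

```python
import hashlib
import hmac
from typing import List, Tuple

CHUNK_SIZE = 128 * 1024  # 128 KBytes per chunk, as in Table I

def make_chunks(body: bytes, reward_key: bytes) -> List[Tuple[bytes, bytes]]:
    """Split a block body into numbered, signed chunks."""
    chunks = []
    for i in range(0, len(body), CHUNK_SIZE):
        payload = body[i:i + CHUNK_SIZE].ljust(CHUNK_SIZE, b"\x00")  # pad
        # Prepend a 2-byte chunk number so receivers can order the chunks.
        numbered = (i // CHUNK_SIZE).to_bytes(2, "big") + payload
        # Stand-in for the miner's ECDSA signature over the numbered chunk.
        sig = hmac.new(reward_key, numbered, hashlib.sha256).digest()
        chunks.append((numbered, sig))
    return chunks

def verify_chunk(numbered: bytes, sig: bytes, reward_key: bytes) -> bool:
    # Receivers check chunk integrity before forwarding it.
    expected = hmac.new(reward_key, numbered, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)
```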
Figure 1: Block structure in PiChu
TABLE I: PiChu field types description
Field Name | Size | Description
---|---|---
# of chunks | 2 Bytes | Number of chunks in the body of block, varies with number of transactions in the body
$C_{i}$ | 128 KBytes | $i^{th}$ chunk in the body of the block
$S(C_{i},K^{Pr})$ | 64 Bytes | signature of $C_{i}$ with ECDSA private key in the block header
For every chunk, we send an additional 64 bytes as a signature, which
increases the amount of data to be transferred for a block. Even though the
amount of transmitted data increases, blocks are transmitted much faster.
While storing the block, nodes can remove the metadata about the chunks.
Figure 2: Block broadcast sequence in traditional blockchains
Figure 3: Block broadcast sequence in PiChu blockchain
### III-C Pipelining
In the general broadcast approach, when a node mines or receives a new block,
it sends a block invitation to all of its neighbors. If a node receives a
block invitation, it checks whether the block exists or not. If the block does
not exist, then the node replies with a block request message. After receiving
the block request message, the node forwards the block. The receiving node
verifies the block, and if the block is valid, then it sends the block
invitation to all of its neighbors. This traditional block broadcast protocol
is illustrated in Figure 2.
As illustrated in Figure 3, in PiChu, when a node mines or receives a block,
it sends an invitation to all the connected nodes with the PiChu header. A
node that receives the invitation message verifies whether the header achieves
consensus or not. If the header achieves consensus and the node does not have
that block, it sends a chunk request message back to the original node.
In addition, it sends an invitation message with the PiChu header to its own
neighbor nodes. After sending the header invitations to all the neighbors, the
miner starts sending the chunks to the neighbors who sent the chunk request
message. When a node receives chunks, it verifies the signature of each chunk
by using the public key of the miner. Although an additional 64 bytes of
signature is required for each chunk, the overhead is trivial. As soon as a
chunk is verified, the node forwards it to those neighbor nodes which sent a
chunk request. The verification of a chunk includes checking its integrity and
the validity of the transactions in it.
## IV Analysis of PiChu Efficiency
The broadcast time is proportional to the radius of the network: if the
network radius increases, the broadcast time increases. It is also
proportional to the delay at each node: if the delay at each node increases,
the block broadcast time increases, and vice versa.
On average, the broadcast time in a traditional blockchain network is equal to
the radius of the network in hop counts multiplied by the nodal delay, where
the nodal delay is the sum of the transmission delay, propagation delay, and
verification delay. The transmission delay depends on the block size, the
bandwidth, and the number of neighbors: it is proportional to the block size
and the average degree of the nodes, and inversely proportional to the
bandwidth. The verification delay depends on the size of the block, and the
total verification cost accumulates along the diameter of the blockchain
network. The notations of the symbols used in this section are summarized in
Table II.
$T_{B} = R\times\left(T_{LinkTrans}+T_{LnkPrp}+T_{ver}\right)$ (4)
$T_{B} = R\times\left(D_{conn}\times\frac{L_{B}}{B_{w}}+T_{LnkPrp}+T_{ver}\right)$ (5)
In the PiChu scheme, the header is broadcast first, and then the chunks are
pipelined in parallel. So the time it takes to propagate the block is equal to
the sum of the time it takes to broadcast the header and the time to transmit
all the chunks from one node to another. The latter depends on the degree of
the nodes, the number of chunks, the metadata, and the bandwidth; the size of
the metadata for each chunk is 520 bits.
$T_{PiChu} = T_{PH}+T_{DC}$ (6)
$T_{PiChu} = R\times\left(D_{conn}\times\frac{L_{H}}{B_{w}}+T_{LnkPrp}+T_{ver}\right)+T_{DC}$
$T_{PiChu} = R\times\left(D_{conn}\times\frac{L_{H}}{B_{w}}+T_{LnkPrp}+T_{ver}\right)+\frac{D_{conn}\times N_{C}\times\left(L_{C}+520\right)}{B_{w}}$ (7)
As seen in Equation (5), the traditional block broadcast time depends on the
product of the network radius and the block size: if the block size is
increased in Equation (5), the broadcast time increases by at least $R$ times
that increase. In Equation (7), we can observe that the block broadcast time
depends on the product of the network radius and the header size; if the block
size is increased in Equation (7), the broadcast time increases only by the
block transmission delay between two nodes. So the PiChu block broadcast
approach is far more efficient than the general broadcast approach, and its
advantage over the traditional approach grows with the block size and the
number of nodes in the network.
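To make the comparison concrete, the following sketch evaluates Equations (5) and (7) for illustrative parameter values; the numbers below are assumptions for demonstration, not the simulation settings of Section VI.

```python
def t_traditional(R, D_conn, L_B, B_w, t_prop, t_ver):
    # Equation (5): radius times the per-hop nodal delay for the whole block.
    return R * (D_conn * L_B / B_w + t_prop + t_ver)

def t_pichu(R, D_conn, L_H, L_C, N_C, B_w, t_prop, t_ver):
    # Equation (7): header broadcast plus one link's worth of chunk pipeline,
    # with 520 bits of metadata per chunk.
    header = R * (D_conn * L_H / B_w + t_prop + t_ver)
    chunks = D_conn * N_C * (L_C + 520) / B_w
    return header + chunks

# Illustrative values: 64 MB block, 128 KB chunks, 5 neighbors, 25 Mbps links.
L_B = 64 * 2**20 * 8   # block size in bits
L_C = 128 * 2**10 * 8  # chunk size in bits
N_C = L_B // L_C       # number of chunks
print(t_traditional(R=10, D_conn=5, L_B=L_B, B_w=25e6, t_prop=0.05, t_ver=0.5))
print(t_pichu(R=10, D_conn=5, L_H=80 * 8, L_C=L_C, N_C=N_C,
              B_w=25e6, t_prop=0.05, t_ver=0.5))
```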
Algorithm 1 gives the pseudo-code of the PiChu scheme. When a node receives an
invitation from a peer, it checks whether that header or block already exists
in its chain. If it does not, the node requests and receives the chunks from
the peer, and each chunk received is immediately forwarded to other peers.
Note that only one block is received and forwarded at a time.
List HeaderConnections; Object CurrentHeader;
while _True_ do
Receive block header H as an invitation from node N;
if _CurrentHeader == H_ then
HeaderConnections.add(N);
Continue;
else if _CurrentHeader == null_ then
CurrentHeader = H;
end if
if _adding H makes the chain longest_ then
if _Hash(H) < DifficultyTarget_ then
sendToOthers(H);
Retrieve $Pu_{k}$, $N_{C}$ from H;
while _$N_{C}-- > 0$_ do
Receive a chunk;
if _Chunk is valid_ then
sendToOthers(Chunk);
else
Choose a node X from HeaderConnections;
$N_{C}++$;
Request X to pipeline the last $N_{C}$ chunks;
end if
end while
if _Block is valid_ then
Add block to the chain;
else
Discard the block;
end if
end if
end if
end while
Procedure _sendToOthers(Data)_
Send Data to other nodes in parallel
Algorithm 1 Pseudo-code of the Chunking and Pipelining Block Broadcast Scheme
The size of a chunk is bounded by the block size but should be large enough to
overcome the metadata processing overhead. The transmission delay of a chunk
should be less than the sum of the propagation delay and the protocol
overhead, so that there is no extra delay at each forwarding node: a node has
to receive a chunk before it receives the chunk request message from its
neighbors, so that it can forward the chunk immediately after receiving the
message. The chunk size can be decided by Equation (10) below.
$T_{tc} < T_{LnkPrp}+T_{proc}$ (8)
$\frac{L_{C}\times D_{conn}}{B_{w}} < T_{LnkPrp}+T_{proc}$ (9)
$L_{C} < \frac{(T_{LnkPrp}+T_{proc})\times B_{w}}{D_{conn}}$ (10)
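For instance, the upper bound of Equation (10) can be computed directly; the parameter values below are illustrative assumptions.

```python
def max_chunk_size(t_prop, t_proc, B_w, D_conn):
    # Equation (10): largest chunk (in bits) that keeps the pipeline full.
    return (t_prop + t_proc) * B_w / D_conn

# e.g., 50 ms propagation + 200 ms processing, 25 Mbps links, degree 5
print(max_chunk_size(0.05, 0.2, 25e6, 5) / 8 / 1024, "KBytes")
```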
TABLE II: Explanation of Notations
Notation | Explanation
---|---
$L_{H}$ | header size in bits
$L_{B}$ | block size in bits
$N_{C}$ | the number of chunks in a block
$L_{C}$ | chunk size in bits
$B_{w}$ | average bandwidth of a node
$R$ | radius of a network
$T_{B}$ | average broadcast time in a traditional blockchain network
$D_{conn}$ | average degree of connections of a node
$T_{LinkTrans}$ | average transmission delay between the nodes
$T_{LnkPrp}$ | average propagation delay between two nodes
$T_{ver}$ | average verification delay of a block
$T_{PiChu}$ | average delay to propagate block in PiChu scheme
$T_{PH}$ | average delay to propagate a header
$T_{DC}$ | average delay in transmitting all chunks from one node to another
$T_{tc}$ | transmission delay of a chunk
$T_{proc}$ | PiChu processing overhead delay
## V Defense against Potential Attacks
In this section, we discuss the potential attacks and mitigation strategies in
a PiChu enabled blockchain network.
### V-A Forwarding node tampers data
An intermediate node could modify the data in a chunk before forwarding it to
other nodes. Receiving nodes validate the integrity of a chunk by checking its
signature, so if a malicious node modifies the data in a chunk and forwards
it, the receiving nodes will fail to verify the chunk’s integrity. A node that
receives a tampered chunk discards the chunk and disconnects from the node
that sent it.
The node then has to receive the remaining chunks from other neighbors. A node
can receive the header $H$ invitation from multiple neighbors, and it keeps a
record $R$ of the neighbors that sent the header $H$ invitation. When a
tampered chunk is received for the block with header $H$, the node disconnects
from the sender and requests an optimal neighbor in the record $R$ to pipeline
the remaining chunks. The optimal neighbor is decided based on latency and
transmission delay.
### V-B Miner includes invalid transactions in a block
A miner can include invalid transactions in a block, causing one or more
chunks to contain invalid transactions. The header will be accepted by all the
nodes, as it is valid. When receiving the chunks, nodes validate them before
forwarding. If a chunk contains invalid transactions, nodes cannot validate
that chunk; if it contains invalid transactions but its integrity is correct,
then nodes can safely conclude that the miner is malicious. After detecting
that the miner is malicious, a node still forwards the chunk with invalid
transactions to its neighbors so that other nodes can detect that the miner is
malicious.
If the miner includes invalid transactions in the last chunk, he can perform a
denial of service on the network for the time it takes to broadcast the block.
Since the time to propagate a block through PiChu is small compared to regular
broadcast, the window for such a denial of service is small. PiChu
broadcasting is used on a block only if adding that block to the chain makes
it longer; we do this to reduce forks in the chain and to prioritize the miner
that finds a block first. After detecting invalid transactions in a chunk,
nodes blacklist the header of that block and will not add a block with a
blacklisted header to their blockchains. This causes the miner to lose the
reward for the mined block; one could even revoke all the rewards that the
miner has accumulated. A miner thus has to forfeit his reward if he wants to
perform a denial of service on the network.
### V-C Intermediate node delays the sending of the chunks
A forwarding or intermediate node in the blockchain network can intentionally
delay the forwarding of chunks, so the nodes connected to the attacking node
receive the chunks more slowly than their peers. If there are many such
attackers in the network, the block broadcast time will increase. The
mitigation for this attack is similar to the data-tampering mitigation: the
node keeps a record $R$ of the neighbors that sent the header $H$ invitation,
and when it detects or suspects that an intermediate node is delaying the
forwarding of chunks, it disconnects from that node and requests a neighbor in
the record $R$ to pipeline the remaining chunks.
### V-D Miner dies while sending the block or sends only partial block
A miner node might fail while sending the block, or might intentionally send a
partial block to attack the network. It may not be possible to distinguish
whether the miner node died or intentionally sent a partial block, so the
handling of the two cases is the same. When a node does not receive chunk $X$
after receiving chunk $X-1$, it first has to decide whether the miner or the
forwarding node failed, since a forwarding node might also intentionally stop
forwarding chunks. To decide, the node requests chunk $X$ from all of its
neighbors. If any neighbor sends chunk $X$, then the forwarding node has
failed; if no neighbor sends chunk $X$ within a time frame, it is safe to
assume that the miner died. If a forwarding node has failed, the node
terminates the connection with it and receives the remaining chunks from
another neighbor.
Assume now that the miner died while sending chunk $X$, and all the previous
chunks are valid; all the nodes in the network will then have the chunks up to
$X-1$. Some blockchains can tolerate a partial block in the chain, and others
cannot. If it is tolerable, nodes append to chunk $X-1$ a special chunk
indicating that only a partial block was received; after appending the special
chunk, nodes will not accept any further chunks for that block. If partial
blocks are not acceptable, nodes discard the chunks received for that block
and keep the header in a blacklist, and will not accept a block with a
blacklisted header. As the header is not accepted into the blockchain, the
miner does not get the reward for that block; losing the reward is a
disincentive against sending partial blocks. In an alternative approach, nodes
do not use the PiChu scheme for a blacklisted header and propagate the block
through the general approach instead. In a more aggressive approach, nodes can
take away all the rewards that the miner has accumulated.
## VI Experiment Results
(a) Traditional approach
(b) PiChu approach
Figure 4: Block broadcast delay in a blockchain network [Number of nodes: 1K
$\sim$ 1M nodes; Block sizes: 8KB $\sim$ 64MB];
(more than 15 times faster with PiChu in a million node network and 64MB
blocks)
Figure 5: Block broadcast time comparison: Traditional vs. PiChu (in a 65536
node network)
Figure 6: Percentage of forks: Traditional vs. PiChu (in a 65536 node network)
In order to validate the effectiveness of the PiChu scheme in a very large
network with varied parameters, we have developed our own blockchain
simulator. While there is an existing blockchain simulator called SimBlock
[16], it is not well-suited to simulating block broadcasting in a network with
a large number of nodes. Our simulator is developed in Java, and we have made
the source code publicly available through GitHub [17]. It can simulate block
broadcasting in a network with millions of nodes and supports large block
sizes, in both the traditional and PiChu approaches. It takes the average
bandwidth of nodes, the average latency between nodes, the block size, the
chain length, the number of nodes, and the average degree of a node ($D_{n}$)
as input. For a given number of nodes, the simulator generates a random graph
topology based on the average degree per node.
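As a sketch of this topology-generation step (our simulator itself is written in Java; the networkx-based Python below is only an illustration), one can build a random graph with a fixed degree and inspect the hop radius that drives Equations (5) and (7).

```python
import networkx as nx

# Random topology: every node has degree 5, as suggested in Section VI.
n_nodes, degree = 1024, 5
G = nx.random_regular_graph(degree, n_nodes, seed=42)

# The eccentricity of one node approximates the hop radius R of the network;
# computing the exact radius over all nodes is expensive for large graphs.
R = nx.eccentricity(G, v=0)
print(f"nodes={n_nodes}, degree={degree}, approx. radius R={R}")
```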
We first match general propagation results against real measurement data as
well as other existing simulators under comparable experimental settings.
Table III shows the simulation settings used in our study, which are similar
to [4] and [16]. The output of our simulation is shown and compared in Table
IV; our results are close to the real measurements.
TABLE III: Simulation Settings
Parameter | Bitcoin | Litecoin | DogeCoin
---|---|---|---
# of nodes | 6000 | 800 | 600
Block Interval | 10 min | 2.5 min | 1 min
Block Size | 534 KB | 6.11 KB | 8 KB
# of connections | based on Miller, A. [18]
Bandwidth | testmy.net [19]
Propagation delay | verizon [20]
TABLE IV: Output of various simulators
 | Bitcoin (10 m) | Litecoin (2 m 30 s) | DogeCoin (1 m)
---|---|---|---
$t_{MBP}$ of Real Measurement [4] | 8.7 s | 1.02 s | 0.98 s
$t_{MBP}$ from Gervais et. al. [4] | 9.42 s | 0.86 s | 0.83 s
$t_{MBP}$ from SimBlock [16] | 8.94 s | 0.85 s | 0.82 s
$t_{MBP}$ from our Simulator | 9.55 s | 1.04 s | 1.07 s
Measured $r_{f}$ | 0.41% | 0.27% | 0.62%
Gervais et al. $r_{f}$ | 1.85% | 0.24% | 0.79%
SimBlock $r_{f}$ | 0.58% | 0.30% | 0.80%
Our Simulator $r_{f}$ | 0.55% | 0.40% | 0.70%
First, we assess how the block broadcast delay varies with the number of nodes
and the block size in the traditional broadcast scheme. The average bandwidth
of the nodes and average latency between nodes for this experiment are taken
from [19] and [20]. In this experiment, the degree of each node is varied
between 8 and 12. The maximum number of connections for a node in bitcoin is
125 [18], and the maximum number of connections for a node in Ethereum is 50
[21]. Coinscope[18] found that the majority of nodes in the Bitcoin network
have a degree between 8 and 12, even though the maximum number of connections
is set to 125. The verification delay is set to 0.25 ms for a transaction[22].
After setting the parameters for the experiment, the number of nodes in the
network and block size are varied. The results for this experiment are
represented in Figure 4 (a). We can observe that when the number of nodes is
constant, broadcast time increases linearly with an increase in block size.
The broadcast time is proportional to the product of the network radius and
block size. The broadcast time also increases with increasing the number of
nodes.
We then assess how the block broadcast delay varies with the number of nodes
and the block size in blockchain networks using the PiChu propagation
technique. The parameters for this experiment are the same as in the previous
experiment except for $D_{n}$: the PiChu technique works better if the degree
of the nodes is small. In Equation (7), we can observe that when the block
size is large, the broadcast delay depends mainly on the transmission delay of
the block between two nodes, so the efficiency of PiChu depends on how fast a
chunk can be transmitted from one node to another. If the degree of a node is
high, the time it takes to transmit a chunk from one node to another
increases, and the efficiency of PiChu decreases. The average degree of a node
is set to 5; the reason is explained in the latter part of this section. After
setting the parameters for this experiment, the number of nodes and the block
size are varied. Figure 4(b) shows the output of this experiment. We can
observe that for a given number of nodes, the block broadcast time increases
linearly with the block size, but the slope is smaller than for the
traditional broadcast approach. When the block size is large, the propagation
delay depends mainly on the block size instead of the product of the network
radius and the block size, and the broadcast time increases only a little with
an increase in the network radius.
Figure 5 shows how the broadcast delay changes with respect to block size for
65,536 nodes in the PiChu and the general approach. The block propagation time
with PiChu for 65,536 nodes is 13.6 times less than with the traditional
approach, and for a million nodes it is 16.3 times less. From these results,
we can say that the PiChu propagation method is more efficient than
traditional propagation, and that the efficiency of PiChu increases with an
increase in the number of nodes or the block size.
Figure 6 shows the percentage of forks occurring with respect to block size for 65536 nodes under PiChu and under the traditional approach, with a block interval of 10 minutes. The percentage of forks under PiChu is ten times lower than under the traditional approach. Since forks are reduced, the block size can be increased, and throughput increases with block size.
The maximum possible block size for a given block interval is measured for a traditional blockchain and for a PiChu blockchain, each with 65536 nodes. The maximum block size for a given interval is determined by increasing the block size until the fork rate reaches or exceeds 100 percent; at that point, the blockchain becomes unusable. Figure 8 shows the results. The maximum block size for a given interval is ten times higher with PiChu than with the traditional approach. In both approaches, the maximum block size increases with the block interval.
The broadcast time of a block in PiChu depends on the degree of the nodes: it increases with the degree, since the time to send a chunk to all of a node's connections increases. In the traditional approach, the broadcast time may increase or decrease with the node degree. To confirm that the broadcast time increases with the node degree, we varied the degree between 3 and 25; the degree cannot be two, as the network topology would then become linear or circular. The simulator settings are the same as in the previous experiments: the number of nodes is 65536, and the block size is set to 64 MB. Figure 7 shows how the broadcast time increases with the degree. The lowest broadcast time is recorded when the degree of the nodes is 3, but with degree three, nodes are susceptible to Sybil attacks, and new nodes may find it difficult to discover peers. The degree should be as high as possible while keeping the broadcast time reasonable. We suggest a degree of 5, with which a block can be broadcast in under 80 seconds. Note that if degree 3 had been used in the previous experiments, the results would have been 1.6 times better.
Figure 7: Block broadcast time with respect to maximum number of connections
per node
Figure 8: Maximum block size for a given interval in 65536 nodes network
## VII Conclusions
We have proposed a block acceleration scheme based on pipelining and chunking of blocks, named PiChu, to address the scalability and performance of blockchain networks. To the best of our knowledge, this is the first approach of its kind to blockchain scalability. The approach can be employed with minimal modification of existing blockchain networks. We have demonstrated the efficiency of the PiChu approach both theoretically and through extensive evaluation using our blockchain simulator. Our experimental results show that PiChu significantly outperforms traditional block propagation methods and that its efficiency increases with the size of the blockchain network. Our future work includes extending our blockchain simulator to support various scalability schemes and exploring the effectiveness of using multiple scalability schemes together.
## References
* [1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” 03 2009. [Online]. Available: https://bitcoin.org/bitcoin.pdf
* [2] S. King and S. Nadal, “Ppcoin: Peer-to-peer crypto-currency with proof-of-stake,” _self-published paper, August_ , vol. 19, 2012.
* [3] L. Luu, V. Narayanan, C. Zheng, K. Baweja, S. Gilbert, and P. Saxena, “A secure sharding protocol for open blockchains,” in _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’16. New York, NY, USA: ACM, 2016, pp. 17–30. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978389
* [4] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun, “On the security and performance of proof of work blockchains,” in _Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security_ , ser. CCS ’16. New York, NY, USA: ACM, 2016, pp. 3–16. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978341
* [5] J. Donet, C. Pérez-Solà, and J. Herrera-Joancomartí, “The bitcoin p2p network,” vol. 8438, 03 2014.
* [6] J. Wang and H. Wang, “Monoxide: Scale out blockchains with asynchronous consensus zones,” in _16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19)_. Boston, MA: USENIX Association, Feb. 2019, pp. 95–112. [Online]. Available: https://www.usenix.org/conference/nsdi19/presentation/wang-jiaping
* [7] W. Bi, H. Yang, and M. Zheng, “An accelerated method for message propagation in blockchain networks,” _CoRR_ , vol. abs/1809.00455, 2018. [Online]. Available: http://arxiv.org/abs/1809.00455
* [8] J. Kan, L. Zou, B. Liu, and X. Huang, “Boost blockchain broadcast propagation with tree routing,” _CoRR_ , vol. abs/1810.12795, 2018. [Online]. Available: http://arxiv.org/abs/1810.12795
* [9] N. Chawla, H. W. Behrens, D. Tapp, D. Boscovic, and K. S. Candan, “Velocity: Scalability improvements in block propagation through rateless erasure coding,” in _2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC)_ , 2019, pp. 447–454.
* [10] A. P. Ozisik, G. Andresen, B. N. Levine, D. Tapp, G. Bissias, and S. Katkuri, “Graphene: Efficient interactive set reconciliation applied to blockchain propagation,” in _Proceedings of the ACM Special Interest Group on Data Communication_ , ser. SIGCOMM ’19. New York, NY, USA: Association for Computing Machinery, 2019, p. 303–317. [Online]. Available: https://doi.org/10.1145/3341302.3342082
* [11] I. Eyal, A. E. Gencer, E. G. Sirer, and R. V. Renesse, “Bitcoin-ng: A scalable blockchain protocol,” in _13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16)_. Santa Clara, CA: USENIX Association, Mar. 2016, pp. 45–59. [Online]. Available: https://www.usenix.org/conference/nsdi16/technical-sessions/presentation/eyal
* [12] “Novacoin,” https://github.com/novacoin-project/novacoin/wiki/Proof-of-stake.
* [13] S. J. Alsunaidi and F. A. Alhaidari, “A survey of consensus algorithms for blockchain technology,” in _2019 International Conference on Computer and Information Sciences (ICCIS)_ , 2019, pp. 1–6.
* [14] “blockchain consensus algorithms,” https://www.cryptoexchangescript.com/blockchain-consensus-algorithms.
* [15] M. S. Ferdous, M. J. M. Chowdhury, M. A. Hoque, and A. Colman, “Blockchain consensus algorithms: A survey,” 2020.
* [16] Y. Aoki, K. Otsuki, T. Kaneko, R. Banno, and K. Shudo, “Simblock: A blockchain network simulator,” in _IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)_ , April 2019, pp. 325–329.
* [17] “Pichu and general block propagation simulator,” https://github.com/blockchain-simulator/blockchain__simulator.
* [18] A. Miller, J. Litton, A. Pachulski, N. Gupta, D. Levin, N. Spring, and B. Bhattacharjee, “Discovering bitcoin’s public topology and influential nodes,” May 2015, accessed: 2019-12-15. [Online]. Available: http://cs.umd.edu/projects/coinscope/coinscope.pdf
* [19] “testmy.net,” http://testmy.net/country.
* [20] “Verizon latency,” http://www.verizonenterprise.com/about/network/latency/.
* [21] S. K. Kim, Z. Ma, S. Murali, J. Mason, A. Miller, and M. Bailey, “Measuring ethereum network peers,” in _Proceedings of the Internet Measurement Conference 2018_ , ser. IMC ’18. New York, NY, USA: ACM, 2018, pp. 91–104. [Online]. Available: http://doi.acm.org/10.1145/3278532.3278542
* [22] K. Croman, C. Decker, I. Eyal, A. E. Gencer, A. Juels, A. Kosba, A. Miller, P. Saxena, E. Shi, E. Sirer, D. Song, and R. Wattenhofer, “On scaling decentralized blockchains,” vol. 9604, 02 2016, pp. 106–125.
# PRINCIPAL COMPONENT ANALYSIS FOR ESTIMATING PARAMETERS OF THE L1287 DENSE
CORE BY FITTING MODEL SPECTRAL MAPS INTO OBSERVED ONES
L. E. Pirogov and P. M. Zemlyanukha, Institute of Applied Physics, Russian Academy of Sciences, Nizhny Novgorod, Russia
###### Abstract
An algorithm has been developed for finding the global minimum of a
multidimensional error function by fitting model spectral maps into observed
ones. Principal component analysis is applied to reduce the dimensionality of
the model and the coupling degree between the parameters, and to determine the
region of the minimum. The $k$–nearest neighbors method is used to calculate
the optimal parameter values. The algorithm is used to estimate the physical
parameters of the contracting dense star-forming core of L1287. Maps in the
HCO+(1–0), H13CO+(1–0), HCN(1–0), and H13CN(1–0) lines, calculated within a 1D
microturbulent model, are fitted into the observed ones. Estimates are
obtained for the physical parameters of the core, including the radial
profiles of density ($\propto r^{-1.7}$), turbulent velocity ($\propto
r^{-0.4}$), and contraction velocity ($\propto r^{-0.1}$). Confidence
intervals are calculated for the parameter values. The power-law index of the
contraction-velocity radial profile, considering the determination error, is lower in absolute value than that expected in the case of gas collapse onto the protostar in free fall. This result can serve as an argument in favor of a
global contraction model for the L1287 core.
## 1 Introduction
Studies on the structure and kinematics of the dense cores of molecular clouds
provide information on the initial conditions and early stages of the star-
formation process to be utilized in theoretical models. This is especially
important when studying regions of massive star and star cluster formation, whose evolutionary scenarios are only now beginning to be developed (see, e.g., [1, 2]).
According to observational data, massive stars ($\gtrsim 8$ M⊙) and star clusters are formed in near-virial equilibrium dense cores that
are located in filament-shaped massive gas-dust complexes and clumps (see,
e.g., [3–7]). The existing theoretical models of star formation employ
different assumptions about the initial core state. Thus, the isothermal
sphere model, which is applied for describing the formation of isolated low-
mass stars [8, 9], assumes that the quasi-equilibrium spherical core with a
Bonnor-Ebert-type density profile (a flat segment near the center and a
near-$\propto r^{-2}$ dependence in the envelope) evolves towards a
singularity at the center (protostar), after which a collapse begins, which
propagates ‘‘inside-out’’. The turbulent-core model [10], proposed for
describing the formation of massive stars and star clusters, also considers,
as the initial state, a hydrostatic-equilibrium sphere characterized by
supersonic turbulence and a $\propto r^{-3/2}$ density profile [10, 11]. Both
the isothermal sphere model and the turbulent core model use density and
velocity profiles in the region where gas collapses onto the star of the form
$\propto r^{-3/2}$ and $\propto r^{-1/2}$, respectively. As shown in [12],
these profiles do not depend on the state of gas in the core.
An alternative model of global hierarchical collapse [13] proceeds from the
fact that the cores, like the parent clouds, are nonequilibrium objects that
are in an ongoing process of global collapse even before the protostar
formation, and their observed closeness to virial equilibrium is due,
specifically, to the closeness of the free fall velocity to the virial one. In
this model, which is based on the classical works of Larson and Penston [14,
15], after the formation of the protostar, the density profile in the envelope
becomes $\propto r^{-2}$ and the contraction velocity is independent of the
radial distance (see, e.g., [16, 13]). Near the protostar, where the collapse
occurs, the radial profiles of density and contraction velocity are the same
as in the isothermal-sphere and turbulent-core models. Thus, information about the density profile alone is insufficient to choose between the above models; above all, we need to know the velocity profile in the outer regions of the cores.
The kinematics of the cores is estimated mainly from observations in molecular
lines. The presence of systematic velocities along the line of sight leads to
a shift in the centers of optically thick and thin lines (see, e.g., [17]) and
to the appearance of asymmetry in the spectra of optically thick lines due to
the absorption of the emission from the inner layers by outer ones and due to
the Doppler effect (see, e.g., [18, 19]). The average contraction velocity of
the core can be estimated within more or less simple models from asymmetric
line observations at one point (see, e.g., [20–23]). To estimate the radial
profile of systematic velocity, it is necessary to fit the model spectral maps
into the observed ones, while simultaneously calculating or setting the
profiles of the other physical parameters.
Automatic fitting methods of model spectral maps into observed ones in the
case of several free parameters are rarely used today. Researchers usually
compare the spectra observed at individual points with simulated ones; less
often, they use spectral maps, varying one or two parameters and considering
the remaining ones to be specified from independent observations, theoretical
model predictions, or preliminary calculation results (see, e.g., [24–29]). In
this case, researchers either consider the systematic contraction velocity to
be constant or use a radial profile $\propto r^{-1/2}$. Finding the optimal
values while varying several parameters simultaneously may be difficult
because the error function may have more than one local minimum and the
parameters themselves may correlate with one another, leading to dependence on
the initial conditions and to poor convergence. The use of special methods to
search for the global minimum of the error function (e.g., the method of
differential evolution [30]) in the case of a model with several free
parameters may lead to considerable computational costs.
In recent years, _Principal Component Analysis_ (PCA) has been successfully applied to the study of experimental data [31]. Within this method, data are transformed to an optimal basis in which linear relations between the basis vectors are excluded. This approach
allows one to reduce the dimensionality of the data. This method is quite
often used to reduce the dimensionality of astronomical data (see, e.g., [32]
and references therein), but it can also be applied to the results of model
calculations by reducing the dimensionality of the model and determining the
range of parameter values near the minimum of the error function. The exact
values of the model parameters, which correspond to the minimum of the error
function, can be calculated by the regression method. For instance, the
$k$-nearest neighbors ($k$NN) method [33] is suited to this purpose. It is an analogue of the least-squares method but, unlike the latter, it selects from a set of models only those that correspond to the observational data by the least-squares error criterion.
This work aims to develop an algorithm that uses PCA and $k$NN to fit model
spectral maps into observed ones and to apply this algorithm for estimating
the radial profile of systematic velocity and other physical parameters of the
L1287 dense core. In this object, a cluster of low- and intermediate-mass
stars is being formed, and the observed profiles of optically thick lines show
an asymmetry pattern which indicates contraction (see, e.g., [34]). The
analysis used observational data in the lines of HCO+(1–0) and HCN(1–0), which
are indicators of high-density gas ($\gtrsim 10^{5}$ cm$^{-3}$ [18]) and the isotope lines of these molecules. Observations in different
lines of the HCO+ and HCN molecules are often used to search for massive cores
with systematic motions in the line of sight (see, e.g., [35–39]). The
HCN(1–0) line is, however, less often used for these purposes. It has three
hyperfine components with different optical depths and intensity ratios that
differ from the case of local thermodynamic equilibrium (LTE). The observed
profiles of these components may overlap. To determine the parameters of the physical and kinematic structure of the cores from the HCN(1–0) data, it is necessary to use non-LTE models (see, e.g., [40, 41]). In this work, we
calculated the excitation of HCO+, HCN, and their isotopes using a 1D
microturbulent spherically symmetric non-LTE model, the physical parameters of
which, including the systematic velocity, were functions of the distance from
the center.
This paper consists of five sections and an Appendix. Section 2 presents the algorithm for fitting model spectral maps into observed ones using PCA and $k$NN. Section 3.1 summarizes the observational data and physical properties of the L1287 core. Section 3.2 describes the application of the algorithm to the observational data on L1287 and the results of estimating the physical parameters. Sections 4 and 5 present the results and discussion and the conclusions, respectively. A description of the model is given in the Appendix.
## 2 PCA-BASED ALGORITHM FOR FITTING MODEL SPECTRAL MAPS INTO OBSERVED ONES
The process of fitting model spectral maps into observed ones by means of
conventional iterative methods for estimating physical parameters is
complicated by the fact that the multidimensional error function (the total
discrepancy between the observed and model spectra) may have several local
minima, which creates a dependence on the initial values. In this case, a
correlation between the parameters may seriously worsen the convergence.
Another approach is to calculate a set of model maps in advance for a grid of
model parameters and select those that are close to the observed parameters.
This is also a complicated approach because calculations for a discrete
$n$-dimensional grid (where $n$ is the number of parameters) that is densely
enough to cover the space of probable values may be beyond the computational
capabilities. However, such a grid would obviously be redundant. If we instead sample the parameter values randomly and calculate model maps for them, we can roughly determine the region within which the minimum of the error function lies. If we apply to the resulting region a transformation that
minimizes the relations between the parameters and transform it to a new space
of orthogonal vectors, we can reduce the dimensionality by discarding the
vectors carrying minimum information about the model parameters. If we then
fill the remaining vector space with a sufficiently dense grid and make the
inverse transformation, we obtain a filled-in space of model parameters near
the minimum, the exact value of which can be found by the regression method.
One such transformation can be PCA, a classical method of dimensionality
reduction [31]. It involves finding a linear transformation in which the initial set of parameters is represented in a vector basis (the principal components) such that the correlations between the basis vectors are minimized.
The described general principles enabled the development of an algorithm for
finding the physical parameters of dense cores of molecular clouds by fitting
model maps of the molecular lines into the observed ones. The algorithm
involves a preliminary analysis of the observational data and a choice of the free parameters; PCA-based dimensionality reduction and determination of the region of model parameters near the minimum; and the computation of the optimal values of the free parameters by the $k$NN method [33], together with the confidence region boundaries for each of them. A diagram of the algorithm is shown in Fig. 1. The optimal parameters were determined by minimizing the error
function:
$\chi^{2}=\frac{1}{N_{p}-n}\sum_{j=1}^{N}\sum_{i=1}^{m}\frac{(I_{ij}^{obs}-I_{ij}^{mod})^{2}}{\sigma_{j}^{2}}\,,$ (1)
where $N$ is the number of spatial points in the map; $m$ is the number of
channels in the spectrum; $I_{ij}^{obs}$ and $I_{ij}^{mod}$ are the observed
and model intensities in the $i$th spectral channel for the $j$th point in the
map, respectively; $\sigma_{j}$ is the standard deviation of the observed
spectrum at point $j$, calculated from intensity fluctuations outside the line
range; $N_{p}=m\times N$; and $n$ is the number of parameters in the model.
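As a minimal illustration, Eq. (1) can be evaluated as follows. This is a sketch in NumPy; the array names and shapes are assumptions of this illustration.

```python
import numpy as np

def chi2(I_obs, I_mod, sigma, n_params):
    """Reduced error function of Eq. (1).

    I_obs, I_mod : (N, m) observed and model intensities
                   (N map points, m spectral channels)
    sigma        : (N,) noise estimated outside the line range at each point
    n_params     : number of free model parameters n
    """
    N, m = I_obs.shape
    resid = ((I_obs - I_mod) ** 2 / sigma[:, None] ** 2).sum()
    return resid / (N * m - n_params)
```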
Figure 1: Parameter determination diagram for fitting model spectral maps
into observed ones.
In the course of preliminary analysis, we determined the coordinates of the
map’s central point (the center of the core) and the source velocity from the
observational data in the optically thin line. Using a random number
generator, we then created a set of model parameters with sufficiently wide
ranges and calculated the model spectral maps and values of the error
function. Within this set, we selected those parameters that satisfy the
inequality $\chi^{2}\leq\chi_{min}^{2}(3\sigma_{obs})$, where
$\chi_{min}^{2}(3\sigma_{obs})$ is the value of the error function at those
model parameters that yield the minimum value $\chi^{2}$ for a given set when
noise with a standard deviation of $3\sigma_{obs}$ is added to the observed
intensities. For the resulting parameter sample, we calculated matrices of the
direct and inverse transformation into the PC-space, reduced the number of
components, and filled the remaining space with a regular grid. The grid nodes
were transformed using the inverse transformation into the values of the
physical parameters.
Choosing a sufficient number of PCs is not a simple problem since any
dimensionality reduction method causes information loss. An overview of
possible options for solving this problem is given in the Appendix to [42] and in the references therein. In linear PCA, which we applied, the loss of information can manifest itself in biased values of the physical parameters after the direct and inverse transformations. The number of retained PCs, which determines the extent of the loss, was chosen in such a way that the ratio of the eigenvalue sum of the PC covariance matrix to the eigenvalue sum of the covariance matrix of the physical parameters differed from unity
(the value in the case of the identity transformation) by no more than 10% and
the bias in the parameter estimates did not create systematic errors [42, 43].
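The selection, reduction, and grid-filling steps can be sketched with the scikit-learn PCA implementation mentioned in Section 3.2. This is a schematic under assumptions: `params` stands for the pre-selected physical parameters (one row per model, log-scaled where appropriate) and `chi2_values` for their error-function values; neither name comes from the paper, and the placeholders below only make the sketch runnable.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
params = rng.normal(size=(246, 8))     # placeholder: selected models' parameters
chi2_values = rng.uniform(1, 2, 246)   # placeholder: their error-function values

pca = PCA()
pcs = pca.fit_transform(params)

# Smallest number of components whose cumulative variance ratio reaches 0.9.
n_keep = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), 0.9)) + 1

# Regular grid in the retained PC space, centred on the current best model and
# spanning ~6 standard deviations per component (as in Section 3.2); a denser
# grid would be used in practice.
center = pcs[np.argmin(chi2_values), :n_keep]
half_width = 3 * pcs[:, :n_keep].std(axis=0)
axes = [np.linspace(c - w, c + w, 5) for c, w in zip(center, half_width)]
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, n_keep)

# Pad the discarded components with zeros and map back to physical parameters.
full = np.zeros((len(grid), params.shape[1]))
full[:, :n_keep] = grid
new_params = pca.inverse_transform(full)
```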
The final step was to calculate the physical parameter values corresponding to
the exact minimum of the error function and estimate the errors. To this end,
we used the $k$NN method [33], which we previously applied to estimate the
physical parameters of the dense core of S255N [44]. The $k$NN method is
similar to the least-squares method, but unlike the latter, it does not adjust
the model parameters to the observed spectra but calculates the optimal
parameter values from the previously obtained spectra by the same criterion.
This method enables regression analysis between a set of model maps with
different parameters and the observed map. Thus, among all the model maps, we found the $k$ nearest to the observed map by the criterion of the minimum mean $\chi^{2}$ value. When there was no such minimum (i.e., $\chi^{2}$ increases with increasing $k$, and averaging across the models increases the error function), a denser grid of PCs was needed in the region near the supposed minimum. The optimal value of a parameter $p$ was taken as the $\chi^{2}$-weighted mean over the $k$ selected instances:
$p=\frac{\sum_{i=1}^{k}p_{i}/\chi_{i}^{2}}{\sum_{i=1}^{k}1/\chi_{i}^{2}}\,,$ (2)
where $p_{i}$ and $\chi_{i}^{2}$ are the values of the parameter and error
function for the $i$th point in the parameter space, respectively.
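A sketch of the weighted estimate of Eq. (2), assuming arrays `params` and `chi2_values` as in the previous sketch:

```python
import numpy as np

def knn_estimate(params, chi2_values, k):
    """chi^2-weighted mean of Eq. (2) over the k best models."""
    idx = np.argsort(chi2_values)[:k]        # k nearest by minimum chi^2
    w = 1.0 / chi2_values[idx]
    return (params[idx] * w[:, None]).sum(axis=0) / w.sum()
```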
Using maps of the object in several spectral lines of different optical
depths, we can narrow down the range of confidence values by fitting the model
spectra into these maps simultaneously. In this case, additional model
parameters are the relative abundances of the molecules. The total error for
the maps in several lines ($n_{lines}$) is written as
$\chi^{2}=\frac{1}{N_{p}-n}\sum_{k=1}^{n_{lines}}\sum_{j=1}^{N_{k}}\sum_{i=1}^{m_{k}}\frac{(I_{ijk}^{obs}-I_{ijk}^{mod})^{2}}{\sigma_{jk}^{2}}\,,$ (3)
where $N_{p}=\sum_{k=1}^{n_{lines}}N_{k}\times m_{k}$; $N_{k}$ is the number
of spatial points in the map in the $k$th line; $m_{k}$ is the number of
channels in the spectrum of the $k$th line; and $\sigma_{jk}$ is the standard
deviation of the observed spectrum in the $k$th line at point $j$.
Since the parameter space is curvilinear, the confidence regions for the
probable parameter values were determined by applying a cross-section of the
multidimensional error function by the hyperplane
$\chi^{2}=\chi^{2}_{\sigma}$. The calculation of $\chi^{2}_{\sigma}$ does not
depend on the choice of the basis; it is convenient to perform it in the PC
space. The threshold value was
$\chi^{2}_{\sigma}=\chi_{min}^{2}(pc_{l}^{opt}\pm\sigma_{pc_{l}})$, i.e., the
value of the error function in the case where one of the PCs ($pc_{l}$) takes
a value displaced from the optimal one by $\sigma_{pc_{l}}$ and the other
components vary in such a way that the error function takes the minimum value.
As $\sigma_{pc_{l}}$, we took a symmetric estimate of the error of $pc_{l}$, given by a diagonal element of the inverse of the Hessian matrix (see, e.g., [45–47]), whose elements were calculated as
$\beta_{lm}=\sum_{k=1}^{n_{lines}}\sum_{j=1}^{N_{k}}\sum_{i=1}^{m_{k}}\frac{1}{\sigma_{jk}^{2}}\frac{\partial I_{ijk}^{mod}}{\partial pc_{l}}\frac{\partial I_{ijk}^{mod}}{\partial pc_{m}}\,,$ (4)
where $pc_{l},pc_{m}$ are different PCs. The derivatives were calculated
numerically over the entire set of model maps. After estimating the threshold
value of $\chi^{2}_{\sigma}$, we constructed two-dimensional projections of
the error function and its hyperplane cross-section in the plane of different
pairs of model parameters and determined the confidence regions. In the
general case, these regions are not symmetrical relative to the optimal
parameter values. An example of using two-dimensional projections of the error
function for estimating the confidence ranges of model parameters in the
analysis of the L1287 molecular line maps is presented in Section 3.2.
## 3 ESTIMATING THE PHYSICAL PARAMETERS OF THE L1287 CORE
### 3.1 Observational Manifestations of L1287
The dark cloud L1287 is located at a distance of $0.93\pm 0.03$ kpc [48] and
is shaped as a filament of $\sim 10$ pc in length. A dust emission map in
continuum at a wavelength of 500 $\mu$m, which was acquired using the Herschel
telescope towards L1287 (observation ID: 1342249229 [49]), is shown in Fig. 2
(different shades). In the central part of the cloud, there is a high-density
core, which contains the source IRAS 00338+6312 [34]. In the core, two objects
of type FU Ori (RNO 1B/1C) were also detected [51–53], as well as a cluster of
IR and radio sources, likely associated with young stellar objects of low and
intermediate mass [54, 53]. Maser lines of water [55] and methanol molecules
[56] were also detected there. Molecular line observations [34, 57, 58]
revealed a bipolar outflow in the northeastern and southwestern directions.
Based on observations in the H13CO+(1–0) line, it was concluded [59] that the
central part of the core contains a rotating disk of radius $\sim 7800$ AU,
with the bipolar outflow oriented along the disk axis. Based on interferometry
observations, the inner part of the core ($\lesssim 0.1$ pc) is highly fragmented [60, 61]. In [61], a kinematic model was proposed for
the central part of the L1287 core. In this model, gas motions towards the
center from the core’s outer regions become nonisotropic near the center and
transform into the disk rotation.
Figure 2: Map of the L1287 dark cloud at wavelength 500 $\mu$m from the
Herschel telescope observational data. The integrated intensity isolines in
the HCO+(1–0) line correspond to 20% and 50% of the maximum (38.6 K km/s)
[50]. The star symbol indicates the source IRAS 00338+6312.
The emission region sizes of the L1287 core in the different molecular lines
and in continuum vary from a few tenths to one parsec [34, 62–65]; the shape
of the emission regions is roughly close to a spherically symmetric one. The
profiles of optically thick lines in the L1287 core are asymmetric and have
two peaks separated by a dip, with the amplitude of the blue peak in most of
the maps being higher than that of the red one [25, 34, 62].
We observed the L1287 core in 2015 with the OSO-20m telescope in different
lines in the frequency range of $\sim 85-89$ GHz [50]. The angular resolution
of the observations was $\sim 42^{\prime\prime}$, which corresponds to a
linear resolution of $\sim 0.19$ pc. The integrated intensity isolines of the
HCO+(1–0) line, which were superimposed onto the Herschel map, are shown in
Fig. 2. The asymmetric profiles of HCO+(1–0) and HCN(1–0), observed virtually
throughout the entire region ($\sim 0.9$ pc), and the symmetric profiles of
optically thin lines, whose intensity peaks are close to the dips in the profiles of optically thick lines, are likely to be indicative of gas contraction.
### 3.2 Map Analysis of the L1287 Core in Different Molecular Lines
The algorithm presented in Section 2 was applied for estimating the physical
parameters of the L1287 core. To this end, we performed the fitting of the
maps in the lines of HCO+(1–0), H13CO+(1–0), HCN(1–0), and H13CN(1–0),
calculated within the 1D microturbulent model (see Appendix), into the central
part of the observed region with an angular size of 80′′ ($\sim 0.4$ pc). The
physical parameters (density, turbulent and systematic velocities, and kinetic
temperature) were dependent on the distance to the center, $r$, by the law
$P=P_{0}/(1+(r/R_{0})^{\alpha_{p}})$, where $R_{0}$ is the radius of the
central layer. The free model parameters were the values $P_{0}$ for the
radial profiles of density and turbulent and systematic velocities ($n_{0}$,
$V_{turb}$, $V_{sys}$, respectively); the power-law indices $\alpha_{p}$
($\alpha_{n}$, $\alpha_{turb}$, $\alpha_{sys}$), the relative abundances of
the molecules ($X$), independent of the radial distance; and the outer radius
($R_{max}$) of the core.
The kinetic temperature profile was set at $T=80$ K$/(1+(r/R_{0})^{0.3})$ and
kept unchanged during the calculations. Meanwhile, the kinetic temperature
varied from 40 K in the central layer to $\lesssim 20$ K on the periphery, which is generally consistent with estimates based on
observational data (see, e.g., [62, 63, 65, 50]). Although the dust
temperatures for L1287 from the Herschel data are $\sim 15-24$ K
(http://www.astro.cardiff.ac.uk/research/ViaLactea/) [66], the data of
interferometric observations suggest that in the inner regions of the L1287
core ($\lesssim 0.1$ pc), where the fragmentation effects are strong, the kinetic temperature of
fragments may be as high as $\sim 80-100$ K (see [60]). Thus, in the
calculations, 40 K was taken as an average value of kinetic temperature in the
central layer, the radius of which ($R_{0}$) was set at $2\times 10^{16}$ cm
($\sim 1300$ AU).
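For concreteness, the radial parameterization can be evaluated as below. This is a sketch: the sampling radii are illustrative, and the best-fit values quoted are those reported in Table 1 below.

```python
import numpy as np

R0 = 2e16  # cm, radius of the central layer (Section 3.2)

def profile(r, P0, alpha):
    """P(r) = P0 / (1 + (r/R0)^alpha), used for density and for the
    turbulent and systematic velocities."""
    return P0 / (1.0 + (r / R0) ** alpha)

r = np.logspace(16, 18.4, 14)        # cm, illustrative layer radii out to ~0.8 pc
density = profile(r, 2.6e7, 1.7)     # cm^-3 (best-fit values of Table 1)
v_sys = profile(r, -0.66, 0.1)       # km/s; negative values denote contraction
temperature = profile(r, 80.0, 0.3)  # K, fixed profile of Section 3.2
```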
The radial velocity and the core center coordinates were estimated from the
H13CO+(1–0) line. Then, we used a map in the HCO+(1–0) line to search for the
minimum of the error function. Using a random number generator, we formed an
array of 6000 parameter values, which were randomly and equiprobably
distributed in the following ranges of eight parameters:
$n_{0}$=[$10^{6.5}…10^{9}$] cm$^{-3}$, $\alpha_{n}$=[1.3…2.5], $V_{turb}$=[1.4…7.5] km/s, $\alpha_{turb}$=[0.1…0.7], $V_{sys}$=[–1.3…–0.2] km/s, $\alpha_{sys}$=[–0.2…0.4], $X$(HCO+)=[$10^{-10.5}…10^{-8}$], $R_{max}$=[$10^{17.7}…10^{19.2}$] cm. Although we assumed that these ranges
were certain to include the optimal parameter values for the L1287 core, their
boundaries were not rigid and could be expanded by means of the inverse
transformation from the PC space.
For each value in the parameter array, we calculated a map in the HCO+(1–0)
line and the error function. Based on the accepted criterion,
$\chi^{2}\leq\chi_{min}^{2}(3\sigma_{obs})$, we selected 246 values from the
initial set. This number was enough to construct the statistics in the PC
space. For these values, we calculated a set of PCs using a procedure from the
$scikit$-$learn$ library [67]. Using the dependence of $R$, the ratio of the
sum of the diagonal components in the PC covariance matrix to the sum of the
diagonal components in the covariance matrix of the physical parameters, on
the number of components, we estimated the minimum number of PCs required to
represent the physical parameters (Fig. 3). Figure 3 shows that the five PCs
represent to a sufficient extent the eight physical parameters at a level of
$R=0.9$. For the five PCs, the difference after the inverse transformation did
not exceed 5% of the grid step for all the parameters, suggesting no
distortions in subsequent calculations and no error accumulation. Figure 3
also shows the contribution of each component to the relative covariance
matrix of the PCs.
Figure 3: Dependence of the ratio of the sum of the diagonal components in
the PC covariance matrix to the sum of the diagonal components in the
covariance matrix of the physical parameters on the number of components (red
crosses). The green circles show the contribution of an individual component
to the relative PC covariance matrix. The dashed horizontal line indicates a
cutoff level of 0.9.
In the space of the five remaining PCs, we constructed a uniform five-
dimensional grid with a center at the point of minimum of the error function;
the grid size was consistent with $6\Delta(pc_{i})$, where $\Delta(pc_{i})$ is
a standard deviation of the $i$th PC values, which was calculated from the
selected set of points. The PC array was recalculated to the array of the
physical parameter values, for which we calculated the spectral maps and
estimated the error functions. From the calculated model maps, we estimated
the optimal physical parameters from the HCO+(1–0) data by the $k$NN method.
Varying the relative abundances of H13CO+, HCN, and H13CN by the least-squares method, we fitted the corresponding model spectral maps into the observed
ones. In so doing, we also slightly adjusted the parameters within the error
ranges calculated from the HCO+(1–0) data. By comparing the set of model
spectral maps with the observed ones, we estimated the global error function
in the four lines by equation (3). The resulting model spectra proved to be
close to the observed ones up to a scale of $\sim 0.8$ pc, which confirmed the
relevance of the applied model.
Figure 4: Projections of the eight-dimensional error function $\chi^{2}$ onto
the planes of the different parameter pairs calculated from the fitting of the
model spectral maps in the lines HCO+(1–0), H13CO+(1–0), HCN(1–0) and
H13CN(1–0) into the observed maps in the L1287 core. The dependencies of the
error function on the individual parameters are given over each projection
column. The red dots in the diagrams and the red vertical lines in the upper
plots indicate the global minimum of the error function, which was obtained by
$k$NN. The confidence regions for the optimal parameter values, which were
calculated from the hyperplane $\chi_{\sigma}^{2}$ cross-sections of the error
function, are shown with blue contours and horizontal lines in the two-
dimensional projections and one-dimensional plots, respectively.
Figure 4 presents a set of projections of the eight-dimensional error function
onto the planes of the different parameter pairs and the error function
projection dependencies on each of the parameters. It follows from the two-
dimensional projections and the dependencies that the model has a global minimum, and a confidence region can be determined for each of the parameters.
Correlations are observed between some of the parameters. A clear correlation
is observed between $R_{max}$ and the relative abundance of HCO+ ($X_{0}$),
between $\alpha_{n}$ and $X_{0}$, and between the turbulent and systematic
velocities in the central layer and the corresponding power-law indices of the
radial profiles of these parameters. A weaker correlation exists between
$\alpha_{n}$ and $n_{0}$ and between $R_{max}$ and $\alpha_{n}$. The exact
position of the minimum was estimated by the $k$NN method from all the lines;
it is marked by a red dot in the two-dimensional projections and by red
vertical lines in the $\chi^{2}$ projection dependencies on individual
parameters. The confidence regions were calculated using a cross-section of
the error function by the hyperplane $\chi_{\sigma}$. The projections of the
error function cross-sections by the $\chi_{\sigma}$ hyperplane are in fact
contours in the planes of parameter pairs; they correspond to horizontal lines
on upper plots (see Fig. 4). The confidence regions are not symmetric with
respect to the optimal values. The distortions in the shape of the contours
are due to observational noise and the discrete filling of the parameter
space. When analyzing the dependencies of $\chi^{2}$ on individual parameters
in broader ranges than those shown in Fig. 4, we found additional local
minima, whose values are, however, greater than that at the global minimum and whose parameter values deviate considerably from independent estimates.
The estimates for the physical parameters of the L1287 core, which correspond
to the global minimum of the error function, and the uncertainties of these
estimates, which correspond to the boundaries of the confidence regions (Fig.
4), are given in Table 1. It should be noted that in accordance with the
specified form of the radial profiles, the physical parameter values in the
central layer are half the corresponding values of $n_{0}$, $V_{turb}$, and $V_{sys}$.
Table 1: Resulting values of the physical parameters
Parameter | Value
---|---
$n_{0}$ ($10^{7}$ cm$^{-3}$) | 2.6${}_{-1.3}^{+1.7}$
$\alpha_{n}$ | 1.7${}_{-0.3}^{+0.1}$
$V_{turb}$ (km/s) | 5.6${}_{-1.4}^{+0.7}$
$\alpha_{turb}$ | 0.44${}_{-0.13}^{+0.05}$
$V_{sys}$ (km/s) | –0.66${}_{-0.24}^{+0.21}$
$\alpha_{sys}$ | 0.1${}_{-0.13}^{+0.08}$
$R_{max}$ (pc) | 0.8${}_{-0.25}^{+0.2}$
$X$(HCO+) ($10^{-9}$) | 1.0${}_{-0.4}^{+0.5}$
$X$(H13CO+) ($10^{-11}$) | 3.7${}_{-2.0}^{+2.4}$
$X$(HCN) ($10^{-9}$) | 2.5${}_{-1.1}^{+1.4}$
$X$(H13CN) ($10^{-11}$) | 8.5${}_{-4.8}^{+5.3}$
## 4 RESULTS AND DISCUSSION
Figures 5 and 6 show the spectral maps for the central part of the L1287 core
($\sim 0.4$ pc) in the HCO+(1–0), HCN(1–0), H13CO+(1–0), and H13CN(1–0) lines
with the fitted model spectra corresponding to the global minimum of the error
function. The asymmetry and dip in the observed profiles of the optically
thick lines of HCO+(1–0) and HCN(1–0) are well reproduced by the model. In the
central and southwestern parts of the analyzed region, the spectra of the
optically thick lines exhibit high-velocity gas emission, which was ignored in
the model calculations. The slight discrepancy between the model and observed
spectra at the edges of the observed region may be due to a difference from
spherical symmetry. The diameter (1.6 pc) of the model cloud exceeds the
observed sizes of the emission regions in the different molecular lines, the
dense gas indicators, HCO+(1–0), HCN(1–0), and NH3(1,1) ($\sim 0.3-0.5$ pc)
[62, 63, 50], since it comprises the low-density outer layers, which cause the
dip in the profiles of the optically thick lines as they absorb the emission
from the central layers.
Figure 5: Results of fitting the model spectra of HCO+(1–0) (left) and
HCN(1–0) (right) (smooth red curves) into the observed ones (histograms, black
lines) in the central part of the L1287 core. The horizontal axis plots the
velocities in the range from –33 to –5 km/s.
Figure 6: Results of fitting the model spectra of H13CO+(1–0) (left) and
H13CN(1–0) (right) (smooth red curves) into the observed ones (histograms,
black lines) in the central part of the L1287 core. The horizontal axis plots
the velocities in the range from –28.5 to –7 km/s.
The calculated physical parameters of the core are consistent, considering the
errors (see Table 1), with estimates obtained from the data of independent
observations. Thus, the model column density of molecular hydrogen for a
region of radius $\sim 20^{\prime\prime}$ agrees with the value calculated
from the data of dust observations with the Herschel telescope [66]
($4.6_{-2.3}^{+3.0}\times 10^{23}$ cm$^{-2}$ and $(1.8\pm 1.2)\times 10^{23}$ cm$^{-2}$, respectively). The core mass calculated from the model for a region of radius $\sim 0.6$ pc is $\sim 1200$ M⊙; considering the error ($\gtrsim 50$%), associated primarily with the error of $n_{0}$, this mass is consistent with the value of 810 M⊙, obtained from the observations of dust for a region of
similar radius [65]. Neither does the power-law index of the radial density
profile $1.7_{-0.3}^{+0.1}$ contradict the value of $1.25\pm 0.2$, obtained
from the observations of dust in continuum [65]. Both of these estimates lie
in the value range for the power-law index of the density profile obtained for
samples of dense cores associated with regions of massive star and star
cluster formation (see, e.g., [65, 68, 69]) but are lower than 2, the value
predicted by the isothermal sphere [8] and global collapse models [13].
The model abundance ratios of the main and rarer isotopes ([HCO+]/[H13CO+] and
[HCN]/[H13CN]) are lower by a factor of $\sim 2$ than the isotope ratio
[12C]/[13C]$\sim 58$, calculated from the galactocentric dependence of this ratio [70] for $R_{G}\sim 9$ kpc (L1287). However, the uncertainties of the model abundance ratios are too high ($\gtrsim 80$%) to draw further conclusions from this discrepancy. To verify the results obtained, they should be compared with chemical modeling results.
The turbulent velocity falls rather sharply with the distance from the center
(from 2.8 km/s in the central layer to $\sim 0.6$ km/s in the outer layer),
which is necessary for reproducing the shape of the dip in the HCO+(1–0) and
HCN(1–0) spectra (solid curve $1$ in Fig. 7, right panel). The contraction
velocity decreases weakly in absolute terms with the distance from the center
($\sim 0.33$ km/s in the central layer and $\sim 0.25$ km/s in the outer
layer) (dashed curve $1$ in Fig. 7, right panel). Its average value across the
model cloud is $0.26\pm 0.09$ km/s, which does not contradict the value of
$\sim 0.22$ km/s, calculated from the HCO+(1–0) line parameters for the point
(60′′,40′′) by the formula of the two-layer model [20] (the value given in
[50] is underestimated).
Figure 7: Model radial profiles of density and kinetic temperature (left
panel) and contraction and turbulent velocities (right panel). Profiles obtained in this work are marked ($1$); those from [25] are marked ($2$). The cloud radius in the
latter model was set to 1.4 pc, consistent with a distance of 0.93 kpc to
L1287. The $1A$ mark in the right panel shows the contraction velocity profile
obtained in our model with a fixed value of the power index, 0.5.
The power-law index of the radial profile for contraction velocity obtained in
the model calculations, considering the errors, proved to be lower than the value of 0.5 expected for gas collapse onto the protostar in free fall [8, 10, 13]. In [25], the
observational data for the core 0038 + 6312 (L1287) in the HCO+(1–0), CS(2–1),
and CS(5–4) lines were compared with the calculated results for the model with
the density and contraction velocity profiles $\propto r^{-3/2}$ and $\propto
r^{-1/2}$, respectively. Although the intensities and widths of the model
spectra proved to match the observed ones in the direction of individual
positions quite well, considering the sensitivity and spectral resolution of
the data, the dip in the HCO+(1–0) line profiles was not reproduced (see
[25]). This is due to the difference of the radial profiles for velocity,
which were assumed in the model [25], from the profiles obtained in our
calculations (Fig. 7, right panel). The left panel in Fig. 7 shows the radial
profiles of density and kinetic temperature for our model and for the model
[25].
As shown in the model of the global hierarchical collapse (see, e.g., [13,
16]), if the core is globally out-of-equilibrium, it experiences contraction
with a constant velocity and this contraction continues in the outer layers
even after the protostar formation. For comparison, we fitted the model maps
into the observed ones for the case where the power-law index of the radial
profile of systematic velocity was fixed at 0.5. The corresponding velocity
profile is marked $1A$ in Fig. 7 (right panel). Figure 8 shows the observed
HCO+(1–0) and HCN(1–0) lines for the point (60′′,40′′) near the core center
and the model spectra for the power-law index of systematic velocity of 0.1,
which corresponds to the global minimum of the error function, and for the
case when the index is 0.5, respectively. Upon comparison of the spectra, the
model with the index 0.1 more accurately reproduces the intensities and widths
of the asymmetric HCO+(1–0) profiles and, specifically, the profiles of three
hyperfine HCN(1–0) components than the model with the index 0.5. A similar
conclusion holds for the other points, although in the southwestern part the high-velocity emission associated with the bipolar outflow distorts the spectral shapes more strongly (Fig. 5), which makes it more difficult to compare the models. The fact that the value obtained in the model with the power-law
index of the velocity profile as a free parameter turned out to be lower than
0.5 may suggest a likelihood of nonuniform gas contraction in the core – with
constant velocity in the extended envelope and with the $\propto r^{-1/2}$
profile in the region near the center. The use of the model with a single
power-law index gives a weighted average for the entire core in this case.
Figure 8: Observed and model profiles of the lines HCO+(1–0) (left) and
HCN(1–0) (right) towards the (60′′,40′′) position for models with different
values of the power-law index in the radial profile of contraction velocity.
Although we used a rather simplified 1D model with uniform power-law indices
for the radial profiles of the physical parameters, the developed PCA- and $k$NN-based algorithm for fitting the model spectral maps into the observed ones allows us to reproduce accurately the observed HCO+(1–0), HCN(1–0), H13CO+(1–0), and H13CN(1–0) line maps and to estimate the radial profiles of the parameters in the outer regions of the L1287 core ($r\gtrsim 0.1$ pc). Some of the differences between the model and observed spectra can be
eliminated, apparently, within more complex models with composite radial
profiles of the parameters and also within 2D or 3D models considering the
possible spatial inhomogeneity of the velocity field, rotation, and high-
velocity outflows. To reduce errors in the calculated parameters and to
confirm the conclusion about the possible global contraction of the L1287
core, further observations are required in molecular lines of different
optical depths with better spatial and spectral resolution.
A distinctive feature of the developed algorithm is that it is not tied to a specific model and is capable of simultaneously analyzing spectral maps
with different spatial resolutions and sizes, as well as maps in continuum.
The main constraint is only the computational time required to construct the
necessary statistics.
## 5 CONCLUSIONS
The results of this work can be summarized as follows:
(1) An algorithm was developed for finding the global minimum of the
multidimensional error function and calculating the optimal parameter values
when fitting model spectral maps into observed ones. The algorithm is based on
applying principal component analysis to a given range of parameters, as a
result of which a reduction is achieved in the model’s dimensionality and in
the coupling degree between the parameters. Thus, the region of the minimum is
determined. The optimal parameter values are calculated using the $k$ nearest
neighbors method. Confidence regions for the optimal parameter values are
determined using a cross-section of the error function by a hyperplane
calculated in the PC space and its projections onto the various pairs of the
parameters. The algorithm is not tied to a specific model.
(2) The algorithm was used to perform the fitting of the model maps in the
HCO+(1–0), H13CO+(1–0), HCN(1–0), and H13CN(1–0) lines into the observed maps
of the protostellar core of L1287, in which the formation of a young stellar
cluster is underway, and the asymmetry of the profiles of optically thick
lines indicates contraction. The maps were calculated within a spherically
symmetric 1D model in which the physical parameters (density and turbulent and
systematic velocities) were functions of the distance from the center. Optimal
values were calculated for the model parameters, and their errors were
determined. It was found that density in the L1287 core decreases with the
distance from the center as $\propto r^{-1.7}$ while turbulent and contraction
velocities decrease as $\propto r^{-0.4}$ and $\propto r^{-0.1}$,
respectively. The absolute value of the power-law index for the radial profile
of contraction velocity, considering the probable error, is less than 0.5, a
value expected in the case of gas collapse onto the protostar in free fall.
This result may indicate global contraction in the L1287 core, which was
predicted in several theoretical works.
## APPENDIX. MODEL DESCRIPTION
Excitation of rotational levels of the HCO+ and HCN molecules and their
isotopes and the profiles of the (1–0) transitions were calculated within a
spherically symmetric microturbulent model. The model cloud is a set of
concentric layers in which a certain physical parameter (density, kinetic
temperature, turbulent and systematic velocity) was set constant, changing
from one layer to another by the relationship
$P=P_{0}/(1+(r/R_{0})^{\alpha_{p}})$, where $r$ is the distance from the
center and $R_{0}$ is the radius of the central layer. This functional
dependence, which is a simplified form of the Plummer function, is used quite
often as a model density profile (see, e.g., [22]) to avoid singularity at the
center. In our model, this form of the dependence was used for all the
parameters. The values of $P_{0}$ and $\alpha_{p}$ for each parameter were
varied while fitting the model profiles into the observed ones. The kinetic
temperature profile was taken as $T=80$ K$/(1+(r/R_{0})^{0.3})$ and remained
unchanged during calculations. It should be noted that kinetic temperature
affects the intensities of the calculated HCO+(1–0) and HCN(1–0) lines to a
lesser degree than density and concentration. Turbulent velocity was a
parameter that gives an additional contribution, aside from the thermal one, to the local width of the lines. The relative molecular abundance was
independent of radial distance. When calculating the excitation of HCN and
H13CN, the hyperfine structure of the rotational spectrum and the overlapping
of closely located hyperfine components [40, 71] were taken into account. The
description of the model and calculation techniques for radiation transfer in
the case of HCN is given in the Appendix to [40]. In our version of this
model, the layer width increases by the power law with the distance from the
center, and the radial profile of systematic velocity, which gives a Doppler
shift to the local profile of the line, is taken into account. The
calculations were conducted for 14 layers. The calculations used collisional
probabilities of HCO+–H2 [72] and HCN–H2 taking into account hyperfine
structure [73].
Excitation of rotational levels of a certain molecule was calculated by an
iterative method, sequentially for one point in each layer, the radial
distance of which is equal to the geometric mean of the inner and outer radii
of the layer. To this end, a system of population balance equations was
solved, while the populations in other layers were considered unchanged. After
reaching the outer layer, the populations in each layer were compared with the
values obtained in the previous iteration, and the process was repeated [40].
To increase the accuracy of calculating the radiation transfer in a moving
medium, each layer was additionally divided into ten sublayers, with different
systematic velocities. A test comparison of the calculated results for this
model with the calculated results in [74] for a molecule with two energy
levels showed that the calculated populations differ by no more than 0.4% in
the case of a line optical depth of $\lesssim 60$.
The model code, written in Fortran, was controlled by means of a module
written in Python. Model spectra were calculated for the different impact
parameters. Using the convolve_fft procedure from astropy [75], the resulting maps were convolved channel by channel with a two-dimensional Gaussian of width
40′′ (the width of the main beam of the OSO-20m radio telescope at a frequency
of $\sim 90$ GHz). The model spectra were fitted into the observed ones using
a PCA- and $k$NN-based algorithm (Section 2), written in Python.
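A minimal sketch of this beam-smoothing step with astropy is shown below; the model cube and the pixel scale `PIX` are assumptions of this illustration, not values from the paper.

```python
import numpy as np
from astropy.convolution import convolve_fft, Gaussian2DKernel

FWHM = 40.0  # arcsec, OSO-20m main beam at ~90 GHz
PIX = 10.0   # arcsec per map pixel (assumed)

kernel = Gaussian2DKernel(FWHM / 2.355 / PIX)  # Gaussian stddev in pixels

cube = np.zeros((64, 32, 32))  # placeholder model cube (channels, ny, nx)

# Convolve each velocity channel with the beam, as described in the text.
smoothed = np.stack([convolve_fft(plane, kernel) for plane in cube])
```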
###### Acknowledgements.
The authors thank the reviewer Ya.N. Pavlyuchenkov for his valuable remarks
and additions. This work was supported by the Russian Science Foundation,
project no. 17-12-01256 (analysis of the results), and the Russian Foundation
for Basic Research, project no. 18-02-00660-a (program development and model
calculations).
## REFERENCES
1\. J. C. Tan, M. Beltrán, P. Caselli, F. Fontani, A. Fuente, M. R. Krumholz, C. F. McKee, and A. Stolte, in _Protostars and Planets VI_, Ed. by H. Beuther, R. S. Klessen, C. P. Dullemond, and Th. Henning (Univ. of Arizona Press, Tucson, 2014), p. 149.
2\. F. Motte, S. Bontemps, and F. Louvet, Ann. Rev. Astron. Astrophys. 56, 41
(2018).
3\. Ph. André, A. Men’shchikov, S. Bontemps, V. Könyves, et al., Astron.
Astrophys. 518, L102 (2010).
4\. Ph. André, J. Di Francesco, D. Ward-Thompson, S.-I. Inutsuka, R. E. Pudritz, and J. E. Pineda, in Protostars and Planets VI, Ed. by H. Beuther, R. S. Klessen, C. P. Dullemond, and Th. Henning (Univ. of Arizona Press, Tucson, 2014), p. 27.
5\. F. Nakamura, K. Sugitani, T. Tanaka, H. Nishitani, et al., Astrophys. J.
Lett. 791, L23 (2014).
6\. Y. Contreras, G. Garay, J. M. Rathborne, and P. Sanhueza, Mon. Not. R.
Astron. Soc. 456, 2041 (2016).
7\. L. K. Dewangan, L. E. Pirogov, O. L. Ryabukhina, D. K. Ojha, and I.
Zinchenko, Astrophys. J. 877, 1 (2019).
8\. F. H. Shu, Astrophys. J. 214, 488 (1977).
9\. F. H. Shu, F. C. Adams, and S. Lizano, Ann. Rev. Astron. Astrophys. 25, 23
(1987).
10\. C. F. McKee and J. C. Tan, Astrophys. J. 585, 850 (2003).
11\. Y. Zhang and J. C. Tan, Astrophys. J. 853, 18 (2018).
12\. D. E. McLaughlin and R. E. Pudritz, Astrophys. J. 476, 750 (1997).
13\. E. Vázquez-Semadeni, A. Palau, J. Ballesteros-Paredes, G. C. Gómez, and
M. Zamora-Avilés, Mon. Not. R. Astron. Soc. 490, 3061 (2019).
14\. R. B. Larson, Mon. Not. R. Astron. Soc. 145, 271 (1969).
15\. M. V. Penston, Mon. Not. R. Astron. Soc. 144, 425 (1969).
16\. R. Naranjo-Romero, E. Vázquez-Semadeni, and R. M. Loughnane, Astrophys.
J. 814, 48 (2015).
17\. D. Mardones, P. C. Myers, M. Tafalla, D. J. Wilner, R. Bachiller, and G.
Garay, Astrophys. J. 489, 719 (1997).
18\. N. J. Evans II, Ann. Rev. Astron. Astrophys. 37, 311 (1999).
19\. Ya. Pavlyuchenkov, D. Wiebe, B. Shustov, Th. Henning, R. Launhardt, and
D. Semenov, Astrophys. J. 689, 335 (2008).
20\. P. C. Myers, D. Mardones, M. Tafalla, J. P. Williams, and D. J. Wilner,
Astrophys. J. 465, L133 (1996).
21\. C. W. Lee, P. C. Myers, and M. Tafalla, Astrophys. J. Suppl. 136, 703
(2001).
22\. C. H. De Vries and P. C. Myers, Astrophys. J. 620, 800 (2005).
23\. J. L. Campbell, R. K. Friesen, P. G. Martin, P. Caselli, J. Kauffmann,
and J. E. Pineda, Astrophys. J. 819, 143 (2016).
24\. S. Zhou, N. J. Evans, II, C. Kömpe, and C. M. Wamsley, Astrophys. J. 404,
232 (1993).
25\. R. N. F. Walker and M. R. W. Masheder, Mon. Not. R. Astron. Soc. 285, 862
(1997).
26\. G. Narayanan, G. Moriarty-Schieven, C. K. Walker, and H. M. Butner,
Astrophys. J. 565, 319 (2002).
27\. Ya. N. Pavlyuchenkov and B. M. Shustov, Astron. Rep. 48, 315 (2004).
28\. L. Pagani, I. Ristorcelli, N. Boudet, M. Giard, A. Abergel, and J.-P.
Bernard, Astron. Astrophys. 512, A3 (2010).
29\. G. Garay, D. Mardones, Y. Contreras, J. E. Pineda, E. Servajean, and A.
E. Guzmán, Astrophys. J. 799, 75 (2015).
30\. K. R. Opara and J. Arabas, Swarm Evol. Comput. 44, 546 (2019).
31\. B. Schölkopf, A. Smola, and K.-R. Müller, in Artificial Neural Networks,
Proceedings of the ICANN 97 (1997), p. 583.
32\. M. H. Heyer and P. Schloerb, Astrophys. J. 475, 173 (1997).
33\. N. S. Altman, Am. Statist. 46, 175 (1992).
34\. J. Yang, T. Umemoto, T. Iwata, and Y. Fukui, Astrophys. J. 373, 137
(1991).
35\. P. D. Klaassen, L. Testi, and H. Beuther, Astron. Astrophys. 538, A140
(2012).
36\. F. Wyrowski, R. Güsten, K. M. Menten, H. Wiesemeyer, et al., Astron.
Astrophys. 585, A149 (2016).
37\. Y.-X. He, J.-J. Zhou, J. Esimbek, W.-G. Ji, et al., Mon. Not. R. Astron.
Soc. 461, 2288 (2016).
38\. H. Yoo, K.-T. Kim, J. Cho, M. Choi, J. Wu, N. J. Evans II, and L. M.
Ziurys, Astrophys. J. Suppl. 235, 31 (2018).
39\. N. Cunningham, S. L. Lumsden, T. J. T. Moore, L. T. Maud, and I.
Mendigutia, Mon. Not. R. Astron. Soc. 477, 2455 (2018).
40\. B. E. Turner, L. Pirogov, and Y. C. Minh, Astrophys. J. 483, 235 (1997).
41\. L. Pirogov, Astron. Astrophys. 348, 600 (1999).
42\. R. Cangelosi and F. Goriely, Biol. Direct 2, 2 (2007).
43\. I. T. Jolliffe, Principal Component Analysis, Springer Series in
Statistics (Springer, New York, 2002).
44\. P. M. Zemlyanukha, I. I. Zinchenko, S. V. Salii, O. L. Ryabukhina, and
Sh.-Yu. Liu, Astron. Rep. 62, 326 (2018).
45\. G. T. Smirnov and A. P. Tsivilev, Sov. Astron. 26, 616 (1982).
46\. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery,
Numerical Recipes in Fortran 77 (Cambridge Univ. Press, Cambridge, 1992).
47\. S. Brandt, Data Analysis: Statistical and Computational Methods for Scientists and Engineers (Springer Nature, Switzerland, 2014).
48\. K. L. J. Rygl, A. Brunthaler, M. J. Reid, K. M. Menten, H. J. van
Langevelde, and Y. Xu, Astron. Astrophys. 511, A2 (2010).
49\. S. Molinari, B. Swinyard, J. Bally, M. Barlow, et al., Astron. Astrophys.
518, L100 (2010).
50\. L. E. Pirogov, V. M. Shul’ga, I. I. Zinchenko, P. M. Zemlyanukha, A. H.
Patoka, and M. Thomasson, Astron. Rep. 60, 904 (2016).
51\. H. J. Staude and T. Neckel, Astron. Astrophys. 244, L13 (1991).
52\. S. McMuldroch, G. A. Blake, and A. I. Sargent, Astron. J. 110, 354
(1995).
53\. S. P. Quanz, Th. Henning, J. Bouwman, H. Linz, and F. Lahuis, Astrophys.
J. 658, 487 (2007).
54\. G. Anglada, L. F. Rodríguez, J. M. Girart, R. Estalella, and J. M.
Torrelles, Astrophys. J. 420, L91 (1994).
55\. D. Fiebig, Astron. Astrophys. 298, 207 (1995).
56\. C.-G. Gan, X. Chen, Z.-Q. Shen, Y. Xu, and B.-G. Ju, Astrophys. J. 763, 2
(2013).
57\. R. L. Snell, R. L. Dickman, and Y.-L. Huang, Astrophys. J. 352, 139
(1990).
58\. Y. Xu, Z.-Q. Shen, J. Yang, X. W. Zheng, et al., Astron. J. 132, 20
(2006).
59\. T. Umemoto, M. Saito, J. Yang, and N. Hirano, in Star Formation 1999, Ed. by T. Nakamoto (Nobeyama Radio Observatory, 1999), p. 227.
60\. O. Fehér, A. Kóspál, P. Ábrahám, M. R. Hogerheijde, and C. Brinch,
Astron. Astrophys. 607, A39 (2017).
61\. C. Juárez, H. B. Liu, J. M. Girart, A. Palau, G. Busquet, R. Galván-
Madrid, N. Hirano, and Y. Lin, Astron. Astrophys. 621, A140 (2019).
62\. R. Estalella, R. Mauersberger, J. M. Torrelles, G. Anglada, J. F. Gómez,
R. López, and D. Muders, Astrophys. J. 419, 698 (1993).
63\. I. Sepúlveda, G. Anglada, R. Estalella, R. López, J. M. Girart, and J.
Yang, Astron. Astrophys. 527, A41 (2011).
64\. G. Sandell and D. A. Weintraub, Astrophys. J. Suppl. 134, 115 (2001).
65\. K. E. Mueller, Y. L. Shirley, N. J. Evans II, and H. R. Jacobson,
Astrophys. J. Suppl. 143, 469 (2002).
66\. K. A. Marsh, A. P. Whitworth, and O. Lomax, Mon. Not. R. Astron. Soc.
454, 4282 (2015).
67\. F. Pedregosa, G. Varoquaux, A. Gramfort, and V. Michel, J. Mach. Learn.
Res. 12, 2825 (2011).
68\. S. J. Williams, G. A. Fuller, and T. K. Sridharan, Astron. Astrophys.
434, 257 (2005).
69\. L. E. Pirogov, Astron. Rep. 53, 1127 (2009).
70\. Y. T. Yan, J. S. Zhang, C. Henkel, T. Mufakharov, et al., Astrophys. J.
877, 154 (2019).
71\. S. Guilloteau and A. Baudry, Astron. Astrophys. 97, 213 (1981).
72\. D. R. Flower, Mon. Not. R. Astron. Soc. 305, 651 (1999).
73\. D. Ben Abdallah, F. Najar, N. Jaidane, F. Dumouchel, and F. Lique, Mon. Not. R. Astron. Soc. 419, 2441 (2012).
74\. G.-J. van Zadelhoff, C. P. Dullemond, F. F. S. van der Tak, J. A. Yates,
et al., Astron. Astrophys. 395, 373 (2002).
75\. The Astropy Collaboration, T. P. Robitaille, E. J. Tollerud, P.
Greenfield, M. Droettboom, et al., Astron. Astrophys. 558, A33 (2013).
Translated by A. Kobkova
# On $L^{12}$ square root cancellation for exponential sums associated with
nondegenerate curves in ${\mathbb{R}}^{4}$
Ciprian Demeter Department of Mathematics, Indiana University, 831 East 3rd
St., Bloomington IN 47405<EMAIL_ADDRESS>
###### Abstract.
We prove sharp $L^{12}$ estimates for exponential sums associated with
nondegenerate curves in ${\mathbb{R}}^{4}$. We place Bourgain’s seminal result
[2] in a larger framework that contains a continuum of estimates of different
flavor. We enlarge the spectrum of methods by combining decoupling with
quadratic Weyl sum estimates, to address new cases of interest. All results
are proved in the general framework of real analytic curves.
###### Key words and phrases:
decoupling, Weyl sums, square root cancellation
The author is partially supported by the NSF grant DMS-1800305
AMS subject classification: Primary 42A45, Secondary 11L07
## 1\. Introduction
Throughout this paper, $\phi_{3},\phi_{4}$ will be real analytic functions
defined on some open interval containing $[\frac{1}{2},1]$, and satisfying
$\|\phi_{k}^{\prime}\|_{C^{3}}=\sum_{1\leq n\leq 4}\max_{\frac{1}{2}\leq t\leq
1}|\phi_{k}^{(n)}(t)|\leq A_{1},\;\;k\in\\{3,4\\},$ (1)
$A_{2}\leq\left|\det\begin{bmatrix}\phi_{3}^{(3)}(t)&\phi_{3}^{(4)}(t)\\\
\phi_{4}^{(3)}(s)&\phi_{4}^{(4)}(s)\end{bmatrix}\right|\leq
A_{3},\;\;t,s\in[\frac{1}{2},1],$ (2) $|\phi_{3}^{(3)}(t)|\geq
A_{4},\;\;t\in[\frac{1}{2},1].$ (3)
$A_{1},\ldots,A_{4}$ are positive numbers that will determine the implicit
constants in various inequalities. While $\phi_{3},\phi_{4}$ being $C^{4}$
would suffice for our purposes, we choose to work with real analytic functions
for purely aesthetic reasons. The examples of most immediate interest are
power functions $\phi_{3}(t)=t^{a}$, $\phi_{4}(t)=t^{b}$, with the real
numbers $a$ and $b$ satisfying the restrictions $a\not=b$ and
$a,b\not\in\\{0,1,2\\}$.
For a finite interval $I\subset{\mathbb{Z}}$, we write (ignoring the
dependence on $\phi_{k}$)
${\mathcal{E}}_{I,N}(x)=\sum_{n\in
I}e(nx_{1}+n^{2}x_{2}+\phi_{3}(\frac{n}{N})x_{3}+\phi_{4}(\frac{n}{N})x_{4}).$
We make the following conjecture.
###### Conjecture 1.1.
Assume $\alpha\geq\beta\geq 0$ and $\alpha+\beta=3$. Let
$\omega_{3}=[0,N^{\alpha}]$ and $\omega_{4}=[0,N^{\beta}]$. Assume
$\phi_{3},\phi_{4}$ are real analytic on $(0,3)$ and satisfy (1), (2) and (3).
Then
$\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{[\frac{N}{2},N],N}(x)|^{12}dx\lesssim_{\epsilon}N^{9+\epsilon}.$
(4)
For virtually all conceivable applications of (4), both $\phi_{3}$ and
$\phi_{4}$ will satisfy (3). Because of this symmetry, our restriction
$\alpha\geq\beta$ is essentially meaningless.
Let us write $A\lesssim B$ or $A=O(B)$ if $|A|\leq CB$ for some, possibly
large, but universal constant $C$. The notation $A\sim B$ will mean that
$A\lesssim B$ and $B\lesssim A$ at the same time. We will write $A\ll B$ or
$A=o(B)$ if $|A|\leq cB$, for some small enough, universal constant $c$. The
notation $A{\;\lessapprox}\;B$ will mean $A\lesssim(\log N)^{O(1)}B$, where
$N$ will be the key scale parameter.
Since $|{\mathcal{E}}_{[\frac{N}{2},N],N}(x)|\sim N$ if
$x\in[0,o(\frac{1}{N})]\times[0,o(\frac{1}{N^{2}})]\times[0,o(1)]^{2}$, the
exponent 9 is optimal in (4). While the large value $N$ is attained by
$|{\mathcal{E}}_{[\frac{N}{2},N],N}(x)|$ for a small subset of $x$, estimate
(4) implies that the average value of the exponential sum on the given domain
is $O(N^{\frac{1}{2}+\epsilon})$, when measured in $L^{12}$. We call this
$L^{12}$ square root cancellation. The relevance of the requirement
$\alpha+\beta=3$ is that it guarantees
$|[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}|(\sqrt{N})^{12}=N^{9}.$
The case $(\alpha,\beta)=(2,1)$ of the conjecture has been settled in [2].
This inequality was proposed by Huxley [8], with $\phi_{3}(t)=t^{3/2}$,
$\phi_{4}(t)=t^{1/2}$. In [2], it serves as the main ingredient in sharpening
the record on the Lindelöf hypothesis.
The only other known case prior to our work was $(\alpha,\beta)=(3,0)$. The
variable $x_{4}$ and the function $\phi_{4}$ play no role in this case, as
$\phi_{4}(\frac{n}{N})x_{4}=O(1)$. We treat $e(\phi_{4}(\frac{n}{N})x_{4})$ as
a coefficient $c_{n}=O(1)$. In this case, (4) follows from the inequality
$\int_{[0,1]^{2}\times[0,N^{3}]}|\sum_{n\in I}c_{n}e(nx_{1}+n^{2}x_{2}+\phi_{3}(\frac{n}{N})x_{3})|^{12}dx_{1}dx_{2}dx_{3}\lesssim_{\epsilon}N^{9+\epsilon}\|c_{n}\|_{l^{\infty}}^{12}.$
Assuming $\phi_{3}$ satisfies (1) and (3), this was known as the Main
Conjecture in Vinogradov’s Mean Value Theorem, and was first solved in [10],
then in [5]. This reduction fails for all other values $\alpha<3$, since the
length of the interval $\omega_{3}$ becomes too small. The proofs in [2] and
[5] for $\alpha=2$ and $\alpha=3$ are fairly different, in spite of both
relying entirely on abstract decoupling.
Our main result here verifies Conjecture 1.1 in the range
$\frac{3}{2}\leq\alpha<2$.
###### Theorem 1.2.
Assume that $\frac{3}{2}\leq\alpha\leq 2$. Assume $\phi_{3},\phi_{4}$ are real
analytic on $(0,3)$ and satisfy (1), (2) and (3).
Then
$\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{[\frac{N}{2},N],N}(x)|^{12}dx\lesssim_{\epsilon}N^{9+\epsilon}.$
(5)
If $\phi_{3}(t)=t^{a}$, $\phi_{4}(t)=t^{b}$ with $\alpha\leq a$, $\beta\leq
b$, $a+b\leq 9$, $a\not=b$ and $a,b\not\in\\{0,1,2\\}$, a very simple
rescaling argument for each member of the sum
${\mathcal{E}}_{[1,N],N}(x)=\sum_{1\leq M\leq N\atop{M\text{
dyadic}}}{\mathcal{E}}_{[\frac{M}{2},M],N}(x)$
shows that (5) holds with ${\mathcal{E}}_{[\frac{N}{2},N],N}$ replaced with
${\mathcal{E}}_{[1,N],N}(x)$. In particular, this is always the case for the
moment curve $\phi_{3}(t)=t^{3}$, $\phi_{4}(t)=t^{4}$. Other cases such as
$(\alpha,\beta)=(2,1)$, $(a,b)=(\frac{3}{2},\frac{1}{2})$ require slightly
more sophisticated arguments similar to the one in [2], but will not be
pursued here.
Theorem 1.2 will follow from its bilinear analog. We prove this reduction in
Section 7.
###### Theorem 1.3.
Let $I_{1},I_{2}$ be intervals of length $\sim N$ in $[\frac{N}{2},N]$, with
${\operatorname{dist}\,}(I_{1},I_{2})\sim{N}$. Assume $\phi_{3},\phi_{4}$ are
real analytic on $(0,2)$ and satisfy (1), (2) and (3). Assume that
$\frac{3}{2}\leq\alpha\leq 2$.
Then
$\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{I_{1},N}(x){\mathcal{E}}_{I_{2},N}(x)|^{6}dx\lesssim_{\epsilon}N^{9+\epsilon}.$
The implicit constant in this inequality is uniform over
$A_{1},\ldots,A_{4}\sim 1$.
The reason we prove Theorem 1.2 using bilinear, as opposed to linear or
trilinear methods, is rather delicate. As explained earlier, our results are sharp: the exponent 9 in (5) cannot be lowered. Each of the decoupling
inequalities relevant to this paper has a certain critical exponent $p_{c}>2$.
Experience shows that to achieve sharp results via decoupling, this tool must
be used at the critical exponent $p_{c}$. When we apply decoupling on spatial
balls of radius $M$, we decouple the curve into arcs of length $M^{-1/2}$. It
seems very likely that the critical exponent for such a decoupling in the
linear setting is larger than 12. See [7] for a detailed discussion on this.
Because of this, using linear $L^{12}$ decoupling turns out to be inefficient.
Bilinearizing instead gives us access to the $L^{6}$ decoupling of the
parabola. This is an ideal scenario, since 6 is precisely the critical
exponent in this setting.
The fact that $12=6\times 2$ turns out to be crucial to our argument. The
other factorization $12=4\times 3$ is also inefficient. An approach based on
trilinearity would use $L^{4}$ estimates for hypersurfaces in
${\mathbb{R}}^{4}$. But the critical exponent here is $10/3$, not $4$.
Most of the paper is devoted to proving Theorem 1.3. The proof will combine
abstract decoupling methods with quadratic Weyl sum estimates. The decoupling
techniques are introduced in Section 2. While these results are by now
standard, the observation that the superficially stronger $l^{2}(L^{12})$
decoupling holds true for nondegenerate curves in the bilinear setting appears
to be new. One of the key features in our argument is the use of this
inequality in places where the $l^{12}(L^{12})$ decoupling used in [2] becomes
inefficient. We combine this finer decoupling with estimates from number
theory. In short, here is how our approach works. The initial integral
involves quartic Weyl sums, for which sharp estimates are totally out of reach
at the moment. Decoupling is applied once or twice in order to lower the
complexity of the sums, to the level of manageable quadratic Weyl sums. These
sums will appear in various combinations, and need to be tackled with extreme
care, using various counting arguments such as Lemma 4.1 and Lemma 4.2.
In Section 3, we start with a careful examination of Bourgain’s argument from
[2], for $\alpha=2$. In many ways, this case turns out to be the easiest, as
it works via just $l^{12}L^{12}$ decoupling, and without any input from number
theory. In Section 4 we introduce our new methodology, addressing the
symmetric case $\alpha=\beta=\frac{3}{2}.$ This ends up being the most
delicate case, since it captures the biggest region near the origin where
constructive (and near constructive) interference occurs. Also, it is in this
case that the curve looks most genuinely four dimensional, as both
$\omega_{3}$ and $\omega_{4}$ are large. For comparison, recall that when
$\alpha=3$ the curve degenerates to a three dimensional one. Sections 5 and 6
extend our method to the remaining cases, by successively building on each
other. The case $\frac{9}{5}\leq\alpha<2$ combines elements of both
approaches.
To the best of our knowledge, this paper represents the first systematic
effort to combine abstract decoupling with Weyl sum estimates. The results
proved here are part of the vast program initiated in [6], concerned with
proving sharp $L^{p}$ estimates for the moment curve on spatial domains
smaller than the torus. In [6] only the moment curve in ${\mathbb{R}}^{3}$ is
considered, and all estimates there rely solely on decoupling techniques.
There remain a lot of interesting related questions. One of them has to do
with proving Conjecture 1.1 in the range $2<\alpha<3$. We may speculate that
the solution would combine some of the tools from our paper with those used to
solve the case $\alpha=3$, see also Remark 3.1. Second, $L^{p}$ moments are
also worth investigating for smaller values $p<12$, in particular for $p=10$.
See for example [1] for some recent progress and some interesting
applications. Section 8 of our paper contains an example that describes some
of the enemies and limitations of square root cancellation in this setting. It
seems plausible that small cap decoupling for the parabola (see [6]) will be
the right tool to attack this problem. We hope to address some of these
questions in future work.
###### Acknowledgment.
The author is grateful to Hongki Jung and Zane Li for pointing out a few typos
in the earlier version of the manuscript.
## 2\. Decoupling for nondegenerate curves in ${\mathbb{R}}^{4}$
Let us start by recalling the decoupling for nondegenerate curves in
${\mathbb{R}}^{2}$.
###### Theorem 2.1 ([3]).
Let $\phi:[0,1]\to{\mathbb{R}}$ be a $C^{3}$ function, with
$\|\phi^{\prime}\|_{C^{2}}=A_{5}<\infty$ and $\min_{0\leq t\leq
1}|\phi^{\prime\prime}(t)|=A_{6}>0.$ Then for each $f:[0,1]\to{\mathbb{C}}$
and each ball $B_{N}\subset{\mathbb{R}}^{2}$ with radius $N$ we have
$\|\int_{[0,1]}f(t)e(tu+\phi(t)w)dt\|_{L_{u,w}^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J\subset[0,1]\atop{|J|=N^{-1/2}}}\|\int_{J}f(t)e(tu+\phi(t)w)dt\|_{L_{u,w}^{6}(B_{N})}^{2})^{1/2}.$
The implicit constant is uniform over $A_{5}\sim 1$, $A_{6}\sim 1$.
We use this result to illustrate in the simplest terms the reduction of cubic
terms used in the next section for $\alpha=2$. Namely, let us show that for
$1\leq\alpha<3$
$\int_{[0,1]^{2}}|\sum_{1\leq m\leq
M}e(mu+(m^{2}+\frac{m^{3}}{M^{\alpha}})w)|^{6}dudw\lesssim_{\epsilon}M^{3+\epsilon}.$
(6)
The term $\frac{m^{3}}{M^{\alpha}}w$ is not negligible (in the sense of Lemma
9.2), as it is not $O(1)$. However, after a change of variables and using
periodicity in $u$, we may rewrite the integral as
$\frac{1}{M^{4}}\int_{[0,M^{2}]^{2}}|\sum_{1\leq m\leq
M}e(\frac{m}{M}u+\phi(\frac{m}{M})w)|^{6}dudw,$
where $\phi(t)=t^{2}+t^{3}M^{1-\alpha}$. Note that $A_{5},A_{6}\sim 1$,
uniformly over $M$. Inequality (6) is now a standard consequence of Theorem
2.1 with $N=M^{2}$. The cubic term becomes a perturbation of the quadratic
term, and does not significantly affect the constant $A_{6}$. Theorem 2.4 will
formalize this approach in four dimensions.
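For concreteness, here is the exponent count behind (6), with the standard transference between integrals and sums left implicit. Each interval $J$ of length $N^{-1/2}=M^{-1}$ contains a single frequency $\frac{m}{M}$, and a single wave has $L^{6}$ norm $|[0,M^{2}]^{2}|^{1/6}=M^{2/3}$, so Theorem 2.1 gives
$\|\sum_{1\leq m\leq M}e(\frac{m}{M}u+\phi(\frac{m}{M})w)\|_{L^{6}([0,M^{2}]^{2})}\lesssim_{\epsilon}M^{\epsilon}(M\cdot M^{4/3})^{1/2}=M^{\frac{7}{6}+\epsilon}.$
Raising to the sixth power and multiplying by the prefactor $\frac{1}{M^{4}}$ yields $M^{-4}\cdot M^{7+\epsilon}=M^{3+\epsilon}$, which is (6).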
Throughout this section, $\phi_{3},\phi_{4}$ are arbitrary functions
satisfying (1) and (2). We denote by $E$ the extension operator associated
with the curve $\Phi$
$\Phi(t)=(t,t^{2},\phi_{3}(t),\phi_{4}(t)),\;t\in[\frac{1}{2},1].$ (7)
More precisely, for $f:[\frac{1}{2},1]\to{\mathbb{C}}$ and
$I\subset[\frac{1}{2},1]$ we write
$E_{I}f(x)=\int_{I}f(t)e(tx_{1}+t^{2}x_{2}+\phi_{3}(t)x_{3}+\phi_{4}(t)x_{4})dt.$
The following $l^{6}(L^{6})$ decoupling was proved in [2], see also [4]. It is
in fact a bilinear version of the $l^{12}L^{12}$ decoupling for the curve (7).
###### Theorem 2.2.
Let $I_{1},I_{2}$ be two intervals of length $\sim 1$ in $[\frac{1}{2},1]$,
with ${\operatorname{dist}\,}(I_{1},I_{2})\sim 1$. Let also
$f_{i}:[\frac{1}{2},1]\to{\mathbb{C}}$. Then for each ball $B_{N}$ of radius
$N$ in ${\mathbb{R}}^{4}$ we have
$\|E_{I_{1}}f_{1}E_{I_{2}}f_{2}\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\frac{1}{3}+\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{6})^{1/6}.$
The sum on the right is over intervals $J_{i}$ of length $N^{-1/2}$.
In [2], this result is used in conjunction with the following estimate, an
easy consequence of transversality.
$\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{6}\lesssim
N^{-4}\|E_{J_{1}}f_{1}\|_{L^{6}(B_{N})}^{6}\|E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{6}.$
(8)
This is inequality (13) in [4], and a detailed proof can be found there. For the reader's convenience, we sketch a somewhat informal argument below.
The Fourier transform of $|E_{J_{i}}f_{i}|^{6}$ is supported inside a
rectangular box with dimensions $\sim N^{-1}\times N^{-1}\times N^{-1}\times
N^{-1/2}$. We have the following wavepacket representation on $B_{N}$,
slightly simplified for exposition purposes
$|E_{J_{i}}f_{i}(x)|^{6}\approx\sum_{P_{i}\in{\mathcal{P}}_{i}}a_{P_{i}}1_{P_{i}}(x).$
The coefficients $a_{P_{i}}$ are nonnegative reals. The rectangular boxes
$P_{i}$ have dimensions $\sim N\times N\times N\times N^{1/2}$ and tile
$B_{N}$. They can be thought of as $N^{1/2}$-neighborhoods of cubes with
diameter $\sim N$ inside hyperplanes ${\mathcal{H}}_{i}$. Since
${\operatorname{dist}\,}(J_{1},J_{2})\sim 1$, the angle between the normal
vectors of ${\mathcal{H}}_{i}$ is $\sim 1$. Thus, we have that $|P_{1}\cap
P_{2}|\lesssim N^{3}$. We conclude by writing
$\displaystyle\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{6}$
$\displaystyle\sim\sum_{P_{1}\in{\mathcal{P}}_{1}}\sum_{P_{2}\in{\mathcal{P}}_{2}}a_{P_{1}}a_{P_{2}}|P_{1}\cap
P_{2}|$ $\displaystyle\lesssim
N^{-4}\sum_{P_{1}\in{\mathcal{P}}_{1}}\sum_{P_{2}\in{\mathcal{P}}_{2}}a_{P_{1}}a_{P_{2}}|P_{1}||P_{2}|$
$\displaystyle\approx
N^{-4}\|E_{J_{1}}f_{1}\|_{L^{6}(B_{N})}^{6}\|E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{6}.$
It is worth observing that $\lesssim$ in inequality (8) is essentially an
(approximate) similarity $\approx$, making (8) extremely efficient. Indeed,
since $P_{1}$ and $P_{2}$ intersect $B_{N}$, we have that $|P_{1}\cap
P_{2}|\sim N^{3}$.
To address new values of $\alpha$ in this paper, we will need the following
$l^{2}(L^{6})$ decoupling. This implies the previous $l^{6}(L^{6})$
decoupling, and provides a critical improvement in the cases when the terms in
the sum are of significantly different sizes.
###### Theorem 2.3.
Let $I_{1},I_{2}$ be two intervals of length $\sim 1$ in $[\frac{1}{2},1]$,
with ${\operatorname{dist}\,}(I_{1},I_{2})\sim 1$. Let also
$f_{i}:[\frac{1}{2},1]\to{\mathbb{C}}$. Then for each ball $B_{N}$ of radius
$N$ in ${\mathbb{R}}^{4}$ we have
$\|E_{I_{1}}f_{1}E_{I_{2}}f_{2}\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{2})^{1/2}.$
The sum on the right is over intervals $J$ of length $N^{-1/2}$.
###### Proof.
We briefly sketch the argument, which closely follows the one in [2]. Let
$b(N)$ be the best constant such that
$\|E_{I_{1}}f_{1}E_{I_{2}}f_{2}\|_{L^{6}(B_{N})}\leq b(N)(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|_{L^{6}(B_{N})}^{2})^{1/2}$
holds for all functions and balls as above. Fix $B_{N}$ and let
${\mathcal{B}}$ be a finitely overlapping cover of $B_{N}$ with balls $\Delta$
of radius $N^{2/3}$. It will soon become clear that the exponent $2/3$ is
chosen in order to make cubic terms negligible. Applying this inequality on
each $\Delta$, then summing up using Minkowski’s inequality shows that
$\|E_{I_{1}}f_{1}E_{I_{2}}f_{2}\|_{L^{6}(B_{N})}\leq
b(N^{2/3})(\sum_{H_{1}\subset I_{1}}\sum_{H_{2}\subset
I_{2}}\|E_{H_{1}}f_{1}E_{H_{2}}f_{2}\|_{L^{6}(B_{N})}^{2})^{1/2}.$
The intervals $H$ have length $N^{-1/3}$. We next analyze each term in the
sum. Let $l_{i}$ be the left endpoint of $H_{i}$, and write a generic point
$t_{i}\in H_{i}$ as $t_{i}=l_{i}+s_{i}$, with $s_{i}\in[0,N^{-1/3}]$. We use
Taylor’s formula for $k\in\\{3,4\\}$
$\phi_{k}(t_{i})=\phi_{k}(l_{i})+\phi_{k}^{\prime}(l_{i})s_{i}+\frac{\phi_{k}^{\prime\prime}(l_{i})}{2}s_{i}^{2}+\psi_{k,i}(s_{i}),$
where, due to (1), $\|\psi_{k,i}\|_{L^{\infty}([0,N^{-1/3}])}=O(\frac{1}{N})$.
Let us write
$\begin{cases}y_{1}=x_{1}+2l_{1}x_{2}+\phi_{3}^{\prime}(l_{1})x_{3}+\phi_{4}^{\prime}(l_{1})x_{4}\\\
y_{2}=x_{1}+2l_{2}x_{2}+\phi_{3}^{\prime}(l_{2})x_{3}+\phi_{4}^{\prime}(l_{2})x_{4}\\\
y_{3}=x_{2}+\frac{\phi_{3}^{\prime\prime}(l_{1})}{2}x_{3}+\frac{\phi_{4}^{\prime\prime}(l_{1})}{2}x_{4}\\\
y_{4}=x_{2}+\frac{\phi_{3}^{\prime\prime}(l_{2})}{2}x_{3}+\frac{\phi_{4}^{\prime\prime}(l_{2})}{2}x_{4}\end{cases}.$
It follows that
$|E_{H_{1}}f_{1}(x)E_{H_{2}}f_{2}(x)|=$
$|\int_{[0,N^{-1/3}]^{2}}f_{1}(l_{1}+s_{1})e(s_{1}y_{1}+s_{1}^{2}y_{3})f_{2}(l_{2}+s_{2})e(s_{2}y_{2}+s_{2}^{2}y_{4})e(L(y,s_{1},s_{2}))ds_{1}ds_{2}|.$
Here
$L(y,s_{1},s_{2})=x_{3}(\psi_{3,1}(s_{1})+\psi_{3,2}(s_{2}))+x_{4}(\psi_{4,1}(s_{1})+\psi_{4,2}(s_{2})).$
Lemma 9.1 shows that $x_{3},x_{4}$ depend linearly on
$y_{1},y_{2},y_{3},y_{4}$ with coefficients $O(1)$. This allows us to write
$e(L(y,s_{1},s_{2}))=e(\sum_{i=1}^{4}y_{i}(g_{i}(s_{1})+h_{i}(s_{2})))$
with $\|g_{i}\|_{\infty},\|h_{i}\|_{\infty}=O(\frac{1}{N})$. Letting
$\bar{f}_{i}(s_{i})=f_{i}(l_{i}+s_{i})$ and
$\begin{cases}\eta_{1}(s_{1},s_{2})=s_{1}+g_{1}(s_{1})+h_{1}(s_{2})\\\
\eta_{3}(s_{1},s_{2})=s_{1}^{2}+g_{3}(s_{1})+h_{3}(s_{2})\\\
\eta_{2}(s_{1},s_{2})=s_{2}+g_{2}(s_{1})+h_{2}(s_{2})\\\
\eta_{4}(s_{1},s_{2})=s_{2}^{2}+g_{4}(s_{1})+h_{4}(s_{2})\end{cases}$
we write
$|E_{H_{1}}f_{1}(x)E_{H_{2}}f_{2}(x)|=$
$|\int_{[0,N^{-1/3}]^{2}}\bar{f}_{1}(s_{1})e(y_{1}\eta_{1}(s_{1},s_{2})+y_{3}\eta_{3}(s_{1},s_{2}))\bar{f}_{2}(s_{2})e(y_{2}\eta_{2}(s_{1},s_{2})+y_{4}\eta_{4}(s_{1},s_{2}))ds_{1}ds_{2}|.$
For $\bar{J}_{i}\subset[0,N^{-1/3}]$ we write
${\mathcal{I}}_{\bar{J}_{1},\bar{J}_{2}}(y)=$
$|\int_{\bar{J}_{1}\times\bar{J}_{2}}\bar{f}_{1}(s_{1})e(y_{1}\eta_{1}(s_{1},s_{2})+y_{3}\eta_{3}(s_{1},s_{2}))\bar{f}_{2}(s_{2})e(y_{2}\eta_{2}(s_{1},s_{2})+y_{4}\eta_{4}(s_{1},s_{2}))ds_{1}ds_{2}|.$
This is the extension operator associated with the surface
$(\eta_{1},\ldots,\eta_{4})$, applied to the function $f_{1}\otimes f_{2}$.
We use Lemma 9.1 to write
$\int_{B_{N}}|E_{J_{1}}f_{1}(x)E_{J_{2}}f_{2}(x)|^{6}dx=\int_{\bar{B}_{N}}{\mathcal{I}}_{\bar{J}_{1},\bar{J}_{2}}(y)^{6}dy.$
Here $\bar{B}_{N}$ is a ball of radius $\sim N$ and $J_{i}=\bar{J}_{i}+l_{i}$.
Note that the surface $(\eta_{1},\ldots,\eta_{4})$ is within $O(N^{-1})$ from
the surface
$(s_{1},s_{1}^{2},s_{2},s_{2}^{2}),\;\;s_{i}\in[0,N^{-1/3}],$
so, for decoupling purposes, the two surfaces are indistinguishable when
paired with spatial variables $y$ ranging through a ball of radius $N$. The
latter surface admits an $l^{2}(L^{6})$ decoupling, as can be easily seen by
using Theorem 2.1 twice. The same remains true for the surface
$(\eta_{1},\ldots,\eta_{4})$, and thus
$\|{\mathcal{I}}_{[0,N^{-1/3}],[0,N^{-1/3}]}\|_{L^{6}({\bar{B}_{N}})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{{\bar{J}_{1},\bar{J}_{2}}\subset[0,N^{-1/3}]}\|{\mathcal{I}}_{\bar{J}_{1},\bar{J}_{2}}\|^{2}_{L^{6}({\bar{B}_{N}})})^{1/2}.$
The sum on the right is over intervals of length $N^{-1/2}$. If we undo the
change of variables we find
$\|E_{H_{1}}f_{1}E_{H_{2}}f_{2}\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
H_{1}}\sum_{J_{2}\subset
H_{2}}\|E_{J_{1}}f_{1}E_{J_{2}}f_{2}\|^{2}_{L^{6}(B_{N})})^{1/2}.$
Putting things together, we have proved the bootstrapping inequality
$b(N)\lesssim_{\epsilon}N^{\epsilon}b(N^{2/3}).$
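Indeed, iterating this inequality $O(\log\log N)$ times and using $1+\frac{2}{3}+(\frac{2}{3})^{2}+\ldots=3$ gives
$b(N)\lesssim_{\epsilon}N^{\epsilon(1+\frac{2}{3}+(\frac{2}{3})^{2}+\ldots)}b(O(1))\lesssim_{\epsilon}N^{3\epsilon}.$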
We conclude that $b(N)\lesssim_{\epsilon}N^{\epsilon}$, as desired.
∎
We will also record the following close relative of Theorem 2.3, which will be needed in the next sections.
###### Theorem 2.4.
Assume $\psi_{1},\ldots,\psi_{4}:[-1,1]\to{\mathbb{R}}$ have $C^{3}$ norm
$O(1)$, and in addition satisfy
$|\psi_{2}^{\prime\prime}(t)|,|\psi_{3}^{\prime\prime}(t)|\ll
1,\;\forall\;|t|\leq 1$
and
$|\psi_{1}^{\prime\prime}(t)|,|\psi_{4}^{\prime\prime}(t)|\sim
1,\;\forall\;|t|\leq 1.$
Let $E$ be the extension operator associated with the surface
$\Psi(\xi_{1},\xi_{2})=(\xi_{1},\xi_{2},\psi_{1}(\xi_{1})+\psi_{2}(\xi_{2}),\psi_{3}(\xi_{1})+\psi_{4}(\xi_{2})),\;\;|\xi_{1}|,|\xi_{2}|\leq
1.$
More precisely, for $F:[-1,1]^{2}\to{\mathbb{C}}$, $R\subset[-1,1]^{2}$ and
$x\in{\mathbb{R}}^{4}$ we write
$E_{R}F(x)=\int_{R}F(\xi_{1},\xi_{2})e(x\cdot\Psi(\xi_{1},\xi_{2}))d\xi_{1}d\xi_{2}.$
Then for each ball $B_{N}\subset{\mathbb{R}}^{4}$ with radius $N$ we have
$\|E_{[-1,1]^{2}}F\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{H_{1},H_{2}\subset[-1,1]}\|E_{H_{1}\times
H_{2}}F\|^{2}_{L^{6}(B_{N})})^{1/2},$ (9)
where the sum is taken over intervals of length $N^{-1/2}$.
In particular, for any constant coefficients $c_{m_{1},m_{2}}\in{\mathbb{C}}$
we have
$\|\sum_{m_{1}\leq N^{1/2}}\sum_{m_{2}\leq
N^{1/2}}c_{m_{1},m_{2}}e(x\cdot\Psi(\frac{m_{1}}{N^{1/2}},\frac{m_{2}}{N^{1/2}}))\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}\|c_{m_{1},m_{2}}\|_{l^{2}}|B_{N}|^{1/6},$
(10)
while if $M\geq N^{1/2}$ we have
$\|\sum_{m_{1}\leq M}\sum_{m_{2}\leq
M}c_{m_{1},m_{2}}e(x\cdot\Psi(\frac{m_{1}}{M},\frac{m_{2}}{M}))\|_{L^{6}(B_{N})}$
(11) $\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1},J_{2}}\|\sum_{m_{1}\in
J_{1}}\sum_{m_{2}\in
J_{2}}c_{m_{1},m_{2}}e(x\cdot\Psi(\frac{m_{1}}{M},\frac{m_{2}}{M}))\|^{2}_{L^{6}(B_{N})})^{1/2},$
with $J_{i}$ intervals of length $\sim MN^{-1/2}$ partitioning $[1,M]$.
The implicit constants in both inequalities are independent of $N$, $M$ and of
$\psi_{i}$.
###### Proof.
The exponential sum estimates (10), (11) are standard consequences of (9), so
we will focus on proving the latter. When
$\Psi(\xi_{1},\xi_{2})=(\xi_{1},\xi_{2},C_{1}\xi_{1}^{2}+C_{2}\xi_{2}^{2},C_{3}\xi_{1}^{2}+C_{4}\xi_{2}^{2})$
with $|C_{1}|,|C_{4}|\sim 1$ and $|C_{2}|,|C_{3}|\ll 1$, the result follows by
applying $l^{2}(L^{6})$ Theorem 2.1 twice (after an initial affine change of
variables that reduces it to the case $C_{1}=C_{4}=1$, $C_{2}=C_{3}=0$).
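To spell out this reduction (a routine linear algebra step, recorded here for completeness): since $|C_{1}C_{4}-C_{2}C_{3}|\sim 1$, the matrix $M=\begin{bmatrix}C_{1}&C_{2}\\\ C_{3}&C_{4}\end{bmatrix}$ is invertible with $\|M^{-1}\|=O(1)$, and replacing $(x_{3},x_{4})$ with $M^{T}(x_{3},x_{4})$ in the pairing $x\cdot\Psi$ turns the last two components of the surface into $(\xi_{1}^{2},\xi_{2}^{2})$. Theorem 2.1 can then be applied in the pairs of variables $(\xi_{1};x_{1},x_{3})$ and $(\xi_{2};x_{2},x_{4})$ separately, via Minkowski's inequality.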
We use a bootstrapping argument similar to the one in Theorem 2.3. Let $d(N)$
be the smallest constant in (9). We need to prove that
$d(N)\lesssim_{\epsilon}N^{\epsilon}$.
We first note that
$\|E_{[-1,1]^{2}}F\|_{L^{6}(B_{N})}\leq
d(N^{2/3})(\sum_{U_{1},U_{2}\subset[-1,1]}\|E_{U_{1}\times
U_{2}}F\|^{2}_{L^{6}(B_{N})})^{1/2},$
where $U_{i}$ are intervals of length $N^{-1/3}$ centered at $l_{i}$. When
$|t|<\frac{1}{2}N^{-1/3}$ we have
$\psi_{i}(t)=\psi_{i}(l_{i})+\psi_{i}^{\prime}(l_{i})t+C_{i}t^{2}+O(\frac{1}{N}),$
where $|C_{1}|,|C_{4}|\sim 1$ and $|C_{2}|,|C_{3}|\ll 1$. It follows that,
after an affine change of variables, the restriction of $\Psi$ to $U_{1}\times
U_{2}$ may be parametrized as
$(\xi_{1},\xi_{2},C_{1}\xi_{1}^{2}+C_{2}\xi_{2}^{2}+O(\frac{1}{N}),C_{3}\xi_{1}^{2}+C_{4}\xi_{2}^{2}+O(\frac{1}{N})),\;\;|\xi_{1}|,|\xi_{2}|=O(N^{-1/3}).$
We decouple this on $B_{N}$, using the observation at the beginning of the
proof. We find
$\|E_{U_{1}\times
U_{2}}F\|_{L^{6}(B_{N})}\lesssim_{\epsilon}N^{\epsilon}(\sum_{H_{1}\subset
U_{1},H_{2}\subset U_{2}}\|E_{H_{1}\times
H_{2}}F\|^{2}_{L^{6}(B_{N})})^{1/2}.$
It follows that $d(N)\lesssim_{\epsilon}N^{\epsilon}d(N^{2/3})$, which forces
$d(N)\lesssim_{\epsilon}N^{\epsilon}$.
∎
## 3\. Bourgain’s argument for the case $\alpha=2$
For the remainder of the paper, we will use the following notation
${\mathcal{E}}_{I}(x)=\sum_{n\in I}e(N\Phi(\frac{n}{N})x)=\sum_{n\in
I}e(nx_{1}+\frac{n^{2}}{N}x_{2}+\phi_{3}(\frac{n}{N})Nx_{3}+\phi_{4}(\frac{n}{N})Nx_{4}).$
Note that, compared to ${\mathcal{E}}_{I,N}$, we dropped the subscript $N$ and
renormalized the variables $x_{2}$, $x_{3}$ and $x_{4}$. Letting
$\Omega=[0,N]^{3}\times[0,1]$
and using periodicity in $x_{1}$, we need to prove that
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{9+\epsilon}.$
Bourgain’s argument from [2] for the case $\alpha=2$ of Theorem 1.3 involves
three successive decouplings. We simplify it slightly it and reduce it to only
two decouplings.
Step 1. Cover $\Omega$ with cubes $B$ of side length $1$, apply $l^{6}(L^{6})$
decoupling (Theorem 2.2) on each $B$ (or rather $NB$, after rescaling), then
sum these estimates to get
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{2+\epsilon}\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}.$
Here $J_{1},J_{2}$ are intervals of length $N^{1/2}$.
The remaining part of the argument will show the uniform estimate
$O(N^{6+\epsilon})$ for each term
$\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}.$
Fix $J_{i}=[h_{i},h_{i}+N^{1/2}]$.
Step 2. Note that when $x\in\Omega$
$|{\mathcal{E}}_{J_{i}}(x)|=|\sum_{m\leq
N^{1/2}}c_{m}(x_{4})e(mu_{i}+m^{2}w_{i}+\eta_{i}(m)x_{3})|$
where
$\begin{cases}u_{i}=x_{1}+\frac{2h_{i}}{N}x_{2}+\phi_{3}^{\prime}(\frac{h_{i}}{N})x_{3}\\\
w_{i}=\frac{x_{2}}{N}+\phi_{3}^{\prime\prime}(\frac{h_{i}}{N})\frac{x_{3}}{2N}\\\
\eta_{i}(m)={m^{3}}\frac{\phi_{3}^{\prime\prime\prime}(\frac{h_{i}}{N})}{3!N^{2}}+{m^{4}}\frac{\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{i}}{N})}{4!N^{3}}+\ldots\end{cases}.$
(12)
We hide the whole contribution from $x_{4}$ into the coefficient
$c_{m}(x_{4})$. Indeed, since
$\phi_{4}(\frac{h_{i}+m}{N})Nx_{4}=\phi_{4}(\frac{h_{i}}{N})Nx_{4}+m\phi_{4}^{\prime}(\frac{h_{i}}{N})x_{4}+O(1),$
$x_{4}$ does not contribute significantly with quadratic or higher order
terms, so it produces no cancellations. We will only use that
$c_{m}(x_{4})=O(1)$.
At this point, we seek a change of variables. We want the new domain of
integration to be a rectangular box, to allow us to separate the four-variable
integral $\int_{\Omega}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$ into
the product of two-variable integrals. Note that the ranges of
$x_{1},x_{2},x_{3}$ are the same, $[0,N]$, but $x_{4}$ is restricted to the
smaller interval $[0,1]$. We cannot use periodicity to extend the range of
$x_{4}$ to $[0,N]$, because the individual waves
$e(m\phi_{4}^{\prime}(\frac{h_{i}}{N})x_{4})$ have different periods with
respect to the variable $x_{4}$. Because of this, the variable $x_{4}$ is practically useless from this point on; it will not generate oscillations. To
generate a fourth variable with range $[0,N]$ for the purpose of a change of
coordinates, Bourgain produces a piece of magic.
First, he applies (8) on each cube $NB$
$\displaystyle\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$
$\displaystyle=N^{-4}\int_{NB}|{\mathcal{E}}_{J_{1}}(\frac{\cdot}{N}){\mathcal{E}}_{J_{2}}(\frac{\cdot}{N})|^{6}$
$\displaystyle\lesssim
N^{-8}\int_{NB}|{\mathcal{E}}_{J_{1}}(\frac{\cdot}{N})|^{6}\int_{NB}|{\mathcal{E}}_{J_{2}}(\frac{\cdot}{N})|^{6}=\int_{B}|{\mathcal{E}}_{J_{1}}|^{6}\int_{B}|{\mathcal{E}}_{J_{2}}|^{6}.$
Second, he uses the following abstract inequality, which relies only on the positivity of $|{\mathcal{E}}_{J_{i}}|^{6}$
$\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{J_{1}}|^{6}\int_{B}|{\mathcal{E}}_{J_{2}}|^{6}\lesssim\int_{\Omega}dx\int_{(y,z)\in[-1,1]^{4}\times[-1,1]^{4}}|{\mathcal{E}}_{J_{1}}(x+y){\mathcal{E}}_{J_{2}}(x+z)|^{6}dydz.$
Indeed, for $x\in B$ we have $B\subset x+[-1,1]^{4}$, hence $\int_{B}|{\mathcal{E}}_{J_{i}}|^{6}\leq\int_{[-1,1]^{4}}|{\mathcal{E}}_{J_{i}}(x+y)|^{6}dy$; averaging over $x\in B$ and summing over the unit cubes $B$ gives the claim.
Using periodicity in the $y_{1},z_{1}$ variables, this is
$\lesssim\frac{1}{N^{2}}\int_{x_{1}\in[0,N]\atop{}_{x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4}\in[-1,1]}}dx_{1}\ldots
dz_{4}\int_{y_{1},z_{1},x_{2},x_{3}\in[0,N]}|{\mathcal{E}}_{J_{1}}(x+y){\mathcal{E}}_{J_{2}}(x+z)|^{6}dy_{1}dz_{1}dx_{2}dx_{3}.$
In short, the variable $x_{1}$ is now replaced with the new variables $y_{1}$
and $z_{1}$. It remains to prove that
$\int_{y_{1},z_{1},x_{2},x_{3}\in[0,N]}|{\mathcal{E}}_{J_{1}}(x+y){\mathcal{E}}_{J_{2}}(x+z)|^{6}dy_{1}dz_{1}dx_{2}dx_{3}\lesssim_{\epsilon}N^{7+\epsilon},$
(13)
uniformly over $x_{1},x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4}$. With these
variables fixed, we make the affine change of variables
$(y_{1},z_{1},x_{2},x_{3})\mapsto(u_{1},u_{2},w_{1},w_{2})$
$\begin{cases}u_{1}=(y_{1}+x_{1})+\frac{2h_{1}}{N}(x_{2}+y_{2})+\phi_{3}^{\prime}(\frac{h_{1}}{N})(x_{3}+y_{3})\\\
u_{2}=(z_{1}+x_{1})+\frac{2h_{2}}{N}(x_{2}+z_{2})+\phi_{3}^{\prime}(\frac{h_{2}}{N})(x_{3}+z_{3})\\\
w_{1}=\frac{x_{2}+y_{2}}{N}+\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})\frac{x_{3}+y_{3}}{2N}\\\
w_{2}=\frac{x_{2}+z_{2}}{N}+\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})\frac{x_{3}+z_{3}}{2N}\end{cases}.$
(14)
The Jacobian is $\sim\frac{1}{N^{2}}$, due to (3). Note that
$x_{3}+y_{3}=A(w_{1}-w_{2})$, where $A$ depends just on $h_{1},h_{2}$, and
$|A|\sim N$. Using (12) we may write the last integral as
$N^{2}\times$ $\int_{|u_{i}|\lesssim N\atop{}_{|w_{i}|\lesssim
1}}|\sum_{m_{i}\leq
N^{\frac{1}{2}}}c_{m_{1},m_{2}}e(m_{1}u_{1}+m_{1}^{2}w_{1}+m_{2}u_{2}+m_{2}^{2}w_{2}+(\eta_{1}(m_{1})+\eta_{2}(m_{2}))A(w_{1}-w_{2}))|^{6}.$
(15)
The coefficient $c_{m_{1},m_{2}}$ depends only on
$m_{1},m_{2},x_{4},y_{4},z_{4}$, but not on the variables of integration
$u_{i},w_{i}$. The argument of each exponential may be rewritten as
$\frac{m_{1}}{N^{1/2}}u_{1}N^{1/2}+(\psi_{1}(\frac{m_{1}}{N^{1/2}})+\psi_{2}(\frac{m_{2}}{N^{1/2}}))w_{1}N+$
$\frac{m_{1}}{N^{1/2}}u_{2}N^{1/2}+(\psi_{3}(\frac{m_{1}}{N^{1/2}})+\psi_{4}(\frac{m_{2}}{N^{1/2}}))w_{2}N$
where
$\begin{cases}\psi_{1}(\xi)=\xi^{2}+&{\xi^{3}}\frac{A\phi_{3}^{\prime\prime\prime}(\frac{h_{1}}{N})}{3!N^{3/2}}+{\xi^{4}}\frac{A\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{1}}{N})}{4!N^{2}}+\ldots\\\
\psi_{2}(\xi)=&{\xi^{3}}\frac{A\phi_{3}^{\prime\prime\prime}(\frac{h_{2}}{N})}{3!N^{3/2}}+{\xi^{4}}\frac{A\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{2}}{N})}{4!N^{2}}+\ldots\\\
\psi_{3}(\xi)=&{-\xi^{3}}\frac{A\phi_{3}^{\prime\prime\prime}(\frac{h_{1}}{N})}{3!N^{3/2}}-{\xi^{4}}\frac{A\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{1}}{N})}{4!N^{2}}-\ldots\\\
\psi_{4}(\xi)=\xi^{2}-&{\xi^{3}}\frac{A\phi_{3}^{\prime\prime\prime}(\frac{h_{2}}{N})}{3!N^{3/2}}-{\xi^{4}}\frac{A\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{2}}{N})}{4!N^{2}}-\ldots\end{cases}.$
These functions satisfy the requirements in Theorem 2.4. The integral (15) is
the same as
$N^{-3}\int_{u_{i}=O(N^{3/2})\atop{}_{w_{i}=O(N)}}|\sum_{m_{1}\leq
N^{1/2}}\sum_{m_{2}\leq
N^{1/2}}c_{m_{1},m_{2}}e((u_{1},u_{2},w_{1},w_{2})\cdot\Psi(\frac{m_{1}}{N^{1/2}},\frac{m_{2}}{N^{1/2}}))|^{6}du_{1}du_{2}dw_{1}dw_{2}.$
If we cover the domain of integration with $\sim N$ balls $B_{N}$ and apply
(10) on each of them, we may dominate the above expression by
$N^{-3}N(N^{\frac{1}{2}+\epsilon}N^{4/6})^{6}=N^{5+\epsilon}.$
This proves (13) and ends the proof. Note that this argument treats the cubic
and higher order terms as perturbations of quadratic factors, as explained in
the proof of (6).
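For the reader's convenience, the exponents assemble as follows (a bookkeeping check, with all $\epsilon$-losses absorbed into the final exponent). The abstract inequality combined with (13) gives, uniformly in $J_{1},J_{2}$,
$\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}\lesssim_{\epsilon}\frac{1}{N^{2}}\cdot N\cdot N^{7+\epsilon}=N^{6+\epsilon},$
and since there are $\sim N^{1/2}\times N^{1/2}=N$ pairs $(J_{1},J_{2})$, Step 1 yields
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{2+\epsilon}\cdot N\cdot N^{6+\epsilon}=N^{9+2\epsilon},$
which is the desired bound after adjusting $\epsilon$.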
In summary, what is special about the case $\alpha=2$ is that the range of
$x_{3}$ in our initial integral over $\Omega$ is $[0,N]$. This was needed in
producing the large spatial range $w_{i}=O(N)$ for our final variables,
crucial for the application of (10). This inequality provides decoupling into
point masses, reducing the initial exponential sum to individual waves. In
Section 6 we will see that when $\alpha$ is slightly smaller than 2,
inequality (11) will have to replace (10), leading to quadratic Weyl sums
whose handling demands number theory.
###### Remark 3.1.
It is not clear whether a version of Bourgain’s method could be made to work
in the range $2<\alpha<3$. If successful, this would potentially provide a new
argument for Vinogradov’s Mean Value Theorem in ${\mathbb{R}}^{3}$. Decoupling
on cubes with size $N^{\beta-1}$ and using (8) on balls $B_{N^{\beta}}$ leads
to variables $y_{1},z_{1}$ with associated period equal to 1, much bigger than
their range $N^{\beta-1}$. The change of variables (14) is no longer efficient
in this case.
## 4\. Proof of Theorem 1.3 in the case $\alpha=\frac{3}{2}$
This time we let $\Omega=[0,1]\times[0,N]\times[0,N^{1/2}]\times[0,N^{1/2}]$.
Recall that
${\mathcal{E}}_{I}(x)=\sum_{n\in
I}e(nx_{1}+\frac{n^{2}}{N}x_{2}+\phi_{3}(\frac{n}{N})Nx_{3}+\phi_{4}(\frac{n}{N})Nx_{4}).$
We need to prove
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}dx\lesssim_{\epsilon}N^{8+\epsilon}.$
(16)
In this case, we will only need assumptions (1) and (2), but not (3). We start
by presenting a general principle that will explain the subtleties of our
argument. See also Remark 4.3.
Consider two partitions of $\Omega$, one into cubes $B$ with side length $l$
and another one into cubes $\Delta$ with side length $L\geq l$. The intervals
$J_{i}$ have length $\sqrt{\frac{N}{l}}$ and partition $I_{i}$. The intervals
$U_{i}$ have length $\sqrt{\frac{N}{L}}$ and partition $I_{i}$. The following
holds, via two applications of Theorem 2.3 (on cubes $B$ and $\Delta$,
combined with Minkowski’s inequality)
$\|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}\|_{L^{6}(\Omega)}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1},J_{2}}\|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}\|_{L^{6}(\Omega)}^{2})^{1/2}\lesssim_{\epsilon}N^{\epsilon}(\sum_{U_{1},U_{2}}\|{\mathcal{E}}_{U_{1}}{\mathcal{E}}_{U_{2}}\|_{L^{6}(\Omega)}^{2})^{1/2}.$
(17)
Also, combining the above inequalities with Hölder shows that
$\|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}\|_{L^{6}(\Omega)}\lesssim_{\epsilon}$
$N^{\epsilon}(\sharp(J_{1},J_{2}))^{\frac{1}{3}}(\sum_{J_{1},J_{2}}\|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}\|_{L^{6}(\Omega)}^{6})^{1/6}\lesssim_{\epsilon}N^{\epsilon}(\sharp(U_{1},U_{2}))^{\frac{1}{3}}(\sum_{U_{1},U_{2}}\|{\mathcal{E}}_{U_{1}}{\mathcal{E}}_{U_{2}}\|_{L^{6}(\Omega)}^{6})^{1/6}.$
(18)
Invoking periodicity in $x_{1},x_{2}$ and the invariance of (1), (2), (3)
under the change of sign $\phi_{k}\mapsto-\phi_{k}$, (16) is equivalent to proving that
$\int_{[-N^{1/2},N^{1/2}]\times[-N,N]\times[-N^{1/2},N^{1/2}]\times[-N^{1/2},N^{1/2}]}|{\mathcal{E}}_{I_{1}}(x){\mathcal{E}}_{I_{2}}(x)|^{6}dx\lesssim_{\epsilon}N^{8+\frac{1}{2}+\epsilon}.$
(19)
We first demonstrate the inefficiency of $l^{6}L^{6}$ decoupling for this
case, by working with the smaller domain
$S=[-o(N^{1/2}),o(N^{1/2})]^{4}.$
We cover $S$ with unit cubes and apply decoupling into intervals $J_{1},J_{2}$
of length $N^{1/2}$ as in the previous section, to dominate
$\int_{S}|{\mathcal{E}}_{I_{1}}(x){\mathcal{E}}_{I_{2}}(x)|^{6}dx\lesssim_{\epsilon}N^{\epsilon}N^{6(\frac{1}{2}-\frac{1}{6})}\sum_{J_{1},J_{2}}\int_{S}|{\mathcal{E}}_{J_{1}}(x){\mathcal{E}}_{J_{2}}(x)|^{6}dx.$
(20)
We will next show that the right hand side is too big, thus leading to an
overestimate for our initial integral. When $J=[h,h+N^{1/2}]$ and $n=h+m\in J$
we write
$\phi_{k}(\frac{n}{N})=\phi_{k}(\frac{h}{N})+\phi_{k}^{\prime}(\frac{h}{N})\frac{m}{N}+\frac{1}{2}\phi_{k}^{\prime\prime}(\frac{h}{N})(\frac{m}{N})^{2}+O(\frac{m}{N})^{3}.$
If $|x_{3}|,|x_{4}|\ll N^{1/2}$ and $m\leq N^{1/2}$, we guarantee that the
contribution from higher order terms is small
$O(\frac{m}{N})^{3}N(|x_{3}|+|x_{4}|)\ll 1.$
If we collect the contributions from linear and quadratic terms we find
$|{\mathcal{E}}_{J}(x)|=|\sum_{m\leq N^{1/2}}e(mu+m^{2}w+o(1))|$
where
$\begin{cases}u=x_{1}+\frac{h}{N}x_{2}+\phi_{3}^{\prime}(\frac{h}{N})x_{3}+\phi_{4}^{\prime}(\frac{h}{N})x_{4}\\\
w=\frac{x_{2}}{N}+\frac{1}{2}\phi_{3}^{\prime\prime}(\frac{h}{N})\frac{x_{3}}{N}+\frac{1}{2}\phi_{4}^{\prime\prime}(\frac{h}{N})\frac{x_{4}}{N}\end{cases}.$
Using Lemma 9.1 we write
$\int_{S}|{\mathcal{E}}_{J_{1}}(x){\mathcal{E}}_{J_{2}}(x)|^{6}dx\gtrsim$
$N^{2}\int_{(u_{1},u_{2})\in[0,o(N^{1/2})]^{2}\atop{(w_{1},w_{2})\in[0,o(N^{-1/2})]^{2}}}|\sum_{m\leq
N^{1/2}}e(mu_{1}+m^{2}w_{1}+o(1))|^{6}|\sum_{m\leq
N^{1/2}}e(mu_{2}+m^{2}w_{2}+o(1))|^{6}.$
We now use the fact that we have constructive interference
$|\sum_{m\leq N^{1/2}}e(mu+m^{2}w+o(1))|\sim N^{1/2}$
on the set of measure $\sim\frac{1}{N}$
$(u,w)\in(\bigcup_{l\in\\{0,1,\ldots,o(\sqrt{N})\\}}[l,l+\frac{1}{\sqrt{N}}])\times[0,\frac{1}{N}].$
It follows that
$\int_{S}|{\mathcal{E}}_{J_{1}}(x){\mathcal{E}}_{J_{2}}(x)|^{6}dx\gtrsim
N^{2}N^{6}N^{-2}=N^{6}.$
It is not hard to prove that this lower bound is sharp, but this has no
relevance to us here. The point of working with the symmetric domain $S$ was
to make sure that $w_{1},w_{2}\sim\frac{1}{N}$ are in the new domain of
integration. Going back to (20), the $l^{6}(L^{6})$ decoupling method leads to
the upper bound
$\int_{S}|{\mathcal{E}}_{I_{1}}(x){\mathcal{E}}_{I_{2}}(x)|^{6}dx\lesssim_{\epsilon}N^{9+\epsilon}.$
This falls short by the factor $N^{1/2}$ from proving (19).
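The constructive interference dichotomy used above is easy to observe numerically. The following is a small illustrative script (no part of the proof; the value of $M$ and the sample points are arbitrary choices):

```python
import numpy as np

M = 10**4
m = np.arange(1, M + 1)

def weyl_sum(u, w):
    # |sum_{m <= M} e(mu + m^2 w)|, with e(t) = exp(2*pi*i*t)
    return abs(np.exp(2j * np.pi * (m * u + m * m * w)).sum())

print(weyl_sum(0.0, 0.0) / M)            # 1.0: full constructive interference
print(weyl_sum(3.0, 0.05 / M**2) / M)    # close to 1: u near an integer, w = O(1/M^2)
rng = np.random.default_rng(0)
vals = [weyl_sum(rng.random(), rng.random()) for _ in range(50)]
print(np.median(vals) / M**0.5)          # O(1): square root cancellation at generic points
```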
The second inequality in (18) shows that using $l^{6}(L^{6})$ decoupling on cubes $\Delta$ that are larger than $N$ will only worsen the upper bounds we
get. On the other hand, working with smaller cubes will render decoupling
inefficient. The resulting exponential sums will be very difficult to handle
using number theory, since the cubic terms are no longer $O(1)$ in this case.
Let us now describe the correct approach, which will critically rely on $l^{2}$ rather than $l^{6}$ decoupling. The following level set estimate will
play a key role in various counting arguments. The main strength of the lemma
is in the case when $|l_{1}|\sim|l_{2}|$.
Throughout the remainder of the paper, the letter $l$ will be used to denote
integers, and their relative proximity to powers of $2$ will be denoted using
the symbol $\sim$. We make the harmless convention to write $0\sim 2^{0}$.
###### Lemma 4.1.
Assume $\phi_{3}$, $\phi_{4}$ satisfy (1) and (2). Let $l_{1},l_{2}$ be integers with
$\max\\{|l_{1}|,|l_{2}|\\}\sim 2^{j}$, $j\geq 0$, and let
$f(t)=l_{1}\phi_{3}^{\prime\prime}(t)+l_{2}\phi_{4}^{\prime\prime}(t).$
Then we can partition the range of $f$ into sets $R_{s}$ with $0\leq s\leq j$,
each of which is the union of at most two intervals of length $\sim 2^{s}$,
such that for each $v\in R_{s}$ we have
$|f^{-1}(v+[-O(1),O(1)])\cap[\frac{1}{2},1]|\lesssim\frac{1}{\sqrt{2^{j+s}}}.$
All implicit constants are universal over all pairs of such $\phi_{3}$,
$\phi_{4}$ and over $l_{1},l_{2},s$.
###### Proof.
The result is trivial if $l_{1}=l_{2}=0$, so we will next assume that
$\max\\{|l_{1}|,|l_{2}|\\}\geq 1$.
We restrict $f$ to the interval $[\frac{1}{2},1]$. Since
$\begin{bmatrix}f^{\prime}(t)\\\
f^{\prime\prime}(t)\end{bmatrix}=\begin{bmatrix}\phi_{3}^{(3)}(t)&\phi_{4}^{(3)}(t)\\\
\phi_{3}^{(4)}(t)&\phi_{4}^{(4)}(t)\end{bmatrix}\begin{bmatrix}l_{1}\\\
l_{2}\end{bmatrix},$
(2) implies that for each $t\in[\frac{1}{2},1]$ we have
$\max\\{|f^{\prime}(t)|,|f^{\prime\prime}(t)|\\}\sim 2^{j}.$ (21)
We let $t_{0}$ be a point in $[\frac{1}{2},1]$ where $|f^{\prime}|$ attains
its minimum. If $|f^{\prime}(t_{0})|\sim 2^{j}$, then we may take $R_{j}$ to
be the whole range of $f$, and all other $R_{s}$ to be empty. Indeed, the Mean
Value Theorem shows that
$|f(t_{1})-f(t_{2})|\gtrsim 1$
whenever $|t_{1}-t_{2}|\gtrsim 2^{-j}$. It is worth observing that if
$|l_{1}|\gg|l_{2}|$, then (3) would immediately guarantee that
$|f^{\prime}(t_{0})|\sim 2^{j}$.
We now assume that $|f^{\prime}(t_{0})|\ll 2^{j}$. Due to (21), we must have
that $|f^{\prime\prime}(t_{0})|\sim 2^{j}$. We write for $t\in[\frac{1}{2},1]$
$f(t)=f(t_{0})+f^{\prime}(t_{0})(t-t_{0})+\frac{f^{\prime\prime}(t_{0})(t-t_{0})^{2}}{2}+O(2^{j}(t-t_{0})^{3}).$
(22)
Case 1. Consider $s$ with $2^{j}\geq
2^{s}>C\max\\{\frac{|f^{\prime}(t_{0})|^{2}}{2^{j}},1\\}$, for some large
enough $C$ independent of $j$. Using this and (22), we see that
$|f(t)-f(t_{0})|\ll 2^{s}\;\text{ whenever }|t-t_{0}|\ll 2^{\frac{s-j}{2}}.$
(23)
Define
$R_{s}=\\{v:\;|v-f(t_{0})|\sim 2^{s}\\}.$
Let $v\in R_{s}$ and let $w=v+O(1)$. Thus, we also have $|w-f(t_{0})|\sim
2^{s}$. Let $t_{1},t_{2}$ be such that $f(t_{1})=v$, $f(t_{2})=w$. Using (23)
it follows that $|t_{1}-t_{0}|,|t_{2}-t_{0}|\gtrsim 2^{\frac{s-j}{2}}$. Our
assumption shows that $2^{\frac{s-j}{2}}\gg\frac{|f^{\prime}(t_{0})|}{2^{j}}$.
Thus, $|t_{1}-t_{0}|,|t_{2}-t_{0}|\gg\frac{|f^{\prime}(t_{0})|}{2^{j}}$, and
using (22) again we conclude that
$|f(t_{i})-f(t_{0})|\sim 2^{j}|t_{i}-t_{0}|^{2}.$
Thus, $|t_{i}-t_{0}|\sim 2^{\frac{s-j}{2}}$. Using again (22) we find that if
$t_{1},t_{2}$ are on the same side of $t_{0}$ then
$|f(t_{1})-f(t_{2})|\sim 2^{\frac{s+j}{2}}|t_{1}-t_{2}|.$
We conclude that $|t_{1}-t_{2}|\lesssim\frac{1}{\sqrt{2^{j+s}}}$, as desired.
Next, we define $R_{s}$ for smaller values of $s$. We distinguish two cases.
Case 2a. Assume now that $|f^{\prime}(t_{0})|\leq 2^{j/2}$. For $s$ such that
$2^{s}$ is the largest dyadic power $\leq
C\max\\{\frac{|f^{\prime}(t_{0})|^{2}}{2^{j}},1\\}=C$ we define
$R_{s}=\\{v:\;|v-f(t_{0})|\lesssim 2^{s}\\}.$
We also let $R_{s^{\prime}}=\emptyset$ for smaller values of $s^{\prime}$. Let
$v\in R_{s}$ and $w=v+O(1)$. Let $t_{1},t_{2}$ be such that $f(t_{1})=v$,
$f(t_{2})=w$. Since in fact $|f(t_{i})-f(t_{0})|\lesssim 1$, (22) forces
$|t_{i}-t_{0}|\lesssim 2^{-j/2}\sim\frac{1}{\sqrt{2^{j+s}}}$, as desired.
Case 2b. Assume now that $|f^{\prime}(t_{0})|>2^{j/2}$. For $s$ such that
$2^{s}$ is the largest dyadic power $\leq
C\max\\{\frac{|f^{\prime}(t_{0})|^{2}}{2^{j}},1\\}=C\frac{|f^{\prime}(t_{0})|^{2}}{2^{j}}$
we define
$R_{s}=\\{v:\;|v-f(t_{0})|\lesssim 2^{s}\\}.$
We also let $R_{s^{\prime}}=\emptyset$ for smaller values of $s^{\prime}$. Let
$v\in R_{s}$ and $w=v+O(1)$. Let $t_{1},t_{2}$ be such that $f(t_{1})=v$,
$f(t_{2})=w$. Using that $|f^{\prime}(t)|\geq|f^{\prime}(t_{0})|$ for all $t$,
we find that
$|f(t_{1})-f(t_{2})|\geq|t_{1}-t_{2}||f^{\prime}(t_{0})|.$
We conclude that
$|t_{1}-t_{2}|\lesssim\frac{1}{|f^{\prime}(t_{0})|}\sim\frac{1}{\sqrt{2^{j+s}}},$
as desired.
∎
From now on, we will implicitly assume that all Weyl sums are smooth, as in
Lemma 9.2. This can be easily arranged using partitions of unity, namely
working with smooth $\gamma$ satisfying
$\sum_{l\in{\mathbb{Z}}}\gamma(\cdot+l)=1_{\mathbb{R}}.$
To simplify notation, these weights will be ignored.
Cover $\Omega$ with unit cubes
$B=B_{p,l_{1},l_{2}}=[0,1]\times[p,p+1]\times[l_{1},l_{1}+1]\times[l_{2},l_{2}+1]$
with
$p\leq N,\;\;l_{1},l_{2}\leq N^{1/2}.$
We first write
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\sim\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}.$
We use $l^{2}$ decoupling (Theorem 2.3) on each $B$
$\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}(\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3}$
where $J_{i}$ is of the form $[h_{i},h_{i}+N^{1/2}]$. When $x\in B$ and
$J=[h,h+N^{1/2}]$
$|{\mathcal{E}}_{J}(x)|=|\sum_{m\leq
N^{1/2}}e(mu+m^{2}w+{m^{3}}v+O(N^{-1/4}))|$
where
$\begin{cases}u=x_{1}+\frac{2h}{N}x_{2}+\phi_{3}^{\prime}(\frac{h}{N})x_{3}+\phi_{4}^{\prime}(\frac{h}{N})x_{4}\\\
w=\frac{x_{2}}{N}+\frac{1}{2}\phi_{3}^{\prime\prime}(\frac{h}{N})\frac{x_{3}}{N}+\frac{1}{2}\phi_{4}^{\prime\prime}(\frac{h}{N})\frac{x_{4}}{N}\\\
v=\frac{\phi_{3}^{\prime\prime\prime}(\frac{h}{N}){x_{3}}+\phi_{4}^{\prime\prime\prime}(\frac{h}{N}){x_{4}}}{6N^{2}}\end{cases}.$
(24)
The term $O(N^{-1/4})$ can be dismissed as it produces tiny errors consistent
with square root cancellation. Note that since $v=O(N^{-3/2})$, and hence $m^{3}v=O(1)$ for $m\leq N^{1/2}$, we have
$|\sum_{m\leq N^{1/2}}e(mu+m^{2}w+{m^{3}}v)|\approx|\sum_{m\leq
N^{1/2}}e(mu+m^{2}w)|.$
See Lemma 9.2 for a rigorous argument. The key point is that we may dismiss
the cubic terms.
Write
$I(h_{1},h_{2},B)=$
$\int_{(u_{1},u_{2},w_{1},w_{2})\in[0,1]^{2}\times[\frac{a_{1}-O(1)}{N},\frac{a_{1}+O(1)}{N}]\times[\frac{a_{2}-O(1)}{N},\frac{a_{2}+O(1)}{N}]}|\prod_{i=1}^{2}\sum_{m_{i}\leq
N^{1/2}}e(m_{i}u_{i}+m_{i}^{2}w_{i})|^{6}du_{1}du_{2}dw_{1}dw_{2},$
where
$\begin{cases}a_{1}=p+\frac{l_{1}}{2}\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})+\frac{l_{2}}{2}\phi_{4}^{\prime\prime}(\frac{h_{1}}{N})\\\
a_{2}=p+\frac{l_{1}}{2}\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})+\frac{l_{2}}{2}\phi_{4}^{\prime\prime}(\frac{h_{2}}{N})\end{cases}.$
(25)
Via the change of variables with Jacobian $\sim\frac{1}{N^{2}}$ (Lemma 9.1)
$\begin{cases}u_{1}=x_{1}+\frac{2h_{1}}{N}x_{2}+\phi_{3}^{\prime}(\frac{h_{1}}{N})x_{3}+\phi_{4}^{\prime}(\frac{h_{1}}{N})x_{4}\\\
w_{1}=\frac{x_{2}}{N}+\frac{1}{2}\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})\frac{x_{3}}{N}+\frac{1}{2}\phi_{4}^{\prime\prime}(\frac{h_{1}}{N})\frac{x_{4}}{N}\\\
u_{2}=x_{1}+\frac{2h_{2}}{N}x_{2}+\phi_{3}^{\prime}(\frac{h_{2}}{N})x_{3}+\phi_{4}^{\prime}(\frac{h_{2}}{N})x_{4}\\\
w_{2}=\frac{x_{2}}{N}+\frac{1}{2}\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})\frac{x_{3}}{N}+\frac{1}{2}\phi_{4}^{\prime\prime}(\frac{h_{2}}{N})\frac{x_{4}}{N}\end{cases}$
we see that
$\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}\lesssim
N^{2}I(h_{1},h_{2},B).$
Writing
$I_{a}=\int_{[0,1]\times[\frac{a-O(1)}{N},\frac{a+O(1)}{N}]}|\sum_{m\leq
N^{1/2}}e(mu+m^{2}w)|^{6}dudw$
we find that
$\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}\lesssim
N^{2}I_{a_{1}}I_{a_{2}}.$
Let us analyze (25). The question is, for fixed $B$, what are the values of
$a_{1},a_{2}$ that arise (modulo $O(1)$ error terms), and what is their
multiplicity, when $h_{1},h_{2}$ range through the multiples of $N^{1/2}$ in
$[1,N]$.
Assume $l_{1}\sim 2^{j_{1}}$, $l_{2}\sim 2^{j_{2}}$, with
$2^{j_{1}},2^{j_{2}}\leq N^{1/2}$. We may assume $j_{1}\leq j_{2}$; the other
case is completely similar. We apply Lemma 4.1 to
$f(t)=\frac{1}{2}(l_{1}\phi_{3}^{\prime\prime}(t)+l_{2}\phi_{4}^{\prime\prime}(t))$.
For each $0\leq s_{1},s_{2}\leq j_{2}$ and each $p$ we have
$O(2^{s_{1}+s_{2}})$ pairs $(a_{1},a_{2})$ of integers with $a_{1}-p\in
R_{s_{1}}(l_{1},l_{2})$ and $a_{2}-p\in R_{s_{2}}(l_{1},l_{2})$. Note that we
index the intervals $R_{s_{i}}$ from Lemma 4.1 by $l_{1},l_{2}$. For each such
pair $(a_{1},a_{2})$, (25) has
$O(\frac{N}{2^{j_{2}}2^{\frac{s_{1}+s_{2}}{2}}})$ solutions $(h_{1},h_{2})$.
When we count solutions, we tolerate error terms of size $O(1)$.
Thus
$\displaystyle\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}$
$\displaystyle\lesssim N^{2}\sum_{p\leq N}\sum_{2^{j_{2}}\lesssim
N^{1/2}}\sum_{2^{j_{1}}\lesssim 2^{j_{2}}}\sum_{l_{1}\sim
2^{j_{1}}}\sum_{l_{2}\sim 2^{j_{2}}}\sum_{s_{1},s_{2}\leq
j_{2}}(\frac{N}{2^{j_{2}+\frac{s_{1}+s_{2}}{2}}})^{3}(\sum_{a_{1}\in
p+R_{s_{1}}(l_{1},l_{2})}\sum_{a_{2}\in
p+R_{s_{2}}(l_{1},l_{2})}I_{a_{1}}^{1/3}I_{a_{2}}^{1/3})^{3}$
$\displaystyle\lesssim N^{2}\sum_{p\leq N}\sum_{2^{j_{2}}\lesssim
N^{1/2}}\sum_{2^{j_{1}}\lesssim
2^{j_{2}}}2^{j_{1}+j_{2}}(\frac{N}{2^{j_{2}}})^{3}\sum_{s_{1},s_{2}\leq
j_{2}}(\sum_{a_{1}\in
p+R_{s_{1}}(l_{1},l_{2})}I_{a_{1}}^{2/3})^{3/2}(\sum_{a_{2}\in
p+R_{s_{2}}(l_{1},l_{2})}I_{a_{2}}^{2/3})^{3/2}.$
The last inequality follows from Cauchy–Schwarz. Next, we observe that
$p+R_{s_{i}}(l_{1},l_{2})\subset[p-O(2^{j_{2}}),p+O(2^{j_{2}})]$. These
intervals are roughly the same for roughly $2^{j_{2}}$ values of $p$. We can
thus dominate the above by
$\displaystyle\begin{split}&{\;\lessapprox}\;N^{2}\sum_{2^{j_{2}}\lesssim
N^{1/2}}\sum_{2^{j_{1}}\lesssim
2^{j_{2}}}2^{j_{1}+j_{2}}(\frac{N}{2^{j_{2}}})^{3}2^{j_{2}}\sum_{H\subset[0,N]\atop{|H|={2^{j_{2}}}}}(\sum_{a\in
H}I_{a}^{2/3})^{3}\\\ &\sim N^{5}\sum_{2^{j}\lesssim
N^{1/2}}\sum_{H\subset[0,N]\atop{|H|={2^{j}}}}(\sum_{a\in
H}I_{a}^{2/3})^{3}.\end{split}$ (26)
The sum runs over pairwise disjoint intervals $H$. It is easily seen to be
$O(N^{8})$, by using the following lemma with $M=N^{1/2}$.
###### Lemma 4.2.
Let
$I_{a}=\int_{[0,1]\times[\frac{a-O(1)}{M^{2}},\frac{a+O(1)}{M^{2}}]}|\sum_{m\leq
M}e(mu+m^{2}w)|^{6}dudw.$
For each $2^{j}\leq M^{2}$ we have
$\sum_{H\subset[0,M^{2}]\atop{|H|={2^{j}}}}(\sum_{a\in
H}I_{a}^{2/3})^{3}{\;\lessapprox}\;M^{4}2^{2j}+M^{6}.$
###### Proof.
The arcs
$\\{x\in[0,1):\;{\operatorname{dist}\,}(x-\frac{b}{q},{\mathbb{Z}})\leq\frac{1}{qM}\\}$,
with $1\leq b\leq q\leq M$ and $(b,q)=1$, cover $[0,1)$. They may overlap,
which leads to double counting in our argument, but this will be harmless.
We consider the contribution from those $I_{a}$ with $\frac{a}{M^{2}}$ in some
arc with $q\sim Q$. Here $Q$ is dyadic and $Q\lesssim M$. We separate the
proof into two cases. Note that $H/M^{2}\subset[0,1]$ and has length
$2^{j}/M^{2}$. Also, $|b/q-b^{\prime}/q^{\prime}|\geq 1/qq^{\prime}$.
Case 1. $Q^{2}>\frac{M^{2}}{2^{j}}$. Each $H/M^{2}$ intersects
$\lesssim\frac{2^{j}Q^{2}}{M^{2}}$ arcs with $q\sim Q$. For each such $b/q$,
and each $1\leq 2^{k}\leq\frac{M}{q}$ there are $\sim\frac{M}{2^{k}Q}$ values
of $a$ with
$|\frac{a}{M^{2}}-\frac{b}{q}|\sim\frac{1}{qM2^{k}}.$
Call ${\mathbb{A}}(Q,k)$ the collection of all these $a$. For each
$a\in{\mathbb{A}}(Q,k)$, Lemma 9.2 gives
$I_{a}\lesssim\frac{1}{2^{k}M^{2}}(M^{1/2}2^{k/2})^{6}=2^{2k}M.$
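Indeed, by Lemma 9.2 with $\varphi\sim\frac{1}{QM2^{k}}$, the integrand is ${\lessapprox}\;(q^{-1/2}\varphi^{-1/2})^{6}\sim(M2^{k})^{3}$ on the major arcs ${\mathcal{M}}$, whose measure in $u$ is ${\lessapprox}\;q\varphi M\sim 2^{-k}$, while the contribution from $u\notin{\mathcal{M}}$ is negligible; the measure in $w$ is $O(M^{-2})$.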
The contribution from $a\in{\mathbb{A}}(Q,k)$ is
$\sum_{H\subset[0,M^{2}]\atop{|H|={2^{j}}}}(\sum_{a\in
H\cap{\mathbb{A}}(Q,k)}I_{a}^{2/3})^{3}\lesssim\frac{M^{2}}{2^{j}}(\frac{2^{j}Q^{2}}{M^{2}}\frac{M}{2^{k}Q})^{3}(M2^{2k})^{2}=M2^{k}2^{2j}Q^{3}.$
This is easily seen to be $O(2^{2j}M^{4})$, since $2^{k}=O(MQ^{-1})$ and
$Q=O(M)$. The contribution to the full sum is acceptable, since there are
${\;\lessapprox}\;1$ values of $Q$ and $k$.
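In detail, the first of these assertions is the chain
$M2^{k}2^{2j}Q^{3}\lesssim M\cdot\frac{M}{Q}\cdot 2^{2j}Q^{3}=2^{2j}M^{2}Q^{2}\lesssim 2^{2j}M^{4}.$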
Case 2. $Q^{2}<\frac{M^{2}}{2^{j}}$. There are $\lesssim Q^{2}$ arcs with
$q\sim Q$. Essentially, each $H$ is either disjoint from all these (so not
contributing at this stage) or (essentially) contained inside one of them. We
distinguish two subcases.
(a) If $\frac{2^{j}}{M^{2}}<\frac{1}{QM2^{k}}$ (this is stronger than
$Q^{2}<\frac{M^{2}}{2^{j}}$), there are $\frac{1}{QM2^{k}}\frac{M^{2}}{2^{j}}$
intervals $H/M^{2}$ contained in
$[\frac{b}{q}-\frac{1}{QM2^{k}},\frac{b}{q}+\frac{1}{QM2^{k}}]$. Their
contribution is
$\displaystyle\sum_{b<q\sim
Q}\sum_{H/M^{2}\subset[\frac{b}{q}-\frac{1}{QM2^{k}},\frac{b}{q}+\frac{1}{QM2^{k}}]\atop{|H|={2^{j}}}}(\sum_{a\in
H}I_{a}^{2/3})^{3}$ $\displaystyle\lesssim
Q^{2}\frac{M}{Q2^{k}2^{j}}2^{3j}(2^{2k}M)^{2}$
$\displaystyle=2^{2j}M^{3}Q2^{3k}.$
Using our assumption, this is $O(2^{-j}M^{6})$.
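Indeed, since $Q\geq 1$ and the assumption reads $Q2^{k}<M2^{-j}$, we have
$2^{2j}M^{3}Q2^{3k}\leq 2^{2j}M^{3}(Q2^{k})^{3}<2^{2j}M^{3}(M2^{-j})^{3}=2^{-j}M^{6}.$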
(b) If $\frac{2^{j}}{M^{2}}>\frac{1}{QM2^{k}}$, then for each $b/q$ with $q\sim Q$
there is only one $H/M^{2}$ that intersects the region
$|t-\frac{b}{q}|\sim\frac{1}{qM2^{k}},$
and at most $M^{2}\frac{1}{qM2^{k}}$ values of $a$ from $H$ contribute. The
contribution from the $O(Q^{2})$ arcs with denominator $\sim Q$ is
$\lesssim Q^{2}(\frac{M}{Q2^{k}})^{3}(2^{2k}M)^{2}=\frac{2^{k}M^{5}}{Q}.$
Since $2^{k}\lesssim M$, this term is $O(M^{6})$.
∎
###### Remark 4.3.
One may wonder whether there is a clever way to estimate the sum
$\sum_{B\subset\Omega}(\sum_{J_{1}\subset I_{1}}\sum_{J_{2}\subset
I_{2}}(\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3},$
without using number theory. To this end, the most natural thing to try is to
use Minkowski’s inequality and to bound this expression by
$(\sum_{J_{1}\subset I_{1}}\sum_{J_{2}\subset
I_{2}}(\int_{\Omega}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3}.$
(27)
However, a change of variables as before shows that for each $J_{1},J_{2}$
$\displaystyle\int_{\Omega}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$
$\displaystyle\geq
N^{-1/2}\int_{[0,N^{1/2}]^{4}}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$
$\displaystyle\sim
N^{-1/2}N^{2}[\int_{(u,w)\in[0,N^{1/2}]\times[0,N^{-1/2}]}|\sum_{m=1}^{N^{1/2}}e(mu+m^{2}w)|^{6}dudw]^{2}$
$\displaystyle\sim N^{11/2}.$
Using this approach, the upper bound we get for (27) is $N^{8+\frac{1}{2}}$.
As in our earlier attempt to use $l^{6}$ rather than $l^{2}$ decoupling, this
estimate falls short by a factor of $N^{1/2}$ from the sharp upper bound
$N^{8}$.
Also, due to (17), the expression (27) is only getting larger if $J_{i}$ are
replaced with smaller intervals. Thus, decoupling on cubes larger than $B$
(such as $N^{1/2}$-cubes) only worsens our upper bound.
A similar computation shows that the only case of Conjecture 1.1 that can be
approached with $l^{6}L^{6}$ decoupling is the case $\alpha=2$ discussed in
the previous section.
## 5\. The case $\frac{3}{2}<\alpha\leq\frac{9}{5}$
Let
$\Omega=[0,N^{\frac{2\alpha}{3}-1}]\times[0,N]\times[N^{\frac{1}{2}},N^{\alpha-1}]\times[0,N^{\beta-1}]$.
Using 1-periodicity in $x_{1}$, we need to prove that
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{7+\frac{2\alpha}{3}+\epsilon}.$
We cover $\Omega$ with cubes $B$ of side length $N^{\frac{2\alpha}{3}-1}$ and
write
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\sim\sum_{B\subset\Omega}\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}.$
The size of these cubes is the smallest that will make cubic terms negligible
after the decoupling. Since we need $\frac{2\alpha}{3}-1\leq\beta-1$ in order
to not exceed $\Omega$, this leads to the restriction $\alpha\leq\frac{9}{5}$.
Note also that the $x_{3}$ coordinate is restricted to $x_{3}\geq N^{1/2}$. We can
afford to omit the region $x_{3}<N^{1/2}$ because it is covered by the case
$\alpha=\frac{3}{2}$ discussed in the previous section. Since we are about to
decouple on cubes $B$ with side length larger than 1, Remark 4.3 tells us that
applying this method for $x$ near the origin leads to losses. Our next argument
will make explicit use of the fact that $x_{3}$ stays away from the origin.
We use $l^{2}$ decoupling (Theorem 2.3) on each $B$ (or rather $NB$, after
rescaling)
$\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}(\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3}$
where $J_{i}$ is of the form $[h_{i},h_{i}+M]$, with
$M=N^{1-\frac{\alpha}{3}}$. We write $B$ as
$[0,N^{\frac{2\alpha}{3}-1}]\times[pN^{\frac{2\alpha}{3}-1},(p+1)N^{\frac{2\alpha}{3}-1}]\times[l_{1}N^{\frac{2\alpha}{3}-1},(l_{1}+1)N^{\frac{2\alpha}{3}-1}]\times[l_{2}N^{\frac{2\alpha}{3}-1},(l_{2}+1)N^{\frac{2\alpha}{3}-1}]$
with the integers $0\leq p\leq M^{2}$, $N^{\frac{3}{2}-\frac{2\alpha}{3}}\leq
l_{1}\leq N^{\frac{\alpha}{3}}$ and $0\leq l_{2}\leq N^{3-\frac{5\alpha}{3}}$.
Since $\alpha>\frac{3}{2}$, we have that $l_{1}\gg l_{2}$.
We let, as before,
$I_{a}=\int_{[0,1]\times[\frac{a-O(1)}{M^{2}},\frac{a+O(1)}{M^{2}}]}|\sum_{m\leq
M}e(mu+m^{2}w)|^{6}dudw.$
With a change of variables as in the previous section, we have
$\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}\sim
N^{2}(N^{\frac{2\alpha}{3}-1})^{2}I_{a_{1}}I_{a_{2}}$
where
$\begin{cases}a_{1}=p+\frac{l_{1}}{2}\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})+\frac{l_{2}}{2}\phi_{4}^{\prime\prime}(\frac{h_{1}}{N})\\\
a_{2}=p+\frac{l_{1}}{2}\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})+\frac{l_{2}}{2}\phi_{4}^{\prime\prime}(\frac{h_{2}}{N})\end{cases}.$
(28)
It is crucial that the cubic (and also the higher order) term is $O(1)$, cf.
(24)
$m^{3}\frac{\phi_{3}^{\prime\prime\prime}(\frac{h}{N}){x_{3}}+\phi_{4}^{\prime\prime\prime}(\frac{h}{N}){x_{4}}}{N^{2}}=O(1),\;\;\forall
x\in\Omega,$
so it may be neglected according to Lemma 9.2.
If $l_{1}\sim 2^{j_{1}}$, it is immediate that $|a_{1}-p|\lesssim 2^{j_{1}}$,
$|a_{2}-p|\lesssim 2^{j_{1}}$. Also, for fixed $a_{1},a_{2},p,l_{1},l_{2}$,
(28) has $O((\frac{N}{M2^{j_{1}}})^{2})$ solutions $(h_{1},h_{2})$, modulo
$O(1)$. We do not need Lemma 4.1 here, since this time $l_{1}$ is much larger
than $l_{2}$. We now dominate
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}$ as before by
$\lesssim_{\epsilon}N^{\epsilon}\sum_{j_{1}:\;N^{\frac{3}{2}-\frac{2\alpha}{3}}\leq
2^{j_{1}}\leq N^{\frac{\alpha}{3}}}\sum_{l_{1}\sim
2^{j_{1}}}\sum_{j_{2}:\;2^{j_{2}}\leq N^{3-\frac{5\alpha}{3}}}\sum_{l_{2}\sim
2^{j_{2}}}(\frac{N}{M2^{j_{1}}})^{6}N^{2}(N^{\frac{2\alpha}{3}-1})^{2}\sum_{p\leq
M^{2}}(\sum_{a:\;|a-p|\lesssim 2^{j_{1}}}I_{a}^{1/3})^{6}.$
We use Cauchy–Schwarz for the last expression to dominate the above by
$N^{\epsilon}\sum_{j:\;N^{\frac{3}{2}-\frac{2\alpha}{3}}\leq 2^{j}\leq
N^{\frac{\alpha}{3}}}2^{j}N^{3-\frac{5\alpha}{3}}2^{-6j}N^{2\alpha}N^{2}(N^{\frac{2\alpha}{3}-1})^{2}2^{j}2^{3j}\sum_{|H|\sim
2^{j}}(\sum_{a\in H}I_{a}^{2/3})^{3}$
$=N^{3+\frac{5\alpha}{3}+\epsilon}\sum_{j:\;N^{\frac{3}{2}-\frac{2\alpha}{3}}\leq
2^{j}\leq N^{\frac{\alpha}{3}}}2^{-j}\sum_{|H|\sim 2^{j}}(\sum_{a\in
H}I_{a}^{2/3})^{3}.$
Using Lemma 4.2, this is dominated by
$N^{3+\frac{5\alpha}{3}+\epsilon}\sum_{j:\;N^{\frac{3}{2}-\frac{2\alpha}{3}}\leq
2^{j}\leq N^{\frac{\alpha}{3}}}(M^{4}2^{j}+M^{6}2^{-j})\lesssim
N^{\epsilon}(N^{7+\frac{2\alpha}{3}}+N^{\frac{15}{2}+\frac{\alpha}{3}}).$
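Here the two terms come from the extreme dyadic scales: with $M=N^{1-\frac{\alpha}{3}}$, the first summand is largest at $2^{j}=N^{\frac{\alpha}{3}}$ and the second at $2^{j}=N^{\frac{3}{2}-\frac{2\alpha}{3}}$, giving
$N^{3+\frac{5\alpha}{3}}M^{4}N^{\frac{\alpha}{3}}=N^{7+\frac{2\alpha}{3}}\quad\text{and}\quad N^{3+\frac{5\alpha}{3}}M^{6}N^{\frac{2\alpha}{3}-\frac{3}{2}}=N^{\frac{15}{2}+\frac{\alpha}{3}}.$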
This is $O(N^{7+\frac{2\alpha}{3}+\epsilon})$, as desired, since
$\frac{15}{2}+\frac{\alpha}{3}\leq 7+\frac{2\alpha}{3}$ precisely when $\alpha\geq\frac{3}{2}$.
## 6\. The case $\frac{9}{5}\leq\alpha<2$
Let
$\Omega=[0,N^{\delta}]\times[0,N]\times[N^{\frac{4}{5}},N^{\alpha-1}]\times[0,N^{\beta-1}].$
Because of the case addressed in the previous section, we may assume
$x_{3}\geq N^{\frac{4}{5}}$. This will buy us some extra flexibility in
choosing $\delta$. In fact, we can work with any $\delta$ satisfying
$2-\frac{3\beta}{2}\leq\delta\leq\frac{9}{5}-\beta.$ (29)
We need to prove that
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{8+\delta+\epsilon}.$
We will first decouple on cubes $B$ with side length $N^{\beta-1}$. This is
the largest size that is available to us, due to the range in the $x_{4}$
variable. Unlike in the previous section, the resulting intervals are not small
enough to make the cubic terms negligible and thus to allow the use of
estimates for quadratic Weyl sums. We will remedy this by means of a further
decoupling, on cubes of side length $N^{\delta}$, similar to the case
$\alpha=2$ described earlier.
To get started, we use $l^{2}$ decoupling (Theorem 2.3) on each cube $B$ of
side length $N^{\beta-1}$
$\int_{B}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}(\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3},$
where $J_{i}=[h_{i},h_{i}+M]$ has length $M=N^{1-\frac{\beta}{2}}$.
Next, we cover $\Omega$ with boxes
$\Delta=[0,N^{\delta}]\times[pN^{\delta},(p+1)N^{\delta}]\times[lN^{\delta},(l+1)N^{\delta}]\times[0,N^{\beta-1}],$
with $p\leq N^{1-\delta}$, $N^{\frac{4}{5}-\delta}\leq l\leq
N^{\alpha-1-\delta}$. If we sum up the above inequality over cubes
$B\subset\Delta$ and use Minkowski’s inequality, we find
$\int_{\Delta}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}N^{\epsilon}(\sum_{J_{1}\subset
I_{1}}\sum_{J_{2}\subset
I_{2}}(\sum_{B\subset\Delta}\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6})^{1/3})^{3}.$
(30)
Next, we fix $J_{1},J_{2}$ and perform a second decoupling for the term
$\int_{\Delta}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$. We proceed as
in Section 3
$\displaystyle\int_{B}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}$
$\displaystyle=N^{-4}\int_{NB}|{\mathcal{E}}_{J_{1}}(\frac{\cdot}{N}){\mathcal{E}}_{J_{2}}(\frac{\cdot}{N})|^{6}$
$\displaystyle\lesssim
N^{-4-4\beta}\int_{NB}|{\mathcal{E}}_{J_{1}}(\frac{\cdot}{N})|^{6}\int_{NB}|{\mathcal{E}}_{J_{2}}(\frac{\cdot}{N})|^{6}=N^{4-4\beta}\int_{B}|{\mathcal{E}}_{J_{1}}|^{6}\int_{B}|{\mathcal{E}}_{J_{2}}|^{6}.$
Then
$\sum_{B\subset\Delta}\int_{B}|{\mathcal{E}}_{J_{1}}|^{6}\int_{B}|{\mathcal{E}}_{J_{2}}|^{6}\lesssim
N^{4-4\beta}\int_{\Delta}dx\int_{(y,z)\in[0,N^{\beta-1}]^{4}\times[0,N^{\beta-1}]^{4}}|{\mathcal{E}}_{J_{1}}(x+y){\mathcal{E}}_{J_{2}}(x+z)|^{6}dydz.$
Combining these two and using periodicity in the $y_{1},z_{1}$ variables we
get
$\int_{\Delta}|{\mathcal{E}}_{J_{1}}{\mathcal{E}}_{J_{2}}|^{6}\lesssim
N^{6-6\beta-2\delta}\times$
$\int_{(x_{1},x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4})\in S}dx_{1}\ldots
dz_{4}\int_{y_{1},z_{1}\in[0,N^{\delta}]\atop{x_{2}\in[pN^{\delta},(p+1)N^{\delta}]\atop{x_{3}\in[lN^{\delta},(l+1)N^{\delta}]}}}|{\mathcal{E}}_{J_{1}}(x+y){\mathcal{E}}_{J_{2}}(x+z)|^{6}dy_{1}dz_{1}dx_{2}dx_{3},$
where $S$ is characterized by
$x_{1}\in[0,N^{\delta}],\;x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4}\in[0,N^{\beta-1}].$
We seek to estimate the second integral uniformly over
$x_{1},x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4}$. With these variables fixed,
we make the affine change of variables
$(y_{1},z_{1},x_{2},x_{3})\mapsto(u_{1},u_{2},w_{1},w_{2})$
$\begin{cases}u_{1}=(y_{1}+x_{1})+\frac{2h_{1}}{N}(x_{2}+y_{2})+\phi_{3}^{\prime}(\frac{h_{1}}{N})(x_{3}+y_{3})+\phi_{4}^{\prime}(\frac{h_{1}}{N})(x_{4}+y_{4})\\\
u_{2}=(z_{1}+x_{1})+\frac{2h_{2}}{N}(x_{2}+z_{2})+\phi_{3}^{\prime}(\frac{h_{2}}{N})(x_{3}+z_{3})+\phi_{4}^{\prime}(\frac{h_{2}}{N})(x_{4}+z_{4})\\\
w_{1}=\frac{x_{2}}{N}+\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})\frac{x_{3}}{2N}\\\
w_{2}=\frac{x_{2}}{N}+\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})\frac{x_{3}}{2N}\end{cases}.$
The Jacobian is $\sim\frac{1}{N^{2}}$, due to (3). The second integral is
comparable to
$N^{2}\int_{(u_{i},w_{i})\in[0,N^{\delta}]\times[\frac{a_{i}-O(1)}{M_{*}^{2}},\frac{a_{i}+O(1)}{M_{*}^{2}}]}\prod_{i=1}^{2}|\sum_{m_{i}\leq
M}e(m_{i}u_{i}+m_{i}^{2}w_{i}+\eta_{i}(m_{i})x_{3})|^{6}du_{1}dw_{1}du_{2}dw_{2}.$
(31)
Here $M_{*}=N^{\frac{1}{2}-\frac{\delta}{2}}$,
$a_{i}=p+l\frac{\phi_{3}^{\prime\prime}(\frac{h_{i}}{N})}{2}$ and
$\eta_{i}(m)={m^{3}}\frac{\phi_{3}^{\prime\prime\prime}(\frac{h_{i}}{N})}{3!N^{2}}+{m^{4}}\frac{\phi_{3}^{\prime\prime\prime\prime}(\frac{h_{i}}{N})}{4!N^{3}}+\ldots$.
Note that since $\frac{v}{N}=O(M^{-2})$ for $v$ equal to any of the variables
$x_{4},y_{2},y_{3},y_{4},z_{2},z_{3},z_{4}$, we have dismissed the
contribution of these variables associated with quadratic (as well as the
higher order) terms. See Lemma 9.2.
We may apply again Theorem 2.4, using that $x_{3}=A(w_{1}-w_{2})$ with
$A=O(N)$. Note however that this time we cannot decouple into point masses (as
in (10)), since $M_{*}$ is significantly larger than 1. Instead, applying (11)
with $N=(\frac{M}{M_{*}})^{2}$ we dominate (31) by
$N^{2+\epsilon}\times$ (32)
$(\sum_{J_{1}^{\prime},J_{2}^{\prime}}[\int_{(u_{i},w_{i})\in[0,N^{\delta}]\times[\frac{a_{i}-O(1)}{M_{*}^{2}},\frac{a_{i}+O(1)}{M_{*}^{2}}]}\prod_{i=1}^{2}|\sum_{m_{i}\in
J_{i}^{\prime}}e(m_{i}u_{i}+m_{i}^{2}w_{i}+\eta_{i}(m_{i})x_{3})|^{6}du_{1}dw_{1}du_{2}dw_{2}]^{\frac{1}{3}})^{3}.$
The intervals $J_{i}^{\prime}$ partitioning $[1,M]$ have length $M_{*}$. What
we have gained by doing this decoupling is that, when $m_{i}$ is confined to a
small interval $J_{i}^{\prime}=[h_{i}^{\prime},h_{i}^{\prime}+M_{*}]$, the
contribution of the term
$\eta_{i}(m_{i})=\eta_{i}(h_{i}^{\prime}+m_{i}^{\prime})=\eta_{i}(h_{i}^{\prime})+\eta_{i}^{\prime}(h_{i}^{\prime})m_{i}^{\prime}+\eta_{i}^{\prime\prime}(h_{i}^{\prime})\frac{(m_{i}^{\prime})^{2}}{2}+O(\frac{(m_{i}^{\prime})^{3}}{N^{2}})$
(33)
can be neglected. To see this, note first that
$\eta_{i}^{\prime\prime}(h_{i}^{\prime})=\sum_{n\geq
3}\phi_{3}^{(n)}(\frac{h_{i}}{N})\frac{(h_{i}^{\prime})^{n-2}}{N^{n-1}(n-2)!}.$
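This is obtained by differentiating the series $\eta_{i}(m)=\sum_{n\geq 3}\frac{\phi_{3}^{(n)}(\frac{h_{i}}{N})}{n!N^{n-1}}m^{n}$ twice, term by term.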
Making another linear change of variables such that
$w_{i}^{\prime}=w_{i}+\frac{\eta_{i}^{\prime\prime}(h_{i}^{\prime})}{2}A(w_{1}-w_{2}),$
we write, using that $|x_{3}|\leq N^{\alpha-1}$
$\prod_{i=1}^{2}|\sum_{m_{i}\in
J_{i}^{\prime}}e(m_{i}u_{i}+m_{i}^{2}w_{i}+\eta_{i}(m_{i})x_{3})|=\prod_{i=1}^{2}|\sum_{m_{i}^{\prime}\in[1,M_{*}]}e(m_{i}^{\prime}u_{i}^{\prime}+(m_{i}^{\prime})^{2}w_{i}^{\prime}+O((m_{i}^{\prime})^{3}\frac{N^{\alpha-1}}{N^{2}}))|.$
The range of $w_{i}^{\prime}$ is (a subset of)
$[\frac{a_{i}^{\prime}-O(1)}{M_{*}^{2}},\frac{a_{i}^{\prime}+O(1)}{M_{*}^{2}}]$,
where
$a_{i}^{\prime}=p+\frac{l}{2}\sum_{n\geq
2}\phi_{3}^{(n)}(\frac{h_{i}}{N})\frac{(h_{i}^{\prime})^{n-2}}{N^{n-2}(n-2)!}=p+l\frac{\phi_{3}^{{}^{\prime\prime}}(\frac{h_{i}+h_{i}^{\prime}}{N})}{2}.$
Since $\alpha-1-\frac{\beta}{2}\leq\delta$ by (29), we have that
$a_{i}-a_{i}^{\prime}=O(1)$. Thus, the quadratic term in (33) will not affect
the domain of integration. Moreover, the contribution of the higher order
terms in (33) is negligible (cf. Lemma 9.2), as long as we can guarantee that
we have $\frac{N^{\alpha-1}}{N^{2}}=O(M_{*}^{-3})$. This is equivalent to
$\delta\geq 1-\frac{2\beta}{3}$, and follows from (29) and the fact that
$\beta\leq\frac{6}{5}$. Under this assumption, we dominate (32) by
$N^{2+\epsilon}\left(\sum_{J_{1}^{\prime},J_{2}^{\prime}}\left[\prod_{i=1}^{2}\int_{[0,N^{\delta}]\times[\frac{a_{i}-O(1)}{M_{*}^{2}},\frac{a_{i}+O(1)}{M_{*}^{2}}]}|\sum_{m_{i}\in
J_{i}^{\prime}}e(m_{i}u_{i}+m_{i}^{2}w_{i})|^{6}du_{i}dw_{i}\right]^{\frac{1}{3}}\right)^{3}.$
This is $\sim N^{2+2\delta+\epsilon}(\frac{M}{M_{*}})^{6}I_{a_{1}}I_{a_{2}}$,
where
$I_{a}=\int_{[0,1]\times[\frac{a-O(1)}{M_{*}^{2}},\frac{a+O(1)}{M_{*}^{2}}]}|\sum_{m\leq
M_{*}}e(mu+m^{2}w)|^{6}dudw$
is independent of $J_{1}^{\prime},J_{2}^{\prime}$. Recall that
$\begin{cases}a_{1}=p+l\frac{\phi_{3}^{\prime\prime}(\frac{h_{1}}{N})}{2}\\\
a_{2}=p+l\frac{\phi_{3}^{\prime\prime}(\frac{h_{2}}{N})}{2}\end{cases}.$ (34)
Assume now that $l\sim 2^{j}$, with
$N^{\frac{4}{5}-\delta}\lesssim 2^{j}\lesssim N^{\alpha-1-\delta}.$
For fixed $p,l$, and fixed $(a_{1},a_{2})$ (within a factor of $O(1)$), the
system (34) has $\lesssim(\frac{N}{2^{j}M})^{2}$ solutions $(h_{1},h_{2})$.
Getting back to (30), summing over $\Delta\subset\Omega$ we find that
$\int_{\Omega}|{\mathcal{E}}_{I_{1}}{\mathcal{E}}_{I_{2}}|^{6}\lesssim_{\epsilon}$
$N^{6-6\beta-2\delta}|S|N^{2+2\delta+\epsilon}N^{3+3\delta}\sum_{p\leq
M_{*}^{2}}\sum_{N^{\frac{4}{5}-\delta}\lesssim 2^{j}\lesssim
N^{\alpha-1-\delta}}2^{-6j}\sum_{l\sim 2^{j}}(\sum_{|a-p|\lesssim
2^{j}}I_{a}^{1/3})^{6}.$
We use Cauchy–Schwarz to dominate this by
$N^{4+4\delta+\beta+\epsilon}\sum_{N^{\frac{4}{5}-\delta}\lesssim
2^{j}\lesssim
N^{\alpha-1-\delta}}\sum_{H\subset[1,M_{*}^{2}]\atop{|H|=2^{j}}}2^{-j}(\sum_{a\in
H}I_{a}^{2/3})^{3}.$
Using Lemma 4.2, it remains to check that
$\sum_{N^{\frac{4}{5}-\delta}\lesssim 2^{j}\lesssim
N^{\alpha-1-\delta}}M_{*}^{4}2^{j}+\sum_{N^{\frac{4}{5}-\delta}\lesssim
2^{j}\lesssim N^{\alpha-1-\delta}}M_{*}^{6}2^{-j}\lesssim
N^{4-\beta-3\delta}.$
The first sum is $\lesssim M_{*}^{4}N^{\alpha-1-\delta}=N^{1+\alpha-3\delta}=N^{4-\beta-3\delta}$,
since $\alpha+\beta=3$. The second sum is $\lesssim M_{*}^{6}N^{\delta-\frac{4}{5}}=N^{\frac{11}{5}-2\delta}\leq N^{4-\beta-3\delta}$
as long as $\delta\leq\frac{9}{5}-\beta$, which is guaranteed by (29).
## 7\. Proof of Theorem 1.2
This section shows that Theorem 1.3 implies Theorem 1.2. The argument is
inspired by [2].
The parameter $K$ will be very large and universal, independent of $N$ and of
the $\phi_{k}$. The larger we choose $K$, the smaller the $\epsilon$ in the
$N^{\epsilon}$ loss at the end of the section.
###### Proposition 7.1.
Assume $\alpha+\beta=3$ and $\frac{3}{2}\leq\alpha\leq 2.$ Assume
$\phi_{3},\phi_{4}:(0,3)\to{\mathbb{R}}$ are real analytic and satisfy (1),
(2) and (3). Let as before $\omega_{3}=[0,N^{\alpha}]$,
$\omega_{4}=[0,N^{\beta}]$ and
${\mathcal{E}}_{I,N}(x)=\sum_{n\in
I}e(nx_{1}+n^{2}x_{2}+\phi_{3}(\frac{n}{N})x_{3}+\phi_{4}(\frac{n}{N})x_{4}).$
We consider arbitrary integers $N_{0},M$ satisfying $1\leq
M\leq\frac{N_{0}}{K}$ and $N_{0}+[M,2M]\subset[\frac{N}{2},N]$.
Let $H_{1},H_{2}$ be intervals of length $\frac{M}{K}$ inside $N_{0}+[M,2M]$
such that ${\operatorname{dist}\,}(H_{1},H_{2})\geq\frac{M}{K}$. Then
$\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{6}\lesssim_{\epsilon}N^{7}M^{2+\epsilon}.$
###### Proof.
Write $H_{1}=N_{0}+I_{1}$, $H_{2}=N_{0}+I_{2}$ with $I_{1},I_{2}$ intervals of
length $\frac{M}{K}$ inside $[M,2M]$ and with separation $\geq\frac{M}{K}$. We
use the following expansion, valid for all $m\in I_{i}$.
$\displaystyle\phi_{3}(\frac{N_{0}+m}{N})$ $\displaystyle=Q_{3}(m)+\sum_{n\geq
3}\frac{\phi_{3}^{(n)}(\frac{N_{0}}{N})}{n!}(\frac{m}{N})^{n}$
$\displaystyle=Q_{3}(m)+(\frac{M}{N})^{3}\sum_{n\geq
3}\frac{\phi_{3}^{(n)}(\frac{N_{0}}{N})(\frac{M}{N})^{n-3}}{n!}(\frac{m}{M})^{n}.$
Here $Q_{3}(m)=A+Bm+Cm^{2}$ with $B=O(\frac{1}{N})$, $C=O(\frac{1}{N^{2}})$.
We introduce the analogue $\tilde{\phi_{3}}$ of $\phi_{3}$ at scale $M$
$\tilde{\phi_{3}}(t)=\sum_{n\geq
3}\frac{\phi_{3}^{(n)}(\frac{N_{0}}{N})(\frac{M}{N})^{n-3}}{n!}t^{n}.$
This series is convergent as long as $\frac{N_{0}}{N}+t\in(0,3)$, so the new
function is certainly real analytic on $(0,2)$, since $N_{0}\leq N$.
Let $\delta>0$ be conveniently small. By choosing $K$ large enough we can make
$\frac{M}{N}$ as small as we wish, so we may guarantee that for each
$t\in[\frac{1}{2},1]$ we have
$|\tilde{\phi}_{3}^{(3)}(t)-{\phi}_{3}^{(3)}(\frac{N_{0}}{N})|\leq\delta.$
(35)
Thus, we can guarantee (3) for $\tilde{\phi}_{3}$, with a slightly smaller,
but uniform value of $A_{4}$. The same will work with (1) and (2), as will
soon become clear. To this end, we may also enforce
$|\tilde{\phi}_{3}^{(4)}(t)|\leq\delta.$ (36)
We also define, with $Q_{4}(m)=D+Em+Fm^{2}$ satisfying $E=O(\frac{1}{N})$,
$F=O(\frac{1}{N^{2}})$,
$\displaystyle\phi_{4}(\frac{N_{0}+m}{N})$ $\displaystyle=Q_{4}(m)+\sum_{n\geq
3}\frac{\phi_{4}^{(n)}(\frac{N_{0}}{N})}{n!}(\frac{m}{N})^{n}$
$\displaystyle=Q_{4}(m)+\frac{\phi_{4}^{(3)}(\frac{N_{0}}{N})}{3!}(\frac{m}{N})^{3}+(\frac{M}{N})^{4}\sum_{n\geq
4}\frac{\phi_{4}^{(n)}(\frac{N_{0}}{N})(\frac{M}{N})^{n-4}}{n!}(\frac{m}{M})^{n}.$
The last two terms are equal to
$\frac{\phi_{4}^{(3)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})}(\frac{M}{N})^{3}\tilde{\phi}_{3}(\frac{m}{M})+(\frac{M}{N})^{4}\sum_{n\geq
4}\frac{\phi_{4}^{(n)}(\frac{N_{0}}{N})\phi_{3}^{(3)}(\frac{N_{0}}{N})-\phi_{4}^{(3)}(\frac{N_{0}}{N})\phi_{3}^{(n)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})n!}(\frac{M}{N})^{n-4}(\frac{m}{M})^{n}.$
Let $\tilde{\phi}_{4}$ be the analogue of $\phi_{4}$ at scale $M$ defined by
$\tilde{\phi}_{4}(t)=\sum_{n\geq
4}\frac{\phi_{4}^{(n)}(\frac{N_{0}}{N})\phi_{3}^{(3)}(\frac{N_{0}}{N})-\phi_{4}^{(3)}(\frac{N_{0}}{N})\phi_{3}^{(n)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})n!}(\frac{M}{N})^{n-4}t^{n}.$
As before, by choosing $K$ large enough, we can arrange that for all
$t\in[\frac{1}{2},1]$
$|\tilde{\phi}_{4}^{(4)}(t)-\frac{\phi_{4}^{(4)}(\frac{N_{0}}{N})\phi_{3}^{(3)}(\frac{N_{0}}{N})-\phi_{4}^{(3)}(\frac{N_{0}}{N})\phi_{3}^{(4)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})}|\leq\delta.$
Combining this with (35) and (36) we may arrange that
$\det\begin{bmatrix}\tilde{\phi}_{3}^{(3)}(t)&\tilde{\phi}_{3}^{(4)}(t)\\\
\tilde{\phi}_{4}^{(3)}(s)&\tilde{\phi}_{4}^{(4)}(s)\end{bmatrix}-\det\begin{bmatrix}\phi_{3}^{(3)}(\frac{N_{0}}{N})&\phi_{3}^{(4)}(\frac{N_{0}}{N})\\\
\phi_{4}^{(3)}(\frac{N_{0}}{N})&\phi_{4}^{(4)}(\frac{N_{0}}{N})\end{bmatrix}$
is as small in absolute value as we wish, uniformly over
$t,s\in[\frac{1}{2},1]$. In particular, we can guarantee (2) for the pair
$(\tilde{\phi}_{3},\tilde{\phi}_{4})$, with slightly modified, but uniform
values of $A_{2},A_{3}$. Similar comments apply regarding (1).
Now
$\displaystyle|{\mathcal{E}}_{H_{k},N}(x)|$ $\displaystyle=|\sum_{m\in
I_{k}}e(mx_{1}+(m^{2}+2N_{0}m)x_{2}+\phi_{3}(\frac{N_{0}+m}{N})x_{3}+\phi_{4}(\frac{N_{0}+m}{N})x_{4})|$
$\displaystyle=|\sum_{m\in
I_{k}}e(m(x_{1}+2N_{0}x_{2}+Bx_{3}+Ex_{4})+m^{2}(x_{2}+Cx_{3}+Fx_{4})$
$\displaystyle+(\frac{M}{N})^{3}\tilde{\phi}_{3}(\frac{m}{M})(x_{3}+\frac{\phi_{4}^{(3)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})}x_{4})$
$\displaystyle+(\frac{M}{N})^{4}\tilde{\phi}_{4}(\frac{m}{M})x_{4})|.$
Recall $N_{0}\sim N$, $B,E=O(1/N)$, $C,F=O(1/N^{2})$. We make the change of
variables
$\begin{cases}y_{1}=x_{1}+2N_{0}x_{2}+Bx_{3}+Ex_{4}\\\
y_{2}=x_{2}+Cx_{3}+Fx_{4}\\\
y_{3}=(\frac{M}{N})^{3}(x_{3}+\frac{\phi_{4}^{(3)}(\frac{N_{0}}{N})}{\phi_{3}^{(3)}(\frac{N_{0}}{N})}x_{4})\\\
y_{4}=(\frac{M}{N})^{4}x_{4}\end{cases}.$
Due to periodicity, we may extend the range of $x_{1}$ to $[0,N_{0}]$. This
linear transformation maps
$[0,N_{0}]\times[0,1]\times\omega_{3}\times\omega_{4}$ to a subset of a box
$\tilde{\omega}_{1}\times\tilde{\omega}_{2}\times\tilde{\omega}_{3}\times\tilde{\omega_{4}}$
centered at the origin, with dimensions roughly
$N_{0},1,M^{3}N^{\alpha-3},M^{4}N^{\beta-4}$.
Thus
$|{\mathcal{E}}_{H_{k},N}(x)|=|{\mathcal{E}}_{I_{k},M}(y)|$
where
${\mathcal{E}}_{I_{k},M}(y)=\sum_{m\in
I_{k}}e(my_{1}+m^{2}y_{2}+\tilde{\phi}_{3}(\frac{m}{M})y_{3}+\tilde{\phi}_{4}(\frac{m}{M})y_{4}).$
We may write, using again periodicity in $y_{1}$ and $y_{2}$
$\displaystyle\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{6}$
$\displaystyle=\frac{1}{N_{0}}\int_{[0,N_{0}]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{6}$
$\displaystyle\leq(\frac{N}{M})^{7}\int_{[0,1]\times[0,1]\times\tilde{\omega}_{3}\times\tilde{\omega_{4}}}|{\mathcal{E}}_{I_{1},M}(y){\mathcal{E}}_{I_{2},M}(y)|^{6}.$
Finally, we use Theorem 1.3 with $N=M$, noting that
$\tilde{\omega}_{3}\subset[-M^{\alpha},M^{\alpha}]$ and
$\tilde{\omega}_{4}\subset[-M^{\beta},M^{\beta}]$, to estimate the last
expression by
$(\frac{N}{M})^{7}M^{9+\epsilon}=N^{7}M^{2+\epsilon}.$
∎
We can now prove Theorem 1.2.
Choose $K$ large enough, depending on $\epsilon$. Write ${\mathcal{H}}_{n}(I)$
for the collection of dyadic intervals in $I$ with length $\frac{N}{2K^{n}}$.
We write $H_{1}\not\sim H_{2}$ to mean that $H_{1},H_{2}$ are not neighbors.
Then
$|{\mathcal{E}}_{I,N}(x)|\leq
3\max_{H\in{\mathcal{H}}_{1}(I)}|{\mathcal{E}}_{H,N}(x)|+K^{10}\max_{H_{1}\not\sim
H_{2}\in{\mathcal{H}}_{1}(I)}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{1/2}.$
This is the standard broad–narrow dichotomy: pointwise, either three
consecutive intervals carry essentially all of the sum, which gives the first
term, or two non-neighboring intervals each carry a fraction $\gtrsim K^{-10}$
of the maximal one, which gives the second. We repeat this inequality until we
reach intervals in ${\mathcal{H}}_{l}$ of length $\sim 1$, that is, $K^{l}\sim
N$. We have
$\displaystyle|{\mathcal{E}}_{I,N}(x)|$ $\displaystyle\lesssim
3^{l}+l3^{l}K^{10}\max_{1\leq n\leq
l}\max_{H\in{\mathcal{H}}_{n}(I)}\max_{H_{1}\not\sim
H_{2}\in{\mathcal{H}}_{n+1}(H)}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{1/2}$
$\displaystyle\lesssim(\log N)N^{\log_{K}3}\max_{1\leq n\leq
l}\max_{H\in{\mathcal{H}}_{n}(I)}\max_{H_{1}\not\sim
H_{2}\in{\mathcal{H}}_{n+1}(H)}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{1/2}.$
Using Proposition 7.1 we finish the proof
$\displaystyle\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}$
$\displaystyle|{\mathcal{E}}_{I,N}|^{12}$
$\displaystyle\lesssim_{K}N^{\log_{K}3}\sum_{n}\sum_{H\in{\mathcal{H}}_{n}(I)}\max_{H_{1}\not\sim
H_{2}\in{\mathcal{H}}_{n+1}(H)}\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{H_{1},N}(x){\mathcal{E}}_{H_{2},N}(x)|^{6}$
$\displaystyle\lesssim_{K,\epsilon}N^{\epsilon+\log_{K}3}\sum_{n}{K^{n}}N^{7}(\frac{N}{K^{n}})^{2}$
$\displaystyle\lesssim_{K,\epsilon}N^{\epsilon+\log_{K}3}N^{9}.$
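The last step used the geometric series $\sum_{n}K^{n}(\frac{N}{K^{n}})^{2}=N^{2}\sum_{n}K^{-n}\lesssim N^{2}$.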
Choosing $K$ large enough, we may make $\log_{K}3$ as small as we wish.
## 8\. Other values of $p$
The reason Theorem 1.2 was accessible via the bilinear result in Theorem 1.3
has to do with the fact that $6$ is the critical exponent for the decoupling
for the parabola (at the canonical scale). Thus, our arguments rely
fundamentally on this dimensional reduction. In [6], the small cap decoupling
for the parabola is settled, and the associated critical exponents lie between
4 and 6. In principle, this new tool can be used to determine $L^{p}$ moments
for curves in ${\mathbb{R}}^{4}$, in the range $8\leq p\leq 12$.
There are many possible things to consider in this direction. One is the
following extension of Conjecture 1.1, which we use to illustrate a different
type of obstruction that appears in this regime. This was observed in [1], in
a related context.
###### Conjecture 8.1 (Square root cancellation in $L^{p}$).
Let $11\leq p\leq 12$. Assume $\alpha\geq\beta\geq 0$ satisfy
$\alpha+\beta=\frac{p}{2}-3$. Let $\phi_{3}$, $\phi_{4}$, $\omega_{3}$,
$\omega_{4}$ be as in Theorem 1.2. Then
$\int_{[0,1]\times[0,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{[\frac{N}{2},N],N}|^{p}\lesssim_{\epsilon}N^{p-3+\epsilon}.$
The case $\beta=0$ was proved in [6] in the larger range $9\leq p<12$. As
mentioned earlier, when $\beta=0$, the curve collapses to a three dimensional
curve. However, the next result shows that the restriction $p\geq 11$ is
needed if $\beta>0.$ The new obstruction can be described as constructive
interference on spatially disjoint blocks.
###### Theorem 8.2.
Assume $p<11$. Let $\omega_{3}=[-N^{\alpha},N^{\alpha}]$,
$\omega_{4}=[-N^{\beta},N^{\beta}]$, $\alpha\geq\beta$ and
$\alpha+\beta=\frac{p}{2}-3$. Assume also that $\beta>0$.
Then, for some $\delta>0$ and $\phi_{3}(t)=t^{3}$, $\phi_{4}(t)=t^{4}$ we have
$\int_{[-1,1]\times[-1,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{[\frac{N}{2},N],N}|^{p}\gtrsim
N^{p-3+\delta}.$
###### Proof.
Lemma 8.3 shows that the integral is bounded below by a constant multiple of
$\sum_{J\subset
I}\int_{[-1,1]\times[-1,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{J,N}|^{p},$
where the sum runs over intervals $J$ of length $M<N$, partitioning
$[\frac{N}{2},N]$. The parameter $M$ will be determined later. In some sense,
the components ${\mathcal{E}}_{J,N}$ behave as if they were spatially
supported on pairwise disjoint sets.
By periodicity
$\int_{[-1,1]\times[-1,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{J,N}|^{p}=N^{-3}\int_{[-N^{2},N^{2}]\times[-N,N]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{J,N}|^{p}.$
Write $J=[h+1,h+M]$. Note that
$|{\mathcal{E}}_{J,N}(x)|=|\sum_{1\leq m\leq
M}e(my_{1}+m^{2}y_{2}+m^{3}y_{3}+m^{4}y_{4})|$
where
$\begin{cases}y_{1}=x_{1}+2h{x_{2}}+\frac{3h^{2}}{N^{3}}x_{3}+\frac{4h^{3}}{N^{4}}x_{4}\\\
y_{2}=x_{2}+\frac{3h}{N^{3}}x_{3}+\frac{6h^{2}}{N^{4}}x_{4}\\\
y_{3}=\frac{x_{3}}{N^{3}}+\frac{4h}{N^{4}}x_{4}\\\
y_{4}=\frac{x_{4}}{N^{4}}\end{cases}.$
This change of variables maps
$[-N^{2},N^{2}]\times[-N,N]\times\omega_{3}\times\omega_{4}$ to a set
containing
$S=[-o(N^{2}),o(N^{2})]\times[-o(N),o(N)]\times[-o(N^{\alpha-3}),o(N^{\alpha-3})]\times[-o(N^{\beta-4}),o(N^{\beta-4})].$
We have used that $3\geq\alpha\geq\beta-1$. Thus
$\int_{[-1,1]\times[-1,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{J,N}(x)|^{p}dx$
$\geq N^{4}\int_{S}|\sum_{1\leq m\leq
M}e(my_{1}+m^{2}y_{2}+m^{3}y_{3}+m^{4}y_{4})|^{p}dy.$
Let
$M=\max\\{N^{1-\frac{\alpha}{3}},N^{1-\frac{\beta}{4}}\\}.$
Note that since $\beta>0$ we have that $M=N^{1-\epsilon}$ for some
$\epsilon>0$. Note also that
$[0,o(M^{-3})]\times[0,o(M^{-4})]\subset[0,o(N^{\alpha-3})]\times[0,o(N^{\beta-4})].$
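Indeed, $M\geq N^{1-\frac{\alpha}{3}}$ gives $M^{-3}\leq N^{\alpha-3}$, while
$M\geq N^{1-\frac{\beta}{4}}$ gives $M^{-4}\leq N^{\beta-4}$.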
Using constructive interference we get
$\int_{S}|\sum_{1\leq m\leq
M}e(my_{1}+m^{2}y_{2}+m^{3}y_{3}+m^{4}y_{4})|^{p}dy\gtrsim M^{p-10}N^{3}.$
Putting things together we conclude that
$\int_{[-1,1]\times[-1,1]\times\omega_{3}\times\omega_{4}}|{\mathcal{E}}_{[\frac{N}{2},N],N}|^{p}\gtrsim\frac{N}{M}N^{4}M^{p-10}N^{3}=N^{8}M^{p-11}.$
Since $M=N^{1-\epsilon}$ with $\epsilon>0$ and $p<11$, this equals
$N^{p-3+\epsilon(11-p)}\geq N^{p-3+\delta}$ for $\delta=\epsilon(11-p)>0$.
∎
###### Lemma 8.3.
Let ${\mathcal{R}}$ be a collection of rectangular boxes $R$ in
${\mathbb{R}}^{n}$, with pairwise disjoint doubles $2R$. Let $F$ be a Schwartz
function in ${\mathbb{R}}^{n}$ that can be written as
$F=\sum_{R\in{\mathcal{R}}}F_{R},$
with the spectrum of $F_{R}$ inside $R$. Then for each $2\leq p\leq\infty$ we
have
$\|(\|F_{R}\|_{p})_{R\in{\mathcal{R}}}\|_{l^{p}({\mathcal{R}})}\lesssim\|F\|_{p}.$
The implicit constant is independent of $F$ and ${\mathcal{R}}$.
###### Proof.
Interpolate between $p=2$ and $p=\infty$. For $p=2$ the claim follows from
orthogonality, since the spectra of the $F_{R}$ are pairwise disjoint. For
$p=\infty$, each $F_{R}$ is recovered from $F$ by a smooth Fourier multiplier
adapted to $2R$ (this is where the disjointness of the doubles is used), and
such multipliers are bounded on $L^{\infty}$ uniformly over ${\mathcal{R}}$. ∎
## 9\. Auxiliary results
This section records two auxiliary results that are used repeatedly throughout
the paper.
###### Lemma 9.1.
Assume $t,s\in[\frac{1}{2},1]$ satisfy $|t-s|\sim 1$. The Jacobian of the
transformation $y=\psi(x)$
$\begin{cases}y_{1}=x_{1}+2tx_{2}+\phi_{3}^{\prime}(t)x_{3}+\phi_{4}^{\prime}(t)x_{4}\\\
y_{2}=x_{1}+2sx_{2}+\phi_{3}^{\prime}(s)x_{3}+\phi_{4}^{\prime}(s)x_{4}\\\
y_{3}=\frac{2x_{2}}{N}+\phi_{3}^{\prime\prime}(t)\frac{x_{3}}{N}+\phi_{4}^{\prime\prime}(t)\frac{x_{4}}{N}\\\
y_{4}=\frac{2x_{2}}{N}+\phi_{3}^{\prime\prime}(s)\frac{x_{3}}{N}+\phi_{4}^{\prime\prime}(s)\frac{x_{4}}{N}\end{cases}$
is $\sim\frac{1}{N^{2}}.$
Moreover, $\psi$ maps cubes $Q$ with side length $L$ to subsets of rectangular
boxes of dimensions roughly $L\times L\times\frac{L}{N}\times\frac{L}{N}$.
If the cube $Q$ is centered at the origin, $\psi(Q)$ contains the rectangular
box $[-o(L),o(L)]^{2}\times[-o(\frac{L}{N}),o(\frac{L}{N})]^{2}$.
###### Proof.
Let $\phi_{1}(u)=u$, $\phi_{2}(u)={u^{2}}$. Then the Jacobian is
$\frac{1}{N^{2}}\operatorname{det}\left[\begin{array}[]{cccc}\phi_{1}^{\prime}(t)&\phi_{2}^{\prime}(t)&\phi_{3}^{\prime}(t)&\phi_{4}^{\prime}(t)\\\
\phi_{1}^{\prime}(s)&\phi_{2}^{\prime}(s)&\phi_{3}^{\prime}(s)&\phi_{4}^{\prime}(s)\\\
\phi_{1}^{\prime\prime}(t)&\phi_{2}^{\prime\prime}(t)&\phi_{3}^{\prime\prime}(t)&\phi_{4}^{\prime\prime}(t)\\\
\phi_{1}^{\prime\prime}(s)&\phi_{2}^{\prime\prime}(s)&\phi_{3}^{\prime\prime}(s)&\phi_{4}^{\prime\prime}(s)\end{array}\right].$
(37)
Note that
$\operatorname{det}\left[\begin{array}[]{cccc}\phi_{1}^{\prime}(t)&\phi_{2}^{\prime}(t)&\phi_{3}^{\prime}(t)&\phi_{4}^{\prime}(t)\\\
\phi_{1}^{\prime}(s)&\phi_{2}^{\prime}(s)&\phi_{3}^{\prime}(s)&\phi_{4}^{\prime}(s)\\\
\phi_{1}^{\prime\prime}(t)&\phi_{2}^{\prime\prime}(t)&\phi_{3}^{\prime\prime}(t)&\phi_{4}^{\prime\prime}(t)\\\
\phi_{1}^{\prime\prime}(s)&\phi_{2}^{\prime\prime}(s)&\phi_{3}^{\prime\prime}(s)&\phi_{4}^{\prime\prime}(s)\end{array}\right]=\lim_{\epsilon\to
0}\frac{1}{\epsilon^{2}}\operatorname{det}\left[\begin{array}[]{cccc}\phi_{1}^{\prime}(t)&\phi_{2}^{\prime}(t)&\phi_{3}^{\prime}(t)&\phi_{4}^{\prime}(t)\\\
\phi_{1}^{\prime}(s)&\phi_{2}^{\prime}(s)&\phi_{3}^{\prime}(s)&\phi_{4}^{\prime}(s)\\\
\phi_{1}^{\prime}(t+\epsilon)&\phi_{2}^{\prime}(t+\epsilon)&\phi_{3}^{\prime}(t+\epsilon)&\phi_{4}^{\prime}(t+\epsilon)\\\
\phi_{1}^{\prime}(s+\epsilon)&\phi_{2}^{\prime}(s+\epsilon)&\phi_{3}^{\prime}(s+\epsilon)&\phi_{4}^{\prime}(s+\epsilon)\end{array}\right].$
A generalization of the Mean Value Theorem (see [9], Vol. II, Part V, Chap. 1,
No. 95) guarantees that
$\operatorname{det}\left[\begin{array}[]{cccc}\phi_{1}^{\prime}(t)&\phi_{2}^{\prime}(t)&\phi_{3}^{\prime}(t)&\phi_{4}^{\prime}(t)\\\
\phi_{1}^{\prime}(s)&\phi_{2}^{\prime}(s)&\phi_{3}^{\prime}(s)&\phi_{4}^{\prime}(s)\\\
\phi_{1}^{\prime}(t+\epsilon)&\phi_{2}^{\prime}(t+\epsilon)&\phi_{3}^{\prime}(t+\epsilon)&\phi_{4}^{\prime}(t+\epsilon)\\\
\phi_{1}^{\prime}(s+\epsilon)&\phi_{2}^{\prime}(s+\epsilon)&\phi_{3}^{\prime}(s+\epsilon)&\phi_{4}^{\prime}(s+\epsilon)\end{array}\right]=$
$\displaystyle=\epsilon^{2}(t-s)^{2}(t+\epsilon-s)(s+\epsilon-t)\operatorname{det}\left[\begin{array}[]{cccc}\phi_{1}^{\prime}(\tau_{1})&\phi_{2}^{\prime}(\tau_{1})&\phi_{3}^{\prime}(\tau_{1})&\phi_{4}^{\prime}(\tau_{1})\\\
\phi_{1}^{\prime\prime}(\tau_{2})&\phi_{2}^{\prime\prime}(\tau_{2})&\phi_{3}^{\prime\prime}(\tau_{2})&\phi_{4}^{\prime\prime}(\tau_{2})\\\
\phi_{1}^{\prime\prime\prime}(\tau_{3})&\phi_{2}^{\prime\prime\prime}(\tau_{3})&\phi_{3}^{\prime\prime\prime}(\tau_{3})&\phi_{4}^{\prime\prime\prime}(\tau_{3})\\\
\phi_{1}^{\prime\prime\prime\prime}(\tau_{4})&\phi_{2}^{\prime\prime\prime\prime}(\tau_{4})&\phi_{3}^{\prime\prime\prime\prime}(\tau_{4})&\phi_{4}^{\prime\prime\prime\prime}(\tau_{4})\end{array}\right]$
$\displaystyle=\epsilon^{2}(t-s)^{2}(t+\epsilon-s)(s+\epsilon-t)\operatorname{det}\left[\begin{array}[]{cccc}\phi_{3}^{\prime\prime\prime}(\tau_{3})&\phi_{4}^{\prime\prime\prime}(\tau_{3})\\\
\phi_{3}^{\prime\prime\prime\prime}(\tau_{4})&\phi_{4}^{\prime\prime\prime\prime}(\tau_{4})\end{array}\right]$
for some $\tau_{i}\in[\frac{1}{2},1]$ depending on $t,s,\epsilon$. The
conclusion follows by letting $\epsilon\to 0$ and using (2).
The second statement is immediate. To prove the last one, assume
$y\in[-cL,cL]^{2}\times[-c\frac{L}{N},c\frac{L}{N}]^{2},$
for some small enough $c$, independent of $N$. We need to prove that
$y=\psi(x)$ for some $x\in Q$. This can be seen by solving for $x$. For
example,
$x_{1}\sim
N^{2}\operatorname{det}\left[\begin{array}[]{cccc}y_{1}&\phi_{2}^{\prime}(t)&\phi_{3}^{\prime}(t)&\phi_{4}^{\prime}(t)\\\
y_{2}&\phi_{2}^{\prime}(s)&\phi_{3}^{\prime}(s)&\phi_{4}^{\prime}(s)\\\
y_{3}&\frac{\phi_{2}^{\prime\prime}(t)}{N}&\frac{\phi_{3}^{\prime\prime}(t)}{N}&\frac{\phi_{4}^{\prime\prime}(t)}{N}\\\
y_{4}&\frac{\phi_{2}^{\prime\prime}(s)}{N}&\frac{\phi_{3}^{\prime\prime}(s)}{N}&\frac{\phi_{4}^{\prime\prime}(s)}{N}\end{array}\right].$
This and (1) show that
$|x_{1}|\lesssim
N^{2}(\frac{|y_{1}|+|y_{2}|}{N^{2}}+\frac{|y_{3}|+|y_{4}|}{N}).$
The same inequality holds for all $x_{i}$, which proves the desired statement.
∎
###### Lemma 9.2.
Let $\gamma$ be a Schwartz function supported on $[-2,2]$. Define the smooth
Weyl sums for $u,w,v\in{\mathbb{R}}$
$G(u,w,v)=\sum_{k\in{\mathbb{Z}}}\gamma(k/M)e(ku+k^{2}w+k^{3}v).$
Let $1\leq b\leq q\leq M$ with $(b,q)=1$. Assume that
${\operatorname{dist}\,}(w-\frac{b}{q},{\mathbb{Z}}):=\varphi\leq\frac{1}{qM}$
and that $|v|\lesssim\frac{1}{M^{3}}$. Then for each $\epsilon>0$ we have
$|G(u,w,v)|\lesssim_{\epsilon}\frac{M^{\epsilon}}{q^{1/2}}\min\\{M,\frac{1}{\varphi^{1/2}}\\}$
if
$u\in{\mathcal{M}}=\bigcup_{m\in{\mathbb{Z}}}[\frac{m}{q}-\varphi
M^{1+\epsilon},\frac{m}{q}+\varphi M^{1+\epsilon}]$
and
$|G(u,w,v)|\lesssim_{\epsilon}M^{-100}$
if
$u\not\in{\mathcal{M}}.$
###### Proof.
Invoking periodicity, we may assume that $w=\frac{b}{q}+\varphi$ with
$|\varphi|\leq\frac{1}{Mq}$. Using the representation $k=rq+k_{1}$, $0\leq
k_{1}\leq q-1$ and the Poisson summation formula we get
$G(u,w,v)=\sum_{k_{1}=0}^{q-1}e(k_{1}^{2}b/q)\sum_{r\in{\mathbb{Z}}}\gamma(\frac{k_{1}+rq}{M})e((rq+k_{1})u+(rq+k_{1})^{2}\varphi+(rq+k_{1})^{3}v)$
$=\sum_{m\in{\mathbb{Z}}}\left[\frac{1}{q}\sum_{k_{1}=0}^{q-1}e(k_{1}^{2}b/q-k_{1}m/q)\right]\left[\int_{\mathbb{R}}\gamma(y/M)e((u+\frac{m}{q})y+\varphi
y^{2}+vy^{3})dy\right]$ $=\sum_{m\in{\mathbb{Z}}}S(b,m,q)J(u,v,\varphi,m,q)$
(38)
where
$S(b,m,q)=\frac{1}{q}\sum_{k=0}^{q-1}e(k^{2}b/q-km/q)$ $\displaystyle
J(u,v,\varphi,m,q)$
$\displaystyle=\int_{{\mathbb{R}}}\gamma(y/M)e((u+\frac{m}{q})y+\varphi
y^{2}+vy^{3})dy$
$\displaystyle=M\int_{{\mathbb{R}}}\gamma(z)e(M(u+\frac{m}{q})z+\varphi
M^{2}z^{2}+vM^{3}z^{3})dz.$
If $|\varphi|\lesssim\frac{1}{M^{2}}$, we are content with the bound
$|J(u,v,\varphi,m,q)|\lesssim M$.
Assume now that $|\varphi|\gg\frac{1}{M^{2}}$. The classical van der Corput
estimate (second derivative test) reads
$|\int_{{\mathbb{R}}}\gamma(z)e(Az+Bz^{2}+Cz^{3})dz|\lesssim|B|^{-1/2},$
if $|B|\gg|C|$. In our case $|B|=|\varphi|M^{2}\gg 1\gtrsim|vM^{3}|=|C|$. In
either case we get
$|J(u,v,\varphi,m,q)|\lesssim\min\\{M,|\varphi|^{-1/2}\\}.$
On the other hand, repeated integration by parts (first derivative test) shows
that for each $\alpha>0$
$|J(u,v,\varphi,m,q)|\lesssim_{\alpha}\frac{1}{A^{\alpha}}$
when $|A|=M|u+\frac{m}{q}|\geq M^{\epsilon}\varphi M^{2}$. Thus, when
$u\in{\mathcal{M}}$, only $O(M^{\epsilon})$ values of $m$ will have a non-
negligible contribution to the sum, while if $u\not\in{\mathcal{M}}$ then the
contribution from all $m$ will be negligible.
Combining these with the classical estimate
$|S(b,m,q)|\lesssim\frac{1}{\sqrt{q}}$
finishes the argument.
∎
## References
* [1] Bourgain, J., Decoupling inequalities and some mean-value theorems, J. Anal. Math. 133 (2017), 313–334.
* [2] Bourgain, J., Decoupling, exponential sums and the Riemann zeta function, J. Amer. Math. Soc. 30 (2017), no. 1, 205–224.
* [3] Bourgain, J. and Demeter, C., The proof of the $l^{2}$ Decoupling Conjecture, Annals of Math. 182 (2015), no. 1, 351–389.
* [4] Bourgain, J. and Demeter, C., Decouplings for surfaces in ${\mathbb{R}}^{4}$, J. Funct. Anal. 270 (2016), no. 4, 1299–1318.
* [5] Bourgain, J., Demeter, C. and Guth, L., Proof of the main conjecture in Vinogradov’s mean value theorem for degrees higher than three, Ann. of Math. (2) 184 (2016), no. 2, 633–682.
* [6] Demeter, C., Guth, L. and Wang, H., Small cap decoupling, GAFA 30 (2020), no. 4, 989–1062.
* [7] Jung, H., A sharp $L^{10}$ decoupling for the twisted cubic, arXiv:2011.10539.
* [8] Huxley, M. N., Area, lattice points, and exponential sums, London Mathematical Society Monographs, New Series, 13, Oxford Science Publications, The Clarendon Press, Oxford University Press, New York, 1996.
* [9] Polya, G. and Szegö, G., Problems and Theorems in Analysis, Springer-Verlag, New York, 1976.
* [10] Wooley, Trevor D., The cubic case of the main conjecture in Vinogradov’s mean value theorem, Adv. Math. 294 (2016), 532–561.
# Minimal instance with no weakly stable matching for three-sided problem with
cyclic incomplete preferences
E.Yu. Lerner, R.E. Lerner
###### Abstract
Given $n$ men, $n$ women, and $n$ dogs, each man has an incomplete preference
list of women, each woman has an incomplete preference list of dogs, and each
dog has an incomplete preference list of men. We understand a family as a
triple consisting of one man, one woman, and one dog such that each of them
appears in the preference list of the corresponding agent. We define a matching
as a collection of disjoint families (some agents, possibly, remain single). A
matching is said to be unstable if one can find a man, a woman, and a dog who
do not currently live together but each of whom would become “happier” if they
did. Otherwise the matching is said to be stable (a weakly stable matching in
the 3DSMI-CYC problem). We give an example of this problem for $n=3$ where no
stable matching exists. Moreover, we prove the absence of such an example for
$n<3$. Such an example was known earlier only for $n=6$ (Biró, McDermid, 2010).
The constructed examples also allow one to halve the dimension of the recently
constructed analogous example for complete preference lists (Lam, Plaxton,
2019).
## 1 Introduction
Assume that there are $n$ men and $n$ women, and each of them has a preference
list of representatives of the opposite sex. A partition into man–woman
couples with no pair of a man and a woman who prefer each other to their
current partners (if they have any) is called a stable matching. The initial
case of complete preference lists was studied by D. Gale and L.S. Shapley: a
stable matching necessarily exists, and an $O(n^{2})$ algorithm for
constructing one was proposed in [4]. Note that the algorithm is also
applicable to the case of incomplete preference lists, but some men and women
may remain single. A certain modification of the Gale–Shapley algorithm (see,
for example, [5]) allows one to find, if possible, a matching without single
men and women, or to prove its absence otherwise.
In the case of random complete preference lists (when for each man and each
woman the distribution over all permutations of representatives of the opposite
sex is independent and uniform), the time necessary for finding a stable
matching is $\Theta(n\ln(n))$ [5]. In this case, the expected number of stable
matchings also grows asymptotically as $n\ln(n)$ [9].
In [5], D. Knuth raises the question whether it is possible to generalize the
theory of stable matchings to the case of three genders. The most interesting
variant in the $k$-gender case occurs when preferences are cyclic:
representatives of the 1st gender rank representatives of the 2nd one, the
latter rank representatives of the 3rd gender, etc., and each representative of
the $k$th gender has a preference list of representatives of the 1st gender
(see [7, Chapter 5.6] for the non-cyclic variants of the $k$-gender case).
A tuple containing exactly one representative of each gender is called a
family, and a set of disjoint families is called a matching. A matching is
said to be weakly stable if there is no tuple outside this matching each
member of which would become “happier” if they lived together. In what
follows, for brevity, we use the term “a stable matching” instead of the term
“a weakly stable matching”.
Let the number of representatives of each gender equal $n$. In [2], it is
proved that with complete preference lists a stable matching always exists,
provided that $n\leqslant k$ (where $k$ is the number of genders). In [3],
Eriksson et al. extend this result to the case $k=3$ and $n=k+1=4$. In the
same paper, they state the conjecture that the problem of finding a stable
matching in the 3-gender case with complete preference lists (problem
3DSM-CYC or just 3DSM) has a solution for any $n$. Using a satisfiability
problem formulation and an extensive computer-assisted search, the authors of
[8] prove the validity of the conjecture stated by Eriksson et al. for $n=5$.
In [10], it is proved that with random preference lists the expected number of
stable matchings in problem 3DSM grows as $\Omega(n^{2}\ln^{2}(n))$.
The 3DSMI problem (3-dimensional stable matching with incomplete preference
lists) was studied by P. Biró and E. McDermid [1]. According to their results,
a solution of 3DSMI does not necessarily exist, in contrast to the two-
dimensional case; they give an explicit example of problem 3DSMI for $n=6$
with no stable matching. Moreover, they prove that the problem of establishing
the solvability of 3DSMI is NP-complete. In the same paper, they state the
problem of constructing an instance with no weakly stable matching for $n<6$.
Finally, contrary to expectations, the conjecture stated by Eriksson et al.
was recently refuted in [6]. Lam and Plaxton associate with problem 3DSMI a
certain problem 3DSM, where $n$ is 15 times greater than the initial
dimension; this problem is solvable if and only if so is the initial problem
3DSMI. Therefore, the problem of establishing the solvability of problem 3DSM
is NP-complete. The example described in the paper [1] allows one to construct
an instance of problem 3DSM for $n=90=6\times 15$ with no stable matching.
For this reason, the problem of finding an instance of 3DSMI with no weakly
stable matching for $n<6$ becomes more pressing. The construction of such
instances for the least possible values of $n$ is the goal of this paper.
First we constructed an instance of the 3DSMI problem for $n=4$ and proved the
absence of such instances for $n<3$. After failing to prove the absence of
such instances for $n=3$, we proposed an algorithm for the computer search of
all possible instances of 3DSMI problems for $n=3$. Unexpectedly, the
algorithm succeeded in constructing rather simple instances of 3DSMI problems
without a weakly stable matching for $n=3$.
The rest of the paper has the following structure. In Sect. 2, we present the
formal statement of 3DSMI-CYC in terms of graph theory. In Sect. 3, we study
some properties of graphs of problem 3DSMI-CYC and prove the absence of
counterexamples for $n<3$. In Sect. 4, we describe various cases of problem
3DSMI for $n=3$ and consider the results of their computer enumeration; we
consider several instances and explicitly prove the absence of a stable
matching for each of them. We conclude by mentioning some potential future
work. In the Appendix, we describe our example for $n=4$ and prove that in
this case no stable matching exists.
## 2 The statement of 3DSMI-CYC in terms of graph theory
Let $G$ be some directed graph. Denote the set of its edges by $E$; assume
that $G$ has no multiple edges. Let the vertex set $V$ of the graph $G$ consist
of 3 subsets, namely, the set of men $M$, women $F$, and dogs $D$. Any vertex
$v\in M$ has outgoing edges directed to (certain) vertices in $F$, any vertex
$v\in F$ has outgoing edges directed to (certain) vertices in $D$, and any
vertex $v\in D$ has outgoing edges directed to (certain) vertices in $M$.
Assume that $|M|=|F|=|D|$ (otherwise we supplement the corresponding subset
with vertices that are not connected with the rest of the graph). The number
$n=|M|=|F|=|D|$ is called the problem dimension. Evidently, the length of
every cycle in the graph $G$ is a multiple of $3$. Note also that this
condition ensures the possibility to divide the vertex set of any directed
graph $G$ into 3 subsets $M$, $F$, $D$ so that all its edges are directed as
indicated above.
Each edge $(v,v^{\prime})$, $v,v^{\prime}\in V$, is assigned some positive
integer $r(v,v^{\prime})$ called the rank of this edge. For fixed $v\in V$,
the ranks $r(v,v_{1}),\ldots,r(v,v_{k})$ of all outgoing edges coincide with
$\\{1,\ldots,k\\}$, where $k$ is the out-degree of $v$ (if
$r(v,v^{\prime})=1$, then $v^{\prime}$ is the best preference for $v$, and so
on).
We understand a three-sided matching as a subgraph $H(V)$ of the graph $G$,
where each vertex $v\in V$ has at most one outgoing edge and the following
condition is fulfilled: if a vertex $v$ has an outgoing edge, then this edge
belongs to a cycle of length 3 in the graph $H$. Cycles of length 3 in the
graph $H$ are called families. Evidently, each family, up to a cyclic shift,
takes the form $(m,f,d)$, where $m\in M$, $f\in F$, and $d\in D$. Note that in
what follows, for convenience of notation, we do not fix the order of genders
in a family, i.e., we treat triples derived from one another by a cyclic shift
as equivalent.
In what follows, we sometimes use the notion of a family in a wider sense,
namely, as any cycle of length 3 in the graph $G$. However, if some three-
sided matching $H$ is fixed, then we describe other cycles of length 3
explicitly, applying the term “a family” only to cycles that enter the three-
sided matching.
A matching ${\mathcal{M}}$ is a collection of all families of a three-sided
matching $H$. For a vertex $v$, $v\in V$, in the matching ${\mathcal{M}}$, the
rank $R_{\mathcal{M}}(v)$ is defined as the rank of the edge that goes out of
this vertex in the subgraph $H$. If some vertex $v$ in the subgraph $H$ has no
outgoing edge, then $R_{\mathcal{M}}(v)$ is set to $+\infty$.
A triple $(v,v^{\prime},v^{\prime\prime})$ is said to be blocking for some
matching ${\mathcal{M}}$, if it is a cycle in the graph $G$, and
$r(v,v^{\prime})<R_{\mathcal{M}}(v),\quad
r(v^{\prime},v^{\prime\prime})<R_{\mathcal{M}}(v^{\prime}),\quad
r(v^{\prime\prime},v)<R_{\mathcal{M}}(v^{\prime\prime}).$
A matching ${\mathcal{M}}$ is said to be stable, if no blocking triple exists
for it.
Problem 3DSMI (3-dimensional stable matching with incomplete preference lists)
consists in finding a stable matching for a given graph $G$. It is well known
that such a matching does not necessarily exist. Moreover, the problem of
establishing its existence for a given graph $G$ is NP-complete. As was
mentioned in the Introduction, this fact was proved by Biró and McDermid. They
have constructed an explicit example of the graph $G$ of dimension 6, for
which no stable matching exists. Moreover, the question of constructing
similar examples for smaller dimensions was also posed by these authors.
## 3 The absence of examples of problem 3DSMI with no stable matching for
$n<3$
Let $G$ and $G^{\prime}$ be two directed graphs defined on one and the same
vertex set $V$ but, generally speaking, having distinct edge sets. Assume that
rank functions $r_{G}$ and $r_{G^{\prime}}$ are defined on $E$ and
$E^{\prime}$, correspondingly. Let $L\subseteq E\cap E^{\prime}$. We say that
ranking orders $r_{G}$ and $r_{G^{\prime}}$ coincide on $L$, if for any two
edges $(v,v^{\prime}),(v,v^{\prime\prime})$ in $L$,
$r_{G}(v,v^{\prime})<r_{G}(v,v^{\prime\prime})\quad\iff\quad
r_{G^{\prime}}(v,v^{\prime})<r_{G^{\prime}}(v,v^{\prime\prime}).$
###### Lemma 1
For any graph $G$ of problem 3DSMI of dimension $n$ there exists a graph
$G^{\prime}$ of the same dimension such that the outgoing degree of each of
its vertices is nonzero and there is the following correspondence between the
graphs $G$ and $G^{\prime}$:
1) the sets of all possible families of the graphs $G$ and $G^{\prime}$ coincide;
2) the ranking orders of all edges that enter these families also coincide.
Proof: Let $v$ be a vertex in the graph $G$ having no outgoing edges. Then $v$
enters no family of the graph $G$. Let us delete this vertex together with
all edges that enter it. Repeating this procedure several times, we get a
graph $\widehat{G}$ such that each of its vertices has at least one outgoing
edge and its set of families coincides with that of the initial graph $G$. Let
the symbol $\widehat{V}$ stand for the vertex set of the graph $\widehat{G}$;
denote the set of its edges by $\widehat{E}$. Without loss of generality, we
assume that the set of families of the graphs $G$ and $\widehat{G}$ is
nonempty. In this case, the set $\widehat{V}$ contains at least one vertex of
each gender.
Let us now restore the initial vertices belonging to the set
$V\setminus\widehat{V}$ and for each of them arbitrarily construct at least
one edge directed to some vertex in $\widehat{V}$ of the appropriate gender.
Since the incoming degree of the restored vertices equals zero, they, as
before, can enter no family. Note that $\widehat{E}\subseteq E$ and,
consequently, one can construct a rank function for the obtained graph
$G^{\prime}$ preserving the ranking order of the graph $G$ on $\widehat{E}$.
The obtained graph $G^{\prime}$ with the rank function defined in the
indicated way is the desired one. $\square$
Lemma 1 allows one, when studying problems 3DSMI of dimension $n$, to restrict
oneself to considering the corresponding graphs $G$ with nonzero outgoing
degrees of all vertices, which we do in what follows.
Let the symbol $G^{\prime\prime}$ stand for the subgraph of the graph $G$
consisting of its edges of rank 1. We call $G^{\prime\prime}$ the basic
subgraph of the graph $G$. Since each vertex in the basic subgraph has exactly
one outgoing edge, $G^{\prime\prime}$ represents a collection of cycles, whose
lengths are multiples of 3, together with trees of edges that lead to these
cycles.
###### Theorem 1
Problem 3DSMI of dimension $n\leqslant 2$ always has a stable matching.
Proof: Note that with $n=1$ the assertion of the theorem is trivial. In what
follows, we restrict ourselves to considering the case of $n=2$. Note also
that in this case an unstable matching can contain only one family.
The basic subgraph of the graph $G$ contains cycles either of length 3 or of
length 6. Let us consider both cases sequentially. In the first case, there
exist vertices $v_{0},v_{1},v_{2}$ such that
$r(v_{0},v_{1})=r(v_{1},v_{2})=r(v_{2},v_{0})=1$. Therefore, if the family
$(v_{0},v_{1},v_{2})$ belongs to a matching, then these vertices enter no
blocking triple. But then the consideration is reduced to the case of $n=1$,
which, as was mentioned above, is trivial.
It remains to consider the case when the basic subgraph of the graph $G$ is a
cycle of length 6, i.e., $C=(v_{0},v_{1},\ldots,v_{5})$. Without loss of
generality, we assume that the graph $G$, which represents a counterexample to
Theorem 1, along with the cycle $C$ contains the edge $(v_{2},v_{0})$ of rank
2. Then the only possible blocking triple for the matching of one family
$(v_{0},v_{1},v_{2})$ is $(v_{2},v_{3},v_{4})$. Consequently, the graph $G$
also contains the edge $(v_{4},v_{2})$. But then the only possible blocking
triple for the matching consisting of one family $(v_{2},v_{3},v_{4})$ is
$(v_{4},v_{5},v_{0})$. In turn, the graph $G$ that consists of only the basic
cycle $C$ and the edges $(v_{0},v_{4})$, $(v_{4},v_{2})$, $(v_{2},v_{0})$ of
rank 2 has a stable matching consisting of one family $(v_{0},v_{4},v_{2})$.
Therefore, the graph $G$, along with the cycle $C$, contains at least 4 edges.
Consequently, the graph $G$ of dimension $n=2$ has a matching of two families,
and it is stable by definition. $\square$
## 4 The examples of graphs $G$ of dimension $n=3$ with no stable matching
In this section, we consider the case of $n=3$. Let us first classify all
graphs of the problem of this dimension; this will facilitate their computer
search.
If the basic subgraph of the graph $G$ contains a cycle of length 3, then there
exist vertices $v_{0},v_{1},v_{2}$ such that
$r(v_{0},v_{1})=r(v_{1},v_{2})=r(v_{2},v_{0})=1$. Therefore, if the family
$(v_{0},v_{1},v_{2})$ enters a matching, then these vertices can enter no
blocking triple. But then we again get the case of $n=2$ covered by Theorem 1
(compare this paragraph with the beginning of the proof of Theorem 1).
Therefore, the basic subgraph of the graph $G$ represents either a cycle of
length 9, or a cycle of length 6 with three edges that lead to this cycle.
Altogether, up to the cyclic symmetry, there are 6 such subgraphs; they are
shown in Fig. 1.
Figure 1: 6 variants of the basic subgraph of the graph $G$.
Each of the 9 vertices of these subgraphs can have, in the graph $G$, outgoing
edges that lead to the two remaining vertices of the corresponding gender
(here we understand the remaining vertices as those that differ from the
vertex, to which the edge of the basic graph $G^{\prime\prime}$ is already
directed). The total number of possible cases is 5, namely,
1) the considered vertex has no more outgoing edges;
2)-3) the considered vertex has one more outgoing edge, which leads to one of
the two remaining vertices;
4)-5) the considered vertex has two more outgoing edges, with ranks 2 and 3;
the ranks can be assigned to these edges in two ways.
Therefore, it suffices to consider $6\times 5^{9}$ problems 3DSMI.
Evidently, for each of these problems there exist at most 27 families (27
potential blocking triples). The number of possible three-sided matchings, as
one can easily calculate, also is not so large. Namely, there exist,
evidently, at most 27 matchings consisting of one family. In addition, there
exist at most 108 matchings consisting of two families: there are 27 ways to
form a triple consisting of the representatives of the genders that enter no
family of the matching, and 4 ways to choose partners among the two women and
two dogs entering the matching for a fixed man that also enters this matching.
Finally, there are at most $36=108/3$ three-sided matchings consisting of 3
families: in the previous enumeration each of them is counted three times,
once for each of its 3 families regarded as the excluded one. Therefore, the
total number of three-sided matchings does not exceed 36+108+27=171. For each
of them we need to find the first triple among the 27 potential blocking ones
that really is blocking.
Therefore, the total number of considered cases does not exceed $6\times
5^{9}\times 171\times 27\approx 54\times 10^{9}$. For generating these cases,
we have written a program in Python. The version of this program that
calculates the number of counterexamples for each of the basic graphs shown in
Fig. 1 is available at https://github.com/reginalerner/3dsm/.
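To make the search concrete, the following is a minimal sketch (ours, not the published program) of the stability check performed for a single instance. Ranks are stored as a dictionary mapping a directed edge to its rank; a triple blocks a matching if each of its members strictly prefers its edge inside the triple to its current edge, with unmatched vertices treated as having rank $\infty$.

```python
from itertools import combinations

INF = float("inf")

def all_families(rank, n=3):
    """All triples (man, woman, dog) forming a directed 3-cycle m -> w -> d -> m.

    Vertices are 0..3n-1; vertex v has gender v mod 3 (0 = man, 1 = woman,
    2 = dog).  `rank` maps a directed edge (u, v) to its rank 1, 2 or 3.
    """
    return [(m, w, d)
            for m in range(0, 3 * n, 3)
            for w in range(1, 3 * n, 3)
            for d in range(2, 3 * n, 3)
            if (m, w) in rank and (w, d) in rank and (d, m) in rank]

def is_blocking(triple, matching, rank):
    """A family blocks a matching if all three of its members strictly improve."""
    m, w, d = triple
    current = {}  # vertex -> rank of its outgoing edge in the matching (else INF)
    for fm, fw, fd in matching:
        current[fm], current[fw], current[fd] = rank[(fm, fw)], rank[(fw, fd)], rank[(fd, fm)]
    return (rank[(m, w)] < current.get(m, INF)
            and rank[(w, d)] < current.get(w, INF)
            and rank[(d, m)] < current.get(d, INF))

def has_stable_matching(rank, n=3):
    fams = all_families(rank, n)
    for k in range(n + 1):
        for matching in combinations(fams, k):
            used = [v for fam in matching for v in fam]
            if len(set(used)) < 3 * k:
                continue  # families overlap: not a matching
            if not any(is_blocking(t, matching, rank) for t in fams):
                return True
    return False
```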
For the first basic graph shown in Fig. 1 (a cycle formed by 9 vertices), no
graph for problem 3DSMI with no stable matching was obtained. As is
mentioned in the Introduction, we did not even expect to find such instances
for $n=3$. To our surprise, the computer search found such counterexamples
for each of the remaining basic graphs. One of them is shown in Fig. 2. For
convenience, we enumerate the vertices of the graph by the numbers
$v=0,1,\ldots,8$. The value $v\bmod 3$ defines the gender that corresponds to
the vertex $v$.
Figure 2: The graph of problem 3DSMI of dimension 3 with no stable matching,
consisting of 16 edges. The rank of all edges indicated by solid bold lines
equals 1. The dashed lines represent edges of rank 2. The rank of the dotted
edge equals 3.
###### Theorem 2
The graph shown in Fig. 2 has no stable matching.
Proof: Fig. 2 shows that each possible cycle of length 3
takes one of the following forms: $(0,1,5)$, $(0,7,8)$, $(1,2,3)$, $(1,5,3)$,
$(2,3,4)$, $(3,4,5)$, and $(4,8,6)$. These cycles form families, while
collections of disjoint families form matchings ${\mathcal{M}}$ in the
problem.
Evidently, if a cycle $(v,v^{\prime},v^{\prime\prime})$ can be added to a
matching ${\mathcal{M}}$ (i.e., the vertices $v,v^{\prime},v^{\prime\prime}$ do
not enter ${\mathcal{M}}$), then ${\mathcal{M}}$ is unstable: the triple
$(v,v^{\prime},v^{\prime\prime})$ is blocking for ${\mathcal{M}}$. Therefore,
a candidate for a stable matching must not admit the addition of any cycle.
We call such matchings uncompletable and consider only them.
The union of the vertices of any three of the cycles listed above cannot
coincide with the set of all vertices of the graph shown in Fig. 2: the
vertices 6 and 7 belong only to the cycles $(4,8,6)$ and $(0,7,8)$, which share
the vertex 8. Hence no matching consists of three families. On the other hand,
by direct search one can check that any matching consisting of one family is
completable. Therefore, each uncompletable matching consists of two
families. Below we give their complete list together with blocking triples:
1) $\\{(0,1,5),(2,3,4)\\}$, the blocking triple is $(4,8,6)$;
2) $\\{(0,1,5),(4,8,6)\\}$, the blocking triple is $(1,2,3)$;
3) $\\{(0,7,8),(1,2,3)\\}$, the blocking triple is $(3,4,5)$;
4) $\\{(0,7,8),(1,5,3)\\}$, the blocking triple is $(2,3,4)$;
5) $\\{(0,7,8),(2,3,4)\\}$, the blocking triple is $(0,1,5)$;
6) $\\{(0,7,8),(3,4,5)\\}$, the blocking triple is $(0,1,5)$;
7) $\\{(1,2,3),(4,8,6)\\}$, the blocking triple is $(0,7,8)$ or $(3,4,5)$;
8) $\\{(1,5,3),(4,8,6)\\}$, the blocking triple is $(0,7,8)$. $\square$
One can easily give other examples of graphs with the same set of cycles,
uncompletable matchings, and blocking triples. In particular, this property
holds for the graph that differs from the one shown in Fig. 2 by the
presence of the additional edge $(7,2)$ of rank 2, of the additional edge
$(6,1)$ of rank 2, or of both of these edges.
Moreover, one can find other graphs consisting of 16 edges that have no stable
matching. One of them is shown in Fig. 3 (any other graph with this property
differs from the indicated one only in that the ranks of the edges $(0,4)$
and $(0,7)$ are interchanged). Note that the graph shown in Fig. 3 is similar
to that in our example for $n=4$ (compare with Fig. 4 in the Appendix).
Figure 3: One more graph of problem 3DSMI with no stable matching,
consisting of 16 edges. Notation is the same as in Fig. 2.
These graphs define the following families, from which matchings in problem
3DSMI are formed: $(0,1,8)$, $(0,4,5)$, $(0,7,5)$, $(1,2,6)$, $(2,3,7)$,
$(3,4,5)$, and $(3,7,5)$.
The list of matchings with blocking triples looks as follows:
1) $\\{(0,1,8),(2,3,7)\\}$, the blocking triple is $(3,4,5)$;
2) $\\{(0,1,8),(3,4,5)\\}$, the blocking triple is $(1,2,6)$;
3) $\\{(0,1,8),(3,7,5)\\}$, the blocking triple is $(1,2,6)$;
4) $\\{(0,4,5),(1,2,6)\\}$, the blocking triple is $(2,3,7)$;
5) $\\{(0,4,5),(2,3,7)\\}$, the blocking triple is $(0,1,8)$;
6) $\\{(0,7,5),(1,2,6)\\}$, the blocking triple is $(2,3,7)$;
7) $\\{(1,2,6),(3,4,5)\\}$, the blocking triple is $(0,7,5)$;
8) $\\{(1,2,6),(3,7,5)\\}$, the blocking triple is $(0,4,5)$.
Although the counterexamples considered above are diverse, they share some
common properties. We are going to describe them in a separate paper.
## 5 Concluding remarks
In this paper, we study the problem stated by Biró and McDermid in [1]:
we seek instances of 3-DSM-CYC with no weakly stable matching
for $n<6$. In particular, we find the minimal value of $n$ for which such
instances exist and describe some of them. For $n=3$, all counterexamples
seem to have one and the same structure. We are going to consider this
structure in a separate paper.
The idea of this study is due to the work of Lam and Plaxton [6], who give an
example of problem 3DSM-CYC for $n=90$ with no stable matching. This example
is based on an analogous example proposed by Biró and McDermid for problem
3DSMI-CYC with $n=6$. Our example constructed for problem 3DSMI with $n=3$
allows one to make the dimension of an example for problem 3DSM with no stable
matching as low as $n=45$. According to the results obtained in Sect. 3, a
further decrease of $n$ for problem 3DSMI is impossible. However, it seems
possible to find an instance of problem 3DSM with no stable matching with
$n<45$ using other methods.
Actually, Lam and Plaxton studied not only 3-DSM-CYC but also its $k$-gender
analog, $k$-DSM-CYC, for arbitrary $k\geqslant 3$. First they represented
problem 3-DSMI-CYC as a particular case of $k$-DSMI-CYC with $n^{2}$
representatives of each gender. Then, by reduction from $k$-DSMI-CYC, they
proved that $k$-DSM-CYC is NP-complete.
Note that some development of ideas proposed in the paper [6] allows one to
rather easily construct a counterexample of dimension $n=5$ for $k$-DSMI-CYC,
$k>3$, basing on the graph shown in Fig. 2 via subdivision of outcoming edges
of woman vertexes. Any of subdivided edges is converted to the chain with
$k-3$ vertexes inside, one for each new gender. A $k$-gender family should
contain the new vertexes from subdivided edge, so there is a biunique
correspondence between new $k$-gender families and old 3-gender ones. If no
stable matching exists for 3-gender families, then neither one exists for the
new $k$-gender graph.
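A sketch (ours) of this construction follows; the paper leaves the ranks of the new chain edges implicit, so assigning rank 1 to them and keeping the woman's original rank on the first chain edge are our assumptions.

```python
# A sketch (ours) of the subdivision construction.  Each outgoing edge of a
# woman vertex is replaced by a chain with k - 3 fresh vertices, one per new
# gender.  ASSUMPTION: the woman keeps her original rank on the first chain
# edge, and every fresh vertex ranks its single outgoing edge 1; the paper
# leaves these details implicit.
def subdivide(rank3, k):
    rank_k = dict()
    for (u, v), r in rank3.items():
        if u % 3 == 1:  # u is a woman vertex (gender = vertex mod 3)
            chain = [u] + [("new", u, v, j) for j in range(k - 3)] + [v]
            rank_k[(chain[0], chain[1])] = r   # woman keeps her preference
            for a, b in zip(chain[1:-1], chain[2:]):
                rank_k[(a, b)] = 1             # fresh vertices have one option
        else:
            rank_k[(u, v)] = r
    return rank_k
```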
Therefore, for any $k>2$, we have constructed an instance of problem $k$-DSMI-
CYC with $n=5$ and no stable matching (where the preference lists of the two
women, two men, and two dogs that are not shown in Fig. 2 can be arbitrary).
The question about the existence of such counterexamples for $n=3$ and $n=4$
remains open.
We hope that this work can be useful in studying other questions related to
the generalization of the theory of stable matchings to the
$k$-dimensional case, $k>2$. In our opinion, this study is far from
completion.
## References
* [1] P. Biró, E. McDermid, Three-sided stable matchings with cyclic preferences, Algorithmica 58 (2010) 5–18.
* [2] E. Boros, V. Gurvich, S. Jaslar, D. Krasner, Stable matchings in three-sided systems with cyclic preferences. Discrete Math. 289(1–3) (2004) 1–10.
* [3] K. Eriksson, J. Sjöstrand, P. Strimling, Three-dimensional stable matching with cyclic preferences, Math. Soc. Sci. 52 (2006) 77–87.
* [4] D. Gale, L.S. Shapley, College admissions and the stability of marriage, Am. Math. Mon. 69 (1962), 9–15.
* [5] D.E. Knuth, Stable marriage and its relation to other combinatorial problems: an introduction to the mathematical analysis of algorithms, in: CRM Proceedings and Lecture Notes, 1996.
* [6] C.-K. Lam, C.G. Plaxton, On the existence of three-dimensional stable matchings with cyclic preferences, in: Algorithmic Game Theory, Lecture Notes in Computer Science 11801, 2019, 329–342.
* [7] D.F. Manlove, Algorithmics of Matching Under Preferences, World Scientific, 2013.
* [8] K. Pashkovich, L. Poirrier, Three-dimensional stable matching with cyclic preferences, Optimization Letters (2020).
* [9] B. Pittel, The average number of stable matchings, SIAM J. Discrete Math. 2 (1989) 530–549.
* [10] B. Pittel, On random stable matchings: Cyclic ones with strict preferences and two-sided ones with partially ordered preferences, Advances in Appl. Math. 120 (2020) 1–27.
## Appendix. An example of a graph $G$ of dimension $n=4$ with no stable
matching
Figure 4: An example of a graph with no stable matching for $n=4$.
Consider the graph $G$ shown in Fig. 4. Here bold lines indicate edges of rank
1, dashed ones edges of rank 2, and dotted ones edges of rank 3.
The vertex set of the graph is
$V=\\{v_{0},\ldots,v_{8},w_{2},w_{3},w_{4}\\},$
and the remainder $i\bmod 3$ of the division of the vertex index $i$ by 3
defines the gender that corresponds to this vertex. The edges of the graph $G$
take one of the following forms:
1) $(v_{i},v_{(i+1)\bmod 9})$, $i=0,\ldots,8$;
2) $(w_{i},v_{i-2})$, $i=2,3,4$;
3) $(v_{i},w_{i+1})$, $i=1,2,3$;
4) $(v_{i},v_{(i+4)\bmod 9})$, $i=0,\ldots,8$.
The rank of edges of the 1st and 2nd kinds equals 1; the rank of edges of the
3rd kind equals 2; the rank of edges of the 4th kind equals 2 for
$i=0,4,5,\ldots,8$ and 3 for $i=1,2,3$. In what follows in this section, we
consider the indices of all vertices modulo 9; for brevity of notation, we omit
the symbol $\bmod\,9$ in subscripts.
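For concreteness, a small sketch (ours) that generates this edge list together with the rank rules:

```python
# A sketch (ours) generating the edge list of the graph in Fig. 4 from the
# four edge forms and the rank rules stated above.
rank = {}
for i in range(9):                      # 1st kind: basic 9-cycle, rank 1
    rank[(f"v{i}", f"v{(i + 1) % 9}")] = 1
for i in (2, 3, 4):                     # 2nd kind: (w_i, v_{i-2}), rank 1
    rank[(f"w{i}", f"v{i - 2}")] = 1
for i in (1, 2, 3):                     # 3rd kind: (v_i, w_{i+1}), rank 2
    rank[(f"v{i}", f"w{i + 1}")] = 2
for i in range(9):                      # 4th kind: chords (v_i, v_{i+4})
    rank[(f"v{i}", f"v{(i + 4) % 9}")] = 3 if i in (1, 2, 3) else 2
assert len(rank) == 24                  # 9 + 3 + 3 + 9 edges in total
```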
###### Theorem 3
The graph $G$ shown in Fig. 4 has no stable matching.
Proof: Analogously to the proof of Theorem 2, we consider only uncompletable
matchings.
One can easily see that all possible families of the graph $G$ take one of the
following forms:
$(v_{i},v_{i+1},v_{i+5}),i=0,\ldots,8,\quad\text{or}\quad(v_{i},v_{i+1},w_{i+2}),i=0,1,2.$
All uncompletable matchings for the graph $G$ consist of two or three such
families.
In particular, uncompletable matchings of two families take the form
$\\{(v_{i},v_{i+1},v_{i+5}),(v_{i+2},v_{i+3},v_{i+7})\\},\quad i=0,\ldots,8,$
where one or both of the vertices $v_{i+5},v_{i+7}$ can be replaced with
$w_{i+2}$ and $w_{i+4}$, respectively (certainly, this replacement is
possible only if a vertex $w$ with the corresponding index exists). In any
case, for such a matching ${\mathcal{M}}$ the blocking triple takes the form
$(v_{i+3},v_{i+4},v_{i+8})$. Indeed, by definition,
$R_{\mathcal{M}}(v_{i+4})=R_{\mathcal{M}}(v_{i+8})=\infty,\quad\text{and}\quad
R_{\mathcal{M}}(v_{i+3})>1.$
Uncompletable matchings of three families for the graph $G$ take the form:
${\mathcal{M}}_{1}(i)=\\{(v_{i},v_{i+1},v_{i+5}),(v_{i+2},v_{i+6},v_{i+7}),(v_{i+3},v_{i+4},v_{i+8})\\},\quad
i=0,\ldots,8,$
or
${\mathcal{M}}_{2}(i)=\\{(v_{i},v_{i+1},w_{i+2}),(v_{i+2},v_{i+6},v_{i+7}),(v_{i+3},v_{i+4},v_{i+8})\\},\quad
i=0,1,2.$
Note that ${\mathcal{M}}_{2}(i)\equiv{\mathcal{M}}_{3}((i+3)\bmod
9)\equiv{\mathcal{M}}_{4}((i+6)\bmod 9)$, where
${\mathcal{M}}_{3}(i)=\\{(v_{i},v_{i+1},v_{i+5}),(w_{i+8},v_{i+6},v_{i+7}),(v_{i+3},v_{i+4},v_{i+8})\\},\quad
i=3,4,5;$
${\mathcal{M}}_{4}(i)=\\{(v_{i},v_{i+1},v_{i+5}),(v_{i+2},v_{i+6},v_{i+7}),(v_{i+3},v_{i+4},w_{i+5})\\},\quad
i=6,7,8.$
In addition, ${\mathcal{M}}_{1}(i)\equiv{\mathcal{M}}_{1}((i+3)\bmod
9)\equiv{\mathcal{M}}_{1}((i+6)\bmod 9)$. Therefore, there exist, in total,
6 uncompletable matchings of three families:
${\mathcal{M}}_{1}(0),{\mathcal{M}}_{2}(0),{\mathcal{M}}_{1}(1),{\mathcal{M}}_{2}(1),{\mathcal{M}}_{1}(2),{\mathcal{M}}_{2}(2).$
For matchings ${\mathcal{M}}_{1}(0),{\mathcal{M}}_{2}(0)$ the blocking triple
is $(v_{1},v_{2},w_{3})$: $R_{{\mathcal{M}}_{1}(0)}(v_{1})=3>1$,
$R_{{\mathcal{M}}_{2}(0)}(v_{1})=2>1$,
$R_{{\mathcal{M}}_{1}(0)}(v_{2})=R_{{\mathcal{M}}_{2}(0)}(v_{2})=3>2$.
Analogously, for matchings ${\mathcal{M}}_{1}(1),{\mathcal{M}}_{2}(1)$ the
blocking triple is $(v_{2},v_{3},w_{4})$; for matchings
${\mathcal{M}}_{1}(2),{\mathcal{M}}_{2}(2)$ the blocking triple is
$(v_{0},v_{1},w_{2})$. $\square$
# Distributional Anchor Regression
Lucas Kook1,2 (corresponding author, email<EMAIL_ADDRESS>), Beate
Sick1,2, Peter Bühlmann3
(1University of Zurich, Switzerland 2Zurich University of Applied Sciences,
Switzerland
3ETH Zurich, Switzerland)
###### Abstract
Prediction models often fail if train and test data do not stem from the same
distribution. Out-of-distribution (OOD) generalization to unseen, perturbed
test data is a desirable but difficult-to-achieve property for prediction
models and in general requires strong assumptions on the data generating
process (DGP). In a causally inspired perspective on OOD generalization, the
test data arise from a specific class of interventions on exogenous random
variables of the DGP, called anchors. Anchor regression models, introduced by
Rothenhäusler et al. (2018), protect against distributional shifts in the test
data by employing causal regularization. However, so far anchor regression has
only been used with a squared-error loss which is inapplicable to common
responses such as censored continuous or ordinal data. Here, we propose a
distributional version of anchor regression which generalizes the method to
potentially censored responses with at least an ordered sample space. To this
end, we combine a flexible class of parametric transformation models for
distributional regression with an appropriate causal regularizer under a more
general notion of residuals. In an exemplary application and several
simulation scenarios we demonstrate the extent to which OOD generalization is
possible.
##### Keywords
anchor regression, covariate shift, diluted causality, distributional
regression, transformation models, out-of-distribution generalization
## 1 Introduction
Common methods in supervised statistical learning assume the test data to
follow the same distribution as the training data. This is implicitly
exploited, e.g., in cross-validation or when randomly splitting a dataset into
a training and a test set, a practice that has been demonstrated to be
potentially flawed (Efron, 2020) under concept drift or domain shift, where
new (test) data do not follow the same distribution as the training data.
More recently, the
problem has been referred to as out-of-distribution (OOD) generalization (Sun
et al., 2019). The desire to achieve reliable test predictions under
distributional shifts is ubiquitous in many fields of machine learning and
statistics, such as transfer learning (Pan and Yang, 2009; Rojas-Carulla et
al., 2018), domain adaptation (Magliacane et al., 2018; Redko et al., 2020),
multi-task learning (Caruana, 1997), representation learning (Mitrovic et al.,
2020) or prediction models in medical statistics (Subbaswamy and Saria, 2019).
Accordingly, many different formulations of the problem of OOD generalization
exist in the literature (a detailed overview can be found in Chen and
Bühlmann, 2020). We will frame OOD generalization as the problem of robustly
predicting an outcome in novel, unseen environments, based on data from a few
observed environments and extend on the idea of anchor regression and causal
regularization (Rothenhäusler et al., 2018; Bühlmann, 2020; Bühlmann and
Ćevid, 2020) to develop distributional anchor regression. In such a framework,
training a model on heterogeneous data is not a disadvantage but rather a
necessity.
### 1.1 Related work
It has been known for decades that a causal model is robust towards
arbitrarily strong perturbations on components other than the response
(Haavelmo, 1943). However, identifying causal structures is not only difficult
but often leads to sub-par prediction performance when the test data contain
perturbations of bounded strength (Rothenhäusler et al., 2018). Rothenhäusler
et al. introduce linear anchor regression, which allows a trade-off between
prediction performance and robustness against shift perturbations of a certain
size. The framework of linear anchor regression was extended to deal with
nonlinear regression between the response and covariates (Bühlmann, 2020).
Furthermore, Christiansen et al. (2020) provide a causal framework to decide
which assumptions are needed for and to what extent OOD generalization is
possible.
Anchor regression is related to Instrumental Variables (IV) regression.
However, the main IV assumption that the instrument $A$ does not directly
affect some hidden confounding variables $H$ is dropped, at the price of non-
identifiability of the causal parameter (Angrist et al., 1996). A graphical
description of the issue is given in Figure 1.
Figure 1: Graphical models for the response variable $Y$, covariates $X$ and
hidden confounders $H$: IV regression with instruments $A$ (left) and anchor
regression with anchor $A$ (right). In anchor regression, $A$ is only required
to be a source node but is allowed to directly influence response, covariates
and hidden confounders.
### 1.2 Our Contribution
In this work we develop a framework for distributional anchor regression in
the broad class of transformation models (TMs, Hothorn et al., 2014). The
resulting class of anchor TMs generalizes (non-) linear anchor regression to
potentially censored responses and characterizes the full conditional
distribution of $Y|\text{\boldmath$X$}=\text{\boldmath$x$}$ instead of
estimating solely the conditional mean function. While the $L_{2}$ anchor loss
can be decomposed into a squared error and causal regularization term
penalizing correlation between anchors and residuals, we propose a
distributional anchor loss based on the negative log-likelihood and replacing
the least-squares residuals by the more general score residuals. The proposed
causal regularizer induces uncorrelatedness between the anchors and these
score residuals. The resulting procedure is tailored towards protecting
against distributional shifts induced by the anchors and naturally
interpolates between the unpenalized maximum-likelihood estimate and a
solution for which anchors and residuals are strictly uncorrelated. The latter
may be thought of as a distributional IV-like objective but it generally does
not estimate the causal model due to the fact that the anchor $A$ can also
directly influence $H$ and $Y$ (see Figure 1). It leads to some invariance of
the score residuals across the values of the anchors $A$, and such an
invariance property has been referred to as “diluted causality” (Bühlmann,
2020).
We implement all methods and algorithms in the R language for statistical
computing (R Core Team, 2020) and the code is available on GitHub. In the
appendix we present further details on notation, computation, and score
residuals.
## 2 Background
First, we introduce structural equation models (SEMs) before recapping linear
anchor regression. In Section 2.3, we switch perspectives from modelling the
conditional expectation to transformation models which enable to capture
entire conditional distributions. The notation used in this work is described
in Appendix A.
### 2.1 Structural Equation Models
Let $Y$ be a response taking values in $\mathbb{R}$, $X$ a random
vector of covariates taking values in $\mathbb{R}^{p}$, $H$ a vector of hidden
confounders with sample space $\mathbb{R}^{d}$, and $A$ a vector of exogenous
variables (called anchors, due to exogeneity; a source node in the graph in
Figure 1) taking values in $\mathbb{R}^{q}$. The SEM governing linear anchor
regression is given by
$\displaystyle\begin{pmatrix}Y\\\ \text{\boldmath$X$}\\\
\text{\boldmath$H$}\end{pmatrix}\leftarrow\text{$\mathbf{B}$}\begin{pmatrix}Y\\\
\text{\boldmath$X$}\\\
\text{\boldmath$H$}\end{pmatrix}+\text{$\mathbf{M}$}\text{\boldmath$A$}+\text{\boldmath$\varepsilon$},$
(1)
with $(1+p+d)\times(1+p+d)$-matrix $\mathbf{B}$ which corresponds to the
structure of the SEM in terms of a directed acyclic graph (DAG), the effect of
$A$ enters linearly via the $(1+p+d)\times q$-matrix $\mathbf{M}$, and
$\varepsilon$ denotes the error term with mutually independent components. The
“$\leftarrow$” symbol is algebraically a distributional equality sign. It
emphasizes the structural character of the SEM, saying that, e.g., $Y$ is only
a function of the parents of the node $Y$ in the structural DAG and the
additive component
$(\text{\boldmath$M$}\text{\boldmath$A$}+\text{\boldmath$\varepsilon$})_{1}$.
The anchors $A$ may be continuous or discrete. In the special case of discrete
anchors each level can be viewed as an “environment”.
We define perturbations as intervening on $A$, e.g., by
$\operatorname{do}(\text{\boldmath$A$}=\text{\boldmath$a$})$, which replaces
$A$ by $a$ in the SEM while leaving the underlying mechanism, i.e., the
coefficients in the SEM, unchanged. In this work we restrict ourselves to
$\operatorname{do}$- (Pearl, 2009) and $\operatorname{push}$-interventions
(Markowetz et al., 2005) on $A$, which in turn lead to shifts in the
distribution of $X$. Since $A$ is exogenous and a source node in the graph,
the specific type of intervention does not play a major role. Christiansen et
al. (2020) show that under the above conditions OOD generalization is possible
in linear models.
### 2.2 Linear Anchor Regression
Linear anchor regression with its corresponding causal regularization
estimates the linear regression parameter $\beta$ as
$\displaystyle\hat{\text{\boldmath$\beta$}}=\operatorname*{{arg\,min}}_{\text{\boldmath$\beta$}}\bigg{\\{}\left\lVert(\operatorname{Id}-\boldsymbol{\Pi}_{\text{$\mathbf{A}$}})(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2}/n+\gamma\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2}/n\bigg{\\}},$
where $0\leq\gamma\leq\infty$ is a regularization parameter and
$\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}=\text{$\mathbf{A}$}(\text{$\mathbf{A}$}^{\top}\text{$\mathbf{A}$})^{-1}\text{$\mathbf{A}$}^{\top}$
denotes the orthogonal projection onto the column space of the anchors
(Rothenhäusler et al., 2018). For $\gamma=1$ one obtains ordinary least
squares, $\gamma\to\infty$ corresponds to two-stage least squares as in
Instrumental Variables regression and $\gamma=0$ is adjusting for the anchor
variables $A$ (being equivalent to ordinary least squares when regressing $Y$
on $X$ and $A$). Causal regularization encourages, for large values of
$\gamma$, uncorrelatedness of the anchors $A$ and the residuals. As a
procedure, causal regularization does not depend at all on the SEM in eq. (1).
However, as described below, the method inherits a distributional robustness
property, whose formulation depends on the SEM in eq. (1).
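The estimator admits a convenient closed form, shown by Rothenhäusler et al. (2018): premultiplying the data by $W_{\gamma}=\operatorname{Id}-(1-\sqrt{\gamma})\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}$ and running ordinary least squares minimizes the anchor loss, since $\lVert W_{\gamma}\text{\boldmath$r$}\rVert_{2}^{2}=\lVert(\operatorname{Id}-\boldsymbol{\Pi}_{\text{$\mathbf{A}$}})\text{\boldmath$r$}\rVert_{2}^{2}+\gamma\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}\text{\boldmath$r$}\rVert_{2}^{2}$ by orthogonality. The following numpy sketch (ours; the paper's own implementation is in R) illustrates this, with data simulated from an SEM of our choosing:

```python
# A minimal numpy sketch (ours) of linear anchor regression via the
# data-transformation trick: OLS on (W y, W X) with
#   W = Id - (1 - sqrt(gamma)) * Pi_A
# minimizes the anchor loss.
import numpy as np

def anchor_regression(y, X, A, gamma):
    n = len(y)
    # orthogonal projection onto the column space of the anchors
    Pi_A = A @ np.linalg.solve(A.T @ A, A.T)
    W = np.eye(n) - (1.0 - np.sqrt(gamma)) * Pi_A
    beta, *_ = np.linalg.lstsq(W @ X, W @ y, rcond=None)
    return beta

# gamma = 1 recovers ordinary least squares; large gamma approaches
# two-stage least squares as in IV regression.
rng = np.random.default_rng(0)
A = rng.normal(size=(500, 2))
H = A @ np.array([1.0, -1.0]) + rng.normal(size=500)   # hidden confounder
X = A @ rng.normal(size=(2, 3)) + np.outer(H, [1, 1, 1]) + rng.normal(size=(500, 3))
y = X @ np.array([3.0, 0.0, -1.0]) + H + rng.normal(size=500)
print(anchor_regression(y, X, A, gamma=10.0))
```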
Rothenhäusler et al. (2018) establish the duality between the $L_{2}$ loss in
linear anchor regression and optimizing a worst case risk over specific shift
perturbations. The authors consider shift perturbations $\nu$, which are
confined to be in the set
$\displaystyle
C_{\gamma}:=\big{\\{}\text{\boldmath$\nu$}:\text{\boldmath$\nu$}=\text{$\mathbf{M}$}\text{\boldmath$\delta$},\;\text{\boldmath$\delta$}\mbox{
independent of
}\text{\boldmath$\varepsilon$},\;\mathbb{E}[\text{\boldmath$\delta$}\text{\boldmath$\delta$}^{\top}]\preceq\gamma\mathbb{E}[\text{\boldmath$A$}\text{\boldmath$A$}^{\top}]\big{\\}},$
and which generate the perturbed response $Y^{\text{\boldmath$\nu$}}$, and
covariates $\text{\boldmath$X$}^{\text{\boldmath$\nu$}}$ via
$\displaystyle\begin{pmatrix}Y^{\text{\boldmath$\nu$}}\\\
\text{\boldmath$X$}^{\text{\boldmath$\nu$}}\\\
\text{\boldmath$H$}^{\text{\boldmath$\nu$}}\end{pmatrix}\leftarrow\text{$\mathbf{B}$}\begin{pmatrix}Y^{\text{\boldmath$\nu$}}\\\
\text{\boldmath$X$}^{\text{\boldmath$\nu$}}\\\
\text{\boldmath$H$}^{\text{\boldmath$\nu$}}\end{pmatrix}+\text{\boldmath$\nu$}+\text{\boldmath$\varepsilon$}.$
The set $C_{\gamma}$ contains all vectors which lie in the span of the columns
of $M$ and thus in the same direction as the exogenous contribution $M$$A$ of
the anchor variables. The average squared size of $\delta$ is limited to
$\gamma$ times the smallest eigenvalue of the centered anchor’s variance-
covariance matrix. Now, the explicit duality between the worst case risk over
all shift perturbations of limited size and the $L_{2}$ anchor loss is given
by
$\displaystyle\sup_{\text{\boldmath$\nu$}\in
C_{\gamma}}\mathbb{E}\left[(Y^{\text{\boldmath$\nu$}}-(\text{\boldmath$X$}^{\text{\boldmath$\nu$}})^{\top}\text{\boldmath$\beta$})^{2}\right]=\mathbb{E}\left[((\operatorname{Id}-P_{\text{\boldmath$A$}})(Y-\text{\boldmath$X$}^{\top}\text{\boldmath$\beta$}))^{2}\right]+\gamma\mathbb{E}\left[(P_{\text{\boldmath$A$}}(Y-\text{\boldmath$X$}^{\top}\text{\boldmath$\beta$}))^{2}\right],$
(2)
where $P_{\text{\boldmath$A$}}=\mathbb{E}[\cdot|\text{\boldmath$A$}]$ is the
population analogue of $\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}$. We note that
the right-hand side is the population analogue of the objective function in
anchor regression. Hence, causal regularization in anchor regression provides
guarantees for optimizing worst-case risk across a class of shift
perturbations. The details are provided in Rothenhäusler et al. (2018).
### 2.3 Transformation Models
We now switch perspective from models for the conditional mean to the
conditional distributions. Specifically, we consider transformation models
(Hothorn et al., 2014). TMs decompose the conditional distribution of
$Y|\text{\boldmath$x$}$ into a pre-defined simple distribution function
$F_{Z}$, with log-concave density $f_{Z}$, and a (semi-) parametric
transformation function $h(y|\text{\boldmath$x$})$, which is monotone non-
decreasing in $y$
$\displaystyle
F_{Y|\text{\boldmath$x$}}(y|\text{\boldmath$x$})=F_{Z}(h(y|\text{\boldmath$x$})).$
This way, the problem of estimating a conditional distribution simplifies to
estimating the parameters of the transformation function $h=F_{Z}^{-1}\circ
F_{Y|\text{\boldmath$x$}}$ (since $F_{Z}$ is pre-specified and parameter-
free). Depending on the complexity of $h$, very flexible conditional
distributions can be modelled. Hothorn et al. (2018) give theoretical
guarantees for the existence and uniqueness of the transformation function $h$
for absolutely continuous, countably infinite and ordered discrete random
variables. For the sake of generality, $h$ is parametrized in terms of a basis
expansion in the argument $y$ which can be as simple as a linear function in
$y$ or as complex as a basis of splines to approximate a smooth function in
$y$. In this work, we assume the transformation function for a continuous
responses can be additively decomposed into a linear predictor in $x$ and a
smooth function in $y$ which is modelled as a Bernstein polynomial of order
$P$ with parameters $\text{\boldmath$\theta$}\in\mathbb{R}^{P+1}$ (Hothorn et
al., 2018), such that
$h(y|\text{\boldmath$x$})=\text{\boldmath$b$}_{\text{Bs},P}(y)^{\top}\text{\boldmath$\theta$}+\text{\boldmath$x$}^{\top}\beta$.
Monotonicity of
$\text{\boldmath$b$}_{\text{Bs},P}(y)^{\top}\text{\boldmath$\theta$}$ and
thereby of $h(y|\text{\boldmath$x$})$ can then be enforced via the $P$ linear
constraints $\theta_{1}\leq\theta_{2}\leq\dots\leq\theta_{P+1}$. In the case of an
ordinal response taking values in $\\{y_{1},y_{2},\dots,y_{K}\\}$, the
transformation function is a monotone increasing step function,
$h(y_{k}|\text{\boldmath$x$})=\theta_{k}+\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$}$,
for $k=1,\dots,K-1$ and the additional constraint $\theta_{K}=+\infty$. We
summarize a transformation model based on its simple distribution function
$F_{Z}$, basis $b$, which may include covariates, and parameters $\vartheta$,
such that
$F_{Y|\text{\boldmath$x$}}(y|\text{\boldmath$x$})=F_{Z}(\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$})$.
For instance, for a transformation model with continuous response and
explanatory variables $x$ we thus use
$\text{\boldmath$b$}(y,\text{\boldmath$x$})=(\text{\boldmath$b$}_{\text{Bs},P}(y)^{\top},\text{\boldmath$x$}^{\top})^{\top}$
and
$\text{\boldmath$\vartheta$}=(\text{\boldmath$\theta$}^{\top},\text{\boldmath$\beta$}^{\top})^{\top}$,
yielding
$h(y|\text{\boldmath$x$})=\text{\boldmath$b$}_{\text{Bs},P}(y)^{\top}\text{\boldmath$\theta$}+\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$}$.
For a TM with ordinal response we substitute the Bernstein basis with a dummy
encoding of the response, which we denote by $\tilde{\text{\boldmath$y$}}$
(e.g., Kook et al., 2020). Also note that the unconditional case is covered by
the above formulation as well, by omitting all explanatory variables from the
TM’s basis.
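For intuition, a small sketch (ours) of the Bernstein basis $\text{\boldmath$b$}_{\text{Bs},P}(y)$ and the monotonicity constraint; rescaling $y$ to the unit interval before evaluating the basis is an implementation detail we assume here.

```python
# A sketch (ours) of the Bernstein basis used to parametrize smooth monotone
# transformation functions h(y) = b_{Bs,P}(y)^T theta: with non-decreasing
# theta, h is monotone non-decreasing on [lo, hi].
import numpy as np
from scipy.special import comb

def bernstein_basis(y, P, lo, hi):
    t = (np.asarray(y) - lo) / (hi - lo)      # rescale to [0, 1] (our choice)
    k = np.arange(P + 1)
    return comb(P, k) * t[..., None] ** k * (1 - t[..., None]) ** (P - k)

theta = np.sort(np.random.default_rng(2).normal(size=7))  # non-decreasing
y = np.linspace(0.0, 5.0, 100)
h = bernstein_basis(y, P=6, lo=0.0, hi=5.0) @ theta
assert np.all(np.diff(h) >= -1e-12)           # h is monotone non-decreasing
```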
Figure 2: Illustration of an unconditional transformation model
$(1-\exp(-\exp(\cdot)),\text{\boldmath$b$}_{\text{Bs},6},\text{\boldmath$\vartheta$})$
for the Old Faithful Geyser data (Azzalini and Bowman, 1990) using a Bernstein
polynomial basis expansion of order 6 for the transformation function,
$h(y)=\text{\boldmath$b$}_{\text{Bs},6}(y)$. The colored regions indicate the
transport of probability mass from $\mathbb{P}_{Y}$ (lower right) to
$\mathbb{P}_{Z}$ (upper left) via the transformation function
$h(y)=\text{\boldmath$b$}(y)^{\top}\text{\boldmath$\vartheta$}$ (upper right).
If $h$ is continuously differentiable, the density of $Y$ is given by
$f_{Y}(y)=f_{Z}(h(y))h^{\prime}(y)$.
Figure 2 illustrates the intuition behind transformation models. The
transformation function (upper right panel) transforms the complex, bimodal
distribution of $Y$ (lower panel) to $F_{Z}=F_{\operatorname{MEV}}$, the
standard minimum extreme value distribution (upper left panel).
###### Definition 1 (Transformation model, Definition 4 in Hothorn et al.
(2018))
The triple
($F_{Z}$, $b$, $\vartheta$) is called a transformation model.
###### Example 1 (Linear regression)
The normal linear regression model (Lm) is commonly formulated as
$Y=\beta_{0}+\text{\boldmath$x$}^{\top}\tilde{\text{\boldmath$\beta$}}+\varepsilon$,
$\varepsilon\sim\mathcal{N}(0,\sigma^{2})$, or
$Y|\text{\boldmath$x$}\sim\mathcal{N}(\beta_{0}+\text{\boldmath$x$}^{\top}\tilde{\text{\boldmath$\beta$}},\sigma^{2}).$
For a distributional treatment we write the above expression as
$\displaystyle
F_{Y|\text{\boldmath$x$}}(y|\text{\boldmath$x$})=\Phi\left(\frac{y-\beta_{0}-\text{\boldmath$x$}^{\top}\tilde{\text{\boldmath$\beta$}}}{\sigma}\right)=\Phi(\vartheta_{1}+\vartheta_{2}y-\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$}),$
(3)
which can be understood as a transformation model by letting
$\vartheta_{1}=-\beta_{0}/\sigma$, $\vartheta_{2}=1/\sigma$ and
$\text{\boldmath$\beta$}=\tilde{\text{\boldmath$\beta$}}/\sigma$. Formally, it
corresponds to the model
$\displaystyle(F_{Z},\text{\boldmath$b$},\text{\boldmath$\vartheta$})=(\Phi,(1,y,\text{\boldmath$x$}^{\top})^{\top},(\vartheta_{1},\vartheta_{2},-\text{\boldmath$\beta$}^{\top})^{\top}).$
Note that the baseline transformation, $h(y|\text{\boldmath$X$}=0)$, is
constrained to be linear with constant slope $\vartheta_{2}$. Due to the
linearity of $h$ and the choice $F_{Z}=\Phi$, the modeled distribution of
$Y|\text{\boldmath$x$}$ will always be normal with constant variance. By
parametrizing $h$ in a smooth way, we arrive at much more flexible conditional
distributions for $Y|\text{\boldmath$x$}$.
The parameters of a TM can be jointly estimated using maximum-likelihood. The
likelihood can be written in terms of the simple distribution function
$F_{Z}$, which makes its evaluation computationally more convenient. For a
single datum $\\{(\underaccent{\bar}{y},\bar{y}],\text{\boldmath$x$}\\}$
with potentially censored response the log-likelihood contribution is given by
(Lindsey et al., 1996)
$\displaystyle\ell(\text{\boldmath$\vartheta$};y,\text{\boldmath$x$})=\begin{cases}\log
f_{Z}(\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$})+\log\left(\text{\boldmath$b$}^{\prime}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$}\right),&y=(\underaccent{\bar}{y}+\bar{y})/2,\;\text{exact,}\\\
\log
F_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$}),&y\in(-\infty,\bar{y}],\;\text{left,}\\\
\log\left(1-F_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$})\right),&y\in(\underaccent{\bar}{y},+\infty),\;\text{right,}\\\
\log\left(F_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$})-F_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$})\right),&y\in(\underaccent{\bar}{y},\bar{y}],\;\text{interval.}\end{cases}$
The likelihood is always understood as conditional on $X$ when viewing the
covariables as random. Allowing for censored observations is of practical
importance, because in many applications the response of interest is not
continuous or suffers from inaccuracies, which can be taken into account via
uninformative censoring.
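For concreteness, a compact sketch (ours) of these four contributions, taking $F_{Z}=\Phi$:

```python
# A sketch (ours) of the four censored log-likelihood contributions above,
# with F_Z = Phi (standard normal) for concreteness.  z_lo and z_hi denote
# b(y_lo, x)^T vartheta and b(y_hi, x)^T vartheta; dz denotes
# b'(y, x)^T vartheta for an exact observation.
import numpy as np
from scipy.stats import norm

def loglik_contrib(kind, z_lo=None, z_hi=None, dz=None):
    if kind == "exact":        # exact observation: pass z at y via z_lo
        return norm.logpdf(z_lo) + np.log(dz)
    if kind == "left":         # y in (-inf, y_hi]
        return norm.logcdf(z_hi)
    if kind == "right":        # y in (y_lo, +inf)
        return np.log1p(-norm.cdf(z_lo))
    if kind == "interval":     # y in (y_lo, y_hi]
        return np.log(norm.cdf(z_hi) - norm.cdf(z_lo))
    raise ValueError(kind)
```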
###### Example 2 (Lm, cont’d)
For an exact datum $\\{y,\text{\boldmath$x$}\\}$ the log-likelihood in the
normal linear regression model is given by
$\displaystyle\ell(\vartheta_{1},\vartheta_{2},\text{\boldmath$\beta$};y,\text{\boldmath$x$})=\log\phi\big{(}\vartheta_{1}+\vartheta_{2}y-\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$}\big{)}+\log(\vartheta_{2}),$
using the density approximation to the likelihood (Lindsey et al., 1996).
Here, $\phi$ denotes the standard normal density, and
$\text{\boldmath$b$}^{\prime}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$}=\frac{\partial\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$}}{\partial
y}=\vartheta_{2}$.
Now that we have established TMs and the log-likelihood function to estimate
their parameters, we also need a more general notion of the residuals to
formulate a causal regularizer for a distributional anchor loss. Most
importantly, these residuals have to fulfill the same requirements as least
squares residuals in the $L_{2}$ anchor loss. That is, they have to have zero
expectation and a positive definite covariance matrix (e.g., Theorem 3 in
Rothenhäusler et al., 2018). In the survival analysis literature, score
residuals have received considerable attention, and fulfill the above
requirements at least asymptotically (Lagakos, 1981; Barlow and Prentice,
1988; Therneau et al., 1990; Farrington, 2000). We now define score residuals
for the general class of transformation models.
###### Definition 2 (Score residuals)
Let $(F_{Z},\text{\boldmath$b$},\text{\boldmath$\vartheta$})$ be a fully
specified TM. On the scale of the transformation function, add an additional
parameter, $\alpha$, to arrive at the TM
$(F_{Z},(\text{\boldmath$b$}^{\top},1)^{\top},(\text{\boldmath$\vartheta$}^{\top},-\alpha)^{\top})$
with distribution function
$F_{Y|\text{\boldmath$x$}}(y|\text{\boldmath$x$})=F_{Z}(\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\text{\boldmath$\vartheta$}-\alpha)$.
Because the model is fully specified, $\alpha$ is constrained to $0$. The
score residual for a single datum $y\in(\underaccent{\bar}{y},\bar{y}]$ is now
defined as
$\displaystyle
r:=\frac{\partial}{\partial\alpha}\ell(\text{\boldmath$\vartheta$},\alpha;y,\text{\boldmath$x$})\bigg{\rvert}_{\hat{\text{\boldmath$\vartheta$}},\alpha\equiv
0},$ (4)
which can be understood as the score contribution of a single observation to
test $\alpha=0$ for a covariate which is not included in the model. When
viewed as a random variable, the vector of score residuals has mean zero
asymptotically and its components are asymptotically uncorrelated (Farrington,
2000).
The score residuals can be derived in closed form for a transformation model
and observations under any form of uninformative censoring
$\displaystyle
r=\begin{cases}-f_{Z}^{\prime}(\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})/f_{Z}(\text{\boldmath$b$}(y,\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}}),&y=(\underaccent{\bar}{y}+\bar{y})/2,\;\text{exact,}\\\
-f_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})/F_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}}),&y\in(-\infty,\bar{y}],\;\text{left,}\\\
f_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})/(1-F_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})),&y\in(\underaccent{\bar}{y},+\infty),\;\text{right,}\\\
\big{(}f_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})-f_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})\big{)}\big{/}&y\in(\underaccent{\bar}{y},\bar{y}],\;\text{interval.}\\\
\quad\big{(}F_{Z}(\text{\boldmath$b$}(\bar{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})-F_{Z}(\text{\boldmath$b$}(\underaccent{\bar}{y},\text{\boldmath$x$})^{\top}\hat{\text{\boldmath$\vartheta$}})\big{)}\end{cases}$
###### Example 3 (Lm, cont’d)
By including the additional intercept parameter in the normal linear model in
eq. (3), the score residuals are given by
$\displaystyle\frac{\partial}{\partial\alpha}\ell(\vartheta_{1},\vartheta_{2},\text{\boldmath$\beta$},\alpha;y,\text{\boldmath$x$})\bigg{\rvert}_{\hat{\vartheta}_{1},\hat{\vartheta}_{2},\hat{\text{\boldmath$\beta$}},\alpha\equiv
0}$
$\displaystyle=\frac{\partial}{\partial\alpha}\big{\\{}\log\phi\big{(}\vartheta_{1}+\vartheta_{2}y-\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$}-\alpha)+\log(\vartheta_{2})\big{\\}}\bigg{\rvert}_{\hat{\vartheta}_{1},\hat{\vartheta}_{2},\hat{\text{\boldmath$\beta$}},\alpha\equiv
0}$
$\displaystyle=\hat{\vartheta}_{1}+\hat{\vartheta}_{2}y-\text{\boldmath$x$}^{\top}\hat{\text{\boldmath$\beta$}}=\frac{y-\hat{\beta}_{0}-\text{\boldmath$x$}^{\top}\hat{\text{\boldmath$\beta$}}}{\hat{\sigma}}.$
In this simple case the score residuals are equivalent to scaled least-square
residuals, which underlines the more general nature of score residuals.
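A quick numerical check (ours) of this equivalence: differentiate the log-likelihood of Example 2 in $\alpha$ at $\alpha=0$ by central finite differences and compare with the closed form.

```python
# A numerical check (ours) that the score residual of Example 3 equals the
# scaled least-squares residual: differentiate the Lm log-likelihood in
# alpha at alpha = 0 by central finite differences.
import numpy as np

def loglik(theta1, theta2, beta, alpha, y, x):
    z = theta1 + theta2 * y - x @ beta - alpha
    return -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi) + np.log(theta2)

y, x = 1.3, np.array([0.5, -2.0])
theta1, theta2, beta = 0.2, 1.5, np.array([0.4, 0.1])
eps = 1e-6
r_fd = (loglik(theta1, theta2, beta, +eps, y, x)
        - loglik(theta1, theta2, beta, -eps, y, x)) / (2 * eps)
r_closed = theta1 + theta2 * y - x @ beta   # = (y - beta0 - x'beta~) / sigma
assert abs(r_fd - r_closed) < 1e-8
```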
We are now ready to cast transformation models into the framework of SEMs. In
the definition below, we define linear SEMs for TMs in terms of the
transformed random variable
$Z:=h(Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$})$ and
transform it back to the scale of $Y$. This strategy is motivated by SEMs for
generalized linear models (GLMs), which parametrize the natural parameter,
instead of modelling the response directly (e.g., Skrondal and Rabe-Hesketh,
2004).
###### Definition 3 (Linear structural equation transformation model)
Let the conditional distribution of
$Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}$ be given by
the transformation model cdf
$F_{Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}}=F_{Z}\circ
h$. We set up a SEM for the transformed response
$\displaystyle
h(Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$})=F_{Z}^{-1}(F_{Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}}(Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$})),$
which is distributed according to $F_{Z}$. By Corollary 1 in Hothorn et al.
(2018), the transformation function exists, is unique and monotone non-
decreasing in its argument. The following distributional equalities and
structural equations summarize the data generating process
$\displaystyle
F_{Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}}(y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$})$
$\displaystyle=F_{Z}(h(y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}))$
$\displaystyle h(Y)$
$\displaystyle\leftarrow\text{\boldmath$b$}(Y)^{\top}\text{\boldmath$\theta$}+\text{$\mathbf{B}$}_{Y\text{\boldmath$X$}}\text{\boldmath$X$}+\text{$\mathbf{B}$}_{Y\text{\boldmath$H$}}\text{\boldmath$H$}+\text{$\mathbf{M}$}_{Y}\text{\boldmath$A$}$
$X$
$\displaystyle\leftarrow\text{$\mathbf{B}$}_{\text{\boldmath$X$}\text{\boldmath$X$}}\text{\boldmath$X$}+\text{$\mathbf{B}$}_{\text{\boldmath$X$}\text{\boldmath$H$}}\text{\boldmath$H$}+\text{$\mathbf{M}$}_{\text{\boldmath$X$}}\text{\boldmath$A$}+\varepsilon_{\text{\boldmath$X$}}$
$H$
$\displaystyle\leftarrow\text{$\mathbf{B}$}_{\text{\boldmath$H$}\text{\boldmath$H$}}\text{\boldmath$H$}+\text{$\mathbf{M}$}_{\text{\boldmath$H$}}\text{\boldmath$A$}+\varepsilon_{\text{\boldmath$H$}}$
$A$ $\displaystyle\leftarrow\varepsilon_{\text{\boldmath$A$}}.$
In contrast to the linear SEM in eq. (1), the SEM is set up for the
transformed response.
The basis expansion term
$\text{\boldmath$b$}(y)^{\top}\text{\boldmath$\theta$}$ can be viewed as an
intercept function, which fixes the overall shape of the transformation
function. The remaining additive components of the transformation function, in
turn, solely shift the transformation up- or downwards with the covariates.
This may seem restrictive at first, however, all covariates influence not only
the conditional mean, but all higher conditional moments of
$F_{Y|\text{\boldmath$x$},\text{\boldmath$h$},\text{\boldmath$a$}}$. A
graphical representation of the SEM from Definition 3 is given in Figure 3.
However, we do not display the possibility that some components of $X$
directly influence each other, and likewise for $H$. In fact, in the
simulations in Section 4, the coefficients are set to
$\text{$\mathbf{B}$}_{\text{\boldmath$X$}\text{\boldmath$X$}}=\text{$\mathbf{B}$}_{\text{\boldmath$H$}\text{\boldmath$H$}}=0$.
Figure 3: Linear structural equation model for a transformation model. Instead of
3: Linear structural equation model for a transformation model. Instead of
setting up the SEM on the scale of $Y$, it is set up on the scale of the
transformation function $h$. The conditional distribution of $Y$ is still
fully determined by $h$ and $F_{Z}$.
Next, we will present our main proposal on distributional anchor regression to
achieve robust TMs with respect to perturbations on the anchor variables $A$.
## 3 Distributional Anchor Regression
We now formulate distributional anchor regression, for which we consider a
distributional loss function, i.e., the negative log-likelihood, which can
take into account censored observations and captures the conditional
distribution of $Y|\text{\boldmath$X$}$, and a causal regularizer involving
score residuals. We first give some intuition how to arrive at the
distributional anchor loss, starting from the $L_{2}$ anchor loss. One can
decompose the $L_{2}$ anchor loss
$\displaystyle
L_{2}(\text{\boldmath$\beta$};\text{\boldmath$y$},\text{$\mathbf{X}$},\text{$\mathbf{A}$})=\frac{1}{2}\bigg{(}\left\lVert(\operatorname{Id}-\boldsymbol{\Pi}_{\text{$\mathbf{A}$}})(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2}/n+\gamma\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2}/n\bigg{)}$
into a squared error and a pure penalty term. We rewrite
$\displaystyle
L_{2}(\text{\boldmath$\beta$};\text{\boldmath$y$},\text{$\mathbf{X}$},\text{$\mathbf{A}$})$
$\displaystyle\propto\left\lVert(\operatorname{Id}-\boldsymbol{\Pi}_{\text{$\mathbf{A}$}})(y-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2}+\gamma\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\text{\boldmath$y$}-\text{$\mathbf{X}$}\beta)\right\rVert_{2}^{2}$
$\displaystyle=\left\lVert\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$}\right\rVert_{2}^{2}+(\gamma-1)\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\right\rVert_{2}^{2},$
which is a sum of the squared-error loss and a causal regularizer involving
the $L_{2}$ norm of the residuals
$(\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})$ projected
linearly onto the space spanned by the columns of $\mathbf{A}$ to encourage
uncorrelatedness between the residuals and the anchor variables. The cross-
terms when expanding the $L_{2}$ norm vanish because
$\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}$ is an orthogonal projection. Now an
intuitive choice for the distributional anchor regression loss would be
$\displaystyle
L(\text{\boldmath$\beta$};\text{\boldmath$y$},\text{$\mathbf{X}$},\text{$\mathbf{A}$})\propto-\ell(\text{\boldmath$\beta$};\text{\boldmath$y$},\text{$\mathbf{X}$})+(\gamma-1)\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}\text{\boldmath$r$}\right\rVert_{2}^{2},$
where the negative log-likelihood induced by a transformation model,
$\ell(\cdot)$, replaces the squared error loss and, most importantly, $r$
denotes the vector of score residuals as defined in Section 2.3. Thus, the
loss encourages uncorrelatedness between the anchor variables and the score
residuals, particularly for large values of $\gamma$. The origin and
importance of score residuals is outlined in Appendix B. We now give a
definition for the distributional anchor loss.
###### Definition 4 (Distributional anchor loss)
Consider a linear TM and its SEM, as in Definition 3. Then, the distributional
anchor loss is defined as
$\displaystyle
L(\text{\boldmath$\vartheta$};\text{\boldmath$y$},\text{$\mathbf{X}$},\text{$\mathbf{A}$},\xi)=-\ell(\text{\boldmath$\vartheta$};\text{\boldmath$y$},\text{$\mathbf{X}$})/n+\xi\left\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}\text{\boldmath$r$}\right\rVert_{2}^{2}/n,$
where $\ell(\cdot)$ denotes the log-likelihood induced by a TM, $r$ denotes
the vector of score residuals and $\xi\in[0,+\infty)$ controls the extent of
causal regularization. As mentioned earlier, the log-likelihood is conditional
on $X$.
###### Example 4 (Lm, cont’d)
For normal linear regression with constant variance, the $L_{2}$ anchor loss
and the distributional anchor loss are equivalent. This is because
$\displaystyle
L(\vartheta_{1},\vartheta_{2},\text{\boldmath$\beta$};\text{\boldmath$y$},\text{$\mathbf{X}$},\xi)$
$\displaystyle=-\log\phi(\vartheta_{1}+\vartheta_{2}\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})/n-\log(\vartheta_{2})+\xi\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\vartheta_{1}+\vartheta_{2}\text{\boldmath$y$}-\text{$\mathbf{X}$}\text{\boldmath$\beta$})\rVert_{2}^{2}/n$
$\displaystyle=\lVert\text{\boldmath$y$}-\beta_{0}-\text{$\mathbf{X}$}\tilde{\text{\boldmath$\beta$}}\rVert_{2}^{2}/(2\sigma^{2}n)+\xi\lVert\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}(\text{\boldmath$y$}-\beta_{0}-\text{$\mathbf{X}$}\tilde{\text{\boldmath$\beta$}})\rVert_{2}^{2}/(\sigma^{2}n)+C.$
Absorbing all additive constants into $C$ and factoring out the variance
renders the above objective equivalent to the $L_{2}$ anchor loss for
$\xi=(\gamma-1)/2$.
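To illustrate, a self-contained numpy/scipy sketch (ours; the authors' implementation is in R, see Appendix C) minimizing the distributional anchor loss for the Lm case, with $\vartheta_{2}>0$ enforced via a log-parametrization:

```python
# A sketch (ours) of Definition 4 for the normal linear TM of Example 4.
# Score residuals are r = v1 + v2*y - X @ b; v2 > 0 is enforced by
# optimizing log(v2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def anchor_loss(par, y, X, Pi_A, xi):
    v1, log_v2, b = par[0], par[1], par[2:]
    r = v1 + np.exp(log_v2) * y - X @ b        # score residuals
    nll = -np.mean(norm.logpdf(r)) - log_v2    # negative log-likelihood / n
    return nll + xi * np.sum((Pi_A @ r) ** 2) / len(y)

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, 1))                    # anchors
H = A[:, 0] + rng.normal(size=n)               # hidden confounder
X = 2.0 * A + H[:, None] + rng.normal(size=(n, 1))
y = X[:, 0] + H + 0.5 * rng.normal(size=n)
Pi_A = A @ np.linalg.solve(A.T @ A, A.T)       # projection onto span(A)
fit = minimize(anchor_loss, x0=np.zeros(2 + X.shape[1]),
               args=(y, X, Pi_A, 5.0), method="BFGS")
print(fit.x)                                   # (v1, log v2, b)
```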
In the following section, we empirically evaluate the prediction
performance of transformation models estimated under the distributional anchor
loss. Computational details for fitting TMs using the distributional anchor
loss are given in Appendix C.
## 4 Empirical Results
We begin the section by describing the experimental setup in the application
and simulation studies and then present the results in Sections 4.1 and 4.2.
We consider median housing prices in the BostonHousing2 dataset (Harrison and
Rubinfeld, 1978) to illustrate an application of anchor TMs in normal linear
regression (Lm), which assumes normality and equal variances. To lift these
assumptions, a continuous probit (c-probit) regression is used to model more
complex, skewed distributions, which are typical for housing prices. Lastly we
use a continuous logit (c-logit) model, which now takes into account the
censored observations in the BostonHousing2 dataset and enables more easily
interpretable shift effects on the log-odds scale. Furthermore, the proposed
distributional framework for anchor regression is evaluated in simulation
studies for Lm, c-probit and ordered logistic regression (o-logit). A summary
of the models used to empirically evaluate anchor TMs is given in Table 1.
Table 1: Transformation models used to illustrate the distributional anchor loss. $F_{\operatorname{SL}}=\operatorname{expit}$ denotes the standard logistic distribution. By $\tilde{\text{\boldmath$y$}}$ we denote the dummy encoded response, $\tilde{\text{\boldmath$y$}}=\text{\boldmath$e$}(k)$, for $Y$ taking class $y_{k}$, $k=1,\dots,K$. Here, $\text{\boldmath$e$}(k)$ denotes the $k$th unit vector. In the experiments, the basis functions for $y$ are Bernstein polynomials with maximum order $P$, $\text{\boldmath$b$}_{\text{Bs},P}(y)\in\mathbb{R}^{P+1}$. Because the transformation function $h(y)=\text{\boldmath$b$}(y)^{\top}\text{\boldmath$\vartheta$}$ must be monotone non-decreasing, we require some constraints on the parameters of the transformation function.
Name | Transformation Model | Constraints | Response Type
---|---|---|---
Lm | $\left(\Phi,(1,y,\text{\boldmath$x$}^{\top})^{\top},(\theta_{1},\theta_{2},-\text{\boldmath$\beta$}^{\top})^{\top}\right)$ | $\theta_{2}>0$ | Continuous
c-probit | $\left(\Phi,(\text{\boldmath$b$}_{\text{Bs},P}^{\top},\text{\boldmath$x$}^{\top})^{\top},(\text{\boldmath$\theta$}^{\top},-\text{\boldmath$\beta$}^{\top})^{\top}\right)$ | $\theta_{1}\leq\dots\leq\theta_{P+1}$ | Continuous
c-logit | $\left(F_{\operatorname{SL}},(\text{\boldmath$b$}_{\text{Bs},P}^{\top},\text{\boldmath$x$}^{\top})^{\top},(\text{\boldmath$\theta$}^{\top},-\text{\boldmath$\beta$}^{\top})^{\top}\right)$ | $\theta_{1}\leq\dots\leq\theta_{P+1}$ | Continuous
o-logit | $\left(F_{\operatorname{SL}},(\tilde{\text{\boldmath$y$}}^{\top},\text{\boldmath$x$}^{\top})^{\top},(\text{\boldmath$\theta$}^{\top},-\text{\boldmath$\beta$}^{\top})^{\top}\right)$ | $\theta_{1}<\dots<\theta_{K-1},$ | Ordinal
| | $\theta_{K}=+\infty$ |
### 4.1 Application: BostonHousing2
For the BostonHousing2 data we wish to predict corrected median housing prices
(cmedv) from several socio-economical and environmental factors. These include
per capita crime rates (crim), average number of rooms (rm), and nitric oxide
concentration (nox) among others. Each observation corresponds to a single
census tract in Boston. Individual cities will serve as anchors in this
example because they are plausibly exogenous factors that induce heterogeneity
in the observed covariates and housing prices. “Leave-one-environment-out”
(LOEO) cross validation is used to demonstrate the change in estimated
regression coefficients and NLL comparing a plain model without causal
regularization ($\xi=0$) to three different anchor TMs over a large range of
causal regularization (Figure 4). For some of the left-out towns the
conditional distribution of cmedv will differ from the training distribution
and contain unseen perturbations. In this case, the town will be harder to
predict, leading to a worse cross-validated NLL compared to the environments
which are not perturbed. We hypothesize that an anchor TM should improve
prediction performance for the census tracts in these hard-to-predict towns,
in analogy to the distributional robustness results described in eq. (2),
whereas it should perform worse than a plain TM for environments with only
mild perturbations.
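Schematically, the protocol can be summarized as follows (a sketch of ours; `fit_anchor_tm` and `out_of_sample_nll` are hypothetical placeholders for a TM fitting routine under the distributional anchor loss and its held-out negative log-likelihood):

```python
# A schematic sketch (ours) of leave-one-environment-out cross validation:
# hold out one town at a time, fit on the remaining towns for each xi on a
# grid, and record the held-out NLL.
def loeo_cv(data, towns, xi_grid, fit_anchor_tm, out_of_sample_nll):
    scores = {}
    for town in towns:
        train = [row for row in data if row["town"] != town]
        test = [row for row in data if row["town"] == town]
        scores[town] = {xi: out_of_sample_nll(fit_anchor_tm(train, xi), test)
                        for xi in xi_grid}
    return scores
```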
Figure 4: Leave-one-environment-out cross validation under increasing causal
regularization for the BostonHousing2 data, with town as anchors. A linear
(Lm), continuous probit (c-probit) and continuous logit (c-logit) model is
fitted on 91 towns and used to predict the left out town. A: Out-of-sample NLL
for the left-out census tracts. Beacon Hill, Back Bay and North End are
consistently hardest to predict. Consequently, for these towns the cross-
validated NLL improves with increasing causal regularization up to a certain
point. For the majority of the remaining towns prediction performance
decreases. We thus indeed improve worst-case prediction, in analogy to eq.
(2). Note that $\log_{10}\xi=-\infty$ corresponds to the unpenalized model. B:
Estimated regression coefficients, which are interpretable as difference in
means (Lm), difference in transformed means (c-probit) and log odds-ratios
(c-logit). Solely the c-logit model accounts for right-censored observations.
With increasing causal regularization the estimates shrink towards zero,
indicating that town may be a weak instrument (see Appendix D).
First, a linear model assuming homoscedasticity and conditional normality is
used to estimate the conditional distribution of cmedv depending on the socio-
economic factors described above. A notable reduction in the observed worst-
case loss is already observed for mild causal regularization
($\xi\in\\{1,10\\}$) without losing too much predictive performance for the
other environments (Figure 4 A). For stronger causal regularization, the
cross-validated NLL becomes gradually worse. However, assuming a symmetric
distribution for prices ignores the typical skewness of these outcomes. The
c-probit model allows a non-linear basis expansion in the response and thus
relaxes the homoscedasticity and conditional normality assumption. Essentially
the same gain in terms of worst-case CV NLL is observed for the c-probit model
compared to Lm. Figure 5 shows the predicted conditional densities for the
three observations in Boston Beacon Hill and emphasizes the importance of
modelling cmedv using a right-skewed distribution. A disadvantage of switching
from a linear probit to a non-linear probit model is the loss of
interpretability of the individual regression coefficients (e.g., Fahrmeir et
al., 2007).
Figure 5: Density estimates for the three census tracts (Loc 1, Loc 2, Loc 3)
in Boston Beacon Hill, the hardest to predict town in terms of LOEO cross-
validated NLL for $\xi=10$ (cf. Figure 4). Lm assumes equal variances and
conditional normality, whereas c-probit loosens this assumption leading to
more accurate, skewed distributions. Only c-logit takes into account right
censoring in the data.
However, this disadvantage, too, can be overcome in the framework of anchor TMs
by specifying a different distribution function, $F_{Z}$, while keeping the
basis expansion in the outcome equally flexible. In addition, the housing
prices above $\$50^{\prime}000$ (cmedv = 50) are right-censored in the
BostonHousing2 data, which is commonly ignored in analyses, but crucial to
capture the uncertainty in predicting the skewed outcome (Gilley and Kelley
Pace, 1996). The c-logit model now takes into account right censoring of the
observations and yields regression coefficients that are interpretable as log
odds-ratios (Lohse et al., 2017). Indeed, for census tract Loc 1 the c-logit
model puts more probability mass on prices of cmedv beyond $\$50^{\prime}000$
relative to the density at its mode compared to the c-probit model, which
treats the censored observations as exact (Figure 5). Taking right censoring
into account apparently made out-of-sample prediction for Boston Beacon Hill,
Back Bay and North End easier, but the improvement through causal
regularization diminished slightly (Figure 4 A).
Coefficient estimates for all three models are shown in Figure 4 B. With
increasing amount of causal regularization, all estimates shrink towards 0,
which indicates that town may be a weak instrument (Imbens and Rosenbaum, 2005);
for more details see Appendix D. However, intermediate amounts of causal
regularization yield estimates for which anchors and score residuals are
somewhat de-correlated and still lead to the desired robustness against
perturbations in unseen environments.
### 4.2 Simulations
In this section, we present the results of the simulation scenarios.
The parameters for the SEMs used to simulate from the models in Table 1 are
summarized in Table 2.
#### 4.2.1 Experimental Setup
We begin with a comparison of linear anchor regression and the distributional
version of linear anchor regression in scenario la, which was first used to
study anchor regression in Bühlmann (2020). The non-linear scenario nla also
stems from Bühlmann (2020); we use it to show that shift transformation
models can estimate non-linear conditional expectation functions, despite
their model formulation being linear in the covariates. For the last two
scenarios, iv1 and iv2, the IV assumptions hold, i.e., the anchors influence
only $X$. Scenario iv1 showcases discrete anchors, a continuous response and a
non-linear transformation function, which we model by a continuous probit
regression. Scenario iv2 features an ordinal response and a more detailed
simulation, including various shift strengths. In scenarios la, iv1 and iv2,
the effect from $X$ to $Y$ is denoted by $\beta$, whereas the non-linear $f$
is used in scenario nla. For the data generating processes that involve
transformation models, the transformation function $h$ is specified. For
ordinal responses the number of classes, $K$, and for continuous outcomes the
maximum order of the Bernstein basis, $P$, determine the number of parameters
of the baseline transformation. The parameters of the Bernstein basis are
fixed by applying the transformation function $h$ to a $(P+1)$-vector of
evenly spaced values in the desired support of $Y$. In turn, this basis
approximation yields an approximation of the true distribution of $Y$ that
improves as $P$ increases. However, the transformation function is constrained
to be monotone non-decreasing, which makes a parsimonious parametrization
sufficient.
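To make this construction concrete, here is a minimal Python sketch (our own illustration, independent of the paper's R implementation; all names are hypothetical) of fixing the Bernstein coefficients via $h$ and evaluating the resulting approximation, using the $h=\Phi^{-1}\circ F_{\chi^{2}_{3}}$ from scenario iv1:
```python
import numpy as np
from scipy.stats import beta as beta_dist, chi2, norm

def bernstein_approx(y, h, support, P=6):
    """Approximate a monotone transformation h on support = (a, b) by a
    Bernstein polynomial of order P whose (P + 1) coefficients are fixed
    by evaluating h on an evenly spaced grid."""
    a, b = support
    theta = h(np.linspace(a, b, P + 1))     # fix coefficients via h
    t = (np.asarray(y) - a) / (b - a)       # map y to [0, 1]
    # the Bernstein basis polynomial b_{k,P}(t) equals a Beta(k+1, P-k+1)
    # density divided by (P + 1)
    basis = np.stack([beta_dist.pdf(t, k + 1, P - k + 1) / (P + 1)
                      for k in range(P + 1)], axis=-1)
    return basis @ theta

h = lambda t: norm.ppf(chi2.cdf(t, df=3))   # h = Phi^{-1} o F_{chi^2_3}
y = np.linspace(0.5, 10, 5)
print(bernstein_approx(y, h, support=(0.01, 12), P=6))
print(h(y))  # the approximation improves as P increases
```
Since $h$ is monotone, the coefficient vector is non-decreasing and the Bernstein approximation inherits the required monotonicity.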
Table 2: Simulation scenarios used to empirically evaluate distributional
anchor regression. Scenarios la and nla are adapted from Bühlmann (2020) and
will be used to evaluate linear and continuous probit anchor TMs. The seven
covariates omitted in the table in both scenarios are noise covariates, i.e.,
$\beta_{j}=0,\;j\neq 2,3$. In nla,
$f(\text{\boldmath$X$})=X_{2}+X_{3}+\mathds{1}(X_{2}\leq
0)+\mathds{1}(X_{2}\leq 0.5)\mathds{1}(X_{3}\leq 1)$. Both use
$n_{\text{train}}=300$ and $n_{\text{test}}=2000$ observations. In the iv
scenarios the instrumental variable assumptions hold, because the anchor
influences neither the hidden confounders nor the response. The scenarios
generalize Example 1 in Rothenhäusler et al. (2018) to anchor TMs with a
continuous outcome (iv1) and an ordinal outcome (iv2). Both use
$n_{\text{train}}=1000$ and $n_{\text{test}}=2000$ observations.
Scenario | Model | Distributions | Parameters | Perturbation
---|---|---|---|---
la | Lm | $\begin{array}[]{rl}\varepsilon_{Y}&\sim\mathcal{N}(0,0.25^{2})\\ \varepsilon_{\text{\boldmath$X$}}&\sim\mathcal{N}_{10}(0,\operatorname{Id})\\ \varepsilon_{H}&\sim\mathcal{N}(0,1)\\ \varepsilon_{\text{\boldmath$A$}}&\sim\mathcal{N}_{2}(0,\operatorname{Id})\end{array}$ | $\begin{array}[]{rl}\beta_{j}&=3,\;j=2,3\\ \text{$\mathbf{M}$}_{Y}&=(-2,0)\\ \text{$\mathbf{M}$}_{X_{j}}&\sim\mathcal{N}_{2}(0,\operatorname{Id}),\;j=1,\dots,10\\ \text{$\mathbf{M}$}_{\text{\boldmath$H$}}&=0\\ \text{$\mathbf{B}$}_{Y\text{\boldmath$H$}}&=1\\ \text{$\mathbf{B}$}_{\text{\boldmath$X$}\text{\boldmath$H$}}&=(1,\dots,1)\end{array}$ | $\text{\boldmath$A$}\sim\mathcal{N}_{2}(0,10\operatorname{Id})$
nla | c-probit | $\begin{array}[]{rl}\varepsilon_{Y}&\sim\mathcal{N}(0,0.25^{2})\\ \varepsilon_{\text{\boldmath$X$}}&\sim\mathcal{N}_{10}(0,0.5^{2}\operatorname{Id})\\ \varepsilon_{H}&\sim\mathcal{N}(0,1)\\ \varepsilon_{\text{\boldmath$A$}}&\sim\mathcal{N}_{2}(0,\operatorname{Id})\end{array}$ | $\begin{array}[]{rl}f&=f(X_{2},X_{3})\\ \text{$\mathbf{M}$}_{Y}&=\text{$\mathbf{M}$}_{\text{\boldmath$H$}}=0\\ \text{$\mathbf{M}$}_{X_{j}}&=(1,1),\;j=1,\dots,10\\ \text{$\mathbf{B}$}_{Y\text{\boldmath$H$}}&=3\\ \text{$\mathbf{B}$}_{\text{\boldmath$X$}\text{\boldmath$H$}}&=(2,\dots,2)\end{array}$ | $\begin{array}[]{l}\text{\boldmath$A$}\sim\mathcal{N}_{2}(\text{\boldmath$\mu$},\operatorname{Id})\\ \text{\boldmath$\mu$}\sim\mathcal{N}_{2}(1,2^{2}\operatorname{Id})\end{array}$
iv1 | c-probit | $\begin{array}[]{rl}\varepsilon_{X}&\sim\mathcal{N}(0,0.75^{2})\\ \varepsilon_{H}&\sim\mathcal{N}(0,0.75^{2})\\ \varepsilon_{A}&\sim\operatorname{Rademacher}\end{array}$ | $\begin{array}[]{rl}\beta&=0.3\\ \text{$\mathbf{M}$}_{Y}&=\text{$\mathbf{M}$}_{H}=0\\ \text{$\mathbf{M}$}_{X}&=0.3\\ \text{$\mathbf{B}$}_{YH}&=\text{$\mathbf{B}$}_{XH}=0.6\\ h&=\Phi^{-1}\circ F_{\chi^{2}_{3}}\\ P&=6\end{array}$ | $\operatorname{do}(A=3.6)$
iv2 | o-logit | $\begin{array}[]{rl}\varepsilon_{X}&\sim\mathcal{N}(0,0.75^{2})\\ \varepsilon_{H}&\sim\mathcal{N}(0,0.75^{2})\\ \varepsilon_{A}&\sim\operatorname{Rademacher}\end{array}$ | $\begin{array}[]{rl}\beta&=0.5\\ \text{$\mathbf{M}$}_{Y}&=\text{$\mathbf{M}$}_{H}=0\\ \text{$\mathbf{M}$}_{X}&\in\{-1,0.5,1\}\\ \text{$\mathbf{B}$}_{YH}&=\text{$\mathbf{B}$}_{XH}=1.5\\ h&=F_{\operatorname{SL}}^{-1}\circ\operatorname{Id}\\ K&\in\{4,6,10\}\end{array}$ | $\begin{array}[]{l}\operatorname{do}(A=a)\\ a\in\{0,1,1.8,3\}\end{array}$
#### 4.2.2 Scenario la
The linear anchor scenario la was first presented in Bühlmann (2020) for the
$L_{2}$ anchor loss. The performance gain of using anchor regression compared
to a plain linear model is shown in Figure 6 for the $L_{2}$ anchor loss
(left) and the distributional anchor loss (right).
Figure 6: Test performance averaged over 100 simulations for scenario la for
$n_{\text{train}}=300$ and $n_{\text{test}}=2000$. The $\alpha$-quantiles of
test absolute prediction error $\text{APE}:=\lvert y-\hat{y}\rvert$, where
$\hat{y}$ denotes the conditional median, are shown for linear anchor
regression (A), and the negative log-likelihood contributions for
distributional (conditionally Gaussian) linear $L_{2}$ anchor regression (B).
The two models are equivalent up to estimating the residual variance via
maximum likelihood in the distributional anchor TM. The change in perspective
from an $L_{2}$ to a distributional loss requires different evaluation
metrics, of which the log-likelihood, being a proper scoring rule (Good,
1952), is the most natural choice.
A performance gain across all quantiles of the log-likelihood contributions
can be observed; the larger the quantile, the higher the performance gain. The
extent of causal regularization was chosen based on the theoretical insight
that, in a multivariate normal model, $\gamma$ can be interpreted as
insight that, in a multivariate normal model, $\gamma$ can be interpreted as
the quantile of a $\chi^{2}_{1}$ distribution, which relates the expected size
of the unobserved perturbations to the conditional mean squared error given
the anchors (Lemma 1 in Rothenhäusler et al., 2018).
#### 4.2.3 Scenario nla
In scenario nla, which features non-linear anchor regression, a continuous
probit model is fitted. Figure 7 shows a substantial gain in performance over
all quantiles, comparable to what was observed in Bühlmann (2020) with a
different non-linear anchor boosting method.
Figure 7: Test performance over 100 simulations for scenario nla with
$n_{\text{train}}=300$ and $n_{\text{test}}=2000$. $\operatorname{NLL}$ (A)
and $\alpha$-quantiles of the negative log-likelihood contributions (B) for
the c-probit anchor TM. The test data are generated under strong push-
interventions on the distribution of the anchors (cf. Table 2).
This gain in performance can be explained in the causal generalization
framework of Christiansen et al. (2020), because the causal function extends
linearly outside the support of $\text{\boldmath$X$}_{\text{train}}$.
#### 4.2.4 Scenario iv1
In scenario la the anchors influence the response and hidden confounders,
violating the instrumental variable assumptions. Scenario iv1 explores binary
anchors as valid instruments, while the baseline transformation
$\text{\boldmath$b$}_{\text{Bs},6}(y)^{\top}\text{\boldmath$\theta$}$ is non-
linear. Note that although the model formulation is linear in $\beta$, the
conditional expectation function may be non-linear, because of the non-linear
transformation function. This is inspired by Example 1 in Rothenhäusler et al.
(2018) and translates it into a transformation model SEM from Definition 3 for
continuous but non-normal outcomes.
Figure 8: Test performance over 100 simulations for scenario iv1 with
$n_{\text{train}}=1000$ and $n_{\text{test}}=2000$. Quantiles of the
individual log-likelihood contributions (A) and estimates of $\beta$ (B) for
increasingly strong causal regularization. The ground truth is indicated by a
dashed line. The test data are generated under the intervention
$\operatorname{do}(\text{\boldmath$A$}=3.6)$.
The test data were generated using $\operatorname{do}(A=3.6)$, which leads to
better predictive performance under stronger causal regularization (Figure 8
A). Additionally, although $A$ is a valid instrument, the causal parameter
seems to be slightly biased in the finite sample case for larger $\xi$ (Figure
8 B).
#### 4.2.5 Scenario iv2
The instrumental variable assumptions also hold in the last scenario, iv2.
However, the response is now ordered categorical, and more parameters are
varied, including the number of classes of the response, the strength of the
instruments and the size of the perturbations in the test data (cf. Table 2).
Figure 9: Test performance and coefficient estimates over 200 simulations for
scenario iv2. Because the results are comparable for differing sample sizes
and numbers of classes, only the results for $n_{\text{train}}=1000$ and
$K=10$ are displayed. A: Test log-likelihood contributions for instruments of
varying strength (columns) and perturbation sizes (rows). B: Parameter
estimates $\hat{\beta}$ pooled over all intervention scenarios, since the
interventions do not influence estimation. The simulated ground truth
$\beta=0.5$ is indicated with a dashed line.
Figure 9 depicts the test NLL alongside the estimated shift parameter,
$\hat{\beta}$. Here too, in the case of strong perturbations, anchor
regression protects against unseen perturbations for larger $\xi$ (e.g.,
$\operatorname{do}(A=1.8)$ and $\operatorname{do}(A=3)$ for
$\text{$\mathbf{M}$}_{\text{\boldmath$A$}}=-0.5$) resulting in improved test
predictions. However, if the shift perturbations are not innovative, test
prediction suffers with increasing amounts of causal regularization (e.g.,
$\operatorname{do}(A=1)$ for $\text{$\mathbf{M}$}_{\text{\boldmath$X$}}=2$).
Note the interplay between the strength of the anchors,
$\text{$\mathbf{M}$}_{\text{\boldmath$X$}}$, and the strength of the shift
interventions. For larger $\text{$\mathbf{M}$}_{\text{\boldmath$X$}}$, the
training data becomes more heterogeneous and the larger shifts are not as
innovative, resulting in weaker performance of anchor TMs for increasing
$\xi$. Again, the estimated shift parameter seems to be slightly biased
(Figure 9 B).
## 5 Discussion and Outlook
The proposed method of distributional anchor regression generalizes (non-)
linear anchor regression beyond the assumptions of normality and
homoscedasticity and beyond estimating solely a conditional mean.
In an exemplary analysis of the BostonHousing2 data we have illustrated the
flexibility of anchor TMs and demonstrated a gain in prediction performance in
terms of worst-case cross-validated log-likelihood, while preserving
interpretability and appropriately accounting for censored observations. The
simulations show results comparable to established linear and non-linear
anchor regression models under both valid and invalid IV scenarios, and extend
the notion of invariance between residuals and environments to responses other
than continuous ones. Although anchor TMs are generally unable to recover the
causal parameter, we argue that the “diluted causal” (Bühlmann, 2020)
parameter,
$\hat{\text{\boldmath$\beta$}}^{\to\infty}:=\lim_{\xi\to\infty}\hat{\text{\boldmath$\beta$}}(\xi)$,
is interesting in its own right, for it induces invariance
between anchors and score residuals and allows robust test predictions in the
presence of distributional shifts. Much like the causal parameter, the diluted
causal parameter leads to (aspects of) invariant conditional distributions
across environments. Even though the powerful causal interpretation is lost,
distributional anchor regression yields models that allow causally flavored
interpretations in terms of stabilization and robustification across
environments.
Anchor TMs estimate the whole conditional distribution and thus enable robust
prediction of a multitude of responses, which we demonstrated for (censored)
continuous and ordered categorical responses. Possible extensions of anchor
TMs are numerous. For instance, other types of responses include count and
time-to-event data. The framework of anchor TMs contains a fully parametric
version of the Cox proportional hazards model (Hothorn et al., 2018), although
an extension to classical survival models is also possible. For instance, the
Cox proportional hazards model (Cox, 1972) can be fitted by substituting the
partial likelihood (Cox, 1975) for the likelihood in the distributional anchor
loss, while the score residuals are equivalent to martingale residuals (cf.
Appendix B; Barlow and Prentice, 1988; Therneau et al., 1990). As in high-
dimensional linear and non-linear anchor regression, anchor TMs could be
fitted under a lasso penalty (Tibshirani, 1996). The idea of using a different
class of residuals can also be translated to other model classes, such as
deviance residuals for GLMs, as long as the theoretical requirements discussed
in Section 3 are met.
In terms of future work, further theoretical investigation of the
distributional anchor loss, such as bounds on the generalization error, is
warranted. So far we restricted distributional regression to linear (in $x$)
TMs because of their already highly flexible nature and simplicity in the
considered DGPs. However, more complex experimental designs require, for
instance, random effects or time-varying effects of covariates in time-to-
event data. Taken together, anchor TMs lay the foundation for future work on
distributional extensions of anchor regression.
##### Acknowledgements.
The research of L. Kook and B. Sick was supported by the Novartis Research
Foundation (FreeNovation 2019). The research of P. Bühlmann was supported by
the European Research Council under the Grant Agreement No 786461 (CausalStats
- ERC-2017-ADG). We thank Torsten Hothorn for fruitful discussions and input
on the exemplary application. We also thank Malgorzata Roos for her helpful
comments on the manuscript.
## References
* Abadi et al. [2015] Martín Abadi et al. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems, 2015. URL https://tensorflow.org/. Software available from tensorflow.org.
* Angrist et al. [1996] Joshua D. Angrist, Guido W. Imbens, and Donald B. Rubin. Identification of Causal Effects Using Instrumental Variables. _Journal of the American Statistical Association_ , 91(434):444–455, 1996. doi: 10.1080/01621459.1996.10476902. URL https://www.tandfonline.com/doi/abs/10.1080/01621459.1996.10476902.
* Azzalini and Bowman [1990] A. Azzalini and A. W. Bowman. A Look at Some Data on the Old Faithful Geyser. _Applied Statistics_ , 39(3):357, 1990. doi: 10.2307/2347385. URL https://www.jstor.org/stable/10.2307/2347385?origin=crossref.
* Barlow and Prentice [1988] William E. Barlow and Ross L. Prentice. Residuals for relative risk regression. _Biometrika_ , 75(1):65–74, 1988. doi: 10.1093/biomet/75.1.65. URL https://academic.oup.com/biomet/article-lookup/doi/10.1093/biomet/75.1.65.
* Boyd and Vandenberghe [2004] Stephen Boyd and Lieven Vandenberghe. _Convex Optimization_. Cambridge University Press, 2004. doi: 10.1017/CBO9780511804441.
* Bühlmann [2020] Peter Bühlmann. Invariance, Causality and Robustness. _Statistical Science_ , 35(3):404–426, 2020. doi: 10.1214/19-STS721. URL https://projecteuclid.org/euclid.ss/1599789698.
* Bühlmann and Ćevid [2020] Peter Bühlmann and Domagoj Ćevid. Deconfounding and Causal Regularisation for Stability and External Validity. _International Statistical Review_ , 88(1):114–134, 2020. doi: 10.1111/insr.12426. URL https://onlinelibrary.wiley.com/doi/10.1111/insr.12426.
* Caruana [1997] Rich Caruana. Multitask Learning. _Machine Learning_ , 28(1):41–75, 1997. doi: 10.1023/A:1007379606734. URL https://link.springer.com/article/10.1023/A:1007379606734.
* Chen and Bühlmann [2020] Yuansi Chen and Peter Bühlmann. Domain adaptation under structural causal models. _arXiv preprint arXiv:2010.15764_ , 2020. URL https://arxiv.org/abs/2010.15764.
* Christiansen et al. [2020] Rune Christiansen, Niklas Pfister, Martin Emil Jakobsen, Nicola Gnecco, and Jonas Peters. A causal framework for distribution generalization. _arXiv preprint_ , 2020. URL https://arxiv.org/abs/2006.07433.
* Cox [1972] D. R. Cox. Regression Models and Life-Tables. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 34(2):187–202, 1972. doi: 10.1111/j.2517-6161.1972.tb00899.x. URL https://doi.wiley.com/10.1111/j.2517-6161.1972.tb00899.x.
* Cox [1975] D. R. Cox. Partial likelihood. _Biometrika_ , 62(2):269–276, 1975. doi: 10.1093/biomet/62.2.269. URL https://academic.oup.com/biomet/article-lookup/doi/10.1093/biomet/62.2.269.
* Cox and Snell [1968] D. R. Cox and E. J. Snell. A General Definition of Residuals. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 30(2):248–265, 1968. doi: 10.1111/j.2517-6161.1968.tb00724.x. URL https://doi.wiley.com/10.1111/j.2517-6161.1968.tb00724.x.
* Efron [2020] Bradley Efron. Prediction, Estimation, and Attribution. _Journal of the American Statistical Association_ , 115(530):636–655, 2020. doi: 10.1080/01621459.2020.1762613. URL https://doi.org/10.1080/01621459.2020.1762613.
* Fahrmeir et al. [2007] Ludwig Fahrmeir, Thomas Kneib, Stefan Lang, and Brian Marx. _Regression_. Springer, 2007.
* Farrington [2000] C P Farrington. Residuals for Proportional Hazards Models with Interval-Censored Survival Data. _Biometrics_ , 56(2):473–482, 2000. doi: 10.1111/j.0006-341X.2000.00473.x. URL https://doi.wiley.com/10.1111/j.0006-341X.2000.00473.x.
* Fu et al. [2020] Anqi Fu, Balasubramanian Narasimhan, and Stephen Boyd. CVXR: An R package for disciplined convex optimization. _Journal of Statistical Software_ , 2020. URL https://arxiv.org/abs/1711.07582. Accepted for publication.
* Gilley and Kelley Pace [1996] Otis W. Gilley and R. Kelley Pace. On the Harrison and Rubinfeld data. _Journal of Environmental Economics and Management_ , 31(3):403–405, 1996. doi: 10.1006/jeem.1996.0052. URL https://linkinghub.elsevier.com/retrieve/pii/S0095069696900522.
* Good [1952] I. J. Good. Rational decisions. _Journal of the Royal Statistical Society. Series B (Methodological)_ , 14(1):107–114, 1952. doi: 10.1111/j.2517-6161.1952.tb00104.x. URL https://www.jstor.org/stable/2984087.
* Haavelmo [1943] Trygve Haavelmo. The Statistical Implications of a System of Simultaneous Equations. _Econometrica_ , 11(1):1, 1943. doi: 10.2307/1905714. URL https://www.jstor.org/stable/1905714?origin=crossref.
* Harrison and Rubinfeld [1978] David Harrison and Daniel L. Rubinfeld. Hedonic housing prices and the demand for clean air. _Journal of Environmental Economics and Management_ , 5(1):81–102, 1978. doi: 10.1016/0095-0696(78)90006-2. URL https://linkinghub.elsevier.com/retrieve/pii/0095069678900062.
* Hothorn [2020] Torsten Hothorn. Most Likely Transformations: The mlt Package. _Journal of Statistical Software_ , 92(1), 2020. doi: 10.18637/jss.v092.i01. URL https://www.jstatsoft.org/v92/i01/.
* Hothorn et al. [2014] Torsten Hothorn, Thomas Kneib, and Peter Bühlmann. Conditional transformation models. _Journal of the Royal Statistical Society. Series B: Statistical Methodology_ , 76(1):3–27, 2014. doi: 10.1111/rssb.12017. URL https://www.jstor.org/stable/24772743.
* Hothorn et al. [2018] Torsten Hothorn, Lisa Möst, and Peter Bühlmann. Most Likely Transformations. _Scandinavian Journal of Statistics_ , 45(1):110–134, 2018. doi: 10.1111/sjos.12291. URL https://doi.wiley.com/10.1111/sjos.12291.
* Imbens and Rosenbaum [2005] Guido W. Imbens and Paul R. Rosenbaum. Robust, accurate confidence intervals with a weak instrument: quarter of birth and education. _Journal of the Royal Statistical Society: Series A (Statistics in Society)_ , 168(1):109–126, 2005. doi: 10.1111/j.1467-985X.2004.00339.x. URL https://doi.wiley.com/10.1111/j.1467-985X.2004.00339.x.
* Kingma and Ba [2015] Diederik P. Kingma and Jimmy Lei Ba. Adam: A method for stochastic optimization. In _3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings_. International Conference on Learning Representations, ICLR, 2015. URL https://arxiv.org/abs/1412.6980v9.
* Kook and Hothorn [2020] Lucas Kook and Torsten Hothorn. _Regularized Transformation Models: The tramnet Package_ , 2020. URL https://cran.r-project.org/package=tramnet.
* Kook et al. [2020] Lucas Kook, Lisa Herzog, Torsten Hothorn, Oliver Dürr, and Beate Sick. Ordinal Neural Network Transformation Models: Deep and interpretable regression models for ordinal outcomes. _arXiv preprint arXiv:2010.08376_ , 2020. URL https://arxiv.org/abs/2010.08376.
* Lagakos [1981] S. W. Lagakos. The graphical evaluation of explanatory variables in proportional hazard regression models. _Biometrika_ , 68(1):93–98, 1981. doi: 10.1093/biomet/68.1.93. URL https://academic.oup.com/biomet/article-lookup/doi/10.1093/biomet/68.1.93.
* Lindsey et al. [1996] James K Lindsey et al. _Parametric statistical inference_. Oxford University Press, 1996.
* Lohse et al. [2017] Tina Lohse, Sabine Rohrmann, David Faeh, and Torsten Hothorn. Continuous outcome logistic regression for analyzing body mass index distributions. _F1000Research_ , 6:1933, 2017. doi: 10.12688/f1000research.12934.1. URL https://doi.org/10.12688/f1000research.12934.1.
* Magliacane et al. [2018] Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. In _Advances in Neural Information Processing Systems_ , pages 10846–10856, 2018.
* Markowetz et al. [2005] Florian Markowetz, Steffen Grossmann, and Rainer Spang. Probabilistic soft interventions in conditional gaussian networks. In _Tenth International Workshop on Artificial Intelligence and Statistics_ , pages 214–221. Society for Artificial Intelligence and Statistics, 2005.
* Mitrovic et al. [2020] Jovana Mitrovic, Brian McWilliams, Jacob Walker, Lars Buesing, and Charles Blundell. Representation learning via invariant causal mechanisms. _arXiv preprint arXiv:2010.07922_ , 2020. URL https://arxiv.org/abs/2010.07922.
* Pan and Yang [2009] Sinno Jialin Pan and Qiang Yang. A survey on transfer learning. _IEEE Transactions on knowledge and data engineering_ , 22(10):1345–1359, 2009.
* Pearl [2009] Judea Pearl. _Causality_. Cambridge university press, 2009.
* R Core Team [2020] R Core Team. _R: A Language and Environment for Statistical Computing_. R Foundation for Statistical Computing, Vienna, Austria, 2020. URL https://www.R-project.org/.
* Redko et al. [2020] Ievgen Redko, Emilie Morvant, Amaury Habrard, Marc Sebban, and Younès Bennani. A survey on domain adaptation theory. _arXiv preprint arXiv:2004.11829_ , 2020. URL https://arxiv.org/abs/2004.11829.
* Rojas-Carulla et al. [2018] Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant Models for Causal Transfer Learning. _Journal of Machine Learning Research_ , 19:1–34, 2018. doi: 10.5555/3291125.3291161. URL https://jmlr.org/papers/v19/16-432.html.
* Rothenhäusler et al. [2018] Dominik Rothenhäusler, Nicolai Meinshausen, Peter Bühlmann, and Jonas Peters. Anchor regression: heterogeneous data meets causality. _arXiv preprint arXiv:1801.06229_ , 2018. URL https://arxiv.org/abs/1801.06229. To appear in J. Royal Statistical Society: Series B (Statistical Methodology).
* Skrondal and Rabe-Hesketh [2004] Anders Skrondal and Sophia Rabe-Hesketh. _Generalized latent variable modeling: Multilevel, longitudinal, and structural equation models_. Crc Press, 2004.
* Stigler [2016] Stephen M Stigler. _The seven pillars of statistical wisdom_. Harvard University Press, 2016.
* Subbaswamy and Saria [2019] Adarsh Subbaswamy and Suchi Saria. From development to deployment: dataset shift, causality, and shift-stable models in health AI. _Biostatistics_ , 21(2):345–352, 2019. doi: 10.1093/biostatistics/kxz041. URL https://academic.oup.com/biostatistics/advance-article/doi/10.1093/biostatistics/kxz041/5631850.
* Sun et al. [2019] Yu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei A Efros, and Moritz Hardt. Test-time training for out-of-distribution generalization. _arXiv preprint arXiv:1909.13231_ , 2019. URL https://arxiv.org/abs/1909.13231.
* Therneau et al. [1990] T. M. Therneau, P. M. Grambsch, and T. R. Fleming. Martingale-based residuals for survival models. _Biometrika_ , 77(1):147–160, 1990. doi: 10.1093/biomet/77.1.147. URL https://academic.oup.com/biomet/article-lookup/doi/10.1093/biomet/77.1.147.
* Tibshirani [1996] Robert Tibshirani. Regression Shrinkage and Selection Via the Lasso. _Journal of the Royal Statistical Society: Series B (Methodological)_ , 58(1):267–288, 1996. doi: 10.1111/j.2517-6161.1996.tb02080.x. URL https://rss.onlinelibrary.wiley.com/doi/10.1111/j.2517-6161.1996.tb02080.x.
## Appendix A Notation
Random variables are written in uppercase italic, $Y$, and realizations
thereof in lowercase, $y$. When stacked to a vector of $n$ observations, we
write $\text{\boldmath$y$}=(y_{1},\dots,y_{n})^{\top}$. Random vectors are
written like random variables, but in bold, $X$, and realizations thereof in
lowercase bold, $x$. Stacked to a matrix for $n$ observations, we write
$\text{$\mathbf{X}$}=(\text{\boldmath$x$}_{1}^{\top},\dots,\text{\boldmath$x$}_{n}^{\top})^{\top}\in\mathbb{R}^{n\times
p}$. Matrices are written in bold, normal uppercase, $\mathbf{A}$, vectors in
bold italic lowercase, $a$. The probability measure of a random variable $Y$
is denoted by $\mathbb{P}_{Y}$. Coefficient matrices in the SEMs are denoted
by $\mathbf{M}$ for the anchors $A$ and by $\mathbf{B}$ for all other
components. When restricting the coefficient matrix $\mathbf{B}$ to a single
component, e.g., the effect of $X$ on $Y$, we write
$\text{$\mathbf{B}$}_{Y\text{\boldmath$X$}}$. Because it is clear from context
that $\mathbf{M}$ refers to the anchors, we omit the $A$ in the index.
## Appendix B Background on Score Residuals
Stigler [2016] considers residuals the seventh “pillar of statistical wisdom”,
which highlights their conceptual importance. Here, we will briefly justify
the use of the score residual in transformation models based on the work of
Lagakos [1981], who introduced martingale residuals for survival analysis
under interval censoring. Farrington [2000] presents a summary of the
different types of residuals used in survival analysis. The purpose of score
residuals in transformation models is to generalize the notion of a residual
to a wider class of models, namely TMs, and to allow for all kinds of
uninformative censoring.
In Definition 2, we note that the score residual can be interpreted as the
score contribution of a newly introduced intercept parameter, which is
constrained to zero. This is equivalent to formulating a score test for
$\alpha=0$ for a covariate that is not yet included in the model. Since
$\alpha$ is constrained to zero in the whole procedure, we do not need to
evaluate the model under the alternative hypothesis. Note also that we
restrict $\alpha$ to an intercept term on the scale of the transformation
function. Farrington gives a compelling argument for why one should do so. This
connection is drawn next.
As an adjustment of Cox-Snell residuals [Cox and Snell, 1968], Lagakos
proposes a centered version thereof. Barlow and Prentice [1988]
later drew the connection to stochastic calculus and coined the term
“martingale residual” [see also Therneau et al., 1990]. For interval and right
censored, as well as exact responses, Lagakos residuals have a direct
connection to score statistics [Farrington, 2000]. The setting is based in
survival analysis, but the connection to transformation models will become
apparent. Lagakos starts from a general proportional hazards model where
$\displaystyle\lambda(t|\text{\boldmath$x$})=\lambda(t)\exp(\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$})$
denotes the hazard function depending on time $t$ and covariates $x$, which
can be decomposed into a product of a baseline hazard function $\lambda(t)$
and the exponential function of a linear predictor in the covariates $\beta$.
The cumulative hazard function is then defined as
$\displaystyle\Lambda(t|\text{\boldmath$x$})=\int_{0}^{t}\lambda(u|\text{\boldmath$x$})du.$
From there, the cdf can be recovered as
$\displaystyle
F_{T}(t|\text{\boldmath$x$})=1-\exp(-\Lambda(t)\exp(\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$})).$
By definition $\Lambda(t)>0\;\forall t$, hence
$\displaystyle
F_{T}(t|\text{\boldmath$x$})=1-\exp(-\exp(\log\Lambda(t)+\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$})),$
which is a transformation model with Minimum-Extreme-Value inverse-link,
$F_{Z}=F_{\operatorname{MEV}}$, and the transformation function,
$h(t)=\log\Lambda(t)$, is the log cumulative baseline hazard. Farrington now
assumes that the cumulative baseline hazard function belongs to a family
$\mathcal{F}$ that is closed under scaling by a factor
$\gamma\in\mathbb{R}^{++}$, i.e.,
$\displaystyle\Lambda(t)\in\mathcal{F}\Rightarrow\gamma\Lambda(t)\in\mathcal{F}.$
The Lagakos residual is now defined as
$\displaystyle\hat{r}^{L}=\frac{\partial}{\partial\alpha}\ell\bigg{\rvert}_{\hat{\text{\boldmath$\beta$}},\hat{S}_{0}}$
for $\alpha=\log\gamma$ and $\hat{S}_{0}$ being the estimated survivor curve
under $\text{\boldmath$x$}=0$, i.e.,
$\hat{S}_{0}=\exp(-\hat{\Lambda}(t|\text{\boldmath$x$}=0))$. This translates
to the family of log (cumulative) hazard functions being closed under addition
of $\alpha=\log\gamma\in\mathbb{R}$, which is exactly the case for
transformation models. This step corresponds to adding and constraining a new
intercept term to zero on the scale of the transformation function. Lagakos
residuals behave like the usual score statistic, i.e., they have mean zero
asymptotically and are asymptotically uncorrelated.
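For intuition, consider a c-probit TM and an exact observation $y$, for which the log-likelihood contribution with an added intercept $\alpha$ on the transformation scale is $\log\phi(h(y)+\alpha-\text{\boldmath$x$}^{\top}\text{\boldmath$\beta$})+\log h^{\prime}(y)$. The following Python sketch (our own illustration, not code from the paper) checks the resulting closed-form score residual against a finite difference:
```python
import numpy as np
from scipy.stats import norm

def loglik(alpha, z):
    # z = h(y) - x'beta; the log h'(y) term does not depend on alpha
    return norm.logpdf(z + alpha)

z = 0.7
eps = 1e-6
finite_diff = (loglik(eps, z) - loglik(-eps, z)) / (2 * eps)
closed_form = -z   # d/d alpha of log phi(z + alpha), evaluated at alpha = 0
print(finite_diff, closed_form)   # both approximately -0.7
```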
## Appendix C Computational Details
Anchor TMs, simulation scenarios and visualizations are written in the R
language for statistical computing [Version 3.6.3, R Core Team, 2020]. To
implement distributional anchor regression, note that the transformation model
log-likelihood is concave w.r.t. $\vartheta$, unless all observations are
right censored. The penalty resulting from the quadratic form
$\text{\boldmath$r$}^{\top}\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}\text{\boldmath$r$}$
is convex if $r$, from Definition 2, is affine or convex [Boyd and
Vandenberghe, 2004]. In case of exact observations in Lm and c-probit, the
resulting penalty is convex and thus solvable by a constrained convex
optimizer. Kook and Hothorn [2020] implement regularized transformation models
under elastic net penalties in tramnet, using the domain-specific language
optimizer CVXR [Fu et al., 2020]. From tramnet, we use the TM likelihood
implementation. For the interval censored or right censored models (o-logit,
c-logit) we fit the models using (stochastic) gradient descent (SGD) from
package TensorFlow [Abadi et al., 2015]. The implementation of the interval
censored log-likelihood for ordinal TMs was taken from Kook et al. [2020] and
we used SGD with the Adam optimizer [Kingma and Ba, 2015] with learning rate
$10^{-3}$, batch size $250$ and $200$ epochs. Parameters were initialized with
the maximum likelihood estimate for $\xi=0$ obtained via tram::Polr()
[Hothorn, 2020].
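As a minimal numerical sketch of the penalized objective (our own Python illustration rather than the paper's R implementation; the exact weighting of the regularizer may differ from the paper's definition): with $\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}$ the linear projection onto the column space of the anchors, the loss combines the negative log-likelihood with the quadratic form of the score residuals.
```python
import numpy as np

def anchor_penalized_loss(nll, r, A, xi):
    """Sketch of the distributional anchor loss: NLL plus xi times the
    quadratic form r' Pi_A r of the score residuals.

    nll: scalar negative log-likelihood, r: (n,) score residuals,
    A: (n, q) anchor matrix with full column rank, xi: regularization."""
    Pi_A = A @ np.linalg.solve(A.T @ A, A.T)   # projection onto col(A)
    return nll + xi * r @ Pi_A @ r

rng = np.random.default_rng(0)
n = 100
A = np.column_stack([np.ones(n), rng.normal(size=n)])
r = rng.normal(size=n)
print(anchor_penalized_loss(nll=123.4, r=r, A=A, xi=10.0))
```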
## Appendix D Invalid Instruments and Shrinkage of Estimates
In the linear IV setting an instrument is called weak if it has little impact
on the explanatory variables $X$. Consequently, a two-stage least squares
procedure will yield homogeneous predictions for $X$ in the first stage,
because
$\displaystyle\text{\boldmath$X$}\perp\text{\boldmath$A$}\implies\boldsymbol{\Pi}_{\text{$\mathbf{A}$}}\text{$\mathbf{X}$}\equiv\operatorname{const.}$
This leads to regressing the response on a matrix with constant columns,
making it impossible to separate $\beta$ from an intercept. Thus, in case of
weak instruments, the resulting estimates $\hat{\text{\boldmath$\beta$}}$ will
shrink towards $0$ as $\xi\to\infty$ when using a causal regularizer. This
explains the effect seen for the BostonHousing2 data in Section 4.1 and makes
the diluted causal parameter impossible to study, because it is identically
$0$. However, intermediate amounts of causal regularization yield a benefit in
terms of worst-case LOEO CV (cf. Figure 4).
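A quick numerical check of the display above (our own illustration; with an intercept among the anchors, the first-stage predictions collapse to the column means when $\text{\boldmath$X$}\perp\text{\boldmath$A$}$):
```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
A = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + anchor
X = rng.normal(size=(n, 3))                            # X independent of A

# fitted = Pi_A X, computed without forming the n x n projection matrix
fitted = A @ np.linalg.lstsq(A, X, rcond=None)[0]

print(fitted.std(axis=0))    # near 0: the columns are essentially constant
print(np.allclose(fitted.mean(axis=0), X.mean(axis=0)))  # True
```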
# Word Alignment by Fine-tuning Embeddings on Parallel Corpora
Zi-Yi Dou, Graham Neubig
Language Technologies Institute, Carnegie Mellon University
<EMAIL_ADDRESS>
###### Abstract
Word alignment over parallel corpora has a wide variety of applications,
including learning translation lexicons, cross-lingual transfer of language
processing tools, and automatic evaluation or analysis of translation outputs.
The great majority of past work on word alignment has performed
unsupervised learning on parallel text. Recently, however, other work has
demonstrated that pre-trained contextualized word embeddings derived from
multilingually trained language models (LMs) prove an attractive alternative,
achieving competitive results on the word alignment task even in the absence
of explicit training on parallel data. In this paper, we examine methods to
marry the two approaches: leveraging pre-trained LMs but fine-tuning them on
parallel text with objectives designed to improve alignment quality, and
proposing methods to effectively extract alignments from these fine-tuned
models. We perform experiments on five language pairs and demonstrate that our
model can consistently outperform previous state-of-the-art models of all
varieties. In addition, we demonstrate that we are able to train multilingual
word aligners that can obtain robust performance on different language pairs.
Our aligner, AWESoME (Aligning Word Embedding Spaces of Multilingual
Encoders), with pre-trained models is available at
https://github.com/neulab/awesome-align.
## 1 Introduction
Word alignment is a useful tool to tackle a variety of natural language
processing (NLP) tasks, including learning translation lexicons (Ammar et al.,
2016; Cao et al., 2019), cross-lingual transfer of language processing tools
(Yarowsky et al., 2001; Padó and Lapata, 2009; Tiedemann, 2014; Agić et al.,
2016; Mayhew et al., 2017; Nicolai and Yarowsky, 2019), semantic parsing
(Herzig and Berant, 2018) and speech recognition (Xu et al., 2019). In
particular, word alignment plays a crucial role in many machine translation
(MT) related methods, including guiding learned attention (Liu et al., 2016),
incorporating lexicons during decoding (Arthur et al., 2016), domain
adaptation (Hu et al., 2019), unsupervised MT (Ren et al., 2020) and automatic
evaluation or analysis of translation models (Bau et al., 2018; Stanovsky et
al., 2019; Neubig et al., 2019; Wang et al., 2020). However, even with neural
networks advancing the state of the art in almost every field of NLP, tools
developed based on the 30-year-old IBM word-based translation models (Brown et
al., 1993), such as GIZA++ (Och and Ney, 2003) or fast-align (Dyer et al.,
2013), remain popular choices for word alignment tasks.
Figure 1: Cosine similarities between subword representations in a parallel
sentence pair before and after fine-tuning. Red boxes indicate the gold
alignments.
One alternative to using statistical word-based translation models to learn
alignments would be to instead train state-of-the-art neural machine
translation (NMT) models on parallel corpora, and extract alignments
therefrom, as examined by Luong et al. (2015); Garg et al. (2019); Zenkel et
al. (2020). However, these methods have two disadvantages (also shared with
more traditional alignment methods): (1) they are directional, so the source
and target sides are treated differently, and (2) they cannot easily take
advantage of large-scale contextualized word embeddings derived from language
models (LMs) multilingually trained on monolingual corpora (Devlin et al.,
2019; Lample and Conneau, 2019; Conneau et al., 2020), which have proven
useful in other cross-lingual transfer settings (Libovickỳ et al., 2019; Hu et
al., 2020b). In the field of word alignment, Sabet et al. (2020) have recently
proposed methods to align words using multilingual contextualized embeddings
and achieve good performance even in the absence of explicit training on
parallel data, suggesting that these are an attractive alternative for neural
word alignment.
In this paper, we investigate if we can combine the best of the two lines of
approaches. Concretely, we leverage pre-trained LMs and fine-tune them on
parallel text with not only LM-based objectives, but also unsupervised
objectives over the parallel corpus designed to improve alignment quality.
Specifically, we propose a self-training objective, which encourages aligned
words to have even closer contextualized representations, and a parallel
sentence identification objective, which enables the model to bring the
representations of parallel sentences closer to each other. In addition, we
propose methods to effectively extract alignments from these fine-tuned models
using probability thresholding or optimal transport.
We perform experiments on five different language pairs and demonstrate that
our model can achieve state-of-the-art performance on all of them. In our
analysis, we find that these approaches also generate better-aligned
contextualized representations after fine-tuning (see Figure 1 for an example)
and that we can incorporate supervised signals within our paradigm.
Importantly, we
show that it is possible to train multilingual word aligners that can obtain
robust performance even in zero-shot settings, making them a valuable tool
that can be used out-of-the-box with good performance over a wide variety of
language pairs.
## 2 Methods
Formally, the task of word alignment can be defined as: given a sentence
$\mathbf{x}=\langle x_{1},\cdots,x_{n}\rangle$ in the source language and its
corresponding parallel sentence $\mathbf{y}=\langle y_{1},\cdots,y_{m}\rangle$
in the target language, a word aligner needs to find a set of pairs of source
and target words:
$A=\{\langle x_{i},y_{j}\rangle:x_{i}\in\mathbf{x},y_{j}\in\mathbf{y}\},$
where for each word pair $\langle x_{i},y_{j}\rangle$, $x_{i}$ and $y_{j}$ are
semantically similar to each other within the context of the sentence.
In the following paragraphs, we will first illustrate how we extract
alignments from contextualized word embeddings, then describe our objectives
designed to improve alignment quality.
### 2.1 Extracting Alignments from Embeddings
Contextualized word embedding models such as BERT (Devlin et al., 2019) and
RoBERTa (Liu et al., 2019) represent words using continuous vectors calculated
in context, and have achieved impressive performance on a diverse array of NLP
tasks. Multilingually trained word embedding models such as multilingual BERT
can generate contextualized embeddings across different languages. These
models can be used to extract contextualized word embeddings
$h_{\mathbf{x}}=\langle h_{x_{1}},\cdots,h_{x_{n}}\rangle$ and
$h_{\mathbf{y}}=\langle h_{y_{1}},\cdots,h_{y_{m}}\rangle$ for each pair of
parallel sentences $\mathbf{x}$ and $\mathbf{y}$. Specifically, this is done
by extracting the hidden states of the $i$-th layer of the model, where $i$ is
an empirically-chosen hyper-parameter. Given these contextualized word
embeddings, we propose two methods to calculate unidirectional alignment
scores based on probability simplexes and optimal transport. We then turn
these alignment scores into alignment matrices and reconcile alignments in the
forward and backward directions.
Figure 2: Extracting word alignments from multilingual BERT using probability
thresholding (softmax). Red boxes denote the gold alignments.
#### Probability Thresholding.
In this method, for each word in the source/target sentence, we calculate a
value on the probability simplex for each word in the aligned target/source
sentence, and then select all values that exceed a particular threshold as
``aligned'' words. Concretely, taking inspiration from attention mechanisms
(Bahdanau et al., 2015; Vaswani et al., 2017), we take the contextualized
embeddings $h_{\mathbf{x}}$ and $h_{\mathbf{y}}$ and compute the dot products
between them and get the similarity matrix:
$S=h_{\mathbf{x}}h_{\mathbf{y}}^{T}.$
Then, we apply a normalization function $\mathcal{N}$ to convert the
similarity matrix into values on the probability simplex
$S_{\mathbf{xy}}=\mathcal{N}(S)$, and treat $S_{\mathbf{xy}}$ as the source-
to-target alignment matrix. In this paper, we propose to use softmax and a
sparse variant $\alpha$-entmax (Peters et al., 2019) to do the normalization.
Compared with the softmax function, $\alpha$-entmax can produce sparse
alignments for any $\alpha>1$, assigning non-zero probability only to a short
list of plausible word pairs, where a higher $\alpha$ leads to a sparser
alignment.
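A minimal numpy sketch of this extraction step (our own illustration with random vectors standing in for the mBERT hidden states; the $\alpha$-entmax variant would replace the softmax call, e.g. via the entmax package):
```python
import numpy as np

def softmax(S, axis):
    S = S - S.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(S)
    return e / e.sum(axis=axis, keepdims=True)

def directional_scores(h_x, h_y):
    """h_x: (n, d) source embeddings, h_y: (m, d) target embeddings."""
    S = h_x @ h_y.T                 # similarity matrix
    S_xy = softmax(S, axis=1)       # source-to-target probabilities
    S_yx = softmax(S.T, axis=1)     # target-to-source probabilities
    return S_xy, S_yx

rng = np.random.default_rng(0)
S_xy, S_yx = directional_scores(rng.normal(size=(4, 8)),
                                rng.normal(size=(5, 8)))
print(S_xy > 0.001)   # candidate links in one direction; the two
                      # directions are reconciled further below
```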
#### Optimal Transport.
The goal of optimal transport (Monge, 1781; Cuturi, 2013) is to find a mapping
that moves probability mass from one distribution to another, which can be
used to find an optimal matching of similar words between two sequences
(Kusner et al., 2015). Formally, in a discrete optimal transport problem, we
are given two point sets $\{x_{i}\}_{i=1}^{n}$ and $\{y_{j}\}_{j=1}^{m}$
associated with probability distributions $p_{\mathbf{x}}$ and
$p_{\mathbf{y}}$, where $\sum_{i}p_{x_{i}}=1$ and $\sum_{j}p_{y_{j}}=1$,
together with a function $C(x_{i},y_{j})$ that defines the cost of moving
point $x_{i}$ to $y_{j}$. The aim is to move the probability mass from
$\{x_{i}\}_{i=1}^{n}$ to $\{y_{j}\}_{j=1}^{m}$ such that the total cost of
moving the mass between points is minimized. In other words, it finds the
transition matrix $S_{\mathbf{xy}}$ that minimizes:
$\sum_{i,j}C({x_{i}},{y_{j}}){S_{\mathbf{xy}}}_{ij},$ (1)
where $S_{\mathbf{xy}}\mathbf{1}_{m}=p_{\mathbf{x}}$ and
$S_{\mathbf{xy}}^{T}\mathbf{1}_{n}=p_{\mathbf{y}}$. The resulting transition
matrix is self-normalized and sparse (Swanson et al., 2020), making it an
appealing alternative for extracting alignments from word embeddings.
In this paper, we propose to adapt optimal transport techniques to the task of
word alignment. Concretely, we treat the parallel sentences $\mathbf{x}$ and
$\mathbf{y}$ as two point sets and assume each word is uniformly distributed.
The cost function is obtained by computing the pairwise distance (e.g. cosine
distance) between $h_{\mathbf{x}}$ and $h_{\mathbf{y}}$, and all the distance
values are scaled to [0, 1] with min-max normalization. The transition matrix
${S_{\mathbf{xy}}}$ minimizing Equation 1 can be calculated using the Sinkhorn-
Knopp matrix scaling algorithm (Sinkhorn and Knopp, 1967). If the value of
${S_{\mathbf{xy}}}_{ij}$ is high, ${x_{i}}$ and ${y_{j}}$ are likely to have
similar semantics, and values that exceed a particular threshold are
considered ``aligned''.
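A sketch of this computation in numpy (our own illustration; we use the common entropy-regularized form of Sinkhorn scaling, which may differ in detail from the solver used in the paper):
```python
import numpy as np

def sinkhorn_alignment(h_x, h_y, n_iter=50, eps=0.1):
    """OT alignment scores for h_x: (n, d) and h_y: (m, d)."""
    n, m = len(h_x), len(h_y)
    hx = h_x / np.linalg.norm(h_x, axis=1, keepdims=True)
    hy = h_y / np.linalg.norm(h_y, axis=1, keepdims=True)
    C = 1.0 - hx @ hy.T                             # cosine distance cost
    C = (C - C.min()) / (C.max() - C.min() + 1e-9)  # min-max scaling
    K = np.exp(-C / eps)                            # Gibbs kernel
    p_x, p_y = np.full(n, 1 / n), np.full(m, 1 / m) # uniform word masses
    u = np.ones(n)
    for _ in range(n_iter):                         # alternate the scalings
        v = p_y / (K.T @ u)
        u = p_x / (K @ v)
    return u[:, None] * K * v[None, :]              # transition matrix

rng = np.random.default_rng(0)
S = sinkhorn_alignment(rng.normal(size=(4, 8)), rng.normal(size=(5, 8)))
print(S.sum(axis=1))   # marginals match p_x: each row sums to 1/n
```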
#### Extracting Bidirectional Alignments.
After we obtain both the source-to-target and target-to-source alignment
probability matrices $S_{\mathbf{xy}}$ and $S_{\mathbf{yx}}$ using the
previous methods, we can deduce the final alignment matrix by taking the
intersection of the two matrices:
$A=(S_{\mathbf{xy}}>c)*(S_{\mathbf{yx}}^{T}>c),$
where $c$ is a threshold and $A_{ij}=1$ means $x_{i}$ and $y_{j}$ are aligned.
Note that growing heuristics such as grow-diag-final (Och and Ney, 2000; Koehn
et al., 2005) that are popular in statistical word aligners can also be
applied in our alignment extraction algorithms, and we will demonstrate the
effect of these heuristics in the experiment section.
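On a toy pair of directional matrices, the intersection looks as follows (our own illustration); note how it prunes a link that only one direction supports:
```python
import numpy as np

S_xy = np.array([[0.9, 0.1],       # 3 source words x 2 target words
                 [0.2, 0.8],
                 [0.5, 0.5]])
S_yx = np.array([[0.7, 0.2, 0.1],  # target-to-source direction, (2, 3)
                 [0.1, 0.6, 0.3]])
c = 0.4
A = (S_xy > c) * (S_yx.T > c)      # keep links both directions agree on
print(np.argwhere(A))              # [[0 0] [1 1]]; the x_3 links vanish
```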
#### Handling Subwords.
Subword segmentation techniques (Sennrich et al., 2016; Kudo and Richardson,
2018) are widely used in training LMs, thus the above alignment extraction
methods can only produce alignments on the subword level. To convert them to
word alignments, we follow previous work (Sabet et al., 2020; Zenkel et al.,
2020) and consider two words to be aligned if any of their subwords are
aligned. Figure 2 shows a concrete example of how we extract word-level
alignments from a pre-trained embedding model.
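A sketch of this lifting step (our own illustration; in practice the subword-to-word index mapping would come from the tokenizer):
```python
def word_alignments(sub_align, x_word_of, y_word_of):
    """Two words are aligned if any pair of their subwords is aligned.

    sub_align: set of (i, j) subword index pairs,
    x_word_of/y_word_of: subword index -> word index on each side."""
    return {(x_word_of[i], y_word_of[j]) for i, j in sub_align}

# toy example: source subwords 0 and 1 belong to the same word
x_word_of = [0, 0, 1]
y_word_of = [0, 1]
print(word_alignments({(0, 0), (1, 0), (2, 1)}, x_word_of, y_word_of))
# -> {(0, 0), (1, 1)}
```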
### 2.2 Fine-tuning Contextualized Embeddings for Word Alignment
While language models can be used to produce reasonable word alignments even
without any fine-tuning (Sabet et al., 2020), we propose objectives that
further improve their alignment ability if we have access to parallel data.
#### Masked Language Modeling (MLM).
Gururangan et al. (2020) suggest that we can gain improvements in downstream
tasks by further pre-training LMs on the task datasets. Therefore, we propose
to fine-tune the LMs with a masked language modeling objective on both the
source and target side of parallel corpora. Specifically, given a pair of
parallel sentences $\mathbf{x}$ and $\mathbf{y}$, we randomly choose 15% of
the token positions in both $\mathbf{x}$ and $\mathbf{y}$, and for each chosen
token, we (1) replace it with the [MASK] token 80% of the time, (2) replace it
with a random token 10% of the time, and (3) leave it unchanged 10% of the
time. The model is trained to reconstruct the original tokens given the masked
sentences
$\mathbf{x}^{mask}$ and $\mathbf{y}^{mask}$:
$L_{MLM}=\log p(\mathbf{x}|\mathbf{x}^{mask})+\log
p(\mathbf{y}|\mathbf{y}^{mask}).$ (2)
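The masking procedure itself can be sketched as follows (our own illustration of the 80/10/10 rule above, not the authors' implementation):
```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat"]   # stand-in vocabulary

def mask_tokens(tokens, p=0.15, seed=None):
    """Choose p of the positions; replace with [MASK] 80% of the time,
    a random token 10% of the time, and keep the token 10% of the time."""
    rng = random.Random(seed)
    masked, targets = list(tokens), [None] * len(tokens)
    for i in range(len(tokens)):
        if rng.random() < p:
            targets[i] = tokens[i]            # token to be reconstructed
            r = rng.random()
            if r < 0.8:
                masked[i] = MASK
            elif r < 0.9:
                masked[i] = rng.choice(VOCAB) # random replacement
            # else: leave the token unchanged
    return masked, targets

print(mask_tokens("the cat sat on the mat".split(), seed=3))
```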
#### Translation Language Modeling (TLM).
The MLM objective only requires monolingual data and the model cannot make
direct connections between parallel sentences. To solve the issue, similarly
to Lample and Conneau (2019), we concatenate parallel sentences $\mathbf{x}$
and $\mathbf{y}$ and perform MLM on the concatenated data. Compared with MLM,
the translation language modeling (TLM) objective enables the model to align
the source and target representations. Different from Lample and Conneau
(2019), we feed the source and target sentences twice in different orders
instead of resetting the positions of the target sentences:
$\displaystyle L_{TLM}$ $\displaystyle=\log
p([\mathbf{x};\mathbf{y}]|[\mathbf{x}^{mask};\mathbf{y}^{mask}])$ (3)
$\displaystyle+\log
p([\mathbf{y};\mathbf{x}]|[\mathbf{y}^{mask};\mathbf{x}^{mask}]).$
#### Self-training Objective (SO).
We also propose a self-training objective for fine-tuning LMs which is similar
to the EM algorithm used in the IBM models and the agreement constraints in
Tamura et al. (2014). Specifically, at each training step, we first use our
alignment extraction methods (described in Section 2.1) to extract the
alignment $A$ for $\mathbf{x}$ and $\mathbf{y}$, then maximize the following
objective:
$L_{SO}=\sum_{i,j}A_{ij}\frac{1}{2}(\frac{S_{{\mathbf{xy}}_{ij}}}{n}+\frac{S_{{\mathbf{yx}}_{ij}}}{m}).$
(4)
Intuitively, this objective encourages words aligned in the first pass of
alignment to have even closer contextualized representations. In addition,
because of the intersection operation during extraction, the self-training
objective can ideally reduce spurious alignments and encourage the source-to-
target and target-to-source alignments to be symmetrical to each other by
exploiting their agreement (Liang et al., 2006).
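Evaluating Equation 4 on toy matrices (our own sketch; in training this value is maximized by backpropagating through $S_{\mathbf{xy}}$ and $S_{\mathbf{yx}}$, and we assume $S_{\mathbf{yx}}$ is indexed so that $(i,j)$ refers to the source-target pair, i.e., via its transpose):
```python
import numpy as np

def self_training_objective(S_xy, S_yx, A):
    """Equation (4): reward probability mass on currently aligned pairs.

    S_xy: (n, m) source-to-target, S_yx: (m, n) target-to-source,
    A: (n, m) 0/1 alignment matrix from the current extraction pass."""
    n, m = S_xy.shape
    return float((A * 0.5 * (S_xy / n + S_yx.T / m)).sum())

S_xy = np.array([[0.9, 0.1], [0.2, 0.8]])
S_yx = np.array([[0.8, 0.2], [0.3, 0.7]])
A = (S_xy > 0.5) * (S_yx.T > 0.5)
print(self_training_objective(S_xy, S_yx, A))   # larger is better
```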
#### Parallel Sentence Identification (PSI).
We also propose a contrastive parallel sentence identification loss that
attempts to make parallel sentences more similar than mismatched sentence
pairs (Liu and Sun, 2015; Legrand et al., 2016). This encourages the overall
alignments of embeddings on both word and sentence level to be closer
together. Concretely, we randomly select a pair of parallel or non-parallel
sentences $\langle\mathbf{x}^{\prime},\mathbf{y}^{\prime}\rangle$ from the
training data with equal probability. Then, the model is required to predict
whether the two sampled sentences are parallel or not. The representation of
the first [CLS] token is fed into a multi-layer perceptron to output a
prediction score $s(\mathbf{x}^{\prime},\mathbf{y}^{\prime})$. Denoting the
binary label as $l$, the objective function can be written as:
$L_{PSI}=l\log
s(\mathbf{x}^{\prime},\mathbf{y}^{\prime})+(1-l)\log(1-s(\mathbf{x}^{\prime},\mathbf{y}^{\prime})).$
(5)
#### Consistency Optimization (CO).
While the self-training objective can potentially improve the symmetry
between forward and backward alignments, following previous work on machine
translation and multilingual representation learning (Cohn et al., 2016; Zhang
et al., 2019; Hu et al., 2020a), we use an objective to explicitly encourage
the consistency between the two alignment matrices. Specifically, we maximize
the trace of $S_{\mathbf{xy}}^{\mathrm{T}}S_{\mathbf{yx}}$:
$L_{CO}=-\frac{\text{trace}(S_{\mathbf{xy}}^{\mathrm{T}}S_{\mathbf{yx}})}{\min(m,n)}.$
(6)
#### Our Final Objective.
In summary, our training objective is a combination of the proposed objectives
and we train the model with them jointly at each training step:
$L=L_{MLM}+L_{TLM}+L_{SO}+L_{PSI}+\beta L_{CO},$
where $\beta$ is set to 0 or 1 in our experiments.
| De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---
#Train Sents. | 1.9M | 1.1M | 450K | 444K | 40K
#Test Sents. | 508 | 447 | 248 | 582 | 450
Table 1: Statistics of datasets.
Model | Setting | De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---|---
Baseline
SimAlign | w/o fine-tuning | 18.8 | 7.6 | 27.2 | 46.6 | 21.6
fast_align | bilingual | 27.0 | 10.5 | 32.1 | 51.1 | 38.1
eflomal | bilingual | 22.6 | 8.2 | 25.1 | 47.5 | 28.7
GIZA++ | bilingual | 20.6 | 5.9 | 26.4 | 48.0 | 35.1
Zenkel et al. (2020) | bilingual | 16.0 | 5.0 | 23.4 | - | -
Chen et al. (2020) | bilingual | 15.4 | 4.7 | 21.2 | - | -
Ours
$\alpha$-entmax | w/o fine-tuning | 18.1 | 5.6 | 29.0 | 46.3 | 18.4
 | bilingual | 16.1 | 4.1 | 23.4 | 38.6 | 15.4
 | multilingual ($\beta$ = 0) | 15.4 | 4.1 | 22.9 | 37.4 | 13.9
 | multilingual ($\beta$ = 1) | 15.0 | 4.5 | 20.8 | 38.7 | 14.5
 | zero-shot | 16.0 | 4.3 | 28.4 | 44.0 | 13.9
softmax | w/o fine-tuning | 17.4 | 5.6 | 27.9 | 45.6 | 18.1
 | bilingual | 15.6 | 4.4 | 23.0 | 38.4 | 15.3
 | multilingual ($\beta$ = 0) | 15.3 | 4.4 | 22.6 | 37.9 | 13.6
 | multilingual ($\beta$ = 1) | 15.1 | 4.5 | 20.7 | 38.4 | 14.5
 | zero-shot | 15.7 | 4.6 | 27.2 | 43.7 | 14.0
Table 2: Performance (AER) of our models in bilingual, multilingual and zero-
shot settings. The best scores for each alignment extraction method are in
bold and the overall best scores are in italicized bold.
## 3 Experiments
In this section, we first present our main results, then conduct several
ablation studies and analyses of our models.
### 3.1 Setup
#### Datasets.
We perform experiments on five different language pairs, namely German-English
(De-En), French-English (Fr-En), Romanian-English (Ro-En), Japanese-English
(Ja-En) and Chinese-English (Zh-En). For the De-En, Fr-En, Ro-En datasets, we
follow the experimental setting of previous work (Zenkel et al., 2019; Garg et
al., 2019; Zenkel et al., 2020). The training and test data for Ro-En and Fr-
En are provided by Mihalcea and Pedersen (2003). The Ro-En training data are
also augmented by the Europarl v8 corpus (Koehn, 2005). For the De-En data,
the Europarl v7 corpus is used as training data and the gold alignments are
provided by Vilar et al. (2006). The Ja-En dataset is obtained from the Kyoto
Free Translation Task (KFTT) word alignment data (Neubig, 2011), and the
Japanese sentences are tokenized with the KyTea tokenizer (Neubig et al.,
2011). The Zh-En dataset is obtained from the TsinghuaAligner website
(http://nlp.csai.tsinghua.edu.cn/~ly/systems/TsinghuaAligner/TsinghuaAligner.html).
We treat their evaluation set as the training data and use the test set in Liu
and Sun (2015). The De-En, Fr-En and Zh-En datasets contain the distinction
between sure and possible alignment links. The statistics of these datasets
are shown in Table 1. We use the Ja-En development set to tune the hyper-
parameters.
#### Baselines.
We compare our models with:
* fast_align (Dyer et al., 2013): a popular statistical word aligner which is a simple, fast reparameterization of IBM Model 2.
* eflomal (Östling and Tiedemann, 2016): an efficient statistical word aligner using a Bayesian model with Markov Chain Monte Carlo (MCMC) inference.
* GIZA++ (Och and Ney, 2003; Gao and Vogel, 2008): an implementation of the IBM models. Following previous work (Zenkel et al., 2020), we use five iterations each for Model 1, the HMM model, Model 3 and Model 4.
* SimAlign (Sabet et al., 2020): a BERT-based word aligner that is not fine-tuned on any parallel data. The authors propose three alignment extraction methods and we implement their IterMax model with default parameters.
* Zenkel et al. (2020) and Chen et al. (2020): two state-of-the-art neural word aligners based on MT models.
#### Implementation Details.
Our main results are obtained by using the probability thresholding method on
the contextualized embeddings in the 8-th layer of multilingual BERT-Base
(mBERT; Devlin et al. (2019)) and we will discuss this choice in our ablation
studies. We use the AdamW optimizer (Loshchilov and Hutter, 2019) with a
learning rate of 2e-5 and the batch size is set to 8. Following Peters et al.
(2019), we set $\alpha$ to 1.5 for $\alpha$-entmax. The threshold $c$ is set
to 0 for $\alpha$-entmax and 0.001 for softmax and optimal transport. Unless
otherwise stated, $\beta$ is set to 0. We mainly evaluate the model
performance using Alignment Error Rate (AER).
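For reference, AER compares the predicted links $A$ against sure links $S$ and possible links $P\supseteq S$ as $\operatorname{AER}=1-(|A\cap S|+|A\cap P|)/(|A|+|S|)$ (Och and Ney, 2003); a minimal sketch:
```python
def aer(pred, sure, possible):
    """Alignment Error Rate; `possible` is extended to include `sure`."""
    pred, sure = set(pred), set(sure)
    possible = set(possible) | sure
    return 1 - (len(pred & sure) + len(pred & possible)) / (len(pred) + len(sure))

pred = {(0, 0), (1, 1), (2, 3)}
sure = {(0, 0), (1, 1)}
possible = {(2, 2)}
print(f"{aer(pred, sure, possible):.3f}")   # 0.200
```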
### 3.2 Main Results
We first train our model on each individual language pair, then investigate if
it is possible to train multilingual word aligners.
#### Bilingual Model Performance.
From Table 2, we can see that our softmax model can achieve consistent
improvements over the baseline models, demonstrating the effectiveness of our
proposed method. Surprisingly, directly extracting alignments from mBERT (the
w/o fine-tuning setting) can already achieve better performance than the
popular statistical word aligner GIZA++ on 4 out of 5 settings, especially in
the Zh-En setting where the size of parallel data is small.
#### Multilingual Model Performance.
We also randomly sample 200k parallel sentence pairs from each language pair
(except for Zh-En where we take all of its 40k parallel sentences) and
concatenate them together to train multilingual word aligners. As shown in
Table 2, the multilingually trained word aligners can achieve further
improvements and they consistently outperform our bilingual word aligners and
all the baselines even though the size of training data for each individual
language pair is smaller. The results demonstrate that we can indeed obtain a
neural word aligner that has state-of-the-art and robust performance across
different language pairs. We also test the performance of our consistency
optimization objective in this setting. We can see that incorporating this
objective ($\beta$=1) significantly improves the model performance on Ro-En,
while it also deteriorates the Ja-En and Zh-En performance by a non-negligible
margin. We find that this is because the CO objective can significantly
improve alignment recall while sacrificing precision, and our Ro-En dataset
tends to favor models with high recall, whereas the Ja-En and Zh-En datasets
have the opposite tendency.
#### Zero-Shot Performance.
In this paragraph, we want to find out how our model performs on language
pairs that it has never seen during training. To this end, for each language
pair, we train our model with data of all the other language pairs and test
its performance on the target language pair. Results in Table 2 demonstrate
that training our models with parallel data on _other_ language pairs can
still improve the model performance on the target language pair. This is a
very important result, as it indicates that our model can be used as an off-
the-shelf tool for multilingual word alignment for any language supported by
the underlying embeddings, _regardless of whether parallel data has been used
for training or not_.
### 3.3 Ablation Studies
| Component | De-En | Fr-En | Ro-En | Ja-En | Zh-En | Speed
---|---|---|---|---|---|---|---
Prob. | softmax | 17.4 | 5.6 | 27.9 | 45.6 | 18.1 | 33.22
 | $\alpha$-entmax | 18.1 | 5.6 | 29.0 | 46.3 | 18.4 | 32.36
OT | Cosine | 24.4 | 15.7 | 33.7 | 54.0 | 31.1 | 3.36
 | Dot Product | 25.4 | 17.1 | 34.1 | 54.2 | 30.9 | 3.82
 | Euclidean | 20.7 | 15.1 | 33.3 | 53.2 | 29.8 | 3.05
Table 3: Comparison of probability thresholding (Prob.) and optimal transport
(OT) for alignment extraction. We try both softmax and $\alpha$-entmax for
probability thresholding and different cost functions for optimal transport.
We measure both the extraction speed (#sentences/second) and the alignment
quality (AER) on five language pairs, namely German-English (De-En), French-
English (Fr-En), Romanian-English (Ro-En), Japanese-English (Ja-En), and
Chinese-English (Zh-En). The best scores are in bold.
In this part, we compare the performance of different alignment extraction
methods, pre-trained embedding models, and training objectives.
#### Alignment Extraction Methods.
We first compare the performance of our two proposed alignment extraction
methods, namely the probability thresholding and optimal transport techniques.
We use the representations from the 8th layer of mBERT, following Sabet et al.
(2020).
As shown in Table 3, probability thresholding methods consistently outperform
optimal transport by a large margin on all five language pairs. In addition,
probability thresholding methods are much faster than optimal transport.
softmax is marginally better than $\alpha$-entmax, but one advantage of
$\alpha$-entmax is that we do not need to manually set the threshold.
Therefore, we use both softmax and $\alpha$-entmax to obtain the main
results.
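For concreteness, the following is a minimal sketch of probability thresholding with bidirectional intersection; the dot-product similarity and all variable names are illustrative choices, not necessarily our exact scoring:

```python
import numpy as np

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def extract_alignments(src_emb, tgt_emb, c=1e-3):
    """src_emb: (m, d) and tgt_emb: (n, d) contextualized embeddings.

    Normalizes a similarity matrix in the source-to-target direction
    (rows) and the target-to-source direction (columns), then keeps a
    link only if both directed probabilities exceed the threshold c,
    i.e., the intersection of forward and backward alignments."""
    sim = src_emb @ tgt_emb.T            # (m, n) similarity scores
    fwd = softmax(sim, axis=1)           # source -> target
    bwd = softmax(sim, axis=0)           # target -> source
    keep = (fwd > c) & (bwd > c)
    return {(int(i), int(j)) for i, j in zip(*np.nonzero(keep))}
```

With $\alpha$-entmax in place of softmax, low-probability entries become exactly zero, which is why the threshold can simply be set to $c=0$ in that case.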
#### Pre-trained Embedding Models.
In this paragraph, we investigate the performance of three different types of
pre-trained embedding models: mBERT, XLM (Lample and Conneau, 2019), and
XLM-R (Conneau et al., 2020). For XLM, we try its three released models: 1)
XLM-15 (MLM), pre-trained with MLM and supporting 15 languages; 2) XLM-15
(MLM+TLM), pre-trained with both the MLM and TLM objectives and supporting 15
languages; 3) XLM-100 (MLM), pre-trained with MLM and supporting 100
languages. We use softmax to extract the alignments.
Because XLM-15 does not support Japanese or Romanian, we only report
performance on the three other language pairs in Table 4. We take
representations from different layers and report the performance of the best
three layers. We can see that while XLM-15 (MLM+TLM) achieves the best
performance on De-En and Fr-En, its best layer is not consistent across
language pairs. On the other hand, the optimal configuration for mBERT is
consistent across language pairs. In addition, considering that mBERT supports
many more languages than XLM-15 (MLM+TLM), we use mBERT in the following
sections.
Model | Layer | De-En | Fr-En | Zh-En
---|---|---|---|---
mBERT | 7 | 18.7 | 6.1 | 19.1
 | 8 | 17.4 | 5.6 | 18.1
 | 9 | 18.8 | 6.1 | 20.1
XLM-15 (MLM) | 4 | 21.1 | 6.8 | 25.3
 | 5 | 20.4 | 6.1 | 26.1
 | 6 | 23.2 | 7.7 | 33.3
XLM-15 (MLM+TLM) | 4 | 16.4 | 4.9 | 18.6
 | 5 | 16.2 | 4.7 | 23.7
 | 6 | 18.8 | 5.7 | 26.2
XLM-100 (MLM) | 7 | 20.5 | 8.5 | 30.8
 | 8 | 19.8 | 8.2 | 28.6
 | 9 | 19.9 | 8.8 | 29.3
XLM-R | 5 | 24.4 | 10.3 | 33.2
 | 6 | 23.1 | 9.2 | 30.7
 | 7 | 24.7 | 11.5 | 28.1
Table 4: Comparisons of different LMs in terms of AER. We extract alignments
using softmax and take representations from different layers of LMs. The best
scores for each individual model are in bold and the overall best scores are
in italicized bold.
Model | Objective | De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---|---
softmax | All | 15.3 | 4.4 | 22.6 | 37.9 | 13.6
 | All w/o MLM | 15.3 | 4.4 | 22.8 | 38.6 | 13.7
 | All w/o TLM | 15.5 | 4.7 | 22.9 | 39.7 | 14.0
 | All w/o SO | 16.9 | 4.8 | 23.0 | 39.1 | 15.4
 | All w/o PSI | 15.4 | 4.4 | 22.7 | 37.9 | 13.8
Table 5: Ablation studies on our training objectives in multilingual
settings.
#### Training Objectives.
We also conduct ablation studies on each of our training objectives. We can
see from Table 5 that the self-training objective (SO) improves the model
performance the most. The translation language modeling (TLM) and parallel
sentence identification (PSI) objectives also marginally benefit the model.
The masked language modeling (MLM) objective, on the other hand, does not
always improve the model and can sometimes even degrade its performance,
possibly because the TLM objective already provides the model with sufficient
supervision signals.
### 3.4 Analysis
We conduct several analyses to better understand our models. Unless otherwise
stated, we perform experiments on the softmax model using mBERT.
#### Incorporating Supervised Signals.
We investigate whether our models can benefit from supervised signals. If we
have access to word-level gold labels for word alignment, we can simply use
them in our self-training objective. Specifically, we can set $A_{ij}$ in
Equation 4 to 1 if and only if words $i$ and $j$ are aligned. In our
experimental settings, we have gold labels for all the Zh-En sentences and for
653 sentences from the Ja-En development set. Table 6 demonstrates that
training our models with as few as 653 labeled sentences can dramatically
improve the alignment quality, and that combining labeled and unlabeled
parallel data can further improve the model performance. This analysis
demonstrates the generality of our models, as they can also be applied in
semi-supervised settings.
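Building the supervised target matrix is straightforward; a sketch under the assumption that Equation 4 consumes a 0/1 matrix $A$ over token pairs (function and variable names are illustrative):

```python
import numpy as np

def build_supervised_targets(src_len, tgt_len, gold_links):
    """gold_links: iterable of (i, j) gold alignment pairs.

    Returns the matrix A with A[i, j] = 1 iff words i and j are
    aligned, which replaces the model-predicted targets used in the
    unsupervised self-training objective."""
    A = np.zeros((src_len, tgt_len), dtype=np.float32)
    for i, j in gold_links:
        A[i, j] = 1.0
    return A
```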
Lang. | Unsup. | Sup. | Semi-Sup.
---|---|---|---
Zh-En | 15.3 | 12.5 | -
Ja-En | 38.4 | 31.6 | 30.0
Table 6: Incorporating supervised word alignment signals into our model can
further improve the model performance in terms of AER.
#### Growing Heuristics.
As stated in Section 2.1, because our alignment extraction methods essentially
take the intersection of forward and backward alignments, growing heuristics
can also be applied in our setting (a sketch of the grow-diag variant is given
below). The main motivation of growing heuristics is to improve the recall of
the resulting alignments. While effective in statistical word aligners, as
shown in Table 7, the growing heuristics only improve our alignment extraction
method for the vanilla mBERT model in the Ro-En setting, while degrading
performance on all the other language pairs. After fine-tuning, the growing
heuristics only hurt the model performance, possibly because the self-training
objective encourages the forward and backward alignments to be symmetrical.
Based on these results, we do not adopt the growing heuristics in our models.
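As a reference for the heuristic, here is a minimal sketch of grow-diag(-final) in the style of phrase-based SMT (cf. Koehn et al., 2005); this is the textbook procedure, not necessarily our exact implementation:

```python
NEIGHBORS = [(-1, 0), (0, -1), (1, 0), (0, 1),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]

def grow_diag(forward, backward, m, n, final=False):
    """forward, backward: sets of (i, j) links from the two directions;
    m, n: source and target sentence lengths.

    Starts from the intersection and repeatedly adds union links that
    neighbor (incl. diagonally) an existing link, as long as one of the
    two endpoints is still unaligned. If final=True, any remaining union
    link with an unaligned endpoint is added at the end ("gd-final")."""
    alignment = forward & backward
    union = forward | backward

    def src_free(i):
        return all(p != i for p, _ in alignment)

    def tgt_free(j):
        return all(q != j for _, q in alignment)

    added = True
    while added:
        added = False
        for i, j in sorted(alignment):
            for di, dj in NEIGHBORS:
                ni, nj = i + di, j + dj
                if (0 <= ni < m and 0 <= nj < n
                        and (ni, nj) in union
                        and (ni, nj) not in alignment
                        and (src_free(ni) or tgt_free(nj))):
                    alignment.add((ni, nj))
                    added = True
    if final:
        for i, j in sorted(union - alignment):
            if src_free(i) or tgt_free(j):
                alignment.add((i, j))
    return alignment
```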
Model | Ext. | De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---|---
mBERT | X-En | 24.7 | 14.4 | 31.9 | 54.7 | 27.4
 | En-X | 22.6 | 12.2 | 32.0 | 52.7 | 29.9
 | softmax | 17.4 | 5.6 | 27.9 | 45.6 | 18.1
 | gd | 18.7 | 9.2 | 27.0 | 48.5 | 23.4
 | gd-final | 18.6 | 9.3 | 26.9 | 48.7 | 23.2
Ours-Multi. | X-En | 20.2 | 12.9 | 25.4 | 42.1 | 19.3
 | En-X | 18.1 | 9.3 | 25.9 | 41.7 | 23.5
 | softmax | 15.3 | 4.4 | 22.6 | 37.9 | 13.6
 | gd | 16.3 | 8.1 | 23.1 | 38.2 | 18.3
 | gd-final | 16.5 | 8.3 | 23.2 | 38.7 | 18.5
Table 7: The grow-diag-final heuristic only improves our alignment extraction
method in the Romanian-English setting without fine-tuning. "gd" refers to
grow-diag.
Model | Prec. % | Rec. % | F1 %
---|---|---|---
BERT-En (zero-shot) | 53.1 | 54.3 | 52.7
fast_align | 51.5 | 59.8 | 55.2
GIZA++ | 56.5 | 64.1 | 60.0
SimAlign | 59.9 | 67.6 | 63.5
Ours | 60.6 | 68.5 | 64.3
Table 8: Our model is also effective in an annotation projection setting
where we train a BERT-based NER model on English data and test it on Spanish
data. The best scores are in bold.
Model | En | Fr | Es | De | El | Bg | Ru | Tr | Ar | Vi | Th | Zh | Hi | Sw | Ur | Ave.
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
mBERT | 81.3 | 73.4 | 74.3 | 70.5 | 66.9 | 68.2 | 68.5 | 59.5 | 64.3 | 70.6 | 50.7 | 68.8 | 59.3 | 49.4 | 57.5 | 65.5
Ours | 81.5 | 74.1* | 74.9* | 71.2* | 67.1 | 68.7* | 68.6 | 61.0* | 66.2* | 70.5 | 53.8* | 69.1 | 59.8* | 50.6* | 58.6* | 66.4*
Table 9: Results of mBERT and our fine-tuned model on XNLI (Conneau et al.,
2018). Our objectives improve the model's cross-lingual transfer ability.
"*" denotes significant differences under paired bootstrap resampling
(p $<$ 0.05).
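The table does not pin down the resampling details beyond "paired bootstrapping"; a generic sketch over per-example correctness scores (the sample count and seed are arbitrary choices):

```python
import numpy as np

def paired_bootstrap_p(scores_base, scores_ours, n_samples=1000, seed=0):
    """scores_*: arrays of per-example 0/1 correctness for two systems
    on the same test set. Returns an estimate of the p-value for
    "ours beats the baseline": the fraction of resamples in which ours
    does NOT achieve a higher mean accuracy."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_base), np.asarray(scores_ours)
    idx = rng.integers(0, len(a), size=(n_samples, len(a)))
    wins = (b[idx].mean(axis=1) > a[idx].mean(axis=1)).mean()
    return 1.0 - wins
```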
Figure 3: An example of extracting alignments from our fine-tuned model using
softmax. Red boxes indicate the gold alignments. The fine-tuned model can
generate more accurate alignments than vanilla mBERT (Figure 2).
#### Annotation Projection.
Word alignment has been a useful tool in cross-lingual annotation projection
(Yarowsky et al., 2001; Nicolai and Yarowsky, 2019), so it is interesting to
see whether our model is beneficial in these settings. To this end, we
evaluate our model and the baselines on cross-lingual named entity recognition
(NER). We train a BERT-based NER model on the CoNLL 2003 English data (Tjong
Kim Sang and De Meulder, 2003) and test it on the CoNLL 2002 Spanish data
(Tjong Kim Sang, 2002). We use Google Translate to translate the Spanish test
set into English, predict labels using the NER model, and then project the
labels from English to Spanish using word aligners. From Table 8, we can see
that our model also outperforms the baselines in this setting, demonstrating
its usefulness in cross-lingual annotation projection.
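The projection step itself reduces to copying tags along alignment links; a deliberately simplified sketch (a real pipeline must also repair BIO spans and resolve many-to-one links):

```python
def project_labels(alignments, src_labels, tgt_len, default="O"):
    """alignments: set of (src_idx, tgt_idx) links between the English
    translation and the Spanish sentence; src_labels: NER tags predicted
    on the English side. Each aligned Spanish token inherits its English
    token's tag; unaligned tokens default to O."""
    tgt_labels = [default] * tgt_len
    for i, j in sorted(alignments):
        tgt_labels[j] = src_labels[i]
    return tgt_labels
```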
#### Sentence-Level Representation Transfer.
We also test whether the aligned representations are beneficial for
sentence-level cross-lingual transfer. To do so, we perform experiments on
XNLI (Conneau et al., 2018), which evaluates cross-lingual sentence
representations in 15 languages on the task of natural language inference
(NLI). We train our models with the provided 10k parallel sentence pairs for
the 15 languages, fine-tune the model on the English NLI data, and then test
its performance on the other languages. As shown in Table 9, our model
outperforms the baseline, indicating that the aligned word representations can
also be helpful for sentence-level cross-lingual transfer.
#### Alignment Examples.
We also conduct qualitative analyses, as shown in Figures 1, 2, and 3. After
fine-tuning, the learned contextualized representations are better aligned:
the cosine distances between semantically similar words become smaller, and
the extracted alignments are more accurate. More examples are shown in the
Appendix.
## 4 Related Work
Based on the IBM translation models (Brown et al., 1993), many statistical
word aligners have been proposed (Vogel et al., 1996; Östling and Tiedemann,
2016), including the current most popular tools GIZA++ (Och and Ney, 2000,
2003; Gao and Vogel, 2008) and fast_align (Dyer et al., 2013).
Recently, there has been a resurgence of interest in neural word alignment
(Tamura et al., 2014; Alkhouli et al., 2018). Based on NMT models trained on
parallel corpora, researchers have proposed several methods to extract
alignments from them (Luong et al., 2015; Zenkel et al., 2019; Garg et al.,
2019; Li et al., 2019) and have successfully built an end-to-end neural model
that outperforms statistical tools (Zenkel et al., 2020). However, there is an
inherent discrepancy between translation and word alignment: translation
models are directional, treating the source and target sides differently,
while word alignment is a non-directional task. Therefore, certain adaptations
are required for translation models to perform word alignment.
Another disadvantage of MT-based word aligners is that they cannot easily
utilize contextualized embeddings. Using learned representations to improve
word alignment has been investigated before (Sabet et al., 2016; Pourdamghani
et al., 2018). Recently, pre-trained LMs (Peters et al., 2018; Devlin et al.,
2019; Brown et al., 2020) have proven useful for cross-lingual transfer
(Libovickỳ et al., 2019; Hu et al., 2020b). For word alignment, Sabet et al.
(2020) propose effective methods to extract alignments from multilingual LMs
without explicit training on parallel data. In this work, we propose better
alignment extraction methods and combine the best of both worlds by fine-
tuning contextualized embeddings on parallel data.
There is also work on supervised neural word alignment (Stengel-Eskin et al.,
2019; Nagata et al., 2020). However, supervised data is not always accessible,
making these methods inapplicable in many scenarios. In this paper, we
demonstrate that our model can incorporate supervised signals when available
and perform semi-supervised learning, which is a more realistic and general
setting.
Some work on bilingual lexicon induction also shares similar general ideas
with ours. For example, Zhang et al. (2017) minimize the earth mover’s
distance to match the embedding distributions of different languages.
Similarly, Grave et al. (2019) present an algorithm that aligns point clouds
with Procrustes (Schönemann, 1966) in Wasserstein distance for unsupervised
embedding alignment.
## 5 Discussion and Conclusion
We present a neural word aligner that achieves state-of-the-art performance on
five diverse language pairs and robust performance in zero-shot settings. We
propose to fine-tune multilingual embeddings with objectives suitable for word
alignment and develop two alignment extraction methods. We also demonstrate
the model's applications in semi-supervised settings. We hope our word aligner
can serve as an out-of-the-box tool with good performance across various
language pairs. Future directions include designing better training objectives
and experimenting on more language pairs.
Also, note that, following previous work, we mainly evaluate our word aligners
using AER, which has certain limitations. For example, it may not correlate
well with statistical machine translation performance (Fraser and Marcu,
2007), and different types of alignments can be suitable for different tasks
or conditions (Lambert et al., 2012; Stymne et al., 2014). Although we have
evaluated models in annotation projection and cross-lingual transfer settings,
alternative metrics (Tiedemann, 2005; Søgaard and Wu, 2009; Ahrenberg, 2010)
are also worth considering in the future.
## Acknowledgement
We thank our reviewers for helpful suggestions.
## References
* Agić et al. (2016) Željko Agić, Anders Johannsen, Barbara Plank, Héctor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual projection for parsing truly low-resource languages. _Transactions of the Association for Computational Linguistics_.
* Ahrenberg (2010) Lars Ahrenberg. 2010. Alignment-based profiling of Europarl data in an English-Swedish parallel corpus. In _Proceedings of the International Conference on Language Resources and Evaluation_.
* Alkhouli et al. (2018) Tamer Alkhouli, Gabriel Bretschner, and Hermann Ney. 2018. On the alignment problem in multi-head attention-based neural machine translation. In _Proceedings of the Conference on Machine Translation_.
* Ammar et al. (2016) Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A Smith. 2016. Massively multilingual word embeddings. _arXiv preprint_.
* Arthur et al. (2016) Philip Arthur, Graham Neubig, and Satoshi Nakamura. 2016. Incorporating discrete translation lexicons into neural machine translation. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Bahdanau et al. (2015) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In _Proceedings of the International Conference on Learning Representations_.
* Bau et al. (2018) Anthony Bau, Yonatan Belinkov, Hassan Sajjad, Nadir Durrani, Fahim Dalvi, and James Glass. 2018. Identifying and controlling important neurons in neural machine translation. In _Proceedings of the International Conference on Learning Representations_.
* Brown et al. (1993) Peter F Brown, Stephen A Della Pietra, Vincent J Della Pietra, and Robert L Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. _Computational linguistics_.
* Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. _arXiv preprint_.
* Cao et al. (2019) Steven Cao, Nikita Kitaev, and Dan Klein. 2019. Multilingual alignment of contextual word representations. In _Proceedings of the International Conference on Learning Representations_.
* Chen et al. (2020) Yun Chen, Yang Liu, Guanhua Chen, Xin Jiang, and Qun Liu. 2020. Accurate word alignment induction from neural machine translation. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Cohn et al. (2016) Trevor Cohn, Cong Duy Vu Hoang, Ekaterina Vymolova, Kaisheng Yao, Chris Dyer, and Gholamreza Haffari. 2016. Incorporating structural alignment biases into an attentional neural translation model. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Conneau et al. (2020) Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Conneau et al. (2018) Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating cross-lingual sentence representations. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Cuturi (2013) Marco Cuturi. 2013. Sinkhorn distances: Lightspeed computation of optimal transport. _Proceedings of the Advances in Neural Information Processing Systems_.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Dyer et al. (2013) Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A simple, fast, and effective reparameterization of IBM model 2. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Fraser and Marcu (2007) Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. _Computational Linguistics_.
* Gao and Vogel (2008) Qin Gao and Stephan Vogel. 2008. Parallel implementations of word alignment tool. In _Software Engineering, Testing, and Quality Assurance for Natural Language Processing_.
* Garg et al. (2019) Sarthak Garg, Stephan Peitz, Udhyakumar Nallasamy, and Matthias Paulik. 2019. Jointly learning to align and translate with transformer models. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Grave et al. (2019) Edouard Grave, Armand Joulin, and Quentin Berthet. 2019. Unsupervised alignment of embeddings with Wasserstein Procrustes. In _Proceedings of the International Conference on Artificial Intelligence and Statistics_.
* Gururangan et al. (2020) Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Herzig and Berant (2018) Jonathan Herzig and Jonathan Berant. 2018. Decoupling structure and lexicon for zero-shot semantic parsing. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Hu et al. (2020a) Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, and Graham Neubig. 2020a. Explicit alignment objectives for multilingual bidirectional encoders. _arXiv preprint_.
* Hu et al. (2020b) Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020b. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In _Proceedings of the International Conference on Machine Learning_.
* Hu et al. (2019) Junjie Hu, Mengzhou Xia, Graham Neubig, and Jaime G Carbonell. 2019. Domain adaptation of neural machine translation by lexicon induction. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Koehn (2005) Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In _MT summit_.
* Koehn et al. (2005) Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In _Proceedings of the International Workshop on Spoken Language Translation_.
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing: System Demonstrations_.
* Kusner et al. (2015) Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to document distances. In _Proceedings of the International Conference on Machine Learning_.
* Lambert et al. (2012) Patrik Lambert, Simon Petitrenaud, Yanjun Ma, and Andy Way. 2012. What types of word alignment improve statistical machine translation? _Machine Translation_.
* Lample and Conneau (2019) Guillaume Lample and Alexis Conneau. 2019. Cross-lingual language model pretraining. In _Proceedings of the Advances in Neural Information Processing Systems_.
* Legrand et al. (2016) Joël Legrand, Michael Auli, and Ronan Collobert. 2016. Neural network-based word alignment through score aggregation. In _Proceedings of the Conference on Machine Translation_.
* Li et al. (2019) Xintong Li, Guanlin Li, Lemao Liu, Max Meng, and Shuming Shi. 2019. On the word alignment from neural machine translation. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Liang et al. (2006) Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Libovickỳ et al. (2019) Jindřich Libovickỳ, Rudolf Rosa, and Alexander Fraser. 2019. How language-neutral is multilingual BERT? _arXiv preprint_.
* Liu et al. (2016) Lemao Liu, Masao Utiyama, Andrew Finch, and Eiichiro Sumita. 2016. Neural machine translation with supervised attention. In _Proceedings of the International Conference on Computational Linguistics_.
* Liu and Sun (2015) Yang Liu and Maosong Sun. 2015. Contrastive unsupervised word alignment with non-local features. In _Proceedings of the AAAI Conference on Artificial Intelligence_.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. _arXiv preprint_.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _Proceedings of the International Conference on Learning Representations_.
* Luong et al. (2015) Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention-based neural machine translation. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* MacCartney et al. (2008) Bill MacCartney, Michel Galley, and Christopher D Manning. 2008. A phrase-based alignment model for natural language inference. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Mayhew et al. (2017) Stephen Mayhew, Chen-Tse Tsai, and Dan Roth. 2017. Cheap translation for cross-lingual named entity recognition. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Mihalcea and Pedersen (2003) Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In _Proceedings of Workshop on Building and Using Parallel Texts_.
* Monge (1781) Gaspard Monge. 1781. Mémoire sur la théorie des déblais et des remblais. _Histoire de l'Académie Royale des Sciences de Paris_.
* Nagata et al. (2020) Masaaki Nagata, Chousa Katsuki, and Masaaki Nishino. 2020. A supervised word alignment method based on cross-language span prediction using multilingual BERT. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Neubig (2011) Graham Neubig. 2011. The Kyoto free translation task. http://www.phontron.com/kftt.
* Neubig et al. (2019) Graham Neubig, Zi-Yi Dou, Junjie Hu, Paul Michel, Danish Pruthi, and Xinyi Wang. 2019. compare-mt: A tool for holistic comparison of language generation systems. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: System Demonstrations_.
* Neubig et al. (2011) Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Nicolai and Yarowsky (2019) Garrett Nicolai and David Yarowsky. 2019. Learning morphosyntactic analyzers from the Bible via iterative annotation projection across 26 languages. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Och and Ney (2000) Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Och and Ney (2003) Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. _Computational Linguistics_.
* Östling and Tiedemann (2016) Robert Östling and Jörg Tiedemann. 2016. Efficient word alignment with Markov Chain Monte Carlo. _The Prague Bulletin of Mathematical Linguistics_.
* Padó and Lapata (2009) Sebastian Padó and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. _Journal of Artificial Intelligence Research_.
* Peters et al. (2019) Ben Peters, Vlad Niculae, and André FT Martins. 2019. Sparse sequence-to-sequence models. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Peters et al. (2018) Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Pourdamghani et al. (2018) Nima Pourdamghani, Marjan Ghazvininejad, and Kevin Knight. 2018. Using word vectors to improve word alignments for low resource machine translation. In _Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics_.
* Ren et al. (2020) Shuo Ren, Yu Wu, Shujie Liu, Ming Zhou, and Shuai Ma. 2020. A retrieve-and-rewrite initialization method for unsupervised machine translation. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Sabet et al. (2020) Masoud Jalili Sabet, Philipp Dufter, François Yvon, and Hinrich Schütze. 2020. SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing: Findings_.
* Sabet et al. (2016) Masoud Jalili Sabet, Heshaam Faili, and Gholamreza Haffari. 2016. Improving word alignment of rare words with word embeddings. In _Proceedings of the International Conference on Computational Linguistics_.
* Schönemann (1966) Peter H Schönemann. 1966. A generalized solution of the orthogonal procrustes problem. _Psychometrika_.
* Sennrich et al. (2016) Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Sinkhorn and Knopp (1967) Richard Sinkhorn and Paul Knopp. 1967. Concerning nonnegative matrices and doubly stochastic matrices. _Pacific Journal of Mathematics_.
* Søgaard and Wu (2009) Anders Søgaard and Dekai Wu. 2009. Empirical lower bounds on translation unit error rate for the full class of inversion transduction grammars. In _Proceedings of the International Conference on Parsing Technologies_.
* Stanovsky et al. (2019) Gabriel Stanovsky, Noah A Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Stengel-Eskin et al. (2019) Elias Stengel-Eskin, Tzu-ray Su, Matt Post, and Benjamin Van Durme. 2019. A discriminative neural model for cross-lingual word alignment. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Stymne et al. (2014) Sara Stymne, Jörg Tiedemann, and Joakim Nivre. 2014. Estimating word alignment quality for SMT reordering tasks. In _Proceedings of the Workshop on Statistical Machine Translation_.
* Sultan et al. (2014) Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2014. Back to basics for monolingual alignment: Exploiting word similarity and contextual evidence. _Transactions of the Association for Computational Linguistics_.
* Swanson et al. (2020) Kyle Swanson, Lili Yu, and Tao Lei. 2020. Rationalizing text matching: Learning sparse alignments via optimal transport. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Tamura et al. (2014) Akihiro Tamura, Taro Watanabe, and Eiichiro Sumita. 2014. Recurrent neural networks for word alignment model. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Tiedemann (2005) Jörg Tiedemann. 2005. Optimization of word alignment clues. _Natural Language Engineering_.
* Tiedemann (2014) Jörg Tiedemann. 2014. Rediscovering annotation projection for cross-lingual parser induction. In _Proceedings of the International Conference on Computational Linguistics_.
* Tjong Kim Sang (2002) Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In _Proceedings of the Conference on Natural Language Learning_.
* Tjong Kim Sang and De Meulder (2003) Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In _Proceedings of the Conference on Natural Language Learning_.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In _Proceedings of the Advances in Neural Information Processing Systems_.
* Vilar et al. (2006) David Vilar, Maja Popović, and Hermann Ney. 2006. AER: Do we need to “improve” our alignments? In _Proceedings of the International Workshop on Spoken Language Translation_.
* Vogel et al. (1996) Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In _Proceedings of the International Conference on Computational Linguistics_.
* Wang et al. (2020) Shuo Wang, Zhaopeng Tu, Shuming Shi, and Yang Liu. 2020. On the inference calibration of neural machine translation. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Xu et al. (2019) Hainan Xu, Shuoyang Ding, and Shinji Watanabe. 2019. Improving end-to-end speech recognition with pronunciation-assisted sub-word modeling. In _Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing_.
* Yao et al. (2013a) Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013a. A lightweight and high performance monolingual word aligner. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Yao et al. (2013b) Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013b. Semi-Markov phrase-based monolingual alignment. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Yarowsky et al. (2001) David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In _Proceedings of the International Conference on Human Language Technology Research_.
* Zenkel et al. (2019) Thomas Zenkel, Joern Wuebker, and John DeNero. 2019. Adding interpretable attention to neural translation models improves word alignment. _arXiv preprint_.
* Zenkel et al. (2020) Thomas Zenkel, Joern Wuebker, and John DeNero. 2020. End-to-end neural word alignment outperforms GIZA++. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_.
* Zhang et al. (2017) Meng Zhang, Yang Liu, Huanbo Luan, and Maosong Sun. 2017. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing_.
* Zhang et al. (2019) Zhirui Zhang, Shuangzhi Wu, Shujie Liu, Mu Li, Ming Zhou, and Tong Xu. 2019. Regularizing neural machine translation by target-bidirectional agreement. In _Proceedings of the AAAI Conference on Artificial Intelligence_.
## Appendix A Implementation Details
We use the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate
of 2e-5 and a batch size of 8. Following Peters et al. (2019), we set
$\alpha$ to 1.5 for $\alpha$-entmax. The threshold $c$ is set to 0 for
$\alpha$-entmax and 0.001 for softmax and optimal transport. We train our
models on one 2080 Ti GPU for one epoch, and it takes 3 to 24 hours for the
model to converge, depending on the size of the dataset. We evaluate model
performance using Alignment Error Rate (AER).
## Appendix B Analysis
In this section, we conduct more analyses of our models.
#### Monolingual Alignment.
We also investigate how our models perform in monolingual alignment settings.
Previous methods (MacCartney et al., 2008; Yao et al., 2013a,b; Sultan et al.,
2014) typically exploit external resources such as WordNet to tackle the
problem. As shown in Table 10, mBERT can outperform previous methods in terms
of recall and F1 without any fine-tuning. Our multilingually fine-tuned model
achieves better recall and a slightly better F1 score than the vanilla mBERT
model, and fine-tuning our model with supervised signals achieves further
improvements.
Model | Prec. % | Rec. % | F1 %
---|---|---|---
_Baseline_ | | |
Yao et al. (2013a) | 91.3 | 82.0 | 86.4
Yao et al. (2013b) | 90.4 | 81.9 | 85.9
Sultan et al. (2014) | 93.5 | 82.6 | 87.6
_Ours_ | | |
mBERT | 87.0 | 89.0 | 88.0
Ours-Multilingual | 87.0 | 89.3 | 88.1
Ours-Supervised | 87.2 | 89.8 | 88.5
Table 10: Our model is also effective in monolingual alignment settings.
#### Sensitivity Analysis.
We also conduct a sensitivity analysis on the threshold $c$ for our softmax
alignment extraction method. As shown in Table 11, our method is relatively
robust to this threshold. In particular, after fine-tuning, the AERs change
by less than 0.5% as the threshold varies.
Model | $c$ | De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---|---
mBERT | 1e-6 | 17.3 | 6.0 | 27.2 | 45.2 | 18.9
 | 1e-5 | 17.3 | 5.9 | 27.4 | 45.1 | 18.6
 | 1e-4 | 17.3 | 5.7 | 27.6 | 45.3 | 18.3
 | 1e-3 | 17.4 | 5.6 | 27.9 | 45.6 | 18.1
 | 1e-2 | 17.7 | 5.6 | 28.4 | 45.8 | 18.2
 | 1e-1 | 18.1 | 5.6 | 28.9 | 46.3 | 18.3
 | 5e-1 | 18.4 | 5.6 | 29.5 | 47.0 | 18.7
Ours-Multilingual | 1e-6 | 15.4 | 4.6 | 22.7 | 38.2 | 14.1
 | 1e-5 | 15.4 | 4.5 | 22.7 | 38.1 | 14.0
 | 1e-4 | 15.3 | 4.5 | 22.6 | 37.9 | 13.9
 | 1e-3 | 15.3 | 4.4 | 22.6 | 37.9 | 13.8
 | 1e-2 | 15.3 | 4.3 | 22.7 | 37.9 | 13.8
 | 1e-1 | 15.4 | 4.3 | 22.8 | 38.0 | 13.8
 | 5e-1 | 15.4 | 4.2 | 23.0 | 38.2 | 13.9
Table 11: Our softmax alignment extraction method is relatively robust to the
threshold $c$.
(a) mBERT IterMax
(b) mBERT softmax
(c) Fine-tuned IterMax
(d) Fine-tuned softmax
Figure 4: Extracting alignments with IterMax (Sabet et al., 2020) and with our
softmax method from the vanilla and fine-tuned mBERT models.
#### Comparisons with IterMax.
IterMax is the best alignment extraction method in SimAlign (Sabet et al.,
2020). The results in the main paper have demonstrated that our alignment
extraction methods outperform IterMax. In Figure 4, we can see that the
IterMax algorithm tends to sacrifice precision for a small improvement in
recall, while our method generates more accurate alignments.
Model | Objective | De-En | Fr-En | Ro-En | Ja-En | Zh-En
---|---|---|---|---|---|---
_Ours-Bilingual_ | | | | | |
$\alpha$-entmax | All | 16.1 | 4.1 | 23.4 | 38.6 | 15.4
 | All w/o MLM | 15.6 | 4.2 | 23.3 | 38.8 | 15.1
 | All w/o TLM | 16.4 | 4.3 | 23.7 | 40.1 | 15.3
 | All w/o SO | 17.8 | 4.7 | 23.9 | 39.4 | 16.3
 | All w/o PSI | 16.5 | 4.2 | 23.1 | 38.5 | 15.4
softmax | All | 15.6 | 4.4 | 23.0 | 38.4 | 15.3
 | All w/o MLM | 15.5 | 4.2 | 23.2 | 38.9 | 14.9
 | All w/o TLM | 15.9 | 4.5 | 23.7 | 40.1 | 15.1
 | All w/o SO | 17.4 | 4.7 | 23.2 | 38.6 | 16.3
 | All w/o PSI | 15.6 | 4.3 | 23.1 | 38.8 | 15.4
_Ours-Multilingual_ | | | | | |
$\alpha$-entmax | All | 15.4 | 4.1 | 22.9 | 37.4 | 13.9
 | All w/o MLM | 15.1 | 4.2 | 22.8 | 37.8 | 13.7
 | All w/o TLM | 16.4 | 4.4 | 23.3 | 39.7 | 14.4
 | All w/o SO | 17.5 | 4.6 | 23.6 | 40.0 | 15.6
 | All w/o PSI | 15.5 | 3.9 | 23.0 | 38.2 | 14.1
softmax | All | 15.3 | 4.4 | 22.6 | 37.9 | 13.6
 | All w/o MLM | 15.3 | 4.4 | 22.8 | 38.6 | 13.7
 | All w/o TLM | 15.5 | 4.7 | 22.9 | 39.7 | 14.0
 | All w/o SO | 16.9 | 4.8 | 23.0 | 39.1 | 15.4
 | All w/o PSI | 15.4 | 4.4 | 22.7 | 37.9 | 13.8
Table 12: Ablation studies on training objectives.
#### Ablation Studies on Training Objectives.
Table 12 presents more ablation studies on our training objectives. We can see
that the self-training objective is the most effective one, followed by the
translation language modeling objective and then the parallel sentence
identification objective. The masked language modeling objective can sometimes
hurt the model performance, possibly because the translation language modeling
objective already provides sufficient supervision.
#### Experiments on More Language Pairs.
We also test our alignment extraction methods on other language pairs,
following the setting of Sabet et al. (2020) without fine-tuning, as shown in
Table 13. (Their English-Persian dataset was unavailable at the time of
writing this paper.)
Model | En-Cs | En-Hi
---|---|---
GIZA++ | 18.2 | 51.8
SimAlign | 13.4 | 40.2
Ours (softmax, $c$=1e-3) | 12.3 | 41.2
Ours (softmax, $c$=1e-5) | 12.7 | 39.5
Ours (softmax, $c$=1e-7) | 13.3 | 39.2
Table 13: Performance on more language pairs.
#### More Qualitative Examples.
In addition to the examples provided in the main text, we also present some
randomly sampled examples in Figure 5. We can clearly see that our model
learns more aligned representations than the baseline model.
Figure 5: Cosine similarities between subword representations in a parallel
sentence pair before and after fine-tuning. Red boxes indicate the gold
alignments.
# Towards Understanding How Readers Integrate Charts and Captions: A Case
Study with Line Charts
Dae Hyun Kim <EMAIL_ADDRESS> (Stanford University / Tableau Research, Stanford / Palo Alto, California), Vidya Setlur <EMAIL_ADDRESS> (Tableau Research, Palo Alto, California), and Maneesh Agrawala <EMAIL_ADDRESS> (Stanford University, Stanford, California)
(2021)
###### Abstract.
Charts often contain visually prominent features that draw attention to
aspects of the data and include text captions that emphasize aspects of the
data. Through a crowdsourced study, we explore how readers gather takeaways
when considering charts and captions together. We first ask participants to
mark visually prominent regions in a set of line charts. We then generate text
captions based on the prominent features and ask participants to report their
takeaways after observing chart-caption pairs. We find that when both the
chart and caption describe a high-prominence feature, readers treat the doubly
emphasized high-prominence feature as the takeaway; when the caption describes
a low-prominence chart feature, readers rely on the chart and report a higher-
prominence feature as the takeaway. We also find that external information
that provides context helps further convey the caption’s message to the
reader. We use these findings to provide guidelines for authoring effective
chart-caption pairs.
Captions; line charts; visually prominent features; takeaways.
Copyright: rights retained. Journal year: 2021. Conference: CHI Conference on
Human Factors in Computing Systems (CHI ’21), May 8–13, 2021, Yokohama, Japan.
DOI: 10.1145/3411764.3445443. ISBN: 978-1-4503-8096-6/21/05. CCS:
Human-centered computing; Empirical studies in visualization.
## 1\. Introduction
Charts provide graphical representations of data that can draw a reader’s
attention to various visual features such as outliers and trends. Readers are
initially drawn towards the most visually salient components in the chart such
as the chart title and the labels (Matzen et al., 2017). However, they
eventually apply their cognitive processes to extract meaning from the most
prominent chart features (Card et al., 1999; Tufte, 1990). Consider the line
chart at the beginning of this article. What do you think are the main visual
features of the chart and what are its key takeaways?
Such charts are often accompanied by text captions that emphasize specific
aspects of the data as chosen by the chart author. In some cases, the data
emphasized in the caption corresponds to the most visually prominent features
of the chart and in other cases it does not. Prior studies have shown that
charts with captions can improve both recall and comprehension of some aspects
of the underlying information, compared to seeing the chart or the caption
text alone (Bransford, 1979; Nugent, 1983; Large et al., 1995; Hegarty and
Just, 1993). But far less is known about how readers integrate information
between charts and captions, especially when the data emphasized by the
visually prominent features of the chart differs from the data that is
emphasized in the caption.
Figure: An example line graph showing 30-year fixed mortgage rates.
Consider the visually prominent features in our initial line chart and then
consider each of the following caption possibilities one at a time. How do
your takeaways change with each one?
(1) The chart shows the 30-year fixed mortgage rate between 1970 and 2018.
(2) The 30-year fixed mortgage rate increased slightly from 1997 to 1999.
(3) The 30-year fixed mortgage rate reached its peak of 18.45% in 1981.
(4) The 30-year fixed mortgage rate reached its peak of 18.45% in 1981 due to
runaway inflation.
The first caption simply describes the dimensions graphed in the chart and
only provides redundant information that could be read from the axis labels.
Automated caption generation tools often create such basic descriptive
captions (Tableau Software, 2020; Microsoft Q & A, 2020). The next three
captions each emphasize aspects of the data corresponding to a visual feature
of the chart (i.e., upward trend, peak) by explicitly mentioning the
corresponding data point or trend. However, the second caption emphasizes a
feature of low visual prominence: a relatively local and small rise in the
chart between 1997 and 1999. The third caption describes the most visually
prominent feature of the chart: the tallest peak, which occurs in 1981. The
fourth caption also describes this most visually prominent feature, but adds
external information that is not present in the chart and provides context for
the data.
In this paper, we examine two main hypotheses: (1) when a caption emphasizes
more visually prominent features of the chart, people are more likely to treat
those features as the takeaway; even when a caption emphasizes a less visually
prominent feature, people are still more likely to treat a more visually
prominent feature in the chart as the takeaway; (2) when a caption contains
external information for context, the information serves to further emphasize
the feature described in the caption, and readers are therefore more likely to
treat that feature as the takeaway.
We considered univariate line charts for our work because they are among the
most common basic charts and are easily parameterizable, making them useful
for an initial exploration of our hypotheses. We synthesized $27$ single-line
charts with carefully chosen parameters and collected $16$ real-world
single-line charts to confirm the generalizability of our findings. We ran a
data collection activity on the $43$ single-line charts, asking $219$
participants to mark visually prominent regions on the line charts. We
generated text captions for the ranked set of prominent features using
templates to control variations in natural language. Finally, we conducted a
crowdsourced study asking a new set of $2168$ participants to report their
takeaways after seeing the chart-caption pairs.
Our findings from the study support both of our hypotheses. Referring back to
our initial line chart, when the caption mentions the most prominent feature
as in the third caption (i.e., the peak in 1981), readers will probably take
away information from that feature. When the caption mentions a less prominent
feature as in the second caption (i.e., the increase from 1997 to 1999), there
is a mismatch in the message between the chart and the caption. Readers will
have a strong tendency to go with the message conveyed in the chart and take
away information about the peak value. Finally, the external information about
the peak value present in the fourth caption will reinforce the message in the
caption and the readers will more likely take away information about the peak.
These findings help us better understand the relationship between charts and
their captions when conveying information about certain aspects of the data to
the reader. Based on these studies, we provide guidelines for authoring charts
and captions together in order to emphasize the author’s intended takeaways.
Visualization authors can more effectively convey their message to readers by
ensuring that both charts and captions emphasize the same set of features.
Specifically, authors could make visual features that are related to their key
message more prominent through visual cues (e.g., highlighting or zooming
into a focus area, adding annotations) (Egeth and Yantis, 1997; Liang and
Huang, 2010) or include external information in the caption to further
emphasize the feature described in the caption. Often, an alternative chart
representation may be more conducive to making certain visual features more
prominent.
## 2\. Related Work
Our work is related to two lines of research: (1) Cognitive Understanding of
Charts and (2) Caption Generation Tools.
### 2.1. Cognitive Understanding of Charts
The prevalence of text with visuals has led researchers to explore how readers
specifically understand information in figures with accompanying text in
several domains. Li et al. (Li et al., 2018) conducted studies to demonstrate
that figures with text can convey essential information and better aid
understanding than just text alone for scientific publications in a biomedical
domain. Ottley et al. (Ottley et al., 2016) demonstrated that having text that
accurately describes important findings in medical diagnostic images can
increase physicians’ speed and accuracy on Bayesian reasoning tasks while
making life-critical judgments for patients. Xiong et al. (Xiong et al., 2019)
showed that background knowledge can affect viewers’ visual perception of data
as they tend to see the pattern in the data corresponding to the background
knowledge as more visually salient. Kong et al. (Kong et al., 2018) explored
the impact of titles on visualization interpretation with different degrees of
misalignment between a visualization and its title. A title contains a miscued
slant when the visualization emphasizes one side of the story through visual
cues but the title’s message addresses the other (less emphasized) side of the
story. Titles have a contradictory slant where the information conveyed in the
title is not presented at all in the visualization. They observe that even
though the title of a visualization may not be recalled, the title can still
measurably impact the remembered contents of a chart. Specifically, titles
with a contradictory slant trigger more people to identify bias compared to
titles with a miscued slant, while visualizations are perceived as impartial
by the majority of viewers (Kong et al., 2019). Elzer et al. (Elzer et al.,
2005) conducted a study to better understand the extent to which captions
contribute to recognizing the intended message of an information graphic for
sight-impaired users. They find that the caption strengthens the intended
message of the graphic. Carberry et al. (Carberry et al., 2006) showed that
the communicative goals of infographics in digital libraries are often not
repeated in the text of the articles. Their work looked into how information
in the graphics could be better utilized for summarizing a document by
employing a Bayesian network.
However, this previous research has not explored the relationship between
charts and their captions with respect to how they work together to emphasize
certain aspects of the data to the reader.
### 2.2. Caption Generation Tools
A number of visual analysis tools help users design charts and captions from
an input data table (Cui et al., 2018; Demiralp et al., 2017; Hu et al., 2018;
Vartak et al., 2015; Wills and Wilkinson, 2010). These captions generally only
describe the data attributes and visual encodings that are in play in the
charts and do not highlight key takeaways. Nevertheless, authors often include
text with a chart to help emphasize an intended message to their audience.
PostGraphe (Fasciano and Lapalme, 1996) generated reports integrating graphics
and text from a list of user-defined intentions about the underlying data such
as the comparison of variables. SAGE (Mittal et al., 1995) used natural
language generation techniques to produce explanatory captions for information
graphics. The system generates captions based on the structural and spatial
relations of the graphical objects and their properties along with
explanations describing the perceptual complexity of the data attributes in
the graphics. SumTime (Yu et al., 2003) used pattern recognition techniques to
generate textual summaries of time-series data. The iGRAPH-Lite system (Ferres
et al., 2007) made information in a graphic accessible to blind users by using
templates to provide textual summaries of what the graphic looks like. The
summaries however, do not focus on the higher-level takeaway conveyed by the
graphic. Chen et al. (Chen et al., 2019b) produced natural language
descriptions for figures by identifying relations among labels present in the
figures.
Other work has explored natural language generation techniques for assembling
multiple caption units together to form captions (Qian et al., 2020). Deep
learning techniques based on neural networks automate caption generation tasks
for news images (Chen et al., 2019a). Elzer et al. (Elzer et al., 2011)
identified communicative signals that represent the intent of messages
portrayed in basic bar charts by applying a Bayesian network methodology for
reasoning about these signals and generating captions. Liu et al. (Liu et al.,
2009; Wei et al., 2010; Liu et al., 2012) explored the integration of text
analytics algorithms with interactive visualization tools to help users
understand and interpret the summarization results. Contexifier (Hullman et
al., 2013) automatically annotated visualizations of stock behavior with news
article headlines, taking into account visual salience, contextual relevance,
and key events from the articles. Kim et al. (Kim et al., 2020) introduced an
automatic chart question answering pipeline that generates visual explanations
that refer to visual features of charts using a template-based natural
language generation approach. Voder (Srinivasan et al., 2019) generated data
facts for visualizations with embellishments to help users interpret
visualizations and communicate their findings. While an evaluation of that
system suggested that interactive data facts aided users in interpreting
visualizations, the paper did not specifically explore the interplay between
data facts and the visualizations and their effects on the readers’ takeaways.
These systems focus on helping authors with auto-generated text that can be
associated with graphics; however, the work does not evaluate what information
readers gather from the generated captions with their corresponding graphics.
Our paper specifically explores how similarities and differences between what
is visually emphasized in a line chart and textually emphasized in its
caption, can affect what readers take away from the information when presented
together. Future directions from our work could extend the functionality of
chart authoring tools by providing automatic suggestions for captions as well
as for chart presentation to help the reader take away information that is
consistently emphasized by both the chart and caption.
## 3\. Study
Figure 1. Our study pipeline. The inputs to the study are $27$ synthetic and
$16$ real-world charts. Yellow boxes represent steps where we employed
crowdsourcing. The green box indicates that the step did not involve
crowdsourcing.
We conducted a crowdsourced study to understand how captions that describe
features of varying prominence levels, with or without external information
for context, interact with the chart in forming readers’ takeaways. Through an
initial data collection activity, we asked participants to identify features
in the line charts that they thought were visually prominent. We generated
captions corresponding to those marked features at various levels of
prominence. We then ran a study asking a new set of participants to type their
takeaways after viewing a chart-caption pair. Figure 1 shows the study
pipeline.
### 3.1. Datasets
Figure 2. The $27$ data shapes generated for the study and their top three
prominent features. Columns represent the nine possible global shapes and rows
represent the three possible local outlier types. Here, ‘flat’, ‘inc’, and
‘dec’ denote flat, increasing, and decreasing trends respectively. ‘none’,
‘neg’, and ‘pos’ denote none, negative, and positive outlier types
respectively. Red, green, and blue regions indicate the top three prominent
features in order.
Figure 3. The $16$ real-world charts. Red, green, and blue regions indicate
the top three prominent features in order.
We ran the study on two different datasets: (1) synthetically generated line
charts that we designed to ensure good coverage of a variety of visual
features that occur in line charts, and (2) line charts gathered from
real-world sources to serve as a more ecologically valid setting for our
study.
Synthetic Charts. We generated a set of synthetic line charts with common
visual features (i.e., trends, extrema, and inflection points) while
maintaining realistic global shapes. To keep the overall design space
tractable, we limited global shapes to include at most two trends (i.e., up,
down, and flat) and added at most one perturbation to induce features (e.g.,
inflection points) in either the positive or negative direction, resulting in
a total of $27$ data shapes (Figure 2). To provide context to the charts, we
labeled the x-axis with time unit values implying that the chart represents a
time series. Specifically, we selected the start and end of the x-axis from
the set of years {1900, 1910, 1920,…, 2020}. To label the y-axis, we chose a
domain for the y-axis and its value range from the MassVis dataset (Borkin et
al., 2013).
Real-world Charts. To build a more ecologically representative dataset of line
charts with various shapes, styles, and domains, we collected $16$ charts
(Figure 3) from sources such as The Washington Post (Washington Post, 2020),
Pew Research (Pew Research, 2020), Wikipedia (Wikipedia, 2020), and Tableau
Public (Tableau Public, 2020). Because our study focuses on prominence arising
from intrinsic features in line charts, we removed all graphical elements that
could potentially affect the prominence of the features in the charts (e.g.,
text annotations, highlighting, and background shading). In addition, we
removed all text except for the axis labels (e.g. chart titles) so that the
captions serve as the primary source of text provided with the chart. We added
axis labels to those charts without labels to ensure readability.
### 3.2. Identify Visually Prominent Features
Figure 4. The line on the bottom left shows the prominence curve for the line
chart above. From this curve, we obtain the most prominent (red), the second
most prominent (green), and the third most prominent (blue) features in the
chart. The $10$ caption variants (one of them being a no-caption variant)
generated based on these prominent features are shown on the right. The text
colors indicate the types of fill-in values based on the caption templates;
purple for dimensions, fuchsia for the feature description, blue for data
values, and brown for the time period.
To identify the most visually prominent features in our dataset, we recruited
at least five workers from Amazon Mechanical Turk (Amazon Mechanical Turk,
2020) for each line chart and asked them to draw rectangular bounding boxes
around the top three most prominent features in the chart. We also asked them
to briefly describe each marked feature in their own words so that we could
differentiate between trend and slope features versus peak, inflection, and
other point features.
In each trial of the data collection, we presented one of the $43$ line
charts. Because we were seeking subjective responses, each participant
completed only one trial to avoid biases that might arise from repeated
exposure to the task. Participation was limited to English speakers in the
U.S. with at least a 98% acceptance rate and $5000$ approved tasks. We paid a
rate equivalent to $2 per 10 minutes.
We asked a total of $219$ participants (average of $5.09$ per chart) to label
the top three features for a total of $657$ prominence boxes. We then
aggregated the feature bounding boxes by first projecting each box onto the
x-axis to form a 1D interval (Figure 4 upper left). We weighted
each interval inversely proportional to the ranking provided by the
participant. Specifically, the top ranked feature bounding box for each
participant was assigned a weight of $3$, while the 3rd ranked feature was
assigned a weight of $1$. We noticed that bounding boxes corresponding to the
same features were fairly consistent in their central regions, although the
exact boundaries drawn by the participants varied. In order to boost the signal in the
central regions while suppressing the noise in the boundary regions, we
multiplied the weight assigned to each interval by a Gaussian factor centered
at the interval and with standard deviation set to half the width of the
interval. Summing all of the Gaussian weighted intervals, we obtained a
prominence curve (Figure 4 bottom left). However, a region defined by a local
maximum of the curve may not have an obvious one-to-one mapping with a feature
in the chart because it roughly indicates a high prominence region instead of
pinpointing a specific visual feature. We considered all the bounding boxes
containing the region along with the participants’ text descriptions of the
features to associate the local maximum with a specific feature. We repeated
this process for the regions around the top three local maxima to identify the
three most prominent features. Results of the algorithm for the charts in our dataset are
shown in Figures 2 and 3.
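To make this aggregation step concrete, the sketch below (in Python, with invented box data) computes a prominence curve from rank-annotated bounding-box projections. The rank-to-weight mapping follows the description above (top rank weighted 3, third rank weighted 1); restricting the Gaussian factor to the interval itself and the peak-picking at the end are our own assumptions about plausible implementation details, not the authors’ exact procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def prominence_curve(boxes, x_grid):
    """Sum rank-weighted, Gaussian-weighted 1D intervals into a curve.

    boxes: iterable of (x_start, x_end, rank) with rank in {1, 2, 3},
           each obtained by projecting a bounding box onto the x-axis.
    """
    curve = np.zeros_like(x_grid, dtype=float)
    for x_start, x_end, rank in boxes:
        weight = 4 - rank                    # rank 1 -> weight 3, rank 3 -> weight 1
        center = 0.5 * (x_start + x_end)
        sigma = 0.5 * (x_end - x_start)      # std dev = half the interval width
        gauss = np.exp(-0.5 * ((x_grid - center) / sigma) ** 2)
        inside = (x_grid >= x_start) & (x_grid <= x_end)
        curve += weight * gauss * inside     # boost centers, suppress boundaries
    return curve

# Hypothetical boxes from a few participants on a 1900-2020 time axis.
x = np.linspace(1900, 2020, 1000)
boxes = [(1980, 2000, 1), (1985, 2005, 1), (1930, 1950, 2), (1955, 1965, 3)]
curve = prominence_curve(boxes, x)
peaks, _ = find_peaks(curve)
top3 = peaks[np.argsort(curve[peaks])[::-1][:3]]  # indices of top three maxima
print(x[top3])
```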
### 3.3. Caption Generation
To carefully control the language used in the captions and keep the number of
conditions manageable, we generated captions using templates that only vary
the feature mentioned and whether external information is introduced. Using
the templates, we produced the following caption variants: (1) two captions
(one with and one without external information) for each of the top three
visually prominent features identified earlier, (2) two captions (one with and
one without external information) describing a minimally prominent feature
that is neither an extremum nor an inflection point, and (3) a basic caption
that simply describes the domain represented in the chart without describing a
particular feature.
We generated $10$ caption variants (including the no caption variant in which
we presented a chart without caption) for each of the $43$ charts, providing a
total of $430$ chart-caption pairs. We manually generated all the captions
rather than using the original captions for the real-world charts to control
for word use and grammatical structure. For real-world charts, we searched the
documents in which they originally appeared to extract information not present
in the charts. In particular, we looked for
information about potential reasons for trends or change (e.g. the external
information included in the caption about the most prominent feature in Figure
4) or comparisons with a similar entity (e.g. the comparison of Macron’s
approval rating with Trump’s approval rating in the second most prominent
feature in Figure 4). For synthetically generated charts and real-world charts
that were not accompanied by additional information about their features, we
referenced Wikipedia (Wikipedia, 2020) articles to create a plausible context.
We employed simple language templates for caption generation to minimize the
effects of linguistic variation (Table 1). The captions generated with the
templates were allowed to vary in the features they described in the charts.
To make the descriptions of the features appear natural, we included words the
participants used to describe the features during the prominent feature
collection phase. Because the participants usually described each of the
features using a noun occasionally with an adjective modifier (e.g. “sharp
increase”), we manually lemmatized the words and modified the forms to
correctly fit into our template (e.g. “sharply increased” in the caption about
the third most prominent feature in Figure 4).
Feature | Template
---|---
Extremum | [dimension] reached its [extrema-word] of [value] in [time-period].
Trend | [dimension] [slope-word] in/between [time-period].
Inflection | [dimension] started [slope-word] in [time-period].
Point | [dimension] was [value] in [time-period].
Table 1. Examples of templates we employed for generating captions about
specific features. The text colors indicate the types of fill-in values based
on the caption templates; purple for dimensions, fuchsia for feature
descriptions, blue for data values, and brown for time periods. Examples of
filled in captions are in Figure 4 (right).
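As an illustration of the template-filling step, a minimal sketch follows. The template strings mirror Table 1, but the dictionary keys, the function name, the fill-in values, and the assumption that external information is prepended as a separate clause are our own illustrative choices, not the study materials.

```python
# Hypothetical caption templates mirroring Table 1; keys and fill-in values
# are illustrative, not taken from the study.
TEMPLATES = {
    "extremum":   "{dimension} reached its {extrema_word} of {value} in {time_period}.",
    "trend":      "{dimension} {slope_word} between {time_period}.",
    "inflection": "{dimension} started {slope_word} in {time_period}.",
    "point":      "{dimension} was {value} in {time_period}.",
}

def make_caption(feature_type, external_info=None, **fills):
    """Fill a template; optionally prepend an external-information clause."""
    caption = TEMPLATES[feature_type].format(**fills)
    return f"{external_info} {caption}" if external_info else caption

print(make_caption("extremum", dimension="The approval rating",
                   extrema_word="minimum", value="35%", time_period="2018"))
print(make_caption("trend", external_info="Following the 2008 financial crisis,",
                   dimension="the unemployment rate", slope_word="rose sharply",
                   time_period="2008 and 2010"))
```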
### 3.4. Collect Takeaways for Charts & Captions
Figure 5. The procedure for collecting takeaways for chart-caption pairs. The
images show simplified versions of the screen that the participants saw during
each step.
#### 3.4.1. Design
We ran a between-subjects design study for collecting takeaways for charts and
their captions. For each of the $43$ charts, we presented one of the ten
variants (including the no caption variant) (examples in Figure 4):
(1) [1st w/o ext] Caption for most prominent feature, no external info.
(2) [1st w/ ext] Caption for most prominent feature, has external info.
(3) [2nd w/o ext] Caption for 2nd most prominent feature, no external info.
(4) [2nd w/ ext] Caption for 2nd most prominent feature, has external info.
(5) [3rd w/o ext] Caption for 3rd most prominent feature, no external info.
(6) [3rd w/ ext] Caption for 3rd most prominent feature, has external info.
(7) [non-pro w/o ext] Caption for non-prominent feature, no external info.
(8) [non-pro w/ ext] Caption for non-prominent feature, has external info.
(9) [basic] Caption about the domain represented in the chart and its $x$-range
(10) [no cap] No caption
#### 3.4.2. Procedure
The study began with a screening test to ensure that the participant had a
basic understanding of line charts and could read values and encodings,
extract extrema and trends, and compare values (Figure 5 first step). Only
participants who passed this test were allowed to continue with the study.
After they read the instructions, the participants were presented with a chart
and, unless it was the no-caption variant, a caption underneath the chart, as
is typical of charts in the real world (Figure 5 second step). We did not
impose a time constraint, allowing participants sufficient time to read and
digest the information at their own pace, as in real-world document reading. On the
next screen for collecting takeaways, the chart and the caption were removed
to constrain readers to provide the takeaways based on memory instead of
simply re-reading from the chart and the caption. The participants were asked
to list as many text takeaways as they could in the order of importance
(Figure 5 third step). Finally, using a 5-point Likert scale, we asked how
much they relied on the chart and caption individually when determining their
takeaways.
We asked each participant to provide takeaways for exactly one chart-caption
pair to prevent potential biases from already having read a different caption
about a chart. From $2168$ participants (average of $5.04$ per chart-caption
pair), we collected a total of $4953$ takeaways (average of $2.28$ per
participant).
#### 3.4.3. Labeling Takeaways
In order to analyze the takeaways, we manually labeled each takeaway with the
corresponding chart feature described. Since participants often described
multiple chart features in a single takeaway, we first split each takeaway
into separate takeaways for each visual feature mentioned. At the end of this
process, we identified on average $1.31$ features per takeaway. If the
referenced feature was one of the three most prominent features or the non-
prominent feature we identified during caption generation, we labeled the
takeaway with the corresponding feature; otherwise, we labeled the takeaway as
referring to an ‘other’ feature. If the takeaway did not refer to any specific
feature in the chart, we labeled the takeaway as a non-feature. Examples of
non-feature takeaways include an extrapolation such as “The value will
continue to rise after 2020” or a judgment such as “I should buy gold” when
looking at a chart showing the price of gold over time. One of the authors
labeled the features and discussed any confusing cases with the other authors
to converge on the final label.
## 4\. Results
Figure 6. Study results. Each column shows bar charts for each prominence
level mentioned in the caption (i.e., the leftmost bar chart is for captions
mentioning the 1st ranked visual feature, the next bar chart is for captions
mentioning the 2nd ranked visual feature, while the rightmost bar chart is for
the no-caption condition). Within a bar chart, each bar represents the
percentage of takeaways mentioning the visual feature at that prominence
level. For example, the leftmost bar in each bar chart represents the
percentage of total takeaways that mention the top ranked feature. Each bar
chart also reports the percentage of Other features and Non-features that were
mentioned in the takeaways. These charts aggregate data for captions with and
without external information. The percentages do not sum to $100\%$ as some
takeaways mention multiple features.
The primary goal of our study is to understand what readers take away when
charts and captions are presented together and how the emphasis on different
prominent features and presence of external information affects the takeaways.
We analyze our results with respect to two hypotheses:
[H1] When captions emphasize more visually prominent features of the chart,
people are more likely to treat the features as the takeaway; when a caption
emphasizes a less visually prominent feature, people are less likely to treat
that feature as the takeaway and more likely to treat a more visually
prominent feature in the chart as the takeaway.
[H2] When captions contain external information for context, the external
information serves to further emphasize the feature presented in the caption
and people are therefore more likely to treat that feature as the takeaway,
compared to when the caption does not contain external information.
Assessing H1. To evaluate H1, we examine how varying the prominence of the
visual feature mentioned in a caption (independent variable) affects the
visual features mentioned in the takeaways (dependent variable). Figure 6
summarizes the study results for the synthetic charts (top row) and the real-
world charts (bottom row).
In general, these results suggest that when a caption mentions visual features
of differing prominence levels, the takeaways also differ. Omnibus Pearson’s
chi-squared tests confirm a significant difference between the bar charts for
the 5 different caption conditions in both the synthetic
($\chi^{2}(20)=202.211$, $p<0.001$) and real world ($\chi^{2}(20)=207.573$,
$p<0.001$) datasets. These results also suggest that when the caption mentions
a specific feature, the takeaways also tend to mention that feature, when
compared to the baseline ‘no-caption’ condition.
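For reference, an omnibus test of this kind can be run on a condition-by-feature contingency table of takeaway counts. The SciPy sketch below uses made-up counts; the layout of five caption conditions by six takeaway categories is one plausible arrangement consistent with the reported $df=20$, not the study’s actual table.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: caption condition (1st, 2nd, 3rd, non-prominent, no caption).
# Columns: takeaway category (1st, 2nd, 3rd, non-prominent, other,
# non-feature). Counts are illustrative only.
counts = np.array([
    [90, 20, 10,  5, 15, 10],
    [45, 60, 15,  5, 15, 10],
    [40, 25, 50, 10, 15, 10],
    [55, 20, 15, 30, 15, 15],
    [60, 25, 15,  5, 25, 20],
])
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4g}")  # dof = (5-1)*(6-1) = 20
```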
| Caption-Takeaway 1 | Caption-Takeaway 2 | |
---|---|---|---|---|---|---
Source | Caption | Takeaway | Caption | Takeaway | $Z$ | $p$
Block 1. Takeaways mentioning feature in caption vs. without caption
Synthetic | 1st | 1st | no cap | 1st | $2.846$ | $0.002^{*}$
2nd | 2nd | no cap | 2nd | $4.641$ | $<0.001^{*}$
3rd | 3rd | no cap | 3rd | $3.643$ | $0.001^{*}$
non-pro | non-pro | no cap | non-pro | $6.195$ | $<0.001^{*}$
Real-world | 1st | 1st | no cap | 1st | $1.660$ | $0.049$
2nd | 2nd | no cap | 2nd | $4.225$ | $<0.001^{*}$
3rd | 3rd | no cap | 3rd | $3.347$ | $<0.001^{*}$
non-pro | non-pro | no cap | non-pro | $4.732$ | $<0.001^{*}$
Block 2. Between takeaways mentioning feature in caption
Synthetic | 1st | 1st | 2nd | 2nd | $1.782$ | $0.037$
2nd | 2nd | 3rd | 3rd | $0.705$ | $0.044$
3rd | 3rd | non-pro | non-pro | $8.989$ | $<0.001^{*}$
Real-world | 1st | 1st | 2nd | 2nd | $3.708$ | $<0.001^{*}$
2nd | 2nd | 3rd | 3rd | $0.363$ | $0.358$
3rd | 3rd | non-pro | non-pro | $5.940$ | $<0.001^{*}$
Block 3. When caption $=$ 1st: takeaway $=$ 1st vs. takeaway $\neq$ 1st
Synthetic | 1st | 1st | 1st | 2nd | $8.168$ | $<0.001^{*}$
1st | 1st | 1st | 3rd | $8.275$ | $<0.001^{*}$
1st | 1st | 1st | non-pro | $19.463$ | $<0.001^{*}$
Real-world | 1st | 1st | 1st | 2nd | $9.981$ | $<0.001^{*}$
1st | 1st | 1st | 3rd | $11.301$ | $<0.001^{*}$
1st | 1st | 1st | non-pro | $11.536$ | $<0.001^{*}$
Block 4. When caption $\neq$ 1st: takeaway $=$ 1st vs. takeaway $=$ caption
Synthetic | 2nd | 2nd | 2nd | 1st | $3.829$ | $<0.001^{*}$
3rd | 3rd | 3rd | 1st | $0.258$ | $0.398$
non-pro | 1st | non-pro | non-pro | $8.342$ | $<0.001^{*}$
Real-world | 2nd | 2nd | 2nd | 1st | $2.010$ | $0.022$
3rd | 3rd | 3rd | 1st | $2.521$ | $0.006^{*}$
non-pro | 1st | non-pro | non-pro | $5.454$ | $<0.001^{*}$
Table 2. Pairwise Z-test results of comparisons between various ratios of
takeaways that mention a certain feature (third, fifth columns) when provided
a caption describing a certain feature (second, fourth columns). The tests
were one-sided with the alternative hypothesis that the ratio of takeaways for
‘Caption-Takeaway 1’ is greater than the ratio of takeaways for ‘Caption-
Takeaway 2’. Asterisks indicate significance with Bonferroni correction.
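The comparisons in Table 2 are one-sided tests of two proportions. A minimal sketch with made-up counts follows, using statsmodels’ `proportions_ztest` (a standard two-sample z-test for proportions); the counts and the number of planned comparisons used for the Bonferroni threshold are assumptions.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: of n1 takeaways under caption condition A, x1 mention
# the feature; likewise x2 of n2 under condition B. The alternative is that
# the proportion under A is greater, as in Table 2.
x1, n1 = 90, 150
x2, n2 = 55, 140
z, p = proportions_ztest([x1, x2], [n1, n2], alternative="larger")

n_comparisons = 26               # assumed number of planned pairwise tests
alpha = 0.05 / n_comparisons     # Bonferroni-corrected threshold
print(f"Z = {z:.3f}, p = {p:.4f}, significant: {p < alpha}")
```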
Figure 7. (Top row) Comparison of percentages of takeaways that mention the
same feature as the caption for the synthetic (a) and real-world (b) datasets
(i.e., darker bars on the left correspond to the red bar from Figure 6a, the
green bar from 6b, the blue bar from 6c, and the grey bar from 6d), and
percentages of takeaways that mention the feature in the no caption condition
(i.e., the right lighter-hued bars in the chart correspond to the bars from
Figure 6e). (Middle row) Percentage of takeaways mentioning the visual
features at each prominence level when presented with the basic caption.
(Bottom row) The left bars of the top-row charts (a) and (b), divided based
on whether the caption contains external information (purple bars) or does not
(olive bars). The leftmost Any bars show aggregates over all prominence
levels. Asterisks indicate significant difference.
Figures 7a and 7b show the percentage of takeaways that mention the same
feature as the caption for the synthetic and the real-world datasets
respectively (left darker bars) and compare them with the percentages
corresponding to the no-caption case (lighter-hued bars on the right). We see
that captions do play a role in forming takeaways: when a caption mentions a
feature, the takeaways are more likely to mention that feature (i.e., each
darker bar in Figures 7a and 7b is usually longer than the corresponding
lighter-hued bar to its right).
Planned pairwise Z-tests with Bonferroni correction are shown in Table 2.
Block 1 shows that the differences between the corresponding color bars are
significant for the second most prominent, third most prominent, and non-
prominent features. For the most prominent feature, we find that while a
higher proportion of people mentioned the most prominent feature in their
takeaways when the caption mentions it, the difference is only significant for
the synthetic charts. This is possibly because people already include the most
prominent feature in their takeaways even in the no-caption condition, leaving
little room for the caption to increase that proportion.
While we confirmed that both the chart and caption play a role as to what the
reader takes away from them, the key question is how the chart and the caption
interact with each other – Do they have a synergistic effect when they
emphasize the same feature? Which one wins out when they emphasize different
features? Referring to Figure 6, we see the synergistic effect of the double-
emphasis from the chart and caption when they emphasize the same feature
(Figures 6a and 6f). In particular, the participants took away from the most
prominent feature significantly more often than from any other feature in the
chart (Table 2 Block 3). When the caption diverged from the chart and
described a feature that was not prominent, the participants relied more on
the chart and took away from the most prominent feature significantly more
than the feature described in the caption (Table 2 Block 4, rows 3 and 6;
Figures 6d and 6i). When the caption did not diverge as much and described the
second or the third most prominent feature, the takeaways mentioned the
feature described in the caption more than the most prominent feature (Table 2
Block 4, rows 1, 2, 4, and 5; Figures 6b, 6c, 6g, and 6h). However, the
difference was smaller than the difference between the ratio of people who
took away from the most prominent feature and the ratio of people who took
away from any of the other features. We believe this result may be due to the
fact that the charts still had more influence on the readers than the captions,
as the second and the third most prominent features are still among the
features emphasized by the chart.
We observe from Figure 7 that the chart also plays an important role in what
people take away – when a caption mentions a more prominent feature, the
takeaways more consistently mention that feature. Specifically, we see that
the bars for the higher-prominence features are taller than the bars for the
lower-prominence features, indicating an increase in the effectiveness of the
chart in reinforcing the message in the caption. Planned pairwise Z-tests with
Bonferroni correction between each subsequent pair of bars (red bar vs. green
bar, green bar vs. blue bar, blue bar vs. gray bar) (Table 2 Block 2) find
that the red bar vs. green bar difference is significant for real-world charts
and the blue bar vs. gray bar difference is significant for both synthetic and
real-world charts, whereas the green bar vs. blue bar difference is not
significant. We believe that in several charts in our dataset the visual
prominence levels of some of the top-ranked features are similar (i.e., the
difference in prominence between the 1st and 2nd ranked features is small),
which results in a smaller difference between the corresponding bars, although
the trend is in the right direction.
| | Reported Reliance
---|---|---|---
Source | Caption Type | Chart | Caption
Block 1. Overall
Synthetic | all | $4.675\pm 0.670$ | $2.624\pm 1.609$
Real-world | all | $4.536\pm 0.784$ | $2.779\pm 1.679$
Block 2. Prominence
Synthetic | 1st | $4.590\pm 0.711$ | $3.249\pm 1.327$
2nd | $4.567\pm 0.814$ | $3.082\pm 1.433$
3rd | $4.567\pm 0.726$ | $3.059\pm 1.408$
non-pro | $4.775\pm 0.549$ | $2.447\pm 1.429$
| basic | $4.850\pm 0.377$ | $2.593\pm 1.320$
Real-world | 1st | $4.494\pm 0.838$ | $3.405\pm 1.481$
2nd | $4.462\pm 0.890$ | $3.165\pm 1.359$
3rd | $4.503\pm 0.805$ | $3.236\pm 1.354$
non-pro | $4.595\pm 0.718$ | $2.680\pm 1.545$
| basic | $4.628\pm 0.601$ | $2.718\pm 1.568$
Block 3. External Information
Synthetic | w/o ext | $4.679\pm 0.688$ | $2.798\pm 1.402$
w/ ext | $4.573\pm 0.728$ | $3.110\pm 1.448$
Real-world | w/o ext | $4.606\pm 0.741$ | $3.061\pm 1.481$
w/ ext | $4.424\pm 0.875$ | $3.194\pm 1.439$
Table 3. The reported reliance on the chart and the caption respectively on
5-point Likert scales. Block 1 shows the reported reliance across all the
captions. Block 2 shows the reported reliance depending on the prominence of
the feature described in the chart and Block 3 shows the reported reliance
depending on the inclusion of external information. The values are reported in
the form of $\mu\pm\sigma$.
Table 3 shows the average and standard deviation of how much the participants
reported having relied on the chart and the caption respectively on a 5-point
Likert scale. The results in Table 3 Block 1 suggest that the participants
drew information from both the chart and the caption when determining their
takeaways, although they consistently relied on the chart more than the
caption. These results potentially shed light on why participants took away
more often from the chart than the caption when they began to diverge – they
relied more on the chart than the caption. The results further suggest that
the participants’ tendency to rely on the charts grew while their tendency to
rely on the captions declined as the prominence of the feature described in
the caption decreased (Table 3 Block 2). We found a significant drop in the
self-reported reliance on the caption when the caption described a non-
prominent feature compared to when it described the third-most prominent
feature (synthetic: Mann-Whitney $U=28941$, $p<0.001$; real-world: Mann-
Whitney $U=9666$, $p<0.001$) whereas the increase in the reported reliance on
the chart when the caption described a non-prominent feature compared to when
it described the third-most prominent feature was only significant with the
synthetic charts (Mann-Whitney $U=32844.5$, $p<0.001$). Although the general
trend is in the right direction, we did not find significant differences in
the reliance scores when the caption mentioned one of the top three prominent
features. This may be because the difference in prominence is not as great
among these features as it is with the non-prominent feature. These results
are in line with our findings from the takeaways; we find that when the chart
contains a high-prominence visual feature, but the caption emphasizes a low-
prominence feature, participants relied more on the chart and less on the
caption.
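The reliance comparisons above are Mann-Whitney U tests on the Likert responses. A minimal SciPy sketch follows; the scores are invented for illustration and are not the study’s data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert reliance-on-caption scores for two conditions.
reliance_3rd    = [3, 4, 3, 2, 4, 3, 5, 3, 2, 4, 3, 4]
reliance_nonpro = [2, 1, 3, 2, 2, 1, 3, 2, 2, 1, 2, 3]
u, p = mannwhitneyu(reliance_3rd, reliance_nonpro, alternative="greater")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
```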
Considering all these results together suggests that we can accept our
hypothesis H1 – readers take away from the highly prominent features when the
chart and caption both emphasize the same feature, and their inclination to
rely on the most prominent feature instead of the feature described in the
caption becomes greater when the caption describes a less prominent
feature.
H1 Additional Results. We also collected takeaways for charts with basic
captions that describe the axes of the chart (Figure 7, middle row). We find
that the percentage of takeaways for each of the features is similar to
that of the no-caption condition. In fact, Pearson’s chi-square test finds no
significant difference between the takeaway histograms of the basic caption
and the no-caption conditions (synthetic: $\chi^{2}(4)=1.564$, $p=0.815$;
real-world: $\chi^{2}(4)=7.168$, $p=0.127$). While automated captioning tools
(Tableau Software, 2020; Microsoft Q & A, 2020) generate captions
corresponding to our basic captions, we were unable to find evidence that
these captions affect what people take away. Such captions may help readers
with accessibility needs; however, we believe further exploration will help
future systems determine appropriate uses for such captions.
Assessing H2. To evaluate H2, we examine whether including external
information in the caption makes it more likely for readers to take away the
feature mentioned in the caption. We find that people are significantly more
likely to mention the feature described in the caption when it includes
external information than when it does not (Figures 7e and 7f, Any bars). A
pairwise Z-test finds a significant difference between these ratios
(synthetic: $Z=2.273,p=0.011$; real-world: $Z=2.032,p=0.021$). In addition,
the reported reliance shifted from the chart towards the caption when the
caption included external information, which is in line with our findings
(Table 3 Block 3). Specifically, the reported reliance on the chart was significantly
lower with external information (synthetic: Mann-Whitney $U=137318$,
$p<0.001$; real-world: Mann-Whitney $U=45292$, $p=0.001$); the reported
reliance on the caption was higher with external information, but the
difference was only significant for the synthetic charts (synthetic: Mann-
Whitney $U=131594$, $p<0.001$; real-world: Mann-Whitney $U=48599.5$,
$p=0.132$).
Together, these results suggest that we can accept H2: including external
information in the caption helps reinforce the message in the caption, making
readers more likely to take away the feature it describes.
H2 Additional Results. Figure 7 (bottom row) breaks down the ratio of the
takeaways that mention the feature described in the caption by level of
prominence of the feature. The figure shows that there is usually an increase
in the ratio of the takeaways that mentioned the feature described in the
caption when the caption included external information for each level of
prominence. Among the differences, we only found a significant difference when
the caption mentioned a non-prominent feature for synthetic charts ($Z=3.027$,
$p=0.001$). Further study could shed light on the correlation between the
prominence of the feature described in the caption and how external
information affects the readers’ takeaways.
## 5\. Design Guidelines
(a) “The cheap Yen and PM Abe’s tourism policy caused the number of tourists
in Japan to steeply rise between 2011 and 2018.”
(b) “Due to the 2008 Financial Crisis, the number of tourists in Japan
decreased in 2009.”
Figure 8. Examples of chart-caption pairs authored to emphasize the same
feature in the data. (a) Both the caption and chart emphasize the sharp
positive trend. (b) The original chart is modified to zoom into a portion of
the time range and the feature is made more visually prominent with an
annotation showing the dip in the number of tourists. The caption describes
that dip with additional context.
Our findings indicate that the readers will take away from the feature doubly
emphasized by both the chart and caption if they provide a coherent message.
However, when the chart and caption diverge in terms of the feature that they
are emphasizing, readers are less likely to use information from the caption
in their takeaways. To improve the efficacy of the chart-caption pair, authors
could (1) design the chart to make the feature described in the caption more
prominent and (2) include external information in the caption to give more
context to the information in the caption.
There are several ways for authors to emphasize aspects of the data in a chart
so that readers’ attention is drawn to these visual features. One technique is
to ensure that aspects of the data such as trends and outliers are presented
at the right level of detail or interval range; too broad a measurement
interval may hide a signal. For example, assume that we were given the chart
in Figure 8a with the caption in Figure 8b. The decrease in 2009 is not
very prominent because the large increase starting in 2011 overshadows the
decrease. Zooming in on the intended feature and cropping out irrelevant
regions (Figure 8b) helps make the feature more visually prominent.
However, when zooming into the data in this manner, authors must take
precautions to avoid removing important information or rendering the chart
misleading (O’Brien and Lauer, 2018; Pandey et al., 2015).
A simple way to further facilitate effective chart reading is to enhance the
visualization with highlighting and overlays such as annotations to guide the
audience’s attention to the image area being described (Kong and Agrawala,
2012) (Figure 8b). Sometimes, a different chart altogether may be more
effective to emphasize a particular aspect of the data. For example,
converting continuous data in line charts into discrete values could help
emphasize individual values that the author would like to focus on. The
consistency between the redesigned chart and caption helps readers take away
from the doubly emphasized feature (Figure 8).
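As a sketch of these emphasis techniques, the Matplotlib snippet below zooms a tourist-arrivals series to the relevant range and highlights the dip with shading and an annotation, in the spirit of Figure 8b; the data values are invented and only the technique matters.

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented yearly tourist arrivals (millions).
years = np.arange(2003, 2020)
tourists = np.array([5.2, 6.1, 6.7, 7.3, 8.3, 8.4, 6.8, 8.6, 6.2,
                     8.4, 10.4, 13.4, 19.7, 24.0, 28.7, 31.2, 31.9])

fig, ax = plt.subplots()
ax.plot(years, tourists)
ax.set_xlim(2003, 2012)   # zoom in so later growth does not overshadow the dip
ax.set_ylim(4, 10)
ax.axvspan(2008.5, 2009.5, color="red", alpha=0.15)  # shade the region of interest
ax.annotate("2009 dip", xy=(2009, 6.8), xytext=(2010, 9.0),
            arrowprops=dict(arrowstyle="->"))
ax.set_xlabel("Year")
ax.set_ylabel("Tourists (millions)")
plt.show()
```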
## 6\. Future Work
Chart and caption authoring tools. We would like to explore how this work can
provide interesting implications for _both_ chart and caption design to help
the author effectively convey a specific point of view. Enhancements to
visualization authoring tools could suggest chart design alternatives given a
feature that the author would like to emphasize. Specifically, the system
could go further by emphasizing features in the chart according to the main
message the author wants to convey by automatically adding annotations to the
chart, adding highlights, and adjusting levels of detail so that the chart and
the caption deliver a concerted message. This will require formulating a high-
level language specification that the authors can use to communicate to the
system about their intents or a natural language processing module that can
infer the authors’ intents based on the captions they write. Another
interesting direction is coordinating interaction between the chart and the
caption, so that hovering over text in the caption highlights the
corresponding visual feature in the chart and vice versa, to further help the reader.
The resulting system would be a significant extension of the interactive
document reader presented by Kong et al. and Kim et al. (Kong et al., 2014;
Kim et al., 2018). On the captioning side, a system could classify basic
captions, captions about high-prominence features, and captions about low-
prominence features. Based on the classification, the system could suggest
external information to further emphasize the information presented.
Further exploration of caption variations. In this work, we use a template-
based approach for generating captions to minimize the effect of the variation
of natural language and to keep the experiment size reasonable.
Simultaneously, we carefully vary the visual feature described in the caption
and the presence of external information to best understand how people read
captions and charts together to form their takeaways. Future work could study
captions with various natural language expressions and different ways of
emphasis. It would be useful to understand whether the relationship between
multiple features in a caption (e.g., a simple list - “There were major dips
in employment in 2008 and 2020.” or a comparison - “The dip in 2020 was
greater than the dip in 2008.”) has an effect on what readers take away.
Studying how our findings generalize to other types of external information
(e.g., extrapolation, breakdown into subcategories) would be an interesting
direction to pursue.
Generalization to other chart types. Our work explores how readers take away
information when presented with univariate line charts and captions. Basic
chart types still have prominent features (e.g., extrema in bar charts,
outliers in scatterplots) and less prominent features (e.g., a point in a
cluster in scatterplots). We expect similar findings would hold for those
other chart types. We leave it to future work to confirm this intuition.
## 7\. Conclusion
In this paper, we examine what readers take away from both a chart and its
caption. Our results suggest that when the caption mentions visual features of
differing prominence levels, the takeaways differ. When the caption mentions a
specific feature, the takeaways also tend to mention that feature. We also
observed that when a caption mentions a visually prominent feature, the
takeaways more consistently mention that feature. On the other hand, when the
caption mentions a less prominent feature, the readers’ takeaways are more
likely to mention the most prominent feature than the feature described in the
caption. We also find that including external information in
the caption makes the readers more likely to form their takeaways based on the
feature described in the caption. From the results of our study, we propose
guidelines to better design charts and captions together; using visual cues
and alternative chart representations, visual features can be made more
prominent and be further emphasized by their descriptions in the caption.
Design implications from this work provide opportunities for the authoring of
chart and caption pairs in visual analysis tools to effectively convey a
specific point of view to the reader.
###### Acknowledgements.
The authors thank the Stanford University HCI Group and Tableau Research for
their feedback during the development of the studies. The authors also thank
the reviewers for their feedback during the review cycle. This work is
supported by NSF award III-1714647.
## References
* Amazon Mechanical Turk (2020) Amazon Mechanical Turk 2020. Amazon Mechanical Turk. https://www.mturk.com.
* Borkin et al. (2013) Michelle A Borkin, Azalea A Vo, Zoya Bylinskii, Phillip Isola, Shashank Sunkavalli, Aude Oliva, and Hanspeter Pfister. 2013. What makes a visualization memorable? _IEEE Transactions on Visualization and Computer Graphics_ 19, 12 (2013), 2306–2315.
* Bransford (1979) J. Bransford. 1979. _Human Cognition: Learning, Understanding, and Remembering_. Wadsworth Publishing Company. https://books.google.com/books?id=7TKLnQEACAAJ
* Carberry et al. (2006) Sandra Carberry, Stephanie Elzer, and Seniz Demir. 2006. Information Graphics: An Untapped Resource for Digital Libraries. In _Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval_ (Seattle, Washington, USA) _(SIGIR ’06)_. Association for Computing Machinery, New York, NY, USA, 581–588. https://doi.org/10.1145/1148170.1148270
* Card et al. (1999) Stuart K. Card, Jock D. Mackinlay, and Ben Shneiderman (Eds.). 1999. _Readings in Information Visualization: Using Vision to Think_. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
* Chen et al. (2019a) Charles Chen, Ruiyi Zhang, Sungchul Kim, Scott Cohen, Tong Yu, Ryan Rossi, and Razvan Bunescu. 2019a. Neural caption generation over figures. In _Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers_. 482–485.
* Chen et al. (2019b) Charles Chen, Ruiyi Zhang, Eunyee Koh, Sungchul Kim, Scott Cohen, Tong Yu, Ryan Rossi, and Razvan Bunescu. 2019b. Figure captioning with reasoning and sequence-level training. _arXiv preprint arXiv:1906.02850_ (2019).
* Cui et al. (2018) Zhe Cui, Sriram Karthik Badam, Adil Yalçin, and Niklas Elmqvist. 2018. DataSite: Proactive Visual Data Exploration with Computation of Insight-based Recommendations. _CoRR_ abs/1802.08621 (2018). arXiv:1802.08621 http://arxiv.org/abs/1802.08621
* Demiralp et al. (2017) Çağatay Demiralp, Peter J Haas, Srinivasan Parthasarathy, and Tejaswini Pedapati. 2017. Foresight: Rapid data exploration through guideposts. _arXiv preprint arXiv:1709.10513_ (2017).
* Egeth and Yantis (1997) H. Egeth and S. Yantis. 1997. Visual attention: control, representation, and time course. _Annual review of psychology_ 48 (1997), 269–97.
* Elzer et al. (2005) Stephanie Elzer, Sandra Carberry, Daniel Chester, Seniz Demir, Nancy Green, Ingrid Zukerman, and Keith Trnka. 2005. Exploring and exploiting the limited utility of captions in recognizing intention in information graphics. In _ACL_. 223–230.
* Elzer et al. (2011) Stephanie Elzer, Sandra Carberry, and Ingrid Zukerman. 2011. The automated understanding of simple bar charts. _Artificial Intelligence_ 175, 2 (2011), 526–555.
* Fasciano and Lapalme (1996) Massimo Fasciano and Guy Lapalme. 1996. Postgraphe: a system for the generation of statistical graphics and text. In _Eighth International Natural Language Generation Workshop_.
* Ferres et al. (2007) Leo Ferres, Petro Verkhogliad, Gitte Lindgaard, Louis Boucher, Antoine Chretien, and Martin Lachance. 2007. Improving Accessibility to Statistical Graphs: The IGraph-Lite System. In _Proceedings of the 9th International ACM SIGACCESS Conference on Computers and Accessibility_ (Tempe, Arizona, USA) _(Assets ’07)_. Association for Computing Machinery, New York, NY, USA, 67–74. https://doi.org/10.1145/1296843.1296857
* Hegarty and Just (1993) Mary Hegarty and Marcel-Adam Just. 1993. Constructing mental models of machines from text and diagrams. _Journal of memory and language_ 32, 6 (1993), 717–742.
* Hu et al. (2018) Kevin Hu, Diana Orghian, and César Hidalgo. 2018. DIVE: A Mixed-Initiative System Supporting Integrated Data Exploration Workflows. In _Proceedings of the Workshop on Human-In-the-Loop Data Analytics_. ACM, 5.
* Hullman et al. (2013) Jessica Hullman, Nicholas Diakopoulos, and Eytan Adar. 2013\. Contextifier: automatic generation of annotated stock visualizations. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_. 2707–2716.
* Kim et al. (2020) Dae Hyun Kim, Enamul Hoque, and Maneesh Agrawala. 2020. Answering Questions about Charts and Generating Visual Explanations. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–13.
* Kim et al. (2018) Dae Hyun Kim, Enamul Hoque, Juho Kim, and Maneesh Agrawala. 2018. Facilitating Document Reading by Linking Text and Tables. In _UIST_ _(UIST ’18)_. ACM, 423–434.
* Kong et al. (2018) Ha-Kyung Kong, Zhicheng Liu, and Karrie Karahalios. 2018. Frames and slants in titles of visualizations on controversial topics. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. 1–12.
* Kong et al. (2019) Ha-Kyung Kong, Zhicheng Liu, and Karrie Karahalios. 2019. Trust and Recall of Information across Varying Degrees of Title-Visualization Misalignment. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3290605.3300576
* Kong and Agrawala (2012) N. Kong and M. Agrawala. 2012. Graphical Overlays: Using Layered Elements to Aid Chart Reading. _IEEE Transactions on Visualization and Computer Graphics_ 18, 12 (2012), 2631–2638.
* Kong et al. (2014) Nicholas Kong, Marti A. Hearst, and Maneesh Agrawala. 2014. Extracting References Between Text and Charts via Crowdsourcing. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (Toronto, Ontario, Canada) _(CHI ’14)_. ACM, New York, NY, USA, 31–40. https://doi.org/10.1145/2556288.2557241
* Large et al. (1995) Andrew Large, Jamshid Beheshti, Alain Breuleux, and Andre Renaud. 1995. Multimedia and comprehension: The relationship among text, animation, and captions. _Journal of the American society for information science_ 46, 5 (1995), 340–347.
* Li et al. (2018) Pengyuan Li, Xiangying Jiang, and Hagit Shatkay. 2018. Extracting Figures and Captions from Scientific Publications. In _Proceedings of the 27th ACM International Conference on Information and Knowledge Management_ (Torino, Italy) _(CIKM ’18)_. Association for Computing Machinery, New York, NY, USA, 1595–1598. https://doi.org/10.1145/3269206.3269265
* Liang and Huang (2010) J. Liang and M. L. Huang. 2010. Highlighting in Information Visualization: A Survey. In _2010 14th International Conference Information Visualisation_. 79–85.
* Liu et al. (2009) Shixia Liu, Michelle X. Zhou, Shimei Pan, Weihong Qian, Weijia Cai, and Xiaoxiao Lian. 2009. Interactive, Topic-Based Visual Text Summarization and Analysis. In _Proceedings of the 18th ACM Conference on Information and Knowledge Management_ (Hong Kong, China) _(CIKM ’09)_. Association for Computing Machinery, New York, NY, USA, 543–552. https://doi.org/10.1145/1645953.1646023
* Liu et al. (2012) Shixia Liu, Michelle X. Zhou, Shimei Pan, Yangqiu Song, Weihong Qian, Weijia Cai, and Xiaoxiao Lian. 2012. TIARA: Interactive, Topic-Based Visual Text Summarization and Analysis. _ACM Trans. Intell. Syst. Technol._ 3, 2, Article 25 (Feb. 2012), 28 pages. https://doi.org/10.1145/2089094.2089101
* Matzen et al. (2017) Laura E Matzen, Michael J Haass, Kristin M Divis, Zhiyuan Wang, and Andrew T Wilson. 2017. Data visualization saliency model: A tool for evaluating abstract data visualizations. _IEEE transactions on visualization and computer graphics_ 24, 1 (2017), 563–573.
* Microsoft Q & A (2020) Microsoft Q & A 2020. Microsoft Q & A. https://powerbi.microsoft.com/en-us/documentation/powerbi-service-q-and-a/.
* Mittal et al. (1995) Vibhu Mittal, Steven Roth, Johanna Moore, Joe Mattis, and Giuseppe Carenini. 1995. Generating Explanatory Captions for Information Graphics. _Proceeedings of the International Joint Conference on Artificial Intelligence_ , 1276–1283.
* Nugent (1983) Gwen C Nugent. 1983. Deaf students’ learning from captioned instruction: The relationship between the visual and caption display. _The Journal of Special Education_ 17, 2 (1983), 227–234.
* O’Brien and Lauer (2018) Shaun O’Brien and Claire Lauer. 2018. Testing the susceptibility of users to deceptive data visualizations when paired with explanatory text. In _Proceedings of the 36th ACM International Conference on the Design of Communication_. 1–8.
* Ottley et al. (2016) A. Ottley, E. M. Peck, L. T. Harrison, D. Afergan, C. Ziemkiewicz, H. A. Taylor, P. K. J. Han, and R. Chang. 2016\. Improving Bayesian Reasoning: The Effects of Phrasing, Visualization, and Spatial Ability. _IEEE Transactions on Visualization and Computer Graphics_ 22, 1 (2016), 529–538.
* Pandey et al. (2015) Anshul Vikram Pandey, Katharina Rall, Margaret L Satterthwaite, Oded Nov, and Enrico Bertini. 2015. How deceptive are deceptive visualizations? An empirical analysis of common distortion techniques. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_. 1469–1478.
* Pew Research (2020) Pew Research 2020. Pew Research. https://www.pewresearch.org.
* Qian et al. (2020) Xin Qian, Eunyee Koh, Fan Du, Sungchul Kim, and Joel Chan. 2020. A Formative Study on Designing Accurate and Natural Figure Captioning Systems. In _Extended Abstracts of the 2020 CHI Conference_. 1–8.
* Srinivasan et al. (2019) A. Srinivasan, S. M. Drucker, A. Endert, and J. Stasko. 2019. Augmenting Visualizations with Interactive Data Facts to Facilitate Interpretation and Communication. _IEEE Transactions on Visualization and Computer Graphics_ 25, 1 (2019), 672–681. https://doi.org/10.1109/TVCG.2018.2865145
* Tableau Public (2020) Tableau Public 2020. Tableau Public. https://public.tableau.com.
* Tableau Software (2020) Tableau Software 2020. Tableau Software. http://www.tableau.com.
* Tufte (1990) Edward Tufte. 1990. _Envisioning Information_. Graphics Press, USA.
* Vartak et al. (2015) Manasi Vartak, Samuel Madden, and Aditya N Parmeswaran. 2015. SEEDB: Supporting Visual Analytics with Data-Driven Recommendations. (2015).
* Washington Post (2020) Washington Post 2020. Washington Post. https://www.washingtonpost.com.
* Wei et al. (2010) Furu Wei, Shixia Liu, Yangqiu Song, Shimei Pan, Michelle X. Zhou, Weihong Qian, Lei Shi, Li Tan, and Qiang Zhang. 2010. TIARA: A Visual Exploratory Text Analytic System. In _Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (Washington, DC, USA) _(KDD ’10)_. Association for Computing Machinery, New York, NY, USA, 153–162. https://doi.org/10.1145/1835804.1835827
* Wikipedia (2020) Wikipedia 2020. Wikipedia. https://www.wikipedia.org.
* Wills and Wilkinson (2010) Graham Wills and Leland Wilkinson. 2010. Autovis: automatic visualization. _Information Visualization_ 9, 1 (2010), 47–69.
* Xiong et al. (2019) Cindy Xiong, Lisanne van Weelden, and Steven Franconeri. 2019. The curse of knowledge in visual data communication. _IEEE Transactions on Visualization and Computer Graphics_ (2019).
* Yu et al. (2003) Jin Yu, Ehud Reiter, Jim Hunter, and Somayajulu Sripada. 2003. SumTime-Turbine: A Knowledge-Based System to Communicate Gas Turbine Time-Series Data. In _Developments in Applied Artificial Intelligence_, Paul W. H. Chung, Chris Hinde, and Moonis Ali (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 379–384.
# Probabilistic Solar Power Forecasting: Long Short-Term Memory Network vs
Simpler Approaches
Vinayak Sharma <EMAIL_ADDRESS>, Jorge Ángel González Ordiano, Ralf Mikut, and Umit Cali
University of North Carolina at Charlotte; Institute for Automation and Applied Informatics, Karlsruhe Institute of Technology; Colorado State University; Norwegian University of Science and Technology
###### Abstract
The high penetration of volatile renewable energy sources such as solar makes
methods for coping with the associated uncertainty of paramount importance.
Probabilistic forecasts are an example of such methods, as they assist energy
planners in their decision-making process by providing them with information
about the uncertainty of future power generation. Currently, there is a trend
towards the use of deep learning probabilistic forecasting methods. However,
the point at which the more complex deep learning methods should be preferred
over simpler approaches is not yet clear. Therefore, the current article
presents a simple comparison between a long short-term memory neural network
and simpler approaches. The comparison consists of training
and comparing models able to provide one-day-ahead probabilistic forecasts for
a solar power system. Moreover, the current paper makes use of an open source
dataset provided during the Global Energy Forecasting Competition of 2014
(GEFCom14).
###### keywords:
GEFCom14, Neural Networks, Quantile Regressions, LSTM, Probabilistic
Forecasting
## 1 Introduction
Over the past couple of years solar power has become one of the most popular
renewable energy sources (RES). Unfortunately, the generation of solar power
depends completely on the Sun [5]. This dependency on weather adds uncertainty
and variability to the generation of solar power. To deal with this
uncertainty, solar forecasts are made in order to predict future power
generation.
Solar power forecasts can be categorized into deterministic and probabilistic
forecasts [3]. Some examples of deterministic forecasting methods present in
literature can be found in [1, 7, 13, 23, 24]. While deterministic forecasts
predict only the expected future generation, probabilistic forecasts offer a
description of the forecast uncertainty. This additional information helps in
managing resources, as well as in calculating risks associated with future
decisions [4, 11]. Furthermore, economic benefits can also be gained from
using probabilistic forecasts, since they improve the decision making
capabilities within electricity markets [22].
Various methodologies to generate probabilistic solar power forecasts have
been discussed in literature. For example, nearest neighbor approaches [25],
vector auto-regressive (VAR) models [6], methods for estimating volatility
[9], and ensemble models [2]. Additionally, examples of solar power
probabilistic forecasting using deep learning techniques can also be found in
literature, e.g., in [10]. However, even though deep learning methodologies
have gained in popularity over the past couple of years, they have often
underperformed in terms of accuracy when compared to other statistical
forecasting techniques [19].
For this reason, the current article presents a small experiment with the goal
of defining a starting point for understanding the limitations of deep
learning probabilistic forecasting methodologies. To be more specific, the
experiment consists of training, evaluating, and comparing solar power
probabilistic forecasts based on quantile regressions [8] obtained using a
long short-term memory (LSTM) neural network (i.e. a deep learning approach)
and simpler techniques (i.e. polynomials and a fully connected artificial
neural network). Furthermore, the open source dataset of the Global Energy
Forecasting Competition of 2014 is used for the experiment.
The remainder of the current paper is divided as follows. Section 2 presents a
brief description of the various methods tested. Thereafter, Section 3
describes more in detail the conducted experiment. Afterwards, Section 4
presents the obtained results and finally, Section 5 offers the conclusion and
outlook of this work.
## 2 Methods
Quantile regressions are useful for estimating the uncertainty of a time
series’ future. A finite time series $\{P[k];k=1,\dots,K\}$ is defined as a
sequence of observations $P[k]$ measured at different points in time, with the
timestep $k$ defining the order of the observation in the sequence and
$K\in\mathbb{N}_{>0}$ representing the time series’ length.
In turn, a quantile regression can be viewed as a model able to estimate a
quantile with a probability $\tau\in(0,1)$ of a future value $P[k+H]$ at a
forecast horizon $H\in\mathbb{N}_{>0}$. For instance, a quantile regression
that takes auto-regressive and exogenous values as input can be defined as:
$\hat{P}_{\tau}[k+H]=f(P[k],\dots,P[k-H_{\mathrm{l}}],\mathbf{u}^{T}[k],\dots,\mathbf{u}^{T}[k-H_{\mathrm{l}}];\hat{\boldsymbol{\theta}}_{\tau})\text{
;}$ (1)
where $\hat{P}_{\tau}[k+H]$ is the quantile estimate, $H_{\mathrm{l}}$ is the
number of used lags, and $\mathbf{u}[k]$ represents a vector containing
observations of exogenous time series at timestep $k$. Moreover,
$\hat{\boldsymbol{\theta}}_{\tau}$ is a vector containing the estimated
regression parameters, which are traditionally obtained through the
minimization of the sum of pinball-losses [8]. Furthermore, one of the most
important properties of quantile regressions is the fact that pairs of them
can be combined to form intervals with a certain probability of containing a
future time series’ value (i.e. probabilistic forecasts). Finally, more
detailed information of the models used in the present article can be found in
the following sections.
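For concreteness, a minimal NumPy sketch of the pinball loss follows, using the common convention in which under-predicting a $\tau$-quantile is weighted by $\tau$; the sample values are arbitrary.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Mean pinball loss of a tau-quantile estimate; minimizing this loss
    over the training data yields the quantile regression parameters."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# A tau = 0.9 estimate is penalized more for under- than for over-prediction:
y = np.array([0.2, 0.5, 0.8])
print(pinball_loss(y, y - 0.1, tau=0.9))  # under-prediction -> 0.09
print(pinball_loss(y, y + 0.1, tau=0.9))  # over-prediction  -> 0.01
```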
### 2.1 Simple Models
#### 2.1.1 Polynomials
Quantile regressions trained using a polynomial model are multiple linear
quantile regressions, whose features can be raised to a maximal allowed
degree. Some examples of this type of model can be found in [14].
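A linear quantile regression of this kind can be fit, for example, with statsmodels’ `QuantReg`. The sketch below uses synthetic data with a single radiation feature; the data-generating process and the feature choice are our own assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data: power roughly proportional to radiation, with noise that
# grows with radiation so the quantiles fan out.
rng = np.random.default_rng(0)
radiation = rng.uniform(0.0, 1.0, 500)
power = 0.8 * radiation + rng.normal(0.0, 0.05 + 0.1 * radiation)

X = sm.add_constant(radiation)        # intercept + linear term (cf. Poly1)
model = sm.QuantReg(power, X)
lower = model.fit(q=0.1).params       # 0.1-quantile regression
upper = model.fit(q=0.9).params       # 0.9-quantile regression
print("0.1-quantile coefficients:", lower)
print("0.9-quantile coefficients:", upper)  # the pair bounds an 80% interval
```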
#### 2.1.2 Fully Connected Artificial Neural Network
The fully connected artificial neural network (FCANN) used in the present
article is a simple multilayer perceptron [15] with only one hidden layer. The
advantage of this model over the polynomials is the fact that it can more
easily describe non-linear relations between its inputs and its desired output
(i.e. the solar power time series’ future values). It needs to be mentioned
that the FCANN quantile regressions are trained using a method described in
[12].
### 2.2 Long Short-Term Memory Neural Network
A Long Short-Term Memory (LSTM) neural network model [18] is part of the
Recurrent Neural Network (RNN) family. An RNN is a neural network able to
learn temporal dependencies in data. In other words, RNNs can establish a
correlation between previous data points and the current data point in the
training sequence [17]. This property makes them well suited for solar power
forecasting. However, in cases where long-term relationships need to be
learned, traditional RNNs face the problem of vanishing gradients. LSTMs solve
this issue by using an additional unit called a memory cell [18], which helps
them learn and retain long-term relationships [10]. LSTM quantile regressions
can be obtained by using the pinball-loss as error function during training.
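As a rough sketch of how such a loss can be supplied to a deep learning framework (our own illustration assuming TensorFlow's Keras API; the function name is hypothetical and this is not necessarily the exact setup used in the paper), the pinball-loss for a fixed quantile level $\tau$ can be written as a custom Keras error function:

```python
import tensorflow as tf

def make_pinball_loss(tau):
    """Return a Keras-compatible pinball loss for quantile level tau."""
    def loss(y_true, y_pred):
        e = y_true - y_pred
        # max(tau*e, (tau-1)*e) equals tau*e for under-prediction
        # and (1-tau)*|e| for over-prediction.
        return tf.reduce_mean(tf.maximum(tau * e, (tau - 1.0) * e))
    return loss
```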
## 3 Experiment
### 3.1 Data
The dataset used comes from the solar track of the Global Energy Forecasting
Competition of 2014 (i.e. GEFCom14) [16]. It contains three different sets of
time series with hourly power measurements of three solar power systems in
Australia (normalized to values between 0 and 1), as well as a number of
corresponding weather forecast time series for the period of April $1^{st}$,
2012 to July $1^{st}$, 2014. In the present work, only the forecast weather
time series containing forecasts of the solar surface radiation, solar thermal
radiation, and top net solar radiation from the 1st zoneID are used.
Additionally, the data of only one of the solar power systems is utilized,
with 70% of the data used for training and 30% for testing.
### 3.2 Experiment Description
The experimental setup, designed to compare the performance of the LSTM to
that of the other models, consists of forecasting, on a daily basis, 99
quantiles (i.e. $\tau=0.01,0.02,\dots,0.99$) of the next 24 hours of solar
power generation (i.e. $H=24$, due to the time series' hourly resolution).
The same input data is used for all quantile regressions, i.e. the solar
power measured over the past 24 hours and the forecast radiation values for
the next day.
The polynomial models used have maximal allowed degrees of one up to three,
hence they are referred to as Poly1, Poly2, and Poly3. In turn, the simple
FCANN models are multilayer perceptrons with one hidden layer and 10 hidden
neurons with a tanh activation function. Additionally, a forward feature
selection is applied to select the four most relevant features with which both
the polynomial and FCANN models are later trained. Moreover, to improve the
forecast accuracy, the night values are removed during training and
automatically set to zero during testing. Note that all polynomial and FCANN
models are trained using the MATLAB open-source toolbox SciXMiner [20].
Finally, an LSTM model with one input layer, one hidden layer, and one output
layer is developed using the Keras API (keras.io). The hidden layer consists
of 100 hidden neurons with a sigmoid activation function. Additionally,
dropout is applied to avoid overfitting the model. Notice that the neural
network architecture (i.e. its hyper-parameters) was selected after conducting
a series of preliminary experiments.
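A minimal Keras sketch of an architecture along these lines (reusing the make_pinball_loss helper sketched in Section 2.2; the dropout rate, optimizer, output dimension, and input shapes are our own assumptions for illustration, not the exact configuration of the paper):

```python
import tensorflow as tf

def build_lstm_quantile_model(tau, n_lags=24, n_features=4, dropout_rate=0.2):
    """One-hidden-layer LSTM quantile regression trained with the pinball loss."""
    model = tf.keras.Sequential([
        # 100 hidden neurons with a sigmoid activation, as described above.
        tf.keras.layers.LSTM(100, activation="sigmoid",
                             input_shape=(n_lags, n_features)),
        tf.keras.layers.Dropout(dropout_rate),  # applied against overfitting
        tf.keras.layers.Dense(24),  # the next 24 hourly values
    ])
    model.compile(optimizer="adam", loss=make_pinball_loss(tau))
    return model

# One model per quantile level, e.g. tau = 0.01, 0.02, ..., 0.99.
```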
The value used to evaluate the results on a test set of size $N$ is the
pinball-loss averaged over all the estimated quantiles (as in [16]), i.e.:
$\begin{split}Q_{\mathrm{PL}}&=\operatorname{mean}\{Q_{\mathrm{PL},0.01},\dots,Q_{\mathrm{PL},\tau},\dots,Q_{\mathrm{PL},0.99}\},\ \text{with}\\ Q_{\mathrm{PL},\tau}&=\dfrac{1}{N}\sum^{K-H}_{k=1}\begin{dcases}\tau\,(P[k+H]-\hat{P}_{\tau}[k+H])&\text{, if }P[k+H]\geq\hat{P}_{\tau}[k+H]\\ (1-\tau)\,(\hat{P}_{\tau}[k+H]-P[k+H])&\text{, else}\end{dcases}\end{split}$ (2)
In the previous equation, $Q_{\mathrm{PL},\tau}$ is the pinball-loss obtained
by a quantile regression with probability $\tau$, while $Q_{\mathrm{PL}}$ is
the average of the pinball-losses obtained by all estimated regressions.
Please notice that a comparison based on computation time is excluded from the
present article, as some models are created with MATLAB and others with
Python. Nevertheless, due to its relevance, such a comparison is left for
future related works.
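For concreteness, Eq. (2) amounts to only a few lines of code (a sketch of ours, assuming the forecasts are stored as an array whose column $j$ holds the $\tau=(j+1)/100$ quantile; the array layout is our own convention):

```python
import numpy as np

def avg_pinball(y_true, y_quantiles):
    """Average pinball loss over the 99 quantile levels, Eq. (2).

    y_true:      shape (N,)     observed values P[k+H]
    y_quantiles: shape (N, 99)  column j holds the tau=(j+1)/100 forecast
    """
    taus = np.arange(1, 100) / 100.0
    e = y_true[:, None] - y_quantiles  # P - P_hat, for every tau
    per_tau = np.mean(np.maximum(taus * e, (taus - 1.0) * e), axis=0)
    return per_tau.mean()
```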
## 4 Results
The results on the test set from the above described experiments are presented
in Table 1.
Model | Avg. Pinball-loss [%]
---|---
Poly1 | 1.70
Poly2 | 1.59
Poly3 | 1.66
FCANN | 1.43
LSTM | 1.43
Table 1: Average pinball-loss test set results from all the models
As the contents of Table 1 show, the LSTM outperforms all of the polynomial
models. Nonetheless, the difference in pinball-loss between the LSTM model and
the best performing polynomial model is not large, as it amounts to just
$0.15\%$. Additionally, the FCANN model has on average the same performance as
the more complex LSTM model. The underwhelming performance of the LSTM
regressions may have several causes. One is the extensive need for a large
training dataset, as it is known that deep learning methodologies need large
amounts of data to accurately learn the relationship between the dependent and
the independent variables [21]. Furthermore, the manually selected
hyper-parameters may also be behind the LSTM's underwhelming performance, as
manual selection does not assure that the optimal set of parameters is found.
Another explanation could be that the FCANN covers the existing real-world
non-linearities as well as the LSTM does.
For the sake of illustration, Figure 1 depicts the interval forecasts obtained
by the FCANN and LSTM models.
Figure 1: Interval forecasts obtained with the FCANN and LSTM quantile
regressions
As can be seen in Figure 1, the LSTM intervals seem to be larger than the ones
obtained by the FCANN regressions. Therefore, it can be argued that the LSTM
may be overestimating the uncertainty to some degree. This aspect needs to be
considered in future related works if the accuracy of the LSTM-based
probabilistic forecasts presented herein is to be improved.
## 5 Conclusion and Outlook
The main contribution of the current article is a comparison between a long
short-term memory (LSTM) model and other simpler approaches, specifically some
polynomial models and a simple fully connected artificial neural network
(FCANN). The comparison consists of obtaining and evaluating 24 hour-ahead
probabilistic solar forecasts. The experiment shows that the LSTM model
performs slightly better than the polynomials and obtains the same results as
the FCANN. Therefore, it can be argued that the complex LSTM may not always
provide the best solution, at least not for the dataset evaluated in this
paper. Hence, the current article recommends the use of simpler/classical
forecasting methodologies as a preliminary benchmarking step before exploring
more complex deep learning methods.
Also, since the underwhelming performance of the LSTM may be caused by a sub-
optimal selection of hyper-parameters, hyper-parameter selection via automated
machine learning (AutoML) techniques has to be studied in future related
works. Moreover, aspects like multiple runs of the neural networks and
computation time also need to be taken into consideration in future
experiments. At the same time, comparisons like the one presented herein need
to be studied in the future for probabilistic wind and/or load forecasts.
## Acknowledgement
The present contribution is supported by the Helmholtz Association under the
Joint Initiative “Energy System 2050 — A Contribution of the Research Field
Energy”.
## References
* [1] M. Abuella and B. Chowdhury. Solar power forecasting using support vector regression. arXiv preprint arXiv:1703.09851, 2017.
* [2] S. Alessandrini, L. D. Monache, S. Sperati, and G. Cervone. An analog ensemble for short-term probabilistic solar power forecast. Applied Energy, 157:95 – 110, 2015.
* [3] J. Antonanzas, N. Osorio, R. Escobar, R. Urraca, F. Martinez-de Pison, and F. Antonanzas-Torres. Review of photovoltaic power forecasting. Solar Energy, 136:78–111, 2016.
* [4] R. Appino, J. A. González Ordiano, R. Mikut, T. Faulwasser, and V. Hagenmeyer. On the use of probabilistic forecasts in scheduling of renewable energy sources coupled to storages. Applied Energy, 210:1207–1218, 2017.
* [5] T. Bak, J. Nowotny, M. Rekas, and C. Sorrell. Photo-electrochemical hydrogen generation from water using solar energy. Materials-related aspects. International Journal of Hydrogen Energy, 27(10):991 – 1022, 2002.
* [6] R. Bessa, A. Trindade, C. S. Silva, and V. Miranda. Probabilistic solar power forecasting in smart grids using distributed information. International Journal of Electrical Power & Energy Systems, 72:16 – 23, 2015. The Special Issue for 18th Power Systems Computation Conference.
* [7] M. Diagne, M. David, P. Lauret, J. Boland, and N. Schmutz. Review of solar irradiance forecasting methods and a proposition for small-scale insular grids. Renewable and Sustainable Energy Reviews, 27:65 – 76, 2013.
* [8] L. Fahrmeir, T. Kneib, S. Lang, and B. Marx. Regression: Models, Methods and Applications. Springer, Berlin, Germany, 2013.
* [9] M. Fliess, C. Join, and C. Voyant. Prediction bands for solar energy: New short-term time series forecasting techniques. Solar Energy, 166:519–528, May 2018.
* [10] A. Gensler, J. Henze, B. Sick, and N. Raabe. Deep learning for solar power forecasting — an approach using autoencoder and LSTM neural networks. In 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pages 002858–002865, Oct 2016.
* [11] T. Gneiting and M. Katzfuss. Probabilistic forecasting. Annual Review of Statistics and Its Application, 1(1):125–151, 2014.
* [12] J. Á. González Ordiano, L. Gröll, R. Mikut, and V. Hagenmeyer. Probabilistic energy forecasting using the nearest neighbors quantile filter and quantile regression. International Journal of Forecasting, 2019. In press.
* [13] J. A. González Ordiano, S. Waczowicz, V. Hagenmeyer, and R. Mikut. Energy forecasting tools and services. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 8(2):e1235, 2018.
* [14] J. A. González Ordiano, S. Waczowicz, M. Reischl, R. Mikut, and V. Hagenmeyer. Photovoltaic power forecasting using simple data-driven models without weather data. Computer Science - Research and Development, 32:237–246, 2017.
* [15] T. Hastie. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer Series in Statistics. Springer, New York, NY, second edition, 2016.
* [16] T. Hong, P. Pinson, S. Fan, H. Zareipour, A. Troccoli, and R. J. Hyndman. Probabilistic energy forecasting: Global energy forecasting competition 2014 and beyond. International Journal of Forecasting, 32(3):896 – 913, 2016.
* [17] W. Kong, Z. Y. Dong, Y. Jia, D. J. Hill, Y. Xu, and Y. Zhang. Short-term residential load forecasting based on LSTM recurrent neural network. IEEE Transactions on Smart Grid, 10(1):841–851, Jan 2019.
* [18] Z. C. Lipton. A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019, 2015.
* [19] S. Makridakis, E. Spiliotis, and V. Assimakopoulos. Statistical and machine learning forecasting methods: Concerns and ways forward. PLOS ONE, 13(3):1–26, 03 2018.
* [20] R. Mikut, A. Bartschat, W. Doneit, J. A. González Ordiano, B. Schott, J. Stegmaier, S. Waczowicz, and M. Reischl. The MATLAB toolbox SciXMiner: User’s manual and programmer’s guide. Technical report, arXiv:1704.03298, 2017.
* [21] M. M. Najafabadi, F. Villanustre, T. M. Khoshgoftaar, N. Seliya, R. Wald, and E. Muharemagic. Deep learning applications and challenges in big data analytics. Journal of Big Data, 2(1):1, Feb 2015.
* [22] M. Roulston, D. Kaplan, J. Hardenberg, and L. Smith. Using medium-range weather forecasts to improve the value of wind energy production. Renewable Energy, 28(4):585 – 602, 2003.
* [23] V. Sharma. Deterministic and Probabilistic Forecasting for Wind and Solar Power Using Advance Data Analytics and Machine Learning Techniques. PhD thesis, 2018.
* [24] V. Sharma, U. Cali, V. Hagenmeyer, R. Mikut, and J. A. González Ordiano. Numerical weather prediction data free solar power forecasting with neural networks. In Proceedings of the Ninth International Conference on Future Energy Systems, e-Energy ’18, pages 604–609, New York, NY, USA, 2018. ACM.
* [25] Y. Zhang and J. Wang. GEFCom2014 probabilistic solar power forecasting based on k-nearest neighbor and kernel density estimator. In 2015 IEEE Power Energy Society General Meeting, pages 1–5, July 2015.
# Renormalization group improved pressure for hot and dense quark matter
Jean-Loïc Kneur<EMAIL_ADDRESS>Laboratoire Charles Coulomb
(L2C), UMR 5221 CNRS-Université Montpellier, 34095 Montpellier, France Marcus
Benghi Pinto<EMAIL_ADDRESS>Departamento de Física, Universidade
Federal de Santa Catarina, 88040-900 Florianópolis, SC, Brazil Tulio E.
Restrepo<EMAIL_ADDRESS>Departamento de Física, Universidade
Federal de Santa Catarina, 88040-900 Florianópolis, SC, Brazil CFisUC -
Center for Physics of the University of Coimbra, Department of Physics,
Faculty of Sciences and Technology, University of Coimbra, 3004-516 Coimbra,
Portugal
###### Abstract
We apply the renormalization group optimized perturbation theory (RGOPT) to
evaluate the quark contribution to the QCD pressure at finite temperatures and
baryonic densities, at next-to-leading order (NLO). Our results are compared
to NLO and state-of-the-art higher orders of standard perturbative QCD (pQCD)
and hard thermal loop perturbation theory (HTLpt). The RGOPT resummation
provides a nonperturbative approximation, exhibiting a drastically better
remnant renormalization scale dependence than pQCD, thanks to built-in
renormalization group invariance consistency. At NLO, upon simply adding to
the RGOPT-resummed quark contributions the purely perturbative NLO glue
contribution, our results show a remarkable agreement with ab initio lattice
simulation data for temperatures $0.25\lesssim T\lesssim 1\,{\rm GeV}$, with a
remnant scale dependence drastically reduced as compared to HTLpt.
## I Introduction
The complete prediction of the phase diagram describing strongly interacting
matter transitions represents one of the major theoretical challenges in
contemporary particle physics, despite the enormous progress achieved by
lattice QCD (LQCD) numerical simulations. The main reason is that the well
documented sign problem [1], which arises when finite chemical potential
($\mu$) values are considered, prevents LQCD from being reliably applied at
intermediate to high baryonic densities, while at low densities the problem
may be circumvented, e.g., by performing a Taylor expansion around vanishing
chemical potential. In particular, within the latter regime LQCD has been very
successful in predicting [2] that a crossover occurs at a pseudocritical
temperature close to $T_{pc}\approx 155\,{\rm MeV}$ when $\mu=0$. One
alternative to describe the low temperature-high density domain is to employ
effective quark theories [3], such as the Nambu–Jona-Lasinio model [4],
evaluating physical quantities within some analytical nonperturbative
framework (e.g., mean field theory, MFT). This approach predicts that the
(chiral) phase transition at low-$T$ and finite $\mu$ is of first order [5]
so that, as a byproduct, one should observe a critical end point (CP)
signalled by a second order phase transition taking place at intermediate
values of $T$ and $\mu$ where the first order transition boundary terminates.
This intriguing possibility is about to be tested in heavy-ion collision
experiments by decreasing the beam energy, $\sqrt{s_{NN}}$, so that the
baryonic density increases. In view of these experiments, it is unfortunate
that theoretical predictions using the full QCD machinery cannot be
consistently carried out with the currently available nonperturbative
techniques.
As already emphasized, LQCD is plagued by the sign problem, while analytical
tools such as the large-$N$ approximation (which is related to MFT) may
produce misleading results at criticality. At the other extreme, standard
thermal perturbation theory (PT) is unreliable at the relevant temperature and
chemical potential ranges. Indeed, despite the asymptotic freedom (AF)
property, its convergence can only be achieved at temperatures many orders of
magnitude larger than the critical one. Even at intermediate temperatures, it
is well known that thermal PT is plagued by severe infrared divergences and
has to be resummed to be more compatible with strong coupling regimes (for
pedagogical reviews and lecture notes see, e.g., Refs. [6, 7] and the very
recent Ref. [8]). Yet, even the state-of-the-art, highest available order
thermal PT [9, 10], which incorporates a suitable resummation of infrared
divergences, becomes poorly accurate at moderate to low temperatures. A
very successful alternative resummation method is to systematically expand
from the start around a quasiparticle mass [12, 11, 13], which also avoids
infrared divergences more directly, apart from improving convergence issues.
This is actually close to analytical resummation approaches also used at
zero temperature, reminiscent of the traditional Hartree approximation and its
variational generalizations, suitable to tackle the infrared divergence issues
of massless theories. Basically, one deforms the original Lagrangian by a
Gaussian mass term to be treated as an interaction, defining a modified
perturbative expansion leading to a sequence of (variationally improved)
approximations at successive orders.
The latter approaches appear under various names in the literature, such as
optimized perturbation theory (OPT) [14, 15, 16] (as we dub it here), linear
$\delta$ expansion (LDE) [17], variational perturbation theory (VPT) [18], or
screened perturbation theory (SPT) [12, 19] in the thermal context. Remark
that adding a Gaussian term does not change the polynomial structure of the
theory so that the process is compatible with the usual renormalization
procedure. Already at NLO one usually goes beyond the simple Hartree
approximation since the variational mass is “nonperturbatively” dressed by
incorporating different topologies (exchange terms, vertex corrections, etc)
order by order. Moreover, at leading order the OPT has the welcome property of
exactly reproducing large-$N$ results [20]. As discussed, e.g., in Ref. [21]
this technique has been used to describe successfully a variety of physical
situations, involving phase transitions in different models. On the other
hand, for thermal theories, the SPT method has been generalized over the past
two decades in order to be compatible with Yang-Mills theories. This
generalization was made possible thanks to the hard thermal loop (HTL) gauge-
invariant effective Lagrangian originally built by Braaten and Pisarski [11].
The high temperature expansion based on the HTL Lagrangian, known as hard
thermal loop perturbation theory (HTLpt) [13], has been employed in a series
of applications up to NNLO (three-loops), to describe the QCD thermodynamics,
considering both the glue [22] and quark [23, 24, 25] sectors at finite
temperatures and baryonic densities. Given the intrinsic technical
difficulties associated with the HTLpt evaluations, the NNLO state-of-the-art
calculations performed typically in Refs. [24, 25] represent a remarkable
achievement. Unfortunately, a serious remnant issue is worth noting, one also
plaguing standard PT and not sensibly reduced in HTLpt: the sensitivity to
the arbitrary renormalization scale is observed to substantially increase
when higher orders are considered. More precisely, as compared to PT the NNLO
HTLpt predictions in Refs. [24, 25] are very close to the lattice results for
temperatures down to $T\gtrsim 2\,T_{pc}$ for the commonly chosen “central”
renormalization scale choice, $M=2\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}$. However,
even a moderate scale variation by a factor 2 dramatically affects the
pressure and related thermodynamical quantities, by relative variations of
order 1 or more. It has been argued [25] that resumming logarithms may help to
improve the situation but, as explained in Refs. [27, 26], it appears that the
lack of renormalization group (RG) invariance is more basically rooted within
the HTLpt approach.
More recently an alternative combining the OPT framework with RG properties
has been proposed: the renormalization group optimized perturbation theory
(RGOPT)[28, 29, 27, 26]. The main novelty is that it restores RG invariance at
all stages of the calculation, in particular when fixing the arbitrary
variational mass parameter. At vanishing temperatures it has been used in QCD
up to high (three and four-loop) orders to estimate the basic coupling
$\alpha_{s}$ [29], predicting values very compatible with the world averages
[30]. Also accurate values of the (vacuum) quark condensate were obtained at
four-loop[31] and five-loop[32] orders. Concerning thermal theories the RGOPT
has been applied to the simpler scalar $\phi^{4}$ model [27, 26] at NLO, as
well as to the nonlinear sigma model (NLSM) [33]. In these thermal
applications the RGOPT and PT/SPT predictions for the pressure have been
compared, showing how the former approximation ameliorates the generic
residual scale dependence of thermal perturbation theories at increasing
perturbative orders. More recently we have evaluated the quark contribution to
the QCD pressure at two-loop (NLO) at finite densities and vanishing
temperatures, showing how the method improves over perturbative QCD (pQCD)
[34]. In the present work we extend our approach to include the effects of a
thermal bath. Note that applying the RGOPT readily to the glue contributions
is beyond the present scope, due to some specific technical difficulties as
briefly explained below (work in this direction is in progress [35]).
Therefore in the present application the RGOPT resummation will be applied
strictly only to the quark sector, while the gluons will be treated as in
standard (NLO) PT. In the end both contributions will be combined in order to
produce our complete final prediction for the NLO QCD pressure.
The paper is organized as follows. In the next section we briefly review our
starting point, the perturbative expressions considered for the (massive)
quark pressure at NLO, for which the basic RGOPT construction is recalled. In
Sec. III the RGOPT is precisely defined for the quark pressure up to NLO (two-
loop). Details of our two possible prescriptions at NLO are described in Sec.
IV (that may be skipped by the reader only interested in the main results).
Then Sec.V illustrates our main results for the pressure, both for the pure
quark sector and for the full QCD one. We compare our results with the NLO and
state-of-the-art higher orders of both PT and HTLpt, and also to lattice data
for the complete QCD pressure. Sec. VI contains our conclusions and
perspectives. Finally, three appendices specify some formulas and additional
details used in our analysis.
## II Quark Contribution to the QCD Pressure
### II.1 RG invariant perturbative pressure
Figure 1: Feynman diagrams contributing to the perturbative quark pressure up
to NLO ${\cal O}(g)$.
At two-loop order $g$ (in all that follows we normalize for convenience the
running coupling in the $\overline{\rm MS}$-scheme as $g(M)\equiv
4\pi\alpha_{S}(M)$), the contribution of (massive) quarks to the QCD
perturbative pressure (the Feynman diagrams displayed in Fig. 1) can be
obtained by combining the vacuum ($T=\mu=0$) results of Ref.[31] and the
$T,\mu\neq 0$ results of Refs.[36, 37]. Considering the case of degenerate
masses $m_{u}=m_{d}=m_{s}\equiv m$, the renormalized pressure in the
$\overline{\rm MS}$ renormalization scheme, normalized per flavor, is
$\displaystyle\frac{P^{PT}_{1}}{N_{f}\,N_{c}}$ $\displaystyle=$
$\displaystyle-\frac{m^{4}}{8\pi^{2}}\left(\frac{3}{4}-L_{m}\right)+2T^{4}J_{1}\left(\frac{m}{T},\frac{\mu}{T}\right)-3g\frac{m^{4}}{2\left(2\pi\right)^{4}}C_{F}\left(L_{m}^{2}-\frac{4}{3}L_{m}+\frac{3}{4}\right)$
(1) $\displaystyle-$ $\displaystyle
gC_{F}\left\{\left[\frac{m^{2}}{4\pi^{2}}\left(2-3L_{m}\right)+\frac{T^{2}}{6}\right]T^{2}J_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)+\frac{T^{4}}{2}J^{2}_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)+m^{2}T^{2}J_{3}\left(\frac{m}{T},\frac{\mu}{T}\right)\right\}\;,$
where $g\equiv g(M)$, $L_{m}\equiv\ln(m/M)$, $M$ is the arbitrary
renormalization scale, $C_{F}=(N_{c}^{2}-1)/(2N_{c})$, $N_{c}=3$, and
$N_{f}=3$. The in-medium and thermal effects are included in the
(dimensionless) single integrals:
$J_{1}\left(\frac{m}{T},\frac{\mu}{T}\right)=\int\frac{d^{3}{\bf\hat{p}}}{\left(2\pi\right)^{3}}\left\{\ln\left[1+e^{-\left(E_{p}+\frac{\mu}{T}\right)}\right]+\ln\left[1+e^{-\left(E_{p}-\frac{\mu}{T}\right)}\right]\right\}\;,$
(2)
with ${\bf\hat{p}}\equiv{\bf p}/T$,
$E_{p}=\sqrt{{\bf\hat{p}}^{2}+\frac{m^{2}}{T^{2}}}$,
$J_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)=\int\frac{d^{3}{\bf\hat{p}}}{\left(2\pi\right)^{3}}\frac{1}{E_{p}}\left[f^{+}(E_{p})+f^{-}(E_{p})\right]\;,$
(3)
as well as in the double integral (after performing exactly the angular
integration over $p\cdot q/(|p||q|)$)
$J_{3}\left(\frac{m}{T},\frac{\mu}{T}\right)=\frac{1}{(2\pi)^{4}}\int^{\infty}_{0}\int^{\infty}_{0}\frac{d\hat{p}\,\hat{p}\,d\hat{q}\,\hat{q}}{E_{p}E_{q}}\left\{\Sigma_{+}\ln\left[\frac{E_{p}E_{q}-\frac{m^{2}}{T^{2}}-\hat{p}\hat{q}}{E_{p}E_{q}-\frac{m^{2}}{T^{2}}+\hat{p}\hat{q}}\right]+\Sigma_{-}\ln\left[\frac{E_{p}E_{q}+\frac{m^{2}}{T^{2}}+\hat{p}\hat{q}}{E_{p}E_{q}+\frac{m^{2}}{T^{2}}-\hat{p}\hat{q}}\right]\right\}\;,$
(4)
where
$\displaystyle\Sigma_{\pm}$ $\displaystyle=$ $\displaystyle
f^{+}\left(E_{p}\right)f^{\pm}\left(E_{q}\right)+f^{-}\left(E_{p}\right)f^{\mp}\left(E_{q}\right),$
(5)
in terms of the Fermi-Dirac distributions for anti-quarks ($+$ sign) and
quarks ($-$ sign),
$f^{\pm}(E_{p})=\frac{1}{1+e^{(E_{p}\pm\frac{\mu}{T})}}\;,$ (7)
where $\mu$ represents the quark chemical potential, which relates to the
baryonic chemical potential via $\mu_{B}=3\mu$.
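Since the $J_{i}$ enter all expressions below, it is worth noting that they are straightforward to evaluate numerically. After the trivial angular integration, $d^{3}{\bf\hat{p}}/(2\pi)^{3}\to\hat{p}^{2}\,d\hat{p}/(2\pi^{2})$, a sketch (our own sanity check, not the code behind the figures) reads:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import expit  # expit(x) = 1/(1+exp(-x))

def J1(m_T, mu_T):
    """J1(m/T, mu/T) of Eq. (2), after the angular integration."""
    def integrand(p):
        E = np.sqrt(p * p + m_T * m_T)
        # log(1+e^{-x}) written as logaddexp(0, -x) for numerical stability
        return p * p * (np.logaddexp(0.0, -(E + mu_T))
                        + np.logaddexp(0.0, -(E - mu_T)))
    return quad(integrand, 0.0, np.inf)[0] / (2.0 * np.pi ** 2)

def J2(m_T, mu_T):
    """J2(m/T, mu/T) of Eq. (3), after the angular integration."""
    def integrand(p):
        E = np.sqrt(p * p + m_T * m_T)
        fp = expit(-(E + mu_T))  # anti-quark Fermi-Dirac distribution
        fm = expit(-(E - mu_T))  # quark Fermi-Dirac distribution
        return p * p / E * (fp + fm)
    return quad(integrand, 0.0, np.inf)[0] / (2.0 * np.pi ** 2)

# Massless, mu = 0 checks: J1 -> 7*pi^2/360 ~ 0.1919, J2 -> 1/12
print(J1(1e-8, 0.0), J2(1e-8, 0.0))
```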
In the present work we consider the case of symmetric quark matter and so do
not distinguish the chemical potentials associated with the different flavors
($\mu_{s}=\mu_{u}=\mu_{d}\equiv\mu$). The generalization to the case of
chemical equilibrium needed to impose, e.g., $\beta$ equilibrium should be
straightforward. Also relevant for our purpose and comparisons is the well-
known resulting two-loop pressure expression for strictly massless quarks
(that simplifies considerably since the $J_{i}$ integrals reduce to simple
analytical expressions in this case, given for completeness in Appendix A):
$\frac{P^{PT}_{1}(m\to
0)}{P_{SB}(T,\mu)}=1-\frac{25g(M)}{42\pi^{2}}\,\left(\frac{1+\frac{72}{5}{\hat{\mu}}^{2}+\frac{144}{5}{\hat{\mu}}^{4}}{1+\frac{120}{7}{\hat{\mu}}^{2}+\frac{240}{7}{\hat{\mu}}^{4}}\right)$
(8)
with ${\hat{\mu}}=\mu/(2\pi T)$. The Stefan-Boltzmann (SB) ideal gas limit
reads
$P_{SB}(T,\mu)=T^{4}N_{f}N_{c}\left(\frac{7\pi^{2}}{180}\right)\left(1+\frac{120}{7}{\hat{\mu}}^{2}+\frac{240}{7}{\hat{\mu}}^{4}\right)\;.$
(9)
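For quick numerical checks of Eqs. (8)-(9), a direct transcription (a sketch of ours, taking the coupling $g$ as an external input) is:

```python
import numpy as np

def P_over_PSB_NLO(g, T, mu):
    """NLO massless ratio P^PT_1(m->0)/P_SB, Eq. (8)."""
    mh2 = (mu / (2.0 * np.pi * T)) ** 2
    num = 1.0 + 72.0 / 5.0 * mh2 + 144.0 / 5.0 * mh2 ** 2
    den = 1.0 + 120.0 / 7.0 * mh2 + 240.0 / 7.0 * mh2 ** 2
    return 1.0 - 25.0 * g / (42.0 * np.pi ** 2) * num / den

def P_SB(T, mu, Nf=3, Nc=3):
    """Stefan-Boltzmann quark pressure, Eq. (9)."""
    mh2 = (mu / (2.0 * np.pi * T)) ** 2
    return (T ** 4 * Nf * Nc * 7.0 * np.pi ** 2 / 180.0
            * (1.0 + 120.0 / 7.0 * mh2 + 240.0 / 7.0 * mh2 ** 2))
```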
Coming back to the massive quark case, we next define the standard homogeneous
RG operator,
$M\frac{d}{dM}=M\,\partial_{M}+\beta(g)\,\partial_{g}-m\,\gamma_{m}(g)\,\partial_{m}\;,$
(10)
where our normalization convention for the QCD $\beta$-function and anomalous
mass dimension $\gamma_{m}$ is
$\beta\left(g\right)=-2b_{0}g^{2}-2b_{1}g^{3}+\mathcal{O}\left(g^{4}\right)\;,$
(11)
$\gamma_{m}\left(g\right)=\gamma_{0}g+\gamma_{1}g^{2}+\mathcal{O}\left(g^{3}\right)\;,$
(12)
where to two-loop order,
$\displaystyle\left(4\pi\right)^{2}b_{0}$ $\displaystyle=$ $\displaystyle
11-\frac{2}{3}N_{f},$ (13) $\displaystyle\left(4\pi\right)^{4}b_{1}$
$\displaystyle=$ $\displaystyle 102-\frac{38}{3}N_{f},$ (14)
$\displaystyle\gamma_{0}$ $\displaystyle=$
$\displaystyle\frac{1}{2\pi^{2}}\;,\;\;\left(4\pi\right)^{4}\gamma_{1}=\frac{404}{3}-\frac{40}{9}N_{f}\;.$
(15)
As is well known, the massive pressure (equivalently the massive vacuum
energy) lacks perturbative RG invariance: it can easily be checked that
applying Eq.(10) to Eq.(1) leaves a remnant term of leading order. To turn
Eq.(1) into a (perturbatively) RG invariant quantity, we proceed as in
Refs.[31, 27, 26] (or, closer to the present case, as in Ref.[34]), by
subtracting a finite zero-point contribution,
$\frac{P^{RGPT}}{N_{c}N_{f}}=\frac{P^{PT}}{N_{c}N_{f}}-\frac{m^{4}}{g}\sum_{k}s_{k}g^{k}\;,$
(16)
where the $s_{i}$ are determined at successive perturbative orders so that
$M\frac{d}{dM}\left(\frac{P^{RGPT}}{N_{c}N_{f}}\right)=0\,,$ (17)
up to neglected higher order terms. Since our evaluations are carried out up
to NLO, to restore perturbative RG invariance at this order it is sufficient
to add the first two coefficients $s_{0}$, $s_{1}$, which involve the
coefficients of $\beta(g)$, $\gamma_{m}(g)$ through Eq.(10). One finds
explicitly [31, 34]
$s_{0}=-\left[(4\pi)^{2}(b_{0}-2\gamma_{0})\right]^{-1},$ (18)
$s_{1}=-\frac{1}{4}\left[\frac{b_{1}-2\gamma_{1}}{4(b_{0}-2\gamma_{0})}-\frac{1}{12\pi^{2}}\right].$
(19)
### II.2 Implementing the RGOPT for the quark pressure
The RGOPT requires one to variationally deform the Lagrangian, by rescaling
the coupling (consistently for every standard interaction term), while also
adding a modified Gaussian interpolating (mass) term. Explicitly, following
the prescription [29, 34]
${\cal L}_{QCD}^{RGOPT}={\cal L}_{QCD}|_{g\to\delta
g}-m(1-\delta)^{a}{\overline{\psi}}_{q}\psi_{q},$ (20)
where ${\cal L}_{QCD}$ is the standard massless QCD Lagrangian and $m$ is an
arbitrary quark mass at this stage. This is equivalent to performing the
substitutions $m\to m(1-\delta)^{a}$ and $g\to\delta g$ in $P^{RGPT}$, Eq.
(16), reexpanding in powers of $\delta$, and finally setting $\delta\to 1$ to
recover the massless limit. At any finite order this leaves a residual
$m$-dependence, which is appropriately fixed by a stationarity criterion [14],
the mass optimization prescription (MOP):
$\frac{\partial{P}^{RGOPT}}{\partial m}\Bigr{|}_{\overline{m}}\equiv 0\;.$
(21)
The next step is to fix the arbitrary exponent $a$ that we introduced in
Eq.(20), by expanding to leading order $\delta^{0}$ and requiring the
resulting pressure, once Eq.(21) is used, to satisfy the reduced (massless)
RG equation:
$\left[M\partial_{M}+\beta(g)\partial_{g}\right]P=0\;.$ (22)
This procedure yields
$a=\frac{\gamma_{0}}{2b_{0}}\,,$ (23)
which only depends on the universal (scheme-independent) LO RG coefficients,
in agreement with previous RGOPT applications, to which we refer for details
[29, 26, 34].
Accordingly at lowest nontrivial order the resulting RGOPT pressure is given,
keeping all terms of formally one-loop order, by
$\frac{P^{RGOPT}_{0}}{N_{f}N_{c}}=-\frac{2m^{4}}{\left(4\pi\right)^{2}}\left(\frac{3}{4}-L_{m}\right)-m^{4}\,s_{1}+2T^{4}J_{1}\left(\frac{m}{T},\frac{\mu}{T}\right)+\frac{m^{4}}{\left(4\pi\right)^{2}g\,b_{0}}\,.$
(24)
Remark that the LO $s_{0}$ coefficient, Eq.(18), has produced the last term
$\propto 1/b_{0}$ in Eq.(24) after algebraic simplifications. There is a
subtlety here: as Eq.(19) shows, $s_{1}$ involves two-loop RG coefficients and
is thus not mandatory for restoring (perturbative) RG invariance at LO, which
requires only $s_{0}\neq 0$ as explained. Yet, since $s_{1}$ enters the
pressure formally at ${\cal O}(1)$, it appears sensible to include it also
within our one-loop RGOPT result Eq.(24), incorporating in this way a priori
more complete RG properties. (Actually, the difference between the LO
prescriptions with $s_{1}\neq 0$ or taking more simply $s_{1}=0$ is not
drastic.) At the one-loop level the coupling runs according to the well-known
expression
$g\left(M\right)=\frac{1}{2b_{0}\,\ln(M/\Lambda_{\overline{\rm MS}})}.$ (25)
Proceeding similarly at the next RGOPT order, the NLO pressure reads (after
setting $\delta=1$)
$\displaystyle\frac{P^{RGOPT}_{1}}{N_{f}N_{c}}$ $\displaystyle=$
$\displaystyle-\frac{m^{4}}{8\pi^{2}}\left(\frac{3}{4}-L_{m}\right)+2T^{4}\,J_{1}\left(\frac{m}{T},\frac{\mu}{T}\right)+\frac{m^{4}}{\left(2\pi\right)^{2}}\left(\frac{\gamma_{0}}{b_{0}}\right)\left(\frac{1}{2}-L_{m}\right)$
(26) $\displaystyle+$ $\displaystyle
m^{2}\left(\frac{\gamma_{0}}{b_{0}}\right)T^{2}\,J_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)+\frac{m^{4}}{\left(4\pi\right)^{2}b_{0}}\left\{\frac{1}{g}\left(1-\frac{\gamma_{0}}{b_{0}}\right)+\left[\left(b_{1}-2\gamma_{1}\right)\pi^{2}-\frac{\left(b_{0}-2\gamma_{0}\right)}{3}\right]\right\}$
$\displaystyle-$ $\displaystyle
3gC_{F}\frac{m^{4}}{2\left(2\pi\right)^{4}}\left(L_{m}^{2}-\frac{4}{3}L_{m}+\frac{3}{4}\right)$
$\displaystyle-$ $\displaystyle
gC_{F}\left\{\left[\frac{m^{2}}{4\pi^{2}}\left(2-3L_{m}\right)+\frac{T^{2}}{6}\right]T^{2}J_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)+\frac{T^{4}}{2}J^{2}_{2}\left(\frac{m}{T},\frac{\mu}{T}\right)+m^{2}T^{2}\,J_{3}\left(\frac{m}{T},\frac{\mu}{T}\right)\right\}.$
The exact two-loop (2L) running coupling, analogue of the one-loop Eq.(25), is
obtained by solving for $g(M)$ the implicit relation (see, e.g., Ref. [38])
$\ln\frac{M}{\Lambda_{\overline{\text{MS}}}}=\frac{1}{2b_{0}\,g}+\frac{b_{1}}{2b_{0}^{2}}\ln\left(\frac{b_{0}g}{1+\frac{b_{1}}{b_{0}}g}\right),$
(27)
for a given $\Lambda_{\overline{\text{MS}}}$ value (this also defines the
$\Lambda_{\overline{\text{MS}}}$ basic scale in our normalization
conventions). In the numerical illustrations below, we will use a value very
close to the latest world average [30], $\Lambda_{\overline{\rm
MS}}=335\,{\rm MeV}$ for $N_{f}=3$, which equivalently corresponds to
$\alpha_{s}(N_{f}=3,1.5\,{\rm GeV})\simeq 0.326$. (NB the latter $\alpha_{s}$
value matches the one taken in the literature for the NLO PT and HTLpt
pressures [23].)
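In practice Eq.(27) is solved numerically for $g(M)$; a sketch (ours), using simple root bracketing and the $N_{f}=3$ coefficients of Eqs.(13)-(14), reads:

```python
import numpy as np
from scipy.optimize import brentq

Nf = 3
b0 = (11.0 - 2.0 * Nf / 3.0) / (4.0 * np.pi) ** 2    # Eq. (13)
b1 = (102.0 - 38.0 * Nf / 3.0) / (4.0 * np.pi) ** 4  # Eq. (14)
Lambda_MS = 0.335  # GeV

def g_2loop(M):
    """Solve the implicit two-loop relation, Eq. (27), for g(M)."""
    lhs = np.log(M / Lambda_MS)
    def F(g):
        return (1.0 / (2.0 * b0 * g)
                + b1 / (2.0 * b0 ** 2) * np.log(b0 * g / (1.0 + b1 / b0 * g))
                - lhs)
    return brentq(F, 1e-3, 50.0)

# Check: g(1.5 GeV) ~ 4.1, i.e. alpha_s = g/(4*pi) ~ 0.326, as quoted above.
print(g_2loop(1.5), g_2loop(1.5) / (4.0 * np.pi))
```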
## III RG-optimized resummation
### III.1 One-loop RGOPT
Before proceeding to our most relevant NLO results, derived basically from
Eq.(26), it is useful to examine the probably more transparent RGOPT features
at the lowest nontrivial ($\delta^{0}$) LO. We recall that at this order the
pressure already satisfies the massless RG Eq.(22) exactly, via the RG-driven
exponent Eq.(23) of the variationally modified Lagrangian, Eq.(20).
Consequently the arbitrary mass $m$ may be fixed only by using the MOP
Eq.(21). The latter acting on the LO pressure Eq.(24) can easily be cast into
the form
$\frac{1}{b_{0}\,g}+\ln\frac{m^{2}}{M^{2}}-1-16\pi^{2}s_{1}-8\pi^{2}\frac{T^{2}}{m^{2}}J_{2}\left({\frac{m}{T}},{\frac{\mu}{T}}\right)=0,$
(28)
whose nontrivial solution gives an RG invariant dressed mass
$\overline{m}(g,T,\mu)$, since the combination $1/(b_{0}\,g(M))+\ln
m^{2}/M^{2}$ is trivially $M$-independent according to Eq.(25). (NB for more
generality we keep $s_{1}$ unspecified at this stage, while for the numerics
below we will take $s_{1}\neq 0$ as given by Eq.(19).) Once $\overline{m}$ is
inserted in Eq.(24), it produces a (one-loop) exactly RG invariant pressure,
which takes the compact form:
$\frac{P^{RGOPT}_{0}}{N_{f}N_{c}}=2T^{4}\,J_{1}\left(\frac{\overline{m}}{T},\frac{\mu}{T}\right)+\frac{T^{2}}{2}{\overline{m}}^{2}J_{2}\left(\frac{\overline{m}}{T},\frac{\mu}{T}\right)-\frac{\overline{m}^{4}}{32\pi^{2}}\;,$
(29)
where it is understood that $\overline{m}$ is the nontrivial solution of
Eq.(28). (Notice that the explicit dependence upon $s_{1}$ cancelled in
$P^{RGOPT}_{0}$, Eq.(29), upon using Eq.(28), but the solution $\overline{m}$
of Eq.(28) does depend on $s_{1}$ as specified.) Some properties of the
dressed mass $\overline{m}(g,T,\mu)$ may be more transparent from considering
the above expressions in the high temperature approximation (and $\mu=0$ to
simplify), upon using the well-known $T\gg m$ limits of the thermal integrals
[7] $J_{1},J_{2}$, given in Appendix A. This gives from Eq.(28)
${\overline{m}}^{2}(g,T,\mu=0)=T^{2}\frac{\pi^{2}}{3}\left[\frac{1}{2b_{0}g}-\ln\left(\frac{Me^{\gamma_{E}}}{\pi
T}\right)-8\pi^{2}s_{1}\right]^{-1}\;\simeq\frac{3}{8}g\,T^{2}+{\cal
O}(g^{2}),$ (30)
or, equivalently using Eq.(25)
${\overline{m}}^{2}(T,\mu=0)=T^{2}\frac{\pi^{2}}{3}\left[\ln\left(\frac{\pi
T}{e^{\gamma_{E}-\frac{53}{84}}\,\Lambda_{\overline{\rm
MS}}}\right)\right]^{-1}\;,$ (31)
where we used $8\pi^{2}s_{1}=-53/84$ for $N_{f}=3$. As seen in Eq.(30), for
small coupling $\overline{m}$ admits a perturbative expansion having the
expected form of a thermal screening mass. We stress however that
$\overline{m}$ is unrelated to the perturbative Debye mass [36], which at one-
loop order has the well-known expression (for $\mu=0$):
$m_{PT}^{2}=\frac{g}{6}\,T^{2}+{\cal O}(g^{2}).$ (32)
In contrast, $\overline{m}$ in Eq.(30) represents an intermediate variational
quantity, whose only meaning is, once inserted in
$P({\overline{m}},g,T,\mu)$, to define the (optimized) physical pressure at a
given order. Remark that, upon embedding RG invariance properties via the
subtraction terms in Eq.(16), leading to $\overline{m}$ in Eq.(28), the LO
RGOPT pressure (29) involves nontrivial interaction terms. Indeed, upon
perturbatively reexpanding Eq.(29) using Eq.(30), it can be seen to resum
arbitrary higher order contributions, although only those contributions
induced by the specific leading order RG dependence. (In the simpler $O(N)$
$\phi^{4}$ model, the analogous LO RGOPT [26] resums all large-$N$
contributions, reproducing the exactly known large-$N$ pressure [39],
including nonanalytic terms $\sim\lambda^{3p/2}$, $p\geq 1$, typical of a
boson gas pressure.) Accordingly, at LO and in the high-$T$ approximation,
using Eq.(25), Eq.(29) takes the simpler form
$\frac{P^{RGOPT}_{0}}{P_{SB}}\simeq 1-\frac{5}{14}\left[\ln\left(\frac{\pi
T}{e^{\gamma_{E}-\frac{53}{84}}\,\Lambda_{\overline{\rm
MS}}}\right)\right]^{-1}\;,$ (33)
normalized to the SB ideal quark gas $P_{SB}$, Eq.(9) (here for $\mu=0$). The
fact that the higher order contributions may be absorbed essentially into a
one-loop running coupling (for $\mu=0$ and in the high-$T$ limit) is a
peculiar LO feature of our construction: as we will see below, at NLO the more
involved RG-induced higher order corrections are not so simply incorporated.
Another RGOPT feature is manifest in Eq.(33): at high-$T$ the explicit
$M$-dependence in Eq.(30) has been automatically traded, as a consequence of
scale invariance, for a dependence on $g(\sim\pi T/\Lambda_{\overline{\rm
MS}})$, rather than resulting from an extra convenient scale choice $M\sim\pi
T$ made to absorb $\ln(M/\pi T)$ terms as in more standard (non-resummed)
thermal perturbative expansions.
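Putting the pieces together, the exact LO gap equation (28) is a one-dimensional root-finding problem. The following sketch (ours, reusing the $J_{1},J_{2}$ routines above, the one-loop relation $1/(b_{0}g)+\ln(m^{2}/M^{2})=2\ln(m/\Lambda_{\overline{\rm MS}})$, and $16\pi^{2}s_{1}=-53/42$ for $N_{f}=3$) solves it and compares with the high-$T$ estimate Eq.(31):

```python
import numpy as np
from scipy.optimize import brentq

Lambda_MS = 0.335        # GeV, Nf = 3
s1_16pi2 = -53.0 / 42.0  # 16*pi^2*s1 for Nf = 3

def gap_LO(m, T, mu=0.0):
    """LHS of Eq. (28), using 1/(b0*g) + ln(m^2/M^2) = 2*ln(m/Lambda)."""
    return (2.0 * np.log(m / Lambda_MS) - 1.0 - s1_16pi2
            - 8.0 * np.pi ** 2 * (T / m) ** 2 * J2(m / T, mu / T))

def m_bar_LO(T, mu=0.0):
    """Nontrivial solution of the LO gap equation (28)."""
    return brentq(lambda m: gap_LO(m, T, mu), 1e-2 * T, 50.0 * T)

def P0_RGOPT(T, mu=0.0, Nf=3, Nc=3):
    """LO RGOPT pressure, Eq. (29)."""
    m = m_bar_LO(T, mu)
    return Nf * Nc * (2.0 * T ** 4 * J1(m / T, mu / T)
                      + 0.5 * T ** 2 * m ** 2 * J2(m / T, mu / T)
                      - m ** 4 / (32.0 * np.pi ** 2))

T = 1.0  # GeV
m_hiT = np.pi * T / np.sqrt(3.0 * np.log(
    np.pi * T / (np.exp(np.euler_gamma - 53.0 / 84.0) * Lambda_MS)))
print(m_bar_LO(T), m_hiT)  # exact vs high-T estimate of Eq. (31)
```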
The LO pressure Eq.(29) is however not expected to give a very realistic
approximation of the complete higher order pressure, as it only relies on LO
RG-invariance properties embedded within an essentially free gas pressure. The
LO dressed mass $\overline{m}$ of Eq.(28) with exact $T$-dependence is
illustrated as function of $T$ in Fig. 3 (where it is also compared to NLO
RGOPT dressed masses to be specified below). The corresponding pressure
Eq.(29) is illustrated e.g. in Figs.5 and 6 for $\mu=0$ or in Figs. 9 and 10
for $\mu\neq 0$, where it is also compared with NLO RGOPT and other NLO
results. We will next proceed to the more realistic NLO RGOPT pressure: most
of the above features will be maintained, except that the scale invariance can
only be achieved approximately beyond LO, as we will examine.
### III.2 Two-loop RGOPT
At NLO the RG Eq.(22) is no longer automatically satisfied by Eq.(26) from
Eq.(23), and can thus be considered as an independent constraint. Following
[27, 33, 34] we can in principle use either the MOP Eq.(21) or the RG Eq.(22),
defining two possible alternative dressed masses $\overline{m}(g,T,\mu)$: we
will consider both prescriptions in the following, for completeness and
comparison purposes. Accordingly the coupling $g(M)$ is simply determined from
standard PT, i.e. with its running at (exact) two-loop order given by Eq.(27)
and the scale $M$ chosen as a combination of $\pi T$ and $\mu$ when both
$T,\mu$ are non-zero. A drawback, however, is that NLO partly spoils the
sought RG invariance properties: while at LO, as shown previously, the running
coupling Eq.(25) perfectly matches the RGOPT pressure Eq.(24) so as to produce
exact (one-loop) scale invariance, at NLO the more involved running coupling
Eq.(27) has no reason to exactly match the scale dependence of Eq.(26).
Accordingly a remnant scale dependence inevitably arises at NLO, as we will
exhibit by varying the scale by a factor 2 around the central $M\sim 2\pi T$
(for $\mu=0$). Nevertheless, this RGOPT scale dependence is generically
milder [27, 26, 33] than the one produced by standard PT, as will be further
illustrated in the present analysis. This happens essentially because
perturbative RG invariance is maintained by construction for an arbitrary
variational mass, such that the RGOPT residual scale dependence is expected to
remain moderate (and to further decrease at NNLO), even for relatively low
temperatures where the resulting dressed thermal mass is not necessarily
perturbatively screened. Using the standard PT running coupling also allows a
more direct comparison with the same common prescription in other related
thermal resummation approaches, typically HTLpt. But one should keep in mind
that identifying the arbitrary renormalization scale $M$ as ${\cal O}(\pi T)$
is strictly valid only at sufficiently high temperatures.
A known unwelcome feature of any related OPT/SPT approach is the generally
increasing number of possible solutions of, e.g., Eq.(21) at increasing
orders. In contrast, in our construction Eq.(23) further guarantees that the
only acceptable solutions are those matching [29] the perturbative asymptotic
freedom (AF) behavior for $g\to 0$ at $T=0$: a simple but compelling criterion
that often selects a unique solution, even at the five-loop order explored so
far [29, 31]. As it happens, however, for the NLO quark pressure Eq.(26),
imposing either Eq.(21) or Eq.(22) fails to readily give a real dressed mass
$\overline{m}(g,T,\mu)$ for a substantial part of the physically relevant
$T,\mu$ range. This is admittedly a technical burden of such methods, but the
occurrence of complex variational solutions has no deeper physical meaning.
Rather, it may be viewed to some extent as an accident of the specific
$\overline{\rm MS}$ scheme in which the original perturbative coefficients
were calculated, given that nonreal solutions are often expected upon exactly
solving nonlinear equations, like in the present case solving the NLO
Eqs.(21) or (22) for $m$. At the same time we wish to keep these relations as
exact as possible in order to capture RG resummation properties beyond PT. A
crude escape would be simply to take the real part of the solutions, but that
potentially loses some of the sought RG properties. The nonreal solution issue
also occurred in the simpler $T=\mu=0$ case [29] as well as within the
$T=0,\mu\neq 0$ cold quark matter application [34], where it was cured by
performing a renormalization scheme change (RSC) [29]. The latter allows for
the recovery of real solutions by modifying perturbative coefficients while
keeping RG consistency by definition. Of course, for such a solution to work
the RSC should not be arbitrary, but fixed by a sensible prescription, and
importantly such that it remains a moderate (i.e. perturbative) deviation from
the original scheme. More specifically, in [34] a relevant NLO RSC parameter
$B_{2}$ was uniquely fixed by requiring collinearity of the RG and MOP curves
in the $\{m,g\}$ plane (which precisely expresses the recovery of real
solutions). Technically this implies nullifying the determinant of partial
derivatives of the RG and MOP equations, and solving the latter together
with, e.g., Eq.(21) for $\{B_{2},\overline{m}(B_{2},g)\}$. While solving
such a coupled system was easily manageable for the (entirely analytical)
$T=0,\mu\neq 0$ NLO expressions in [34], it becomes numerically quite
challenging for the rather involved $T,\mu\neq 0$ NLO dependence of Eq.(26).
Therefore in the present study, seeking simplicity as much as possible, we
will exploit the RSC arbitrariness quite similarly to recover real solutions,
but via simpler alternative prescriptions precisely defined in Sec. IV below.
The reader mainly interested in concrete results for the thermodynamical
quantities may skip that section and proceed directly to Sec. V.
## IV NLO prescriptions
### IV.1 Simple RSC parametrization
Let us first specify for our later purposes the RSC to be used. Since one
basically introduces a variational (quark) mass, the most natural and simplest
RSC can be defined by modifying only the mass parameter:
$m\to m^{\prime}(1+B_{2}g^{2})\,,$ (34)
where a single $B_{2}$ coefficient parametrizes a perturbative NLO scheme
change from the original ${\overline{\text{MS}}}$-scheme. (Eq.(34) also has
the welcome property that it does not affect the definition of the reference
QCD scale $\Lambda_{\overline{\text{MS}}}$, in contrast with a similar
perturbative modification acting on the coupling; see Ref.[29] for details.)
As is well known, for a perturbative series truncated at order $g^{k}$ (like
in the present case the original order-$g$ pressure Eq.(1)), different schemes
differ formally by remnant terms of order ${\cal O}(g^{k+1})$, such that the
difference between two schemes is expected to decrease at higher orders for
sufficiently small coupling values. Note that we perform the perturbative RSC
Eq.(34) consistently on the original PT expression (1) prior to its
(nonperturbative) modification induced from Eq.(20) with the subsequent
$\delta$-expansion. The net RSC modification to the pressure is to add an
extra term, $-4g\,(m^{\prime})^{4}s_{0}B_{2}$, entering thus the resulting
exact NLO Eq.(21) or Eq.(22). Thus Eq.(34) purposefully modifies the latter
equations, which are now considered as constraints for the arbitrary mass
$m^{\prime}$, after the (nonperturbative) modifications from Eq.(20). (To
avoid excessive notation proliferation, in what follows, once the replacement
implied by Eq.(34) has been performed, we simply rename $m^{\prime}\to m$ the
variational mass to be determined from Eq.(21) or Eq.(22).) Accordingly
$B_{2}$ may be considered as an extra variational parameter, quite similarly
to $m$, thus to be fixed by a definite prescription as will be specified
below.
### IV.2 AF-compatible NLO dressed mass solutions
To identify some relevant properties of the sought dressed
$\overline{m}(g,T,\mu)$ solutions we consider first the MOP Eq.(21) more
explicitly at NLO, thus applied to Eq.(26). It is convenient to formally solve
it in a first stage for $\ln[m^{2}/M^{2}]$, as that would give simply an exact
quadratic equation at $T=\mu=0$. Accordingly the two equations (that are
implicit in $m$ for $T,\mu\neq 0$) can be conveniently written, after
straightforward algebra, as
$-\ln\frac{m^{2}}{M^{2}}+B_{mop}\mp\frac{2\pi}{3g}\sqrt{D_{mop}}=0\;,$ (35)
where for $T,\mu\neq 0$, $B_{mop}$ and $D_{mop}$ take a relatively compact
form:
$B_{mop}=-\frac{7\pi^{2}}{9g}+\frac{5}{6}+4\pi^{2}\left(J_{2}^{\prime}+\frac{T^{2}}{m^{2}}J_{2}\right),$
(36) $\displaystyle D_{mop}$ $\displaystyle=$ $\displaystyle
9\frac{\pi^{2}}{4}-\frac{47}{6}g-g^{2}\left(\frac{35}{16\pi^{2}}+288\frac{\pi^{2}}{7}B_{2}\right)$
(37)
$\displaystyle+36\pi^{2}g^{2}\left(J_{2}^{\prime\,2}+\frac{T^{4}}{m^{4}}J_{2}^{2}\right)+9g(g-2\pi^{2})\left(\frac{T^{2}}{m^{2}}J_{2}-J_{2}^{\prime}\right)$
$\displaystyle+8\pi^{2}g^{2}\frac{T^{2}}{m^{2}}\left(3J_{2}-1\right)J_{2}^{\prime}-48\pi^{2}g^{2}\left(J_{3}^{\prime}+\frac{T^{2}}{m^{2}}J_{3}\right)\;,$
where $J_{i}\equiv J_{i}(m^{2}/T^{2},\mu/T)$ and
$J_{i}^{\prime}\equiv\partial_{x}J_{i}(x)$ (note that here $x\equiv
m^{2}/T^{2}$). In Eq.(37) we explicitly separated the $T,\mu$-independent part
of $D_{mop}$ in the very first line to make its $T,\mu\to 0$ limit clear
(remark also that $D_{mop}(T=0)$ does not depend on $m$).
A first property of Eq.(35) is exhibited by expanding it perturbatively to
the first few terms. That gives
$\ln\frac{\overline{m}^{2}}{M^{2}}(-)\simeq-\frac{16\pi^{2}}{9g}+\frac{139}{54}+8\pi^{2}\frac{T^{2}}{\overline{m}^{2}}J_{2}+{\cal
O}(g),$ (38)
and
$\ln\frac{\overline{m}^{2}}{M^{2}}(+)\simeq\frac{2\pi^{2}}{9g}-\frac{49}{54}+8\pi^{2}J_{2}^{\prime}+{\cal
O}(g).$ (39)
One easily recognizes that, for $T\to 0$, the leading term for $g\to 0$ in
$\overline{m}^{2}(-)$ has the correct AF behavior:
$\ln\frac{\overline{m}^{2}}{M^{2}}(-)\sim-1/(b_{0}g)$, noting that
$b_{0}=9/(16\pi^{2})$ (for $N_{f}=3$), which as recalled above is a compelling
requirement of the RGOPT. In contrast, the other $(+)$ solution has a wrong
sign and coefficient, thus in drastic contradiction with AF for $g\to 0$.
Therefore clearly only Eq.(35) with $(-)$ is to be selected.
It is further instructive to investigate the behavior of those two solutions
for $T\neq 0$, taking for simplicity the high-$T$ approximation (and $\mu=0$,
see Eq.(52)). After straightforward algebra one obtains, for the first few
perturbative expansion terms:
$\frac{\overline{m}^{2}_{(-)}}{T^{2}}=\frac{3}{8}g\left[1-\frac{3}{8\pi^{2}}g\left(3L_{T}+\frac{85}{36}\right)\right]^{-1}+g^{2}\,\left(\frac{67}{288\pi^{2}}+6J_{3}(0,0)\right)+{\cal
O}(g^{3}),$ (40)
where we defined for short
$L_{T}\equiv\ln\left(\frac{Me^{\gamma_{E}}}{\pi T}\right).$ (41)
As seen, the AF-compatible solution $\overline{m}(-)$ has a typical
perturbative thermal screening mass behavior $m\sim\sqrt{g}\,T$, with a
coefficient here mainly determined by RG properties (notice that the first
order term is consistent with our LO result above, Eq.(30)). In contrast, the
non-AF-compatible Eq.(35) with $(+)$ has $\overline{m}(+)$ solutions for
$T,\mu\neq 0$ with a coupling dependence that cannot be cast into the form
of a perturbative expansion for small enough $g$. Moreover, the corresponding
real solutions generally give $m/T\gg 1$, unless $g$ is very large (see
Appendix B). These features give further compelling reasons for rejecting
this non-AF solution also in the $T,\mu\neq 0$ case. Thus, as anticipated,
the AF-compatibility criterion leads to a unique MOP solution.
The purely perturbative expansion Eq.(40) is however not expected to give a
very good approximation at relatively low $T$ (in particular, the exact NLO
$\overline{m}$ as obtained below can be such that $\overline{m}/T>1$ at
sufficiently low $T$, see Fig. 3, somewhat invalidating the high-$T$
approximation), and is obviously not useful anyway for $\mu\neq 0$. Yet,
before proceeding below with the more elaborate RSC prescription to solve
Eq.(35) exactly, it may be instructive to illustrate the results of using the
simple perturbative solution Eq.(40), inserted in our NLO pressure Eq.(26),
with the resulting expression truncated simply at first order in $g$
(i.e. this is accordingly the NLO generalization of Eq.(33)). This is shown in
Fig. 2, compared to the true standard NLO massless quark PT pressure Eq.(8).
This result represents a good consistency check of our procedure: the two
pressures are not strictly identical but very close, since after expressing
the optimized mass $\overline{m}(g)$, the RGOPT is expected to approximate a
massless theory. (Note that replacing $m$ in Eq.(26) instead by, e.g., the
standard thermal Debye mass Eq.(32) would give results departing more
drastically from the massless PT pressure.) Now, more interestingly, the main
purpose of the RGOPT is rather to provide higher order deviations from
standard PT, induced by higher order RG-induced terms, as we will exhibit
next.
Figure 2: Perturbatively re-expanded NLO RGOPT pressure $P(T,\mu=0)$ (red
band) compared with standard perturbative NLO pressure Eq.(8) (blue band),
with scale dependence $\pi T\leq M\leq 4\pi T$.
### IV.3 NLO mass optimization prescription
Going back to the exact MOP Eq.(35), Eq.(37) involves the RSC parameter
$B_{2}$ as induced from Eq.(34). In the original $\overline{\rm MS}$ scheme,
i.e. $B_{2}\equiv 0$, $D_{mop}$ from Eq.(37) can take negative values for not
particularly large couplings (for example, for $T=\mu=0$, where only the
first three terms of Eq.(37) are nonvanishing, $D_{mop}\leq 0$ for the rather
moderate $g\geq 2.64$, i.e. $\alpha_{S}\geq 0.21$). As anticipated above, it
therefore renders the (exact) $\overline{m}(g,T,\mu)$ solution not always
real, except in a rather limited range of physically interesting $T$ and/or
$\mu$ values. Remark however that, since the (perturbatively leading) first
term in Eq.(37) is positive, this loss of real $\overline{m}$ solutions
arises solely when considering the exact Eq.(35): since all our results were
obtained from modifying perturbative NLO expressions, one may simply
(re)expand Eq.(35) perturbatively, which gives a real expression at arbitrary
orders (as partially illustrated by the first few orders of such an expansion
in Eq.(40)). But it is soon realized that this is a poor approximation of the
actual exact expression, even for $g$ slightly below the value at which
$D_{mop}$ becomes negative. Accordingly it would partly lose the sought good
RG properties, due to RG-induced contributions being perturbatively
truncated. Now, with $D_{mop}$ not too far from being positive, a more
efficient way to recover real solutions is via an appropriately chosen
$B_{2}$ value such that $D_{mop}>0$.
Let us thus define precisely our prescription for the MOP Eq.(35): in a first
stage we fix the arbitrary RSC parameter $B_{2}$ in Eq.(37) such that
$D_{mop}>0$. Next, the resulting modified AF-matching Eq.(35) with $(-)$ is
solved exactly (numerically) for $\overline{m}(g,T,\mu)$, recovering real
solutions for practically all relevant $g$ values. Note that simply requiring
$D_{mop}\geq 0$ does not give a unique prescription, but it happens to be
rather constrained: first, $D_{mop}=0$ is excluded, as it would spoil the
crucial AF-compatibility of Eq.(38), which requires at least the LO (first)
term of Eq.(37). On the other hand, if $D_{mop}>0$ were too large, the
AF-matching $(-)$ Eq.(35) would take too negative values, no longer giving a
real solution (i.e., it could not cross the $x$-axis). Since the problem
comes from some negative terms within Eq.(37), a prescription that appears
minimal is to fix $B_{2}$ such as to cancel solely the largest (in magnitude)
$T,\mu$-independent negative term within Eq.(37), $-(47/6)g$. Explicitly this
gives:
$B_{2}=-\frac{329}{1728\pi^{2}\,g}\;.$ (42)
The latter $B_{2}$ prescription is very simple, and the resulting
$\overline{m}_{MOP}$ solution remains real for practically all physically
relevant $g(T,\mu)$ values, while still including nontrivial higher order
corrections induced from all remnant terms of Eq.(35). Other slightly
different $B_{2}$-fixing prescriptions are possible for $T,\mu\neq 0$, but a
notable property is that for different $B_{2}$ choices, which imply different
exact $\overline{m}(B_{2})$ solutions, the resulting physical pressure
$P(\overline{m}(B_{2}),B_{2},\cdots)$, Eq.(26), happens to be largely
insensitive to those unphysical $B_{2}$ parameter choices, provided that
$\overline{m}(B_{2})$ remains real. This welcome feature is to be traced to
the underlying RSC properties, together with the further perturbative
screening from Eq.(40), ${\overline{m}^{2}}\sim(3/8)gT^{2}+{\cal O}(g^{2})$:
as easily checked, $B_{2}$ from Eq.(34) only appears at higher order ${\cal
O}(g^{3})$ both in the perturbatively expanded $\overline{m}$, Eq.(40), and
in the corresponding re-expanded pressure from Eq.(26). In other words, once
$B_{2}$ is adjusted to recover a real $\overline{m}$ solution of Eq.(35), the
discrepancies between possibly different $B_{2}$ prescriptions are somewhat
hidden within perturbatively higher order terms.
To close this subsection, we illustrate in Fig. 3 the resulting dressed
thermal masses as function of the temperature, both at LO from Eq.(28), and
NLO from MOP Eqs.(35), (42). As already mentioned their behavior is
essentially that of screening thermal masses, except that those are determined
from RG properties. We also compare in Fig. 3 with the similar dressed thermal
mass as obtained from the alternative RG prescription, to be specified in next
subsection.
Figure 3: Exact LO RGOPT thermal mass (dot-dashed) compared with exact MOP and
RG NLO thermal masses for $\pi T\leq M\leq 4\pi T$ at $\mu_{B}=0$.
Correspondingly Fig. 4 illustrates the relevant RSC deviation $B_{2}g^{2}$ in
Eq.(34) resulting from Eq.(42) as function of $T$. As an important crosscheck,
it shows that the departure from the original $\overline{\rm MS}$-scheme
remains quite moderate.
Figure 4: RSC parameter $B_{2}g^{2}(M)$ for the MOP and RG prescriptions for
$\pi T\leq M\leq 4\pi T$ at $\mu_{B}=0$.
### IV.4 Alternative NLO RG prescription
Alternatively, the other very relevant prescription, as anticipated in
Sec. III.2, is to consider the RG Eq.(22) instead of the MOP Eq.(35) to
determine the dressed mass $\overline{m}(g,T,\mu)$. Once expressed for
$\ln(m^{2}/M^{2})$, it takes a quadratic form similar to Eq.(35), conveniently
normalized as
$-\ln\frac{m^{2}}{M^{2}}+B_{rg}\mp\frac{8\pi^{2}}{g}\sqrt{\frac{2}{3}D_{rg}}=0\;,$
(43)
where explicitly
$B_{rg}=-\frac{1}{b_{0}\,g}+\frac{172}{81}-\frac{64}{81}\left(\frac{4g}{9\pi^{2}}\right)\,\frac{1}{1+\frac{4g}{9\pi^{2}}}+8\pi^{2}\frac{T^{2}}{m^{2}}J_{2},$
(44)
and
$D_{rg}=-\left(\frac{3}{7}B_{2}+\frac{11}{384\pi^{4}}\right)g^{2}-\frac{g}{27}\frac{(4g+81\pi^{2})}{(4g+9\pi^{2})^{2}}+g^{2}\,\frac{T^{4}}{m^{4}}J_{2}\left(J_{2}-\frac{1}{6}\right)-g^{2}\,\frac{T^{2}}{m^{2}}J_{3}.$
(45)
Now, similarly to the previous MOP Eq.(35), for $B_{2}=0$ one generally
obtains nonreal solutions, since some contributions in $D_{rg}$ happen to be
negative. In contrast with Eq.(35), however, the crucial AF-matching of the RG
solution is already guaranteed solely by the first term in (44), up to higher
order terms. These features strongly suggest fixing the arbitrary RSC
parameter $B_{2}$ simply so as to fully cancel $D_{rg}$:
$D_{rg}(B_{2})\equiv 0.$ (46)
Eq.(46) determines $B_{2}$ trivially using Eq.(45), leading to a single real
AF-compatible solution $\overline{m}_{RG}$ determined from the first two terms
of Eq.(43), the latter being still an implicit equation in $m$ for $T,\mu\neq
0$ via $J_{2}$ entering Eq.(44). Eq.(46) may appear a rather peculiar choice,
but there happen to be very few other choices that recover a real RG solution.
We stress that for either (MOP or RG) prescription the resulting
$\overline{m}(B_{2})$ is an intermediate variational parameter without much
physical meaning outside of its use in the pressure. Here the resulting
$\overline{m}_{RG}(B_{2})$ still involves arbitrary higher order
contributions, as well as a nontrivial $T,\mu$ dependence via $B_{rg}$ in
Eq.(44). Similarly to the MOP prescription above, we have checked that for
other $B_{2}$ choices, as long as they differ only moderately from Eq.(46), our
numerical RG results for $T,\mu\neq 0$ are not strongly dependent upon those
choices.
The dressed exact thermal mass $\overline{m}_{RG}$ resulting from
Eqs.(43), (46) is illustrated as a function of the temperature in Fig. 3, and
compared with the previously discussed LO mass from Eq.(28) and
$\overline{m}_{MOP}$ from Eqs.(35), (42). As seen, the dressed masses are
numerically quite different, but such differences between the two alternative
NLO variational masses are drastically reduced within the physical pressure,
as will be illustrated below. The corresponding RSC deviation $B_{2}g^{2}$
obtained from Eq.(46) is illustrated in Fig. 4 as a function of $T$, and
compared to the similar MOP $B_{2}g^{2}$ from Eq.(42). Note that despite the
visible discrepancies between the two expressions, they are numerically not
drastically different and both behave smoothly, except at very low $T\lesssim
0.5$ GeV. As already mentioned above, the important feature is that the induced
departure from the original $\overline{\rm MS}$-scheme remains moderate.
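To make the RG prescription concrete, a minimal sketch is given below (Python). With the choice $D_{rg}=0$ of Eq.(46), Eq.(43) reduces to $-\ln(m^{2}/M^{2})+B_{rg}(m)=0$, solved here with the high-$T$ approximation Eq.(52) standing in for the exact $J_{2}$, and with illustrative placeholder values for $g$ and for the one-loop coefficient $b_{0}$ (whose normalization, like the running coupling Eq.(27), is not reproduced in this excerpt):

```python
import numpy as np
from scipy.optimize import brentq

EULER = np.euler_gamma

def J2_highT(m, T):
    # high-T approximation Eq.(52), standing in for the exact thermal integral J_2
    return (1.0/12.0 + (m/T)**2 / (4*np.pi**2)
            * (np.log(m*np.exp(EULER)/(np.pi*T)) - 0.5))

def B_rg(m, T, g, b0):
    # Eq.(44); the T,mu-dependence enters via J_2
    x = 4.0*g / (9.0*np.pi**2)
    return (-1.0/(b0*g) + 172.0/81.0 - (64.0/81.0)*x/(1.0 + x)
            + 8.0*np.pi**2 * (T/m)**2 * J2_highT(m, T))

def mbar_rg(T, g, b0, M):
    # with D_rg = 0 (Eq.(46)), Eq.(43) reduces to -ln(m^2/M^2) + B_rg(m) = 0
    F = lambda m: -np.log(m**2 / M**2) + B_rg(m, T, g, b0)
    return brentq(F, 0.2*T, 4.0*T)

# illustrative placeholder inputs only; the paper's b0 normalization and the
# running g(M) of Eq.(27) are not reproduced here
T, g, b0 = 1.0, 2.0, 9.0/(16.0*np.pi**2)
print(mbar_rg(T, g, b0, M=2*np.pi*T) / T)
```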
## V RGOPT pressure results at NLO
To obtain the full benefit from the RGOPT, in particular the optimally reduced
scale dependence, a price to pay for the variational approach is to first
solve exactly (numerically) for the dressed mass (either from Eq.(21) or
alternatively Eq.(22)), prior to its use in the RGOPT pressure at NLO,
Eq.(26). Such a procedure is moreover complicated by the occurrence of complex
solutions, cured by the appropriate RSC as specified above in Sec. IV. But the
relevant NLO expressions (35) or alternatively (43) are reasonably simple and
the numerical procedure is straightforward. Before illustrating the resulting
exact NLO RGOPT pressure, we start this section with another intermediate
(more perturbative) prescription, to show the gradual improvement, typically
concerning the remnant scale dependence.
### V.1 A simple perturbative approximation
The simplest thing we can do to recover real solutions, without going through
the RSC considerations elaborated previously in Sec. IV while capturing at the
same time a more accurate $T,\mu$ dependence, is to expand $\overline{m}$ from
the MOP Eq.(35) perturbatively to NLO ${\cal O}(g^{2})$, but keeping the exact
thermal integrals in the resulting expression. After simple algebra (as an
algebraic subtlety, one should first expand the AF-matching Eq.(35) with $(-)$
perturbatively before solving it formally for $m^{2}/T^{2}$, otherwise one
loses the AF-matching properties) this gives
$\frac{m^{2}_{MOP}}{T^{2}}=\frac{9}{2}gJ_{2}+g^{2}\left[\frac{17}{9}J^{\prime}_{2}(1-12J_{2})+\frac{34}{3}J_{3}+\left(\frac{20371}{1728\pi^{2}}-\frac{81}{32\pi^{2}}\ln\frac{m^{2}}{M^{2}}\right)J_{2}\right],$
(47)
which is therefore still to be solved numerically as an implicit equation,
since $J_{i}\equiv J_{i}(\frac{m}{T},\frac{\mu}{T})$.
Figure 5: Comparison of NLO RGOPT quark pressure Eq.(26) with
$\overline{m}(g^{2})$ (green, thin lines), LO RGOPT (dot-dashed), NLO PT
(blue, dashed), and NLO HTLpt quark pressure (red, dotted), with scale
dependence $\pi T\leq M\leq 4\pi T$ (bands) and central scale $M=2\pi T$
(lines) at $\mu_{B}=0$.
The above expression readily gives a real solution, and allows one to consider
$\mu\neq 0$ within the thermal integrals (and within the running coupling as
well) while still keeping a relatively simple “perturbative-like” expression.
Inserting the solution of Eq.(47) into the RGOPT NLO quark pressure Eq.(26)
(also keeping exact thermal integrals consistently in the latter) gives the
results illustrated for $\mu=0$ in Fig. 5, compared with the standard NLO PT
pressure Eq.(8), and also with the NLO HTLpt (quark) pressure. (NB: for a
consistent comparison with the latter at this stage, we have extracted only
the quark contributions within the complete QCD NLO HTLpt pressure, which,
unlike the NLO pQCD case, is not a trivial separation. Details are explained
in Appendix C.)
Alternatively, proceeding similarly with the RG Eq.(43) and (46) gives
$\frac{m^{2}_{RG}}{T^{2}}=\frac{9}{2}gJ_{2}+\frac{g^{2}}{32\pi^{2}}\,\left(172-81\ln\frac{m^{2}}{M^{2}}\right)J_{2},$
(48)
observing that the LO term and the $\ln M$ dependence are identical to those
in Eq.(47). This illustrates that although the MOP and RG prescriptions are
quite different in their exact determinations, perturbatively they differ only
by ${\cal O}(g^{2})$ terms, thus formally of higher order than the original
NLO perturbative pressure from which they were both constructed. Moreover,
inserting Eq.(48) within Eq.(26) gives results almost identical to those in
Fig. 5. Note also that in both Eqs.(47) and (48) the running $g(M)$ exactly
cancels the $M$-dependence at ${\cal O}(g^{2})$, as easily checked using
Eqs.(25), (15), and (52).
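As an illustration of this “perturbative-like” prescription, Eq.(48) can be solved by a simple fixed-point iteration seeded with the LO screening mass $\overline{m}^{2}\sim(3/8)gT^{2}$ quoted above. The sketch below uses the high-$T$ approximation Eq.(52) as a stand-in for the exact $J_{2}$ and an illustrative fixed coupling in place of the running $g(M)$ of Eq.(27):

```python
import numpy as np

EULER = np.euler_gamma

def J2_highT(m, T):
    # high-T approximation Eq.(52), standing in for the exact thermal integral J_2
    return (1.0/12.0 + (m/T)**2 / (4*np.pi**2)
            * (np.log(m*np.exp(EULER)/(np.pi*T)) - 0.5))

def m2_rg_pert(T, g, M, tol=1e-10):
    m2 = 0.375 * g * T**2                      # LO seed: m^2 ~ (3/8) g T^2
    for _ in range(500):
        J2 = J2_highT(np.sqrt(m2), T)
        # implicit Eq.(48), iterated to self-consistency
        m2_new = T**2 * (4.5*g*J2
                 + g**2/(32*np.pi**2) * (172.0 - 81.0*np.log(m2/M**2)) * J2)
        if abs(m2_new - m2) < tol * m2:
            return m2_new
        m2 = 0.5 * (m2 + m2_new)               # damped update for robustness
    raise RuntimeError("fixed point did not converge")

# illustrative fixed coupling in place of the running g(M) of Eq.(27)
T, g = 1.0, 1.5
print(np.sqrt(m2_rg_pert(T, g, M=2*np.pi*T)) / T)
```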
As seen in Fig. 5, the RGOPT pressure with the (MOP or RG)
$\overline{m}(g^{2})$ approximation has a more pronounced decrease, i.e. a
departure from the ideal gas limit, than the standard NLO PT (pQCD) quark
pressure and than the LO RGOPT for moderate and low $T$ values, which is
mainly traced to the higher order $g^{2}$ contributions in Eq.(47) or Eq.(48).
Actually, it is rather closer to the higher-order standard pQCD pressure, as
will be illustrated below, partly because Eq.(26) and the thermal functions
$J_{i}$ are kept exact. (If perturbatively reexpanded, the resulting pressure
gets back closer to the NLO pQCD result.) This is in contrast with the NLO
HTLpt pressure, which remains very close to the ideal gas limit except at very
low $T$, as seen in Fig. 5. (We mention that the NLO HTLpt pressure in Fig. 5,
and similarly below in Figs. 6-7 and Figs. 9-10, is somewhat different from
the results in Ref.[23], especially at very low $T$. This is due to
considering here only its pure quark contributions, and partly also to using
the exact Eq.(27) instead of the more approximate two-loop running expression
used in [23].) In Fig. 5 the RGOPT pressure also exhibits a better
renormalization scale dependence as compared with NLO pQCD (at least for $T>1$
GeV), although this is only a moderate improvement. Very similar results are
obtained for $\mu\neq 0$, which we do not illustrate. We will see below that
the more elaborate untruncated RGOPT pressure, accounting for higher orders in
$\overline{m}(g)$, has a more drastically improved scale dependence, which is
a main expected RGOPT feature.
### V.2 Hot quark matter: $T\neq 0$, $\mu=0$
#### V.2.1 MOP prescription
The resummation properties of the NLO RGOPT become more evident when one
compares it with the standard perturbative (pQCD) result at the same NLO. We
illustrate (first for $\mu=0$) the exact NLO RGOPT pressure
$P(\overline{m},g,T,\mu)$ obtained from our first $\overline{m}_{MOP}$
prescription, defined by solving Eqs.(35), (42) (as explained in detail in
Subsec. IV.3). In Fig. 6 the pressure is displayed as a function of the
temperature, compared with the LO RGOPT and the standard NLO pQCD Eq.(8), for
the scale dependence $\pi T\leq M\leq 4\pi T$. The reduction of scale
dependence stemming from the now exact (untruncated) NLO RGOPT appears
substantial (about a factor $\sim 2$ improvement for e.g. $T\sim 1$ GeV). The
HTLpt NLO (quark) pressure [23] is also shown in the same figure for
comparison. We observe that the (NLO) quark HTLpt pressure has a small
residual scale dependence for most $T$ values (which is partly a consequence
of limiting it to the quark-only contribution), but does not depart very much
from the ideal gas limit, in contrast with the RGOPT pressure. This latter
feature also holds for the complete QCD NLO HTLpt [23], while a more drastic
departure from the ideal gas is only obtained at NNLO for HTLpt [25].
Figure 6: RGOPT quark pressures as a function of temperature at LO and NLO
(MOP prescription) compared with standard NLO PT (pQCD) and NLO HTLpt
pressures, with scale dependence $\pi T\leq M\leq 4\pi T$ at $\mu_{B}=0$.
Figure 7: Same caption as for Fig. 6, but with the RGOPT pressure obtained
from the alternative $\overline{m}_{RG}$ prescription Eqs.(43), (46).
#### V.2.2 Alternative RG prescription
Similarly to Fig. 6, we illustrate in Fig. 7 the exact NLO RGOPT pressure as
obtained from the alternative $\overline{m}_{RG}$ prescription defined by
solving Eqs.(43) and (46) (explained in detail in Subsec. IV.4). As is seen,
the RGOPT reduction of remnant scale dependence is even more substantial than
for the previous $\overline{m}_{MOP}$ prescription. The efficient reduction of
remnant scale dependence with respect to standard NLO pQCD is also shown more
quantitatively in Fig. 8, illustrating the maximal scale variations, $\Delta
P/P\equiv P(M=4\pi T)/P(M=\pi T)-1$, for the different approximations as
indicated.
Figure 8: $\Delta P/P\equiv P(M=4\pi T)/P(M=\pi T)-1$ as a function of
temperature (for $\mu_{B}=0$) for the different NLO RGOPT prescriptions
compared to standard NLO pQCD, with scale dependence $\pi T\leq M\leq 4\pi T$.
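For reference, the scale-variation measure plotted in Fig. 8, together with its $\mu\neq 0$ generalization used in Sec. V.3 below, amounts to the following simple helper (a sketch, assuming a user-supplied pressure callable):

```python
import numpy as np

def dP_over_P(P, T, mu=0.0):
    # Scale-variation measure of Fig. 8: P(M = 4 pi T_eff)/P(M = pi T_eff) - 1,
    # where T_eff = sqrt(T^2 + mu^2/pi^2) is the conventional mu-dependent
    # scale variable of Sec. V.3 (T_eff = T at mu = 0); P is any user-supplied
    # callable P(M, T, mu) returning the pressure.
    Teff = np.sqrt(T**2 + mu**2 / np.pi**2)
    return P(4*np.pi*Teff, T, mu) / P(np.pi*Teff, T, mu) - 1.0
```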
Despite the numerically quite different MOP and RG dressed masses (see
Fig. 3), the resulting physical pressures are much closer for the two
prescriptions, except at very low $T$ values (i.e., very large coupling). This
is a reasonable crosscheck of the moderate dependence upon the details of the
optimization prescriptions, already observed here at NLO. For both the MOP and
RG prescriptions, lower pressure values are obtained at moderate temperatures
as compared to LO RGOPT, NLO HTLpt and NLO pQCD in Figs. 6, 7.
### V.3 Hot and dense quark matter
Figure 9: RGOPT pressure as a function of the temperature at LO and NLO (MOP
prescription), compared with NLO pQCD and NLO HTLpt pressures, with scale
variation $\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}\leq M\leq
4\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}$ at $\mu_{B}=1.2$ GeV.
Figure 10: Same caption as in Fig. 9, with the alternative NLO RG prescription.
We now consider nonzero chemical potential values. Since the MOP (35), (42)
and RG (43), (46) prescriptions are defined quite generically, they can be
readily applied to the more general $T,\mu\neq 0$ case. As a representative
physical value we illustrate our results for $\mu_{B}=1.2$ GeV. For the
renormalization scale variation range we take, as is common,
$\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}\leq M\leq 4\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}$
within the exact NLO running coupling Eq.(27). This gives the results for the
pressure as a function of temperature shown in Fig. 9 and Fig. 10 for the MOP
and RG prescriptions respectively. As is seen, for this rather sizable
$\mu_{B}$ value the qualitative picture is very similar to the $\mu_{B}=0$
case above: namely, the remnant scale dependence reduction from the RGOPT is
drastic as compared to pQCD, and appreciable departures with respect to both
pQCD and HTLpt are obtained from resummation effects at relatively low
temperatures. These results appear to support the robustness of the RGOPT for
a more reliable exploration of hot and dense matter.
### V.4 Including glue contribution: confrontation with lattice results
In principle a rather similar RGOPT treatment of the pure glue sector should
be possible, generalizing within our framework the hard thermal loop (HTL)
approach originally proposed in [11], which essentially introduces a
gauge-invariant (non-local) effective Lagrangian properly describing a gluon
(thermal) “mass” term in the HTLpt approximation. However, this technically
requires the evaluation of presently unknown and quite involved thermal
integrals, which we leave for future work [35]. Therefore, as anticipated
above, in the present work we treat the pure glue contribution most
conservatively, in a standard perturbative manner. At the same NLO, the
standard perturbative pure glue contribution has the well-known expression [40]
$\frac{P^{PT}_{g}}{P_{g,SB}}=1-\frac{15}{4}\left(\frac{g}{4\pi^{2}}\right)+{\cal
O}(g^{2}),$ (49)
where the ideal gluon gas pressure is $P_{g,SB}=(8\pi^{2}/45)\,T^{4}$. Thus we
simply add the perturbative NLO contribution Eq.(49) (properly normalized) to
our NLO RGOPT quark contributions Eq.(26), and for the numerical illustrations
below we normalize our results to the full ideal pressure of quarks plus
gluons: $P_{SB}\to P_{q,SB}+P_{g,SB}$. (As a slight abuse of notation, note
that in Figs. 5-10, where only quark contributions are included, $P_{SB}$
designates the sole quark ideal pressure Eq.(9), while in Figs. 11-13 below
$P_{SB}\equiv P_{q,SB}+P_{g,SB}$.)
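Schematically, the bookkeeping of this glue addition and normalization is as follows (a sketch; the ideal quark-gas expression below is the standard free-gas formula we assume in place of Eq.(9), which is not reproduced in this excerpt; $\mu$ denotes the quark chemical potential, $\mu_{B}=3\mu$):

```python
import numpy as np

def P_gSB(T):
    # ideal gluon gas pressure, P_{g,SB} = (8 pi^2/45) T^4
    return 8*np.pi**2/45 * T**4

def P_g_NLO(g, T):
    # standard perturbative NLO glue contribution, Eq.(49)
    return P_gSB(T) * (1.0 - (15.0/4.0) * g/(4*np.pi**2))

def P_qSB(T, mu, Nc=3, Nf=3):
    # assumed standard ideal quark-gas expression, standing in for Eq.(9)
    # (not reproduced in this excerpt); mu is the quark chemical potential
    return Nc*Nf*(7*np.pi**2/180 * T**4 + mu**2*T**2/6 + mu**4/(12*np.pi**2))

def P_total_normalized(P_q_rgopt, g, T, mu=0.0):
    # add the NLO PT glue Eq.(49) to a given RGOPT quark pressure value and
    # normalize to the full ideal pressure P_SB = P_{q,SB} + P_{g,SB}
    return (P_q_rgopt + P_g_NLO(g, T)) / (P_qSB(T, mu) + P_gSB(T))
```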
Following the progressive elaboration levels of the previously shown quark
pressure approximations, we first illustrate in Fig. 11 the results of using
the simple perturbatively re-expanded approximation for $\overline{m}$,
Eq.(47), for the quark contribution, but now supplemented by the NLO glue
contribution, Eq.(49). The resulting RGOPT pressure is compared with both the
(massless quark) state-of-the-art ${\rm N}^{3}$LO pQCD, whose expression is
taken from Ref. [9], and with available LQCD results from Refs. [41, 42, 43].
As is seen, adding the NLO PT glue contribution puts our results in the right
ballpark of the LQCD data, with clearly visible improvement as compared to
pQCD, both for the central scale choice and for the resulting remnant scale
uncertainty. (We also note that using instead the similar RG perturbative
approximation Eq.(48) gives results almost indistinguishable from Fig. 11,
illustrating the low order perturbative consistency of the two different MOP
and RG prescriptions.)
Figure 11: RGOPT $P(\overline{m}(g^{2}))$ plus NLO $P^{PT}_{g}$ pressure as a
function of $T$ (green band) compared to (${\rm N}^{3}$LO, $g^{3}\ln g$) pQCD
(light blue band), with scale dependence $\pi T\leq M\leq 4\pi T$, and to
lattice data [41, 42, 43] at $\mu_{B}=0$.
Figure 12: Full NLO RGOPT (MOP prescription) plus NLO $P^{PT}_{g}$ pressure
as a function of $T$ (grey band) compared to (${\rm N}^{3}$LO, $g^{3}\ln g$)
pQCD (light blue band), with scale dependence $\pi T\leq M\leq 4\pi T$, and to
lattice data [41, 42, 43] at $\mu_{B}=0$.
Figure 13: Full NLO RGOPT (RG prescription) plus NLO $P^{PT}_{g}$ pressure
(brown band) compared to (${\rm N}^{3}$LO, $g^{3}\ln g$) pQCD (light blue
band), NLO HTLpt (light green band) and NNLO HTLpt (light red band), with
scale dependence $\pi T\leq M\leq 4\pi T$, and to lattice data [41, 42, 43] at
$\mu_{B}=0$.
Next, in Figs. 12 and 13, we similarly illustrate the results obtained upon
adding the NLO PT glue contribution Eq.(49) to the NLO RGOPT quark pressure,
respectively for the (exact) MOP and RG prescriptions. These are compared with
the state-of-the-art ${\rm N}^{3}$LO pQCD [9] and with LQCD results [41, 42,
43]. As seen, the RGOPT results get closer to the LQCD data, with a further
reduced scale dependence, as compared to pQCD. In Fig. 13 we compare in
addition with both NLO [23] and the state-of-the-art NNLO HTLpt [25]. The
pressure from the RG prescription gives the smallest residual scale
uncertainties, and is in remarkable agreement with the LQCD data of [41] for
the central scale $M=2\pi T$, for temperatures as low as $T\sim 0.25$ GeV up
to $T=1$ GeV, the highest value considered in [41]. (More precisely, let us
mention that for the five available LQCD points in [41] with $T>0.3\,{\rm
GeV}$ the central scale agreement is at the few per-mille level, and even
slightly better when considering their estimated continuum data.) It is also
in good agreement with the more recent LQCD data [42] at intermediate $T$. The
RGOPT pressure is somewhat closer to the LQCD results from [41] than the NNLO
HTLpt pressure for $0.5\,{\rm GeV}\lesssim T\lesssim 1\,{\rm GeV}$, while at
higher $T$ values HTLpt is nearer to the results of [43], and RGOPT shows more
sizeable differences of order $5-7\%$. A concomitant feature, however, is the
visible tension between the low [41] and higher $T$ [43] LQCD data in their
common temperature range. (We show LQCD data as given in the publicly
available files [41, 42, 43], which do not include systematic uncertainties.)
Let us briefly mention that we have tried some variants of our prescriptions
in order to check the stability of our results. First, the other RSC
prescription to recover real solutions, mentioned above in Subsec. III.2 and
used in Ref.[34], is to require the collinearity of the vectors tangent to the
MOP and RG curves considered as functions of $(m,g)$ (see Eq.(4.7) of
Ref.[34]). In the present $T\neq 0$ case it is however numerically much more
involved than our simpler prescriptions above (in particular to identify the
AF-compatible solutions at moderate and low $T$ values). Yet we could check
that the resulting pressure is roughly similar to the one given by the MOP
prescription in Figs. 6, 12. Next, we have also considered a variant of the RG
prescription, including the NNLO $\sim g\,m^{4}\,s_{2}$ subtraction term of
Eq.(16), which is formally of NLO ${\cal O}(g)$. (This variant is the
next-order analogue of including the NLO coefficient $s_{1}\neq 0$ within the
LO RGOPT, see e.g. Eq.(30).) The $s_{2}$ expression [31, 32] incorporates
three-loop order RG coefficient dependence, thus for consistency we took a
three-loop perturbative running coupling generalizing Eq.(27). We remark that
the resulting pressure for this variant hardly shows visible differences from
Fig. 13, reflecting a good stability, so we do not illustrate it.
Another physical quantity of interest is the trace anomaly (or, equivalently,
the interaction measure). The latter has the well-known expression
$\Delta\equiv\varepsilon-3P=T\frac{\partial P}{\partial
T}-4P=T^{5}\,\partial(P/T^{4})/\partial T,$ (50)
(where the second and third equalities are of course valid only for $\mu=0$).
As previously, we add the pure glue NLO PT expression to our RGOPT quark
contribution. The result is illustrated, for our best RG prescription, in
Fig. 14, where it is compared to LQCD data [41, 42, 43] only. A very good
agreement with the LQCD results of [41, 42] is obtained for $0.3\,{\rm
GeV}\lesssim T\lesssim 1\,{\rm GeV}$, while there are more visible differences
with the higher $T$ results from [43]. For indication, the dashed lines also
delineate the part of the remnant scale uncertainties originating solely from
the RGOPT quark contributions, within the total uncertainties that also
include those coming from the (standard) NLO PT glue contribution.
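Numerically, the last equality of Eq.(50) is convenient since it only requires the pressure tabulated on a temperature grid; a minimal sketch:

```python
import numpy as np

def trace_anomaly(T, P):
    # Delta = epsilon - 3P = T^5 d(P/T^4)/dT (last equality of Eq.(50), mu = 0);
    # T and P are arrays tabulating the pressure on a temperature grid
    return T**5 * np.gradient(P / T**4, T)

# quick check: for an exactly conformal (ideal-gas-like) pressure P ~ T^4
# the trace anomaly vanishes identically
T = np.linspace(0.3, 2.0, 200)
print(np.allclose(trace_anomaly(T, 8*np.pi**2/45 * T**4), 0.0))
```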
Figure 14: NLO RGOPT (RG prescription) trace anomaly
$\Delta\equiv\varepsilon-3P$ (including $\Delta^{PT}_{g}$) (brown band)
compared to lattice data [41, 42, 43]. The additional dashed lines illustrate
the scale uncertainty originating solely from the RGOPT quark contributions,
within the full scale uncertainty (brown band) that also includes
$\Delta_{g}^{PT}$.
To conclude this section it may be worth recapping the origin of the drastic
differences between the RGOPT and HTLpt, the latter being also basically a
variational modification of the original QCD Lagrangian with mass terms,
although based on the more elaborate HTL effective Lagrangian [11] (in
particular with a thermal gluon mass parameter, $m_{D}$). There are
essentially three important differences:
* •
First, the perturbative RG-restoring subtraction terms, as typically in
Eq.(16), are missing in HTLpt. Accordingly, the latter formally lacks
perturbative RG-invariance by a leading order term of the massive theory
pressure, ${\cal O}(m^{4})\ln(M/m)$. Now, since any (gluon or quark) thermal
mass behaves as $m^{2}\sim \#\,gT^{2}$, and HTLpt is also based on high
temperature expansions, the latter uncancelled term is effectively only a
three-loop order effect, thus largely screened and harmless at LO, and
moderate even at NLO. In contrast, this mismatch plainly resurfaces at NNLO
HTLpt, presumably mainly explaining the large remnant scale dependence
observed in Refs.[22, 24, 25].
* •
Second, the interpolating Lagrangian used in HTLpt is linear, namely with an
exponent $a=1$ in the HTL equivalent of Eq.(20), instead of our RG-determined
Eq.(23). As we have shown [26], this generally spoils RG invariance even when
the latter is fulfilled perturbatively by the original pressure.
* •
Finally, remark that upon choosing a variational mass prescription Eq.(21) in
HTLpt (as was done e.g. in [23, 24]), nonreal $\overline{m}$ may occur,
similarly to what happens for the RGOPT (although for HTLpt it rather happens
at NNLO). In NNLO HTLpt applications this issue is avoided simply by replacing
the arbitrary gluon mass $\overline{m}_{D}$ by a perturbative thermal mass
[22, 25], and taking the quark mass $\overline{m}_{q}=0$. However, enforcing
perturbative masses partly forgoes the a priori more nonperturbative
behaviour rooted in variational prescriptions.
## VI Conclusions and perspectives
We have applied our RGOPT resummation approach at NLO to QCD quark matter at
finite temperature and density. As explained, it generates nonperturbative
approximations with consistent RG properties already at LO (one-loop). Our NLO
results have been compared to NLO and state-of-the-art ${\rm N}^{3}\mbox{LO}$
pQCD predictions, as well as to the state-of-the-art (NNLO) HTLpt results.
Scale variations in the range $\pi T\leq M\leq 4\pi T$ show that at NLO the
method drastically reduces scale uncertainties as compared to pQCD. Since RG
properties are consistently embedded within the RGOPT, we stress that
generically the scale uncertainty bands observed at NLO should further shrink
upon considering the NNLO, ${\cal O}(g^{2})$.
Our two possible ‘MOP’ and ‘RG’ prescriptions reflect the frequent
non-uniqueness of variational approaches, although here their respective
solutions are unique thanks to the compelling AF-matching requirement.
Moreover, the visible prescription difference in the resulting dressed mass
(see Fig. 3) is perturbatively consistent at low orders (Eqs.(47), (48)), and
is substantially reduced within the resulting physical pressures. Using the RG
Eq.(22) prescription, which more directly embeds consistent RG properties, not
surprisingly gives the best remnant scale dependence at NLO (as also happened
in other considered models [26]). Note that once a specific RSC is adjusted to
recover real solutions, the discrepancies between possibly different RSC
prescriptions are formally of perturbatively higher order. Nevertheless, since
we consider all expressions exactly rather than perturbatively truncated,
numerically the RSC has a moderate net effect on the final pressure results.
As we have illustrated, any perturbative reexpansion of the exact solutions
somewhat degrades the scale dependence.
Concerning the full QCD pressure, due to some present technical limitations in
applying the RGOPT plainly to the glue sector, in this work we have adopted a
simple-minded approach, adding the formally same-order, purely perturbative
NLO glue contributions to the pure quark sector resummed by the RGOPT. We have
confronted the resulting predictions for the QCD pressure with available LQCD
results. For our best RG prescription, the central scale $M=2\pi T$ results
are in remarkable agreement with the LQCD results [41, 42] for temperatures as
low as $T\sim 0.25$ GeV, which lies within the nonperturbative regime, up to
$T=1$ GeV. The striking matching with the LQCD results from Ref. [41], as seen
in Fig. 13, may be partly numerically accidental, but variants of our
prescription, specifically the MOP pressure in Fig. 12, still appear in very
good agreement given our essentially NLO construction. Moreover, the RG
properties native to the RGOPT are not accidental in drastically reducing the
scale dependence problem, particularly when comparing our NLO results to NNLO
HTLpt. There are however some visible differences between our results and the
higher $1\,{\rm GeV}\lesssim T\lesssim 2\,{\rm GeV}$ LQCD data [43]. We remark
that the LQCD pressure results in [41] and in Ref. [43] appear to be in
tension in their common temperature range, while the trace anomaly shows more
continuity (note that the LQCD simulations in Refs. [41, 42, 43] primarily
calculate the trace anomaly $\Delta$, the pressure being derived by the
integral method, i.e. essentially from numerically integrating the last
equality in Eq.(50)), a feature that may call for more investigation
independently of our results. When comparing with the $2+1$ flavor LQCD
results illustrated here, one may also keep in mind our presently not fully
realistic approximation of $N_{f}=3$ degenerate flavors. As illustrated, the
RGOPT properties extend without degradation to sizable chemical potential
values, $\mu_{B}=1.2$ GeV, which indicates the potential of our approach for a
more systematic exploration of hot and dense matter. Future applications may
consider the inclusion of physical quark masses to generate a more realistic
equation of state.
###### Acknowledgements.
We thank Peter Petreczky for bringing the results of Ref. [43] to our
attention. We thank Eduardo Fraga and Rudnei Ramos for related discussions.
M.B.P. is partially supported by Conselho Nacional de Desenvolvimento
Científico e Tecnológico (CNPq-Brazil), Process No. 303846/2017-8, and by
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior-(CAPES-
Brazil)-Finance Code 001. This author also thanks the Charles Coulomb
Laboratory, in Montpellier, for the hospitality. T.E.R. thanks the support and
hospitality of CFisUC where part of this work was developed and acknowledges
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq-Brazil)
and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES-Brazil)
for PhD grants at different periods of time. This work was financed in part by
INCT-FNA (Process No. 464898/2014-5).
## Appendix A High-$T$ limit
We give here for completeness the well-known $T\gg m$, $\mu=0$ approximations
(see e.g. [7, 36]) of the basic thermal integrals defined in Eqs.(2)-(4):
$2J_{1}(T\gg
m,\mu=0)\approx\frac{7\pi^{2}}{180}-\frac{m^{2}}{12\,T^{2}}+\frac{m^{4}}{T^{4}}\frac{2}{(4\pi)^{2}}\left[\frac{3}{4}-\ln\left(\frac{me^{\gamma_{E}}}{\pi
T}\right)\right]+{\cal O}\left(\frac{m^{6}}{T^{6}}\right)\;,$ (51) $J_{2}(T\gg
m,\mu=0)\approx\frac{1}{12}+\frac{1}{4\pi^{2}}\frac{m^{2}}{T^{2}}\left[\ln\left(\frac{me^{\gamma_{E}}}{\pi
T}\right)-\frac{1}{2}\right]+{\cal O}\left(\frac{m^{4}}{T^{4}}\right)\;,$ (52)
and the more complicated genuine two-loop integral $J_{3}$ of Eq.(4) has a
finite $m\to 0$ limit (however, not analytically integrable to our knowledge;
we give below its numerically integrated approximate value):
$J_{3}\left(\frac{m}{T}\to
0,\frac{\mu}{T}=0\right)=\frac{4}{(2\pi)^{4}}\int_{0}^{\infty}d\hat{p}\int_{0}^{\infty}d\hat{q}\,n_{F}(\hat{p})n_{F}(\hat{q})\,\ln\left(\frac{|\hat{p}-\hat{q}|}{\hat{p}+\hat{q}}\right)+{\cal
O}\left(\frac{m^{2}}{T^{2}}\right)\simeq-0.00129532+{\cal
O}\left(\frac{m^{2}}{T^{2}}\right)$ (53)
where $\hat{p},\hat{q}\equiv p/T,q/T$ and $n_{F}(p)=(e^{p}+1)^{-1}$ is the
Fermi-Dirac distribution.
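The quoted value is straightforwardly reproduced by direct numerical integration, e.g.:

```python
import numpy as np
from scipy.integrate import dblquad

def nF(x):
    return 1.0 / (np.exp(x) + 1.0)          # Fermi-Dirac distribution

def integrand(q, p):
    if p == q:
        return 0.0                           # integrable log singularity at p = q
    return nF(p) * nF(q) * np.log(abs(p - q) / (p + q))

# a cutoff of 40 is ample given the exponential decay of n_F
val, _ = dblquad(integrand, 0.0, 40.0, lambda p: 0.0, lambda p: 40.0)
print(4.0/(2*np.pi)**4 * val)                # ~ -0.0012953, matching Eq.(53)
```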
## Appendix B Numerical $\overline{m}$ solutions at NLO
We discuss here in some detail the behavior of the exact NLO numerical
solutions for the two MOP and RG prescriptions as defined in Secs. IV.3 and
IV.4. Note that using directly the MOP Eq.(35) or the RG Eq.(43) makes the AF
solution identification obvious. Concerning the MOP Eq.(35), once $B_{2}$ is
consistently determined by Eq.(42) so as to recover $D_{mop}>0$ in Eq.(37),
one sees from the structure of (35) that $(-)$ (AF) solutions only exist if
$-\ln(m^{2}/M^{2})+B_{mop}>0$, and conversely $(+)$ (non-AF) solutions only
exist if $-\ln(m^{2}/M^{2})+B_{mop}<0$.
Figure 15: AF and non-AF roots of MOP Eqs.(35), (42) for $M=2\pi T$ and for
two representative $T$ values, $T=0.5$ GeV (dashed), $T=1$ GeV (thick).
Once $M$, $g(M)$ are taken to be $T,\mu$-dependent via the perturbative
running coupling Eq.(27), Eq.(35) becomes a function of $m/T$ and
$g(T/\Lambda_{\overline{MS}})$. Despite the nonlinear dependence on $m/T$, at
the level of Eq.(35) both the AF and non-AF solutions happen to be unique in
their respective existence ranges. This is illustrated in Fig. 15 (for
$\mu=0$) for two representative low to moderate temperatures, respectively
$T=0.5$ and $T=1$ GeV, and for the central scale choice $M=2\pi T$. It is also
clear that for any $T$ the smallest solution is the AF one: indeed, for
$g(\pi T\leq M\leq 4\pi T)$, $-\ln(m^{2}/M^{2})+B_{mop}$ is a monotonically
decreasing function of $m$ for fixed $T$, and is $>0$ (respectively $<0$)
below (respectively above) a given $m_{0}$, such that necessarily
$\overline{m}(\mbox{AF})<m_{0}<\overline{m}(\mbox{non-AF})$. The value of
$m_{0}$ depends quite strongly on $T$ (and $M$): typically, for the input
corresponding to Fig. 15 with $M=2\pi T$, one finds $m_{0}\simeq 1.28\,(1.91)$
GeV for $T=0.5\,(1.0)$ GeV respectively. (Note also that in Fig. 15 the non-AF
solution is unrealistically large with respect to $T$, which also makes it
easy to unambiguously select the correct AF-matching solutions.)
At $\mu=0$, following the AF-matching $\overline{m}$ of Eq.(35) continuously
from $T=0$ to arbitrary $T$ is in principle possible, although only for a
fixed scale $M$ (thus a fixed $g(M)$) unrelated to $T$; otherwise, obviously,
at some small $M\sim\pi T$ one hits $M\sim\Lambda_{\overline{MS}}$, where the
perturbative coupling diverges. For sizable $\mu\neq 0$ the latter problem is
avoided by defining, as is conventional, $M\sim\pi\sqrt{T^{2}+\mu^{2}/\pi^{2}}$
(provided that one is not in the case of both $T\ll\mu$ and small $\mu$).
Finally, concerning the RG Eq.(43), both NLO solutions are already
AF-matching, thus giving a unique solution upon using the prescription
Eq.(46). Numerically, the exact $\overline{m}_{RG}$ solution of Eq.(43) is
somewhat larger than $\overline{m}_{MOP}$ for a given $T$, as illustrated in
Fig. 3.
## Appendix C NLO and NNLO HTLpt expressions
For completeness we specify here how the NLO [23] and NNLO [25] HTLpt pressure
expressions were precisely used when compared with other results. In
particular, for consistent comparison purposes in Figs. 5, 6 and Figs. 9, 10,
we aim to pin down the HTLpt equivalent of the sole quark contributions, as
shown up to NLO in Fig. 1, but with the quark and gluon propagators and the
quark-gluon vertex consistently replaced with HTL-dressed ones. More
precisely, from first comparing Eq.(51) of [23] to the pure glue NLO HTLpt
pressure (given e.g. in Eq.(4.8) of the second Ref. in [22]), it is not
difficult to single out all terms originating solely from the pure quark
vacuum energy. Next, from the resulting pressure we have rederived the
(variationally determined) dressed thermal gluon $m_{D}$ and quark $m_{q}$
masses as in Eqs.(55), (56) of [23], which amounts to removing from these
expressions the pure glue contributions (terms $\propto c_{A}$ in Eq.(55) of
[23]).
At NNLO of HTLpt, a well-defined separation between pure quark and pure glue
contributions appears ambiguous, as these become more entangled. When
comparing with our complete QCD RGOPT pressure, e.g. in Fig. 13 and subsequent
figures, we obviously took the complete QCD NNLO HTLpt pressure, as given e.g.
in Eqs.(4.5), (4.6) of Ref.[25] (see also Ref.[24]).
## References
* [1] P. de Forcrand, PoS LAT 2009, 010 (2009); G. Aarts, J. Phys. Conf. Ser. 706, 022004 (2016).
* [2] Y. Aoki, G. Endrodi, Z. Fodor, S. D. Katz and K. K. Szabo, Nature 443, 675 (2006); Y. Aoki, S. Borsanyi, S. Durr, Z. Fodor, S. D. Katz, S. Krieg and K. K. Szabo, JHEP 06, 088 (2009); S. Borsanyi et al. [Wuppertal-Budapest], JHEP 09, 073 (2010); A. Bazavov, T. Bhattacharya, M. Cheng, C. DeTar, H. T. Ding, S. Gottlieb, R. Gupta, P. Hegde, U. M. Heller and F. Karsch, et al. Phys. Rev. D 85, 054503 (2012).
* [3] R. Machleidt and D. R. Entem, Phys. Rept. 503, 1 (2011).
* [4] Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122, 345 (1961).
* [5] M. Buballa, Phys. Rept. 407, 205 (2005).
* [6] J. P. Blaizot, E. Iancu and A. Rebhan, In *Hwa, R.C. (ed.) et al.: Quark gluon plasma* 60-122 [hep-ph/0303185]; U. Kraemmer and A. Rebhan, Rept. Prog. Phys. 67, 351 (2004).
* [7] M. Laine and A. Vuorinen, Lect. Notes Phys. 925, 1 (2016).
* [8] J. Ghiglieri, A. Kurkela, M. Strickland and A. Vuorinen, Phys. Rept. 880, 1 (2020).
* [9] K. Kajantie, M. Laine, K. Rummukainen and Y. Schroder, Phys. Rev. D 67, 105008 (2003).
* [10] A. Vuorinen, Phys. Rev. D 68, 054017 (2003).
* [11] E. Braaten and R. D. Pisarski, Phys. Rev. D 45, 1827 (1992).
* [12] F. Karsch, A. Patkos and P. Petreczky, Phys. Lett. B 401, 69 (1997); S. Chiku and T. Hatsuda, Phys. Rev. D 58, 076001 (1998); J. O. Andersen, E. Braaten and M. Strickland, Phys. Rev. D 63, 105008 (2001); J. O. Andersen and M. Strickland, Phys. Rev. D 64, 105012 (2001); J. O. Andersen and M. Strickland, Annals Phys. 317, 281 (2005).
* [13] J. O. Andersen, E. Braaten and M. Strickland, Phys. Rev. Lett. 83, 2139 (1999); J. O. Andersen, E. Braaten and M. Strickland, Phys. Rev. D 61, 074016 (2000).
* [14] P.M. Stevenson, Phys. Rev. D 23, 2916 (1981); Nucl. Phys. B 203, 472 (1982).
* [15] A. Okopinska, Phys. Rev. D 35, 1835-1847 (1987) doi:10.1103/PhysRevD.35.1835;
* [16] H. Yamada, Z. Phys. C 59, 67-76 (1993) doi:10.1007/BF01555840
* [17] A. Duncan and M. Moshe, Phys. Lett. B 215, 352-358 (1988) doi:10.1016/0370-2693(88)91447-5; H. F. Jones and M. Moshe, Phys. Lett. B 234, 492-496 (1990) doi:10.1016/0370-2693(90)92045-K .
* [18] R. P. Feynman and H. Kleinert, Phys. Rev. A 34, 5080 (1986); H. Kleinert, Phys. Rev. D 57, 2264 (1998); Phys. Lett. B 434, 74 (1998); Phys. Rev. D 60, 085001 (1999); Mod. Phys. Lett. B 17, 1011 (2003).
* [19] J. O. Andersen, L. Kyllingstad and L. E. Leganger, JHEP 0908, 066 (2009).
* [20] S. K. Gandhi, H. F. Jones and M. B. Pinto, Nucl. Phys. B 359, 429 (1991).
* [21] J. L. Kneur, M. B. Pinto and R. O. Ramos, Phys. Rev. D 74, 125020 (2006).
* [22] J. O. Andersen, M. Strickland and N. Su, Phys. Rev. Lett. 104, 122003 (2010); JHEP 1008, 113 (2010).
* [23] N. Haque, M. G. Mustafa and M. Strickland, Phys. Rev. D 87, 105007 (2013).
* [24] J. O. Andersen, L. E. Leganger, M. Strickland and N. Su, JHEP 1108, 053 (2011); S. Mogliacci, J. O. Andersen, M. Strickland, N. Su and A. Vuorinen, JHEP 1312, 055 (2013); N. Haque, J. O. Andersen, M. G. Mustafa, M. Strickland and N. Su, Phys. Rev. D 89, 061701 (2014).
* [25] N. Haque, A. Bandyopadhyay, J. O. Andersen, M. G. Mustafa, M. Strickland and N. Su, JHEP 1405, 027 (2014).
* [26] J. L. Kneur and M. B. Pinto, Phys. Rev. D 92, 116008 (2015).
* [27] J. L. Kneur and M. B. Pinto, Phys. Rev. Lett. 116, 031601 (2016).
* [28] J. L. Kneur and A. Neveu, Phys. Rev. D 81, 125012 (2010).
* [29] J. L. Kneur and A. Neveu, Phys. Rev. D 88, 074025 (2013).
* [30] M. Tanabashi et al. [Particle Data Group], Phys. Rev. D 98, 030001 (2018).
* [31] J. L. Kneur and A. Neveu, Phys. Rev. D 92, 074027 (2015).
* [32] J. L. Kneur and A. Neveu, Phys. Rev. D 101, 074009 (2020).
* [33] G. N. Ferrari, J. L. Kneur, M. B. Pinto and R. O. Ramos, Phys. Rev. D 96, 116009 (2017).
* [34] J. L. Kneur, M. B. Pinto and T. E. Restrepo, Phys. Rev. D 100, 114006 (2019).
* [35] J. L. Kneur and M. B. Pinto; in preparation.
* [36] J. I. Kapusta and C. Gale, “Finite-temperature field theory: Principles and applications” (Cambridge University Press, 2006).
* [37] M. Laine and Y. Schröder, Phys. Rev. D 73, 085009 (2006).
* [38] J. L. Kneur, Phys. Rev. D 57, 2785 (1998).
* [39] I. T. Drummond, R. R. Horgan, P. V. Landshoff and A. Rebhan, Nucl. Phys. B 524, 579 (1998).
* [40] E. V. Shuryak, Sov. Phys. JETP 47, 212 (1978); S. A. Chin, Phys. Lett. B 78, 552 (1978).
* [41] S. Borsanyi, G. Endrodi, Z. Fodor, A. Jakovac, S. D. Katz, S. Krieg, C. Ratti and K. K. Szabo, JHEP 11, 077 (2010).
* [42] S. Borsanyi, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg and K. K. Szabo, Phys. Lett. B 730, 99 (2014).
* [43] A. Bazavov, P. Petreczky and J. H. Weber, Phys. Rev. D 97, 014510 (2018).
Ciaran Hughes
# Theory Overview of Heavy Exotic Spectroscopy
###### Abstract
This proceeding broadly overviews the current landscape of heavy exotic
spectroscopy. It discusses the composition of certain $X$, $Y$, and $Z$
states, and then turns to states made exclusively of four quarks. The slides
for this talk are available online.
## 1 What Defines a State to be Exotic?
In order to review exotic states, it is first necessary to know what defines a
state as exotic. By definition, being exotic means being non-conventional. So,
what do we mean by a conventional state (a state is rigorously defined to be a
pole singularity of the S-matrix)? A conventional state is defined to be any
state which we understand “well enough” (this definition also applies to
nuclear physics, where exotic nuclei also exist). Currently, mesons and
baryons are the only states deemed to be conventional, since they are
phenomenologically understood well enough in a potential model [1]. Everything
else, such as four-quark states, is classified as exotic. This definition of
conventional is ever changing: if we understood a new class of hadrons “well
enough”, we would include this new class under the conventional umbrella.
For illustrative purposes, the current (as of 2020) status of exotic states in
the charmonium sector is shown in Figs. 1 and 2 of Richard Lebed’s talk [2].
Taking all exotic states from all sectors, there have been 44 experimentally
observed exotic candidates, and 15 experimentally established exotic
candidates (see the non-$q\bar{q}$ mesons 2020 review from the PDG; the PDG
defines an established signal as one seen by two independent experiments with
greater than $5\sigma$ significance). Colloquially, these exotic candidates
are called $XYZ$ states, but the PDG has defined a naming convention (see the
Naming Scheme for Hadrons 2020 review from the PDG).
### 1.1 What Can The Exotic States Be?
Gell-Mann knew in 1964 that color confinement allowed a plethora of states
like $\bar{q}gq$, $\bar{q}\bar{q}qq$, $\bar{q}qqqq$, etc, where $q$ represents
a quark, and $g$ an excited gluon degree of freedom. For current exotic
states, there are four exotic configurations largely being considered:
1. 1.
Hybrids. These $\bar{q}gq$ configurations exist when there is an excited gluon
in an octet representation which combines with a $\bar{q}q$ in an octet
representation. Phenomenological models include the constituent gluon model
and the flux tube model.
2. 2.
Molecules. A state composed of two or more constituent hadrons, typically
bound by a Yukawa-like force.
3. 3.
Hadro-Quarkonium. A constituent hadron core with a quark and an anti-quark
cloud orbiting the core.
4. 4.
Compact Tetraquarks. A four quark state composed of a tightly arranged $qq$
diquark and a separate $\bar{q}\bar{q}$ anti-diquark.
The goal of exotic spectroscopy is to correctly match exotic configurations
onto specific experimental exotic signals. Certain models start from the
proposition that exotic signals arise from a single particular exotic
configuration, e.g., are solely a compact tetraquark. However, a single
configuration frequently explains only certain aspects of the data, but not
all. This has given rise to debate about which single configuration is most
apt, for example to describe the $X(3872)$. Such debate disappears if we start
from the viewpoint that the most general solution consists of an admixture of
multiple configurations. It is then not so surprising that exotic candidates
exhibit complicated phenomena which can be explained by a mixture of
configurations.
## 2 The $\chi_{c1}(3872)$ aka $X(3872)$
The $X(3872)$ was the first exotic candidate discovered, in 2003 by BELLE, and
consequently is the most well known and studied. Its quantum numbers have been
established to be $J^{PC}=1^{++}$, $Q=0$, and $I^{G}=0^{+}$. Its decay modes
include $\mathcal{B}(\pi^{+}\pi^{-}J/\psi)>3.2\%$. Since charmonium states
like the $J/\psi$ have suppressed annihilation effects, the $X(3872)$ must
have at least a $\bar{c}c$ valence component. Other modes include
$\mathcal{B}(D^{0}\bar{D}^{*0})>30\%$, and
$\mathcal{B}(\rho^{0}J/\psi)\sim\mathcal{B}(\omega J/\psi)$. The isospin
violations in this latter decay enter during the decay process, and are caused
[5] by the 8 MeV mass difference between the $D^{0}\bar{D}^{*0}$ and
$D^{+}D^{*-}$ components of the $X(3872)$; i.e., isospin violations do not
arise in the state itself, but during the decay process.
One amazing piece of information about the $X(3872)$ is that its mass lies
exactly at the $D^{0}\bar{D}^{*0}$ threshold, with a binding energy of $0\pm
180$ keV. It also has a very narrow width of $\Gamma<1.2$ MeV. Being so close
to the two-particle S-wave threshold indicates that cusp effects can occur;
however, a pure cusp explanation of the data has been ruled out [6]. Both a
virtual state and a bound state are compatible with experimental data [6]. In
a potential model which only includes the $\bar{c}c$ component, the nearest
state, the $\chi_{c1}(2P)$, is over 100 MeV too high [7]. However, when the
$\bar{c}c$ is coupled to the two-meson $D\bar{D}^{*}$ component, the energy of
the $\chi_{c1}(2P)$ drops and is largely in line with the experimental
$X(3872)$ [7]. This is the reason that the $X(3872)$ is considered to be an
exotic candidate. Lattice QCD studies find a bound state pole with a binding
energy of $\mathcal{O}(10)$ MeV at unphysical pion masses, and indicate that
only $\bar{c}c$ and $D\bar{D}^{*}$ are important contributions to the
wavefunction. They disfavor a diquark interpretation. Still, lattice QCD
calculations with physical pion mass could find a different binding energy and
pole structure.
The most likely model of the $X(3872)$ is an appreciable mixture of a
$\bar{c}c$ and a $D\bar{D}^{*}$ component. There has been an interesting
proposal highlighting that the binding energy could be measured experimentally
an order of magnitude more precisely by using a triangle singularity with low
energy $D^{*0}\bar{D}^{*0}$ pairs [8].
## 3 The $\psi(4230)$ aka $Y(4230)$ aka $Y(4260)$ State
In 2020, the PDG replaced the previously known $Y(4260)$ (found by BELLE in
2013) with the $Y(4230)$ (found by BESIII in 2017), the latter being more
precise. The quantum numbers of $Y$ states are defined to be $J^{PC}=1^{--}$,
$Q=0$, and $I^{G}=0^{-}$. The $Y(4230)$ decays to $\pi^{+}\pi^{-}J/\psi$.
Because charmonium states are annihilation suppressed, the $Y(4230)$ must have
at least a $\bar{c}c$ valence component. However, the most notable statement
about the $Y(4230)$ is its lack of open-charm decays into $D\bar{D}$ states,
unlike excited charmonium states (charmonium states above threshold decay into
open-charm channels $2-3$ orders of magnitude more frequently than into
hidden-charm ones). The lack of open-charm decays implies that the $Y(4230)$
has to contain more than just $\bar{c}c$, making it an exotic candidate.
Any model for the $Y(4230)$ needs to explain the lack of open-charm decays.
One model suggests that the $Y(4230)$ is a $D\bar{D}_{1}(2420)$ molecular
state. The needed $65$ MeV binding energy is within reach of a potential model
[9], making it a viable option. As molecular states decay through their
constituents, this explains the $Y(4230)=D\bar{D}_{1}(2420)\to D\pi^{+}D^{*-}$
decay, as well as the $Y(4230)\to\gamma X(3872)$ decay (caused by a triangle
singularity). However, by studying the $\pi\pi$ and $K\bar{K}$ distributions
in the final states of $Y(4230)$ decays, Ref. [10] finds that the $Y(4230)$
needs a sizable but not dominant $SU(3)$-flavor octet component; that is, the
$Y(4230)$ also needs an $SU(3)$-flavor singlet component.
The $Y(4230)$ is also consistent with a hybrid scenario. Here, lattice QCD
energies of hybrids are around 180 MeV too high, but once systematic errors
are included the masses are roughly consistent [1]. Additionally, with
potentials extracted from the lattice, pNRQCD finds a hybrid energy consistent
with the $Y(4230)$ [1]. Due to hybrid dynamics, decays of hybrids into S-wave
open charm are forbidden, making the S-P wave $D\bar{D}_{1}$ channel the
dominant decay process. Moreover, heavy quark spin symmetry is broken more in
hybrids than in quarkonia, at $\mathcal{O}(\Lambda_{\text{QCD}}/m_{Q})$. This
could help explain the observation of the $Y(4230)$ decaying into both the
heavy quark spin triplet $\pi^{+}\pi^{-}J/\psi$ and the spin singlet
$\pi^{+}\pi^{-}h_{c}$ at comparable rates.
Given the current experimental data, the most likely model for the $Y(4230)$
is a mixture of a $D\bar{D}_{1}$ molecule and a hybrid, causing a bound state
pole.
## 4 The $Z_{c}(3900)$ and $Z_{b}(10610)$ States
The $Z_{b}(10610)$ has decay modes $\mathcal{B}((B\bar{B}^{*})^{+})=86\%$,
$\mathcal{B}(\Upsilon(nS)\pi^{+})\sim 3\%$. For the $Z_{c}(3900)$ we have
$\mathcal{B}((D\bar{D}^{*})^{\pm})/\mathcal{B}(J/\psi(nS)\pi^{\pm})=6.2$.
Again, due to quarkonium annihilation effects being suppressed, and the $Z$
states being charged, the $Z$ states must have a minimal valence contribution
consisting of four quarks $\bar{Q}Qq_{1}q_{2}$. $Z$ states are defined by the
quantum numbers $Q=\pm 1,0$, $I^{G}=1^{+}$, and $J^{PC}=1^{+-}$.
For the $Z_{b}(10610)$, its large branching fraction into the S-wave
$B\bar{B}^{*}$ threshold is explained by its small binding energy of $3(3)$
MeV relative to that threshold. In fact, recent lattice QCD work [11]
extracted the potential between the $B$ and $\bar{B}^{*}$ and found sizable
attraction at small $r$. Certain parameterisations of the potential allow for
a virtual state, which would be identified with the $Z_{b}(10610)$. It should
not be surprising, then, that the $Z_{b}(10610)$ has been identified as a
virtual state in analyses of the experimental data [12]. As a shallow virtual
state, it would be molecular [4], and potentially a mixture of multiple
molecular components including $B\bar{B}^{*}$, $\pi\Upsilon$, etc.
Similarly, the $Z_{c}(3900)$ is 13 MeV lower than the S-wave $D\bar{D}^{*}$
threshold. A lattice QCD calculation [1] includes diquark and two-meson type
operators but does not find evidence of a bound state or narrow resonance.
However, Ref. [13] indicates that this lattice work is consistent with a
virtual state or broad resonance, which is also consistent with experimental
data. To distinguish between these cases, smaller bin sizes and better energy
resolution are needed in the experimental data. Regarding the spatial
distribution, the $Z_{c}(3900)$ is most likely a multi-molecular system, where
the mixing between the $\pi J/\psi\,(\rho\eta_{c})$ and $D\bar{D}^{*}$
channels is as important as the diagonal parts of the potential [14].
## 5 Four Quark States Containing Two or More Heavy Quarks
As we have seen, the best understood exotic states are likely mixtures of very
different components; e.g., the $X(3872)$ is likely a mixture of $\bar{c}c$
and $D\bar{D}^{*}$. This mixing can, and does, cause complicated experimental
phenomena, and this in turn makes it difficult to exclude regions in the
theoretical space of models. As such, we should first attempt to find an
exotic state that is composed solely of a single component and understand this
state fully. Afterwards, we can use this understanding when describing the
multi-component exotic candidates.
To make full use of our intuition, we should focus on bound states rather than
virtual or resonance states. A bound state is likely when at least two of the
quarks are heavy, as there is then a possibility of forming diquark or
anti-diquark pairs due to the non-relativistic behavior. The $QQ$ diquark can
be in a $\bar{3}/6$ representation with an attractive/repulsive potential.
Phenomenologically, the attractive $QQ$ diquark channel may sit deep enough in
the heavy-quark potential to cause binding. For a heavy-light diquark $Qq$,
there is no known mechanism derived from QCD that can cause this structure to
exist, and so I will not discuss it further.
### 5.1 Bound $\bar{b}\bar{b}bb$ States
The above reasoning prompted my coauthors and me to search for a bound
$\bar{b}\bar{b}bb$ state using lattice QCD. We searched in three channels
($J^{PC}=0^{++}$, $1^{+-}$, and $2^{++}$) and used a full basis of two-meson
and diquark anti-diquark interpolating operators. We did not find evidence for
any bound state below the lowest S-wave thresholds, as shown in Fig. 9 of
[15]. When a signal is not found, a bound needs to be set on the likelihood
that the signal was missed. This is frequently not done in lattice QCD
calculations, but should be. In Fig. 11 of [15] we show the probability that
we would have missed a bound state at a specific energy within our statistical
precision. As such, we have ruled out the possibility of a $\bar{b}\bar{b}bb$
compact tetraquark bound state at the $5\sigma$ level.
### 5.2 Resonance $\bar{c}\bar{c}cc$ States
In 2020, LHCb found evidence [16] of a state which decays into $2J/\psi$. Its
mass is around 700 MeV above the $2J/\psi$ threshold, and it was called the
$X(6900)$. Consequently, it would be composed of $\bar{c}\bar{c}cc$. Being
above threshold, this state is a resonance. A compact tetraquark with
(anti-)diquark constituents is the only likely model for the state, due to the
non-relativistic behavior of charm quarks in this energy region. The $X(6900)$
is very exciting, as it is the first clear evidence for a state that can be
explained by a diquark model which can be connected to QCD.
There are a few noticeable features in Fig. 2 of the LHCb data. First, there
are three bumps, and second, there is a dip around 6.8 GeV. All these features
need to be explained by a model. As we have discussed, two-body S-wave
thresholds are important for exotic states. Some important S-wave thresholds
are the $2\chi_{c0}$, $\chi_{c1}\chi_{c0}$, and the $\Xi_{cc}\bar{\Xi}_{cc}$
open flavour threshold.
As these compact tetraquark states should be narrow, the broad width of the
experimental bumps hints that multiple states contribute to the signal. LHCb
have not yet performed an amplitude analysis, which could distinguish nearby
$J^{PC}$ states. What most models seem to agree on is that the $X(6900)$ is
some combination of multiple 2S $0^{++}$ states. The 1S $0^{++}$ state could
then either be the first bump around $6.5$ GeV [17], or alternatively could
lie below the $2J/\psi$ threshold (where LHCb does not have data). If the 1S
$0^{++}$ is below the $2J/\psi$ threshold, then the first bump would be some
combination of 1P $0^{-+}$ states [18]. The dip is explained by destructive
interference as the $2\chi_{c0}$ threshold becomes accessible. Further, such
states are likely to exist in the bottom sector as well.
### 5.3 Bound $bb\bar{u}\bar{d}$ State
A $J^{P}=1^{+}$, $I=0$ exotic state composed of $bb\bar{u}\bar{d}$ quarks has
been predicted to be bound within a multitude of different theoretical
frameworks. Fig. 8 of [19] illustrates this consensus between lattice NRQCD
calculations, potentials extracted from lattice QCD, and phenomenological
model calculations. For a review talk of this state, and similar
$Q_{1}Q_{2}\bar{q_{3}}\bar{q_{4}}$ exotic states, see Anthony Francis’ talk.
In fact, there is a straightforward intuitive understanding of why this state
is bound. First, start with the two $b$-quarks and assume they are in the
infinitely heavy quark mass limit. They then arrange themselves into an
attractive $\bar{3}$ diquark. The $\bar{u}$ and $\bar{d}$ then form a light
quark cloud that screens the $bb$ interaction. This is sufficient to form deep
binding of around $100$ MeV. Adding finite $b$-quark mass corrections does not
change this intuitive picture. Given the robustness of this prediction, a
confirmation from experiment is needed, potentially via the
$bb\bar{u}\bar{d}\to\Xi^{0}_{bc}\bar{p}$ or $\to B^{-}D^{+}\pi^{-}$ processes.
If found, this would be the first bound state composed exclusively of four
quarks, and would be a very useful leverage point in our understanding of
exotics.
### 5.4 The Bound $D_{s0}(2317)$ State
The $D_{s0}(2317)$ has many features similar to the $X(3872)$. First, prior to
its discovery, it was expected to be the $j=\frac{1}{2}^{+}$ state composed of
$c\bar{s}$ quarks, above threshold, and very broad through its decay to $DK$
[7]. However, experimentally the $D_{s0}(2317)$ is below the $D^{0}K^{+}$
threshold and very narrow, in contrast to quark model expectations. This is
what makes it an exotic candidate. Secondly, when quark potential model
calculations include the two-meson $DK$ couplings to the $c\bar{s}$ component,
the eigenstate mass dramatically shifts downwards, in line with experimental
results. Multiple lattice QCD calculations [20] have been performed, taking
into account the various systematic errors, and find a bound state pole near
the experimental mass. These lattice QCD results find that both the $c\bar{s}$
and two-meson $DK$ components are important, but the diquark operators are
not. For these reasons, the $D_{s0}(2317)$ is a prototype for understanding
the $X(3872)$. Yet the $D_{s0}(2317)$ is easier to study theoretically, so we
can use our understanding of the $D_{s0}(2317)$ as input into understanding
the $X(3872)$.
It is useful to ask [21] what perturbations can be applied to the
$D_{s0}(2317)$ system in order to understand exotics more quantitatively, and
the $X(3872)$ specifically. With lattice QCD, we can explore how exotics
change (their mass, pole type, and residue) as we vary the quark masses, a
task that experiment cannot perform. In this way, we can supplement
experimental data with lattice QCD. Specifically, Ref. [21] advocates
examining the $D_{s0}\to D\,K$ process as a function of the strange quark mass
$m_{s}$, using leading order heavy-quark effective field theory to describe
heavy-light systems and leading order chiral perturbation theory for the light
pseudoscalar mesons. If we take $m_{s}\to m_{s}-\epsilon$, then $M_{D_{s0}}$
decreases by $\epsilon$, while $M_{K}^{2}$ changes by $-B\epsilon$, with $B$
the leading order $\chi$-PT constant. Applying this logic to $D_{s0}\to
D\,K$, we see that we can cause the $D_{s0}$ to sit right at threshold, and
even to lie above threshold. Studying how the $D_{s0}$ changes would
quantitatively illuminate the role of S-wave threshold effects on exotic
states. It would also be interesting to see the bound state pole move in the
complex plane and become a virtual/resonance state as the threshold effects
change as a function of $m_{s}$. We could even move the binding energy to be
exactly that of the $X(3872)$, and understand this archetypal state more
quantitatively.
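As a rough numerical illustration of this threshold-crossing argument (a sketch with PDG-like masses in GeV and an assumed representative value for the LO $\chi$-PT constant $B$; none of these inputs are taken from [21]):

```python
import numpy as np
from scipy.optimize import brentq

# All inputs are illustrative assumptions: PDG-like masses in GeV and a
# representative LO chi-PT constant B (so that M_K^2 ~ B m_s); none of these
# numbers are taken from [21].
M_Ds0 = 2.3178   # D_s0(2317)
M_D   = 1.8648   # D^0
M_K   = 0.4937   # K^+
B     = 2.7      # assumed LO chi-PT constant, in GeV

def binding(eps):
    # under m_s -> m_s - eps: the state shifts by -eps (LO HQEFT),
    # while M_K^2 shifts by -B*eps (LO chi-PT)
    MK = np.sqrt(M_K**2 - B*eps)
    return (M_D + MK) - (M_Ds0 - eps)   # threshold minus state: > 0 if bound

eps_star = brentq(binding, 0.0, 0.05)   # where the D_s0 reaches the DK threshold
print(f"D_s0 hits the D K threshold for eps ~ {1e3*eps_star:.0f} MeV")
```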
Such a calculation is also computationally feasible. All the expensive pieces
can be reused, and only the cheap unphysical strange quark propagators would
need to be recomputed. Additionally, the systematic errors would be minimal.
Consequently, such a program is imminently needed and has minimal downsides.
## 6 Conclusion
Taken together, we have explored the latest exotic state developments. This
has included the experimentally established $X(3872)$, $Y(4230)$,
$Z_{c}(3900)$, $Z_{b}(10610)$, the recently discovered all-charm tetraquark
$X(6900)$, and the $D_{s0}(2317)$. We also discussed the non-existence of a
bound all-bottom tetraquark, how lattice QCD can be used to study slight
perturbations of the $D_{s0}(2317)$ in order to quantitatively understand
S-wave threshold effects, and why imminent results from experiment are needed
to discover the bound $bb\bar{u}\bar{d}$ state.
We did not get to discuss all of the interesting exotic states currently in
the literature. However, given the physics we have learned, we can generalise
our knowledge to a multitude of other states. For example, the $Z_{b}(10650)$
is likely the $B^{*}\bar{B}^{*}$ spin-partner of the $Z_{b}(10610)$.
Similarly, the $Z_{c}(4020)$ is likely the $D^{*}\bar{D}^{*}$ spin-partner of
the $Z_{c}(3900)$. The heavy flavour equivalent of the $X(3872)$, the $X_{b}$,
has likely not been seen as the $\chi_{b1}(2P)$ is below the open-bottom
threshold. This is in contrast to the charm case, which highlights the
importance of the $\chi_{c1}(2P)$ for the $X(3872)$. The pentaquark $P_{c}$ is
likely a molecule of $\bar{c}c$ and a nucleon. LHCb have released preliminary results about an exotic-flavour $cs\bar{u}\bar{d}$ state, and the lattice QCD HadSpec collaboration finds suggestions of this state in the $I=0$, $J^{P}=0^{+}$ channel.
There are other established states that all models describe accurately; more experimental data are needed to rule out some of the models describing these states. The exotic candidates in this class are the $Y(4360)$ (which could be the $D_{1}D^{*}$ partner of the $Y(4230)$), the $Y(4660)$ (which could be the $D_{s}D_{s1}$ strange partner of the $Y(4230)$), and the $Z(4430)$.
The future of exotic spectroscopy is bright. With the BELLE and BES upgrades,
the new PANDA experiment, and the continuous measurements from the LHC,
further data continues to be collected and analysed. Such work will continue
to find more exotic candidates, and shed further light on the exotic ways Nature operates at the smallest of scales.
## References
* [1] N. Brambilla, S. Eidelman, C. Hanhart, A. Nefediev, C.-P. Shen, C.E. Thomas et al., _The $XYZ$ states: experimental and theoretical status and perspectives_, _Phys. Rept._ 873 (2020) 1 [1907.07583].
* [2] R. Lebed, “The Spectroscopy of Exotics in the Dynamical Diquark Model.” 2020.
* [3] F.-K. Guo, C. Hanhart, U.-G. Meißner, Q. Wang, Q. Zhao and B.-S. Zou, _Hadronic molecules_ , _Rev. Mod. Phys._ 90 (2018) 015004 [1705.00141].
* [4] I. Matuschek, V. Baru, F.-K. Guo and C. Hanhart, _On the nature of near-threshold bound and virtual states_ , 2007.05329.
* [5] Z.-Y. Zhou and Z. Xiao, _Comprehending Isospin breaking effects of $X(3872)$ in a Friedrichs-model-like scheme_, _Phys. Rev. D_ 97 (2018) 034011 [1711.01930].
* [6] LHCb collaboration, _Study of the lineshape of the $\chi_{c1}(3872)$ state_, _Phys. Rev. D_ 102 (2020) 092005 [2005.13419].
* [7] J. Ferretti, G. Galatà and E. Santopinto, _Interpretation of the X(3872) as a charmonium state plus an extra component due to the coupling to the meson-meson continuum_ , _Phys. Rev. C_ 88 (2013) 015207 [1302.6857].
* [8] F.-K. Guo, _Novel Method for Precisely Measuring the $X(3872)$ Mass_, _Phys. Rev. Lett._ 122 (2019) 202002 [1902.11221].
* [9] X.-K. Dong, Y.-H. Lin and B.-S. Zou, _Prediction of an exotic state around 4240 MeV with $J^{PC}=1^{-+}$ as C-parity partner of Y(4260) in molecular picture_, _Phys. Rev. D_ 101 (2020) 076003 [1910.14455].
* [10] Y.-H. Chen, L.-Y. Dai, F.-K. Guo and B. Kubis, _Nature of the $Y(4260)$: A light-quark perspective_, _Phys. Rev. D_ 99 (2019) 074016 [1902.10957].
* [11] S. Prelovsek, H. Bahtiyar and J. Petkovic, _$Z_{b}$ tetraquark channel from lattice QCD and Born-Oppenheimer approximation_, _Phys. Lett. B_ 805 (2020) 135467 [1912.02656].
* [12] Q. Wang, V. Baru, A. Filin, C. Hanhart, A. Nefediev and J.-L. Wynen, _Line shapes of the $Z_{b}(10610)$ and $Z_{b}(10650)$ in the elastic and inelastic channels revisited_, _Phys. Rev. D_ 98 (2018) 074023 [1805.07453].
* [13] M. Albaladejo, P. Fernandez-Soler and J. Nieves, _$Z_{c}(3900)$ : Confronting theory and lattice simulations_, _Eur. Phys. J. C_ 76 (2016) 573 [1606.03008].
* [14] P.G. Ortega, J. Segovia, D.R. Entem and F. Fernández, _The $Z_{c}$ structures in a coupled-channels model_, _Eur. Phys. J. C_ 79 (2019) 78 [1808.00914].
* [15] C. Hughes, E. Eichten and C.T.H. Davies, _Searching for beauty-fully bound tetraquarks using lattice nonrelativistic QCD_ , _Phys. Rev. D_ 97 (2018) 054505 [1710.03236].
* [16] LHCb collaboration, _Observation of structure in the $J/\psi$-pair mass spectrum_, _Sci. Bull._ 65 (2020) 1983 [2006.16957].
* [17] M. Karliner and J.L. Rosner, _Interpretation of structure in the di- $J/\psi$ spectrum_, _Phys. Rev. D_ 102 (2020) 114039 [2009.04429].
* [18] J.F. Giron and R.F. Lebed, _Simple spectrum of $c\bar{c}c\bar{c}$ states in the dynamical diquark model_, _Phys. Rev. D_ 102 (2020) 074003 [2008.01631].
* [19] L. Leskovec, S. Meinel, M. Pflaumer and M. Wagner, _Lattice QCD investigation of a doubly-bottom $\bar{b}\bar{b}ud$ tetraquark with quantum numbers $I(J^{P})=0(1^{+})$_, _Phys. Rev. D_ 100 (2019) 014503 [1904.04197].
* [20] G.K. Cheung, C.E. Thomas, D.J. Wilson, G. Moir, M. Peardon and S.M. Ryan, _$DK$ $I=0,$ $D\bar{K}$ $I=0,1$ scattering and the $D_{s0}^{\ast}(2317)$ from lattice QCD_, 2008.06432.
* [21] E. Eichten and C. Hughes, _Exploring S-Wave Threshold Effects in QCD: A Heavy-Light Approach_ , _Phys. Lett. B_ 802 (2020) 135250 [1911.02024].
# Sparse expanders have negative curvature
Justin Salez (CEREMADE, CNRS, UMR 7534, Université Paris-Dauphine, PSL University, 75016 Paris, France)
###### Abstract
We prove that bounded-degree expanders with non-negative Ollivier-Ricci
curvature do not exist, thereby solving a long-standing open problem suggested
by Naor and Milman and publicized by Ollivier (2010). In fact, this remains
true even if we allow for a vanishing proportion of large degrees, large
eigenvalues, and negatively-curved edges. To establish this, we work directly
at the level of Benjamini-Schramm limits, and exploit the entropic
characterization of the Liouville property on stationary random graphs to show
that non-negative curvature and spectral expansion are incompatible “at
infinity”. We then transfer this result to finite graphs via local weak
convergence. The same approach also applies to the Bakry-Émery curvature condition CD$(0,\infty)$, thereby settling a recent conjecture of Cushing, Liu and Peyerimhoff (2019).
###### Contents
1 Introduction
  1.1 Non-negative curvature
  1.2 Main result
  1.3 Proof outline
2 Random rooted graphs
  2.1 Local weak convergence
  2.2 Tightness, unimodularity and stationarity
  2.3 Spectral radius, entropy and the Liouville property
3 Proof of the main result
  3.1 Setting the stage
  3.2 Non-negative curvature implies zero entropy
  3.3 Zero entropy implies poor spectral expansion
## 1 Introduction
### 1.1 Non-negative curvature
The _Ricci curvature_ of a manifold is a fundamental concept in Riemannian
geometry, see e.g. [28]. In two celebrated works [42, 43], Ollivier proposed a
notion of curvature based on optimal transport which applies to arbitrary
metric spaces, hence in particular to the discrete setting of graphs.
Specifically, let $G=(V_{G},E_{G})$ be a locally finite connected graph. As
usual, write $\deg_{G}(x)$ for the degree of a vertex $x$, and ${\rm
d}_{G}(x,y)$ for the length of a minimal path from $x$ to $y$ in $G$. Let also
$P_{G}\colon V_{G}\times V_{G}\to[0,1]$ denote the transition matrix of the
lazy simple random walk on $G$, i.e.
$\displaystyle P_{G}(x,y)$ $\displaystyle:=$ $\displaystyle\left\\{\begin{array}[]{ll}\frac{1}{2\deg_{G}(x)}&\textrm{if }\\{x,y\\}\in E_{G};\\\ \frac{1}{2}&\textrm{if }x=y;\\\ 0&\textrm{otherwise}.\end{array}\right.$
The _Ollivier-Ricci curvature_ at an edge $\\{x,y\\}\in E_{G}$ is defined as
$\displaystyle\kappa_{G}(x,y)$ $\displaystyle:=$ $\displaystyle
1-\mathcal{W}_{1}\left(P_{G}(x,\cdot),P_{G}(y,\cdot)\right),$
where $\mathcal{W}_{1}$ denotes the $L^{1}-$Wasserstein distance on
$\mathcal{P}_{1}(V_{G},{\rm d}_{G})$, see (21) below. Note that the
computation of $\kappa_{G}(x,y)$ amounts to solving a finite-dimensional
linear optimization problem, and is therefore amenable to standard algorithmic
techniques (see [22] for a beautiful interactive curvature calculator). The
Ollivier-Ricci curvature of the whole graph is then defined as
$\displaystyle\kappa(G)$ $\displaystyle:=$ $\displaystyle\inf_{\\{x,y\\}\in
E_{G}}\kappa_{G}(x,y).$
This fundamental geometric quantity measures how distances are contracted, on
average, under the action of $P_{G}$. When $\kappa(G)\geq 0$, the graph $G$ is
called _non-negatively curved_. This is the case, for example, when $G$ is the
Cayley graph of an abelian group, as witnessed by the obvious coupling that
uses the same random generators for both trajectories. Non-negative curvature
is equivalent to the requirement that $P_{G}$ is a contraction under the
Wasserstein metric $\mathcal{W}_{1}$, and constitutes the essence of the
powerful _path coupling method_ for bounding mixing times [18]. Consequences
in terms of geometry, mixing, and concentration of measure have been massively
investigated, and quantified by a variety of functional inequalities. The
literature is too vast for an exhaustive account, and we refer the reader to
the seminal papers [42, 43, 34, 30], the survey [44], and the more recent
works [24, 41, 21, 32, 40] for details, variations, references, and open
problems. In particular, the present work was motivated by the following long-standing question, due to Naor and Milman, and publicized by Ollivier [44,
Problem T]. Recall that a _family of expanders_ is a sequence of finite graphs
with uniformly bounded degrees, diverging sizes, and spectral gap bounded away
from $0$.
###### Question 1 (Problem T in [44]).
Is there a family of non-negatively curved expanders?
An instructive special class of graphs for which non-negative curvature is
completely understood is that of cubic graphs. Specifically, it was shown in
[22] that prism graphs and Möbius ladders are the only cubic graphs with non-negative Ollivier-Ricci curvature. Since these are not expanders, the answer
to Question 1 is negative for cubic graphs. To the best of our knowledge, this
is the only result in the direction of Question 1, despite the rich body of
works on non-negative curvature.
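To make the linear-programming characterisation concrete, here is a minimal sketch (ours, not code from [22]) that computes $\kappa_{G}(x,y)$ for the lazy walk by solving the transport problem (21) with scipy, and checks that a prism graph is indeed non-negatively curved; all function names are our own.

```python
import networkx as nx
from scipy.optimize import linprog

def lazy_step(G, x):
    """The distribution P_G(x, .) of the lazy simple random walk at x."""
    mu = {x: 0.5}
    for u in G.neighbors(x):
        mu[u] = 0.5 / G.degree(x)
    return mu

def ollivier_ricci(G, x, y, dist):
    """kappa_G(x,y) = 1 - W_1(P_G(x,.), P_G(y,.)), via the transport LP."""
    mu, nu = lazy_step(G, x), lazy_step(G, y)
    sx, sy = list(mu), list(nu)
    m, n = len(sx), len(sy)
    c = [dist[u][v] for u in sx for v in sy]            # transport costs d(u,v)
    A_eq = [[1.0 if k // n == i else 0.0 for k in range(m * n)]
            for i in range(m)]                          # row marginals = mu
    A_eq += [[1.0 if k % n == j else 0.0 for k in range(m * n)]
             for j in range(n)]                         # column marginals = nu
    b_eq = [mu[u] for u in sx] + [nu[v] for v in sy]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq)              # couplings are >= 0
    return 1.0 - res.fun

G = nx.circular_ladder_graph(6)                         # a prism graph (cubic)
dist = dict(nx.all_pairs_shortest_path_length(G))
print(min(ollivier_ricci(G, x, y, dist) for x, y in G.edges))  # >= 0
```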
### 1.2 Main result
In the present paper, we answer Question 1 negatively in full generality, as
well as its CD$(0,\infty)$ analogue raised by Cushing, Liu and Peyerimhoff
[23, Conjecture 9.11], see Remark 1 below. Moreover, we show that the answer
to Question 1 remains negative even if we significantly relax the required
properties. Specifically, denote by $\Delta(G)$ the maximum degree of a finite
graph $G$, and by
$1\ =\ \lambda_{1}(G)\ \geq\ \lambda_{2}(G)\ \geq\ldots\ \geq\ \lambda_{N}(G)\
\geq\ 0,$
the $N=|V_{G}|$ ordered eigenvalues of its transition matrix $P_{G}$. With
these notations, Question 1 simply asks whether there exist constants $\Delta\geq 1$, $\rho<1$ and arbitrarily large graphs satisfying
1. (A)
sparsity: $\Delta(G)\leq\Delta;$
2. (B)
spectral expansion: $\lambda_{2}(G)\leq\rho;$
3. (C)
non-negative curvature: $\kappa(G)\geq 0.$
Our main result says that no large graph can even come close to satisfying
these three requirements.
###### Theorem 2 (Main result).
Fix $\Delta\geq 1$ and $\rho\in(0,1)$. Then, there exists a constant
$\varepsilon=\varepsilon_{\Delta,\rho}>0$ such that _every_ finite graph $G$
must satisfy one of the following conditions:
* •
either $G$ is far from satisfying the sparsity requirement (A), in the
following sense:
$\displaystyle\sum_{x\in V_{G}}\deg_{G}(x)\log\deg_{G}(x)$ $\displaystyle>$
$\displaystyle(\Delta\log\Delta)|V_{G}|;$
* •
or $G$ is far from satisfying the expansion requirement (B), in the following
sense:
$\displaystyle\mathrm{card}\\{i\colon\lambda_{i}(G)>\rho\\}$
$\displaystyle\geq$ $\displaystyle\varepsilon|V_{G}|;$
* •
or $G$ is far from satisfying the curvature requirement (C), in the following
sense:
$\displaystyle\mathrm{card}\\{e\in E_{G}\colon\kappa_{G}(e)<-\varepsilon\\}$
$\displaystyle\geq$ $\displaystyle\varepsilon|E_{G}|.$
Note that the conclusion is only meaningful for large graphs, since the second
condition is trivially satisfied when $|V_{G}|\leq\frac{1}{\varepsilon}$. Here
is an equivalent – but perhaps more intuitive – formulation.
###### Theorem 3 (Rephrasing).
Let $G_{n}=(V_{n},E_{n}),n\geq 1$ be finite graphs with the sparsity property
$\displaystyle\sup_{n\geq 1}\left\\{\frac{1}{|V_{n}|}\sum_{x\in
V_{n}}\deg_{G_{n}}(x)\log\deg_{G_{n}}(x)\right\\}$ $\displaystyle<$
$\displaystyle\infty.$ (2)
Suppose in addition that the Ollivier-Ricci curvature is almost non-negative
on most edges, i.e.
$\displaystyle\forall\varepsilon>0,\quad\frac{1}{|E_{n}|}\,\mathrm{card}\\{e\in
E_{n}\colon\kappa_{G_{n}}(e)<-\varepsilon\\}$
$\displaystyle\xrightarrow[n\to\infty]{}$ $\displaystyle 0.$ (3)
Then, a macroscopic proportion of eigenvalues of the transition matrix must
accumulate near $1$:
$\displaystyle\forall\rho<1,\quad\liminf_{n\to\infty}\left\\{\frac{1}{|V_{n}|}\,\mathrm{card}\\{i\colon\lambda_{i}(G_{n})\geq\rho\\}\right\\}$
$\displaystyle>$ $\displaystyle 0.$ (4)
Here again, the theorem is only meaningful in the large-size limit
$|V_{n}|\to\infty$, since the conclusion (4) trivially holds otherwise. The
high-level message is that on large sparse graphs, non-negative curvature (in
an even weak sense) induces extremely poor spectral expansion. This stands in
stark contrast with the traditional idea – quantified by a broad variety of
functional inequalities over the past decade – that non-negative curvature is
associated with _good_ mixing behavior.
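A quick sanity check of this message (ours, not from the paper): cycles, being Cayley graphs of abelian groups, are non-negatively curved, and the explicit lazy-walk spectrum of the $n$-cycle exhibits a macroscopic fraction of eigenvalues above any fixed $\rho<1$, exactly as in (4).

```python
import numpy as np

def lazy_cycle_spectrum(n):
    """Lazy-walk eigenvalues on the n-cycle: (1 + cos(2*pi*k/n)) / 2."""
    return (1.0 + np.cos(2.0 * np.pi * np.arange(n) / n)) / 2.0

for n in [100, 1000, 10000]:
    lam = lazy_cycle_spectrum(n)
    # the fraction above rho = 0.9 stabilises near arccos(0.8)/pi ~ 0.205
    print(n, np.mean(lam > 0.9))
```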
###### Remark 1 (Bakry-Émery curvature).
Bakry and Émery [7, 8, 9] developed a different notion of non-negative curvature based on $\Gamma-$calculus and known as the _CD $(0,\infty)$_
condition, see also [33, 26]. Since this notion is local, our proof also
applies, with the role of Theorem 11 being played by a recent result of Hua
[27, Theorem 2]. Consequently, there is no family of expanders satisfying _CD
$(0,\infty)$_, as conjectured by Cushing, Liu and Peyerimhoff [23, Conjecture
9.11]. We note that the weaker statement obtained by replacing _CD
$(0,\infty)$_ with _CD $(0,n)$_ was recently established by Münch [39]. We
warmly thank David Cushing, Shiping Liu and Florentin Münch for pointing this
out.
###### Remark 2 (Laziness).
The literature actually contains a whole family of variants
$(\kappa_{\alpha})_{\alpha\in[0,1)}$ of the Ollivier-Ricci curvature $\kappa$,
obtained by replacing the matrix $P_{G}$ with its $\alpha-$idle version:
$\displaystyle P_{G}^{(\alpha)}$ $\displaystyle:=$
$\displaystyle(2-2\alpha)P_{G}+(2\alpha-1)\,{\rm Id}.$
There is even a continuous-time version $\kappa_{\star}:=\lim_{\alpha\to
1}\frac{\kappa_{\alpha}}{1-\alpha}$, proposed in [34] and largely adopted
since then. In fact, it was later shown (see [19, Remark 5.4]) that
$\frac{\kappa_{\alpha}}{1-\alpha}\leq\kappa_{\star}\ =\ 2\kappa,$ where
$\kappa=\kappa_{1/2}$ is the version considered in the present paper.
Consequently, our result is stated in the strongest possible form, and applies
to all versions of the Ollivier-Ricci curvature.
###### Remark 3 (Eigenvectors).
Our proof will actually reveal more than (4): not only are there many
eigenvalues near $1$, but the corresponding eigenvectors furthermore charge
most vertices significantly. In other words, the poor spectral expansion of
non-negatively curved graphs is not restricted to any specific region: it
applies everywhere. See Remark 6 for a precise statement.
### 1.3 Proof outline
#### Proof outline.
The most natural route towards Question 1 would consist in looking for a
quantitative upper-bound on the spectral gap of a finite non-negatively curved
graph, in terms of its size and maximum degree. Interestingly, we do _not_
pursue this approach here. Neither do we try to obtain asymptotic estimates
along a sequence of sparse graphs $(G_{n})_{n\geq 1}$ with non-negative
curvature. Instead, we work directly at the elegant level of _local weak
limits_ of finite graphs, and exploit their built-in _stationarity_ to prove
that non-negative curvature and spectral expansion are incompatible “at
infinity”. This relies on the central concept of _asymptotic entropy_ , and
its classical relations with the Liouville property and the spectral radius.
We then transfer this incompatibility result to finite graphs via a relative-compactness argument. As far as we know, the idea of using local weak limits as a tool to deduce generic bounds on the mixing parameters of sparse Markov chains has not received much attention. We firmly believe that this viewpoint will have many applications.
#### Further questions.
The surprising “$\deg\log\deg$” requirement (2) is used to define the
asymptotic entropy on which our whole argument relies. We do not know whether
it is necessary for the conclusion (4) to hold, or whether it can be further
relaxed. Note that some degree restriction is necessary, since the complete
graph satisfies $\lambda_{2}(G)=\kappa(G)=1/2$, regardless of its size. Also,
a drawback of our approach – as of any limit argument – is its non-quantitative nature. It would be interesting to find an explicit upper-bound
(vanishing as $n\to\infty$) on the spectral gap of a non-negatively curved
graph with $n$ vertices and maximum degree $\Delta$, i.e. to estimate
$\displaystyle\gamma_{\Delta}(n)$ $\displaystyle:=$
$\displaystyle\max\\{1-\lambda_{2}(G)\colon|V_{G}|=n,\Delta(G)\leq\Delta,\kappa(G)\geq
0\\}.$
#### Organization of the paper.
The remainder of the paper is organized as follows: Section 2 offers a brief,
self-contained introduction to the framework of random rooted graphs. In
particular, we recall the definition of local weak convergence (Section 2.1),
introduce the key notions of _unimodularity_ , _stationarity_ and _tightness_
(Section 2.2), and gather important results on the _asymptotic entropy_ of
random walks on stationary graphs (Section 2.3). Section 3 is devoted to the
proof of the main result, which is reduced (in Section 3.1) to the following
two main steps:
1. 1.
Proving that non-negative curvature implies zero-entropy (Section 3.2).
2. 2.
Proving that zero-entropy causes poor spectral expansion (Section 3.3).
#### Acknowledgment.
The author warmly thanks Itai Benjamini, David Cushing, Nicolas Curien,
Shiping Liu, Russell Lyons, Florentin Münch and Pierre Pansu for many
wonderful comments, connections and references. This work was partially
supported by Institut Universitaire de France.
## 2 Random rooted graphs
In this section, we provide a self-contained introduction to the framework of
_local weak convergence_. This limit theory for sparse graphs was introduced
by Benjamini and Schramm [14] and developed further by Aldous and Steele [2]
and Aldous and Lyons [1]. The limit points are _random rooted graphs_ enjoying
a powerful form of _stationarity_. They describe the “internal” geometry of
large graphs, as seen from a uniformly chosen vertex. Local weak limits are
often much more convenient to work with than the finite-graph sequences that
they approximate, and have been shown to capture the asymptotic behavior of a
number of natural graph parameters, see, e.g. [35, 17, 16, 3]. The present
paper can be viewed as another illustration of the strength of this modern
viewpoint.
### 2.1 Local weak convergence
#### The space of rooted graphs.
All graphs considered in this paper will be simple, undirected, countable, and
locally finite. A _rooted graph_ is a pair $(G,o)$, where $G$ is a graph and
$o$ is a distinguished vertex, called the _root_. Two rooted graphs $(G,o)$
and $(G^{\prime},o^{\prime})$ are _isomorphic_ , written $G\simeq G^{\prime}$,
if there is a bijection $\phi\colon V_{G}\to V_{G^{\prime}}$ which preserves
the root ($\phi(o)=o^{\prime}$) and the edges:
$\displaystyle\forall x,y\in V_{G},\quad\\{x,y\\}\in E_{G}$
$\displaystyle\Longleftrightarrow$
$\displaystyle\left\\{\phi(x),\phi(y)\right\\}\in E_{G^{\prime}}.$
We let ${\mathscr{G}_{\bullet}}$ denote the set of connected rooted graphs,
considered up to the isomorphism relation $\simeq$. To lighten the exposition,
we will use the same notation $(G,o)$ for the rooted graph and its equivalence
class. We write $\mathcal{B}_{t}(G,o)$ for the _ball of radius $t$ around the
root_ in $G$, i.e. the (finite) rooted subgraph of $G$ induced by the set
$\\{x\in V_{G}\colon{\rm d}_{G}(o,x)\leq t\\}$. We equip
${\mathscr{G}_{\bullet}}$ with the _local metric_ ${{\rm
d}}_{\textsc{loc}}\colon{\mathscr{G}_{\bullet}}\times{\mathscr{G}_{\bullet}}\to[0,1]$,
defined by
$\displaystyle{{\rm d}}_{\textsc{loc}}((G,o),(G^{\prime},o^{\prime}))$
$\displaystyle:=$ $\displaystyle\frac{1}{1+r},\quad\textrm{ with }\quad r\ =\
\sup\\{t\geq
0\colon\mathcal{B}_{t}(G,o)\simeq\mathcal{B}_{t}(G^{\prime},o^{\prime})\\}.$
In words, two elements of ${\mathscr{G}_{\bullet}}$ are “close” to each other
if one has to look “far away” from the root to distinguish them apart. It can
be shown that $({\mathscr{G}_{\bullet}},{{\rm d}}_{\textsc{loc}})$ is a
complete separable metric space. We equip it with its Borel $\sigma-$algebra,
and call ${\mathscr{G}_{\bullet}}-$valued random variables _random rooted
graphs_.
#### Local weak convergence.
Write $\mathcal{P}({\mathscr{G}_{\bullet}})$ for the space of Borel
probability measures on ${\mathscr{G}_{\bullet}}$, equipped with the usual
topology of weak convergence. If $G$ is an arbitrary finite graph, define its
_local profile_ $\mathcal{L}_{G}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ to be
the empirical distribution of all possible _rootings_ of $G$, i.e.
$\displaystyle\mathcal{L}_{G}$ $\displaystyle:=$
$\displaystyle\frac{1}{|V_{G}|}\sum_{x\in V_{G}}\delta_{(G,x)},$ (5)
where $(G,x)$ is here implicitly restricted to the connected component of $x$
if $G$ is not connected. Finally, if $G_{n}=(V_{n},E_{n}),{n\geq 1}$ are
finite graphs whose local profiles $(\mathcal{L}_{G_{n}})_{n\geq 1}$ admit a
limit $\mathcal{L}$ in $\mathcal{P}({\mathscr{G}_{\bullet}})$, we call
$\mathcal{L}$ the _local weak limit_ of the sequence $(G_{n})_{n\geq 1}$, and
write simply
$\displaystyle G_{n}$ $\displaystyle\xrightarrow[n\to\infty]{}$
$\displaystyle\mathcal{L}.$
In words, $\mathcal{L}$ is the law of a random rooted graph which describes
how the deterministic graph $G_{n}$ asymptotically looks when seen from a
uniformly chosen root. More formally,
$\displaystyle\frac{1}{|V_{n}|}\sum_{x\in V_{n}}f(G_{n},x)$
$\displaystyle\xrightarrow[n\to\infty]{}$
$\displaystyle\mathcal{L}\left[f(G,o)\right]\ \triangleq\
\int_{\mathscr{G}_{\bullet}}f\,{\rm d}\mathcal{L},$ (6)
for each continuous, bounded observable
$f\colon{\mathscr{G}_{\bullet}}\to\mathbb{R}$. The left-hand side can be
thought of as a spatial average of “local contributions” from the various
vertices of $G_{n}$. In short, local weak convergence allows one to
conveniently replace the asymptotic analysis of such averages with the direct
computation of an expectation at the root of a certain random graph.
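For instance (a minimal sketch of ours, with a hypothetical helper name), the left-hand side of (6) is simply an average of a local observable over all rootings of a finite graph:

```python
import networkx as nx
import numpy as np

def spatial_average(G, f):
    """Left-hand side of (6): the average of f over all rootings of G."""
    return np.mean([f(G, x) for x in G.nodes])

# the 1-local observable "root degree" on a random 3-regular graph
G = nx.random_regular_graph(3, 1000, seed=0)
print(spatial_average(G, lambda H, x: H.degree(x)))   # -> 3.0
```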
#### Local observables.
The class of continuous functions on ${\mathscr{G}_{\bullet}}$ clearly
contains (but is not restricted to) all $t-$_local_ observables $(t\geq 0)$,
where $f\colon{\mathscr{G}_{\bullet}}\to\mathbb{R}$ is called $t-$_local_ if
the value $f(G,o)$ is determined by the (isomorphic class of the) finite ball
$\mathcal{B}_{t}(G,o)$. Here is a short list of examples, which will be used
throughout the paper without notice:
* •
The root degree $(G,o)\mapsto\deg_{G}(o)$ is $1-$local.
* •
The minimum curvature at $o$, $(G,o)\mapsto\min_{x\sim o}\kappa_{G}(o,x)$ is
$2-$local.
* •
For each $t\geq 0$, the return probability $(G,o)\mapsto P_{G}^{t}(o,o)$ is
$t-$local (in fact, $(\lfloor t/2\rfloor+1)-$local).
* •
For each $t\geq 0$, the $t-$step entropy $(G,o)\mapsto-\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\log P_{G}^{t}(o,x)$ is $t-$local.
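The last two observables are straightforward to evaluate on a finite graph. The sketch below is ours (we reuse these helper names in later sketches): it builds the lazy transition matrix and computes the return probability and the $t$-step entropy at a root.

```python
import numpy as np
import networkx as nx

def lazy_matrix(A):
    """Lazy simple random walk transition matrix from an adjacency matrix A."""
    deg = A.sum(axis=1)
    return 0.5 * np.eye(len(A)) + 0.5 * A / deg[:, None]

def t_step_entropy(P, o, t):
    """The t-local observable -sum_x P^t(o,x) log P^t(o,x)."""
    p = np.linalg.matrix_power(P, t)[o]
    p = p[p > 0]
    return -np.sum(p * np.log(p))

P = lazy_matrix(nx.to_numpy_array(nx.petersen_graph()))
print(np.linalg.matrix_power(P, 4)[0, 0])   # 4-step return probability at vertex 0
print(t_step_entropy(P, 0, 4))              # 4-step entropy at vertex 0
```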
### 2.2 Tightness, unimodularity and stationarity
#### Tightness.
One of the many reasons for the success of the local weak convergence
framework (compared to other limit theories for sparse graphs) is the fact
that every “reasonable” sequence of sparse graphs admits a local weak limit.
The following tightness criterion, due to Benjamini, Lyons and Schramm, gives
an honest mathematical content to this vague claim. Note, of course, that
passing to sub-sequences is unavoidable.
###### Theorem 4 (Tightness, see Theorem 3.1 in [12]).
Let $G_{n}=(V_{n},E_{n}),{n\geq 1}$ be finite graphs so that
$\displaystyle\sup_{n\geq 1}\left\\{\frac{1}{|V_{n}|}\sum_{x\in
V_{n}}\phi\left(\deg_{G_{n}}(x)\right)\right\\}$ $\displaystyle<$
$\displaystyle\infty,$
for some function $\phi\colon\mathbb{Z}_{+}\to\mathbb{R}_{+}$ satisfying
$\phi(d)\gg d$ as $d\to\infty$. Then, $(G_{n})_{n\geq 1}$ has a subsequence
which admits a local weak limit.
In particular, this criterion applies to the sequence $(G_{n})_{n\geq 1}$ in
Theorem 3, with $\phi(d)=d\log d$. This will ensure that we can “pass to the
limit” and study the question of existence of non-negatively curved expanders
directly at the level of local weak limits.
#### Unimodularity.
Local weak limits of finite graphs happen to enjoy a powerful distributional
invariance, which is directly inherited from the fact that the root is equally
likely to be any vertex under the local profile (5). More precisely, a measure
$\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ is called _unimodular_ if
it satisfies
$\displaystyle\mathcal{L}\left[\sum_{x\in V_{G}}f(G,o,x)\right]$
$\displaystyle=$ $\displaystyle\mathcal{L}\left[\sum_{x\in
V_{G}}f(G,x,o)\right],$ (7)
for every Borel function $f\colon{\mathscr{G}_{\bullet\bullet}}\to[0,\infty]$,
where ${\mathscr{G}_{\bullet\bullet}}$ denotes the analogue of the space
${\mathscr{G}_{\bullet}}$ with two distinguished roots instead of one.
Thinking of $f(G,o,x)$ as an amount of mass sent from $o$ to $x$, the identity
(7) expresses the fact that the expected masses received and sent by the root
coincide. This _Mass Transport Principle_ is clearly satisfied when
$\mathcal{L}$ is the local profile of a finite graph, and is preserved under
weak convergence. Thus, we obtain the following fundamental result.
###### Theorem 5 (Inherited unimodularity).
All local weak limits of finite graphs are unimodular.
Whether the converse holds is a notoriously hard open problem with deep
implications, see [1, 25, 12]. Let us here record a first simple consequence
of unimodularity, which will be useful.
###### Lemma 6 (Everything shows at the root, see Lemma 2.3 in [1]).
Suppose that $\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ is
unimodular, and let $B\subseteq{\mathscr{G}_{\bullet}}$ be a Borel set such
that $\mathcal{L}(B)=1$. Then we also have,
$\displaystyle\mathcal{L}\left(\\{\forall x\in V_{G},\ (G,x)\in B\\}\right)$
$\displaystyle=$ $\displaystyle 1.$
###### Proof.
Just apply the Mass Transport Principle with $f(G,o,x)={\bf 1}_{(G,o)\notin
B}$. ∎
#### Stationarity.
Under a mild integrability condition and a trivial change of measure,
unimodularity can be rephrased as _reversibility_ under a natural Markov chain
on ${\mathscr{G}_{\bullet}}$. We will here only need the weaker notion of
_stationarity_. Specifically, we say that a law
$\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ is _stationary_ if it is
invariant for the Markov chain on ${\mathscr{G}_{\bullet}}$ which, at each
step, keeps the underlying graph as it is and moves the root according to the
transition matrix $P_{G}$. In other words, $\mathcal{L}$ is stationary if
$\displaystyle\mathcal{L}\left[\sum_{x\in V_{G}}P^{t}_{G}(o,x)h(G,x)\right]$
$\displaystyle=$ $\displaystyle\mathcal{L}\left[h(G,o)\right],$ (8)
for every Borel function $h\colon{\mathscr{G}_{\bullet}}\to[0,\infty]$ and
every $t\geq 0$ (equivalently, for $t=1$). The relation with unimodularity is
summed up in the following classical lemma (see, e.g. [10]).
###### Lemma 7 (Degree-biasing).
Let $\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ be a unimodular law
with ${\deg}(\mathcal{L}):=\mathcal{L}[\deg_{G}(o)]<\infty.$ Then, the law
$\widehat{\mathcal{L}}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ defined by the
following change of measure is stationary:
$\displaystyle{\rm d}{\widehat{\mathcal{L}}}(G,o)$ $\displaystyle:=$
$\displaystyle\frac{\deg_{G}(o)}{\deg(\mathcal{L})}\,{\rm d}\mathcal{L}(G,o).$
(9)
###### Proof.
Apply the Mass Transport Principle to $\mathcal{L}$ with $f(G,o,x)=h(G,o){\bf
1}_{\\{x,o\\}\in E_{G}}$. ∎
###### Remark 4 (Mutual absolute continuity).
It follows from (9) that the original law $\mathcal{L}$ and its degree-biased
version ${\widehat{\mathcal{L}}}$ are mutually absolutely continuous. In other
words, we have
$\displaystyle\mathcal{L}(B)=1$ $\displaystyle\Longleftrightarrow$
$\displaystyle{\widehat{\mathcal{L}}}(B)=1,$
for any Borel set $B\subseteq{\mathscr{G}_{\bullet}}$, allowing us to transfer
results from one law to the other.
### 2.3 Spectral radius, entropy and the Liouville property
Stationarity is a powerful property, because it enables the development of an
_ergodic theory_ of random rooted graphs. See the inspiring works [37] on
Galton-Watson trees, [10] on random rooted graphs, and [11] on general random
environments. In particular, a classical application of Kingman’s sub-additive
ergodic theorem allows one to define the (quenched) _asymptotic entropy_ of
random walks on stationary random graphs, as recalled in the following lemma.
###### Lemma 8 (Entropy).
Let $\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ be stationary with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$. Then the limit
$\displaystyle\mathscr{H}(G,o)$ $\displaystyle:=$
$\displaystyle\lim_{t\to\infty}\frac{1}{t}\sum_{x\in
V_{G}}P^{t}_{G}(o,x)\log\frac{1}{P_{G}^{t}(o,x)},$
exists $\mathcal{L}-$almost-surely and in
$L^{1}({\mathscr{G}_{\bullet}},\mathcal{L})$, and does not depend on the
choice of the root $o$.
We will henceforth simply write $\mathscr{H}(G)$ instead of
$\mathscr{H}(G,o)$, and call this the _entropy_ of $G$.
###### Proof.
Let $(G,o)$ have law $\mathcal{L}$, and conditionally on $(G,o)$, let
$X=(X_{t})_{t\geq 0}$ be a lazy simple random walk on $G$ starting from
$X_{0}=o$. For $0\leq s\leq t$, define a non-negative random variable
$Z_{s,t}$ by
$\displaystyle Z_{s,t}$ $\displaystyle:=$
$\displaystyle\log\frac{1}{P_{G}^{t-s}(X_{s},X_{t})}.$
Note that $Z_{s,t}\stackrel{{\scriptstyle d}}{{=}}Z_{0,t-s}$. Indeed, for any
Borel function $f\colon\mathbb{R}_{+}\to\mathbb{R}_{+}$, we have by definition
$\displaystyle{\mathbb{E}}\left[f(Z_{s,t})\right]$ $\displaystyle=$
$\displaystyle{\mathbb{E}}\left[\sum_{x,y\in
V_{G}}P_{G}^{s}(o,x)P_{G}^{t-s}(x,y)f\left(\log\frac{1}{P_{G}^{t-s}(x,y)}\right)\right]$
$\displaystyle=$ $\displaystyle{\mathbb{E}}\left[\sum_{y\in
V_{G}}P_{G}^{t-s}(o,y)f\left(\log\frac{1}{P_{G}^{t-s}(o,y)}\right)\right]$
$\displaystyle=$ $\displaystyle{\mathbb{E}}\left[f(Z_{0,t-s})\right],$
where the second line uses the stationarity (8) with
$h(G,o)=\sum_{y}P_{G}^{t-s}(o,y)f\left(\log\frac{1}{P_{G}^{t-s}(o,y)}\right)$.
Moreover, the trivial inequality $P_{G}^{t}(o,y)\geq
P_{G}^{s}(o,x)P_{G}^{t-s}(x,y)$ readily implies the sub-additive property
$\displaystyle Z_{0,t}$ $\displaystyle\leq$ $\displaystyle Z_{0,s}+Z_{s,t}.$
(10)
Finally, the assumption $\mathcal{L}[\log\deg_{G}(o)]<\infty$ ensures that
${\mathbb{E}}[Z_{0,1}]<\infty$. Consequently, Kingman’s sub-additive ergodic
theorem (see, e.g. [38, Theorem 14.44]) guarantees the existence of a non-negative, integrable random variable $Z_{\infty}$ such that almost-surely and
in $L^{1}$,
$\displaystyle\frac{Z_{0,t}}{t}$ $\displaystyle\xrightarrow[t\to\infty]{}$
$\displaystyle Z_{\infty}.$
Averaging this convergence over the random walk $X$ (i.e., taking conditional
expectation given the random rooted graph) yields the existence of the limit
$\mathscr{H}(G,o)$. By Lemma 6, the same is true if $o$ is replaced by any
$x\in V_{G}$. Moreover, the sub-additive property (10) with $s=1$ shows that
$\displaystyle\mathscr{H}(G,o)$ $\displaystyle\leq$ $\displaystyle\sum_{x\in
V_{G}}P_{G}(o,x)\mathscr{H}(G,x),$
$\mathcal{L}-$almost-surely. Since $\theta\mapsto(\theta-a)_{+}$ is monotone
and convex for $a\geq 0$, this inequality implies
$\displaystyle\forall a\geq 0,\quad\left(\mathscr{H}(G,o)-a\right)_{+}$
$\displaystyle\leq$ $\displaystyle\sum_{x\in
V_{G}}P_{G}(o,x)\left(\mathscr{H}(G,x)-a\right)_{+}.$
But the two sides have the same law by stationarity, so they must coincide
$\mathcal{L}-$almost-surely. The fact that this is true for all $a\geq 0$
deterministically forces the equality $\mathscr{H}(G,x)=\mathscr{H}(G,o)$ for
all neighbours $x$ of $o$, and hence for all $x\in V_{G}$ by Lemma 6. ∎
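To see the normalisation by $t$ at work, here is a small numerical sketch (ours), reusing `lazy_matrix` and `t_step_entropy` from the sketch in Section 2.1: on a large patch of $\mathbb{Z}^{2}$, the $t$-step entropy grows only logarithmically, so the ratio decays, consistent with the zero asymptotic entropy of the lattice.

```python
import networkx as nx

# a 41 x 41 patch of Z^2; for t <= 20 the walk from the centre never sees the boundary
G = nx.grid_2d_graph(41, 41)
P = lazy_matrix(nx.to_numpy_array(G))
o = list(G.nodes).index((20, 20))
for t in [5, 10, 20]:
    print(t, t_step_entropy(P, o, t) / t)   # decreasing towards H(G) = 0
```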
#### The Liouville property.
One of the interests of asymptotic entropy lies in its relation with the
Liouville property. A function $f\colon V_{G}\to\mathbb{R}$ is called
_harmonic_ on $G$ if $P_{G}f=f$, where
$\displaystyle\forall x\in V_{G},\quad(P_{G}f)(x)$ $\displaystyle:=$
$\displaystyle\sum_{y\in V_{G}}P_{G}(x,y)f(y).$ (11)
This is trivially the case, in particular, when $f$ is constant. The graph $G$
has the _Liouville property_ if it admits no non-constant bounded harmonic
function. For stationary random graphs, this functional-analytic property
turns out to admit the following simple entropic characterization.
###### Theorem 9 (Entropic characterization of the Liouville property).
The equivalence
$\displaystyle\mathscr{H}(G)=0$ $\displaystyle\Longleftrightarrow$
$\displaystyle G\textrm{ has the Liouville property},$
holds almost-surely under any stationary law
$\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$.
This remarkable result has a long history: it originates with the pioneering
works of Avez [5, 6, 4], and was then made famous in a celebrated paper of
Kaimanovich and Vershik [31]. In the present setting of stationary random
graphs, the implication $\Longrightarrow$ was established by Benjamini and
Curien [10], and refined by Benjamini, Duminil-Copin, Kozma and Yadin [11].
The converse $\Longleftarrow$ was proved by Carrasco Piaggio and Lessa [20]
(see also [13]), but under an additional growth assumption. Since this is the
implication that we are going to use, we need to give more details.
###### Proof of Theorem 9.
Fix a connected graph $G$, and let $X=(X_{t})_{t\geq 0}$ denote a lazy simple
random walk on $G$ starting at some fixed vertex $o\in V_{G}$. Write ${\bf
P}^{G}$ for its law, which is a probability measure on the product space
$V^{\mathbb{Z}_{+}}_{G}$. On this space, let $\mathcal{I}$ denote the
$\sigma-$field of all events which are invariant under the natural shift
$(x_{t})_{t\geq 0}\mapsto(x_{t+1})_{t\geq 0}$. Then [38, Proposition 14.12]
states that
$\displaystyle G\textrm{ has the Liouville property}$
$\displaystyle\Longleftrightarrow$ $\displaystyle\mathcal{I}\textrm{ is ${\bf
P}^{G}-$trivial}.$
On the other hand, writing
$\mathcal{T}=\bigcap_{t=0}^{\infty}\sigma(x_{t},x_{t+1},\ldots)$ for the tail
$\sigma-$field on $V^{\mathbb{Z}_{+}}_{G}$, we have
$\displaystyle\mathcal{I}\textrm{ is ${\bf P}^{G}-$trivial}$
$\displaystyle\Longleftrightarrow$ $\displaystyle\mathcal{T}\textrm{ is ${\bf
P}^{G}-$trivial},$
by [38, Theorem 14.18] and because $X$ is lazy. Finally, the equivalence
$\displaystyle\mathcal{L}\left(\mathcal{T}\textrm{ is ${\bf
P}^{G}-$trivial}\right)=1$ $\displaystyle\Longleftrightarrow$
$\displaystyle\mathcal{L}(\mathscr{H}(G)=0)=1,$
was proved in [10, Theorem 3.2] for any stationary law $\mathcal{L}$ with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$. Thus,
$\displaystyle\mathcal{L}(G\textrm{ has the Liouville property})=1$
$\displaystyle\Longleftrightarrow$
$\displaystyle\mathcal{L}\left(\mathscr{H}(G)=0\right)=1,$ (12)
and this annealed statement will actually suffice for the present paper.
However, deducing the quenched claim is easy, as we now explain. Define the
events $A:=\\{G\textrm{ has the Liouville property}\\}$ and
$B:=\\{\mathscr{H}(G)=0\\}$, and let $A\Delta B$ denote their symmetric
difference. We want to show that
$\displaystyle\mathcal{L}(A\Delta B)$ $\displaystyle=$ $\displaystyle 0,$ (13)
for any stationary law $\mathcal{L}$ with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$. We already know this if $A,B$ are
$\mathcal{L}-$trivial, thanks to (12). Moreover, the events $A,B$ are clearly
_root-invariant_ , in the sense that
$\displaystyle(G,o)\in A$ $\displaystyle\Longrightarrow$
$\displaystyle\\{\forall x\in V_{G},(G,x)\in A\\}.$
Consequently, (13) holds under the extra assumption that _root-invariant
events are $\mathcal{L}-$trivial_. But this is known as _ergodicity_ , and any
stationary law can be decomposed as a mixture of ergodic laws, by [1, Theorem
4.7]. Thus, (13) extends to all stationary laws $\mathcal{L}$ with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$. ∎
#### Spectral radius.
The entropy $\mathscr{H}(G)$ is related to several other fundamental graph-
theoretical quantities, such as the _speed_ , _growth_ , or _spectral radius_
, see [38]. Let us recall the last notion. Fix a rooted graph
$(G,o)\in{\mathscr{G}_{\bullet}}$. For any $t,s\geq 0$, we trivially have
$P_{G}^{t+s}(o,o)\geq P_{G}^{t}(o,o)P_{G}^{s}(o,o).$ By Fekete’s lemma, we
deduce that the limit
$\displaystyle\varrho(G,o)$ $\displaystyle:=$
$\displaystyle\lim_{t\to\infty}\left(P_{G}^{t}(o,o)\right)^{\frac{1}{t}},$
(14)
exists in $(0,1]$. Moreover, the connectivity of $G$ together with the trivial
inequality
$\displaystyle P^{t+2s}_{G}(o,o)$ $\displaystyle\geq$ $\displaystyle
P^{s}_{G}(o,x)P^{t}_{G}(x,x)P^{s}_{G}(x,o),$
shows that $\varrho(G,o)$ does not depend on the choice of the root $o$. Thus,
we will henceforth simply write $\varrho(G)$, and call this quantity the
_spectral radius_ of $G$.
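For intuition, (14) can be probed numerically. The following sketch (ours) estimates $\varrho$ for the lazy walk on the infinite rooted binary tree by tracking the distance-from-root birth-and-death chain; the true value is $\frac{1}{2}+\frac{\sqrt{2}}{3}\approx 0.971$, and the finite-$t$ estimates approach it slowly from below, owing to polynomial corrections to $P_{G}^{t}(o,o)$.

```python
import numpy as np

# Distance-from-root chain of the lazy walk on the rooted binary tree:
# at the root move down w.p. 1/2; at depth k >= 1 go up w.p. 1/6, down w.p. 1/3.
D = 120                                   # depth cut-off, never reached for t <= 100
Q = np.zeros((D + 1, D + 1))
Q[0, 0], Q[0, 1] = 0.5, 0.5
for k in range(1, D + 1):
    Q[k, k] = 0.5
    Q[k, k - 1] = 1.0 / 6.0
    if k < D:
        Q[k, k + 1] = 1.0 / 3.0
p = np.zeros(D + 1)
p[0] = 1.0                                # Dirac mass at the root
for t in range(1, 101):
    p = p @ Q                             # one lazy step
    if t % 25 == 0:
        print(t, p[0] ** (1.0 / t))       # creeps up towards rho ~ 0.971
```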
###### Lemma 10 (Spectral radius vs entropy).
The inequality
$\displaystyle\mathscr{H}(G)$ $\displaystyle\geq$ $\displaystyle
2\log\frac{1}{\varrho(G)},$
holds almost-surely under any stationary law $\mathcal{L}$ with
$\mathcal{L}[\log\deg_{G}(o)]<\infty$.
###### Proof.
For any rooted graph $(G,o)$ and any $t\geq 0$, we have by concavity
$\displaystyle\log\left(P_{G}^{2t}(o,o)\right)$ $\displaystyle=$
$\displaystyle\log\left(\sum_{x\in V_{G}}P_{G}^{t}(o,x)P^{t}_{G}(x,o)\right)$
$\displaystyle\geq$ $\displaystyle\sum_{x\in V_{G}}P_{G}^{t}(o,x)\log
P_{G}^{t}(x,o)$ $\displaystyle=$ $\displaystyle\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\log P_{G}^{t}(o,x)+\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\log\left(\frac{\deg_{G}(o)}{\deg_{G}(x)}\right),$
where the last line uses the reversibility
$\deg_{G}(o)P_{G}^{t}(o,x)=\deg_{G}(x)P_{G}^{t}(x,o)$. Dividing by $-2t$ and
taking the limit as $t\to\infty$ in
$L^{1}({\mathscr{G}_{\bullet}},\mathcal{L})$ yields the claim, provided we can
show that
$\displaystyle\frac{1}{t}\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\log\left(\frac{\deg_{G}(o)}{\deg_{G}(x)}\right)$
$\displaystyle\xrightarrow[t\to\infty]{L^{1}({\mathscr{G}_{\bullet}},\mathcal{L})}$
$\displaystyle 0.$
But this follows from the crude bound
$\displaystyle\mathcal{L}\left[\left|\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\log\left(\frac{\deg_{G}(o)}{\deg_{G}(x)}\right)\right|\right]$
$\displaystyle\leq$ $\displaystyle\mathcal{L}\left[\sum_{x\in
V_{G}}P_{G}^{t}(o,x)\left(\log\deg_{G}(o)+\log\deg_{G}(x)\right)\right]$
$\displaystyle=$ $\displaystyle 2\mathcal{L}\left[\log\deg_{G}(o)\right],$
where the second line simply uses the stationarity property (8) with
$h(G,o)=\log\deg_{G}(o)$. ∎
###### Remark 5 (Unimodular analogues).
By Lemma 7 and Remark 4, all results in this section also apply to any
unimodular law $\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ with
$\mathcal{L}[\deg_{G}(o)\log\deg_{G}(o)]<\infty$.
## 3 Proof of the main result
We are now ready to prove our main result. We work with the formulation given
in Theorem 3. Section 3.1 below reduces it to two key results, which are then
proved in Sections 3.2 and 3.3.
### 3.1 Setting the stage
Let $G_{n}=(V_{n},E_{n})$, $n\geq 1$ be finite graphs satisfying the
assumptions of Theorem 3, i.e.
$\displaystyle\sup_{n\geq 1}\left\\{\frac{1}{|V_{n}|}\sum_{x\in
V_{n}}\deg_{G_{n}}(x)\log\deg_{G_{n}}(x)\right\\}$ $\displaystyle<$
$\displaystyle\infty;$ (15)
$\displaystyle\forall\varepsilon>0,\quad\frac{1}{|E_{n}|}\,\mathrm{card}\\{e\in
E_{n}\colon\kappa_{G_{n}}(e)<-\varepsilon\\}$
$\displaystyle\xrightarrow[n\to\infty]{}$ $\displaystyle 0.$ (16)
Recall that our goal is to establish
$\displaystyle\forall\rho\in(0,1),\quad\liminf_{n\to\infty}\left\\{\frac{1}{|V_{n}|}\,\mathrm{card}\\{i\colon\lambda_{i}(G_{n})>\rho\\}\right\\}$
$\displaystyle>$ $\displaystyle 0.$ (17)
By (15) and Theorem 4, we may assume, upon extracting a subsequence if
necessary, that
$\displaystyle G_{n}$ $\displaystyle\xrightarrow[n\to\infty]{}$
$\displaystyle\mathcal{L},$ (18)
for some $\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$. Note that
$\mathcal{L}$ is automatically unimodular by Theorem 5, and such that
$\displaystyle\mathcal{L}\left[\deg_{G}(o)\log\deg_{G}(o)\right]$
$\displaystyle<$ $\displaystyle\infty.$ (19)
Just like the degree, the curvature is a local notion, hence it also “passes to the limit”, i.e.
$\displaystyle\mathcal{L}\left(\kappa(G)\geq 0\right)$ $\displaystyle=$
$\displaystyle 1.$ (20)
###### Proof.
As already mentioned, the observable $f\colon(G,o)\mapsto\min_{x\sim
o}\kappa_{G}(o,x)$ is $2-$local, hence continuous on
${\mathscr{G}_{\bullet}}$. By the Portmanteau Theorem, we deduce that for any
$\varepsilon>0$,
$\displaystyle\mathcal{L}\left(f<-\varepsilon\right)$ $\displaystyle\leq$
$\displaystyle\liminf_{n\to\infty}\mathcal{L}_{G_{n}}\left(f<-\varepsilon\right)$
$\displaystyle=$
$\displaystyle\liminf_{n\to\infty}\left\\{\frac{1}{|V_{n}|}\,\mathrm{card}\\{o\in V_{n}\colon f(G_{n},o)<-\varepsilon\\}\right\\}$
$\displaystyle\leq$
$\displaystyle\liminf_{n\to\infty}\left\\{\frac{2}{|V_{n}|}\,\mathrm{card}\\{e\in E_{n}\colon\kappa_{G_{n}}(e)<-\varepsilon\\}\right\\}$
$\displaystyle=$
$\displaystyle\mathcal{L}\left[\deg_{G}(o)\right]\liminf_{n\to\infty}\left\\{\frac{1}{|E_{n}|}\,\mathrm{card}\\{e\in E_{n}\colon\kappa_{G_{n}}(e)<-\varepsilon\\}\right\\},$
where the last equality follows from the observation that $\frac{2|E_{n}|}{|V_{n}|}\to\mathcal{L}[\deg_{G}(o)]$, by the continuity and uniform integrability of $(G,o)\mapsto\deg_{G}(o)$. Sending $\varepsilon\to 0$ yields $\mathcal{L}(f<0)=0$, by (16). To conclude, we simply apply Lemma 6 to the event $B=\\{f\geq 0\\}$. ∎
The first crucial step in our proof consists in deducing from (20) that the
entropy is zero under $\mathcal{L}$. This is the content of the following
theorem, which will be proved in Section 3.2.
###### Theorem 11 (Non-negative curvature implies zero-entropy).
The implication
$\displaystyle\kappa(G)\geq 0$ $\displaystyle\Longrightarrow$
$\displaystyle\mathscr{H}(G)=0$
holds almost-surely under any stationary law
$\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ satisfying
$\mathcal{L}\left[\log\deg_{G}(o)\right]<\infty$.
In view of Remark 4, this result also applies to any unimodular law
$\mathcal{L}\in\mathcal{P}({\mathscr{G}_{\bullet}})$ satisfying
$\mathcal{L}\left[\deg_{G}(o)\log\deg_{G}(o)\right]<\infty$, hence in
particular to the limit $\mathcal{L}$ in (18). Combining this with Lemma 10,
we immediately deduce that our local weak limit satisfies
$\displaystyle\mathcal{L}(\varrho(G)=1)$ $\displaystyle=$ $\displaystyle 1.$
It turns out that this simple condition suffices to guarantee (17). This is
the content of the following second result, established in Section 3.3 below,
and which completes the proof of our main result.
###### Theorem 12 (Zero-entropy implies poor spectral expansion).
Let $G_{n}=(V_{n},E_{n}),{n\geq 1}$ be finite graphs having local weak limit $\mathcal{L}$, and suppose that $\mathcal{L}\left(\varrho(G)=1\right)=1$. Then, for any $\rho<1$,
$\displaystyle\liminf_{n\to\infty}\left\\{\frac{1}{|V_{n}|}\,\mathrm{card}\left\\{i\colon\lambda_{i}(G_{n})>\rho\right\\}\right\\}$
$\displaystyle>$ $\displaystyle 0.$
In fact, a stronger statement about eigenvectors will be derived, as claimed
in Remark 3.
### 3.2 Non-negative curvature implies zero entropy
Consider a connected graph $G$ and two vertices $x,y\in V_{G}$. The proof of
Theorem 11 relies on the following intuitive idea: if $G$ has non-negative
curvature and bounded degrees, then it takes time $O({\rm d}_{G}^{2}(x,y))$
for two random walks starting at $x$ and $y$ to meet. This classical
observation constitutes the very essence of the path coupling method of
Bordewich and Dyer [18]. It was later re-discovered and further developed by
Münch [40]. We will here prove a refinement that does not require bounded
degrees, see Corollary 16 below. Write $\mathcal{B}_{x},\mathcal{B}_{y}$ for
the balls of radius $1$ around $x$ and $y$, and recall that the Wassertein
distance $\mathcal{W}_{1}\left(P_{G}(x,\cdot),P_{G}(y,\cdot)\right)$ is
defined as
$\displaystyle\mathcal{W}_{1}\left(P_{G}(x,\cdot),P_{G}(y,\cdot)\right)$
$\displaystyle=$
$\displaystyle\inf_{\pi}\left\\{\sum_{u\in\mathcal{B}_{x}}\sum_{v\in\mathcal{B}_{y}}\pi(u,v)\,{\rm
d}_{G}(u,v)\right\\},$ (21)
where the infimum runs over all probability distributions
$\pi\in\mathcal{P}(\mathcal{B}_{x}\times\mathcal{B}_{y})$ with marginals
$P_{G}(x,\cdot)$ and $P_{G}(y,\cdot)$. By compactness, the above infimum is
actually achieved, and the minimizers will be called _optimal couplings_. As
in [18, 40], our first task consists in showing that an optimal coupling can
always be chosen so as to assign a “decent” probability to the “good” set
$\displaystyle\Gamma$ $\displaystyle:=$
$\displaystyle\left\\{(u,v)\in\mathcal{B}_{x}\times\mathcal{B}_{y}\colon{\rm
d}_{G}(u,v)<{\rm d}_{G}(x,y)\right\\}.$
The argument crucially uses the laziness of $P_{G}$ but is otherwise rather
general.
###### Lemma 13 (Good optimal couplings).
If $x\neq y$, then there is an optimal coupling $\pi$ such that
$\displaystyle\pi\left(\Gamma\right)$ $\displaystyle\geq$
$\displaystyle\frac{1}{2}\max\left\\{\frac{1}{\deg_{G}(x)},\frac{1}{\deg_{G}(y)}\right\\}.$
###### Proof.
By compactness, we can find an optimal coupling $\pi$ which, among all optimal
couplings, maximizes $\pi(\Gamma)$. Suppose for a contradiction that this
“doubly optimal” coupling satisfies
$\displaystyle\pi\left(\Gamma\right)$ $\displaystyle<$
$\displaystyle\frac{1}{2\deg_{G}(x)}.$ (22)
The set ${A}:=\\{u\in\mathcal{B}_{x}\colon(u,y)\in\Gamma\\}$ is not empty, since it contains the first vertex on a geodesic from $x$ to $y$; as the first marginal of $\pi$ gives mass $\frac{1}{2\deg_{G}(x)}$ to each neighbour of $x$, it follows that $\pi(A\times\mathcal{B}_{y})\geq 1/(2\deg_{G}(x)).$ In view of (22), this
forces $\pi((A\times\mathcal{B}_{y})\setminus\Gamma)>0$, i.e.
$\displaystyle\exists(x_{0},y_{0})\in(A\times\mathcal{B}_{y})\setminus\Gamma,\quad\pi(x_{0},y_{0})$
$\displaystyle\geq$ $\displaystyle\varepsilon,$ (23)
for some $\varepsilon>0$. On the other hand, we have
$\pi(A\times\\{y\\})+\pi(A^{c}\times\\{y\\})=P_{G}(y,y)\ =\ \frac{1}{2}.$ This
forces $\pi(A^{c}\times\\{y\\})>0$, because
$\pi(A\times\\{y\\})\leq\pi(\Gamma)<\frac{1}{2}$. In other words,
$\displaystyle\exists x_{1}\in A^{c},\quad\pi(x_{1},y)$ $\displaystyle\geq$
$\displaystyle\varepsilon,$ (24)
provided $\varepsilon>0$ is chosen small enough. We now use the vertices
$x_{0},y_{0},x_{1}$ found at (23)-(24) to construct a new coupling
$\widehat{\pi}$ which contradicts the optimality of $\pi$. For all
$(u,v)\in\mathcal{B}_{x}\times\mathcal{B}_{y}$, we set
$\displaystyle\widehat{\pi}(u,v)$ $\displaystyle:=$ $\displaystyle\left\\{\begin{array}[]{ll}\pi(u,v)-\varepsilon&\textrm{ if }(u,v)=(x_{0},y_{0})\textrm{ or }(u,v)=(x_{1},y);\\\ \pi(u,v)+\varepsilon&\textrm{ if }(u,v)=(x_{0},y)\textrm{ or }(u,v)=(x_{1},y_{0});\\\ \pi(u,v)&\textrm{ otherwise}.\end{array}\right.$
By construction, $\widehat{\pi}$ is non-negative on
$\mathcal{B}_{x}\times\mathcal{B}_{y}$ and has the same marginals as $\pi$.
Thus, it is a coupling of $P_{G}(x,\cdot),P_{G}(y,\cdot)$. This coupling is
moreover optimal, since
$\displaystyle\sum_{u\in\mathcal{B}_{x}}\sum_{v\in\mathcal{B}_{y}}{\rm
d}_{G}(u,v)\left(\widehat{\pi}(u,v)-\pi(u,v)\right)$ $\displaystyle=$
$\displaystyle\varepsilon\left({\rm d}_{G}(x_{0},y)+{\rm
d}_{G}(x_{1},y_{0})-{\rm d}_{G}(x_{0},y_{0})-{\rm d}_{G}(x_{1},y)\right)$
$\displaystyle\leq$ $\displaystyle\varepsilon\left({\rm d}_{G}(x,y)-1+{\rm
d}_{G}(x_{1},y_{0})-{\rm d}_{G}(x,y)-{\rm d}_{G}(x_{1},y)\right)$
$\displaystyle\leq$ $\displaystyle 0,$
where the first inequality uses $x_{0}\in A$ and $(x_{0},y_{0})\notin\Gamma$,
while the second uses the triangle inequality ${\rm
d}_{G}(x_{1},y_{0})\leq{\rm d}_{G}(x_{1},y)+{\rm d}_{G}(y,y_{0})$. Finally,
since $\Gamma$ contains $(x_{0},y)$ but not $(x_{0},y_{0}),(x_{1},y)$, we have
$\displaystyle\widehat{\pi}(\Gamma)$ $\displaystyle\geq$
$\displaystyle\pi(\Gamma)+\varepsilon,$
contradicting the definition of $\pi$. Thus, (22) can not be true, and the
claim follows by symmetry. ∎
We will also need the following technical lemma, which is of independent
interest and quantifies the intuition that non-negative super-martingales that
“move a lot” must “quickly” hit zero.
###### Lemma 14 (Non-negative super-martingales quickly hit zero).
Let $\tau:=\inf\\{t\geq 0\colon Z_{t}=0\\}$ be the hitting time of zero by a
non-negative super-martingale $Z=(Z_{t})_{t\geq 0}$. Suppose that $Z_{0}=z$,
and that all increments $(Z_{t+1}-Z_{t})_{t\geq 0}$ are upper-bounded by a
constant $K$. Then,
$\displaystyle{\mathbb{P}}\left(\tau\geq t\right)$ $\displaystyle\leq$
$\displaystyle z\left(\frac{2a+K-z}{a^{2}}\right)+{\mathbb{P}}\left(\tau\geq
t,\sum_{s=0}^{t-1}W_{s}<a^{2}\right),$
for all $t\in\mathbb{Z}_{+},a>0$, where
$W_{s}={\mathbb{E}}\left[(Z_{s+1}-Z_{s})^{2}|\mathscr{F}_{s}\right]$ and
$(\mathscr{F}_{s})_{s\geq 0}$ is the underlying filtration.
###### Proof.
First note that the process $Z$ is trivially square-integrable, because
$Z_{t}\in[0,z+Kt]$ for each $t\geq 0$. Now fix $t\geq 0$ and $a>0$, and
consider the bounded stopping time
$\displaystyle\sigma$ $\displaystyle:=$ $\displaystyle\inf\left\\{s\geq
0\colon Z_{s}\geq a\right\\}\wedge t.$
Using the Optional Stopping Theorem, the non-negativity of $Z$ and the
definition of $\sigma$, we have
$\displaystyle z$ $\displaystyle\geq$
$\displaystyle{\mathbb{E}}\left[Z_{\sigma\wedge\tau}\right]$
$\displaystyle\geq$ $\displaystyle{\mathbb{E}}\left[Z_{\sigma\wedge\tau}{\bf
1}_{(\sigma<\tau\wedge t)}\right]$ $\displaystyle\geq$ $\displaystyle
a{\mathbb{P}}\left(\sigma<\tau\wedge t\right).$
On the other hand, observe that for all $s\geq 0$, we may rewrite $W_{s}$ as
$\displaystyle W_{s}$ $\displaystyle=$
$\displaystyle{\mathbb{E}}\left[Z_{s+1}^{2}-Z_{s}^{2}|\mathscr{F}_{s}\right]+2Z_{s}{\mathbb{E}}[Z_{s}-Z_{s+1}|\mathscr{F}_{s}].$
Note that the second conditional expectation is non-negative by assumption.
Moreover, we have $Z_{s}\leq a$ on the event $\\{\sigma>s\\}$, which is in
$\mathscr{F}_{s}$. Thus,
$\displaystyle W_{s}{\bf 1}_{\sigma>s}$ $\displaystyle\leq$
$\displaystyle{\mathbb{E}}\left[\left(Z_{s+1}^{2}-Z_{s}^{2}\right){\bf
1}_{\sigma>s}|\mathscr{F}_{s}\right]+2a{\mathbb{E}}\left[\left(Z_{s}-Z_{s+1}\right){\bf
1}_{\sigma>s}|\mathscr{F}_{s}\right].$
Taking expectations and summing over all $s\geq 0$, we obtain
$\displaystyle{\mathbb{E}}\left[\sum_{s=0}^{\sigma-1}W_{s}\right]$
$\displaystyle\leq$
$\displaystyle{\mathbb{E}}\left[Z_{\sigma}^{2}\right]-2a{\mathbb{E}}[Z_{\sigma}]-z^{2}+2az$
$\displaystyle\leq$ $\displaystyle(K+a-z)z,$
where the second inequality follows from the observations that $Z_{\sigma}\leq
K+a$ and ${\mathbb{E}}[Z_{\sigma}]\leq z$. Let us now use these two estimates
to conclude. By union bound, we have
$\displaystyle{\mathbb{P}}\left(\tau\geq t\right)$ $\displaystyle\leq$
$\displaystyle{\mathbb{P}}\left(\sigma<\tau\wedge
t\right)+{\mathbb{P}}\left(\sigma\wedge\tau\geq t\right)$ $\displaystyle\leq$
$\displaystyle{\mathbb{P}}\left(\sigma<\tau\wedge
t\right)+{\mathbb{P}}\left(\tau\geq
t,\sum_{s=0}^{\sigma-1}W_{s}\geq\sum_{s=0}^{t-1}W_{s}\right)$
$\displaystyle\leq$ $\displaystyle{\mathbb{P}}\left(\sigma<\tau\wedge
t\right)+{\mathbb{P}}\left(\sum_{s=0}^{\sigma-1}W_{s}\geq
a^{2}\right)+{\mathbb{P}}\left(\tau\geq t,\sum_{s=0}^{t-1}W_{s}<a^{2}\right)$
$\displaystyle\leq$
$\displaystyle\frac{z}{a}+\frac{(K+a-z)z}{a^{2}}+{\mathbb{P}}\left(\tau\geq
t,\sum_{s=0}^{t-1}W_{s}<a^{2}\right).$
This is exactly the claimed bound. ∎
Combining these two lemmas, we may now deduce the following estimate, which
exploits non-negative curvature to control the action of $P_{G}$ on the
variations of bounded observables.
###### Proposition 15 (Variational estimate via non-negative curvature).
Let $G$ be a connected graph with $\kappa(G)\geq 0$. Then, for any $f\colon
V_{G}\to[-1,1]$, any vertices $x,y\in V_{G}$, and any
$a>0,t\in\mathbb{Z}_{+}$,
$\displaystyle{|P^{t}_{G}f(x)-P^{t}_{G}f(y)|}$ $\displaystyle\leq$
$\displaystyle\frac{8{\rm
d}_{G}(x,y)}{a}+2{\mathbb{P}}\left(\sum_{s=0}^{t-1}\frac{1}{\deg_{G}(X_{s})}<2a^{2}\right),$
where $X$ denotes a lazy random walk on $G$ starting from $x$.
###### Proof.
Let $(X,Y)$ be the Markov chain on $V_{G}\times V_{G}$ which, from any state
$(x,y)\in V_{G}\times V_{G}$, draws the next state according to the “good”
optimal coupling of $P_{G}(x,\cdot),P_{G}(y,\cdot)$ described in Lemma 13. We
use the standard notations
${\mathbb{P}}_{(x,y)}(\cdot),{\mathbb{E}}_{(x,y)}[\cdot]$ to specify the
choice of the initial state. Since the two coordinates $X,Y$ are marginally
distributed as lazy random walks on $G$, we have
$\displaystyle\left|P^{t}_{G}f(x)-P^{t}_{G}f(y)\right|$ $\displaystyle=$
$\displaystyle\left|{\mathbb{E}}_{x,y}\left[f(X_{t})\right]-{\mathbb{E}}_{x,y}\left[f(Y_{t})\right]\right|$
$\displaystyle\leq$
$\displaystyle{\mathbb{E}}_{x,y}\left[\left|f(X_{t})-f(Y_{t})\right|\right]$
$\displaystyle\leq$ $\displaystyle 2{\mathbb{P}}_{x,y}\left(X_{t}\neq
Y_{t}\right)$ $\displaystyle\leq$ $\displaystyle
2{\mathbb{P}}_{x,y}\left(\tau>t\right),$
where $\tau=\inf\\{t\geq 0\colon X_{t}=Y_{t}\\}$ denotes the meeting time of
the two walkers. Note that $\tau$ is also the hitting time of zero by the non-negative process $Z=(Z_{t})_{t\geq 0}$ defined as follows:
$\displaystyle\forall t\geq 0,\quad Z_{t}$ $\displaystyle:=$
$\displaystyle{\rm d}_{G}(X_{t},Y_{t}).$
We claim that $Z$ is a super-martingale w.r.t. the natural filtration
$(\mathscr{F}_{t})_{t\geq 0}$ associated with $(X,Y)$. Indeed, by the Markov
property and the optimality of the chosen couplings, this claim reduces to
$\displaystyle\mathcal{W}_{1}\left(P_{G}(x,\cdot),P_{G}(y,\cdot)\right)$
$\displaystyle\leq$ $\displaystyle{\rm d}_{G}(x,y),$
for all $x,y\in V_{G}$. But this inequality readily follows from the
assumption $\kappa_{G}(x,y)\geq 0$ in the case $\\{x,y\\}\in E_{G}$, and it
then automatically extends to all $x,y\in V_{G}$ by the triangle inequality of
$\mathcal{W}_{1}(\cdot,\cdot)$ (see, e.g., [45]). On the other hand, Lemma 13
ensures that on the event $\\{\tau>t\\}$,
$\displaystyle{\mathbb{E}}_{x,y}\left[(Z_{t+1}-Z_{t})^{2}|\mathscr{F}_{t}\right]$
$\displaystyle\geq$ $\displaystyle\frac{1}{2\deg_{G}(X_{t})}.$
Finally, note that the distance between the two walkers can not increase by
more than $2$ at each step. Thus, we may invoke Lemma 14 to conclude that
$\displaystyle{\mathbb{P}}_{x,y}\left(\tau\geq t\right)$ $\displaystyle\leq$
$\displaystyle 2{\rm
d}_{G}(x,y)\left(\frac{a+1}{a^{2}}\right)+{\mathbb{P}}_{x,y}\left(\sum_{s=0}^{t-1}\frac{1}{\deg_{G}(X_{s})}<2a^{2}\right)$
$\displaystyle\leq$ $\displaystyle\frac{4{\rm
d}_{G}(x,y)}{a}+{\mathbb{P}}_{x,y}\left(\sum_{s=0}^{t-1}\frac{1}{\deg_{G}(X_{s})}<2a^{2}\right),$
where the second line follows from the first if $a\geq 1$, and is trivial
otherwise. ∎
In particular, this applies to any bounded harmonic function $f$, after a
trivial normalization. Since $P^{t}_{G}f=f$ for all $t\geq 0$, we may send
$t\to\infty$ and then $a\to\infty$ in the resulting estimate to obtain the
following key result, which ensures that non-negatively curved graphs satisfy
the Liouville property, provided they have a “decent proportion” of vertices
with “reasonable” degree.
###### Corollary 16 (Liouville property and non-negative curvature).
Let $G$ be a connected graph with $\kappa(G)\geq 0$. Fix $o\in V_{G}$ and
suppose that the simple random walk $X$ on $G$ starting from $o$ satisfies
$\displaystyle{\mathbb{P}}\left(\sum_{t=0}^{\infty}\frac{1}{\deg_{G}(X_{t})}=\infty\right)$
$\displaystyle=$ $\displaystyle 1.$ (26)
Then, $G$ has the Liouville property.
A simple situation where the above condition trivially holds is that where $G$ has bounded degrees. In that case, the Liouville property was recently established by Jost, Münch, and Rose [29]. Our relaxation allows for arbitrarily large degrees, as long as the random walk can avoid them from time to time.
This is the case under any stationary law by Birkhoff’s Ergodic Theorem,
allowing us to prove Theorem 11.
###### Proof of Theorem 11.
Let $(G,o)$ have law $\mathcal{L}$ and, conditionally on $(G,o)$, let $X$ be a
lazy random walk starting from the root. Then the process $Z=(Z_{t})_{t\geq
0}$ defined by
$\displaystyle\forall t\geq 0,\quad Z_{t}$ $\displaystyle:=$
$\displaystyle\frac{1}{\deg_{G}(X_{t})}$
is stationary, in the usual sense that its law is invariant under the shift
$(z_{t})_{t\geq 0}\mapsto(z_{t+1})_{t\geq 0}$ on $[0,1]^{\mathbb{Z}_{+}}$.
Thus, Birkhoff’s Ergodic Theorem (see, e.g. [38, Theorem 14.43]) ensures that
$\displaystyle\frac{1}{t}\sum_{s=0}^{t-1}Z_{s}$
$\displaystyle\xrightarrow[t\to\infty]{}$
$\displaystyle{\mathbb{E}}[Z_{1}|\mathscr{I}],$
almost-surely, where $\mathscr{I}$ is the invariant $\sigma$-algebra. Since
$Z_{1}$ is almost-surely positive, we deduce
$\displaystyle\sum_{s=0}^{\infty}Z_{s}$ $\displaystyle=$
$\displaystyle\infty,$
almost-surely. In other words, the random graph $(G,o)$ satisfies (26) almost-
surely. By the above corollary, this implies that $G$ has the Liouville
property almost-surely on the event $\\{\kappa(G)\geq 0\\}$. By Theorem 9, we
conclude that $\mathscr{H}(G)=0$ almost-surely on the same event. ∎
### 3.3 Zero entropy implies poor spectral expansion
This final section is devoted to proving Theorem 12, which relates the
eigenvalues of finite graphs to the spectral radius of their local weak
limits. If $G$ is a finite graph, the $N=|V_{G}|$ eigenvalues
$\lambda_{1}(G)\geq\ldots\geq\lambda_{N}(G)$ of its transition matrix $P_{G}$
can be conveniently encoded into a probability measure
$\mu_{G}\in\mathcal{P}([0,1])$, called the _empirical eigenvalue distribution_
of the matrix $P_{G}$:
$\displaystyle\mu_{G}$ $\displaystyle:=$
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}\delta_{\lambda_{i}(G)}.$
It turns out that the large-size asymptotics of this fundamental object can be
understood directly at the level of local weak limits. When $P_{G}$ is
replaced with the more standard adjacency matrix, this classical observation
is the starting point of a rich and well-established theory, see the
comprehensive introductory survey [15] by Bordenave, and the references
therein.
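For concreteness, here is a small numpy sketch (our addition; the helper names are ours) that assembles the lazy transition matrix $P_{G}$ of a finite graph and computes the eigenvalues entering $\mu_{G}$. The spectrum is obtained from the symmetric matrix $D^{1/2}P_{G}D^{-1/2}$, which is similar to $P_{G}$ and numerically better behaved.

```python
import numpy as np

def lazy_transition_matrix(adj):
    # Lazy simple random walk: P(x,x) = 1/2 and P(x,y) = 1/(2 deg(x)) for y ~ x.
    deg = adj.sum(axis=1)
    return 0.5 * np.eye(len(adj)) + 0.5 * adj / deg[:, None]

def empirical_eigenvalues(adj):
    # Spectrum of P_G via the similar symmetric matrix D^{1/2} P_G D^{-1/2}.
    deg = adj.sum(axis=1)
    P = lazy_transition_matrix(adj)
    d = np.sqrt(deg)
    S = (d[:, None] * P) / d[None, :]
    return np.linalg.eigvalsh(S)  # all eigenvalues lie in [0, 1] by laziness

# Example: the 4-cycle; mu_G puts mass 1/4 at each of these eigenvalues.
adj = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
print(empirical_eigenvalues(adj))  # [0, 0.5, 0.5, 1]
```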
#### Local spectral measures.
The transition kernel $P_{G}$ of a graph $G$ can be viewed as a linear
operator acting via (11) on the Hilbert space
$\displaystyle\ell^{2}(G)$ $\displaystyle:=$
$\displaystyle\left\\{f\in\mathbb{C}^{V_{G}}\colon\sum_{o\in
V_{G}}\deg_{G}(o)|f(o)|^{2}<\infty\right\\},$
with inner product $\langle f,g\rangle=\sum_{o\in
V_{G}}\deg_{G}(o)\overline{f(o)}g(o)$. The stochasticity, laziness and
reversibility
$\displaystyle\sum_{y\in V_{G}}P_{G}(x,y)=1,\qquad P_{G}(x,x)\geq
1/2,\qquad\deg_{G}(x)P_{G}(x,y)=\deg_{G}(y)P_{G}(y,x),$
easily (and classically) imply that $P_{G}$ is a positive contraction on
$\ell^{2}(G)$, i.e.
$\displaystyle\forall f\in\ell^{2}(G),\qquad 0\ \leq\ \langle f,P_{G}f\rangle\
\leq\ \langle f,f\rangle.$
In particular, for each $o\in V_{G}$, the spectral theorem for self-adjoint
operators ensures the existence of a _local spectral measure_
$\mu_{(G,o)}\in\mathcal{P}([0,1])$, characterized by the moment identity
$\displaystyle\forall t\geq 0,\quad\int_{0}^{1}\lambda^{t}\mu_{(G,o)}({\rm
d}\lambda)$ $\displaystyle=$ $\displaystyle P^{t}_{G}(o,o).$ (27)
As we will now see, $\mu_{(G,o)}$ can be interpreted as the local contribution
of $o$ to the spectrum of $P_{G}$. Local spectral measures are a powerful tool
to investigate the mixing properties of graphs, see [36].
#### The finite case.
When $G$ is finite with $N$ vertices, there is an orthonormal basis
$(\phi_{1},\ldots,\phi_{N})$ of $\ell^{2}(G)$ consisting of eigenvectors of
$P_{G}$ with eigenvalues $\lambda_{1}(G),\ldots,\lambda_{N}(G)$, and we easily
find
$\displaystyle\mu_{(G,o)}$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{N}\deg_{G}(o)|\phi_{i}(o)|^{2}\delta_{\lambda_{i}(G)}.$
(28)
Thus, the local spectral measure $\mu_{(G,o)}$ is a mixture of Dirac masses
located at the various eigenvalues of $P_{G}$, and weighted by the squared
amplitudes of the corresponding eigenvectors at $o$. Moreover, thanks to the
orthonormality of $(\phi_{1},\ldots,\phi_{N})$, the identity (28) readily
implies
$\displaystyle\mu_{G}$ $\displaystyle=$
$\displaystyle\frac{1}{|V_{G}|}\sum_{o\in V_{G}}\mu_{(G,o)}.$ (29)
In other words, the empirical eigenvalue distribution of a finite graph $G$
coincides with the spatial average of its local spectral measures.
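Continuing the sketch above, (28) and (29) are easy to verify numerically: if $(\psi_{i})$ is an orthonormal eigenbasis of the symmetric matrix $D^{1/2}P_{G}D^{-1/2}$, then $\phi_{i}=D^{-1/2}\psi_{i}$ is an orthonormal basis of $\ell^{2}(G)$ and $\deg_{G}(o)|\phi_{i}(o)|^{2}=|\psi_{i}(o)|^{2}$. A small check (our addition) on the triangle graph:

```python
import numpy as np

def local_spectral_weights(adj):
    # Return (P, eigenvalues, W) with W[o, i] = deg(o) |phi_i(o)|^2, the weight
    # of delta_{lambda_i} in the local spectral measure mu_{(G,o)}.
    deg = adj.sum(axis=1)
    P = 0.5 * np.eye(len(adj)) + 0.5 * adj / deg[:, None]
    d = np.sqrt(deg)
    lam, psi = np.linalg.eigh((d[:, None] * P) / d[None, :])
    return P, lam, psi**2  # deg(o)|phi_i(o)|^2 = |psi_i(o)|^2

adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)  # triangle
P, lam, W = local_spectral_weights(adj)

# Moment identity (27): sum_i lambda_i^t W[o, i] = P^t(o, o).
for t in range(5):
    assert np.allclose(W @ lam**t, np.diag(np.linalg.matrix_power(P, t)))

# Spatial average (29): averaging the local weights over vertices gives
# mass 1/N at each eigenvalue index, i.e. exactly mu_G.
assert np.allclose(W.mean(axis=0), 1.0 / len(adj))
```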
#### Spectral continuity.
In light of (6), it is tempting to pass to the limit in the formula (29) along
a convergent sequence of finite graphs $(G_{n})_{n\geq 1}$. This is made
rigorous by the following continuity principle. As usual, $\mathcal{P}([0,1])$
is here equipped with the topology of weak convergence.
###### Lemma 17 (Spectral continuity).
The map $(G,o)\mapsto\mu_{(G,o)}$ is continuous on ${\mathscr{G}_{\bullet}}$.
In particular, if a sequence of graphs $(G_{n})_{n\geq 1}$ admits a local weak
limit $\mathcal{L}$, then
$\displaystyle\mu_{G_{n}}({\rm d}\lambda)$
$\displaystyle\xrightarrow[n\to\infty]{}$ $\displaystyle\mu_{\mathcal{L}}({\rm
d}\lambda)\ :=\ \mathcal{L}\left[\mu_{(G,o)}({\rm d}\lambda)\right].$
###### Proof.
For each fixed $t\geq 0$, the observable $(G,o)\mapsto P^{t}_{G}(o,o)$ is
clearly $t$-local, hence continuous. In particular, via the identity (27), the
convergence $(G_{n},o_{n})\to(G,o)$ in ${\mathscr{G}_{\bullet}}$ implies
$\displaystyle\forall t\geq
0,\quad\int_{0}^{1}\lambda^{t}\,\mu_{(G_{n},o_{n})}({\rm d}\lambda)$
$\displaystyle\xrightarrow[n\to\infty]{}$
$\displaystyle\int_{0}^{1}\lambda^{t}\,\mu_{(G,o)}({\rm d}\lambda).$ (30)
Since convergence in $\mathcal{P}([0,1])$ is equivalent to the convergence of
moments, we conclude that
$\mu_{(G_{n},o_{n})}\xrightarrow[n\to\infty]{}\mu_{(G,o)}$, and the continuity
is proved. Similarly, the second claim is obtained by applying (6) to the
$t$-local observable $f\colon(G,o)\mapsto P^{t}_{G}(o,o)$, for each $t\geq 1$.
∎
###### Corollary 18 (Unit spectral radius implies poor spectral expansion).
Let $G_{n}=(V_{n},E_{n})$, $n\geq 1$, be finite graphs having a local weak limit
$\mathcal{L}$ such that $\mathcal{L}(\rho(G)=1)=1$. Then, for any
$0\leq\rho<1$,
$\displaystyle\liminf_{n\to\infty}\,\mu_{G_{n}}\left([\rho,1]\right)$
$\displaystyle>$ $\displaystyle 0.$ (31)
Moreover, we have the refinement
$\displaystyle\sup_{n\geq 1}\,\frac{\left|\left\\{x\in
V_{n}\colon\mu_{(G_{n},x)}([\rho,1])\leq\varepsilon\right\\}\right|}{|V_{n}|}$
$\displaystyle\xrightarrow[\varepsilon\to 0]{}$ $\displaystyle 0.$ (32)
###### Proof.
Fix $0\leq\rho<1$. By the second part of Lemma 17 and the Portmanteau Theorem,
we have
$\displaystyle\liminf_{n\to\infty}\mu_{G_{n}}([\rho,1])$ $\displaystyle\geq$
$\displaystyle\mathcal{L}\left[\mu_{(G,o)}((\rho,1])\right].$ (33)
On the other hand, comparing (27) with the definition of the spectral radius,
we see that $\rho(G)$ is exactly the supremum of the support of $\mu_{(G,o)}$,
for any $(G,o)\in{\mathscr{G}_{\bullet}}$. In other words,
$\displaystyle\mu_{(G,o)}((\rho,1])>0$ $\displaystyle\Longleftrightarrow$
$\displaystyle\rho(G)>\rho.$
In particular, since $\mathcal{L}(\rho(G)=1)=1$, the right-hand side of (33)
is positive, as desired. To prove the second claim, note that the continuity
of $(G,o)\mapsto\mu_{(G,o)}$ implies that the event
$F_{\varepsilon}=\left\\{\mu_{(G,o)}([\rho,1])\leq\varepsilon\right\\}$ is
closed in ${\mathscr{G}_{\bullet}}$. Consequently, the convergence
$G_{n}\to\mathcal{L}$ implies
$\displaystyle\limsup_{n\to\infty}\mathcal{L}_{G_{n}}(F_{\varepsilon})$
$\displaystyle\leq$ $\displaystyle\mathcal{L}(F_{\varepsilon}),$
and the right-hand side tends to
$\mathcal{L}(F_{0})\leq\mathcal{L}(\rho(G)\leq\rho)=0$ as $\varepsilon\to 0$.
The limsup can then be replaced with a sup, since for each $n\geq 1$,
$\mathcal{L}_{G_{n}}(F_{\varepsilon})$ decreases monotonically to $0$ with
$\varepsilon$. ∎
###### Remark 6 (Corollary 18 vs Theorem 12).
The statement (31) asserts that a macroscopic proportion of eigenvalues of
$G_{n}$ accumulate in $[\rho,1]$, which is exactly the conclusion of Theorem
12. The refinement (32), on the other hand, constitutes a rigorous
formalization of the “delocalization” announced in Remark 3. To see this,
recall that for any graph $G$ with $N$ vertices, we have by (28),
$\displaystyle\mu_{(G,x)}([\rho,1])$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{N}\deg_{G}(x)|\phi_{i}(x)|^{2}{\bf
1}_{\lambda_{i}(G)\geq\rho}.$
In words, the number $\mu_{(G,x)}([\rho,1])\in[0,1]$ measures the cumulative
squared amplitude at $x$ of all the basis eigenvectors corresponding to “bad”
eigenvalues (those in $[\rho,1]$). In particular, the set $\\{x\in
V_{G}\colon\mu_{(G,x)}([\rho,1])\leq\varepsilon\\}$ represents the region
where these “bad” eigenvectors have a small cumulative squared amplitude. The
statement (32) asserts that the relative size of this region can be made
arbitrarily small by choosing $\varepsilon$ small, uniformly in $n$. Thus, bad
eigenvectors have their cumulative mass “spread out” across most vertices.
## References
* [1] David Aldous and Russell Lyons. Processes on unimodular random networks. Electron. J. Probab., 12:no. 54, 1454–1508, 2007.
* [2] David Aldous and J. Michael Steele. The objective method: probabilistic combinatorial optimization and local weak convergence. In Probability on discrete structures, volume 110 of Encyclopaedia Math. Sci., pages 1–72. Springer, Berlin, 2004.
* [3] Venkat Anantharam and Justin Salez. The densest subgraph problem in sparse random graphs. Ann. Appl. Probab., 26(1):305–327, 2016.
* [4] A. Avez. Harmonic functions on groups. In Differential geometry and relativity, pages 27–32. Mathematical Phys. and Appl. Math., Vol. 3. 1976.
* [5] André Avez. Entropie des groupes de type fini. C. R. Acad. Sci. Paris Sér. A-B, 275:A1363–A1366, 1972.
* [6] André Avez. Théorème de Choquet-Deny pour les groupes à croissance non exponentielle. C. R. Acad. Sci. Paris Sér. A, 279:25–28, 1974.
* [7] D. Bakry and Michel Émery. Diffusions hypercontractives. In Séminaire de probabilités, XIX, 1983/84, volume 1123 of Lecture Notes in Math., pages 177–206. Springer, Berlin, 1985.
* [8] Dominique Bakry. Étude des transformations de Riesz dans les variétés riemanniennes à courbure de Ricci minorée. In Séminaire de Probabilités, XXI, volume 1247 of Lecture Notes in Math., pages 137–172. Springer, Berlin, 1987.
* [9] Dominique Bakry, Ivan Gentil, and Michel Ledoux. Analysis and geometry of Markov diffusion operators, volume 348 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer, Cham, 2014.
* [10] Itai Benjamini and Nicolas Curien. Ergodic theory on stationary random graphs. Electron. J. Probab., 17:no. 93, 20, 2012.
* [11] Itai Benjamini, Hugo Duminil-Copin, Gady Kozma, and Ariel Yadin. Disorder, entropy and harmonic functions. Ann. Probab., 43(5):2332–2373, 2015.
* [12] Itai Benjamini, Russell Lyons, and Oded Schramm. Unimodular random trees. Ergodic Theory Dynam. Systems, 35(2):359–373, 2015.
* [13] Itai Benjamini, Elliot Paquette, and Joshua Pfeffer. Anchored expansion, speed and the Poisson-Voronoi tessellation in symmetric spaces. Ann. Probab., 46(4):1917–1956, 2018.
* [14] Itai Benjamini and Oded Schramm. Recurrence of distributional limits of finite planar graphs. Electron. J. Probab., 6:no. 23, 13, 2001.
* [15] Charles Bordenave. Spectrum of random graphs. In Advanced topics in random matrices, volume 53 of Panor. Synthèses, pages 91–150. Soc. Math. France, Paris, 2017.
* [16] Charles Bordenave, Marc Lelarge, and Justin Salez. The rank of diluted random graphs. Ann. Probab., 39(3):1097–1121, 2011.
* [17] Charles Bordenave, Marc Lelarge, and Justin Salez. Matchings on infinite graphs. Probab. Theory Related Fields, 157(1-2):183–208, 2013.
* [18] Magnus Bordewich and Martin Dyer. Path coupling without contraction. J. Discrete Algorithms, 5(2):280–292, 2007.
* [19] D. P. Bourne, D. Cushing, S. Liu, F. Münch, and N. Peyerimhoff. Ollivier-Ricci idleness functions of graphs. SIAM J. Discrete Math., 32(2):1408–1424, 2018.
* [20] Matías Carrasco Piaggio and Pablo Lessa. Equivalence of zero entropy and the Liouville property for stationary random graphs. Electron. J. Probab., 21:Paper No. 55, 24, 2016.
* [21] D. Cushing, S. Kamtue, J. Koolen, S. Liu, F. Münch, and N. Peyerimhoff. Rigidity of the Bonnet-Myers inequality for graphs with respect to Ollivier Ricci curvature. Adv. Math., 369:107188, 53, 2020.
* [22] David Cushing, Riikka Kangaslampi, Valtteri Lipiäinen, Shiping Liu, and George W. Stagg. The graph curvature calculator and the curvatures of cubic graphs. Experimental Mathematics, 0(0):1–13, 2019.
* [23] David Cushing, Shiping Liu, and Norbert Peyerimhoff. Bakry-Émery curvature functions on graphs. Canad. J. Math., 72(1):89–143, 2020.
* [24] Ronen Eldan, James R. Lee, and Joseph Lehec. Transport-entropy inequalities and curvature in discrete-space Markov chains. In A journey through discrete mathematics, pages 391–406. Springer, Cham, 2017.
* [25] Gábor Elek. On the limit of large girth graph sequences. Combinatorica, 30(5):553–563, 2010.
* [26] Max Fathi and Yan Shu. Curvature and transport inequalities for Markov chains in discrete spaces. Bernoulli, 24(1):672–698, 2018.
* [27] Bobo Hua. Liouville theorem for bounded harmonic functions on manifolds and graphs satisfying non-negative curvature dimension condition. Calc. Var. Partial Differential Equations, 58(2):Paper No. 42, 8, 2019.
* [28] Jürgen Jost. Riemannian geometry and geometric analysis. Universitext. Springer, Cham, seventh edition, 2017.
* [29] Jürgen Jost, Florentin Münch, and Christian Rose. Liouville property and non-negative Ollivier curvature on graphs, 2019.
* [30] Aldéric Joulin and Yann Ollivier. Curvature, concentration and error estimates for Markov chain Monte Carlo. Ann. Probab., 38(6):2418–2442, 2010.
* [31] V. A. Kaĭmanovich and A. M. Vershik. Random walks on discrete groups: boundary and entropy. Ann. Probab., 11(3):457–490, 1983.
* [32] Mark Kempton, Gabor Lippner, and Florentin Münch. Large scale Ricci curvature on graphs. Calc. Var. Partial Differential Equations, 59(5):Paper No. 166, 17, 2020.
* [33] Bo’az Klartag, Gady Kozma, Peter Ralli, and Prasad Tetali. Discrete curvature and abelian groups. Canad. J. Math., 68(3):655–674, 2016.
* [34] Yong Lin, Linyuan Lu, and Shing-Tung Yau. Ricci curvature of graphs. Tohoku Math. J. (2), 63(4):605–627, 2011.
* [35] Russell Lyons. Asymptotic enumeration of spanning trees. Combin. Probab. Comput., 14(4):491–522, 2005.
* [36] Russell Lyons and Shayan Oveis Gharan. Sharp bounds on random walk eigenvalues via spectral embedding. Int. Math. Res. Not. IMRN, (24):7555–7605, 2018.
* [37] Russell Lyons, Robin Pemantle, and Yuval Peres. Ergodic theory on Galton-Watson trees: speed of random walk and dimension of harmonic measure. Ergodic Theory Dynam. Systems, 15(3):593–619, 1995.
* [38] Russell Lyons and Yuval Peres. Probability on trees and networks, volume 42 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, New York, 2016.
* [39] Florentin Münch. Li-Yau inequality under $CD(0,n)$ on graphs. arXiv e-prints, page arXiv:1909.10242, September 2019.
* [40] Florentin Münch. Non-negative Ollivier curvature on graphs, reverse Poincaré inequality, Buser inequality, Liouville property, Harnack inequality and eigenvalue estimates. arXiv e-prints, page arXiv:1907.13514, July 2019.
* [41] Florentin Münch and Radosław K. Wojciechowski. Ollivier Ricci curvature for general graph Laplacians: heat equation, Laplacian comparison, non-explosion and diameter bounds. Adv. Math., 356:106759, 45, 2019.
* [42] Yann Ollivier. Ricci curvature of metric spaces. C. R. Math. Acad. Sci. Paris, 345(11):643–646, 2007.
* [43] Yann Ollivier. Ricci curvature of Markov chains on metric spaces. J. Funct. Anal., 256(3):810–864, 2009.
* [44] Yann Ollivier. A survey of Ricci curvature for metric spaces and Markov chains. In Probabilistic approach to geometry, volume 57 of Adv. Stud. Pure Math., pages 343–381. Math. Soc. Japan, Tokyo, 2010.
* [45] Cédric Villani. Topics in optimal transportation, volume 58 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2003.
# Cyclotomic expansions for $\mathfrak{gl}_{N}$ link invariants
via interpolation Macdonald polynomials
Anna Beliakova Institut für Mathematik
Universität Zürich
Winterthurerstrasse 190
CH-8057 Zürich
Switzerland<EMAIL_ADDRESS>and Eugene Gorsky Department of Mathematics
University of California, Davis
One Shields Avenue, Davis CA 95616, USA<EMAIL_ADDRESS>
###### Abstract.
In this paper we construct a new basis for the cyclotomic completion of the
center of the quantum $\mathfrak{gl}_{N}$ in terms of the interpolation
Macdonald polynomials. Then we use a result of Okounkov to provide a dual
basis with respect to the quantum Killing form (or Hopf pairing). The main
applications are: 1) cyclotomic expansions for the $\mathfrak{gl}_{N}$
Reshetikhin–Turaev link invariants and the universal $\mathfrak{gl}_{N}$ knot
invariant; 2) an explicit construction of the unified $\mathfrak{gl}_{N}$
invariants for integral homology 3-spheres using universal Kirby colors. These
results generalize those of Habiro for $\mathfrak{sl}_{2}$. In addition, we
give a simple proof of the fact that the universal $\mathfrak{gl}_{N}$
invariant of any evenly framed link and the universal $\mathfrak{sl}_{N}$
invariant of any $0$-framed algebraically split link are $\Gamma$-invariant,
where $\Gamma=Y/2Y$ with the root lattice $Y$.
## 1\. Introduction
In a series of papers [1, 2, 15] Habiro, the first author et al. defined
unified invariants of homology 3-spheres that belong to the Habiro ring and
dominate Witten–Reshetikhin–Turaev (WRT) invariants. Unified invariants
provide an important tool to study structural properties of the WRT
invariants. In [3, 5] they were used to prove integrality of the
$\mathfrak{sl}_{2}$ WRT invariants for all 3-manifolds at all roots of unity.
The theory of unified invariants for $\mathfrak{sl}_{2}$ is based on
cyclotomic expansions for the colored Jones polynomial and for the universal
knot invariant constructed as follows. Given a framed oriented link $L$ in the
3-sphere, we open its components to obtain a bottom tangle $T$, presented by a
diagram $D$ (see Figure 1). For a ribbon Hopf algebra $U_{q}\mathfrak{g}$, the
universal link invariant $J_{L}(\mathfrak{g};q)$ is obtained by splitting $D$
into elementary pieces: crossings, caps and cups, and then by associating to
them $R^{\pm 1}$-matrices and pivotal elements, respectively.
Figure 1. An example of the clasp bottom tangle
For a knot $K$, $J_{K}(\mathfrak{g};q)$ belongs to (some completion of) the
center $\mathcal{Z}(U_{q}\mathfrak{g})$. In the easiest case
$\mathfrak{g}=\mathfrak{sl}_{2}$, the center is generated by the Casimir $C$.
For a $0$-framed knot $K$, Habiro showed that there are coefficients
$a_{m}(K)\in\mathbb{Z}[q^{\pm 1}]$ such that
(1) $\displaystyle
J_{K}(\mathfrak{sl}_{2};q)=\sum^{\infty}_{m=0}a_{m}(K)\,\sigma_{m}\;\quad\text{with}\;\quad\sigma_{m}=\prod_{i=1}^{m}\left(C^{2}-(q^{i}+q^{-i}+2)\right).$
Replacing $C^{2}$ in (1) by its value $q^{n}+q^{-n}+2$ on the $n$-dimensional
irreducible representation $V_{n-1}$, we get the $n$-colored Jones polynomial
of $K$ (normalized to 1 for the unknot)
(2)
$J_{K}(V_{n-1},q)=\sum_{m=0}^{\infty}(-1)^{m}q^{-\frac{m(m+1)}{2}}a_{m}(K)\,(q^{1+n};q)_{m}(q^{1-n};q)_{m}$
where $(a;q)_{m}=(1-a)(1-aq)\dots(1-aq^{m-1})$. Equation (2) is known as a
cyclotomic expansion of the colored Jones polynomial. Thus, Habiro’s series
(1) dominates all colored Jones polynomials of $K$. To prove the fact that
$J_{K}(\mathfrak{sl}_{2};q)$ belongs to the even part of
$\mathcal{Z}(U_{q}\mathfrak{sl}_{2})$, generated by $C^{2}$, Habiro used the
full power of the theory of bottom tangles developed in [16].
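To make the evaluation (2) concrete, here is a small sympy sketch (our addition). Given a list of coefficients $a_{m}$ — knot-dependent, and purely hypothetical in the example below — it evaluates the cyclotomic sum, which truncates automatically since $(q^{1-n};q)_{m}$ vanishes for $m\geq n$.

```python
import sympy as sp

q = sp.symbols('q')

def qpoch(a, m):
    # q-Pochhammer symbol (a; q)_m = (1 - a)(1 - aq)...(1 - aq^{m-1}).
    return sp.prod([1 - a * q**i for i in range(m)])

def colored_jones_from_cyclotomic(a, n):
    # Evaluate (2) for the color V_{n-1}, given coefficients a[m]
    # (knot-dependent; the ones used below are hypothetical).
    total = 0
    for m in range(n):  # (q^{1-n}; q)_m = 0 once m >= n
        total += ((-1)**m * q**(-m * (m + 1) // 2) * a[m]
                  * qpoch(q**(1 + n), m) * qpoch(q**(1 - n), m))
    return sp.expand(total)

a = [1, q, q**2]  # hypothetical coefficients, just to exercise the formula
print(colored_jones_from_cyclotomic(a, 3))
```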
In this paper we give a simple proof for the “evenness” of the universal
invariant of algebraically split links for all quantum groups of type $A$.
Recall that $U_{q}\mathfrak{g}$ has a natural action of a finite group
$\Gamma=Y/2Y$ where $Y$ is the root lattice of $\mathfrak{g}$. For
$\mathfrak{g}=\mathfrak{gl}_{N}$, $\Gamma=\mathbb{Z}_{2}^{N}$ and for
$\mathfrak{g}=\mathfrak{sl}_{N}$, $\Gamma=\mathbb{Z}_{2}^{N-1}$.
###### Theorem 1.1.
The universal $\mathfrak{gl}_{N}$ invariant of any evenly framed link is
$\Gamma$-invariant. The universal $\mathfrak{sl}_{N}$ invariant of any
0-framed algebraically split link is $\Gamma$-invariant.
The quantum group $U_{q}\mathfrak{gl}_{N}$ admits a finite dimensional
irreducible representation $V(\lambda)$ with highest weight $v^{\lambda}$ for
any partition $\lambda=(\lambda_{1}\geq\dots\geq\lambda_{N})$ with $N$ parts
and $v^{2}=q$. To prove Theorem 1.1 we extend the Reshetikhin-Turaev
invariants to tangles colored with representations $L(\zeta)\otimes
V(\lambda)$ where $L(\zeta)$ is a one-dimensional representation of
$U_{q}\mathfrak{gl}_{N}$ for $\zeta\in\Gamma$. Then the claim follows from the
comparison of the $\mathfrak{gl}_{N}$ Reshetikhin-Turaev link invariants
colored with $L(\zeta)\otimes V(\lambda)$ and $V(\lambda)$.
The next main result of the paper establishes an explicit basis in the
$\Gamma$-invariant part of the center $\mathcal{Z}$ of
$U_{q}\mathfrak{gl}_{N}$. It generalizes Habiro’s basis
$\\{\sigma_{m}\,|\,m\in\mathbb{N}\\}$ for the even part of
$\mathcal{Z}(U_{q}\mathfrak{sl}_{2})$.
###### Theorem 1.2.
There exists a family of central elements $\sigma_{\lambda}\in\mathcal{Z}$
labeled by partitions $\lambda$ with at most $N$ parts with the following
properties:
* (a)
$\sigma_{\lambda}$ is $\Gamma$-invariant and annihilates $L(\zeta)\otimes
V(\mu)$ for all $\zeta\in\Gamma$ and partitions $\mu$ with at most $N$ parts
not containing $\lambda$;
* (b)
$\sigma_{\lambda}$ does not annihilate $V(\lambda)$ and acts on it by an
explicit scalar (see Theorem 8.2).
The proof uses the theory of interpolation Macdonald polynomials developed in
[23, 24, 29, 30, 31, 32, 36]. This theory allows one to reconstruct a
symmetric function $f(x_{1},\ldots,x_{N})$ from its values at special points
$x_{i}=q^{-\mu_{i}-N+i}$ where $\mu$ is an arbitrary partition with at most
$N$ parts. The connection between the center of $U_{q}\mathfrak{gl}_{N}$ and
symmetric functions goes through the quantum Harish-Chandra isomorphism, and
we interpret $f(q^{-\mu_{1}-N+1},\dots,q^{-\mu_{N}})$ as the scalar by which
the element of the center $f$ acts on the irreducible representation
$V({\mu})$. Interpolation Macdonald polynomials then correspond to a natural
basis in the center of $U_{q}\mathfrak{gl}_{N}$.
The polynomials $\sigma_{\lambda}$ yield a basis in the $\Gamma$-invariant
parts of both the center $\mathcal{Z}$ and its completion (a function in the
completion is determined by its values on all finite-dimensional
representations). We use a formula of Okounkov [29] to give an explicit expansion
of a given central element $z$ in the basis $\sigma_{\lambda}$ in terms of the
scalars by which $z$ acts on all finite-dimensional representations
$V({\lambda})$. This leads to an expansion of the universal knot invariant in
the basis $\sigma_{\lambda}$, where the coefficients are related to
Reshetikhin-Turaev invariants of the same knot colored by $V({\mu})$ via an
explicit triangular matrix $(d_{\lambda,\mu})$ which does not depend on the
knot.
###### Theorem 1.3.
For any evenly framed knot $K$, there exist Laurent polynomials
$a_{\lambda}(K)\in\mathbb{Z}[q,q^{-1}]$ such that the universal invariant of
$K$ has the following expansion:
(3)
$J_{K}(\mathfrak{gl}_{N};q)=\sum_{\lambda}a_{\lambda}(K)\,\sigma_{\lambda}\ .$
Moreover, the coefficients $a_{\lambda}(K)$ can be computed in terms of the
Reshetikhin-Turaev invariants as follows:
$a_{\lambda}(K)=\sum_{\mu\subset\lambda}{d_{\lambda,\mu}(q^{-1})}\;J_{K}(V(\mu),q)$
where the coefficients $d_{\lambda,\mu}(q)$ are defined in Theorem 10.17.
We prove Theorem 1.3 as Proposition 8.7. We would like to emphasize that the
fact that $a_{\lambda}(K)$ are Laurent polynomials in $q$ is highly
nontrivial. Indeed, we have computed the tables of coefficients
$d_{\lambda,\mu}(q)$ for $\mathfrak{gl}_{2}$ in Section 11.4 and these are
complicated rational functions, so a priori $a_{\lambda}(K)$ are rational
functions as well. Theorem 1.3 thus encodes certain divisibility properties
for the linear combinations of colored invariants of $K$. We refer to Section
11.5 for the explicit computation of the coefficients $a_{\lambda}(K)$ for the
figure eight knot.
We call (3) a cyclotomic expansion of the universal $\mathfrak{gl}_{N}$ knot
invariant. The name cyclotomic is justified by the fact that (3) has well-
defined evaluations at any root of unity by Lemma 10.29 below. Note that for
$N=2$ and a $0$-framed knot, our expansion does not coincide with that of
Habiro, simply because the fact that an element $z\in U_{q}\mathfrak{gl}_{2}$
is central and $\Gamma$-invariant does not imply that $z$ has a decomposition
in even powers of the Casimir. Therefore, our cyclotomic expansion is rather a
generalization of $F_{\infty}$ in [37] or [4, eq. (3.14)], both having
interesting applications in the theory of non-semisimple invariants of links
and 3-manifolds.
For our next application, assume $M$ is an integral homology 3-sphere obtained
by $\mathbf{\varepsilon}$-surgery on an $\ell$-component algebraically split
$0$-framed link $L$ with $\mathbf{\varepsilon}\in\\{\pm 1\\}^{\ell}$.
Following Habiro–Le, we define a $\mathfrak{gl}_{N}$ unified invariant $I(M)$
as
$I(M)=\langle\,r^{\otimes\mathbf{\varepsilon}},J_{L}(\mathfrak{gl}_{N};q)\,\rangle\
$
where $r$ is the $\mathfrak{gl}_{N}$ ribbon element and
$\langle\cdot,\cdot\rangle$ is the Hopf pairing. In the case of
$\mathfrak{sl}_{N}$ Habiro–Le proved [19] that the unified invariant belongs
to a cyclotomic completion of the polynomial ring
$\widehat{\mathbb{Z}[q]}:=\varprojlim_{n}\;\frac{\mathbb{Z}[q]}{((q;q)_{n})}$
known as the Habiro ring. Using interpolation, we are able to express $I(M)$ in
terms of special linear combinations of Reshetikhin–Turaev invariants of $L$,
called Kirby colors. For this we diagonalize the Hopf pairing, i.e. find a
basis $P_{\mu}$ that is dual to $\sigma_{\lambda}$ and orthogonal to
$V(\lambda)$ with respect to the Hopf pairing. This allows us to give explicit
formulas for the universal Kirby colors $\omega_{\pm}$ (see (25)) in the basis
$P_{\mu}$ and to prove the following result.
###### Theorem 1.4.
The unified invariant
$I(M)=J_{L}(\omega_{\epsilon_{1}},\dots,\omega_{\epsilon_{\ell}})\;\in\;\widehat{\mathbb{Z}[q]}$
belongs to the Habiro ring and dominates $\mathfrak{gl}_{N}$ WRT invariants of
$M_{\pm}$ at all roots of unity. Moreover, $I(M)$ is equal to the
$\mathfrak{sl}_{N}$ Habiro–Le invariant of $M_{\pm}$.
To prove that $I(M)$ is equal to the $\mathfrak{sl}_{N}$ Habiro–Le invariant
we show the equality of the universal $\mathfrak{gl}_{N}$ and
$\mathfrak{sl}_{N}$ invariants for $0$-framed algebraically split links, and
the fact that the $\mathfrak{gl}_{N}$ and $\mathfrak{sl}_{N}$ twist forms
$x\mapsto\langle r^{\pm 1},x\rangle$ on them coincide. It follows that $I(M)$
belongs to the Habiro ring. Then we establish invariance of Kirby colors
$\omega_{\pm}$ under Hoste moves (a version of Fenn–Rourke moves between
algebraically split links) in Lemma 9.1, and finally deduce the equality
$I(M)=J_{L}(\omega_{\epsilon_{1}},\dots,\omega_{\epsilon_{\ell}})$.
The main advantage of Theorem 1.4 compared to the Habiro–Le approach is the
interpretation of $I(M)$ as the Reshetikhin–Turaev invariant of $L$ colored by
$\omega_{\mathbf{\varepsilon}}$. This leads to various striking divisibility
results and allows us to extend our cyclotomic expansion to links.
###### Corollary 1.5.
Let $L$ be an $\ell$-component algebraically split $0$-framed link. Then for
all but finitely many partitions $\lambda_{i}$ with $1\leq i\leq\ell$, there
exist positive integers $n=n(\lambda_{i},N)$ such that
$J_{L}(P^{\prime}_{\lambda_{1}},\dots,P^{\prime}_{\lambda_{\ell}})\in(q;q)_{n}\,\mathbb{Z}[q,q^{-1}]\
$
where $P^{\prime}_{\lambda}=v^{|\lambda|}\,\dim_{q}V(\lambda)\,P_{\lambda}$ is
a scalar multiple of $P_{\lambda}$.
This is a generalization of the famous integrability theorem in [15, Thm.
8.2]. The authors do not know any direct proof of Corollary 1.5 without using
the theory of unified invariants. Based on Corollary 9.4 we obtain a
cyclotomic expansion for the Reshetikhin-Turaev invariants of $L$:
(4)
$J_{L}(\lambda_{1},\dots,\lambda_{\ell})=v^{\sum_{i}|\lambda_{i}|}\sum_{\mu_{i}\subset\lambda_{i}}\prod^{l}_{j=1}c_{\lambda_{j},\mu_{j}}(q^{-1})\,J_{L}(P^{\prime}_{\mu_{1}},\dots,P^{\prime}_{\mu_{\ell}})$
where the matrix
$\left[c_{\lambda,\mu}(q)\right]_{\lambda,\mu}:=\left[F_{\lambda}(q^{-\mu_{i}-N+i})\right]_{\lambda,\mu}$
is the inverse of $\left[d_{\lambda,\mu}(q)\right]_{\lambda,\mu}$. This
generalizes equation $(8.2)$ in [15].
In addition, in the case of knot surgeries we give a direct proof of the fact
that
$I(M_{\pm})=J_{L}(\omega_{\pm})\;\in\;\widehat{\mathbb{Z}[v]}$
by using our cyclotomic expansion and the interpolation theory.
Finally, we would like to comment on potential ideas for categorification of
these results. The ring of symmetric polynomials in $N$ variables is naturally
categorified by the category of annular $\mathfrak{gl}_{N}$-webs, with
morphisms given by annular foams [6, 33, 34, 13, 11]. By the work of the
second author and Wedrich [13], one can interpret it as a symmetric monoidal
Karoubian category generated by one object $E$ corresponding to a single
essential circle. The symmetric polynomials are then categorified by the Schur
functors of $E$.
We expect the categorified interpolation polynomials to correspond to
interpolation Macdonald polynomials where $q$ plays the role of quantum
grading and $t$ of the homological grading (after some change of variables).
We recall the general definitions and properties of these polynomials from
[29] in Appendix. The key obstacle for categorification of interpolation
polynomials is that they are not homogeneous. Therefore one needs to enrich
the category and allow additional morphisms between $E$ and the identity object.
On the other hand, the conjectures of the second author, Negut and Rasmussen
([12], see [10, 11] for further discussions) relate a version of the annular
category to the derived category of the Hilbert scheme of points on the plane.
The interpolation Macdonald polynomials appear in that context as well [7].
The paper is organized as follows. After recalling the definitions, we compare
the Reshetikhin–Turaev invariants of tangles colored by $V(\lambda)$ and
$L(\zeta)\otimes V(\lambda)$ in Section 4. In the next two sections we
summarize known results about the center of $U_{q}\mathfrak{gl}_{N}$, define
its completion and prove Theorem 1.1 in Section 6.2. The remaining results are
proven in Sections 8, 9 assuming some facts about interpolation.
In the last sections we develop the theory of the interpolation Macdonald
polynomials, starting from the one variable case. We define multi-variable
interpolation polynomials, state and prove their properties in Section 10.2.
Next, we solve the interpolation problem in two ways, one using the approach
of Okounkov (Theorem 10.17), and another using Hopf pairing (see (38)). We
study divisibility of $F_{\lambda}(q^{a_{1}},\ldots,q^{a_{n}})$ by quantum
factorials in Section 10.5 (see Lemma 10.29). Section 11 is focused on various
stability properties of the interpolation polynomials such as adding a column
to a partition $\lambda$ (Proposition 11.8) and changing $N$ for a fixed Young
diagram $\lambda$. In particular, in Proposition 11.5 we describe a HOMFLY-PT
analogue of the interpolation polynomials depending on an additional parameter
$A=q^{N}$. We provide lots of examples and tables of interpolation
polynomials, especially for $\mathfrak{gl}_{2}$. In Appendix A we collect some
additional known facts about the interpolation Macdonald polynomials and the
Habiro ring.
## Acknowledgments
The authors would like to thank Pavel Etingof, Kazuo Habiro, Thang Le, Matt
Hogancamp, Andrei Okounkov and Paul Wedrich for the useful discussions. We
thank Satoshi Nawata for his comments on the first version of the paper and
for bringing the reference [22] to our attention. Our work was partially
supported by the NSF grants DMS-1700814 (E.G.), DMS-1760329 (E.G.) and the
NCCR SwissMAP (A.B.).
## 2\. Notations and conventions
### 2.1. $q$-binomial formulas
Throughout the paper we will use the following notations for the $q$-series.
The $q$-Pochhammer symbols are defined as
$(a;q)_{m}=\prod_{i=0}^{m-1}(1-aq^{i}),\
(a;q)_{\infty}=\prod_{i=0}^{\infty}(1-aq^{i}),\ m\geq 0.$
It is easy to see that
$(a;q)_{m+k}=(a;q)_{m}(aq^{m};q)_{k},\
(a;q)_{m}=\frac{(a;q)_{\infty}}{(aq^{m};q)_{\infty}}.$
We will use two normalizations for $q$-binomial coefficients defined as
follows:
$\\{a\\}_{q}=1-q^{a},\ [a]_{q}=\frac{\\{a\\}_{q}}{\\{1\\}_{q}},\
[a]_{q}!=[1]_{q}\cdots[a]_{q},\
\binom{a}{b}_{q}=\frac{[a]_{q}!}{[b]_{q}![a-b]_{q}!}\ .$
Note that
$[a]_{q}=\frac{(q;q)_{a}}{(1-q)^{a}},\
\binom{a}{b}_{q}=\frac{(q;q)_{a}}{(q;q)_{b}(q;q)_{a-b}}.$
Finally, the $q$-binomial formula gives
$(a;q)_{m}=\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{m}{j}_{q}a^{j}.$
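These identities are easy to test mechanically; the following sympy snippet (added as a sanity check, not part of the original text) verifies the $q$-binomial formula for small $m$.

```python
import sympy as sp

q, a = sp.symbols('q a')

def qpoch(x, m):
    # (x; q)_m
    return sp.prod([1 - x * q**i for i in range(m)])

def qbinom(m, j):
    # Gaussian binomial coefficient via (q;q)_m / ((q;q)_j (q;q)_{m-j}).
    return sp.cancel(qpoch(q, m) / (qpoch(q, j) * qpoch(q, m - j)))

for m in range(6):
    lhs = qpoch(a, m)
    rhs = sum((-1)**j * q**(j * (j - 1) // 2) * qbinom(m, j) * a**j
              for j in range(m + 1))
    assert sp.simplify(lhs - rhs) == 0
```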
Let us also define symmetric $q$-numbers. For this we choose $v$ such that
$v^{2}=q$ and set
$\\{a\\}=v^{a}-v^{-a},\quad[a]:=\frac{\\{a\\}}{\\{1\\}},\quad\left[\begin{matrix}a\\\ b\end{matrix}\right]=\frac{\\{a\\}!}{\\{b\\}!\\{a-b\\}!}\ .$
We will use all these formulas throughout the paper without a reference.
### 2.2. Partitions
We will work with partitions
$\lambda=(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{N})$ which we will
identify with the corresponding Young diagrams in French notation, where the
rows have length $\lambda_{i}$. The transpose of $\lambda$ is denoted by
$\lambda^{\prime}$, and $|\lambda|=\sum\lambda_{i}$. Given a box in a Young
diagram, we define its arm, co-arm, leg and co-leg as in Figure 2.
Figure 2. Arm, co-arm, leg and co-leg
We define the hook length as $h(\square)=a(\square)+l(\square)+1$, and the
content $c(\square)=a^{\prime}-l^{\prime}$.
Let
$n(\lambda)=\sum(i-1)\lambda_{i}=\sum_{\square}l^{\prime}(\square)=\sum_{\square}l(\square),$
then
$n(\lambda^{\prime})=\sum\frac{\lambda_{i}(\lambda_{i}-1)}{2}=\sum_{\square}a^{\prime}(\square)=\sum_{\square}a(\square).$
The content of $\lambda$ is defined as
$c(\lambda)=\sum_{\square}c(\square)=n(\lambda^{\prime})-n(\lambda).$
Let $\bar{\lambda}_{i}=\lambda_{i}+N-i$ for $1\leq i\leq N$, then we have the
following identity
(5) $\prod_{\Box\in\lambda}(1-t^{h(\Box)})=\frac{\prod_{i\geq
1}\prod^{\bar{\lambda}_{i}}_{j=1}\left(1-t^{j}\right)}{\prod_{i<j}\left(1-t^{\bar{\lambda}_{i}-\bar{\lambda}_{j}}\right)}$
and we define
(6) $\displaystyle D_{N}(\lambda)$
$\displaystyle=\sum_{i=1}^{N}\frac{(\bar{\lambda}_{i})(\bar{\lambda}_{i}-1)}{2}=\sum_{i}\frac{\lambda_{i}(\lambda_{i}-1)}{2}+\sum_{i}(N-i)\lambda_{i}+\sum_{i=1}^{N}\binom{N-i}{2}$
(7)
$\displaystyle=n(\lambda^{\prime})+(N-1)|\lambda|-n(\lambda)+\binom{N}{3}=c(\lambda)+(N-1)|\lambda|+\binom{N}{3}.$
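As a quick sanity check (our addition), the following snippet computes hooks and contents directly from a partition and verifies the identity (5) together with the expressions (6)–(7) for $D_{N}(\lambda)$ on a sample partition.

```python
import sympy as sp
from math import comb

t = sp.symbols('t')

def conjugate(lam):
    # Transpose partition lambda'.
    return [sum(1 for li in lam if li > j) for j in range(max(lam))]

def hooks_contents(lam):
    # List of (hook length, content) over the boxes of lambda (0-based rows/cols).
    lamc = conjugate(lam)
    return [(li - j - 1 + lamc[j] - i - 1 + 1, j - i)
            for i, li in enumerate(lam) for j in range(li)]

def check(lam, N):
    lam = list(lam) + [0] * (N - len(lam))
    bars = [lam[i] + N - 1 - i for i in range(N)]  # bar(lambda)_i = lambda_i + N - i
    hc = hooks_contents(lam)
    # Identity (5):
    lhs = sp.prod([1 - t**h for h, _ in hc])
    rhs = sp.prod([1 - t**j for b in bars for j in range(1, b + 1)])
    rhs /= sp.prod([1 - t**(bars[i] - bars[j])
                    for i in range(N) for j in range(i + 1, N)])
    assert sp.simplify(lhs - rhs) == 0
    # D_N(lambda), equations (6)-(7):
    D = sum(b * (b - 1) // 2 for b in bars)
    size, content = len(hc), sum(c for _, c in hc)
    assert D == content + (N - 1) * size + comb(N, 3)

check((3, 1), N=3)
```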
## 3\. Quantum groups
### 3.1. Quantum $\mathfrak{gl}_{N}$
The quantum group $\mathcal{U}=U_{q}\mathfrak{gl}_{N}$ is a
$\mathbb{C}(v)$-algebra generated by $E_{1},\ldots,E_{N-1}$,
$F_{1},\ldots,F_{N-1}$, $K^{\pm 1}_{1},\ldots,K_{N}^{\pm 1}$ satisfying the
following relations:
(8) $K_{i}E_{i}=vE_{i}K_{i},\ K_{i}F_{i}=v^{-1}F_{i}K_{i},\
K_{i+1}E_{i}=v^{-1}E_{i}K_{i+1},\ K_{i+1}F_{i}=vF_{i}K_{i+1}$ (9)
$[E_{i},F_{j}]=\delta_{ij}\frac{K_{i}K_{i+1}^{-1}-K_{i+1}K_{i}^{-1}}{v-v^{-1}},\
[K_{i},K_{j}]=0,$ (10) $E_{i}^{2}E_{j}-[2]E_{i}E_{j}E_{i}+E_{j}E_{i}^{2}=0\
\text{if}\ |i-j|=1\ \text{and}\ [E_{i},E_{j}]=0\ \text{otherwise}$
and analogously for $F_{i}$, where $v^{2}=q$. To simplify the notation we set
$\mathcal{K}_{i}:=K_{i}K^{-1}_{i+1}$. Then the Hopf algebra structure on
$\mathcal{U}$ (i.e. coproduct, antipode and counit) can be defined as follows:
$\Delta(E_{i})=E_{i}\otimes 1+\mathcal{K}_{i}\otimes E_{i},\
\Delta(F_{i})=1\otimes F_{i}+F_{i}\otimes\mathcal{K}_{i}^{-1},\
\Delta(K_{i}^{\pm 1})=K_{i}^{\pm 1}\otimes K_{i}^{\pm 1},$ $S(K_{i}^{\pm
1})=K_{i}^{\mp 1},\ S(E_{i})=-E_{i}\mathcal{K}_{i}^{-1},\
S(F_{i})=-\mathcal{K}_{i}F_{i}$ $\varepsilon(K_{i}^{\pm
1})=1,\varepsilon(E_{i})=\varepsilon(F_{i})=0.$
Usually $\mathcal{U}$ is considered as a subalgebra of $\mathcal{U}_{h}$ that
is an $h$-adically complete $\mathbb{C}[[h]]$-algebra topologically generated
by $E_{i}$, $F_{i}$ and $H_{j}$ for $1\leq i\leq N-1$ and $1\leq j\leq N$ with
$v=\exp(h/2),\quad K_{i}=v^{H_{i}}=\exp(hH_{i}/2)$
satisfying (9), (10) and
$H_{i}E_{i}-E_{i}H_{i}=E_{i},\;\;H_{i}F_{i}-F_{i}H_{i}=-F_{i},\;\;H_{i+1}E_{i}-E_{i}H_{i+1}=-E_{i},\;\;H_{i+1}F_{i}-F_{i}H_{i+1}=F_{i}$
replacing (8). Rewriting the defining relations in terms of the generators
$e_{i}={E_{i}}(v-v^{-1}),\quad
F_{i}^{(n)}=\frac{F_{i}^{n}}{[n]!}\quad\text{and}\quad
K_{j}\quad\text{for}\quad 1\leq i\leq N-1,\quad 1\leq j\leq N$
we obtain an integral version $\mathcal{U}_{\mathbb{Z}}$ as a Hopf algebra
over $\mathbb{Z}[v,v^{-1}]\subset\mathbb{C}(v)\subset\mathbb{C}[[h]]$.
The quantum group $\mathfrak{gl}_{N}$ has a fundamental representation
$\mathbb{C}^{N}$ with basis $v_{1},\ldots,v_{N}$ such that
$K_{i}v_{j}=v^{\delta_{ij}}v_{j},\ E_{i}v_{j}=\begin{cases}v_{i}&\text{if}\
j=i+1\\\ 0&\text{otherwise}\\\
\end{cases},F_{i}v_{j}=\begin{cases}v_{i+1}&\text{if}\ j=i\\\
0&\text{otherwise}.\\\ \end{cases}$
It generates a braided monoidal category with simple objects $V(\lambda)$,
where $\lambda$ is a partition with at most $N$ parts. These are highest
weight modules where $K_{i}$ act on the highest weight vector by
$v^{\lambda_{i}}$. The fundamental representation corresponds to
$\lambda=(1)$. The representations $V(\lambda)$ have integral basis where
$\mathcal{U}_{\mathbb{Z}}$ acts by $\mathbb{Z}[v,v^{-1}]$-valued matrices.
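A minimal sympy sketch, added here for illustration: build the fundamental-representation matrices for $N=3$ and check the relations (8) and (9); the Serre relations (10) can be tested the same way.

```python
import sympy as sp

v = sp.symbols('v')
N = 3

def unit(i, j):
    # Elementary matrix e_{i, j} (0-based indices).
    M = sp.zeros(N)
    M[i, j] = 1
    return M

# Fundamental representation: K_i v_j = v^{delta_{ij}} v_j,
# E_i v_{i+1} = v_i and F_i v_i = v_{i+1}.
K = [sp.diag(*[v if j == i else 1 for j in range(N)]) for i in range(N)]
E = [unit(i, i + 1) for i in range(N - 1)]
F = [unit(i + 1, i) for i in range(N - 1)]

for i in range(N - 1):
    # Relations (8):
    assert sp.simplify(K[i] * E[i] - v * E[i] * K[i]) == sp.zeros(N)
    assert sp.simplify(K[i] * F[i] - F[i] * K[i] / v) == sp.zeros(N)
    assert sp.simplify(K[i + 1] * E[i] - E[i] * K[i + 1] / v) == sp.zeros(N)
    assert sp.simplify(K[i + 1] * F[i] - v * F[i] * K[i + 1]) == sp.zeros(N)
    # Relations (9):
    for j in range(N - 1):
        comm = E[i] * F[j] - F[j] * E[i]
        rhs = sp.zeros(N)
        if i == j:
            rhs = (K[i] * K[i + 1]**-1 - K[i + 1] * K[i]**-1) / (v - 1 / v)
        assert sp.simplify(comm - rhs) == sp.zeros(N)
```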
### 3.2. Ribbon structure
The Hopf algebra $\mathcal{U}_{h}$ admits a ribbon Hopf algebra structure (see
e.g. [8, Cor. 8.3.16]). The universal $R$-matrix has the form
$\mathcal{R}=D\Theta$ where the diagonal part $D$ and the quasi-$R$-matrix are
defined as follows
$D=v^{\sum^{N}_{i=1}H_{i}\otimes
H_{i}}\quad\text{and}\quad\Theta=\sum_{\mathbf{n}\in\mathbb{N}^{N-1}}F_{\mathbf{n}}\otimes
e_{\mathbf{n}}$
where for any sequence of non-negative integers
$\mathbf{n}=(n_{1},\dots,n_{N-1})$, the elements $e_{\mathbf{n}}$ and
$F_{\mathbf{n}}$ are defined by equations (66) and (67) in [19] and form
topological bases of the positive and negative parts in the triangular
decomposition of $\mathcal{U}_{\mathbb{Z}}$. The inverse matrix
$\mathcal{R}^{-1}=\iota(\Theta)D^{-1}$ is obtained by applying the involution
$\iota:v\to v^{-1}$.
The ribbon element and its inverse have the form
(11)
$r=\sum_{\mathbf{n}}F_{\mathbf{n}}\;\mathcal{K}_{\mathbf{n}}\;r_{0}\;e_{\mathbf{n}}\quad\text{and}\quad
r^{-1}=\sum_{\mathbf{n}}\iota(F_{\mathbf{n}})\;\mathcal{K}_{-\mathbf{n}}\;r^{-1}_{0}\;\iota(e_{\mathbf{n}})$
where $r_{0}=K_{-2\rho}v^{-\sum_{i}H^{2}_{i}}$ and
$K_{-2\rho}=\prod^{N}_{i=1}K_{i}^{2i-N-1}$ is the pivotal element. Here for
any sequence of integers $\mathbf{n}\in\mathbb{Z}^{N-1}$ we set
$\mathcal{K}_{\mathbf{n}}=\prod_{i}\mathcal{K}^{n_{i}}_{i}$, and denote by
$\rho=\left(\frac{N-1}{2},\frac{N-3}{2},\ldots,\frac{1-N}{2}\right)=\frac{1-N}{2}(1,\ldots,1)+(N-1,N-2,\ldots,0)$
the half sum of all positive roots. Using the central element
$K=\prod^{N}_{i=1}K_{i}$, we can write the previous definitions as follows:
$r^{-1}_{0}=K^{N}\;\prod^{N}_{i=1}K^{-2i}_{i}\;v^{\sum_{i}H_{i}(H_{i}+1)},\quad
K_{-2\rho}=K^{-N-1}\prod^{N}_{i=1}K^{2i}_{i}\ .$
The central element $r^{-1}$ acts on $V(\lambda)$ by the multiplication with
$\theta_{V(\lambda)}=v^{(\lambda,\lambda+2\rho)}=v^{N|\lambda|}q^{c(\lambda)}$
where $(\lambda,\mu)=\sum^{N}_{i=1}\lambda_{i}\mu_{i}$, $c(\lambda)$ is the
content of $\lambda$ and $v^{2}=q$.
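For instance, the exponent identity $(\lambda,\lambda+2\rho)=N|\lambda|+2c(\lambda)$ underlying this formula (recall $q=v^{2}$) can be confirmed directly; a small added sketch:

```python
from fractions import Fraction

def twist_exponent(lam, N):
    # Exponent of v in theta_{V(lambda)} = v^{(lambda, lambda + 2 rho)};
    # checks it equals N|lambda| + 2 c(lambda), i.e. v^{N|lambda|} q^{c(lambda)}.
    lam = list(lam) + [0] * (N - len(lam))
    rho = [Fraction(N + 1 - 2 * (i + 1), 2) for i in range(N)]
    lhs = sum(l * (l + 2 * r) for l, r in zip(lam, rho))
    content = sum(j - i for i, li in enumerate(lam) for j in range(li))
    assert lhs == N * sum(lam) + 2 * content
    return lhs

print(twist_exponent((2, 1), N=2))  # 6, so theta = v^6 here
```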
### 3.3. Even part of $\mathcal{U}$
The algebra $\mathcal{U}$ has a natural grading by
$\Gamma=\mathbb{Z}_{2}^{N}=\\{\pm 1\\}^{N}$ where
$\zeta=(\zeta_{1},\ldots,\zeta_{N})\in\Gamma$ acts on $K_{i}$ by $\zeta_{i}$,
on $E_{i}$ by 1 and on $F_{i}$ by $\zeta_{i}\zeta_{i+1}$. It is easy to see
that the defining relations are preserved under this action. Following [19],
we call an element of $\mathcal{U}$ even or $\Gamma$-invariant if it is
preserved under the action of $\Gamma$.
Let us denote by $\mathcal{U}^{\text{ev}}_{\mathbb{Z}}$ a
$\mathbb{Z}[q,q^{-1}]$-subalgebra of $\mathcal{U}_{\mathbb{Z}}$ generated by
$e_{i}$, $F^{(n)}_{i}\mathcal{K}_{i}$ and $K^{2}_{j}$ for $1\leq i\leq N-1$
and $1\leq j\leq N$. It is easy to check that
$\mathcal{U}^{\text{ev}}_{\mathbb{Z}}$ is $\Gamma$-invariant.
The action of $\Gamma$ descends on the category $\mathit{Rep}(\mathcal{U})$ of
all finite-dimensional representations. Given
$\zeta=(\zeta_{1},\ldots,\zeta_{N})\in\Gamma$, we can define a one-dimensional
representation $L(\zeta)$ where $E_{i}$ and $F_{i}$ act by zero, and $K_{i}$
act by $\zeta_{i}$. We can also define representation $V(\lambda)\otimes
L(\zeta)$ where $K_{i}$ act on the highest weight vector by
$\zeta_{i}v^{\lambda_{i}}$.
###### Lemma 3.1.
The action of $\mathcal{U}$ on $V(\lambda)\otimes L(\zeta)$ agrees with the
$\Gamma$-twisted action of $\mathcal{U}$ on $V(\lambda)$.
###### Proof.
Indeed, $\Delta(F_{i})=1\otimes F_{i}+F_{i}\otimes\mathcal{K}_{i}^{-1}$, so
$F_{i}$ acts on $V(\lambda)\otimes L(\zeta)$ via
$F_{i}\otimes\mathcal{K}_{i}^{-1}=F_{i}\zeta_{i}\zeta_{i+1}$. Similarly,
$E_{i}$ acts on $V(\lambda)\otimes L(\zeta)$ via $E_{i}\otimes 1=E_{i}$ and
$K_{i}$ acts via $K_{i}\otimes K_{i}=K_{i}\zeta_{i}.$ ∎
### 3.4. The subalgebra $U_{q}\mathfrak{sl}_{N}$
We define $U_{q}\mathfrak{sl}_{N}$ as a subalgebra of $\mathcal{U}$ generated
by $E_{i},F_{i}$ and $\mathcal{K}^{\pm 1}_{i}:=K^{\pm 1}_{i}K^{\mp 1}_{i+1}$
for $1\leq i\leq N-1$. The Hopf algebra $U_{q}\mathfrak{sl}_{N}$ also admits
an integral version $\mathcal{U}_{\mathbb{Z}}\mathfrak{sl}_{N}$ generated by
$e_{i},\;\;F^{(n)}_{i}\quad\text{and}\quad\mathcal{K}^{\pm 1}_{i}$
over $\mathbb{Z}[q,q^{-1}]$. The braiding is $\mathcal{R}=D^{\prime}\Theta$ with
$\Theta$ as for $\mathfrak{gl}_{N}$, but with a different diagonal part
$D^{\prime}=v^{\sum^{N-1}_{i=1}\frac{{\mathcal{H}}_{i}\otimes{\mathcal{H}}_{i}}{2}}\quad\text{where}\quad{\mathcal{H}}_{i}=H_{i}-H_{i+1}\
.$
The ribbon element is defined by (11) with
$r_{0}=K_{-2\rho}\prod^{N-1}_{i=1}v^{{-\mathcal{H}}_{i}^{2}/2}$. The pivotal
element $K_{-2\rho}$ does not change. Note that the $\Gamma$-invariant part of
$U_{q}\mathfrak{sl}_{N}$ generated by $e_{i}$, $F^{(n)}_{i}\mathcal{K}_{i}$
and $\mathcal{K}^{2}_{j}$ for $1\leq i,j\leq N-1$ has a smaller Cartan part
than its $\mathfrak{gl}_{N}$ analogue.
###### Example 3.2.
For $N=2$ the product $K_{1}K_{2}$ is central. By denoting
$\mathcal{K}=K_{1}K_{2}^{-1},E=E_{1},F=F_{1}$ we get the standard presentation
for $U_{q}(\mathfrak{sl}_{2})$:
$\mathcal{K}E=v^{2}E\mathcal{K},\;\mathcal{K}F=v^{-2}F\mathcal{K},\;[E,F]=\frac{\mathcal{K}-\mathcal{K}^{-1}}{v-v^{-1}}.$
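These relations can be checked on the two-dimensional fundamental representation, where $\mathcal{K}=\mathrm{diag}(v,v^{-1})$; a small added verification:

```python
import sympy as sp

v = sp.symbols('v')
K = sp.diag(v, 1 / v)  # calligraphic K on the 2-dimensional representation
E = sp.Matrix([[0, 1], [0, 0]])
F = sp.Matrix([[0, 0], [1, 0]])

assert sp.simplify(K * E - v**2 * E * K) == sp.zeros(2)
assert sp.simplify(K * F - F * K / v**2) == sp.zeros(2)
assert sp.simplify((E * F - F * E) - (K - K**-1) / (v - 1 / v)) == sp.zeros(2)
```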
### 3.5. Universal invariant
Lawrence, Reshetikhin, Ohtsuki and Kauffman constructed quantum group valued
universal link invariants. As it was already mentioned in the introduction,
the universal invariant of a link is defined by splitting a diagram of its
bottom tangle into elementary pieces and by associating $R$-matrices and
pivotal elements to them. For more details and references we recommend to
consult [16, Sec. 7.3]. However, we adopt here the convention from [19, Sec.
2.7] and write the contributions from left to right along the orientation of
each component.
## 4\. Ribbon structure on $\mathit{Rep}(\mathcal{U})$
The aim of this section is to compare the Reshetikhin-Turaev invariants of a
bottom tangle whose components are colored with $V(\lambda)$ and
$V(\lambda)\otimes L(\zeta)$. This will be later used to prove Theorem 1.1.
Let us denote by $\mathcal{R}_{\mathbb{Q}}$ the representation ring of
$\mathit{Rep}(\mathcal{U})$ over $\mathbb{Q}(v)$. Given an $l$-component link
$L$, the Reshetikhin–Turaev functor associated with the Lie algebra $\mathfrak{g}$
provides a $\mathbb{Q}(v)$-multilinear map
$\displaystyle
J_{L}:\mathcal{R}_{\mathbb{Q}}\times\dots\times\mathcal{R}_{\mathbb{Q}}$
$\displaystyle\to\mathbb{Q}(v)$ $\displaystyle(\mu_{1},\dots,\mu_{l})$
$\displaystyle\mapsto\bigotimes_{i}\mathrm{Tr}^{V(\mu_{i})}_{q}\left(J_{L}(\mathfrak{g};q)\right)=:J_{L}(\mathfrak{g};\mu_{1},...,\mu_{l})$
normalized to $\prod_{i}\dim_{q}(V(\mu_{i}))$ for the $0$-framed
$(\mu_{1},\dots,\mu_{l})$-colored unlink. In cases when $\mathfrak{g}$ is
fixed in the context, we will remove it from the notation for simplicity.
Note that in the case of a knot, we have
$J_{K}(\lambda)=\dim_{q}(V(\lambda))J_{K}(V(\lambda),q)$ where the last
invariant is the colored Jones polynomial used in the Introduction and normalized
to be 1 for the unknot.
The universal $R$-matrix defines a braiding between the representations
$V(\lambda)$. We can extend this braiding to $\mathit{Rep}(\mathcal{U})$ as
follows. Clearly, $L(\zeta)\otimes L(\zeta^{\prime})\simeq
L(\zeta\zeta^{\prime})$ and we define the braiding between $L(\zeta)$ and
$L(\zeta^{\prime})$ to be trivial. Let $V$ be a finite-dimensional
representation of $\mathcal{U}$ where the eigenvalues of $K_{i}$ are integral
powers of $v$. Given $\zeta\in\Gamma$ we consider a $\mathbb{C}$-linear map
$T_{V}(\zeta):V\to V$ which acts by $\prod\zeta_{i}^{a_{i}}$ on the weight
subspace of $V$ where $K_{i}$ acts as $v^{a_{i}}$.
###### Lemma 4.1.
The maps
$c_{\zeta,V}:=\mathrm{swap}\circ(\mathrm{Id}\otimes
T_{V}(\zeta)):L(\zeta)\otimes V\to V\otimes L(\zeta)$
with inverses
$c_{V,\zeta}:=\mathrm{swap}\circ(T_{V}(\zeta)\otimes\mathrm{Id}):V\otimes
L(\zeta)\to L(\zeta)\otimes V$
define a braiding on $\mathit{Rep}(\mathcal{U})$.
###### Proof.
First, let us check that $\text{swap}\circ(\mathrm{Id}\otimes T_{V}(\zeta))$
intertwines the actions of $\mathcal{U}$ on both sides. Indeed, let $v\in V$
be a vector with weight $(v^{a_{1}},\ldots,v^{a_{N}})$, then $E_{i}v$ has
weight $(v^{a_{1}},\ldots,v^{a_{i}+1},v^{a_{i+1}-1},\ldots,v^{a_{N}})$ while
$F_{i}v$ has weight
$(v^{a_{1}},\ldots,v^{a_{i}-1},v^{a_{i+1}+1},\ldots,v^{a_{N}})$.
Let $\bullet$ denote the basis vector in $L(\zeta)$, then
$c_{\zeta,V}E_{i}(\bullet\otimes
v)=c_{\zeta,V}\left(\zeta_{i}\zeta_{i+1}\bullet\otimes
E_{i}(v)\right)=\zeta_{1}^{a_{1}}\cdots\zeta_{i}^{a_{i}}\zeta_{i+1}^{a_{i+1}}\cdots\zeta^{a_{N}}_{N}E_{i}(v)\otimes\bullet,\\\
c_{\zeta,V}F_{i}(\bullet\otimes v)=c_{\zeta,V}\left(\bullet\otimes
F_{i}(v)\right)=\zeta_{1}^{a_{1}}\cdots\zeta_{i}^{a_{i}-1}\zeta_{i+1}^{a_{i+1}+1}\cdots\zeta^{a_{N}}_{N}F_{i}(v)\otimes\bullet,\\\
c_{\zeta,V}K_{i}(\bullet\otimes v)=c_{\zeta,V}\left(\zeta_{i}\bullet\otimes
K_{i}(v)\right)=\zeta_{1}^{a_{1}}\cdots\zeta_{i}^{a_{i}+1}\cdots\zeta^{a_{N}}_{N}K_{i}(v)\otimes\bullet,\\\
$
while
$E_{i}c_{\zeta,V}(\bullet\otimes
v)=E_{i}(\zeta_{1}^{a_{1}}\cdots\zeta_{N}^{a_{N}}v\otimes\bullet)=\zeta_{1}^{a_{1}}\cdots\zeta^{a_{N}}_{N}E_{i}(v)\otimes\bullet,\\\
F_{i}c_{\zeta,V}(\bullet\otimes
v)=F_{i}(\zeta_{1}^{a_{1}}\cdots\zeta_{N}^{a_{N}}v\otimes\bullet)=\zeta_{1}^{a_{1}}\cdots\zeta_{i}^{a_{i}-1}\zeta_{i+1}^{a_{i+1}+1}\cdots\zeta^{a_{N}}_{N}F_{i}(v)\otimes\bullet,\\\
K_{i}c_{\zeta,V}(\bullet\otimes
v)=K_{i}(\zeta_{1}^{a_{1}}\cdots\zeta_{N}^{a_{N}}v\otimes\bullet)=\zeta_{1}^{a_{1}}\cdots\zeta_{i}^{a_{i}+1}\cdots\zeta^{a_{N}}_{N}K_{i}(v)\otimes\bullet.\\\
$
Next, we observe that
$T_{V}(\zeta)T_{V}(\zeta^{\prime})=T_{V}(\zeta\zeta^{\prime})$ and
$T_{U\otimes V}(\zeta)=T_{U}(\zeta)\otimes T_{V}(\zeta)$, so $c_{\zeta,V}$
indeed defines a braiding. Even more concretely, we get the braiding as the
composition
(12) $c_{L(\zeta)\otimes V,L(\zeta^{\prime})\otimes U}:L(\zeta)\otimes
V\otimes L(\zeta^{\prime})\otimes
U\xrightarrow{c_{V,\zeta^{\prime}}}L(\zeta)\otimes L(\zeta^{\prime})\otimes
V\otimes U=L(\zeta^{\prime})\otimes L(\zeta)\otimes V\otimes
U\xrightarrow{c_{V,U}}\\\ L(\zeta^{\prime})\otimes L(\zeta)\otimes U\otimes
V\xrightarrow{c_{\zeta,U}}L(\zeta^{\prime})\otimes U\otimes L(\zeta)\otimes
V.$
∎
The representations $L(\zeta)$ are self-dual, and it is easy to see that the
braiding $c_{\zeta,V}$ is compatible with changing $V$ to $V^{*}$. Therefore,
$\mathit{Rep}(\mathcal{U})$ with objects $L(\zeta)\otimes V$ forms a pivotal
braided monoidal category.
The quantum dimension of $L(\zeta)$ equals the trace of the action of the
pivotal element, which is $(\prod_{i}\zeta_{i})^{N+1}$. The twist coefficient
$\theta_{L(\zeta)}$ is defined as the action of the ribbon element on
$L(\zeta)$, and is given by $(\prod_{i}\zeta_{i})^{N}$.
###### Lemma 4.2.
$\mathit{Rep}(\mathcal{U})$ is a ribbon category with twist
$\theta_{L(\zeta)\otimes V}=\theta_{L(\zeta)}\theta_{V}$.
###### Proof.
By definition $\theta_{L(\zeta)\otimes
V}=c_{\zeta,V}\theta_{L(\zeta)}\theta_{V}c_{V,\zeta}=\theta_{L(\zeta)}\theta_{V}$.
∎
### 4.1. Braiding in $\mathit{Rep}(U_{q}\mathfrak{sl}_{N})$
In this section, we study the action of $\Gamma$ and the corresponding
braiding for $U_{q}\mathfrak{sl}_{N}$, starting from $N=2$. Similarly to the
previous section, $U_{q}\mathfrak{sl}_{2}$ has a one dimensional
representation $L(-1)$ where $E$ and $F$ act by 0 and $\mathcal{K}$ acts by
$-1$. The action of $U_{q}\mathfrak{sl}_{2}$ on $L(-1)\otimes V$ is equivalent
to $\mathbb{Z}_{2}$-twisted action on $V$ where $\mathbb{Z}_{2}$ scales $E$ by
1 and $F,\mathcal{K}$ by $-1$.
One can attempt to define a braiding for $U_{q}\mathfrak{sl}_{2}$. Since $E$
and $F$ shift the weights by $2$, it is easy to see that the analogue of
$T_{V}$ should act by $(\sqrt{-1})^{a}$ on a subspace with weight $v^{a}$, and
it does not square to identity. Nevertheless, it squares to $\pm\text{id}$ on
each irreducible representation. This means that braiding relations on
$\mathit{Rep}(U_{q}\mathfrak{sl}_{2})$ hold up to sign.
To pin down this sign, we define the sign automorphism $\Sigma_{V}$ which acts
by $(-1)^{a}$ on a subspace with weight $v^{a}$. Since $E,F$ shift the weight
by $\pm 2$, $\Sigma_{V}$ commutes with the action of
$U_{q}\mathfrak{sl}_{2}$ on $V$. The operator $\Sigma_{V}$ acts on the
irreducible representation $V(n)$ by a scalar $(-1)^{n}$. Also, it is easy to
see that $\Sigma_{V\oplus W}=\Sigma_{V}\oplus\Sigma_{W}$ and $\Sigma_{V\otimes
W}=\Sigma_{V}\otimes\Sigma_{W}$.
###### Lemma 4.3.
The operators $T_{V}$ and $\Sigma_{V}$ satisfy the following properties:
* (a)
We have
$T_{V}^{2}=\Sigma_{V},\
c_{L(-1),V}=c^{-1}_{L(-1),V}(1\otimes\Sigma_{V})=(\Sigma_{V}\otimes
1)c^{-1}_{L(-1),V}$
* (b)
Let $c_{V,W}:V\otimes W\to W\otimes V$ be the braiding, then
$c_{V,W}(\Sigma_{V}\otimes 1)=(1\otimes\Sigma_{V})c_{V,W},\
c_{V,W}(1\otimes\Sigma_{W})=(\Sigma_{W}\otimes 1)c_{V,W}$
* (c)
We have $c_{L(-1),V\otimes W}=c_{L(-1),V}\circ c_{L(-1),W}.$
* (d)
The braiding with $L(-1)$ satisfies Yang-Baxter equation, that is, the
following diagram commutes:
${L(-1)\otimes V\otimes W}$${V\otimes L(-1)\otimes W}$${V\otimes W\otimes
L(-1)}$${L(-1)\otimes W\otimes V}$${W\otimes L(-1)\otimes V}$${W\otimes
V\otimes
L(-1)}$$\scriptstyle{c_{L(-1),V}}$$\scriptstyle{c_{V,W}}$$\scriptstyle{c_{L(-1),W}}$$\scriptstyle{c_{V,W}}$$\scriptstyle{c_{L(-1),W}}$$\scriptstyle{c_{L(-1),V}}$
###### Proof.
Part (a) is clear. To prove (b), observe that the action of
$U_{q}\mathfrak{sl}_{2}\otimes U_{q}\mathfrak{sl}_{2}$ on $V\otimes W$
commutes with both $\Sigma_{V}\otimes 1$ and $1\otimes\Sigma_{V}$, and the
$R$-matrix is an element of the completion of $U_{q}\mathfrak{sl}_{2}\otimes
U_{q}\mathfrak{sl}_{2}$.
Given a pair of vectors $u\in V,w\in W$ such that $Ku=v^{i}u$ and $Kw=v^{j}w$,
we get $K(u\otimes w)=v^{i+j}u\otimes w$, so $T_{V\otimes W}=T_{V}\otimes
T_{W}$. Since $c_{L(-1),V}=\mathrm{swap}\circ(\mathrm{Id}\otimes T_{V})$, we
get the desired relation. Finally, (d) follows from (c). ∎
We can generalize the above results to representations of
$U_{q}\mathfrak{sl}_{N}$ as follows. For $\zeta\in\mathbb{Z}_{2}^{N-1}$ there
is a one-dimensional representation $L(\zeta)$ of $U_{q}(\mathfrak{sl}_{N})$
where $E_{i},F_{i}$ act by 0 and $\mathcal{K}_{i}=K_{i}K_{i+1}^{-1}$ act by
$\zeta_{i}$ ($1\leq i\leq N-1$). Given a representation $V$ where all weights
of $\mathcal{K}_{i}$ are integral powers of $v$, we can define an operator
$T_{\zeta,V}:V\to V$ which acts by $\zeta^{A^{-1}\mathbf{a}}$ on a subspace
where $\mathcal{K}_{i}$ acts by $v^{a_{i}}$. Here $A$ is the Cartan matrix for
$\mathfrak{sl}_{N}$ given by
(13) $A=\left(\begin{matrix}2&-1&0&\ldots&0\\\ -1&2&-1&\ldots&0\\\
0&-1&2&\ldots&0\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\ 0&0&0&\ldots&2\\\
\end{matrix}\right)$
and $\mathbf{a}=(a_{1},\ldots,a_{N-1})$. Note that $\det(A)=N$, so $A^{-1}$
has rational entries with denominator $N$ and one needs to choose an $N$-th
root of $(-1)$ to define $\zeta^{A^{-1}\mathbf{a}}$. Define
$\Sigma_{\zeta,V}=T^{2}_{\zeta,V}$.
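As a quick added check, $\det(A)=N$ and $N\cdot A^{-1}$ has integer entries, so the denominators of $A^{-1}$ indeed divide $N$:

```python
import sympy as sp

def cartan_slN(N):
    # Cartan matrix of sl_N, of size (N-1) x (N-1).
    n = N - 1
    return sp.Matrix(n, n,
                     lambda i, j: 2 if i == j else (-1 if abs(i - j) == 1 else 0))

for N in range(2, 7):
    A = cartan_slN(N)
    assert A.det() == N
    assert all(x.is_integer for x in N * A.inv())
```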
###### Lemma 4.4.
The operators $T_{\zeta,V}$ and $\Sigma_{\zeta,V}$ satisfy the following
properties:
* (a)
$T_{\zeta,V}E_{i}=\zeta_{i}E_{i}T_{\zeta,V},T_{\zeta,V}F_{i}=\zeta_{i}F_{i}T_{\zeta,V}$
* (b)
$\Sigma_{\zeta,V}$ commutes with the action of $U_{q}\mathfrak{sl}_{N}$ on $V$
* (c)
The map $c_{L(\zeta),V}=\mathrm{swap}\circ(\mathrm{Id}\otimes
T_{\zeta,V}):L(\zeta)\otimes V\to V\otimes L(\zeta)$ is a morphism of
$U_{q}\mathfrak{sl}_{N}$-representations
* (d)
The maps $T_{\zeta,V}$ and $\Sigma_{\zeta,V}$ satisfy all equations in Lemma
4.3 with $L(-1)$ changed to $L(\zeta)$.
###### Proof.
(a) The operator $F_{i}$ changes the weight
$\mathbf{a}=(a_{1},\ldots,a_{N-1})$ by $Ae_{i}$, so if
$\mathcal{K}_{i}v=v^{a_{i}}v$ then
$T_{\zeta,V}F_{i}(v)=\zeta^{A^{-1}(\mathbf{a}+Ae_{i})}F_{i}v=\zeta^{A^{-1}\mathbf{a}+e_{i}}F_{i}v=\zeta_{i}F_{i}T_{\zeta,V}(v).$
The proof for $E_{i}$ is similar. Part (b) immediately follows from (a).
For (c), we observe that the action of $E_{i}$ on $L(\zeta)\otimes V$ is the
same as the action on $V$, while the actions of $F_{i},\mathcal{K}_{i}$ are
twisted by $\zeta_{i}$. On the other hand, the action of $F_{i}$ on $V\otimes
L(\zeta)$ is the same as the action on $V$, while the actions of
$E_{i},\mathcal{K}_{i}$ are twisted by $\zeta_{i}$. Therefore by (a) the
operator $c_{L(\zeta),V}$ intertwines the actions of $U_{q}\mathfrak{sl}_{N}$
on $L(\zeta)\otimes V$ and $V\otimes L(\zeta)$.
Finally, the proof of the rest of Lemma 4.3 extends to
$U_{q}\mathfrak{sl}_{N}$ verbatim. ∎
###### Remark 4.5.
The above construction of $T_{\zeta,V}$ and $\Sigma_{\zeta,V}$ can be extended
to an arbitrary semisimple Lie algebra with Cartan matrix $A$. The action of
$\Sigma_{\zeta,V}$ can be interpreted in terms of projection of the weight
lattice to its quotient by the root lattice.
We draw a tangle colored by a representation $V=V(\lambda)$ using solid lines,
and a tangle colored by $L(\zeta)$ by dotted lines. If a component is colored
by $L(\zeta)\otimes V$, we draw a dotted line on the left of a solid line and
parallel to it. The crossings between solid and dotted lines correspond to
$c^{\pm}_{L(\zeta),V}$ depicted in Figure 3. Note that unlike the
$\mathfrak{gl}_{N}$ case, $c_{L(\zeta),V}$ does not square to the identity and we
have to distinguish under- and over-crossings between solid and dotted lines.
This allows us to define Reshetikhin-Turaev invariants for framed tangles
colored by representations of $U_{q}\mathfrak{sl}_{N}$ of the form
$L(\zeta)\otimes V(\lambda)$.
Using the notations as in Figure 3, we can visualize the statement of Lemma
4.4 in Figure 4.
Figure 3. The operators $c_{L(\zeta),V}$, $c^{-1}_{L(\zeta),V}$ and $\Sigma_{\zeta}$.
Figure 4. Diagrammatics for Lemma 4.4 (parts (a), (b) and (d)).
###### Theorem 4.6.
(a) Let $L$ be an algebraically split $0$-framed link with $\ell$ components.
Then for arbitrary partitions $\lambda_{1},\ldots,\lambda_{\ell}$ and
$\zeta_{1},\ldots,\zeta_{\ell}\in\Gamma$ the following identity of
Reshetikhin–Turaev invariants holds:
(14) $J_{L}\left({\mathfrak{sl}_{N}};V(\lambda_{1})\otimes
L(\zeta_{1}),\ldots,V(\lambda_{\ell})\otimes L(\zeta_{\ell})\right)=\\\
J_{L}\left({\mathfrak{sl}_{N}};V(\lambda_{1}),\ldots,V(\lambda_{\ell})\right)\cdot\dim_{q}L(\zeta_{1})\cdots\dim_{q}L(\zeta_{\ell}),$
where $\dim_{q}L(\zeta_{i})=\mathrm{Tr}^{L(\zeta_{i})}_{q}(1)=\pm 1$.
(b) Let $L$ be an arbitrary link with evenly framed components, if $N$ is odd.
Then (14) holds for $\mathfrak{gl}_{N}$ Reshetikhin–Turaev invariants.
###### Proof.
(a) We use the results of Lemmas 4.3 and 4.4 and the above diagrammatic
notation. By Lemma 4.3(a), we can change crossings between dotted and solid
lines at the cost of placing $\Sigma_{\zeta}$ on solid lines. By doing this
iteratively, we can make all dotted lines lie above solid lines. At this
stage, each solid component of $L$ acquires several copies of $\Sigma_{\zeta}$
and $\Sigma_{\zeta}^{-1}$ at various places of the link diagram. The number of
these copies (with signs) equals the linking number between this component and
the dotted part which is even by our assumption. By Lemma 4.3(b) we can
combine all these copies of $\Sigma_{\zeta}$ together and cancel out. Finally,
using Lemma 4.3(d), we can separate the dotted and solid links. By changing
the crossings in the dotted link, we transform it to the $0$-framed unlink.
Therefore the invariant of the solid link equals
$J_{L}\left({\mathfrak{sl}_{N}};V(\lambda_{1}),\ldots,V(\lambda_{\ell})\right)$
while the invariant of the dotted link equals
$\dim_{q}L(\zeta_{1})\cdots\dim_{q}L(\zeta_{\ell})$.
The proof of (b) is similar, except that $\Sigma_{V}$ is trivial for all $V$.
As before we can unknot the dotted components. Now the ribbon element acts on
$L(\zeta)$ by $\theta_{L(\zeta)}=(\prod_{i}\zeta_{i})^{N}$, and hence any even
number of them (any number, if $N$ is odd) acts by $1$. The result follows.
∎
## 5\. Center of $\mathcal{U}$
Let $\mathcal{Z}$ be the center of $\mathcal{U}_{\mathbb{Z}}$. In this section
we recall the main facts known about $\mathcal{Z}$.
### 5.1. Harish-Chandra isomorphism
Let $(\mathcal{U}^{0}_{\mathbb{Z}})^{S_{N}}:=\mathbb{Z}[v,v^{-1}][{K^{\pm
1}_{1}},\ldots,K_{N}^{\pm 1}]^{S_{N}}$ be the Cartan part of
$\mathcal{U}_{\mathbb{Z}}$ invariant under the Weyl group action. After a
multiplication by an appropriate power of the central element
$K:=\prod^{N}_{i=1}K_{i}$, each element of $(\mathcal{U}^{0})^{S_{N}}$ can be
viewed as a symmetric function in $N$ variables. This allows us to identify
$(\mathcal{U}^{0}_{\mathbb{Z}})^{S_{N}}$ with the ring of symmetric functions
divided by powers of the elementary symmetric polynomial $e_{N}=K$. In the
classical case, this ring can be identified with the center using the Harish-
Chandra isomorphism. After quantization, the image of the Harish-Chandra
homomorphism belongs to
$\mathit{Sym}=\mathbb{Z}[v^{\pm 1},e_{N}^{-1}][x_{1},\dots,x_{N}]^{S_{N}}$
where $x_{i}=K_{i}^{2}$ (compare e.g. [20, Ch. 6]). In this section we will
furthermore identify $\mathit{Sym}$ with the Grothendieck ring $\mathcal{R}$
of $\mathit{Rep}(\mathcal{U})$ with coefficients $\mathbb{Z}[v^{\pm 1}]$.
First, the character map
$\mathit{ch}:\mathcal{R}\to\mathit{Sym}$
sends a representation $U$ to its character $\mathit{ch}(U)$. Clearly,
$\mathit{ch}(U\oplus V)=\mathit{ch}(U)+\mathit{ch}(V)$ and
$\mathit{ch}(U\otimes V)=\mathit{ch}(U)\mathit{ch}(V)$, so $\mathit{ch}$ is a
ring homomorphism. The character of $V(\lambda)$ equals the Schur function
$s_{\lambda}(x_{1},\dots,x_{N})$, while the character of $L(\zeta)$ equals
$\zeta_{1}\cdots\zeta_{N}$.
The Harish-Chandra map
$\mathit{hc}:\mathcal{Z}\to\mathit{Sym}$
is defined as follows. A central element $\phi$ of $\mathcal{U}$
acts on the Verma module $\Delta(\lambda)$ by some scalar
$\phi|_{\Delta(\lambda)}$. We define $\mathit{hc}(\phi)$ to be the polynomial
in $\mathit{Sym}$ defined by the condition
$\mathit{hc}(\phi)(q^{\rho+\lambda})=\phi|_{\Delta(\lambda)}\quad\text{for
all}\quad\lambda$
where $\rho=\left(\frac{N-1}{2},\frac{N-3}{2},\ldots,\frac{1-N}{2}\right)$.
Note that the product $\phi\phi^{\prime}$ acts on $\Delta(\lambda)$ by the
product of the corresponding scalars, so $\mathit{hc}$ is also a ring
homomorphism. It is known to be an isomorphism (see e.g. [20, Ch. 6]).
Finally, the map $\xi:\mathcal{R}\to\mathcal{Z}$ is defined by
$\xi=\mathit{hc}^{-1}\circ\mathit{ch}$. It is a composition of two ring
homomorphisms and hence a ring homomorphism too. Hence, we get the commutative
diagram:
$\mathcal{R}\xrightarrow{\ \xi\ }\mathcal{Z}\xrightarrow{\ \mathit{hc}\ }\mathit{Sym},\qquad \mathit{hc}\circ\xi=\mathit{ch}:\mathcal{R}\to\mathit{Sym}.$
In Lemma 5.3 we will show that $\xi$ actually coincides with the Drinfeld map.
###### Example 5.1.
The central element $K=K_{1}\cdots K_{N}$ acts on $V(\lambda)$ by a scalar
$v^{\sum\lambda_{i}}$. Since $\sum\rho_{i}=0$, we get $\mathit{hc}(K_{1}\cdots
K_{N})=y_{1}\cdots y_{N}$.
###### Example 5.2.
The center of $U_{q}\mathfrak{sl}_{2}$ is generated by the Casimir element:
$C=(v-v^{-1})^{2}FE+v\mathcal{K}+v^{-1}\mathcal{K}^{-1}$
It acts on a representation $V_{m}$ by $v^{m+1}+v^{-m-1}$, so
$\mathit{hc}(C)=y+y^{-1}$ (note that $v^{\rho}=v$ in this case). On the other
hand, $\mathit{ch}(V_{1})=y+y^{-1}$, so $\xi(V_{1})=C$, where $V_{1}$ is the
2-dimensional representation.
Similarly, we can consider the corresponding central element in
$U_{q}\mathfrak{gl}_{2}$ defined by
$C_{\mathfrak{gl}_{2}}=(v-v^{-1})^{2}FE+vK_{1}K_{2}^{-1}+v^{-1}K_{1}^{-1}K_{2}.$
It acts on a representation $V(\lambda)$ by a scalar
$v^{1+\lambda_{1}-\lambda_{2}}+v^{-1-\lambda_{1}+\lambda_{2}}=\frac{y_{1}}{y_{2}}+\frac{y_{2}}{y_{1}},\
\quad y_{1}=v^{1/2+\lambda_{1}},y_{2}=v^{-1/2+\lambda_{2}},$
so
$\mathit{hc}(C_{\mathfrak{gl}_{2}})=\frac{y_{1}}{y_{2}}+\frac{y_{2}}{y_{1}}=\frac{y^{2}_{1}+y^{2}_{2}}{y_{1}y_{2}}=e^{-1}_{2}(y_{1},y_{2})(x_{1}+x_{2})$.
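The bookkeeping with fractional powers of $v$ above is easy to get wrong, so it can be checked mechanically. The following SymPy sketch (illustrative code with ad hoc symbol names) confirms that the scalar $v^{1+\lambda_{1}-\lambda_{2}}+v^{-1-\lambda_{1}+\lambda_{2}}$ equals $\frac{y_{1}}{y_{2}}+\frac{y_{2}}{y_{1}}$ under the substitution of Example 5.2:

```python
from sympy import symbols, simplify, Rational

v, l1, l2 = symbols('v lambda1 lambda2')
# substitution from Example 5.2: y1 = v^(1/2 + lambda1), y2 = v^(-1/2 + lambda2)
y1 = v**(Rational(1, 2) + l1)
y2 = v**(Rational(-1, 2) + l2)
scalar = v**(1 + l1 - l2) + v**(-1 - l1 + l2)  # action of C_{gl_2} on V(lambda)
print(simplify(scalar - (y1/y2 + y2/y1)))      # prints 0
```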
### 5.2. Hopf pairing
The Hopf pairing $\langle U,V\rangle$ of two representations
$U,V\in\mathcal{R}$ is defined as the Reshetikhin–Turaev invariant of the Hopf
link with components labeled by $U$ and $V$. This is a symmetric bilinear
pairing on $\mathcal{R}$. The map $\xi$ is related to the Hopf pairing as
follows:
###### Lemma 5.3.
The Hopf pairing on $\mathcal{R}$ can be computed as
$\langle U,V\rangle=\mathrm{Tr}_{q}^{U}(\xi(V)).$
###### Proof.
Consider the Drinfeld map $D$ [9] which sends a representation $V$ to a
central element corresponding to the universal invariant of the following
tangle:
(picture: the clasp tangle whose closed component is colored by $V$, defining $D(V)$)
By e.g. [14, eq. (20)] (see also [19, Proposition 8.19] and references
therein) the eigenvalue of $D(V)$ on the irreducible representation
$V(\lambda)$ equals $\mathit{ch}(V)(q^{\lambda+\rho})$, the character of $V$
evaluated at $q^{\lambda+\rho}$. By the definition of the Harish-Chandra map, this means that
$\mathit{hc}(D(V))=\mathit{ch}(V)$, and
$D(V)=\mathit{hc}^{-1}(ch(V))=\xi(V),$
so $\xi$ agrees with the Drinfeld map. Now $\langle
U,V\rangle=\mathrm{Tr}_{q}^{U}(D(V))=\mathrm{Tr}_{q}^{U}(\xi(V))$ or more
precisely,
$\langle
V(\lambda),V(\mu)\rangle=s_{\lambda}(q^{\mu+\rho})s_{\mu}(q^{\rho})\quad\text{where}\quad\dim_{q}V(\mu)=s_{\mu}(q^{\rho}).$
∎
Using the Drinfeld isomorphism $\xi$ we can extend the Hopf pairing to the
center by setting
$\langle
z_{1},z_{2}\rangle:=\langle\xi^{-1}(z_{1}),\xi^{-1}(z_{2})\rangle\quad\text{for
any}\quad z_{1},z_{2}\in\mathcal{Z}\ .$
## 6\. Cyclotomic completion and the universal invariant
The universal invariant of a link belongs a priori to a (completed) tensor
product of copies of $\mathcal{U}_{h}$, rather than $\mathcal{U}$ or
$\mathcal{U}_{\mathbb{Z}}$, due to the diagonal part of the $R$-matrix. The
aim of this section is to define a certain completion of
$\mathcal{U}_{\mathbb{Z}}$ and its tensor powers, such that the universal
$\mathfrak{gl}_{N}$ invariant of evenly framed links belongs to it. Since the
action of $\Gamma$ extends to the completion, this will allow us to speak
about $\Gamma$-invariance of $J_{L}(\mathfrak{gl}_{N};q)$.
### 6.1. Cyclotomic completion of $\mathcal{U}_{\mathbb{Z}}$
Given $n\in\mathbb{N}$, we define a family of two-sided ideals
$\mathcal{U}^{(n)}_{\mathbb{Z}}$ as the minimal filtration such that
$\mathcal{U}^{(n)}_{\mathbb{Z}}\mathcal{U}^{(m)}_{\mathbb{Z}}\subset\mathcal{U}^{(m+n)}_{\mathbb{Z}}$
and
$(q;q)_{n},\;\;e^{n}_{i},\;\;f_{n}(K^{2}_{j})\in\mathcal{U}^{(n)}_{\mathbb{Z}}$
for any $1\leq i\leq N-1$ and $1\leq j\leq N$ where $f_{n}(x)=(x;q)_{n}$. In
other words, $\mathcal{U}^{(n)}_{\mathbb{Z}}$ is the two-sided ideal generated
by the products
(15) $(q;q)_{a}\;e_{\mathbf{m}}\;f_{c_{1}}(K^{2}_{1})\cdots
f_{c_{N}}(K^{2}_{N}),\quad\text{with}\quad a+\sum_{i}m_{i}+\sum_{i}c_{i}=n\ .$
###### Lemma 6.1.
We have
$\Delta\left(f_{n}\left(K^{2}_{i}\right)\right)=\sum_{a=0}^{n}\binom{n}{a}_{q}f_{a}(K^{2}_{i})\otimes
K_{i}^{2a}f_{n-a}(K^{2}_{i}).$
###### Proof.
We prove the lemma by induction on $n$. For $n=0$ it is clear. The induction step
follows from the identities
$f_{n+1}(K^{2}_{i})=f_{n}(K^{2}_{i})(1-q^{n}K^{2}_{i})$
and
$\Delta(1-q^{n}K^{2}_{i})=1\otimes 1-q^{n}K^{2}_{i}\otimes
K^{2}_{i}=(1-q^{a}K^{2}_{i})\otimes
q^{n-a}K^{2}_{i}+1\otimes(1-q^{n-a}K^{2}_{i}).$
∎
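Since $K_{i}^{2}$ is group-like, on the commutative Cartan part the lemma reduces to the identity $f_{n}(xy)=\sum_{a=0}^{n}\binom{n}{a}_{q}f_{a}(x)\,y^{a}f_{n-a}(y)$ in commuting variables $x,y$ standing for $K_{i}^{2}\otimes 1$ and $1\otimes K_{i}^{2}$. A minimal SymPy sketch (helper names are ours) verifies this identity for small $n$:

```python
from sympy import symbols, simplify, expand, prod, Integer

q, x, y = symbols('q x y')

def f(n, t):   # f_n(t) = (t;q)_n
    return prod([1 - t * q**i for i in range(n)]) if n else Integer(1)

def qp(n):     # (q;q)_n
    return prod([1 - q**i for i in range(1, n + 1)]) if n else Integer(1)

def qbinom(n, a):   # Gaussian binomial coefficient
    return qp(n) / (qp(a) * qp(n - a))

# Lemma 6.1 with x = K_i^2 (x) 1 and y = 1 (x) K_i^2:
for n in range(5):
    rhs = sum(qbinom(n, a) * f(a, x) * y**a * f(n - a, y) for a in range(n + 1))
    assert simplify(expand(f(n, x * y) - rhs)) == 0
print("coproduct identity of Lemma 6.1 verified for n < 5")
```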
###### Proposition 6.2.
a) $\mathcal{U}^{(n)}_{\mathbb{Z}}$ is the left ideal generated by (15).
b) $\mathcal{U}^{(n)}_{\mathbb{Z}}$ form a Hopf algebra filtration, that is
$\Delta\,\mathcal{U}^{(n)}_{\mathbb{Z}}\subset\,\sum_{i+j=n}\,\mathcal{U}^{(i)}_{\mathbb{Z}}\otimes\mathcal{U}^{(j)}_{\mathbb{Z}}$.
c) Assume that $\lambda_{i}\leq k$ for all $i$. Given arbitrary $m$, there
exists $n=n(k,m)$ such that the elements of $\mathcal{U}^{(n)}_{\mathbb{Z}}$
act on the integral basis of $V(\lambda)$ by matrices divisible by
$(q;q)_{m}$.
###### Proof.
a) Observe that by Lemma 10.5 we get
$f_{n}(q^{s}K^{2}_{i})\in\mathcal{U}^{(n)}_{\mathbb{Z}}$ for all integer $s$.
Now the statement follows from the identities
$f_{n}(K^{2}_{i})F_{i}^{(s)}=F_{i}^{(s)}f_{n}(q^{-s}K^{2}_{i}),\
f_{n}(K^{2}_{i+1})F_{i}^{(s)}=F_{i}^{(s)}f_{n}(q^{s}K^{2}_{i+1})$
and
$f_{n}(K^{2}_{i})e^{s}_{i}=e^{s}_{i}f_{n}(q^{s}K^{2}_{i}),\
f_{n}(K^{2}_{i+1})e^{s}_{i}=e^{s}_{i}f_{n}(q^{-s}K^{2}_{i+1}).$
b) Follows from the identity
$\Delta(e_{j}^{m})=\sum_{i=0}^{m}\binom{m}{i}_{q}e_{j}^{m-i}\mathcal{K}^{i}\otimes
e_{j}^{i}$
and Lemma 6.1.
c) By (a), it is sufficient to check the statement for $e_{i}^{n}$ and
$f_{n}(K^{2}_{i})$. If $\lambda_{i}\leq k$, then for $n>k$ the element $e_{i}^{n}$
annihilates $V(\lambda)$, while $f_{n}(K^{2}_{i})$ acts on a vector with
weight $(v^{\lambda_{1}},\ldots,v^{\lambda_{N}})$ by
$f_{n}(q^{\lambda_{i}})=(q^{\lambda_{i}};q)_{n}$ which is divisible by
$(q;q)_{n}$. ∎
By Proposition 6.2(b), the filtration
$\mathcal{U}_{\mathbb{Z}}=\mathcal{U}^{(0)}_{\mathbb{Z}}\supset\mathcal{U}^{(1)}_{\mathbb{Z}}\supset\dots\mathcal{U}^{(n)}_{\mathbb{Z}}\supset\dots$
is a Hopf algebra filtration of $\mathcal{U}_{\mathbb{Z}}$ with respect to a
descending filtration of ideals $I_{n}=((q;q)_{n})$ in $\mathbb{Z}[v,v^{-1}]$
in the sense of [18, Sec. 4]. Hence, the completion
$\widehat{\mathcal{U}}:={\lim\limits_{\overleftarrow{\hskip 5.69054ptn\hskip
5.69054pt}}}\;\;\frac{\mathcal{U}_{\mathbb{Z}}}{\mathcal{U}^{(n)}_{\mathbb{Z}}}$
is a complete Hopf algebra over the ring
$\widehat{\mathbb{Z}[v]}:={\lim\limits_{\overleftarrow{\hskip 5.69054ptn\hskip
5.69054pt}}}\;\;\frac{\mathbb{Z}[v]}{((q;q)_{n})}.$
We refer to [18, Section 4] for details. Analogously, we define the
$\Gamma$-invariant subalgebra
$\widehat{\mathcal{U}^{\text{ev}}}:={\lim\limits_{\overleftarrow{\hskip
5.69054ptn\hskip
5.69054pt}}}\;\;\frac{\mathcal{U}^{\text{ev}}_{\mathbb{Z}}}{\mathcal{U}^{(n)}_{\mathbb{Z}}}$
as a complete Hopf algebra over the Habiro ring $\widehat{\mathbb{Z}[q]}$. Let
us now extend the completion to the tensor powers of
$\mathcal{U}_{\mathbb{Z}}$. For this we define the filtration for
$\mathcal{U}_{\mathbb{Z}}^{\otimes l}$ for $l\geq 1$ as follows
${\mathcal{F}}_{n}(\mathcal{U}_{\mathbb{Z}}^{\otimes
l})=\sum^{l}_{i=1}\mathcal{U}_{\mathbb{Z}}^{\otimes
i-1}\otimes\mathcal{U}^{(n)}_{\mathbb{Z}}\otimes\mathcal{U}_{\mathbb{Z}}^{\otimes
l-i}$
and the completed tensor product $\mathcal{U}_{\mathbb{Z}}^{\hat{\otimes}l}$
with respect to this filtration will be the image of the homomorphism
${\lim\limits_{\overleftarrow{\hskip 5.69054ptn\hskip
5.69054pt}}}\;\;\frac{\mathcal{U}_{\mathbb{Z}}^{\otimes
l}}{{\mathcal{F}}_{n}(\mathcal{U}_{\mathbb{Z}}^{\otimes
l})}\;\;\to\;\;\mathcal{U}_{h}^{{\otimes}l}$
where on the right hand side we use the $h$-adically completed tensor product.
### 6.2. Hopf pairing and universal invariants
Let us denote by $c\in\mathcal{U}_{h}{\otimes}\,\mathcal{U}_{h}$ the double
braiding or the universal invariant of the clasp tangle in Figure 1, given by
$c=(S\,\otimes\,\text{id})\,\mathcal{R}_{21}\mathcal{R}\ .$
The main point about this element is that it is dual to the Hopf pairing or
the quantum Killing form (compare [19, Sec. 4]). Hence, after writing
$c=\sum_{i}c(i)\otimes c^{\prime}(i)$ the Hopf pairing is defined by setting
(16) $\langle c(i),c^{\prime}(j)\rangle:=\delta_{ij}$
Restricting to the Cartan part this gives us (compare [19, Lemma 3.12])
(17) $\displaystyle D^{-2}=\prod^{N}_{i=1}q^{-H_{i}\otimes
H_{i}}=\prod^{N}_{i=1}\sum_{n_{i}}(-1)^{n_{i}}\frac{h^{n_{i}}}{n_{i}!}H_{i}^{n_{i}}\,\otimes\,H_{i}^{n_{i}}$
and hence, $\langle
H_{i}^{n},H_{j}^{m}\rangle=\delta_{ij}\delta_{nm}(-1)^{n}\frac{n!}{h^{n}}$. We
deduce that $\langle K_{i}^{2},K_{i}^{2}\rangle=q^{-1}$ or, more generally,
$\langle K_{i}^{2a},K_{j}^{2b}\rangle=q^{-ab\,\delta_{ij}}$
defines the Hopf pairing on the $\Gamma$-invariant part of the Cartan. In
Section 10 we construct another basis for the Cartan given by
$\prod^{N}_{i=1}f_{n_{i}}(K^{2}_{i})$ such that $\langle
f_{n},f_{m}\rangle=\delta_{nm}(-1)^{n}q^{-n}(q;q)_{n}$. In this new basis, we
can rewrite the Cartan part of the clasp element as follows:
(18)
$D^{-2}=\sum_{\mathbf{n}\in\mathbb{N}^{N}}\prod^{N}_{i=1}\frac{(-1)^{n_{i}}q^{n_{i}}}{(q;q)_{n_{i}}}f_{n_{i}}(K^{2}_{i})\otimes
f_{n_{i}}(K^{2}_{i})$
For $\mathfrak{sl}_{N}$ similar computations will give
$(D^{\prime})^{-2}=\sum_{\mathbf{n}\in\mathbb{N}^{N-1}}\prod^{N-1}_{i=1}\frac{(-1)^{n_{i}}q^{n_{i}}}{(q;q)_{n_{i}}}f_{n_{i}}(\mathcal{K}^{2}_{i})\otimes
f_{n_{i}}(\mathcal{K}^{2}_{i})\ $
(compare Section B.1 in [19]).
Let us denote by
$\text{Inv}\,(\mathcal{U})=\\{u\in\mathcal{U}\,|\,x\vartriangleright
u=\epsilon(x)u\quad\forall x\in\mathcal{U}\\}$
the invariant part of $\mathcal{U}$ under the adjoint action
$x\vartriangleright u:=x_{(1)}uS(x_{(2)})$ in Sweedler notation. The main
advantage of using bottom tangles in the definition of
$J_{L}(\mathfrak{gl}_{N};q)$ is that in this case
$J_{L}(\mathfrak{gl}_{N};q)\in\text{Inv}\,(\mathcal{U})$ (compare [15,
Sec.4.3]). As a corollary, we get the following:
###### Proposition 6.3.
Given an $l$-component evenly framed link $L$, the universal invariant
$J_{L}(\mathfrak{gl}_{N};q)$ is a well defined element of $\text{\rm
Inv}\,\left(\widehat{\mathcal{U}}^{\hat{\otimes}l}\right)$.
###### Proof.
By definition, $J_{L}$ is obtained by multiplying together elementary pieces,
such as $F_{\mathbf{n}}$, $e_{\mathbf{n}}$, $K^{\pm 1}_{2\rho}$, $D^{\pm 1}$,
and by then taking a sum over all indices. The linking between different
components and the framing produce powers of $D^{\pm 2}$ that we can
decompose using the basis elements $f_{n}(K^{2}_{i})$ of the completion by
(18). Note that we can collect all diagonal contributions of each component by
using formulas like
$D(E_{i}\otimes 1)D^{-1}=E_{i}\otimes\mathcal{K}_{i}\quad\text{and}\quad
D(1\otimes F_{j})D^{-1}=\mathcal{K}^{-1}_{j}\otimes F_{j}\ .$
Since framing is assumed to be even, we will have an even number of $D$-parts.
Hence using (18) and the explicit form of the quasi $R$-matrix $\Theta$, we
get the claim. ∎
###### Remark 6.4.
For $\mathfrak{sl}_{N}$ we can build the same completion after replacing
$K_{i}$ with $\mathcal{K}_{i}$. Then the arguments in the proof of Proposition
6.3 show that for any algebraically split link the universal
invariant belongs to this completion.
Proof of Theorem 1.1.
Using Proposition 6.3 and the remark above, we can define the action of $\Gamma$
on each component of $J_{L}(\mathfrak{g};q)$ separately. We will denote by
$J_{L}^{\zeta_{1},\ldots,\zeta_{\ell}}(\mathfrak{g};q)$ the result of this
action. Then we have
$J_{L}\left(V(\lambda_{1})\otimes L(\zeta_{1}),\ldots,V(\lambda_{\ell})\otimes
L(\zeta_{\ell})\right)=$
$\bigotimes^{\ell}_{i=1}\mathrm{Tr}^{V(\lambda_{i})\otimes
L(\zeta_{i})}_{q}\left(J_{L}(\mathfrak{g};q)\right)=\bigotimes^{\ell}_{i=1}\mathrm{Tr}^{V(\lambda_{i})}_{q}\left(J_{L}^{\zeta_{1},\ldots,\zeta_{\ell}}(\mathfrak{g};q)\right)\cdot\dim_{q}L(\zeta_{1})\cdots\dim_{q}L(\zeta_{\ell}).$
The second equality follows from Lemma 3.1. By Theorem 4.6 we conclude that
$J_{L}^{\zeta_{1},\ldots,\zeta_{\ell}}\left(\lambda_{1},\ldots,\lambda_{\ell}\right)=J_{L}\left(\lambda_{1},\ldots,\lambda_{\ell}\right)$
for all $\lambda_{1},\ldots,\lambda_{\ell}$ under the assumptions of Theorem
1.1, therefore
$J_{L}(\mathfrak{g};q)=J_{L}^{\zeta_{1},\ldots,\zeta_{\ell}}(\mathfrak{g};q)$
and hence, $J_{L}(\mathfrak{g};q)$ is $\Gamma$-invariant under the same
assumptions.
$\hfill\Box$
###### Corollary 6.5.
For any $\ell$-component evenly framed link $L$, $J_{L}(\mathfrak{gl}_{N};q)$
belongs to the $\Gamma$-invariant part of $\text{\rm
Inv}\left(\widehat{\mathcal{U}}^{\hat{\otimes}\ell}\right)$. Moreover, for
every $0$-framed algebraically split link $L$,
$J_{L}(\mathfrak{gl}_{N};q)=J_{L}(\mathfrak{sl}_{N};q)\ .$
###### Proof.
The first statement is a direct consequence of Theorem 1.1. The second one
follows from the fact that the only difference between the definitions of the two
invariants lies in the diagonal part of the $R$-matrix, which does not contribute
since the linking matrix vanishes and the rules for moving $D$ and
$D^{\prime}$ along a component of the link coincide. ∎
### 6.3. Twist forms
Let us denote by $\widehat{\mathcal{Z}}$ the center of
$\widehat{\mathcal{U}}$. In what follows, we will be particularly interested
in the following twist forms
${\mathcal{T}}_{\pm}:\widehat{\mathcal{Z}}\to\widehat{\mathbb{Z}[v]}\quad\text{
given by}\quad{\mathcal{T}}_{\pm}(z):=\langle r^{\pm 1},z\rangle,$
the Hopf pairing with the ribbon element. On the $\Gamma$-invariant Cartan
part they are easy to compute, given the Hopf pairing between the generators
$H_{i}$ in Section 6.2. We have
(19) ${\mathcal{T}}_{\pm}(K_{2\mathbf{a}})=\langle r^{\pm
1}_{0},K_{2\mathbf{a}}\rangle=v^{\pm(\mathbf{a},2\rho-\mathbf{a})}\in\mathbb{Z}[v,v^{-1}]\
$
for any $\mathbf{a}\in\mathbb{Z}^{N}$. Now equation (16) allows us to extend the
twist forms to $\widehat{\mathcal{U}}^{\text{ev}}$ as follows:
${\mathcal{T}}_{\pm}(F_{\mathbf{m}}\mathcal{K}_{\mathbf{m}}K_{2\mathbf{a}}e_{\mathbf{n}})=\delta_{\mathbf{m},\mathbf{n}}q^{(\rho,\sum_{i}n_{i}\alpha_{i})}v^{\pm(\mathbf{a},2\rho-\mathbf{a})}\in\mathbb{Z}[v,v^{-1}]\
$
where $\alpha_{i}=e_{i}-e_{i+1}$ are the simple roots. Observe that after
restriction to $\mathcal{U}_{q}^{\text{ev}}\mathfrak{sl}_{N}$, i.e. after replacing
$K_{2\mathbf{a}}$ with $\mathcal{K}_{2\mathbf{b}}$ in the above formula, the
result belongs to $\mathbb{Z}[q,q^{-1}]$ and coincides with [19, eq. (102)] for
any $\mathbf{b}\in\mathbb{Z}^{N-1}$.
## 7\. Habiro’s basis for $\mathcal{Z}(U_{q}\mathfrak{sl}_{2})$
In this section we summarize Habiro's results for $\mathfrak{sl}_{2}$ in a
way suitable for our generalization.
Habiro [15] defined a remarkable family of central elements in
$\mathcal{Z}(U_{q}\mathfrak{sl}_{2})$:
(20)
$\sigma_{m}:=\prod^{m}_{i=1}\left(C^{2}-(v^{i}+v^{-i})^{2}\right)=\prod_{i=1}^{m}(C-v^{i}-v^{-i})(C+v^{i}+v^{-i}).$
Since $C$ acts on the $(j+1)$-dimensional representation $V_{j}$ by a scalar
$v^{j+1}+v^{-j-1}$, the polynomial $\sigma_{m}$ is completely characterized by
the following properties:
* (a)
(Parity) $\sigma_{m}$ is $\Gamma=\mathbb{Z}_{2}$-invariant.
* (b)
(Vanishing) $\sigma_{m}$ annihilates the representations $V_{j}$ for $j<m$.
* (c)
(Normalization) $\sigma_{m}$ acts on the representation $V_{m}$ by a scalar
(21) $\prod^{m}_{i=1}\left((v^{m+1}+v^{-m-1})^{2}-(v^{i}+v^{-i})^{2}\right)\
.$
Note that parity implies that $\sigma_{m}$ also annihilates the
representations $L(-1)\otimes V_{j}$ for $j<m$. By using the Harish-Chandra
isomorphism, we can alternatively consider the polynomials
$T_{m}(y):=\mathit{hc}(\sigma_{m}):=\prod^{m}_{i=1}(yv^{i}-y^{-1}v^{-i})(yv^{-i}-y^{-1}v^{i})=(-1)^{m}\prod^{m}_{i=1}q^{-i}(1-y^{2}q^{i})(1-y^{-2}q^{i})$
which are characterized by the following properties:
* (a)
(Parity) $T_{m}$ is $\mathbb{Z}_{2}$-invariant, that is, $T_{m}(-y)=T_{m}(y)$
* (b)
(Vanishing) $T_{m}(\pm v^{j+1})=0$ for $j<m$
* (c)
(Normalization) $T_{m}(v^{m+1})$ is given in (21).
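Both closed forms of $T_{m}$ and the properties above are finite identities in $v$, so they can be verified mechanically; the following SymPy sketch (ours, with $q=v^{2}$) checks them for a small $m$:

```python
from sympy import symbols, simplify, expand, prod

v, y = symbols('v y')
q = v**2

def T(m):       # T_m(y) = prod_i (y v^i - y^{-1} v^{-i})(y v^{-i} - y^{-1} v^i)
    return prod([(y * v**i - v**(-i) / y) * (y * v**(-i) - v**i / y)
                 for i in range(1, m + 1)])

def T_alt(m):   # (-1)^m prod_i q^{-i} (1 - y^2 q^i)(1 - y^{-2} q^i)
    return (-1)**m * prod([q**(-i) * (1 - y**2 * q**i) * (1 - q**i / y**2)
                           for i in range(1, m + 1)])

m = 3
assert simplify(expand(T(m) - T_alt(m))) == 0   # the two closed forms agree
for j in range(m):                              # (Vanishing): T_m(+-v^{j+1}) = 0
    assert T(m).subs(y, v**(j + 1)) == 0
    assert T(m).subs(y, -v**(j + 1)) == 0
print("T_m identities verified for m = 3")
```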
Habiro proved that $\\{\sigma_{m}\\}_{m\geq 0}$ form a basis in (a certain
completion of) the $\Gamma$-invariant part of the center. Hence, the elements
$S_{m}=\xi^{-1}(\sigma_{m})$, given by
$S_{m}:=\prod_{i=1}^{m}(V_{1}-v^{i}-v^{-i})(V_{1}+v^{i}+v^{-i})$
form a basis of $\mathcal{R}$. We will show that
$P_{n}=\prod_{i=0}^{n-1}(V_{1}-v^{2i+1}-v^{-2i-1})\in\mathcal{R}$
is a dual basis to $\\{S_{m}\\}_{m\geq 0}$ with respect to the Hopf pairing.
The following is a slight reformulation of [15, Prop. 6.3].
###### Lemma 7.1.
We have
$\langle P_{n},S_{m}\rangle=\frac{\\{2n+1\\}!}{\\{1\\}}\delta_{n,m}\ .$
###### Proof.
Clearly, one has
$\xi(P_{n})=\prod_{i=0}^{n-1}(C-v^{2i+1}-v^{-2i-1})$
which annihilates $V_{2i}$ for $i<n$. We have the following cases:
1) For $n<m$ we have $\langle
P_{n},S_{m}\rangle=\mathrm{Tr}_{q}^{P_{n}}(\sigma_{m})$. Since $P_{n}$ is in the
span of $V_{i}$ for $i\leq n$ and $\sigma_{m}$ annihilates all of these, we get
$\langle P_{n},S_{m}\rangle=0$.
2) For $m<n$ we have $\langle
P_{n},S_{m}\rangle=\mathrm{Tr}_{q}^{S_{m}}(\xi(P_{n}))$. The element $S_{m}$ is in the
span of $V_{2i}$ for $i\leq m$, and
$\langle P_{n},V_{2i}\rangle=\\{i+n\\}\dots\\{i-n+1\\}[2i+1]\ ,$
which vanishes for $i<n$ because the product contains the factor $\\{0\\}$. Hence $\langle P_{n},S_{m}\rangle=0$.
3) Finally, for $n=m$ we observe that $P_{n}$ contains $V_{n}$ with coefficient $1$, and
$\langle P_{n},S_{n}\rangle=\langle
V_{n},S_{n}\rangle=\mathrm{Tr}_{q}^{V_{n}}(\sigma_{n})$
which is easy to compute. ∎
We can use the above results to compute the coefficients in the decomposition
of any central element into $\\{\sigma_{m}\\}_{m\geq 0}$.
###### Lemma 7.2.
Let $\phi$ be a $\mathbb{Z}_{2}$-invariant element in
$\mathcal{Z}(U_{q}\mathfrak{sl}_{2})$ which acts on $V_{j}$ by a scalar
$\phi_{j}$. Then
$\phi=\sum
a_{n}\sigma_{n},\;\text{where}\;\;a_{n}=\sum_{i=0}^{n}(-1)^{n-i}\frac{\\{2i+2\\}\\{i+1\\}}{\\{n+i+2\\}!\\{n-i\\}!}\;\phi_{i}\
.$
###### Proof.
We have ([15, Lemma 6.1])
$P_{n}=\sum_{i=0}^{n}(-1)^{n-i}\frac{[2i+2]}{[n+i+2]}\left[\begin{array}[]{cc}\\!2n+1\\!\\\
\\!n+1+i\\!\end{array}\right]V_{i}.$
If $\phi=\sum a_{m}\sigma_{m}$ then
$a_{n}=\frac{\\{1\\}}{\\{2n+1\\}!}\mathrm{Tr}_{q}^{P_{n}}(\phi)=\frac{\\{1\\}}{\\{2n+1\\}!}\sum_{i=0}^{n}(-1)^{n-i}\frac{[2i+2]}{[n+i+2]}\left[\begin{array}[]{cc}\\!2n+1\\!\\\
\\!n+1+i\\!\end{array}\right]\mathrm{Tr}_{q}^{V_{i}}(\phi)=$
$\sum_{i=0}^{n}(-1)^{n-i}\frac{\\{2i+2\\}\\{1\\}}{\\{n+i+2\\}!\\{n-i\\}!}\dim_{q}(V_{i})\phi_{i}.$
Using $\dim_{q}(V_{i})=[i+1]$ we obtain the result. ∎
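As an illustration, feeding the scalars of $\sigma_{m}$ itself into the formula of Lemma 7.2 must return $a_{n}=\delta_{n,m}$; the following SymPy sketch (with helper names of our choosing) performs this check for small $m$:

```python
from sympy import symbols, simplify, prod, Integer

v = symbols('v')

def qn(k):      # {k} = v^k - v^{-k}
    return v**k - v**(-k)

def qfact(n):   # {n}! = {n}{n-1}...{1}
    return prod([qn(i) for i in range(1, n + 1)]) if n else Integer(1)

def sigma_on_V(m, j):
    # scalar of sigma_m from (20) on V_j, where C acts by v^{j+1} + v^{-j-1}
    c = v**(j + 1) + v**(-j - 1)
    return prod([c**2 - (v**i + v**(-i))**2 for i in range(1, m + 1)]) \
        if m else Integer(1)

def a(n, phi):  # coefficient a_n from Lemma 7.2
    return sum((-1)**(n - i) * qn(2*i + 2) * qn(i + 1)
               / (qfact(n + i + 2) * qfact(n - i)) * phi(i)
               for i in range(n + 1))

for m in range(3):
    row = [simplify(a(n, lambda j, m=m: sigma_on_V(m, j))) for n in range(3)]
    print(m, row)   # expect the m-th row of the identity matrix
```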
Habiro proved that for any $0$-framed knot $K$, there exist
$a_{n}(K)\in\mathbb{Z}[q,q^{-1}]$ such that
$J_{K}(\mathfrak{sl}_{2};q)=\sum_{n\geq 0}a_{n}(K)\,\sigma_{n}$
known as a cyclotomic expansion of the colored Jones polynomial of the knot
$K$.
## 8\. New basis for the center of $\widehat{\mathcal{U}}$
Recall that $\widehat{\mathcal{Z}}$ is the center of the completion
$\widehat{\mathcal{U}}$. In this section we construct the basis
$\\{\sigma_{\lambda}\\}_{\lambda}$ of the $\Gamma$-invariant part of
$\widehat{\mathcal{Z}}$. Furthermore, we explicitly define its dual
$\\{P_{\lambda}\\}_{\lambda}$ with respect to the Hopf pairing. This allows us
to construct the cyclotomic expansion of $J_{K}(\mathfrak{gl}_{N};q)$ for any
$0$-framed knot $K$.
The proof uses the existence and properties of interpolation Macdonald
polynomials [29] which are summarized in the following theorem.
###### Theorem 8.1.
There is a family of symmetric polynomials $F_{\lambda}(x_{1},\ldots,x_{N};q)$
such that:
* (a)
$F_{\lambda}$ is in the span of Schur functions $s_{\mu}$ for $\mu\leq\lambda$
with the leading term
$F_{\lambda}=(-1)^{|\lambda|+\binom{N}{2}}q^{D_{N}(\lambda)}s_{\lambda}+\ldots\
.$
* (b)
$F_{\lambda}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}})=0$ unless $\mu$ contains
$\lambda$.
* (c)
$F_{\lambda}(q^{-\lambda_{1}-N+1},\ldots,q^{-\lambda_{N}})=(-1)^{\binom{N}{2}}q^{n(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-h(\square)})$.
* (d)
Any function $F$ in the completion can be written as
(22)
$F(x_{1},\ldots,x_{N})=\sum_{\lambda,\;\mu\subset\lambda}d_{\mu,\lambda}(q)F(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}})F_{\lambda}(x_{1},\ldots,x_{N};q)$
where $d_{\lambda,\mu}$ are explicit coefficients prescribed by Theorem 10.17.
We discuss the definition and give more details on interpolation Macdonald
polynomials in Section 10.
###### Theorem 8.2.
There exists a family of central elements $\sigma_{\lambda}\in\mathcal{Z}$
with the following properties:
* (a)
$\sigma_{\lambda}$ is $\Gamma$-invariant and annihilates $L(\zeta)\otimes
V(\mu)$ for all $\mu$ not containing $\lambda$ and $\zeta\in\Gamma$.
* (b)
$\mathit{hc}(\sigma_{\lambda})$ is in the span of
$s_{\mu}(x_{1},\ldots,x_{N})$ for $\mu\leq\lambda$, with the leading term
$\mathit{hc}(\sigma_{\lambda})=(-1)^{|\lambda|+\binom{N}{2}}v^{(N-1)|\lambda|}q^{D_{N}(\lambda)}s_{\lambda}+\ldots\
.$
* (c)
$\sigma_{\lambda}$ acts on $V(\lambda)$ by a scalar
$\sigma_{\lambda}|_{V(\lambda)}=(-1)^{\binom{N}{2}}q^{-n(\lambda)-\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{h(\square)})\
.$
###### Proof.
Define $\sigma_{\lambda}=\mathit{hc}^{-1}(g_{\lambda})$, where
$g_{\lambda}(x_{1},\dots,x_{N})=F_{\lambda}(v^{N-1}x_{1},\ldots,v^{N-1}x_{N};q^{-1})$.
Then $\sigma_{\lambda}$ is clearly $\Gamma$-invariant and
$\sigma_{\lambda}|_{L(\zeta)\otimes V({\mu})}=g_{\lambda}(\zeta_{i}\cdot
v^{\mu_{i}+\rho_{i}})=F_{\lambda}(q^{(\mu_{1}+N-1)},\ldots,q^{\mu_{N}};q^{-1}).$
Indeed, if $y_{i}=\zeta_{i}\cdot
v^{\mu_{i}+\rho_{i}}=\zeta_{i}v^{(\mu_{i}-\frac{N-1}{2}+N-i)}$ then
$v^{N-1}y_{i}^{2}=q^{(\mu_{i}+N-i)}$.
Now $F_{\lambda}(q^{(\mu_{1}+N-1)},\ldots,q^{\mu_{N}};q^{-1})$ vanishes unless
$\mu$ contains $\lambda$, and has the nonzero value prescribed by the previous
theorem for $\mu=\lambda$. ∎
Let us define $\mathcal{R}_{\mathbb{Q}}:=\mathcal{R}\otimes\mathbb{Q}(v)$ by
extending the coefficient ring $\mathbb{Z}[v^{\pm 1}]$ of $\mathcal{R}$ to the
rational functions in $v$.
###### Theorem 8.3.
Define the following formal elements of $\mathcal{R}_{\mathbb{Q}}$
$P_{\lambda}=\sum_{\mu\subset\lambda}\frac{d_{\lambda,\mu}(q^{-1})}{\dim_{q}V(\mu)}\;V(\mu)\in\mathcal{R}_{\mathbb{Q}},$
then one has
(23) $\langle
P_{\lambda},\sigma_{\nu}\rangle:=\mathrm{Tr}^{P_{\lambda}}_{q}(\sigma_{\nu})=\delta_{\lambda,\nu}\
.$
###### Proof.
First, let us write the interpolation formula (22) for $F=F_{\nu}$:
$F_{\nu}(x_{1},\ldots,x_{N};q)=\sum_{\lambda,\;\mu\subset\lambda}d_{\lambda,\mu}(q)F_{\nu}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}};q)F_{\lambda}(x_{1},\ldots,x_{N};q),$
so
$\sum_{\mu\subset\lambda}d_{\lambda,\mu}(q)F_{\nu}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}};q)=\delta_{\lambda,\nu}.$
By changing $q$ to $q^{-1}$ we get
(24)
$\sum_{\mu\subset\lambda}d_{\lambda,\mu}(q^{-1})F_{\nu}(q^{\mu_{1}+N-1},\ldots,q^{\mu_{N}};q^{-1})=\delta_{\lambda,\nu}.$
Now
$\mathrm{Tr}^{V(\mu)}_{q}(\sigma_{\nu})=\dim_{q}(V(\mu))\;F_{\nu}(q^{\mu_{1}+N-1},\ldots,q^{\mu_{N}};q^{-1})$,
hence
$\mathrm{Tr}^{P_{\lambda}}_{q}(\sigma_{\nu})=\sum_{\mu\subset\lambda}\frac{d_{\lambda,\mu}(q^{-1})}{\dim_{q}V({\mu})}\mathrm{Tr}^{V({\mu})}_{q}(\sigma_{\nu})=\;\delta_{\lambda,\nu}\
.$
∎
Next, we would like to study the integrality properties of the universal knot
invariant.
###### Lemma 8.4.
(a) Let $\sigma$ be a central element of $\mathcal{U}^{\text{ev}}_{\mathbb{Z}}$. Then
$\sigma=(K_{1}\cdots K_{N})^{-2s}\sum a_{\lambda}\sigma_{\lambda}$ for some $s\geq 0$, with
$a_{\lambda}\in\mathbb{Z}[q,q^{-1}]$.
(b) Given $k$ and $m$, there exists $n=n(k,m)$ such that for all
$\Gamma$-invariant central elements $\sigma$ in the ideal
$\mathcal{U}^{(n)}_{\mathbb{Z}}$ the coefficients $a_{\lambda}$ are divisible
by $(q;q)_{m}$ for $|\lambda|\leq k$.
###### Proof.
(a) Recall that the Harish-Chandra transform $\mathit{hc}$ identifies the
$\Gamma$-invariant part of the center of $\mathcal{U}_{\mathbb{Z}}$ with the
space of symmetric functions in $x_{1},\ldots,x_{N}$ with coefficients in
$\mathbb{Z}[q,q^{-1}]$. Since $F_{\lambda}$ is a polynomial whose top degree
part equals the Schur polynomial (up to a monomial in $q$), we can write, for $f=\mathit{hc}(\sigma)$,
$(x_{1}\cdots
x_{N})^{s}f(x_{1},\ldots,x_{N})=\sum_{\lambda}a_{\lambda}F_{\lambda}(x_{1},\ldots,x_{N};q^{-1})$
and the result follows.
(b) If $\sigma$ is in the ideal $\mathcal{U}^{(n)}_{\mathbb{Z}}$ for
sufficiently large $n$, then by Proposition 6.2 its matrix elements in the
integral basis of $V(\lambda)$ are divisible by $(q;q)_{m}$. By definition of
Harish-Chandra transform, this implies that the values
$f(q^{-\lambda_{1}-N+1},\ldots,q^{-\lambda_{N}})$ are divisible by $(q;q)_{m}$
and hence by the interpolation formula (22) the coefficients $a_{\lambda}$ are
divisible by $(q;q)_{m}$ as well. ∎
###### Corollary 8.5.
The center of the completion $\widehat{\mathcal{U}}$ is isomorphic to the
completion of the space of symmetric polynomials with coefficients in
$\widehat{\mathbb{Z}[v]}$ with respect to the basis $F_{\lambda}$.
###### Proof.
By Lemma 8.4 any element of the center of $\widehat{\mathcal{U}}$ can be
written as an infinite series $\sum a_{\lambda}F_{\lambda}$ with coefficients
in $\widehat{\mathbb{Z}[v]}$, up to a factor $(x_{1}\cdots x_{N})^{-s}$. By
Corollary 11.10 the multiplication by $(x_{1}\cdots x_{N})^{-s}$ preserves the
space of such series. ∎
###### Corollary 8.6.
Any $\sigma\in\widehat{\mathcal{U}^{\text{ev}}}$ can be written as an infinite
sum $\sigma=\sum a_{\lambda}\sigma_{\lambda}$ with coefficients
$a_{\lambda}=\mathrm{Tr}^{P_{\lambda}}_{q}(\sigma)\in\widehat{\mathbb{Z}[q]}$.
###### Proposition 8.7.
The universal knot invariant admits an expansion
$J_{K}(\mathfrak{gl}_{N};q)=\sum_{\lambda}a_{\lambda}(K)\sigma_{\lambda}\quad\text{with}\quad
a_{\lambda}(K)=\sum_{\mu\subset\lambda}{d_{\lambda,\mu}(q^{-1})}\,J_{K}(V(\mu),q)\in\mathbb{Z}[q,q^{-1}]\
$
called a cyclotomic expansion of the universal $\mathfrak{gl}_{N}$ knot
invariant.
Proposition 8.7 implies Theorem 1.3 in the Introduction. Note that the knot
invariant $J_{K}(V(\mu),q)$ is normalized to be $1$ for the unknot.
###### Proof.
By Corollary 6.5, $J_{K}(\mathfrak{gl}_{N};q)$ is a central element in
$\widehat{\mathcal{U}^{\text{ev}}}$, so it can be written as $\sigma=\sum
a_{\lambda}\sigma_{\lambda}$ with coefficients
$a_{\lambda}\in\widehat{\mathbb{Z}[q]}$. On the other hand, the value of
$J_{K}$ on any representation $V(\lambda)$ is in $\mathbb{Z}[q,q^{-1}]$, so
by the interpolation formula (22) the coefficients $a_{\lambda}$ can be
written as rational functions with numerators in $\mathbb{Z}[q,q^{-1}]$ and
cyclotomic denominators. By Proposition 12.1 this implies that
$a_{\lambda}\in\mathbb{Z}[q,q^{-1}]$. The explicit formula for $a_{\lambda}$
is obtained by taking the Hopf pairing with $P_{\lambda}$ and observing that
$\mathrm{Tr}^{V({\mu})}_{q}\left(J_{K}(\mathfrak{gl}_{N};q)\right)=\dim_{q}(V(\mu))J_{K}(V(\mu),q)$
according to our normalization. ∎
The last result shows that
$a_{\lambda}(K)=\mathrm{Tr}_{q}^{P_{\lambda}}(J_{K}(\mathfrak{gl}_{N};q))\in\mathbb{Z}[q^{\pm
1}]$, even though the coefficients $d_{\lambda,\mu}(q)$ are rational
functions in $q$ (compare Example 10.23).
## 9\. Unified invariants of integral homology 3-spheres
This section is devoted to our main application of the previous results — a
construction of the unified invariants for integral homology 3-spheres. We
start with a few auxiliary results.
Let us denote by
$P^{\prime}_{\lambda}=v^{-|\lambda|}\dim_{q}V(\lambda)\sum_{\mu\subset\lambda}\frac{d_{\lambda,\mu}(q^{-1})}{\dim_{q}V(\mu)}\;V(\mu)\in\mathcal{R}_{\mathbb{Q}}$
and define
(25) $\omega_{\pm}=\sum_{\lambda}(-1)^{|\lambda|+\binom{N}{2}}q^{\mp
c(\lambda)}q^{w_{\pm}(\lambda)}P^{\prime}_{\lambda}\in\widehat{\mathcal{R}}_{\mathbb{Q}}\quad\text{with}\quad\begin{array}[]{ll}w_{+}(\lambda)&=D_{N}(\lambda)\\\
w_{-}(\lambda)&=D_{N}(\lambda)+N|\lambda|\end{array}$
where $c(\lambda)$ is the content of $\lambda$. The next lemma implies that
$\omega_{\pm}$ is the universal Kirby color for $(\pm 1)$-surgery.
###### Lemma 9.1.
For any $x\in\widehat{\mathcal{R}}_{\mathbb{Q}}$, we have
(26) $\langle\omega_{\pm},x\rangle=J_{U_{\mp}}(x)=\langle r^{\pm
1},\xi(x)\rangle$
where $J_{U_{\pm}}(x)$ is the Reshetikhin–Turaev invariant of the $(\pm
1)$-framed unknot colored by $x$.
###### Proof.
It is enough to check (26) for the basis elements $x=V(\nu)$. We compute
$\displaystyle\langle P^{\prime}_{\lambda},V(\nu)\rangle$
$\displaystyle=v^{-|\lambda|}\dim_{q}V(\lambda)\sum_{\mu\subset\lambda}\frac{{d_{\lambda,\mu}(q^{-1})}}{\dim_{q}V(\mu)}\;\langle
V(\mu),V(\nu)\rangle=v^{-|\lambda|}\dim_{q}V(\lambda)\sum_{\mu\subset\lambda}{d_{\lambda,\mu}(q^{-1})}s_{\nu}(q^{\mu_{i}+N-i})$
$\displaystyle=\dim_{q}V(\lambda)\sum_{\mu\subset\lambda}{d_{\lambda,\mu}(q^{-1})}C_{\nu}F_{\nu}(q^{\mu_{i}+N-i})={C_{\lambda}}\delta_{\lambda,\nu}\dim_{q}V(\lambda)$
where we used Lemma 5.3, equation (24) and the expansion
$s_{\nu}=(-1)^{|\nu|+\binom{N}{2}}q^{-D_{N}(\nu)}v^{(1-N)|\nu|}F_{\nu}+\text{lower
terms}$ and hence,
$C_{\lambda}=(-1)^{|\lambda|+\binom{N}{2}}q^{-D_{N}(\lambda)}v^{-N|\lambda|}\
.$
Using this computation it is easy to check that
(27) $\langle\omega_{\pm},V(\nu)\rangle=v^{\mp N|\nu|}q^{\mp
c(\nu)}\dim_{q}V(\nu)=v^{\mp(\nu,\nu+2\rho)}\dim_{q}V(\nu)=\mathrm{Tr}^{V(\nu)}_{q}(r^{\pm
1})$
is equal to $J_{U_{\mp}}(V(\nu))$. ∎
From the following computation for
$V^{\prime}(\nu)=\frac{V(\nu)}{\dim_{q}(V(\nu))}$
$\langle\omega_{+}\omega_{-},V^{\prime}(\nu)\rangle=\langle\omega_{+},V^{\prime}(\nu)\rangle\langle\omega_{-},V^{\prime}(\nu)\rangle=v^{(\nu,\nu+2\rho)}v^{-(\nu,\nu+2\rho)}=\langle
1,V^{\prime}(\nu)\rangle$
we see that $\omega_{+}$ and $\omega_{-}$ are inverse to each other in the
algebra $\widehat{\mathcal{R}}_{\mathbb{Q}}$ isomorphic to
$\widehat{\mathcal{Z}}_{\mathbb{Q}}$.
A direct consequence of the above lemma and the fusion rules is the following
result.
###### Theorem 9.2.
Let $L\cup K=L_{1}\cup L_{2}\dots\cup L_{\ell}\cup K$ be an $(\ell+1)$-component
algebraically split $0$-framed link such that $K$ is the unknot. Denote by
$L_{(K,\pm 1)}$ the framed link in $S^{3}$ obtained from $L$ by $(\pm
1)$-surgery along $K$. Then for any $p_{1},\dots,p_{\ell}\in\mathcal{R}$
$J_{L\cup K}(p_{1},\dots,p_{\ell},\omega_{\pm})=J_{L_{(K,\pm
1)}}(p_{1},\dots,p_{\ell}).$
###### Proof.
The proof is given in [15, Thm. 9.4]. ∎
### 9.1. Construction of the unified invariants
Without loss of generality, we can assume that an integral homology 3-sphere
$M$ is obtained by $\mathbf{\varepsilon}$-surgery on an $\ell$ component
algebraically split link $L$, where $\mathbf{\varepsilon}\in\\{\pm
1\\}^{\ell}$. For $\mathfrak{sl}_{N}$ Habiro and Le defined a unified
invariant of $M$ as follows
$I^{\text{HL}}(M):=\mathcal{T}^{\prime}_{\mathbf{\varepsilon}}(J_{L}(\mathfrak{sl}_{N};q))\;\in\;\widehat{\mathbb{Z}[q]}\quad\text{
where}\quad\mathcal{T}^{\prime}_{\varepsilon}=\bigotimes^{\ell}_{i=1}\mathcal{T}^{\prime}_{\varepsilon_{i}}\
$
is the $\mathfrak{sl}_{N}$ twist form. They also proved that
$I^{\text{HL}}(M)$ belongs to the Habiro ring [19].
For $\mathfrak{gl}_{N}$ we define the unified invariant of $M$ similarly
$I(M):=\mathcal{T}_{\mathbf{\varepsilon}}(J_{L}(\mathfrak{gl}_{N};q))\;\in\;\widehat{\mathbb{Z}[q]}\quad\text{
where}\quad\mathcal{T}_{\varepsilon}=\bigotimes^{\ell}_{i=1}\mathcal{T}_{\varepsilon_{i}}\
$
by using the $\mathfrak{gl}_{N}$ twist forms.
###### Theorem 9.3.
For any integral homology $3$-sphere $M$,
$I(M)=J_{L}(\omega_{\varepsilon_{1}},\dots,\omega_{\varepsilon_{\ell}})\in\;\widehat{\mathbb{Z}[q]}.$
Moreover, its evaluation at any root of unity coincides with the
$\mathfrak{sl}_{N}$ Witten–Reshetikhin–Turaev invariant of $M$.
This implies Theorem 1.4 from the Introduction.
###### Proof.
By Corollary 6.5, for any algebraically split $0$-framed link $L$ we have
$J_{L}(\mathfrak{gl}_{N};q)=J_{L}(\mathfrak{sl}_{N};q).$
Hence, as explained in Section 6.3, the $\mathfrak{gl}_{N}$ and
$\mathfrak{sl}_{N}$ twist forms on $J_{L}$ do coincide. This implies
$I(M)=I^{\text{HL}}(M)$. Since the Habiro–Le invariant is known to belong to
the Habiro ring and to evaluate at any root of unity to the Witten-Reshetikhin-
Turaev (WRT) invariant, it remains to show
$I(M)=J_{L}(\omega_{\epsilon_{1}},\dots,\omega_{\epsilon_{\ell}})$. We prove
this claim in two steps.
Step 1: Assume $\ell=1$; then $J_{L}(\omega_{\pm})=I(M)$ by Lemma 9.1.
Step 2: For any $x=x_{1}\otimes\dots\otimes
x_{\ell}\in\text{Inv}(\widehat{\mathcal{U}}_{\mathbb{Z}}^{\hat{\otimes}\ell})$
we define $a_{k}$ for $k=0,1,\dots,\ell$ and $b_{k}$ for $k=1,\dots,\ell$ as follows:
$a_{k}=\prod^{k}_{i=1}\langle
r^{\epsilon_{i}},x_{i}\rangle\prod^{\ell}_{j=k+1}\langle\omega_{\epsilon_{j}},x_{j}\rangle,\quad
b_{k}=x_{k}\prod^{k-1}_{i=1}\langle
r^{\epsilon_{i}},x_{i}\rangle\prod^{\ell}_{j=k+1}\langle\omega_{\epsilon_{j}},x_{j}\rangle.$
Then
$a_{k-1}=\langle\omega_{\epsilon_{k}},b_{k}\rangle\quad\text{and}\quad
a_{k}=\langle r^{\epsilon_{k}},b_{k}\rangle.$
where we identify $\omega_{\pm}$ with their images under $\xi$ for simplicity.
Since $b_{k}\in\mathcal{Z}_{\mathbb{Q}}$, we have $a_{k}=a_{k-1}$ by Step 1
for $k=1,2,\dots,\ell$. Hence, we have $a_{0}=a_{\ell}$ which is our claim. ∎
Theorem 9.3 has striking consequences. Indeed, for any partition $\lambda$
let us denote
$\mathcal{P}_{\lambda}=\text{\rm Span}_{\mathbb{Z}[q^{\pm
1}]}\\{P^{\prime}_{\mu}\,|\,\lambda\subset\mu\\},\quad\mathcal{P}=\mathcal{P}_{\emptyset}\quad\text{and}\quad\widehat{\mathcal{P}}:={\lim\limits_{\overleftarrow{\hskip
5.69054pt\lambda\hskip
5.69054pt}}}\;\;\frac{\mathcal{P}}{\mathcal{P}_{\lambda}}.$
For any framed link $L$, the Reshetikhin–Turaev functor provides a
$\mathbb{Q}(v)$-multilinear map
$J_{L}:\mathcal{R}_{\mathbb{Q}}\times\dots\times\mathcal{R}_{\mathbb{Q}}\to\mathbb{Q}(v)$.
For any algebraically split $0$-framed link $L$, Theorem 9.3 implies that its
restriction to $\mathcal{P}$ provides a $\mathbb{Z}[q^{\pm 1}]$-multilinear
map
$J_{L}:\mathcal{P}\times\dots\times\mathcal{P}\to\mathbb{Z}[q,q^{-1}]$
inducing
$J_{L}:\widehat{\mathcal{P}}\times\dots\times\widehat{\mathcal{P}}\to\widehat{\mathbb{Z}[q]}\
.$
This leads to a generalization of the famous integrability theorem in [15,
Thm. 8.2].
###### Corollary 9.4.
Let $L$ be an $\ell$-component algebraically split $0$-framed link. Then for
all but finitely many partitions $\lambda_{i}$ with $1\leq i\leq\ell$, there
exist positive integers $n=n(\lambda_{i},N)$ such that
$J_{L}(P^{\prime}_{\lambda_{1}},\dots,P^{\prime}_{\lambda_{\ell}})\in(q;q)_{n}\mathbb{Z}[q,q^{-1}]\
.$
It would be interesting to have a direct proof of Corollary 9.4 without using
the theory of unified invariants.
Based on Corollary 9.4 we can give a cyclotomic expansion of the
Reshetikhin–Turaev invariant of $L$ as follows:
(28)
$J_{L}(\lambda_{1},\dots,\lambda_{\ell})=v^{\sum_{i}|\lambda_{i}|}\sum_{\mu_{i}\subset\lambda_{i}}\prod^{\ell}_{j=1}c_{\lambda_{j},\mu_{j}}(q^{-1})\,J_{L}(P^{\prime}_{\mu_{1}},\dots,P^{\prime}_{\mu_{\ell}})$
where the matrix
$\left[c_{\lambda,\mu}(q)\right]_{\lambda,\mu}:=\left[F_{\lambda}(q^{-\mu_{i}-N+i})\right]_{\lambda,\mu}$
is the inverse of $\left[d_{\lambda,\mu}(q)\right]_{\lambda,\mu}$ by Theorem
10.17 below. This generalizes equation $(8.2)$ in [15].
### 9.2. A few direct arguments
Our proof of the fact that $I(M)$ belongs to the Habiro ring is based on the
result that $I^{\text{HL}}(M)\in\widehat{\mathbb{Z}[q]}$, proven in [19] over
more than 100 pages. Given the complexity of that argument, we decided to
collect here different facts that can be shown without reference to [19].
###### Theorem 9.5.
Assume $M_{\pm}$ is obtained by $(\pm 1)$-surgery on the knot $K$; then
$I(M_{\pm})=J_{K}(\omega_{\pm})\;\in\;\widehat{\mathbb{Z}[v]}$
belongs to the Habiro ring.
###### Proof.
By Theorem 1.3 we know
$J_{K}(\mathfrak{gl}_{N};q)=\sum_{\mu}a_{\mu}(K)\sigma_{\mu}\quad\text{
with}\quad a_{\mu}(K)\in\mathbb{Z}[q,q^{-1}]$
The fact that $I(M_{\pm})$ belongs to the Habiro ring easily follows from the
claim that $\mathcal{T}_{\pm}(\sigma_{\mu})$ is divisible by $(v;v)_{m}$ for
some $m$ depending on $\mu$ and $N$. Let us prove this claim. By (19) the Hopf
pairing with $r^{\pm 1}$ replaces an element $x^{k_{i}}_{i}$ with
$v^{Q_{\pm}(k_{i})}$ where $Q_{\pm}$ is a quadratic form. By Lemma 10.27 we
can rewrite $\sigma_{\mu}$ as a linear combination of
$\prod^{d}_{i=1}f_{n_{i}}(q^{s_{i}}x_{i})$ such that $\sum_{i}n_{i}=|\mu|$,
$d=N(N+1)/2$ and $s_{i}\in\mathbb{Z}$. Moreover, each $f_{n}(q^{a}x_{i})$ is
divisible by $f_{n}(v^{a}y_{i})$ where $y_{i}^{2}=x_{i}$ and hence belongs to
the ideal $I_{n}$ of $\mathbb{Z}[v^{\pm 1},y^{\pm 1}_{i}]$ characterized in
Proposition 2.1 of [3]. The result follows now from [3, Theorem 2.2]. The
number $m$ we are looking for is
$\left\lfloor\frac{|\mu|}{N(N+1)}\right\rfloor$.
∎
Combining previous results we obtain an explicit expression for the unified
invariant for knot surgeries:
(29)
$I_{M_{\pm}}=J_{K}(\omega_{\pm})=\mathrm{Tr}_{q}^{\omega_{\pm}}J_{K}(\mathfrak{gl}_{N};q)=\sum_{\lambda}(-1)^{|\lambda|+\binom{N}{2}}q^{\mp
c(\lambda)}q^{w_{\pm}(\lambda)}\,J_{K}(P^{\prime}_{\lambda})$
Assuming that
$I(M)=J_{L}(\omega_{\varepsilon_{1}},\dots,\omega_{\varepsilon_{\ell}})$ is well
defined, its topological invariance can be shown directly as follows. Since
$I(M)$ depends only on the isotopy class of $L$, it remains to check its
invariance under Hoste moves (a version of Fenn-Rourke moves between
algebraically split links). Without loss of generality, we can assume that the
last component is an unknot, then the statement follows from Theorem 9.2.
Assuming $I(M)$ belongs to the Habiro ring, and as such has well defined
evaluations at roots of unity [17, Thm. 6.3], we can use the same trick as
above to show that for any root of unity $\zeta$
$\text{ev}_{\zeta}I(M)=\text{WRT}(M,\zeta).$
Let us recall that the WRT invariant is obtained from
$J_{L}(\mathfrak{sl}_{N};q)$ by taking trace along each component with the
Kirby color
$\Omega_{\pm}=\frac{\sum_{\lambda}v^{\pm(\lambda,\lambda+2\rho)}\dim_{q}V(\lambda)\,V(\lambda)}{\sum_{\mu}v^{\pm(\mu,2\rho+\mu)}\dim^{2}_{q}V(\mu)}$
where the sums are taken over all
$\lambda,\mu\in\mathcal{R}^{\text{fin}}=\\{\lambda|\dim_{\zeta}V(\lambda)\neq
0\\}$ and $v^{2}$ is evaluated to $\zeta$. Hence, we need to show that for any
$x$ in the ad-invariant part of the completed $\ell$th tensor power of
$\widehat{\mathcal{U}}_{\mathbb{Z}}$, we have
$\mathrm{Tr}_{q}^{\Omega_{\varepsilon}}(x)\stackrel{{\scriptstyle\zeta}}{{=}}\mathrm{Tr}_{q}^{\omega_{\varepsilon}}(x)\quad\quad\forall
x\in\text{Inv}(\widehat{\mathcal{U}}_{\mathbb{Z}}^{\hat{\otimes}\ell})$
where $\stackrel{{\scriptstyle\zeta}}{{=}}$ means the equality after
evaluation $v^{2}=\zeta$. We will prove this fact in two steps.
Step 1: Assume $\ell=1$; in this case
$\text{Inv}\,\widehat{\mathcal{U}}_{\mathbb{Z}}=\widehat{\mathcal{Z}}$ with
basis given by $z_{\lambda}=\xi(V(\lambda))$. Since $\Omega_{\pm}$ is
invariant under Hoste moves, we have
$\mathrm{Tr}_{q}^{\Omega_{\pm}}\left(z_{\nu}\right)=\langle\Omega_{\pm},V(\nu)\rangle=\text{ev}_{\zeta}\left(v^{\mp(\nu,\nu+2\rho)}\dim_{q}V(\nu)\right)$
where we interpret the left hand side as a Hopf link with components colored
by $\Omega_{\pm}$ and $V(\nu)$, and the right hand side is the result of the
sliding. Comparing this computation with (27), we deduce that at roots of
unity the actions of $\Omega_{\pm}$ and $\omega_{\pm}$ do coincide on
$\xi(\mathcal{R}^{\text{fin}})$, and they vanish on
$x\in\widehat{\mathcal{Z}}\setminus\xi(\mathcal{R}^{\text{fin}})$ after
evaluation.
Step 2: Define $a_{k}$ for $k=0,1,\dots,\ell$ and $b_{k}$ for $k=1,\dots,\ell$ as
follows:
$a_{k}=\bigotimes^{k}_{j=1}\mathrm{Tr}_{q}^{\Omega_{\varepsilon_{j}}}\otimes\bigotimes^{\ell}_{j=k+1}\mathrm{Tr}_{q}^{\omega_{\varepsilon_{j}}}(x),\quad
b_{k}=\bigotimes^{k-1}_{j=1}\mathrm{Tr}_{q}^{\Omega_{\varepsilon_{j}}}\otimes
1\otimes\bigotimes^{\ell}_{j=k+1}\mathrm{Tr}_{q}^{\omega_{\varepsilon_{j}}}(x).$
Then
$a_{k-1}=\mathrm{Tr}_{q}^{\omega_{\varepsilon_{k}}}(b_{k})\quad\text{and}\quad
a_{k}=\mathrm{Tr}_{q}^{\Omega_{\varepsilon_{k}}}(b_{k}).$
Since $b_{k}\in\mathcal{Z}_{\mathbb{Q}}$, we have
$a_{k}\stackrel{{\scriptstyle\zeta}}{{=}}a_{k-1}$ by Step 1 and Lemma 9.1 for
$k=1,2,\dots,\ell$. Hence, we have
$a_{0}\stackrel{{\scriptstyle\zeta}}{{=}}a_{\ell}$ which is our claim.
## 10\. Interpolation polynomials
In this section we summarize the theory of interpolation Macdonald
polynomials.
### 10.1. One variable case
Consider the space of polynomials in one variable $x$ over $\mathbb{C}(q)$
with the following bilinear form
$(x^{k},x^{m})=q^{-km}.$
Let us define polynomials $f_{m}(x),m=0,1,\ldots$ by the equation $f_{0}(x)=1$
and
(30) $f_{m}(x)=(x;q)_{m}=(1-x)\cdots(1-xq^{m-1})\quad\text{for}\quad m\geq 1.$
Clearly, $f_{m}(x)$ is a degree $m$ polynomial with leading term
$(-1)^{m}q^{\frac{m(m-1)}{2}}x^{m}$, so $\\{f_{m}\\}_{m\geq 0}$ form a basis
in $\mathbb{Z}[q,q^{-1}][x]$. Our next aim is to show that this basis is
orthogonal. Observe that $f_{m}(q^{-k})=0$ for $k<m$.
###### Lemma 10.1.
We have $(f_{m}(x),f_{k}(x))=\delta_{km}q^{-m}(q;q)_{m}$
###### Proof.
First, observe that $(g(x),x^{k})=g(q^{-k})$ for any polynomial $g(x)$.
Therefore for $m>k$ we have $(f_{m}(x),x^{k})=f_{m}(q^{-k})=0$, so
$(f_{m}(x),g(x))=0$ for any polynomial $g(x)$ of degree strictly less than
$m$. In particular, $(f_{m}(x),f_{k}(x))=0$ for $m>k$ and
$(f_{m}(x),f_{m}(x))=(-1)^{m}q^{\frac{m(m-1)}{2}}(f_{m}(x),x^{m})=(-1)^{m}q^{\frac{m(m-1)}{2}}f_{m}(q^{-m})=$
$(-1)^{m}q^{\frac{m(m-1)}{2}}(1-q^{-m})\cdots(1-q^{-1})=q^{-m}(1-q^{m})\cdots(1-q).$
∎
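The computation in the proof is easy to confirm symbolically; in the sketch below (our code; `pair` extends $(x^{k},x^{m})=q^{-km}$ bilinearly to polynomials) orthogonality is checked for small indices:

```python
from sympy import symbols, simplify, expand, prod, Poly, Integer

q, x = symbols('q x')

def f(m):   # f_m(x) = (x;q)_m
    return prod([1 - x * q**i for i in range(m)]) if m else Integer(1)

def pair(g, h):
    # extend (x^k, x^m) = q^{-km} bilinearly
    g, h = Poly(expand(g), x), Poly(expand(h), x)
    return sum(cg * ch * q**(-k * m)
               for (k,), cg in g.terms()
               for (m,), ch in h.terms())

for m in range(4):
    for k in range(4):
        if m == k:
            expected = q**(-m) * prod([1 - q**i for i in range(1, m + 1)])
        else:
            expected = Integer(0)
        assert simplify(pair(f(m), f(k)) - expected) == 0
print("orthogonality of Lemma 10.1 verified for m, k < 4")
```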
###### Lemma 10.2.
The transition matrix between the monomial basis $x^{a}$ and the basis
$f_{b}(x)$ has the following form:
(31) $x^{a}=\sum_{b\leq a}k_{a,b}f_{b}(x),\quad
k_{a,b}=(-1)^{b}q^{-ab+\frac{b(b+1)}{2}}\binom{a}{b}_{q}.$
###### Proof.
To find the coefficients we compute the pairing $(f_{b}(x),x^{a})$, then using
orthogonality we obtain
$k_{a,b}=\frac{(f_{b}(x),x^{a})}{(f_{b}(x),f_{b}(x))}=\frac{f_{b}(q^{-a})}{(f_{b}(x),f_{b}(x))}.$
For $a\geq b$ from Lemma 10.1 we get
$(f_{b}(x),f_{b}(x))=q^{-b}(q;q)_{b},$
while
$\displaystyle f_{b}(q^{-a})$
$\displaystyle=(1-q^{-a})\cdots(1-q^{-a+b-1})=(-1)^{b}q^{-ab+\frac{b(b-1)}{2}}(1-q^{a})\cdots(1-q^{a-b+1})$
$\displaystyle=(-1)^{b}q^{-ab+\frac{b(b-1)}{2}}\frac{(q;q)_{a}}{(q;q)_{a-b}}.$
and the equation follows. ∎
Our next goal is to expand an arbitrary polynomial $f(x)$ in the basis
$f_{m}(x)$. This can be done in two different ways. First, we can expand
$f(x)$ in the monomial basis and apply (31). Alternatively, we can apply the
Newton interpolation method: if $f(x)=\sum a_{m}f_{m}(x)$ then
$f(q^{-j})=\sum_{m\geq j}a_{m}f_{m}(q^{-j}),$
which is a triangular system of equations for the unknown coefficients
$a_{m}$. Thus knowing $f(q^{-j})$ one can at least theoretically reconstruct
the coefficients $a_{m}$. This can be made explicit by the following:
###### Lemma 10.3.
We have
(32) $f(x)=\sum_{m=0}^{\infty}a_{m}f_{m}(x),\
a_{m}=\frac{1}{(f_{m},f_{m})}\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{m}{j}_{q}f(q^{-j}).$
###### Proof.
By the $q$-binomial theorem we have
(33)
$f_{m}(x)=\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{m}{j}_{q}x^{j}.$
Now
$a_{m}=\frac{(f,f_{m})}{(f_{m},f_{m})}=\frac{1}{(f_{m},f_{m})}\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{m}{j}_{q}(f,x^{j}).$
Finally, $(f,x^{j})=f(q^{-j})$. ∎
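Formula (32) reconstructs the coefficients of any polynomial from its values at the nodes $q^{-j}$; the SymPy sketch below (ours) recovers the coefficients of a known combination:

```python
from sympy import symbols, simplify, prod, Integer

q, x = symbols('q x')

def f(m):   # f_m(x) = (x;q)_m
    return prod([1 - x * q**i for i in range(m)]) if m else Integer(1)

def qp(n):  # (q;q)_n
    return prod([1 - q**i for i in range(1, n + 1)]) if n else Integer(1)

def qbinom(m, j):
    return qp(m) / (qp(j) * qp(m - j))

def newton_coeffs(g, M):
    # a_m of (32); here (f_m, f_m) = q^{-m} (q;q)_m by Lemma 10.1
    return [simplify(sum((-1)**j * q**(j*(j - 1)//2) * qbinom(m, j)
                         * g.subs(x, q**(-j)) for j in range(m + 1))
                     / (q**(-m) * qp(m)))
            for m in range(M)]

g = f(2) + 3*f(1) - 5
print(newton_coeffs(g, 4))   # expect [-5, 3, 1, 0]
```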
###### Remark 10.4.
Equation (33) can be interpreted as an explicit inverse of the matrix in (31).
One can consider the completion $\widehat{\mathbb{Z}[q,q^{-1}][x]}$ of the space of
polynomials with respect to the basis $f_{m}(x)$. In this completion, infinite
sums $\sum_{m=0}^{\infty}a_{m}f_{m}(x)$ are allowed. The Newton interpolation
method and (32) identify this completion with the space of distributions on
the interpolation nodes $1,q^{-1},\ldots$.
We will need the following lemma.
###### Lemma 10.5.
We have
$(x-q^{s})(x-q^{s+1})\cdots(x-q^{s+m-1})=\\\
\sum_{j=0}^{m}(-1)^{j}q^{-jm+\binom{j+1}{2}}\binom{m}{j}_{q}(1-q^{s+j})\cdots(1-q^{s+m-1})f_{j}(x).$
###### Proof.
We prove it by induction on $m$. For $m=1$ we get
$x-q^{s}=-(1-x)+(1-q^{s})=-f_{1}+(1-q^{s})f_{0}.$
For the induction step we observe
(34) $\displaystyle(x-q^{s+m})f_{j}(x)$
$\displaystyle=-q^{-j}(1-q^{j}x)f_{j}(x)+(q^{-j}-q^{s+m})f_{j}(x)$
$\displaystyle=-q^{-j}f_{j+1}(x)+q^{-j}(1-q^{s+m+j})f_{j}(x).$
Using (34), it is easy to identify the coefficient at $f_{j}(x)$ in
$(x-q^{s+m})\sum(-1)^{j}q^{-jm+\binom{j+1}{2}}\binom{m}{j}_{q}(1-q^{s+j})\cdots(1-q^{s+m-1})f_{j}(x)$
as
$-q^{-j+1}(-1)^{j-1}q^{-(j-1)m+\binom{j}{2}}\binom{m}{j-1}_{q}(1-q^{s+j-1})\cdots(1-q^{s+m-1})\\\
+q^{-j}q^{-jm+\binom{j+1}{2}}\binom{m}{j}_{q}(1-q^{s+j})\cdots(1-q^{s+m-1})(1-q^{s+m+j})\\\
=-q^{-j(m+1)+\binom{j+1}{2}}(1-q^{s+j})\cdots(1-q^{s+m-1})\\\
\times\left[q^{m-j+1}\binom{m}{j-1}_{q}(1-q^{s+j-1})+\binom{m}{j}_{q}(1-q^{s+m+j})\right].$
It remains to notice that
$q^{m-j+1}\binom{m}{j-1}_{q}(1-q^{s+j-1})+\binom{m}{j}_{q}(1-q^{s+m+j})$
$=\left[q^{m-j+1}\binom{m}{j-1}_{q}+\binom{m}{j}_{q}\right]-q^{s+m}\left[\binom{m}{j-1}_{q}+q^{j}\binom{m}{j}_{q}\right]$
$=\binom{m+1}{j}_{q}-q^{s+m}\binom{m+1}{j}_{q}=(1-q^{s+m})\binom{m+1}{j}_{q}.$
∎
###### Remark 10.6.
If we set a formal variable $y=q^{s}$ in Lemma 10.5, then we get the identity
$(x-y)(x-qy)\cdots(x-yq^{m-1})=\sum_{j=0}^{m}(-1)^{j}q^{-jm+\binom{j+1}{2}}\binom{m}{j}_{q}f_{m-j}(yq^{j})f_{j}(x).$
This is a $q$-analogue of the binomial identity
$(x-y)^{m}=\sum_{j=0}^{m}(-1)^{j}\binom{m}{j}(1-y)^{m-j}(1-x)^{j}.$
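Like Lemma 10.5, this is a finite $q$-identity and can be verified directly; a short SymPy sketch (ours):

```python
from sympy import symbols, simplify, expand, prod, Integer

q, x, y = symbols('q x y')

def f(m, t):   # f_m(t) = (t;q)_m
    return prod([1 - t * q**i for i in range(m)]) if m else Integer(1)

def qp(n):
    return prod([1 - q**i for i in range(1, n + 1)]) if n else Integer(1)

def qbinom(m, j):
    return qp(m) / (qp(j) * qp(m - j))

for m in range(1, 5):
    lhs = prod([x - y * q**i for i in range(m)])
    rhs = sum((-1)**j * q**(-j*m + j*(j + 1)//2) * qbinom(m, j)
              * f(m - j, y * q**j) * f(j, x) for j in range(m + 1))
    assert simplify(expand(lhs - rhs)) == 0
print("identity of Remark 10.6 verified for m < 5")
```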
### 10.2. Multi-variable case: polynomials
Let us generalize the above results to the case of $N$ variables. The pairing
has the form
$(x_{1}^{a_{1}}\cdots x_{N}^{a_{N}},x_{1}^{b_{1}}\cdots
x_{N}^{b_{N}})=q^{-\sum
a_{i}b_{i}}=(x_{1}^{a_{1}},x_{1}^{b_{1}})\cdots(x_{N}^{a_{N}},x_{N}^{b_{N}}).$
Note that for $\mathbf{x}=(x_{1},\dots,x_{N})$
$(g(\mathbf{x}),x_{1}^{b_{1}}\cdots
x_{N}^{b_{N}})=g(q^{-b_{1}},\ldots,q^{-b_{N}}).$
Consider the products
$f_{k_{1},\ldots,k_{N}}(\mathbf{x})=f_{k_{1}}(x_{1})\cdots f_{k_{N}}(x_{N}).$
Since the $f_{k}(x)$ form a basis of $\mathbb{C}(q)[x]$, the polynomials
$f_{k_{1},\ldots,k_{N}}$ form a basis of $\mathbb{C}(q)[x_{1},\ldots,x_{N}]$.
Clearly,
$(f_{k_{1},\ldots,k_{N}},x_{1}^{b_{1}}\cdots x_{N}^{b_{N}})=0\
\textrm{unless}\ b_{i}\geq k_{i}\ \textrm{for all}\ i.$
###### Lemma 10.7.
We have $(f_{k_{1},\ldots,k_{N}},f_{m_{1},\ldots,m_{N}})=0$ unless
$k_{i}=m_{i}$ for all $i$.
###### Proof.
Suppose that $k_{i}>m_{i}$ for some $i$. Since $f_{m_{1},\ldots,m_{N}}$
contains only monomials of the form $x_{1}^{b_{1}}\cdots x_{N}^{b_{N}}$ with
$b_{i}\leq m_{i}$, we have $(f_{k_{1},\ldots,k_{N}},x_{1}^{b_{1}}\cdots
x_{N}^{b_{N}})=0$ for all such monomials and hence
$(f_{k_{1},\ldots,k_{N}},f_{m_{1},\ldots,m_{N}})=0$. ∎
Next, we would like to describe an analogous basis of the space of symmetric
polynomials. It will be labeled by partitions
$\lambda=(\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{N})$ with at most
$N$ parts. We define
(35)
$F_{\lambda}(\mathbf{x})=\frac{\det(f_{\lambda_{i}+N-i}(x_{j}))}{\prod_{i<j}(x_{i}-x_{j})}.$
Clearly, the numerator in (35) is antisymmetric in $x_{i}$, so it is divisible
by $\prod_{i<j}(x_{i}-x_{j})$ and the ratio is a symmetric function. It is
easy to see that $F_{\lambda}(\mathbf{x})$ is a non-homogeneous polynomial of
degree $|\lambda|$, and the top degree component equals
$(-1)^{|\lambda|+\binom{N}{2}}q^{D_{N}(\lambda)}s_{\lambda}$ where
$s_{\lambda}$ is the Schur function and $D_{N}(\lambda)$ is defined by (6).
The function $F_{\lambda}(\mathbf{x})$ is a special case of a
factorial Schur function [26, 27, 28]; it is also a specialization of
the nonsymmetric Macdonald polynomials described below.
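Formula (35) is also convenient computationally; the following SymPy sketch (our code) evaluates the determinant for $N=2$ and $\lambda=(1)$ and recovers the closed form $q(x_{1}+x_{2})-(1+q)$ appearing in Examples 10.10 and 10.21:

```python
from sympy import symbols, expand, cancel, prod, Matrix, Integer

q, x1, x2 = symbols('q x1 x2')

def f(m, t):   # f_m(t) = (t;q)_m
    return prod([1 - t * q**i for i in range(m)]) if m else Integer(1)

def F(lam, xs):
    # factorial Schur function via the determinant formula (35)
    N = len(xs)
    lam = list(lam) + [0] * (N - len(lam))
    num = Matrix(N, N, lambda i, j: f(lam[i] + N - 1 - i, xs[j])).det()
    den = prod([xs[i] - xs[j] for i in range(N) for j in range(i + 1, N)])
    return cancel(num / den)

print(expand(F([1], [x1, x2])))   # q*x1 + q*x2 - q - 1, i.e. q(x1+x2)-(1+q)
```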
###### Lemma 10.8.
Suppose that $b_{1}>\ldots>b_{N}$. Then
$F_{\lambda}(q^{-b_{1}},\ldots,q^{-b_{N}})=0$ unless
$b_{i}\geq\lambda_{i}+N-i$ for all $i$.
###### Proof.
Suppose that $b_{j}<\lambda_{j}+N-j$ for some $j$. Then for all $i\leq j$ and
$\ell\geq j$ one has $\lambda_{i}+N-i\geq\lambda_{j}+N-j>b_{j}\geq b_{\ell}$, so
$f_{\lambda_{i}+N-i}(q^{-b_{\ell}})=0$; these entries form a $j\times(N-j+1)$
zero block, which implies
$\det[f_{\lambda_{i}+N-i}(q^{-b_{\ell}})]_{i,\ell=1}^{N}=0$. On the other
hand, since $b_{i}\neq b_{j}$ the denominator
$\prod_{i<j}(q^{-b_{i}}-q^{-b_{j}})$ does not vanish. ∎
###### Corollary 10.9.
If $\mu$ is another partition then we can define $b_{i}=\mu_{i}+N-i$, and
conclude that $F_{\lambda}(q^{-\mu_{i}-N+i})=0$ unless
$\mu_{i}\geq\lambda_{i}$ for all $i$, that is, the partition $\mu$ contains
$\lambda$.
###### Example 10.10.
Suppose that $\lambda=(1)$; then $F_{(1)}$ is a symmetric function of degree 1
with leading term $(-1)^{1+\binom{N}{2}}q^{D_{N}(1)}s_{(1)}=(-1)^{1+\binom{N}{2}}q^{D_{N}(1)}\sum
x_{i}$. We have $D_{N}(1)=N-1+\binom{N}{3}$, so
$F_{(1)}(x_{1},\ldots,x_{N})=(-1)^{1+\binom{N}{2}}q^{N-1+\binom{N}{3}}\sum
x_{i}+c.$ To find the constant $c$, we observe that by Corollary 10.9 we get
$F_{(1)}(q^{-N+1},q^{-N+2},\ldots,1)=0$, so
$c=(-1)^{\binom{N}{2}}q^{N-1+\binom{N}{3}}(q^{-N+1}+q^{-N+2}+\ldots+1)=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}[N]_{q}.$
###### Lemma 10.11.
We have
$F_{\lambda}(q^{-\lambda_{i}-N+i})=(-1)^{\binom{N}{2}}q^{n(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-h(\square)}),$
where $h(\square)$ is the hook length of a box $\square$ in the Young diagram
corresponding to $\lambda$.
###### Proof.
Since the sequence $\lambda_{i}+N-i$ is strictly decreasing, we have
$f_{\lambda_{j}+N-j}(q^{-\lambda_{i}-N+i})=0$ for $i>j$ and
$f_{\lambda_{i}+N-i}(q^{-\lambda_{i}-N+i})=\\{\lambda_{i}+N-i\\}_{q^{-1}}!$
and
$\det(f_{\lambda_{j}+N-j}(q^{-\lambda_{i}-N+i}))=\prod_{i}\\{\lambda_{i}+N-i\\}_{q^{-1}}!\
.$
On the other hand,
$\prod_{i<j}(q^{-\lambda_{i}-N+i}-q^{-\lambda_{j}-N+j})=(-1)^{\binom{N}{2}}q^{-\sum(\lambda_{j}+N-j)(j-1)}\prod_{i<j}(1-q^{-\lambda_{i}+i+\lambda_{j}-j}).$
and the statement follows now from formula (5) and the identity
$\sum(\lambda_{j}+N-j)(j-1)=n(\lambda)+\binom{N}{3}.$
∎
###### Example 10.12.
For arbitrary $N$ and $\lambda=(1)$ we computed in Example 10.10 that
$F_{(1)}=(-1)^{1+\binom{N}{2}}q^{\binom{N}{3}}(q^{N-1}(x_{1}+\ldots+x_{N})-[N]_{q}).$
Hence,
$F_{(1)}(q^{-N},q^{-N+2},\ldots,1)=(-1)^{1+\binom{N}{2}}q^{\binom{N}{3}}(q^{N-1}(q^{-N}+q^{-N+2}+\ldots+1)-[N]_{q})=$
$(-1)^{1+\binom{N}{2}}q^{\binom{N}{3}}(q^{-1}-1)=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}(1-q^{-1}).$
We summarize the above results in the following proposition:
###### Proposition 10.13.
[29] There exists a unique collection of nonhomogeneous symmetric polynomials
$F_{\lambda}(x_{1},\ldots,x_{N})$ with the following properties:
* •
$F_{\lambda}(x_{1},\ldots,x_{N})$ has degree $|\lambda|$.
* •
$F_{\lambda}(q^{-\mu_{i}-N+i})=0$ for all partitions $\mu$ not containing
$\lambda$.
* •
$F_{\lambda}(q^{-\lambda_{i}-N+i})=(-1)^{\binom{N}{2}}q^{n(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-h(\square)}).$
We will denote the value
$F_{\lambda}(q^{-\lambda_{i}-N+i})=(-1)^{\binom{N}{2}}q^{n(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-h(\square)})$
by $c_{\lambda,\lambda}$.
###### Lemma 10.14.
Suppose that $q$ is a root of unity. Then $c_{\lambda,\lambda}$ vanishes for
all but finitely many partitions $\lambda$.
###### Proof.
Observe that $\prod_{\square\in\lambda}(1-q^{-h(\square)})$ is divisible by
$\prod_{i}[\lambda_{i}-\lambda_{i+1}]_{q}!$ and
$\sum_{i=1}^{N}i(\lambda_{i}-\lambda_{i+1})=|\lambda|.$
This means that for some $i$ we must have
$i(\lambda_{i}-\lambda_{i+1})\geq\frac{|\lambda|}{N},\
\lambda_{i}-\lambda_{i+1}\geq\frac{|\lambda|}{iN}\geq\frac{|\lambda|}{N^{2}},$
and $c_{\lambda,\lambda}$ is divisible by
$(1-q)\cdots(1-q^{\lfloor\frac{|\lambda|}{N^{2}}\rfloor})$. If $q^{s}=1$ then
it vanishes for $|\lambda|\geq sN^{2}$. ∎
###### Remark 10.15.
A partition is called an $s$-core if none of its hook lengths is divisible by
$s$. The $s$-core partitions play an important role in representation theory
of symmetric groups in finite characteristic, and of Hecke algebras at roots
of unity [21]. If $q^{s}=1$ then clearly $c_{\lambda,\lambda}(q)\neq 0$ if and
only if $\lambda$ is an $s$-core. Although there are infinitely many
$s$-cores, Lemma 10.14 shows that there are finitely many $s$-cores with at
most $N$ rows.
For example, for $s=2$ the $2$-cores are “staircase partitions”
$\lambda=(k,k-1,\ldots,1)$, and the maximal $2$-core with at most $N$ rows has
size $N+(N-1)+\ldots+1=\binom{N+1}{2}$.
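The last claim is easy to test by brute force; the following Python sketch (ours) computes hook lengths directly and lists the nonempty $2$-cores with at most two rows and small size:

```python
def hooks(lam):
    # hook lengths of the Young diagram of the partition lam
    lamT = [sum(1 for r in lam if r > c) for c in range(lam[0])] if lam else []
    return [lam[i] - j + lamT[j] - i - 1
            for i in range(len(lam)) for j in range(lam[i])]

def is_s_core(lam, s):
    return all(h % s != 0 for h in hooks(lam))

# partitions with at most two rows and size <= 8
parts = [(a, b) for a in range(1, 9) for b in range(a + 1) if a + b <= 8]
print([p for p in parts if is_s_core([r for r in p if r > 0], 2)])
# prints [(1, 0), (2, 1)]: exactly the staircase partitions
```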
### 10.3. Multi-variable case: interpolation
One can use the polynomials $F_{\lambda}$ to solve the following interpolation
problem.
###### Problem 10.16.
Find a symmetric function $f=\sum a_{\lambda}F_{\lambda}$ given its values
$f(q^{-\mu_{i}-N+i})$ for all $\mu$.
We have
$f(q^{-\mu_{i}-N+i})=\sum a_{\lambda}F_{\lambda}(q^{-\mu_{i}-N+i}).$
This is a linear system for the $a_{\lambda}$ with triangular matrix
(36) ${\sf C}=\left[c_{\lambda,\mu}\right]_{\lambda,\mu},\
c_{\lambda,\mu}(q):=F_{\lambda}(q^{-\mu_{i}-N+i})$
It is clear from Proposition 10.13 that to find $a_{\lambda}$ for a given
$\lambda$ it is sufficient to know all coefficients $c_{\mu,\nu}$ for
$\mu\subset\nu\subset\lambda$.
In [29] Okounkov computed the inverse matrix $D={\sf C}^{-1}$ which allows one
to explicitly compute the coefficients $a_{\lambda}$.
###### Theorem 10.17.
[29] Define $c^{*}_{\lambda,\mu}(q)=c_{\lambda,\mu}(q^{-1})$ and
$\operatorname{cont}(\lambda)=n(\lambda)-n(\lambda^{\prime})$. Then
$D=\left[d_{\lambda,\mu}\right]_{\lambda,\mu},\
d_{\lambda,\mu}=(-1)^{|\mu|-|\lambda|}q^{\operatorname{cont}(\lambda)-\operatorname{cont}(\mu)}\frac{c^{*}_{\lambda,\mu}}{c_{\mu,\mu}c^{*}_{\lambda,\lambda}}$
and
$a_{\mu}=\sum_{\lambda\subset\mu}d_{\lambda,\mu}f(q^{-\lambda_{i}-N+i})=\frac{1}{c_{\mu,\mu}}\sum_{\lambda\subset\mu}(-1)^{|\mu|-|\lambda|}q^{\operatorname{cont}(\lambda)-\operatorname{cont}(\mu)}\frac{c^{*}_{\lambda,\mu}}{c^{*}_{\lambda,\lambda}}f(q^{-\lambda_{i}-N+i}).$
###### Example 10.18.
If $\lambda=\mu$ then clearly $d_{\lambda,\mu}=\frac{1}{c_{\lambda,\lambda}}$.
###### Example 10.19.
We have $F_{(\emptyset)}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}$, so
$c_{(\emptyset),(\emptyset)}=c_{(\emptyset),(1)}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}},\quad
c^{*}_{(\emptyset),(\emptyset)}=c^{*}_{(\emptyset),(1)}=(-1)^{\binom{N}{2}}q^{-\binom{N}{3}}.$
Since
$c_{(1),(1)}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}(1-q^{-1})=(-1)^{\binom{N}{2}+1}q^{\binom{N}{3}-1}(1-q),$
we get
$d_{(\emptyset),(1)}=\frac{(-1)^{\binom{N}{2}}q^{-\binom{N}{3}+1}}{(1-q)}.$
So the first two terms of the interpolation series have the following form:
$f(x_{1},\ldots,x_{N})=(-1)^{\binom{N}{2}}q^{-\binom{N}{3}}f(q^{1-N},q^{2-N},\ldots,1)F_{(\emptyset)}(\mathbf{x})+\\\
\frac{(-1)^{\binom{N}{2}+1}q^{-\binom{N}{3}+1}}{1-q}\left[-f(q^{1-N},q^{2-N},\ldots,1)+f(q^{-N},q^{2-N},\ldots,1)\right]F_{(1)}(\mathbf{x})+\ldots$
###### Example 10.20.
For $N=1$ and $a\geq b$ we have
$c_{(b),(a)}=f_{b}(q^{-a})=(1-q^{-a})\cdots(1-q^{-a+b-1})$
hence
$c^{*}_{(b),(a)}=(1-q^{a})\cdots(1-q^{a-b+1})$
Now
$\frac{c^{*}_{(b),(a)}}{c^{*}_{(b),(b)}}=\frac{(1-q^{a})\cdots(1-q^{a-b+1})}{(1-q^{b})\cdots(1-q)}=\binom{a}{b}_{q},$
and
$d_{(b),(a)}=(-1)^{a-b}q^{\frac{b(b-1)}{2}-\frac{a(a-1)}{2}}\frac{c^{*}_{(b),(a)}}{c_{(a),(a)}c^{*}_{(b),(b)}}=\frac{(-1)^{a-b}}{c_{(a),(a)}}q^{\frac{b(b-1)}{2}-\frac{a(a-1)}{2}}\binom{a}{b}_{q},$
which matches (32).
###### Example 10.21.
Let $N=2$, $\lambda=(1)$ and $\mu=(3,2)$. We have
$F_{\lambda}=q(x_{1}+x_{2})-(1+q)$, so
$c_{\lambda,\mu}=F_{\lambda}(q^{-4},q^{-2})=(-q-1+q^{-1}+q^{-3}),\
c^{*}_{\lambda,\mu}=q^{3}+q-1-q^{-1},\\\ $
and using Lemma 10.11
$c_{\lambda,\lambda}=-(1-q^{-1}),\ c^{*}_{\lambda,\lambda}=-(1-q),$
$c_{\mu,\mu}=-q^{2}(1-q^{-1})^{2}(1-q^{-2})(1-q^{-3})(1-q^{-4})=q^{-9}(1-q)^{2}(1-q^{2})(1-q^{3})(1-q^{4}).$
Now
$d_{\lambda,\mu}=q^{-2}\frac{c^{*}_{\lambda,\mu}}{c_{\mu,\mu}c^{*}_{\lambda,\lambda}}=-q^{6}\frac{q^{4}+q^{2}-q-1}{(1-q)^{3}(1-q^{2})(1-q^{3})(1-q^{4})}.$
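These evaluations are easy to double-check symbolically. Here is a short sympy sketch (our illustration, with our own helper names), using the $N=2$ determinantal formula recalled in Section 11.3.

```python
# Sketch: recompute c_{lambda,mu} and c_{mu,mu} of Example 10.21 from the
# N = 2 determinantal formula F_{l1,l2} = det[f_{l1+1}, f_{l2}] / (x1 - x2).
import sympy as sp

q, x1, x2 = sp.symbols('q x1 x2')

def f(m, x):
    """f_m(x) = (1 - x)(1 - qx)...(1 - q^{m-1} x)."""
    res = sp.Integer(1)
    for i in range(m):
        res *= 1 - q**i * x
    return res

def F(l1, l2):
    det = f(l1 + 1, x1) * f(l2, x2) - f(l1 + 1, x2) * f(l2, x1)
    return sp.cancel(det / (x1 - x2))

# Evaluation points for mu = (3,2), N = 2: (q^{-mu_1-1}, q^{-mu_2}).
pt = {x1: q**(-4), x2: q**(-2)}
print(sp.simplify(F(1, 0).subs(pt)))  # equals -q - 1 + 1/q + q**(-3), as above
print(sp.simplify(F(3, 2).subs(pt)
      - q**(-9) * (1 - q)**2 * (1 - q**2) * (1 - q**3) * (1 - q**4)))  # 0
```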
### 10.4. Hopf pairing
We have a symmetric bilinear form $(\cdot,\cdot)$ on
$\mathbb{Z}[x_{1},\ldots,x_{N}]^{S_{N}}$ defined by its values on Schur
polynomials
$(s_{\lambda},s_{\mu})=s_{\lambda}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}})s_{\mu}(q^{-N+1},\ldots,1).$
It is closely related to the Hopf pairing $\langle\cdot,\cdot\rangle$ for
$\mathcal{R}=\mathit{Rep}(\mathcal{U})$ defined in Section 5.2. Note that
$(f,s_{\mu})=f(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}})s_{\mu}(q^{-N+1},\ldots,1)$
for any symmetric function $f$.
###### Proposition 10.22.
We have
(37)
$(F_{\lambda},F_{\nu})=\delta_{\lambda,\nu}q^{-|\lambda|+2\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{N+c(\square)}),$
so the Hopf pairing is diagonal in the basis $\\{F_{\lambda}\\}_{\lambda}$.
###### Proof.
We have
$(F_{\lambda},s_{\mu})=F_{\lambda}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N}})s_{\mu}(q^{-N+1},\ldots,1)=0$
unless $\lambda\subset\mu$. On the other hand, $F_{\nu}$ can be expanded in
$s_{\mu}$ for $\mu\preceq\nu$, so $(F_{\lambda},F_{\nu})$ vanishes unless
there exists $\mu\preceq\nu$ such that $\lambda\subset\mu$, in particular,
$\lambda\preceq\nu$.
Since the Hopf pairing is symmetric, $(F_{\lambda},F_{\nu})$ vanishes unless
$\lambda\preceq\nu$ and $\nu\preceq\lambda$, so $\lambda=\nu$. Finally,
$(F_{\lambda},F_{\lambda})=(-1)^{|\lambda|+\binom{N}{2}}q^{D_{N}(\lambda)}(F_{\lambda},s_{\lambda})=(-1)^{|\lambda|+\binom{N}{2}}q^{D_{N}(\lambda)}F_{\lambda}(q^{-\lambda_{1}-N+1},\ldots,q^{-\lambda_{N}})s_{\lambda}(q^{-N+1},\ldots,1).$
Now
$F_{\lambda}(q^{-\lambda_{1}-N+1},\ldots,q^{-\lambda_{N}})=(-1)^{\binom{N}{2}}q^{n(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-h(\square)})$
while
$s_{\lambda}(q^{-N+1},\ldots,1)=q^{-n(\lambda)}\prod_{\square\in\lambda}\frac{(1-q^{-N-c(\square)})}{(1-q^{-h(\square)})},$
hence
$F_{\lambda}(q^{-\lambda_{1}-N+1},\ldots,q^{-\lambda_{N}})s_{\lambda}(q^{-N+1},\ldots,1)=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{-N-c(\square)})=$
$(-1)^{|\lambda|+\binom{N}{2}}q^{-N|\lambda|-c(\lambda)+\binom{N}{3}}\prod_{\square\in\lambda}(1-q^{N+c(\square)}),$
where $c(\lambda)=\sum_{\square\in\lambda}c(\square)$ is the total content.
On the other hand, $D_{N}(\lambda)=c(\lambda)+(N-1)|\lambda|+\binom{N}{3}$. ∎
This provides us with a different perspective for the interpolation problem.
Suppose that we have a Schur expansion for $F_{\lambda}$:
$F_{\lambda}=\sum_{\mu\preceq\lambda}b_{\lambda,\mu}s_{\mu}.$
Then for an arbitrary symmetric function $f(x_{1},\ldots,x_{N})$ we can write
$f=\sum_{\lambda}\frac{(f,F_{\lambda})}{(F_{\lambda},F_{\lambda})}F_{\lambda}=\sum_{\lambda}\sum_{\mu\preceq\lambda}b_{\lambda,\mu}\frac{(f,s_{\mu})}{(F_{\lambda},F_{\lambda})}F_{\lambda}=\sum_{\lambda}\sum_{\mu\preceq\lambda}\frac{b_{\lambda,\mu}s_{\mu}(q^{-N+i})}{(F_{\lambda},F_{\lambda})}f(q^{-\mu_{i}-N+i})F_{\lambda},$
and the interpolation coefficient is equal to
(38)
$d_{\lambda,\mu}=\frac{b_{\lambda,\mu}s_{\mu}(q^{-N+i})}{(F_{\lambda},F_{\lambda})}.$
###### Example 10.23.
For $N=2$ and $\lambda=(3,2)$ we have
$F_{(3,2)}=q^{2}(1-x_{1})(1-qx_{1})(1-x_{2})(1-qx_{2})(q^{3}(x_{1}+x_{2})-(1+q))=\\\
q^{7}s_{3,2}-q^{6}(1+q)s_{3,1}-q^{4}(1+q+q^{2}+q^{3})s_{2,2}+q^{6}s_{3,0}+q^{3}(1+q+q^{2}+q^{3})(1+q)s_{2,1}-\\\
q^{3}(1+q+q^{2}+q^{3})s_{2,0}-q^{2}(1+q+q^{2}+q^{3})(1+q)s_{1,1}+(q^{5}+q^{4}+2q^{3}+q^{2})s_{1,0}-(q^{3}+q^{2}).$
Also
$(F_{3,2},F_{3,2})=q^{-5}(1-q^{4})(1-q^{3})(1-q^{2})^{2}(1-q).$
Therefore the interpolation coefficient for $\lambda=(3,2)$ and $\mu=(1,0)$
equals
$d_{(3,2),(1,0)}=(q^{5}+q^{4}+2q^{3}+q^{2})\frac{s_{1,0}(q^{-1},1)}{(F_{3,2},F_{3,2})}=$
$\frac{(q^{5}+q^{4}+2q^{3}+q^{2})(1+q^{-1})}{q^{-5}(1-q^{4})(1-q^{3})(1-q^{2})^{2}(1-q)}=-\frac{q^{6}(q^{4}+q^{2}-q-1)}{(1-q^{4})(1-q^{3})(1-q^{2})(1-q)^{3}}.$
This agrees with Example 10.21.
### 10.5. Divisibility
Given a polynomial $f(x)$, define
$\partial_{xy}(f):=\frac{f(x)-f(y)}{x-y}.$
Observe that
$\partial_{xy}(fg)=\frac{f(x)-f(y)}{x-y}g(x)+f(y)\frac{g(x)-g(y)}{x-y}=\partial_{xy}f\cdot
g(x)+f(y)\cdot\partial_{xy}(g).$
More generally, we have
(39) $\displaystyle\partial_{xy}(f_{1}\cdots f_{k})$
$\displaystyle=\partial_{xy}(f_{1})f_{2}(x)\cdots
f_{k}(x)+f_{1}(y)\partial_{xy}(f_{2})f_{3}(x)\cdots f_{k}(x)+\ldots$
$\displaystyle+f_{1}(y)f_{2}(y)\cdots\partial_{xy}(f_{k}).$
###### Example 10.24.
For $f_{n}(x)=(1-x)\cdots(1-q^{n-1}x)$, note that
$\partial_{xy}(1-q^{i}x)=-q^{i}$, so we get
$\displaystyle F_{n,0}(x,y)$
$\displaystyle=\partial_{x,y}f_{n+1}(x)=\sum_{i=0}^{n}(1-y)\cdots(1-q^{i-1}y)[\partial_{x,y}(1-q^{i}x)](1-q^{i+1}x)\cdots(1-q^{n}x)$
$\displaystyle=\sum_{i=0}^{n}f_{i}(y)\cdot(-q^{i})f_{n-i}(q^{i+1}x).$
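The expansion above can be verified mechanically for small $n$; the following sympy sketch (ours, not from the paper) checks it for $n\leq 4$.

```python
# Sketch: verify partial_{xy} f_{n+1} = sum_i f_i(y) (-q^i) f_{n-i}(q^{i+1} x)
# for small n, as in Example 10.24.
import sympy as sp

q, x, y = sp.symbols('q x y')

def f(m, t):
    res = sp.Integer(1)
    for i in range(m):
        res *= 1 - q**i * t
    return res

for n in range(1, 5):
    lhs = sp.cancel((f(n + 1, x) - f(n + 1, y)) / (x - y))
    rhs = sum(f(i, y) * (-q**i) * f(n - i, q**(i + 1) * x) for i in range(n + 1))
    assert sp.simplify(lhs - rhs) == 0
print("Example 10.24 verified for n = 1, ..., 4")
```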
###### Example 10.25.
For example,
$F_{1,0}(x,y)=q(x+y)-(1+q)=q(y-1)+(qx-1)=-[qf_{1}(y)+f_{1}(qx)].$
Similarly,
$\displaystyle F_{2,0}(x,y)$
$\displaystyle=-q^{3}(x^{2}+xy+y^{2})+(q+q^{2}+q^{3})(x+y)-(1+q+q^{2})$
$\displaystyle=-[(1-qx)(1-q^{2}x)+q(1-q^{2}x)(1-y)+q^{2}(1-y)(1-qy)]$
$\displaystyle=-[f_{2}(qx)+qf_{1}(x)f_{2}(y)+q^{2}f_{2}(y)].$
###### Corollary 10.26.
For all integers $a$ and $b$ the value $F_{n,0}(q^{a},q^{b})$ is divisible by
$\left(\left\lfloor\frac{n}{2}\right\rfloor\right)_{q}!$
###### Proof.
Let $k=\left\lfloor\frac{n}{2}\right\rfloor$. In the above equation either
$i\geq k$ or $n-i\geq k$, so each term in the sum is either divisible by
$f_{k}(q^{i+1+a})$ or by $f_{k}(q^{b})$, so by the $q$-binomial theorem it is
divisible by $(k)_{q}!$ ∎
More generally, let $\partial_{i}=\partial_{x_{i},x_{i+1}}$; it is well known
that the $\partial_{i}$ satisfy the braid relations, so one can define
$\partial_{w}$ for any permutation $w$. Furthermore,
$F_{\lambda}(x_{1},\ldots,x_{N})=\partial_{w_{0}}[f_{\lambda_{1}+N-1}(x_{1})\cdots
f_{\lambda_{N}}(x_{N})],$
where $w_{0}=(N\ N-1\ \ldots 1)$ is the longest element in $S_{N}$.
###### Lemma 10.27.
For all $\lambda$ one can write $F_{\lambda}(x_{1},\ldots,x_{N})$ as a sum of
terms of the form
(40) $f_{j_{1}}(q^{s_{1}}x_{m_{1}})\cdots f_{j_{d}}(q^{s_{d}}x_{m_{d}}),\
\text{where}\ j_{1}+\ldots+j_{d}=|\lambda|\ \text{and}\ d=\binom{N+1}{2}.$
Here the indices $m_{i}$ might repeat arbitrarily.
###### Proof.
From (39) and Example 10.24 it is clear that $\partial_{i}$ applied to a
product (40) with $\ell$ factors produces a sum of similar products with
$\ell+1$ factors. We start from a product of $N$ factors, and $\partial_{w}$
is a composition of $\binom{N}{2}$ operators $\partial_{i}$, so the terms in
the resulting sum have $N+\binom{N}{2}=\binom{N+1}{2}$ factors. Also, each
$\partial_{i}$ decreases the degree by 1, so
$j_{1}+\ldots+j_{d}=\sum(\lambda_{i}+N-i)-\binom{N}{2}=|\lambda|.$
∎
###### Remark 10.28.
A more careful analysis of this proof leads to a combinatorial formula for
$F_{\lambda}$ where the terms are labeled by semistandard tableaux, but we do
not need it here. This is a $q$-analogue of the expansion of a Schur function
in the monomial basis.
###### Lemma 10.29.
For any sequence of integers $a_{1},\ldots,a_{N}$ the value
$F_{\lambda}(q^{a_{1}},\ldots,q^{a_{N}})$ is divisible by $(k)_{q}!$ where
$k=\left\lfloor\frac{|\lambda|}{\binom{N+1}{2}}\right\rfloor$.
###### Proof.
In each term (40) there are $d=\binom{N+1}{2}$ indices $j_{1},\ldots,j_{d}$
which add up to $|\lambda|$, so at least one of these indices is at least
$|\lambda|/d$. It remains to notice that $f_{j}(q^{a})$ is divisible by
$(j)_{q}!$ for all integers $a$. ∎
The following lemma gives a rough description of the expansion
(41)
$F_{\lambda}(x_{1},\ldots,x_{N})=\sum_{m_{1},\ldots,m_{N}}b_{m_{1},\ldots,m_{N}}f_{m_{1}}(x_{1})\cdots f_{m_{N}}(x_{N})$
of the symmetric interpolation polynomial $F_{\lambda}$ in terms of
nonsymmetric ones.
###### Lemma 10.30.
Given $k$, for sufficiently large $|\lambda|$ every term of the expansion
(41) satisfies the following: either the coefficient $b_{m_{1},\ldots,m_{N}}$
is divisible by $(k)_{q}!$ or $m_{i}\geq k$ for some $1\leq i\leq N$.
###### Proof.
We follow the same logic as in Lemma 10.29. For $|\lambda|>2k\binom{N+1}{2}$
every term (40) is divisible by $f_{2k}(q^{s}x_{i})$ for some $s$ and $i$. By
Lemma 10.5 this can be further decomposed into terms which are divisible by
$(j)_{q}!f_{2k-j}(x_{i})$, and either $j$ or $2k-j$ is greater than or equal
to $k$. Overall, we have presented
$F_{\lambda}(x_{1},\ldots,x_{N})=A(k)_{q}!+\sum B_{i}f_{k}(x_{i})$
for some polynomials $A$ and $B_{i}$. It remains to notice that the polynomial
$B_{i}f_{k}(x_{i})$ can be presented as a sum of products $f_{m_{1}}(x_{1})\cdots
f_{m_{N}}(x_{N})$ with $m_{i}\geq k$. ∎
## 11\. Stability of interpolation and the case $N=2$
### 11.1. Stability of interpolation matrices
In this section we study the dependence of the interpolation polynomials on $N$.
As above, if a partition $\lambda$ has fewer than $N$ parts, we complete it
with zeroes. We denote by $F_{\lambda;N}(x_{1},\ldots,x_{N})$ the
corresponding polynomial in $N$ variables.
###### Lemma 11.1.
Let $\lambda$ be a partition with at most $N$ parts. Then
$F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)=\begin{cases}(-1)^{N-1}q^{\binom{N-1}{2}}F_{\lambda;N-1}(qx_{1},\ldots,qx_{N-1})&\text{if}\
\lambda_{N}=0\\\ 0&\text{otherwise}.\end{cases}$
###### Proof.
Let $\mu$ be a partition with at most $N-1$ parts. Then by Proposition 10.13
$F_{\lambda;N}(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N-1}-1},1)=0$
unless $\mu$ contains $\lambda$. If $\lambda_{N}>0$ then this never happens
and $F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)=0$. If $\lambda_{N}=0$ we write
$L(x_{1},\ldots,x_{N-1})=F_{\lambda;N-1}(qx_{1},\ldots,qx_{N-1})$. We have
$L(q^{-\mu_{1}-N+1},\ldots,q^{-\mu_{N-1}-1})=F_{\lambda;N-1}(q^{-\mu_{1}-(N-1)+1},\ldots,q^{-\mu_{N-1}})$
which vanishes unless $\mu$ contains $\lambda$, so by Proposition 10.13
$F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)$ is proportional to
$L(x_{1},\ldots,x_{N-1})$. Finally, at $\mu=\lambda$ we can use Lemma 10.11 to
determine the coefficient. ∎
###### Remark 11.2.
We can also prove the lemma using the explicit determinantal formula. Indeed,
$f_{\lambda_{i}+N-i}(1)=0$ unless $\lambda_{i}+N-i=0$, which is equivalent
to $i=N$ and $\lambda_{N}=0$. Therefore for $\lambda_{N}\neq 0$ the last column
of the matrix $\left[f_{\lambda_{i}+N-i}(x_{j})\right]$ vanishes (where $x_{N}=1$), and
$F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)=0$. For $\lambda_{N}=0$ we have
$F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)=\frac{\det\left[f_{\lambda_{i}+N-i}(x_{j})\right]_{i,j=1}^{N-1}}{\prod_{i<j\leq
N-1}(x_{i}-x_{j})\prod_{i\leq N-1}(x_{i}-1)}.$
Note that $f_{k+1}(x)=(1-x)f_{k}(qx)$, so
$f_{\lambda_{i}+N-i}(x_{j})=(1-x_{j})f_{\lambda_{i}+(N-1)-i}(qx_{j})$
Therefore
$F_{\lambda;N}(x_{1},\ldots,x_{N-1},1)=\frac{\prod_{i=1}^{N-1}(1-x_{i})\det\left[f_{\lambda_{i}+(N-1)-i}(qx_{j})\right]_{i,j=1}^{N-1}}{\prod_{i<j\leq N-1}(x_{i}-x_{j})\prod_{i\leq N-1}(x_{i}-1)}=$
$(-1)^{N-1}q^{\binom{N-1}{2}}F_{\lambda;N-1}(qx_{1},\ldots,qx_{N-1}).$
###### Corollary 11.3.
Let $c_{\lambda,\mu}^{(N)}$ be the coefficient defined in the previous section for
symmetric functions in $N$ variables. Then the expressions
$(-1)^{\binom{N}{2}}q^{-\binom{N}{3}}c^{(N)}_{\lambda,\mu},(-1)^{\binom{N}{2}}q^{\binom{N}{3}}c^{(N)*}_{\lambda,\mu},(-1)^{\binom{N}{2}}q^{\binom{N}{3}}d^{(N)}_{\lambda,\mu}$
are independent of $N$ (provided that $\lambda$ and $\mu$ have at most $N$
parts).
###### Example 11.4.
For one-row partitions $\lambda=(b)$ and $\mu=(a)$ the interpolation
coefficients are given by the formulas in Example 10.20 up to a monomial
factor.
The above results allow us to describe the Schur expansion of the interpolation
polynomials:
###### Proposition 11.5.
We have
(42)
$F_{\lambda}^{(N)}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}\sum_{\mu\subset\lambda}\overline{b_{\lambda,\mu}}A^{|\mu|}\prod_{\square\in\lambda\setminus\mu}(1-Aq^{c(\square)})s_{\mu}^{(N)}$
where $A=q^{N}$ and the coefficients
$\overline{b_{\lambda,\mu}}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}d^{(N)}_{\lambda,\mu}q^{-|\lambda|-|\mu|-n(\mu)}\prod_{\square\in\mu}(1-q^{h(\square)})$
do not depend on $N$.
###### Proof.
It follows from (38) that
$F_{\lambda}=\sum b_{\lambda,\mu}s_{\mu},\
b_{\lambda,\mu}=\frac{d_{\lambda,\mu}(F_{\lambda},F_{\lambda})}{s_{\mu}(q^{-N+i})}.$
Since $d_{\lambda,\mu}$ vanishes unless $\mu\subset\lambda$, the same is true
for $b_{\lambda,\mu}$. By Corollary 11.3 the product
$\overline{d_{\lambda,\mu}}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}d_{\lambda,\mu}$
does not depend on $N$, and we can use the formulas
$(F_{\lambda},F_{\lambda})=q^{-|\lambda|+2\binom{N}{3}}\prod_{\square\in\lambda}(1-Aq^{c(\square)}),$
$s_{\mu}(q^{-N+i})=q^{n(\mu)-(N-1)|\mu|}\prod_{\square\in\mu}\frac{(1-Aq^{c(\square)})}{(1-q^{h(\square)})}$
to write
$b_{\lambda,\mu}=(-1)^{\binom{N}{2}}q^{-\binom{N}{3}}\overline{d_{\lambda,\mu}}\cdot
q^{-|\lambda|+2\binom{N}{3}-n(\mu)+N|\mu|-|\mu|}\prod_{\square\in\lambda}(1-Aq^{c(\square)})\prod_{\square\in\mu}\frac{(1-q^{h(\square)})}{(1-Aq^{c(\square)})}.$
The result follows. ∎
###### Corollary 11.6.
The one-row interpolation polynomials have the following Schur expansion:
$F_{(m)}^{(N)}=(-1)^{\binom{N}{2}}q^{\binom{N}{3}}\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-3)}{2}}A^{j}\frac{(1-Aq^{j})\cdots(1-Aq^{m-1})}{(1-q)\cdots(1-q^{m-j})}h_{j}^{(N)}=$
$(-1)^{\binom{N}{2}}q^{\binom{N}{3}}\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-3+2N)}{2}}\binom{N+m-1}{m-j}_{q}h_{j}^{(N)}.$
Here $h_{j}^{(N)}=s_{(j)}^{(N)}$ are complete symmetric functions in $N$
variables.
###### Proof.
For $\lambda=(m)$ and $N=1$ we have
$f_{m}(x)=\sum_{j=0}^{m}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{m}{j}_{q}x^{j}.$
By writing $A=q$ and $\mu=(j)$ we get
$A^{|\mu|}\prod_{\square\in\lambda\setminus\mu}(1-Aq^{c(\square)})=q^{j}(1-q^{j+1})\cdots(1-q^{m}),$
so
$\overline{b_{(m),(j)}}=(-1)^{j}\frac{q^{\frac{j(j-3)}{2}}}{(1-q)\cdots(1-q^{m-j})}.$
∎
###### Remark 11.7.
The HOMFLY-PT limit of interpolation polynomials in Proposition 11.5 appears
to be related to the results and conjectures in [22]; it would be interesting
to find a precise connection.
### 11.2. Adding a column
It is well known that in symmetric functions in $N$ variables one has the
identity
$s_{\lambda+1^{N}}=x_{1}\cdots x_{N}\cdot s_{\lambda}.$
Here $\lambda+1^{N}=(\lambda_{1}+1,\ldots,\lambda_{N}+1)$ and the
corresponding Young diagram is obtained from the Young diagram for $\lambda$
by adding a vertical column.
For interpolation polynomials we have two different generalizations of this
identity: the first relates $F_{\lambda+1^{N}}$ to $F_{\lambda}$, and the
second describes the action of multiplication by $x_{1}\cdots x_{N}$.
###### Proposition 11.8.
We have
$F_{\lambda+1^{N}}(x_{1},\ldots,x_{N})=q^{\binom{N}{2}}\prod_{i=1}^{N}(1-x_{i})F_{\lambda}(qx_{1},\ldots,qx_{N})$.
More generally,
(43)
$F_{\lambda+k^{N}}(x_{1},\ldots,x_{N})=q^{k\binom{N}{2}}\prod_{i=1}^{N}f_{k}(x_{i})F_{\lambda}(q^{k}x_{1},\ldots,q^{k}x_{N}).$
###### Proof.
We have $f_{m+1}(x)=(1-x)f_{m}(qx)$, therefore
$\det\left[f_{\lambda_{i}+1+N-i}(x_{j})\right]=\det\left[(1-x_{j})f_{\lambda_{i}+N-i}(qx_{j})\right]=\prod_{j=1}^{N}(1-x_{j})\det\left[f_{\lambda_{i}+N-i}(qx_{j})\right].$
Since each factor $(x_{i}-x_{j})$ in the denominator gets multiplied by $q$
after changing $x_{i}\to qx_{i}$, this implies the first equation. Now (43)
can be obtained by applying it $k$ times. ∎
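As a quick sanity check of (43) (our sketch, not from the paper), one can compare both sides symbolically in the $N=2$ case using the determinantal formula of Section 11.3; this also reproduces the product form of $F_{(3,2)}$ used in Example 10.23.

```python
# Sketch: check (43) for N = 2, lambda = (1,0), k = 2, i.e. that
# F_{(3,2)} = q^2 f_2(x1) f_2(x2) F_{(1,0)}(q^2 x1, q^2 x2).
import sympy as sp

q, x1, x2 = sp.symbols('q x1 x2')

def f(m, x):
    res = sp.Integer(1)
    for i in range(m):
        res *= 1 - q**i * x
    return res

def F(l1, l2):
    det = f(l1 + 1, x1) * f(l2, x2) - f(l1 + 1, x2) * f(l2, x1)
    return sp.cancel(det / (x1 - x2))

lhs = F(3, 2)
rhs = q**2 * f(2, x1) * f(2, x2) * F(1, 0).subs({x1: q**2 * x1, x2: q**2 * x2})
assert sp.simplify(lhs - rhs) == 0
print("(43) holds for lambda = (1,0), k = 2, N = 2")
```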
Let $e_{i}$ denote the $i$-th basis vector in $\mathbb{Z}^{N}$, with $1$ at the
$i$-th position and $0$ at other positions. Given $I\subset\\{1,\ldots,N\\}$,
we define $e_{I}=\sum_{i\in I}e_{i}$.
###### Proposition 11.9.
We have
$x_{1}\cdots
x_{N}F_{\lambda}(x_{1},\ldots,x_{N})=q^{-|\lambda|-\binom{N}{2}}\sum_{I\subset\\{1,\ldots,N\\}}(-1)^{|I|}F_{\lambda+e_{I}}(x_{1},\ldots,x_{N}).$
Here we use the convention that $F_{\lambda+e_{I}}=0$ unless the entries of
$\lambda+e_{I}$ are non-increasing (that is, $\lambda+e_{I}$ is a partition).
###### Proof.
We have $f_{m+1}(x)=f_{m}(x)(1-q^{m}x)$, so
$xf_{m}(x)=q^{-m}(f_{m}(x)-f_{m+1}(x)).$
Therefore
$x_{1}\cdots
x_{N}\det\left[f_{\lambda_{i}+N-i}(x_{j})\right]=\det\left[x_{j}f_{\lambda_{i}+N-i}(x_{j})\right]=$
$\det\left[q^{-\lambda_{i}-N+i}(f_{\lambda_{i}+N-i}(x_{j})-f_{\lambda_{i}+1+N-i}(x_{j}))\right].$
∎
###### Corollary 11.10.
Consider the completion of the space of symmetric functions with coefficients
in $\mathbb{Z}[q,q^{-1}]$ with respect to the basis $F_{\lambda}$. Then the
operator of multiplication by $x_{1}\cdots x_{N}$ is invertible in this
completion and its inverse is given by the equation
$(x_{1}\cdots
x_{N})^{-1}F_{\lambda}(x_{1},\ldots,x_{N})=q^{\binom{N}{2}}\sum_{v\in\mathbb{Z}_{\geq
0}^{N}}q^{|\lambda|+|v|}F_{\lambda+v}(x_{1},\ldots,x_{N}).$
###### Proof.
Define the operators $A_{i}$ by $A_{i}(F_{\lambda})=F_{\lambda+e_{i}}$, and
$p_{i}(F_{\lambda})=q^{\lambda_{i}}F_{\lambda}$ for $i=1,\ldots,N$. Clearly,
$[A_{i},A_{j}]=[p_{i},p_{j}]=[A_{i},p_{j}]=0$ for $i\neq j$, and by Proposition
11.9 we have
$x_{1}\cdots x_{N}=q^{-\binom{N}{2}}\prod_{i}(1-A_{i})p^{-1}_{i},$
hence
$(x_{1}\cdots
x_{N})^{-1}=q^{\binom{N}{2}}\prod_{i}p_{i}(1+A_{i}+A_{i}^{2}+\ldots).$
∎
###### Example 11.11.
For $N=1$ and $\lambda=(0)$ we get a curious identity
$x^{-1}=\sum_{m=0}^{\infty}f_{m}(x)q^{m}.$
We can check this identity directly by computing the values of both sides at
$q^{-j}$ for all $j$. Denote
$u_{j}=\sum_{m=0}^{\infty}f_{m}(q^{-j})q^{m}=\sum_{m=0}^{j}f_{m}(q^{-j})q^{m}.$
Then $u_{j+1}=1+q(1-q^{-j-1})u_{j}$ and $u_{0}=1$, so it is easy to see that
$u_{j}=q^{j}$.
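This recursion is also easy to confirm numerically with exact rational arithmetic; here is a small self-contained sketch (ours, not from the paper):

```python
# Sketch: confirm u_j = q^j, i.e. the identity x^{-1} = sum_m f_m(x) q^m
# evaluated at x = q^{-j}, where the sum truncates since f_m(q^{-j}) = 0
# for m > j.
from fractions import Fraction

def f(m, x, q):
    res = Fraction(1)
    for i in range(m):
        res *= 1 - q**i * x
    return res

q = Fraction(2, 3)   # any rational q other than 0, 1 works for this finite check
for j in range(8):
    u_j = sum(f(m, q**(-j), q) * q**m for m in range(j + 1))
    assert u_j == q**j
print("u_j = q^j verified for j = 0, ..., 7")
```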
### 11.3. Interpolation polynomials for $\mathfrak{gl}_{2}$
In this subsection we describe the interpolation polynomials for
$\mathfrak{gl}_{2}$ explicitly. By definition, we have polynomials
$F_{\lambda}(x_{1},x_{2})$ where $\lambda_{1}\geq\lambda_{2}$:
$F_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=\frac{1}{x_{1}-x_{2}}\left|\begin{matrix}f_{\lambda_{1}+1}(x_{1})&f_{\lambda_{1}+1}(x_{2})\\\
f_{\lambda_{2}}(x_{1})&f_{\lambda_{2}}(x_{2})\end{matrix}\right|$
Let us consider the case $\lambda_{2}=0$ first, and write $\lambda_{1}=k$.
Then
$F_{k,0}(x_{1},x_{2})=\frac{1}{x_{1}-x_{2}}\left|\begin{matrix}f_{k+1}(x_{1})&f_{k+1}(x_{2})\\\
1&1\end{matrix}\right|=\frac{f_{k+1}(x_{1})-f_{k+1}(x_{2})}{x_{1}-x_{2}}.$
Let
$h_{i}(x_{1},x_{2})=\frac{x_{1}^{i+1}-x_{2}^{i+1}}{x_{1}-x_{2}}.$
Recall that
$f_{k+1}(x)=\sum_{j=0}^{k+1}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{k+1}{j}_{q}x^{j}$,
so
$F_{k,0}(x_{1},x_{2})=\sum_{j=1}^{k+1}(-1)^{j}q^{\frac{j(j-1)}{2}}\binom{k+1}{j}_{q}h_{j-1}(x_{1},x_{2}),$
compare with Corollary 11.6. We just replace each $x^{j}$ in the expression
for $f_{k+1}(x)$ by $h_{j-1}(x_{1},x_{2})$.
###### Example 11.12.
We have
$f_{1}(x)=1-x,\ f_{2}(x)=(1-x)(1-qx)=1-(1+q)x+qx^{2},$
$f_{3}(x)=(1-x)(1-qx)(1-q^{2}x)=1-(1+q+q^{2})x+(q+q^{2}+q^{3})x^{2}-q^{3}x^{3}$
so
$F_{0,0}(x_{1},x_{2})=-1,\ F_{1,0}(x_{1},x_{2})=q(x_{1}+x_{2})-(1+q),\ $
$F_{2,0}=-q^{3}(x_{1}^{2}+x_{1}x_{2}+x_{2}^{2})+(q+q^{2}+q^{3})(x_{1}+x_{2})-(1+q+q^{2}).$
By Proposition 11.8 we have
$F_{\lambda_{1},\lambda_{2}}(x_{1},x_{2})=q^{\lambda_{2}}f_{\lambda_{2}}(x_{1})f_{\lambda_{2}}(x_{2})F_{\lambda_{1}-\lambda_{2},0}(q^{\lambda_{2}}x_{1},q^{\lambda_{2}}x_{2}).$
In particular, for $(\lambda_{1},\lambda_{2})=(k,k)$ we have
$F_{k,k}(x_{1},x_{2})=-q^{k}f_{k}(x_{1})f_{k}(x_{2}).$
Also, by Lemma 11.1 we get
(44)
$F_{\lambda_{1},\lambda_{2}}(x_{1},1)=\begin{cases}-f_{\lambda_{1}}(qx_{1})&\text{if}\
\lambda_{2}=0\\\ 0&\text{otherwise}.\end{cases}$
### 11.4. Interpolation tables for $\mathfrak{gl}_{2}$
For the reader’s convenience, we have computed the polynomials
$F_{\lambda}(x_{1},x_{2})$ and the corresponding interpolation matrices using
Sage [35]. First, we present $F_{\lambda}$ in Schur basis:
$F_{0}=-1,\quad F_{1}=qs_{1}-(q+1),\quad
F_{2}=-q^{3}s_{2}+(q^{3}+q^{2}+q)s_{1}-(q^{2}+q+1),\\\
F_{1,1}=-qs_{1,1}+qs_{1}-q=-q(1-x_{1})(1-x_{2})\\\
F_{3}=q^{6}s_{3}-(q^{6}+q^{5}+q^{4}+q^{3})s_{2}+(q^{5}+q^{4}+2q^{3}+q^{2}+q)s_{1}-(q^{3}+q^{2}+q+1)\\\
F_{2,1}=q^{3}s_{2,1}-q^{3}s_{2}-(q^{3}+q^{2}+q)s_{1,1}+(q^{3}+q^{2}+q)s_{1}-(q^{2}+q)\\\
F_{3,1}=-q^{6}s_{3,1}+q^{6}s_{3}+(q^{6}+q^{5}+q^{4}+q^{3})s_{2,1}-(q^{6}+q^{5}+q^{4}+q^{3})s_{2}-\\\
(q^{5}+q^{4}+2q^{3}+q^{2}+q)s_{1,1}+(q^{5}+q^{4}+2q^{3}+q^{2}+q)s_{1}-(q^{3}+q^{2}+q)\\\
F_{2,2}=-q^{4}s_{2,2}+(q^{4}+q^{3})s_{2,1}-q^{3}s_{2}-(q^{4}+q^{3}+q^{2})s_{1,1}+(q^{3}+q^{2})s_{1}-q^{2}\\\
F_{3,2}=q^{7}s_{3,2}-(q^{7}+q^{6})s_{3,1}-(q^{7}+q^{6}+q^{5}+q^{4})s_{2,2}+q^{6}s_{3}+(q^{7}+2q^{6}+2q^{5}+2q^{4}+q^{3})s_{2,1}-\\\
(q^{6}+q^{5}+q^{4}+q^{3})s_{2}-(q^{6}+2q^{5}+2q^{4}+2q^{3}+q^{2})s_{1,1}+(q^{5}+q^{4}+2q^{3}+q^{2})s_{1}-(q^{3}+q^{2})\\\
F_{3,3}=-q^{9}s_{3,3}+(q^{9}+q^{8}+q^{7})s_{3,2}-(q^{8}+q^{7}+q^{6})s_{3,1}-(q^{9}+q^{8}+2q^{7}+q^{6}+q^{5})s_{2,2}+q^{6}s_{3}+\\\
(q^{8}+2q^{7}+2q^{6}+2q^{5}+q^{4})s_{2,1}-(q^{6}+q^{5}+q^{4})s_{2}-(q^{7}+q^{6}+2q^{5}+q^{4}+q^{3})s_{1,1}+(q^{5}+q^{4}+q^{3})s_{1}-q^{3}$
Next, we list the values of the evaluations
$c_{\lambda,\mu}=F_{\lambda}(q^{-\mu_{1}-1},q^{-\mu_{2}})$ for various
$\lambda$ and $\mu$ in Tables 1, 2, 3 below. The resulting matrix ${\sf
C}=(c_{\lambda,\mu})$ is upper-triangular, with diagonal entries prescribed by
Lemma 10.11. Zero entries correspond to pairs $(\lambda,\mu)$ where $\mu$ does
not contain $\lambda$. The entry corresponding to $(\lambda,\mu)=((1),(3,2))$
is marked in bold; it is divisible by $1-q$ but does not factor any further.
Using either Theorem 10.17 or equation (38), one can easily reconstruct the
inverse matrix $D={\sf C}^{-1}$, and we list part of it in Table 4 (see
Examples 10.21 and 10.23 for more computations).
Note that by Corollary 11.3 this determines the coefficients $c_{\lambda,\mu}$
and $d_{\lambda,\mu}$ for $\lambda\subset\mu\subset(3,3)$ and arbitrary $N$.
### 11.5. Link invariants for $\mathfrak{gl}_{2}$
We can use the interpolation tables to expand the invariants of simple knots
in the basis $F_{\lambda}$. Indeed, the colored $\mathfrak{gl}_{2}$ invariants
are determined by the colored $\mathfrak{sl}_{2}$ invariants (that is, colored
Jones polynomial) by the formula
$J_{K}(V(\lambda_{1},\lambda_{2}),q)=J_{K}(V_{\lambda_{1}-\lambda_{2}},q).$
The coefficients $a_{\lambda}(K)$ are then determined by Theorem 1.3:
$a_{\lambda}(K)=\sum_{\mu\subset\lambda}d_{\lambda,\mu}(q^{-1})J_{K}(V(\mu),q).$
For example, for the figure eight knot we have the following values of the
colored Jones polynomial:
$J_{K}(V_{0},q)=1=J_{K}(V({1,1}),q),\
J_{K}(V_{1},q)=J_{K}(V({2,1}),q)=1+q^{2}+q^{-2}-q-q^{-1},$
$J_{K}(V_{2},q)=1+q^{3}+q^{-3}-q-q^{-1}+(q^{3}+q^{-3}-q-q^{-1})(q^{3}+q^{-3}-q^{2}-q^{-2}).$
Using the values of $d_{\lambda,\mu}$ from Table 4 (and changing $q$ to
$q^{-1}$) we obtain
$a_{0}(K)=-J_{K}(V_{0},q)=-1,\
a_{1}(K)=-\frac{q^{-1}}{1-q^{-1}}J_{K}(V_{0},q)+\frac{q^{-1}}{1-q^{-1}}J_{K}(V_{1},q)=q^{-2}(q^{3}-1),$
$a_{2}(K)=-\frac{q^{-2}}{(1-q^{-1})(1-q^{-2})}J_{K}(V_{0},q)+\frac{q^{-2}}{(1-q^{-1})^{2}}J_{K}(V_{1},q)-\\\
\frac{q^{-3}}{(1-q^{-1})(1-q^{-2})}J_{K}(V_{2},q)=q^{-6}(-q^{9}+q^{5}+q^{4}-q^{3}-1),$
$a_{1,1}(K)=-\frac{q^{-3}}{(1-q^{-1})(1-q^{-2})}J_{K}(V_{0},q)+\frac{q^{-2}}{(1-q^{-1})^{2}}J_{K}(V_{1},q)-\\\
\frac{q^{-2}}{(1-q^{-1})(1-q^{-2})}J_{K}(V({1,1}),q)=q^{-2}(q^{2}+q+1),$
$a_{2,1}(K)=-\frac{q^{-4}}{(1-q^{-1})^{2}(1-q^{-3})}J_{K}(V_{0},q)+\frac{q^{-3}}{(1-q^{-1})^{3}}J_{K}(V_{1},q)-\\\
\frac{q^{-4}}{(1-q^{-1})^{2}(1-q^{-2})}J_{K}(V_{2},q)-\frac{q^{-3}}{(1-q^{-1})^{2}(1-q^{-2})}J_{K}(V({1,1}),q)+\\\
\frac{q^{-4}}{(1-q^{-1})^{2}(1-q^{-3})}J_{K}(V({2,1}),q)=q^{-6}(-q^{8}-q^{7}-q^{6}-q^{5}+q^{4}+2q^{3}+q^{2}+q+1).$
Using Tables 1, 2, 3, one can similarly compute the values of $a_{\lambda}(K)$
for all $\lambda\subset(3,3)$ and verify that these are indeed Laurent
polynomials in $q$.
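These computations are straightforward to reproduce by machine. The following sympy sketch (ours, not from the paper) recomputes $a_{1}(K)$ from the Jones values and the Table 4 entries, with $q$ replaced by $q^{-1}$.

```python
# Sketch: recompute a_1(K) for the figure eight knot from
# a_1 = d_{(0),(1)}(q^{-1}) J_K(V_0) + d_{(1),(1)}(q^{-1}) J_K(V_1).
import sympy as sp

q = sp.symbols('q')
J0 = sp.Integer(1)
J1 = 1 + q**2 + q**(-2) - q - q**(-1)

d01 = -(q**(-1)) / (1 - q**(-1))   # d_{(0),(1)} from Table 4, with q -> q^{-1}
d11 = (q**(-1)) / (1 - q**(-1))    # d_{(1),(1)} from Table 4, with q -> q^{-1}

a1 = sp.simplify(d01 * J0 + d11 * J1)
print(sp.factor(a1))   # equals q^{-2}(q^3 - 1), matching the text
```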
$\lambda\backslash\mu$ | (0) | (1) | (2) | (1,1)
---|---|---|---|---
(0) | $-1$ | $-1$ | $-1$ | $-1$
(1) | $0$ | $q^{-1}(1-q)$ | $q^{-2}(1-q^{2})$ | $q^{-1}(1-q^{2})$
(2) | $0$ | $0$ | $-q^{-3}(1-q)(1-q^{2})$ | $0$
(1,1) | $0$ | $0$ | $0$ | $-q^{-2}(1-q)(1-q^{2})$
(3) | $0$ | $0$ | $0$ | $0$
(2,1) | $0$ | $0$ | $0$ | $0$
(3,1) | $0$ | $0$ | $0$ | $0$
(2,2) | $0$ | $0$ | $0$ | $0$
(3,2) | $0$ | $0$ | $0$ | $0$
(3,3) | $0$ | $0$ | $0$ | $0$
Table 1. Evaluations of interpolation polynomials: matrix
$C=(c_{\lambda,\mu})$
$\lambda\backslash\mu$ | (3) | (2,1) | (3,1)
---|---|---|---
(0) | $-1$ | $-1$ | $-1$
(1) | $q^{-3}(1-q^{3})$ | $q^{-2}(1-q^{3})$ | $q^{-3}(1-q^{4})$
(2) | $-q^{-5}(1-q^{2})(1-q^{3})$ | $-q^{-3}(1-q)(1-q^{3})$ | $-q^{-5}(1-q^{2})(1-q^{4})$
(1,1) | $0$ | $-q^{-3}(1-q)(1-q^{3})$ | $-q^{-4}(1-q)(1-q^{4})$
(3) | $q^{-6}(1-q)(1-q^{2})(1-q^{3})$ | $0$ | $q^{-6}(1-q)(1-q^{2})(1-q^{4})$
(2,1) | $0$ | $q^{-4}(1-q)^{2}(1-q^{3})$ | $q^{-6}(1-q)(1-q^{2})(1-q^{4})$
(3,1) | $0$ | $0$ | $-q^{-7}(1-q)^{2}(1-q^{2})(1-q^{4})$
(2,2) | $0$ | $0$ | $0$
(3,2) | $0$ | $0$ | $0$
(3,3) | $0$ | $0$ | $0$
Table 2. Matrix $C=(c_{\lambda,\mu})$, continued
$\lambda\backslash\mu$ | (2,2) | (3,2) | (3,3)
---|---|---|---
(0) | $-1$ | $-1$ | $-1$
(1) | $q^{-2}(1+q)(1-q^{2})$ | $\mathbf{q^{-3}(-q^{4}-q^{3}+q^{2}+1)}$ | $q^{-3}(1+q)(1-q^{3})$
(2) | $-q^{-3}(1-q^{2})(1-q^{3})$ | $-q^{-5}(1-q^{3})(1-q^{4})$ | $-q^{-5}(1+q)(1-q^{3})^{2}$
(1,1) | $-q^{-4}(1-q^{2})(1-q^{3})$ | $-q^{-5}(1-q^{2})(1-q^{4})$ | $-q^{-6}(1-q^{3})(1-q^{4})$
(3) | $0$ | $q^{-6}(1-q)(1-q^{3})(1-q^{4})$ | $q^{-6}(1-q^{2})(1-q^{3})(1-q^{4})$
(2,1) | $q^{-5}(1-q^{2})^{2}(1-q^{3})$ | $q^{-7}(1-q^{2})(1-q^{3})(1-q^{4})$ | $q^{-8}(1+q)(1-q^{2})(1-q^{3})(1-q^{4})$
(3,1) | $0$ | $-q^{-8}(1-q)(1-q^{2})(1-q^{3})(1-q^{4})$ | $-q^{-9}(1-q^{2})(1-q^{3})^{2}(1-q^{4})$
(2,2) | $-q^{-6}(1-q)(1-q^{2})^{2}(1-q^{3})$ | $-q^{-8}(1-q)(1-q^{2})(1-q^{3})(1-q^{4})$ | $-q^{-10}(1-q^{2})(1-q^{3})^{2}(1-q^{4})$
(3,2) | $0$ | $q^{-9}(1-q)^{2}(1-q^{2})(1-q^{3})(1-q^{4})$ | $q^{-11}(1-q^{2})^{2}(1-q^{3})^{2}(1-q^{4})$
(3,3) | $0$ | $0$ | $-q^{-12}(1-q)(1-q^{2})^{2}(1-q^{3})^{2}(1-q^{4})$
Table 3. Matrix $C=(c_{\lambda,\mu})$, continued
$\lambda\backslash\mu$ | (0) | (1) | (2) | (1,1) | (2,1)
---|---|---|---|---|---
(0) | $-1$ | $-\frac{q}{1-q}$ | $-\frac{q^{2}}{(1-q)(1-q^{2})}$ | $-\frac{q^{3}}{(1-q)(1-q^{2})}$ | $-\frac{q^{4}}{(1-q)^{2}(1-q^{3})}$
(1) | $0$ | $\frac{q}{1-q}$ | $\frac{q^{2}}{(1-q)^{2}}$ | $\frac{q^{2}}{(1-q)^{2}}$ | $\frac{q^{3}}{(1-q)^{3}}$
(2) | $0$ | $0$ | $-\frac{q^{3}}{(1-q)(1-q^{2})}$ | $0$ | $-\frac{q^{4}}{(1-q)^{2}(1-q^{2})}$
(1,1) | $0$ | $0$ | $0$ | $-\frac{q^{2}}{(1-q)(1-q^{2})}$ | $-\frac{q^{3}}{(1-q)^{2}(1-q^{2})}$
(2,1) | $0$ | $0$ | $0$ | $0$ | $\frac{q^{4}}{(1-q)^{2}(1-q^{3})}$
Table 4. Interpolation matrix $D=(d_{\lambda,\mu})=C^{-1}$
## 12\. Appendix
Here we collect some useful definitions and facts about Habiro’s ring and
interpolation Macdonald polynomials.
### 12.1. Habiro’s ring
The Habiro ring [17] is defined as
$\widehat{\mathbb{Z}[q]}:=\varprojlim_{n}\frac{\mathbb{Z}[q]}{((q;q)_{n})}.$
Any element of $\widehat{\mathbb{Z}[q]}$ can be presented (not uniquely) as an
infinite series
$f(q)=\sum_{n=0}^{\infty}f_{n}\,(q;q)_{n},\quad f_{n}\in\mathbb{Z}[q].$
Evaluations of such $f(q)$ at all roots of unity are well defined, since if
$q^{s}=1$ one has $f(q)=\sum_{n=0}^{s-1}f_{n}\,(q;q)_{n}$. It is easy to expand
every $f(q)\in\widehat{\mathbb{Z}[q]}$ into formal power series in $(q-1)$,
denoted by $T(f)$ and called the Taylor series of $f(q)$ at $q=1$. One
important property of the Habiro ring is that any
$f\in\widehat{\mathbb{Z}[q]}$ is uniquely determined by its Taylor series. In
other words, the map $T:\widehat{\mathbb{Z}[q]}\to\mathbb{Z}[[q-1]]$ is
injective [17, Thm 5.4]. In particular, $\widehat{\mathbb{Z}[q]}$ is an
integral domain. Moreover, every $f\in\widehat{\mathbb{Z}[q]}$ is determined
by the values of $f$ at any infinite set of roots of unity of prime power
order. Because of these properties, the Habiro ring is also known as the ring
of analytic functions at roots of unity.
Since $\cap_{n\geq 0}I_{n}=0$ with $I_{n}=(q;q)_{n}\mathbb{Z}[q]$, the natural
map $\mathbb{Z}[q]\to\widehat{\mathbb{Z}[q]}$ is injective. The image of $q$
under this map is invertible, and the inverse is given by
$q^{-1}=\sum_{n=0}^{\infty}q^{n}(q;q)_{n},$
compare with Example 11.11. This implies that there is an injective map
$\mathbb{Z}[q,q^{-1}]\to\widehat{\mathbb{Z}[q]}$. The following result is
proved in [17, Proposition 7.5], but we give a slightly different proof here
for the reader’s convenience. We will denote by $\Phi_{n}(q)$ the $n$th
cyclotomic polynomial
$\Phi_{n}(q)=\prod_{(a,n)=1}\left(q-\zeta^{a}_{n}\right)$ where $\zeta_{n}$ is
any primitive $n$th root of unity.
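Returning to the series for $q^{-1}$ above, it is easy to test at roots of unity, where it truncates as explained earlier. Here is a brief self-contained sketch (ours, not from [17]):

```python
# Sketch: check q * sum_{n=0}^{s-1} q^n (q;q)_n = 1 at primitive s-th roots
# of unity, using that (q;q)_n = 0 for n >= s when q^s = 1.
import cmath

for s in (1, 2, 3, 5, 8, 12):
    q = cmath.exp(2j * cmath.pi / s)
    total, poch = 0, 1          # poch holds (q;q)_n, starting from (q;q)_0 = 1
    for n in range(s):
        total += q**n * poch
        poch *= 1 - q**(n + 1)
    assert abs(q * total - 1) < 1e-9
print("q^{-1} series verified at roots of unity of order 1, 2, 3, 5, 8, 12")
```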
###### Proposition 12.1.
Suppose that $f(q)\in\widehat{\mathbb{Z}[q]}$ and
$f(q)h(q)\in\mathbb{Z}[q,q^{-1}]$ for some product of cyclotomic polynomials
$h(q)=\Phi_{n_{1}}(q)\cdots\Phi_{n_{r}}(q)$. Then
$f(q)\in\mathbb{Z}[q,q^{-1}]$.
###### Proof.
Let us denote $g(q)=f(q)h(q)\in\mathbb{Z}[q,q^{-1}]$; we prove the statement
by induction on $r$. For $r=1$ we get $h(q)=\Phi_{n}(q)$ and
$g(q)=f(q)\Phi_{n}(q)$, so for any primitive $n$-th root of unity $\zeta_{n}$
we have $g(\zeta_{n})=f(\zeta_{n})\Phi_{n}(\zeta_{n})=0$, so
$g(q)=\alpha(q)\Phi_{n}(q)$ for some $\alpha\in\mathbb{Z}[q,q^{-1}]$. This
implies $(f(q)-\alpha(q))\Phi_{n}(q)=0$, and since $\widehat{\mathbb{Z}[q]}$
is an integral domain we get $f(q)=\alpha(q)$.
For $r>1$ we get
$f(q)\Phi_{n_{1}}(q)\cdots\Phi_{n_{r}}(q)\in\mathbb{Z}[q,q^{-1}],$
so by the above
$f(q)\Phi_{n_{1}}(q)\cdots\Phi_{n_{r-1}}(q)\in\mathbb{Z}[q,q^{-1}],$
and by the induction hypothesis $f(q)\in\mathbb{Z}[q,q^{-1}]$. ∎
### 12.2. Interpolation Macdonald polynomials
We consider partitions with at most $N$ parts.
###### Theorem 12.2.
[23, 24, 29, 30, 31, 32, 36] There exists a family of symmetric polynomials
$I_{\lambda}(x_{1},\ldots,x_{N};q,t)$, unique up to scalar factors, with the
following properties:
* (a)
$I_{\lambda}(q^{-\mu_{i}}t^{N-i})=0$ unless $\mu$ contains $\lambda$
* (b)
$I_{\lambda}(q^{-\lambda_{i}}t^{N-i})\neq 0$
* (c)
$I_{\lambda}$ is a nonhomogeneous polynomial of degree $|\lambda|$, and its
degree $|\lambda|$ part is proportional to the Macdonald polynomial
$P_{\lambda}(x_{1},\ldots,x_{N};q,t)$.
The polynomials $I_{\lambda}$ are called interpolation Macdonald polynomials.
In fact, the properties (a) and (b) already uniquely determine $I_{\lambda}$
(up to a scalar), and their existence follows from the fact that the points
$q^{-\lambda_{i}}t^{N-i}$ form a nondegenerate grid in the sense of [31]. Part
(c) is then a deep property of these polynomials.
It is easy to see that at $q=t$ the interpolation Macdonald polynomials
$I_{\lambda}$ specialize to $F_{\lambda}$. Unlike for $F_{\lambda}$, there is
no determinantal formula for $I_{\lambda}$, but there is a different
combinatorial formula [30].
## References
* [1] A. Beliakova, C. Blanchet, T. Lê, Unified quantum invariants and their refinements for homology 3–spheres with 2–torsion, Fund. Math. 201 (2008) 217–239.
* [2] A. Beliakova, I. Bühler, T. Lê, A unified quantum $SO(3)$ invariant for rational homology 3-spheres, Invent. Math. 185 (2011) 121–174
* [3] A. Beliakova, Q. Chen, T. Lê. On the integrality of the Witten-Reshetikhin-Turaev 3-manifold invariants. Quantum Topol. 5 (2014), no. 1, 99–141.
* [4] A. Beliakova, K. Hikami, Non-semisimple invariants and Habiro’s series. arXiv:2009.13285
* [5] A. Beliakova, T. Lê. Integrality of quantum 3-manifold invariants and a rational surgery formula. Compos. Math. 143 (2007), no. 6, 1593–1612.
* [6] A. Beliakova, K. Putyra, S. Wehrli. Quantum link homology via trace functor I. Invent. Math. 215 (2019), no. 2, 383–492.
* [7] E. Carlsson, N. Nekrasov, A. Okounkov. Five dimensional gauge theories and vertex operators. Mosc. Math. J. 14 (2014), no. 1, 39–61, 170.
* [8] V. Chari, A. Pressley. A Guide to Quantum Groups. Cambridge University Press, 1995.
* [9] V. Drinfeld. Almost cocommutative Hopf algebras. Leningrad Math. J. 1 (1990), no. 2, 321–342.
* [10] E. Gorsky, M. Hogancamp. Hilbert schemes and $y$-ification of Khovanov-Rozansky homology. arXiv:1712.03938
* [11] E. Gorsky, M. Hogancamp, P. Wedrich. Derived traces of Soergel categories. To appear in International Mathematics Research Notices. arXiv:2002.06110
* [12] E. Gorsky, A. Negut, J. Rasmussen. Flag Hilbert schemes, colored projectors and Khovanov-Rozansky homology. Advances in Mathematics 378 (2021) 107542.
* [13] E. Gorsky, P. Wedrich. Evaluations of annular Khovanov-Rozansky homology. arXiv:1904.04481
* [14] R. B. Zhang, M. D. Gould, A. J. Bracken. Generalized Gelfand invariants of quantum groups. J. Phys. A 24 (1991), no. 5, 937–943.
* [15] K. Habiro. A unified Witten-Reshetikhin-Turaev invariant for integral homology spheres. Invent. Math. 171 (2008), no. 1, 1–81.
* [16] K. Habiro. Bottom tangles and universal invariants, Algebr. Geom. Topol. 6 (2006) 1113–1214.
* [17] K. Habiro. Cyclotomic completions of polynomial rings. Publ. Res. Inst. Math. Sci. 40 (2004), no. 4, 1127–1146.
* [18] K. Habiro. An integral form of the quantized enveloping algebra of $\mathfrak{sl}_{2}$ and its completions, J. Pure Applied Algebra 211(1) (2007) 265–292
* [19] K. Habiro, T. Lê. Unified quantum invariants for integral homology spheres associated with simple Lie algebras. Geom. Topol. 20 (2016), no. 5, 2687–2835.
* [20] J.C. Jantzen, Lectures on quantum groups, Graduate Studies in Mathematics, 6. American Mathematical Society, Providence, RI, 1996.
* [21] G. James, A. Kerber. The representation theory of the symmetric group. Encyclopedia of Mathematics and its Applications, 16. Addison-Wesley Publishing Co., Reading, Mass., 1981.
* [22] M. Kameyama, S. Nawata, R. Tao, H. D. Zhang. Cyclotomic expansions of HOMFLY-PT colored by rectangular Young diagrams. Lett. Math. Phys. 110 (2020), no. 10, 2573–2583.
* [23] F. Knop, Symmetric and non-symmetric quantum Capelli polynomials, Comment. Math. Helv. 72 (1997), 84–100.
* [24] F. Knop and S. Sahi, Difference equations and symmetric polynomials defined by their zeros, Internat. Math. Res. Notices (1996), no. 10, 473–486.
* [25] G. Lusztig, Introduction to quantum groups. Progress in Mathematics, 110. Birkhäuser Boston, Inc., Boston, MA, 1993.
* [26] I. G. Macdonald, Schur functions: theme and variations. Séminaire Lotharingien de Combinatoire 28 (1992), paper B28a, 35 pp.
* [27] I. G. Macdonald, Symmetric functions and Hall polynomials, 2nd ed. Oxford Univ. Press, 1995.
* [28] A. Molev, Comultiplication rules for the double Schur functions and Cauchy identities. Electr. J. Comb. 16 (2009), paper R13, 44 pp.
* [29] A. Okounkov. Binomial formula for Macdonald polynomials and applications. Math. Res. Lett. 4 (1997), no. 4, 533–553.
* [30] A. Okounkov. (Shifted) Macdonald polynomials: $q$-integral representation and combinatorial formula. Compositio Math. 112 (1998), no. 2, 147–182.
* [31] A. Okounkov. On Newton interpolation of symmetric functions: a characterization of interpolation Macdonald polynomials. Adv. in Appl. Math. 20 (1998), no. 4, 395–428.
* [32] G. Olshanski. Interpolation Macdonald polynomials and Cauchy-type identities. J. Combin. Theory Ser. A 162 (2019), 65–117.
* [33] H. Queffelec, D. Rose. Sutured annular Khovanov-Rozansky homology. Trans. Amer. Math. Soc. 370 (2018), no. 2, 1285–1319.
* [34] L.-H. Robert, E. Wagner. Symmetric Khovanov-Rozansky link homologies. J. Ec. Polytech. Math. 7 (2020), 573–651.
* [35] SageMath, the Sage Mathematics Software System. https://www.sagemath.org.
* [36] S. Sahi, Interpolation, integrality, and a generalization of Macdonald’s polynomials. Intern. Math. Research Notices 1996, no. 10, 457–471.
* [37] S. Willetts. A unification of the ADO and colored Jones polynomials of a knot. arXiv:2003.09854.
# Moderate Deviations in Triangle Count
Joe Neeman, Charles Radin, and Lorenzo Sadun
Joe Neeman
Department of Mathematics
The University of Texas at Austin
Austin, TX 78712<EMAIL_ADDRESS>Charles Radin
Department of Mathematics
The University of Texas at Austin
Austin, TX 78712<EMAIL_ADDRESS>Lorenzo Sadun
Department of Mathematics
The University of Texas at Austin
Austin, TX 78712<EMAIL_ADDRESS>
###### Abstract.
We prove moderate deviations bounds for the lower tail of the number of
triangles in a $\mathcal{G}(n,m)$ random graph. We show that the probability
of decreasing the triangle density by $t$, with $n^{-3/4}\ll t\ll 1$, is
$\exp(-\Theta(n^{2}t^{2/3}))$; we find the leading coefficient in the exponent
for $m\geq\frac{1}{2}\binom{n}{2}$, and estimate it otherwise. This
complements results of Goldschmidt et al., who showed that for $n^{-3/2}\ll
t\ll n^{-1}$, the probability is $\exp(-\Theta(n^{3}t^{2}))$. That is,
moderate deviations behave much like small deviations for $t\ll n^{-1}$ and
much like large deviations for $n^{-3/4}\ll t$. We conjecture a sharp change
between the two regimes at $t=\Theta(n^{-3/4})$, which we associate with a
single large negative eigenvalue of the adjacency matrix becoming responsible
for almost all of the triangle deficit.
Our results can be interpreted as finite size effects in phase transitions in
constrained edge-triangle graphs.
This work was partially supported by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) under Germany’s Excellence Strategy – EXC-2047/1 –
390685813, and by a fellowship from the Alfred P. Sloan Foundation.
## 1\. Introduction
We prove moderate deviations bounds for the lower tail of the number of
triangles in a $\mathcal{G}(n,m)$ random graph, for deviations larger than
those of Goldschmidt et al. [1] but smaller than large deviations, which are
of order the mean of the triangle density. For instance, with the notation
that $\tau(G)$ is the triangle density of a $\mathcal{G}(n,m)$ graph $G$ where
$n\to\infty$ and $m=p\binom{n}{2}+O(1)$, for some $1/2\leq p<1$ that is fixed
as $n\to\infty$, and for $n^{-3/4}\ll t\ll 1$, we prove (see Theorem 1) that
(1) $\Pr\left(\tau(G)\leq
p^{3}-t\right)=\exp\left(-\frac{\ln\frac{p}{1-p}}{2(2p-1)}t^{2/3}n^{2}+o(t^{2/3}n^{2})\right).$
The number of triangles in a random graph is a fundamental and surprisingly
important random variable in the study of probabilistic combinatorics. The
probabilistic behavior of these triangle counts is at least partially
responsible for the development of many important methods related to
concentration inequalities for dependent random variables, including Janson’s
inequality [2], the entropy method [3], martingale difference techniques in
random graphs, and others [4].
The traditional point of view, as exemplified by the seminal paper by Janson
and Ruciński [5], holds that the lower tail of the triangle count is easy to
characterize while the upper tail is hard. This view stems at least partly
from the fact that most earlier works studied the $\mathcal{G}(n,p)$ model, in
which edges appear independently, each with probability $p$. In the
$\mathcal{G}(n,m)$ model, in which the number of edges is fixed at $m$, the
situation is rather more subtle. For example, one can easily see that under
$\mathcal{G}(n,p)$, the number of triangles, $T(G)$, satisfies
$\operatorname{Var}(T(G))=\Theta(n^{4})$, while under $\mathcal{G}(n,m)$,
$\operatorname{Var}(T(G))=\Theta(n^{3})$. The distinction between the two
models – especially in the lower tail – becomes even more pronounced at larger
deviations. This can be intuitively explained by the fact that in
$\mathcal{G}(n,p)$ one can easily “depress” the triangle count simply by
reducing the number of edges: a graph $G$ with edge number $|E(G)|\approx
q\binom{n}{2}$ will typically have triangle density $\tau\approxeq q^{3}$, and
the probability of seeing such a graph under $\mathcal{G}(n,p)$ is of the
order $\exp(-\Theta(n^{2}(p-q)^{2}))$; it follows that under
$\mathcal{G}(n,p)$ we have
(2) $\Pr(\tau(G)\leq\mathbb{E}\tau(G)-t)\geq\exp(-\Omega(n^{2}t^{2})).$
Under $\mathcal{G}(n,m)$, large deficits in the triangle density are much
rarer than they are in $\mathcal{G}(n,p)$. At the scale of constant-order
deficits, this was noticed in [6, 7], where it is proved that for
$t=\Theta(1)$ and $\mathcal{G}(n,m)$ with $m=\Theta(n^{2})$,
(3) $\Pr(\tau(G)\leq\mathbb{E}\tau(G)-t)=\exp(-\Theta(n^{2}t^{2/3})).$
(They also found the exact leading-order term in the exponent when
$m=\frac{1}{2}\binom{n}{2}+o(n^{2})$ and bounded the leading-order coefficient
for all other values of $m$.) At the other end of the scale, a recent result
of Goldschmidt et al. [1] showed that for $n^{-3/2}\ll t\ll n^{-1}$ the lower
triangle tail has a different behavior:
(4) $\Pr(\tau(G)\leq\mathbb{E}\tau(G)-t)=\exp(-\Theta(n^{3}t^{2})).$
(Again, they also found the exact leading-order term in the exponent.) Since
$t=O(n^{-3/2})$ is within the range of the Central Limit Theorem, this
leaves open the case of $n^{-1}\ll t\ll 1$. Noting that the two exponential
rates (namely $n^{2}t^{2/3}$ and $n^{3}t^{2}$) cross over at
$t=\Theta(n^{-3/4})$, it is natural to guess that
(5)
$\Pr(\tau(G)\leq\mathbb{E}\tau(G)-t)=\begin{cases}\exp(-\Theta(n^{3}t^{2}))&\text{if
$t\ll n^{-3/4}$},\\\ \exp(-\Theta(n^{2}t^{2/3}))&\text{if $n^{-3/4}\ll t\ll
1$}.\end{cases}$
We prove the second of these two cases (see Theorem 2); the first remains a
conjecture. We also prove some structural results on graphs with
$\tau(G)\leq\mathbb{E}\tau(G)-\omega(n^{-3/4})$. These structural results
provide a plausible explanation for the importance of $t=\Theta(n^{-3/4})$,
namely that it is the threshold at which a single large negative eigenvalue of
the adjacency matrix becomes responsible for almost all of the triangle
deficit.
## 2\. Context and references
We are concerned with random graphs, more specifically with
$\mathcal{G}(n,m)$, the uniform distribution on graphs on $n$ nodes with $m$
edges, rather than the more common model $\mathcal{G}(n,p)$, in which the
edges are independent with probability $p$. For either of these random graph
models one can study the distribution of subgraph counts, for instance
triangle counts or triangle density (density being the quotient of the count
by the count in the complete graph) with which we are concerned. Results on
the probability of deviations of triangle density from the mean fall into
three classes by size: small deviations, on the order of the standard
deviation, large deviations, on the order of the mean, and moderate
deviations, of intermediate size.
Our main results concern the moderate regime of deviations of triangle density
in $\mathcal{G}(n,m)$, in which we prove, among other things, that deviations
near but below the large class are qualitatively different from deviations
near but above the small class. We know of no other results of this sort, for
$\mathcal{G}(n,m)$ or $\mathcal{G}(n,p)$.
For small deviations there is a long history under the name Central Limit
Theorem. There are also many papers on moderate and large deviations of
subgraph counts. As background, more specifically for results discussed here,
we suggest the following: [8, 9, 10, 11, 12, 13, 14, 15, 16] and references
within them for a broader view. As our results are strongly colored by large
deviations we note in particular [17].
For convenience we note some common asymptotics notation. We use $f=o(g)$ or
$f\ll g$ to mean $\lim|f(n)|/g(n)=0$, $f=O(g)$ to mean $\limsup
f(n)/g(n)<\infty$, $f=\Omega(g)$ to mean $\liminf f(n)/g(n)>0$, $f=\omega(g)$
or $f\gg g$ to mean $\lim|f|/g=\infty$, and $f=\Theta(g)$ to mean both
$f=O(g)$ and $f=\Omega(g)$. The phrase “with high probability” means “with
probability converging to 1 as $n\to\infty$,” and we also make use of
probabilistic asymptotic notation: “$f=O(g)$ with high probability” means that
for every $\epsilon>0$ there exists $C>0$ with $\limsup\Pr(f\geq
Cg)\leq\epsilon$; “$f=o(g)$ with high probability” means that for every
$\epsilon>0$, $|f|/g\leq\epsilon$ with high probability; and analogously for
$\Omega$ and $\omega$.
More specifically we are studying the triangle density of $\mathcal{G}(n,m)$
graphs in the range $\tau(G)=p^{3}-t$ for $n^{-3/4}\ll t\ll 1$. The case
$t=O(n^{-3/2})$ is within the range of the Central Limit Theorem
(and it is covered by Janson’s more general work on subgraph statistics [18]).
The range $n^{-3/2}\ll t\ll n^{-1}$ was studied in [1]; they showed that in
this regime
(6) $\Pr(\tau(G)\leq
p^{3}-t)=\exp\left(-\frac{t^{2}n^{3}}{2\sigma_{p}^{2}}(1+o(1))\right),$
where $\sigma_{p}^{2}=n^{3}\operatorname{Var}(\tau(G))$, which is of constant
order. They also show an upper bound for larger $t$: for $n^{-3/2}\ll t\ll 1$,
(7) $\Pr(\tau(G)\leq p^{3}-t)=\exp\left(-\Omega(t^{2}n^{3})\right).$
We show that this is not tight for $n^{-3/4}\ll t\ll 1$. For example, we show
that in this range,
(8) $\Pr(\tau(G)\leq p^{3}-t)=\exp\left(-\Theta(t^{2/3}n^{2})\right).$
In the case $p\geq\frac{1}{2}$, we also derive more detailed results (see
Theorem 1): we identify the leading constant in the exponent and prove some
results on the spectrum of the adjacency matrix of $G$.
### 2.1. Related work on random graphs
Besides the work of [1], there is related work on large deviation principles
(LDPs) for more general statistics, and LDPs for sparser graphs, notably in
[19].
Moderate deviations in triangle count in $\mathcal{G}(n,m)$ can be seen from a
different vantage based on [20]. That paper follows a series of works [6, 7,
24, 22, 23, 25, 26] on the asymptotics of ‘constrained’ random graphs, in
particular the asymptotics of $\mathcal{G}(n,m,t)$, the uniform distribution
on graphs on $n$ nodes constrained to have $m$ edges and $t$ triangles. A
large deviation principle, using optimization over graphons, a variant of the
seminal work [27] by Chatterjee and Varadhan on large deviations in
$\mathcal{G}(n,p)$, was used to prove various features of phase transitions
between asymptotic ‘phases’, phases illustrated by the entropy-optimal
graphons. (See also [28].) But in [20] numerical evidence showed that the
transitions could be clearly seen in finite systems, using constrained graphs
with as few as 30 vertices. From this perspective moderate deviations in
triangle count can be understood as finite size effects in a phase transition.
Asymptotically, entropy goes through a sharp ridge as the edge
density/triangle density pair $(\varepsilon,\tau)$ passes through
$(\varepsilon,\varepsilon^{3})$ (Thms. 1.1,1.2 in [7]), and moderate
deviations quantify how the sharp ridge rounds off at finite node number,
somewhat as an ice cube freezing in water has rounded edges. The focus thus
shifts to the infinite system, where emergent phases are meaningful, away from
$\mathcal{G}(n,m,t)$ or $\mathcal{G}(n,m)$.
### 2.2. Related work on random matrices
Since we are studying the spectrum of the adjacency matrix, our methods mainly
come from random matrix theory. Specifically, we are interested in large
deviations of eigenvalues of the random adjacency matrices coming from our
random graphs. The study of large deviations of eigenvalues is an active
topic, but the results we aim for are somewhat atypical. Traditionally, “large
deviations” refers to deviations on the order of the mean, so large deviations
results for random matrices typically consider the event that the largest
eigenvalue of a symmetric $n\times n$ matrix with i.i.d. mean-zero,
variance-$\sigma^{2}$ entries is of order $\alpha\sqrt{n}$ for
$\alpha>2\sigma$; this is because the typical value of the largest eigenvalue
is of order $2\sigma\sqrt{n}$. However, because an eigenvalue of order
$n^{\beta}$ contributes $n^{3\beta}$ to the triangle count, and because we are
interested in triangle deviation of orders $n^{9/4}$ through $n^{3}$, we are
necessarily interested in much larger eigenvalues.
Another difference in our work is that we consider several large eigenvalues
simultaneously. This is because we need to consider the possibility that the
triangle count is affected by several atypically large eigenvalues instead of
just one.
In related work,
* •
Guionnet and Husson [29] showed an LDP for the largest eigenvalue for a family
of random matrices that includes Rademacher matrices, which is essentially the
case that we consider when $p=\frac{1}{2}$.
* •
Augeri [30] showed an LDP for the largest eigenvalue for random matrices whose
entries have heavier-than-Gaussian tails.
* •
Bhattacharya and Ganguly [31] showed an LDP for the largest eigenvalue of an
Erdős-Rényi graph. Their setting differs from the others in that the random
matrices they consider are not centered (which makes a big difference when
studying the largest eigenvalue).
* •
Augeri, Guionnet, and Husson [32] showed an LDP for the largest eigenvalue for
most random matrices with subgaussian elements. These are essentially the same
random matrices that we consider, with the main difference being that they are
looking at eigenvalues of size $\Theta(\sqrt{n})$.
## 3\. Triangle counts
Our general setting is: we let $A$ be the adjacency matrix of a
$\mathcal{G}(n,m)$ graph, where $n\to\infty$ and $m=p\binom{n}{2}+O(1)$, for
some $p\in(0,1)$ that is fixed as $n\to\infty$. We denote by $\tau(A)$
the triangle density of $A$, and by $\lambda_{n}(A)$ the smallest eigenvalue
of $A$.
We prove two theorems governing asymptotic behavior as $n\to\infty$ and
$n^{-3/4}\ll t\ll 1$. The first is a strong result for $\frac{1}{2}\leq p<1$.
###### Theorem 1.
If $\frac{1}{2}\leq p<1$ and $n^{-3/4}\ll t\ll 1$ then
(9) $\Pr\left(\tau(A)\leq
p^{3}-t\right)=\exp\left(-\frac{\ln\frac{1-p}{p}}{2(1-2p)}t^{2/3}n^{2}+o(t^{2/3}n^{2})\right),$
with the convention that $\frac{\ln\frac{1-p}{p}}{1-2p}=2$ when
$p=\frac{1}{2}$. Moreover, conditioned on $\tau(A)\leq p^{3}-t$, with high
probability we have
(10) $\lambda_{n}^{3}(A)=-tn^{3}(1-o(1))$
and $\lambda_{n-1}^{3}(A)\geq-o(tn^{3})$.
The second result, for $0<p\leq\frac{1}{2}$, is weaker.
###### Theorem 2.
If $0<p\leq\frac{1}{2}$ and $n^{-3/4}\ll t\ll 1$ then $\Pr\left(\tau(G)\leq
p^{3}-t\right)$ is bounded above by
(11)
$\exp\left(-\frac{\ln\frac{p}{1-p}}{2(2p-1)}t^{2/3}n^{2}+o(t^{2/3}n^{2})\right)$
and bounded below by
(12) $\exp\left(-\frac{1}{2p(1-p)}t^{2/3}n^{2}+o(t^{2/3}n^{2})\right).$
Moreover, conditioned on $\tau(A)\leq p^{3}-t$, with high probability we have
(13) $\lambda_{n}^{3}(A)=-\Omega(tn^{3}).$
Together, these theorems show that $\Pr(\tau(A)\leq
p^{3}-t)=\exp(-\Theta(t^{2/3}n^{2}))$ for all $0<p<1$.
### 3.1. Centering the matrix
The main point of this section is that when considering the lower tail for
triangle counts in $\mathcal{G}(n,m)$ graphs, it suffices to look at
eigenvalues of the centered adjacency matrix. This might sound obvious, but
there are two subtleties:
1. (1)
It is important that we are looking at the lower tail, because the upper tail
probabilities are controlled by perturbations to the largest eigenvector; this
is exactly the eigenvector that gets destroyed when we center the adjacency
matrix, so the eigenvalues of the centered adjacency matrix don’t give much
information about the upper tail probabilities.
2. (2)
It is important that we are looking at $\mathcal{G}(n,m)$ and not
$\mathcal{G}(n,p)$, because – as discussed in the introduction – in
$\mathcal{G}(n,p)$ the entropically favorable way to reduce the triangle count
is to reduce the number of edges; again, this primarily affects the largest
eigenvector and so is not related to the centered adjacency matrix.
###### Lemma 3.
Let $A$ be the adjacency matrix of a graph with $n$ vertices and $m$ edges.
For any $p\in\mathbb{R}$,
(14)
$\operatorname{tr}[(A-p\mathbf{1}+pI)^{3}]=\operatorname{tr}[A^{3}]-p^{3}n^{3}+p^{3}n+6mp(np-2p+1)+3p^{3}n(n-1)-3p\sum_{i}d_{i}^{2},$
where $d_{i}$ is the degree of vertex $i$. Or, if $p=m/\binom{n}{2}$, then
(15)
$\operatorname{tr}[(A-p\mathbf{1}+pI)^{3}]\leq\operatorname{tr}[A^{3}]-p^{3}n^{3}+p^{3}n+6mp.$
Note that if $A$ is sampled from $\mathcal{G}(n,m)$ and $p=m/\binom{n}{2}$
then $\mathbb{E}A=p\mathbf{1}-pI$, and so the quantity of interest in Lemma 3
is in fact the centered adjacency matrix $A-\mathbb{E}A$.
###### Proof.
We expand everything in gory detail:
$\displaystyle\operatorname{tr}[(A-p\mathbf{1}+pI)^{3}]-\operatorname{tr}[A^{3}]+p^{3}\operatorname{tr}[\mathbf{1}^{3}]-p^{3}\operatorname{tr}[I^{3}]$
$\displaystyle=3\operatorname{tr}[-pA^{2}\mathbf{1}+p^{2}A\mathbf{1}^{2}+pA^{2}I+p^{2}AI^{2}+p^{3}\mathbf{1}^{2}I-p^{3}\mathbf{1}I^{2}-2p^{2}A\mathbf{1}I]$
$\displaystyle=3\operatorname{tr}[-pA^{2}\mathbf{1}+np^{2}A\mathbf{1}+pA^{2}+p^{2}A+np^{3}\mathbf{1}-p^{3}\mathbf{1}-2p^{2}A\mathbf{1}]$
$\displaystyle=3(-p\operatorname{tr}[A^{2}\mathbf{1}]+2(n-2)p^{2}m+2pm+0+n^{2}p^{3}-np^{3}),$
where we used the fact that $\operatorname{tr}[A\mathbf{1}]$ is the sum of the
entries of $A$ and $\operatorname{tr}[A^{2}]$ is the sum of squares of entries
of $A$; since $A$ is an adjacency matrix, both of these are $2m$. Finally,
(16)
$\operatorname{tr}[A^{2}\mathbf{1}]=\sum_{i,k}(A^{2})_{ik}=\sum_{i,j,k}A_{ij}A_{jk}=\sum_{j}d_{j}^{2}.$
This proves the equality. For the inequality, Cauchy-Schwarz implies that
$\sum_{i}d_{i}^{2}\geq\frac{1}{n}(\sum_{i}d_{i})^{2}=\frac{4m^{2}}{n}=2mp(n-1)$.
After applying this inequality and rewriting $3p^{3}n(n-1)$ as $6p^{2}m$, we
obtain the inequality. ∎
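Since (14) is a purely algebraic identity, it can be spot-checked numerically. Below is a brief numpy sketch (ours, not part of the proof); $p$ is deliberately chosen different from the edge density, since the equality holds for any $p\in\mathbb{R}$.

```python
# Sketch: verify identity (14) on a random graph; it holds for any real p
# and any graph, not only for p = m / binom(n, 2).
import numpy as np

rng = np.random.default_rng(0)
n, p = 12, 0.4
A = np.triu((rng.random((n, n)) < 0.5).astype(float), k=1)
A = A + A.T                                  # adjacency matrix, zero diagonal
m = A.sum() / 2                              # number of edges
d = A.sum(axis=1)                            # degree sequence
J, I = np.ones((n, n)), np.eye(n)

lhs = np.trace(np.linalg.matrix_power(A - p * J + p * I, 3))
rhs = (np.trace(np.linalg.matrix_power(A, 3)) - p**3 * n**3 + p**3 * n
       + 6 * m * p * (n * p - 2 * p + 1) + 3 * p**3 * n * (n - 1)
       - 3 * p * (d @ d))
assert np.isclose(lhs, rhs)
print("identity (14) verified")
```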
Combining Lemma 3 with the observation that
$\mathbb{E}\operatorname{tr}[A^{3}]=p^{3}n^{3}+O(n^{2})$ when $A$ is the
adjacency matrix of a $\mathcal{G}(n,m)$ graph, we arrive at the following
consequence:
###### Corollary 4.
Let $A$ be the adjacency matrix of a $\mathcal{G}(n,m)$ graph and let
$\tilde{A}=A-\mathbb{E}A$. For any $t\geq 0$,
(17)
$\Pr(\operatorname{tr}[A^{3}]\leq\mathbb{E}\operatorname{tr}[A^{3}]-t)\leq\Pr(\operatorname{tr}[\tilde{A}^{3}]\leq-t+O(n^{2}))$
For an inequality in the other direction, note that by the same argument as in
Lemma 3, as long as $\sum_{i}d_{i}^{2}\leq n^{3}p^{2}+D$, we have
(18)
$\operatorname{tr}[(A-p\mathbf{1}+pI)^{3}]=\operatorname{tr}[A^{3}]-p^{3}n^{3}+O(D+n^{2}).$
###### Corollary 5.
With the notation of Corollary 4, if $D=\Omega(n^{2})$ then
(19)
$\Pr\left(\operatorname{tr}[A^{3}]\leq\mathbb{E}\operatorname{tr}[A^{3}]-t\right)\geq\Pr\left(\operatorname{tr}[\tilde{A}^{3}]\leq-t-\Omega(D)\text{
and }\sum_{i}d_{i}^{2}\leq n^{3}p^{2}+D\right).$
## 4\. Large deviations for eigenvalues of random matrices
In this section and beyond, we let $A$ denote a generic random matrix and we
estimate the most positive eigenvalues of $A$. Since we are looking at lower
tails, the most important such matrix to keep in mind is minus the centered
adjacency matrix, previously denoted $\tilde{A}$ or $A-\mathbb{E}A$. This is
the same as plus the centered adjacency matrix of a random graph with edge
density $q=1-p$. The proof of Theorem 1 ($p\geq\frac{1}{2}$) thus relies on
results for $q\leq\frac{1}{2}$, while the proof of Theorem 2
($p\leq\frac{1}{2}$) relies on results for $q\geq\frac{1}{2}$.
###### Definition 6.
For a random variable $\xi$, its cumulant-generating function is
(20) $\Lambda_{\xi}(s)=\ln\mathbb{E}\exp(s\xi)$
whenever the expectation exists; when the expectation does not exist, we set
$\Lambda_{\xi}(s)=+\infty$.
###### Definition 7.
The random variable $\xi$ is _subgaussian_ if there exists a constant $C$ such
that $\Lambda_{\xi}(t)\leq Ct^{2}$ for every $t\in\mathbb{R}$.
Note that according to our definition, a subgaussian random variable has mean
zero (since if $\Lambda_{\xi}(t)$ is finite on a neighborhood of 0 then
$\Lambda_{\xi}(0)=0$ and $\Lambda_{\xi}^{\prime}(0)=\mathbb{E}\xi$, and so if
$\mathbb{E}\xi$ is non-zero then one cannot have $\Lambda_{\xi}(t)\leq Ct^{2}$
on a neighborhood of 0). Note also that if $\mathbb{E}\xi=0$ and
$\|\xi\|_{\infty}<\infty$ then $\xi$ is subgaussian.
###### Definition 8.
For a function $f:\mathbb{R}\to\mathbb{R}$, its Legendre transform is the
function $f^{*}:\mathbb{R}\to\mathbb{R}\cup\\{+\infty\\}$ defined by
(21) $f^{*}(y)=\sup_{x\in\mathbb{R}}\\{xy-f(x)\\}$
Some basic properties of the Legendre transform include:
* •
If $f\leq g$ then $f^{*}\geq g^{*}$.
* •
If $f$ is convex then $f^{**}=f$.
* •
If $f(x)=cx^{2}$ for some $c>0$ then $f^{*}(x)=\frac{x^{2}}{4c}$.
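These properties are easy to test numerically; the following sketch (illustrative only, with our own grid, constant, and tolerances) checks the last one:

```python
import numpy as np

# Grid-based check that the Legendre transform of f(x) = c*x^2 (c > 0)
# is f*(y) = y^2 / (4c); the sup is attained at x = y/(2c), well inside
# the grid for the y values tested here.
c = 1.7
x = np.linspace(-50, 50, 200001)
for y in (-3.0, 0.0, 0.5, 2.0):
    f_star = np.max(x * y - c * x**2)    # sup_x { x*y - f(x) }
    assert np.isclose(f_star, y**2 / (4 * c), atol=1e-5)
```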
Our goal in this note is to establish large deviations principles for extreme
eigenvalues and singular values of random matrices. We will consider a
symmetric $n\times n$ random matrix $A_{n}$ (or sometimes just $A$) having
i.i.d. upper-diagonal entries and zero diagonal entries. The letter $\xi$ will
always denote a random variable that is distributed as an upper-diagonal
element of $A$, and we will always assume that $\xi$ is subgaussian. We write
$\lambda_{i}(A)$ for the eigenvalues of $A$ (in non-increasing order) and
$\sigma_{i}(A)$ for the singular values of $A$ (in non-increasing order).
For the definition of a large deviations principle (LDP), we refer to [36,
Chapter 27].
###### Theorem 9.
Let $\xi$ be a subgaussian random variable. For any integer $k\geq 1$ and any
sequence $m_{n}$ satisfying $\sqrt{n}\ll m_{n}\ll n$, the sequence
(22) $\frac{1}{m_{n}}(\sigma_{1}(A_{n}),\dots,\sigma_{k}(A_{n}))$
satisfies an LDP with speed $m_{n}^{2}$ and good rate function
$I:\mathbb{R}_{+}^{k}\to[0,\infty)$ given by
(23)
$I(x)=\frac{|x|^{2}}{2}\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}.$
If we assume in addition that the function
$s\mapsto\frac{\Lambda^{*}(s)}{s^{2}}$ achieves its infimum at some $s\geq 0$,
then the sequence
(24) $\frac{1}{m_{n}}(\lambda_{1}(A_{n}),\dots,\lambda_{k}(A_{n}))$
satisfies an LDP with speed $m_{n}^{2}$ and the same good rate function $I$ as
above.
If $A_{n}$ is the centered adjacency matrix of $\mathcal{G}(n,q)$ then it is
covered by Theorem 9, where $\xi$ is the random variable taking the values
$-q$ and $1-q$ with probabilities $1-q$ and $q$ respectively. In this case, we
have
(25)
$\Lambda_{\xi}^{*}(s)=D(q+s,q):=(q+s)\ln\frac{q+s}{q}+(1-q-s)\ln\frac{1-q-s}{1-q},$
with the understanding that $\Lambda_{\xi}^{*}(s)=+\infty$ whenever
$q+s\not\in(0,1)$. It is not hard to check – and we will do it in Section 5.5
– that $\frac{\Lambda_{\xi}^{*}(s)}{s^{2}}$ achieves its infimum at some
$s\geq 0$ if and only if $q\leq\frac{1}{2}$.
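As a numerical companion to this paragraph (and to Lemma 27 below), the following sketch locates the infimum of $\Lambda_{\xi}^{*}(s)/s^{2}$ for a concrete $q$; the grid and tolerances are our own arbitrary choices:

```python
import numpy as np

# For the centered Bernoulli(q) entry, Lambda*(s) = D(q+s, q).  We check
# that Lambda*(s)/s^2 is minimized at s = 1-2q, with minimum value
# ln((1-q)/q)/(1-2q); the minimizer is >= 0 precisely when q <= 1/2.
def D(r, q):
    return r * np.log(r / q) + (1 - r) * np.log((1 - r) / (1 - q))

q = 0.3
s = np.linspace(-q + 1e-9, 1 - q - 1e-9, 2_000_001)
s = s[np.abs(s) > 1e-4]                  # skip the removable singularity at 0
ratio = D(q + s, q) / s**2
assert abs(s[np.argmin(ratio)] - (1 - 2 * q)) < 1e-3
assert np.isclose(ratio.min(), np.log((1 - q) / q) / (1 - 2 * q), atol=1e-8)
```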
In the case that $\frac{\Lambda^{*}(s)}{s^{2}}$ saturates its infimum only at
negative $s$ (corresponding to $q>\frac{1}{2}$ in the Bernoulli example), we
are not able to show an LDP for the eigenvalues. Note, however, that
$\sum_{i=1}^{k}\sigma_{i}^{2}(A)\geq\sum_{i=1}^{k}\lambda_{i}^{2}(A)$ and so our LDP for
singular values provides an upper bound: it implies, for example, that
(26)
$\frac{1}{m_{n}^{2}}\ln\Pr\left(\sqrt{\sum_{i=1}^{k}\lambda_{i}^{2}(A_{n})}>m_{n}t\right)\leq-\frac{t^{2}}{2}\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}+o(1).$
On the other hand, we can also easily show the lower bound
(27)
$\frac{1}{m_{n}^{2}}\ln\Pr\left(\sqrt{\sum_{i=1}^{k}\lambda_{i}^{2}(A_{n})}>m_{n}t\right)\geq-\frac{t^{2}}{2}\inf_{s\geq
0}\frac{\Lambda^{*}(s)}{s^{2}}-o(1),$
but the assumption that $\frac{\Lambda^{*}(s)}{s^{2}}$ saturates its infimum
only at negative $s$ implies that these bounds are non-matching.
There are natural examples (including the Bernoulli example mentioned above)
where $s^{-2}\Lambda^{*}(s)$ is increasing for $s\geq 0$. In this case,
(28) $\inf_{s\geq 0}s^{-2}\Lambda^{*}(s)=\lim_{s\to
0}s^{-2}\Lambda^{*}(s)=\frac{1}{2}(\Lambda^{*})^{\prime\prime}(0)=\frac{1}{2\mathbb{E}\xi^{2}},$
and so our lower bound (for simplicity, focusing only on the case $k=1$)
becomes
(29)
$\frac{1}{m_{n}^{2}}\ln\Pr\left(\lambda_{1}(A_{n})>m_{n}t\right)\geq-\frac{t^{2}}{4\mathbb{E}\xi^{2}}-o(1).$
When $\xi$ has a Gaussian distribution, this turns out to be sharp, but we
show that it is not sharp in general.
###### Theorem 10.
In the setting of Theorem 9, if $\mathbb{E}\xi^{3}<0$ and
$\lim_{s\to\infty}s^{-2}\Lambda(s)=0$ then there exists some $\eta>0$ such
that for any $t>0$,
(30)
$\liminf_{n\to\infty}\frac{1}{m_{n}^{2}}\ln\Pr\left(\lambda_{1}(A_{n})>m_{n}t\right)\geq-(1-\eta)\frac{t^{2}}{4\mathbb{E}\xi^{2}}.$
In particular, the assumptions of Theorem 10 are satisfied for the (centered)
Bernoulli random variable with $q>\frac{1}{2}$ mentioned above.
For our applications to random graphs, we require a version of Theorem 9 for
random bits chosen without replacement. Specifically, we consider the Erdős-
Rényi random graphs $\mathcal{G}(n,m)$, where $m$ is an integer satisfying
$|m-q\binom{n}{2}|=O(1)$ (and $q\in(0,1)$ is fixed).
###### Theorem 11.
Fix $q\in(0,1)$ and let $A_{n}$ be the centered adjacency matrix of a
$\mathcal{G}(n,m)$ random graph with $|m-q\binom{n}{2}|=O(1)$. For any integer
$k\geq 1$ and any sequence $m_{n}$ satisfying $\sqrt{n}\ll m_{n}\ll n$, the
sequence
(31) $\frac{1}{m_{n}}(\sigma_{1}(A_{n}),\dots,\sigma_{k}(A_{n}))$
satisfies an LDP with speed $m_{n}^{2}$ and good rate function
$I:\mathbb{R}_{+}^{k}\to[0,\infty)$ given by
$I(x)=\frac{|x|^{2}}{2}\cdot\frac{\ln\frac{1-q}{q}}{1-2q}$ (or $I(x)=|x|^{2}$
when $q=\frac{1}{2}$).
If, in addition, $q\leq\frac{1}{2}$ then the sequence
(32) $\frac{1}{m_{n}}(\lambda_{1}(A_{n}),\dots,\lambda_{k}(A_{n}))$
also satisfies an LDP with the same speed and rate function.
## 5\. Upper bound
Most of this work is concerned with handling the triangle-count contribution
of very negative eigenvalues, but we also need to show that there is no
significant contribution from the rest. For this, we will use a deviation
inequality from [33]:
###### Theorem 12.
Assume that $\|\xi\|_{\infty}<\infty$, and let $f:\mathbb{R}\to\mathbb{R}$ be
a 1-Lipschitz, convex function. Define
$X_{n}=\frac{1}{n}\sum_{i=1}^{n}f(n^{-1/2}\lambda_{i}(A_{n}))$. Then there is
a universal constant $C<\infty$ such that for any $\delta\gg n^{-1}$,
(33) $\Pr(|X_{n}-\mathbb{E}X_{n}|\geq\delta)\leq
C\exp\left(-\frac{n^{2}\delta^{2}}{C\|\xi\|_{\infty}^{2}}\right).$
The main observation is that in the regime we are interested in (namely,
eigenvalues or singular values of order $\omega(\sqrt{n})$), the probability
of large eigenvalues can be controlled by a union bound over the potential
eigenvectors.
Let $\mathcal{M}_{k}$ be the set of $n\times n$ matrices with rank at most $k$
and Frobenius norm at most 1. Let $\mathcal{M}_{k}^{+}\subset\mathcal{M}_{k}$
consist of those matrices that are symmetric and positive semidefinite.
###### Lemma 13.
For any symmetric matrix $A$,
(34)
$\left(\sum_{i=1}^{k}\max\\{0,\lambda_{i}(A)\\}^{2}\right)^{1/2}=\sup_{M\in\mathcal{M}_{k}^{+}}\langle
A,M\rangle.$
For any matrix $A$,
(35)
$\left(\sum_{i=1}^{k}\sigma_{i}(A)^{2}\right)^{1/2}=\sup_{M\in\mathcal{M}_{k}}\langle
A,M\rangle.$
###### Proof.
Let $UDU^{T}=A$ be an eigen-decomposition of $A$ (where $D$ is diagonal and
$U$ is orthogonal), and let $\tilde{D}$ be $D$ with all entries set to zero
except the $k$ largest, any negative entries among those being set to zero as
well (so that $M$ below is positive semidefinite). Define
(36)
$M=\frac{U\tilde{D}U^{T}}{\|\tilde{D}\|_{F}}=\frac{U\tilde{D}U^{T}}{\left(\sum_{i=1}^{k}\max\\{0,\lambda_{i}(A)\\}^{2}\right)^{1/2}}.$
Then $M\in\mathcal{M}_{k}^{+}$ and $\langle
A,M\rangle=\|\tilde{D}\|_{F}=\left(\sum_{i=1}^{k}\max\\{0,\lambda_{i}(A)\\}^{2}\right)^{1/2}$.
This proves one direction of the first claim.
For the other direction, take any $M\in\mathcal{M}_{k}^{+}$, and decompose $A$
as $A_{+}-A_{-}$, where $A_{+}$ and $A_{-}$ are positive semi-definite and the
non-zero eigenvalues of $A_{+}$ are the positive eigenvalues of $A$. Since $M$
is positive semi-definite we have $\langle A_{-},M\rangle\geq 0$, and since $M$
has rank at most $k$, von Neumann’s trace inequality together with Cauchy-
Schwarz gives
(37) $\langle A,M\rangle\leq\langle
A_{+},M\rangle\leq\sum_{i=1}^{k}\sigma_{i}(A_{+})\sigma_{i}(M)\leq\sqrt{\sum_{i=1}^{k}\lambda_{i}(A_{+})^{2}}\,\|M\|_{F}\leq\sqrt{\sum_{i=1}^{k}\max\\{0,\lambda_{i}(A)\\}^{2}}.$
This proves the first claim. The proof of the second claim is identical, but
uses a singular value decomposition instead of an eigen-decomposition. ∎
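The optimizer constructed in the proof is easy to test numerically. The following sketch (illustrative only; the seed is chosen so that $A$ has positive eigenvalues) checks that it attains the value in (34):

```python
import numpy as np

# Build the matrix M from the proof of Lemma 13 and verify <A, M> equals
# the square root of the sum of squares of the k largest positive eigenvalues.
rng = np.random.default_rng(1)
n, k = 8, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
lam, U = np.linalg.eigh(A)               # eigenvalues in ascending order
idx = np.argsort(lam)[::-1][:k]          # indices of the k largest eigenvalues
top = np.maximum(lam[idx], 0.0)          # clip at 0 so that M is PSD
target = np.sqrt((top ** 2).sum())

Dt = np.zeros(n); Dt[idx] = top
M = (U * Dt) @ U.T / target              # U diag(Dt) U^T, unit Frobenius norm
assert np.linalg.matrix_rank(M) <= k
assert np.isclose(np.trace(A @ M), target)
```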
Hence, in order to prove the upper bounds in Theorem 9, it suffices to control
(38) $\Pr\left(\sup_{M\in\mathcal{M}_{k}^{+}}\langle
A,M\rangle>tn^{\alpha}\right).$
The first step is to replace the supremum with a finite maximum.
### 5.1. The net argument
###### Definition 14.
For a subset $\mathcal{N}$ of a metric space $(X,d)$, we say that
$\mathcal{N}$ is an $\epsilon$-net of $X$ if for every $x\in X$ there exists
$y\in\mathcal{N}$ with $d(x,y)\leq\epsilon$.
###### Lemma 15.
Let $\mathcal{N}\subset\mathcal{M}_{k}$ be an $\epsilon$-net (with respect to
$\|\cdot\|_{F}$) for $\epsilon<\frac{1}{2}$. Then for any symmetric matrix
$A$,
(39) $\sup_{M\in\mathcal{M}_{k}}\langle
A,M\rangle\leq\frac{1}{1-2\epsilon}\sup_{N\in\mathcal{N}}\langle A,N\rangle.$
###### Proof.
Fix $M\in\mathcal{M}_{k}$, and choose $N\in\mathcal{N}$ with
$\|M-N\|_{F}\leq\epsilon$. Note that $M-N$ has rank at most $2k$, and it has
at most $k$ positive eigenvalues and $k$ negative eigenvalues. Letting
$\epsilon M_{0}$ be the positive part of $M-N$ and $-\epsilon M_{1}$ its
negative part, we have $\|M_{0}\|_{F}\leq\|M-N\|_{F}/\epsilon\leq 1$ and,
similarly, $\|M_{1}\|_{F}\leq 1$. In other words, we can decompose
(40) $M=N+\epsilon M_{0}+\epsilon M_{1}$
with $N\in\mathcal{N}$ and $M_{0},M_{1}\in\mathcal{M}_{k}$. We continue this
construction recursively: for every finite binary string $v$ and matrix
$M_{v}\in\mathcal{M}_{k}$, we can find $N_{v}\in\mathcal{N}$ and
$M_{v0},M_{v1}\in\mathcal{M}_{k}$ such that
(41) $M_{v}=N_{v}+\epsilon M_{v0}+\epsilon M_{v1}.$
Recursing this construction $m$ levels, it follows that (with $S_{m}$ being
the set of binary strings of length $m$ and $|v|$ denoting the length of the
string $v$)
(42) $M=\sum_{\ell=0}^{m-1}\sum_{v\in
S_{\ell}}\epsilon^{|v|}N_{v}+\epsilon^{m}\sum_{v\in S_{m}}M_{v}.$
Since $|S_{m}|=2^{m}$ and each $M_{v}$ has $\|M_{v}\|_{F}\leq 1$, the
remainder term converges to zero and we can continue this construction to the
limit:
(43) $M=\sum_{\ell=0}^{\infty}\sum_{v\in S_{\ell}}\epsilon^{|v|}N_{v},$
where the outer sum converges in Frobenius norm.
Taking the inner product with $A$, note that Cauchy-Schwarz and the
convergence of the sum imply that the inner product and summation can be
exchanged:
(44) $\langle M,A\rangle=\sum_{\ell=0}^{\infty}\sum_{v\in
S_{\ell}}\epsilon^{|v|}\langle
N_{v},A\rangle\leq\sum_{\ell=0}^{\infty}|S_{\ell}|\epsilon^{\ell}\sup_{N\in\mathcal{N}}\langle
N,A\rangle=\frac{1}{1-2\epsilon}\sup_{N\in\mathcal{N}}\langle N,A\rangle.$
∎
The construction in Lemma 15 approximates the supremum over
$M\in\mathcal{M}_{k}$, which is enough for most of what we will do. In some
cases, we will want the supremum over $M\in\mathcal{M}_{k}^{+}$ instead, but
that can be handled also:
###### Lemma 16.
Let $\mathcal{N}\subset\mathcal{M}_{k}$ and
$\mathcal{N}^{+}\subset\mathcal{M}_{k}^{+}$ be $\epsilon$-nets (with respect
to $\|\cdot\|_{F}$) for $\epsilon<\frac{1}{2}$. Then for any symmetric matrix
$A$,
(45) $\sup_{M\in\mathcal{M}_{k}^{+}}\langle
A,M\rangle\leq\sup_{N^{+}\in\mathcal{N}^{+}}\langle
A,N^{+}\rangle+\frac{2\epsilon}{1-2\epsilon}\sup_{N\in\mathcal{N}}\langle
A,N\rangle.$
###### Proof.
Fix $M\in\mathcal{M}_{k}^{+}$ and choose $M_{0}\in\mathcal{N}^{+}$ such that
$\|M_{0}-M\|_{F}\leq\epsilon$. Then $\frac{M_{0}-M}{\epsilon}$ has rank at
most $2k$ and Frobenius norm at most $1$. Hence, we can write
$M-M_{0}=\epsilon N_{0}+\epsilon N_{1}$, where
$N_{0},N_{1}\in\mathcal{M}_{k}$. It follows that
(46) $\langle A,M\rangle=\langle A,M_{0}\rangle+\epsilon\langle
A,N_{0}\rangle+\epsilon\langle A,N_{1}\rangle,$
and we conclude by applying Lemma 15 to $\langle A,N_{0}\rangle$ and $\langle
A,N_{1}\rangle$. ∎
We have shown that to approximate the supremum it suffices to take a good
enough net. In order to put this together with a union bound, we need a bound
on the size of a good net. Our starting point is the following basic bound in
Euclidean space [35, Corollary 4.2.13].
###### Lemma 17.
The unit Euclidean ball in $\mathbb{R}^{d}$ admits an $\epsilon$-net (with
respect to the Euclidean metric) $\mathcal{N}$ satisfying
$|\mathcal{N}|\leq(3/\epsilon)^{d}$.
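For concreteness, the following sketch builds a crude (non-optimal) $\epsilon$-net of the unit disk in $\mathbb{R}^{2}$ from a grid and spot-checks the covering property; it illustrates Definition 14 only, not the greedy construction behind the $(3/\epsilon)^{d}$ bound:

```python
import numpy as np

# Grid net of spacing eps/2, restricted to the disk; rounding coordinates
# toward the origin shows every point of the disk is within eps*sqrt(2)/2
# of a net point, so this is an eps-net.
eps = 0.2
g = np.arange(-1.0, 1.0 + eps / 4, eps / 2)
net = np.array([(x, y) for x in g for y in g if x * x + y * y <= 1.0])

rng = np.random.default_rng(3)
pts = rng.standard_normal((1000, 2))
pts /= np.maximum(np.linalg.norm(pts, axis=1, keepdims=True), 1.0)  # into disk
gaps = np.linalg.norm(pts[:, None, :] - net[None, :, :], axis=2).min(axis=1)
assert gaps.max() <= eps                 # every sampled point is eps-covered
```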
###### Corollary 18.
There is a constant $C$ such that for any $0<\epsilon<1$, there is an
$\epsilon$-net (with respect to Frobenius norm) for $\mathcal{M}_{k}$ of size
at most $(C/\epsilon)^{2nk}$ and an $\epsilon$-net for $\mathcal{M}_{k}^{+}$
of size at most $(C/\epsilon)^{nk}$.
###### Proof.
Let $\tilde{\mathcal{N}}$ be an $(\epsilon/2)$-net for the set of $n\times k$
matrices with Frobenius norm at most one. Since this space is isometric to
$\mathbb{R}^{nk}$ with the Euclidean norm, Lemma 17 implies that we can choose
such a $\tilde{\mathcal{N}}$ with
$|\tilde{\mathcal{N}}|\leq(C/\epsilon)^{nk}$. Now let
$\mathcal{N}=\\{XY^{T}:X,Y\in\tilde{\mathcal{N}}\\}$. Then
$|\mathcal{N}|\leq|\tilde{\mathcal{N}}|^{2}\leq(C/\epsilon)^{2nk}$.
It remains to show that $\mathcal{N}$ is an $\epsilon$-net. Since
$\|XY^{T}\|_{F}\leq\|X\|_{F}\|Y\|_{F}$, it follows that every
$N\in\mathcal{N}$ has $\|N\|_{F}\leq 1$; also, each $N\in\mathcal{N}$ clearly
has rank at most $k$. Now choose an arbitrary $M\in\mathcal{M}_{k}$ and write
$M=AB^{T}$ for $n\times k$ matrices $A$ and $B$ of Frobenius norm at most 1
(for example, this can be done using a singular value decomposition). Choose
$X,Y\in\tilde{\mathcal{N}}$ with $\|X-A\|_{F}\leq\frac{\epsilon}{2}$ and
$\|Y-B\|_{F}\leq\frac{\epsilon}{2}$. Then
$\displaystyle\|XY^{T}-M\|_{F}$
$\displaystyle\leq\|XY^{T}-AY^{T}\|_{F}+\|AY^{T}-AB^{T}\|_{F}$
$\displaystyle\leq\|X-A\|_{F}+\|Y^{T}-B^{T}\|_{F}$
$\displaystyle\leq\epsilon.$
To construct an $\epsilon$-net of $\mathcal{M}_{k}^{+}$, let
$\tilde{\mathcal{N}}$ be as above and let
$\mathcal{N}=\\{XX^{T}:X\in\tilde{\mathcal{N}}\\}$. Then
$|\mathcal{N}|\leq|\tilde{\mathcal{N}}|$, and the proof that $\mathcal{N}$ is
an $\epsilon$-net of $\mathcal{M}_{k}^{+}$ is essentially the same as the
proof above, the only change being that every $M\in\mathcal{M}_{k}^{+}$ can be
written as $M=AA^{T}$ for an $n\times k$ matrix $A$ of Frobenius norm at most
1. ∎
Applying a union bound over these nets gives the main result of this section:
singular values and eigenvalues of $A$ can be controlled in terms of the
deviations of linear functions of $A$. The main point here is that (as we will
show in the next section) if $t\gg\sqrt{n}$ then the
$O(nk\ln\frac{1}{\epsilon})$ terms are negligible compared to the other terms.
###### Proposition 19.
Let $A$ be a symmetric $n\times n$ random matrix with i.i.d. entries. For any
integer $k\geq 1$, any $0<\epsilon<\frac{1}{2}$, and any $t>0$,
(47)
$\ln\Pr\left(\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}(A)}>t\right)\leq\sup_{M\in\mathcal{M}_{k}}\ln\Pr\left(\langle
A,M\rangle\geq(1-2\epsilon)t\right)+O(nk\ln\frac{1}{\epsilon}).$
###### Proof.
Let $\mathcal{N}$ be an $\epsilon$-net for
$\mathcal{M}_{k}$ according to Corollary 18. By Lemma 13 and Lemma 15,
$\displaystyle\Pr\left(\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}(A)}>t\right)$
$\displaystyle=\Pr\left(\sup_{M\in\mathcal{M}_{k}}\langle A,M\rangle>t\right)$
$\displaystyle\leq\Pr\left(\max_{N\in\mathcal{N}}\langle
A,N\rangle>(1-2\epsilon)t\right).$
By a union bound,
$\displaystyle\Pr\left(\max_{N\in\mathcal{N}}\langle
A,N\rangle>(1-2\epsilon)t\right)$
$\displaystyle\leq\sum_{N\in\mathcal{N}}\Pr\left(\langle
A,N\rangle>(1-2\epsilon)t\right)$
$\displaystyle\leq|\mathcal{N}|\sup_{M\in\mathcal{M}_{k}}\Pr\left(\langle
A,M\rangle>(1-2\epsilon)t\right),$
which, by our bound on $|\mathcal{N}|$, completes the proof. ∎
We remark that it is possible to prove a version of Proposition 19 for
eigenvalues also, giving an upper bound on $\Pr(\sum\lambda_{i}^{2}(A)>t)$ in
terms of
(48) $\sup_{M^{+}\in\mathcal{M}_{k}^{+}}\Pr\left(\langle A,M^{+}\rangle\geq
t\right).$
This can in principle give a better bound on the eigenvalues than for the
singular values. The issue is that we do not know how to exploit the
additional information that we are testing $A$ against a positive semidefinite
matrix.
### 5.2. Hoeffding-type argument
Using a Hoeffding-type argument, we can get a sharp upper bound on
(49) $\sup_{M\in\mathcal{M}_{k}}\ln\Pr\left(\langle A,M\rangle\geq t\right)$
for any $k$ and any $t$ (in fact, the sharp upper bound turns out not to
depend on $k$).
###### Lemma 20.
If $\xi$ is subgaussian then
(50)
$4\sup_{s\in\mathbb{R}}\frac{\Lambda_{\xi}(s)}{s^{2}}=\left(\inf_{u\in\mathbb{R}}\frac{\Lambda_{\xi}^{*}(u)}{u^{2}}\right)^{-1}<\infty.$
###### Proof.
The fact that $\sup_{s\in\mathbb{R}}\frac{\Lambda(s)}{s^{2}}<\infty$ is the
definition of subgaussianity. To show the claimed identity, let
$L=\sup_{t\in\mathbb{R}}\frac{\Lambda_{\xi}(t)}{t^{2}}$ and define
$M_{L}(s)=Ls^{2}$. Clearly, $\Lambda(s)\leq M_{L}(s)$ for all
$s\in\mathbb{R}$. It follows that $\Lambda^{*}(u)\geq
M_{L}^{*}(u)=\frac{u^{2}}{4L}$; in other words,
(51) $\frac{\Lambda^{*}(u)}{u^{2}}\geq\frac{M_{L}^{*}(u)}{u^{2}}=\frac{1}{4L}$
for all $u$. This shows that
(52)
$4\sup_{s\in\mathbb{R}}\frac{\Lambda(s)}{s^{2}}\geq\left(\inf_{u\in\mathbb{R}}\frac{\Lambda^{*}(u)}{u^{2}}\right)^{-1}.$
For the other direction, suppose that for some $L^{\prime}$ we have
$\Lambda^{*}(u)\geq\frac{u^{2}}{4L^{\prime}}=M_{1/(4L^{\prime})}(u)$ for every
$u$. Then (since $\Lambda$ is convex) $\Lambda(t)=\Lambda^{**}(t)\leq
M_{1/(4L^{\prime})}^{*}(t)=L^{\prime}t^{2}$ for every $t$. The definition of
$L$ ensures that $L^{\prime}\geq L$, and this shows the other direction of the
claim. ∎
###### Proposition 21.
Let $\xi$ be a random variable with globally-finite moment-generating
function, and define
(53) $\Lambda(s)=\ln\mathbb{E}\exp(s\xi)$
to be the cumulant-generating function of $\xi$. Let $A$ be a symmetric random
matrix with zero diagonal, and with upper-diagonal elements distributed
independently according to $\xi$. Define
$\ell^{*}=\sup_{s>0}\frac{\Lambda(s)}{s^{2}}$. Then
(54) $\sup_{\|M\|_{F}\leq 1}\Pr(\langle
A,M\rangle>t)\leq\exp\left(-\frac{t^{2}}{8\sup_{s>0}\frac{\Lambda(s)}{s^{2}}}\right)=\exp\left(-\frac{t^{2}}{2}\inf_{s>0}\frac{\Lambda^{*}(s)}{s^{2}}\right).$
###### Proof.
Since $\langle A,M\rangle=\langle A,(M+M^{T})/2\rangle$ and since
$\|(M+M^{T})/2\|_{F}\leq\|M\|_{F}$, it suffices to consider only symmetric
matrices $M$. Let $m=\binom{n}{2}$ and let $\xi_{1},\dots,\xi_{m}$ be the
upper-diagonal elements of $A$, in any order. Let $\|M\|_{F}\leq 1$ be symmetric,
with upper-diagonal entries $a_{1},\dots,a_{m}$. Then $\langle
A,M\rangle=2\sum_{i=1}^{m}a_{i}\xi_{i}$, and so (for any $s>0$)
$\displaystyle\Pr(\langle A,M\rangle>t)$ $\displaystyle=\Pr\left(\sum
a_{i}\xi_{i}>t/2\right)$ $\displaystyle=\Pr\left(e^{s\sum
a_{i}\xi_{i}}>e^{st/2}\right)$ $\displaystyle\leq e^{-st/2}\mathbb{E}e^{s\sum
a_{i}\xi_{i}}$ $\displaystyle=\exp\left(\sum_{i}\Lambda(sa_{i})-st/2\right),$
where the inequality follows from Markov’s inequality. Now,
$\sum_{i=1}^{m}a_{i}^{2}\leq\frac{1}{2}\|M\|_{F}^{2}\leq\frac{1}{2}$, and so
if we set $\ell^{*}=\sup_{r>0}\frac{\Lambda(r)}{r^{2}}$ then
(55)
$\sum_{i}\Lambda(sa_{i})=\sum_{i}\frac{\Lambda(sa_{i})}{(sa_{i})^{2}}(sa_{i})^{2}\leq
s^{2}\sum_{i}\ell^{*}a_{i}^{2}\leq\frac{s^{2}\ell^{*}}{2}.$
Hence,
(56) $\Pr(\langle
A,M\rangle>t)\leq\exp\left(\frac{s^{2}\ell^{*}}{2}-\frac{st}{2}\right),$
and the first claim follows by optimizing over $s$.
The second claim follows immediately from Lemma 20. ∎
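The bound in Proposition 21 can be sanity-checked by simulation. The following Monte Carlo sketch (our own illustration, with Rademacher entries, for which $\Lambda(s)=\ln\cosh(s)\leq s^{2}/2$ and hence $\ell^{*}\leq\frac{1}{2}$) compares the empirical tail to $\exp(-t^{2}/4)$:

```python
import numpy as np

# Sample <A, M> = 2 * sum_i a_i xi_i for a fixed unit-Frobenius symmetric M
# with zero diagonal, and check the empirical tail against exp(-t^2/4).
rng = np.random.default_rng(2)
n, trials, t = 20, 20_000, 4.0
M = rng.standard_normal((n, n)); M = (M + M.T) / 2
np.fill_diagonal(M, 0.0)
M /= np.linalg.norm(M)                   # ||M||_F = 1

a = M[np.triu_indices(n, 1)]             # upper-diagonal entries of M
xi = rng.choice([-1.0, 1.0], size=(trials, a.size))
inner = 2.0 * xi @ a                     # samples of <A, M>
assert (inner > t).mean() <= np.exp(-t**2 / 4)
```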
Putting Proposition 21 together with Proposition 19, we arrive at the
following upper bound for singular values:
###### Corollary 22.
Let $A$ be a symmetric $n\times n$ random matrix with i.i.d. upper diagonal
entries. Assuming that the entries are subgaussian and have cumulant-
generating function $\Lambda$, let
$L=\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}$. Then for any integer
$k$ and any $t>0$, if $t^{2}L>2nk$ then
(57)
$\ln\Pr\left(\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}(A)}>t\right)\leq-\frac{t^{2}L}{2}+O\left(nk\ln\frac{t^{2}L}{nk}\right).$
###### Proof.
We combine Proposition 21 and Proposition 19, setting
$\epsilon=\frac{nk}{t^{2}L}$ (which is less than $\frac{1}{2}$ by assumption).
This yields an upper bound of
(58) $-\frac{t^{2}L}{2}+O\left(nk+nk\ln\frac{t^{2}L}{nk}\right),$
and the $nk$ term can be absorbed in the final term. ∎
###### Remark 23.
Note that the argument leading to Corollary 22 applies even when the entries
$\xi_{ij}$ are not identically distributed as long as
$L\leq\inf_{s}\frac{\Lambda_{ij}^{*}(s)}{s^{2}}$ for every pair $(i,j)$, where
$\Lambda_{ij}$ is the cumulant-generating function of $\xi_{ij}$.
### 5.3. Lower bound
In this section, we give a lower bound that matches the upper bound of
Corollary 22 whenever $\sqrt{n}\ll t\ll n$. The starting point is the lower
bound of Cramér’s theorem [36, Theorem 27.3].
###### Theorem 24.
Let $\xi$ be a mean-zero random variable with everywhere-finite cumulant-
generating function $\Lambda_{\xi}$. Let $\xi_{1},\dots,\xi_{m}$ be
independent copies of $\xi$. Then
(59)
$\frac{1}{m}\ln\Pr\left(\sum_{i=1}^{m}\xi_{i}>mt\right)\to-\Lambda^{*}(t)$
as $m\to\infty$.
###### Proposition 25.
In the setting of Corollary 22, suppose in addition that the function
$s\mapsto s^{-2}\Lambda^{*}(s)$ achieves its minimum at some finite
$s\in\mathbb{R}$. Then for any $1\ll t\ll n^{2}$ and for any
$w_{1},\dots,w_{k}>0$, we have
(60)
$\ln\Pr\left(\sum_{i=1}^{k}w_{i}\sigma_{i}(A_{n})>|w|\sqrt{t}\right)\geq-\frac{tL}{2}-o(t).$
If $s\mapsto s^{-2}\Lambda^{*}(s)$ achieves its minimum at some $s\geq 0$,
then for any $1\ll t\ll n^{2}$ and for any $w_{1},\dots,w_{k}>0$, we have
(61)
$\ln\Pr\left(\sum_{i=1}^{k}w_{i}\lambda_{i}(A_{n})>|w|\sqrt{t}\right)\geq-\frac{tL}{2}-o(t).$
Choosing an arbitrary $w_{1},\dots,w_{k}$ and applying the Cauchy-Schwarz
inequality, Proposition 25 implies the same lower bounds on
$\ln\Pr(\sum_{i}\sigma_{i}^{2}(A_{n})>t)$ and
$\ln\Pr(\sum_{i}\lambda_{i}^{2}(A_{n})>t)$. In particular, it really is a
lower bound that matches the upper bound of Corollary 22.
###### Proof.
Fix $t$ and assume that $\frac{\Lambda^{*}(s)}{s^{2}}$ achieves its minimum at
$s_{*}\in\mathbb{R}$. Actually, we will assume $s_{*}\neq 0$; the case
$s_{*}=0$ is easily handled by replacing $s_{*}$ with $\epsilon>0$ everywhere,
and then sending $\epsilon\to 0$. Fix $w_{1},\dots,w_{k}$ and assume
$\sum_{i}w_{i}^{2}=t$; because the statement of the proposition is homogeneous
in $w$, this is without loss of generality. Now choose the smallest integers
$\ell_{1},\dots,\ell_{k}$ so that $\ell_{i}-1\geq\frac{w_{i}}{|s_{*}|}$. We
write $|\ell|^{2}$ for $\sum_{i}\ell_{i}^{2}$, and note that
$|\ell|^{2}\geq\frac{1}{s_{*}^{2}}\sum_{i}w_{i}^{2}=\frac{t}{s_{*}^{2}}$,
meaning that $1\ll|\ell|^{2}\ll n^{2}$.
Let $M$ be a block-diagonal matrix, whose non-zero entries are all equal to
$s_{*}$, appearing in blocks of size $\ell_{i}\times\ell_{i}$ for
$i=1,\dots,k$. (The fact that $\sum_{i}\ell_{i}\leq\sqrt{k}|\ell|\ll n$
implies that these blocks do indeed fit into an $n\times n$ matrix.) Then $M$
has rank $k$, and the singular values of $M$ are $|s_{*}|\ell_{i}$ for
$i=1,\dots,k$; note that our choices of $\ell_{i}$ ensure that
$w_{i}\leq\sigma_{i}(M)\leq w_{i}+2|s_{*}|$. Moreover, if we set
$m=\sum_{i}\frac{\ell_{i}(\ell_{i}-1)}{2}$ (which is also an integer, and
counts the number of non-zero upper-diagonal elements of $M$) then $\langle
A,M\rangle$ is equal in distribution to $2s_{*}\sum_{i=1}^{m}\xi_{i}$. Hence,
(62) $\Pr\left(\langle
A,M\rangle>t\right)=\Pr\left(\operatorname{sgn}(s_{*})\sum_{i=1}^{m}\xi_{i}>\frac{t}{2|s_{*}|}\right).$
Now, $m=\frac{1}{2}|\ell|^{2}-\frac{1}{2}\sum_{i}\ell_{i}$, while on the other
hand
(63)
$\frac{t}{s_{*}^{2}}=\frac{\sum_{i}w_{i}^{2}}{s_{*}^{2}}\leq\sum_{i}(\ell_{i}-1)^{2}=|\ell|^{2}-2\sum_{i}\ell_{i}+k.$
Since $\sum_{i}\ell_{i}\geq|\ell|\gg 1$, we have $\frac{t}{2s_{*}^{2}}\leq m$
for sufficiently large $n$. Going back to our probability estimates, we have
$\displaystyle\ln\Pr\left(\langle A,M\rangle>t\right)$
$\displaystyle=\ln\Pr\left(\operatorname{sgn}(s_{*})\sum_{i=1}^{m}\xi_{i}>\frac{t}{2|s_{*}|}\right)$
$\displaystyle\geq\ln\Pr\left(\operatorname{sgn}(s_{*})\sum_{i=1}^{m}\xi_{i}>m|s_{*}|\right)$
$\displaystyle=-m\Lambda^{*}(s_{*})+o(m)$
$\displaystyle=-\frac{t\Lambda^{*}(s_{*})}{2s_{*}^{2}}-o(t),$
where the second-last equality follows by Cramér’s theorem (applied to the
random variables $-\xi_{i}$ in case $s_{*}<0$). By the Cauchy-Schwarz
inequality we have
(64) $\langle
A,M\rangle\leq\sum_{i=1}^{k}\sigma_{i}(A)\sigma_{i}(M)\leq\sum_{i=1}^{k}\sigma_{i}(A)(w_{i}+2|s_{*}|)\\\
\leq\sum_{i=1}^{k}\sigma_{i}(A)w_{i}+2|s_{*}|\sqrt{k}\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}(A)},$
and hence
(65) $\Pr\left(\langle
A,M\rangle>t\right)\leq\Pr\left(\sum_{i=1}^{k}\sigma_{i}(A)w_{i}>t-t^{2/3}\right)+\Pr\left(\sum_{i=1}^{k}\sigma_{i}^{2}(A)>\frac{t^{4/3}}{4s_{*}^{2}k}\right).$
By Corollary 22, the second probability is of order $\exp(-\Omega(t^{4/3}))$,
and hence
(66)
$\ln\Pr\left(\sum_{i=1}^{k}\sigma_{i}(A)w_{i}>t-t^{2/3}\right)\geq(1-o(1))\ln\Pr\left(\langle
A,M\rangle>t\right)\geq-\frac{t\Lambda^{*}(s_{*})}{2s_{*}^{2}}-o(t).$
Since $|w|\sqrt{t}=t$ under our normalization $\sum_{i}w_{i}^{2}=t$, replacing
the threshold $t-t^{2/3}$ by $|w|\sqrt{t}$ only changes the bound by an amount
that can be absorbed in the $o(t)$ term.
For the second claim, simply note that if $s_{*}>0$ then the matrix $M$ is
positive semi-definite. Denoting
$\lambda_{i}^{+}(A)=\max\\{0,\lambda_{i}(A)\\}$, we replace (64) by
(67) $\langle
A,M\rangle\leq\sum_{i=1}^{k}\lambda_{i}^{+}(A)\lambda_{i}(M)\leq\sum_{i=1}^{k}\lambda_{i}^{+}(A)(w_{i}+2s_{*})\leq\sum_{i=1}^{k}\lambda_{i}^{+}(A)w_{i}+2s_{*}\sqrt{k}\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}(A)},$
and the rest of the proof proceeds as before. ∎
There are a few extra useful facts that we can extract from the proof of
Proposition 25, namely that we have explicit candidates for extremal
eigenvectors and singular vectors. We will state these just for the largest
eigenvalue, but of course they also hold in other situations.
###### Corollary 26.
Assume that $s\mapsto s^{-2}\Lambda^{*}(s)$ achieves its minimum at some
$s_{*}\geq 0$. For $1\ll t\ll n^{2}$, let $\ell=\lceil 1+\sqrt{t}/s_{*}\rceil$
and define $v\in\mathbb{R}^{n}$ by $v_{1},\dots,v_{\ell}=s_{*}^{1/2}t^{-1/4}$
and $v_{\ell+1},\cdots,v_{n}=0$. Then $|v|\leq 1+o(1)$ and
(68) $\ln\Pr(v^{T}A_{n}v\geq\sqrt{t})\geq-\frac{tL}{2}-o(t).$
Corollary 26 is immediate from the proof of Proposition 25, because in the
case $k=1$ and $w_{1}=\sqrt{t}$, the $M$ that we constructed in that proof is
exactly $\sqrt{t}vv^{T}$.
### 5.4. The LDP
Putting together Corollary 22 and Proposition 25, we complete the proof of the
LDP (Theorem 9). Take a sequence $m_{n}$ satisfying $\sqrt{n}\ll m_{n}\ll n$,
and set $X=\frac{1}{m_{n}}(\sigma_{1}(A_{n}),\dots,\sigma_{k}(A_{n}))$. Let
$E\subset\mathbb{R}^{k}$ be any closed set, and let $t=\inf_{x\in E}|x|$. Then
$\frac{1}{m_{n}}(\sigma_{1}(A_{n}),\dots,\sigma_{k}(A_{n}))\in E$ implies that
$\sum\sigma_{i}^{2}(A_{n})>m_{n}^{2}t^{2}$. By Corollary 22,
$\ln\Pr\left(X\in
E\right)\leq\ln\Pr\left(\sum_{i=1}^{k}\sigma_{i}^{2}(A_{n})>m_{n}^{2}t^{2}\right)\\\
\leq-\frac{m_{n}^{2}t^{2}L}{2}+O\left(n\ln\frac{m_{n}^{2}}{n}\right)=-\frac{m_{n}^{2}t^{2}L}{2}+o(m_{n}^{2}).$
On the other hand, if $E\subset\mathbb{R}^{k}$ is open, then choose any $w\in
E$. Since $E$ is open, there is some $\epsilon>0$ so that if $\langle
x,w\rangle\geq|w|^{2}$ and $|x|^{2}\leq|w|^{2}+\epsilon$ then $x\in E$. Now,
Proposition 25 implies that
(69) $\ln\Pr\left(\langle
X,w\rangle\geq|w|^{2}\right)=\ln\Pr\left(\sum_{i}\sigma_{i}(A_{n})w_{i}\geq
m_{n}|w|^{2}\right)\geq-\frac{m_{n}^{2}|w|^{2}L}{2}-o(m_{n}^{2})$
Meanwhile, Corollary 22 implies that
$\ln\Pr\left(|X|^{2}>|w|^{2}+\epsilon\right)=\ln\Pr\left(\sum_{i}\sigma_{i}^{2}(A_{n})\geq
m_{n}^{2}(|w|^{2}+\epsilon)\right)\\\
\leq-\frac{m_{n}^{2}(|w|^{2}+\epsilon)L}{2}+o(m_{n}^{2}).$
In particular, $\Pr(|X|^{2}>|w|^{2}+\epsilon)$ is dominated by $\Pr(\langle
X,w\rangle\geq|w|^{2})$, implying that
(70) $\ln\Pr(X\in E)\geq\ln\Pr\left(\langle X,w\rangle\geq|w|^{2}\text{ and
}|X|^{2}\leq|w|^{2}+\epsilon\right)\geq-\frac{m_{n}^{2}|w|^{2}L}{2}-o(m_{n}^{2}).$
Since this holds for arbitrary $w\in E$, it implies the lower bound in the
LDP.
The second part of Theorem 9 follows from exactly the same argument, using the
second part of Proposition 25.
### 5.5. The case of $\mathcal{G}(n,m)$
We next consider the case of Theorem 11. The first observation is that
$q\leq\frac{1}{2}$ if and only if $\Lambda^{*}(s)/s^{2}$ achieves its minimum
at some non-negative $s$.
###### Lemma 27.
If $\xi=-q$ with probability $1-q$ and $\xi=1-q$ with probability $q$ then
$\Lambda^{*}$ (the convex conjugate of $\xi$’s cumulant generating function)
satisfies
(71)
$\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}=\frac{\ln\frac{1-q}{q}}{1-2q},$
and the minimum is uniquely attained at $s=1-2q$.
###### Proof.
We recall that $\Lambda^{*}(s)=D(q+s,q)$ where
(72) $D(r,q)=r\ln\frac{r}{q}+(1-r)\ln\frac{1-r}{1-q}$
(with the convention that $D(r,q)=+\infty$ for $r\not\in(0,1)$). Note that
$D(r,q)$ is non-negative, convex, and has a double-root at $r=q$. Fix $q$ and
define
(73) $L(r)=\frac{D(r,q)}{(r-q)^{2}}=\frac{\Lambda^{*}(r-q)}{(r-q)^{2}}$
(defined by continuity at $r=q$); our task is then to minimize $L$. We compute
(74)
$L^{\prime}(r)=-\frac{(q+r)\ln\frac{r}{q}+(2-q-r)\ln\frac{1-r}{1-q}}{(r-q)^{3}}=:-\frac{F(r)}{(r-q)^{3}}.$
Then
$\displaystyle F^{\prime}(r)$
$\displaystyle=\ln\frac{r}{q}-\ln\frac{1-r}{1-q}+\frac{q}{r}-\frac{1-q}{1-r}$
$\displaystyle F^{\prime\prime}(r)$
$\displaystyle=(r-q)\left(\frac{1}{r^{2}}-\frac{1}{(1-r)^{2}}\right).$
In particular, $F^{\prime\prime}$ has exactly two roots on $(0,1)$: at
$r=\frac{1}{2}$ and at $r=q$ (counting with multiplicity in case
$q=\frac{1}{2}$). It follows that $F$ has at most 4 roots on $(0,1)$. On the
other hand, we can easily see that
$F(q)=F^{\prime}(q)=F^{\prime\prime}(q)=F(1-q)=0$. Hence, $F(r)$ has a triple-
root at $r=q$ and a single root at $r=1-q$, and no other roots. Since $r=q$ is
only a triple-root, $L^{\prime}(q)\neq 0$, and it follows that $r=1-q$ is the
only root of $L^{\prime}(r)$. It follows that $L(r)$ is minimized at either
$r=0$, $r=1$, or $r=1-q$. The possible minimum values are therefore
(75) $x:=q^{-2}\ln\frac{1}{1-q},\qquad
y:=(1-q)^{-2}\ln\frac{1}{q},\qquad\text{or }z:=\frac{\ln\frac{1-q}{q}}{1-2q}.$
We will show that $z$ is the smallest one. By symmetry in $q$ and $1-q$, it
suffices to show that $z\leq x$ for all $q$. Now,
(76)
$q^{2}(1-2q)(z-x)=q^{2}\ln\frac{1-q}{q}+(1-2q)\ln(1-q)=(1-q)^{2}\ln(1-q)-q^{2}\ln
q.$
Let $f(q)=(1-q)^{2}\ln(1-q)-q^{2}\ln q$, and we need to show that $f(q)<0$ for
$0<q<\frac{1}{2}$ and $f(q)>0$ for $\frac{1}{2}<q<1$. In fact, since
$f(q)=-f(1-q)$, it suffices to show only one of these. Finally, note that
$f(0)=f(\frac{1}{2})=0$, and $f^{\prime\prime}(q)>0$ for $0<q<\frac{1}{2}$,
and it follows that $f(q)<0$ for $0<q<\frac{1}{2}$. ∎
To complete the proof of Theorem 11, it is enough to show that the upper bound
of Corollary 22 and the lower bound of Proposition 25 still hold in this
setting; then the proof of the LDP proceeds exactly as in the proof of Theorem
9. Checking Corollary 22 is trivial: recalling that $A_{n}$ is the centered
adjacency matrix of $\mathcal{G}(n,m)$ for $|m-q\binom{n}{2}|=O(1)$, we let
$\tilde{A}_{n}$ be the centered adjacency matrix of $\mathcal{G}(n,q)$. Note
that the distribution of $A_{n}$ is equal to the distribution of
$\tilde{A}_{n}$, conditioned on the event that $\tilde{A}_{n}$ has exactly $m$
positive entries on the upper diagonal; call this event $E$. By Stirling’s
approximation, $\Pr(E)=\Omega(n^{-1})$, and it follows that for any event $F$,
(77) $\Pr(A_{n}\in F)=\Pr(\tilde{A}_{n}\in F\mid E)\leq\frac{\Pr(\tilde{A}_{n}\in
F)}{\Pr(E)}\leq O(n\Pr(\tilde{A}_{n}\in F)).$
In other words, $\ln\Pr(A_{n}\in F)\leq\ln\Pr(\tilde{A}_{n}\in F)+O(\ln n)$,
and so Corollary 22 immediately implies the same upper bound for
$\mathcal{G}(n,m)$.
For the lower bound, we need to look into the proof of Proposition 25. Recall
that in the proof of Proposition 25, we constructed a matrix $M$ with
$O(t)=o(n^{2})$ non-zero entries, all of which had the same value. For the
$\mathcal{G}(n,q)$ adjacency matrix $\tilde{A}_{n}$,
$\langle\tilde{A}_{n},M\rangle$ has a (scaled and translated) binomial
distribution; for the $\mathcal{G}(n,m)$ adjacency matrix $A_{n}$, $\langle
A_{n},M\rangle$ has a (scaled and translated) hypergeometric distribution.
Now, if $H_{k,n,r}$ denotes a hypergeometric random variable with population
size $n$, $k$ successes, and $r$ trials; and if $B_{q,r}$ denotes a binomial
random variable with success probability $q$ and $r$ trials; then one easily
shows using Stirling’s approximation that
(78) $|\ln\Pr(H_{k,n,r}=s)-\ln\Pr(B_{k/n,r}=s)|=O(r^{2}/n).$
In the setting of Proposition 25, the number of trials $r$ is the number of
non-zero upper-diagonal elements of $M$ and the population size is
$\binom{n}{2}$; since $r^{2}/\binom{n}{2}=O(t^{2}/n^{2})=o(t)$, we have
(79) $\ln\Pr(\langle
A_{n},M\rangle>t)\geq\ln\Pr(\langle\tilde{A}_{n},M\rangle>t)-o(t).$
With this lower bound, we can follow the rest of the proof of Proposition 25
to complete the proof of Theorem 11.
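The comparison (78) can be illustrated numerically; in the sketch below (our own illustration, with arbitrary sizes) the population plays the role of $\binom{n}{2}$:

```python
import numpy as np
from scipy.stats import binom, hypergeom

# Log-pmfs of a hypergeometric and the matching binomial differ by O(r^2/N)
# near the mean when r trials are drawn from a population of size N.
N, K, r = 1_000_000, 300_000, 300        # population, successes, trials
s = np.arange(60, 121)                   # window around the mean r*K/N = 90
diff = np.abs(hypergeom.logpmf(s, N, K, r) - binom.logpmf(s, r, K / N))
assert diff.max() <= r**2 / N            # = 0.09 here; the actual gap is ~1e-3
```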
## 6\. Proof of Theorem 10
Next, we consider the case that $\frac{\Lambda^{*}(s)}{s^{2}}$ does not
achieve its infimum at any $s>0$, and we construct an example showing that
taking $s\to 0$ does not yield the sharp bound. The basic idea is to use the
first part of Lemma 13, by producing a positive semi-definite matrix $M$ and
giving a lower bound on the tails of $\langle A,M\rangle$. The main challenge
is to find a good matrix satisfying the positive definiteness constraint: in
Proposition 25 we chose a matrix taking only one non-zero value, specifically,
$s_{*}\in\operatorname{argmin}\frac{\Lambda^{*}(s)}{s^{2}}$. The issue, of
course, is that if $s_{*}$ is negative then such a matrix cannot be positive
semi-definite. Instead, we will construct a rank-1 matrix taking three
different non-zero values.
Consider a sequence $a_{1},\dots,a_{n}$ whose non-zero elements take $m$
different values, $\alpha b_{1},\dots,\alpha b_{m}$, with $\alpha b_{i}$
repeated $\tilde{m}_{i}=\beta m_{i}(1+o(1))$ times respectively (the addition
of the error term just allows us to deal with the fact that matrices have
integer numbers of rows and columns). We will think of $m_{i}$ and $b_{i}$ as
being fixed, while $\alpha$ and $\beta$ depend on the tail bound that we want
to show, with $\alpha$ being small and $\beta$ being large. Then for any
$t=\sum_{i=1}^{m}t_{i}$,
(80)
$\Pr\left(\sum_{i}a_{i}\xi_{i}>t\right)\geq\prod_{i=1}^{m}\Pr\left(\alpha
b_{i}\sum_{j=1}^{\lceil\tilde{m}_{i}\rceil}\xi_{j}>t_{i}\right)$
and so Theorem 24 implies that if $\frac{t_{i}}{\alpha\beta
m_{i}b_{i}}=\Theta(1)$ then
(81)
$\ln\Pr\left(\sum_{i}a_{i}\xi_{i}>t\right)\geq-\beta\sum_{i}m_{i}\Lambda^{*}\left(\frac{t_{i}}{\alpha\beta
m_{i}b_{i}}\right)-o\left(\beta\sum_{i}m_{i}\right)$
Our goal will be to choose the parameters $m_{i},b_{i},\alpha,\beta$, and
$t_{i}$ to make the right hand side large. First, we will treat $m_{i}$ and
$b_{i}$ as given, and optimize over $t_{i}$, $\alpha$, and $\beta$. We will
enforce the constraints $\sum_{i}t_{i}=t$ and
$\sum_{i}a_{i}^{2}=\alpha^{2}\beta\sum_{i}m_{i}b_{i}^{2}=2$.
Define
$\displaystyle\beta$
$\displaystyle=t^{2}\frac{\sum_{i}m_{i}b_{i}^{2}}{2\left(\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})\right)^{2}}$
$\displaystyle\alpha$
$\displaystyle=\sqrt{2}\left(\beta\sum_{i}m_{i}b_{i}^{2}\right)^{-1/2}=\frac{2\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})}{t\sum_{i}m_{i}b_{i}^{2}}$
$\displaystyle t_{i}$ $\displaystyle=\alpha\beta
m_{i}b_{i}\Lambda^{\prime}(b_{i}).$
With these choices, we have
(82) $\alpha^{2}\beta=\frac{2}{\sum_{i}m_{i}b_{i}^{2}},$
meaning that
(83) $\sum_{i}a_{i}^{2}=\alpha^{2}\beta\sum_{i}m_{i}b_{i}^{2}=2$
and
(84) $\sum_{i}t_{i}=\alpha\beta\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})=t.$
(These turn out to be the optimal choices of $\alpha,\beta$, and $t_{i}$, although
we do not need to show this, since any choice will give us a bound.) Plugging
these parameters into (81), we obtain
(85)
$\ln\Pr\left(\sum_{i}a_{i}\xi_{i}>t\right)\geq-\frac{t^{2}}{2}\cdot\frac{\sum_{i}m_{i}b_{i}^{2}\cdot\sum_{i}m_{i}\Lambda^{*}(\Lambda^{\prime}(b_{i}))}{\left(\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})\right)^{2}}-o(t^{2}),$
where the $o(t^{2})$ term depends on the parameters $m_{i}$ and $b_{i}$.
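Since the algebra above is easy to get wrong, the following symbolic sketch (our own check, with two values $b_{1},b_{2}$ for brevity and abstract stand-ins $L_{i}$ for $\Lambda^{\prime}(b_{i})$; the signs of the $b_{i}$ do not affect these identities) verifies that the chosen $\alpha,\beta,t_{i}$ satisfy the constraints and reduce the argument of $\Lambda^{*}$ in (81) to $\Lambda^{\prime}(b_{i})$:

```python
import sympy as sp

m1, m2, b1, b2, t, L1, L2 = sp.symbols('m1 m2 b1 b2 t L1 L2', positive=True)
S2 = m1 * b1**2 + m2 * b2**2             # sum_i m_i b_i^2
S1 = m1 * b1 * L1 + m2 * b2 * L2         # sum_i m_i b_i Lambda'(b_i)

beta = t**2 * S2 / (2 * S1**2)
alpha = 2 * S1 / (t * S2)
t1 = alpha * beta * m1 * b1 * L1
t2 = alpha * beta * m2 * b2 * L2

assert sp.simplify(alpha**2 * beta * S2 - 2) == 0            # (82) and (83)
assert sp.simplify(t1 + t2 - t) == 0                         # (84)
assert sp.simplify(t1 / (alpha * beta * m1 * b1) - L1) == 0  # argument in (81)
```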
Next, we will define the parameters $m_{i}$ and $b_{i}$. Take
$\epsilon,\delta>0$, and define
$\displaystyle m_{1}$ $\displaystyle=\frac{1}{\epsilon^{2}}$ $\displaystyle
b_{1}$ $\displaystyle=\epsilon$ $\displaystyle m_{2}$
$\displaystyle=2\frac{\epsilon}{\delta^{3}}$ $\displaystyle b_{2}$
$\displaystyle=-\delta$ $\displaystyle m_{3}$
$\displaystyle=\frac{\epsilon^{4}}{\delta^{6}}$ $\displaystyle b_{3}$
$\displaystyle=\frac{\delta^{2}}{\epsilon},$
and note that it is possible to define a positive semi-definite integral
kernel taking the value $b_{i}/2$ on a set of measure $2m_{i}$, simply by
starting with a function taking the values $\sqrt{\epsilon}$ and
$-\delta/\sqrt{\epsilon}$ on sets of size $1/\epsilon$ and
$\epsilon/\delta^{3}$ respectively, and then taking the outer product of that
function with itself. It follows that if $\epsilon$ and $\delta$ are fixed and
$\beta$ is large (and $\alpha$ is arbitrary), then we can define a rank-1
p.s.d. matrix ($M$, say) with $(1+o(1))2\beta m_{i}$ entries taking the value
$\alpha b_{i}/2$; note that
$\|M\|_{F}^{2}=\frac{1+o(1)}{2}\alpha\beta^{2}\sum_{i}m_{i}=1+o(1)$. Since $A$
is a symmetric matrix with $\xi$ on the upper diagonal, this will yield
(86) $\langle A,M\rangle=\sum_{i}a_{i}\xi_{i}$
where $(a_{i})$ is a sequence containing $(1+o(1))\beta m_{i}$ copies of
$\alpha b_{i}$.
We will first choose a small $\delta$ and then choose a smaller $\epsilon$.
The error terms in the following analysis take this into account, so for
example we may write $\epsilon^{2}\delta^{-k}=o(\epsilon)$ no matter how large
$k$ is. Our next task is to compute the various expressions in (85), in terms
of $\epsilon$ and $\delta$. Before doing so, we observe some basic properties
of the Legendre transform.
###### Lemma 28.
Assume that $f$ is convex and differentiable and
$\lim_{x\to\infty}\frac{f(x)}{x^{2}}=0$. Then
$\lim_{x\to\infty}\frac{f^{*}(f^{\prime}(x))}{x^{2}}=0$.
###### Proof.
Fix $x$ and let $y=f^{\prime}(x)$. By the definition of $f^{*}$, we can write
(87) $f^{*}(y)=\sup_{z}\\{zy-f(z)\\},$
and note that the supremum is attained at $x=z$ (because the derivative is
zero, and the expression being supremized is concave). Hence,
(88) $f^{*}(f^{\prime}(x))=xf^{\prime}(x)-f(x).$
Convexity of $f$ implies that $f^{\prime}$ is non-decreasing, and so
$f(x)=o(x^{2})$ implies that $f^{\prime}(x)=o(x)$ as $x\to\infty$. Hence,
$f^{*}(f^{\prime}(x))=xf^{\prime}(x)-f(x)=o(x^{2})$. ∎
###### Lemma 29.
If $f$ is convex with $f(0)=f^{\prime}(0)=0$ and $f^{\prime\prime}(0)>0$, and
if both $f$ and $f^{*}$ are $\mathcal{C}^{4}$ in a neighborhood of $0$, then
(89)
$f^{*}(f^{\prime}(\epsilon))=f^{\prime\prime}(0)\frac{\epsilon^{2}}{2}+\left((f^{*})^{\prime\prime\prime}(0)(f^{\prime\prime}(0))^{3}+3f^{\prime\prime\prime}(0)\right)\frac{\epsilon^{3}}{6}+O(\epsilon^{4})$
as $\epsilon\to 0$.
###### Proof.
This is nothing but Taylor’s theorem and a computation. Setting $g=f^{*}$, we
compute
(90)
$\frac{d}{d\epsilon}g(f^{\prime}(\epsilon))=g^{\prime}(f^{\prime}(\epsilon))f^{\prime\prime}(\epsilon),$
and then
(91)
$\frac{d^{2}}{d\epsilon^{2}}g(f^{\prime}(\epsilon))=g^{\prime\prime}(f^{\prime}(\epsilon))(f^{\prime\prime}(\epsilon))^{2}+g^{\prime}(f^{\prime}(\epsilon))f^{\prime\prime\prime}(\epsilon),$
and finally
(92)
$\frac{d^{3}}{d\epsilon^{3}}g(f^{\prime}(\epsilon))=g^{\prime\prime\prime}(f^{\prime}(\epsilon))(f^{\prime\prime}(\epsilon))^{3}+3g^{\prime\prime}(f^{\prime}(\epsilon))f^{\prime\prime}(\epsilon)f^{\prime\prime\prime}(\epsilon)+g^{\prime}(f^{\prime}(\epsilon))f^{\prime\prime\prime\prime}(\epsilon).$
Our assumptions on $f$ ensure that $g^{\prime}(0)=0$, and hence the first-
order term vanishes, the second-order term at $\epsilon=0$ becomes
(93) $g^{\prime\prime}(0)(f^{\prime\prime}(0))^{2},$
and the third-order term at $\epsilon=0$ becomes
(94)
$g^{\prime\prime\prime}(0)(f^{\prime\prime}(0))^{3}+3g^{\prime\prime}(0)f^{\prime\prime}(0)f^{\prime\prime\prime}(0).$
Finally, note that $g^{\prime\prime}(0)f^{\prime\prime}(0)=1$. ∎
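Lemmas 28 and 29 can be checked symbolically on a concrete $f$. The following sketch uses $f(x)=\ln\cosh(x)$ (our own example), for which $f^{\prime\prime\prime}(0)=0$ and hence also $(f^{*})^{\prime\prime\prime}(0)=0$, so (89) predicts $f^{*}(f^{\prime}(\epsilon))=\epsilon^{2}/2+O(\epsilon^{4})$:

```python
import sympy as sp

# By (88), f*(f'(e)) = e*f'(e) - f(e); compare its Taylor series with the
# prediction of (89) for f(x) = ln cosh(x), where f''(0) = 1.
e = sp.symbols('epsilon')
f = sp.log(sp.cosh(e))
comp = e * sp.diff(f, e) - f             # f*(f'(e)) via (88)
assert sp.series(comp, e, 0, 4).removeO() == e**2 / 2
```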
Note that $\Lambda$ satisfies the assumptions on $f$ in Lemmas 28 (because we
assumed that $\Lambda(s)=o(s^{2})$) and 29 (because every cumulant-generating
function defined on a neighborhood of zero is $\mathcal{C}^{\infty}$ in a
neighborhood of zero). Note that $\Lambda$ and $\Lambda^{*}$ both have a
second-order root at zero. Define
(95) $L=\Lambda^{\prime\prime}(0)>0.$
Expanding out the parameters in (85), we have
(96)
$\sum_{i}m_{i}b_{i}^{2}=1+2\frac{\epsilon}{\delta}+\frac{\epsilon^{2}}{\delta^{2}}$
for the first term in the numerator. The second term in the numerator is
$\displaystyle\sum_{i}m_{i}\Lambda^{*}(\Lambda^{\prime}(b_{i}))$
$\displaystyle=\frac{1}{\epsilon^{2}}(\Lambda^{*}\circ\Lambda^{\prime})(\epsilon)+2\frac{\epsilon}{\delta^{3}}(\Lambda^{*}\circ\Lambda^{\prime})(-\delta)+\frac{\epsilon^{4}}{\delta^{6}}(\Lambda^{*}\circ\Lambda^{\prime})(\delta^{2}/\epsilon).$
According to Lemma 28 and our assumptions on $\Lambda$, the last term is
$o(\epsilon^{2})$. Applying Lemma 29 to the other terms, we have
$\displaystyle\sum_{i}m_{i}\Lambda^{*}(\Lambda^{\prime}(b_{i}))$
$\displaystyle=\frac{L}{2}+M\frac{\epsilon}{6}+L\frac{\epsilon}{\delta}-M\frac{\epsilon}{3}+O(\epsilon^{2}+\epsilon\delta)$
$\displaystyle=\frac{L}{2}\left(1+\frac{2\epsilon}{\delta}\right)-M\frac{\epsilon}{6}+O(\epsilon^{2}+\epsilon\delta),$
where
(97)
$M=(\Lambda^{*})^{\prime\prime\prime}(0)L^{3}+3\Lambda^{\prime\prime\prime}(0).$
For the denominator in (85), we ignore the $i=3$ contribution, giving a lower
bound of
$\displaystyle\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})$
$\displaystyle\geq\frac{\Lambda^{\prime}(\epsilon)}{\epsilon}-2\frac{\epsilon\Lambda^{\prime}(-\delta)}{\delta^{2}}$
$\displaystyle=\Lambda^{\prime\prime}(0)+\frac{\epsilon}{2}\Lambda^{\prime\prime\prime}(0)+O(\epsilon^{2})+2\frac{\epsilon}{\delta}\Lambda^{\prime\prime}(0)-\epsilon\Lambda^{\prime\prime\prime}(0)+O(\epsilon\delta)$
$\displaystyle=L\left(1+2\frac{\epsilon}{\delta}\right)-\frac{\epsilon}{2}\Lambda^{\prime\prime\prime}(0)+O(\epsilon^{2}+\epsilon\delta).$
Putting everything together,
$\displaystyle\frac{\sum_{i}m_{i}b_{i}^{2}\cdot\sum_{i}m_{i}\Lambda^{*}(\Lambda^{\prime}(b_{i}))}{\left(\sum_{i}m_{i}b_{i}\Lambda^{\prime}(b_{i})\right)^{2}}$
$\displaystyle=\frac{\left(1+\frac{2\epsilon}{\delta}+O(\epsilon^{2})\right)\left(\frac{L}{2}\left(1+\frac{2\epsilon}{\delta}\right)-\frac{\epsilon
M}{6}+O(\epsilon^{2}+\epsilon\delta)\right)}{\left(L\left(1+\frac{2\epsilon}{\delta}\right)-\frac{\epsilon\Lambda^{\prime\prime\prime}(0)}{2}+O(\epsilon^{2}+\epsilon\delta)\right)^{2}}$
$\displaystyle=\frac{\frac{L}{2}-\frac{\epsilon
M}{6}+O(\epsilon^{2}+\epsilon\delta)}{L^{2}-\epsilon
L\Lambda^{\prime\prime\prime}(0)+O(\epsilon^{2}+\epsilon\delta)}$
$\displaystyle=\frac{1}{2L}-\frac{\epsilon
M}{6L^{2}}+\frac{\epsilon\Lambda^{\prime\prime\prime}(0)}{2L^{2}}+O(\epsilon^{2}+\epsilon\delta)$
$\displaystyle=\frac{1}{2L}-\frac{\epsilon(\Lambda^{*})^{\prime\prime\prime}(0)L}{6}+O(\epsilon^{2}+\epsilon\delta),$
and in particular it is possible to choose $\delta$ and $\epsilon$ so that
this quantity is at most $(1-\eta)\frac{1}{2L}$ for some $\eta>0$.
Going back to (85) and recalling that the sequence $a_{i}$ can be realized as
the elements of a rank-1 p.s.d. matrix, $M$ say, with $\|M\|_{F}=1+o(1)$, we
have shown that
(98) $\ln\Pr(\lambda_{1}(A_{n})>t)\geq\ln\Pr(\langle
A,M\rangle>t\|M\|_{F})\geq-(1-\eta)\frac{t^{2}}{4L}-o(t^{2}).$
Replacing $t$ by $m_{n}t$ and recalling that
$L=\Lambda^{\prime\prime}(0)=\mathbb{E}\xi^{2}$ completes the proof of Theorem
10.
## 7\. Back to triangle counts
We now turn to the proofs of Theorems 1 and 2. The proofs are very similar, so
we devote most of this section to proving Theorem 1 and then indicate what
changes must be made to obtain Theorem 2.
Our eigenvalue LDP (Theorem 9) allows us to control the triangle-count
contribution from a constant number of very extreme eigenvalues, but in order
to fully characterize the behavior of the triangle-count, we also need to
handle the other eigenvalues. We will do this in two steps: we use Theorem 12
to control the contribution of the bulk eigenvalues, and then Corollary 22 to
show that the triangle count cannot be determined by $\omega(1)$ largish
eigenvalues. Bear in mind that we will be applying our eigenvalue LDP to
$\mathbb{E}A-A$, where $A$ is the adjacency matrix, because Theorem 11 is for
the positive eigenvalues of centered matrices, and we are interested in the
negative eigenvalues here.
### 7.1. The contribution of the bulk
We consider two functions $f_{1}$ and $f_{2}$, where
(99) $f_{1}(x)=\begin{cases}0&\text{if $x<0$}\\\ x^{3}&\text{if $0\leq
x<\sqrt{K}$}\\\ 3Kx-2K^{3/2}&\text{if $x\geq\sqrt{K}$}\end{cases}$
and $f_{2}(x)=-f_{1}(-x)$. Then both $f_{1}$ and $f_{2}$ are $3K$-Lipschitz
functions; also, $f_{1}$ is convex and $f_{2}$ is concave.
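The following sketch (an illustration with our own grid, tolerances, and value of $K$) checks the properties of $f_{1},f_{2}$ used below, namely the pointwise bound appearing in (101) and the Lipschitz constant:

```python
import numpy as np

# f1 + f2 is nonpositive below -sqrt(K), is dominated by x^3 on
# [-sqrt(K), inf), and the sum is 3K-Lipschitz.
K = 4.0
def f1(x):
    return np.where(x < 0, 0.0,
                    np.where(x < np.sqrt(K), x**3, 3 * K * x - 2 * K**1.5))
f2 = lambda x: -f1(-x)

x = np.linspace(-10, 10, 100_001)
s = f1(x) + f2(x)
assert np.all(s[x < -np.sqrt(K)] <= 1e-12)
assert np.all(s[x >= -np.sqrt(K)] <= x[x >= -np.sqrt(K)]**3 + 1e-9)
assert np.abs(np.diff(s) / np.diff(x)).max() <= 3 * K + 1e-6
```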
The following lemma is the main technical result of this section. Essentially,
it says that changing the triangle-count using non-extreme eigenvalues carries
a substantial entropy cost.
###### Lemma 30.
Let $A_{n}$ be the centered adjacency matrix of a $\mathcal{G}(n,m)$ graph.
There is a universal constant $C$ such that if $K\geq C$ then
(100)
$\Pr\left(\sum_{i:\lambda_{i}(A_{n})\geq-\sqrt{Kn}}\lambda_{i}^{3}(A_{n})<-\delta-O(n^{2})\right)\leq\exp\left(-\Omega\left(\frac{\delta^{2}}{n^{3}K^{2}}\right)\right).$
###### Proof.
We will prove the claim when $A_{n}$ is the centered adjacency matrix of a
$\mathcal{G}(n,p)$ graph, with $p=m/\binom{n}{2}$. The result for
$\mathcal{G}(n,m)$ follows from the fact that a $\mathcal{G}(n,m)$ graph can
be obtained by starting from $\mathcal{G}(n,p)$ and conditioning on the
(probability $\Omega(1/n)$) event that there are exactly $m$ edges.
Note that
(101) $f_{1}(x)+f_{2}(x)\leq\begin{cases}0&\text{if $x<-\sqrt{K}$}\\\
x^{3}&\text{if $x\geq-\sqrt{K}$}.\end{cases}$
Hence,
(102) $\sum_{i}(f_{1}+f_{2})(n^{-1/2}\lambda_{i}(A_{n}))\leq
n^{-3/2}\sum_{i:\lambda_{i}(A_{n})\geq-\sqrt{Kn}}\lambda_{i}^{3}(A_{n}).$
Since $-f_{2}$ is convex, Theorem 12 applies to both $f_{1}$ and $f_{2}$,
giving
(103)
$\Pr\left(\frac{1}{n}\operatorname{tr}[(f_{1}+f_{2})(n^{-1/2}A_{n})]\leq\frac{1}{n}\mathbb{E}\operatorname{tr}[(f_{1}+f_{2})(n^{-1/2}A_{n})]-s\right)\leq
2\exp(-\Omega(n^{2}s^{2}/K^{2}))$
whenever $s=\omega(K/n)$. From the inequality above, we also have
(104)
$\Pr\left(\sum_{i:\lambda_{i}(A_{n})\geq-\sqrt{Kn}}\lambda_{i}^{3}(A_{n})\leq
n^{3/2}\mathbb{E}\operatorname{tr}(f_{1}+f_{2})(n^{-1/2}A_{n})-s\right)\leq
2\exp\left(-\Omega\left(\frac{s^{2}}{K^{2}n^{3}}\right)\right).$
It remains to control
$\mathbb{E}\operatorname{tr}[(f_{1}+f_{2})(n^{-1/2}A_{n})]$; specifically, we
want to show that $\mathbb{E}\operatorname{tr}(f_{1}+f_{2})(n^{-1/2}A_{n})$ is
close to $n^{-3/2}\mathbb{E}\operatorname{tr}(A_{n}^{3})$ (which is
$O(\sqrt{n})$). But note that
$|\operatorname{tr}[(f_{1}+f_{2})(n^{-1/2}A_{n})-n^{-3/2}A_{n}^{3}]|\\\ \leq
n^{-3/2}\sum_{i:|\lambda_{i}|>\sqrt{Kn}}|\lambda_{i}(A_{n})|^{3}\leq
n^{-1/2}|s_{1}(A_{n})|^{3}1_{\\{|s_{1}(A_{n})|>\sqrt{Kn}\\}},$
where $s_{1}(A_{n})$ is the largest singular value of $A_{n}$. But Proposition
21 implies that if $K$ is sufficiently large then
$\mathbb{E}[|s_{1}(A_{n})|^{3}1_{\\{|s_{1}(A_{n})|>\sqrt{Kn}\\}}]\leq\exp(-\Omega(\sqrt{n}))$.
Hence,
(105)
$\Pr\left(\sum_{i:\lambda_{i}(A_{n})\geq-\sqrt{Kn}}\lambda_{i}^{3}(A_{n})\leq
\mathbb{E}\operatorname{tr}(A_{n}^{3})-s-\exp(-\Omega(\sqrt{n}))\right)\leq
2\exp\left(-\Omega\left(\frac{s^{2}}{K^{2}n^{3}}\right)\right).$
Finally, note that $\mathbb{E}\operatorname{tr}(A_{n}^{3})=O(n^{2})$. ∎
We will be interested in applying Lemma 30 when $\delta\gg n^{9/4}$. In this
case, the $O(n^{2})$ term becomes negligible and the probability bound is at
most $\exp(-\omega(n^{3/2}))$.
### 7.2. Many large negative eigenvalues
There is one situation that we still need to handle: the possibility that
there are $\omega(1)$ eigenvalues smaller than $-\Omega(\sqrt{n})$, and
$\omega(1)$ of these eigenvalues contribute to the triangle count.
The first observation is that although Corollary 22 is written for a fixed
_number_ of singular values, it can be easily transferred to an inequality for
singular values above a certain threshold.
###### Corollary 31.
With the notation of Corollary 22, if $\sigma_{i}=\sigma_{i}(A)$ are the
singular values of $A$ then
(106) $\ln\Pr\left(\sqrt{\sum_{\sigma_{i}>\sqrt{Kn}}\sigma_{i}^{2}}\geq
t\right)\leq-\frac{t^{2}L}{2}+O\left(\frac{t^{2}}{K}\ln K\right)$
The same bound holds if $A$ is the centered adjacency matrix of a
$\mathcal{G}(n,m)$ graph.
###### Proof.
Set $k=\lceil t^{2}/(Kn)\rceil$ and observe that if
$\sigma_{1},\dots,\sigma_{k}\geq\sqrt{Kn}$ then $\sum_{i=1}^{k}\sigma_{i}^{2}\geq kKn\geq t^{2}$.
Hence, we either have
(107)
$\sum_{\sigma_{i}>\sqrt{Kn}}\sigma_{i}^{2}\leq\sum_{i=1}^{k}\sigma_{i}^{2},$
or else $\sum_{i=1}^{k}\sigma_{i}^{2}\geq t^{2}$. It follows that
(108) $\ln\Pr\left(\sqrt{\sum_{\sigma_{i}>\sqrt{Kn}}\sigma_{i}^{2}}\geq
t\right)\leq\ln\Pr\left(\sqrt{\sum_{i=1}^{k}\sigma_{i}^{2}}\geq t\right),$
and we conclude by applying Corollary 22 with our choice of $k$.
Finally, if $A$ is the centered adjacency matrix of a $\mathcal{G}(n,m)$ graph
then we use the same argument that was used to extend Corollary 22 to the
$\mathcal{G}(n,m)$ case, namely that a $\mathcal{G}(n,m)$ graph can be
obtained by conditioning a $\mathcal{G}(n,p)$ graph on an event of
$\Omega(n^{-1})$ probability. ∎
### 7.3. The upper bound in Theorem 1
Let $A$ be the adjacency matrix of a $\mathcal{G}(n,m)$ graph and recall that
$\tau(A)=\frac{\operatorname{tr}[A^{3}]}{n(n-1)(n-2)}=\frac{\operatorname{tr}[A^{3}]}{n^{3}}+O(1/n)$.
Let $\tilde{A}=A-\mathbb{E}A$; by Corollary 4,
(109) $\Pr(\tau(A)\leq p^{3}-t)=\Pr(\operatorname{tr}[A^{3}]\leq
n^{3}p^{3}-n^{3}t+O(n^{2}))\leq\Pr(\operatorname{tr}[\tilde{A}^{3}]\leq-n^{3}t+O(n^{2})).$
Writing out
$\operatorname{tr}[\tilde{A}^{3}]=\sum_{i}\lambda_{i}^{3}(\tilde{A})$, choose
$K=\omega(1)$ and $\epsilon=o(1)$ such that $K/\epsilon=o(n^{1/2}t^{2/3})$.
Applying Lemma 30 to $\tilde{A}$ (with $\delta=\epsilon tn^{3}$) gives
(110)
$\Pr\left(\sum_{i:\lambda_{i}\geq-\sqrt{Kn}}\lambda_{i}^{3}(\tilde{A})<-\epsilon
tn^{3}\right)\leq\exp\left(-\Omega\left(\frac{\epsilon^{2}t^{2}n^{3}}{K^{2}}\right)\right)=\exp(-\omega(n^{2}t^{2/3}))$
On the other hand, the monotonicity of $\ell^{p}$ norms ($\|x\|_{3}\leq\|x\|_{2}$) implies that
(111)
$\left|\sum_{i:\lambda_{i}<-\sqrt{Kn}}\lambda_{i}^{3}\right|\leq\left(\sum_{i:\lambda_{i}<-\sqrt{Kn}}\lambda_{i}^{2}\right)^{3/2}\leq\left(\sum_{i:\sigma_{i}>\sqrt{Kn}}\sigma_{i}^{2}\right)^{3/2},$
where $\lambda_{i}=\lambda_{i}(\tilde{A})$ and
$\sigma_{i}=\sigma_{i}(\tilde{A})$. Recall here that
$L=\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}$, where $\Lambda$ is the
cumulant-generating function of an entry of $\tilde{A}$. Lemma 27 (with
$q=1-p$) implies that $L=\frac{\ln\frac{p}{1-p}}{2p-1}$. By Corollary 31 (and
taking into account the fact that $\epsilon=o(1)$ and $K=\omega(1)$),
$\displaystyle\Pr\left(\sum_{i:\lambda_{i}<-\sqrt{Kn}}\lambda_{i}^{3}(\tilde{A})<-(1-\epsilon)tn^{3}\right)$
$\displaystyle\leq\Pr\left(\sqrt{\sum_{i:\sigma_{i}>\sqrt{Kn}}\sigma_{i}^{2}}>(1-\epsilon)^{1/3}t^{1/3}n\right)$
$\displaystyle\leq\exp\left(-\frac{L}{2}t^{2/3}n^{2}+o(t^{2/3}n^{2})\right).$
Combined with (110), this yields
(112) $\ln\Pr\left(\operatorname{tr}[\tilde{A}^{3}]\leq-
tn^{3}\right)\leq-\frac{Lt^{2/3}n^{2}}{2}(1+o(1)).$
Now we apply (109), noting that $n^{3}t=\omega(n^{2})$, and so
$n^{3}t+O(n^{2})=n^{3}t(1+o(1))$, to get
(113) $\ln\Pr(\tau(A)\leq p^{3}-t)\leq-\frac{Lt^{2/3}n^{2}}{2}(1+o(1)).$
This completes the proof of the upper bound in Theorem 1 but let us also note
two other facts that we can easily extract from the proof. From (110) we see
that only the extremely negative eigenvalues contribute to the triangle
deviation:
###### Corollary 32.
Conditioned on $\tau(A)\leq p^{3}-t$,
$\sum_{i:\lambda_{i}\leq-\Omega(\sqrt{n})}\lambda_{i}^{3}(\tilde{A})\leq-
tn^{3}(1-o(1))$ with high probability.
The other piece of information we can extract from our proof is that the
vertex degrees of a triangle-deficient graph are close to constant.
###### Corollary 33.
Conditioned on $\tau(A)\leq p^{3}-t$, if $d_{1},\dots,d_{n}$ are the vertex
degrees of the graph then with high probability
(114) $\sum_{i}(d_{i}-pn)^{2}=o(tn^{3}).$
###### Proof.
Since $\sum_{i}d_{i}=2m=n^{2}p+O(n)$, we have
$\sum_{i}(d_{i}-pn)^{2}=\sum_{i}d_{i}^{2}-p^{2}n^{3}+O(n^{2})$. Going back to
the proof of Lemma 3, we have
(115)
$\operatorname{tr}[\tilde{A}^{3}]\leq\operatorname{tr}[A^{3}]-p^{3}n^{3}-3p\sum_{i}(d_{i}-pn)^{2}+O(n^{2}).$
It follows that
(116) $\left\\{\tau(A)\leq
p^{3}-t\right\\}\subseteq\left\\{\operatorname{tr}[\tilde{A}^{3}]\leq-
tn^{3}-3p\sum_{i}(d_{i}-pn)^{2}+O(n^{2})\right\\}.$
Hence, if $tn^{3}\gg n^{2}$ then
$\ln\Pr\left(\tau(A)\leq p^{3}-t\text{ and }\sum_{i}(d_{i}-pn)^{2}\geq\epsilon
tn^{3}\right)\\\ \leq\ln\Pr\left(\operatorname{tr}[\tilde{A}^{3}]\leq-
tn^{3}(1+\Omega(\epsilon))\right)\leq-\frac{Lt^{2/3}n^{2}}{2}(1+\Omega(\epsilon)).$
In particular,
(117) $\Pr\left(\tau(A)\leq p^{3}-t\text{ and
}\sum_{i}(d_{i}-pn)^{2}\geq\epsilon tn^{3}\right)=o\left(\Pr\left(\tau(A)\leq
p^{3}-t\right)\right),$
and the claim follows. ∎
### 7.4. The lower bound in Theorem 1
In showing the lower bound of Theorem 1, we need to apply Corollary 5 (instead
of Corollary 4 as in the upper bound), and therefore we need to control
$\operatorname{tr}[\tilde{A}^{3}]$ and $\sum_{i}d_{i}^{2}$ simultaneously. To
do this, take $v$ and $\ell$ as in Corollary 26 (applied with $t^{2/3}n^{2}$
in place of $t$). Now, let $\xi_{1},\dots,\xi_{\binom{n}{2}}$ be some ordering
of the upper diagonal of $\tilde{A}$, ordered so that the first
$\xi_{1},\dots,\xi_{\binom{\ell}{2}}$ correspond to the upper diagonal of the
upper-left $\ell\times\ell$ principal submatrix. Then
$\langle\tilde{A},vv^{T}\rangle=2\sum_{i=1}^{\binom{\ell}{2}}v_{1}^{2}\xi_{i}$,
and so conditioning on $\langle\tilde{A},vv^{T}\rangle<-t^{1/3}n$ is
equivalent to conditioning on
$\sum_{i=1}^{r}\xi_{i}<-\frac{t^{1/3}n}{2v_{1}^{2}}$ (where we have set
$r=\binom{\ell}{2}$).
Let $\Omega$ be the event that $\langle\tilde{A},vv^{T}\rangle\leq-t^{1/3}n$.
To prove the lower bound of Theorem 1, we show three properties:
1. (1)
$\ln\Pr(\Omega)\geq-\frac{t^{2/3}n^{2}L}{2}(1+o(1))$.
2. (2)
Conditioned on $\Omega$, $\sum_{i}d_{i}^{2}=n^{3}p^{2}+O(n^{2})$ with
probability at least $\exp(-o(t^{2/3}n^{2}))$.
3. (3)
Conditioned on $\Omega$, $\operatorname{tr}[\tilde{A}^{3}]\leq-
tn^{3}(1-\epsilon)$ with probability at least
$1-\exp(-\Omega(\epsilon^{2/3}t^{2/3}n^{2}))$.
To complete the proof of the lower bound in Theorem 1 from these three facts,
we choose $\epsilon\to 0$ slowly enough that the probability of failure in
item (3) is at most half the probability of success in item (2). It then
follows that with probability at least $\exp(-t^{2/3}n^{2}L(1+o(1))/2)$,
$\Omega$ holds together with properties (2) and (3). In particular, we have
shown that
(118) $\ln\Pr\left(\operatorname{tr}[\tilde{A}^{3}]\leq-
tn^{3}-\Omega(n^{2})\text{ and }\sum_{i}d_{i}^{2}\leq
n^{3}p^{2}+O(n^{2})\right)\geq-\frac{t^{2/3}n^{2}L}{2}(1+o(1)),$
and so the lower bound of Theorem 1 follows from Corollary 5.
Note that the first claimed property follows immediately from Corollary 26.
Next we prove the second property, noting that $\ell=\Theta(t^{1/3}n)$ and so
$\ell^{3}/n=O(tn^{2})=o(t^{2/3}n^{2})$.
###### Lemma 34.
Conditioned on $\Omega$, we have $\sum_{i}d_{i}^{2}=n^{3}p^{2}+O(n^{2})$ with
probability at least $\exp(-O(\ell^{3}/n))$.
###### Proof.
We decompose $A$ block-wise as
$A=\begin{bmatrix}A_{11}&A_{12}\\\ A_{21}&A_{22}\end{bmatrix}$
where $A_{11}$ has size $\ell\times\ell$. Note that the event $\Omega$ amounts
to conditioning on the number of ones in $A_{11}$, and that the number of ones
in $A_{11}$ is between 0 and $\ell^{2}$. For some $0\leq s\leq\ell^{2}$, let
$\Omega_{s}$ be the event that $A_{11}$ has $s$ ones; we will prove the claim
conditioned on each $\Omega_{s}$ and then it follows for $\Omega$.
Let $\tilde{\Omega}_{s}$ be the event that $A_{12}$ has $\ell pn-s+\eta$ ones
for $|\eta|\leq 1$ (where the $\eta$ term is merely to account for
integrality). Conditioned on $\Omega_{s}\cap\tilde{\Omega}_{s}$,
$d_{1},\dots,d_{\ell}$ have expectation $pn\pm O(1)$, and it follows that
$d_{\ell+1},\dots,d_{n}$ also have expectation $pn\pm O(1)$. Note that, conditioned on $\Omega_{s}\cap\tilde{\Omega}_{s}$, each $d_{i}$ is distributed
as either a hypergeometric random variable or a sum of two independent
hypergeometric random variables; it follows from standard concentration
results that conditioned on $\Omega_{s}\cap\tilde{\Omega}_{s}$,
$\sum_{i}d_{i}^{2}=n^{3}p^{2}+O(n^{2})$
with high probability.
It therefore suffices to lower bound $\Pr(\tilde{\Omega}_{s}\mid\Omega_{s})$,
which is exactly the probability that a hypergeometric random variable – with
$\binom{n}{2}-\binom{\ell}{2}$ population, $p\binom{n}{2}-\Theta(\ell^{2})$
successes, and $n\ell$ trials – deviates from its mean by order
$\Theta(\ell^{2})$. Since a deviation of order $\Theta(\ell^{2})$ is a
deviation of $\Theta(\ell^{3/2}n^{-1/2})$ standard deviations, Stirling’s
approximation implies that
$\Pr(\tilde{\Omega}_{s}\mid\Omega_{s})=\exp(-O(\ell^{3}/n)).$
∎
Finally, we prove the third property claimed above.
###### Lemma 35.
For any $\epsilon>0$,
$\Pr(\Omega\text{ and }\operatorname{tr}[\tilde{A}^{3}]\geq-\epsilon
tn^{3})\leq\exp(-\Omega(\epsilon^{2/3}t^{2/3}n^{2})).$
###### Proof.
On the event $\Omega$, we have $\lambda_{n}(\tilde{A})\leq-t^{1/3}n$, and so in order to have $\Omega$ and $\operatorname{tr}[\tilde{A}^{3}]\geq-tn^{3}(1-a_{n})$ we must have
$\sum_{i=1}^{n-1}\lambda_{i}(\tilde{A})^{3}\geq a_{n}tn^{3}.$
Now, Lemma 30 implies that only extremely large eigenvalues can contribute:
for a sufficiently large $K$ we have
$\Pr\left(\sum_{i:\lambda_{i}(\tilde{A})\geq\sqrt{Kn}}\lambda_{i}^{3}(\tilde{A})\geq
a_{n}tn^{3}\right)\leq\exp(-\Omega(a_{n}^{2}t^{2}n^{3})),$
which is less than $\exp(-\omega(n^{2}t^{2/3}))$ if $a_{n}\to 0$ sufficiently
slowly.
In order to control the contribution of the large eigenvalues, consider the
symmetric random matrix $B$ having independent upper-diagonal entries, with
each entry $B_{ij}$ having the same distribution as $\tilde{A}_{ij}$ given
$\Omega$. Because the distribution of $\tilde{A}$ given $\Omega$ can be
obtained by conditioning $B$ on an event of probability $\Omega(1/n)$, it
suffices to show that
$\Pr\left(\sum_{i:\lambda_{i}(B)\geq\sqrt{Kn}}\lambda_{i}(B)^{2}\geq\epsilon^{2/3}t^{2/3}n^{2}\right)\leq\exp(-\Omega(\epsilon^{2/3}t^{2/3}n^{2})).$
But this follows because $\mathbb{E}B=\mathbb{E}[\tilde{A}\mid\Omega]$ is
negative semi-definite and we may apply Remark 23 to $B-\mathbb{E}B$. ∎
### 7.5. The two extreme eigenvalues
In proving the upper bound on $\Pr(\tau(A)\leq p^{3}-t)$, we applied the
inequality $\sum_{i}|a_{i}|^{3}\leq(\sum_{i}a_{i}^{2})^{3/2}$ to the
collection of most-negative eigenvalues. In order to understand how these most
negative eigenvalues are actually distributed, observe that in order for the
inequality above to be an equality, all but one of the terms in the sum must
be zero. Made quantitative, this observation implies that in order for our
probability upper bound to be tight, the smallest eigenvalue must dominate the
others. In what follows, we write $\|a\|_{p}^{p}$ for $\sum_{i}|a_{i}|^{p}$.
###### Lemma 36.
Let $a_{1},\dots$ be a sequence of non-negative numbers, in non-increasing
order. For $\epsilon>0$, if
(119) $\sum_{i\geq 2}a_{i}^{3}\geq\epsilon a_{1}^{3}$
then
(120) $\|a\|_{2}^{2}\geq(1+\epsilon)^{1/3}\|a\|_{3}^{2}.$
###### Proof.
If $\sum_{i\geq 2}a_{i}^{3}\geq\epsilon a_{1}^{3}$ then
$\|a\|_{\infty}^{3}=a_{1}^{3}\leq\frac{\|a\|_{3}^{3}}{1+\epsilon}$. Then
$\|a\|_{3}^{3}\leq\|a\|_{\infty}\|a\|_{2}^{2}\leq(1+\epsilon)^{-1/3}\|a\|_{3}\|a\|_{2}^{2}$,
and the claim follows. ∎
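The bound in (120) is in fact tight. As a quick illustrative check (ours, not part of the original argument), take $a=(1,1,0,0,\dots)$, so that (119) holds with $\epsilon=1$; then
$\|a\|_{2}^{2}=2\qquad\text{and}\qquad(1+\epsilon)^{1/3}\|a\|_{3}^{2}=2^{1/3}\cdot(1^{3}+1^{3})^{2/3}=2^{1/3}\cdot 2^{2/3}=2,$
so (120) holds with equality.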
Applying Lemma 36 to the most negative eigenvalues of $\tilde{A}$ allows us to
show that the eigenvalues of $\tilde{A}$ satisfy the claims that Theorem 1
makes for the eigenvalues of $A$.
###### Corollary 37.
In the setting of Theorem 1, for any $\epsilon>0$, conditioned on $\tau(A)\leq
p^{3}-t$ we have
(121) $\lambda_{n}^{3}(\tilde{A})\leq-(1-\epsilon)tn^{3}\text{ and
}\lambda_{n-1}^{3}(\tilde{A})\geq-\epsilon tn^{3}$
with high probability.
###### Proof.
Let $S=\\{i:\lambda_{i}(\tilde{A})\leq-\Omega(\sqrt{n})\\}$. By Corollary 32,
for any $\delta>0$, conditioned on $\tau(A)\leq p^{3}-t$ we have
(122) $\sum_{i\in S}\lambda_{i}^{3}(\tilde{A})\leq-(1-\delta)tn^{3}$
with high probability. On this event, we either have
$\lambda_{n}^{3}(\tilde{A})\leq-(1-\delta-\epsilon)tn^{3}$ or $\sum_{i\in
S\setminus\\{n\\}}\lambda_{i}^{3}(\tilde{A})\leq-\epsilon tn^{3}$. We will
show that for some $\delta=\Omega(\epsilon)$,
(123) $\Pr\left(\sum_{i\in
S}\lambda_{i}^{3}(\tilde{A})\leq-(1-\delta)tn^{3}\text{ and }\sum_{i\in
S\setminus\\{n\\}}\lambda_{i}^{3}(\tilde{A})\leq-\epsilon tn^{3}\right)$
is much smaller than $\Pr(\tau(A)\leq p^{3}-t)$; this will imply the claim.
Indeed, applying Lemma 36 to the sequence of $|\lambda_{i}|$ for $i\in S$, we
see that if
(124) $\sum_{i\in S}\lambda_{i}^{3}(\tilde{A})\leq-(1-\delta)tn^{3}\text{ and
}\sum_{i\in S\setminus\\{n\\}}\lambda_{i}^{3}(\tilde{A})\leq-\epsilon tn^{3}$
then
(125) $\sum_{i\in
S}\lambda_{i}^{2}(\tilde{A})\geq(1+\epsilon)^{1/3}(1-\delta)t^{2/3}n^{2}\geq(1+\Omega(\epsilon))t^{2/3}n^{2},$
where the last inequality follows by choosing a sufficiently small
$\delta=\Omega(\epsilon)$. But Corollary 31 implies that
$\displaystyle\Pr\left(\sum_{i\in
S}\lambda_{i}^{2}(\tilde{A})\geq(1+\Omega(\epsilon))t^{2/3}n^{2}\right)$
$\displaystyle\leq\exp\left(-(1+\Omega(\epsilon))(1-o(1))\frac{t^{2/3}n^{2}L}{2}\right)$
$\displaystyle=o(\Pr(\tau(A)\leq p^{3}-t)),$
where the final bound follows from the lower bound of Theorem 1. ∎
To complete the proof of Theorem 1, we need to pass from the eigenvalues of
$\tilde{A}$ to the eigenvalues of $A$; recall that
$A=\tilde{A}+p\mathbf{1}-pI$. Since $p\mathbf{1}\geq 0$, we have
(126) $\lambda_{n-1}(A)\geq\lambda_{n-1}(\tilde{A})-p,$
and so $\lambda_{n-1}^{3}(\tilde{A})\geq-o(tn^{3})$ implies the same for $\lambda_{n-1}^{3}(A)$. For $\lambda_{n}$, we have
$\lambda_{n}(A)\leq\lambda_{n}(\tilde{A}+p\mathbf{1})$ and so it remains to
take care of the $p\mathbf{1}$ term.
Let $v$ be an eigenvector with eigenvalue $\lambda_{n}(\tilde{A})$ satisfying
$|v|^{2}=1$. Then $|\langle
v,\tilde{A}\mathbf{1}\rangle|=|\lambda_{n}(\tilde{A})||\langle
v,\mathbf{1}\rangle|$; conditioned on $\tau(A)\leq p^{3}-t$, this is
$(1+o(1))tn^{3}|\langle v,\mathbf{1}\rangle|$ with high probability. On the
other hand
$\tilde{A}\mathbf{1}=(A-p\mathbf{1}+pI)\mathbf{1}=d-p(n-1)\mathbf{1}$, where $d$ is the vector whose entries are the vertex degrees. By Corollary 33, conditioned on
$\tau(A)\leq p^{3}-t$ we have $|d-pn\mathbf{1}|^{2}=o(tn^{3})$, and since
$|\mathbf{1}|^{2}=n=o(tn^{3})$, we also have
$|\tilde{A}\mathbf{1}|^{2}=|d-p(n-1)\mathbf{1}|^{2}=o(tn^{3})$. Hence,
(127) $(1+o(1))tn^{3}|\langle v,\mathbf{1}\rangle|=|\langle
v,\tilde{A}\mathbf{1}\rangle|\leq|\tilde{A}\mathbf{1}|=o(tn^{3}),$
and it follows that $|\langle v,\mathbf{1}\rangle|=o(1)$. Finally, note that
(128) $v^{T}(\tilde{A}+p\mathbf{1})v=v^{T}\tilde{A}v+p|\langle
v,\mathbf{1}\rangle|^{2}=\lambda_{n}(\tilde{A})+o(1),$
and so by Rayleigh’s criterion it follows that
$\lambda_{n}(\tilde{A}+p\mathbf{1})\leq\lambda_{n}(\tilde{A})+o(1)$. This
completes the proof of Theorem 1.
### 7.6. Theorem 2
Like Theorem 1, Theorem 2 has three elements: an upper bound on the
probability of a moderate deviation, a lower bound, and a bound on the most
negative eigenvalue of the adjacency matrix.
The upper bound is proved exactly as in the proof of Theorem 1. The singular values are controlled by the rate function involving $\inf_{s\in\mathbb{R}}\frac{\Lambda^{*}(s)}{s^{2}}$, which we have already established to be $\frac{\ln\frac{1-p}{p}}{2(1-2p)}$. Upper bounds on singular values then give upper bounds on eigenvalues. The entire argument is independent of whether $p\geq\frac{1}{2}$ or $p<\frac{1}{2}$.
The proof of the lower bound in Theorem 2 is similar to that of the lower
bound in Theorem 1, except that we use the vector
$v=(\frac{1}{\sqrt{n}},\dots,\frac{1}{\sqrt{n}},-\frac{1}{\sqrt{n}},\dots,-\frac{1}{\sqrt{n}})$.
For this $v$, Cramér’s theorem shows that
$\ln\Pr(\langle\tilde{A},vv^{T}\rangle\leq-t^{1/3}n)\geq-\frac{t^{2/3}n^{2}}{2p(1-p)}(1+o(1))$,
and the rest of the proof proceeds as before. (Note that we cannot use the
vector $v$ constructed in the proof of Theorem 10, because Lemma 35 fails for
that $v$: the matrix $\mathbb{E}[\tilde{A}\mid\Omega]$ is not negative semi-
definite.)
For the claim about the eigenvalue, we use Lemma 36: fix $\eta>0$ and $K>0$
and consider the event $\Omega$ on which
$\sum_{\lambda_{i}(\tilde{A})\leq-\sqrt{Kn}}\lambda_{i}^{3}(\tilde{A})\leq-tn^{3}$
but $\lambda_{n}^{3}(\tilde{A})\geq-\frac{1}{1+\eta}tn^{3}$. According to
Lemma 36, on this event we have
$\sum_{\lambda_{i}(\tilde{A})\leq-\sqrt{Kn}}\lambda_{i}^{2}(\tilde{A})\geq(1+\eta)^{1/3}t^{2/3}n^{2}.$
By Corollary 31 (for a sufficiently slowly growing $K=\omega(1)$),
$\ln\Pr(\Omega)\leq-\frac{t^{2/3}n^{2}(1+\eta)^{1/3}\ln\frac{p}{1-p}}{2(2p-1)}+o(t^{2/3}n^{2}),$
which, for sufficiently large $\eta$ (depending on $p$) implies that
$\ln\Pr(\Omega)\leq-(1+\Omega(1))\frac{t^{2/3}n^{2}}{2p(1-p)}.$
It follows from the lower bound in Theorem 2 that $\Pr(\Omega\mid\tau\leq
p^{3}-t)\to 0$. Together with Lemma 30 – which shows that eigenvalues larger
than $-\sqrt{Kn}$ are unlikely to contribute – this implies that
$\lambda_{n}^{3}\leq-\frac{1}{1+\eta}tn^{3}$ with high probability given
$\tau\leq p^{3}-t$. (The main difference here compared to the proof of Theorem
1 is that because we lack matching upper and lower bounds on the log-
probabilities, we cannot take $\eta\approx 0$.)
# RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery
Jingtao Li1, Adnan Siraj Rakin1, Zhezhi He2, Deliang Fan1, Chaitali Chakrabarti1
1School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ 85287
2Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai
###### Abstract
Adversarial attacks on Neural Network weights, such as the progressive bit-
flip attack (PBFA), can cause a catastrophic degradation in accuracy by
flipping a very small number of bits. Furthermore, PBFA can be conducted at
run time on the weights stored in DRAM main memory. In this work, we propose
RADAR, a Run-time adversarial weight Attack Detection and Accuracy Recovery
scheme to protect DNN weights against PBFA. We organize weights that are
interspersed in a layer into groups and employ a checksum-based algorithm on
weights to derive a 2-bit signature for each group. At run time, the 2-bit
signature is computed and compared with the securely stored golden signature
to detect the bit-flip attacks in a group. After successful detection, we zero
out all the weights in a group to mitigate the accuracy drop caused by
malicious bit-flips. The proposed scheme is embedded in the inference
computation stage. For the ResNet-18 ImageNet model, our method can detect 9.6
bit-flips out of 10 on average. For this model, the proposed accuracy recovery
scheme can restore the accuracy from below 1% caused by 10 bit flips to above
69%. The proposed method has extremely low time and storage overhead. System-
level simulation on gem5 shows that RADAR only adds $<$1% to the inference
time, making this scheme highly suitable for run-time attack detection and
mitigation.
###### Index Terms:
Neural networks, weight attack, run-time detection, protection
## I Introduction
Neural networks have been widely adopted in image recognition, natural
language processing, medical diagnosis and autonomous driving tasks. The
security and trustworthiness of neural networks directly affect the safety of
these applications making this study even more important. Neural network
models have been shown to be vulnerable to various types of attacks.
Adversarial input attack, which manipulates the inputs fed to the neural
network model, such as FGSM [1], can cause serious misclassification.
Recently, adversarial weight attack with malicious weight bit-flips, aka. PBFA
[2], on ResNet-18 model was able to degrade ImageNet classification accuracy
to below 0.2% with only 13 bit-flips. Furthermore, [3] showed how weight
attacks can be mounted at run-time to circumvent protection schemes that
perform detection periodically.
There is only a handful of techniques that provide some level of security
against adversarial weight attacks. The passive defense method in [4] applies
regularization to make the weights more resistant to weight attacks. However
it is incapable of detecting whether an attack has occurred or not. Error
correction code (ECC) based schemes proposed in [5, 6] provide protection
against random soft-errors but not against adversarial attacks. Standard data
integrity checking methods such as MD5, CRC can perform detection but with
high overhead. These are generic techniques and do not exploit the
characteristics of the neural network model or the specifics of the attacks
that are launched against the networks.
As a countermeasure to PBFA, we propose RADAR, a Run-time adversarial weight
Attack Detection and Accuracy Recovery scheme. It operates on weights that are
fetched from DRAM to on-chip cache for inference computation. RADAR leverages
the PBFA characteristics to derive a simple checksum based technique that has
excellent error detection and accuracy recovery performance. The weights in a
layer are reorganized into groups for the checksum computation, where each
group has weights that were originally $k$ locations apart, $k>1$. The
checksum is computed on the weights in a group that have been masked using a
secret key and is used to derive a 2-bit signature. At run-time, the 2-bit
signature of a group is compared with the secure signature to detect possible
errors. Once an error is flagged, all weights in that group are replaced with
zeroes. The storage and the time overhead of this method is very small
compared to the RADAR-free inference baseline. Our contributions can be
summarized as follows:
* •
We present RADAR, a low latency and storage overhead run-time scheme that can
provide effective detection and recovery on state-of-the-art adversarial
weight attack on DNN, namely, Progressive Bit-Flip Attack (PBFA).
* •
RADAR computes addition checksum on a group of masked weights that were
originally interspersed to derive a 2-bit signature. Use of interleaved
weights and masking helps achieve a high detection ratio of 96.1% on a 10-bit
PBFA attack on ResNet-18 model.
* •
RADAR employs a simple scheme where all weights in a group are set to zero if
that group has been flagged with an error. For ResNet-18 on ImageNet, the
accuracy drop due to PBFA can be recovered from 0.18% to greater than 60% when
the partition size is 512.
* •
System-level Gem5 simulations show that the time cost of RADAR is $<$1% of the
inference time for ResNet-18. The overhead to store the signature is only 5.6
KB for ResNet-18, making it feasible to be stored securely on-chip.
## II PRELIMINARIES
### II-A DNN Attack, Defense & Detection.
Recent developments of memory fault injection techniques on hardware [7, 8]
have made directly attacking model parameters, such as DNN weights, at run
time feasible. Among them, the row-hammer attack, which causes bit-flips in Dynamic Random-Access Memory (DRAM) by repeatedly activating DRAM rows, is the most popular [7, 9, 10]. Adversarial weight attack [11, 12, 2] corrupts
the neural network weights directly to achieve certain attacking goals [12,
2]. A recently developed adversarial weight attack, known as Progressive Bit-
Flip Attack (PBFA), identifies vulnerable bits based on gradient information
and degrades DNN classification accuracy to random guess level [2].
To mitigate the effect of adversarial weight attacks, defense mechanisms have also been investigated [4]. For instance, [4] uses binarization or a
relaxed version of the binarization technique to handle PBFA attacks. This
method increases the resistance to the PBFA attack and is a good passive
defense method. DNN soft error detection schemes can be used to detect small
perturbations in weights [13]. Error Correction Codes (ECC)-based techniques
[5, 6, 14] have been shown to correct soft errors in neural network models.
However, rowhammer attack can be used to compromise ECC codewords in DRAM,
making these methods not as effective.
## III Threat Model
Fig. 1 describes the threat model in this work. At the software end, the
attacker uses PBFA in [2] to identify the vulnerable bits, and at the hardware
end, the attacker performs fault injection via a DRAM row-hammer attack that flips the vulnerable bits at run time, thus corrupting the stored weights.
We consider DNNs with 8-bit quantized weights as in [2].
Figure 1: Software and hardware aspects of the threat model.
### III-A Hardware Assumptions
Row-hammer attack has been demonstrated to be very effective in corrupting
DRAM contents [7, 9, 10]. The neural network weight parameters are very large in size (MB$\sim$GB) and hence are stored in DRAM. Recent work in [10] has
demonstrated how DRAM weights can be attacked using rowhammer in practice.
We consider all weight attacks are physically implemented by DRAM row-hammer
attack on PBFA identified vulnerable weight bits. Since every bit flip attack
costs time and effort, we assume that the attacker stops the bit flip attacks
after causing a significant accuracy drop. We also assume that the attacker is
unable to attack data stored securely in on-chip SRAM. Additionally we assume
that the attacker cannot corrupt the system kernels (otherwise the attacker
would be able to break the system [15]).
### III-B Software Assumptions
We only consider Progressive Bit-Flip Attack (PBFA) [2] since it is the
strongest adversarial weight attack technique to date. It causes the DNN to
malfunction with the fewest number of bit-flips. We argue performing random
bit-flip is too weak to be considered as an attack. It has already been
demonstrated in [2] that randomly flipping 100 bits merely degrades the
accuracy by less than 1%.
To perform PBFA, the attacker has access to the network architecture and
parameters, e.g., weight, bias, etc. (white box assumption). Such information
can be acquired by acting as a benign user or revealed through side-channels
[16]. To perform BFA, we assume the attacker has a small dataset with roughly
similar distribution as the training data to get accurate gradient
information. Additionally, we assume that the attacker has some knowledge of
the defense mechanism (aka checksum) but does not know of the secret key used
for generating masked weights or the interleaving strategy.
### III-C Characteristics of PBFA.
PBFA is a very powerful attack that can severely degrade the accuracy with
only a few bit flips. Our experiments show that, on average, with 10 bit-
flips, the accuracy of a trained 8-bit ResNet-20 model on CIFAR-10 dataset can
drop from above 90% to 18.01% and the accuracy of an 8-bit ResNet-18 model on
ImageNet can drop from around 70% to 0.18%.
To derive an effective detection scheme for PBFA-based attacks, we first did
an in-depth characterization of the attack. We generated multiple sets of PBFA
bit profiles and did a statistical analysis. Specifically, we performed 100
rounds of PBFA with 10 bit-flips per round on ResNet-20 model and ResNet-18
model, saved the profiles of the vulnerable bits in each round, and computed
the statistics.
TABLE I: Number of PBFA Attacks in Different Bit Positions over 100 rounds

| | MSB (0 $\rightarrow$ 1) | MSB (1 $\rightarrow$ 0) | others |
|---|---|---|---|
| ResNet-20 | 334 | 666 | 0 |
| ResNet-18 | 16 | 897 | 87 |
Observation 1. The PBFA attack exploits the non-uniformity in the importance
of some bits over others in a quantized representation. PBFA always chooses to
flip the Most Significant Bit (MSB) in a weight. Table I shows that the MSBs
are targeted (334+666)/1000 times for ResNet-20 and (16+897)/1000 times for
ResNet-18. Thus a low overhead detection scheme should target detecting bit-
flips in the MSB position.
Observation 2. The vulnerable bits identified by PBFA have a scattered spatial
distribution. In this experiment, we partition the weights into groups with
$G$ weights in a group, and count the number of bits picked by PBFA in each
group. Fig. 2 shows that the proportion of multiple vulnerable bits inside one
group is very low when $G$ is small (relative to the model size), and the
proportion grow in a super-linear manner for larger group sizes. This
indicates that vulnerable bits are scattered across groups rather than being
clustered in a group.
Figure 2: Proportion of occurrences of multiple vulnerable bits in the same
group.
Observation 3. The bit-flip locations in PBFA are more likely to occur on
weights that have very small values. As shown in Table II, most of the bit-
flips happen on weights that have values in the range (-32, 32). Thus, after
the bit-flip, the weight value will be in either (96, 127) or (-128, -96)
range. We believe that the large weight change is the main cause of severe
accuracy drop in PBFA attacks.
TABLE II: Frequency of targeted weights in different ranges

| Range | (-128, -32) | (-32, 0) | (0, 32) | (32, 127) |
|---|---|---|---|---|
| ResNet-20 | 85 | 595 | 249 | 71 |
| ResNet-18 | 16 | 860 | 76 | 27 |
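To make the arithmetic behind Observation 3 concrete, the following minimal Python sketch (ours; `flip_msb_int8` is a hypothetical helper, not from the paper) shows how flipping the MSB of an 8-bit two's-complement weight shifts a small value by $\pm 128$ into a large-magnitude one:

```python
def flip_msb_int8(w: int) -> int:
    """Flip bit 7 of an 8-bit two's-complement weight (hypothetical helper)."""
    byte = (w & 0xFF) ^ 0x80                     # toggle the MSB in the raw byte
    return byte - 256 if byte >= 128 else byte   # reinterpret as signed int8

# A small weight becomes a large-magnitude one, as in Observation 3:
print(flip_msb_int8(5))    # -> -123, landing in the (-128, -96) range
print(flip_msb_int8(-20))  # -> 108, landing in the (96, 127) range
```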
## IV RADAR Scheme: Detection
We assume that the weight values are loaded from DRAM main memory into on-chip
caches and then processed. A well-designed computing scheme maximizes the
weight reuse so that each weight is accessed only once. Since the main memory
is vulnerable to rowhammer attack, the weights stored there could be
compromised and so detection has to be performed on all weights that are
loaded into cache prior to processing.
In order to embed detection in the inference process, the proposed method has
to have the following properties:
* •
Low timing overhead. The time to perform detection adds to the total inference
computation time and thus has to be as small as possible.
* •
Low storage overhead. The golden signature that is required for detection has
to be small enough to be stored in the secure on-chip memory.
### IV-A Checksum-based Signature Calculation
Popular detection schemes based on CRC or SEC-DED have high storage overhead
(Section VII.B) and are not applicable. We adopt an addition-based checksum
scheme [17] for its simplicity, and add interleaving of weights and checksum
on masked weights to improve the attack resistance. Specifically, we compute
$M$, the sum of the $G$ weights in a group, and derive a two-bit signature $S_{i,j}=\\{S_{A},S_{B}\\}$ from $M$ for the $i$-th layer, $j$-th group in the
following way:
$S_{A}=\lfloor M/256\rfloor\%2;\qquad S_{B}=\lfloor M/128\rfloor\%2$ (1)
In equation 1, the $\lfloor\cdot\rfloor$ denotes the floor function and $\%$
denotes the remainder function. Note that the binarization step can be simply
implemented as bit truncation in hardware. Similar to the parity code, $S_{B}$
can detect any odd number of bit-flips on MSBs of a group of $G$ weights. From
Fig. 2 we see that most groups have single bit-flips which can be detected by
the parity bit $S_{B}$ 100% of the time. Also, when the group size is large,
multiple bits in a group could be flipped. Since $S_{B}$ is blind to any even
number of bit-flips, we include a second bit, $S_{A}$, which can detect double bit-flips only if they occur in the same direction, i.e., the bit-flips are
of the form (0$\rightarrow$1, 0$\rightarrow$1) or (1$\rightarrow$0,
1$\rightarrow$0). However, flips of the form (0$\rightarrow$1,
1$\rightarrow$0) cannot be detected since they do not change the value of $M$.
Next we show how this weakness can be addressed by computing the checksum on
weights that are masked and interleaved. We argue that it is less efficient to
incorporate more bits into the signature, such as one more bit to protect the
MSB-1 position by computing $S_{C}=\lfloor M/64\rfloor$. This is because
attacking MSB-1 would require the attacker to flip more bits to achieve the
same attacking performance as discussed in section VIII.
### IV-B Improving attack detection capability of checksum
We adopt the following two features to improve the attack detection capability
of the simple addition checksum approach.
1\. Masking Weights in Checksum Computation: We argue that simply performing an addition checksum to derive the signature leaves the system at risk. We use a randomly generated secret key as a mask on a group of weights to determine whether or not to take the two’s complement of each weight during the summation (lines 4-6 of Algorithm 1).
The secret key is $N_{k}$ bits long and changes from layer to layer.
Increasing $N_{k}$ can reduce the probability of the sequence of operations
being guessed correctly by the attacker but comes with a high implementation
cost. We set $N_{k}=16$, and the $2^{16}$ different combinations provide for
sufficient security.
2\. Interleaving Weights for Checksum Computation. Given the double bit error
blindness of addition checksum, the attacker can intentionally target multiple
bits in the same group to attack in order to bypass the detection. So we
compute the checksum on a group of weights, where the weights in a group were
originally $m$ locations apart, $m>1$. This process is referred to as
interleaving, a well known technique that is used to handle detection of burst
errors in communication systems.
The basic interleaving process is shown in Fig. 3. When the weights are split into $N_{W}$ groups of $N$ weights each, with the members of a group originally $N_{W}$ locations apart, the $k$-th group consists of weights $k+N_{W}\times l$, where $0\leq l<N$ and $0\leq k<N_{W}$. So for $N=16$, $N_{W}=8$, group $0$ consists of weights in locations $0,8,16,\ldots,120$ as shown in the figure. We choose $N_{W}=G$ and add an additional offset of $3$ in all our experiments.
The interleaving distance can be kept secret and stored in the secure on-chip SRAM. It can be different from one layer to the next, making it even harder for the attacker to know which two bits are in the same group. We will show that the interleaving strategy not only addresses the security concern, but also improves the detection of multiple bit-flips.
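As a quick check of the index arithmetic described above (a sketch of ours, omitting the secret offset of 3 used in the experiments):

```python
# Basic interleaving of Fig. 3: N = 16 weights per group, N_W = 8 apart.
N, N_W = 16, 8
groups = [[k + N_W * l for l in range(N)] for k in range(N_W)]
print(groups[0])  # [0, 8, 16, ..., 112, 120], matching the example in the text
```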
Figure 3: Basic interleaving strategy in checksum calculation. The checksum is calculated on a group of interleaved weights.

Algorithm 1: Pseudo-code of signature calculation
1: Input: 8-bit fixed-point weight tensor $B_{i}$ in layer $i$. $B_{i,j}$ denotes the $j$-th group of weights in layer $i$, $K_{i}$ is the secret key for layer $i$, and $N_{W}$ is the parameter for interleaving. $N$ denotes the total number of groups of size $G$ in layer $i$.
2: Output: signature $S_{i,j}$
3:
4:function SignCal($B_{i},~{}G$)
5: $B_{i}^{*}$ $\leftarrow$ Interleave ($B_{i}$, $N_{W}$) $\triangleright$
Interleaving
6: for $j$ = 0 : $N$ do
7: $B_{i,j}^{*}$ $\leftarrow$ $B_{i}^{*}[j\times G:(j+1)\times G]$ $\triangleright$ Grouping
8: for $t$ = 0 : $G$ do
9: sign $\leftarrow$ $K_{i}.next()$ $\triangleright$ Secret Key
10: if sign $==$ 0 then:
11: $B_{i,j}^{*}[t]$ $\leftarrow$ $-B_{i,j}^{*}[t]$ $\triangleright$ Two’s
Complement
12: end if
13: end for
14: $M$ $\leftarrow$ $\sum_{t=0}^{G-1}B_{i,j}^{*}[t]$ $\triangleright$ Summation
15: $S_{i,j}$ = Binarize(M, 2) $\triangleright$ Signature
16: end for
17: return $S_{i}$
18:end function
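A compact Python rendering of Algorithm 1 (our sketch; the seeded random generator stands in for the 16-bit per-layer key $K_{i}$, `signature_bits` is the helper sketched in Section IV-A, and we assume the layer size is divisible by $N_{W}$ and $G$, which the paper handles by padding):

```python
import random

def sign_cal(weights, G, N_W, key_seed):
    """Sketch of Algorithm 1 (SignCal) for one layer of 8-bit weights."""
    n = len(weights)
    # Interleaving: group k gathers weights originally N_W positions apart.
    order = [k + N_W * l for k in range(N_W) for l in range(n // N_W)]
    interleaved = [weights[idx] for idx in order]

    rng = random.Random(key_seed)   # deterministic stand-in for the key K_i
    signatures = []
    for j in range(n // G):
        group = interleaved[j * G:(j + 1) * G]
        # Masking: each key bit decides whether a weight enters negated
        # (the two's-complement step of Algorithm 1).
        masked = [-w if rng.randint(0, 1) == 0 else w for w in group]
        M = sum(masked)                           # addition checksum
        signatures.append(signature_bits(M))      # 2-bit signature, Eq. (1)
    return signatures
```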
### IV-C Overall Algorithm
The overall detection scheme is described in Algorithm 1. The weights in a
layer, $B_{i}$, are reorganized into groups with weights that are originally
interspersed. The secret key is applied on the interleaved weights to
determine the sign of each weight (referred to as masking) in the checksum
computation. The 2-bit signature for each group is the binarized summation of
the scaled weights. The signatures $S_{i,j}$ of each group $B_{i,j}$ in
$B_{i}$ are calculated and stored as golden signature in on-chip memory.
During run-time, a fresh set of signatures is computed for every new chunk of
data that is fetched from the cache. The detection is performed by comparing
the computed signature with the golden signature.
## V RADAR Scheme: Recovery
If an attack does occur, a successful detection can help halt the system to
stop making decisions, wait for downloading a clean copy of weights or let the
secondary system take over. This may result in significant increase in timing
overhead so next we describe a very simple recovery scheme that can recover
most of the model accuracy instantly.
In the PBFA analysis described in Section III.C, we see that PBFA attacks the
MSB of a small weight and converts it to a large value. It is this large
change in weight value that causes a direct change in the ReLU activation and
thus has a dramatic effect on the output. So we locate the groups where the
bit-flips have occurred using the proposed fine-grain detection scheme and
then set all the weights in that group to 0. A de-interleaving step is
required when interleaving is applied prior to checksum calculation during the
weight update so that the original weight organization is not affected. Since
most of the weights in a group have small values and are clustered around 0,
setting all the weights in the group to 0 works well especially if the
partition size is small. In Section VI we demonstrate that this scheme helps
regain most of the accuracy lost due to PBFA for ResNet-18 and ResNet-20
models.
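Putting detection and recovery together, a minimal sketch (ours, reusing the hypothetical `sign_cal` from Section IV; not the authors' implementation) of the run-time path looks as follows:

```python
def detect_and_recover(weights, golden, G, N_W, key_seed):
    """Recompute signatures of fetched weights and zero out every group whose
    signature disagrees with the securely stored golden signature."""
    fresh = sign_cal(weights, G, N_W, key_seed)
    n = len(weights)
    order = [k + N_W * l for k in range(N_W) for l in range(n // N_W)]
    for j, (s_new, s_gold) in enumerate(zip(fresh, golden)):
        if s_new != s_gold:                        # bit-flip detected in group j
            # De-interleave: map the group back to its original weight slots.
            for idx in order[j * G:(j + 1) * G]:
                weights[idx] = 0
    return weights
```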
## VI Experiments
### VI-A Settings
We demonstrate the effectiveness of our detection and protection scheme for
image classification using two popular datasets: CIFAR-10 [18] and ImageNet
[19]. We use an 8-bit quantized ResNet-20 for the CIFAR-10 dataset and an 8-bit quantized ResNet-18 for ImageNet [20]. The ResNet-20 model is trained from scratch
for 200 epochs using Adam, with a learning rate of 0.01 and decay of 0.0001.
We use a pre-trained model for ResNet-18 and fine-tune it for 20 epochs using
SGD.
### VI-B Detection Performance
We set the number of bit-flips to be equal to 10 per attack since this is
sufficient to cause a significant performance degradation and calculate the
number of bit-flips that were detected. We perform each attack 100 times. The
detection performance with and without the interleaving strategy is shown in
Fig. 4. For the ResNet-20 model (shown in blue), the detection performance
without interleaving approaches 10/10 when $G$ is small, and drops to around
7/10 when $G=64$. This is consistent with the observation in Fig. 2 where we
show that when $G$ is large, the proportion of multiple bits in the same group
sharply increases thereby harming the detection performance. With
interleaving, the detection performance for large group size is better because
of the reduction in the number of multiple bit-flip cases. For the ResNet-18
model (shown in red), we observe that interleaving results in a very high
9.5/10 detection ratio even when the group size is large.
Figure 4: Average number of detected bit-flips (out of 10 bit flip attacks)
using PBFA. The group size G is swept from 4 to 64 for ResNet-20 model and 64
to 1024 for ResNet-18.
We also investigate the probability of failing to detect an attack on the
MSBs. We consider a layer with 512 weights and perform $10^{6}$ rounds of bit-
flips with 10 random bit-flips on MSB position per round. We find that for
group size $G=32$, the detection miss rate is $10^{-5}$; for group size
$G=16$, the detection miss rate further reduces to $10^{-6}$. Since the number of weights in a real network is much larger than in this toy example, we conclude that the proposed detection scheme poses an even smaller risk of missed detections for full-fledged networks.
### VI-C Recovery Performance
To evaluate recovery performance, we consider numbers of bit-flips ($N_{BF}$) of 5
and 10. For ResNet-20 and ResNet-18 we compare choices of different group
sizes with and without interleaving. Recall that in the proposed recovery
technique, the weights in a group are set to zero if a bit-flip is
successfully detected. So in this experiment, we check the test accuracy of
the revised model obtained by setting the weights of a group to 0 if that
group detected bit-flips.
TABLE III: Accuracy Recovery of the RADAR scheme. Test accuracy (%); recovered accuracies are reported as w.o. interleave / with interleave.

| Model | $N_{BF}$ | Baseline | G = 8 | G = 16 | G = 32 |
|---|---|---|---|---|---|
| ResNet-20 | 0 | 90.15 | | | |
| ResNet-20 | 5 | 40.72 | 82.66/85.64 | 76.39/83.72 | 68.06/73.35 |
| ResNet-20 | 10 | 18.01 | 80.86/81.07 | 70.53/77.96 | 61.62/61.32 |

| Model | $N_{BF}$ | Baseline | G = 128 | G = 256 | G = 512 |
|---|---|---|---|---|---|
| ResNet-18 | 0 | 69.79 | | | |
| ResNet-18 | 5 | 5.66 | 66.60/67.51 | 65.12/66.15 | 62.89/62.87 |
| ResNet-18 | 10 | 0.18 | 62.69/66.33 | 59.95/64.96 | 57.46/60.69 |
The results for ResNet-20 and ResNet-18 are shown in Table III. For ResNet-20,
the accuracy after the attack drops to as low as 18.01% with $N_{BF}=10$. After recovery, the accuracy can climb up to 81% when $G=8$ and 62% when $G=32$. Similarly for ResNet-18, the accuracy drops to 0.18% with $N_{BF}=10$ and climbs
to 66% when $G=128$ and 61% when $G=512$. We see that the accuracy recovery is
consistently better when the interleaving strategy is used. Fig. 5 further
illustrates the test accuracies for ResNet-18. While there is a mild
performance drop when $G$ increases, the accuracy recovery is consistently
quite high.
Figure 5: Accuracy recovery performance on ResNet-18 model (ImageNet).
## VII Discussion
### VII-A Tradeoff between recovery & storage
We use gem5 [21] to evaluate the timing overhead of RADAR. We use an 8-core processor built in gem5. Each core is instantiated as an Arm Cortex-M4F core
and runs at 1GHz frequency. The system is equipped with a two-level cache:
L1-32KB and L2-64KB. The layer information and weights are obtained from the
pre-trained ResNet-20 and ResNet-18 models. Our detection and recovery
procedure, RADAR, is embedded in the computations of every layer. For RADAR
scheme with group size $G$, we use padding if the number of weights in a layer
is not divisible by $G$.
To choose a good group size, we study the tradeoffs between recovery
performance and storage overhead. Fig. 6 plots recovery accuracy as a function
of storage overhead for ResNet-18 and ResNet-20 models. For ResNet-20, the
best accuracy-storage tradeoff occurs at $G=8$. The accuracy under 10 bit-
flips is still over 80% and the storage overhead for the signature bits is 8.2
KB, which can be easily stored on-chip. For ResNet-18, $G=512$ seems to be the
best choice. The accuracy can be kept at over 60% for 10 bit-flip attack and
the storage overhead is just 5.6 KB.
Figure 6: Test accuracy after recovery vs. storage overhead of the proposed RADAR scheme under PBFA with $N_{BF}=10$ on ResNet-20 and ResNet-18 models.

TABLE IV: Time Overhead of RADAR

| | Original | RADAR | Overhead |
|---|---|---|---|
| ResNet-20 | 66.3ms | 68.7ms (69.8ms) | 3.56% (5.27%) |
| ResNet-18 | 3.268s | 3.287s (3.328s) | 0.58% (1.83%) |
The time overhead for RADAR on ResNet-20 with $G=8$ and ResNet-18 with $G=512$
for batch size of 1 is shown in Table IV. The time overhead with interleaving
is shown in brackets. The overhead is quite small – 5.27% for ResNet-20 and
1.83% for ResNet-18 with interleaving. The time overhead can be further
reduced in a multi-batch inference setting, where each chunk of weights is
loaded once and used many times.
### VII-B Comparison with Related work
RADAR has been designed to address the strongest adversarial weight attack to
date, i.e., PBFA, via a fast and light weight checksum algorithm that has a
very low storage overhead. Other options to perform single and double bit-flip
detection for general data integrity checking include Cyclic Redundancy Check
(CRC) [22] and Hamming Code [23] based Double-bit Error Detection. To provide
for recovery, both codes require significantly higher storage overhead. For
instance Hamming code requires 7 bits for 64 bits of data (corresponding to
group size of 8) and 13 bits for 4096 bits (corresponding to group size of
512). Similarly, to achieve a HD=3, CRC needs 7 bits and 13 bits for group
size of 8 and 512 respectively.
We compare the performance of RADAR with the competitive CRC schemes. Table V
shows the total inference time, the overhead time for detection ($\Delta$) and
the storage overhead for ResNet-20 when G=8 and ResNet-18 when G=512. For
ResNet-18, when G=512, CRC-13 has a time overhead if 0.317s compared to
0.060s. The storage overhead of CRC-13 is 36.4KB, compared to 5.6KB which is
required by RADAR. If only the MSBs were to be protected, we would require
CRC-10 which has a time overhead of 0.315s and storage overhead of 28.0KB,
which is still significantly larger than RADAR.
TABLE V: Overhead comparison with CRC techniques

| Scheme | Time/$\Delta$ (ResNet-20, G=8) | Storage (ResNet-20, G=8) | Time/$\Delta$ (ResNet-18, G=512) | Storage (ResNet-18, G=512) |
|---|---|---|---|---|
| CRC | 84.2ms/17.9ms | 28.7KB | 3.585s/0.317s | 36.4KB |
| RADAR | 69.8ms/3.5ms | 8.2KB | 3.328s/0.060s | 5.6KB |
## VIII Knowledgeable Attacker
Next we assume that the attacker knows that a checksum-based scheme has been
used to detect attacks on MSBs. However, the attacker does not know the secret
key that is used for masking the weights and/or the interleaving strategy.
Flip multiple bits in a group. In addition to attacking the 10 bits identified
by PBFA, the attacker could add in another 10 bits to flip (20 bit-flips in
total). These bit flips would be of the form (0 $\rightarrow$ 1 and 1
$\rightarrow$ 0) to evade detection. As shown in Fig. 7, the detection
performance without interleaving drops greatly (lower blue line) causing the
accuracy recovery to be low as well. By applying interleaving, the detection
ratio can be kept at a level similar to the traditional PBFA case. Also, the recovered accuracy is much higher when the group size is small.
Figure 7: Detection and accuracy recovery performance on ResNet-20 model
against knowledgeable attackers.
Avoid flipping MSB. The proposed checksum-based detection scheme with a 2-bit
signature cannot detect bit-flips on less significant bits as effectively as
it can on MSB. However, many more bits are required to launch a successful
attack if only MSB-1 or lower significant bits are allowed to be flipped. For
instance, the attacker needs around 30 bit-flips on MSB-1 bits (compared to 10
bit-flips on MSB) for comparable accuracy degradation on the ResNet-20 model.
We address attacks on MSB-1 bits by utilizing a 3-bit signature computed by
binarizing $M$ to 3 bits. While this method has a higher storage overhead
(3-bit signature vs 2-bit signature), it can successfully detect errors due to
attacks on MSB-1 bits.
## IX Conclusion
In this work, we propose RADAR, a low overhead run-time adversarial weight
attack detection and accuracy recovery scheme for PBFA. We show that the RADAR
scheme has very low timing overhead and can be embedded in the inference
process. A thorough analysis of the PBFA attack characteristics helped derive
a simple error detection and accuracy recovery scheme. A 2-bit signature is
computed using addition checksum of a group of interspersed weights and
compared with the golden signature to determine whether a PBFA attack had been
launched on that group or not. We show that RADAR has superior detection
performance; it can consistently detect over 9.5 bit-flips out of 10 injected
bit-flips. A simple but effective accuracy recovery scheme is built on top of
this detection scheme. It can recover the accuracy from 0.18% back to above
60% for a ResNet-18 model on ImageNet. The gem5 simulation shows that for
ResNet-18 model, the RADAR scheme only increases the inference time by $<$2%,
making this scheme highly suitable for run time detection and protection.
## References
* [1] Ian J Goodfellow et al. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
* [2] Adnan Siraj Rakin et al. Bit-flip attack: Crushing neural network with progressive bit search. In Proceedings of the IEEE ICCV, pages 1211–1220, 2019.
* [3] Fan Yao et al. Deephammer: Depleting the intelligence of deep neural networks through targeted chain of bit flips. arXiv preprint arXiv:2003.13746, 2020.
* [4] Zhezhi He et al. Defending and harnessing the bit-flip based adversarial weight attack. IEEE International Symposium on Hardware Oriented Security and Trust (HOST), 2020.
* [5] Minghai Qin et al. Robustness of neural networks against storage media errors. arXiv preprint arXiv:1709.06173, 2017.
* [6] Hui Guan et al. In-place zero-space memory protection for cnn. In NIPS, pages 5735–5744, 2019.
* [7] Yoongu Kim et al. Flipping bits in memory without accessing them: An experimental study of dram disturbance errors. In ACM SIGARCH Computer Architecture News, volume 42. IEEE Press, 2014.
* [8] Michel Agoyan et al. How to flip a bit? In 2010 IEEE 16th International On-Line Testing Symposium, pages 235–239. IEEE, 2010.
* [9] Daniel Gruss et al. Another flip in the wall of rowhammer defenses. In 2018 IEEE Symposium on Security and Privacy (SP). IEEE, 2018.
* [10] Fan Yao et al. Deephammer: Depleting the intelligence of deep neural networks through targeted chain of bit flips. In 29th USENIX Security Symposium (USENIX Security 20). USENIX Association, August 2020.
* [11] Sanghyun Hong et al. Terminal brain damage: Exposing the graceless degradation in deep neural networks under hardware fault attacks. arXiv preprint arXiv:1906.01017, 2019.
* [12] Yannan Liu et al. Fault injection attack on deep neural network. In 2017 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 131–138, 2017.
* [13] Zecheng He et al. Sensitive-sample fingerprinting of deep neural networks. In Proceedings of the IEEE CVPR, pages 4729–4737, 2019.
* [14] Y. Deguchi et al. Error-reduction controller techniques of taox-based reram for deep neural networks to extend data-retention lifetime by over 1700x. In 2018 IEEE International Memory Workshop (IMW), 2018.
* [15] Mark Seaborn and Thomas Dullien. Exploiting the dram rowhammer bug to gain kernel privileges. Project Zero, 2015.
* [16] Honggang Yu et al. Deepem: Deep neural networks model recovery through em side-channel information leakage. 2020.
* [17] Theresa C Maxino et al. The effectiveness of checksums for embedded control networks. IEEE Transactions on dependable and secure computing, 6(1):59–72, 2009.
* [18] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The cifar-10 dataset. Online: http://www.cs.toronto.edu/kriz/cifar.html, 2014.
* [19] Jia Deng et al. Imagenet: A large-scale hierarchical image database. In CVPR, pages 248–255. IEEE, 2009.
* [20] Kaiming He et al. Deep residual learning for image recognition. In Proceedings of the IEEE CVPR, pages 770–778, 2016.
* [21] Nathan Binkert et al. The gem5 simulator. ACM SIGARCH computer architecture news, 39(2):1–7, 2011.
* [22] Philip Koopman and Tridib Chakravarty. Cyclic redundancy code (crc) polynomial selection for embedded networks. In International Conference on Dependable Systems and Networks, 2004, pages 145–154. IEEE, 2004.
* [23] Richard W Hamming. Error detecting and error correcting codes. The Bell system technical journal, 29(2):147–160, 1950.
# Autocart - spatially-aware regression trees for ecological and spatial
modeling
Ethan Ancell, Brennan Bean
(Utah State University)
## 1 Abstract
Many ecological and spatial processes are complex in nature and are not
accurately modeled by linear models. Regression trees promise to handle the
high-order interactions that are present in ecological and spatial datasets,
but fail to produce physically realistic characterizations of the underlying
landscape. The “autocart” (autocorrelated regression trees) R package extends
the functionality of previously proposed spatial regression tree methods
through a spatially aware splitting function and novel adaptive inverse
distance weighting method in each terminal node. The efficacy of these
autocart models, including an autocart extension of random forest, is
demonstrated on multiple datasets. This highlights the ability of autocart to
model complex interactions between spatial variables while still providing
physically realistic representations of the landscape.
## 2 Introduction
Accurate characterizations of ecological variables have important implications
for researchers and stakeholders in agriculture, watershed sciences, natural
resources, and environmental sciences [1]. Periodic land surveys to get
accurate measurements for ecological variables are expensive. For example, in
2018 the United States Department of Agriculture spent over 70 million dollars on soil surveys [2].
Many of these ecological variables are highly interactive. As an example, soil
moisture is affected by precipitation, soil composition, temperature,
elevation, and the surrounding ecosystem, yet the individual relationships
between each variable and soil moisture cannot be considered in isolation. The
complexity of these high-order interactions makes it extremely difficult to
accurately predict with traditional spatial interpolation models [3]. This is
particularly true in Utah, where sharp changes in elevation create drastic
changes in climate over short distances.
Traditional spatial methods such as kriging [4] create smooth maps of soil
properties, but such approaches fail to adequately handle high-order
interactions among explanatory variables. These shortcomings emphasize the need to model interactions among the soil variables. The
need for non-linear predictions of soil properties is well-documented [5].
Machine-learning techniques promise to remedy under-performing linear models,
due to their flexibility in characterizing complex and high-order interactions
[6].
There is substantial existing research on applying machine-learning algorithms to spatial ecological data [7, 8, 9, 10, 11, 12]. Soil property
mapping that utilizes machine-learning has also been extensively studied [13,
5, 14]. One particularly promising method is regression trees [15], which
model high-order interactions in a way that is easy to interpret without
requiring a massive amount of data like other machine-learning approaches.
Unfortunately for spatial data, traditional tree-based algorithms such as
regression trees have no way of accounting for the relationship between
observations that is explained by their spatial distribution. Coordinate
information such as longitude and latitude can be naively included as a
predictor in a tree-based model, but this leads to physically unrealistic
predictions, often with sharp and discrete jumps in predictions across the
geographical region. An example of these visual artifacts can be seen in
Figure 2, which shows an attempt to use the random forests algorithm to model
soil moisture in Utah. This figure shows a sharp, discrete jump in the
predicted soil moisture at approximately 41.5 degrees latitude.
Figure 1: Prediction map of soil moisture from a default implementation of
Random Forests in Utah. Explicitly including longitude/latitude as predictors
in the tree-based method yields a clear jump in predictions moving from north
to south that is not physically realistic.
The visual artifacts in Figure 1 are a symptom of the overfitting that is
present in machine-learning methods when coordinates are included as explicit
predictors [16]. Entirely omitting coordinate information from the prediction
process may remove the visual artifacts, but ignoring the spatial component
can compromise accuracy. Coordinate information is especially useful when
analyzing data exhibiting spatial autocorrelation, which occurs when
observations are related to each other as a function of distance [17]. The
discussion and definitions surrounding the concept of spatial autocorrelation
can be argued to directly follow from Tobler’s first law of geography, which
states “everything is related to everything else, but near things are more related than distant things” [18]. Properly accounting for spatial
autocorrelation in the modeling process is a powerful way to improve
predictions in data that exhibit this property [19].
Another problem that exists when analyzing a spatial dataset that covers a
large region is the inability to make the assumption that the distribution of
the variable of interest remains consistent across the entire space [20, 21].
Regression trees as a means to decompose a global spatial process into smaller
local regions have been studied, including the effort by [21] which discusses
the use of hierarchical clustering trees for this purpose.
This paper seeks to extend a machine-learning algorithm to handle coordinate
information in an appropriate way, avoiding the problems of overfitting and
visual artifacts while fully economizing on the predictive power that resides
in coordinate information. Initial attempts to create models of this type
include regression tree extensions by [22, 23], and a random forest extension
by [24]. In the ensuing sections, we propose a tree-based algorithm called
“autocart” (autocorrelative regression trees) intended to decompose a global
spatial process into local regions in a hierarchical and automatic way,
building upon the work proposed by [22] and [23]. The terminal nodes of the
tree can be used for the simulation of a local spatial process using inverse-
distance weighting interpolation. The result is a predictive process that
harnesses the predictive power of both interpolation and regression trees.
## 3 Existing work
### 3.1 Classification and regression trees
In the traditional regression tree algorithm, we create partition rules on a
dataset to predict some response variable according to a set of splits on the
predictor variables [15]. The regression tree algorithm falls under the
paradigm of supervised learning, where we use labeled training data to form
rules for the prediction of new unlabeled observations.
Formally, we predict a class or response $Y$ from predictor variables
$X_{1},X_{2},\dots,X_{p}$ by growing a binary tree. We form a prediction with
the grown tree by applying a test on one of the predictor variables at each
internal node of the tree, and depending on the outcome of the test, we move
to either the right or left child of the internal node and proceed to apply
the next test to one of our predictor variables. The final prediction for an
observation with predictors $X_{1},X_{2},\dots,X_{p}$ is made upon arriving at
a terminal node of the binary tree, using the average of the response variable
of the training observations that were a part of the terminal node during
training.
Figure 2: An example of a regression tree. The response variable for this tree
is the percentage of water by volume found in a particular soil sample.
To make a prediction, a set of predictors $\\{X_{1},X_{2},\dots,X_{p}\\}$ are
passed into the tree. As an example, if we had predictors $\text{slope}=0.54$
and $\text{silt}=32$, then we would make a prediction by starting at the top
node of the tree, going left because $\text{slope}=0.54<0.62$, and then going
right because $\text{silt}<42$. We would arrive at the terminal node, and then
make the prediction $y=6.7$. The 13% in the terminal node indicates that 13%
of the training data for the building of the tree resides in that node.
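The walk just described can be phrased as nested comparisons. The sketch below (ours; it covers only the branch spelled out in the text) reproduces the example prediction:

```python
def predict_soil_moisture(slope: float, silt: float) -> float:
    """Walk of the example tree in Figure 2; only the described branch shown."""
    if slope < 0.62:          # root test: go left
        if silt < 42:         # second test: go right in the figure
            return 6.7        # terminal node holding 13% of the training data
    raise NotImplementedError("remaining branches are not shown in the excerpt")

print(predict_soil_moisture(slope=0.54, silt=32))  # -> 6.7
```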
While growing the tree, the algorithm searches for the decision rule $X_{p}<x$
out of all possible decision rules, such that predictive accuracy is maximized
on the training dataset. This partitioning is done recursively, such that each
two child nodes that are created as part of a split form the basis for the
next splitting rule. To maximize predictive accuracy on the training data at
each step of the algorithm, the decision rule splits the data into the two
halves that minimize the residual sum of squares (RSS) in the newly formed
children nodes.
$RSS=\sum_{i}(y_{i}-\bar{y})^{2}$
where $y_{i}$ are the observations of the labeled response value at this node,
and $\bar{y}=\frac{1}{n}\sum_{i}{y_{i}}$ is the average of the labeled
response variables within the node.
The minimization of $RSS$ described above can be carried out most efficiently by maximizing the variance between the nodes. This variance is calculated as
$SS_{B}=n_{l}(\bar{y}_{l}-\bar{y})^{2}+n_{r}(\bar{y}_{r}-\bar{y})^{2}$
where $n_{l}$ and $n_{r}$ are the sample sizes in the “left” and “right”
partitions respectively, $\bar{y}_{l}$ and $\bar{y}_{r}$ are the average
values of the response in the left and right children, and $\bar{y}$ is the
average of the response over all observations in the parent node where the
split is being made.
$SS_{B}$ is compared to the total variance of the node if no split had been
made:
$SS_{T}=\sum_{i}(y_{i}-\bar{y})^{2}$
The tree makes splits by maximizing the ratio between $SS_{B}$ and $SS_{T}$.
The ratio of $SS_{B}$ and $SS_{T}$ is encapsulated in the so-called “objective
function”, the measure of goodness or utility of each potential split.
$g_{rss}=SS_{B}/SS_{T}$ (1)
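As a brief illustration, the split objective of equation 1 can be computed in
a few lines of base R. The sketch below is illustrative rather than the
package implementation; the names `y` (the response at the node) and `left` (a
logical mask for a candidate split) are assumptions:

```r
# Goodness of a candidate split as g_rss = SS_B / SS_T (equation 1).
# y:    numeric response values at the current node
# left: logical vector marking observations sent to the left child
g_rss <- function(y, left) {
  y_bar <- mean(y)
  ss_b <- sum(left) * (mean(y[left]) - y_bar)^2 +
          sum(!left) * (mean(y[!left]) - y_bar)^2   # between-node variance SS_B
  ss_t <- sum((y - y_bar)^2)                        # total variance SS_T
  ss_b / ss_t
}

# Example: evaluate the candidate rule "slope < 0.62" at a node
# g_rss(node$sm_8, node$slope < 0.62)
```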
### 3.2 Spatial extensions to regression trees
Breiman’s CART algorithm discussed in section 3.1 is aspatial, meaning that it
does not consider the geographic distribution of the measurements. As
discussed in section 2, coordinate information can be included as predictor
variables in the model, but often leads to physically unrealistic predictions
as seen in Figure 1. Including coordinate information as explicit predictors
in machine-learning models for spatial data also leads to an overfitting of
the training dataset [16].
Simply excluding coordinate information from the predictive process may reduce
the overfitting of the training dataset, but this may curtail the predictive
power that lies in coordinate information, especially in spatial datasets with
spatial autocorrelation.
An appropriate handling of coordinate information is to prevent the tree from
making splits based upon coordinate information, but allow the tree to track
coordinate information for each sample to use as part of the predictive
process. The following techniques assume that the coordinate vector
$\mathbf{s}=(x,y)$ is available for all samples in the training dataset. This
restructuring frames the regression tree prediction process as being fueled by
both the information encapsulated in the predictor variables
$\\{X_{1},X_{2},\dots,X_{p}\\}$ as well as the geographic location expressed
as a coordinate vector $(x,y)$.
An extension to the regression tree algorithm was proposed by [22]. In this
extension, the objective function in the regression tree algorithm is formed
by a linear combination of the objective function $g_{rss}$ described in
Equation 1 and another objective function $g_{ac}$ that optimizes for measures
of spatial autocorrelation in the partitions. It is defined as
$g=(1-\alpha)g_{rss}+\alpha g_{ac},\quad\quad\alpha\in[0,1]$ (2)
where $\alpha$ is a user-set parameter that weights minimizing the RSS versus
maximizing the autocorrelative statistic. Please note that all cited equations
have been converted into a common notation for convenience.
A popular statistic intended to measure the presence of spatial
autocorrelation in a group of observations is Global Moran’s I (i.e., Moran’s
I) [25]. Moran’s I requires a vector of the response variable $Y$ where we
measure spatial autocorrelation. We also require a spatial weights matrix that
reflects the intensity of the spatial relationship between all pairwise
observations in the group. Moran’s I of a response variable $Y$ for a group of
observations $G$ is defined to be
$I_{Y}=\dfrac{n\sum_{i}\sum_{j}w_{ij}(y_{i}-\bar{y})(y_{j}-\bar{y})}{W\sum_{i}(y_{i}-\bar{y})^{2}}$
(3)
where $n$ is the total number of observations that are indexed by $i$ and $j$
in the group $G$; $w_{ij}$ is the spatial weights matrix entry that represents
the weight between observations $i$ and $j$ ($w_{ij}$ is 0 when $i=j$), and
$W=\sum_{i}\sum_{j}w_{ij}$. $y_{i}$ is the response variable of interest at
observation $i$, and $\bar{y}=\frac{1}{n}\sum_{i}{y_{i}}$ is the mean of the
response variable in the group of observations $G$.
Under the null hypothesis of no spatial autocorrelation, the expected value of
Moran’s I for the response $Y$ in group $G$ is
$E(I_{Y})=\dfrac{-1}{n-1}.$
The statistic $I$ typically lies in the range $[-1,1]$. Values of $I$ that are
significantly above $E(I)$ indicate positive spatial autocorrelation, where
values significantly below $E(I)$ indicate negative spatial autocorrelation
[17].
A critical choice is the spatial weighting scheme to produce the spatial
weights matrix entries $w_{ij}$ used in Moran’s I. The following Gaussian
similarity measure connecting observations $i$ and $j$ is used by [22].
$w_{ij}=\begin{cases}e^{-\frac{\text{dist}(i,j)^{2}}{b^{2}}},&\text{if }\text{dist}(i,j)<b\\ 0,&\text{if }\text{dist}(i,j)\geq b\end{cases}$ (4)
where $\text{dist}(\cdot)$ is a distance metric between two observations $i$
and $j$, and $b$ is a spatial-bandwidth parameter that reflects the spatial
distance at which no spatial influence is assumed between observations. Other
methods for choosing $w_{ij}$ in the weights matrix for the calculation of
Moran’s I are viable, and the best scheme for picking $w_{ij}$ may be dataset
dependent. For a typical use case, this Gaussian weighting scheme is
sufficient.
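A minimal base-R sketch of equations 3 and 4 is given below; `coords` (an
$n\times 2$ coordinate matrix), `y`, and `b` are illustrative names, not the
package interface:

```r
# Moran's I (equation 3) with Gaussian spatial weights (equation 4).
morans_i <- function(y, coords, b) {
  d <- as.matrix(dist(coords))        # pairwise distances dist(i, j)
  w <- exp(-(d / b)^2)                # Gaussian similarity, equation 4
  w[d >= b] <- 0                      # no influence at or beyond bandwidth b
  diag(w) <- 0                        # w_ij = 0 when i = j
  dev <- y - mean(y)
  length(y) * sum(w * outer(dev, dev)) / (sum(w) * sum(dev^2))  # equation 3
}
```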
In order to simultaneously maximize $I_{Y}$ in both partitions of the data, a
fair weighting of $I_{Y}$ in each half according to the number of observations
is required. This can be expressed as
$\tilde{I}_{Y}=\dfrac{n_{L}\cdot I_{YL}+n_{R}\cdot I_{YR}}{n}$ (5)
where $n_{L}$ and $n_{R}$ represent the number of observations in arbitrary
“left” and “right” partitions. $I_{YL}$ and $I_{YR}$ are the values of the
Moran’s I statistic from equation 3 for the individual left and right
partitions, and $n=n_{L}+n_{R}$ is the total number of observations in the
node being split.
The nature of $\tilde{I}_{Y}\in[-1,1]$ requires a re-scaling to $[0,1]$ to
ensure an even match against the residual sum of squares objective function
$g_{rss}$. The autocorrelative objective function can be defined as
$g_{ac}=\dfrac{\tilde{I}_{Y}+1}{2}$
which finds its way into the final weighted objective function from equation 2:
$g=(1-\alpha)g_{rss}+\alpha g_{ac},\quad\quad\alpha\in[0,1].$
By weighting $g_{ac}$ higher with $\alpha$, the tree is more likely to choose
splits that create partitions where the observations in the partition exhibit
the property of spatial autocorrelation. This is in contrast to traditional
regression trees which simply seek to minimize the residual sum of squares.
The motivation underlying this spatial adaptation is that weighing $g_{ac}$ at
the expense of $g_{rss}$ is a worthwhile investment: it creates spatial
partitions that reflect self-contained units exhibiting a spatial pattern.
[22] showed that for some datasets, weighting $g_{ac}$ higher resulted in
gains in predictive accuracy under cross-validation.
On the other hand, weighting $g_{ac}$ too high has its own problems. If
$g_{rss}$ is not weighted enough, then the prediction that is formed by the
mean of the response value in the terminal node $\bar{y}_{T}$ is not
representative of the region as a whole and leads to poor predictions. The
best strategy is to pick the value of $\alpha$ such that an optimal balance is
struck between creating partitions that exhibit spatial autocorrelation while
still maintaining the power of $\bar{y}_{T}$ as a baseline prediction.
[23] builds upon the work in [22] by proposing a novel data mining framework
for geophysical data called interpolative clustering, in which interpolation
of the response variable of the training data can be used to supplement the
prediction $\bar{y}_{T}$ of a regression tree.
As discussed in Section 2, the end goal of these modified regression trees is
to decompose the global spatial landscape into focused sub-regions. [23]
observed that the predictive power of the tree can be improved by simulating a
local spatial process contained in a terminal node of the tree using an
interpolative method such as inverse-distance weighting. Previously, we formed
a prediction for a new observation by considering the arithmetic mean of the
response variable at the terminal node of the tree $\bar{y}_{T}$. Now, we
supplement that prediction with an interpolative simulation of the spatial
pattern of the training observations in the terminal node.
In more formal terms, to make a prediction we run the predictor variables
$\\{X_{1},X_{2},\dots,X_{p}\\}$ through the splitting rules of the tree as
normal, and note the resulting terminal node $T$ that the prediction falls
into. Next, we make the prediction using an interpolation method, using only
the training data in the terminal node as the reference points for
interpolation. We denote the coordinates of the new prediction with
$\mathbf{s}=(x,y)$, whose final prediction is made with
$\hat{Y}(\mathbf{s})=\begin{cases}\dfrac{\sum_{i=1}^{n_{T}}w_{i}(\mathbf{s})y_{Ti}}{\sum_{i=1}^{n_{T}}w_{i}(\mathbf{s})},&\text{if }\text{dist}(\mathbf{s},\mathbf{s}_{Ti})\neq 0\text{ for all }i\\ y_{Ti},&\text{if }\text{dist}(\mathbf{s},\mathbf{s}_{Ti})=0\text{ for some }i\end{cases}$ (6)
where $n_{T}$ is the total number of training observations in the terminal
node $T$, $y_{Ti}$ are the labeled response values of the training data in
$T$, $\mathbf{s}_{Ti}=(x_{i},y_{i})$ are the coordinate locations of the
training data in $T$, and $\text{dist}(\cdot)$ is some spatial distance
metric.
$w_{i}$ is the spatial weight that is assigned between the interpolated
location $\mathbf{s}$ and the known response locations $\mathbf{s}_{Ti}$.
$w_{i}$ is calculated with inverse distance weighting:
$w_{i}(\mathbf{s})=\dfrac{1}{\text{dist}(\mathbf{s},\mathbf{s}_{Ti})^{p}}$ (7)
where $\mathbf{s}$ is the interpolated point, $\mathbf{s}_{Ti}$ are the known
points in $T$, $\text{dist}(\cdot)$ is a distance metric from the known point
to the interpolated point (this may be Euclidean distance in the case of
projected data or also great circle distance in the case of latitude/longitude
coordinates), and $p\in\mathbb{R}^{+}$ is the power parameter set by the user.
A higher power parameter weights nearby observations heavily relative to
farther observations, whereas with a lower power parameter farther
observations retain more influence on the final prediction
$\hat{Y}(\mathbf{s})$. The optimal choice for $p$ is a function of the
strength of the underlying spatial distribution in nature, but $p=2$ is
commonly used.
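The following sketch implements the terminal-node prediction of equations 6
and 7 in base R, assuming Euclidean distances; `node_coords` and `node_y`
stand for the coordinates and responses of the training data in the terminal
node $T$ and are illustrative names:

```r
# Inverse-distance-weighted prediction at a terminal node (equations 6-7).
idw_predict <- function(s, node_coords, node_y, p = 2) {
  d <- sqrt(colSums((t(node_coords) - s)^2))         # dist(s, s_Ti) for all i
  if (any(d == 0)) return(node_y[which(d == 0)[1]])  # exact location match
  w <- 1 / d^p                                       # weights of equation 7
  sum(w * node_y) / sum(w)                           # weighted mean, equation 6
}
```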
The interpolative step described in equations 6 and 7 is most effective when
used in combination with the objective function $g_{ac}$. Creating partitions
and terminal nodes that exhibit spatial autocorrelation is beneficial for
inverse-distance weighting interpolation as its efficacy relies upon the
assumption that the correlation between observations decreases as the distance
between them increases.
## 4 Autocart: extensions to the spatial regression tree
In this section we propose further extensions to the methodologies introduced
in Section 3.2. We refer to the final tree algorithm as the “autocart”
algorithm (autocorrelative regression trees), which is publicly available as
an R statistical software package at the following URL:
github.com/ethanancell/autocart
### 4.1 An adaptive approach to picking the power parameter $p$
In Section 3.2, inverse distance weighting at the terminal nodes of the
regression tree is discussed as a way to supplement the prediction of the
tree. Here, we propose a novel approach to picking the power parameter $p$ in
equation 7 for the inverse-distance weights.
If we consider these spatial regression trees to serve the role of
automatically decomposing a global landscape into local areas where the
spatial pattern may differ, then it is unreasonable to assume that the optimal
choice of the power parameter $p$ is constant across all terminal nodes
representing local
regions. A local region may exhibit a stronger dependence between closely
neighboring observations than other local regions, necessitating a varying
$p$.
In order to assess the strength of a spatial relationship in some terminal
node $T$, we can reuse the Moran’s I statistic from equation 3. A terminal
node exhibiting a stronger correlation between closely neighboring
observations will have a comparatively higher value of $I_{Y}$. In a region
where a stronger correlation is noted between closely neighboring
observations, it is sensible to pick $p$ to be higher so that the weights
defined in equation 7 give greater weight to neighboring observations as
compared to far observations. In a region where a weak correlation is noted
between closely neighboring observations, it is sensible to pick $p$ to be
lower so that the resulting prediction captures the trend of the region
as a whole, rather than relying upon inappropriate confidence in the
predictive power of close observations, indicated by the low value of $I_{Y}$.
Let $M$ represent the set of $I_{Yi}$ calculated for each terminal node in the
regression tree. In the case that $I_{Yi}<E(I_{Yi})$ for a terminal node,
there is no value in interpolation, as the observations close in space exhibit
no positive spatial autocorrelation. The ranged value of $p$ will be based
only upon the values $I_{Yi}\in M$ where spatial autocorrelation is observed. We
denote this new set with
$\tilde{M}=\\{I_{Yi}\in M:I_{Yi}>E(I_{Yi})\\}.$
To make a prediction for the coordinate $\mathbf{s}=(x,y)$, we first run the
accompanying predictor variables $\\{X_{1},X_{2},\dots,X_{p}\\}$ through the
tree to find which terminal node $\mathbf{s}$ belongs to. Once we have found
the correct terminal node $T$, we make the prediction in a similar way to
equation 6:
$\hat{Y}(\mathbf{s})=\begin{cases}y_{Ti},&\text{if dist}(\mathbf{s},\mathbf{s}_{Ti})=0\text{ for some }i\\ \bar{y}_{T},&\text{if }I_{YT}\leq E(I_{YT})\text{ and dist}(\mathbf{s},\mathbf{s}_{Ti})\neq 0\text{ for all }i\\ \dfrac{\sum_{i=1}^{n_{T}}w_{i}(\mathbf{s})y_{Ti}}{\sum_{i=1}^{n_{T}}w_{i}(\mathbf{s})},&\text{if }I_{YT}>E(I_{YT})\text{ and dist}(\mathbf{s},\mathbf{s}_{Ti})\neq 0\text{ for all }i\end{cases}$ (8)
where $I_{YT}$ represents the observed value of the Moran’s I statistic for the
training observations in the terminal node $T$ with respect to the response
variable $Y$, and $E(I_{YT})$ is the expected value of $I_{YT}$. $\bar{y}_{T}$
denotes the observed mean of the response variable $Y$ of the training
observations in $T$.
$w_{i}$ remains much the same as in equation 7, the difference being we use
the varying power parameter $p_{T}$:
$w_{i}(\mathbf{s})=\dfrac{1}{\text{dist}(\mathbf{s},\mathbf{s}_{Ti})^{p_{T}}}.$
(9)
The varying power parameter $p_{T}$ is given by a monotonically increasing
function that maps from $[\min(\tilde{M}),\max(\tilde{M})]$ to $[p_{1},p_{2}]$, where
$p_{1}$ and $p_{2}$ are user-set parameters that indicate the range of power
parameters. The terminal node that exhibits the most significant value of
$I_{YT}$ will use $p_{2}$ for its power parameter, and the terminal node with
the least significant (yet above expected) value of $I_{YT}$ will use $p_{1}$.
One choice of $p_{T}:[\min(\tilde{M}),\max(\tilde{M})]\mapsto[p_{1},p_{2}]$ is
the following linear function:
$p_{T}=\dfrac{(I_{YT}-\min(\tilde{M}))(p_{2}-p_{1})}{\max(\tilde{M})-\min(\tilde{M})}+p_{1}$
(10)
As $p_{1}$ and $p_{2}$ are set by the user, it is crucial that the user have a
sense of an appropriate range of values for $p$ in the context of their
particular dataset.
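Reusing the `idw_predict` sketch above, the adaptive prediction of equations
8-10 can be expressed as follows; `I_T`, `E_I_T`, and `M_tilde` are
illustrative names for the node's Moran's I, its expectation, and the set
$\tilde{M}$:

```r
# Ranged power parameter (equation 10): linear map from M_tilde to [p1, p2].
ranged_p <- function(I_T, M_tilde, p1, p2) {
  (I_T - min(M_tilde)) * (p2 - p1) / (max(M_tilde) - min(M_tilde)) + p1
}

# Adaptive terminal-node prediction (equation 8).
adaptive_predict <- function(s, node_coords, node_y, I_T, E_I_T,
                             M_tilde, p1, p2) {
  d <- sqrt(colSums((t(node_coords) - s)^2))
  if (any(d == 0)) return(node_y[which(d == 0)[1]])  # first case of equation 8
  if (I_T <= E_I_T) return(mean(node_y))             # fall back to node mean
  idw_predict(s, node_coords, node_y,
              p = ranged_p(I_T, M_tilde, p1, p2))    # interpolate with p_T
}
```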
### 4.2 An objective function for the encouragement of spatially-compact
terminal nodes
Section 3.2 described an objective function that encourages high values of
spatial autocorrelation within the internal nodes of the tree. Section 3.2
also describes the use of interpolation at the terminal nodes of the tree to
supplement the prediction of $\bar{y}_{T}$. In this section, we propose
another possible objective function for the tree and explore the possibility
of weighting this new objective function alongside the objective function
$g_{ac}$ described in section 3.2.
When using interpolation as part of the predictive process, it is desired that
terminal nodes of the tree create sub-regions of the data that are ideal for
interpolation. Excessive overlap in the regions created by the terminal nodes
is not ideal for interpolation, as much of the final prediction is weighted by
distant observations while ignoring other observations that are geographically
close but not in the same terminal node. In this section, another objective
function $g_{sc}$ for the encouragement of spatially-compact internal and
terminal nodes is introduced.
At an arbitrary level of the splitting process, define the total sum of
squared pairwise distances within the node $N$ to be
$TSS_{D}=\sum_{\mathbf{s}_{i}\in N}\sum_{\mathbf{s}_{j}\in
N}\text{dist}(\mathbf{s}_{i},\mathbf{s}_{j})^{2}.$
Consider an arbitrary partition of the data in the node $N$ that produces a
“left” and “right” partition, sub-scripted by $l$ and $r$ respectively. Let
$N_{l}$ be the set of all training observations in the left partition, and
$N_{r}$ be the set of all training observations in the right partition.
Define the between sum of squared differences for the partitions to be
$BSS_{D}=2\sum_{\mathbf{s}_{i}\in N_{l}}\sum_{\mathbf{s}_{j}\in N_{r}}\text{dist}(\mathbf{s}_{i},\mathbf{s}_{j})^{2},$
where the factor of $2$ accounts for each cross-partition pair appearing twice
(once in each order) in the double sum defining $TSS_{D}$.
As a sort of “spatial extension” to a one-way ANOVA, the total sum of squares
of distances is composed of the sum of all between sums of squared differences
and the sum of all within sums of squared differences. This is represented as
$TSS_{D}=BSS_{D}+WSS_{D}$ (11)
where
$WSS_{D}=\sum_{\mathbf{s}_{i}\in N_{l}}\sum_{\mathbf{s}_{j}\in N_{l}}\text{dist}(\mathbf{s}_{i},\mathbf{s}_{j})^{2}+\sum_{\mathbf{s}_{i}\in N_{r}}\sum_{\mathbf{s}_{j}\in N_{r}}\text{dist}(\mathbf{s}_{i},\mathbf{s}_{j})^{2}.$
Minimizing $WSS_{D}$ encourages spatially compact regions resulting from the
split, minimizing uncertainty in the interpolation step. Due to the identity
in equation 11, this is possible by maximizing $BSS_{D}$. The previous
objective functions $g_{rss}$ and $g_{ac}$ from sections 3.1 and 3.2
respectively indicate a more desirable split when $g_{rss}$ and $g_{ac}$ are
higher in value and closer to 1. As $\dfrac{BSS_{D}}{TSS_{D}}\in[0,1]$, it is
natural and intuitive to define the goodness of spatial compactness $g_{sc}$
as:
$g_{sc}=\dfrac{BSS_{D}}{TSS_{D}}.$ (12)
We can include $g_{sc}$ in the linear combination of previously discussed
objective functions with the weighting parameter $\beta$:
$g=(1-\alpha-\beta)g_{rss}+\alpha g_{ac}+\beta g_{sc},\quad\quad\text{where
}\alpha,\beta\in[0,1]\text{ and }\alpha+\beta\leq 1.$ (13)
Thus, the revised regression tree algorithm searches through all predictor
variables $X_{1},X_{2},\dots,X_{p}$ for the splitting rule $X_{i}<x$ for some
$i$ that maximizes the objective function $g$ of equation 13 at each recursive
partitioning of the data.
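A sketch of the compactness objective of equation 12 and the combined
objective of equation 13, again with illustrative names (`coords`, `left`,
`alpha`, `beta`) rather than the package interface:

```r
# Spatial compactness g_sc = BSS_D / TSS_D (equation 12).
g_sc <- function(coords, left) {
  d2 <- as.matrix(dist(coords))^2       # squared pairwise distances
  tss_d <- sum(d2)                      # all ordered pairs in the parent node
  bss_d <- 2 * sum(d2[left, !left])     # cross-partition pairs, both orders
  bss_d / tss_d
}

# Combined split objective (equation 13).
g_total <- function(g_rss, g_ac, g_sc, alpha, beta) {
  (1 - alpha - beta) * g_rss + alpha * g_ac + beta * g_sc
}
```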
## 5 Autoforest - a Random Forest extension to autocart trees
The revised objective function in equation 13 and the interpolative process
discussed in section 4 are promising ways to improve upon the predictions of
regression trees when applied to continuous spatial data. Random Forests are a
powerful extension of classification and regression trees: they increase
predictive accuracy by reducing the variance of predictions through a “forest”
of trees, each trained using a bootstrapped sample of training data and a
random subset of the predictor variables to split on at each node. The
averaging of predictions that occurs in Random Forests greatly improves upon
the predictive power of a single regression tree [26].
The creation of a “forest” of autocart trees is proposed and discussed in this
section.
Let us denote a single autocart tree as a function $f_{A}$, where a prediction
is made by running a set of predictors $\\{x_{1},x_{2},\dots,x_{p}\\}$ through
the splitting rules in the tree (trained by maximizing the objective function
$g$ at each recursive partition), and then assigning the final prediction
either by the average of the response variable in the terminal node
(previously referred to as $\bar{y}_{T}$), or by an interpolative rule
$\hat{Y}(\mathbf{s})$ as in equation 6 or equation 8.
We create a forest of $k$ autocart trees by creating the set of trees
$F=\\{f_{A_{1}},f_{A_{2}},\dots,f_{A_{k}}\\}$. A prediction is made by running
the set of predictors $\\{x_{1},x_{2},\dots,x_{p}\\}$ through each tree, and
then using the arithmetic mean of the prediction of all trees in $F$:
$\hat{Y}=\dfrac{1}{k}\sum_{i=1}^{k}f_{A_{i}}(\\{x_{1},x_{2},\dots,x_{p}\\}).$
Each regression tree $f_{A_{i}}$ is trained with $\dfrac{2n}{3}$ training
observations randomly sampled from all $n$ observations without replacement.
Note that this differs from standard Random Forests, where $n$ observations are
sampled with replacement. In this spatial adaptation, repeat observations with
identical coordinate information cause problems in the spatial weights matrix,
as zero pairwise distance results in an “infinite” weight.
As all $n$ records have an equal chance of being chosen, the bias of
$f_{A_{i}}$ is not affected, especially when averaged across all $k$ trees.
Additionally, in each tree only $m$ predictors are selected from
$\{X_{1},X_{2},\dots,X_{p}\}$ at each node. $m$ is a user-set parameter, but
can be safely defaulted to $\lceil\frac{p}{3}\rceil$.
The autoforest extension to the autocart tree is a way to imbue the Random
Forest algorithm with spatial awareness while refraining from explicitly
including coordinates as predictors in the splitting.
(Note: The software implementation of autoforest currently chooses $m$
predictor variables to split on at each tree rather than at each node. This
was done for ease of implementation, but a future version of the R package
will resolve this issue.)
## 6 Results and comparison of model architectures
### 6.1 Datasets Tested
An optimal dataset for the autocart algorithm would include coordinate
information for all training observations, and the prediction of a continuous
response variable over the region represented by the dataset. Additionally,
all predictor variables used to train the autocart tree must be available at
all locations where predictions are desired, as techniques to infer the value
of a missing predictor variable $X_{i}$ are not covered in this paper. If the
autocart algorithm is to be used in a mapping setting, then gridded data
across the entire mapped region is required.
#### 6.1.1 September 2017 Utah soil moisture
This dataset contains the average soil moisture level, recorded as the
proportion of water per cubic centimeter of soil, for 195 remote sensing
stations across the state of Utah.
Gridded 30-year PRISM Climate Normals [27] are used, including the 30-year
normals for maximum vapor pressure deficit, mean annual temperature, and mean
annual precipitation. Additionally, a digital elevation map of Utah from the
PRISM Climate Normals is used to obtain the elevation predictor and the
derived slope and aspect predictors.
The 30-year PRISM Climate Normals and derived data are selected for their
gridded nature and possible environmental relation to soil moisture.
Variables | Description
---|---
sm_8 | The proportion of water per $\text{cm}^{3}$ of soil
elev | The elevation of the location in meters
slope | A unitless “rise over run” measurement of the surface angle of the location. This is calculated from the “elev” model.
aspect | The compass orientation of the slope at a point measured in degrees, where 0 and 360 degrees is north, 90 degrees is east, etc.
min_vpd and max_vpd | The 30-year estimate of minimum / maximum vapor pressure deficit measured in kilopascals (kPa)
min_temp, max_temp, and mean_temp | The 30-year estimate of minimum, maximum, and mean temperature measured in degrees Fahrenheit
mean_dewpoint | The 30-year estimate of the mean dew point temperature in degrees Fahrenheit
precip | The 30-year estimate of annual precipitation in inches
#### 6.1.2 Utah 2017 Ground snowload dataset
This dataset contains the 50-year ground snow load at a variety of measurement
stations across the state of Utah [28]. Predictors are obtained from gridded
PRISM 30-year Climate Normals [27]:
Variables | Description
---|---
yr50 | The response variable: measures the designed ground snow load at the site in kilopascals (kPa)
ELEVATION | The elevation of the measurement site in meters
PPTWT | The total amount of precipitation in a year in inches
MCMT | The mean temperature of the coldest month in the year in Celsius
MWMT | The mean temperature of the warmest month in the year in Celsius
To correct the skewed distribution of the yr50 variable, a log transform of
the response was taken.
#### 6.1.3 Kenya Poverty Mapping
This dataset contains variables related to mapping the presence of poverty in
various states of Kenya [29, 30].
The variables in the dataset include the following:
Variables | Description
---|---
POORDENS | The number of poor people per $\text{km}^{2}$
AREA | The area of the active community group in Kenya
FLIVDEN | The density of livestock expressed in total livestock units/$\text{km}^{2}$
ACCWAT | The percentage of area within one hour walking distance of a permanent natural water source.
PTOWNKM | Distance from the shopping center in each sublocation to the nearest major town by road, in km.
GROUPDENS | The total number of active community groups, including non-governmental organizations and community based organizations.
LATITUDE | The latitude of the centroid of the community group (obtained from accompanying shapefile)
LONGITUDE | The longitude of the centroid of the community group (obtained from accompanying shapefile)
### 6.2 Results
We use cross-validation to assess the predictive accuracy of each model. In
cross-validation, we divide the data into $k$ disjoint partitions known as
“folds”. The model is trained on $k-1$ folds, and then used to predict the
response variable $Y$ of the withheld fold. The predictions $\hat{Y}$ of the
model can be compared to the real response $Y$ for an assessment of the
performance of the model. We repeat the training on $k-1$ folds of the data
$k$ times, withholding a different fold each time, such that every observation
in the training data eventually has the chance to be withheld and compared to
a prediction from the model where its fold was absent. This strategy of
withholding information at each step provides an estimate of a model’s ability
to predict new observations and discourages over-fitting the input data.
Cross-validation is the gold standard for the assessment of a model when a
separate testing dataset is not available.
One choice in forming the $k$ folds is to randomly select observations from
the training data to be a part of each fold. However, the autocorrelation
present in spatial datasets can cause traditional cross-validation to
underestimate the true model error. [16] discusses this issue and presents a
solution for the cross-validation of spatial data known as “spatial blocking”.
In this setup, the folds in cross-validation are formed by creating chunks of
neighboring observations, which limits the opportunity for geographically
close neighbors to be a part of different folds in cross-validation.
If the spatial blocks that form the cross-validation folds are too large, then
the predictive power of the model may be underestimated, as in a realistic
setting predictions may be often made at sites that are very close to the
training observations. Additionally, if we consider the regression tree models
discussed in this paper to be a tool for decomposing a global spatial process
into separate local processes, then withholding all data in a large spatial
block region may inadvertently eradicate the local spatial region’s
representation. On the other hand, if the spatial blocks are too small, then
the number of folds may be very large and dramatically increase the
computation time required to perform the cross-validation procedure. In the
absence of well-defined rules regarding the geographical size of the
cross-validation folds, groups were constructed through trial and error,
guided by the maximum geographical distance between within-group observations.
In the case of both the Utah 2017 snow and soil datasets, a distance of 15 km
was chosen. For the Utah 2017 snow dataset, this yields around one hundred
sub-groups, which were then consolidated into 10 larger groups for use in
cross-validation.
Once a vector of predictions from the model has been created for all 10 folds,
the results of each algorithm on each dataset are evaluated with the root mean
square error (RMSE). This is a common metric to assess the predictive accuracy
of cross-validation for continuous regression problems.
$RMSE=\sqrt{\dfrac{1}{n}\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}}$
where $n$ is the number of observations in the dataset used for cross-
validation, $y_{i}$ is the $i$th element of the true response vector of the
training data, and $\hat{y}_{i}$ is the $i$th element of the prediction vector
made by the model from 10-fold cross-validation.
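The cross-validation loop and RMSE metric can be sketched as below, assuming a
precomputed `fold` label for every observation (the spatial blocks described
above) and hypothetical `fit_model()`/`predict_model()` stand-ins for
whichever method is being assessed:

```r
# Spatially-blocked k-fold cross-validated RMSE; data$y is the response.
cv_rmse <- function(data, fold) {
  pred <- numeric(nrow(data))
  for (f in unique(fold)) {
    fit <- fit_model(data[fold != f, ])              # train on the other folds
    pred[fold == f] <- predict_model(fit, data[fold == f, ])
  }
  sqrt(mean((pred - data$y)^2))                      # RMSE over held-out folds
}
```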
#### 6.2.1 A tuning method for autocart
As the autocart function contains several tune-able parameters (namely
$\alpha$, $\beta$, and the spatial bandwidth $b$), we need to be careful how
we select the optimal choice of parameters during cross-validation. The “best”
performing choices of $\alpha$, $\beta$, and $b$ over all the training data
may be different than the best performing choices for a random subset of the
data, such as a subset used in cross-validation. In a realistic scenario, we
predict new data that was not a part of the training data, and thus we do not
have the labeled response variable $y$ to tune the parameters with. Thus,
instead of tuning the parameters $\alpha$, $\beta$, and $b$ over all the data
and then performing cross-validation, we tune the parameters on the 9 training
groups, and then predict onto the withheld “test” group. In this way, the
optimal choices for $\alpha$, $\beta$, and $b$ will likely vary each time we
withhold a group. In the next section, the cross-validated accuracy of
autocart is obtained using this tuning method.
#### 6.2.2 Dataset results
To test the datasets, 6 different methods are used:
* •
“Simple” Prediction: This is a baseline prediction method that ensures a
machine-learning method is providing an improvement over an overly-simplistic
model. The simple prediction is formed by taking the average of the response
variable in the nine training groups, and then using that average to predict
for the withheld group in cross-validation.
* •
Regression trees: Simple regression trees that are introduced at the beginning
of the paper. The trees are pruned to the point with the best cross-validated
accuracy.
* •
Autocart with $p=2$: An autocart interpolation tree using the power parameter
of $p=2$.
* •
Autocart with $p_{1}=0.5,p_{2}=3.0$: An autocart interpolation tree using the
ranged power parameter $p_{1}$ and $p_{2}$, meant to provide a comparison to
the unranged power parameter $p=2$.
* •
Random forest with ntree = 100: A random forest made of 100 regression trees.
* •
Autoforest with ntree = 100: An autoforest made up of 100 autocart trees.
In each column representing a dataset, the RMSE of the best performing model
is written in bold font.

RMSE of spatial cross-validation

Method | September 2017 Soil (proportion of water per $\text{cm}^{3}$) | Utah 2017 Snow (log of 50-year ground snow load avg in kPa) | Kenya Poverty Mapping (log of number of poor residents per $\text{km}^{2}$)
---|---|---|---
“Simple Prediction” | 0.0882 | 0.8890 | 1.219
Regression trees | 0.1082 | 0.3445 | 1.255
Autocart with $p=2$ | 0.0962 | 0.3097 | 0.966
Autocart with $p_{1}=0.5,p_{2}=3.0$ | 0.0935 | 0.3089 | 0.989
Random forest with $\text{ntree}=100$ | 0.0871 | **0.2845** | **0.933**
Autoforest with $\text{ntree}=100$ | **0.0842** | 0.3003 | 0.993
## 7 Discussion of Results
### 7.1 Inadequacies in the soil moisture datasets
In the “September 2017 Soil” dataset, we observe that the simple prediction
using the average of the response in the nine training groups was nearly the
best performing method, outperformed only by Random Forests and Autoforest,
each by a slim margin.
This highlights an inadequacy in the data, as none of the tested machine-
learning methods are capable of learning the patterns in the labeled response
variable given the set of gridded predictor variables. The following are
possible explanations for the poor performance of the models on the data:
1.
The variation in soil moisture may be much more “localized” than previously
thought. As the soil moisture data is only available at 195 sites in the 2017
soil moisture dataset, there is a strong possibility that there does not exist
enough data for the machine-learning models to appropriately characterize the
patterns in the landscape.
2.
The given gridded predictor variables have no relationship with soil moisture.
One requirement for candidate predictor variables is that they are available
as high resolution gridded maps for the area of interest. The number of
candidate variables satisfying this requirement is limited. Thus, there may be
variables better suited for soil moisture prediction that are unusable in
their current forms.
3.
The data may be contaminated. Some of the sampling locations yield unusually
high soil moisture measurements. This may be the result of an
improper inclusion of irrigation site data (yielding an artificially high
measurement of soil moisture) or perhaps an anomalous rainy day. Such
anomalies defy relationships that may otherwise exist between soil moisture
and the predictor variables, but there is currently no way to know which
observations should be removed due to human intervention in the soil moisture
content.
Datasets covering other time periods were supplied by the Plants, Soils, and
Climate department at USU. However, these datasets contained considerably
fewer soil moisture remote sensing stations, around 95 as opposed to the 195
in the September 2017 dataset. The timeline required to obtain the additional
soil moisture information fell outside the scope of this current project.
All other datasets with the 95 remote sensing sites exhibited the same
problems as the September 2017 soil moisture dataset: overall poor performance
from tree-based machine-learning methods using the PRISM gridded climate
normals. This held regardless of the time period observed. Tested time periods
included the average of all summer-month soil moisture measurements since
2016, as well as averages of soil moisture measurements over individual
months, weeks, and days. Future research will require a re-evaluation of the candidate
predictor variables for soil moisture as well as methods to detect and remove
data anomalies associated with irrigated sites.
### 7.2 “Smoothness” of the maps: Characterizing a complex landscape
Using the cross-validated RMSE score is not the only way we can evaluate the
performance of these methods. In Section 2 and Figure 1, it is mentioned that
the ultimate aim of these methods is to characterize a complex landscape in a
physically realistic way.
As an example, in the field of meteorology, advection schemes are a way to
model the horizontal flow of heat or matter through a certain geographic
region. Advection schemes are based upon the gradients (i.e. local rates of
change) which must be relatively smooth to avoid numerical precision issues in
modeling. Having an extreme jump in a modeled surface can lead to an extreme
local gradient, thereby disrupting the small-scale meteorology. While
smoothness is not a necessary condition of predictions on all spatial
datasets, most spatial and ecological datasets benefit from predictions with
realistic transitions in modeled values across space.
In a spatial or ecological setting, it may be the case that we reject a method
that has a slightly higher measure of predictive accuracy in favor of a method
with similar accuracy and physically realistic output. In this sense, the
method that appears to be providing the map with the most detail and physical
realism across the geographical landscape may be judged to be the “superior”
method, as there is strong evidence it is characterizing the patterns
inherently present in the data. We are unaware of any methods that formally
balance accuracy with smoothness. In practice, this balance will be domain
specific.
A critical parameter in the formation of a regression tree is the choice of
pruning: the depth that the tree is grown to. A pruned regression tree may
have few terminal nodes, thus yielding a “chunky” landscape with sharp and
discrete breaks across the space. It would be unfair to say that a pruned
regression tree does a poor job of characterizing a complex landscape. To
circumvent this unfair comparison, in the mapping process both the simple
regression tree and the autocart tree are not pruned. The stopping criteria
for the autocart and regression tree growth are the same, so we are ensured
that there will be a similar number of terminal nodes in each tree.
#### 7.2.1 2017 Utah Soil Moisture
Although the RMSE of all machine-learning methods tested is less than
impressive when compared against the “simple” prediction discussed in Section
6.2.2, the maps generated by the methods are promising:
(a) Regression trees
(b) Autocart with ranged power parameter $p_{1}=0.5,p_{2}=3.0$
Figure 3: Prediction map of average soil moisture (proportion of water per
$\text{cm}^{3}$) in September 2017
In Figure 3b, we observe the characterization of a more complex landscape, as
compared to Figure 3a. One noticeable “halo” of the interpolative effect can
be seen at approximately (112 W, 41 N).
(a) Random forests
(b) Autoforest with $\text{ntree}=100$ and $p_{1}=0.5,p_{2}=3.0$
Figure 4: Prediction map of average soil moisture (proportion of water per
$\text{cm}^{3}$) in September 2017 with ensemble methods.
In both the Random Forest and Autoforest maps, we see an improved
characterization of the complex landscape with physically realistic
predictions as compared to the traditional regression-tree based approaches.
Although there is a discernible difference between the two maps, the
difference is not as stark as with Figures 3a and 3b.
It is important to mention that Figure 4a differs from Figure 1 from the
beginning of the paper in that longitude/latitude are not included as
predictors in the model. As the coordinate information is not explicitly
included as a predictor, we do not observe the same visual artifacts that we
did in Figure 1.
#### 7.2.2 2017 Utah ground snow load
(a) Regression trees
(b) Autocart with ranged power parameter $p_{1}=0.5,p_{2}=3.0$
Figure 5: Prediction map of the 50-year ground snow load average (log of kPa)
in Utah as of 2017
In a similar way to Figure 3, we observe that Figure 5b is slightly smoother
and more complex in nature. The smoothness of the autocart tree can once
again be primarily attributed to the interpolation step at the terminal node
of the tree.
The efficacy of both the regression tree and autocart tree is observed here,
as the patterns in the map directly reflect the mountains of Utah. It is no
surprise that high snow loads are observed in mountainous locations.
(a) Random forests
(b) Autoforest with $\text{ntree}=100$ and $p_{1}=0.5,p_{2}=3.0$
Figure 6: Prediction map of the 50-year ground snow load average (log of kPa)
in Utah as of 2017 with ensemble methods
In the prediction map of Random Forests and Autoforest, we see very few
differences. Primarily, we observe a slightly more “patchy” area in
southwestern Utah in Figure 6a when compared with the smooth southwestern Utah
area in Figure 6b.
## 8 Developed Software
All software and source code surrounding the ideas in this paper are available
at
https://github.com/ethanancell/autocart.
All methods are implemented in the R statistical software environment [31].
The autocart package utilizes the Rcpp [32, 33, 34], RcppArmadillo [35],
RcppParallel [36], fields [37], and mgcv [38, 39, 40, 41, 42] R packages.
Regression tree and random forest testing were performed using the rpart [43]
and randomForest [44] packages respectively.
## 9 Future Work
* •
Resolving possible issues in the soil moisture dataset
As mentioned in Section 7, a future research direction would be obtaining
gridded predictor variables that have a stronger relationship with the measure
of soil moisture. As it currently stands, the PRISM 30-year normals and other
derived predictors don’t appear to strongly characterize soil moisture.
In addition, it would be helpful to have a human examination of the data so
that contaminated sites (such as measurements taken on irrigated land) can be
removed. The inclusion of these sites can obscure a possible underlying
relationship between predictor variables and soil moisture.
* •
Another ensemble method of autocart trees
In addition to a “forest” of autocart trees, another interesting research
direction would be creating a boosted ensemble of autocart trees. Boosted
trees are another tree-ensemble method [45], and are popular and powerful
tools for both regression and classification. It may be the case that a
boosted ensemble of autocart trees does comparatively better than a random
forest ensemble of autocart trees.
* •
An automatic selection of the power parameter
Wherever the ranged or unranged power parameter exists, some speculation and
knowledge on the part of the researcher is required to pick a good value of
$p$ or $p_{1}$ and $p_{2}$. It may be possible to include an algorithm that
automatically infers the strength of the spatial relationship in a particular
region and sets the power parameter accordingly.
* •
An objective function representing the interaction between $g_{ac}$ and
$g_{sc}$
It may be the case that the objective function $g_{sc}$ described in Section
4.2 is only useful in regions with strong autocorrelation. There may be some
predictive value in selecting splits that maximize an objective function such
as $\tilde{g}=g_{ac}\cdot g_{sc}\cdot I_{\\{g_{ac}>\lambda\\}}$ where $I$ is
the indicator function and $\lambda$ is some threshold of autocorrelation that
needs to be met before the interaction objective function $g_{ac}g_{sc}$ is
used in evaluating the split.
* •
Using $g_{ac}$ and $g_{sc}$ in a non-tree-based machine-learning method
In many cases, other machine-learning methods are easily adaptable with a
change in the objective function. The objective functions $g_{ac}$ and
$g_{sc}$ described in this paper could be used as objective functions for
other methods such as neural networks or support vector machines.
* •
A formal evaluation of the smoothness of a region
In Section 7.2, the smoothness of the maps is compared with a visual
assessment. The visual comparison of these maps could be supplemented with a
numerical test of “smoothness” over a region.
## 10 Conclusion
Complex spatial datasets pose a difficult challenge to existing
machine-learning methods, as they are not equipped to handle spatial
information in a natural way. Autocart regression trees provide a way to imbue
a traditional
regression tree with coordinate information in a natural way through the use
of spatially-aware objective functions. The autocart tree is also capable of
traditional interpolation in the terminal nodes of the tree using an adaptive
inverse distance weighting technique.
Spatially-aware regression trees such as autocart also show a level of promise
in providing results with a high measure of predictive accuracy on spatial
datasets. In addition, the mapping results of autocart trees exhibit many
desirable properties such as smoothness in the modeled response variable over
the geographical region. In many cases, the mapped result of an autocart tree
is very similar to that obtained by a random forest, yet retains a simpler
model form.
Although the autocart method was originally created to respond to the unique
challenges of modeling soil moisture in a climatically-diverse state like
Utah, as of now this method does not show a particularly strong increase in
predictive accuracy over a very simple prediction formed by averaging the
available data. It is suspected that this may be the product of a lack of soil
moisture data availability, contamination of irrigation data, or an
unfortunate lack of predictive power in available gridded climate variable
predictors.
Due to the simple nature of the autocart tree’s structure, it can be easily
included in an ensemble method such as a random forest of autocart trees.
Although the gain in accuracy of autoforest over the traditional random forest
is not as pronounced as that of autocart over regression trees, it is evident
that autocart trees show potential in being adapted into ensemble
methods. The autocart package provides the framework and template for
continued improvements in the ways that the spatial data of today is handled
in the machine-learning algorithms of tomorrow.
## References
* [1] I. Rodriguez-Iturbe, “Ecohydrology: A hydrologic perspective of climate-soil-vegetation dynamies,” Water Resources Research, vol. 36, no. 1, pp. 3–9, 2000.
* [2] Natural Resources Conservation Service, “President’s budget.” https://www.obpa.usda.gov/27nrcsexnotes2018.pdf, 2018.
* [3] D. Kathuria, B. P. Mohanty, and M. Katzfuss, “A nonstationary geostatistical framework for soil moisture prediction in the presence of surface heterogeneity,” Water Resources Research, vol. 55, no. 1, pp. 729–753, 2019.
* [4] P. Goovaerts, Geostatistics for natural resources evaluation. Oxford University Press, 1997.
* [5] A. B. McBratney, M. M. Santos, and B. Minasny, “On digital soil mapping,” Geoderma, vol. 117, no. 1-2, pp. 3–52, 2003.
* [6] J. Friedman, T. Hastie, and R. Tibshirani, The elements of statistical learning. Springer series in statistics New York, 2001.
* [7] E. W. Fox, J. M. Ver Hoef, and A. R. Olsen, “Comparing spatial regression to random forests for large environmental data sets,” PloS one, vol. 15, no. 3, p. e0229509, 2020.
* [8] R. Grimm, T. Behrens, M. Märker, and H. Elsenbeer, “Soil organic carbon concentrations and stocks on barro colorado island—digital soil mapping using random forests analysis,” Geoderma, vol. 146, no. 1-2, pp. 102–113, 2008.
* [9] J. Li, A. D. Heap, A. Potter, Z. Huang, and J. J. Daniell, “Can we improve the spatial predictions of seabed sediments? a case study of spatial interpolation of mud content across the southwest australian margin,” Continental Shelf Research, vol. 31, no. 13, pp. 1365–1376, 2011.
* [10] M. Sommer, M. Wehrhan, M. Zipprich, U. Weller, W. Zu Castell, S. Ehrich, B. Tandler, and T. Selige, “Hierarchical data fusion for mapping soil units at field scale,” Geoderma, vol. 112, no. 3-4, pp. 179–196, 2003.
* [11] D. R. Cutler, T. C. Edwards Jr, K. H. Beard, A. Cutler, K. T. Hess, J. Gibson, and J. J. Lawler, “Random forests for classification in ecology,” Ecology, vol. 88, no. 11, pp. 2783–2792, 2007.
* [12] G. De’ath and K. E. Fabricius, “Classification and regression trees: a powerful yet simple technique for ecological data analysis,” Ecology, vol. 81, no. 11, pp. 3178–3192, 2000.
* [13] T. Hengl, J. Mendes de Jesus, G. B. Heuvelink, M. Ruiperez Gonzalez, M. Kilibarda, A. Blagotić, W. Shangguan, M. N. Wright, X. Geng, B. Bauer-Marschallinger, et al., “Soilgrids250m: Global gridded soil information based on machine learning,” PLoS one, vol. 12, no. 2, p. e0169748, 2017.
* [14] A. Ramcharan, T. Hengl, T. Nauman, C. Brungard, S. Waltman, S. Wills, and J. Thompson, “Soil property and class maps of the conterminous united states at 100-meter spatial resolution,” Soil Science Society of America Journal, vol. 82, no. 1, pp. 186–201, 2018.
* [15] L. Breiman, J. Friedman, C. J. Stone, and R. A. Olshen, Classification and regression trees. CRC press, 1984.
* [16] H. Meyer, C. Reudenbach, S. Wöllauer, and T. Nauss, “Importance of spatial predictor variable selection in machine learning applications–moving from data reproduction to spatial prediction,” Ecological Modelling, vol. 411, p. 108815, 2019.
* [17] R. A. Dubin, “Spatial autocorrelation: a primer,” Journal of housing economics, vol. 7, no. 4, pp. 304–327, 1998.
* [18] W. R. Tobler, “A computer movie simulating urban growth in the detroit region,” Economic geography, vol. 46, no. sup1, pp. 234–240, 1970.
* [19] P. Legendre, “Spatial autocorrelation: Trouble or new paradigm?,” Ecology, vol. 74, no. 6, pp. 1659–1673, 1993.
* [20] H.-M. Kim, B. K. Mallick, and C. Holmes, “Analyzing nonstationary spatial data using piecewise gaussian processes,” Journal of the American Statistical Association, vol. 100, no. 470, pp. 653–668, 2005.
* [21] M. J. Heaton, W. F. Christensen, and M. A. Terres, “Nonstationary gaussian process models using spatial hierarchical clustering from finite differences,” Technometrics, vol. 59, no. 1, pp. 93–101, 2017.
* [22] D. Stojanova, M. Ceci, A. Appice, D. Malerba, and S. Džeroski, “Dealing with spatial autocorrelation when learning predictive clustering trees,” Ecological Informatics, vol. 13, pp. 22–39, 2013.
* [23] A. Appice and D. Malerba, “Leveraging the power of local spatial autocorrelation in geophysical interpolative clustering,” Data Mining and Knowledge Discovery, vol. 28, no. 5-6, pp. 1266–1313, 2014.
* [24] S. Georganos, T. Grippa, A. Niang Gadiaga, C. Linard, M. Lennert, S. Vanhuysse, N. Mboga, E. Wolff, and S. Kalogirou, “Geographical random forests: a spatial extension of the random forest algorithm to address spatial heterogeneity in remote sensing and population modelling,” Geocarto International, pp. 1–16, 2019.
* [25] P. A. Moran, “Notes on continuous stochastic phenomena,” Biometrika, vol. 37, no. 1/2, pp. 17–23, 1950.
* [26] L. Breiman, “Random forests,” Machine learning, vol. 45, no. 1, pp. 5–32, 2001.
* [27] PRISM Climate Group, Oregon State University. http://prism.oregonstate.edu. Created 4 Feb 2004.
* [28] B. Bean, M. Maguire, and Y. Sun, “The Utah snow load study,” Tech. Rep. 4591, Utah State University, Department of Civil and Environmental Engineering, 2018.
* [29] Center For International Earth Science Information Network-CIESIN-Columbia University, “Poverty mapping project: Poverty and food security case studies,” 2005.
* [30] International Livestock Research Institute, 20040115, “Kenya Kajiado Case Study: ILRI, Nairobi.”
* [31] R Core Team, R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2013.
* [32] D. Eddelbuettel and R. François, “Rcpp: Seamless R and C++ integration,” Journal of Statistical Software, vol. 40, no. 8, pp. 1–18, 2011.
* [33] D. Eddelbuettel, Seamless R and C++ Integration with Rcpp. New York: Springer, 2013. ISBN 978-1-4614-6867-7.
* [34] D. Eddelbuettel and J. J. Balamuta, “Extending R with C++: A brief introduction to Rcpp,” PeerJ Preprints, vol. 5, p. e3188v1, Aug 2017.
* [35] D. Eddelbuettel and C. Sanderson, “Rcpparmadillo: Accelerating r with high-performance c++ linear algebra,” Computational Statistics and Data Analysis, vol. 71, pp. 1054–1063, March 2014.
* [36] JJ Allaire et al., “RcppParallel.” https://cran.r-project.org/web/packages/RcppParallel/index.html, 2020.
* [37] Douglas Nychka, Reinhard Furrer, John Paige, and Stephan Sain, “fields: Tools for spatial data,” 2017. R package version 11.4.
* [38] S. N. Wood, “Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models,” Journal of the Royal Statistical Society (B), vol. 73, no. 1, pp. 3–36, 2011.
* [39] S. N. Wood, N. Pya, and B. Säfken, “Smoothing parameter and model selection for general smooth models (with discussion),” Journal of the American Statistical Association, vol. 111, pp. 1548–1575, 2016.
* [40] S. N. Wood, “Stable and efficient multiple smoothing parameter estimation for generalized additive models,” Journal of the American Statistical Association, vol. 99, no. 467, pp. 673–686, 2004.
* [41] S. Wood, Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC, 2 ed., 2017.
* [42] S. N. Wood, “Thin-plate regression splines,” Journal of the Royal Statistical Society (B), vol. 65, no. 1, pp. 95–114, 2003.
* [43] Terry Therneau, Beth Atkinson, Brian Ripley, “rpart.” https://CRAN.R-project.org/package=rpart, 2019.
* [44] A. Liaw and M. Wiener, “Classification and regression by randomforest,” R News, vol. 2, no. 3, pp. 18–22, 2002.
* [45] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of computer and system sciences, vol. 55, no. 1, pp. 119–139, 1997.
# Black hole shadow to probe modified gravity

A. Stepanian$^{1}$, Sh. Khlghatyan$^{1}$, V.G. Gurzadyan$^{1,2}$

$^{1}$ Center for Cosmology and Astrophysics, Alikhanian National Laboratory and Yerevan State University, Yerevan, Armenia
$^{2}$ SIA, Sapienza Universita di Roma, Rome, Italy
###### Abstract
We study the black hole’s shadow for Schwarzschild-de Sitter and Kerr-de
Sitter metrics with the contribution of the cosmological constant $\Lambda$.
Based on the reported parameters of the M87* black hole shadow we obtain
constraints on $\Lambda$ and show the agreement with the cosmological data. It
is shown that the coupling of the $\Lambda$-term with the spin parameter
reveals peculiarities for the photon spheres and hence for the shadows. Within
the parametrized post-Newtonian formalism the constraint for the corresponding
$\Lambda$-determined parameter is obtained.
###### pacs:
98.80.-k Cosmology
## 1 Introduction
The release of the M87* massive black hole (BH) shadow image by the EHT
Collaboration M87a ; M87a1 ; M87a2 ; M87b marked a new phase of study of a
number of physical effects occurring in the very centers of galactic nuclei.
The BH’s shadow image is used to constrain both the BH parameters, including
its spin within General Relativity (GR), and the properties of the accretion
disk, see M87c ; M87d ; M87e and references therein. The shadow parameters
also appear to be efficient in testing the parametrized post-Newtonian (PPN)
formalism under strong-field conditions in the first-order approximation
PPN .
In this Letter we consider the possibility of using the available M87* shadow
information to constrain the Schwarzschild-de Sitter (SdS) and Kerr-de Sitter
(KdS) BHs, i.e. taking into account the cosmological constant $\Lambda$ in the
metric. The $\Lambda$-term in the spherically symmetric metric arises also in
the modified weak-field General Relativity (GR), which enables one to consider
the common nature of the dark energy and dark matter G ; GS1 ; GS2 ; GS3 .
That approach also suggests a test for potential deviations from GR, having in
view the value of the cosmological constant $\Lambda=1.11\times
10^{-52}m^{-2}$ Pl , using the gravity lensing observations GSlens .
We use key properties of the photon orbits in the presence of non-zero
$\Lambda$ for SdS and KdS metrics and of the shadows, to obtain the
constraints on the numerical value of $\Lambda$. We also analyze the role of
the $\Lambda$ term for the shadow properties within the parametrized post-
Newtonian formalism.
## 2 Constraining the value of $\Lambda$ from the shadow
The Schwarzschild-de Sitter metric, i.e. the spherically symmetric metric with
non-vanishing $\Lambda$-term has the following form R
$ds^{2}=\left(1-\frac{2GM}{rc^{2}}-\frac{\Lambda
r^{2}}{3}\right)c^{2}dt^{2}-\left(1-\frac{2GM}{rc^{2}}-\frac{\Lambda
r^{2}}{3}\right)^{-1}dr^{2}-r^{2}d\Omega^{2}.$ (1)
Importantly, it has been shown that the $\Lambda$-term does not affect the
null geodesics Is ; Con1 ; L . Thus, it can be checked that, compared to the
$\Lambda=0$ case, the photon sphere remains unchanged, i.e.
$r_{ph}=\sqrt{g_{tt}}\left(\frac{d\sqrt{g_{tt}}}{dr}|_{r_{ph}}\right)^{-1}=\frac{3GM}{c^{2}}.$
(2)
However, due to the presence of the $\Lambda$ term in the $g_{tt}$ component,
the same is not true for the radius of the BH’s shadow which is equal to
$r_{sh}=\frac{r_{ph}}{\sqrt{g_{tt}(r_{ph})}}.$ (3)
In this sense, compared to the Schwarzschild metric, where the shadow is
$r_{sh,sh}=3\sqrt{3}\frac{GM}{c^{2}},$ (4)
we get
$r_{sh,\Lambda}=\frac{\frac{3GM}{c^{2}}}{\sqrt{\frac{1}{3}-3(\frac{GM}{c^{2}})^{2}\Lambda}}.$
(5)
Thus, by taking the recently reported values of the M87* BH shadow we can find
constraints on the numerical value of the cosmological constant. Namely, the
upper limits for $\Lambda$ can be obtained based on two different reported
values.
First, we get the upper limit based on the reported value $r_{sh}=42\pm
3\mu as$. In this case the constraint reads
$1+\frac{\mathbb{E}(r_{sh})}{r_{sh}}\geq\frac{r_{sh,\Lambda}}{r_{sh,sh}}=\frac{1}{\sqrt{1-9\left(\frac{GM}{c^{2}}\right)^{2}\Lambda}}.$
(6)
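Solving the inequality for $\Lambda$ (squaring both sides and rearranging)
makes the bound explicit:

$\Lambda\leq\frac{1}{9}\left(\frac{c^{2}}{GM}\right)^{2}\left[1-\left(1+\frac{\mathbb{E}(r_{sh})}{r_{sh}}\right)^{-2}\right].$

With $\mathbb{E}(r_{sh})/r_{sh}=3/42$ and $GM/c^{2}\approx 9.6\times
10^{12}\,m$ for $M=6.5\times 10^{9}M_{\odot}$, the bracket evaluates to
$\approx 0.129$ and the bound to $\approx 1.55\times 10^{-28}m^{-2}$,
reproducing the first limit quoted below up to rounding.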
Next, we repeat the same analysis, this time considering the mass of M87*,
i.e. $M=(6.5\pm 0.7)\times 10^{9}M_{\odot}$. It should be noted that we
cannot take both errors into account simultaneously, since the mass of the BH
and the radius of the shadow are dependent on each other. Consequently, for
these two cases we get
$\Lambda\leq 1.542\times 10^{-28}m^{-2},\quad\Lambda\leq 2.214\times
10^{-28}m^{-2}.$ (7)
The obtained limits, as we see, are close to each other and do not contradict
the cosmological value of $\Lambda$.
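To make the arithmetic explicit, the following Python sketch reproduces the two
upper limits of Eq.(7) from Eqs.(5)-(6); the adopted mass
$M=6.5\times 10^{9}M_{\odot}$ and the fractional uncertainties are taken from
the text, and small differences in the last digits may arise from rounding of
the physical constants.

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30   # SI units
M = 6.5e9 * M_sun                            # M87* mass adopted in the text
m = G * M / c**2                             # GM/c^2 in meters

def lambda_upper_limit(fractional_error):
    # Eq.(6): 1 + E(r_sh)/r_sh >= 1/sqrt(1 - 9 (GM/c^2)^2 Lambda),
    # solved for the largest Lambda compatible with the inequality.
    ratio = 1.0 + fractional_error
    return (1.0 - ratio**-2) / (9.0 * m**2)

print(lambda_upper_limit(3.0 / 42.0))  # shadow-radius error: ~1.5e-28 m^-2
print(lambda_upper_limit(0.7 / 6.5))   # mass error:          ~2.2e-28 m^-2
```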
## 3 Schwarzschild-de Sitter vs Kerr-de Sitter BHs
For the Schwarzschild-de Sitter metric, Eq.(5) gives $r_{sh,\Lambda}\to\infty$
once
$9\left(\frac{GM}{c^{2}}\right)^{2}\Lambda=1.$ (8)
It can be checked that this relation is also the condition for having the so-
called extreme SdS BH solution. Indeed, it is known that the event horizons of
the SdS metric, i.e. Eq.(1), are equal to
$r_{1}=\frac{2}{\sqrt{\Lambda}}\cos\left(\frac{1}{3}\cos^{-1}\left(\frac{3GM\sqrt{\Lambda}}{c^{2}}\right)+\frac{\pi}{3}\right),\quad
r_{2}=\frac{2}{\sqrt{\Lambda}}\cos\left(\frac{1}{3}\cos^{-1}\left(\frac{3GM\sqrt{\Lambda}}{c^{2}}\right)-\frac{\pi}{3}\right),\quad
r_{3}=-(r_{1}+r_{2}).$ (9)
However, once the condition in Eq.(8) is satisfied, instead of two (positive
and real) event horizons we will have a single horizon of radius
$r_{1}=r_{2}=\frac{1}{\sqrt{\Lambda}}.$ (10)
Interestingly, one can check that at $r=\frac{1}{\sqrt{\Lambda}}$ the
gravitational attraction of the Newtonian term in Eq.(1) is completely
balanced by the repulsive force produced by the $\Lambda$-term, i.e.
$\left(\frac{GM}{r^{2}}-\frac{\Lambda
c^{2}r}{3}\right)\bigg{|}_{r=\frac{1}{\sqrt{\Lambda}}}=0.$ (11)
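This balance can be verified symbolically: substituting
$r=1/\sqrt{\Lambda}$ into the bracket of Eq.(11) and imposing the extremality
condition of Eq.(8) gives exactly zero. A minimal sympy sketch:

```python
import sympy as sp

G, M, c, Lam, r = sp.symbols('G M c Lambda r', positive=True)
force = G*M/r**2 - Lam*c**2*r/3              # bracket of Eq.(11)
at_r = force.subs(r, 1/sp.sqrt(Lam))         # evaluate at r = 1/sqrt(Lambda)
# Eq.(8): 9 (GM/c^2)^2 Lambda = 1  =>  M = c^2 / (3 G sqrt(Lambda))
balanced = at_r.subs(M, c**2/(3*G*sp.sqrt(Lam)))
print(sp.simplify(balanced))                 # prints 0
```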
Moreover, it is commonly believed that in more realistic astrophysical cases a
BH is described not by the Schwarzschild metric but by the Kerr metric, where
the BH’s spin parameter $a=\frac{J}{Mc}$ is also taken into account. Indeed,
the presence of $a$, as the indicator of the BH’s intrinsic angular momentum,
leads to several interesting results which have been studied extensively in
the literature (see e.g. K1 ; K2 ; K3 ). Accordingly, for a Kerr BH, instead
of one photon orbit one will have two, with the following radii
$r_{1}=\frac{2GM}{c^{2}}\left(1+\cos\left(\frac{2}{3}\cos^{-1}\left(-\frac{|a|c^{2}}{GM}\right)\right)\right),\quad
r_{2}=\frac{2GM}{c^{2}}\left(1+\cos\left(\frac{2}{3}\cos^{-1}\left(+\frac{|a|c^{2}}{GM}\right)\right)\right).$
(12)
Clearly, for $a=0$ the two solutions coincide and the Schwarzschild solution
of Eq.(2) is recovered. Given our analysis, here we are interested in
including $\Lambda$ in the metric of rotating BHs; namely, in our case this
will be the Kerr-de Sitter (KdS) metric, which is defined as follows
$ds^{2}=\frac{\Delta_{r}}{\rho^{2}L^{2}}\left(cdt-a\sin^{2}\theta
d\phi\right)^{2}-\frac{\rho^{2}}{\Delta_{r}}dr^{2}-\frac{\rho^{2}}{\Delta_{\theta}}d\theta^{2}-\frac{\Delta_{\theta}\sin^{2}\theta}{\rho^{2}L^{2}}\left(acdt-(r^{2}+a^{2})d\phi\right)^{2},$
(13)
where
$\displaystyle\Delta_{r}=\left(1-\frac{\Lambda
r^{2}}{3}\right)(r^{2}+a^{2})-\frac{2GMr}{c^{2}},$ (14)
$\displaystyle\Delta_{\theta}=\left(1+\frac{a^{2}\Lambda\cos^{2}\theta}{3}\right),$
$\displaystyle L=\left(1+\frac{a^{2}\Lambda}{3}\right),$
$\displaystyle\rho^{2}=r^{2}+a^{2}\cos^{2}\theta.$
In this sense, the KdS metric describes the geometry of spacetime when a single
axially symmetric object is immersed in a de Sitter background KdS1 ; KdS2 ;
KdS3 . As a result, the photon sphere for the KdS metric can be obtained by
solving the following cubic equation
$3\left(1+\frac{1}{3}\Lambda
a^{2}\right)^{2}r^{3}+6(\frac{GM}{c^{2}})\left(\Lambda
a^{2}-3\right)r^{2}+27(\frac{GM}{c^{2}})^{2}r-12\left(\frac{GM}{c^{2}}\right)a^{2}=0$
(15)
The key point of the above equation is the coupling of $a$ and $\Lambda$.
Namely, there is no free $\Lambda$-term, which means that for $a=0$ the
equation reduces to the standard Schwarzschild case. Clearly, this fact
illustrates that for spherically symmetric BHs, whether or not $\Lambda$
vanishes, the photon sphere radius is equal to $3\frac{GM}{c^{2}}$. In the
axially symmetric case, however, the $\Lambda$-term couples to the spin
parameter and, as a result, in contrast to the spherically symmetric case, the
photon spheres of the Kerr and KdS BHs differ.
However, the differences between the Kerr and KdS BHs for astrophysical
configurations such as M87* are too small to be detected via current
observational methods. In fact, the difference between Eq.(15) and the pure
Kerr case is the presence of $\frac{1}{3}\Lambda a^{2}$ and $\Lambda a^{2}$ in
the third- and second-degree terms of the polynomial, respectively. Similarly,
the difference between the Kerr and Schwarzschild cases arises from the
$12\left(\frac{GM}{c^{2}}\right)a^{2}$ term. Nevertheless, the main point is
that, considering the current value of the cosmological constant and the
parameters of a BH such as M87*, the contribution of $\Lambda a^{2}$ turns out
to be too small to be observed at typical astrophysical scales. In particular,
for M87* this contribution is of order $10^{-27}$, far smaller than the
$\frac{GM}{c^{2}}a^{2}$ factor in the last term of Eq.(15).
Here, the essential point is that the nature of the photon orbits of the Kerr
and KdS BHs is identical. Indeed, since the roots of Eq.(15) can be considered
as the radii of the photon orbits, finding the real and positive roots is of
main importance. It should be noticed that the number of real and positive
solutions depends on the sign of $\Lambda a^{2}-3$; based on current
astrophysical data, for a BH of M87* type this quantity is definitely
negative, so there are three positive solutions. Furthermore, it can be shown
that the value of the first positive root is smaller than the radius of the
outer event horizon of the BH. Namely, for the data of M87* the solutions of
the full Eq.(15) for the radii in the presence of positive $\Lambda$ are (in
meters)
$r_{1}=5.124\times 10^{12},\quad r_{2}=1.5\times 10^{13},\quad
r_{3}=3.767\times 10^{13}.$ (16)
On the other hand, taking the KdS metric of Eq.(13), it becomes clear that the
event horizons of the BH are obtained by solving the following equation
$\Delta_{r}=\left(1-\frac{\Lambda
r^{2}}{3}\right)(r^{2}+a^{2})-\frac{2GMr}{c^{2}}=0.$ (17)
Consequently, for M87* the radii of the event horizons (in meters) are
$EH_{1}=-1.643\times 10^{26},\quad EH_{2}=5.434\times 10^{12},\quad
EH_{3}=1.383\times 10^{13},\quad EH_{4}=1.643\times 10^{26}.$ (18)
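The roots quoted in Eqs.(16) and (18) can be reproduced numerically with
numpy. The spin parameter adopted in the text is not stated explicitly; the
sketch below assumes $a\approx 0.9\,GM/c^{2}$ (a value consistent with the
quoted roots), so the output matches Eqs.(16) and (18) only to within this
assumption and rounding.

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
m = G * 6.5e9 * M_sun / c**2     # GM/c^2 for M87* [m]
a = 0.9 * m                      # assumed spin parameter (not stated in the text)
Lam = 1.11e-52                   # cosmological constant [m^-2]

# Photon-orbit cubic, Eq.(15), coefficients in descending powers of r
cubic = [3.0 * (1.0 + Lam * a**2 / 3.0)**2,
         6.0 * m * (Lam * a**2 - 3.0),
         27.0 * m**2,
         -12.0 * m * a**2]
print(np.sort(np.roots(cubic).real))    # ~5.1e12, 1.5e13, 3.8e13 m, cf. Eq.(16)

# Horizon condition Delta_r = 0, Eq.(17), expanded as a quartic in r:
# -(Lam/3) r^4 + (1 - Lam a^2/3) r^2 - 2 m r + a^2 = 0
quartic = [-Lam / 3.0, 0.0, 1.0 - Lam * a**2 / 3.0, -2.0 * m, a**2]
print(np.sort(np.roots(quartic).real))  # ~ -1.64e26, 5.4e12, 1.38e13, 1.64e26 m, cf. Eq.(18)
```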
Following the geometry of the KdS BH, it turns out that $EH_{4}$ is the so-
called “cosmological horizon”, located far beyond the other two horizons,
$EH_{2}$ and $EH_{3}$, which are regarded as the inner and outer event
horizons of the KdS BH, in analogy with the Kerr BH. Meanwhile, the negative
$EH_{1}$ is interpreted as the dual of $EH_{4}$, located on the other side of
the BH’s ring singularity; it becomes relevant only in the mathematical
extension of the KdS solution.
Accordingly, by comparing Eqs.(16) and (18) we find
$r_{1}<EH_{3}<r_{2},r_{3}<EH_{4}.$ (19)
Thus, similarly to the Kerr case, two photon orbits are formed outside the BH.
Indeed, the presence of these two solutions for the Kerr BH can be regarded as
a manifestation of the frame-dragging effect. Namely, the frame-dragging
frequency for the KdS metric in Eq.(13) is defined as
$\Omega=-\frac{g_{t\phi}}{g_{\phi\phi}},$ (20)
so that while a photon at the inner orbit $r_{2}$ moves in the same direction
as the BH’s spin, the motion at the outer orbit $r_{3}$ is in the opposite
direction, owing to the presence of the BH’s intrinsic spin. In this sense, we
can conclude that the presence of the $\Lambda$-term in the BH equations only
corrects the radii of the photon orbits, while the nature of the frame-dragging
effect itself, and in particular the formation of the photon orbits, remains
intact.
Finally, since the well-known Lense-Thirring (LT) precession can be obtained
from Eq.(20) in the slow-rotation limit, our statement above about the effect
of $\Lambda$ on frame dragging can be regarded as a continuation of the
investigation of LT precession in the presence of $\Lambda$ SKG . It was shown
there that an additional term containing $\Lambda$ appears,
$\Omega_{LT}=\frac{2GJ}{c^{2}r^{3}}+\frac{\Lambda J}{3M},$ (21)
which can be interpreted as a correction to the so-called
gravito-gyromagnetic ratio. However, in the above relation $\Lambda$ is
coupled to the mass and angular momentum of the rotating object. In other
words, in the presence of a positive $\Lambda$ the LT precession is corrected
according to Eq.(21), and no new effect arises. This is due to the fact that,
without the rotation of the central object, $\Lambda$ itself cannot cause any
precession at all.
In PPN it has been proposed that, based on the data of the BH’s shadow, one
can constrain the parameters of the parametrized post-Newtonian formalism.
Namely, considering the expansion up to the $r^{-3}$ term, one has
$g_{tt}=1-\frac{2GM}{c^{2}r}+\frac{2(\beta-\gamma)(GM)^{2}}{c^{4}r^{2}}-\frac{2\zeta(GM)^{3}}{c^{6}r^{3}}.$
(22)
Accordingly, since $\beta-\gamma\approx 0$, the $r_{ph}$ and $r_{sh}$ in units
with $M=c=G=1$ can be written as
$r_{ph}=3+\frac{5}{9}\hat{\zeta},\quad
r_{sh}=3\sqrt{3}\left(1+\frac{1}{9}\hat{\zeta}\right).$ (23)
By substituting the numerical values, we find
$\hat{\zeta}_{\Lambda}=4.77\times 10^{-26},$
as the order of the potential deviation from standard GR. The value of
$\hat{\zeta}$ is a measure of deviations from GR, and it is also a criterion
for constraining the free parameters of different modified theories of
gravity.
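As a rough consistency check (our estimate; the exact mapping between the
$\Lambda$-induced shadow shift and $\hat{\zeta}$ is not spelled out in the
text), the leading-order fractional shadow shift implied by Eq.(5),
$r_{sh,\Lambda}/r_{sh,sh}-1\approx\frac{9}{2}(GM/c^{2})^{2}\Lambda$, is of the
same order as the quoted $\hat{\zeta}_{\Lambda}$:

```python
G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30
m = G * 6.5e9 * M_sun / c**2   # GM/c^2 for M87* [m]
Lam = 1.11e-52                 # cosmological constant [m^-2]

# Leading-order expansion of Eq.(5): 1/sqrt(1 - 9 m^2 Lam) ~ 1 + (9/2) m^2 Lam
shift = 4.5 * m**2 * Lam
print(shift)                   # ~4.6e-26, same order as the quoted zeta_hat
```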
## 4 Conclusions
We studied the effect of the cosmological constant $\Lambda$ on the shadow of
a BH for the Schwarzschild-de Sitter and Kerr-de Sitter metrics. The recently
reported data on the M87* BH shadow were used to obtain constraints on the
numerical value of $\Lambda$. Namely, two upper limits have been obtained,
based on the error limits of the shadow and of the BH mass. Comparing both
limits with the current value of $\Lambda$, we have shown that there is no
inconsistency among them. It should be noted that the importance of this kind
of analysis is not to directly observe the effect of the modified term, but to
check the validity of the modified theory of gravity against the reported
data.
We then showed that the condition for having the extreme case of the BH shadow
is equivalent to the formation of an extreme SdS BH. Furthermore, we analyzed
the structure of the Kerr BH in the presence of non-zero $\Lambda$. It was
shown that while for spherically symmetric BHs the radius of the photon sphere
remains unchanged for both the Schwarzschild and SdS metrics, in the axially
symmetric case the radii change owing to the coupling of the spin parameter
$a$ with $\Lambda$. However, as for the Lense-Thirring precession, the nature
of the frame-dragging effect does not change in the presence of $\Lambda$.
Namely, the radii are modified and, as in the Kerr BH case, two photon orbits,
one prograde and one retrograde, are formed.
We also checked the potential numerical deviation from standard GR due to the
$\Lambda$-term within the PPN formalism. We obtained the corresponding
$\hat{\zeta}_{\Lambda}$ parameter which, although too small to be observed,
can be regarded as an indication of a deviation from GR.
## References
* (1) The Event Horizon Telescope Collaboration, ApJL, 875, L1 (2019)
* (2) The Event Horizon Telescope Collaboration, ApJL, 875, L3 (2019)
* (3) The Event Horizon Telescope Collaboration, ApJL, 875, L5 (2019)
* (4) The Event Horizon Telescope Collaboration, ApJL, 875, L6 (2019)
* (5) D. Garofalo, Annalen der Physik, 532, 1900480 (2020)
* (6) F.H. Vincent et al, A&A, in press, arXiv:2002.09226
* (7) V.I. Dokuchaev, N. O. Nazarova, arXiv:2010.01885
* (8) D. Psaltis et al. (EHT Collaboration), Phys. Rev. Lett. 125, 141104 (2020)
* (9) V.G. Gurzadyan, Eur. Phys. J. Plus, 134, 98 (2019)
* (10) V.G. Gurzadyan, A. Stepanian, Eur. Phys. J. C, 78, 632 (2018)
* (11) V.G. Gurzadyan, A. Stepanian, Eur. Phys. J. C, 79, 169 (2019)
* (12) V.G. Gurzadyan, A. Stepanian, Eur. Phys. J. Plus, 134, 98 (2019)
* (13) P.A.R. Ade et al, A&A, 594, A13 (2016)
* (14) V.G. Gurzadyan, A. Stepanian, Eur. Phys. J. C, 78, 869 (2018)
* (15) W. Rindler, Relativity: Special, General and Cosmological, Oxford University Press (2006)
* (16) J.N. Islam, Phys. Lett., A97, 239 (1983)
* (17) A.W. Kerr, J.C. Hauck, B. Mashhoon, Class. Quant. Grav., 20, 2727 (2003)
* (18) V. Kagramanova, J. Kunz, C. Lammerzahl, Phys. Lett., B, 634, 465 (2006)
* (19) J.M. Bardeen, W.H. Press, S.A. Teukolsky, ApJ., 178, 347 (1972)
* (20) C.R. Cramer, Gen. Rel. Grav., 29, 445 (1997)
* (21) E. Teo, Gen. Rel. Grav., 35, 1909 (2003)
* (22) S. Akcay, R. Matzner, Class. Quant. Grav., 28, 085012 (2011)
* (23) D. Charbulák, Z. Stuchlík, Eur. Phys. J. C, 77, 897 (2017)
* (24) J. Veselý, M. Žofka, Gen. Rel. Grav., 51, 156 (2019)
* (25) A. Stepanian, Sh. Khlghatyan, V.G. Gurzadyan, Eur. Phys. J. C, 80, 1011 (2020)
# THE ROLE OF STRONG GRAVITY AND THE NUCLEAR EQUATION OF STATE
ON NEUTRON-STAR COMMON-ENVELOPE ACCRETION
A. Miguel Holgado Department of Physics and McWilliams Center for Cosmology,
Carnegie Mellon University, Pittsburgh, PA, 15213, USA Department of
Astronomy and National Center for Supercomputing Applications, University of
Illinois at Urbana-Champaign, Urbana IL, 61801, USA Hector O. Silva Max-
Planck-Institut für Gravitationsphysik (Albert-Einstein-Institut), Am
Mühlenberg 1, D-14476 Potsdam, Germany Department of Physics and Illinois
Center for Advanced Studies of the Universe, University of Illinois at Urbana-
Champaign, Urbana IL, 61801, USA Paul M. Ricker Department of Astronomy and
National Center for Supercomputing Applications, University of Illinois at
Urbana-Champaign, Urbana IL, 61801, USA Department of Physics and Illinois
Center for Advanced Studies of the Universe, University of Illinois at Urbana-
Champaign, Urbana IL, 61801, USA Nicolás Yunes Department of Physics and
Illinois Center for Advanced Studies of the Universe, University of Illinois
at Urbana-Champaign, Urbana IL, 61801, USA
###### Abstract
Common-envelope evolution is important in the formation of neutron star
binaries within the isolated binary formation channel. As a neutron star
inspirals within the envelope of a primary massive star, it accretes and spins
up. Because neutron stars are in the strong-gravity regime, they have a
substantial relativistic mass deficit, i.e., their gravitational mass is less
than their baryonic mass. This effect causes some fraction of the accreted
baryonic mass to convert into neutron star binding energy. The relativistic
mass deficit also depends on the nuclear equation of state, since more compact
neutron stars will have larger binding energies. We model the mass growth and
spin-up of neutron stars inspiraling within common-envelope environments and
quantify how different initial binary conditions and hadronic equations of
state affect the post-common-envelope neutron star’s mass and spin. From these
models, we find that neutron star mass growth is suppressed by $\approx
15-30\%$. We also find that for a given amount of accreted baryonic mass, more
compact neutron stars will spin up faster while gaining less gravitational
mass, and vice versa. This work demonstrates that a neutron star’s strong
gravity and nuclear microphysics play a role in neutron-star common-envelope
evolution, in addition to the macroscopic astrophysics of the envelope. Strong
gravity and the nuclear equation of state may thus affect both the population
properties of neutron star binaries and the cosmic double neutron star merger
rate.
neutron stars — common envelope evolution — accretion — nuclear physics —
compact objects – compact binary stars – interacting binary stars
††software: matplotlib (Hunter, 2007), numpy (Walt et al., 2011), scipy
(Virtanen et al., 2020), MESA: v12778 (Paxton et al., 2010, 2013, 2015, 2018,
2019)
## 1 Introduction
Neutron stars (NSs) as well as double neutron star (DNS) systems are versatile
laboratories for multiple disciplines, including (but not limited to)
astrophysics, nuclear physics, and gravitational physics. Our knowledge of DNS
population properties as well as the nuclear equation of state (EoS) has
greatly improved as we are entering a data-rich era for NS observations. The
LIGO-Virgo Collaboration (LVC) has observed gravitational waves (GWs) from NS
mergers, providing constraints on the NS tidal deformability (LVC, 2017, 2019)
and new insights
on the DNS mass distribution (LVC, 2020). NICER X-ray timing observations of
pulsars have provided the first constraints on the NS compactness (Miller et
al., 2019b; Riley et al., 2019). Radio pulsar timing has revealed the most
massive NS to date from the Green Bank Telescope (Cromartie et al., 2020) and
has also revealed, from the Arecibo Observatory, the DNS with the most
asymmetric mass ratio observed to date, $0.78\pm 0.03$ (Ferdman et al., 2020).
A DNS that forms in isolation must survive two supernova explosions and one or
more common-envelope (CE) phases (e.g., Andrews et al., 2015; Tauris et al.,
2017; Andrews & Mandel, 2019). In the context of CE evolution, NSs have been
treated as point masses that accrete some fraction of their pre-CE mass,
similar to white dwarfs and black holes (e.g., Belczynski et al., 2002a, b;
Voss & Tauris, 2003; Dewi et al., 2006; Osłowski et al., 2011; Dominik et al.,
2012; Belczynski et al., 2018; Chruslinska et al., 2018; Giacobbo & Mapelli,
2018; Vigna-Gómez et al., 2020; Kruckow, 2020). The NS’s strong gravity and
nuclear EoS, however, result in a relativistic mass deficit, where the
gravitational mass is significantly less than the total baryonic mass. This
binding-energy effect has been previously studied in the context of NS
accretion in low-mass X-ray binaries (Alécian & Morsink, 2004; Lavagetto et
al., 2005; Bagchi, 2011).
Early theoretical studies of NS mass growth during CE evolution predicted that
accretion would be substantial enough to cause NSs to collapse into black
holes (e.g., Chevalier, 1993; Brown, 1995; Fryer et al., 1996; Armitage &
Livio, 2000; Brown et al., 2000). Global 3D hydrodynamic CE simulations,
however, have found typical accretion rates to be less than the Hoyle-
Lyttleton (HL) rate (e.g., Ricker & Taam, 2012). Moreover, MacLeod & Ramirez-
Ruiz (2015a) have found from local 3D wind-tunnel simulations that envelope
density gradients may substantially suppress the accretion rate. These
results imply that NSs are much more likely to
survive the CE phase (e.g., MacLeod & Ramirez-Ruiz, 2015b; Holgado et al.,
2018) instead of collapsing into black holes.
Further wind-tunnel studies have provided more insights into how the local
density gradient and flow properties are correlated, where such correlations
occur, and to what extent such correlations hold (MacLeod et al., 2017; De et
al., 2020; Everson et al., 2020). General-relativistic 2D wind-tunnel
simulations with a relativistic plasma have also been carried out to
characterize accretion and drag on compact-object scales (Cruz-Osorio &
Rezzolla, 2020). Building on these general-relativistic models towards 3D and
further capturing the plasma conditions relevant to massive-star interiors is
certainly well motivated. In addition to these studies of accretion and drag
local to the compact object, the global numerical modeling of NS-CE evolution
has been steadily progressing with 1D hydrodynamic (Fragos et al., 2019) and
3D hydrodynamic models (Law-Smith et al., 2020).
As such numerical models improve in complexity, it may soon be of interest to
consider how CE evolution may be sensitive to additional physics, which itself
is an open question. Given the current observational constraints on the
nuclear EoS, we here investigate how a NS’s macroscopic properties affect its
mass growth and spin-up during the CE inspiral, before the primary explodes
and forms another NS. In addition to focusing on the role of strong gravity
and the nuclear EoS, we approximate the pertinent aspects of the accretion and
local dynamical friction, which isolates our analysis from the full
complexities of the macroscopic CE physics.
## 2 Methods
We consider a primary massive star with mass $M_{\star}$ and radius
$R_{\star}$ orbiting a companion NS with initial mass $M_{\rm NS,0}$ that
rotates rigidly with an initial angular frequency $\Omega_{0}$. For the system
to be in the NS-CE phase, we also initialize the orbit at a separation $a_{0}$
that is equal to the radius of the primary massive star, $a_{0}=R_{\star}$.
The primary’s radius $R_{\star}$ will depend on its evolutionary stage, where
we consider here the base and tip of the red-giant branch (RGB).
During the CE phase, the inspiral is driven by local dynamical friction,
causing the NS to accrete matter and spin up. If enough energy is injected
into the CE, it will be ejected, leaving a less massive primary star and a
spun-up NS at a closer separation; the DNS would then form after the primary
goes supernova. A second CE, however, may occur before the primary helium star
forms the second NS (e.g., Dewi et al., 2002; Ivanova et al., 2013; Romero-
Shaw et al., 2020; Galaudage et al., 2021), though we leave such
considerations for future work.
### 2.1 Neutron Star Equation of State, Stellar Structure, Accretion, and
Spin-Up
Even the fastest-spinning pulsars observed to date can be considered slowly
rotating objects, meaning that rotation can be treated as a small perturbation
$\epsilon$ to the Tolman-Oppenheimer-Volkoff
(TOV) solution for non-rotating NSs (Tolman, 1939; Oppenheimer & Volkoff,
1939). Here, $\epsilon\equiv\Omega/\Omega_{\rm k}$ is a dimensionless spin
parameter, where $\Omega$ is the angular spin frequency of the star, and
$\Omega_{\rm k}$ is the Keplerian angular spin frequency $\Omega_{\rm
k}=\sqrt{GM_{\rm TOV}/R_{\rm TOV}^{3}}$, with $M_{\rm TOV}$ and $R_{\rm TOV}$
the mass and radius of our NS if it were not rotating. We solve for the
structure of slowly rotating NSs to second-order in $\epsilon\ll 1$ using the
Hartle-Thorne approximation (Hartle, 1967; Hartle & Thorne, 1968) with the
same set of 46 hadronic EoSs from Silva et al. (2020, Appendix A). This set of
EoSs is simultaneously consistent with the LIGO-Virgo observations of GW170817
(LVC, 2017) and the NICER observation of PSR J0030+0451 (Miller et al., 2019a;
Riley et al., 2019).
For a given EoS, and a chosen value of the central density and spin frequency
$\Omega$, the second-order in $\epsilon$ solution to the Einstein equations in
the Hartle-Thorne approximation allows us to calculate macroscopic properties
of the star (e.g., Berti et al., 2005). These properties include the spin-
corrected mass $M_{\rm NS}=M_{\rm TOV}+\epsilon^{2}\delta M$, the spin-
corrected equatorial radius $R_{\rm NS}=R_{\rm TOV}+\epsilon^{2}\delta R$, the
leading-order-in-spin moment of inertia $I_{\rm NS}$ and the spin-corrected
dimensionless gravitochemical potential $\Phi_{\rm NS}$ (Alécian & Morsink,
2004).
In the context of accreting NSs, the gravitochemical potential can be
interpreted as a susceptibility to changes in baryon number or the fraction of
baryon mass that gets converted into gravitational mass. In the non-rotating
limit, $\Phi_{\rm NS}$ simplifies to
$\lim_{\Omega\to 0}\Phi_{\rm NS}=\Phi_{\rm TOV}=\sqrt{1-2{\cal C}_{\rm
TOV}}=\sqrt{1-\frac{2GM_{\rm TOV}}{c^{2}R_{\rm TOV}}}\ ,$ (1)
where ${\cal C}_{\rm TOV}$ is the compactness of a given non-rotating NS. We
will use $\Phi_{\rm NS}$ for our calculations, where we elaborate in Appendix
B how this is calculated with our EoS catalog. Equation 1 provides a fast
approximation for population synthesis or as a sub-grid prescription for
global hydrodynamic simulations. We later assess in §3 how well this
approximation performs relative to using $\Phi_{\rm NS}$, with a more detailed
quantification shown in Appendix D.
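As a concrete illustration (with assumed canonical values, not taken from the
text), Equation 1 for $M_{\rm TOV}=1.4\,M_{\odot}$ and $R_{\rm TOV}=12$ km
gives $\Phi_{\rm TOV}\approx 0.81$, i.e., roughly 19% of the accreted baryonic
rest-mass energy would go into binding energy:

```python
import numpy as np

G, c, M_sun = 6.674e-11, 2.998e8, 1.989e30

def phi_tov(M, R):
    # Equation 1: Phi_TOV = sqrt(1 - 2 G M / (c^2 R))
    return np.sqrt(1.0 - 2.0 * G * M / (c**2 * R))

print(phi_tov(1.4 * M_sun, 12.0e3))  # ~0.81 for an assumed canonical NS
```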
We plot in Figure 1 the gravitochemical potential $\Phi_{\rm TOV}$ versus the
gravitational mass $M_{\rm TOV}$ for non-rotating NSs, as predicted from our
EoS catalog.
Figure 1: Gravitochemical potential vs. gravitational mass for non-rotating
NSs. Each curve corresponds to a different EoS in our catalog that is
consistent with both the latest LIGO-Virgo and NICER constraints and is able
to produce a NS with $M_{\rm max}/M_{\odot}\geq 1.96$. The color of each curve
corresponds to the nondimensional NS binding energy $|{\cal B}_{\rm
TOV}|/(M_{\rm TOV}c^{2})$. For $\Phi=1$, all of the accreted baryonic mass
contributes to the gravitational mass growth. NSs with larger gravitational
masses and with larger compactness will convert a larger fraction of the
accreted baryonic mass into binding energy instead of gravitational mass.
Each curve represents $\Phi_{\rm TOV}$ for a different EoS, with different
colors corresponding to different dimensionless NS binding energy $|{\cal
B}_{\rm TOV}|/(M_{\rm TOV}c^{2})$. Observe that the gravitochemical potential
decreases as the gravitational mass increases, and also it decreases as the NS
compactness increases. Since the gravitochemical potential is inversely
related to the binding energy, this figure tells us that binding energy
conversion is enhanced for higher mass or higher compactness NSs.
As the NS mass and spin increase during the CE inspiral, we are then able to
track the temporal evolution of all of the NS’s macroscopic quantities. For
example,
as the NS accretes, its gravitational mass responds to the baryon mass
accretion rate $\dot{M}_{\rm b}$ as well as the angular momentum that the
accreted mass carries. The resulting NS gravitational-mass accretion rate is
$\dot{M}_{\rm NS}=\frac{\partial M_{\rm NS}}{\partial M_{\rm b}}\dot{M}_{\rm
b}+\frac{\partial M_{\rm NS}}{\partial J_{\rm NS}}\dot{J}_{\rm NS}=\Phi_{\rm
NS}\dot{M}_{\rm b}+\frac{\Omega}{c^{2}}\dot{J}_{\rm NS}\ ,$ (2)
where $c$ is the speed of light, and $J_{\rm NS}=I_{\rm NS}\Omega$ is the NS
spin angular momentum. Similarly, as the NS accretes, its spin angular
momentum will also change, as given by
$\dot{J}_{\rm NS}=\dot{I}_{\rm NS}\Omega+I_{\rm NS}\dot{\Omega}=\frac{{\rm
d}I_{\rm NS}}{{\rm d}M_{\rm NS}}\dot{M}_{\rm NS}\Omega+I_{\rm NS}\dot{\Omega}\
.$ (3)
With this at hand, we can now solve for the temporal evolution of the angular
frequency and the gravitational mass. We assume the NS accretes from a
Keplerian accretion disk, where matter captured within the NS accretion radius
carries angular momentum and spirals several orders of magnitude down to the
scale of several NS radii. Approximating the total torque as the accretion
torque, $\dot{J}_{\rm NS}\approx\dot{M}_{\rm NS}\sqrt{GM_{\rm NS}R_{\rm NS}}$
(e.g., Brown et al., 2000), in Eqs. 2 and 3, and for now ignoring other
external torques on the NS, we then find
$\dot{M}_{\rm NS}=\frac{\Phi_{\rm NS}\dot{M}_{\rm b}}{1-\sqrt{GM_{\rm
NS}R_{\rm NS}}\,({\Omega}/{c^{2}})}\ ,$ (4)
and
$\dot{\Omega}=\frac{\Phi_{\rm NS}\dot{M}_{\rm b}}{I_{\rm
NS}}\frac{\sqrt{GM_{\rm NS}R_{\rm NS}}-\Omega\,({{\rm d}I_{\rm NS}}/{{\rm
d}M_{\rm NS}})}{1-\sqrt{GM_{\rm NS}R_{\rm NS}}\,({\Omega}/{c^{2}})}\ .$ (5)
The evolution equations (4) and (5) are generic for any slowly rotating NS
accreting from a Keplerian disk. In general, however, the accretion and NS’s
angular-momentum evolution may be more complex. Such complications can arise
if the NS’s magnetic field pressure is comparable to the pressure of the
radiation and accreting plasma, or if there is feedback from the accretion
itself (e.g., Soker et al., 2019; Grichener & Soker, 2019; López-Cámara et
al., 2020). For a given EoS, we can then find the right-hand sides of the
above equations as a function of $M_{\rm NS}$ and $\Omega$, which leads to a
closed system of ordinary differential equations, once $\dot{M}_{\rm b}$ is
prescribed. In the CE inspiral context, the baryon mass accretion rate depends
on the primary star’s envelope structure, which we discuss in the following
subsection.
### 2.2 Primary massive-star models, common envelope accretion, inspiral, and
ejection
We evolve single massive stars with the MESA (v12778) stellar-evolution code
(Paxton et al., 2010, 2013, 2015, 2018, 2019) to obtain their interior
structure. We consider a total of 6 primary red-giant stars with masses of
$M_{\star}/M_{\odot}=(12,12,16,16,20,20)$ with respective radii
$R_{\star}/R_{\odot}=(173,594,322,672,872,1247)$. Here, the smaller radius at
a given mass corresponds to the RGB base, while the larger radius corresponds
to the RGB tip. For our CE inspiral calculations, we take the
envelope structure to be constant in time.
As a NS inspirals in the CE, the envelope plasma supersonically flows past the
NS and may be captured within the NS’s accretion radius $R_{\rm a}=2GM_{\rm
NS}/v^{2}$, where $v$ is the upstream flow velocity, which, in the NS’s rest
frame is the orbital velocity. If the upstream flow is homogeneous, then from
Hoyle-Lyttleton (HL) theory (Hoyle & Lyttleton, 1939), the accretion rate and
local drag force obey
$\displaystyle\dot{M}_{\rm HL}$ $\displaystyle=\pi R_{\rm a}^{2}\rho
v=\frac{4\pi\rho G^{2}M_{\rm NS}^{2}}{v^{3}}\ ,$ (6a) $\displaystyle F_{\rm
d,HL}$ $\displaystyle=\dot{M}_{\rm HL}v=\pi R_{\rm a}^{2}\rho
v^{2}=\frac{4\pi\rho G^{2}M_{\rm NS}^{2}}{v^{2}}\ ,$ (6b)
where $\rho$ is the upstream mass density. For NS accretion in stellar-
envelope environments, the density and temperature may be high enough for
neutrino cooling, such that the accretion rate exceeds the Eddington limit
(Houck & Chevalier, 1991). The envelope’s local density scale height may be
comparable in size to the NS accretion radius, which breaks the symmetry that
HL theory assumes and thus requires a treatment of this effect.
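For orientation, evaluating Eq. (6a) with illustrative envelope values (our
assumptions, not taken from the stellar models) gives unsuppressed HL rates of
order a solar mass per year, which is why early studies predicted NS collapse:

```python
import numpy as np

G, M_sun, yr = 6.674e-11, 1.989e30, 3.156e7

def mdot_hl(rho, v, M_ns):
    # Eq. (6a): Hoyle-Lyttleton accretion rate, SI units
    return 4.0 * np.pi * rho * G**2 * M_ns**2 / v**3

# Assumed illustrative values: rho ~ 1e-3 kg/m^3, v ~ 100 km/s, M_NS = 1.4 M_sun
print(mdot_hl(1e-3, 1e5, 1.4 * M_sun) * yr / M_sun)  # ~ a few M_sun per year
```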
To model the accretion and drag, we use the fitting formulae from De et al.
(2020, see their Appendix A). The accretion and drag coefficients, $C_{\rm a}$
and $C_{\rm d}$, are defined such that the baryonic mass accretion rate and the
local drag force are
$\displaystyle\dot{M}_{\rm b}$ $\displaystyle=C_{\rm a}\dot{M}_{\rm
HL}\,,\quad C_{\rm a}=C_{\rm a}({\cal M},q,R_{\rm sink})\,,$ (7a)
$\displaystyle F_{\rm d}$ $\displaystyle=C_{\rm d}F_{\rm d,HL}\,,\quad C_{\rm
d}=C_{\rm d}({\cal M},q)\ .$ (7b)
These coefficients are both functions of the upstream Mach number ${\cal M}$
and the mass ratio $q$ between the compact object and the enclosed mass within
the orbit. The accretion coefficient also depends on the sink radius $R_{\rm
sink}$, given that the wind-tunnel simulations only resolve the accretion flow
up to a sphere with radius $0.05R_{\rm a}$ surrounding the point-mass
accretor. Thus, some fraction of matter that flows into the region within
$0.05R_{\rm a}$ ultimately ends up accreting onto the NS. For each EoS, we use
$R_{\rm NS}$ as the sink radius. In Appendix C, we describe in more detail how
we compute these accretion and drag coefficients.
We plot in Figure 2 the stellar profiles of the density, upstream Mach number,
polytropic exponent, and envelope binding energy (panels A, B, C, and D,
respectively) for the primary masses of $M_{\star}/M_{\odot}\in(12,16,20)$
that we consider here.
Figure 2: MESA stellar models. Panel A: density profiles for primary stellar
masses of $M_{\star}/M_{\odot}=(12,12,16,16,20,20)$ and respective radii of
$R_{\star}/R_{\odot}=(173,594,322,672,872,1247)$. Masses of
$M_{\star}/M_{\odot}=(12,16,20)$ correspond to blue, orange, and green colors,
respectively. Solid and dashed lines correspond to models at the base and tip
of the RGB, respectively. Panel B: the upstream Mach number ${\cal M}=v/c_{\rm
s}$ for each stellar model (formatted in the same manner) for a NS companion
with $M_{\rm NS}=1.4M_{\odot}$. Panel C: the polytropic exponent $\gamma$ for
each stellar model. Horizontal magenta lines are shown for $\gamma=4/3$ and
$\gamma=5/3$ (dot-dashed and dotted, respectively). Panel D: envelope binding
energy profiles (absolute values) for each stellar model.
For a given evolutionary stage and for most of the primary’s radii, the
density-gradient parameter $\delta_{\rho}$, which characterizes how steeply
the envelope density varies across the accretion radius, decreases as the
primary’s mass increases, such that the accretion rate will be greater and
result in a higher accreted mass. Primary
stars that are smaller in size will have higher envelope binding energy, which
thus requires more energy dissipation during the CE phase in order for
successful envelope ejection. In §3, we quantify how much more NSs accrete
when in envelopes with higher binding energies compared to less bound
envelopes.
Given the primary’s envelope structure, we can now model the CE inspiral as
follows. We approximate the orbital inspiral with Newtonian gravity (Blanchet,
2014), given that on the scales of CE evolution, gravity is weak and the
orbital velocities are non-relativistic, i.e., $v_{\rm orb}/c\ll 1$. With this
in mind, the orbital energy throughout the inspiral is $E=-GM_{\rm
NS}m_{\star}/(2a)$, where $m_{\star}=m_{\star}(a)=\int_{0}^{a}4\pi\rho
r^{2}\,{\rm d}r$ is the mass enclosed within the NS’s separation from the
primary’s center. The orbital velocity at any given time obeys
$v^{2}=G\left[M_{\rm NS}+m_{\star}(a)\right]/a$, since we consider the
inspiral to be quasi-circular. The change in the binary orbital energy as the
NS inspirals thus obeys
${\rm d}E=\frac{\partial E}{\partial m_{\star}}{\rm d}m_{\star}+\frac{\partial
E}{\partial M_{\rm NS}}{\rm d}M_{\rm NS}+\frac{\partial E}{\partial a}{\rm
d}a\ ,$ (8)
such that we can then solve for ${\rm d}a/{\rm d}t$
$\frac{{\rm d}a}{{\rm
d}t}=a\left(-\frac{\dot{E}}{E}+\frac{\dot{m}_{\star}}{m_{\star}}+\frac{\dot{M}_{\rm
NS}}{M_{\rm NS}}\right)=\frac{a}{1-4\pi\rho
a^{3}/m_{\star}}\left(-\frac{\dot{E}}{E}+\frac{\dot{M}_{\rm NS}}{M_{\rm
NS}}\right)\ ,$ (9)
where $\dot{m}_{\star}=4\pi\rho a^{2}\dot{a}$ since we assume a static
envelope. We take the energy decay rate to be the drag luminosity
$\dot{E}=-F_{\rm d}(\delta_{\rho})v$, which dominates over the gravitational-
wave luminosity from the orbital motion, and which can be obtained using both
Eqs. (6b) and (7b).
We summarize our integration procedure as follows. We first precompute the NS
properties shown in Eqs. (4) and (5) as well as Appendix B for each EoS in our
catalog, which are then stored as tables to interpolate from at each timestep
of an orbital integration. We then explicitly integrate Eqs. (4), (5), and (9)
to obtain $M_{\rm NS}$, $\Omega$, and $a$ throughout the CE inspiral. The NS
properties at each point in the NS’s evolution correspond to a Hartle-Thorne
NS with gravitational mass $M_{\rm NS}$ and spin $\Omega$ that we obtain from
our pre-computed tables. Our orbital integrations are carried out for each of
our 6 primary stellar models and for each EoS in our catalog, varying the
initial NS gravitational mass $M_{\rm NS,0}$ and initial NS spin $\Omega_{0}$.
We terminate these orbital integrations when the dissipated orbital energy
$\Delta E_{\rm orb}=E(a)-E(a_{0})$ is equal to the primary envelope’s binding
energy $E_{\rm env,bind}$ given by
$E_{\rm
env,bind}=\int_{m_{\star}(r)}^{m_{\star}(R_{\star})}\left(u-\frac{GM_{\star}(r)}{r}\right)\,{\rm
d}M\ ,$ (10)
where $u$ is the stellar fluid’s internal energy and where the integration
coordinate is the primary’s mass coordinate. This amounts to assuming a CE
efficiency parameter (e.g., Webbink, 1984) of $\alpha_{\rm CE}=1$.
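Schematically, this procedure could be written as below; the Euler step and
the helper objects (`envelope`, `eos_tables`) and functions
(`orbital_energy`, `accretion_rate`, `separation_rate`, and `ns_rhs` as
sketched in §2.1) are hypothetical placeholders for the quantities defined in
Eqs. (4)-(10), not the authors’ implementation:

```python
def integrate_inspiral(a0, M0, Omega0, envelope, eos_tables, dt):
    """Explicitly integrate Eqs. (4), (5) and (9) until the dissipated
    orbital energy matches the envelope binding energy of Eq. (10).
    `envelope` and `eos_tables` are hypothetical lookup objects; a
    simple Euler step stands in for the actual integrator."""
    a, M, Omega = a0, M0, Omega0
    E0 = orbital_energy(a0, M0, envelope)        # E = -G M m_star(a) / (2a)
    while abs(orbital_energy(a, M, envelope) - E0) < abs(envelope.E_bind(a0)):
        rho, mach, q = envelope.local(a, M)      # upstream conditions at separation a
        Mdot_b = accretion_rate(rho, mach, q, M) # Eqs. (6a) and (7a)
        Phi, I_NS, R_NS, dI_dM = eos_tables.lookup(M, Omega)
        Mdot, Omegadot = ns_rhs(M, Omega, Mdot_b, Phi, I_NS, R_NS, dI_dM)
        adot = separation_rate(a, M, Mdot, envelope)  # Eq. (9)
        M, Omega, a = M + Mdot * dt, Omega + Omegadot * dt, a + adot * dt
    return M, Omega, a
```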
## 3 Results
### 3.1 NS mass gain and spin-up
We plot the NS evolution for the fiducial case of a pre-CE NS mass of
$1.4M_{\odot}$ and a primary of $12M_{\odot}$ in the top panel of Figure 3.
For this case, the primary is taken to be at the RGB base such that
$a_{0}=173R_{\odot}$ and we take the initial NS spin to be
$\Omega_{0}/(2\pi)=50$ Hz.
Figure 3: NS mass gain and spin-up. The initial pre-CE system considered here
is a primary star at the base of the RGB with a mass $M_{\star}=12M_{\odot}$
with a companion NS that has an initial mass $M_{\rm NS,0}=1.4M_{\odot}$ and
initial spin $\Omega_{0}/(2\pi)=50$ Hz. Top panel: gain in gravitational mass
vs. orbital separation. The orange and black curves correspond to $\Phi<1$ and
$\Phi=1$ (with binding energy vs. without), respectively, where each curve
corresponds to a different EoS. Bottom panel: the final spin-up
$\Delta\Omega_{\rm f}/(2\pi)$ vs. the final gravitational mass gain $\Delta
M_{\rm NS,f}$ for each EoS and the same initial pre-CE parameters, except with
a varying initial NS spin. The circle and diamond points are for $\Phi=1$ and
$\Phi<1$, respectively. The color of each data point corresponds to a
different initial NS spin of $\Omega_{0}/(2\pi)=$ (10, 100, 200, 500) Hz with
blue, orange, green, and red, respectively.
The black curves correspond to $\Phi=1$, i.e., not accounting for the NS
binding energy. Each black and orange curve corresponds to a different EoS in
our catalog.
In all cases, the NS accretes no more than a few percent of its pre-CE mass,
due to the suppressed accretion rate from the envelope density gradient. The
gravitational-mass gain as well as the spin-up further decreases, since some
of the accreted baryon mass-energy is converted into binding energy. In the
bottom panel of Figure 3, we plot the final spin-up $\Delta\Omega_{\rm
f}/(2\pi)=(\Omega_{\rm f}-\Omega_{0})/(2\pi)$ and the final gravitational-mass
gain for each EoS model and for varying initial NS spins of
$\Omega_{0}/(2\pi)=(10,50,100,200,500)$ Hz as blue, orange, green, red, and
purple points, respectively. Higher initial NS spins increase the NS binding
energy, such that less gravitational mass is gained and the spin-up decreases.
With different EoSs, there is an anti-correlation between the mass gain and
spin-up, where an EoS that allows for higher gravitational-mass gain results
in a lower spin-up when starting with the same initial NS spin. A larger
increase in $\Delta M_{\rm NS}$ is a result of a larger $\Phi_{\rm NS}$, i.e.,
higher baryon mass converted to gravitational mass. The gravitochemical
potential $\Phi_{\rm NS}$ is proportional to the inertia $I_{\rm NS}$, such
that less compact NSs are harder to spin up because they have higher
$\Phi_{\rm NS}$ and higher $I_{\rm NS}$. Conversely, more compact NSs will
gain less gravitational mass and spin-up faster because they have lower
$\Phi_{\rm NS}$ and lower $I_{\rm NS}$.
### 3.2 Parameter survey
Given that the relativistic mass deficit is greater for more massive NSs (see
Figure 1), we then vary the pre-CE NS mass. We run inspirals for the following
set of pre-CE NS masses $M_{0}/M_{\odot}\in[1.2,1.8]$ with a step size of
$0.1M_{\odot}$. We plot in the top panels of Figure 4 the mean accreted masses
when varying the EoS as solid lines with the shaded region corresponding to
the $\pm 2\sigma$ deviation. The dashed lines correspond to $\Phi=1$, i.e.,
taking the accreted gravitational mass to be equivalent to the accreted
baryonic mass.
Figure 4: Varying initial NS masses, primary masses, and envelope structures.
Top row: the accreted gravitational mass at the end of our orbital
integrations for the 6 primary stellar models, initial NS gravitational masses
ranging in $M_{0}/M_{\odot}\in[1.2,1.8]$ with spacings of $\delta
M=0.1M_{\odot}$, and an initial NS spin of 50 Hz. The left, middle, and right
columns correspond to primary stellar masses of
$M_{\star}/M_{\odot}=(12,16,20)$, respectively. The top and bottom rows
correspond to the accreted mass and the spin-up, respectively. The blue and
orange curves correspond to primary stellar models at the base and tip of the
RGB, respectively. The dashed lines correspond to $\Phi=1$, i.e., taking the
accreted gravitational mass to be equivalent to the accreted baryonic mass.
The width of the dashed line encompasses the $\pm 2\sigma$ region. The solid
lines with shaded bands correspond to the mean and the $\pm 2\sigma$
deviation, respectively, of our predicted accreted NS masses including binding
energy from our catalog of 46 EoSs.
The width of the dashed line encompasses the $\pm 2\sigma$ region. In the
bottom panels of Figure 4, we plot the corresponding spin-up
$\Delta\Omega_{\rm f}/(2\pi)=(\Omega_{\rm f}-\Omega_{0})/(2\pi)$.
An increasing pre-CE NS mass results in a systematically decreasing accreted
NS mass across all of our models. This is because at constant $\alpha_{\rm
CE}$, having a more massive NS results in a larger dissipated orbital energy,
such that envelope ejection is achieved at wider separations and such that the
accreted baryonic mass is reduced compared to lower-mass NSs. It remains to be
seen whether or not this trend will hold in global 3D hydrodynamic CE
simulations when the initial NS mass is varied. Models at the RGB base result
in higher accreted mass and spin-up, which is due to the larger envelope
binding energy from their smaller sizes as compared to the RGB tip (bottom
panel of Figure 2).
As previously shown in Figure 1, NSs with higher gravitational mass will
convert a larger fraction of the accreted mass into binding energy. We plot in
Figure 5 the distributions of the ratio of the accreted gravitational mass to
the accreted baryonic mass from our RGB base models.
Figure 5: Ratio distributions. Distributions of the ratio of the accreted
gravitational mass to the accreted baryonic mass $\Delta M_{\rm NS}/\Delta
M_{\rm b}$. These distributions are represented with a kernel density
estimator. Without accounting for NS binding energy, $\Delta M_{\rm NS}/\Delta
M_{\rm b}=1$. The left, middle, and right panels correspond to primary stellar
masses of $M_{\star}/M_{\odot}=(12,16,20)$ at the RGB base, respectively. In
each panel, each distribution from bottom ascending to top is for initial NS
gravitational masses of $M_{\rm NS,0}/M_{\odot}=(1.2,1.4,1.6,1.8)$,
respectively. The green and magenta curves correspond to initial NS spins of
50 Hz and 200 Hz, respectively. The black dashed curve corresponds to the
$\Phi_{\rm TOV,0}$ distribution from our EoS catalog, i.e., evaluating
Equation 1 with the initial TOV mass and radius. Distributions for the RGB tip
case will be similar, though the separation between distributions at the same
initial NS mass will be smaller since the accreted baryonic mass for the RGB
tip cases is smaller than the RGB base cases (see Figure 4). This is
quantified in Appendix D and Figure 6.
In each panel, each distribution from bottom ascending to top is for initial
NS gravitational masses of $M_{\rm NS,0}/M_{\odot}=(1.2,1.4,1.6,1.8)$,
respectively. The green and magenta curves correspond to initial NS spins of
50 Hz and 200 Hz, respectively. We also plot a black dashed curve that
corresponds to using $\Phi_{\rm TOV,0}$ as a fast approximation, i.e., the
gravitochemical potential of the non-rotating NS at its initial properties.
Higher initial spins tend to decrease $\Delta M_{\rm NS}/\Delta M_{\rm b}$,
though the model with $M_{\rm NS,0}=1.8M_{\odot}$ and $M_{\star}=20M_{\odot}$
at the RGB base exhibits slightly opposite behavior. Since lower-mass NSs
accrete more gravitational mass than the higher-mass NSs in our models, the
differences between the ratio distributions at various spins are also larger.
Distributions for the RGB tip case are similar, though the separation between
distributions at the same initial NS mass is smaller, since the RGB tip cases
resulted in less mass gain (Figure 4).
## 4 Discussion and Conclusions
We have investigated here how NS binding energy affects NS-CE accretion, which
plays a role in forming DNSs that merge within a Hubble time. We find that the
gravitational-mass gain and spin-up are systematically reduced and that this
effect is enhanced for higher-mass NSs. We also find that more compact NSs
will gain less gravitational mass and spin up faster due to having a lower
$\Phi_{\rm NS}$ and a lower $I_{\rm NS}$ compared to less compact NSs. The
strongest assumption from our model is that the envelope remains static
throughout the inspiral. Realistically, the envelope is expected to respond
and readjust in structure as the NS inspirals deeper toward the primary’s
core. The accretion, which we have focused on in this work, is still expected
to be some small fraction of the pre-CE NS mass. There will still be density
gradients within the envelope that break HL symmetry, and the accreting
material still needs to overcome the angular momentum barrier over multiple
length scales.
The amount of NS mass gain and spin-up we obtain with this modeling approach
may be testable with Galactic DNS observations (e.g., Osłowski et al., 2011).
For millisecond pulsars, spin-period derivatives corresponding to a spindown
timescale of order a Hubble time would be ideal. If a phase-transition to
quark matter happens in NS interiors, a new branch of stable stars with the
same masses, but smaller radii relative to their hadronic counterparts can
appear (e.g., Gerlach, 1968; Kampfer, 1981; Glendenning & Kettner, 2000;
Montana et al., 2019). These have been called “twin-stars” and due to their
larger compactness, the effects we present here would be further enhanced in
comparison to the purely hadronic NSs we studied. We leave a more systematic
investigation of these aspects for future work.
This work demonstrates that a NS’s strong gravity and nuclear microphysics
play a role in NS-CE evolution in addition to the macroscopic astrophysics of
the envelope. Strong gravity and the nuclear EoS thus may affect the
population properties of NS binaries and the cosmic double NS merger rate. Our
results may further inform binary population synthesis models, 1D hydrodynamic
CE inspiral coupled to stellar evolution, and global 3D hydrodynamic CE
simulations.
We thank the anonymous referee for comments and suggestions that led to
improvements of this work. We thank Cole Miller for insightful discussions and
for detailed feedback on our manuscript. A.M.H. is supported by the McWilliams
Postdoctoral Fellowship. This work was also partially supported by the DOE
NNSA Stewardship Science Graduate Fellowship under grant No. DE-NA0003864.
H.O.S and N.Y. are supported by NASA grants No. NNX16AB98G, No. 80NSSC17M0041,
and No. 80NSSC18K1352 and NSF Award No. 1759615. P.M.R. acknowledges support
by AST 14-13367.
## References
* Alécian & Morsink (2004) Alécian, E., & Morsink, S. M. 2004, ApJ, 614, 914
* Andrews et al. (2015) Andrews, J. J., Farr, W. M., Kalogera, V., & Willems, B. 2015, ApJ, 801, 32
* Andrews & Mandel (2019) Andrews, J. J., & Mandel, I. 2019, ApJL, 880, L8
* Armitage & Livio (2000) Armitage, P. J., & Livio, M. 2000, ApJ, 532, 540
* Bagchi (2011) Bagchi, M. 2011, MNRAS Letters, 413, L47
* Belczynski et al. (2002a) Belczynski, K., Bulik, T., & Kluźniak, W. 2002a, ApJL, 567, L63
* Belczynski et al. (2002b) Belczynski, K., Kalogera, V., & Bulik, T. 2002b, ApJ, 572, 407
* Belczynski et al. (2018) Belczynski, K., Askar, A., Arca-Sedda, M., et al. 2018, A&A, 615, A91
* Berti et al. (2005) Berti, E., White, F., Maniopoulou, A., & Bruni, M. 2005, MNRAS, 358, 923
* Blanchet (2014) Blanchet, L. 2014, Living Rev. Relativ., 17, 2
* Brown (1995) Brown, G. E. 1995, ApJ, 440, 270
* Brown et al. (2000) Brown, G. E., Lee, C.-H., & Bethe, H. A. 2000, ApJ, 541, 918
* Chevalier (1993) Chevalier, R. A. 1993, ApJL, 411, L33
* Chruslinska et al. (2018) Chruslinska, M., Belczynski, K., Klencki, J., & Benacquista, M. 2018, MNRAS, 474, 2937
* Cromartie et al. (2020) Cromartie, H. T., Fonseca, E., Ransom, S. M., et al. 2020, Nature Astronomy, 4, 72
* Cruz-Osorio & Rezzolla (2020) Cruz-Osorio, A., & Rezzolla, L. 2020, ApJ, 894, 147
* De et al. (2020) De, S., MacLeod, M., Everson, R. W., et al. 2020, ApJ, 897, 130
* Dewi et al. (2006) Dewi, J. D. M., Podsiadlowski, P., & Sena, A. 2006, MNRAS, 368, 1742
* Dewi et al. (2002) Dewi, J. D. M., Pols, O. R., Savonije, G. J., & van den Heuvel, E. P. J. 2002, MNRAS, 331, 1027
* Dominik et al. (2012) Dominik, M., Belczynski, K., Fryer, C., et al. 2012, ApJ, 759, 52
* Everson et al. (2020) Everson, R. W., MacLeod, M., De, S., Macias, P., & Ramirez-Ruiz, E. 2020, ApJ, 899, 77
* Ferdman et al. (2020) Ferdman, R. D., Freire, P. C. C., Perera, B. B. P., et al. 2020, Nature, 583, 211
* Fragos et al. (2019) Fragos, T., Andrews, J. J., Ramirez-Ruiz, E., et al. 2019, ApJL, 883, L45
* Fryer et al. (1996) Fryer, C. L., Benz, W., & Herant, M. 1996, ApJ, 460, 801
* Galaudage et al. (2021) Galaudage, S., Adamcewicz, C., Zhu, X.-J., Stevenson, S., & Thrane, E. 2021, ApJL, 909, L19
* Gerlach (1968) Gerlach, U. H. 1968, Phys. Rev., 172, 1325
* Giacobbo & Mapelli (2018) Giacobbo, N., & Mapelli, M. 2018, MNRAS, 480, 2011
* Glendenning & Kettner (2000) Glendenning, N. K., & Kettner, C. 2000, Astron. Astrophys., 353, L9
* Grichener & Soker (2019) Grichener, A., & Soker, N. 2019, ApJ, 878, 24
* Hartle (1967) Hartle, J. B. 1967, ApJ, 150, 1005
* Hartle & Thorne (1968) Hartle, J. B., & Thorne, K. S. 1968, ApJ, 153, 807
* Holgado et al. (2018) Holgado, A. M., Ricker, P. M., & Huerta, E. A. 2018, ApJ, 857, 38
* Houck & Chevalier (1991) Houck, J. C., & Chevalier, R. A. 1991, ApJ, 376, 234
* Hoyle & Lyttleton (1939) Hoyle, F., & Lyttleton, R. A. 1939, PCPS, 35, 405
* Hunter (2007) Hunter, J. D. 2007, CSE, 9, 90
* Ivanova et al. (2013) Ivanova, N., Justham, S., Chen, X., et al. 2013, Astron. Astrophys. Rev., 21, 59
* Kampfer (1981) Kampfer, B. 1981, J. Phys. A, 14, L471
* Kruckow (2020) Kruckow, M. U. 2020, A&A, 639, A123
* Kullback & Leibler (1951) Kullback, S., & Leibler, R. A. 1951, Ann. Math. Statist., 22, 79
* Kumar & Landry (2019) Kumar, B., & Landry, P. 2019, Phys. Rev. D, 99, 123026
* Lavagetto et al. (2005) Lavagetto, G., Burderi, L., D’Antona, F., et al. 2005, MNRAS, 359, 734
* Law-Smith et al. (2020) Law-Smith, J. A. P., Everson, R. W., Ramirez-Ruiz, E., et al. 2020, arXiv:2011.06630 [astro-ph]
* López-Cámara et al. (2020) López-Cámara, D., Moreno Méndez, E., & De Colle, F. 2020, MNRAS, 497, 2057
* LVC (2017) LVC. 2017, PRL, 119, 161101
* LVC (2019) —. 2019, Phys. Rev. X, 9, 011001
* LVC (2020) —. 2020, ApJL, 892, L3
* MacLeod et al. (2017) MacLeod, M., Antoni, A., Murguia-Berthier, A., Macias, P., & Ramirez-Ruiz, E. 2017, ApJ, 838, 56
* MacLeod & Ramirez-Ruiz (2015a) MacLeod, M., & Ramirez-Ruiz, E. 2015a, ApJ, 803, 41
* MacLeod & Ramirez-Ruiz (2015b) —. 2015b, ApJL, 798, L19
* Miller et al. (2019a) Miller, M. C., Chirenti, C., & Lamb, F. K. 2019a, ApJ, 888, 12
* Miller et al. (2019b) Miller, M. C., Lamb, F. K., Dittmann, A. J., et al. 2019b, ApJL, 887, L24
* Montana et al. (2019) Montana, G., Tolos, L., Hanauske, M., & Rezzolla, L. 2019, Phys. Rev. D, 99, 103009
* Oppenheimer & Volkoff (1939) Oppenheimer, J. R., & Volkoff, G. M. 1939, Phys. Rev., 55, 374
* Osłowski et al. (2011) Osłowski, S., Bulik, T., Gondek-Rosińska, D., & Belczyński, K. 2011, MNRAS, 413, 461
* Paxton et al. (2010) Paxton, B., Bildsten, L., Dotter, A., et al. 2010, ApJS, 192, 3
* Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208, 4
* Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15
* Paxton et al. (2018) Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34
* Paxton et al. (2019) Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10
* Read et al. (2009) Read, J. S., Lackey, B. D., Owen, B. J., & Friedman, J. L. 2009, Phys. Rev. D, 79, 124032
* Ricker & Taam (2012) Ricker, P. M., & Taam, R. E. 2012, ApJ, 746, 74
* Riley et al. (2019) Riley, T. E., Watts, A. L., Bogdanov, S., et al. 2019, ApJL, 887, L21
* Romero-Shaw et al. (2020) Romero-Shaw, I. M., Farrow, N., Stevenson, S., Thrane, E., & Zhu, X.-J. 2020, MNRAS Letters, 496, L64
* Silva et al. (2020) Silva, H. O., Holgado, A. M., Cárdenas-Avendaño, A., & Yunes, N. 2020
* Soker et al. (2019) Soker, N., Grichener, A., & Gilkis, A. 2019, MNRAS, 484, 4972
* Tauris et al. (2017) Tauris, T. M., Kramer, M., Freire, P. C. C., et al. 2017, ApJ, 846, 170
* Tolman (1939) Tolman, R. C. 1939, Phys. Rev., 55, 364
* Vigna-Gómez et al. (2020) Vigna-Gómez, A., MacLeod, M., Neijssel, C. J., et al. 2020, PASA, 37, E038
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261
* Voss & Tauris (2003) Voss, R., & Tauris, T. M. 2003, MNRAS, 342, 1169
* Walt et al. (2011) Walt, S. v. d., Colbert, S. C., & Varoquaux, G. 2011, CSE, 13, 22
* Webbink (1984) Webbink, R. F. 1984, ApJ, 277, 355
## Appendix A The Neutron Star Catalog
We use the same set of 46 EoSs from Silva et al. (2020) for purely hadronic
NSs, including ALF2, APR3, APR4, BCPM, BSP, BSR2, BSR2Y, BSk20, BSk21, BSk22,
BSk23, BSk24, BSk25, BSk26, DD2, DD2Y, DDHd, DDME2, DDME2Y, ENG, FSUGarnet,
G3, GNH3, IOPB, K255, KDE0v1, MPA1, Model1, Rs, SINPA, SK272, SKOp, SKa, SKb,
SLY2, SLY230a, SLY4, SLY9, SLy, SkI2, SkI3, SkI4, SkI6, SkMP, WFF1, and WFF2
(Read et al., 2009; Kumar & Landry, 2019).
## Appendix B The Neutron Star Gravitochemical Potential
For a NS with spin parameter $\epsilon=\Omega/\Omega_{\rm k}$, the
gravitochemical potential is defined as (Alécian & Morsink, 2004)
$\Phi_{\rm NS}=\sqrt{e^{\nu}(1+2h)}\ ,$ (B1)
where $\nu$ and $h$ are both metric functions related to the metric tensor via
$g_{tt}=-e^{\nu}(1+2h)$ in the Hartle-Thorne approximation. The metric
function $\nu$ is a quantity of ${\cal{O}}(\epsilon^{0})$, and thus, it is
obtained by solving the TOV equations. The metric correction $h$ is a quantity
of ${\cal{O}}(\epsilon^{2})$, so we then write it as $h=\epsilon^{2}\delta h$,
such that the non-rotating limit is recovered as $\epsilon\to 0$ and $\delta
h$ remains finite.
In order to evaluate the gravitochemical potential $\Phi_{\rm NS}$, we need to
solve for the function $h$, which therefore requires that we solve the
Einstein equations at second-order in the small rotation expansion (Hartle,
1967). Performing a Legendre decomposition, we can write
$\delta h(r)=\delta h_{0}(r)+\delta h_{2}(r)P_{2}(\cos\theta)\ ,$ (B2)
where $\theta$ is the polar angle from the equator, and $P_{2}$ is the second-
order Legendre polynomial. Matching the interior and the exterior solutions at
the NS surface allows us to find an exact solution for $\delta h_{0}(r)$ at
the NS surface, namely
$\delta h_{0}(R_{\rm TOV})=-\frac{\delta M}{R_{\rm TOV}-2M_{\rm
TOV}}+\frac{\delta J^{2}}{R_{\rm TOV}^{3}(R_{\rm TOV}-2M_{\rm TOV})}\ .$ (B3)
Here, $\delta J$ is the NS angular momentum at the Keplerian angular spin
frequency. The function $\delta h_{2}(r)$ generally obeys $\delta
h_{2}\ll\delta h_{0}$, such that when this function is scaled by
$\epsilon^{2}$, which, for this work obeys $\epsilon^{2}\ll 1$, the
contribution from the $\epsilon^{2}\delta h_{2}$ component to Equation B1 is
effectively negligible. We thus take $\delta h(R_{\rm NS})\approx\delta
h_{0}(R_{\rm NS})$, such that
$h(R_{\rm NS})\approx\epsilon^{2}\delta h_{0}(R_{\rm
TOV})=-\frac{\epsilon^{2}\delta M}{R_{\rm TOV}-2M_{\rm
TOV}}+\frac{\epsilon^{2}\delta J^{2}}{R_{\rm TOV}^{3}(R_{\rm TOV}-2M_{\rm
TOV})}\ .$ (B4)
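Putting Eqs. (B1) and (B4) together, the surface value of the gravitochemical
potential can be assembled as in the sketch below (geometric units $G=c=1$;
the exterior Schwarzschild value $e^{\nu(R)}=1-2M/R$ is used at the surface,
and all inputs are assumed to come from a Hartle-Thorne solve):

```python
import numpy as np

def phi_ns_surface(M_tov, R_tov, dM, dJ, eps):
    """Eq. (B1) evaluated at the surface with h from Eq. (B4),
    in geometric units G = c = 1."""
    dh0 = (-dM / (R_tov - 2.0 * M_tov)
           + dJ**2 / (R_tov**3 * (R_tov - 2.0 * M_tov)))  # Eq. (B3)
    h = eps**2 * dh0                                      # Eq. (B4)
    e_nu = 1.0 - 2.0 * M_tov / R_tov                      # exterior TOV value at r = R
    return np.sqrt(e_nu * (1.0 + 2.0 * h))                # Eq. (B1)
```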
## Appendix C Accretion and Drag Coefficients
De et al. (2020) present fitting formulae for the accretion rate and drag
within a non-relativistic background plasma. They consider two polytropic
exponents of $\gamma=4/3$ and $\gamma=5/3$, where the coefficients for each
fitting formula are given in their Tables A1 and A2. Given that our stellar
models for the massive primaries have polytropic exponents that predominantly
obey $4/3\leqslant\gamma\leqslant 5/3$ (see Figure 2), we compute the
accretion and drag coefficients by weighting both the $C_{\rm ad,4/3}$ and
$C_{\rm ad,5/3}$ formulae as
$\displaystyle C_{\rm a}$ $\displaystyle=\xi\left(w_{4/3}C_{\rm
a,4/3}+w_{5/3}C_{\rm a,5/3}\right)\ ,$ (C1a) $\displaystyle C_{\rm d}$
$\displaystyle=w_{4/3}C_{\rm d,4/3}+w_{5/3}C_{\rm d,5/3}\ ,$ (C1b)
where
$\displaystyle w_{4/3}=1-3(\gamma-4/3)\ ,$ (C2a) $\displaystyle
w_{5/3}=1-3(5/3-\gamma)\ ,$ (C2b)
and where $\xi$ is defined as
$\xi\equiv(R_{\rm NS}/0.05R_{\rm a})^{0.33}\ .$ (C3)
For $\gamma<4/3$, we use $C_{\rm ad,4/3}$. The factor $\xi$ approximates the
fraction of matter flowing into the sink radius that ultimately accretes onto
the NS. Given that these wind-tunnel models do not resolve the flow past a
sink radius $R_{\rm sink}=0.05R_{\rm a}$, the matter falling into this sink
volume is likely to be an upper estimate of the NS’s accreted baryons. De et
al. (2020) estimate how the accretion rate depends on the sink radius and fit
a power-law dependence $\dot{M}\propto\left(R_{\rm sink}/R_{\rm
a}\right)^{\alpha_{\dot{M}}}$, where $\alpha_{\dot{M}}\approx 0.33$ with a
scatter of order tens of percent.
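The weighting of Eqs. (C1)-(C3) can be written compactly as below; `Ca43`,
`Ca53`, `Cd43`, and `Cd53` stand for the De et al. (2020) fitting formulae
evaluated at $({\cal M},q)$, which are not reproduced here:

```python
def accretion_drag_coeffs(gamma, R_ns, R_a, Ca43, Ca53, Cd43, Cd53):
    """Eqs. (C1)-(C3): blend the gamma = 4/3 and 5/3 fits of De et al. (2020)
    and rescale the accretion coefficient to a sink radius of R_NS."""
    xi = (R_ns / (0.05 * R_a))**0.33         # Eq. (C3), sink-radius rescaling
    if gamma < 4.0 / 3.0:                    # below the fitted range, use the 4/3 fit
        return xi * Ca43, Cd43
    w43 = 1.0 - 3.0 * (gamma - 4.0 / 3.0)    # Eq. (C2a)
    w53 = 1.0 - 3.0 * (5.0 / 3.0 - gamma)    # Eq. (C2b)
    C_a = xi * (w43 * Ca43 + w53 * Ca53)     # Eq. (C1a)
    C_d = w43 * Cd43 + w53 * Cd53            # Eq. (C1b)
    return C_a, C_d
```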
## Appendix D Kullback-Leibler Divergence
For a given NS-CE system evolution with an initial primary star with mass
$M_{\star}$ and radius $R_{\star}$ and an initial NS with mass $M_{\rm NS,0}$
and spin $\Omega_{0}$, we define $p$ as the distribution of $\Delta M_{\rm
NS}/\Delta M_{\rm b}$ predicted from our EoS catalog and semi-analytic models.
We also define $q$ as the distribution of $\Phi_{\rm TOV,0}$ from our EoS
catalog, i.e., evaluating Equation 1 at the initial NS parameters. Given these
two distributions, we can compute the Kullback-Leibler divergence, given by
${\cal D}(p||q)=\int p(x)\ln\left(\frac{p(x)}{q(x)}\right)\ {\rm d}x\ ,$ (D1)
where $x\equiv\Delta M_{\rm NS}/\Delta M_{\rm b}$. The distributions $p$ and
$q$ are approximated as kernel-density estimates of the samples for each
model. One can interpret the KL divergence between $p$ and $q$ as the
information loss when using $q$ to approximate $p$. Conversely, it can be
interpreted as the information gained by using $p$ in place of $q$.
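A minimal Python sketch of Equation D1 as we apply it, with $p$ and $q$ approximated by Gaussian kernel-density estimates of the two sample sets and the integral evaluated on a common grid; the function name and grid size are illustrative, not from our actual code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_divergence(samples_p, samples_q, n_grid=2048):
    """D(p||q) of Equation D1: p and q are Gaussian KDEs of the two
    sample sets, and the integrand is evaluated on a shared grid."""
    samples_p, samples_q = np.asarray(samples_p), np.asarray(samples_q)
    p_kde, q_kde = gaussian_kde(samples_p), gaussian_kde(samples_q)
    lo = min(samples_p.min(), samples_q.min())
    hi = max(samples_p.max(), samples_q.max())
    x = np.linspace(lo, hi, n_grid)
    p, q = p_kde(x), q_kde(x)
    good = (p > 0) & (q > 0)       # guard against log(0) in sparse tails
    return np.trapz(p[good] * np.log(p[good] / q[good]), x[good])
```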
Directly using $\Phi_{\rm TOV,0}$ as a fast approximation in other models such
as population synthesis or as a subgrid prescription for global 3D
hydrodynamic simulations may be acceptable as long as $\Delta M_{\rm
NS}/M_{\rm NS,0}\lesssim 1\%$ and the initial NS spin is low enough. To
quantify the information loss from this approximation, we compute the KL
divergences (Kullback & Leibler, 1951, Appendix D) between our semi-analytic
inspiral models and using Equation 1 at the initial NS properties over a range
of initial NS spins: $\Omega_{0}/(2\pi)=(10,20,50,80,100,150,200,300,500)$ Hz.
We plot the KL divergences for each of our models in Figure 6.
Figure 6: KL divergence vs. initial NS spin. The KL divergence between
$\Delta M_{\rm NS}/\Delta M_{\rm b}$ and $\Phi_{\rm TOV,0}$ evaluated at
varying initial NS spins of
$\Omega_{0}/(2\pi)=(10,20,50,80,100,150,200,300,500)$ Hz. The top and bottom
rows are for stellar models at the RGB base and tip, respectively, with each
column for primary stellar masses of $M_{\star}/M_{\odot}=(12,16,18)$,
respectively. The blue, orange, green, and red curves correspond to initial NS
gravitational masses of $M_{\rm NS,0}/M_{\odot}=(1.2,1.4,1.6,1.8)$. For KL
divergences $\lesssim 0.1$, the information loss is considered to be small, while KL divergences $\gtrsim 1$ correspond to a large information loss.
For initial NS spins of $\lesssim 200$ Hz, the KL divergence is $\lesssim
0.1$, corresponding to a small information loss and thus $\Phi_{\rm TOV,0}$
being a reasonable approximation if used in other models.
1. European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago, Chile (email: <EMAIL_ADDRESS>)
2. Université Grenoble Alpes, CNRS, IPAG, F-38000 Grenoble, France
3. Núcleo de Astronomía, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejercito 441, Santiago, Chile
4. Escuela de Ingeniería Industrial, Facultad de Ingeniería y Ciencias, Universidad Diego Portales, Av. Ejercito 441, Santiago, Chile
5. Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France
6. Max Planck Institute for Astronomy, Königstuhl 17, 69117 Heidelberg, Germany
7. Space Telescope Science Institute, 3700 San Martin Dr., Baltimore, MD 21218, USA
8. Department of Physics and Astronomy, Bucknell University, Lewisburg, PA 17837, USA
# A search for a 5th planet around HR 8799 using the star-hopping RDI
technique at VLT/SPHERE
Z. Wahhaj$^{1,5}$, J. Milli$^{1,2}$, C. Romero$^{1,2}$, L. Cieza$^{3,4}$, A. Zurlo$^{3,4,5}$, A. Vigan$^{5}$, E. Peña$^{1}$, G. Valdes$^{1}$, F. Cantalloube$^{6}$, J. Girard$^{7}$, B. Pantoja$^{8}$
(Received June 29, 2020; accepted December 20, 2020)
###### Abstract
Context. The direct imaging of extrasolar giant planets demands the highest
possible contrasts ($\Delta$H $\gtrsim$10 magnitudes) at the smallest angular
separations ($\sim 0.1^{\prime\prime}$) from the star. We present an adaptive
optics observing method, called star-hopping, recently offered as standard
queue observing (service mode) for the SPHERE instrument at the VLT. The
method uses reference difference imaging (RDI) but, unlike earlier works, obtains images of a reference star for PSF subtraction within minutes of observing the target star.
Aims. We aim to gain significantly in contrast over the conventional angular difference imaging (ADI) method, to search for a fifth planet at separations
less than 10 au, interior to the four giant planets of the HR 8799 system. The
most likely semi-major axes allowed for this hypothetical planet, estimated by
dynamical simulations in earlier work, were 7.5 and 9.7 au within a mass range
of 1–8 $M_{Jup}$.
Methods. We obtained 4.5 hours of simultaneous low resolution integral field
spectroscopy (R$\sim$30, Y–H band with IFS) and dual-band imaging (K1 and
K2-band with IRDIS) of the HR 8799 system, interspersed with observations of a
reference star. The reference star was observed for about one-third of the
total time, and generally needs to be of similar brightness
($\Delta$R$\lesssim$1 magnitude) and separated on sky by $\lesssim$1–2$^{\circ}$. The
hops between stars were made every 6–10 minutes, with only 1 minute gaps in
on-sky integration per hop.
Results. We did not detect the hypothetical fifth planet at the most plausible
separations, 7.5 and 9.7 au, down to mass limits of 3.6 and 2.8 $M_{Jup}$
respectively, but attained an unprecedented contrast limit of 11.2 magnitudes
at 0.1′′. We detected all four planets with high signal-to-noise ratios. The
YJH spectra for planets $c$, $d$ were detected with redder H-band spectral
slopes than found in earlier studies. As noted in previous works, the planet
spectra are matched very closely by some red field dwarfs. Finally, comparing
the current locations of the planets to orbital solutions, we found that
planets $e$ and $c$ are most consistent with coplanar and resonant orbits. We
also demonstrated that with star-hopping RDI, the contrast improvement at
0.1′′ separation can be up to 2 magnitudes.
Conclusions. Since ADI, meridian transit, and the concomitant sky rotation are not needed, the time of observation can be chosen from within a 2–3 times
larger window. In general, star-hopping can be used for stars fainter than R=4
magnitudes, since for these a reference star of suitable brightness and
separation is usually available. The reduction software used in this paper has
been made available online (https://github.com/zwahhaj/starhopping).
###### Key Words.:
exoplanets – adaptive optics
## 1 Introduction
Radial velocity (RV) surveys have revealed to us the exoplanet population
orbiting within $\sim$5 au of their parent stars (Mayor et al., 2011;
Fernandes et al., 2019). Transit techniques have done the same for the
population of closer-in planets ($\lesssim$1 au), providing us a glimpse of
their atmospheres as inferred from their spectra (Howard et al., 2010; Dong &
Zhu, 2013; Madhusudhan, 2019). Direct imaging on the other hand has found more
than a dozen planets orbiting farther than 10 au from their stars
(http://exoplanet.eu/). Direct imaging and interferometry are the only methods
that allow us to obtain spectra of exoplanets separated by more than a few au
from their host stars (Bonnefoy et al., 2014, 2016). Direct imaging is also
the only technique that captures protoplanetary disks in the act of forming
planets (Keppler et al., 2018; Müller et al., 2018; Haffert et al., 2019).
Moreover, it has shown us fully formed planetary systems with their left-over
dusty planetesimal disks (Lagrange et al., 2012), and captured these dust-
producing rocky disks at various stages over their lifetime (e.g., Boccaletti
et al., 2018, 2020; Wahhaj et al., 2016).
Studies of systems like HR 8799 with its four planets can offer us a glimpse
at possible early (age $<$ 30 Myrs) architectures (Marois et al., 2010),
perhaps at a stage prior to major planet-migration and scattering (Chatterjee
et al., 2008; Crida, 2009; Raymond et al., 2010). However, the extrasolar
Jupiter and Saturn analogs are mostly still hidden from us, orbiting in the
glare of their parent stars between 5 and 10 au (Fernandes et al., 2019).
Fortunately, a giant planet at an age of 30 Myr can be a hundred times
brighter than at 300 Myr (e.g., Allard et al., 2012a). With direct imaging, we
are trying to detect the younger component of this hidden population, bridging
the unexplored gap to connect to the RV and transit exoplanet populations
closer in. In fact, some state-of-the-art direct imaging surveys are nearly complete and have yielded a few more giant planets, fainter and orbiting closer to their stars than in earlier surveys, but mostly they report that the regions beyond 10 au rarely host planets more massive than 3–5 $M_{Jup}$ (Nielsen et al., 2019; Chauvin et al., 2017; Macintosh et al., 2015).
Especially for ground-based instruments, the success of the direct imaging technique, imaging dozens of exoplanets and protoplanetary disks, has been
mainly due to angular and spectral difference imaging (ADI, SDI and ASDI; Liu,
2004; Marois et al., 2006; Sparks & Ford, 2002; Wahhaj et al., 2011). Without
point spread function (PSF) differencing, within minutes we hit a wall in
terms of sensitivity because of quasi-static speckles in adaptive optics
images. Speckles essentially mimic astronomical point sources, integrating
more like signal than noise. The ADI, SDI and other related techniques
decouple the speckles from the real signal, allowing them to be isolated and
subtracted. However, these techniques are hampered by the self-subtraction
problem (Marois et al., 2006). Since the decoupling of speckles and
astronomical signal is never complete, there is inevitably some self-
subtraction of signal. This can be manageable for planets moderately separated
from the star, where we just lose sensitivity depending on the subtraction
algorithm used (e.g., Wahhaj et al., 2013, 2015). However, for planet-star
separations of 1–2 resolution elements and extended structures like
circumstellar disks, the signal can be completely subtracted or the morphology
significantly altered or completely masked (Milli et al., 2012).
Reference difference imaging (RDI), a possible solution, has been routinely
used in space telescope observations
(e.g., Weinberger et al., 1999; Choquet et al., 2016), as the PSF is quite stable over successive orbits of the
telescope. However, RDI is not often used in ground-based observing where PSFs
change significantly over hours. This is because, prior to extreme AO, the PSF
of other stars could not closely match the target PSFs in speckle similarity,
especially if the reference star images were not obtained the same night as
the science target. Nevertheless, impressive ground-based results on quite a
few targets have been achieved (Lagrange et al., 2009; Xuan et al., 2018;
Ruane et al., 2019; Bohn et al., 2020). In the more recent efforts, reference
PSFs were obtained 30 mins to hours apart and the telescope operator would
have to manually set up the guiding for each target change, costing significant human effort and photon dead-time. Starting recently at VLT/SPHERE, we now offer fast automated RDI in queue mode for the first time, requiring only a one-minute gap for each target change, a technique dubbed star-
hopping RDI. To demonstrate the power of this new observing mode, and to look
for new planets closer to the star, we targeted HR 8799, the home of the four
giants.
HR 8799 is a young main-sequence star (age 20–160 Myrs; Cowley et al., 1969;
Moór et al., 2006; Marois et al., 2008; Hinz et al., 2010; Zuckerman et al.,
2011; Baines et al., 2012) at a distance of 41.29$\pm$0.15 pc (Gaia
Collaboration, 2018). The space motions of the star suggest membership in the
Columba moving group (age 30–40 Myr; Torres et al., 2008; Zuckerman et al.,
2011; Bell et al., 2015; Geiler et al., 2019). It has four directly imaged
giant planets at projected distances of 15, 27, 43, and 68 au (Marois et al.,
2008, 2010). Upper limits on the masses from orbital stability requirements, and the luminosities derived assuming an age of $\sim$30 Myrs, suggest that the planet masses are 5–7 $M_{Jup}$ (Marois et al., 2010; Currie et al., 2011;
Sudol & Haghighipour, 2012). Interior and exterior to the planets, warm dust
at 6–10 au and an exo-Kuiper Belt beyond 100 au have been detected (Sadakane &
Nishida, 1986; Su et al., 2009; Hughes et al., 2011; Matthews et al., 2014;
Booth et al., 2016). Thus, it is likely that the planets formed in a
circumstellar disk, instead of directly from a protostellar cloud as in binary
or multiple star formation. However, it is currently a theoretical challenge
to form so many massive planets in a single system.
The total system architecture and stability, considering the age, mass and debris disk formation history, have been studied in some detail (see
Goździewski & Migaszewski, 2009, 2014, 2018; Reidemeister et al., 2009; Su et
al., 2009; Fabrycky & Murray-Clay, 2010; Moro-Martín et al., 2010; Galicher et
al., 2011; Marleau & Cumming, 2014; Matthews et al., 2013; Booth et al., 2016;
Konopacky et al., 2016; Wilner et al., 2018; Geiler et al., 2019). HR 8799 is
a star of the $\lambda$ Bootis type (indicating an iron-poor atmosphere), and also a $\gamma$ Dor variable, indicating small surface pulsations perhaps also due to some accretion-associated chemical peculiarity (Saio et al., 2018;
Saio, 2019; Takata et al., 2020). Spectra of the planets have been obtained in
the NIR bands with Keck/OSIRIS (Barman et al., 2011, 2015; Konopacky et al.,
2013), Project 1640 at Palomar (Oppenheimer et al., 2013), VLT/NACO (Janson et
al., 2010), GPI (Ingraham et al., 2014) and SPHERE (Zurlo et al., 2016;
Bonnefoy et al., 2016). The comparison of the spectra to brown dwarfs, cool
field objects and current atmospheric models suggests patchy thin and thick
clouds of uncertain height, non-equilibrium chemistry, and a dusty low-gravity
atmosphere (Marois et al., 2008; Currie et al., 2011; Madhusudhan et al.,
2011; Skemer et al., 2012; Marley et al., 2012; Morley et al., 2012; Apai et
al., 2013; Buenzli et al., 2015). Given the theoretical challenge in
explaining such a massive multi-planet and debris disk system with detailed
and specific information, and the prospect of finding additional planets
(Goździewski & Migaszewski, 2014, 2018) the system deserves a deeper look. We
describe our SPHERE study of HR 8799 in the following sections. The reduction
software used in this paper can be found online (https://github.com/zwahhaj/starhopping).
## 2 Observations
### 2.1 Telescope and instrument control for Star-hopping
The goal of star-hopping on VLT/SPHERE is to switch from recording adaptive
optics corrected images of the science star to the reference star with only a
$\sim$1 minute gap. Thus the usual help from the human operator to setup the
guide star for the primary mirror’s active optics correction, typically a 5
minute interaction, should be restricted to once per star, thus two times in
total. This would allow us to make hops between science and reference star
every $\sim$10 minutes without much loss in photon-collecting efficiency, while ensuring minimal change in the PSF shape in the elapsed time. We do not
provide an exact calculation for the optimum hopping frequency as it depends
strongly on how the seeing and coherence time vary over the observation.
However, we found in our observations that PSF similarity drops $\sim$2% every
10 minutes (see Section 3.3). This is significant as the sensitivity reached
depends non-linearly on the PSF subtraction quality. Thus, we recommend
observing the science target for 10 minutes, then hopping to the reference
star and observing it for 5 mins, repeating the cycle as needed.
To preserve PSF similarity and for time-efficiency, the AO loops would not be
re-optimized when changing stars, and thus the reference star would need to
have an R-magnitude (mag) within 1 mag of the science star, to ensure similar
AO performance. While we do not have strong constraints on the color of the reference star, similar brightness (within 1 mag) at the observing wavelength is again important. This is because the adaptive optics performance needs to be similar and the signal-to-noise ratio of the reference images needs to be
comparable or better. Also, the reference star would need to be within 1 to 2
degrees of the science star, so that the main mirror’s shape-changes at the
new pointing would not result in large changes in PSF properties. Fortunately,
for the vast majority of stars fainter than $R\sim 4$ mags a suitable
reference star can be found, making star-hopping practical for $R\sim$ 4–13
mag stars. The solution required new software to be designed for telescope
control, and new template software to be written for the observing sequence and instrument control, i.e., for SPHERE.
For SPHERE, we designed two new acquisition templates called starhop and
hopback which are only responsible for moving the telescope between the two
stars and store relevant setup information so that subsequent hops can be made
automatically. Thus a typical observing sequence would be: 1) Normal
acquisition of science star with desired instrumental mode and setup, 2) An
observing template lasting a few minutes, 3) Acquisition of reference star a
few degrees away, with the starhop template, 4) Another observing template, 5)
Quick return to the science star using the hopback template lasting $\sim$1
minute, 6) Another observing template, 7) Quick return to the reference star using
the hopback template again, 8) As many iterations of steps 4 to 7 as desired.
All three types of acquisitions constitute a full preset of the telescope,
i.e., the primary mirror’s shape and the secondary’s pointing are set by a
look-up table, then a guide star is selected (automatic for hopback) for
accurate pointing corrections, continuous active optics corrections for the
main mirror shape are activated using the guide star. However, human operators
only assist with the first (normal) acquisition and the starhop acquisition,
especially in the selection of the guide star and related setup. The starhop
template stores all parameters required for these setups for the first star,
moves (presets) to the second star, lets the operator assist in the second
acquisition, and then stores all the parameters for the second acquisition.
Small telescope offsets for fine-centering made by the operator when
positioning the star on the instrument detector are also recorded. Thus, the
hopback template already has the relevant parameters saved and can
automatically hop back and forth between the two stars, taking only $\sim$1
minute each time.
### 2.2 HR 8799 observations
We observed HR 8799 as part of a director’s discretionary time (DDT) proposal,
to test the performance limits of star-hopping with RDI on SPHERE. The SPHERE
instrument (Beuzit et al., 2019), installed at the Nasmyth Focus of unit
telescope 3 (UT3) at the VLT, is a state-of-the-art high-contrast imager,
polarimeter and spectrograph, designed to find and characterize exoplanets. It
employs an extreme adaptive optics system, SAXO (Fusco et al., 2005, 2006;
Petit et al., 2012; Sauvage et al., 2016), with 41$\times$41 actuators (1377
active in the pupil) for wavefront control, a low read noise EMCCD running at
1380 Hz, a fast (800 Hz bandwidth) tip-tilt mirror (ITTM) for pupil
stabilization, extremely smooth toric mirrors (Hugot et al., 2012), and a
differential tip-tilt loop for accurate centering in the NIR. This system can
deliver H-band Strehl ratios for bright stars (R$<$9) of up to 90% and
continue to provide AO correction for stars as faint as R$=$14 mags. SPHERE
also provides coronagraphs for diffraction suppression, including apodized
Lyot coronagraphs (Soummer, 2005) and achromatic four-quadrant phase masks
(Boccaletti et al., 2008). It comprises three subsystems: the infrared
dual-band imager and spectrograph (IRDIS; Dohlen et al., 2008), an integral
field spectrograph (IFS; Claudi et al., 2008) and the Zimpol imaging
polarimeter (ZIMPOL; Schmid et al., 2018).
We observed HR 8799 in the IRDIFS extended mode (Zurlo et al., 2014), where
IRDIS K1 and K2-band images and IFS Y–H spectra are obtained simultaneously
(Vigan et al., 2010). The IRDIFS data was obtained in three 1.5 hour observing
blocks (OBs), one block on the night of October 31, 2019 and two contiguous
blocks on the night of November 1, 2019. We used the N_ALC_YJ_S coronagraph
with a central obscuration of radius 73 mas, which is not ideal for the
maximum contrast in K-band but ensures that any object at 100 mas separation
would not be partially obscured. With IRDIS we used 8s exposures, while with
IFS we used 32s. We also obtained short-exposure unsaturated non-coronagraphic
observations of the primary star for flux calibration, which we will call FLUX
observations henceforth. The datasets can be found in the ESO archive under
program ID 2103.C-5076(A) and container IDs: 2622640, 2623891 and 2623923.
Each container represents a separate epoch, consisting of several OBs
alternating between HR 8799 and the reference star. The reference star, HD
218381 (spectral type K0 vs. F0V for HR 8799), is separated by 0.55$^{\circ}$ from HR 8799
and is 0.52 mag fainter than it in R-band but 0.75 mag brighter in H-band. In
total, we had 1440 IRDIS exposures for HR 8799 and 830 for the reference star.
With IFS, we had 190 exposures for HR 8799 and 114 for the reference star. The
observing conditions were average, with a coherence time of 4.7$\pm$1.3 ms, a
seeing of 0.9$\pm$0.15′′, and a windspeed of 2.1–7.7 m/s without the low-wind
effect (Milli et al., 2018). The total sky rotations were 23.8$^{\circ}$ on the first night and 53.4$^{\circ}$ on the second night.
## 3 Data Reduction
### 3.1 IFS reduction and contrast limit estimates
Since our main motivation is to achieve sensitivities to fainter planets than
earlier observations, we begin by estimating the detection limits of our data
set and post-processing method. The detection limits are estimated by
comparison to simulated planets which undergo the same reduction processes as
the real planets. The measurement and analysis of the real planets in the
system are presented afterwards. For the basic reduction calibrations, we used
SPHERE pipeline version 0.36.0 and scripts by Vigan et al. (2015, http://astro.vigan.fr/tools.html). The IFS data sets from all three epochs were
combined to form a cube of 7254 images, 186 images in each of the 39
wavelength channels. In each image, 16 simulated companions were inserted with offsets with respect to the star, at separations of 0.1$"$ to 1.6$"$ in steps of 0.1$"$ and position angle increments of 90$^{\circ}$ with each step. The simulated
companions were made from the FLUX exposures of the primary appropriately
scaled in intensity. Since these sources were given constant chromatic
contrast, i.e., the same spectrum as the host star, we did not apply any
spectral differencing in the reduction described below. The contrasts of these
sources were chosen to be roughly 2 mags brighter than a preliminary contrast
limit estimate for the data set. The reference PSF data set consisted of 4446
images. All science and reference images were unsharp-masked, i.e., each image
was convolved with a Gaussian of FWHM 0.1$"$ (roughly twice the image
resolution) and subtracted from the original to produce an image where most large-scale spatial features, such as diffuse stellar light, have been removed (e.g., Racine et al., 1999; Wahhaj et al., 2013).
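For reference, this unsharp-masking step amounts to subtracting a Gaussian-smoothed copy of each frame. A minimal Python sketch, assuming the FWHM has already been converted to pixels (0.1$"$ for the IFS data); names are illustrative, not the actual pipeline code.

```python
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, fwhm_pix):
    """Subtract a Gaussian-smoothed copy of the frame to suppress diffuse,
    large-scale structure; fwhm_pix is the smoothing FWHM in pixels
    (roughly twice the resolution element, i.e. 0.1 arcsec here)."""
    sigma = fwhm_pix / 2.355          # convert FWHM to Gaussian sigma
    return image - gaussian_filter(image, sigma)
```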
A diagonally oriented stripe pattern was found in all the IFS images, which we
were unable to remove in the basic calibrated images. A zero-valued image
passed through the basic calibration also yielded this pattern, found to be
independent of the channel wavelength. Thus the pattern is likely an artefact
of the pipeline. The output pattern image was bad-pixel cleaned and unsharp-
masked to prepare it to be subtracted from the science images. Two annular
regions were defined to optimize PSF subtraction, i.e., minimize the residual
RMS in each region. These two annuli had inner and outer radii of 0.075$"$ and
0.67$"$, and 0.67$"$ and 1.33$"$ respectively. The science images were median-
combined without de-rotation to reveal the background stripe pattern more
clearly. Then we obtained the best intensity-scaled pattern images for the
inner and outer annuli, which we in turn subtracted from each science image,
to perform a preliminary removal of the pattern. Next, for each science image,
we computed the best linear combination of reference images that reduced the
RMS in the two annular regions separately, similar to the LOCI algorithm
(Lafrenière et al., 2007), but a much simpler version since optimization is
done only over the two large annuli. We then took the difference of the
science image and this composite reference image, and further applied an
azimuthal profile removal filter as described in Wahhaj et al. (2013). All the
difference images were median-combined again to check for any residual striped
pattern, and remove it again by the same procedure as before.
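The core of this simplified LOCI step is an ordinary least-squares fit of the reference frames to each science frame within an annulus. The following is a minimal Python sketch under that reading, with illustrative names; the real reduction additionally handles the two optimization annuli and the stripe-pattern removal described above.

```python
import numpy as np

def best_reference_combo(science, references, annulus_mask):
    """Least-squares linear combination of reference frames that minimises
    the residual RMS inside one annulus (a much-simplified version of LOCI;
    Lafreniere et al. 2007). `references` has shape (n_ref, ny, nx) and
    `annulus_mask` is a boolean image."""
    A = np.stack([r[annulus_mask] for r in references], axis=1)  # pixels x refs
    b = science[annulus_mask]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Rebuild the composite reference PSF over the full frame and subtract.
    model = np.tensordot(coeffs, np.asarray(references), axes=(0, 0))
    return science - model, coeffs
```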
Generally, we see a consistent but modest improvement in contrast ($\sim$ 0.2
mag) with the use of image filters (e.g., unsharp masking), and so we recommend their use. Also, we notice fewer artifacts, e.g., fewer PSF residuals in these
reductions. However, as data sets may differ in PSF morphology, we also
recommend studying reductions without applying such filters, even when trying
to detect faint point sources.
Figure 1: Left: An IFS Y–H band reduced image showing simulated planets which are recovered with high SNR. The source recovered closest to the star indicates a contrast limit of 11.2 mags at 0.1′′ projected separation. Right: An IRDIS K1+K2 band reduced image also showing simulated planets at the same separations, all recovered with high SNR. The same contrast at 0.1′′ was reached with IRDIS also. The planets were inserted into the basic calibrated data (flat-fielded, dark-subtracted and bad-pixel corrected). All real planets have been masked out. The color scale is linear with intensity.

Figure 2: Contrast limits achieved in the IFS and IRDIS data sets, estimated by flux comparison to simulated planets recovered post-reduction.

Figure 3: IFS and IRDIS images from star-hopping RDI reductions shown with the same scale and orientation (North is up, East is left). Left: SNR map of the IFS Y–H band reduced image, showing only the real planets. The azimuthal filtering creates the dark negative arcs around the planets. They are more pronounced in the IFS reduction as more images were combined here than for the IRDIS reduction. Right: SNR map of the IRDIS K1+K2 band reduced image, showing only the real planets. The star, at the center of the black circle, is masked by the coronagraph. No new planets are detected in the newly probed region around 0.1′′ separation above the contrast limit of 11.2 mags.
Next, the images were derotated to align the sky with North up and East left
orientation and median-combined. A signal-to-noise map is made for the final
reduced image (Figure 1), where the pixels in annular rings of width 4 pixels
are divided by the robust standard deviation in that region. The robust value
is taken to mitigate the effect of the simulated planets on the RMS. The
signal-to-noise ratio of each recovered simulated planet was then compared to its input contrast to calculate the 5$\sigma$ contrast limit achieved at that separation, as $Contrast=InputContrast\times SNR/5$. The
5$\sigma$-contrasts achieved in this RDI-only reduction at 0.1$"$, 0.2$"$,
0.4$"$ and 0.8$"$ separations were 11.2, 13.5, 14.4 and 15 mags, corresponding
to mass limits of 6.5, 3.1, 2.3 and 1.8 $M_{Jup}$ respectively, as estimated
from BT-Settl models assuming an age of 30 Myrs (Allard et al., 2012b). The
contrast curve is shown in Figure 2. The reduction showing only the real
planets (without simulated planet insertions) is shown in Figure 3. No new
planets are detected.
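A minimal Python sketch of the SNR map and contrast-limit bookkeeping described above, using a MAD-based robust RMS in 4-pixel-wide rings; names are illustrative, and the closing comment reads the contrast relation under the convention that contrast is the star-to-planet flux ratio (equivalently, an additive relation in magnitudes).

```python
import numpy as np

def robust_std(values):
    """Robust RMS via the median absolute deviation, scaled to match the
    standard deviation for Gaussian noise; this mitigates the bias from
    bright simulated planets in the ring."""
    med = np.median(values)
    return 1.4826 * np.median(np.abs(values - med))

def snr_map(image, center, ring_width=4):
    """Divide each annular ring of `ring_width` pixels by its robust RMS,
    as in Section 3.1; a sketch, not the actual pipeline."""
    y, x = np.indices(image.shape)
    r = np.hypot(y - center[0], x - center[1])
    out = np.zeros_like(image, dtype=float)
    for r0 in np.arange(0.0, r.max(), ring_width):
        ring = (r >= r0) & (r < r0 + ring_width)
        if not ring.any():
            continue
        noise = robust_std(image[ring])
        if noise > 0:
            out[ring] = image[ring] / noise
    return out

# 5-sigma limit from a recovered simulated planet, with 'contrast' read as
# the star-to-planet flux ratio (larger = deeper); in magnitudes this is
#   dmag_limit = dmag_input + 2.5 * log10(snr_recovered / 5)
#   contrast_limit = input_contrast * snr_recovered / 5.0
```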
### 3.2 IRDIS reduction and contrast limit estimates
The IRDIS reductions with simulated planets were done in a similar way to the
IFS reductions. Since there were fewer images to process, we opted to use a more sophisticated but also more computation-intensive reduction method. The
simulated planets were inserted in the basic calibrated data at the same
offsets with respect to the star as before. The planets inserted were $\sim$1
mag brighter than the 5$\sigma$ detection limit. For this exercise, we did not
correct the relative rotational offset between IFS and IRDIS, so the PAs of
the real HR 8799 planets do not agree between the two reduced images in Figure
1. There were 1443 good science images in the three datasets combined and 828
reference images.
The images were first unsharp-masked. Next, we calculated the residual rms
between all pairs of science and reference images, after intensity scaling to
minimize the rms between 70 mas and 270 mas. For each science image, the best
16 reference images (more would worsen signal loss) were linearly combined by
LOCI for subtraction to minimize the residual rms separately in annular rings
covering the whole image. Each target annulus, where the subtraction was actually done, had a width of 200 mas. But the reference annuli, where LOCI tried to minimize the residual rms, started 25 mas outside the target annuli and extended outwards to cover the rest of the image. This was done to
mitigate over-subtraction and signal loss. We chose these parameters mostly by
trial and error. The azimuthal filtering, de-rotation and combination of all
the difference images, and the contrast limit estimates were done in the same
way as in the IFS reduction. The final reduced images (with and without
simulated companions) and the contrast performance are shown in Figures 1, 3
and 2, respectively. The IRDIS contrast limit is 11.2 mags at 0.1′′, which is equal to the IFS limit, but IFS fares $\sim$0.5 mag better at larger
separations.
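The frame-selection step above, keeping the 16 reference images with the lowest intensity-scaled residual rms, can be sketched as follows in Python; here `mask` would encode the 70–270 mas annulus, and all names are illustrative rather than the actual pipeline's.

```python
import numpy as np

def select_best_references(science, references, mask, n_best=16):
    """Rank reference frames by residual rms after a least-squares intensity
    scaling inside `mask`, and keep the n_best matches (more would worsen
    signal loss, per Section 3.2). `references` is (n_ref, ny, nx)."""
    refs = np.asarray(references)
    s = science[mask]
    scores = []
    for ref in refs:
        r = ref[mask]
        scale = np.dot(s, r) / np.dot(r, r)   # best intensity scale factor
        scores.append(np.std(s - scale * r))  # residual rms in the annulus
    order = np.argsort(scores)
    return refs[order[:n_best]]
```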
### 3.3 Comparison of RDI and ADI IRDIS detection limits
For a comparison of typical ADI and RDI IRDIS observations we use only the
first of the three data sets, totalling 1.5 hours of execution time, since this is slightly longer than the typical observation length (1 hour) at the VLT. The data set comprises 481 science images and, for RDI, 276 reference images. The total sky rotation in the science images was 24$^{\circ}$. We performed 3
different ADI-based reductions which we call ADI-LOCI-F1, ASDI-LOCI-F10 and
ASDI-PCA-F10. The ADI-LOCI-F1 is the same as the RDI reduction in terms of
reference image selection and reference sector size and the use of LOCI,
except that the references were restricted to those with more relative
rotation than one-half FWHM (found by trial). The ASDI-LOCI-F10 reduction
(ASDI is Angular and Spectral Difference Imaging) was performed on a data set
with simulated companions which were made 10 times fainter (thus labeled F10)
in the K2 channel than in K1 channel, allowing aggressive spectral
differencing and a potential contrast gain over ADI. Since reference images
could have companions both spectrally and rotationally displaced, only the
combined displacement needs to be more than one-half FWHM. The ASDI-PCA-F10
reduction was performed on the same data set as that of ASDI-LOCI-F10. The
reduction parameters were again optimized by trial and error. We used
principal component analysis (PCA) to construct the subtraction PSFs with 5
components (See Soummer et al., 2012). However, for each science image, and
for each annular sub-component of the image (same as the reductions above)
only selected subsets of the science images were chosen as input for the PCA. Residual rms values were calculated after subtracting all science image pairs, and the best 30 matches (with the least rms) that had more relative rotation than one-half FWHM were chosen. If fewer than 30 appropriate matches were found, the relative rotation criterion was relaxed down to one-fourth FWHM, but no further, to allow input images for the PSF construction. This more selective
approach to PCA helps to reduce the signal self-subtraction expected in ADI,
and our tests supported this assumption, yielding significantly better results
than PCA alone.
The RDI reduction (see top of section 3.2) was repeated for the same 1.5 hour
data set used in the ADI reduction. In Figure 4 we compare the RDI and the
ADI-LOCI-F1 reduction. The simulated planets inside 0.3′′ separation are much
better recovered in the star-hopping RDI reduction. In the ADI reduction, the
innermost planet at 0.1′′ is not recovered at all, while the one at 0.2′′ is
barely recovered. Contrast curves were calculated from the signal to noise
ratio of the recovered simulated planets as before. The contrast improvement
of RDI over the three ADI reductions, more than 2 mags at 0.1′′ separation, is
shown in Figure 5 as a difference between the two contrast curves. The
improvement will of course vary with the total amount of sky rotation in the
science images.
Figure 4: Comparison of star-hopping RDI versus ADI reductions of IRDIS K1+K2 band data injected with flat-spectrum simulated planets. The inner two simulated planets are not successfully recovered in the ADI reduction, while they are clearly detected in the RDI reductions. The third simulated planet is recovered significantly better in the star-hopping RDI reduction. All real planets have been masked out. The color scale is linear with intensity.

Figure 5: RDI contrast improvement over ADI or ASDI, estimated from the SNR of recovered simulated companions from an IRDIS data set. The star-hopping RDI technique yields detection limits more than 2 mags fainter than ADI at 0.1′′ separation from the target star. The green line shows the case for a K1/K2 companion flux ratio of 10, and very similar algorithms for RDI and ASDI, except that the ASDI reduction is fine-tuned to minimize self-subtraction. The blue line similarly shows the RDI$-$ADI difference for equal K1, K2 flux. The red line shows the RDI improvement against the best PCA-based ADI reduction for a K1/K2 flux ratio of 10. The LOCI and PCA reductions are described in Section 3.3.
Figure 7 illustrates why star-hopping RDI performs so much better than ADI. It
shows the residual fractional rms (RFR) for each science image as a function
of relative rotation, i.e., the remaining rms between 0.1′′–0.3′′ separations
after subtraction of another science or reference star image, divided by the
original rms in each science image. Specifically,
$RFR_{i}=RMS(s_{i}-o_{j})/RMS(s_{i})$, where $s_{i}$ is a science image,
$o_{j}$ is another science or reference star image and $RMS$ is computed
between 0.1′′–0.3′′ separations. The RFRs post-RDI subtraction had a 2$\sigma$
range of 0.32–0.78. We see that although the science images provide better-
matched PSFs in general, the images that can be used with minimal self-
subtraction are much fewer and much poorer matches than the RDI reference set.
Thus, the reference star images constitute a superior set for constructing
subtraction PSFs.
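For clarity, the RFR statistic is a one-liner once the annulus mask is built. A minimal Python sketch, with illustrative names:

```python
import numpy as np

def residual_fractional_rms(science, other, mask):
    """RFR = RMS(s_i - o_j) / RMS(s_i) over the annulus encoded in `mask`
    (0.1-0.3 arcsec in Section 3.3); `other` is another science frame or a
    reference-star frame."""
    diff = (science - other)[mask]
    return np.sqrt(np.mean(diff ** 2)) / np.sqrt(np.mean(science[mask] ** 2))
```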
In Figure 6 we show that artificially increasing the field rotation for the
RDI reduction (1.5 hour data set) before coadding the images does not improve
the contrast significantly. Thus the speckle residuals are comparable to white
noise as more rotation does not seem to result in additional smoothing. We
estimate no improvement at 0.1′′, $\sim$0.2 mag improvement farther out when comparing rotations of 140$^{\circ}$ to 20$^{\circ}$, and 0.5 mag improvement at 1′′ when comparing rotations of 140$^{\circ}$ to 2$^{\circ}$. The reductions were done by multiplying the actual position angles of the images by specific factors that would achieve total field rotations of 2$^{\circ}$ to $\sim$140$^{\circ}$ (distributed logarithmically), before coadding the images.
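A minimal Python sketch of this artificial-rotation test, scaling the position angles by a common factor before derotating and coadding; it assumes PSF-subtracted residual frames as input, and the names and interpolation order are illustrative rather than the actual pipeline's.

```python
import numpy as np
from scipy.ndimage import rotate

def coadd_with_scaled_rotation(residuals, pas, target_total_rotation):
    """Scale each frame's position angle so the sequence spans
    `target_total_rotation` degrees, derotate, and mean-combine.
    `residuals` is (n_frames, ny, nx); `pas` is in degrees."""
    pas = np.asarray(pas, dtype=float)
    factor = target_total_rotation / (pas.max() - pas.min())
    derotated = [rotate(img, -pa * factor, reshape=False, order=3)
                 for img, pa in zip(residuals, pas)]
    return np.mean(derotated, axis=0)
```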
Figure 6: Artificially increasing the field rotation for an RDI reduction before coadding the images does not improve the contrast significantly (see Section 3.3). The legend gives the total rotation of the reduction for each contrast curve. At small separations (0.1–0.2′′) we see no improvement, as contrasts are not correlated with rotation angles. At larger separations, we see a maximum of 0.5 mag improvement between the minimum and maximum rotations, 2$^{\circ}$ and 143$^{\circ}$, but only 0.3 mag improvement between 18$^{\circ}$ and 143$^{\circ}$.
During star-hopping tests on the night of August 8, 2019, we obtained 8 images
for each of a pair of stars, HD 196963 and HD 196081, which are separated by
$\sim$1.75$^{\circ}$. Since this pair has a much larger angular separation, we can use
the RFR from this data set to gauge whether there is significant degradation
in PSF similarity. Fortunately, the 2$\sigma$ range of the RFR was 0.33–0.53,
indicating that star-hopping is still very effective for such large
separations. It should be noted that the coherence time was only 1.9–2.1 msec
for these observations, compared to 2.5–7.2 msec for the HR 8799 observations.
Although we have limited statistics on such performance, these results show that even in poor to average conditions, star-hopping RDI can be effective for a pair of stars separated by almost 2$^{\circ}$.
Figure 7: The comparison of PSF similarity between reference-science and
science-science pairs. The residual fractional rms of difference images are
plotted as a function of relative position angle/rotational offset. The black
dots represent science-science subtractions, the blue dots represent science-
reference subtractions, the red dots represent science-science differences
with acceptable self-subtraction. For the science-reference points, the
relevant quantity is the time difference, which in our case has an almost
linear relationship to the PA difference.
### 3.4 $JH$-band spectra from IFS
The spectra of planets $c,d$ and $e$ were extracted with an aperture size of 3
pixels for all IFS channels. The spectra for planets $d$ and $e$ were
corrected for flux loss by comparing them to three flat contrast sources
(uniform contrast across wavelength) per planet inserted at the planets’
separations, but at different PAs (offset from the planets by 30o to 270o).
These simulated planets are just the IFS $FLUX$ exposures scaled appropriately
in intensity. They were inserted at 10 mags of contrast, which is somewhat
brighter than the real planets. Since planet $c$ was detected at the edge of
the IFS detector where simulated planets could not be inserted, we used the
same comparison sources for planets $c$ and $d$. The simulated planets undergo
the same reduction process as the real planets, and their fluxes are extracted
using the same aperture sizes, and thus their systematic fractional flux errors are the same. We verified this by checking that the spectrum recovered from the simulated companions did indeed have a uniform contrast. Thus the planet spectrum is calculated as
$B_{PR}(\lambda)=\frac{F_{PR}(\lambda)}{F_{PS}(\lambda)}\times
10^{-4}B_{S}(\lambda)$ (1)
where $F_{PR}$ and $F_{PS}$ are the real and simulated planet aperture fluxes
respectively, and $B_{S}$ is the stellar spectrum. Here, the fractional flux
losses for the real planet are fully accounted for in the ratio,
$F_{PR}(\lambda)/F_{PS}(\lambda)$.
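Equation 1 translates directly into code. A minimal Python sketch, generalising the $10^{-4}$ factor to an arbitrary simulated-planet contrast in magnitudes (10 mags here, giving exactly $10^{-4}$); the names are illustrative.

```python
import numpy as np

def planet_spectrum(F_real, F_sim, B_star, sim_contrast_mag=10.0):
    """Equation 1: flux-loss-corrected planet spectrum from the ratio of
    real to simulated aperture fluxes; 10**(-0.4 * 10) = 1e-4 reproduces
    the 10^-4 factor for the 10 mag simulated-planet contrast."""
    F_real, F_sim, B_star = map(np.asarray, (F_real, F_sim, B_star))
    return (F_real / F_sim) * 10 ** (-0.4 * sim_contrast_mag) * B_star
```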
The flux corrected spectra for planets $d$ and $e$ are shown in Figure 8 along
with that of the particularly red L6 object 2MASS J2148+4003 (Looper et al.,
2008) for comparison. All three spectra are much redder towards the $H$-band,
in comparison to typical late L-types. Although not as red, the dusty dwarfs of the field population also have redder-than-average spectra (see Zurlo et al., 2016; Stephens et al., 2009; Gagné et al., 2014). It should be noted that
the spectra do differ somewhat in shape from earlier publications (e.g., Zurlo et al., 2016). This could be because the spectra we present here are the first not to be affected by signal self-subtraction due to ADI or SDI
processing. The most notable differences from earlier spectra (see Figure 9)
are less defined peaks at 1.1$\leavevmode\nobreak\ \mu$m, and for planet $d$
in 2019, a gentler slope towards 1.6 $\mu$m. The absence of the peak at 1.1
$\mu$m is quite common among observed late L-types (see Figure 3 of Bonnefoy et
al., 2016, for example), and also seen in the spectra of 2MASS J2148+4003.
However, we also note that the 2016 spectral slopes towards 1.6 $\mu$m are
very similar to planet $e$ in 2019. Although the higher fluxes at 1.6 $\mu$m are rarer among such L-types, they would explain the earlier discrepancy between
IRDIS and IFS fluxes near the $H$-band (Zurlo et al., 2016). We could not
estimate an accurate flux normalization for the spectrum of planet $c$ as it was detected near the edge of the detector, so we show its spectrum normalized
to 1 at 1.25 $\mu$m in Figure 10. We do not pursue this further, as accurate
$JH$-band photometry has already been provided in past publications. However,
the shape of the planet’s spectrum is reliably detected and shows an even redder $J-H$ color than planets $d$ and $e$. Although such red spectra are not
common, a very similar slope (flux doubling between 1.25 and 1.6$\mu$m) was
seen in the L7 object, VHS J1256$-$1257 b (Gauza et al., 2015). This L7 object,
a planetary candidate companion to a brown dwarf, is also thought to have a
dusty atmosphere with thick clouds (see Bonnefoy et al., 2016, for a
discussion).
Figure 8: The spectra for planets $d$ and $e$ compared with that of the L6 object, 2MASS J2148+4003, from Looper et al. (2008). The planet spectra have been divided by the stellar flux at 1.25 $\mu$m to show the contrast at that wavelength. The L6 object spectrum was scaled to match planet $e$ at 1.25 $\mu$m. The shaded regions indicate the 1$\sigma$ error ranges of the spectra. The wavelength range 1.37–1.45 $\mu$m, which is dominated by telluric lines, is not shown.

Figure 9: The RDI-extracted spectra for planets $d$ and $e$ in 2019 compared with their ADI-extracted spectra from 2016 as reported in Zurlo et al. (2016). The 2016 planet spectra have been matched to the 2019 spectra at 1.25 $\mu$m for easier comparison of their respective shapes. The shaded regions indicate the 1$\sigma$ error ranges of the spectra. The wavelength range 1.37–1.45 $\mu$m, which is dominated by telluric lines, is not shown.

Figure 10: The RDI-extracted spectrum for planet $c$ compared with that of the L6 object, 2MASS J2148+4003, from Looper et al. (2008) and the L7 object, VHS J1256$-$1257 b, from Gauza et al. (2015). The wavelength range 1.37–1.45 $\mu$m, which is dominated by telluric lines, is not shown.
### 3.5 The HR 8799 debris disk
Booth et al. (2016), using the ALMA millimeter array, detected a broad debris
ring, extending from $\sim$145 au to $\sim$430 au with an inclination of
40$\pm$6o and a position angle of 51$\pm$8o. Prior to this, Su et al. (2009)
inferred from the spectral energy distribution of the system that a
planetesimal belt extending from 100 and 300 au separation was the source of
blow-out grains extending out to $\sim$1500 au. Thus the inner radius of the
disk could start as close as 2.5′′ and the outer radius could be as far as
11′′ from the star.
It is expected that RDI reductions would be a major improvement over ADI for
detections of disks with large angular extents, as self-subtraction in these
cases is a severe problem for ADI. To detect the disk, we repeated the IRDIS
RDI reduction without simulated companions or any prior image filtering (used
to enhance speckle subtraction), as these remove all extended emission. We
only used the K1-band images, as the K2-band images have a much higher background.
Detecting disks which are close to azimuthally symmetric in the plane of the
sky, and extended over several arcseconds is a challenge very different from
planet recovery, as the expected signal area is most of the image and the
background area is perhaps non-existent. The image sectors used for PSF
subtraction cannot be small, as this would remove extended signal. So, we used
one large annulus extending from 0.4′′ to 2′′ separations to cover most of the
PSF halo. The final reduction is shown in Figure 11, but no disk emission was
detected down to a 5$\sigma$ contrast of 14.1 magnitudes beyond 2.5′′
separations. The non-detection is not surprising given the marginal detection
of the much brighter 49 Cet debris disk with SPHERE (Choquet et al., 2017).
The fractional disk luminosity of HR 8799 is 8$\times$10$^{-5}$ (Su et al., 2009) versus 9$\times$10$^{-4}$ for 49 Cet (Moór et al., 2015). The inner radii of the disks start at roughly 2′′ separation for both (Choquet et al., 2017; Booth et
al., 2016), with expected physical separations of 100–150 au. The two stars
have similar spectral types (F0–A1) with very similar $H$-band magnitudes
(5.3–5.5 mag).
Figure 11: An IRDIS reduction without any prior image filtering to search for
an extended circumstellar disk beyond angular separations of 2.5′′ (to $>$6′′)
from the star. ALMA observations by Booth et al. (2016) indicate that the disk
should have a position angle of 51$\pm$8$^{\circ}$ and an inclination of 40$\pm$6$^{\circ}$. We
do not detect any disk down to a contrast limit of 14.1 magnitudes. Some faint
thermal emission from the detector background is seen in the lower right, but
not in the expected orientation of the known disk. North is up and East is to
the left.
### 3.6 IRDIS K1, K2 band photometry
The photometry of the four planets was extracted by comparison with simulated planets in a similar way to the IFS spectra. For each of the four planets,
three simulated planets were inserted into the dataset with a contrast of 10
mags, at the same separation as the real planets, but with large PA offsets
(30$^{\circ}$ to 270$^{\circ}$). The relative aperture photometry was done similarly to the IFS case, but
with aperture radius 4 pixels, because of the larger FWHM in the K-band. The
recovered photometric values are all brighter than the Zurlo et al. (2016) measurements by about 0.1 mag (see Table 1). The standard deviation in the contrast estimates for the three reference simulated planets is less than
0.03 mags. The dominant contrast uncertainty comes from the measurement of the
AO-corrected stellar PSF core flux, which is measured only once every 1.5
hours.
### 3.7 Astrometric measurements and comparison to orbital models
The IRDIS science data set was separately reduced by the SPHERE data center (Delorme et al., 2017), which treated it as an ordinary pupil-tracking sequence. The data center applied the optimal distortion correction
methods consistent with Maire et al. (2016), to produce a basic-calibrated
data set with high astrometric fidelity (3–4 mas). These images were then
reduced using the high-contrast imaging algorithm, ANDROMEDA (Cantalloube et
al., 2015), to produce astrometric measurements (see Table 1) for the four
known HR 8799 planets. We also compared the recovered coordinates for the real
planets between the RDI and ADI reductions, and found that the planet
locations agreed to within 2.7 mas, smaller than the errors estimated in Table
1.
An exhaustive orbital fitting effort is currently being undertaken by Zurlo et al. (in preparation), including all extant astrometry. Moreover, extensive work
has been done to find orbital solutions to the prior astrometry for this
system, so we just compare our latest measurements to the viable orbits
computed by Wang et al. (2018). From millions of orbits generated by a Monte Carlo method, they generated three sets of solutions: 1) the orbits are forced to
be coplanar and have 1:2:4:8 orbital commensurabilities, 2) no coplanarity but
with low eccentricity and period commensurabilities as before, 3) with no
additional constraints. In Figure 12, we overlay our astrometry on orbital
solution sets 1 and 3. Although the latest points are consistent with both
sets of solutions, planets $e$ and $c$ fall close to the expected position in
the dynamically stable set, but a bit far from the mean expected location in
the unconstrained set of orbits. Thus, the coplanar orbits with period
commensurabilities are favored in our comparisons.
Survival of the four planets and even a hypothetical fifth planet is possible
for the lifetime of the system ($>$ 30 Myrs), but requires the period
commensurabilities mentioned above. In fact, this was needed even when only
planets $b$, $c$ and $d$ were known (Goździewski & Migaszewski, 2009;
Reidemeister et al., 2009; Fabrycky & Murray-Clay, 2010; Marshall et al.,
2010). Such dynamical models envision that the four planets were formed at
larger separations and migrated inwards. This would allow the very similar
chemical compositions indicated by their spectra, as opposed to more variation
expected if they had formed in situ (Marois et al., 2010).
The most likely semi-major axes allowed for the hypothetical inner planet $f$,
estimated by Goździewski & Migaszewski (2014, 2018), were 7.5 au and 9.7 au,
with dynamical constraints on the masses of 2–8 $M_{Jup}$ and 1.5–5 $M_{Jup}$
respectively. The IFS contrasts we achieved at these separations were 13.05 and
13.86 mags, corresponding to estimated masses of 3.6 $M_{Jup}$ and 2.8
$M_{Jup}$ respectively (assuming an age of 30 Myr), from the BT-Settl models
(Allard et al., 2012b). Thus, the planet may still exist with a mass of 2–3.6
$M_{Jup}$ at 7.5 au or 1.5–2.8 $M_{Jup}$ at 10 au.
Table 1: Astrometry and photometry of the four HR 8799 planets.
planet | $\rho$ (mas) | $\sigma_{\rho}$ (mas) | PA ($^{\circ}$) | $\sigma_{PA}$ ($^{\circ}$) | SNR | $\Delta$K1 (mag) | $\Delta$K2 (mag) | Mass ($M_{J}$)
---|---|---|---|---|---|---|---|---
e | 406 | 4 | 302.72 | 0.04 | 41 | 10.8$\pm$0.02 | 10.63$\pm$0.03 | 8${}^{+7}_{-2}$
d | 686 | 4 | 231.38 | 0.006 | 83 | 10.7$\pm$0.02 | 10.47$\pm$0.02 | 8${}^{+7}_{-2}$
c | 958 | 3 | 335.86 | 0.05 | 96 | 10.8$\pm$0.02 | 10.53$\pm$0.03 | 8${}^{+7}_{-2}$
b | 1721 | 4 | 69.05 | 0.04 | 47 | 11.89$\pm$0.01 | 10.75$\pm$0.01 | 6${}^{+7}_{-1}$
The mass estimates are from the PHOENIX BT-Settl atmospheric models (Baraffe
et al., 2015), assuming an age of 30${}^{+130}_{-10}$ Myrs. However, the most
dynamically stable orbital solutions from Wang et al. (2018) set much tighter
limits: a mass of $5.8\pm 0.5\leavevmode\nobreak\ M_{J}$ for planet $b$, and
$7.2\pm 0.6\leavevmode\nobreak\ M_{J}$ for the other planets.
Figure 12: Top: The November 1, 2019 epoch astrometry overlaid as gray
diamonds on the most dynamically stable orbital solutions from Wang et al.
(2018) (see their Figure 4), where coplanarity and 1:2:4:8 period
commensurabilities were imposed. The black dots represent earlier measured
astrometry for the four planets. Bottom: Same points overlaid on the orbital
solutions without the additional constraints. The 2019 locations for planets $e$ and $c$ are more consistent with the dynamically stable family of orbits.
## 4 Conclusions
In this paper, we successfully used the new star-hopping RDI technique to
detect all four known planets of the HR 8799 system, and significantly
improved on the contrast limits attained previously with ADI, at separations
less than 0.4′′. This technique of moving quickly to a reference star to
capture a similar AO PSF for differencing, with only a 1 minute gap in photon
collection, can now be used in service mode at the VLT with all the observing
modes available on the SPHERE instrument. Using star-hopping RDI, we demonstrated that the contrast improvement at 0.1′′ separation can be up to 2 mags, while at larger separations the improvement can be $\approx$0.5–1 mag,
results which are comparable to those of Ruane et al. (2019). With this
technique there is no need for any local sidereal time constraints during
observations, which are usually essential for ADI observations. This means that
the observing window for typical targets can be expanded by a factor of 2–3.
Moreover, star-hopping can usually be used for stars fainter than R$=$4 mag,
as for these a reference star of comparable brightness can be found within 1–2
degrees (closer is better). Indeed, we found comparable PSF similarity for a pair of stars 1.75$^{\circ}$ apart. The technique provides significant contrast
improvement mainly for two reasons: the usable PSFs, i.e., those without significant self-subtraction or flux loss from PSF subtraction, 1) occur closer in time and thus are more similar to the target image than in ADI, and 2) are more numerous than in ADI, as they are spread uniformly over the whole sequence rather than only becoming available after significant sky rotation. The benefit for extended objects like disks will be the most impactful, as in ADI the self-subtraction artefacts can result in significant changes in their apparent morphology.
In our SPHERE observations of HR 8799, we did not detect planet $f$ at the
most plausible locations, 7.5 and 9.7 au, down to mass limits of 3.6 and 2.8
$M_{Jup}$, respectively. Also, we did not detect any new candidate companions,
even at the smallest observable separation, 0.1′′ or $\approx$ 4.1 au, where
we attained a contrast limit of 11.2 mags or 6 $M_{Jup}$ in K1+K2-band (6.5
$M_{Jup}$ in JHK-band using BT-Settl models from Allard et al., 2012a).
However, we detected all 4 planets in K1+K2-band with SNR of 41, 83, 96 and 47
for planets $e$, $d$, $c$ and $b$, respectively. The YJH spectra for planets
$c$, $d$, $e$ were detected with very red colors. Our spectrum of planet $c$ has a higher SNR than in earlier observations (P1640; Oppenheimer et al., 2013; Pueyo et al., 2015). The spectra of planets $c$ and $d$ show some differences with
respect to earlier observations. Particularly, the spectral slope is redder in
the H-band, which is significant as that part of the spectrum has the highest
SNR. This could be due to real evolution of the atmosphere of the planets over
the past few years. Previous work has already shown that it is difficult to find close matches to the spectra with current compositional models, due to an inadequate understanding of cloud properties and non-equilibrium chemistry (Bonnefoy et al., 2016). However, the spectra are matched very closely by some red field dwarfs and a planetary-mass companion to a brown dwarf (VHS J1256$-$1257 b; Gauza et al., 2015). We did not detect the debris disk seen by
ALMA (Booth et al., 2016), but this is not surprising given that the much
brighter debris disk of a comparable system, 49 Cet, was only marginally
detected by SPHERE (Choquet et al., 2017). Finally, comparing the current
locations of the planets to orbital solutions from Wang et al. (2018), we
found that planets $e$ and $c$ are more consistent with coplanar and resonant
orbits than without such restrictions.
In summary, the star-hopping RDI technique significantly boosts SPHERE’s
detection capabilities both for planets and circumstellar disks, and should
contribute to high-impact exoplanet science, as the technique is brought to
other telescope facilities.
###### Acknowledgements.
This work has made use of the SPHERE Data Centre, jointly operated by
OSUG/IPAG (Grenoble), PYTHEAS/LAM/CESAM (Marseille), OCA/Lagrange (Nice),
Observatoire de Paris/LESIA (Paris), and Observatoire de Lyon. We would
especially like to thank Nadege Meunier at the SPHERE data center in Grenoble
for the distortion-corrected reductions used for the astrometric measurements,
Bartosz Gauza for providing the spectra of VHS J125601.92-125723.9 b, Jason
Wang for allowing us to use the dynamical modeling figures from his
publication, and Matias Jones, Florian Rodler, Benjamin Courtney-Barrer,
Francisco Caceres and Alain Smette at the VLT for technical help during the
various phases of the development of the star-hopping technique.
## References
* Allard et al. (2012a) Allard, F., Homeier, D., & Freytag, B. 2012a, Philosophical Transactions of the Royal Society of London Series A, 370, 2765
* Allard et al. (2012b) Allard, F., Homeier, D., Freytag, B., & Sharp, C. M. 2012b, in EAS Publications Series, Vol. 57, EAS Publications Series, ed. C. Reylé, C. Charbonnel, & M. Schultheis, 3–43
* Apai et al. (2013) Apai, D., Radigan, J., Buenzli, E., et al. 2013, ApJ, 768, 121
* Baines et al. (2012) Baines, E. K., White, R. J., Huber, D., et al. 2012, ApJ, 761, 57
* Baraffe et al. (2015) Baraffe, I., Homeier, D., Allard, F., & Chabrier, G. 2015, A&A, 577, A42
* Barman et al. (2015) Barman, T. S., Konopacky, Q. M., Macintosh, B., & Marois, C. 2015, ApJ, 804, 61
* Barman et al. (2011) Barman, T. S., Macintosh, B., Konopacky, Q. M., & Marois, C. 2011, ApJ, 733, 65
* Bell et al. (2015) Bell, C. P. M., Mamajek, E. E., & Naylor, T. 2015, Proceedings of the International Astronomical Union, 10, 41–48
* Beuzit et al. (2019) Beuzit, J. L., Vigan, A., Mouillet, D., et al. 2019, A&A, 631, A155
* Boccaletti et al. (2008) Boccaletti, A., Chauvin, G., Baudoz, P., & Beuzit, J. L. 2008, A&A, 482, 939
* Boccaletti et al. (2020) Boccaletti, A., Di Folco, E., Pantin, E., et al. 2020, A&A, 637, L5
* Boccaletti et al. (2018) Boccaletti, A., Sezestre, E., Lagrange, A. M., et al. 2018, A&A, 614, A52
* Bohn et al. (2020) Bohn, A. J., Kenworthy, M. A., Ginski, C., et al. 2020, MNRAS, 492, 431
* Bonnefoy et al. (2014) Bonnefoy, M., Marleau, G. D., Galicher, R., et al. 2014, A&A, 567, L9
* Bonnefoy et al. (2016) Bonnefoy, M., Zurlo, A., Baudino, J. L., et al. 2016, A&A, 587, A58
* Booth et al. (2016) Booth, M., Jordán, A., Casassus, S., et al. 2016, MNRAS, 460, L10
* Buenzli et al. (2015) Buenzli, E., Saumon, D., Marley, M. S., et al. 2015, ApJ, 798, 127
* Cantalloube et al. (2015) Cantalloube, F., Mouillet, D., Mugnier, L. M., et al. 2015, A&A, 582, A89
* Chatterjee et al. (2008) Chatterjee, S., Ford, E. B., Matsumura, S., & Rasio, F. A. 2008, ApJ, 686, 580
* Chauvin et al. (2017) Chauvin, G., Desidera, S., Lagrange, A. M., et al. 2017, A&A, 605, L9
* Choquet et al. (2017) Choquet, É., Milli, J., Wahhaj, Z., et al. 2017, ApJ, 834, L12
* Choquet et al. (2016) Choquet, É., Perrin, M. D., Chen, C. H., et al. 2016, ApJ, 817, L2
* Claudi et al. (2008) Claudi, R. U., Turatto, M., Gratton, R. G., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Proc. SPIE, 70143E
* Cowley et al. (1969) Cowley, A., Cowley, C., Jaschek, M., & Jaschek, C. 1969, AJ, 74, 375
* Crida (2009) Crida, A. 2009, in SF2A-2009: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, ed. M. Heydari-Malayeri, C. Reylé, & R. Samadi, 313
* Currie et al. (2011) Currie, T., Burrows, A., Itoh, Y., et al. 2011, ApJ, 729, 128
* Delorme et al. (2017) Delorme, P., Meunier, N., Albert, D., et al. 2017, in SF2A-2017: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics, ed. C. Reylé, P. Di Matteo, F. Herpin, E. Lagadec, A. Lançon, Z. Meliani, & F. Royer
* Dohlen et al. (2008) Dohlen, K., Langlois, M., Saisse, M., et al. 2008, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7014, Proc. SPIE, 70143L
* Dong & Zhu (2013) Dong, S. & Zhu, Z. 2013, ApJ, 778, 53
* Fabrycky & Murray-Clay (2010) Fabrycky, D. C. & Murray-Clay, R. A. 2010, ApJ, 710, 1408
* Fernandes et al. (2019) Fernandes, R. B., Mulders, G. D., Pascucci, I., Mordasini, C., & Emsenhuber, A. 2019, ApJ, 874, 81
* Fusco et al. (2005) Fusco, T., Petit, C., Rousset, G., Conan, J. M., & Beuzit, J. L. 2005, Optics Letters, 30, 1255
* Fusco et al. (2006) Fusco, T., Rousset, G., Sauvage, J. F., et al. 2006, Optics Express, 14, 7515
* Gagné et al. (2014) Gagné, J., Lafrenière, D., Doyon, R., Malo, L., & Artigau, É. 2014, ApJ, 783, 121
* Gaia Collaboration (2018) Gaia Collaboration. 2018, VizieR Online Data Catalog, I/345
* Galicher et al. (2011) Galicher, R., Marois, C., Macintosh, B., Barman, T., & Konopacky, Q. 2011, ApJ, 739, L41
* Gauza et al. (2015) Gauza, B., Béjar, V. J. S., Pérez-Garrido, A., et al. 2015, ApJ, 804, 96
* Geiler et al. (2019) Geiler, F., Krivov, A. V., Booth, M., & Löhne, T. 2019, MNRAS, 483, 332
* Goździewski & Migaszewski (2009) Goździewski, K. & Migaszewski, C. 2009, MNRAS, 397, L16
* Goździewski & Migaszewski (2014) Goździewski, K. & Migaszewski, C. 2014, MNRAS, 440, 3140
* Goździewski & Migaszewski (2018) Goździewski, K. & Migaszewski, C. 2018, ApJS, 238, 6
* Haffert et al. (2019) Haffert, S. Y., Bohn, A. J., de Boer, J., et al. 2019, Nature Astronomy, 3, 749
* Hinz et al. (2010) Hinz, P. M., Rodigas, T. J., Kenworthy, M. A., et al. 2010, ApJ, 716, 417
* Howard et al. (2010) Howard, A. W., Marcy, G. W., Johnson, J. A., et al. 2010, Science, 330, 653
* Hughes et al. (2011) Hughes, A. M., Wilner, D. J., Andrews, S. M., et al. 2011, ApJ, 740, 38
* Hugot et al. (2012) Hugot, E., Ferrari, M., El Hadi, K., et al. 2012, A&A, 538, A139
* Ingraham et al. (2014) Ingraham, P., Marley, M. S., Saumon, D., et al. 2014, ApJ, 794, L15
* Janson et al. (2010) Janson, M., Bergfors, C., Goto, M., Brandner, W., & Lafrenière, D. 2010, ApJ, 710, L35
* Keppler et al. (2018) Keppler, M., Benisty, M., Müller, A., et al. 2018, A&A, 617, A44
* Konopacky et al. (2013) Konopacky, Q. M., Barman, T. S., Macintosh, B. A., & Marois, C. 2013, Science, 339, 1398
* Konopacky et al. (2016) Konopacky, Q. M., Marois, C., Macintosh, B. A., et al. 2016, AJ, 152, 28
* Lafrenière et al. (2007) Lafrenière, D., Marois, C., Doyon, R., Nadeau, D., & Artigau, É. 2007, ApJ, 660, 770
* Lagrange et al. (2012) Lagrange, A. M., Boccaletti, A., Milli, J., et al. 2012, A&A, 542, A40
* Lagrange et al. (2009) Lagrange, A. M., Gratadour, D., Chauvin, G., et al. 2009, A&A, 493, L21
* Liu (2004) Liu, M. C. 2004, Science, 305, 1442
* Looper et al. (2008) Looper, D. L., Kirkpatrick, J. D., Cutri, R. M., et al. 2008, ApJ, 686, 528
* Macintosh et al. (2015) Macintosh, B., Graham, J. R., Barman, T., et al. 2015, Science, 350, 64
* Madhusudhan (2019) Madhusudhan, N. 2019, ARA&A, 57, 617
* Madhusudhan et al. (2011) Madhusudhan, N., Burrows, A., & Currie, T. 2011, ApJ, 737, 34
* Maire et al. (2016) Maire, A. L., Bonnefoy, M., Ginski, C., et al. 2016, A&A, 587, A56
* Marleau & Cumming (2014) Marleau, G. D. & Cumming, A. 2014, MNRAS, 437, 1378
* Marley et al. (2012) Marley, M. S., Saumon, D., Cushing, M., et al. 2012, ApJ, 754, 135
* Marois et al. (2006) Marois, C., Lafrenière, D., Doyon, R., Macintosh, B., & Nadeau, D. 2006, ApJ, 641, 556
* Marois et al. (2008) Marois, C., Macintosh, B., Barman, T., et al. 2008, Science, 322, 1348
* Marois et al. (2010) Marois, C., Zuckerman, B., Konopacky, Q. M., Macintosh, B., & Barman, T. 2010, Nature, 468, 1080
* Marshall et al. (2010) Marshall, J., Horner, J., & Carter, A. 2010, International Journal of Astrobiology, 9, 259
* Matthews et al. (2013) Matthews, B., Kennedy, G., Sibthorpe, B., et al. 2013, ApJ, 780, 97
* Matthews et al. (2014) Matthews, B., Kennedy, G., Sibthorpe, B., et al. 2014, ApJ, 780, 97
* Mayor et al. (2011) Mayor, M., Marmier, M., Lovis, C., et al. 2011, arXiv e-prints, arXiv:1109.2497
* Milli et al. (2018) Milli, J., Kasper, M., Bourget, P., et al. 2018, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 10703, Proc. SPIE, 107032A
* Milli et al. (2012) Milli, J., Mouillet, D., Lagrange, A. M., et al. 2012, A&A, 545, A111
* Moór et al. (2006) Moór, A., Ábrahám, P., Derekas, A., et al. 2006, ApJ, 644, 525
* Moór et al. (2015) Moór, A., Kóspál, Á., Ábrahám, P., et al. 2015, MNRAS, 447, 577
* Morley et al. (2012) Morley, C. V., Fortney, J. J., Marley, M. S., et al. 2012, ApJ, 756, 172
* Moro-Martín et al. (2010) Moro-Martín, A., Malhotra, R., Bryden, G., et al. 2010, ApJ, 717, 1123
* Müller et al. (2018) Müller, A., Keppler, M., Henning, T., et al. 2018, A&A, 617, L2
* Nielsen et al. (2019) Nielsen, E. L., De Rosa, R. J., Macintosh, B., et al. 2019, AJ, 158, 13
* Oppenheimer et al. (2013) Oppenheimer, B. R., Baranec, C., Beichman, C., et al. 2013, ApJ, 768, 24
* Petit et al. (2012) Petit, C., Sauvage, J. F., Sevin, A., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8447, Proc. SPIE, 84471Z
* Pueyo et al. (2015) Pueyo, L., Soummer, R., Hoffmann, J., et al. 2015, ApJ, 803, 31
* Racine et al. (1999) Racine, R., Walker, G. A. H., Nadeau, D., Doyon, R., & Marois, C. 1999, PASP, 111, 587
* Raymond et al. (2010) Raymond, S. N., Armitage, P. J., & Gorelick, N. 2010, ApJ, 711, 772
* Reidemeister et al. (2009) Reidemeister, M., Krivov, A. V., Schmidt, T. O. B., et al. 2009, A&A, 503, 247
* Ruane et al. (2019) Ruane, G., Ngo, H., Mawet, D., et al. 2019, AJ, 157, 118
* Sadakane & Nishida (1986) Sadakane, K. & Nishida, M. 1986, PASP, 98, 685
* Saio (2019) Saio, H. 2019, MNRAS, 487, 2177
* Saio et al. (2018) Saio, H., Bedding, T. R., Kurtz, D. W., et al. 2018, MNRAS, 477, 2183
* Sauvage et al. (2016) Sauvage, J.-F., Fusco, T., Petit, C., et al. 2016, Journal of Astronomical Telescopes, Instruments, and Systems, 2, 025003
* Schmid et al. (2018) Schmid, H. M., Bazzon, A., Roelfsema, R., et al. 2018, A&A, 619, A9
* Skemer et al. (2012) Skemer, A. J., Hinz, P. M., Esposito, S., et al. 2012, ApJ, 753, 14
* Soummer (2005) Soummer, R. 2005, ApJ, 618, L161
* Soummer et al. (2012) Soummer, R., Pueyo, L., & Larkin, J. 2012, ApJ, 755, L28
* Sparks & Ford (2002) Sparks, W. B. & Ford, H. C. 2002, ApJ, 578, 543
* Stephens et al. (2009) Stephens, D. C., Leggett, S. K., Cushing, M. C., et al. 2009, ApJ, 702, 154
* Su et al. (2009) Su, K. Y. L., Rieke, G. H., Stapelfeldt, K. R., et al. 2009, ApJ, 705, 314
* Sudol & Haghighipour (2012) Sudol, J. J. & Haghighipour, N. 2012, ApJ, 755, 38
* Takata et al. (2020) Takata, M., Ouazzani, R. M., Saio, H., et al. 2020, A&A, 635, A106
* Torres et al. (2008) Torres, C. A. O., Quast, G. R., Melo, C. H. F., & Sterzik, M. F. 2008, Young Nearby Loose Associations, ed. Reipurth, B., 757
* Vigan et al. (2015) Vigan, A., Gry, C., Salter, G., et al. 2015, MNRAS, 454, 129
* Vigan et al. (2010) Vigan, A., Moutou, C., Langlois, M., et al. 2010, MNRAS, 407, 71
* Wahhaj et al. (2015) Wahhaj, Z., Cieza, L. A., Mawet, D., et al. 2015, A&A, 581, A24
* Wahhaj et al. (2011) Wahhaj, Z., Liu, M. C., Biller, B. A., et al. 2011, ApJ, 729, 139
* Wahhaj et al. (2013) Wahhaj, Z., Liu, M. C., Biller, B. A., et al. 2013, ApJ, 779, 80
* Wahhaj et al. (2016) Wahhaj, Z., Milli, J., Kennedy, G., et al. 2016, A&A, 596, L4
* Wang et al. (2018) Wang, J. J., Graham, J. R., Dawson, R., et al. 2018, AJ, 156, 192
* Weinberger et al. (1999) Weinberger, A. J., Becklin, E. E., Schneider, G., et al. 1999, ApJ, 525, L53
* Wilner et al. (2018) Wilner, D. J., MacGregor, M. A., Andrews, S. M., et al. 2018, ApJ, 855, 56
* Xuan et al. (2018) Xuan, W. J., Mawet, D., Ngo, H., et al. 2018, AJ, 156, 156
* Zuckerman et al. (2011) Zuckerman, B., Rhee, J. H., Song, I., & Bessell, M. S. 2011, ApJ, 732, 61
* Zurlo et al. (2016) Zurlo, A., Vigan, A., Galicher, R., et al. 2016, A&A, 587, A57
* Zurlo et al. (2014) Zurlo, A., Vigan, A., Mesa, D., et al. 2014, A&A, 572, A85
# Hubble Space Telescope Imaging of Isolated Local Volume Dwarfs GALFA-Dw3
and Dw4
P. Bennet Physics & Astronomy Department, Texas Tech University, Box 41051,
Lubbock, TX 79409-1051, USA Space Telescope Science Institute, 3700 San
Martin Drive, Baltimore, MD 21218, USA D. J. Sand Department of
Astronomy/Steward Observatory, The University of Arizona, 933 North Cherry
Avenue, Rm. N204, Tucson, AZ 85721-0065, USA D. Crnojević University of
Tampa, Department of Chemistry, Biochemistry, and Physics, 401 West Kennedy
Boulevard, Tampa, FL 33606, USA D. R. Weisz University of California,
Berkeley, Department of Astronomy, 501 Campbell Hall # 3411, Berkeley, CA
94720-3411,USA N. Caldwell Harvard-Smithsonian Center for Astrophysics, 60
Garden Street, Cambridge, MA 02138, USA P. Guhathakurta UCO/Lick Observatory,
University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064,
USA J. R. Hargis Space Telescope Science Institute, 3700 San Martin Drive,
Baltimore, MD 21218, USA A. Karunakaran Department of Physics, Engineering
Physics and Astronomy, Queen’s University, Kingston, ON K7L 3N6, Canada B.
Mutlu-Pakdil Kavli Institute for Cosmological Physics, University of Chicago,
Chicago, IL 60637, USA Department of Astronomy and Astrophysics, University
of Chicago, Chicago IL 60637, USA E. Olszewski Department of
Astronomy/Steward Observatory, The University of Arizona, 933 North Cherry
Avenue, Rm. N204, Tucson, AZ 85721-0065, USA J. J. Salzer Department of
Astronomy, Indiana University, 727 East Third Street, Bloomington, IN 47405,
USA A. C. Seth Department of Physics and Astronomy, University of Utah, 115
South 1400 East, Salt Lake City, Utah 84112, USA J. D. Simon Observatories
of the Carnegie Institution for Science, Pasadena, California 91101, USA K.
Spekkens Department of Physics and Space Science, Royal Military College of
Canada P.O. Box 17000, Station Forces Kingston, ON K7K 7B4, Canada Department
of Physics, Engineering Physics and Astronomy, Queen’s University, Kingston,
ON K7L 3N6, Canada D. P. Stark Department of Astronomy/Steward Observatory,
The University of Arizona, 933 North Cherry Avenue, Rm. N204, Tucson, AZ
85721-0065, USA J. Strader Center for Data Intensive and Time Domain
Astronomy, Department of Physics and Astronomy, Michigan State University,
East Lansing, MI 48824, USA E. J. Tollerud Space Telescope Science
Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA E. Toloba
Department of Physics, University of the Pacific, 3601 Pacific Avenue,
Stockton, CA 95211, USA B. Willman LSST and Steward Observatory, 933 North
Cherry Avenue, Tucson, AZ 85721, USA
(Received January 20, 2020)
###### Abstract
We present observations of the dwarf galaxies GALFA Dw3 and GALFA Dw4 with the
Advanced Camera for Surveys (ACS) on the Hubble Space Telescope (HST). These
galaxies were initially discovered as optical counterparts to compact HI
clouds in the GALFA survey. Both objects resolve into stellar populations
which display an old red giant branch, younger helium burning stars, and
massive main sequence stars. We use the tip of the red giant branch method to
determine the distance to each galaxy, finding distances of
7.61${}_{-0.29}^{+0.28}$ Mpc and 3.10${}_{-0.17}^{+0.16}$ Mpc, respectively.
With these distances we show that both galaxies are extremely isolated, with
no other confirmed objects within $\sim$1.5 Mpc of either dwarf. GALFA Dw4 is
also found to be unusually compact for a galaxy of its luminosity. GALFA Dw3 &
Dw4 contain HII regions with young star clusters and an overall irregular
morphology; they show evidence of ongoing star formation through both
ultraviolet and H$\alpha$ observations and are therefore classified as dwarf
irregulars (dIrrs). The star formation histories of these two dwarfs show
distinct differences: Dw3 shows signs of a recently ceased episode of active
star formation across the entire dwarf, while Dw4 shows some evidence for
current star formation in spatially limited HII regions. Compact HI sources
offer a promising method for identifying isolated field dwarfs in the Local
Volume, including GALFA Dw3 & Dw4, with the potential to shed light on the
driving mechanisms of dwarf galaxy formation and evolution.
Dwarf galaxies (416), Dwarf irregular galaxies (417), Galaxy distances (590),
HST photometry (756), Red giant tip (1371), Star formation (1569)
††journal: ApJ††facilities: HST (ACS), WIYN:0.9m, GALEX,
Swift††software: Numpy, Astropy (The Astropy Collaboration et al., 2018),
DOLPHOT (Dolphin, 2000)
## 1 Introduction
The Lambda Cold Dark Matter model for structure formation has been very
successful at reproducing observations of large-scale structures; however,
challenges emerge at sub-galactic scales (for a recent review, see Bullock &
Boylan-Kolchin, 2017, and the references therein). Some of these challenges
can be examined by switching focus from dwarf galaxies in nearby groups
(McConnachie et al., 2018; Crnojević et al., 2019; Bennet et al., 2019, 2020;
Carlsten et al., 2020; Mao et al., 2020) to isolated field galaxies within the
Local Volume (Sand et al., 2015; McQuinn et al., 2015b; Tollerud et al.,
2016).
Examining these isolated, gas-rich dwarf galaxies is critical to our
understanding of dwarf galaxy formation and to testing dark matter theories. They
are the faintest/least massive galaxies we know of that have never interacted
with a massive galaxy halo, and thus have never felt the effects of tidal/ram
pressure stripping (Spekkens et al., 2014; Wetzel et al., 2015). They are a
more controlled experiment for understanding other mechanisms which drive the
star formation history (SFH) and metallicity of a dwarf galaxy, for instance
supernova-driven winds, or infall of pristine gas from the local environment
(McQuinn et al., 2013). By characterizing their resolved stellar populations,
it becomes possible both to obtain the present-day structural parameters for
these galaxies and to characterize their SFHs, providing constraints on their
pasts (McQuinn et al., 2015a; Tollerud et al., 2016; McQuinn et al., 2020).
Additionally, these gas rich galaxies potentially trace the full dwarf galaxy
population at the outskirts of the Local Group and other similar low-density
environments, a regime where the numbers and properties of these dwarfs are
just starting to be compared directly with numerical simulations (Tikhonov &
Klypin, 2009; Garrison-Kimmel et al., 2014, 2019; Tollerud & Peek, 2018).
In this work, we will examine the isolated Local Volume dwarf galaxies GALFA
Dw3 and Dw4. These objects were discovered as part of an archival search for
optical counterparts to HI clouds (Giovanelli et al., 2010) discovered in the
ALFALFA (Adams et al., 2013) and GALFA (Saul et al., 2012) surveys by Sand et
al. (2015), and were both confirmed to have H$\alpha$ emission at a velocity
consistent with the HI detection. The key properties of GALFA Dw3 and Dw4 are
listed in Table 1.
An outline of the paper follows. In Section 2, we describe the HST photometry
and artificial star tests (ASTs), as well as supplemental observations of the
dwarfs. In Section 3, we derive distances to GALFA Dw3 and Dw4 via the Tip of
the Red Giant Branch (TRGB) method. In Section 4, we examine the observational
properties of the dwarfs in the HST imaging and derive their physical
properties. In Section 5, we discuss the star formation histories based on
their HST color-magnitude diagrams (CMDs), as well as supplemental H$\alpha$
and ultraviolet (UV) images. In Section 6, we discuss the environment of the
dwarfs and potential analogs within the Local Volume. Finally we summarize and
conclude in Section 7.
## 2 Data Overview
### 2.1 Hubble Space Telescope Observations
The HST observations of GALFA Dw3 & Dw4 were taken as part of program GO-14676
(Cycle 24, PI Sand). Both Dw3 & Dw4 were observed for a single orbit with the
Advanced Camera for Surveys (ACS)/Wide Field Camera (WFC), using the F606W and
F814W filters. We did not dither to fill in the WFC CCD chip gap, as each
dwarf easily fit into one chip. The total exposure time was 1062 s for each
filter on both Dw3 & Dw4. Color composites of these images are shown in Figure
1.
We perform PSF-fitting photometry on the provided .flt images using the
DOLPHOT v2.0 photometric package (with the ACS module), a modified version of
HSTphot (Dolphin, 2000). For this work we use the suggested input parameters
from the DOLPHOT/ACS User’s
Guide111http://americano.dolphinsim.com/dolphot/dolphotACS.pdf, including
corrections for charge transfer efficiency losses and default aperture
corrections based around a 0.5” aperture. Quality cuts are then applied using
the following criteria: the derived photometric errors must be $\leq$0.3 mag
in both bands, the sum of the crowding parameters in the two bands must be
$\leq$1, and the square of the sum of the sharpness parameters must be $\leq$0.075.
Detailed descriptions of these parameters can be found in Dolphin (2000). For
this analysis, we correct these extracted magnitudes for foreground extinction
and reddening using the Schlafly & Finkbeiner (2011) calibration of the
Schlegel et al. (1998) dust maps (we note that GALFA Dw4 suffers from
significant extinction due to its proximity to the plane of the Galaxy,
E(B-V)=0.531 mag).
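As a concrete illustration of these cuts, the following minimal sketch applies them to a catalog loaded into a NumPy structured array. The column names (err_606, crowd_606, sharp_606, and their F814W counterparts) are hypothetical stand-ins for the corresponding DOLPHOT output columns, whose actual layout is documented in the User's Guide.

```python
import numpy as np

def apply_quality_cuts(cat):
    """Apply the quality criteria described above; 'cat' is a structured
    array with hypothetical per-band error/crowding/sharpness columns."""
    good = (
        (cat["err_606"] <= 0.3) & (cat["err_814"] <= 0.3)        # error <= 0.3 mag in both bands
        & (cat["crowd_606"] + cat["crowd_814"] <= 1.0)           # summed crowding parameter <= 1
        & ((cat["sharp_606"] + cat["sharp_814"]) ** 2 <= 0.075)  # squared summed sharpness <= 0.075
    )
    return cat[good]
```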
We estimate photometric uncertainties using ASTs in a CMD region covering the
full range of observed stars, from blue Main Sequence (MS) features to regions
redward of the red giant branch (RGB). The fake stars have a similar color-
magnitude distribution to that of the observed sources, except for a deeper
extension at faint magnitudes (down to $\sim$2 mag fainter than the faintest
real recovered stars), so as to take into account those faint objects that are
upscattered in the observed CMD due to noise. The AST photometry is derived in
exactly the same way as for the real data, and the same quality cuts and
calibration are applied.
The resulting CMDs can be seen in Figure 2. The completeness and uncertainties
for Dw4 appear to be worse than those of Dw3, but this is solely because of the
higher extinction associated with Dw4; they are identical in uncorrected
apparent magnitude space.
We assessed the crowding for each field. Visual inspection of GALFA Dw3 showed
clearly separated point sources throughout the main body of the dwarf. GALFA
Dw4 required more careful examination, with possible crowding in the blue
knots in the southeast and northwest ends of the dwarf (see Figure 1).
Examination of the potentially crowded regions showed similar completeness
levels to those found in the rest of the dwarf when using standard photometry,
and visual inspection showed no obviously missed point sources in the region
in question. We also made standard changes to the photometry recommended for
crowded regions, namely setting the parameter FitSky=3 (for more details
please see the DOLPHOT User’s Guide). This crowded photometry was then
compared to the standard photometry in the affected region with no significant
difference between the two: we conclude that the use of crowded photometry
parameters was unnecessary and that standard parameter photometry was as
effective in all regions of GALFA Dw4. However, some of the stars from Dw4 may
not be recovered successfully in either the standard or crowded photometry,
and this will be further discussed in §5.2.
### 2.2 Other Observations
Data from the Galaxy Evolution Explorer (GALEX; Martin & GALEX Team 2005) were
also used to check for UV emission from GALFA Dw3, as this can be a strong
indicator of recent star formation. Indeed, GALFA Dw3 shows substantial FUV
and NUV emission, which we report alongside the HST data in Figure 3. These
data were part of the All-Sky Imaging Survey; see Morrissey et al. (2007) for
details. GALFA Dw4 is outside the GALEX footprint and therefore no conclusions
can be drawn about its recent star formation with this dataset. We thus used
UV images from the Neil Gehrels $Swift$ Observatory (Gehrels et al., 2004) and
the Ultraviolet/Optical Telescope (UVOT; Roming et al., 2005), which were
taken as part of proposal 1417202 (P.I. L. Hagen) in all 3 available UV
filters (UVW1, UVM2, UVW2). There is no UV emission detected in these data,
likely due to the high levels of extinction along the line of sight to Dw4.
Supplemental H$\alpha$ narrow band imaging of GALFA Dw3 & Dw4 were obtained by
our group with the WIYN 0.9-m telescope and the Half Degree Imager on 21 July
2017 (UT). These images are used to trace HII regions with active star
formation within the last $\sim$10 Myrs (Calzetti, 2013) and can be seen in
Figure 4.
Figure 1: Color composite of F606W/F814W HST ACS imaging of the dwarf galaxies
GALFA Dw3 (upper panel) and GALFA Dw4 (lower panel). The bright objects in the
SW of Dw3 are background galaxies. Images are 1.2’x1.2’. North is up, east is
left.
## 3 Tip of the Red Giant Branch Distances
To determine distances to these resolved dwarf galaxies, we make use of the
TRGB technique (e.g., Da Costa & Armandroff, 1990; Lee et al., 1993; Makarov
et al., 2006; Rizzi et al., 2007; Freedman et al., 2020). The peak luminosity
of the RGB is a standard candle in the red bands, because it is driven by core
helium ignition and so it provides a useful distance estimate for galaxies
with an old stellar component which are close enough that the RGB stars can be
resolved. To determine TRGB magnitudes, we adopt the methodology described in
Crnojević et al. (2019). Briefly, the photometry is first corrected to account
for the color dependence of the TRGB (Jang & Lee, 2017); we also consider only
RGB stars with colors in the range $0.85<(F606W-F814W)_{0}<1.35$, so as to
exclude possible contamination from young red supergiant stars. The luminosity
function for RGB stars is then computed (note that the field,
background+foreground, contamination as derived from a dwarf-free region of
the ACS field-of-view is not significant for the range of colors and
magnitudes considered here), and a model luminosity function (convolved with
the appropriate photometric uncertainty, bias and incompleteness function as
derived from our ASTs) is fit to it with a non-linear least squares method.
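To illustrate this fitting step, the sketch below fits a simplified broken power-law luminosity function (in the spirit of Makarov et al. 2006) to binned F814W magnitudes with scipy. It deliberately omits the convolution with the AST-derived uncertainty, bias and incompleteness functions that the actual measurement includes, so it is a schematic rather than our exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def trgb_lf(m, m_trgb, log_amp, a, b, c):
    """Broken power-law RGB luminosity function: a slope and a downward
    jump brighter than the tip, and a second slope fainter than it."""
    bright = 10.0 ** (log_amp + b * (m - m_trgb) - c)  # AGB-dominated side
    faint = 10.0 ** (log_amp + a * (m - m_trgb))       # RGB-dominated side
    return np.where(m < m_trgb, bright, faint)

def fit_trgb(mags, m_guess):
    """Bin color-selected RGB magnitudes and fit for the tip magnitude."""
    edges = np.arange(mags.min(), mags.max(), 0.05)
    counts, _ = np.histogram(mags, bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [m_guess, np.log10(max(counts.max(), 1)), 0.3, 0.3, 0.7]
    popt, pcov = curve_fit(trgb_lf, centers, counts, p0=p0, maxfev=20000)
    return popt[0], np.sqrt(pcov[0, 0])  # tip magnitude and formal error
```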
Using the HST data, we find TRGB magnitudes of 25.37$\pm$0.08 mag and
23.42$\pm$0.12 mag for GALFA Dw3 and Dw4, respectively; these correspond to
distance moduli of 29.41$\pm$0.08 and 27.46$\pm$0.12 mag, which translate to
distances of 7.61${}_{-0.29}^{+0.28}$ Mpc and 3.10${}_{-0.17}^{+0.16}$ Mpc.
We mark the position of the TRGB and its uncertainty in Figure 2, and tabulate
our results in Table 1.
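The distances follow from the distance moduli via the standard relation $\mu=5\log_{10}(d/10\,\mathrm{pc})$; a quick check of the numbers above:

```python
def modulus_to_mpc(mu):
    """Invert mu = 5 log10(d / 10 pc) and return d in Mpc."""
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

print(modulus_to_mpc(29.41))  # ~7.6 Mpc (GALFA Dw3)
print(modulus_to_mpc(27.46))  # ~3.1 Mpc (GALFA Dw4)
# Propagating the +/-0.08 and +/-0.12 mag errors through the same formula
# reproduces the ~0.3 Mpc and ~0.2 Mpc uncertainties quoted above.
```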
Anand et al. (2019) used the same dataset presented here for GALFA Dw4 to
study the peculiar velocities of galaxies at the edge of the Local Group, and
reported a TRGB distance of 2.97$\pm$0.37 Mpc, which is consistent with the
distance reported here.
Figure 2: F606W/F814W CMD for the dwarf galaxies GALFA Dw3 (left panel) and
GALFA Dw4 (right panel). Magnitudes are corrected for foreground extinction
(see §2). Only point sources are shown (i.e., those sources with a DOLPHOT
object type=1 or 2). Black dots are stars within the dwarfs, red dots are
stars from an equal-area control field. In the left panel, the green crosses
indicate those stars associated with the spatial position of the HII region in
Dw3, see §5.1. In the right panel, the green crosses indicate those stars
associated with the spatial position of the southeast HII region and the
magenta crosses those associated with the northwest HII region, see §5.2. The
black horizontal line indicates the best fit for the TRGB, and the dashed gray
lines represent the 1$\sigma$ uncertainty. We display several Padova
isochrones (Bressan et al., 2012), shown as solid lines of varying color, each
line representing a stellar population of fixed age, shown in the legend of
each panel. The red isochrone (RGB stars) is plotted at [Fe/H]=$-$1.6 for both
dwarfs, while all other isochrones are at [Fe/H]=$-$1.0. Finally, the 50$\%$
completeness limit (black dashed line) and the photometric uncertainties are
reported.
Figure 3: The UV images of GALFA Dw3 from the GALEX All Sky Imaging Survey
(AIS) alongside optical images from HST for illustrative purposes, see Figure
1. This clearly shows the elevated UV emission from Dw3. North is up, east is
left. Each image is 1.1’x1.1’. The ellipses in this plot are illustrative.
Left: HST Optical, Center: GALEX NUV, Right: GALEX FUV.
Figure 4: The H$\alpha$ narrow band images (see §2) of GALFA Dw3 and Dw4 minus
the continuum emission (right column), alongside optical images from HST for
illustrative purposes (left column). We point out the elevated H$\alpha$
emission from the northeast corner of Dw3. GALFA Dw4 shows more H$\alpha$
emission within two clear regions, one at the southeast end of the dwarf and
the other at the northwest end. These regions match with the blue regions seen
in the HST imaging. North is up, east is left. Each image is 1.1’x1.1’. The
ellipses in this plot are illustrative.
## 4 Structural Parameters
Utilizing the HST imaging, we revisit the structural properties of these dwarf
galaxies, previously reported in Sand et al. (2015). To constrain the
structural parameters, we use the maximum-likelihood technique of Martin et
al. (2008), as implemented in Sand et al. (2009, 2012). First, we
select the stars consistent with the RGB as seen in Figure 2. We fit a
standard exponential profile plus constant background to the data, with the
following free parameters: the central position (RA0, DEC0), position angle,
ellipticity, half-light radius ($r_{h}$) and background surface density.
Uncertainties on structural parameters are determined by bootstrap resampling
the data 1000 times, from which 68% confidence limits are calculated. The
resulting structural parameters are summarized in Table 1.
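For concreteness, a stripped-down version of such a likelihood fit is sketched below. It assumes a circular exponential profile plus a uniform background over a field of known area, ignores field-edge truncation, and uses $r_h \approx 1.678\,r_e$ for an exponential profile; the actual fit additionally includes ellipticity and position angle as free parameters, with uncertainties from the bootstrap described above.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, x, y, area):
    """Negative log-likelihood of star positions (x, y) under an exponential
    dwarf profile plus a uniform background (circular simplification of the
    Martin et al. 2008 approach)."""
    x0, y0, rh, f_bg = params
    if rh <= 0 or not 0.0 <= f_bg <= 1.0:
        return np.inf
    re = rh / 1.678                                   # exponential scale length
    r = np.hypot(x - x0, y - y0)
    dwarf = np.exp(-r / re) / (2.0 * np.pi * re**2)   # unit-normalized profile
    dens = (1.0 - f_bg) * dwarf + f_bg / area         # mixture density per star
    return -np.sum(np.log(dens))

def fit_structure(x, y, area, p0):
    res = minimize(neg_log_like, p0, args=(x, y, area), method="Nelder-Mead")
    return res.x  # (x0, y0, r_h, background fraction)
```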
Note that while the derived parameters describe the older stellar populations
in our targets, both Dw3 and Dw4 host young populations that are highly
irregular in appearance and are concentrated in the HII regions in the case of
Dw4 (see Figure 5).
We derive the absolute magnitude of the dwarfs via direct aperture photometry
using an elliptical aperture with semi-major axis equal to the half-light
radius. We estimate the flux within this aperture (after background
correction), and multiply by a factor of two to account for the total flux of
the dwarf, and then convert to a magnitude. After applying our measured
distance modulus and correction for galactic extinction, we find
M${}_{V}=-12.8\pm 0.3$ and $-11.8\pm 0.3$ for Dw3 and Dw4, respectively. Our
results are consistent with the properties reported in Sand et al. (2015)
within the uncertainties. We then estimate the present day stellar mass from
the V band luminosity combined with the V-I color using the mass to light
ratio formalism from Bell & de Jong (2001):
$\log(M/L)_{V}=a_{V}+b_{V}\cdot(V-I)$ (1)
where $a_{V}=-1.476$ and $b_{V}=1.747$, with an assumed solar absolute
magnitude of $M_{V,\odot}=4.77$. This produces stellar masses of
2.1$\times$10$^{6}$ M⊙ and 2.6$\times$10$^{6}$ M⊙ for Dw3 and Dw4, respectively.
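As a worked check of Equation (1) against the values in Table 1:

```python
def stellar_mass(M_V, V_I):
    """Stellar mass from Eq. (1), with a_V = -1.476, b_V = 1.747 and
    a solar absolute magnitude M_V,sun = 4.77."""
    L_V = 10.0 ** (-0.4 * (M_V - 4.77))   # V-band luminosity in L_sun
    ML = 10.0 ** (-1.476 + 1.747 * V_I)   # mass-to-light ratio (M/L)_V
    return ML * L_V

print(f"{stellar_mass(-12.8, 0.44):.1e}")  # ~2.1e6 Msun (Dw3)
print(f"{stellar_mass(-11.8, 0.72):.1e}")  # ~2.6e6 Msun (Dw4)
```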
Our two targets broadly fit on the Local Group size-luminosity relations with
slightly higher than typical surface brightness (see Figure 6). These
properties are very similar to those found for Pisces A & B, two other gas-
rich dwarf galaxies initially found in the GALFA survey of HI compact objects
(Tollerud et al., 2015; Sand et al., 2015). Dw3 fits closer with the Local
Group size-luminosity relation and has similar properties to many objects
within the Local Group that are not satellites of the MW or M31. Dw4 appears
to be higher surface brightness than many of these objects and is the most
compact object at its magnitude (McConnachie et al., 2018), but has possible
analogues at the edge of the Local Group such as GR8 (Dohm-Palmer et al.,
1998; Tolstoy, 1999). This higher surface brightness when compared to Local
Group satellites is likely explained by the recent star formation in both
objects. These comparisons are discussed further in §6.2.
Table 1: Properties of GALFA Dw3 & Dw4 | GALFA Dw3 | GALFA Dw4
---|---|---
R.A. (J2000) | 02h:58m:56s.5$\pm$0.6 | 05h:45m:44s.7$\pm$0.5
Dec (J2000) | +13∘:37′:45″.4$\pm$0.5 | +10∘:46′:15″.7$\pm$0.3
l (deg) | 164.15 | 195.67
b (deg) | $-$38.84 | $-$24.70
GALFA ID | 044.7+13.6+528 | 086.4+10.8+611
Distance Modulus (mag) | 29.41$\pm$0.08 | 27.46$\pm$0.12
Distance (Mpc) | 7.61${}_{-0.29}^{+0.28}$ | 3.10${}_{-0.17}^{+0.16}$
mV (mag)aaVEGA Magnitude, derived from mF606W using the conversion from (Sahu et al., 2014) | 16.6$\pm$0.2 | 15.7$\pm$0.2
MV (mag)aaVEGA Magnitude, derived from mF606W using the conversion from (Sahu et al., 2014) | $-$12.8$\pm$0.3 | $-$11.8$\pm$0.3
V$-$I (mag) | 0.44 | 0.72
E(B$-$V)bbBased on the Schlafly & Finkbeiner (2011) dust maps | 0.134 | 0.531
AF606WbbBased on the Schlafly & Finkbeiner (2011) dust maps | 0.322 | 1.334
AF814WbbBased on the Schlafly & Finkbeiner (2011) dust maps | 0.207 | 0.811
rh (″) | 12.62$\pm$1.2 | 6.82$\pm$0.06
rh (pc) | 466$\pm$46 | 102$\pm$9
Ellipticity | 0.54$\pm$0.03 | 0.58$\pm$0.05
Position Angle (deg) | 56.4$\pm$1.7 | 100.4$\pm$1.8
fHα (erg s$^{-1}$ cm$^{-2}$) | 0.514$\pm$0.051$\times$10$^{-14}$ | 5.221$\pm$0.110$\times$10$^{-14}$
HI $v_{LSR}$ (km s$^{-1}$)ccFrom the GALFA survey, see Saul et al. (2012), using the erratum values | 528.59$\pm$18.90 | 614.53$\pm$40.83
H$\alpha$ $v_{LSR}$ (km s$^{-1}$)ddFrom Sand et al. (2015) | 503$\pm$35 | 607$\pm$35
Stot (Jy km s$^{-1}$)ccFrom the GALFA survey, see Saul et al. (2012), using the erratum values | 0.51 | 0.53
M⋆ (M⊙) | 2.1$\times$10$^{6}$ | 2.6$\times$10$^{6}$
MHI (M⊙) | 6.9$\times$10$^{6}$ | 1.2$\times$10$^{6}$
SFRNUV (M⊙ yr$^{-1}$) | 8.7$\pm$2.5$\times$10$^{-3}$ | –
SFRFUV (M⊙ yr$^{-1}$) | 8.7$\pm$0.6$\times$10$^{-4}$ | –
SFRHα (M⊙ yr$^{-1}$) | 3.77$\pm$0.47$\times$10$^{-4}$ | 1.37$\pm$0.15$\times$10$^{-3}$
Figure 5: Spatial distribution of point sources consistent with stellar
populations in GALFA Dw3 and Dw4. Point sources consistent with RGB stars are
shown in red; these are selected via matching to the RGB isochrones seen in
Figure 2. The blue points are those point sources consistent with a color of
(F606W0-F814W0)$<$0.1, which are consistent with MS and blue helium burning
stars. Only stars brighter than our 50% completeness limits are plotted. The
approximate position and size of the HII regions in both dwarfs are shown by
black outlines. The blue stars in Dw3 have a higher ellipticity than the RGB
populations, but are generally spread throughout the dwarf. In Dw4 there is a
concentration of blue stars around the HII region to the southeast, along with
several associated with the HII region to the northwest, but few in the main
body of the dwarf. Panels are 0.9′ squares. North is up, east is left. Figure
6: Absolute V-band magnitude as a function of half-light radius for GALFA Dw3
and Dw4 (blue stars) as compared to satellites of the MW and M31 (Red Inverted
Triangles) and other Local Group objects, i.e. those outside the virial radius
of either the MW or M31 (Black Squares). Pisces A & B are shown for comparison
(Cyan Triangles), along with Leo P (Green Circle). The lines of constant
central surface brightness assume an exponential profile and range from 16
$\textrm{mag/arcsec}^{2}$ to 30 $\textrm{mag/arcsec}^{2}$ with a line every
$\Delta$2 $\textrm{mag}/\textrm{arcsec}^{2}$.
### 4.1 HI mass
The HI mass for GALFA Dw3 and Dw4 can be calculated using the HI flux and the
distances derived in Section 3. This is done via the standard equation for an
optically thin gas (Haynes & Giovanelli, 1984):
$M_{HI}=2.356\times 10^{5}(D_{HI})^{2}S_{HI}M_{\odot}$ (2)
where $D_{HI}$ is the distance in Mpc and $S_{HI}$ is the flux in Jy km
s$^{-1}$. These values are reported in Table 1.
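For reference, plugging the Table 1 values into Equation (2):

```python
def hi_mass(d_mpc, s_jy_kms):
    """HI mass of an optically thin cloud, Eq. (2), in Msun."""
    return 2.356e5 * d_mpc**2 * s_jy_kms

print(f"{hi_mass(7.61, 0.51):.1e}")  # ~7.0e6 Msun (GALFA Dw3; Table 1 quotes 6.9e6)
print(f"{hi_mass(3.10, 0.53):.1e}")  # ~1.2e6 Msun (GALFA Dw4)
```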
We use the HI fluxes from Saul et al. (2012), adopting the revised flux values
from the erratum, and the distances derived here to compute HI masses for
GALFA Dw3 and Dw4. We note that these fluxes
are likely underestimated due to spatial and spectral smoothing procedures
employed by Saul et al. (2012). An example of this underestimation is present
in the discrepant fluxes for Pisces A and B, $\sim$1.2 and $\sim$1.6 Jy km
s$^{-1}$, respectively, found in Tollerud et al. (2015) compared to 0.445 and 0.957
Jy km s$^{-1}$ from Saul et al. (2012). Nevertheless, for the purpose of this work,
we carry on using the values from Saul et al. (2012) for Dw3 and Dw4.
Given their optical luminosities, both GALFA dwarfs are relatively gas rich,
with gas mass to light ratios of $\sim$0.6 M⊙/L⊙ for GALFA Dw3 and $\sim$0.3
M⊙/L⊙ for GALFA Dw4. These values are comparable to that of star forming
objects within the Local Group with similar absolute magnitudes to those of
the GALFA dwarfs (McConnachie, 2012). When we compare GALFA Dw3 and Dw4 to
Pisces A and B, we find that the former have smaller gas mass to light ratios
(Pisces A: $\sim$2.5 M⊙/L⊙, Pisces B: $\sim$2.7 M⊙/L⊙, Tollerud et al. 2016;
Tollerud & Peek 2018; Beale et al. 2020), though this may be due to the
underestimation of the HI fluxes discussed above. These gas masses are similar
to other isolated field objects which are gas rich and star forming (Geha et
al., 2012; Bradford et al., 2015; McQuinn et al., 2020).
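These ratios follow directly from the quoted masses and magnitudes; a minimal check, using the same $M_{V,\odot}=4.77$ convention as Section 4:

```python
def gas_to_light(m_hi, M_V):
    """HI mass over V-band luminosity, both in solar units."""
    L_V = 10.0 ** (-0.4 * (M_V - 4.77))
    return m_hi / L_V

print(f"{gas_to_light(6.9e6, -12.8):.2f}")  # ~0.65 (GALFA Dw3)
print(f"{gas_to_light(1.2e6, -11.8):.2f}")  # ~0.28 (GALFA Dw4)
```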
## 5 Star Formation Histories
It is immediately apparent from the HST images and the derived CMDs that GALFA
Dw3 & Dw4 are nearby star-forming dwarf galaxies. They have well-resolved
stellar populations: both show RGBs, asymptotic giant branch (AGB) stars, red
helium burning stars, blue helium burning stars, and MSs, along with an overall
irregular morphology and HII regions with young star clusters.
We attempted to use the CMD-fitting code MATCH (Dolphin, 2002) to determine
the SFHs of GALFA Dw3 and Dw4 similar to other works in the Local Volume (e.g.
McQuinn et al., 2010; Weisz et al., 2011, 2014). However, the distance to
these dwarfs and the shallow nature of the CMDs meant that the results did not
provide meaningful constraints on the SFH of either dwarf, other than an
indication of active star formation within the past 100 Myrs. Therefore we
have qualitatively analyzed each dwarf’s possible SFH via comparison to the
Padova isochrones (Bressan et al., 2012) and multi-wavelength observations,
similar to other works with Local Volume low-mass dwarfs where more in depth
analysis has not been possible (e.g. McQuinn et al., 2015a, 2020).
### 5.1 GALFA Dw3
#### 5.1.1 Isochrone Comparisons
The CMD of GALFA Dw3 reveals a complex SFH, with both young and old stellar
populations. We point the reader to the left panel of Figure 2 to guide this
discussion, where we denote stars in the main body of GALFA Dw3, along with
those associated with its HII region (see discussion below), and plot relevant
isochrones of varying age and metallicity.
There are several faint, blue stars (with 23 $\lesssim$F814W0$\lesssim$25 and
(F606W0$-$F814W0) $<$ $-$0.1 mag) that are likely young MS stars, with an
approximate age of $\sim$10 Myrs. Other young MS stars are apparent at fainter
magnitudes. A sequence of stars spanning the same F814W0 range at slightly
redder colors ((F606W0$-$F814W0) $\approx$ 0.0 mag) is likely a combination of
slightly older MS stars and a blue helium burning sequence, complementing
the red helium burning sequence visible at 22
$\lesssim$F814W0$\lesssim$24.5 and 0.7 $\lesssim$(F606W0$-$F814W${}_{0})$
$\lesssim$ 1.0 mag. An RGB is apparent at faint magnitudes (see the TRGB at
F814W0=25.4 mag), likely corresponding to an ancient and metal poor stellar
population ($>$10–12 Gyr, [Fe/H]$\approx$$-$1.6). Stars immediately above the
TRGB may be intermediate age AGB stars, or luminous helium burning stars.
The separation of the helium burning branches is a strong indicator of
metallicity, with a wider separation for more metal rich systems (Radburn-
Smith et al., 2011), while the length and width of the branches are a good
indicator of the age of the stars (McQuinn et al., 2011). Approximate
properties of stellar populations can even be derived for systems with very
few member stars (e.g. Sand et al., 2017). Using the approximate length of the
red helium burning branch as a guide, we estimate a stellar population with
ages between 25–100 Myrs. However, for stars older than this, the red helium
burning branch becomes hard to distinguish from AGB and RGB stars
(McQuinn et al., 2015b). The blue helium burning branch shows stars with ages
between 25–250 Myrs. The upper limit on the duration of this star formation is
determined by the completeness of the HST data. Star formation may have
happened earlier than this estimated age; however, deeper data would be required to
determine this.
The size and separation of the helium burning branches in Dw3 indicate a
population with [Fe/H]$\approx-1.0$, based on an approximate match to
isochrones. A metallicity of [Fe/H]$=-1.0$ is consistent with other galaxies
of similar luminosity as Dw3 (MV=$-$12.8) based on the standard
luminosity–metallicity relation for Local Volume galaxies (Berg et al., 2012).
It is also consistent within 1$\sigma$ with the possible
luminosity–metallicity relation for void galaxies (Pustilnik et al., 2016;
Kniazev et al., 2018; McQuinn et al., 2020).
Generally, dwarf irregulars form stars in bursts (Weisz et al., 2011), and
this is also backed up by simulations (Wheeler et al., 2019). Deeper
observations would be required to distinguish between continuous star
formation and more episodic, bursty star formation in Dw3. Finally, isochrone
fitting in the main body (excluding the HII region) shows a well populated
young MS of stars below m${}_{F814W}\approx 25.5$. If this is the MS turnoff
for the majority of the dwarf, it would show that star formation across most
of the dwarf ceased $\sim$20 Myrs ago.
#### 5.1.2 H$\alpha$ Imaging
The H$\alpha$ imaging of GALFA Dw3 (see Section 2) reveals a single HII region
located at the northeast edge of the dwarf; this image is shown alongside the
HST image in Figure 4. The H$\alpha$ imaging shows a flux of
0.514$\pm$0.051$\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$, which, combined with
the distance, foreground extinction and the conversion factor from Kennicutt
(1998), implies a star formation rate of 3.77$\pm$0.47$\times$10$^{-4}$ M⊙ yr$^{-1}$.
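A sketch of this conversion is below. The Kennicutt (1998) calibration is SFR $=7.9\times10^{-42}\,L(\mathrm{H}\alpha)$ with $L$ in erg s$^{-1}$; the foreground-extinction coefficient $A(\mathrm{H}\alpha)\approx2.53\,E(B-V)$ is our assumption, as the exact value used is not stated here.

```python
import numpy as np

MPC_IN_CM = 3.0857e24  # centimetres per megaparsec

def sfr_halpha(flux_cgs, d_mpc, ebv):
    """Deredden the H-alpha flux (assumed A(Ha) ~ 2.53 E(B-V)), convert to
    luminosity, and apply the Kennicutt (1998) calibration."""
    flux = flux_cgs * 10.0 ** (0.4 * 2.53 * ebv)
    lum = 4.0 * np.pi * (d_mpc * MPC_IN_CM) ** 2 * flux  # erg/s
    return 7.9e-42 * lum                                 # Msun/yr

print(f"{sfr_halpha(0.514e-14, 7.61, 0.134):.2e}")  # ~3.8e-4 Msun/yr, close to the quoted value
```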
If we limit the CMD to only those stars with a spatial position consistent
with this HII region, we can see that the H$\alpha$ emission may be caused by
a single MS O-star with a maximum age of 5 Myrs (see Figure 2 and 4). In this
region we also see a population of lower-mass young MS stars as well as red
and blue helium burning stars at higher density than across the main body of
the dwarf. The RGB is at a similar density in the HII region when compared to
the rest of the dwarf at a similar radius, indicating the overdensity of
younger stars is not simply a result of higher overall stellar density in this
region.
We also find a point source (F814W0=23.2 and F606W0$-$F814W0=$-$0.25),
consistent with an O-star of O5 spectral class (MV=$-$5.03; see the
smoothed magnitudes in Table 1 of Wegner, 2000), outside of the HII region.
This star should be massive and young enough to drive H$\alpha$ emission, yet
we see no H$\alpha$ emission from its position, which allows us to draw some conclusions.
The first idea would be that this is a blended multiple star system (see the
Leo P analysis in McQuinn et al. 2015a). If we assume equally massed component
stars, then these components would be O8 (MV=$-$4.3) class stars, which would
still be large enough to drive H$\alpha$ emission (even an equally massed
triple star system would have components large enough to produce H$\alpha$).
Alternatively, this source may be an evolved helium burning star that has been
scattered by noise into the region of the CMD equivalent to the MS.
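The component magnitudes quoted above follow from the standard blending arithmetic: an unresolved pair of equal-luminosity stars appears brighter than either component by $2.5\log_{10}2\approx0.75$ mag.

```python
import numpy as np

delta = 2.5 * np.log10(2.0)  # brightening from an unresolved equal-luminosity pair
print(-5.03 + delta)         # each component: ~ -4.28, i.e. roughly an O8 star
```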
#### 5.1.3 GALEX
As an additional method to determine the level and spatial position of recent
star formation in GALFA Dw3, we checked the GALEX archive for the dwarf’s
ultraviolet emission. Dw3’s position was observed by GALEX as part of the All-
sky Imaging Survey (AIS, exposure time $\sim$270s). These GALEX images can be
seen alongside the HST images in Figure 3.
The GALEX data shows diffuse NUV and FUV emission across the body of Dw3,
though slightly more concentrated toward the north. We see some concentration
of FUV emission in the HII region found in the H$\alpha$ imaging, however the
majority is spread across the dwarf. This significant NUV and FUV emission
confirms the conclusion from the isochrone fitting that significant star
formation has occurred across the dwarf within the last 100 Myrs (Calzetti,
2013).
The detected level of NUV emission indicates that GALFA Dw3 has had recent
star formation at a rate of 8.7$\pm$2.5$\times$10$^{-3}$ M⊙ yr$^{-1}$, whereas the FUV
emission indicates an order of magnitude lower star formation rate of
8.7$\pm$0.6$\times$10$^{-4}$ M⊙ yr$^{-1}$. Both star formation rates were calculated
using the relevant relations from Iglesias-Páramo et al. (2006). These
relations have been shown to be potentially unreliable in low metallicity
galaxies like GALFA Dw3 (McQuinn et al., 2015a), in which case the star
formation rate may be up to $\sim$1.5 times higher than indicated, although this
does not affect our overall results. The difference between the star formation
rates drawn from the NUV and FUV emission may indicate that star formation in
Dw3 has decreased significantly in the last $\sim$100 Myr. This is reinforced
by the SFR derived from the H$\alpha$ imaging above (3.77$\pm$0.47$\times$10$^{-4}$
M⊙ yr$^{-1}$), which is comparable to, but slightly lower than, the rate derived
from the FUV emission. This difference in star formation rates between the tracers
examined here can be explained by their differing sensitivity to different
ages of star formation: NUV is sensitive to all star formation across the last
100 Myrs, FUV is most sensitive to stars formed in the last 10 Myrs (though
there is some FUV sensitivity to populations up to 100 Myrs old; Calzetti,
2013), and H$\alpha$ traces only star formation within the last 10 Myrs.
The UV emission coming from across the dwarf, along with the difference
between the H$\alpha$, NUV and FUV, supports the conclusion drawn from the
isochrone matching: that star formation was higher and more widespread in Dw3
in the recent past ($\lesssim$100 Myr), but has now quenched across most of
the dwarf, and that there is ongoing star formation only in the single HII
region (in the last $\sim$10 Myr).
#### 5.1.4 Spatial structure
Another diagnostic that we can use to analyze GALFA Dw3 is spatial maps (see
Figure 5). When the stars are plotted on spatial maps, we can see that the MS
stars are concentrated in the central regions of the dwarf, have a more
elliptical distribution and are preferentially found toward the northern end
of the galaxy. This is true for all MS stars, aside from the very brightest
which are only found in the HII region. This is in contrast to the RGB stars
which are more evenly distributed throughout the galaxy. The helium burning
stars are also more concentrated towards the center of the dwarf when compared
to the RGB stars, however the concentration is less pronounced than it is for
the MS stars.
When we examine the star positions and compare them to the multi-wavelength
observations, we find a strong match between the MS stars and the NUV
emission.
#### 5.1.5 Summary
GALFA Dw3 shows an underlying old ($>$10–12 Gyr) metal poor
([Fe/H]$\approx$$-$1.6) stellar population across the body of the dwarf. There
are also younger stellar populations. In the CMD we find well populated red
and blue helium burning branches (20–100 Myr) across the body of the dwarf;
this population can also be seen in the UV emission from Dw3 (see Figure 3).
Finally we also find evidence in the CMD and H$\alpha$ emission for a very
young population ($<$20 Myr) that is spatially limited to a single HII region
in the northeast of the dwarf (see Figure 4).
The differences in the spatial position and extent of the tracers of different
ages of star formation can be used to reconstruct a qualitative SFH for GALFA
Dw3: the star formation was at a higher level and distributed more evenly
throughout the dwarf in the recent past, but is now restricted to a single HII
region. This could indicate that GALFA Dw3 is concluding an episode of recent
star formation that has now been quenched outside of the HII region. This
interpretation appears to support the model that star formation in isolated
dwarf galaxies is driven by a series of ‘bursts’ of intense star formation,
interspersed with periods of quiescence (Weisz et al., 2011). In this model,
galaxies go through intense bursts of active star formation which expels the
HI gas through stellar feedback. This expulsion of the neutral gas causes the
star formation to wane and the feedback to decrease. Without feedback, more HI
gas falls onto the dwarf, producing a new episode of star formation (Wheeler
et al., 2019). In this case, GALFA Dw3 would be in the concluding part of such
a star forming episode, with the HII region hosting the last remnants of an
active burst. More detailed HI observations may be needed to determine the position
and kinematic properties of the gas, as the existing HI information from the
GALFA survey is low resolution (Saul et al., 2012).
### 5.2 GALFA Dw4
The position of GALFA Dw4 near the Galactic plane complicates the construction
of a comprehensive SFH due to the high extinction (particularly in the UV).
#### 5.2.1 Isochrone Comparisons
The CMD of GALFA Dw4 also reveals a complex SFH, with both young and old
stellar populations, however there are substantial differences between Dw3 and
Dw4. We point the reader to the right panel of Figure 2 to guide this
discussion, where we denote stars in the main body of GALFA Dw4, along with
those associated with both of its HII regions (see discussion below), and plot
relevant isochrones of varying age and metallicity.
Isochrone matching of the red and blue helium burning branches in GALFA Dw4
indicates a metallicity of [Fe/H]$\approx-$1.0 and ages of 50-500 Myrs, based
on the branches’ length and separation. Similar to Dw3, the red helium burning
branch shows stars with ages between 50-100 Myrs, with the blue helium burning
branch showing stars from 100-500 Myrs. The upper age boundary is limited by
the completeness of the CMD so star formation may have started even earlier
than 500 Myrs ago, but this cannot be determined without a deeper CMD. This
metallicity is consistent within 1$\sigma$ with the luminosity–metallicity
relationship for Local Volume dwarfs (Berg et al., 2012).
Isochrone matching also shows Dw4 has an ancient ($>$10–12 Gyrs), low-
metallicity ([Fe/H]$\approx-$1.6) RGB. We see some evidence for a limited
metallicity spread in the RGB, with some stars being consistent with
[Fe/H]$\approx-$1.0, or even slightly more metal-rich, and with most likely
member stars being part of this ancient population.
Isochrone matching of Dw4 indicates that there are relatively few young MS
stars when compared to the stars of the helium burning branches. This could
mean that the current star formation rate is at a lower level when compared to
a few hundred Myrs ago when the stars that now make up the helium burning
branches were formed. This could also be a function of the very low mass of
Dw4: even if there is active star formation, very few high-mass MS stars are
formed, and the higher density of helium burning branch stars would then
reflect the initial stellar mass function rather than differences in star
formation rate over time. This would be similar to other isolated low mass
dwarfs such as Leo P or Leoncino (McQuinn et al., 2015a, 2020). It is also
possible that the young MS stars are being missed for some reason, and this
possibility will be explored below.
#### 5.2.2 H$\alpha$ Imaging
The H$\alpha$ imaging (see §2) shows that Dw4 has two HII regions, one at each
end of the galaxy, which match the blue regions seen in the HST imaging (see
Figure 4). We find an H$\alpha$ flux of 4.184$\pm$0.097$\times$10$^{-14}$ erg s$^{-1}$
cm$^{-2}$ for the southeast region and 1.037$\pm$0.052$\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$
for the northwest region, for a total H$\alpha$ flux of
5.221$\pm$0.110$\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$. Combined with the distance,
foreground extinction and the conversion factor from Kennicutt (1998), this
flux implies a star formation rate of 1.37$\pm$0.15$\times$10$^{-3}$ M⊙ yr$^{-1}$.
When we examine these HII regions in the CMD (see the right panel of Figure
2), we find that there are no obvious O-stars to drive the H$\alpha$ emission.
This could be caused by internal extinction within Dw4, which could cause the
MS O-stars to appear as stars at the upper end of the blue helium burning
branch. For this to be the case, the HII regions would have to be obscured by
enough dust to cause extinction of AF606W$\approx$1.1 and AF814W$\approx$0.7.
This level of extinction is substantially higher than the internal extinction
reported for other dwarf galaxies (McQuinn et al., 2010) and is far larger
than variations in the foreground extinction.
Another possibility is that the stars which are driving H$\alpha$ emission are
visible, but are not recovered in our point source photometry because they
were culled at some stage in our reductions. To test this possibility, a CMD
for Dw4 was constructed using the DOLPHOT catalog, but with the photometric
quality cuts severely relaxed. This did not detect any sources with color and
brightness consistent with MS O-stars across Dw4. We have also used the ASTs
to confirm that artificial stars with properties similar to MS O-stars are
successfully recovered by DOLPHOT in the HII regions of Dw4. We also tried a
similar reduction in photometric quality cuts with the crowded photometry
discussed in §2, and this yielded a few point sources consistent with MS
O-stars of the spectral classes O7-O9. These could be the source of the
H$\alpha$ emission; however, these poorly recovered sources are generally too
blue to be MS O-stars. We have considered that these objects may be O-stars
with line contamination from the HII region sufficient to move them off the MS
in the CMD, but this contamination would have to be larger than expected
to have the observed effect. On the other hand, equivalent point sources are
not found in the parallel field, indicating they are unique to the dwarf.
Therefore, it is possible these are the MS O-stars, but they are in areas of
the dwarf that preclude clean photometric recovery with the present data.
It is also possible that a combination of the above scenarios is the reason
we see no MS O-stars in Dw4 despite the presence of H$\alpha$ emission. In
this case, internal extinction obscures and blurs the O-stars such that they
are not recovered clearly by DOLPHOT.
The two HII regions also contain most of the lower-mass MS stars seen in Dw4
(see Figure 5). This indicates that star formation is currently limited to
these two regions. We also see overdensities of red and blue helium burning
stars in the HII regions compared to the dwarf as a whole. RGB stars appear to
be at a similar density in the HII regions when compared to other parts of the
dwarf with similar radius, indicating the overdensities of young stars are
genuine and not caused by general stellar overdensities in these regions.
#### 5.2.3 SWIFT UVOT
As stated in §2, GALFA Dw4 is outside of the GALEX footprint due to its
proximity to the Galactic plane. Therefore, to get UV information on this
object, Swift UVOT (Roming et al., 2005) observations were required. These
were taken as part of proposal 1417202 (P.I. L. Hagen) to observe the UV dust
extinction properties in GALFA Dw4 (along with 4 other Local Volume dwarfs).
Despite a reasonable total exposure time ($\sim$1100 s), these Swift images
show no detectable UV emission from Dw4 in any of the 3
filters examined (UVW1, UVM2, UVW2). This is likely due to the high levels of
extinction around Dw4 (see Table 1). The H$\alpha$ emission from Dw4, combined
with the presence of bright MS stars in the HST imaging, means it is likely
that there is UV emission from Dw4, but that it is not observable with the
present data due to the previously mentioned high levels of extinction.
#### 5.2.4 Spatial structure
In Dw4 the RGB stars are spread throughout the dwarf while the young MS and
helium burning stars are largely confined to regions near the HII regions.
These younger stellar populations are at higher relative density at either end
of the dwarf near the HII regions, see Figure 5. We find that older helium
burning stars are more evenly spread throughout the dwarf, though still more
concentrated towards the current HII regions than the RGB stars. This may be
the result of previous star formation being more evenly distributed, or a
result of these older stars having had time to mix through the dwarf since
they formed.
#### 5.2.5 Summary
GALFA Dw4 has an old ($\gtrsim$10–12 Gyrs) metal poor ([Fe/H]$\approx-$1.6)
stellar population, with some evidence for a metallicity spread in the RGB. We
also see younger stellar populations, with well populated red and blue helium
burning sequences, and young MS stars. This is supported by H$\alpha$ imaging
which shows emission concentrated in two regions at either end of the dwarf at
the same position as the young stellar populations in the CMD. Therefore we
conclude that star formation in Dw4 is limited to the HII regions at either
end of the dwarf. We also find that star formation has been ongoing for $>$500
Myrs, and seems to be more concentrated in the HII regions. This can be seen
by the concentration of young stars in these regions compared to the RGB
stars, along with the H$\alpha$ emission. However, our conclusions here are
less robust than for Dw3. This is due to the lack of UV information and the
lower total number of stars in Dw4, which makes it difficult to derive concrete
information from its stellar populations.
## 6 Discussion
Having determined the distance (§3), structural properties (§4) and
qualitative SFHs (§5) of both GALFA Dw3 and Dw4, we are in a position to
discuss these galaxies in detail.
### 6.1 Environment
We began exploring the environment around both GALFA Dw3 & Dw4 using their
newly derived distances and a search of the NASA Extragalactic Database
(NED)333http://ned.ipac.caltech.edu/. We searched for any catalogued objects
within $\sim$5 degrees of angular separation and a relative velocity
difference between $-$400 and +600 km s$^{-1}$ (this range was chosen to avoid
contamination by Galactic objects with velocities less than the MW escape
velocity). This search showed that both GALFA Dw3 & Dw4 are extremely
isolated, confirming the result from Sand et al. (2015). In addition, catalogs
of known galaxies were searched for objects near either galaxy, and we
found nothing within 1.5 Mpc of either dwarf (Karachentsev et al., 2013).
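A search of this kind can be scripted; the sketch below uses astroquery's NED interface and should be read as illustrative rather than a record of our exact procedure, since it assumes a single 5-degree cone search is accepted by the service and applies the velocity cut client-side on the (typically masked) Velocity column.

```python
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord
from astroquery.ipac.ned import Ned

# Cone search around GALFA Dw3 (coordinates from Table 1).
dw3 = SkyCoord("02h58m56.5s", "+13d37m45.4s", frame="icrs")
table = Ned.query_region(dw3, radius=5 * u.deg)

# Keep objects within -400/+600 km/s of the dwarf's HI velocity.
v = np.array(table["Velocity"].filled(np.nan), dtype=float)
keep = (v > 528.59 - 400.0) & (v < 528.59 + 600.0)
candidates = table[keep]
```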
The closest known object to GALFA Dw3 is NGC1156, which has a distance
consistent with that of GALFA Dw3 at 7.6$\pm$0.7 Mpc (Kim et al., 2012);
however, with a projected separation of 11.61 degrees (1.54 Mpc at the distance
of Dw3/NGC1156) and a velocity separation of 155 km s$^{-1}$ (Karachentsev et al.,
2013), we consider direct association at the present time to be unlikely.
GALFA Dw4 is projected near the Orion Dwarf and A0554; however, these objects are more distant, at D$\sim$6.8 Mpc (Anand et al., 2019) and D$\sim$5.5 Mpc (Karachentsev & Musella, 1996) respectively, and we therefore consider an association to be unlikely. The closest object to GALFA Dw4 is the HI source HIPASS J0630+08, with an angular separation of 11.2 degrees (a projected separation of 0.78 Mpc at the distance of Dw4) and a velocity difference of 240 km s$^{-1}$ (Karachentsev et al., 2013). This is a HI source with no detected optical counterpart (Donley et al., 2005). We find that A0554 is the closest object with an optical counterpart, though it remains very distant, with a radial separation of 2.3 Mpc and a projected separation of 220 kpc. However, as GALFA Dw4 is in the ‘zone of avoidance’ around the Galactic plane, relatively few deep wide-field optical surveys have been done in the area, and therefore it cannot be ruled out that there may be other undetected galaxies closer than A0554. GALFA Dw4 is also unusual in that it has a large peculiar velocity of $\sim$+350 km s$^{-1}$, which is unexpected for isolated systems, which tend to move with the Hubble flow (Anand et al., 2019).
The isolation of GALFA Dw3 & Dw4 can be seen in Figure 7, where the dwarfs are
shown to be ‘below’ the supergalactic plane in very low density regions of the
Local Volume. Therefore we conclude that both GALFA Dw3 & Dw4 are truly
isolated with no other objects close enough to influence them at the current
time or in the recent past. This isolation allows us to use them as probes
into how star formation and galaxy evolution occur in isolated low-mass
galaxies.
Figure 7: The location of GALFA Dw3 and Dw4 in the Local Volume. GALFA Dw3 and
Dw4 are shown as blue stars and labelled, as are Pisces A and B (cyan triangles), while the black dots are a 10 Mpc volume-limited sample of nearby
galaxies (Karachentsev et al., 2013). The coordinates are supergalactic
Cartesian with Earth at the center, oriented such that the x-axis points
towards the origin and the z-axis points towards the Local Void (Lahav et al.,
2000).
### 6.2 Local Volume Analogs
We have examined other Local Volume dwarf galaxies to compare the properties
of GALFA Dw3 and Dw4 with other low mass systems.
GALFA Dw3 & Dw4 have very similar physical properties to Pisces A & B, which
were also found in follow-up to the GALFA survey (Tollerud et al., 2015; Sand
et al., 2015). All of these objects are very isolated, however Pisces A and B
were theorised to be falling into local filamentary structure after spending
most of cosmic time at the edge of the Local Void (Tollerud et al., 2016),
which is speculated to have triggered recent star formation in Pisces A & B.
The other object from Sand et al. (2015), ALFALFA Dw1 (also referred to as AGC 226067 or SECCO 1; Bellazzini et al., 2015), shows stellar populations approximately consistent with a single burst of star formation with an age range of $\sim$7–50 Myr (Sand et al., 2017), with no accompanying old stellar population, the latter being typical of almost all known dwarf galaxies. Based on this and other results in the literature on this object (Bellazzini et al., 2015; Adams et al., 2015; Beccari et al., 2016), there is circumstantial evidence that ALFALFA Dw1 is a distant star-forming remnant of a ram pressure stripping event in the M86 subgroup, as recent simulations have predicted (Kapferer et al., 2009; Tonnesen & Bryan, 2012), and it is therefore a very different class of object; it is possible that similar systems will be minor contaminants in field dwarf searches.
In Figure 6, there are a number of Local Volume objects that have physical properties similar to GALFA Dw3 and Dw4. UGC9128 is an isolated Local Volume object (D$\sim$2.3 Mpc; Tully et al., 2013) and is a good analog for Dw3. It has very similar physical properties and a recent SFH comparable to Dw3, with recent star formation throughout the dwarf but current star formation limited to a few small regions (McQuinn et al., 2010). UGC9128 shows evidence of having had three bursts of star formation in the last $\sim$500 Myr (McQuinn et al., 2010). GR8 (DDO155/UGC8091) is a star forming dwarf in the Local Volume with a distance of $\sim$2.2 Mpc (Tully et al., 2013), and it has very similar physical properties to GALFA Dw4. In GR8, star formation is limited to HII complexes that seem to arise in associated regions approximately 100–200 pc in size and last for $\sim$100 Myr before star formation ceases and new regions begin to actively form stars (Dohm-Palmer et al., 1998; Tolstoy, 1999).
The Survey of HI in Extremely Low-mass Dwarfs (SHIELD) galaxies (Cannon et
al., 2011) are a selection of 12 galaxies initially detected in ALFALFA
(Giovanelli et al., 2005; Haynes et al., 2018) data. These galaxies were
selected based on low HI and stellar mass estimates. In terms of absolute
magnitude and gas mass, the SHIELD galaxies are in the same range as GALFA Dw3
and Dw4. Examination of the SFHs of the SHIELD galaxies also shows a recent
star formation rate consistent with that derived for Dw3 (see §5.1.3). The
SHIELD galaxies are found in a number of different environments, with three
(AGC 748778, AGC 174605, and AGC 74923) being isolated ($>$1 Mpc from their
nearest neighbors, McQuinn et al., 2014, 2015b). These objects have very
similar physical properties to Dw3, making them potentially good analogs,
while Dw4 is fainter and physically smaller than the typical SHIELD galaxy. As previously mentioned, Dw4 is one of the most compact objects detected at its luminosity.
## 7 Conclusions
We have presented HST imaging of GALFA Dw3 and Dw4, two Local Volume dwarf
galaxies which were initially discovered as optical counterparts to compact HI
clouds in the GALFA survey. Both dwarfs resolve into stars, displaying complex
stellar populations, including an old red giant branch, young helium burning
sequences and main sequence stars. Each system also has young star clusters
and HII regions which are evident in our H$\alpha$ imaging. In detail, the two
dwarfs appear to have slightly different star formation histories based on a
qualitative assessment of their CMDs and on the available UV data. GALFA Dw3
shows signs of a recently ceased episode of active star formation; although it
is not well constrained, Dw4 seems to have a more consistent level of star
formation within spatially limited HII regions at either end of the dwarf.
Using the resolved CMDs, we measure the distance to each dwarf using the TRGB
method, finding $D$=7.61${}_{-0.29}^{+0.28}$ Mpc and
$D$=3.10${}_{-0.17}^{+0.16}$ Mpc for GALFA Dw3 and Dw4, respectively. With
this information in hand, we found each dwarf to be extremely isolated, with
no known neighbor within $\sim$1.5 Mpc, suggesting that neither galaxy has
experienced a significant environmental influence.
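As a numerical illustration of the TRGB method referenced above, the distance follows from the distance modulus of the RGB tip; the sketch below shows only this conversion, with a hypothetical tip magnitude and a typical $I$-band absolute magnitude $M_{\rm TRGB}\approx-4$, not our measured values or calibration.

```python
# Distance from a TRGB distance modulus (illustrative values only).
def trgb_distance_mpc(m_trgb, M_trgb=-4.0):
    """D = 10**((mu + 5)/5) pc with mu = m - M; M_trgb ~ -4 is a typical I-band value."""
    mu = m_trgb - M_trgb
    return 10.0 ** ((mu + 5.0) / 5.0) / 1.0e6

# A hypothetical tip magnitude of m = 25.4 gives D ~ 7.6 Mpc:
print(round(trgb_distance_mpc(25.4), 1))
```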
GALFA Dw3 and Dw4 are similar to other Local Volume dwarfs initially detected in wide field HI surveys (see §6.2; Cannon et al., 2011; Tollerud et al., 2015; Sand et al., 2015). The lack of detections of new gas-rich low-mass dwarf galaxies within the Local Group (similar to Leo P or Leo T; Irwin et al., 2007; Rhode et al., 2013) in these surveys indicates that these ‘mini-halos’ are likely rare. The lack of new Local Group objects found in the GALFA survey has been used to examine a potential link between HI gas in dwarfs and the lower mass limit for reionization (Tollerud & Peek, 2018). That study found that the lack of detections would be very unlikely if these objects were common, and that this rarity could be used to determine the lower mass limit for reionization (see Tollerud & Peek, 2018, for more details).
GALFA Dw3 and Dw4 (and other systems like them, such as Pisces A and B;
Tollerud et al. 2016) present a unique opportunity to examine low-metallicity
isolated dwarf galaxies analogous to the earliest galaxies in the Universe.
Further work on GALFA Dw3 and Dw4, and related objects, will include gas phase
metallicity measurements (e.g. Hirschauer et al., 2016; McQuinn et al., 2020)
and high resolution HI mapping (e.g. Beale et al., 2020) to further understand
the driving mechanisms of the structure and evolution of the faintest dwarf
galaxies.
Research by PB is supported by NASA through grant number HST-GO-14796.005-A
from the Space Telescope Science Institute which is operated by AURA, Inc.,
under NASA contract NAS 5-26555. Research by DJS is supported by NSF grants
AST-1821967 and AST-1813708. Research by DC is supported by NSF grant
AST-1814208, and by NASA through grants number HST-GO-15426.007-A and HST-
GO-15332.004-A from the Space Telescope Science Institute, which is operated
by AURA, Inc., under NASA contract NAS 5-26555. BMP is supported by an NSF
Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001663. EO
is partially supported by NSF grant AST-1815767. JS acknowledges support from
the Packard Foundation. This publication utilizes data from Galactic ALFA HI
(GALFA HI) survey data set obtained with the Arecibo L-band Feed Array (ALFA)
on the Arecibo 305m telescope. The Arecibo Observatory is operated by SRI
International under a cooperative agreement with the National Science
Foundation (AST-1100968), and in alliance with Ana G. Méndez-Universidad
Metropolitana, and the Universities Space Research Association. The GALFA HI
surveys have been funded by the NSF through grants to Columbia University, the
University of Wisconsin, and the University of California.
## References
* Adams et al. (2013) Adams, E. A. K., Giovanelli, R., & Haynes, M. P. 2013, ApJ, 768, 77
* Adams et al. (2015) Adams, E. A. K., Faerman, Y., Janesh, W. F., et al. 2015, A&A, 573, L3
* Anand et al. (2019) Anand, G. S., Tully, R. B., Rizzi, L., Shaya, E. J., & Karachentsev, I. D. 2019, ApJ, 880, 52
* Beale et al. (2020) Beale, L., Donovan Meyer, J., Tollerud, E. J., Putman, M. E., & Peek, J. E. G. 2020, arXiv e-prints, arXiv:2009.09145
* Beccari et al. (2016) Beccari, G., Bellazzini, M., Battaglia, G., et al. 2016, A&A, 591, A56
* Bell & de Jong (2001) Bell, E. F., & de Jong, R. S. 2001, ApJ, 550, 212
* Bellazzini et al. (2015) Bellazzini, M., Magrini, L., Mucciarelli, A., et al. 2015, ApJ, 800, L15
* Bennet et al. (2019) Bennet, P., Sand, D. J., Crnojević, D., et al. 2019, ApJ, 885, 153
* Bennet et al. (2020) —. 2020, ApJ, 893, L9
* Berg et al. (2012) Berg, D. A., Skillman, E. D., Marble, A. R., et al. 2012, ApJ, 754, 98
* Bradford et al. (2015) Bradford, J. D., Geha, M. C., & Blanton, M. R. 2015, ApJ, 809, 146
* Bressan et al. (2012) Bressan, A., Marigo, P., Girardi, L., et al. 2012, MNRAS, 427, 127
* Bullock & Boylan-Kolchin (2017) Bullock, J. S., & Boylan-Kolchin, M. 2017, Annual Review of Astronomy and Astrophysics, 55, 343
* Calzetti (2013) Calzetti, D. 2013, Star Formation Rate Indicators, ed. J. Falcón-Barroso & J. H. Knapen, 419
* Cannon et al. (2011) Cannon, J. M., Giovanelli, R., Haynes, M. P., et al. 2011, ApJ, 739, L22
* Carlsten et al. (2020) Carlsten, S. G., Greco, J. P., Beaton, R. L., & Greene, J. E. 2020, ApJ, 891, 144
* Crnojević et al. (2019) Crnojević, D., Sand, D. J., Bennet, P., et al. 2019, ApJ, 872, 80
* Da Costa & Armandroff (1990) Da Costa, G. S., & Armandroff, T. E. 1990, AJ, 100, 162
* Dohm-Palmer et al. (1998) Dohm-Palmer, R. C., Skillman, E. D., Gallagher, J., et al. 1998, AJ, 116, 1227
* Dolphin (2000) Dolphin, A. E. 2000, PASP, 112, 1383
* Dolphin (2002) —. 2002, MNRAS, 332, 91
* Donley et al. (2005) Donley, J. L., Staveley-Smith, L., Kraan-Korteweg, R. C., et al. 2005, AJ, 129, 220
* Freedman et al. (2020) Freedman, W. L., Madore, B. F., Hoyt, T., et al. 2020, ApJ, 891, 57
* Garrison-Kimmel et al. (2014) Garrison-Kimmel, S., Boylan-Kolchin, M., Bullock, J. S., & Lee, K. 2014, MNRAS, 438, 2578
* Garrison-Kimmel et al. (2019) Garrison-Kimmel, S., Wetzel, A., Hopkins, P. F., et al. 2019, MNRAS, 489, 4574
* Geha et al. (2012) Geha, M., Blanton, M. R., Yan, R., & Tinker, J. L. 2012, ApJ, 757, 85
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
* Giovanelli et al. (2010) Giovanelli, R., Haynes, M. P., Kent, B. R., & Adams, E. A. K. 2010, ApJ, 708, L22
* Giovanelli et al. (2005) Giovanelli, R., Haynes, M. P., Kent, B. R., et al. 2005, AJ, 130, 2598
* Haynes & Giovanelli (1984) Haynes, M. P., & Giovanelli, R. 1984, AJ, 89, 758
* Haynes et al. (2018) Haynes, M. P., Giovanelli, R., Kent, B. R., et al. 2018, ApJ, 861, 49
* Hirschauer et al. (2016) Hirschauer, A. S., Salzer, J. J., Skillman, E. D., et al. 2016, ApJ, 822, 108
* Iglesias-Páramo et al. (2006) Iglesias-Páramo, J., Buat, V., Takeuchi, T. T., et al. 2006, ApJS, 164, 38
* Irwin et al. (2007) Irwin, M. J., Belokurov, V., Evans, N. W., et al. 2007, ApJ, 656, L13
* Jang & Lee (2017) Jang, I. S., & Lee, M. G. 2017, ApJ, 835, 28
* Kapferer et al. (2009) Kapferer, W., Sluka, C., Schindler, S., Ferrari, C., & Ziegler, B. 2009, A&A, 499, 87
* Karachentsev & Musella (1996) Karachentsev, I., & Musella, I. 1996, A&A, 315, 348
* Karachentsev et al. (2013) Karachentsev, I. D., Makarov, D. I., & Kaisina, E. I. 2013, AJ, 145, 101
* Kennicutt (1998) Kennicutt, Robert C., J. 1998, ARA&A, 36, 189
* Kim et al. (2012) Kim, S. C., Park, H. S., Kyeong, J., et al. 2012, PASJ, 64, 23
* Kniazev et al. (2018) Kniazev, A. Y., Egorova, E. S., & Pustilnik, S. A. 2018, MNRAS, 479, 3842
* Lahav et al. (2000) Lahav, O., Santiago, B. X., Webster, A. M., et al. 2000, MNRAS, 312, 166
* Lee et al. (1993) Lee, M. G., Freedman, W. L., & Madore, B. F. 1993, ApJ, 417, 553
* Makarov et al. (2006) Makarov, D., Makarova, L., Rizzi, L., et al. 2006, AJ, 132, 2729
* Mao et al. (2020) Mao, Y.-Y., Geha, M., Wechsler, R. H., et al. 2020, arXiv e-prints, arXiv:2008.12783
* Martin & GALEX Team (2005) Martin, C., & GALEX Team. 2005, in IAU Symposium, Vol. 216, Maps of the Cosmos, ed. M. Colless, L. Staveley-Smith, & R. A. Stathakis, 221
* Martin et al. (2008) Martin, N. F., de Jong, J. T. A., & Rix, H.-W. 2008, ApJ, 684, 1075
* McConnachie (2012) McConnachie, A. W. 2012, AJ, 144, 4
* McConnachie et al. (2018) McConnachie, A. W., Ibata, R., Martin, N., et al. 2018, ApJ, 868, 55
* McQuinn et al. (2011) McQuinn, K. B. W., Skillman, E. D., Dalcanton, J. J., et al. 2011, ApJ, 740, 48
* McQuinn et al. (2015a) McQuinn, K. B. W., Skillman, E. D., Dolphin, A. E., & Mitchell, N. P. 2015a, ApJ, 808, 109
* McQuinn et al. (2010) McQuinn, K. B. W., Skillman, E. D., Cannon, J. M., et al. 2010, ApJ, 724, 49
* McQuinn et al. (2013) McQuinn, K. B. W., Skillman, E. D., Berg, D., et al. 2013, AJ, 146, 145
* McQuinn et al. (2014) McQuinn, K. B. W., Cannon, J. M., Dolphin, A. E., et al. 2014, ApJ, 785, 3
* McQuinn et al. (2015b) —. 2015b, ApJ, 802, 66
* McQuinn et al. (2020) McQuinn, K. B. W., Berg, D. A., Skillman, E. D., et al. 2020, ApJ, 891, 181
* Morrissey et al. (2007) Morrissey, P., Conrow, T., Barlow, T. A., et al. 2007, ApJS, 173, 682
* Pustilnik et al. (2016) Pustilnik, S. A., Perepelitsyna, Y. A., & Kniazev, A. Y. 2016, MNRAS, 463, 670
* Radburn-Smith et al. (2011) Radburn-Smith, D. J., de Jong, R. S., Seth, A. C., et al. 2011, ApJS, 195, 18
* Rhode et al. (2013) Rhode, K. L., Salzer, J. J., Haurberg, N. C., et al. 2013, AJ, 145, 149
* Rizzi et al. (2007) Rizzi, L., Tully, R. B., Makarov, D., et al. 2007, ApJ, 661, 815
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95
* Sahu et al. (2014) Sahu, K., Deustua, S., & Sabbi, E. 2014, WFC3/UVIS Photometric Transformations, Tech. rep.
* Sand et al. (2009) Sand, D. J., Olszewski, E. W., Willman, B., et al. 2009, ApJ, 704, 898
* Sand et al. (2012) Sand, D. J., Strader, J., Willman, B., et al. 2012, ApJ, 756, 79
* Sand et al. (2015) Sand, D. J., Crnojević, D., Bennet, P., et al. 2015, ApJ, 806, 95
* Sand et al. (2017) Sand, D. J., Seth, A. C., Crnojević, D., et al. 2017, ApJ, 843, 134
* Saul et al. (2012) Saul, D. R., Peek, J. E. G., Grcevich, J., et al. 2012, ApJ, 758, 44
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Spekkens et al. (2014) Spekkens, K., Urbancic, N., Mason, B. S., Willman, B., & Aguirre, J. E. 2014, ApJ, 795, L5
* The Astropy Collaboration et al. (2018) The Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, arXiv e-prints, arXiv:1801.02634
* Tikhonov & Klypin (2009) Tikhonov, A. V., & Klypin, A. 2009, MNRAS, 395, 1915
* Tollerud et al. (2015) Tollerud, E. J., Geha, M. C., Grcevich, J., Putman, M. E., & Stern, D. 2015, ApJ, 798, L21
* Tollerud et al. (2016) Tollerud, E. J., Geha, M. C., Grcevich, J., et al. 2016, ApJ, 827, 89
* Tollerud & Peek (2018) Tollerud, E. J., & Peek, J. E. G. 2018, ApJ, 857, 45
* Tolstoy (1999) Tolstoy, E. 1999, in IAU Symposium, Vol. 192, The Stellar Content of Local Group Galaxies, ed. P. Whitelock & R. Cannon, 218
* Tonnesen & Bryan (2012) Tonnesen, S., & Bryan, G. L. 2012, MNRAS, 422, 1609
* Tully et al. (2013) Tully, R. B., Courtois, H. M., Dolphin, A. E., et al. 2013, AJ, 146, 86
* Wegner (2000) Wegner, W. 2000, MNRAS, 319, 771
* Weisz et al. (2014) Weisz, D. R., Dolphin, A. E., Skillman, E. D., et al. 2014, ApJ, 789, 147
* Weisz et al. (2011) Weisz, D. R., Dalcanton, J. J., Williams, B. F., et al. 2011, ApJ, 739, 5
* Wetzel et al. (2015) Wetzel, A. R., Tollerud, E. J., & Weisz, D. R. 2015, ApJ, 808, L27
* Wheeler et al. (2019) Wheeler, C., Hopkins, P. F., Pace, A. B., et al. 2019, MNRAS, 490, 4447
# Cubic spin-orbit coupling and anomalous Josephson effect in planar junctions
Mohammad Alidoust Department of Physics, Norwegian University of Science and
Technology, N-7491 Trondheim, Norway Chenghao Shen University at Buffalo,
State University of New York, Buffalo, NY 14260-1500, USA Igor Žutić
University at Buffalo, State University of New York, Buffalo, NY 14260-1500,
USA
###### Abstract
Spin-orbit coupling in two-dimensional systems is usually characterized by
Rashba and Dresselhaus spin-orbit coupling (SOC) linear in the wave vector.
However, there is a growing class of materials which instead support dominant
SOC cubic in the wave vector (cSOC), while their superconducting properties
remain unexplored. By focusing on Josephson junctions in Zeeman field with
superconductors separated by a normal cSOC region, we reveal a strongly
anharmonic current-phase relation and complex spin structure. An experimental
cSOC tunability enables both tunable anomalous phase shift and supercurrent,
which flows even at the zero-phase difference in the junction. A fingerprint
of cSOC in Josephson junctions is the $f$-wave spin-triplet superconducting
correlations, important for superconducting spintronics and supporting
Majorana bound states.
Spin-orbit coupling (SOC) and its symmetry breaking provide versatile opportunities for materials design and for bringing relativistic phenomena to the fore of condensed matter physics Kane2005:PRL ; Konig2007:S ; Wan2011:PRB ; Burkov2011:PRL ; Armitage2018:RMP ; Zutic2019:MT . While for decades SOC was
primarily studied to elucidate and manipulate normal-state properties,
including applications in spintronics and quantum computing Bychkov1984:PZETF
; DasSarma2001:SSC ; Winkler:2003 ; Zutic2004:RMP ; Hanson2007:RMP ;
Fabian2007:APS ; Xiao2010:RMP ; Sinova2015:RMP ; Schliemann2017:RMP , there is
a growing interest to examine its role on superconductivity Gorkov2001:PRL ;
Samokhin2005:PRL ; Reynoso2008:PRL ; Buzdin2008:PRL ; Eschrig2015:RPP ;
Smidman2017:RPP .
Through the coexistence of SOC and a Zeeman field, conventional spin-singlet superconductivity can acquire spin-dependent long-range proximity effects
Eschrig2015:RPP ; Martinez2020:PRA ; Jeon2020:PRX ; Gonzalez-Ruano2020:PRB as
well as support topological superconductivity and host Majorana bound states,
a building block for fault-tolerant quantum computing Lutchyn2010:PRL ;
Oreg2010:PRL ; Aasen2016:PRX . In both cases, Josephson junctions (JJs)
provide a desirable platform to acquire spin-triplet superconductivity through
proximity effects Keizer2006:N ; Robinson2010:S ; Khaire2010:PRL ;
Banerjee2014:NC ; Gingrich2016:NP ; Linder2015:NP ; Rokhinson2012:NP ;
Fornieri2019:N ; Ren2019:N ; Desjardins2019:NM ; Mayer2019:P . In contrast,
even seemingly well-established intrinsic spin-triplet superconductivity in
Sr2RuO4 Mackenzie2003:RMP is now increasingly debated Pustogow2019:N ;
Sharma2020:PNAS .
Extensive normal-state studies of SOC in zinc-blende heterostructures usually distinguish the resulting spin-orbit fields due to broken bulk inversion symmetry, Dresselhaus SOC, and surface inversion asymmetry, Rashba SOC, and focus on their dominant linear dependence on the wave vector, ${\bf k}$ Zutic2004:RMP ; Schliemann2017:RMP . In this linear regime, with matching strengths of these SOCs it is possible to strongly suppress the spin relaxation
Schliemann2003:PRL and realize a persistent spin helix (PSH) Bernevig2006:PRL
; Koralek2009:N with a controllable spin precession over long distances
Dettwiler2017:PRX ; Walser2012:NP ; Iizasa2020:PRB .
Figure 1: (a) Spin-orbit fields in k-space for Rashba cubic spin-orbit coupling (cSOC) ($\alpha_{c}=-1$, top), Dresselhaus cSOC ($\beta_{c}=-1$, middle), and both ($\alpha_{c}=\beta_{c}=-1$, bottom). (b) Schematic of the Josephson junction. The middle region hosts cSOC and an effective Zeeman field, h, between the two $s$-wave superconductors (S). (c) Spin textures in the cSOC region resulting from normally incident electrons with in-plane spin orientations [see Fig. 1(b)] when S is in the normal state; in the upper (lower) panel $\alpha_{c}=1$, $\beta_{c}=0$ ($\alpha_{c}=\beta_{c}=1$). The in-plane spin orientations of the incident electrons $\phi_{s}$ range from 0 (bottom row) to $\pi/2$ (top row).
While typically k-cubic SOC contributions (cSOC) in heterostructures are
neglected or considered just detrimental perturbations, for example, limiting
the stability of PSH Dettwiler2017:PRX ; Walser2012:NP ; Iizasa2020:PRB , a
more complex picture is emerging. Materials advances suggest that such cSOC,
shown in Fig. 1(a), not only has to be included, but may also dominate the
normal-state properties Winkler2002:PRB ; Krich2007:PRL ; Altmann2006:PRL ;
Yoshizumi2016:APL ; Kammermeier2016:PRL ; Nakamura2012:PRL ; Moriya2014:PRL ;
Cottier2020:PRB ; Liu2018:PRL ; Brosco2016:PRL . However, the role of cSOC in superconducting heterostructures is unexplored. It is unclear if cSOC is detrimental or desirable for key phenomena such as the Josephson effect, spin-triplet superconductivity, or Majorana bound states.
To address this situation and motivate further cSOC studies of superconducting
properties, we consider JJs depicted in Fig. 1(b), where $s$-wave
superconductors (S) are separated by a normal region with cSOC which is
consistent with the two-dimensional (2D) electron or hole gas, confined along
the z-axis Winkler2002:PRB ; Nakamura2012:PRL . We find that the interplay
between Zeeman field and cSOC results in an anomalous Josephson effect with a
spontaneous supercurrent. While the commonly-expected current-phase relation
(CPR) is $I(\varphi)=I_{c}\sin(\varphi+\varphi_{0})$ Buzdin2008:PRL ;
Strambini2020:NN , where $I_{c}$ is the JJ critical current and $\varphi_{0}$
the anomalous phase ($\varphi_{0}\neq 0,\pi$), we reveal that the CPR can be strongly anharmonic and that the junction can host Majorana bound states. Instead of the $p$-wave superconducting correlations found for linear SOC, an $f$-wave symmetry is the fingerprint of cSOC.
To study cSOC, we consider an effective Hamiltonian
$H=\frac{1}{2}\int d\mathbf{p}\,\hat{\psi}^{{\dagger}}(\mathbf{p})H(\mathbf{p})\hat{\psi}(\mathbf{p}),$
(1)
where
$H(\mathbf{p})=\mathbf{p}^{2}/2m^{*}+{\bm{\sigma}}\cdot\mathbf{h}+H_{\text{cSOC}}(\mathbf{p})$,
with momentum, $\mathbf{p}=(p_{x},p_{y},0)$ [see Fig. 1(b)], effective mass,
$m^{*}$, Pauli matrices, ${{\bm{\sigma}}}$, effective Zeeman field, ${\bf h}$,
realized from an externally applied magnetic field or through magnetic
proximity effect Zutic2019:MT ; Takiguchi2019:NP , and cSOC term
Winkler2002:PRB ; Krich2007:PRL ; Nakamura2012:PRL ; Moriya2014:PRL
$H_{\text{cSOC}}(\mathbf{p})=\frac{i{\alpha_{\text{c}}}}{2\hbar^{3}}(p_{-}^{3}\sigma_{+}-p_{+}^{3}\sigma_{-})-\frac{{\beta_{\text{c}}}}{2\hbar^{3}}(p_{-}^{2}p_{+}\sigma_{+}+p_{+}^{2}p_{-}\sigma_{-}),$
(2)
expressed using cSOC strengths $\alpha_{c}$ and $\beta_{c}$, for Rashba and
Dresselhaus terms, where $p_{\pm}=p_{x}\pm ip_{y}$, and
$\sigma_{\pm}=\sigma_{x}\pm i\sigma_{y}$. The field operator in spin space is
given by
$\hat{\psi}(\mathbf{p})=[\psi_{\uparrow}(\mathbf{p}),\psi_{\downarrow}(\mathbf{p})]^{\mathrm{T}}$,
with $\uparrow,\,\downarrow$ spin projections.
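The k-space textures of Fig. 1(a) follow directly from Eq. (2). Writing $H_{\text{cSOC}}={\bm{\sigma}}\cdot{\bm{\Omega}}(\mathbf{p})$ and using $p_{\pm}=pe^{\pm i\theta}$ gives ${\bm{\Omega}}=\big(\alpha_{c}p^{3}\sin 3\theta-\beta_{c}p^{2}p_{x},\,-\alpha_{c}p^{3}\cos 3\theta-\beta_{c}p^{2}p_{y},\,0\big)/\hbar^{3}$. The sketch below is our own rederivation for illustration, not code used in this work.

```python
# Spin-orbit field Omega(k), with H_cSOC = sigma . Omega, rederived from Eq. (2);
# units with hbar = 1. Reproduces the threefold winding of the Rashba cSOC field.
import numpy as np
import matplotlib.pyplot as plt

def csoc_field(kx, ky, alpha_c=-1.0, beta_c=0.0):
    k = np.hypot(kx, ky)
    theta = np.arctan2(ky, kx)
    ox = alpha_c * k**3 * np.sin(3 * theta) - beta_c * k**2 * kx
    oy = -alpha_c * k**3 * np.cos(3 * theta) - beta_c * k**2 * ky
    return ox, oy

kx, ky = np.meshgrid(np.linspace(-1, 1, 17), np.linspace(-1, 1, 17))
ox, oy = csoc_field(kx, ky, alpha_c=-1.0, beta_c=0.0)  # cf. Fig. 1(a), top panel
plt.quiver(kx, ky, ox, oy)
plt.xlabel(r"$k_x$"); plt.ylabel(r"$k_y$")
plt.show()
```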
To describe S regions in Fig. 1(b), we use an $s$-wave BCS model with a two-
electron amplitude in spin-Nambu space
$\Delta\langle\psi_{\uparrow}^{\dagger}\psi_{\downarrow}^{\dagger}\rangle+\text{H.c.}$,
given by the effective Hamiltonian in particle-hole space
${\cal H}(\mathbf{p})=\left(\begin{array}{cc}H(\mathbf{p})-\mu\hat{1}&\hat{\Delta}\\ \hat{\Delta}^{\dagger}&-H^{\dagger}(-\mathbf{p})+\mu\hat{1}\end{array}\right),$
(3)
where $\mu$ is the chemical potential and $\hat{\Delta}$ is a $2\times 2$ gap
matrix in spin space. The field operators in the rotated particle-hole and
spin basis are
$\hat{\psi}=(\psi_{\uparrow},\psi_{\downarrow},\psi_{\downarrow}^{{\dagger}},-\psi_{\uparrow}^{{\dagger}})^{\mathrm{T}}$.
To calculate the charge current, we use its quantum definition where no charge
sink or source is present. Therefore, the time variation of charge density
vanishes, $\partial_{t}\rho_{\text{c}}\equiv
0=\lim\limits_{\mathbf{r}\rightarrow\mathbf{r}^{\prime}}\sum\limits_{\sigma\tau\sigma^{\prime}\tau^{\prime}}[\psi^{\dagger}_{\sigma\tau}(\mathbf{r}^{\prime}){\cal
H}_{\sigma\tau\sigma^{\prime}\tau^{\prime}}(\mathbf{r})\psi_{\sigma^{\prime}\tau^{\prime}}(\mathbf{r})-\psi^{\dagger}_{\sigma\tau}(\mathbf{r}^{\prime}){\cal
H}_{\sigma\tau\sigma^{\prime}\tau^{\prime}}^{\dagger}(\mathbf{r}^{\prime})\psi_{\sigma^{\prime}\tau^{\prime}}(\mathbf{r})]$.
${\cal H}_{\sigma\tau\sigma^{\prime}\tau^{\prime}}$ is the component form of
${\cal H}$, with spin (particle-hole) label $\sigma$ ($\tau$), and
$\mathbf{r}\equiv(x,y,0)$. From the current conservation, the charge current
density is $\mathbf{J}=\int d\mathbf{r}\,\{\hat{\psi}^{\dagger}(\mathbf{r})\overrightarrow{{\cal H}}(\mathbf{r})\hat{\psi}(\mathbf{r})-\hat{\psi}^{\dagger}(\mathbf{r})\overleftarrow{{\cal H}}(\mathbf{r})\hat{\psi}(\mathbf{r})\},$ where ${\cal H}(\mathbf{r})$ is
obtained by substituting
$\mathbf{p}\equiv-i\hbar(\partial_{x},\partial_{y},0)$. The arrow directions
indicate the specific wavefunctions that the ${\cal H}$ operates on. By an
exact diagonalization of ${\cal H}$, we obtain spinor wavefunctions
$\hat{\psi}^{l,r,m}(\textbf{p})$ within the left ($x<0$) and right ($x>d$) S
region and the middle normal region ($\smash{0}<x<d$) in Fig. 1(b). The
wavefunctions and generalized velocity operators $v_{x}^{l,r,m}$ are
continuous at the junctions, i.e., $\hat{\psi}^{l}$=$\hat{\psi}^{m}|_{x=0}$, $\hat{\psi}^{m}$=$\hat{\psi}^{r}|_{x=d}$, $v_{x}^{l}\hat{\psi}^{l}$=$v_{x}^{m}\hat{\psi}^{m}|_{x=0}$, and $v_{x}^{m}\hat{\psi}^{m}$=$v_{x}^{r}\hat{\psi}^{r}|_{x=d}$. The spinor wavefunctions are given in the Supplemental Material sm .
Figure 2: (a) Josephson energy and (b) associated supercurrent evolution with
the superconducting phase difference $\varphi$. Zeeman field values, $h_{x}$,
are chosen near a $0$-$\pi$ transition. The other parameters are
${\alpha_{\text{c}}}=\pm 0.1$ and ${\beta_{\text{c}}}=0$, $\mu=\Delta$,
$h_{y}=0$.
The complexity of ${\cal H}$ precludes simple solutions and we evaluate the
wavefunctions and supercurrent numerically. To reduce the edge effects, we
consider Fig. 1(b) geometry with $W/d\gg 1$ Alidoust2015:JAP . This approach
has been successfully used to study supercurrent in junctions with PSH, Weyl
semimetals, phosphorene, and twisted bilayer graphene Alidoust2020:PRB0 ;
Alidoust2020:PRB1 ; Alidoust2018:PRB1 ; Alidoust2020:PRB2 ; Alidoust2018:PRB2
; Alidoust2018:PRB3 ; Alidoust2020:PRB4 . The calculated supercurrent is
normalized by $I_{0}=2|e\Delta|/\hbar$, where $e$ is the electron charge, and
$\Delta$ the energy gap in S. The energies are normalized by $\Delta$, lengths by $\xi_{\text{S}}=\hbar/\sqrt{2m^{*}\Delta}$, and cSOC strengths by $\Delta\xi_{\text{S}}^{3}$. The junction length is set at
$d=0.3\xi_{\text{S}}$.
To investigate the role of cSOC on the ground-state Josephson energy,
$E_{\text{GS}}$, and the CPR obtained from the supercurrent
$I(\varphi)\propto\partial E_{\text{GS}}/\partial\varphi$, we first consider a
simple situation with only Rashba cSOC ($\alpha_{c}\neq 0$, $\beta_{c}=0$) and
effective Zeeman field $h_{x}$ ($h_{y}=h_{z}=0$). The evolution of
$E_{\text{GS}}$ with $|h_{x}|$, where its minima are denoted by dots in Fig.
2(a), shows a continuous transition from $\varphi=0$ to $\pi$ state (blue to
green dot). For $\varphi_{0}\neq 0$, $E_{\text{GS}}$ minima come in pairs at
$\pm\varphi_{0}$ Sickinger2012:PRL . The corresponding CPR reveals in Fig.
2(b) a competition between the standard, $\sin\varphi$, and the next harmonic,
$\sin 2\varphi$, resulting in $I(-\varphi)=-I(\varphi)$. In this configuration no spontaneous current is expected in the junction, $I(\varphi=0)=0$, only an $I_{c}$ reversal with $h_{x}$. Such a scenario of a
continuous and symmetric 0-$\pi$ transition is well studied without SOC in
S/ferromagnet/S JJs due to the changes in the effective magnetization or a
thickness of the magnetic region Kontos2002:PRL ; Ryazanov2001:PRL ;
Bergeret2005:RMP ; Eschrig2003:PRL ; Halterman2015:PRB ; Wu2018:PRB ;
Moen2020:PRB ; Yokoyama2014:PRB .
Figure 3: (a) Josephson energy and (b) related supercurrent evolution with the superconducting phase difference $\varphi$. A Zeeman field, $h_{y}$, of fixed magnitude and a varying Rashba cSOC strength ${\alpha_{\text{c}}}$ are considered. The other parameters are ${\beta_{\text{c}}}=0$, $\mu=\Delta$, $h_{x}=0$. (c) Three fits to the green curve in (b) using the generalized CPR from Eq. (4) with $N=1,2,3$ harmonics.
While our previous results suggest no direct cSOC influence on the CPR, a simple in-plane rotation of h, to $h_{x}=0$, $h_{y}\neq 0$, drastically changes this behavior. This is shown in Fig. 3(b) where, at fixed $|h_{y}|=2.4\Delta$, we see a peculiar influence of a finite Rashba cSOC, which is responsible for the anomalous Josephson effect with a spontaneous current, $I(\varphi=0)\neq 0$, and a strongly anharmonic CPR that cannot be described by $I(\varphi)=I_{c}\sin(\varphi+\varphi_{0})$. Unlike in Fig. 2(a), the relative sign between ${\alpha_{\text{c}}}$ and $h$ alters the CPR and Josephson energy, and the ground states $\varphi_{0}$ appear at single points [green, red dots in Fig. 3(a)], consistent with $\varphi_{0}\propto\alpha_{c}h_{y}$.
If instead of $\mu=\Delta$, we consider a regime $\mu\gg\Delta$, the evolution
of Josephson energy from Fig. 2(a) changes. While 0-$\pi$ transitions with
$|h_{x}|$ remain, there are no longer global minima with $\varphi\neq 0,\pi$
and the CPR reveals a stronger anharmonicity. In contrast, for $\mu\gg\Delta$,
the anomalous Josephson effect from Fig. 3 remains robust and similar
$\varphi_{0}$ states are accessible (see Ref. sm ).
Simple harmonics, used to describe anharmonic CPR in high-temperature superconductors Golubov2004:RMP ; Kashiwaya2000:RPP , are not very suitable here. By generalizing a short-junction limit for the CPR Yokoyama2014:PRB ; Golubov2004:RMP ; Hart2019:PRB , we identify a much more compact form in which only a small number of terms gives an accurate description. To recognize the importance of SOC and two nondegenerate spin channels, $\sigma$, we write
$I(\varphi)\approx\sum_{n=1}^{N}\sum_{\sigma=\pm}\frac{I_{n}^{\sigma}\sin(n\varphi+\varphi_{0n}^{\sigma})}{\sqrt{1-\tau_{n}^{\sigma}\sin^{2}(n\varphi/2+\varphi_{0n}^{\sigma}/2)}},$
(4)
where $\tau_{n}^{\sigma}$ is the normal-region transparency for spin channel $\sigma$. With only a few lowest terms in this expansion ($N=1,2,3$), shown in Fig. 3(c) with the corresponding errors, it is possible to describe strong CPR anharmonicities of the anomalous Josephson effect very accurately. Achieving the relative error of the $N=3$ expansion in Eq. (4) with a standard $\{\sin,\cos\}$ expansion, with the corresponding phase shifts as extra fitting parameters, requires $N>20$ sm .
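As an illustration of how such a fit can be set up, the sketch below performs a least-squares fit of a synthetic anharmonic CPR to the two-channel, $N$-harmonic form of Eq. (4); the synthetic target and starting values are placeholders of our own choosing, not the curves of Fig. 3.

```python
# Minimal sketch: least-squares fit of a CPR with Eq. (4) (N harmonics, two
# spin channels). The synthetic target data and bounds are illustrative only.
import numpy as np
from scipy.optimize import least_squares

def cpr_model(phi, params, N=2):
    """Eq. (4): params holds (I, phi0, tau) for each (harmonic n, channel sigma)."""
    p = np.asarray(params).reshape(N, 2, 3)
    total = np.zeros_like(phi)
    for n in range(1, N + 1):
        for s in range(2):
            amp, phi0, tau = p[n - 1, s]
            total += amp * np.sin(n * phi + phi0) / np.sqrt(
                1.0 - tau * np.sin(n * phi / 2.0 + phi0 / 2.0) ** 2)
    return total

phi = np.linspace(0.0, 2.0 * np.pi, 200)
target = 0.8 * np.sin(phi + 0.6) + 0.3 * np.sin(2.0 * phi + 0.2)  # placeholder "data"

N = 2
x0 = np.tile([0.5, 0.0, 0.5], 2 * N)          # initial (I, phi0, tau) guesses
lower = np.tile([-2.0, -np.pi, 0.0], 2 * N)   # keep tau in [0, 1)
upper = np.tile([2.0, np.pi, 0.99], 2 * N)
fit = least_squares(lambda x: cpr_model(phi, x, N) - target, x0, bounds=(lower, upper))
print("relative error:", np.linalg.norm(fit.fun) / np.linalg.norm(target))
```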
Key insights into the CPR and an explicit functional dependence of the $\varphi_{0}$ state are obtained by a systematic $I(\varphi)$ symmetry analysis
with respect to the cSOC ($\alpha_{c}$, $\beta_{c}$) and Zeeman field or,
equivalently, magnetization ($h_{x,y,z}$) parameters sm . We find that $h_{z}$
plays no role in inducing the $\varphi_{0}$ state; it only produces $I(\varphi)$ reversals, explaining our focus on $h_{z}=0$ [Figs. 2 and 3].
These properties are expressed as an effective phase shift of a sinusoidal CPR, $\sin(\varphi+\varphi_{0})$, extracted from Eq. (4). We again distinguish the small- and large-$\mu$ regimes ($\mu=\Delta$ vs. $\mu=10\Delta$). In the first
case, for the JJ geometry from Fig. 1, we obtain
$\varphi_{0}\propto\Gamma_{y}\Big{(}\alpha_{\text{c}}^{2}+\Gamma_{1}\beta_{\text{c}}^{2}\Big{)}h_{x}{\beta_{\text{c}}}+\Gamma_{x}\Big{(}\alpha_{\text{c}}^{2}-\Gamma_{2}\beta_{\text{c}}^{2}\Big{)}h_{y}{\alpha_{\text{c}}},$
(5)
where the parameters $\Gamma_{1,2,x,y}$ are introduced through their
relations, $\Gamma_{2}>\Gamma_{1}$, $\Gamma_{1}<1$, $\Gamma_{2}>1$,
$\Gamma_{y}(h_{y}=0)=\Gamma_{x}(h_{x}=0)=1$, $\Gamma_{y}(h_{y}\neq 0)<1$,
$\Gamma_{x}(h_{x}\neq 0)<1$. These relations are modified as $\mu$ and
$\mathbf{h}$ change. For $\mu\gg\Delta$, the functional dependence of the $\varphi_{0}$ state simplifies to
$\varphi_{0}\propto\Big{(}\alpha_{\text{c}}^{2}-\Gamma_{1}\beta_{\text{c}}^{2}\Big{)}h_{x}{\beta_{\text{c}}}+\Big{(}\alpha_{\text{c}}^{2}-\Gamma_{2}\beta_{\text{c}}^{2}\Big{)}h_{y}{\alpha_{\text{c}}},$
(6)
where $\Gamma_{2}>\Gamma_{1}$ and $\Gamma_{1,2}>1$. Therefore, the $\varphi_{0}$ state occurs when h shifts p perpendicular ($\bot$) to ${\bm{I}}(\varphi)$ and thus alters the SOC sm .
Taken together, these results reveal that cSOC in JJs supports a large tunability of the Josephson energy, anharmonic CPR, and the anomalous phase, key to many applications, from post-CMOS logic and superconducting spintronics to quiet qubits and topological quantum computing. Realizing $\pi$ states in JJs is desirable for improving rapid single flux quantum (RSFQ) logic, with operation $>100\,$GHz Likharev1991:IEEETAS ; Terzioglu1998:IEEETAS , and for enhancing coherence by decoupling superconducting qubits from the environment Yamashita2005:PRL . However, common approaches to $\pi$ states, using JJs combining $s$- and $d$-wave superconductors or JJs with ferromagnetic regions Golubov2004:RMP ; Kashiwaya2000:RPP , pose various limitations. Instead, the extensively studied gate-tunable SOC Zutic2004:RMP ; Dettwiler2017:PRX ; Nakamura2012:PRL ; Moriya2014:PRL ; Mayer2019:P ; Nitta1997:PRL could allow not only a fast transformation between $0$ and $\pi$ states in JJs with cSOC, but also an arbitrary $\varphi_{0}$ state to tailor a desirable CPR.
Insight into the phase evolution and circuit operation of JJs with cSOC is provided by generalizing the classical model of the resistively and capacitively shunted junction (RCSJ) Stewart1968:APL . The total current, $i$, is the sum of the displacement current across the capacitance, $C$, the normal current characterized by the resistance, $R$, and $I(\varphi)$,
$\frac{\phi_{0}}{2\pi}C\frac{d^{2}\varphi}{dt^{2}}+\frac{\phi_{0}}{2\pi R}\frac{d\varphi}{dt}+I(\varphi)=i,$ (7)
where $\phi_{0}$ is the magnetic flux quantum and $I(\varphi)$ yields a generally anharmonic CPR, as given by Eq. (4), which can support $0$, $\pi$, and tunable $\varphi_{0}$ states. As we have seen from Figs. 2 and 3, this CPR tunability is accompanied by changes in the Josephson energy, which in turn are responsible for changes in the effective values of $C$, $R$, and the nonlinear Josephson inductance. This JJ tunability complements voltage or flux control Casparis2018:NN ; Krantz2019:APR .
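In dimensionless form, with time measured in units of $\phi_{0}/(2\pi RI_{c})$ and the Stewart-McCumber parameter $\beta_{C}=2\pi I_{c}R^{2}C/\phi_{0}$, Eq. (7) reads $\beta_{C}\ddot{\varphi}+\dot{\varphi}+I(\varphi)/I_{c}=i/I_{c}$. A minimal numerical sketch, with an illustrative anharmonic CPR and parameter values of our own choosing (not fits to this work):

```python
# Sketch of Eq. (7) in dimensionless form: beta_C * phi'' + phi' + i_s(phi) = i.
import numpy as np
from scipy.integrate import solve_ivp

def i_s(phi, phi0=0.8, a2=0.4):
    """Toy anharmonic CPR with an anomalous phase shift (cf. Eq. (4))."""
    return np.sin(phi + phi0) + a2 * np.sin(2.0 * phi + phi0)

def rcsj(tau, y, i_bias=1.2, beta_C=0.5):
    phi, dphi = y
    return [dphi, (i_bias - dphi - i_s(phi)) / beta_C]

sol = solve_ivp(rcsj, (0.0, 200.0), [0.0, 0.0], max_step=0.05)
# Time-averaged dphi/dtau over the second half ~ dc voltage in units of R*I_c.
print("dimensionless dc voltage:", np.mean(sol.y[1][sol.t > 100.0]))
```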
Figure 4: Normalized critical supercurrent as a function of cSOC strength
${\alpha_{\text{c}}}$ and ${\beta_{\text{c}}}$ for (a) $\mu=\Delta$ and (b)
$\mu=10\Delta$. The Zeeman field is set to zero.
In JJs with ferromagnetic regions, $I_{c}$ is tunable by changing the underlying magnetic state Gingrich2016:NP ; Baek2014:NC ; Costa2017:PRB . In JJs with cSOC, tuning $I_{c}$ could instead be realized through gate control, by changing the relative strengths of $\alpha_{c}$ and $\beta_{c}$, even at zero Zeeman field. This is shown in Fig. 4 by calculating $\text{Max}[I(\varphi)]$ with $\varphi\in[0,2\pi]$. In the low-$\mu$ regime, the maximum $I_{c}$ occurs in a slightly curved region near the symmetry lines $|\alpha_{c}|=|\beta_{c}|$. In the high-$\mu$ regime, the region of maximum $I_{c}$ evolves into inclined symmetry lines, $|\alpha_{c}|={\cal A}|\beta_{c}|$, ${\cal A}<1$. Similar to linear SOC, in the diffusive regime for cSOC one expects that the minimum in $I_{c}$ occurs near these symmetry lines because of the presence of a long-range spin-triplet supercurrent Alidoust2015:NJP ; Alidoust2020:PRB1 .
Figure 5: Real and imaginary parts of equal-spin superconducting correlations
in the k-space, $\xi_{\text{S}}=\hbar/\sqrt{2m^{*}\Delta}$ is the
characteristic length. (a), (b) Linear Rashba, $\alpha=1$. (c), (d) cSOC,
$\alpha_{c}=1$, $\beta_{c}=0$. (e), (f) cSOC, $\alpha_{c}=\beta_{c}=1$. The
other parameters are the same for all panels.
We expect that a hallmark of JJs with cSOC goes beyond CPR and will also
influence the spin structure and symmetry properties of superconducting
proximity effects. Linear SOC is responsible for mixed singlet-triplet
superconducting pairing Gorkov2001:PRL , while with Zeeman or exchange field
it is possible to favor spin-triplet proximity effects, which can become long-range Eschrig2015:RPP ; Linder2015:NP or host Majorana bound states Lutchyn2010:PRL ; Oreg2010:PRL . To explore the proximity effects in the cSOC
region, we calculate superconducting pair correlations using the Matsubara
representation for the anomalous Green function,
$F(\tau;\mathbf{r},\mathbf{r}^{\prime})$ Zagoskin:2014 ,
$F_{ss^{\prime}}(\tau;\mathbf{r},\mathbf{r}^{\prime})=+\langle
T_{\tau}\psi_{s}(\tau,\mathbf{r})\psi_{s_{1}}(0,\mathbf{r}^{\prime})\rangle(-i\sigma^{y}_{s_{1}s^{\prime}}),$
(8)
where $s,s^{\prime},s_{1}$ are spin indices, the summation is implied over
$s_{1}$, $\tau$ is the imaginary time, $\psi_{s}$ is the field operator, and
$T_{\tau}$ denotes time ordering of operators sm .
For a translationally invariant SOC region, spin-triplet correlations in Fig.
5, obtained from Eq. (8), provide a striking difference between linear and
cubic SOC. Unlike the $p$-wave symmetry for linear Rashba SOC [Figs. 5(a),
5(b)], we see that the $f$-wave symmetry is the fingerprint for cSOC, retained
with only $\alpha_{c}\neq 0$ [Figs. 5(c), 5(d)] or both
$\alpha_{c},\beta_{c}\neq 0$ [Figs. 5(e), 5(f)]. Remarkably, unlike the
commonly-sought $p$-wave symmetry, we confirm that with a suitable orientation
of the Zeeman field cSOC also supports Majorana flat bands sm .
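For orientation, the p- and f-wave labels refer to orbital angular dependences $\propto e^{\pm i\theta_{\mathbf{k}}}$ and $\propto e^{\pm 3i\theta_{\mathbf{k}}}$, i.e., two versus six lobes around the Fermi surface. A toy visualization of this angular symmetry (illustrative only, not the computed correlations of Fig. 5):

```python
# Toy angular structure of p-wave (l=1) vs f-wave (l=3) pairing amplitudes.
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(0.0, 2.0 * np.pi, 400)
for l, label in [(1, "p-wave"), (3, "f-wave")]:
    plt.polar(theta, np.abs(np.cos(l * theta)), label=label)  # 2l lobes
plt.legend()
plt.show()
```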
While we are not aware of any Josephson effect experiments in 2D systems dominated by cSOC, our studied parameters are within the range of already reported measurements. Choosing $m^{*}$ equal to the electron mass and $\Delta=0.2\,$meV, which is similar for both Al and proximity-induced superconductivity Mayer2019:P ; Mayer2020:NC , the characteristic length becomes $\xi_{\text{S}}\approx 14\,$nm. The resulting cSOC strength from Fig. 3(b), with $\alpha_{c}\Delta\xi_{\text{S}}^{3}\approx 50\,$eV\,Å${}^{3}$, is compatible with the values in 2D electron and hole gases Cottier2020:PRB ; Liu2018:PRL . The Zeeman splitting of $2.4\times 0.2\,$meV is available by applying a magnetic field in large $g$-factor materials Zutic2004:RMP , or from magnetic proximity effects, measured in 2D systems to reach up to $\sim 20\,$meV Zutic2019:MT . Even though we have mostly focused on the tunable Rashba SOC, the Dresselhaus SOC can also be gate tunable Dettwiler2017:PRX ; Iordanskii1994:JETPL , offering further control of the anomalous Josephson effect.
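These estimates follow directly from the stated definitions; a quick numerical check (illustrative, taking $\alpha_{c}=0.1$ for the quoted $\sim 50\,$eV\,Å${}^{3}$):

```python
# Check of the quoted estimates: xi_S = hbar/sqrt(2 m* Delta) and alpha_c*Delta*xi_S^3.
import math

hbar = 1.054571817e-34             # J s
m_star = 9.1093837015e-31          # kg (m* taken as the electron mass)
Delta = 0.2e-3 * 1.602176634e-19   # 0.2 meV in J

xi_S = hbar / math.sqrt(2.0 * m_star * Delta)
print("xi_S ~ %.1f nm" % (xi_S * 1e9))  # ~14 nm, as quoted

alpha_c = 0.1  # in units of Delta * xi_S^3 (assumed, consistent with Fig. 3)
strength = alpha_c * 0.2e-3 * (xi_S * 1e10) ** 3  # eV Angstrom^3
print("alpha_c * Delta * xi_S^3 ~ %.0f eV Angstrom^3" % strength)  # ~50
```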
Our results reveal that the cSOC in JJs provides versatile opportunities to
design superconducting response and test its unexplored manifestations. The
anomalous Josephson effect could serve as a sensitive probe to quantify cSOC.
While identifying the relevant form of SOC is a challenge even in the normal
state Zutic2004:RMP ; Fabian2007:APS , in the superconducting state already a
modest SOC can give a strong anisotropy in the transport properties
Hogl2015:PRL ; Martinez2020:PRA ; Gonzalez-Ruano2020:PRB ; Vezin2020:PRB and
enable extracting the resulting SOC. Identifying SOC, either intrinsic or generated through magnetic textures, remains important for understanding which
systems could host Majorana bound states Desjardins2019:NM ; Scharf2019:PRB ;
Pakizer2020:P ; Fatin2016:PRL ; Matos-Abiague2017:SSC ; Ronetti2020:PRR ;
Klinovaja2012:PRL ; Zhou2019:PRB ; Mohanta2019:PRA ; Turcotte2020:PRB ;
Jiang2021:N ; Rex2020:PRB ; Kornich2020:PRB ; Mohanta2020:P ; Mohanta2018:PRB
.
With the advances in gate-tunable structures and novel materials systems
Mayer2019:P ; Mayer2020:NC ; Nakamura2012:PRL ; Moriya2014:PRL ;
Cottier2020:PRB ; Liu2018:PRL ; Assouline2019:NC , the functional dependence
of the anomalous phase $\varphi_{0}$ and the $f$-wave superconducting
correlations could also enable decoupling of the linear and cubic SOC
contributions sm . For the feasibility of such decoupling, it would be useful
to consider methods employed in the studies of the nonlinear Meissner effect
Xu2015:PRB ; Bae2019:RSI ; Zhuravel2013:PRL ; Prozorov2006:SST ;
Halterman2000:PRB ; Bhattacharya1999:PRL ; Zutic1997:PRB ; Zutic1998:PRB .
Even small corrections to the supercurrent from the magnetic anisotropy of the nonlinear Meissner response offer a sensitive probe to distinguish different pairing-state symmetries.
###### Acknowledgements.
C.S. and I.Ž. were supported by NSF ECCS-1810266, and I.Ž. by DARPA DP18AP900007 and the UB Center for Computational Research.
## References
* (1) C. L. Kane and E. J. Mele, Quantum Spin Hall Effect in Graphene, Phys. Rev. Lett. 95, 226801 (2005).
* (2) M. König, S. Wiedmann, C. Brüne, A. Roth, H. Buhmann, L. W. Molenkamp, X.-L. Qi, and S.-C. Zhang, Quantum Spin Hall Insulator State in HgTe Quantum Wells, Science 318, 766 (2007).
* (3) X. Wan, A. M. Turner, A. Vishwanath, and S. Y. Savrasov, Topological semimetal and Fermi-arc surface states in the electronic structure of pyrochlore iridates, Phys. Rev. B 83, 205101 (2011).
* (4) A. A. Burkov and L. Balents, Weyl Semimetal in a Topological Insulator Multilayer, Phys. Rev. Lett. 107, 127205 (2011).
* (5) N. P. Armitage, E. J. Mele, A. Vishwanath, Weyl and Dirac semimetals in three-dimensional solids, Rev. Mod. Phys. 90, 015011 (2018).
* (6) I. Žutić, A. Matos-Abiague, B. Scharf, H. Dery, K. Belashchenko, Proximitized materials, Mater. Today 22, 85 (2019).
* (7) Y. A. Bychkov and E. I. Rashba, Properties of a 2D electron gas with lifted spectral degeneracy, Pis’ma Zh. Eksp. Teor. Fiz. 39, 66 (1984) [JETP Lett. 39, 78 (1984)].
* (8) R. Winkler, Spin-Orbit Coupling Effects in Two-Dimensional Electron and Hole Systems (Springer, New York, 2003).
* (9) S. Das Sarma, J. Fabian, X. Hu, I. Žutić, Spin electronics and spin computation, Solid State Commun. 119, 207 (2001).
* (10) I. Žutić, J. Fabian, and S. Das Sarma, Spintronics: Fundamentals and applications, Rev. Mod. Phys. 76, 323 (2004).
* (11) R. Hanson, L. P. Kouwenhoven, J. R. Petta, S. Tarucha, and L. M. K. Vandersypen, Spins in few-electron quantum dots, Rev. Mod. Phys. 79, 1217 (2007).
* (12) J. Fabian, A. Matos-Abiague, C. Ertler, P. Stano, and I. Žutić, Semiconductor Spintronics, Acta Phys. Slov. 57, 565 (2007).
* (13) Di Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
* (14) J. Sinova, S. O. Valenzuela, J. Wunderlich, C. H. Back, and T. Jungwirth, Spin Hall effects, Rev. Mod. Phys. 87, 1213 (2015).
* (15) J. Schliemann, Colloquium: Persistent spin textures in semiconductor nanostructures, Rev. Mod. Phys. 89, 011001 (2017).
* (16) L. P. Gor’kov and E. I. Rashba, Superconducting 2D System with Lifted Spin Degeneracy: Mixed Singlet-Triplet State, Phys. Rev. Lett. 87, 037004 (2001).
* (17) K. V. Samokhin, Paramagnetic Properties of Noncentrosymmetric Superconductors: Application to CePt3Si, Phys. Rev. Lett. 94, 027004 (2005); P. M. R. Brydon, L. Wang, M. Weinert, and D. F. Agterberg, Pairing of $j=3/2$ Fermions in Half-Heusler Superconductors, Phys. Rev. Lett. 116, 177001 (2016).
* (18) A. A. Reynoso, G. Usaj, C. A. Balseiro, D. Feinberg, and M. Avignon, Anomalous Josephson Current in Junctions with Spin Polarizing Quantum Point Contacts, Phys. Rev. Lett. 101, 107001 (2008).
* (19) A. Buzdin, Direct Coupling Between Magnetism and Superconducting Current in the Josephson $\varphi_{0}$ Junction, Phys. Rev. Lett. 101, 107005 (2008); F. Konschelle and A. Buzdin, Magnetic Moment Manipulation by a Josephson Current, Phys. Rev. Lett. 102, 017001 (2009).
* (20) M. Eschrig, Spin-polarized supercurrents for spintronics: A review of current progress, Rep. Prog. Phys. 78, 104501 (2015).
* (21) M. Smidman, M. B. Salamon, H. Q. Yuan, and D. F. Agterberg, Superconductivity and spin-orbit coupling in non-centrosymmetric materials: A review, Rep. Prog. Phys. 80, 036501 (2017).
* (22) I. Martínez, P. Högl, C. González-Ruano, J. P. Cascales, C. Tiusan, Y. Lu, M. Hehn, A. Matos-Abiague, J. Fabian, I. Žutić, and F. G. Aliev, Interfacial Spin-Orbit Coupling: A Platform for Superconducting Spintronics, Phys. Rev. Applied 13, 014030 (2020).
* (23) K.-R. Jeon, X. Montiel, S. Komori, C. Ciccarelli, J. Haigh, H. Kurebayashi, L. F. Cohen, A. K. Chan, K. D. Stenning, C.-M. Lee, M. G. Blamire, and J. W. A. Robinson, Tunable Pure Spin Supercurrents and the Demonstration of Their Gateability in a Spin-Wave Device, Phys. Rev. X 10, 031020 (2020).
* (24) C. González-Ruano, L. G. Johnsen, D. Caso, C. Tiusan, M. Hehn, N. Banerjee, J. Linder, and F. G. Aliev, Superconductivity-induced change in magnetic anisotropy in epitaxial ferromagnet-superconductor hybrids with spin-orbit interaction, Phys. Rev. B 102, 020405(R) (2020).
* (25) R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Majorana Fermions and a Topological Phase Transition in Semiconductor-Superconductor Heterostructures, Phys. Rev. Lett. 105, 077001 (2010).
* (26) Y. Oreg, G. Refael, and F. von Oppen, Helical Liquids and Majorana Bound States in Quantum Wires, Phys. Rev. Lett. 105, 177002 (2010).
* (27) D. Aasen, M. Hell, R. V. Mishmash, A. Higginbotham, J. Danon, M. Leijnse, T. S. Jespersen, J. A. Folk, C. M. Marcus, K. Flensberg, and J. Alicea, Milestones Toward Majorana-Based Quantum Computing, Phys. Rev. X 6, 031016 (2016).
* (28) R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. A. Gupta, A spin triplet supercurrent through the half-metallic ferromagnet CrO2, Nature 439, 825 (2006).
* (29) J. W. A. Robinson, J. D. S. Witt, and M. G. Blamire, Controlled Injection of Spin-Triplet Supercurrents into a Strong Ferromagnet, Science 329, 59 (2010).
* (30) T. S. Khaire, M. A. Khasawneh, W. P. Pratt, Jr., and N. O. Birge, Observation of Spin-Triplet Superconductivity in Co-Based Josephson Junctions, Phys. Rev. Lett. 104, 137002 (2010).
* (31) N. Banerjee, J. W. A. Robinson, and M. G. Blamire, Reversible control of spin-polarized supercurrents in ferromagnetic Josephson junctions, Nat. Commun. 5, 4771 (2014).
* (32) E. C. Gingrich, B. M. Niedzielski, J. A. Glick, Y. Wang, D. L. Miller, R. Loloee, W. P. Pratt, Jr., and N. O. Birge, Controllable $0-\pi$ Josephson junctions containing a ferromagnetic spin valve, Nat. Phys. 12, 564 (2016).
* (33) J. Linder and J. W. A. Robinson, Superconducting spintronics, Nat. Phys. 11, 307 (2015).
* (34) L. P. Rokhinson, X. Liu, and J. K. Furdyna, The fractional a.c. Josephson effect in a semiconductor-superconductor nanowire as a signature of Majorana particles, Nat. Phys. 8, 795 (2012).
* (35) A. Fornieri, A. M. Whiticar, F. Setiawan, E. Portolés, A. C. C. Drachmann, A. Keselman, S. Gronin, C. Thomas, T. Wang, R. Kallaher, G. C. Gardner, E. Berg, M. J. Manfra, A. Stern, C. M. Marcus, and F. Nichele, Evidence of topological superconductivity in planar Josephson junctions, Nature 569, 89 (2019).
* (36) H. Ren, F. Pientka, S. Hart, A. Pierce, M. Kosowsky, L. Lunczer, R. Schlereth, B. Scharf, E. M. Hankiewicz, L. W. Molenkamp, B. I. Halperin, and A. Yacoby, Topological superconductivity in a phase-controlled Josephson junction, Nature 569, 93 (2019).
* (37) M. M. Desjardins, L. C. Contamin, M. R. Delbecq, M. C. Dartiailh, L. E. Bruhat, T. Cubaynes, J. J. Viennot, F. Mallet, S. Rohart, A. Thiaville, A. Cottet, and T. Kontos, Synthetic spin-orbit interaction for Majorana devices, Nat. Mater. 18, 1060 (2019).
* (38) M. C. Dartiailh, W. Mayer, J. Yuan, K. S. Wickramasinghe, A. Matos-Abiague, I. Žutić, and J. Shabani, Phase signature of topological transition in Josephson junctions, arXiv:1906.01179.
* (39) A. P. Mackenzie, Y. Maeno, The superconductivity of Sr2RuO4 and the physics of spin-triplet pairing, Rev. Mod. Phys. 75, 657 (2003).
* (40) A. Pustogow, Y. Luo, A. Chronister, Y.-S. Su, D. A. Sokolov, F. Jerzembeck, A. P. Mackenzie, C. W. Hicks, N. Kikugawa, S. Raghu, E. D. Bauer, and S. E. Brown, Pronounced drop of 17O NMR Knight shift in superconducting state of Sr2RuO4, Nature 574, 72 (2019); I. Žutić and I. Mazin, Phase-Sensitive Tests of the Pairing State Symmetry in Sr2RuO4, Phys. Rev. Lett. 95, 217004 (2005).
* (41) R. Sharma, S. D. Edkins, Z. Wang, A. Kostin, C. Sow, Y. Maeno, A. P. Mackenzie, J. C. S. Davis, and V. Madhavan, Momentum-resolved superconducting energy gaps of Sr2RuO4 from quasiparticle interference imaging, Proc. Natl. Acad. Sci. U.S.A. 117, 5222 (2020).
* (42) J. Schliemann, J. C. Egues, and D. Loss, Nonballistic Spin-Field-Effect Transistor, Phys. Rev. Lett. 90, 146801 (2003).
* (43) B. A. Bernevig, J. Orenstein, and S.-C. Zhang, Exact SU(2) Symmetry and Persistent Spin Helix in a Spin-Orbit Coupled System, Phys. Rev. Lett. 97, 236601 (2006).
* (44) J. D. Koralek, C. P. Weber, J. Orenstein, B. A. Bernevig, S.-C. Zhang, S. Mack, and D. D. Awschalom, Emergence of the persistent spin helix in semiconductor quantum wells, Nature 458, 610 (2009).
* (45) F. Dettwiler, J. Fu, S. Mack, P. J. Weigele, J. C. Egues, D. D. Awschalom, and D. M. Zumbuhl, Stretchable Persistent Spin Helices in GaAs Quantum Wells, Phys. Rev. X 7, 031010 (2017).
* (46) M. P. Walser, C. Reichl, W. Wegscheider, and G. Salis, Direct mapping of the formation of a persistent spin helix, Nat. Phys. 8, 757 (2012).
* (47) D. Iizasa, M. Kohda, U. Zülicke, J. Nitta, and M. Kammermeier, Enhanced longevity of the spin helix in low-symmetry quantum wells, Phys. Rev. B 101, 245417 (2020).
* (48) R. Winkler, H. Noh, E. Tutuc, and M. Shayegan, Anomalous Rashba spin splitting in two-dimensional hole systems, Phys. Rev. B 65, 155303 (2002).
* (49) J. J. Krich and B. I. Halperin, Cubic Dresselhaus Spin-Orbit Coupling in 2D Electron Quantum Dots, Phys. Rev. Lett. 98, 226802 (2007).
* (50) P. Altmann, F. G. G. Hernandez, G. J. Ferreira, M. Kohda, C. Reichl, W. Wegscheider, and G. Salis, Current-Controlled Spin Precession of Quasistationary Electrons in a Cubic Spin-Orbit Field, Phys. Rev. Lett. 116, 196802 (2016).
* (51) K. Yoshizumi, A. Sasaki, M. Kohda, and J. Nitta, Gate-controlled switching between persistent and inverse persistent spin helix states, Appl. Phys. Lett. 108, 132402 (2016).
* (52) M. Kammermeier, P. Wenk, and J. Schliemann, Control of Spin Helix Symmetry in Semiconductor Quantum Wells by Crystal Orientation, Phys. Rev. Lett. 117, 236801 (2016).
* (53) H. Nakamura, T. Koga, and T. Kimura, Experimental Evidence of Cubic Rashba Effect in an Inversion-Symmetric Oxide, Phys. Rev. Lett. 108, 206601 (2012).
* (54) R. Moriya, K. Sawano, Y. Hoshi, S. Masubuchi, Y. Shiraki, A. Wild, C. Neumann, G. Abstreiter, D. Bougeard, T. Koga, and T. Machida, Cubic Rashba Spin-Orbit Interaction of a Two-Dimensional Hole Gas in a Strained-Ge/SiGe Quantum Well, Phys. Rev. Lett. 113, 086601 (2014).
* (55) R. J. Cottier, B. D. Koehne, J. T. Miracle, D. A. Currie, N. Theodoropoulou, L. Pantelidis, A. Hernandez-Robles, and A. Ponce, Strong spin-orbit interactions in a correlated two-dimensional electron system formed in SrTiO3(001) films grown epitaxially on p-Si(001), Phys. Rev. B 102, 125423 (2020).
* (56) H. Liu, E. Marcellina, A. R. Hamilton, and D. Culcer, Strong Spin-Orbit Contribution to the Hall Coefficient of Two-Dimensional Hole Systems, Phys. Rev. Lett. 121, 087701 (2018).
* (57) V. Brosco, L. Benfatto, E. Cappelluti, C. Grimaldi, Unconventional dc Transport in Rashba Electron Gases, Phys. Rev. Lett. 116, 166602 (2016).
* (58) E. Strambini, A. Iorio, O. Durante, R. Citro, C. Sanz-Fernández, C. Guarcello, I. V. Tokatly, A. Braggio, M. Rocci, N. Ligato, V. Zannier, L. Sorba, F. S. Bergeret, and F. Giazotto, A Josephson phase battery, Nat. Nanotechnol. 15, 656 (2020).
* (59) K. Takiguchi, Le Duc Anh, T. Chiba, T. Koyama, D. Chiba, and M. Tanaka, Giant gate-controlled proximity magnetoresistance in semiconductor-based ferromagnetic/non-magnetic bilayers, Nat. Phys. 15, 1134 (2019).
* (60) See Supplemental Material for more details of calculations and analyses.
* (61) M. Alidoust, and K. Halterman, Proximity Induced vortices and long-range triplet supercurrents in ferromagnetic Josephson junctions and spin valves, J. Appl. Phys. 117, 123906 (2015).
* (62) M. Alidoust, and K. Halterman, Supergap and subgap enhanced currents in asymmetric $\rm S_{1}FS_{2}$ Josephson junctions, Phys. Rev. B 102, 224504 (2020).
* (63) M. Alidoust, Critical supercurrent and $\varphi_{0}$ state for probing a persistent spin helix, Phys. Rev. B 101, 155123 (2020).
* (64) M. Alidoust, Self-biased current, magnetic interference response, and superconducting vortices in tilted Weyl semimetals with disorder, Phys. Rev. B 98, 245418 (2018).
* (65) M. Alidoust and K. Halterman, Evolution of pair correlation symmetries and supercurrent reversal in tilted Weyl semimetals, Phys. Rev. B 101, 035120 (2020).
* (66) M. Alidoust, M. Willatzen, and A.-P. Jauho, Strain-engineered Majorana zero energy modes and $\varphi_{0}$ Josephson state in black phosphorus, Phys. Rev. B 98, 085414 (2018).
* (67) M. Alidoust, M. Willatzen, and A.-P. Jauho, Fraunhofer response and supercurrent spin switching in black phosphorus with strain and disorder, Phys. Rev. B 98, 184505 (2018).
* (68) M. Alidoust, A.-P. Jauho, and J. Akola, Josephson effect in graphene bilayers with adjustable relative displacement, Phys. Rev. Res. 2, 032074(R) (2020).
* (69) H. Sickinger, A. Lipman, M. Weides, R. G. Mints, H. Kohlstedt, D. Koelle, R. Kleiner, and E. Goldobin, Experimental Evidence of a $\varphi$ Josephson Junction, Phys. Rev. Lett. 109, 107002 (2012).
* (70) T. Kontos, M. Aprili, J. Lesueur, F. Genêt, B. Stephanidis, and R. Boursier, Josephson Junction through a Thin Ferromagnetic Layer: Negative Coupling, Phys. Rev. Lett. 89, 137007 (2002).
* (71) V. V. Ryazanov, V. A. Oboznov, A. Yu. Rusanov, A. V. Veretennikov, A. A. Golubov, and J. Aarts, Coupling of Two Superconductors Through a Ferromagnet: Evidence for a $\pi$ Junction, Phys. Rev. Lett. 86, 2427 (2001).
* (72) F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Odd triplet superconductivity and related phenomena in superconductor-ferromagnet structures, Rev. Mod. Phys. 77, 1321 (2005).
* (73) M. Eschrig, J. Kopu, J. C. Cuevas, and G. Schön, Theory of Half-Metal/Superconductor Heterostructures, Phys. Rev. Lett. 90, 137003 (2003).
* (74) K. Halterman, O. T. Valls, and C.-T. Wu, Charge and spin currents in ferromagnetic Josephson junctions, Phys. Rev. B 92, 174516 (2015).
* (75) C.-T. Wu and K. Halterman, Spin transport in half-metallic ferromagnet-superconductor junctions, Phys. Rev. B 98, 054518 (2018).
* (76) E. Moen and O. T. Valls, Quasiparticle conductance in spin valve Josephson structures, Phys. Rev. B 101, 184522 (2020).
* (77) T. Yokoyama, M. Eto, Y. V. Nazarov, Anomalous Josephson effect induced by spin-orbit interaction and Zeeman effect in semiconductor nanowires, Phys. Rev. B 89, 195407 (2014).
* (78) A. A. Golubov, M. Yu. Kupriyanov, and E. Il’ichev, The current-phase relation in Josephson junctions, Rev. Mod. Phys. 76, 411 (2004).
* (79) S. Kashiwaya and Y. Tanaka, Tunnelling effects on surface bound states in unconventional superconductors, Rep. Prog. Phys. 63, 1641 (2000).
* (80) S. Hart, Z. Cui, G. Ménard, M. Deng, A. E. Antipov, R. M. Lutchyn, P. Krogstrup, C. M. Marcus, and K. A. Moler, Current-phase relations of InAs nanowire Josephson junctions: From interacting to multimode regimes, Phys. Rev. B 100, 064523 (2019).
* (81) K. K. Likharev and V. K. Semenov, RSFQ Logic/Memory Family: A New Josephson-Junction Technology for Sub-Terahertz-Clock-Frequency Digital Systems, IEEE Trans. Appl. Supercond. 1, 3 (1991).
* (82) E. Terzioglu and M. R. Beasley, Complementary Josephson Junction Devices and Circuits: A Possible New Approach to Superconducting Electronics, IEEE Trans. Appl. Supercond. 8, 48 (1998).
* (83) T. Yamashita, K. Tanikawa, S. Takahashi, and S. Maekawa, Superconducting Qubit with a Ferromagnetic Josephson Junction, Phys. Rev. Lett. 95, 097001 (2005).
* (84) J. Nitta, T. Akazaki, H. Takayanagi, and T. Enoki, Gate Control of Spin-Orbit Interaction in an Inverted In0.53Ga0.47As/In0.52Al0.48As Heterostructure, Phys. Rev. Lett. 78, 1335 (1997).
* (85) W. C. Stewart, Current-voltage characteristics of Josephson junctions, Appl. Phys. Lett. 12, 277 (1968).
* (86) L. Casparis, M. R. Connolly, M. Kjaergaard, N. J. Pearson, A. Kringhøj, T. W. Larsen, F. Kuemmeth, T. Wang, C. Thomas, S. Gronin, G. C. Gardner, M. J. Manfra, C. M. Marcus, and K. D. Petersson, Nat. Nanotechnol. 13, 915 (2018).
* (87) P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, A Quantum Engineer?s Guide to Superconducting Qubits, Appl. Phys. Rev. 6, 021318 (2019)].
* (88) B. Baek, W. H. Rippard, S. P. Benz, S. E. Russek, and P. D. Dresselhaus, Hybrid superconducting-magnetic memory device using competing order parameters, Nat. Commun. 5, 3888 (2014).
* (89) A. Costa, P. Högl, and J. Fabian Magnetoanisotropic Josephson effect due to interfacial spin-orbit fields in superconductor/ferromagnet/superconductor junctions, Phys. Rev. B 95, 024514 (2017).
* (90) M. Alidoust and K. Halterman, Spontaneous edge accumulation of spin currents in finite-size two-dimensional diffusive spin-orbit coupled SFS heterostructures, New J. Phys. 17, 033001 (2015).
* (91) M. Alidoust and K. Halterman, Long-range spin-triplet correlations and edge spin currents in diffusive spin-orbit coupled SNS hybrids with a single spin-active interface, J. Phys: Cond. Matt. 27, 235301 (2015).
* (92) A. Zagoskin, Quantum Theory of Many-Body Systems, 2nd Ed. (Springer, New York, 2014).
* (93) W. Mayer, M. C. Dartiailh, J. Yuan, K. S. Wickramasinghe, E. Rossi, and J. Shabani, Gate controlled anomalous phase shift in Al/InAs Josephson junctions, Nat. Commun. 11, 21 (2020).
* (94) S. V. Iordanskii, Y. B. Lyanda-Geller, and G. E. Pikus, Weak Localization in Quantum Wells with Spin-Orbit Interaction, JETP Lett. 60, 206 (1994).
* (95) I. Högl, A. Matos-Abiague, I. Žutić, and J. Fabian, Magnetoanisotropic Andreev Reflection in Ferromagnet/Superconductor, Phys. Rev. Lett. 115, 116601 (2015).
* (96) I. Martnez, P. Högl, C. González-Ruano, J. Pedro Cascales, C. Tiusan, Y. Lu, M. Hehn, A. Matos-Abiague, J. Fabian, I. Žutić and F. G. Aliev, Interfacial Spin-Orbit Coupling: A Platform for Superconducting Spintronics, Phys. Rev. Applied 13, 014030 (2020).
* (97) T. Vezin, C. Shen, J. E. Han, and I. Žutić, Enhanced spin-triplet pairing in magnetic junctions with s-wave superconductors, Phys. Rev. B 101, 014515 (2020).
* (98) B. Scharf, F. Pientka, H. Ren, A. Yacoby, and E. M. Hankiewicz, Tuning topological superconductivity in phase-controlled Josephson junctions with Rashba and Dresselhaus spin-orbit coupling, Phys. Rev. B 99, 214503 (2019).
* (99) J. D. Pakizer, B. Scharf, and A. Matos-Abiague, Crystalline Anisotropic Topological Superconductivity in Planar Josephson Junctions, arXiv:2007.03498.
* (100) G. L. Fatin, A. Matos-Abiague, B. Scharf, and I. Žutić, Wireless Majorana Bound States: From Magnetic Tunability to Braiding, Phys. Rev. Lett. 117, 077002 (2016).
* (101) A. Matos-Abiague, J. Shabani, A. D. Kent, G. L. Fatin, B. Scharf, and I. Žutić, Tunable magnetic Textures: From Majorana bound states to braiding, Solid State Commun. 262, 1 (2017).
* (102) F. Ronetti, K. Plekhanov, D. Loss, and J. Klinovaja, Magnetically confined bound states in Rashba systems, Phys. Rev. Research 2, 022052(R) (2020).
* (103) J. Klinovaja, P. Stano, and D. Loss, Transition from Fractional to Majorana Fermions in Rashba Nanowires, Phys. Rev. Lett. 109, 236801 (2012).
* (104) T. Zhou, N. Mohanta, J. E. Han, A. Matos-Abiague, and I. Žutić, Tunable magnetic textures in spin valves: From spintronics to Majorana bound states, Phys. Rev. B 99, 134505 (2019).
* (105) N. Mohanta, T. Zhou, J.-W. Xu, J. E. Han, A. D. Kent, J. Shabani, I. Žutić, and A. Matos-Abiague, Electrical Control of Majorana Bound States Using Magnetic Stripes, Phys. Rev. Applied 12, 034048 (2019).
* (106) S. Turcotte, S. Boutin, J. Camirand Lemyre, I. Garate, and M. Pioro-Ladriére, Optimized micromagnet geometries for Majorana zero modes in low $g$-factor materials, Phys. Rev. B 102, 125425 (2020).
* (107) Y. Jiang, E. J. de Jong, V. van de Sande, S. Gazibegovic, G. Badawy, E. P. A. M. Bakkers, and S. M. Frolov, Hysteretic magnetoresistance in nanowire devices due to stray fields induced by micromagnets, Nanotechnology 32, 095001 (2021).
* (108) S. Rex, I. V. Gornyi, and A. D. Mirlin, Majorana modes in emergent-wire phases of helical and cycloidal magnet-superconductor hybrids, Phys. Rev. B 102, 224501 (2020).
* (109) V. Kornich, M. G. Vavilov, M. Friesen, M. A. Eriksson, and S. N. Coppersmith, Majorana bound states in nanowire-superconductor hybrid systems in periodic magnetic fields, Phys. Rev. B 101, 125414 (2020).
* (110) N. Mohanta, S. Okamoto, and E. Dagotto, Skyrmion Control of Majorana States in Planar Josephson Junctions, arXiv:2012.13502.
* (111) N. Mohanta, A. P. Kampf, and T. Kopp, Supercurrent as a probe for topological superconductivity in magnetic adatom chains, Phys. Rev. B 97, 214507 (2018).
* (112) A. Assouline, C. Feuillet-Palma, N. Bergeal, T. Zhang, A. Mottaghizadeh, A. Zimmers, E. Lhuillier, M. Marangolo, M. Eddrief, P. Atkinson, M. Aprili, H. Aubin, Spin-Orbit induced phase-shift in $\rm Bi_{2}Se_{3}$ Josephson junctions, Nat. Comm. 10, 126 (2019).
* (113) D. Xu, S. K. Yip, and J. A. Sauls, The Nonlinear Meissner Effect in Unconventional Superconductors, Phys. Rev. B 51, 16233 (1995).
* (114) S. Bae, Y. Tan, A. P. Zhuravel, L. Zhang, S. Zeng, Y. Liu, T. A. Lograsso, Ariando, T. Venkatesan, and S. M. Anlage, Dielectric resonator method for determining gap symmetry of superconductors through anisotropic nonlinear Meissner effect, Rev. Sci. Instrum. 90, 043901 (2019).
* (115) A. P. Zhuravel, B. G. Ghamsari, C. Kurter, P. Jung, S. Remillard, J. Abrahams, A. V. Lukashenko, A. V. Ustinov, and S. M. Anlage, Imaging the Anisotropic Nonlinear Meissner Effect in Nodal YBa2Cu3O${7-\delta}$ Thin-Film Superconductors, Phys. Rev. Lett. 110, 087002 (2013).
* (116) R. Prozorov and R. W Giannetta, Magnetic penetration depth in unconventional superconductors, Supercond. Sci. Technol. 19, R41 (2006).
* (117) K. Halterman, O. T. Valls, and I. Žutić Angular dependence of the penetration depth in unconventional superconductors, Phys. Rev. B 63, 014501 (2000).
* (118) A. Bhattacharya, I. Žutić, A. M.Goldman, O. T. Valls, U. Welp, and B. Veal, Angular Dependence of the Nonlinear Magnetic Moment of YBa2Cu3O${6.95}$in the Meissner State, Phys. Rev. Lett. 82, 3132 (1999).
* (119) I. Žutić and O. T. Valls, Superconducting-gap-node spectroscopy using nonlinear electrodynamics, Phys. Rev. B 56, 11279 (1997).
* (120) I. Žutić and O. T. Valls, Low-frequency nonlinear magnetic response of an unconventional superconductor, Phys. Rev. B 58, 8738 (1998).
|
# Marginally bound circular orbits in the composed black-hole-ring system
Shahar Hod, The Ruppin Academic Center, Emeq Hefer 40250, Israel, and The
Hadassah Institute, Jerusalem 91010, Israel
###### Abstract
The physical and mathematical properties of the non-linearly coupled black-
hole-orbiting-ring system are studied analytically to second order in the
dimensionless angular velocity $M_{\text{ir}}\omega_{\text{H}}$ of the black-
hole horizon (here $M_{\text{ir}}$ is the irreducible mass of the slowly
rotating central black hole). In particular, we determine analytically, to
first order in the dimensionless ring-to-black-hole mass ratio
$m/M_{\text{ir}}$, the shift $\Delta\Omega_{\text{mb}}/\Omega_{\text{mb}}$ in
the orbital frequency of the marginally bound circular geodesic that
characterizes the composed curved spacetime. Interestingly, our analytical
results for the frequency shift $\Delta\Omega_{\text{mb}}$ in the composed
black-hole-orbiting-ring toy model agree qualitatively with the recently
published numerical results for the corresponding frequency shift in the
physically related (and mathematically much more complex) black-hole-orbiting-
particle system. In particular, the present analysis provides evidence that,
at order $O(m/M_{\text{ir}})$, the recently observed positive shift in the
angular frequency of the marginally bound circular orbit is directly related
to the physically intriguing phenomenon of dragging of inertial frames by
orbiting masses in general relativity.
## I Introduction
Geodesic orbits of test particles in curved spacetimes are of central
importance in black-hole physics Car ; Bar ; Chan ; Shap ; WT ; Willmb ; Gro ;
Hodmb . They provide valuable information on the physical parameters (mass,
charge, angular momentum) of the central black hole. In particular, the
marginally bound circular orbit of a curved black-hole spacetime is of special
importance in astrophysics and general relativity Car ; Bar ; Chan ; Shap ; WT
; Willmb ; Gro ; Hodmb . This physically interesting geodesic represents the
innermost circular orbit of a massive particle which is energetically bound to
the central black hole.
For a test particle of proper mass $m$, the marginally bound circular geodesic
is characterized by the simple energy relation Car ; Bar ; Chan ; Shap
$E(r_{\text{mb}})=m\ ,$ (1)
where $E$ is the energy of the particle as measured by asymptotic observers.
Interestingly, the marginally bound circular geodesic (1) marks the boundary
between bound orbits, which are characterized by the sub-critical energy
relation $E<m$, and unbound circular orbits with $E>m$ which, given a small
outward perturbation, can escape to infinity. In particular, as nicely
demonstrated in Willmb ; WT , the critical (marginally bound) circular
geodesic (1) plays a key role in the dynamics of star clusters around super-
massive black holes in galactic nuclei. [The critical orbit (1) is sometimes
referred to in the physics literature as the innermost bound spherical orbit
(IBSO) Willmb ; Gro ].
An important gauge-invariant physical quantity that characterizes the motion
of particles along the marginally bound circular geodesic is the orbital
angular frequency $\Omega_{\text{mb}}$ of the particles as measured by
asymptotic observers. For a test-particle moving in the spinless spherically
symmetric Schwarzschild black-hole spacetime, this physically important
orbital frequency is given by the simple dimensionless relation Bar ; Chan ;
Shap
$M_{\text{ir}}\Omega_{\text{mb}}={1\over 8}\ .$ (2)
Here $M_{\text{ir}}$ is the irreducible mass Noteirr of the central black
hole.
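As a quick consistency check, Eq. (2) can be recovered from the standard Schwarzschild circular-geodesic relations Bar ; Chan ; Shap. The following sketch is our own illustration in Python/sympy; the textbook formulas $E/m=(1-2u)/(1-3u)^{1/2}$ and $M\Omega=u^{3/2}$, with $u\equiv M/r$, are assumed rather than taken from the present text.

```python
import sympy as sp

# Standard Schwarzschild circular-geodesic relations (textbook results):
# E/m = (1 - 2u)/sqrt(1 - 3u) and M*Omega = u**(3/2), with u = M/r.
u = sp.symbols('u', positive=True)

# Marginally bound condition E = m [Eq. (1)]: squaring gives a polynomial.
u_mb = sp.solve(sp.expand((1 - 2*u)**2 - (1 - 3*u)), u)
print(u_mb)                          # [1/4], i.e. r_mb = 4*M_ir
print(u_mb[0]**sp.Rational(3, 2))    # 1/8, reproducing Eq. (2)
```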
In recent years there has been growing physical interest in calculating the
$O(m/M_{\text{ir}})$ corrections to the orbital frequency (2) of the
marginally bound circular orbit in non-linearly coupled black-hole-particle
systems (see the physically interesting work Barackmb and references
therein). To this end, one should take into account the gravitational self-
force corrections to the geodesic orbits of the particles Ori ; Poi ; Lou1 ;
Det1 ; Bar1 ; Det2 ; Sag ; Kei ; Sha ; Dam ; Bar2 ; Fav2 ; Kol .
The gravitational self-force has two distinct physical contributions to the
dynamics of a composed black-hole-particle system:
(1) It is responsible for non-conservative physical effects, like the emission
of energy and angular momentum in the form of gravitational waves Ori .
(2) The composed black-hole-particle system is also characterized by
conservative gravitational self-force effects that preserve the total energy
and angular momentum of the system but shift the orbital frequency of the
marginally bound orbit.
Computing the gravitational self-force (GSF) correction
$\Delta\Omega_{\text{mb}}$ to the zeroth-order frequency (2) of the marginally
bound circular orbit is a highly non-trivial task. Intriguingly, Barack et
al. Barackmb have recently used sophisticated numerical techniques in the
composed Schwarzschild-black-hole-orbiting-particle system in order to compute
the characteristic shift $\Delta\Omega_{\text{mb}}$ in the orbital frequency
of the marginally bound orbit which is caused by the conservative part of the
GSF.
In particular, Barack et al. Barackmb have found the (positive)
dimensionless value
${{\Delta\Omega_{\text{mb}}}\over{\Omega_{\text{mb}}}}=c\cdot\eta+O(\eta^{2})\ \ \ \text{with}\ \ \ c\simeq 0.5536\ $ (3)
for the shift in the orbital frequency of the marginally bound circular orbit,
where
$\eta\equiv{{m}\over{M_{\text{ir}}}}\ $ (4)
is the dimensionless ratio between the mass of the orbiting particle and the
mass of the central Schwarzschild black hole. The physical importance of the
result (3) of Barackmb stems from the fact that it provides gauge-invariant
information about the strong-gravity effects in the highly curved region
($r\simeq 4M_{\text{ir}}$) of the black-hole spacetime.
The main goal of the present compact paper is to use analytical techniques in
order to gain some physical insights on the intriguing $O(m/M_{\text{ir}})$
increase [see Eq. (3)] in the orbital frequency of the marginally bound
circular orbit as recently observed numerically in the physically important
work Barackmb . In particular, we shall analyze a simple black-hole-orbiting-
ring toy model which, as we shall explicitly show below, captures some of the
essential physical features of the (astrophysically more interesting and
mathematically much more complex) black-hole-orbiting-particle system in
general relativity. As nicely proved by Will Will , the composed black-hole-
orbiting-ring toy model is amenable to a perturbative analytical treatment to
second order in the dimensionless angular velocity
$M_{\text{ir}}\omega_{\text{H}}$ of the central slowly rotating black hole.
## II The orbital frequency of the marginally bound circular orbit in the
composed black-hole-orbiting-ring spacetime
In the present paper we would like to gain some analytical insights into the
conservative part of the $O(m/M)$-shift in the orbital frequency
$\Omega_{\text{mb}}$ of the marginally bound orbit that has recently been
computed numerically in the highly interesting work Barackmb . To this end, we
shall use the analytically solvable model of an axisymmetric ring of matter
which orbits a central slowly spinning black hole Will . In particular, we
shall use this simplified axisymmetric toy model (which, due to its symmetry,
has no dissipative effects) in order to model the conservative part of the
dynamics of the mathematically more complex black-hole-orbiting-particle
system Notengw .
We expect the composed axisymmetric black-hole-orbiting-ring system to
capture, at least qualitatively, the essential physical features that
characterize the conservative dynamics of the composed black-hole-orbiting-
particle system. In particular, both the orbiting particle in the black-hole-
particle system and the orbiting ring in the black-hole-ring system drag the
generators of the central black-hole horizon Will .
The physically intriguing general relativistic effect of dragging of inertial
frames by an orbiting object is reflected, both in the black-hole-particle
system and in the black-hole-ring system, by a non-linear spin-orbit
interaction term of order $\omega_{\text{H}}\cdot j$ in the total
gravitational energy of the composed systems (here $\omega_{\text{H}}$ is the
angular velocity of the black-hole horizon and $j$ is the angular momentum per
unit mass of the orbiting ring).
Interestingly, and most importantly for our analysis, the main mathematical
advantage of the black-hole-orbiting-ring system over the physically more
interesting (but mathematically more complex) black-hole-orbiting-particle
system stems from the fact that the spin-orbit interaction term in the black-
hole-ring system is known in a closed analytical form to second order in the
dimensionless angular velocity $M_{\text{ir}}\omega_{\text{H}}$ of the central
black hole Will [see Eq. (10) below].
In a very interesting work, Will Will has analyzed the total gravitational
energy and the total angular momentum of a stationary physical system which is
composed of an axisymmetric ring of particles of proper mass $m$ which orbits
a central slowly rotating black hole of an irreducible mass $M_{\text{ir}}$.
In particular, it has been proved in Will that the composed axisymmetric
black-hole-orbiting-ring system is characterized by the total angular momentum
$J_{\text{total}}(x)=mj+4M^{3}_{\text{ir}}\omega_{\text{H}}-8mjx^{3}\ ,$ (5)
where
$x\equiv{{M_{\text{ir}}}\over{R}}\ $ (6)
is the dimensionless ratio between the irreducible mass of the black hole and
the proper circumferential radius of the ring,
$j(x)={{M_{\text{ir}}}\over{[x(1-3x)]^{1/2}}}\cdot[1+O(M_{\text{ir}}\omega_{\text{H}})]\ $ (7)
is the angular momentum per unit mass of the orbiting ring, and
$\omega_{\text{H}}$ is the angular velocity of the black-hole horizon.
Since the first term on the r.h.s. of (5) represents the angular momentum
$J_{\text{ring}}$ of the orbiting ring of mass $m$, one concludes Will that
the last two terms in (5) represent the angular momentum
$J_{\text{H}}=4M^{3}_{\text{ir}}\omega_{\text{H}}-8mjx^{3}\ $ (8)
which is contributed by the slowly spinning central black hole as measured by
asymptotic observers. In particular, it is interesting to point out that,
while the first term in (8) represents the usual relation between the angular
momentum and the angular velocity of a slowly rotating Kerr black hole, the
second term on the r.h.s of (8) is a direct consequence of the dragging of
inertial frames caused by the orbiting ring Will .
A simple inspection of the compact expression (8) reveals the physically
important fact that, unlike vacuum Schwarzschild black holes, a zero angular
momentum ($J_{\text{H}}=0$) black hole in the non-linearly coupled black-hole-
orbiting-ring system is characterized by the non-zero horizon angular velocity
$\omega_{\text{H}}(J_{\text{H}}=0)={{2x^{3}}\over{M^{3}_{\text{ir}}}}\cdot mj\ .$ (9)
In addition, it has been explicitly proved in Will that, to second order in
the angular velocity of the black-hole horizon, the composed axisymmetric
black-hole-orbiting-ring system is characterized by the total gravitational
energy
$\displaystyle E_{\text{total}}(x)=m-m\Phi(x)+M_{\text{ir}}+2M^{3}_{\text{ir}}\omega^{2}_{\text{H}}-\omega_{\text{H}}mj\Psi(x)-{{m^{2}x}\over{2\pi M_{\text{ir}}}}\ln\Big{(}{{8M_{\text{ir}}}\over{xr}}\Big{)}\ $ (10)
as measured by asymptotic observers. Here we have used the dimensionless
radially dependent functions
$\Phi(x)\equiv 1-{{1-2x}\over{(1-3x)^{1/2}}}\ \ \ ;\ \ \ \Psi(x)\equiv 12{{x^{3}-2x^{4}}\over{1-3x}}\ .$ (11)
The various terms in the energy expression (10), which characterizes the
composed black-hole-orbiting-ring system, have the following physical
interpretations Will :
* •
The first term in the energy expression (10) represents the proper mass of the
ring.
* •
In order to understand the physical meaning of the second term in the energy
expression (10), it is worth pointing out that, in the small-$x$ regime (large
ring radius, $R\gg M_{\text{ir}}$), this term can be approximated by the
compact expression [see Eqs. (6), (10), and (11)]
$-M_{\text{ir}}m/2R\cdot[1+O(M_{\text{ir}}/R)]$, which is simply the sum of
the potential and rotational Newtonian energies of the ring in the background
of the central compact object. Thus, this term represents the leading order
(linear in the mass $m$ of the ring) interaction between the central black
hole and the orbiting ring.
* •
In order to understand the physical meaning of the third and fourth terms in
the energy expression (10), it is worth pointing out that a slowly spinning
bare (isolated) Kerr black hole is characterized by the simple mass-angular-
velocity relation
$M_{\text{Kerr}}=M_{\text{ir}}+2M^{3}_{\text{ir}}\omega^{2}_{\text{H}}+O(M^{5}_{\text{ir}}\omega^{4}_{\text{H}})$.
Thus, the third and fourth terms in (10) can be identified as the contribution
of the slowly spinning central black hole to the total energy of the system.
Interestingly, taking cognizance of Eq. (9) one learns that due to the general
relativistic frame dragging effect, which is caused by the orbital motion of
the ring, the fourth term in (10) contains a self-interaction contribution [of
order $O(m^{2}/M_{\text{ir}})$] to the total energy of the composed black-
hole-orbiting-ring system.
* •
The fifth term in the energy expression (10) is a non-linear spin-orbit
interaction between the slowly spinning central black hole and the orbiting
ring. This energy term plays a key role in our composed black-hole-orbiting-
ring toy model system since it is expected to mimic, at least qualitatively,
the physically analogous non-linear spin-orbit interaction in the original
black-hole-orbiting-particle system. Taking cognizance of Eq. (9) one learns
that, due to the intriguing general relativistic phenomenon of frame dragging,
the spin-orbit interaction term in (10) contains a non-linear contribution to
the total energy of the composed black-hole-orbiting-ring system which is of
order $O(m^{2}/M_{\text{ir}})$.
* •
The sixth term in the energy expression (10) is the gravitational self-energy
of the ring Tho (not discussed in Will ), where $r\ll R$ is the half-
thickness of the ring. This energy contribution represents the inner
interactions between the many particles that compose the axisymmetric ring.
Since our main goal in the present paper is to present a simple analytical
toy-model for the physically more interesting (and mathematically more
complex) two-body system in general relativity, which is characterized by a
single orbiting particle, we shall not consider here this many-particle energy
contribution. This approximation allows one to focus the physical attention on
the general relativistic frame-dragging effect which characterizes both the
black-hole-orbiting-particle system and the black-hole-orbiting-ring system.
Taking cognizance of Eqs. (7), (9), (10), and (11), one finds the compact
functional expression
$\displaystyle E_{\text{total}}(x)=M_{\text{ir}}+m\cdot\Big{[}{{1-2x}\over{(1-3x)^{1/2}}}+{{8x^{5}(-2+3x)}\over{(1-3x)^{2}}}\cdot\eta+O(\eta^{2})\Big{]}\ $ (12)
for the total gravitational energy of the non-linearly coupled black-hole-
orbiting-ring system.
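The $O(\eta)$ coefficient in Eq. (12) follows from straightforward substitution. A minimal sympy sketch (our own check, setting $M_{\text{ir}}=1$ and dropping the ring self-energy term as discussed above) verifies it:

```python
import sympy as sp

# Substitute j(x) [Eq. (7)] and the J_H = 0 horizon velocity [Eq. (9)] into
# the energy expression (10), with M_ir = 1 and the self-energy term dropped.
x, eta = sp.symbols('x eta', positive=True)
m = eta                                          # eta = m/M_ir, M_ir = 1

j = 1 / sp.sqrt(x * (1 - 3*x))                   # Eq. (7), leading order
wH = 2 * x**3 * m * j                            # Eq. (9)
Phi = 1 - (1 - 2*x) / sp.sqrt(1 - 3*x)           # Eq. (11)
Psi = 12 * (x**3 - 2*x**4) / (1 - 3*x)

E = m - m*Phi + 1 + 2*wH**2 - wH*m*j*Psi         # Eq. (10), truncated

# The O(eta^2) piece of E is the O(eta) correction inside Eq. (12):
coeff = sp.simplify((E - 1 - m*(1 - Phi)) / eta**2)
print(sp.simplify(coeff - 8*x**5*(-2 + 3*x)/(1 - 3*x)**2))   # -> 0
```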
In the decoupling $R/M_{\text{ir}}\to\infty$ limit, in which the ring is
located at spatial infinity, the system is characterized by the presence of
two non-interacting physical objects: (1) a bare (unperturbed) Schwarzschild
black hole of mass $M=M_{\text{ir}}$ Notesmir , and (2) a ring of proper mass
$m$. Thus, the total energy of the black-hole-ring system in the
$R/M_{\text{ir}}\to\infty$ limit is given by the simple expression [see Eq.
(12) with $x\to 0$]
$\displaystyle E_{\text{total}}(R/M_{\text{ir}}\to\infty)=M+m\ \ \ \text{with}\ \ \ M=M_{\text{ir}}\ .$ (13)
Energy conservation implies that the marginally bound orbit of the composed
black-hole-orbiting-ring system is characterized by the same total
gravitational energy Notemmir
$\displaystyle E_{\text{total}}(x=x_{\text{mb}})=M_{\text{ir}}+m\ $ (14)
as measured by asymptotic observers. Substituting the relation (14) into Eq.
(12), one finds the simple expression
$x_{\text{mb}}={1\over 4}\cdot\Big{[}1+{{5}\over{16}}\cdot\eta+O(\eta^{2})\Big{]}\ $ (15)
for the $O(m/M_{\text{ir}})$-corrected location of the marginally bound
circular orbit in the composed black-hole-orbiting-ring system.
Substituting the dimensionless radial coordinate (15) of the marginally bound
orbit into the functional expression Will
$M_{\text{ir}}\Omega=x^{3/2}\cdot\Big{[}1-4x^{3/2}\cdot M_{\text{ir}}\omega_{\text{H}}+O[(M_{\text{ir}}\omega_{\text{H}})^{2}]\Big{]}\ $ (16)
for the dimensionless orbital frequency of the axisymmetric orbiting ring and
using Eqs. (7) and (9) Notesn , one obtains the $O(m/M_{\text{ir}})$-corrected
expression
$M_{\text{ir}}\Omega_{\text{mb}}={1\over 8}\cdot\Big{[}1+{{13}\over{32}}\cdot\eta+O(\eta^{2})\Big{]}\ $ (17)
for the characteristic orbital frequency of the marginally bound circular
geodesic in the composed black-hole-orbiting-ring system.
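Both perturbative steps can be verified symbolically. The sketch below (ours) solves the condition (14) with Eq. (12) for $x_{\text{mb}}$ and then evaluates Eq. (16), using $M_{\text{ir}}\omega_{\text{H}}=\eta/8+O(\eta^{2})$ at the marginally bound orbit Notesn:

```python
import sympy as sp

eta, c = sp.symbols('eta c')
x = sp.Rational(1, 4) * (1 + c*eta)              # ansatz around x_mb = 1/4

# Eq. (12): (E_total - M_ir)/m must equal 1 at the marginally bound orbit.
F = (1 - 2*x)/sp.sqrt(1 - 3*x) + 8*x**5*(-2 + 3*x)/(1 - 3*x)**2 * eta
c_mb = sp.solve(sp.series(F - 1, eta, 0, 2).removeO().coeff(eta), c)[0]
print(c_mb)                                      # 5/16, reproducing Eq. (15)

# Eq. (16) with M_ir*omega_H = eta/8 at the marginally bound orbit [Notesn]:
xmb = sp.Rational(1, 4) * (1 + c_mb*eta)
Omega = xmb**sp.Rational(3, 2) * (1 - 4*xmb**sp.Rational(3, 2) * eta/8)
print(sp.series(Omega, eta, 0, 2))               # 1/8 + 13*eta/256, Eq. (17)
```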
## III Summary
We have analyzed the physical and mathematical properties of a composed black-
hole-orbiting-ring system. In particular, we have proposed to use this
analytically solvable conservative Notengw system as a simple toy model for
the conserved dynamics of the astrophysically more interesting (and
mathematically more complex) black-hole-orbiting-particle system in general
relativity.
Our main goal was to provide a simple qualitative analytical explanation for
the increase in the orbital frequency of the marginally bound circular
geodesic that has recently been observed numerically in the physically
important work Barackmb . To this end, we have used the non-trivial spin-orbit
interaction between the central black hole and the orbiting ring, which is
known in a closed analytical form to second order in the dimensionless angular
velocity $M_{\text{ir}}\omega_{\text{H}}$ of the black-hole horizon, in order
to capture the essential physical features of a similar non-linear spin-orbit
interaction which is expected to characterize the conservative dynamics of the
black-hole-orbiting-particle system.
Interestingly, the analytically derived expression [see Eqs. (2) and (17)]
${{\Delta\Omega_{\text{mb}}}\over{\Omega_{\text{mb}}}}={{13}\over{32}}\cdot\eta+O(\eta^{2})\ $ (18)
for the dimensionless $O(m/M_{\text{ir}})$-shift in the orbital frequency of
the marginally bound circular geodesic in the composed black-hole-orbiting-
ring system provides the correct order of magnitude (with the correct sign)
for the corresponding shift in the orbital frequency of the marginally bound
circular geodesic of the physically more interesting black-hole-orbiting-
particle system.
This qualitative agreement suggests that the observed shift (3) in the
characteristic orbital frequency of the marginally bound circular geodesic is
mainly determined by the general relativistic effect of dragging of inertial
frames by orbiting objects (the non-linear spin-orbit interaction between the
orbiting object and the generators of the central black-hole horizon).
ACKNOWLEDGMENTS
This research is supported by the Carmel Science Foundation. I thank Yael
Oren, Arbel M. Ongo, Ayelet B. Lata, and Alona B. Tea for stimulating
discussions.
## References
* (1) B. Carter, Phys. Rev. 174, 1559 (1968).
* (2) J. M. Bardeen, W. H. Press and S. A. Teukolsky, Astrophys. J. 178, 347 (1972).
* (3) S. Chandrasekhar, The Mathematical Theory of Black Holes, (Oxford University Press, New York, 1983).
* (4) S. L. Shapiro and S. A. Teukolsky, Black holes, white dwarfs, and neutron stars: The physics of compact objects (Wiley, New York, 1983).
* (5) D. Merritt, T. Alexander, S. Mikkola, and C. M. Will, Phys. Rev. D 84, 044024 (2011).
* (6) C. M. Will, Class. Quantum Grav. 29, 217001 (2012).
* (7) R. Grossman, J. Levin, and G. Perez-Giz, Phys. Rev. D 85, 023012 (2012).
* (8) S. Hod, Phys. Rev. D 84, 104024 (2011) [arXiv:1201.0068]; S. Hod, Phys. Rev. D 84, 124030 (2011) [arXiv:1112.3286]; S. Hod, Phys. Lett. B 718, 1552 (2013) [arXiv:1210.2486]; S. Hod, Phys. Rev. D 87, 024036 (2013) [arXiv:1311.1281]; S. Hod, Phys. Rev. D 88, 087502 (2013) [arXiv:1707.05680]; S. Hod, Phys. Lett. B 726, 533 (2013) [arXiv:1312.4969]; S. Hod, The Euro. Phys. Jour. C 74, 2840 (2014) [arXiv:1404.1566].
* (9) The irreducible mass of a black hole is related to its surface area $A$ by the simple relation $M_{\text{ir}}=(A/16\pi)^{1/2}$. For a spherically symmetric vacuum Schwarzschild black hole, the irreducible mass coincides with the total ADM mass $M$ of the spacetime: $M_{\text{ir}}=M$.
* (10) L. Barack, M. Colleoni, T. Damour, S. Isoyama, N. Sago, Phys. Rev. D 100, 124015 (2019).
* (11) A. Ori and K. S. Thorne, Phys. Rev. D 62, 124022 (2000); A. Buonanno and T. Damour, Phys. Rev. D 62, 064015 (2000).
* (12) E. Poisson, Living Rev. Relativity 7, 6 (2004).
* (13) C. O. Lousto, Class. and Quant. Grav. 22, S369 (2005).
* (14) S. Detweiler, in Mass and Motion in General Relativity, edited by L. Blanchet, A. Spallicci, and B. Whiting (Springer, 2011).
* (15) L. Barack, Class. and Quant. Grav. 26, 213001 (2009).
* (16) S. Detweiler, Phys. Rev. D 77, 124026 (2008).
* (17) N. Sago, L. Barack, and S. Detweiler, Phys. Rev. D 78, 124024 (2008).
* (18) T. S. Keidl, A. G. Shah, J. L. Friedman, D. Kim, and L. R. Price, Phys. Rev. D 82, 124012 (2010).
* (19) A. Shah, T. Keidl, J. Friedman, D. Kim, and L. Price, Phys. Rev. D 83, 064018 (2011).
* (20) T. Damour, Phys. Rev. D 81, 024017 (2010).
* (21) L. Barack and N. Sago, Phys. Rev. Lett. 102, 191101 (2009); L. Barack and N. Sago, Phys. Rev. D 81, 084021 (2010); S. Akcay, L. Barack, T. Damour, and N. Sago, Phys. Rev. D 86, 104041 (2012).
* (22) M. Favata, Phys. Rev. D 83, 024027 (2011); M. Favata, Phys. Rev. D 83, 024028 (2011).
* (23) B. Kol, arXiv:1307.4064.
* (24) C. M. Will, Astrophys. J. 191, 521 (1974); C. M. Will, Astrophys. J. 196, 41 (1975).
* (25) It is worth stressing the fact that the dynamics of the axisymmetric black-hole-orbiting-ring system is conservative in the sense that, due to its simple symmetry, it contains no gravitational waves. Likewise, the conservative part of the dynamics of the composed black-hole-orbiting-particle system ignores the emission of energy and angular momentum in the form of gravitational waves.
* (26) K. S. Thorne, in Quasi-Stellar Sources and Gravitational Collapse (University of Chicago, 1965).
* (27) Note that a bare Schwarzschild black hole is characterized by the simple relation $M_{\text{ir}}=(A/16\pi)^{1/2}=M$.
* (28) We consider composed black-hole-orbiting-ring configurations which are characterized by a fixed value of the geometrically important quantity $M_{\text{ir}}$ (that is, a fixed value of the central black-hole surface area).
* (29) Note that one finds from Eqs. (7) and (9) the dimensionless relation $M_{\text{ir}}\omega_{\text{H}}={1\over 8}\eta\cdot[1+O(\eta)]$ for the angular velocity of the black-hole horizon when the orbiting ring is in its marginally bound orbit (15).
# Optimised Domain-engineered Crystals for Pure Telecom Photon Sources
A. Pickston Institute of Photonics and Quantum Sciences, Heriot-Watt
University Edinburgh, UK, EH14 4AS F. Graffitti Institute of Photonics and
Quantum Sciences, Heriot-Watt University Edinburgh, UK, EH14 4AS P. Barrow
Institute of Photonics and Quantum Sciences, Heriot-Watt University Edinburgh,
UK, EH14 4AS C. Morrison Institute of Photonics and Quantum Sciences,
Heriot-Watt University Edinburgh, UK, EH14 4AS J. Ho Institute of Photonics
and Quantum Sciences, Heriot-Watt University Edinburgh, UK, EH14 4AS A. M.
Brańczyk Perimeter Institute for Theoretical Physics, Waterloo, Ontario,
Canada, N2L 2Y5 A. Fedrizzi Institute of Photonics and Quantum Sciences,
Heriot-Watt University Edinburgh, UK, EH14 4AS
###### Abstract
The ideal photon-pair source for building up multi-qubit states needs to
produce indistinguishable photons with high efficiency. Indistinguishability
is crucial for minimising errors in two-photon interference, central to
building larger states, while high heralding rates will be needed to overcome
unfavourable loss scaling. Domain engineering in parametric down-conversion
sources negates the need for lossy spectral filtering allowing one to satisfy
these conditions inherently within the source design. Here, we present a
telecom-wavelength parametric down-conversion photon source that operates on
the achievable limit of domain engineering. We generate photons from
independent sources which achieve two-photon interference visibilities of up
to $98.6\pm 1.1\%$ without narrow-band filtering. As a consequence, we reach
net heralding efficiencies of up to 67.5%, which corresponds to collection
efficiencies exceeding $90\%$.
Scalable photonic quantum technologies require pure photons created on demand.
The simplicity of using photon sources based on spontaneous parametric down-
conversion (PDC) means the process has been exploited widely and studied in
great depth Christ et al. (2013); Slussarenko and Pryde (2019). Efforts have
been made to achieve pseudo-deterministic operation via multiplexing Pittman
et al. (2002); Migdall et al. (2002); Kaneda and Kwiat (2019); Collins et al.
(2013); Kiyohara et al. (2016); Francis-Jones et al. (2016); Ma et al. (2011);
Mendoza et al. (2016); Broome et al. (2011); Meyer-Scott et al. (2020), reach
high heralding efficiencies, and generate indistinguishable
photons—characteristics that all contribute towards an ideal source of
photons. Whilst deterministic operation can be addressed separately, photon
source engineering must focus on generating indistinguishable photons with
high heralding efficiencies, since tasks such as measurement-based quantum
computing Raussendorf et al. (2003); Walther et al. (2005), photonic Boson
sampling Broome et al. (2013); van der Meer et al. (2020) and photonic quantum
repeaters Azuma et al. (2015) are ultimately contingent on high visibility
two-photon interference at high rates with minimal losses.
Our work focuses on tailoring the phase matching function (PMF), modifying the
PDC interaction to produce optimal photons. The quantum state resulting from
PDC, when considering solely terms which describe emission of a single pair
reads,
$\ket{\psi}_{\text{pair}}=\iint d\omega_{s}d\omega_{i}f(\omega_{s},\omega_{i})\hat{a}_{s}^{\dagger}(\omega_{s})\hat{a}_{i}^{\dagger}(\omega_{i})\ket{0}.$ (1)
The state contains $f(\omega_{s},\omega_{i})$, which is determined by the pump
envelope function (PEF) $\alpha(\omega_{s}+\omega_{i})$, and PMF
$\phi(\omega_{s},\omega_{i})$,
$f(\omega_{s},\omega_{i})=\phi(\omega_{s},\omega_{i})\hskip 1.0pt\alpha(\omega_{s}+\omega_{i}),$ (2)
and is referred to as the Joint Spectral Amplitude (JSA). Under symmetric
group-velocity matching—where the mean of the inverse signal-idler group
velocities are matched to the inverse of the pump group velocity Grice et al.
(2001); U’Ren et al. (2006); Mosley et al. (2008); Jin et al. (2014, 2013);
Greganti et al. (2018)—the PMF and PEF are orthogonal. In this condition,
signal and idler photons can be interfered interchangeably, unlike in Ref.
Wang et al. (2016), as well as in heralded single-photon schemes.
Achieving unit photon purities requires the bi-photon states to exist in a
single spectral mode. In standard non-linear crystals, the PMF is a sinc-
shaped function, which generates spectral correlations in the JSA, leading to
bi-photon states that exist in a superposition of spectral modes Brańczyk et
al. (2011); Graffitti et al. (2018a), illustrated in Figure 1 (a, b). These
correlations reduce spectral photon purity and thus indistinguishability.
Typically, tight filtering is used to suppress the spectral correlations,
increasing purity and interference visibility. But filtering introduces
optical loss, leading to a reduction in heralding efficiencies, source
brightness and photon-number purity Brańczyk et al. (2010); Meyer-Scott et al.
(2017). One can achieve a factorable JSA without tight filtering however, by
engineering the properties of the crystal such that the PMF approximates a
Gaussian function, shown in Figure 1 (c). First suggested by Brańczyk et al.
in Ref. Brańczyk et al. (2011), several methods for obtaining a Gaussian PMF
have been developed. Altering the poling duty cycle of the crystal domains
Dixon et al. (2013); Chen et al. (2017, 2019); Cui et al. (2019), the
orientation of the poling direction Dosseva et al. (2016), and tailoring both
Graffitti et al. (2018b); Tambasco et al. (2016); Graffitti et al. (2017) can
all generate the desired function. Using an optimal technique developed in
Ref. Graffitti et al. (2017), Graffitti et al. demonstrated interference of
photons generated from independent domain-engineered crystals in Ref.
Graffitti et al. (2018b). Within that work, a symmetric heralding efficiency
of 65% was achieved along with a source brightness of 4kHz/mW and lower-bound
photon purity of 90.7$\pm$0.3%. While developed primarily for generating
separable photons, domain engineering can also be exploited for tailoring high
quality non-Gaussian PMFs, e.g. for efficient generation of time-frequency
mode entanglement Graffitti et al. (2020a) and time-frequency hyper-
entanglement Graffitti et al. (2020b).
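The effect of the PMF shape on separability can be made concrete numerically. The sketch below is our own illustration, with arbitrary matched widths rather than our source parameters: it builds the JSA of Eq. (2) under symmetric group-velocity matching, where the PMF argument is proportional to $\omega_{s}-\omega_{i}$, and compares the spectral purity of a sinc and a Gaussian PMF via the Schmidt (singular value) decomposition.

```python
import numpy as np

# Detuning grid around degeneracy (arbitrary units, illustrative widths).
w = np.linspace(-4, 4, 400)
ws, wi = np.meshgrid(w, w)

pef = np.exp(-(ws + wi)**2 / 2)                  # Gaussian pump envelope

def purity(jsa):
    """Spectral purity sum(lambda^2) of the normalised Schmidt coefficients."""
    s = np.linalg.svd(jsa, compute_uv=False)
    lam = s**2 / np.sum(s**2)
    return np.sum(lam**2)

jsa_sinc = np.sinc((ws - wi) / np.pi) * pef      # periodically poled crystal
jsa_gauss = np.exp(-(ws - wi)**2 / 2) * pef      # Gaussian-apodized crystal

print(purity(jsa_sinc), purity(jsa_gauss))       # sinc < 1; Gaussian -> 1
```

With matched widths the Gaussian JSA factorises exactly, while the sinc PMF yields a purity noticeably below one, mirroring the correlations in Figure 1 (a, b).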
Here we present a PDC source based on domain-engineered crystals, operating on
the achievable limits of this technique. Through the optimisation of
parameters which trade off the non-trivial relationship between non-linearity
and indistinguishability, we establish a lower bound on spectral purity of
$98.0\pm 0.1\%$, achieve a maximal visibility of $98.6\pm 1.1\%$, a symmetric
heralding efficiency of $67.5$% and a source brightness of $4.1$ kHz/mW.
Figure 1: Theoretical Phase Matching Functions and Joint Spectral Amplitudes
for periodically poled (a, b) and our Gaussian apodized crystals (c, d). The
prevalent correlations in the joint spectrum for the periodically poled
crystal are a result of the Sinc shaped phase matching function (a) and must
be filtered out with narrow-band filters to achieve high spectral purity.
These correlations are suppressed in the apodized crystals joint spectrum
(compare (b) and (d)) by targeting a Gaussian phase matching function (c),
increasing spectral purity, increasing source indistinguishability and
removing the need for tight spectral filtering.
PDC occurs in non-centrosymmetric optical materials, such as potassium titanyl
phosphate (KTP). Quasi-phase-matching (QPM), a method commonly used to bypass
non-critical phase-matching, is achieved by inverting the orientation of the
crystal lattice structure with a period that prevents destructive interference
of signal and idler fields. This allows photon generation along the
crystallographic axes, thus avoiding birefringent walk-off effects and permits
photon generation at desired wavelengths Fejer et al. (1992). The non-linear
response of a uniformly periodically-poled crystal corresponds to a step
function, which, in the Fourier domain, transforms to the detrimental sinc
function seen in Figure 1 (a).
Figure 2: Target function for Gaussian domain engineering. The top panel shows
the target function of varying widths, with the red shaded areas indicating
regions outside the boundary fixed by the crystal length $l$. A target
function that is too wide, for example when $\sigma=l/2$, will result in side
lobes in the PMF which is shown on the bottom panel. A narrow target function
may produce minimal side lobes in the PMF, but will result in a lower
effective non-linearity and therefore a lower source brightness. The blue
dotted lines indicate the trade-off we chose for our implementation.
To achieve high purities in the group-velocity matching (GVM) regime, the pump
envelope function should be a transform-limited Gaussian envelope, whilst the
phase-matching function should also be a Gaussian function Graffitti et al.
(2018a); Quesada and Brańczyk (2018). Typical mode-locked lasers have a
sech$^{2}$-shaped PEF, which in our case contributes $\sim$1% in purity decrease.
In this work we focus on PMF engineering only, but it’s also possible to
reshape the pump field spectrum into a Gaussian PEF, as recently demonstrated
by C. Chen et al. Chen et al. (2019).
We define the PMF as:
$\phi(\omega_{s},\omega_{i})=\int^{+\infty}_{-\infty}\hskip 1.0pt\text{g}(z)\text{e}^{i\Delta k(\omega_{s},\omega_{i})z}\text{d}z,$ (3)
where $\Delta k(\omega_{s},\omega_{i})=k_{p}(\omega_{s}+\omega_{i})-k_{s}(\omega_{s})-k_{i}(\omega_{i})$
is the phase mismatch arising from material dispersion and
$\text{g}(z)=\chi^{(2)}(z)/\chi^{(2)}_{0}$ is the normalised non-linear
response of the material. We can modify this function by aperiodically
altering the width and orientation of each domain while tracking a predefined
target function $\text{g}(z)_{\text{target}}$ Graffitti et al. (2017). This
target function produces a target PMF amplitude which is scaled to possess the
maximum gradient achievable—that is $\frac{\pi}{2}$ Boyd (2008)—to ensure that
the non-linear response along the longitudinal direction is maximised. The
resulting PMF amplitude Graffitti et al. (2017); Tambasco et al. (2016) is
given by:
$\text{PMF}(\Delta k_{0})=\sqrt{\frac{\pi}{2}}\left(\text{erf}\left(\frac{l}{2\sqrt{2}\sigma}\right)+\text{erf}\left(\frac{z-\frac{l}{2}}{\sqrt{2}\sigma}\right)\right).$ (4)
A crucial parameter is the choice of $\sigma$, the width of the Gaussian
function. This parameter balances source brightness with spectral purity. In
order to obtain high brightness the function must be wide, but to avoid
correlations the function must be narrow. Thus we choose a width narrow enough
to avoid a large step in non-linearity, and hence spectral correlations, whilst
wide enough to obtain a reasonably high effective nonlinearity and thus
brightness. This trade-off is illustrated in Figure 2. With $\sigma=l/4.7$,
where $l$ is the crystal length, the generation of side lobes is minimal
whilst not adversely reducing generation rates; see Figure 1 (c) for our
apodized crystals' theoretical PMF. The crystal length is $l=30$ mm, resulting
in $\sigma=6.38$ mm.
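The trade-off of Figure 2 can be reproduced by Fourier transforming the truncated Gaussian target, as in Eq. (3). A rough numerical sketch (ours; the grid and the wing cut-off of $|\Delta k|>1000~\text{m}^{-1}$ are arbitrary illustrative choices):

```python
import numpy as np

l = 30e-3                                      # crystal length, 30 mm
z = np.linspace(0.0, l, 2048)
dz = z[1] - z[0]
dk = np.linspace(-2000.0, 2000.0, 401)         # detuning from Delta k_0 (1/m)

def pmf(sigma):
    g = np.exp(-(z - l/2)**2 / (2 * sigma**2))        # truncated target g(z)
    # |phi(dk)| = |integral over the crystal of g(z) exp(i dk z) dz|, Eq. (3)
    return np.abs((g * np.exp(1j * np.outer(dk, z))).sum(axis=1) * dz)

for label, sigma in [("l/2", l/2), ("l/4.7", l/4.7)]:
    phi = pmf(sigma)
    wings = phi[np.abs(dk) > 1000].max() / phi.max()
    print(f"sigma = {label}: peak {phi.max():.2e}, side-lobe level {wings:.1e}")
```

The wider target yields a larger peak, i.e. a brighter source, but markedly stronger side lobes, which is precisely the trade-off plotted in Figure 2.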
Figure 3: Experimental Layout. (a) A Ti:Sapphire laser pumps a standard ppKTP, or
domain-engineered aKTP crystal, at a repetition rate of 80.9 MHz. The down-
converted photon pairs are collected after reflection from a dichroic mirror
and separated by a PBS. Individual photons from two sources are temporally
synchronised with an adjustable free-space gap before they are superposed in
an in-fibre BS. Photons are then detected by Superconducting Nano-wire Single
Photon Detectors (SNSPDs), with photon arrival times being time-tagged and
processed. (b) Two $\sim 20$km fibre spools of telecommunication grade fibre
are used for dispersive spectroscopy, exploiting chromatic dispersion allowing
us to reconstruct the joint photon spectrum Avenhaus et al. (2009). We collect
the photon pairs in the same manner as above, however the collected photons
are subjected to the fibre delay.
Using a mode-locked Ti:Sapphire laser with a non-ideal
$\text{sech}^{2}$-shaped spectral envelope we pump our crystals at a
wavelength of 774.9 nm, down-converting to 1549.8 nm. The pulse duration can
be tuned between 1.3 ps and 1.4 ps to be matched to different crystal and
filter combinations. Operating just below $1550$ nm was necessary for
temperature stabilisation, as it keeps the crystal temperature required for
degenerate photon generation sufficiently far from room temperature. We focus into the
centre of the crystal with a 40 cm lens, generating a slightly elliptical spot
with a waist of $\sim 124\mu$m in the horizontal and $\sim 119\mu$m in the
vertical axis. This focusing condition was chosen as an optimal trade-off
between brightness and heralding efficiency Bennink (2010); Grice et al.
(2011); Dixon et al. (2014). To collect the down-converted modes we separate
the emitted photon pairs on a polarising beam splitter, with an initial
dichroic mirror removing pump photons. Signal and idler photons are collected
into single-mode fibres after long-pass filtering to reduce any residual pump
photons further. We introduce some gentle filtering around the central
spectral lobe of our down-converted photons via a filter with a transmission
profile of $\text{exp}[-\frac{(\omega-\omega_{0})^{4}}{2\sigma^{4}}]$ and a FWHM
of 7.4 nm; this is $\sim$5 times wider than the generated photon bandwidth and thus
minimally impacts heralding efficiencies. Down-converted photons then pass
through optical interference or spectroscopy setups before being collected by
Superconducting Nano-wire Single Photon Detectors (SNSPDs) operating at $\sim
80\%$ detection efficiencies. See Figure 3 (a) for the experimental layout.
We investigated two-photon interference visibilities for different
configurations of crystals—a 22 mm periodically-poled KTP crystal and a 30 mm
custom-poled KTP crystal—and filters. We interfered photons generated from
separate, but identical (manufactured from the same lithographic mask)
crystals. In order to obtain a lower bound on the implied photon purity and to
generate the data in Figure 4 (a), the two sources were pumped with the same
amount of pump power and at least five independent two-photon interference
scans were run consecutively. The data acquisition time for each of these
scans was sufficient to obtain at least 1000 four-photon coincidence events
outside of the dip position. From this data set we fit a linear function and
extrapolate the expected visibility at zero pump power. This technique
eliminates visibility degradation due to photon-number impurity (see the
Appendix of Ref. Graffitti et al. (2018b)) and serves to lower bound photon
purity. The performance of all results are summarised in Table 1. The
different crystal photon generation rates, in terms of number of coincident
photon counts per second, are shown in Figure 4 (c). Importantly, the
generation rates and heralding efficiencies are quoted with consistent
focusing conditions in the same optical setup and provide a comparison and not
an upper limit on crystal performances. Different pump focusing conditions as
well as different collection optics will result in different values for source
brightness, heralding efficiencies and can also impact photon purity Bennink
(2010).
Crystal | Interference Visibility (%) | Mean Heralding Efficiency (%) | Collection Efficiency (%) | Mean Brightness (cc/mW/s) | Experimental $\sqrt{\text{JSI}}$ Purity (%) | Theoretical JSA Purity (%)
---|---|---|---|---|---|---
aKTP | 98.0 $\pm$ 0.1 | 67.5 | 91.8 | 3900 | 91.17 $\pm$ 0.02 | 98.7
ppKTP | 95.3 $\pm$ 0.1 | 57.2 | 77.4 | 4900 | 94.43 $\pm$ 0.03 | 98.4
Table 1: A summary of results comparing our custom aKTP crystal with loose
spectral filters to a ppKTP crystal with tight spectral filters. The
interference visibilities are quoted at zero pump power. The mean heralding
efficiencies and brightness respectively for each crystal result from an
analysis of the performance of each source as a function of pump power. The
collection efficiencies are calculated with respect to the upper limit
detection efficiency of our detectors (80%) as well as other known optical
losses (7.9%). Finally we also include the purities calculated from our
experimental JSI reconstructions, as well as the theoretical purities. We use
the $\sqrt{\text{JSI}}$ to calculate purities as it represents a better
approximation of the JSA compared to calculating the purity of the JSI
Graffitti et al. (2018a).
Figure 4: Experimental results from interfering
photons generated from two independent sources. (a) Visibility dependence on
the squeezing parameter, $\gamma$. Each data point represents the average
visibility from five interference measurements for each value of pump power
(or, equivalently, for each value of $\gamma$). From this data set we can
infer a minimum spectral purity of $98.0\pm 0.1\%$ and compare the performance
of our aKTP crystals with loose spectral filtering against a ppKTP crystal
with narrow-band, tight spectral filtering. (b) A two-photon interference
measurement between photons generated from separate sources. At a pump power
of 10 mW we achieve an interference visibility of $98.6\pm 1.1\%$, with a
four-photon coincidence rate of around 5 Hz. (c) Photon pair generation rates
of our previous crystal, a filtered ppKTP crystal and our aKTP crystal. (d)
Theoretical amplitude of the phase matching function along the longitudinal
direction (z axis) of the crystal at $\Delta k_{0}$.
We observe an improvement in both interference visibility and generation rates
upon Ref. Graffitti et al. (2018b), a result of altering the width of the
Gaussian target function tracked by our algorithm from $\sigma=l/4$ to
$\sigma=l/4.7$. Ref. Graffitti et al. (2018b) reported a lower bound purity of
$92.7\pm 0.2\%$. This data was obtained using a delayed two-photon
interference technique, interfering photons generated from the same source.
Instead of this technique, we perform interference measurements on photons
from independent crystals, representing a true proxy for source scalability.
Our new apodized crystals have a lower-bound purity, under the same gentle
filtering, of $98.0\pm 0.1\%$. Without any filtering we obtain a lower-bound
purity of $95.3\pm 0.1\%$ and the respective data contributes to a full plot
of all results found in the Appendix.
Rather than expressing results in terms of pumping power, we show the main
results in terms of $\gamma$, the effective squeezing of the two-mode squeezed
vacuum, which encompasses the pump power and focusing conditions applied to
the crystal. In the photon number basis, $n$, the PDC process can be expressed
as
$(1-\gamma^{2})^{1/2}\sum^{\infty}_{\text{n}=0}\gamma^{\text{n}}\ket{\text{n},\text{n}}_{s,i}$,
with $\gamma$ defined as: $\gamma=(\tau\text{p})^{1/2}$, where p is the pump
power and $\tau$ is a constant quantifying the non-linear interaction of the
medium Jin et al. (2015). In this work, we evaluate $\gamma$ from the measured
coincidence rates, single rates and the clock rate of the pulsed laser in a
similar manner as in Ref. Jin et al. (2015). With knowledge of $\gamma$, the
photon pair rate and multi-photon pair rates can be determined. This forms a
more representative analysis of crystal performance as the variety of
experimental conditions distinct to our analysis are gathered into this one
term. Figure 4 (a) therefore compares the interference visibility of our aKTP
crystals with that of a ppKTP crystal as a function of the squeezing,
$\gamma$.
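For reference, a given $\gamma$ fixes the photon-number statistics of the state written above, $P(n)=(1-\gamma^{2})\gamma^{2n}$. A minimal sketch (ours, with an arbitrary illustrative value of $\gamma$; experimentally $\gamma$ is instead extracted from count rates as described):

```python
import numpy as np

gamma = 0.1                                # illustrative squeezing value
n = np.arange(6)
p = (1 - gamma**2) * gamma**(2*n)          # P(n) for |n,n> pair emission

print({int(k): float(v) for k, v in zip(n, p)})
# Fraction of emission events carrying more than one pair:
print("multi-pair fraction:", p[2:].sum() / p[1:].sum())
```

At small $\gamma$ the multi-pair fraction scales as $\gamma^{2}$, which is why the visibilities in Figure 4 (a) are extrapolated to zero pump power.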
With apodization, the need for tight filtering is removed, resulting in
significantly higher heralding efficiencies, seen in Table 1. This higher
efficiency means that when both sources are generating photons at the same raw
rate, the source with higher heralding efficiencies will lead to higher rates
of detector clicks. Factoring out known optical losses and detection
efficiencies (taken as the quoted operational upper bound of 80%), overall
collection efficiencies are lower bounded to 91.8%. Optical losses were
determined by measuring the transmission properties of each optical element
between pair production and the housing of our detectors; this accounted for a
loss of 7.9%. Anti-reflection coated optics were used where possible to
minimise any losses, including on the end facets of all the KTP crystals used
in this investigation.
Figure 5: (a) and (b) Experimental reconstruction of the JSI and marginal
photon spectrum. Using a dispersive spectroscopy technique, we construct the
full joint spectrum spanning a whole repetition cycle of our laser, for our
aKTP crystal (a) and ppKTP crystal (b). The reconstruction reveals all
spectral correlations which are then either suppressed by filtering, or
already suppressed through modification of the PMF. These figures are plotted
with a logarithmic scale in order to highlight any correlations. (c)
Normalised heralding and purities of the crystals we have analysed in this
manuscript as a function of the photon bandwidth or the filtered photons
bandwidth. The solid data points represent the normalised heralding, the
filled data points are purity values and the solid (dashed) lines are the
simulated results of the heralding (purity) for the ppKTP crystal.
Another means of quantifying source performance is to analyse a reconstruction
of the joint photon spectrum. Reconstruction of the JSA is experimentally
demanding since it requires a spectrally resolved amplitude and phase
measurement, which can be achieved for example via phase-sensitive stimulated
emission tomography Jizan et al. (2016). Constructing the joint spectral
intensity (JSI), equivalent to $|{\text{JSA}}|^{2}$, can be achieved with
comparative ease and is therefore commonly shown, although one has to be
careful what conclusions to draw in the absence of phase information normally
contained in the JSA Graffitti et al. (2018a). With 20 km of standard
telecommunication fibre optic we can exploit chromatic dispersion to map
photon arrival time to the associated spectral component of the JSI, as
performed in Graffitti et al. (2020a). The experimental arrangement is
depicted in Figure 3 (b). Collection of at least $10^{6}$ photons detected by
SNSPDs operating with $<50$ ps jitter, $<25$ ns reset time and processed via
a Picoquant HydraHarp with 1 ps timing resolution, enabled the construction of
the respective JSI for combinations of filter and crystal. The spectral window
of our results span 12.5 ns and the achievable timing resolution of this
spectrometer translates to a spectral resolution of $\sim 0.0028$ nm.
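This resolution follows from the fibre's chromatic dispersion. A back-of-the-envelope sketch (ours; the dispersion coefficient $D\approx 17.5$ ps/(nm km) is a typical value for standard telecom fibre near 1550 nm, assumed here rather than measured for our spools):

```python
# Time-to-wavelength mapping of the dispersive spectrometer.
D = 17.5            # ps / (nm km), typical telecom-fibre value near 1550 nm
L = 20.0            # km of fibre
dt = 1.0            # ps, time-tagger resolution

print(dt / (D * L), "nm per 1 ps time bin")   # ~0.0029 nm, cf. ~0.0028 nm quoted
```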
Figure 5 (a) and (b) show the respective experimental JSIs of un-filtered
aKTP and un-filtered ppKTP on a logarithmic scale. Any spectral correlations
that exist along the main diagonal are visually highlighted. These correlation
are clearly prevalent for ppKTP but almost non-existent for unfiltered aKTP, a
result of non-zero contributions from the PMF. Along the diagonal, from bottom
left to the top right, as well parallel to the x and y axes, through the
central lobe of the joint spectra, we see a constant background signal arising
from dark counts. An additional PDC source was used as a trigger, to measure
the arrival of signal and idler photons. A dark count detected in the trigger
channel, as opposed to an actual photon, corresponds to a displacement of the
central lobe along the diagonal, resulting in temporally correlated background
noise. If, either the signal or idler photon is lost, but a dark count is
detected in that channel along with the trigger and remaining signal/idler
photon, the central lobe is shifted parallel to the x or y axis
depending on whether the signal or the idler photon is lost. The probability of
this is smaller, proportional to the pair emission probability.
To produce estimates for both the JSI and $\sqrt{\textrm{JSI}}$ purities, we
reconstructed the JSI across increasingly long measurement intervals. Each
estimation is calculated using $50\times 50~{}\textrm{ps}$ bins; doing so
reduces the sparsity of the raw data and provides a more accurate and reliable
Singular Value Decomposition (SVD). The SVD is used to numerically implement
the Schmidt decomposition, used to quantify the non-separability of the JSA
Law et al. (2000). By observing the value the estimation converges towards, we
truncate the total measurement time to avoid instability. These purity
estimates are contained in Table 1. Neither the JSI nor the $\sqrt{\text{JSI}}$
truly reveal photon purity due to lack of phase information, something two-
photon interference can incorporate Graffitti et al. (2018a). Thus, two-photon
interference results represent a more faithful estimate of photon purity.
Discrepancies between the lower-bound purities determined by two-photon
interference results, and inferred purities from experimental JSIs could be
caused by a combination of different factors, such as drifts in the laser
central frequency and pulse duration, as well as non-negligible jitter in the
detection system. Visually noticeable elongation of central lobe along the
diagonal suggests pump pulse durations that are shorter than the crystal is
optimised for, which in turn would result in a lower purity for experimental
JSI analysis. From simulations we estimate that, pulse durations that are $\pm
0.4$ ps away from the ideal value can result in purities dropping by 6%, see
the Appendix for more details.
The importance of achieving the photon source characteristics displayed in
this work was recently highlighted in Ref. van der Meer et al. (2020), which
concludes that quantum supremacy in a Boson sampling task with non-multiplexed
single-photons from PDC sources can only be achieved with Gaussian-profile
aKTP crystals due to the higher collection efficiencies. Notably, photonic
quantum supremacy has just been demonstrated in Gaussian Boson Sampling (GBS),
in an experiment which created 50 photons from 25 Gaussian apodized crystals
using a duty-cycle poling technique Zhong et al. (2020). Using our improved
poling algorithm and considering the trade-off between non-linearity and
photon purity highlighted in this manuscript, an optimal $\sigma$ could enable
higher purities and heralding efficiencies. This, in turn, would culminate in
a greater advantage and scalability of the scheme.
The discrepancy in brightness between our aKTP source and the ppKTP source
highlighted within Table 1 can be balanced by adjusting the relative pump
powers to achieve the same squeezing $\gamma$. At a fixed value of $\gamma$,
the single- and multi-pair production probabilities for aKTP and ppKTP are the
same, independent of the absolute pump power, as the adjusted pump powers act
to equate the probabilities of generating $n$ photon pairs. A hard limit on
available pump power for multiple bulk PDC sources could restrict one’s
ability to maximise brightness. Future scalable architectures, however, are
likely to be based on waveguides, which typically require only $\mu$W of
pump power. Gaussian PMFs can also be achieved in waveguide sources, either
through domain engineering, or via inter-modal Spontaneous Four Wave Mixing
(SFWM) in a multi-mode waveguide Paesani et al. (2020).
Future improvements will target higher interference visibilities by modifying
the PEF. In this work, the PEF is a $1.3~\textrm{ps}$ long $\textrm{sech}^{2}$ pulse,
imposing a theoretical limit on the maximum visibility of 98.7%. However, it
is possible to achieve up to 99.9% visibility directly with our crystals by
engineering the PEF Graffitti et al. (2018a). Modification of the PEF can be
achieved using pump-shaping techniques Chen et al. (2019). Additionally,
further improvements may be obtained by exploring the interplay of spatial and
spectral modes generated in a non-linearity engineered crystal Bruno et al.
(2014); Guerreiro et al. (2013).
## References
* Christ et al. (2013) A. Christ, A. Fedrizzi, H. Hübel, T. Jennewein, and C. Silberhorn, in _Single-Photon Generation and Detection_, edited by A. Migdall, S. V. Polyakov, J. Fan, and J. C. Bienfang (Academic Press, 2013), vol. 45 of _Experimental Methods in the Physical Sciences_, pp. 351–410, URL http://www.sciencedirect.com/science/article/pii/B9780123876959000111.
* Slussarenko and Pryde (2019) S. Slussarenko and G. J. Pryde, Applied Physics Reviews 6, 041303 (2019).
* Pittman et al. (2002) T. B. Pittman, B. C. Jacobs, and J. D. Franson, Phys. Rev. A 66, 042303 (2002), URL https://link.aps.org/doi/10.1103/PhysRevA.66.042303.
* Migdall et al. (2002) A. L. Migdall, D. Branning, and S. Castelletto, Phys. Rev. A 66, 053805 (2002), URL https://link.aps.org/doi/10.1103/PhysRevA.66.053805.
* Kaneda and Kwiat (2019) F. Kaneda and P. G. Kwiat, Science Advances 5 (2019), URL https://advances.sciencemag.org/content/5/10/eaaw8586.
* Collins et al. (2013) M. J. Collins, C. Xiong, I. H. Rey, T. D. Vo, J. He, S. Shahnia, C. Reardon, T. F. Krauss, M. J. Steel, A. S. Clark, et al., Nature Communications 4, 2582 (2013), ISSN 2041-1723, URL https://doi.org/10.1038/ncomms3582.
* Kiyohara et al. (2016) T. Kiyohara, R. Okamoto, and S. Takeuchi, Opt. Express 24, 27288 (2016), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-24-24-27288.
* Francis-Jones et al. (2016) R. J. A. Francis-Jones, R. A. Hoggarth, and P. J. Mosley, Optica 3, 1270 (2016), URL http://www.osapublishing.org/optica/abstract.cfm?URI=optica-3-11-1270.
* Ma et al. (2011) X.-S. Ma, S. Zotter, J. Kofler, T. Jennewein, and A. Zeilinger, Physical Review A 83, 043814 (2011), URL https://link.aps.org/doi/10.1103/PhysRevA.83.043814.
* Mendoza et al. (2016) G. J. Mendoza, R. Santagati, J. Munns, E. Hemsley, M. Piekarek, E. Martín-López, G. D. Marshall, D. Bonneau, M. G. Thompson, and J. L. O’Brien, Optica 3, 127 (2016), URL http://www.osapublishing.org/optica/abstract.cfm?URI=optica-3-2-127.
* Broome et al. (2011) M. A. Broome, M. P. Almeida, A. Fedrizzi, and A. G. White, Opt. Express 19, 22698 (2011), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-19-23-22698.
* Meyer-Scott et al. (2020) E. Meyer-Scott, C. Silberhorn, and A. Migdall, Review of Scientific Instruments 91, 041101 (2020), eprint https://doi.org/10.1063/5.0003320, URL https://doi.org/10.1063/5.0003320.
* Raussendorf et al. (2003) R. Raussendorf, D. E. Browne, and H. J. Briegel, Phys. Rev. A 68, 022312 (2003), URL https://link.aps.org/doi/10.1103/PhysRevA.68.022312.
* Walther et al. (2005) P. Walther, K. J. Resch, T. Rudolph, E. Schenck, H. Weinfurter, V. Vedral, M. Aspelmeyer, and A. Zeilinger, Nature 434, 169 (2005).
* Broome et al. (2013) M. A. Broome, A. Fedrizzi, S. Rahimi-Keshari, J. Dove, S. Aaronson, T. C. Ralph, and A. G. White, Science 339, 794 (2013), ISSN 0036-8075, URL https://science.sciencemag.org/content/339/6121/794.
* van der Meer et al. (2020) R. van der Meer, J. J. Renema, B. Brecht, C. Silberhorn, and P. W. H. Pinkse, Phys. Rev. A 101, 063821 (2020), URL https://link.aps.org/doi/10.1103/PhysRevA.101.063821.
* Azuma et al. (2015) K. Azuma, K. Tamaki, and H.-K. Lo, Nature Communications 6, 6787 (2015).
* Grice et al. (2001) W. P. Grice, A. B. U’Ren, and I. A. Walmsley, Phys. Rev. A 64, 063815 (2001), URL https://link.aps.org/doi/10.1103/PhysRevA.64.063815.
* U’Ren et al. (2006) A. B. U’Ren, C. Silberhorn, R. Erdmann, K. Banaszek, W. P. Grice, I. A. Walmsley, and M. G. Raymer, arXiv preprint quant-ph/0611019 (2006).
* Mosley et al. (2008) P. J. Mosley, J. S. Lundeen, B. J. Smith, P. Wasylczyk, A. B. U’Ren, C. Silberhorn, and I. A. Walmsley, Phys. Rev. Lett. 100, 133601 (2008), URL https://link.aps.org/doi/10.1103/PhysRevLett.100.133601.
* Jin et al. (2014) R.-B. Jin, R. Shimizu, K. Wakui, M. Fujiwara, T. Yamashita, S. Miki, H. Terai, Z. Wang, and M. Sasaki, Opt. Express 22, 11498 (2014), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-22-10-11498.
* Jin et al. (2013) R.-B. Jin, R. Shimizu, K. Wakui, H. Benichi, and M. Sasaki, Opt. Express 21, 10659 (2013), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-21-9-10659.
* Greganti et al. (2018) C. Greganti, P. Schiansky, I. A. Calafell, L. M. Procopio, L. A. Rozema, and P. Walther, Opt. Express 26, 3286 (2018), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-26-3-3286.
* Wang et al. (2016) X.-L. Wang, L.-K. Chen, W. Li, H.-L. Huang, C. Liu, C. Chen, Y.-H. Luo, Z.-E. Su, D. Wu, Z.-D. Li, et al., Phys. Rev. Lett. 117, 210502 (2016), URL https://link.aps.org/doi/10.1103/PhysRevLett.117.210502.
* Brańczyk et al. (2011) A. M. Brańczyk, A. Fedrizzi, T. M. Stace, T. C. Ralph, and A. G. White, Opt. Express 19, 55 (2011), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-19-1-55.
* Graffitti et al. (2018a) F. Graffitti, J. Kelly-Massicotte, A. Fedrizzi, and A. M. Brańczyk, Physical Review A 98, 053811 (2018a).
* Brańczyk et al. (2010) A. M. Brańczyk, T. C. Ralph, W. Helwig, and C. Silberhorn, New Journal of Physics 12, 063001 (2010), URL https://doi.org/10.1088%2F1367-2630%2F12%2F6%2F063001.
* Meyer-Scott et al. (2017) E. Meyer-Scott, N. Montaut, J. Tiedau, L. Sansoni, H. Herrmann, T. J. Bartley, and C. Silberhorn, Phys. Rev. A 95, 061803 (2017), URL https://link.aps.org/doi/10.1103/PhysRevA.95.061803.
* Dixon et al. (2013) P. B. Dixon, J. H. Shapiro, and F. N. C. Wong, Opt. Express 21, 5879 (2013), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-21-5-5879.
* Chen et al. (2017) C. Chen, C. Bo, M. Y. Niu, F. Xu, Z. Zhang, J. H. Shapiro, and F. N. C. Wong, Opt. Express 25, 7300 (2017), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-25-7-7300.
* Chen et al. (2019) C. Chen, J. E. Heyes, K.-H. Hong, M. Y. Niu, A. E. Lita, T. Gerrits, S. W. Nam, J. H. Shapiro, and F. N. C. Wong, Opt. Express 27, 11626 (2019), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-27-8-11626.
* Cui et al. (2019) C. Cui, R. Arian, S. Guha, N. Peyghambarian, Q. Zhuang, and Z. Zhang, Phys. Rev. Applied 12, 034059 (2019), URL https://link.aps.org/doi/10.1103/PhysRevApplied.12.034059.
* Dosseva et al. (2016) A. Dosseva, L. Cincio, and A. M. Brańczyk, Phys. Rev. A 93, 013801 (2016), URL https://link.aps.org/doi/10.1103/PhysRevA.93.013801.
* Graffitti et al. (2018b) F. Graffitti, P. Barrow, M. Proietti, D. Kundys, and A. Fedrizzi, Optica 5, 514 (2018b).
* Tambasco et al. (2016) J.-L. Tambasco, A. Boes, L. G. Helt, M. J. Steel, and A. Mitchell, Opt. Express 24, 19616 (2016), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-24-17-19616.
* Graffitti et al. (2017) F. Graffitti, D. Kundys, D. T. Reid, A. M. Brańczyk, and A. Fedrizzi, Quantum Science and Technology 2, 035001 (2017).
* Graffitti et al. (2020a) F. Graffitti, P. Barrow, A. Pickston, A. M. Brańczyk, and A. Fedrizzi, Phys. Rev. Lett. 124, 053603 (2020a), URL https://link.aps.org/doi/10.1103/PhysRevLett.124.053603.
* Graffitti et al. (2020b) F. Graffitti, V. D’Ambrosio, M. Proietti, J. Ho, B. Piccirillo, C. de Lisio, L. Marrucci, and A. Fedrizzi, arXiv preprint arXiv:2006.01845 (2020b).
* Fejer et al. (1992) M. M. Fejer, G. A. Magel, D. H. Jundt, and R. L. Byer, IEEE Journal of Quantum Electronics 28, 2631 (1992).
* Quesada and Brańczyk (2018) N. Quesada and A. M. Brańczyk, Phys. Rev. A 98, 043813 (2018), URL https://link.aps.org/doi/10.1103/PhysRevA.98.043813.
* Boyd (2008) R. W. Boyd, _Nonlinear Optics, Third Edition_ (Academic Press, Inc., USA, 2008), 3rd ed., ISBN 0123694701.
* Avenhaus et al. (2009) M. Avenhaus, A. Eckstein, P. J. Mosley, and C. Silberhorn, Opt. Lett. 34, 2873 (2009), URL http://ol.osa.org/abstract.cfm?URI=ol-34-18-2873.
* Bennink (2010) R. S. Bennink, Physical review. A, Atomic, molecular, and optical physics 81 (2010), ISSN 1094-1622.
* Grice et al. (2011) W. P. Grice, R. S. Bennink, D. S. Goodman, and A. T. Ryan, Phys. Rev. A 83, 023810 (2011), URL https://link.aps.org/doi/10.1103/PhysRevA.83.023810.
* Dixon et al. (2014) P. B. Dixon, D. Rosenberg, V. Stelmakh, M. E. Grein, R. S. Bennink, E. A. Dauler, A. J. Kerman, R. J. Molnar, and F. N. C. Wong, Phys. Rev. A 90, 043804 (2014), URL https://link.aps.org/doi/10.1103/PhysRevA.90.043804.
* Jin et al. (2015) R.-B. Jin, M. Fujiwara, T. Yamashita, S. Miki, H. Terai, Z. Wang, K. Wakui, R. Shimizu, and M. Sasaki, Optics Communications 336, 47 (2015), ISSN 0030-4018, URL http://www.sciencedirect.com/science/article/pii/S0030401814008803.
* Jizan et al. (2016) I. Jizan, B. Bell, L. G. Helt, A. C. Bedoya, C. Xiong, and B. J. Eggleton, Opt. Lett. 41, 4803 (2016), URL http://ol.osa.org/abstract.cfm?URI=ol-41-20-4803.
* Law et al. (2000) C. K. Law, I. A. Walmsley, and J. H. Eberly, Phys. Rev. Lett. 84, 5304 (2000), URL https://link.aps.org/doi/10.1103/PhysRevLett.84.5304.
* Zhong et al. (2020) H.-S. Zhong, H. Wang, Y.-H. Deng, M.-C. Chen, L.-C. Peng, Y.-H. Luo, J. Qin, D. Wu, X. Ding, Y. Hu, et al., Science 370, 1460 (2020), ISSN 0036-8075, URL https://science.sciencemag.org/content/370/6523/1460.
* Paesani et al. (2020) S. Paesani, M. Borghi, S. Signorini, A. Maïnos, L. Pavesi, and A. Laing, Nature communications 11, 1 (2020).
* Bruno et al. (2014) N. Bruno, A. Martin, T. Guerreiro, B. Sanguinetti, and R. T. Thew, Opt. Express 22, 17246 (2014), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-22-14-17246.
* Guerreiro et al. (2013) T. Guerreiro, A. Martin, B. Sanguinetti, N. Bruno, H. Zbinden, and R. T. Thew, Opt. Express 21, 27641 (2013), URL http://www.opticsexpress.org/abstract.cfm?URI=oe-21-23-27641.
## Supplementary Material
For our crystal analysis, we measured the power dependence of the interference
visibility and extracted linear fits, allowing us to obtain a lower bound on
the purity of each crystal and filter combination investigated. The full
results, including measurements with our previous-generation aKTP crystals,
are shown in Figure S1. We used the same optical setup for each analysis.
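A sketch of this extraction (with illustrative numbers only; the measured
data are those of Figure S1) could read:

```python
import numpy as np

# illustrative visibilities at several pump powers (mW), not measured values
power_mw = np.array([20.0, 40.0, 60.0, 80.0, 100.0])
visibility = np.array([0.970, 0.962, 0.955, 0.947, 0.940])

# Multi-pair emission degrades the visibility roughly linearly with pump
# power, so extrapolating a linear fit to zero power bounds the purity.
slope, intercept = np.polyfit(power_mw, visibility, 1)
print(f"zero-power visibility (purity lower bound): {intercept:.3f}")
```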
We also determined the interference visibilities for signal-idler
interference. As mentioned in the main text, being able to interchangeably
interfere daughter photons from PDC offers some additional capabilities when
it comes to building multi-photon states. Under the same conditions as for the
idler-idler interference, we obtain a lower-bound purity of $97.0\pm 0.1\%$, a
$1\%$ decrease.
Figure S1: Visibility power dependence. At each pump-power setting, we
measured five two-photon interferograms to obtain the average visibility,
shown here as solid data points. Error bars are taken as one standard
deviation of these measurements. We then infer a lower bound on the spectral
purity for each combination of crystal and filtering condition by fitting a
linear function (dashed lines) to extract the visibility at zero pump power.
Our efforts to understand why we witnessed lower purities in our experimental
JSI analysis led to simulations of how the pulse duration affects photon
purity, the results of which are shown in Figure S2. Maximum purities are
achieved when the widths of the PEF and PMF are matched. From the JSI
reconstruction results, the elongation along the diagonal could have been
caused by instability of our pulsed laser source, a reasonable argument as
scans were run for hours at a time. Any drift of the pulse duration away from
the ideal leads to a reduction in purity.
Figure S2: Theoretical simulations of photon purity as a function of pulse
duration. A non-ideal pulse duration affects photon purity, as the bandwidths
of the PEF and PMF are not matched for pulse durations other than 1.3 ps.
(a), (b) and (c) depict the $\sqrt{\text{JSI}}$ for a range of pulse
durations. Shorter pulse durations contribute towards spectral correlations
along the diagonal, something visible in our reconstructed
$\sqrt{\text{JSI}}$s. The red dashed lines represent the width of the PEF
corresponding to the pulse duration under investigation. (d) The effect of
non-ideal pulse durations on photon purity, analysing the range of purities
derived from the $\sqrt{\text{JSI}}$ and the JSI as a function of pump pulse
duration.
|
# Accumulation of chiral hinge modes and its interplay with Weyl physics in a
three-dimensional periodically driven lattice system
Biao Huang<EMAIL_ADDRESS>Max Planck Institute for the Physics of
Complex Systems, Nöthnitzer Straße 38, 01069 Dresden, Germany Viktor
Novičenko<EMAIL_ADDRESS>Institute of Theoretical Physics and
Astronomy, Vilnius University, Saulėtekio 3, LT-10257 Vilnius, Lithuania
André Eckardt<EMAIL_ADDRESS>Max Planck Institute for the Physics of
Complex Systems, Nöthnitzer Straße 38, 01069 Dresden, Germany Institut für
Theoretische Physik, Technische Universität Berlin, Hardenbergstraße 36, 10623
Berlin, Germany Gediminas Juzeliūnas<EMAIL_ADDRESS>Institute of Theoretical Physics and Astronomy, Vilnius University, Saulėtekio
3, LT-10257 Vilnius, Lithuania
###### Abstract
We demonstrate that a three-dimensional time-periodically driven lattice
system can exhibit a second-order chiral skin effect and describe its
interplay with Weyl physics. This Floquet skin effect manifests itself when
considering open rather than periodic boundary conditions for the system. Then
an extensive number of bulk modes is transformed into chiral modes that are
bound to the hinges (being second-order boundaries) of our system, while other
bulk modes form Fermi arc surface states connecting a pair of Weyl points. At
a fine-tuned point, all boundary states eventually become hinge modes and the
Weyl points disappear. The accumulation of an extensive number of modes at the
hinges of the system resembles the non-Hermitian skin effect, with one
noticeable difference being the localization of the Floquet hinge modes at
increasing distances from the hinges in our system. We intuitively explain the
emergence of hinge modes in terms of repeated backreflections between two
hinge-sharing faces and relate their chiral transport properties to chiral
Goos-Hänchen-like shifts associated with these reflections. Moreover, we
formulate a topological theory of the second-order Floquet skin effect based
on the quasi-energy winding around the Floquet-Brillouin zone for the family
of hinge states. The implementation of a model featuring both the second-order
Floquet skin effect and the Weyl physics is straightforward with ultracold
atoms in optical superlattices.
## I Introduction
In recent years, researchers have demonstrated that time-periodically driven
systems can show intriguing and unique effects that find no counterparts in
non-driven systems. Examples include anomalous Floquet topological insulators
featuring robust chiral edge modes for vanishing Chern numbers Kitagawa _et
al._ (2010); Rudner _et al._ (2013); Roy and Harper (2017a); Yao _et al._
(2017a); Mukherjee _et al._ (2017a); Peng _et al._ (2016); Maczewsky _et
al._ (2016); von Keyserlingk and Sondhi (2016); Else and Nayak (2016);
Potirniche _et al._ (2017); Wintersperger _et al._ (2020) and discrete time
crystals Sacha (2015); Khemani _et al._ (2016); Else _et al._ (2016); Yao
_et al._ (2017b); Zhang _et al._ (2017); Choi _et al._ (2017); Rovny _et
al._ (2018); Ho _et al._ (2017); Huang _et al._ (2018); Sacha and Zakrzewski
(2017); Yao and Nayak (2018); Sacha (2020). The periodic driving shifts the
fundamental theoretical framework from focusing on Hamiltonian eigenproblems
to the unitary evolution operators genuinely depending on time, resulting in a
plethora of new concepts and methods such as spacetime winding numbers Rudner
_et al._ (2013) and spectral pairing von Keyserlingk _et al._ (2016). In this
context, it appears natural and tantalizing to explore possible new classes of
periodically driven systems that go beyond descriptions by traditional
theories.
In this paper, we show that time-periodic driving of a three dimensional (3D)
lattice can give rise to the coexistence of Weyl physics Armitage _et al._
(2018); Anderson _et al._ (2012, 2013); Dubček _et al._ (2015); Sun _et
al._ (2018); Higashikawa _et al._ (2019); Lu _et al._ (2020); Wang _et al._
; Zhu _et al._ with a new type of hinge (i.e. second-order boundary) states.
These states are chiral in the sense that they transport particles in a
unidirectional fashion along the hinge. As an intriguing effect, we find a
macroscopic accumulation of these chiral Floquet hinge states, which is
associated with a complete reorganization of the system’s quasienergy spectrum
in response to shifting from periodic to open boundary conditions. At a fine-
tuned point, even all states of the system become hinge states. Tuning away
from that point a pair of Weyl points is created at quasienergy $\pi$, leading
to Fermi-arc surface (i.e. first-order boundary) states that coexist with the
hinge modes. The localization of an extensive number of hinge modes at the
boundaries of the system resembles the non-Hermitian skin effect Yao and Wang
(2018); Bergholtz _et al._ (2020); Ashida _et al._ (2020); Kawabata _et
al._ (2020), with a notable difference being the localization of the modes at
increasing distances from hinges (higher-order boundaries) in such a way that
the hinge modes cover the whole lattice. Different from the case of higher-
order topological phases Benalcazar _et al._ (2017a, b); Khalaf (2018), the
hinge states are buried deeply inside the bulk spectrum. Their existence and
robustness is, therefore, not captured by the theory of higher-order
topological insulators/semimetals which rely on the existence of bulk energy
gaps. Instead, the chiral hinge modes can be understood as resulting from the
repeated backreflection from two hinge-sharing surface planes and their chiral
motion as the result of chiral Goos-Hänchen-like shift associated with the
reflection at a boundary face. Furthermore, different from one-dimensional
(1D) periodically driven lattices Budich _et al._ (2017), the modes with
opposite chirality residing at opposite hinges are well spatially separated,
so the scattering due to a local perturbation, such as a local disorder, does
not affect the transport chirality at individual hinges.
The model system proposed and studied here consists of a simple, stepwise
modulation of tunnelling matrix elements involving six steps. It generalizes
to three dimensions a two-dimensional lattice model introduced by Rudner et.
al. Rudner _et al._ (2013) for studying anomalous Floquet topolgical
insulators (see also Ref. Kitagawa _et al._ (2010)). The latter 2D tunnel
modulation has been recently applied to ultracold atoms for the realization of
anomalous topological band structures Wintersperger _et al._ (2020). Our 3D
model can equally be implemented with ultracold atoms using such a stepwise
tunnel modulation, now in 3D optical superlattices. Furthermore, besides
giving rise to a new phenomenon, the unconventional chiral second-order
Floquet skin effect, the model proposed here also provides a simple recipe for
the robust implementation of Weyl physics by means of time-periodic driving,
which should be easier to realize compared to previous proposals Higashikawa
_et al._ (2019).
This paper is structured as follows. In the next Section II a 3D periodically
driven lattice is defined. Subsequently the characteristic features of the
bulk and hinge physics are considered in Secs. III and IV. In particular, in
Sec IV.3 a topological theory based on the quasi-energy winding around the
frequency-Brillouin zone is formulated for the family of localized hinge
states enforced by reflections from open-boundaries. The experimental
implementation of our model, using ultracold atoms in modulated superlattices,
is discussed in Sec. V, before the Concluding Section VI. Some technical
details are presented in three Appendices A, B and C.
(a) Driving
(b) Bulk dynamics
(c) Reflection by open boundary surfaces at $x=1$ and $y=1$ accompanied by
sublattice changes $B\rightarrow A$ and $A\rightarrow B$, respectively.
(d) Hinge dynamics
Figure 1: (a) Bonds connected during the driving steps 1 to 6. (b) Bulk
trajectories within a Floquet cycle at the fine-tuned point $\phi=\pi/2$.
Depending on the starting sublattice, particles will travel in opposite
directions along the cubic diagonal $\mathbf{d}=(1,-1,1)$. (c) Trajectories at
the same $\phi=\pi/2$ but with a surface termination (open boundary).
Particles starting from sublattice $A$ (or $B$) near the $y=1$ (or $x=1$)
surface would have their dynamics impeded by the open boundary at a certain
driving step. That results in a switch of sublattice after a Floquet cycle,
and therefore the direction of the trajectory is reversed after the reflection
from the surface. (d) The hinge formed by two intersecting terminating
surfaces renders uni-directional modes. The figure shows two of the modes
closest to the hinge starting at a site of $B$ (lower plot) or $A$ (upper
plot) sublattice directly at the hinge. Each arrow color denotes one
Floquet cycle.
## II Model
We consider a bipartite cubic lattice with alternating A-B sublattices in all
three Cartesian directions. The lattice is described by a time-periodic
Hamiltonian $H(t+T)=H(t)$, with the driving period $T$ divided into 6 steps.
In each step tunnelling matrix elements $-J$ between sites $\mathbf{r}_{A}$ of
sublattice $A$ and neighboring sites $\mathbf{r}_{A}\pm a\mathbf{e}_{\mu}$ of
sublattice $B$ are switched on, with $\mu=x,y,z$. During the driving period
$T$ the tunneling steps appear in a sequence
$\mu\pm=x+,\,y+,\,z+,\,x-,\,y-,\,z-$, as illustrated in Fig. 1(a). (Without
including the tunneling along the $z$ direction, described by the third and
the sixth driving steps, the dynamics reduces to a 2D motion in a periodically
driven square lattice, as considered in Refs. Rudner _et al._ (2013); Mukherjee
_et al._ (2017a); Wintersperger _et al._ (2020).) Within each step the
evolution is determined by a coordinate-space Hamiltonian
$H_{\pm\mu}=-J\sum_{\bm{r}_{A}}(|\bm{r}_{A}\rangle\langle\bm{r}_{A}\pm
a\bm{e}_{\mu}|+|\bm{r}_{A}\pm a\bm{e}_{\mu}\rangle\langle\bm{r}_{A}|)$, where
$J$ is the tunneling matrix element, $\bm{r}_{A}$ specifies the location of
sublattices $A$, and $a$ is the lattice spacing such that $\bm{r}_{A}\pm
a\bm{e}_{\mu}$ denotes the locations of sites in sublattice $B$ neighboring to
the sublattice $A$ site $\bm{r}_{A}$. The tunnelling processes occurring in
each of the driving steps are characterized by a single dimensionless
parameter, the phase
$\phi=-\frac{JT}{6\hbar}.$ (1)
The one-cycle evolution operator (or Floquet operator),
$U_{F}={\cal T}e^{-(i/\hbar)\int_{0}^{T}dtH(t)},$ (2)
whose repeated application describes the time-evolution in stroboscopic steps
of the driving period $T$, can be decomposed into terms corresponding to the
six driving stages,
$U_{F}=U_{z-}U_{y-}U_{x-}U_{z+}U_{y+}U_{x+}.$ (3)
When dealing with the bulk dynamics we impose periodic boundary conditions in
all three spatial directions. The evolution operators for the individual
driving steps can then be represented as:
$U_{\mu\pm}=U\left(\pm
k_{\mu}\right)=e^{-\frac{i}{6}H_{\mu\pm}}=e^{-i\phi(\tau_{1}\cos
k_{\mu}\pm\tau_{2}\sin k_{\mu})}\,,$ (4)
where $\tau_{1,2,3}$ are Pauli matrices associated with the sublattice states
$A$ and $B$ and where $k_{\mu}$ with $\mu=x,y,z$ denotes the Cartesian
components of the quasimomentum vector $\mathbf{k}$. Here and in the
following, we will use a dimensionless description, where time, energy, length
and quasimomentum are given in units of $T,\hbar/T$, $a$, and $\hbar/a$,
respectively. The quasienergies $E_{n,\bm{k}}$ and the Floquet modes
$|u_{n,\bm{k}}\rangle$ are defined via the eigenvalue equation
$U_{F}|u_{n,\bm{k}}\rangle=\exp(-iE_{n,\bm{k}})|u_{n,\bm{k}}\rangle.$ (5)
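The bulk spectrum is easily reproduced numerically. The following sketch of
Eqs. (3)-(5) is illustrative code (the helper names are ours, and scipy is
assumed for the matrix exponential), not the implementation used for the
figures:

```python
import numpy as np
from scipy.linalg import expm

tau1 = np.array([[0, 1], [1, 0]], dtype=complex)
tau2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

def step(phi, k_mu, sign):
    """Single-step evolution operator, Eq. (4)."""
    return expm(-1j * phi * (np.cos(k_mu) * tau1 + sign * np.sin(k_mu) * tau2))

def floquet_operator(phi, k):
    """One-cycle operator U_F = U_{z-}U_{y-}U_{x-}U_{z+}U_{y+}U_{x+}, Eq. (3)."""
    kx, ky, kz = k
    U = np.eye(2, dtype=complex)
    # apply the six steps in chronological order x+, y+, z+, x-, y-, z-
    for k_mu, sign in [(kx, 1), (ky, 1), (kz, 1), (kx, -1), (ky, -1), (kz, -1)]:
        U = step(phi, k_mu, sign) @ U
    return U

def quasienergies(phi, k):
    """E_{n,k} defined by U_F |u> = exp(-i E) |u>, Eq. (5)."""
    return np.sort(-np.angle(np.linalg.eigvals(floquet_operator(phi, k))))

# gapless point at k = (pi/2, pi/2, pi/2) for any phi (U_F = 1 there)
print(quasienergies(np.pi / 3, (np.pi / 2, np.pi / 2, np.pi / 2)))
# particle-hole-symmetric pair E_1 = -E_2 at a generic k
print(quasienergies(np.pi / 3, (0.3, 0.7, -0.2)))
```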
We first note that the only global symmetry satisfied by the Floquet operator
(3)-(4) is a particle-hole flip $\Gamma=CK$, where $Ki=-iK$ is complex
conjugation and $C=\tau_{3}$ the third Pauli matrix. Thus, the system belongs
to class D in Altland-Zirnbauer notation Altland and Zirnbauer (1997); Chiu
_et al._ (2016). The Floquet operator satisfies
$CU_{F}(\bm{k})C^{-1}=U_{F}^{*}(-\bm{k})$ Roy and Harper (2017b); Yao _et
al._ (2017a), and therefore the quasienergies must appear in pairs
$E_{1,\bm{k}}=-E_{2,-\bm{k}}$. Meanwhile, the system obeys the inversion
symmetry $PU_{F}(\bm{k})P^{-1}=U_{F}(-\bm{k})$, with $P=\tau_{1}$, enforcing
that for each band one has $E_{n,\bm{k}}=E_{n,-\bm{k}}$. Together, we see that
the Floquet spectrum has pairs of states with quasi-energies
$E_{1,\bm{k}}=-E_{2,\bm{k}}$. This means possible gaps or nodal points/lines
can, modulo $2\pi$, appear only at quasienergy $0$ or $\pi$.
At $k_{x}=k_{y}=k_{z}=\pm\frac{\pi}{2}\,(\text{mod }\pi)$ Eqs. (3) and (4)
yield $U_{F}=1$, so that $E_{n,\bm{k}}=0$. Therefore the quasienergy spectrum
is always gapless at quasienergy $0$ (modulo $2\pi$), regardless of the
driving strength $\phi$. Thus a single band spectrum could only possibly open
up a gap at quasienergy equal to $\pi$ (modulo $2\pi$). To draw a complete
phase diagram, we first note that flipping the sign of $\phi$ amounts to a
particle-hole transformation $U_{F}|_{-\phi}=CU_{F}|_{\phi}C^{-1}$, and from
the previous analysis we see that such a flip does not change the spectrum.
Furthermore, from
$e^{-i\phi\hat{n}\cdot\bm{\tau}}=\cos\phi-i\hat{n}\cdot\bm{\tau}\sin\phi$, the
periodicity of the Floquet operator with respect to the parameter $\phi$ is
clearly seen: $U_{F}|_{\phi}=U_{F}|_{\pi+\phi}$. In this way, the irreducible
parameter range is $\phi\in[0,\pi/2]$ as illustrated in Fig. 2.
$\phi$ | (1) PBC $x,y,z$ | (2) PBC $z$, OBC $x,y$ | (3) OBC $x,y,z$
---|---|---|---
$\frac{\pi}{2}$ | [spectrum] | [spectrum] + 4 eigenstates | 2 eigenstates
$\frac{\pi}{3}$ | [spectrum] | [spectrum] + 3 eigenstates | 2 eigenstates
$\frac{\pi}{8}$ | [spectrum] | [spectrum] + 1 eigenstate | 1 eigenstate
Figure 2: Table showing energy spectra and Floquet modes. The phases of the
system are indicated on the vertical axis on the left. The table’s rows refer
to the driving parameters $\phi=\pi/8,\pi/3,\pi/2$, corresponding to the
metallic phase, the Weyl semimetal/hinge phase, and the fine-tuned point,
respectively. The columns represent periodic/open boundary conditions
(PBC/OBC) along the specified directions. The system sizes are
$L_{x}=L_{y}=40$ in column (1), 20 in (2), and 16 in (3). Columns (1) and (2)
contain energy spectra projected onto $k_{z}$. Although the spectra for fully
periodic and for open boundary conditions in the $x$ and $y$ directions are
almost identical in the metallic phase ($\phi=\pi/8$), they are very different
in the WSM/hinge phase ($\phi=\pi/3$) and at fine tuning ($\phi=\pi/2$). The
difference is a result of the formation of chiral hinge modes for open
boundary conditions in the $x$ and $y$ directions (column 2). This can be
inferred also from the dot size, reflecting the inverse participation ratio
with respect to the site basis as a measure of localization, as well as from
the color code indicating the mean distance to two opposite hinges,
interpolating from blue for one hinge over black near the center of the system
to red for the other hinge. The green dots mark the Weyl points. We also plot
the real-space densities of various Floquet modes for open boundary conditions
along $x$ and $y$ (column 2) or all (column 3) directions.
## III Phase diagram for periodic boundary conditions
[Brillouin-zone plots of the Weyl-point positions for $\phi=\pi/2,\,\pi/3,\,\pi/6,\,\pi/8$.]
(a) Positions of Weyl points.
(b) $\phi=\pi/3$, PBC $x,y,z$
(c) $\phi=\pi/3$, PBC $x,y$
(d) $\phi=\pi/6$, PBC $x,y,z$
Figure 3: (a) The different phases can be distinguished by the presence or
absence of Weyl points at quasienergy $\pi$. For $\phi=\pi/8$, in the metallic
phase ($\phi<\pi/6$), no Weyl points are present. At the transition,
$\phi=\pi/6$, the band touching point appears at $\bm{k}=\bm{0}$ (green
circle). For $\pi/6<\phi<\pi/2$ the band touching splits into two Weyl points
of opposite charge (green circles with $\pm$ sign) that separate along the
diagonal $k_{x}=-k_{y}=k_{z}$, as shown for $\phi=\pi/3$. At fine tuning
$\phi=\pi/2$ the dispersion becomes flat in two directions and the Weyl points
disappear to reappear for $\phi>\pi/2$ with reversed charges (signs in green
dots correspond to the limit $\phi=\pi/2-0)$. (b-d) Quasienergy spectra versus
$k_{x},k_{y}$ for periodic boundary conditions in $x$ and $y$ and either
periodic (b,d) or open (c) boundary conditions in $z$ direction. A surface
Fermi arc can be observed for $\phi=\pi/3$ by comparing the spectra with
periodic (b) and open (c) boundary conditions the $z$ direction. The orange
surface denotes the contour formed by quasi-energies closest to $E=\pi$ at
each $(k_{x},k_{y})$. For $\phi=\pi/6$ (d) a pair of Weyl points reduces to a
single band touching point at the quasi-energy $E=\pi$.
(a) Fine tuned point
(b) Slight deviation
Figure 4: (a) Anisotropic 1D-like dispersion along the coordinate
$k_{x}-k_{y}+k_{z}$ at the fine-tuned point $\phi=\pi/2$. (b) For a small
detuning, $\phi=\pi/2-0.1$, the dispersion is no longer completely flat in
the other two directions, and a pair of non-equivalent Weyl points is formed along
the diagonal at $\mathbf{k}=\pm k_{0}(1,-1,1)$. Yet the dispersion remains
highly anisotropic.
Before investigating the system with open boundary conditions and the
emergence of chiral hinge modes, let us first discuss the phase diagram
considering the case of a translation invariant system with periodic boundary
conditions. Let us begin with the topologically trivial high-frequency (weak
driving) limit corresponding to $\phi\ll 1$. In that case one can retain only
the lowest order terms in $\phi$ when expanding the stroboscopic evolution
operator $U_{F}$ of Eq. (3), resulting in $U_{F}|_{\phi\rightarrow
0}\simeq\cos^{6}\phi\tau_{0}-2i\sin\phi\cos\phi(\cos k_{x}+\cos k_{y}+\cos
k_{z})\tau_{1}+O(\sin^{2}\phi)\simeq e^{-i\phi 2(\cos k_{x}+\cos k_{y}+\cos
k_{z})\tau_{1}}$, where $\tau_{0}$ is the identity matrix. The spectrum $\pm
2\phi\sum_{\mu=x,y,z}\cos k_{\mu}$ corresponds to that of a static simple
cubic lattice artificially described by 2 sublattices, where the bands are
folded as the Brillouin zone size is halved. While the system remains gapless
at quasienergy $0$ for arbitrary $\phi$, a characteristic feature of the high-
frequency (weak driving) regime is a finite energy gap at quasienergy $\pi$,
resulting from the fact that the band width is proportional to $\phi$ and thus
is small compared to the dimensionless driving energy $\hbar\omega=2\pi$. This
behaviour can be observed in the spectrum for $\phi=\pi/8$ shown in Fig. 2 at
the bottom of column (1).
Increasing the driving strength $\phi$, the band width grows relative to
$2\pi$. When $\phi=\pi/6$ the single Floquet band touches itself at
quasienergy $\pi$ and quasimomentum $\mathbf{k}=0$ (see Fig. 3 (d)), so that
the quasienergy spectrum becomes gapless. We refer to this as Floquet
autogamy. Going to the regime $\phi\in(\pi/6,\pi/2)$, the band touching point
transforms into a pair of Weyl points forming at quasienergy $\pi$ with
topological charges $\pm 1$, as shown in Fig. 3 (a). They are located at the
quasimomenta $\mathbf{k}=\pm k_{0}\mathbf{d}$ along the diagonal vector
$\mathbf{d}=(1,-1,1),$ (6)
with
$k_{0}=\left(1/2\right)\arccos\left[\left(1/2-\sin^{2}\phi\right)/\sin^{2}\phi\right]$
modulo $\pi$ (see Appendix A), so that $k_{0}\to\pi/3$ as $\phi\to\pi/2$. We
observe the emergence of surface Fermi arc states connecting the Weyl points,
when comparing the spectrum with full periodic boundary conditions to that
with open boundary conditions along z-direction, as illustrated in Fig. 3 (b)
and (c) respectively.
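The Weyl-point formula can be checked numerically; reusing `floquet_operator`
from the sketch in Sec. II (illustrative code), one finds that both
eigenvalues of $U_{F}$ at $\bm{k}=k_{0}\mathbf{d}$ approach $-1$, i.e. a
twofold degeneracy at quasienergy $\pi$:

```python
import numpy as np

# reusing floquet_operator(phi, k) from the sketch in Sec. II
phi = np.pi / 3
k0 = 0.5 * np.arccos((0.5 - np.sin(phi) ** 2) / np.sin(phi) ** 2)  # ~0.955
d = np.array([1.0, -1.0, 1.0])

print(np.linalg.eigvals(floquet_operator(phi, k0 * d)))  # both ~ -1
```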
As the driving strength approaches the fine-tuned point,
$\phi=\pi/2-\varepsilon$ with $\varepsilon\ll 1$, the Weyl dispersion acquires
a highly anisotropic form shown in Fig. 4 (b). The dispersion remains steep
along the diagonal coordinate $\bm{k}\cdot\mathbf{d}=k_{x}-k_{y}+k_{z}$, but
becomes increasingly flat in the other two directions. Exactly at fine tuning,
$\phi=\pi/2$, the constituent evolution operators (4) reduce to
$U_{\mu\pm}\left(\mathbf{k}\right)=-i(\cos k_{\mu}\tau_{1}\pm\sin
k_{\mu}\tau_{2})$, and the Floquet stroboscopic operator takes the form
$U_{F}=-e^{-i2\tau_{3}\bm{k}\cdot\mathbf{d}}$. This provides the quasi-
energies $E_{\bm{k},\pm}=\pm 2\bm{k}\cdot\mathbf{d}+(2m+1)\pi$, where
$m\in\mathbb{Z}$ labels the Floquet bands, and where the upper and lower
branches labeled by $\pm$ now directly correspond to sublattices A and B, i.e.
$\pm\rightarrow\tau_{3}$. In that case the Weyl points disappear and the
dispersion $E_{\bm{k},\pm}$ is completely flat for the momentum plane normal
to $\mathbf{d}$, as one can see in Fig. 4 (a). Hence a particle can only
propagate along the diagonal $\mathbf{d}$ with a dimensionless velocity
$\bm{v_{\pm}}=\pm 2\bm{d}$, depending on whether the particle occupies a site
on the sublattice $A$ or $B$ at the beginning of a driving cycle. It is
noteworthy that for fine-tuned driving, the effective Floquet Hamiltonian,
$H_{F}\equiv-i\ln
U_{F}=2\tau_{3}\mathbf{d}\cdot\mathbf{k}+\pi\quad\text{for}\quad\phi=\pi/2,$
(7)
is periodic in the momentum space only by taking into account the periodicity
in quasi-energies. Such an effective Hamiltonian does not have a static
counterpart, and can only be produced in periodically driven systems.
The fine-tuned dispersion can be understood by considering the dynamics in
real space. For $\phi=\pi/2$ in each step $\mu\pm$ the particle is fully
transferred from a sublattice A site positioned at $\mathbf{r}_{A}$ to a
neighboring site B situated at
$\mathbf{r}_{B}=\mathbf{r}_{A}\pm\mathbf{e}_{\mu}$ or vice versa. During the
six steps composing the driving period, the particle follows the trajectory
shown in Fig. 1 (b). Thus, after completing each period the particle located
on a site of sublattice $A$ ($B$) is transferred by $+2\mathbf{d}$
($-2\mathbf{d}$) to an equivalent site of the same sublattice, giving rise to
stroboscopic motion along the diagonal directions $\pm\mathbf{d}$ at the
velocity $\bm{v}_{\pm}$.
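Since every step at $\phi=\pi/2$ transfers the particle completely, this
stroboscopic motion can be emulated by a deterministic hop rule; the sketch
below (our illustrative code) reproduces the displacement of
$\pm 2\mathbf{d}$ per period:

```python
import numpy as np

STEPS = (('x', 1), ('y', 1), ('z', 1), ('x', -1), ('y', -1), ('z', -1))
AXIS = {'x': 0, 'y': 1, 'z': 2}

def bulk_cycle(r):
    """One driving period at phi = pi/2 (full transfer in every step):
    on step (mu, s), a particle on sublattice A ((-1)^(x+y+z) = +1)
    hops by +s e_mu, a particle on sublattice B by -s e_mu."""
    r = np.array(r, dtype=int)
    for mu, s in STEPS:
        r[AXIS[mu]] += s * (-1) ** r.sum()
    return r

rA = np.array([2, 2, 2])        # A site (x+y+z even)
rB = np.array([1, 2, 2])        # B site (x+y+z odd)
print(bulk_cycle(rA) - rA)      # -> [ 2 -2  2] = +2 d
print(bulk_cycle(rB) - rB)      # -> [-2  2 -2] = -2 d
```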
## IV Accumulation of chiral hinge modes for open boundary conditions
Let us now consider the case of open boundary conditions with the faces of the
boundary planes oriented along the directions $\pm\mu$, either with
$\mu=x,y,z$ (fully open boundary conditions) or with $\mu=x,y$, keeping
periodic boundary conditions along the $z$ direction in the latter case. As an
intriguing effect, we find that, when the system is subjected to open boundary
conditions with at least two properly chosen boundary planes, an extensive
number of chiral Floquet hinge modes is formed. This phenomenon is best
understood by considering the real-space propagation of the particle at the
fine-tuned parameter $\phi=\pi/2$ in stroboscopic steps of the driving period,
as it will be done next. For this purpose, we will label the lattice sites by
vectors ${\bm{r}}=(x,y,z)$, with coordinates $x,y,z$ taking integer values
between $1$ and $L_{x,y,z}$. The lattice sites are considered to belong to
sublattice $A$ (or $B$) if $s=(-1)^{x+y+z}$ equals $+1$ (or $-1$). Thus we
will use $s=1=A$ and $s=-1=B$ to label the sublattice.
### IV.1 Fine-tuned driving
Let us consider a hinge along the $z$ axis confined by the $-x$ and $-y$
surface planes. In that case the particle is restricted to the lattice sites
with $x\geq 1$ and $y\geq 1$. Starting from a site of, say, sublattice $B$, a
particle will propagate on this sublattice at constant velocity in steps of
$-2\bm{d}$ along the diagonal direction, until it reaches the $-x$ boundary face
positioned at $x=1$. At the boundary, tunnelling between the lattice sites $B$
and $A$ cannot occur during one of the six steps of the driving cycle. As a
result, the particle changes the sublattice and starts to propagate on the $A$
sublattice in opposite direction. The microscopic processes leading to such a
reflection are illustrated in Fig. 1 (c). The left plot shows the two possible
ways of how a change of sublattice can occur at the boundary face oriented in
the $-x$ direction. The two processes are distinguished by whether they start
on a $B$ lattice site directly at the boundary with $x=1$ (dark grey arrow) or
one site away from the boundary with $x=2$ (yellow arrow). A tunnelling event
is impeded in the first (dark grey) or the fifth (yellow) driving step,
respectively, as marked by small planes.
After changing the sublattice at the $-x$ surface, the particle travels in
reversed direction in steps of $2\bm{d}$ until it eventually reaches the $-y$
surface and is again backreflected, this time with the sublattice change
$A\rightarrow B$. The microscopic details of such a reflection at the $-y$
surface are depicted on the right hand side of Fig. 1 (c). In this way, the
particle will move back and forth between the $-x$ and the $-y$ surfaces
sharing the hinge. Such dynamics is illustrated in Figs. 5 and 6, showing
the path of a particle projected onto the $xy$-plane. Interestingly, within
this plane, the particle returns to the same transverse position $(x,y)$ only
after having travelled twice between both faces, as one can see in Figs. 5 and
6.
It is noteworthy that the change in the sublattice $B\rightarrow A$ or
$A\rightarrow B$ during the backward reflection from the corresponding hinge
planes is accompanied by a lateral Goos-Hänchen-like (GH) shift of the
particle’s trajectory. This is similar to changing the track of a train before
sending it backwards. Importantly, the back-reflected particle propagates
along a trajectory situated closer to the hinge or further away from it for
the $B\rightarrow A$ or $A\rightarrow B$ reflections, respectively. Because of
such chiral GH shifts, the particle visits a larger number of $B$ sites than
$A$ sites when travelling between the two surface planes. Since the ballistic
motion along the $B$ ($A$) sites is accompanied by a spatial shift in the $-z$
($z$) direction, one arrives at an overall steady advance in the $-z$ direction, i.e.
along the hinge, during the forward and backward motion of the particle
between the hinge-sharing surfaces oriented in $-x$ and $-y$ direction. More
precisely, as demonstrated in Appendix B, the particle traces out two slightly
mismatched “loops” shown in Figs. 5 and 6 that involve four reflections by the
hinge surfaces before it comes back to exactly the same point in the $xy$
plane, but shifted by $-2$ in $z$-direction at an equivalent (i.e. B-type)
lattice site. In this way the particle’s trajectory roughly covers a two-
dimensional strip along the z-direction, whose width is about twice its
distance from the hinge. Although such a picture applies to a particle
situated further away from the hinge, the advance in the $-z$ direction takes
place also for trajectories situated very close to the hinge where the
particle is reflected simultaneously from both hinge planes $x=1$ and $y=1$,
as illustrated in Fig. 1(d).
Figure 5: An example of the fine-tuned stroboscopic motion of a particle at the
lower-left hinge. The picture shows the projection of the particle’s
trajectory in the $xy$ plane. The sites of the $B$ and $A$ sublattices are
marked in blue and red, respectively. The particle is initially at the site of
the sublattice $B$ characterized by the coordinates $x=M+1$ and $y=1$, with
even $M=4$. This corresponds to the lower row ($y=1$) and the fifth column
($x=5$). Subsequently the particle undergoes the stroboscopic evolution
described by Eqs. (35)-(41) in Appendix B. Dashed lines with arrows show
stroboscopic reflections from the planes $x=1$ or $y=1$. Bulk ballistic
trajectories over one/two driving periods are indicated by thin/thick solid
arrows. The particle returns to its initial site after $2M+1=9$ periods.
Figure 6: Like in Fig. 5 the particle is initially at the site of the
sublattice $B$ with $x=M+1$ and $y=1$, but now with odd $M=5$. The particle
returns to the initial site after $2M+1=11$ periods.
Suppose the particle initially occupies a site of the sublattice $B$ with
transverse coordinates $y=1$ and $x=M+1$, so the particle is situated at the
hinge surface oriented in the $-y$ direction and is $M\geq 0$ sites away from
the $-x$ hinge surface. In that case, it takes $\left(2M+1\right)$ driving
periods for the particle to come back to the initial position
$\left(M+1,1\right)$ in the $xy$ plane, while having shifted by $-2$ lattice
units in $z$ direction, i.e.
$\left(U_{F}\right)^{2M+1}\left|B,M+1,1,z\right\rangle=-\left|B,M+1,1,z-2\right\rangle\,.$
(8)
After averaging over such a full reflection cycle, the particle travels with
an $M$-dependent mean velocity of $v_{M-}=-2/\left(2M+1\right)$ along the $z$
(hinge) axis (see Appendix B for more details). Here $|s,x,y,z\rangle$
describes a particle located at site $(x,y,z)$ belonging to sublattice
$s=B=-1$ or $s=A=1$ with $s=(-1)^{x+y+z}$.
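Because the fine-tuned dynamics is, up to phases, a deterministic permutation
of lattice sites, Eq. (8) can be checked with a classical hop simulation in
which a step whose partner site lies outside the lattice is simply skipped.
The sketch below (illustrative code, tracking only the position and not the
overall sign of Eq. (8)) confirms the statement for $M=4$:

```python
import numpy as np

STEPS = (('x', 1), ('y', 1), ('z', 1), ('x', -1), ('y', -1), ('z', -1))
AXIS = {'x': 0, 'y': 1, 'z': 2}

def cycle_with_walls(r, L=(16, 16, 16)):
    """One driving period at phi = pi/2 with open boundaries at
    1 <= x, y, z <= L: a step whose partner site lies outside the
    lattice is blocked and the particle stays put."""
    r = np.array(r, dtype=int)
    for mu, s in STEPS:
        a = AXIS[mu]
        target = r[a] + s * (-1) ** r.sum()
        if 1 <= target <= L[a]:
            r[a] = target
    return r

# Eq. (8): a B-sublattice particle at (x, y) = (M+1, 1) returns to the same
# transverse position after 2M+1 periods, shifted by -2 along the hinge (z).
M = 4
r = np.array([M + 1, 1, 9])     # x+y+z odd, i.e. a B site
for _ in range(2 * M + 1):
    r = cycle_with_walls(r)
print(r)                         # -> [5 1 7]
```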
Let us now consider periodic boundary conditions in $z$ direction with
$L_{z}=2N_{z}$, i.e.
$\left|s,x,y,z+2N_{z}\right\rangle=\left|s,x,y,z\right\rangle$, while keeping
open boundary conditions in the $x$ and $y$ directions. It is then convenient to
introduce mixed basis states
$\left|{s,x,y,k_{z}}\right\rangle^{\prime}=\frac{1}{\sqrt{N_{z}}}\sum_{z}e^{ik_{z}z}\left|s,x,y,z\right\rangle.$
(9)
They are characterized by quasimomenta $k_{z}$, which are defined modulo $\pi$
corresponding to the lattice period of 2 when moving in $z$ direction, and
obey
$\left(U_{F}\right)^{2M+1}\left|{B,M+1,1,k_{z}}\right\rangle^{\prime}=-\left|{B,M+1,1,k_{z}}\right\rangle^{\prime}e^{2ik_{z}}\,.$
(10)
In a similar manner, the system returns to any state of the stroboscopic
sequence
$\left|M,k_{z},p\right\rangle=\left(U_{F}\right)^{p}\left|{B,M+1,1,k_{z}}\right\rangle^{\prime}$
(11)
after $2M+1$ driving periods:
$\left(U_{F}\right)^{2M+1}\left|M,k_{z},p\right\rangle=-\left|M,k_{z},p\right\rangle
e^{2ik_{z}}\,,$ (12)
with $p=0,1,\ldots 2M$. By superimposing the subharmonic hinge states
$\left|M,k_{z},p\right\rangle$, we can now construct Floquet hinge states:
$\overline{\left|M,k_{z},q\right\rangle}=\frac{1}{\sqrt{2M+1}}\sum_{p=0}^{2M}\left|M,k_{z},p\right\rangle\exp\left(\mathrm{i}\frac{2\pi
q-2k_{z}+\pi}{2M+1}p\right)\,,$ (13)
where the index $q=0,1,\ldots,2M$ labels these modes. The corresponding
quasienergies are given by
$E_{M,k_{z},q}=\left(\frac{2\pi q-2k_{z}+\pi}{2M+1}\right)\textrm{ mod
}2\pi\,.$ (14)
An analogous dispersion but with an opposite slope (opposite sign) is obtained
for the states formed at the opposite hinge confined by the planes at $x=y=L$
facing the $+x$ and $+y$ directions. The dispersion at both hinges reproduces
the spectrum for the beam geometry shown in the row 1 and column (2) of Fig.
2. Such a spectrum of the system with open boundary conditions in $x$ and $y$
directions looks completely different from the one for full periodic boundary
conditions shown in column (1) of Fig. 2 or Fig. 4 (a), where all the modes
have the same positive or negative dispersion slope (group velocity)
$v_{z}=\pm 2$. In contrast, for the beam geometry [column (2) of Fig. 2] the
spectrum due to the chiral hinge modes is now organized in linear branches
given by Eq. (14) and the analogous dispersion with inverted sign for the
opposite hinge. Each branch is characterized by a different group velocity
$v_{M\pm}=\pm\frac{2}{2M+1}$ (15)
decreasing with the distance from the hinge $M$, where the lower and upper
sign in $\pm$ correspond to the states located around opposite hinges $x=y=1$
and $x=y=L$. The red / blue colors in Fig. 2 indicate the mean distance of
each mode from the two relevant hinges. The dark red (blue) mode associated
with $M=0$ is localized directly at the hinge $x=y=1$ ($x=y=L$) and propagates
at the largest velocity in negative (positive) $z$ direction. Modes with a
smaller slope have larger $M$ and thus are located further away from the
particular hinge, as indicated by the color. The real-space density of four
different hinge modes at $\phi=\pi/2$ is illustrated in the real-space plot
shown in row 1 and column (2) of Fig. 2. We can see that the modes located
further away from the hinge, having smaller chiral group velocities, are less
localized than the fastest modes located directly at the hinge. A measure for
the degree of localization of a mode $|\psi\rangle$ is the inverse
participation ratio $\text{IPR}=\sum_{j}|\langle j|\psi\rangle|^{4}$, with
real-space site states $|j\rangle$ and the Floquet eigenstates $|\psi\rangle$.
It is shown in the spectra of Fig. 2 via the dot size roughly indicating the
inverse of the number of sites a mode is spread over. Thus 1D-like modes
localized at the hinges have larger IPR than those that are spread over a
2D-like ribbon further away from the hinges.
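In code, this localization measure is a one-liner; the snippet below
(illustrative) shows the convention used here:

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio, IPR = sum_j |<j|psi>|^4."""
    return np.sum(np.abs(psi) ** 4)

# a state spread uniformly over N sites has IPR = 1/N
N = 100
psi = np.full(N, 1 / np.sqrt(N))
print(ipr(psi))   # -> 0.01
```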
All in all, for the given geometry, all modes are hinge modes at fine tuning.
This effect resembles an extensive accumulation of boundary modes featured in
the non-Hermitian skin effect Yao and Wang (2018); Bergholtz _et al._ (2020);
Ashida _et al._ (2020); Kawabata _et al._ (2020). It is noteworthy that the
modes of the present periodically driven lattice are localized at the second-
order boundaries, viz. at the hinges rather than directly at the boundary
faces. Therefore, the formation of an extensive number of hinge modes might be
called _chiral second-order Floquet skin effect_ , in analogy to the
terminology used for non-Hermitian systems Kawabata _et al._ (2020). An
important difference is that in non-Hermitian systems the skin modes are
localized directly at the boundaries Yao and Wang (2018); Bergholtz _et al._
(2020); Ashida _et al._ (2020); Kawabata _et al._ (2020), whereas in the
present periodically driven system the hinge modes are localized at various
distances from the hinges to which they are bound, in such a way that the
hinge modes cover the whole lattice. This is because the eigenstates of a
unitary Floquet evolution operator are orthogonal to each other (like those of
a Hermitian operator) implying that there can be at most one Floquet
eigenstate per lattice site on average, so an accumulation of boundary states
is not possible directly at the boundary. To put it another way, the non-
Hermitian skin effect is associated with the exceptional points of the non-
Hermitian Hamiltonian when the boundaries are introduced Bergholtz _et al._
(2020); Ashida _et al._ (2020), whereas no exceptional points are formed for
periodically driven systems described by the unitary Floquet evolution
operators. More details on these issues are available in Appendix C.
### IV.2 Beyond fine-tuned driving
(a) without defect
(b) with defect
(c1) $t=3$ (c2) $t=5$ (c3) $t=20$
Figure 7: The dynamics of a particle initially localized at the site
$(x,y,z)=(1,1,16)$ for a system of $16\times 16\times 16$ sites with full open
boundary conditions and fine tuned $\phi=\pi/2$. The squared wave function at
different times is reflected in the opacity of the plotted dots. Different
colors indicate time. (a) Without defect. (b) In the presence of a potential
defect of energy $\Delta=3\pi$ at the two sites marked by the green tube.
Additionally, we plot the squared wave function of one hinge mode. (c)
Snapshots of the time evolution in the presence of the defect at different
times.
(a) without defect
(b) with defect
(c1) $t=3$
(c2) $t=5$
(c3) $t=20$
Figure 8: As Fig. 7, but for $\phi=0.9\times\pi/2$, away from fine tuning.
Although the above discussion is based on ballistic trajectories at the fine-
tuned driving parameter $\phi=\pi/2$, we expect the chirality of the hinge
modes to be robust also against perturbations and tuning away from
$\phi=\pi/2$. This applies especially to the hinge states with larger chiral
velocities, which are situated closer to the hinge than those with smaller
chiral velocities, and are spatially well separated from counter-propagating
modes at the opposite hinge. The chiral hinge modes persist for a rather wide
range of $\phi$ beyond $\phi=\pi/2$. This can be observed from the example of
$\phi=\pi/3$ ($33.3\%$ detuning, half-way across the Weyl phase transition)
displayed in Fig. 2 column (2) around $k_{z}=\pi/2$ and $E=0$. We can see that
the hinge modes at smaller distances $M$ from the hinge still preserve their
chirality. In turn, hinge modes with larger $M$ that are closer to the sample
center, are gradually mixed with modes of opposite chirality and become bulk
modes when $\phi$ deviates away from $\phi=\pi/2$.
We have also considered the exemplary eigenstates for a cube geometry with
full open boundary conditions along all Cartesian axes $x$, $y$ and $z$
presented in column (3) of Fig. 2 showing that for $\phi=\pi/2$ and
$\phi=\pi/3$ the chiral modes at different hinges are joined to form a closed
loop respecting inversion symmetry of the system [see also Fig. 7 (a)]. The
six hinges not participating in this closed loop do not carry hinge modes,
since their two boundary planes are not connected along the diagonal direction
$\bm{d}$. Meanwhile, non-hinge modes, representing the bulk dynamics, all
center along the cubic diagonal [column (3) of Fig. 2].
To further confirm the robustness of the hinge states, in Figs. 7 and 8 we
simulate the dynamics of a particle in the presence of a defect for a system
with open boundary conditions in all three directions, corresponding to column
(3) of Fig. 2. Figure 7 illustrates the dynamics of a particle initially
located at a corner $(x,y,z)=(1,1,16)$ of the system, where two transporting
hinges intersect each other, (a) for the fine-tuned situation without defect
and (b) in the presence of a strong potential offset of $\Delta=3\pi$ on two
neighboring hinge sites (indicated by a green tube) at
$(x,y,z)=(1,1,8),(1,1,9)$, respectively. The corresponding plots for non-fine-
tuned driving with $\phi=0.9(\pi/2)$ are presented in Fig. 8. We find that
despite this strong defect the chiral nature of the hinge modes ensures that
no backscattering occurs at the defect and the majority of the wave-packets
continues to follow chiral trajectories along the hinges. In Figs. 7 (b) and 8
(b) we also plot representative eigenstates of the system with the defect. The
eigenstates remain delocalized along the hinge, with only a small distortion
compared to the situation without defect shown in column (3) of Fig. 2. The
fact that the defect does not induce scattering away from the hinge (modes)
can be seen also from Fig. 9. It shows the time-evolved state after 100
driving cycles for the non-fine-tuned system ($\phi=0.9\times\pi/2$) both
without defect (a) and with defect (b). Very similar distributions are also
found after even longer evolution, e.g. over 1000 driving cycles; the
densities are, thus, representative for late-time states in the limit
$t\to\infty$.
(a) $\Delta=0$
(b) $\Delta=3\pi$
Figure 9: Density distribution of a particle initially localized at the
corner site $(x,y,z)=(1,1,16)$ after an evolution over $100$ driving cycles
for the non-fine-tuned parameter $\phi=0.9\times\pi/2$, without defect (a) and
with a defect (b) [corresponding to the parameters of Fig. 8 (b)]. The
densities are representative for late-time states in the limit $t\to\infty$;
similar distributions are found also after 1000 driving cycles.
### IV.3 Topological origin
It is an interesting question whether the robust chiral hinge states are a
consequence of topological properties of the driven system. However, as we see
from column (2) of Fig. 2, the quasi-energies of hinge states are fully mixed
with the bulk spectrum, and therefore no traditional topological band theories
for gapped or semimetallic systems apply. Here, it is the interplay of
boundary geometry and Floquet driving that generates such topologically
protected states.
In section IV.1 we have obtained equation (12) describing the evolution over
$2M+1$ driving periods of the $M$th hinge state $\left|M,k_{z},p\right\rangle$
at the fine-tuned point. Using this equation one can define the
quasienergy winding number Kitagawa _et al._ (2010) for the $M$th hinge state
via the Floquet evolution operator over the $2M+1$ driving periods:
$W_{M}=\frac{1}{2\pi
i}\int_{0}^{\pi}dk_{z}U_{k_{z},2M+1}^{-1}\partial_{k_{z}}U_{k_{z},2M+1}=1\,,$
(16)
where
$U_{k_{z},2M+1}=\left\langle
M,k_{z},p\right|\left(U_{F}\right)^{2M+1}\left|M,k_{z},p\right\rangle=-e^{2ik_{z}}\,$
(17)
and $p=0,1,\ldots 2M$. In Eq. (16) the integration over $k_{z}$ extends over
one Brillouin zone of width $\pi$, as the distance between two non-equivalent
lattice sites equals $2$ in the $z$ direction. A similar procedure can be
applied to the opposite hinge at $(x,y)=(L,L)$, where the hinge modes shown in
blue in column (2) of Fig. 2 are characterized by the opposite group velocity
and thus the opposite winding number $W_{M}=-1$.
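For the fine-tuned cycle operator of Eq. (17), the winding integral of
Eq. (16) can be evaluated directly; the following illustrative snippet
recovers $W_{M}=1$:

```python
import numpy as np

# Eq. (16) for the fine-tuned hinge cycle, with U_{k_z,2M+1} = -exp(2 i k_z)
# from Eq. (17); the k_z integral runs over one Brillouin zone of width pi.
kz = np.linspace(0.0, np.pi, 2001)
U = -np.exp(2j * kz)
W = np.trapz(np.gradient(U, kz) / U, kz) / (2j * np.pi)
print(W.real)   # -> 1.0
```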
The rigorous quantization of the topological invariant $W_{M}$ is associated
with fine tuning, $\phi=\pi/2$. However, the spatial separation between hinge
modes of opposite chirality allows them to preserve their chiral character also
away from the fine-tuned point $\phi=\pi/2$, as one can see in column (2) of Fig.
2. Thus, the formation of bulk states via the mixing of hinge modes of
opposite chirality happens mostly in the center of the system, where hinge
modes corresponding to large $M$ and small chiral velocities lie close to
their counter-propagating partners associated with the opposite hinge. In
turn, the states with the largest chiral velocity, which are situated close to
the hinge and far away from counter-propagating modes of the opposite hinge,
are much less affected by a small detuning.
In this way, tuning away from $\phi=\pi/2$ we find a crossover (rather than a
topological transition) in which the chiral hinge modes are gradually
destroyed, as can be observed in the real-space plots in columns (2) and (3)
of Fig. 2. Note that already a small deviation from the fine tuned point
$\phi=\pi/2$ destroys the chiral hinge states in a narrow region near
$k_{z}=0$ and $E=\pi$, as can be seen from the spectrum shown for $\phi=\pi/3$
in column (2) of Fig. 2. In this spectral region, surface Fermi arc states are formed,
which equally provide definite chiral transport, yet around the surfaces
rather than the hinges. Thus a fraction of the chiral hinge states is
transformed into Fermi arc surface states in the vicinity of the Weyl points.
The latter states extend to a larger and larger spectral area as the detuning
increases.
steps | 1 | 2 | 3 | 4 | 5 | 6
---|---|---|---|---|---|---
$\alpha_{00}$ | 0 | 0 | 0 | 1 | 1 | 1
$\alpha_{10}$ | 1 | 0 | 1 | 0 | 1 | 0
$\alpha_{01}$ | 1 | 1 | 0 | 0 | 0 | 1
$\alpha_{11}$ | 0 | 1 | 1 | 1 | 0 | 0
Figure 10: The stepwise modulation of the dimensionless superlattice
amplitudes $\alpha_{ab}$ according to the protocol given in the table gives
rise to a different dimerization of the cubic lattice in each driving step,
enabling tunneling along the desired bonds.
## V Experimental Realization with ultracold atoms in optical lattices
### V.1 Engineering of the driven lattice
Above, we have shown that the proposed modulation of tunnelling gives rise to
a variety of phenomena, including the robust creation of a pair of Weyl
points, unidirectional bulk transport, chiral Goos-Hänchen-like shifts, and
the macroscopic accumulation of chiral hinge modes for open boundary
conditions corresponding to a chiral second-order Floquet skin effect. The
model itself is, nevertheless, rather simple and its implementation with
ultracold atoms in optical lattices can be accomplished using standard
experimental techniques. All that is needed is a static cubic host lattice potential of equal depth $V_{0}$ in each Cartesian direction and a superlattice potential whose amplitudes along various diagonal lattice directions are modulated stepwise in time in order to suppress or allow tunneling along the six different bonds specified by our protocol. This can be achieved using the following optical lattice potential:
$V(\bm{r})=V_{0}\sum_{\mu=x,y,z}\cos^{2}(2k_{L}r_{\mu})+V_{1}\sum_{a,b=0,1}\alpha_{ab}(t)\cos^{2}k_{L}\left(x+(-1)^{a}y+(-1)^{b}z\right)\,,$ (18)
where only two of the four modulating lasers $\alpha_{ab}$ with $a,b=0,1$ are
turned on in each driving step, as shown in Fig. 10. Such a modulation
provides the required six-stage driving of the cubic lattice. Note that a
similar modulation has recently been implemented in two dimensions
Wintersperger _et al._ (2020).
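To make the connection between the table of Fig. 10 and Eq. (18) explicit, a minimal sketch (variable names and default amplitudes are our own choices) evaluating the instantaneous potential during a given driving step might read:

```python
import numpy as np

# Stepwise superlattice amplitudes alpha_ab of Fig. 10, one value per step 1..6
ALPHA = {
    (0, 0): [0, 0, 0, 1, 1, 1],
    (1, 0): [1, 0, 1, 0, 1, 0],
    (0, 1): [1, 1, 0, 0, 0, 1],
    (1, 1): [0, 1, 1, 1, 0, 0],
}

def lattice_potential(x, y, z, step, V0=1.0, V1=1.0, kL=1.0):
    """Optical lattice potential of Eq. (18) during driving step 1..6:
    the static cubic host lattice plus the two superlattices switched on
    in that step (exactly two alpha_ab are nonzero per step)."""
    host = V0 * (np.cos(2 * kL * x) ** 2 + np.cos(2 * kL * y) ** 2
                 + np.cos(2 * kL * z) ** 2)
    superlattice = sum(
        ALPHA[(a, b)][step - 1]
        * np.cos(kL * (x + (-1) ** a * y + (-1) ** b * z)) ** 2
        for a in (0, 1) for b in (0, 1))
    return host + V1 * superlattice
```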
### V.2 Detection of hinge dynamics
To observe the dynamics associated with the hinge modes, one can apply the box potential realized in recent experiments Gaunt _et al._ (2013); Navon _et al._ (2016); Mukherjee _et al._ (2017b); Lopes _et al._ (2017a, b); Eigen _et al._ (2017); Ville _et al._ (2018). There, thin sheets of laser beams penetrate the quantum gas, creating a steep potential barrier. Three pairs of such beams are imposed on a three-dimensional system, creating sharp “walls” for the box potential while leaving the central part of the gas homogeneous.
Essentially, such a potential combined with our lattice driving scheme
immediately leads to the particle dynamics described in Fig. 7 and Fig. 8. To
take into account realistic experimental situations, two modifications are
adopted in our following simulations. First, we consider the effect of a
relatively “softer” wall for the box potential with
$V_{\text{box}}(\bm{r})=\frac{V_{b}}{2}\sum_{\mu=x,y,z}\left(2+\tanh\frac{r_{\mu}^{(1)}-r_{\mu}}{\xi}+\tanh\frac{r_{\mu}-r_{\mu}^{(2)}}{\xi}\right),$ (19)
where the potential ramps up over a finite distance of roughly $4\xi$ near the
boundaries $r_{\mu}^{(1,2)}$; see Fig. 11 for instance. The second modification we adopt is that the initial state is not taken to be localized on a single lattice site but is instead described by a Gaussian wave packet of finite width,
$\displaystyle\psi_{i=(x,y,z)}(t=0)=e^{-[(x-x_{0})^{2}+(y-y_{0})^{2}+(z-z_{0})^{2}]/2s_{0}^{2}}.$
(20)
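For concreteness, a minimal sketch of these two modifications, the softened box walls of Eq. (19) and the Gaussian initial state of Eq. (20), might look as follows (the grid size is illustrative; center and width follow Fig. 11):

```python
import numpy as np

def v_box(r, r1, r2, Vb=1.0, xi=1.0):
    """One Cartesian term of the soft box potential, Eq. (19)."""
    return (Vb / 2) * (2 + np.tanh((r1 - r) / xi) + np.tanh((r - r2) / xi))

def gaussian_packet(x, y, z, center=(4, 4, 12), s0=0.75):
    """Initial Gaussian wave packet of Eq. (20) (not yet normalized)."""
    x0, y0, z0 = center
    return np.exp(-((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2)
                  / (2 * s0 ** 2))

# Potential and normalized initial state on a small L x L x L lattice
L = 24
x, y, z = np.meshgrid(*3 * [np.arange(1, L + 1)], indexing="ij")
V = sum(v_box(r, r1=1, r2=L) for r in (x, y, z))  # soft walls at the edges
psi0 = gaussian_packet(x, y, z)
psi0 = psi0 / np.linalg.norm(psi0)
```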
Figure 11: The dynamics of particles in a box potential with different softness of the boundary. Panels: (a) $|\psi_{i}(t=0)|$; (b) projections to the $x$-$y$ (left) and $x$-$z$ (right) planes; (c1)-(c3) ideal boundary, with $|\psi_{i}(t=5)|$ and $|\psi_{i}(t=10)|$; (d1)-(d3) softer boundary, with $|\psi_{i}(t=5)|$ and $|\psi_{i}(t=10)|$. The initial density distribution takes a Gaussian profile spreading over several lattice sites. The parameters are $\phi=\pi/2$, $V_{b}T/6\hbar=7.5\pi$. The initial Gaussian profile has the center $(x_{0},y_{0},z_{0})=(4,4,12)$ and width $s_{0}=0.75$.
The results of the dynamics are presented in Fig. 11 (c1)-(c3) and (d1)-(d3), for the ideal sharp boundary (as in Fig. 7) and the realistic softer boundary of an experimental setting, respectively. We see that the chiral motion snapshots for the ideal and softer boundaries exhibit qualitatively the same character, signalling that a softer boundary does not cause significant changes. This is consistent with the previous simulations showing the robustness of hinge states and their resulting chiral dynamics against local defects in Fig. 7 and Fig. 8. The major difference from the previous cases, then, derives from the initial state, which overlaps with more than one set of eigenstates near the hinge, each with a different group velocity as given in Eq. (15). Finally, we mention that some portion of the initial particle distribution resides within the region of significant variation of $V_{\text{box}}$. That portion of the particles could be permanently confined to the initial hinge due to a mechanism similar to Wannier-Stark localization. However, the majority of the particles still travel into the connecting hinges, as shown in Fig. 11 (c3) and (d3).
In cold atom experiments, density profiles are usually detected in a certain projection plane, where the integrated (column-averaged) densities are observed. To this end, we point out that the hinge dynamics can be confirmed by observing the density profiles in two perpendicular planes. A schematic plot is given in Fig. 11 (b), corresponding to the dynamics along the hinge $x=y\sim 1$. The density profile taken in the $x$-$y$ plane (i.e. the “top” view) would show a localized distribution at the corner, verifying that the particles are located only at $x=y\sim 1$. Meanwhile, the profile in the $x$-$z$ plane (i.e. the “side” view) indicates the movement and spreading along $z$. In a more general situation, i.e. in the long-time limit with all 6 hinges populated as in Fig. 9, additional image projection planes could be exploited. We also mention that a simultaneous implementation of multiple imaging planes has been applied in experiments Lu _et al._ (2020); Wang _et al._ .
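A short sketch (ours) of how such column-averaged densities would be extracted from a simulated wave function, assuming axis ordering $[x,y,z]$:

```python
import numpy as np

def column_densities(psi):
    """Column-integrated densities, as observed in absorption imaging:
    the 'top' view (x-y plane, summed over z) reveals localization at the
    corner x = y ~ 1, while the 'side' view (x-z plane, summed over y)
    reveals the motion and spreading along z."""
    n = np.abs(psi) ** 2
    return n.sum(axis=2), n.sum(axis=1)  # (top view, side view)
```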
### V.3 Detection of Floquet Weyl points
Weyl physics has been explored in recent cold atom experiments and theoretical
proposals Lu _et al._ (2020); Wang _et al._ ; Zhang _et al._ (2015); Dubček
_et al._ (2015); He _et al._ (2016), and also extensively in solid state
systems Armitage _et al._ (2018). Here, we discuss a scheme closely related
to a recent experiment Ünal _et al._ (2019); Wintersperger _et al._ (2020)
detecting the spacetime singularities in anomalous Floquet insulators.
Figure 12: Simulation of gap measurements using Stückelberg interferometry. (a1) and (a2) Contours of the quasi-energy gap $\Delta^{(0)}$ at $E=0$, for $\phi=\pi/8$ and $\phi=\pi/3$ respectively. (b) The gaps at $E\sim 0$ and $E\sim\pi$ at the exemplary point $\bm{k}_{0}=k_{0}(1,-1,1)$ with $k_{0}=\pi/3-0.2$, together with the measured gap, which takes the smaller of the two.
First, the band touching at the Weyl points can be verified using Stückelberg interferometry Zenesini _et al._ (2010); Kling _et al._ (2010); Shevchenko _et al._ (2010); Wintersperger _et al._ (2020). Such a method measures the smaller of the two band gaps at $E\sim 0$ and $\pi$; see Ref. Wintersperger _et al._ (2020) for details. Compared with the experiments for insulators, a difference here is that the two bands are, overall, always gapless at $E=0$. That means a global gap measurement would prevent us from gaining information about the gaps or band touchings at quasienergy $E=\pi$. Fortunately, there exists a finite region neighboring $\bm{k}_{0}=\pi/3(1,-1,1)$ where the bands are gapped at $E=0$ for all $\phi$; see Fig. 12 (a1), (a2) for example. A local gap closing can then be measured near $\bm{k}_{0}$.
The specific measurement for our case can be performed in the following way. An example for $\bm{k}_{0}=(\pi/3-0.2)(1,-1,1)$ is presented in Fig. 12 (b). Let us denote the quasi-energies of the two Floquet bands at $\bm{k}_{0}$ by $E_{\pm}(\bm{k}_{0})=\pm E_{0}$. Here, we use the branch cut along $\pi$ in taking the logarithm of the Floquet eigenvalues $e^{-iE_{\pm}(\bm{k}_{0})}$. The two quasi-energies have the same magnitude and opposite signs due to particle-hole and inversion symmetry, as explained in Sec. II. The local gap around $E\sim 0$ is then $\Delta^{(0)}\equiv 2E_{0}$, while the other gap, around the Floquet Brillouin zone boundary $E\sim\pi$, is $\Delta^{(\pi)}\equiv 2\pi-2E_{0}$. Therefore, gap closings, $\Delta^{(0)}=0$ or $\Delta^{(\pi)}=0$, can only occur at quasienergies $0$ or $\pi$ (mod $2\pi$), respectively. In experiments, one can start from the high-frequency limit ($\phi\rightarrow 0$), where the band width is small compared to $2\pi$ and the measured gap therefore always corresponds to $\Delta^{(0)}$. Slowing down the driving, the gap $\Delta^{(\pi)}$ shrinks while the other gap $\Delta^{(0)}$ expands. At some point, the two gaps coincide in magnitude, as shown in Fig. 12 (b). Since it is always the smaller of $\Delta^{(0)}$ and $\Delta^{(\pi)}$ that shows up in the experimental measurement, one will observe a cusp in the measured gap, near $\phi\approx 0.41$ in Fig. 12 (b). From the occurrence of the cusp one can infer that for $\phi>0.41$ the experimental data starts to reveal $\Delta^{(\pi)}$, whose vanishing at $\phi\approx 0.87$ shows the existence of the Weyl point around $E\sim\pi$. Similar measurements can be performed for $\bm{k}$ slightly deviating from $\bm{k}_{0}$, which will show that at $\phi=0.87$, $\Delta^{(\pi)}$ remains finite, proving that the band closure around $E\sim\pi$ is a point touching. As the designated $\phi$ is slowly approached, one can measure the gap at a given $\bm{k}$. A shortcut for our system is that focusing on momenta along the diagonal $\bm{k}_{0}=k_{0}(1,-1,1)$ is sufficient to determine the Weyl point, as discussed in Sec. III.
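This protocol is easy to emulate numerically. The sketch below (our own illustration, not a simulation of the actual interferometric sequence) uses the analytic dispersion along the diagonal, Eq. (27) of Appendix A, to construct $\Delta^{(0)}$ and $\Delta^{(\pi)}$ and to locate the cusp and the gap closing:

```python
import numpy as np

def gaps(phi, k0):
    """Delta^(0) and Delta^(pi) from the diagonal dispersion Eq. (27),
    folding the quasienergy into [0, pi] (branch cut along pi)."""
    X = np.cos(phi) ** 2 - np.sin(phi) ** 2 * np.cos(2 * k0)
    E_raw = 3 * np.arccos(np.clip(X, -1.0, 1.0))
    E0 = np.abs((E_raw + np.pi) % (2 * np.pi) - np.pi)
    return 2 * E0, 2 * np.pi - 2 * E0

phi = np.linspace(0.01, 1.2, 2000)
d0, dpi = gaps(phi, k0=np.pi / 3 - 0.2)
measured = np.minimum(d0, dpi)           # interferometry probes the smaller gap
print(phi[np.argmin(np.abs(d0 - dpi))])  # cusp near 0.40 (cf. ~0.41, Fig. 12(b))
print(phi[np.argmin(dpi)])               # Weyl point near 0.86 (cf. ~0.87)
```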
With the Weyl points determined, one could further apply the band tomography method Hauke _et al._ (2014); Fläschner _et al._ (2016) to momentum states surrounding a certain Weyl point in order to determine its charge. Note that one does not need eigenstate information throughout the whole Brillouin zone (the two bands are gapless in certain regions anyway), but only on an arbitrarily small surface wrapping a Weyl point $\bm{k}^{\text{(Weyl)}}$ determined previously. As shown before, near the Weyl points of our model there exists a finite region where the two bands are fully gapped at both $E\sim 0$ and $\pi$, which allows for populating the eigenstates of a certain band with bosons Fläschner _et al._ (2016); Wintersperger _et al._ (2020). As an example, in Fig. 13 (a) we illustrate the closed surface formed by the 6 faces $q_{x,y,z}=\pm 0.5$ of a cube, where $\bm{q}=\bm{k}-\bm{k}^{(\text{Weyl})}$, with $\bm{k}^{(\text{Weyl})}\approx 0.955\times(1,-1,1)$ for $\phi=\pi/3$, as in Fig. 2. From the full information of the Floquet eigenstates $|u_{n,\bm{k}}\rangle$ given by Eq. (5), the Berry curvature penetrating out of a plane normal to the unit vector $\bm{e}_{\mu}$ can be computed as
$\Omega_{\mu}(\bm{k})=\pm\mathrm{i}\sum_{\nu\rho}\varepsilon_{\mu\nu\rho}\left\langle\partial_{k_{\nu}}u_{n,\bm{k}}|\partial_{k_{\rho}}u_{n,\bm{k}}\right\rangle\,,$
where $\varepsilon_{\mu\nu\rho}$ is the Levi-Civita symbol, and the $\pm$ sign indicates that the unit vector penetrating out of the cube is along the $\pm\bm{e}_{\mu}$ direction. Figure 13 shows the momentum-resolved Berry curvature on each wrapping surface and the net flux $\int_{\text{surf}}d\bm{k}\,\Omega_{\mu}(\bm{k})$ through that plane.
Figure 13: The Berry curvatures for the surfaces wrapping a Weyl point. The net fluxes through the six faces are: $\Omega_{z+}$, 1.284; $\Omega_{z-}$, 0.866; $\Omega_{x+}$, 1.284; $\Omega_{x-}$, 0.866; $\Omega_{y+}$, 1.469; $\Omega_{y-}$, 0.513. Adding the net Berry curvatures up gives $2\pi$.
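Numerically, such surface-integrated Berry fluxes are conveniently evaluated with a gauge-invariant plaquette (Wilson-loop) discretization; the generic sketch below is our own illustration, where `bloch_state(k)` stands for a user-supplied routine returning $|u_{n,\bm{k}}\rangle$ and `plaquettes` tiles the wrapping surface with outward orientation:

```python
import numpy as np

def plaquette_flux(u00, u10, u11, u01):
    """Berry flux through one plaquette from its four corner eigenstates,
    via the gauge-invariant Wilson loop of link overlaps."""
    loop = (np.vdot(u00, u10) * np.vdot(u10, u11)
            * np.vdot(u11, u01) * np.vdot(u01, u00))
    return -np.angle(loop)

def weyl_charge(bloch_state, plaquettes):
    """Total Berry flux through a closed surface wrapping a Weyl point,
    in units of 2*pi. `plaquettes` lists the four corner k-points of each
    plaquette, ordered consistently with the outward surface normal."""
    total = sum(plaquette_flux(*(bloch_state(k) for k in quad))
                for quad in plaquettes)
    return total / (2 * np.pi)  # -> +-1 for a single Weyl point enclosed
```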
## VI Conclusion
In this paper, we have shown that three-dimensional periodically driven
lattice systems can show a macroscopic accumulation of chiral hinge modes,
when subjected to open boundary conditions. This corresponds to a chiral
second-order Floquet skin effect. An intuitive understanding of this effect
was given by considering the system at a fine-tuned point of the periodic
driving, where the bulk motion can only take place forwards or backwards along
a single diagonal direction. As a consequence, for open boundary conditions,
particles are reflected back and forth between hinge-sharing surface planes,
with a drift along the direction of the hinge being accomplished by chiral
Goos-Hänchen-like shifts associated with these reflections. This effect is
different from higher-order Floquet topological insulators (HOFTI) not only
regarding the underlying mechanism, but also because no macroscopic
accumulation of hinge modes takes place in HOFTI. The effect resembles, however, the accumulation of boundary modes in non-Hermitian systems, even though noticeable differences are found here as well. Namely, in the non-Hermitian case the accumulation of boundary modes occurs close to the hinge. This is not possible in our system since, unlike the eigenmodes of a non-Hermitian Hamiltonian, the Floquet hinge modes are orthogonal to each other, as they are eigenstates of the unitary Floquet evolution operator $U_{F}$. Thus we find hinge-bound modes also at larger distances from the hinge. Another interesting aspect is the competition or interplay between the hinge modes and the emergence of robust Weyl points in our system, such that the hinge states can coexist with the Fermi arc surface states. The implementation of the model featuring both the second-order Floquet skin effect and the Weyl physics is straightforward with ultracold atoms in optical superlattices.
## VII Acknowledgment
The authors thank E. Anisimovas and F. Nur Ünal for helpful discussions. We
acknowledge funding by the European Social Fund under grant No.
09.3.3-LMT-K-712-01-0051 and the Deutsche Forschungsgemeinschaft (DFG) via the
Research Unit FOR 2414 under Project No. 277974659.
## Appendix A Evolution operator and quasienergies along the diagonal
### A.1 Evolution operator
In the bulk the stroboscopic evolution operator $U_{F}$ is generally given by
Eqs. (3)-(4) in the main text. Let us consider the operator $U_{F}$ for wave-
vectors $\mathbf{k}$ along the cubic diagonal direction $\bm{d}=(1,-1,1)$, for
which
$\bm{k}=\pm k_{0}\bm{d}\quad\text{and, thus,}\quad|\bm{k}|=\sqrt{3}k_{0}\,,$ (21)
with $k_{0}>0$. In that case Eqs. (3)-(4) simplify to
$U_{F}=\left[U\left(\mp k_{0}\right)U\left(\pm k_{0}\right)\right]^{3}\,,$
(22)
where
$U\left(\mp k_{0}\right)U\left(\pm k_{0}\right)=\left(\cos\phi-\mathrm{i}\tau_{\mp k_{0}}\sin\phi\right)\left(\cos\phi-\mathrm{i}\tau_{\pm k_{0}}\sin\phi\right)\,,$ (23)
with $\tau_{\pm k_{0}}=\tau_{1}\cos{k_{0}}\pm\tau_{2}\sin{k_{0}}$ and Pauli matrices $\tau_{1,2,3}$ acting on the sublattice degree of freedom. Explicitly, one thus has
$U\left(\mp k_{0}\right)U\left(\pm k_{0}\right)=\left[\cos^{2}\phi-\sin^{2}\phi\cos\left(2k_{0}\right)\right]-\mathrm{i}d\,,$ (24)
with
$d=\pm\sin^{2}\phi\sin\left(2k_{0}\right)\tau_{3}+2\cos\phi\sin\phi\cos k_{0}\,\tau_{1}\,.$ (25)
### A.2 Quasi-energies
The evolution operator $U_{F}=e^{-\mathrm{i}H_{F}}$ defines the quasienergies, i.e., the eigenvalues of the Floquet Hamiltonian $H_{F}$, which describes the stroboscopic time evolution in multiples of the driving period $T=1$. Using Eqs. (22) and (24) for the evolution operator, one arrives at the following equation for the quasi-energies $E_{\mathbf{k}}$:
$\cos\left(E_{\mathbf{k}}/3\right)=\cos^{2}\phi-\sin^{2}\phi\cos\left(2k_{0}\right)\,.$
(26)
This provides the dispersion (modulo $2\pi$) along the diagonal
$k_{x}=-k_{y}=k_{z}=k_{0}$
$E_{k_{0}\bm{d},\gamma}=3\gamma\arccos\left[\cos^{2}\phi-\sin^{2}\phi\cos\left(2k_{0}\right)\right],\,\,\mathrm{with}\,\,\gamma=\pm 1\,.$ (27)
In particular, quasienergies $E_{k_{0}\bm{d},\gamma}=\pi$ (modulo $2\pi$)
correspond to
$\cos^{2}\phi-\sin^{2}\phi\cos\left(2k_{0}\right)=1/2\,,$ (28)
and thus
$\cos\left(2k_{0}\right)=\frac{1/2-\sin^{2}\phi}{\sin^{2}\phi}\,,$ (29)
giving (modulo $\pi$)
$k_{0}=\left(1/2\right)\arccos\left[\left(1/2-\sin^{2}\phi\right)/\sin^{2}\phi\right]\,.$
(30)
At the fine-tuned point ($\phi=\pi/2$) the condition Eq. (28) reduces to
$\cos\left(2k_{0}\right)=-1/2\,,\quad\mathrm{giving}\quad k_{0}=\pi/3\,.$ (31)
On the other hand, at $\phi=\pi/6$ one has $\sin^{2}\phi=1/4$, so that
$\cos\left(2k_{0}\right)=1,\,\quad\mathrm{giving}\quad k_{0}=0\,.$ (32)
In this way, two band touching points are formed at quasienergy $\pi$ for $\pi/6<\phi<\pi/2$, as well as for $\pi/2<\phi<5\pi/6$ (beyond the fine-tuning point at $\phi=\pi/2$). For $\phi<\pi/6$ or $\phi>5\pi/6$, Eq. (28) can no longer be fulfilled, so a band gap opens at quasienergy $\pi$.
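Equations (28)-(32) are straightforward to evaluate; a short sketch (ours) returns the position of the quasienergy-$\pi$ touching and signals the gapped regime:

```python
import numpy as np

def weyl_k0(phi):
    """Momentum k0 of the quasienergy-pi band touching along the diagonal,
    Eq. (30); NaN where Eq. (28) cannot be fulfilled (gapped at pi)."""
    s2 = np.sin(phi) ** 2
    arg = (0.5 - s2) / s2
    arg = np.where(np.abs(arg) <= 1, arg, np.nan)
    return 0.5 * np.arccos(arg)

print(weyl_k0(np.pi / 2))  # fine-tuned point: k0 = pi/3, Eq. (31)
print(weyl_k0(np.pi / 6))  # touching enters at k0 = 0, Eq. (32)
print(weyl_k0(np.pi / 8))  # nan: gapped at quasienergy pi (phi < pi/6)
```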
## Appendix B Stroboscopic hinge motion at fine tuning
In this Appendix we give a detailed description of the stroboscopic real-space
dynamics of the system at fine tuning, $\phi=\pi/2$, giving rise to chiral
hinge-bound Floquet modes. We will consider the hinge that is shared by the
two surface planes oriented in the $-x$ and $-y$ direction, which is parallel
to the $z$-axis. The projection of the particle’s trajectory in the $xy$ plane
is illustrated in Figs. 5 and 6. A particle of sublattice $s=+1,-1\equiv A,B$ is translated by $2s\bm{d}$ during each driving cycle, provided $x+2s\geq 1$ and $y-2s\geq 1$, ensuring it does not hit either boundary plane. In that case the state-vector $\left|s,x,y,z\right\rangle$ transforms according to the following rule after a single driving period:
$U_{F}\left|s,x,y,z\right\rangle=-\left|s,x+2s,y-2s,z+2s\right\rangle\,.$ (33)
The particle thus propagates with a stroboscopic velocity
$\mathbf{v}=2s\left(1,-1,1\right)$ in opposite directions $s=\pm 1$ for
different sublattices $A$ and $B$.
Suppose initially the particle occupies a site of the sublattice $B$ at the
boundary $y=1$ situated $M$ sites away from the hinge ($x=M+1$) with odd
$M+z$, so that $s=B=-1$. The corresponding initial state vector is
$\left|s,M+1,1,z\right\rangle\equiv\left|B,M+1,1,z\right\rangle$. The
subsequent stroboscopic trajectory projected to the $xy$ plane is shown in
Fig. 5 for $M=4$ and in Fig. 6 for $M=5$. Generally it takes $\left(2M+1\right)$ driving periods for the particle to return to its initial position $\left(M+1,1\right)$ in the $xy$ plane. To see this, consider the stroboscopic evolution of the particle with an even $M>2$ and odd $z$. The stroboscopic motion of the particle then splits into four bulk and four boundary segments, illustrated in Fig. 5 for $M=4$.
During the first $M/2$ driving periods the particle undergoes the bulk
ballistic motion along the sites of the $B$ sublattice, and the state vector
transforms as
$\left|B,M+1,1,z\right\rangle\rightarrow\left|B,1,M+1,z-M\right\rangle$.
Subsequently the particle is reflected from the plane $x=1$ to a site of the
sublattice $A$ situated closer to the hinge,
$\left|B,1,M+1,z-M\right\rangle\rightarrow\left|A,2,M-1,z+2-M\right\rangle$,
as shown in Fig. 1(c) of the main text. During the next $M/2-1$ driving periods the particle propagates ballistically along the sites of the $A$ sublattice, giving $\left|A,2,M-1,z+2-M\right\rangle\rightarrow\left|A,M,1,z\right\rangle$. The subsequent reflection from the plane $y=1$ brings the particle to a site of the $B$ sublattice situated further away from the hinge, $\left|A,M,1,z\right\rangle\rightarrow\left|B,M,2,z-2\right\rangle$. The evolution takes place in a similar way during the final four segments. Explicitly, the full stroboscopic dynamics is given by:
$\left(U_{F}\right)^{M/2}\left|B,M+1,1,z\right\rangle=\left(-1\right)^{M/2}\left|B,1,M+1,z-M\right\rangle\,,$ (34)
$U_{F}\left|B,1,M+1,z-M\right\rangle=-\mathrm{i}\left|A,2,M-1,z+2-M\right\rangle\,,$ (35)
$\left(U_{F}\right)^{M/2-1}(-\mathrm{i})\left|A,2,M-1,z+2-M\right\rangle=\mathrm{i}\left(-1\right)^{M/2}\left|A,M,1,z\right\rangle\,,$ (36)
$U_{F}\,\mathrm{i}\left|A,M,1,z\right\rangle=\left|B,M,2,z-2\right\rangle\,,$ (37)
$\left(U_{F}\right)^{M/2-1}\left|B,M,2,z-2\right\rangle=-\left(-1\right)^{M/2}\left|B,2,M,z-M\right\rangle\,,$ (38)
$U_{F}(-1)\left|B,2,M,z-M\right\rangle=\mathrm{i}\left|A,1,M,z-M\right\rangle\,,$ (39)
$\left(U_{F}\right)^{M/2-1}\mathrm{i}\left|A,1,M,z-M\right\rangle=(-\mathrm{i})\left(-1\right)^{M/2}\left|A,M-1,2,z-2\right\rangle\,,$ (40)
$U_{F}(-\mathrm{i})\left|A,M-1,2,z-2\right\rangle=-\left|B,M+1,1,z-2\right\rangle\,.$ (41)
In this way, after $\left(2M+1\right)$ driving periods the particle returns to the initial position $\left(M+1,1\right)$ in the $xy$ plane and is shifted by $2$ lattice units to an equivalent point of the sublattice $B$ in the direction opposite to the $z$ axis. The same holds for an initial state vector $\left|B,M+1,1,z\right\rangle$ characterized by an odd $M$ and even $z$ (see Fig. 6 for $M=5$), as well as for a particle situated closer to the hinge ($0\leq M\leq 3$), where the reflections can take place simultaneously from both planes $x=1$ and $y=1$, as illustrated in Fig. 1(d) in the main text. Thus one can write for any distance $M\geq 0$ from the hinge:
$\left(U_{F}\right)^{2M+1}\left|B,M+1,1,z\right\rangle=-\left|B,M+1,1,z-2\right\rangle\,.$ (42)
This means the particle propagates along the hinge in the $-z$ direction with the stroboscopic velocity equal to $-2/\left(2M+1\right)$. Relations analogous to Eq. (42) hold for all $2M+1$ states of the stroboscopic sequence featured in Eqs. (35)-(41).
The origin of such chiral hinge states can be explained as follows. The
particle in the sublattice $B$ is reflected to a site of the $A$ sublattice
situated closer to the hinge, whereas the particle in the sublattice A is
reflected to a site of the sublattice $B$ situated further away from the
hinge, as one can see in Figs. 5 and 6, as well as in Eqs. (35), (37), (39),
(41). Consequently, the number of $B$ sites visited over all $2M+1$ driving periods ($M+1$) exceeds the corresponding number of $A$ sites ($M$). The four reflections do not yield any net shift of the particle in the $z$ direction. On the other hand, the ballistic motion between sites of the same sublattice $B$ ($A$) is accompanied by a shift of $2$ lattice sites in the $-z$ ($z$) direction for each driving period. The overall shift of the particle to an equivalent site in the $-z$ direction is thus due to the difference in the number of visited $B$ and $A$ sites after $2M+1$ driving periods.
## Appendix C Non-Hermitian Hamiltonian corresponding to stroboscopic
operator
Recently it was suggested by Bessho and Sato (2020) to associate a non-Hermitian Hamiltonian $H_{NH}\left(\mathbf{k}\right)$ with the momentum-space stroboscopic evolution operator $\mathrm{i}U_{F}\left(\mathbf{k}\right)$. Let us consider such a non-Hermitian Hamiltonian for our 3D periodically driven lattice,
$H_{NH}=\mathrm{i}U_{F}\,.$ (43)
For the fine-tuned driving ($\phi=\pi/2$) the bulk stroboscopic evolution operator corresponds to a non-Hermitian Hamiltonian describing a unidirectional transfer between the lattice sites along the diagonal $\mathbf{d}=(1,-1,1)$ and in the opposite direction $-\mathbf{d}$ for the sublattices $A$ and $B$, respectively:
$H_{NH}^{bulk}=-\mathrm{i}\sum_{\mathbf{r}_{A}}\left|A,\mathbf{r}_{A}+2\mathbf{d}\right\rangle\left\langle A,\mathbf{r}_{A}\right|-\mathrm{i}\sum_{\mathbf{r}_{B}}\left|B,\mathbf{r}_{B}-2\mathbf{d}\right\rangle\left\langle B,\mathbf{r}_{B}\right|\,.$ (44)
The open boundary conditions for the hinge corresponding to $x\geq 1$ and
$y\geq 1$ are obtained by imposing a constraint on the state-vectors entering
the real space non-Hermitian Hamiltonian (44)
$\left|s,\mathbf{r}_{s}\right\rangle=0\quad\mathrm{for}\quad\mathbf{r}_{s}\cdot\bm{e}_{x,y}\leq 0,\,\,\mathrm{with}\,\,s=A,B\,.$ (45)
The bulk non-Hermitian Hamiltonian (44), supplied with the open boundary conditions (45), describes a unidirectional coupling along unconnected linear chains of $A$ or $B$ sites terminating at the hinge planes. The eigenstates of each linear chain represent non-Hermitian skin modes which are localized at one end of the chain, depending on the direction of asymmetric hopping Bergholtz _et al._ (2020); Ashida _et al._ (2020). In the present situation such skin modes would be trivially localised on different planes of the hinge for the chains comprising different sublattice sites $A$ or $B$, and no chiral motion is obtained along the hinge.
Yet the open boundary conditions (45) are not sufficient to properly represent the boundary behavior of a particle in our periodically driven lattice. In fact, the bulk non-Hermitian Hamiltonian (44), supplied with the boundary conditions (45), no longer corresponds to a unitary evolution operator. Thus one cannot associate such a non-Hermitian operator with the evolution operator, in contradiction with Eq. (43). Unitarity is restored by adding to $H_{NH}^{bulk}$ extra terms [in addition to the conditions (45)] that include the effects of the chiral backward reflection at the hinge planes into a neighboring linear chain composed of the sites of the other sublattice. These terms are described by Eqs. (35), (37), (39) and (41), and correspond to the dashed lines in Figs. 5-6. The chiral reflection appears due to the micromotion during the six-step driving period (see Fig. 1(b)), and is a characteristic feature of the periodically driven 3D lattice. As discussed in the preceding Appendix B, the backreflection leads to the chiral motion along the hinge.
## References
* Kitagawa _et al._ (2010) T. Kitagawa, E. Berg, M. Rudner, and E. Demler, Phys. Rev. B 82, 235114 (2010).
* Rudner _et al._ (2013) M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Phys. Rev. X 3, 031005 (2013).
* Roy and Harper (2017a) R. Roy and F. Harper, Phys. Rev. B 96, 155118 (2017a).
* Yao _et al._ (2017a) S. Yao, Z. Yan, and Z. Wang, Phys. Rev. B 96, 195303 (2017a).
* Mukherjee _et al._ (2017a) S. Mukherjee, A. Spracklen, M. Valiente, E. Andersson, P. Öhberg, N. Goldman, and R. R. Thomson, Nat. Commun. 8, 13918 (2017a).
* Peng _et al._ (2016) Y.-G. Peng, C.-Z. Qin, D.-G. Zhao, Y.-X. Shen, X.-Y. Xu, M. Bao, H. Jia, and X.-F. Zhu, Nat. Commun. 7, 13368 (2016).
* Maczewsky _et al._ (2016) L. J. Maczewsky, J. M. Zeuner, S. Nolte, and A. Szameit, Nat. Commun. 8, 13756 (2016).
* von Keyserlingk and Sondhi (2016) C. W. von Keyserlingk and S. L. Sondhi, Phys. Rev. B 93, 245145 (2016).
* Else and Nayak (2016) D. V. Else and C. Nayak, Phys. Rev. B 93, 201103 (2016).
* Potirniche _et al._ (2017) I.-D. Potirniche, A. C. Potter, M. Schleier-Smith, A. Vishwanath, and N. Y. Yao, Phys. Rev. Lett. 119, 123601 (2017).
* Wintersperger _et al._ (2020) K. Wintersperger, C. Braun, F. N. Ünal, A. Eckardt, M. D. Liberto, N. Goldman, I. Bloch, and M. Aidelsburger, Nature Physics 16, 1058 (2020).
* Sacha (2015) K. Sacha, Phys. Rev. A 91, 033617 (2015).
* Khemani _et al._ (2016) V. Khemani, A. Lazarides, R. Moessner, and S. L. Sondhi, Phys. Rev. Lett. 116, 250401 (2016).
* Else _et al._ (2016) D. V. Else, B. Bauer, and C. Nayak, Phys. Rev. Lett. 117, 090402 (2016).
* Yao _et al._ (2017b) N. Y. Yao, A. C. Potter, I.-D. Potirniche, and A. Vishwanath, Phys. Rev. Lett. 118, 030401 (2017b).
* Zhang _et al._ (2017) J. Zhang, P. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, _et al._ , Nature 543, 217 (2017).
* Choi _et al._ (2017) S. Choi, J. Choi, R. Landig, G. Kucsko, H. Zhou, J. Isoya, F. Jelezko, S. Onoda, H. Sumiya, V. Khemani, _et al._ , Nature 543, 221 (2017).
* Rovny _et al._ (2018) J. Rovny, R. L. Blum, and S. E. Barrett, Phys. Rev. Lett. 120, 180603 (2018).
* Ho _et al._ (2017) W. W. Ho, S. Choi, M. D. Lukin, and D. A. Abanin, Phys. Rev. Lett. 119, 010602 (2017).
* Huang _et al._ (2018) B. Huang, Y.-H. Wu, and W. V. Liu, Phys. Rev. Lett. 120, 110603 (2018).
* Sacha and Zakrzewski (2017) K. Sacha and J. Zakrzewski, Rep. Prog. Phys. 81, 016401 (2017).
* Yao and Nayak (2018) N. Yao and C. Nayak, Phys. Today 71, 40 (2018).
* Sacha (2020) K. Sacha, _Time Crystals_ (Springer International Publishing, Switzerland, 2020).
* von Keyserlingk _et al._ (2016) C. W. von Keyserlingk, V. Khemani, and S. L. Sondhi, Phys. Rev. B 94, 085112 (2016).
* Armitage _et al._ (2018) N. Armitage, E. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018).
* Anderson _et al._ (2012) B. M. Anderson, G. Juzeliūnas, V. M. Galitski, and I. B. Spielman, Phys. Rev. Lett. 108, 235301 (2012).
* Anderson _et al._ (2013) B. M. Anderson, I. B. Spielman, and G. Juzeliūnas, Phys. Rev. Lett. 111, 125301 (2013).
* Dubček _et al._ (2015) T. Dubček, C. J. Kennedy, L. Lu, W. Ketterle, M. Soljačić, and H. Buljan, Phys. Rev. Lett. 114, 225301 (2015).
* Sun _et al._ (2018) X.-Q. Sun, M. Xiao, T. c. v. Bzdušek, S.-C. Zhang, and S. Fan, Phys. Rev. Lett. 121, 196401 (2018).
* Higashikawa _et al._ (2019) S. Higashikawa, M. Nakagawa, and M. Ueda, Phys. Rev. Lett. 123, 066403 (2019).
* Lu _et al._ (2020) Y.-H. Lu, B.-Z. Wang, and X.-J. Liu, Science Bulletin 65, 2080 (2020).
* (32) Z.-Y. Wang, X.-C. Cheng, B.-Z. Wang, J.-Y. Zhang, Y.-H. Lu, C.-R. Yi, S. Niu, Y. Deng, X.-J. Liu, S. Chen, and J.-W. Pan, arXiv:2004.02413.
* (33) Y. Zhu, T. Qin, X. Yang, G. Xianlong, and Z. Liang, arXiv:2003.05638.
* Yao and Wang (2018) S. Yao and Z. Wang, Phys. Rev. Lett. 121, 086803 (2018).
* Bergholtz _et al._ (2020) E. J. Bergholtz, J. C. Budich, and F. K. Kunst, “Exceptional topology of non-hermitian systems,” (2020), arXiv:1912.10048 [cond-mat.mes-hall] .
* Ashida _et al._ (2020) Y. Ashida, Z. Gong, and M. Ueda, “Non-hermitian physics,” (2020), arXiv:2006.01837 [cond-mat.mes-hall] .
* Kawabata _et al._ (2020) K. Kawabata, M. Sato, and K. Shiozaki, Phys. Rev. B 102, 205118 (2020).
* Benalcazar _et al._ (2017a) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Science 357, 61 (2017a).
* Benalcazar _et al._ (2017b) W. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Phys. Rev. B 96, 245115 (2017b).
* Khalaf (2018) E. Khalaf, Phys. Rev. B 97, 205136 (2018).
* Budich _et al._ (2017) J. C. Budich, Y. Hu, and P. Zoller, Phys. Rev. Lett. 118, 105302 (2017).
* Note (1) Without including the tunneling along the $z$ direction described by third and the sixth driving steps, the dynamics reduces to a 2D motion in a periodically driven square lattice considered in refs. Rudner _et al._ (2013); Mukherjee _et al._ (2017a); Wintersperger _et al._ (2020).
* Altland and Zirnbauer (1997) A. Altland and M. R. Zirnbauer, Phys. Rev. B 55, 1142 (1997).
* Chiu _et al._ (2016) C.-K. Chiu, J. C. Teo, A. P. Schnyder, and S. Ryu, Rev. Mod. Phys. 88, 035005 (2016).
* Roy and Harper (2017b) R. Roy and F. Harper, Phys. Rev. B 96, 155118 (2017b).
* Gaunt _et al._ (2013) A. L. Gaunt, T. F. Schmidutz, I. Gotlibovych, R. P. Smith, and Z. Hadzibabic, Phys. Rev. Lett. 110, 200406 (2013).
* Navon _et al._ (2016) N. Navon, A. L. Gaunt, R. P. Smith, and Z. Hadzibabic, Nature 539, 72 (2016).
* Mukherjee _et al._ (2017b) B. Mukherjee, Z. Yan, P. B. Patel, Z. Hadzibabic, T. Yefsah, J. Struck, and M. W. Zwierlein, Phys. Rev. Lett. 118, 123401 (2017b).
* Lopes _et al._ (2017a) R. Lopes, C. Eigen, A. Barker, K. G. Viebahn, M. R. de Saint-Vincent, N. Navon, Z. Hadzibabic, and R. P. Smith, Phys. Rev. Lett. 118, 210401 (2017a).
* Lopes _et al._ (2017b) R. Lopes, C. Eigen, N. Navon, D. Clément, R. P. Smith, and Z. Hadzibabic, Phys. Rev. Lett. 119, 190404 (2017b).
* Eigen _et al._ (2017) C. Eigen, J. A. Glidden, R. Lopes, N. Navon, Z. Hadzibabic, and R. P. Smith, Phys. Rev. Lett. 119, 250404 (2017).
* Ville _et al._ (2018) J. L. Ville, R. Saint-Jalm, E. Le Cerf, M. Aidelsburger, S. Nascimbène, J. Dalibard, and J. Beugnon, Phys. Rev. Lett. 121, 145301 (2018).
* Zhang _et al._ (2015) D.-W. Zhang, S.-L. Zhu, and Z. D. Wang, Phys. Rev. A 92, 013632 (2015).
* He _et al._ (2016) W.-Y. He, S. Zhang, and K. T. Law, Phys. Rev. A 94, 013606 (2016).
* Ünal _et al._ (2019) F. N. Ünal, B. Seradjeh, and A. Eckardt, Phys. Rev. Lett. 122, 253601 (2019).
* Zenesini _et al._ (2010) A. Zenesini, D. Ciampini, O. Morsch, and E. Arimondo, Phys. Rev. A 82, 065601 (2010).
* Kling _et al._ (2010) S. Kling, T. Salger, C. Grossert, and M. Weitz, Phys. Rev. Lett. 105, 215301 (2010).
* Shevchenko _et al._ (2010) S. N. Shevchenko, S. Ashhab, and F. Nori, Phys. Rep. 492, 1 (2010).
* Hauke _et al._ (2014) P. Hauke, M. Lewenstein, and A. Eckardt, Phys. Rev. Lett. 113, 045303 (2014).
* Fläschner _et al._ (2016) N. Fläschner, B. Rem, M. Tarnowski, D. Vogel, D.-S. Lühmann, K. Sengstock, and C. Weitenberg, Science 352, 1091 (2016).
* Bessho and Sato (2020) T. Bessho and M. Sato, “Topological duality in floquet and non-hermitian dynamical anomalies: Extended nielsen-ninomiya theorem and chiral magnetic effect,” (2020), arXiv:2006.04204 [cond-mat.mes-hall] .
# The buildup of the intracluster light of Abell 85 as seen by Subaru’s Hyper
Suprime-Cam
Mireia Montes School of Physics, University of New South Wales, Sydney, NSW
2052, Australia Sarah Brough School of Physics, University of New South
Wales, Sydney, NSW 2052, Australia Matt S. Owers Department of Physics and
Astronomy, Macquarie University, NSW 2109, Australia Astronomy, Astrophysics
and Astrophotonics Research Centre, Macquarie University, Sydney, NSW 2109,
Australia Giulia Santucci School of Physics, University of New South Wales,
Sydney, NSW 2052, Australia
###### Abstract
The study of low surface brightness light in large, deep imaging surveys is
still uncharted territory as automated data reduction pipelines over-subtract
or eliminate this light. Using archival data of the Abell 85 cluster of
galaxies taken with Hyper Suprime-Cam on the Subaru Telescope, we show that careful data processing can unveil the diffuse light within the cluster, the intracluster light. We reach surface brightness limits of
$\mu_{g}^{limit}$(3$\sigma$, $10\arcsec\times 10\arcsec$) = 30.9 mag/arcsec2,
and $\mu_{i}^{limit}$(3$\sigma$, $10\arcsec\times 10\arcsec$) = 29.7
mag/arcsec2. We measured the radial surface brightness profiles of the
brightest cluster galaxy out to the intracluster light (radius $\sim 215$
kpc), for the $g$ and $i$ bands. We found that both the surface brightness and
the color profiles become shallower beyond $\sim 75$ kpc suggesting that a
distinct component, the intracluster light, starts to dominate at that radius.
The color of the profile at $\sim 100$ kpc suggests that the buildup of the
intracluster light of Abell 85 occurs by the stripping of massive ($\sim
10^{10}M_{\odot}$) satellites. The measured fraction of this light ranges from
$8\%$ to $30\%$ in $g$, depending on the definition of intracluster light
chosen.
galaxies: clusters: individual (A85) — galaxies: elliptical and lenticular, cD
— galaxies: halos — galaxies: evolution — techniques: image processing
_Journal:_ ApJ. _Facilities:_ Subaru Telescope. _Software:_ Astropy (The Astropy Collaboration et al., 2018), SExtractor v2.19.5 (Bertin & Arnouts, 1996), SWarp v2.38.0 (Bertin et al., 2002), SCAMP v2.0.4 (Bertin, 2006), photutils v0.7.2 (Bradley et al., 2019), pillow (van Kemenade et al., 2020), ellipse (Jedrzejewski, 1987), GALFIT (Peng et al., 2002).
## 1 Introduction
Deep observations of galaxy clusters have revealed the existence of a diffuse
glow produced by stars not bound to any individual galaxy; the intracluster
light (ICL, see Mihos, 2016; Montes, 2019, for reviews). As the by-product of
galaxy interactions, the ICL forms a fossil record of all the dynamical
processes the system has undergone and provides a holistic view of the
interaction history of the cluster (e.g., Merritt, 1984).
The ICL is key to understanding how brightest cluster galaxies (BCGs) grow with time. Their formation and evolution have been predicted to be rather different from those of satellite galaxies (e.g., De Lucia & Blaizot, 2007). The innermost regions of these massive galaxies appear to have formed the majority of their stars at high redshift and on short timescales (e.g., Thomas et al., 2005), whereas their outer parts were likely assembled more recently as a consequence of multiple minor mergers (e.g., Trujillo et al., 2011). As the ICL is often found to be more concentrated around the BCG (e.g., Mihos et al., 2005), this implies that the growth of the two components, BCG and ICL, is connected. In addition, simulations of the growth rate of BCGs agree better with observations if the formation of ICL is included (e.g., Conroy et al., 2007; Contini et al., 2019; Spavone et al., 2020).
A useful tool to characterize the ICL is the study of its stellar populations, as they reflect the properties of the galaxies from which the ICL accreted its stars. Knowing the stellar populations of the ICL in clusters allows us to infer the mechanisms at play in the formation of this component, and therefore how (and when) the assembly of these clusters occurred. Observations show clear radial gradients in colors, indicating radial gradients in metallicity (e.g., Zibetti et al., 2005; Iodice et al., 2017; Mihos et al., 2017; DeMaio et al., 2015, 2018) and, in some cases, age (e.g., Montes & Trujillo, 2014; Morishita et al., 2017; Montes & Trujillo, 2018). These studies point to the tidal stripping of massive satellites (a few $\times 10^{10}M_{\odot}$; Montes & Trujillo, 2014, 2018) as the dominant process of ICL formation for massive clusters ($\sim 10^{15}M_{\odot}$). (Diffuse light has also been detected and studied in groups of galaxies, e.g., Da Rocha & Mendes de Oliveira, 2005; DeMaio et al., 2018; Iodice et al., 2020, but the main mechanism of the formation of this intragroup light appears to differ from that for clusters, e.g., Spavone et al., 2020.)
A clear limitation in the study of the ICL is the lack of statistically
significant samples with the required depth ($\mu_{V}>26.5$ mag/arcsec2;
Rudick et al., 2006). Unfortunately, the long exposure times required for these observations mean that very few clusters have been studied so far. To date, studies have only analysed small samples (1-20 clusters; Krick &
Bernstein, 2007; Montes & Trujillo, 2014, 2018; Burke et al., 2015; Jiménez-
Teja et al., 2018) or employed stacking of many clusters to obtain a coarse
measurement (e.g. Zibetti et al., 2005; Zhang et al., 2019).
This is changing with the next generation of surveys using state-of-the-art
cameras that will be able to reach unprecedented depths over large areas in
the sky. An example is the Hyper Suprime-Cam (HSC; Miyazaki et al., 2018) on
the 8.2-meter Subaru Telescope. This camera not only provides the wide field of view necessary to observe nearby clusters, but also the time efficiency of a large telescope, reaching ICL depths in short exposure times. The HSC is currently carrying out the HSC Subaru Strategic Program (HSC-SSP), a survey of 1400 deg2 in five different bands ($grizy$) plus four narrow-band filters. The depth and area of this survey will provide the large numbers of galaxy clusters necessary to _deepen_ our knowledge of the formation of the ICL (Aihara et al., 2019).
However, ICL studies need very accurate data processing. The data reduction of HSC data is undertaken with the HSC pipeline (Bosch et al., 2018), a custom version of the LSST (the Vera C. Rubin Observatory Legacy Survey of Space and Time) pipeline. The sky subtraction algorithm in the HSC-SSP data release 1 over-subtracts extended halos of bright objects, making it almost impossible to study nearby or very extended objects (Aihara et al., 2018; this issue was improved in data release 2, Aihara et al., 2019, but not completely resolved). In addition, ICL studies are susceptible to biases due to flat-field inaccuracies and the scattered light from bright stars.
In this work, we use archival HSC images of the cluster Abell 85 (A85) to test
a dedicated data processing technique for low surface brightness science and
study the diffuse light of this cluster out to $\sim 215$ kpc. The main
properties of A85 are listed in Table 1. A85 is a rich cluster of galaxies ($\sim 800$ spectroscopically confirmed galaxies within $2R_{200}$, Owers et al., 2017; Habas et al., 2018) hosting a massive BCG (M${}_{*}\sim 3\times 10^{12}M_{\odot}$, Mehrgan et al. 2019). Many studies have shown that this
cluster is slowly accreting material through several ongoing mergers with, at
least, two subclusters or groups of galaxies (Bravo-Alfaro et al., 2009;
Ichinohe et al., 2015; Owers et al., 2017). In addition, models of the X-ray
temperature across the cluster support the picture that A85 has undergone
several small mergers in the past few billion years (Durret et al., 2005;
Ichinohe et al., 2015).
This cluster provides an ideal target for a pilot study of the ICL using HSC
and dedicated data processing techniques for low surface brightness science.
Studying the properties of the ICL in this cluster will inform us of the
ongoing processes shaping this cluster, and its BCG.
Throughout this work, we adopt a standard cosmological model with the
following parameters: $H_{0}=70$ km s-1 Mpc-1, $\Omega_{m}=0.3$ and
$\Omega_{\Lambda}=0.7$. All magnitudes are in the AB magnitude system.
Table 1: Main properties of A85. Redshift, mass and radius are taken from Owers et al. (2017).

Name | RA [deg] | DEC [deg] | z | Distance [Mpc] | Angular scale [kpc/arcsec] | Virial M200 [10${}^{14}M_{\odot}$] | R200 [Mpc]
---|---|---|---|---|---|---|---
Abell 85 | 10.458750 | $-9.301944$ | 0.0549 | 245 | 1.068 | 17.0$\pm$1.3 | 2.42
## 2 Data
HSC is a 1.77 $\deg^{2}$ imaging camera on the Subaru Telescope operated by
the National Astronomical Observatory of Japan (NAOJ) on the summit of
Maunakea in Hawaii. It consists of 116 CCDs (104 science CCDs, 4 guide and 8
focus sensors) with a resolution of $0\farcs 168$/pixel. For this work, we have used archival data. A85 was observed on 2014-09-24 (Proposal ID: o14171). The science data consist of 9 frames in both the HSC-$G$ ($g$) and the HSC-$I$ ($i$) bands. The exposure times for each frame are 200s and 240s, respectively. The observational strategy consisted of a dithering pattern of 9 positions around the center of the cluster. The offsets of $1\farcm 367$ are enough to fill the gaps of the camera mosaic. All the data used in this work were downloaded from the Subaru-Mitaka-Okayama-Kiso Archive (SMOKA; Baba et al., 2002, https://smoka.nao.ac.jp/).
### 2.1 Custom-made processing
Exploring the ICL of clusters of galaxies is difficult as it is not only faint, but also extended. This means that, in order to avoid biases in the ICL measurement caused by inhomogeneities in the images such as gradients and oversubtraction (see Mihos 2019 for a detailed description of the possible biases in low surface brightness imaging), the images must have a flat background and the background subtraction should be performed carefully so as not to eliminate this light. At the time we started this project, the data reduced with the HSC pipeline (Bosch et al., 2018) for the DR1 of the HSC-SSP survey (Aihara et al., 2018) showed significant oversubtraction around bright sources, caused by a background estimation using a relatively small mesh size ($128\times 128$ pix${}^{2}=\,21\arcsec\times 21\arcsec$). Because the cluster of interest is at low redshift (i.e., extended on the sky, $R_{200}=2.42$ Mpc $=0\fdg 63$; Owers et al. 2017), this oversubtraction would likely eliminate
the ICL. For this reason, we developed a custom-made process in order to
reduce the data, preserving low surface brightness light, i.e. the extended
and faint ICL. The code is mainly written in Python and uses Astropy (The
Astropy Collaboration et al., 2018) and astronomical software such as
SExtractor, SWarp, and SCAMP (Bertin & Arnouts, 1996; Bertin et al., 2002;
Bertin, 2006). The steps followed here to reduce the HSC images, after the
individual CCD processing, are similar to those performed in Trujillo & Fliri
(2016). For this work, as the images are dithered around the BCG of the
cluster, we focus only on the innermost $40$ CCDs of the camera to reduce
inaccuracies due to the non-uniform illumination of the CCDs. This corresponds
to a radius of $\sim 0\fdg 42$ ($1.6$ Mpc) around the BCG. These are the main
steps we conduct to process the data:
1. Derivation of the calibration files (bias and dark)
2. Individual CCD processing and assembly
3. Flat-field derivation using science images and correction
4. Camera mosaic with a careful determination of the sky background
5. Mosaic coaddition and final image calibration.
In the following sections, we describe in detail how these steps are
performed.
The HSC CCDs are full-depletion Hamamatsu CCDs (Miyazaki et al., 2012). The individual raw images are $2144\times 4241$ pixels, divided into four science channels of $512\times 4096$ pixels along with pre-scan, overscan and non-science regions for each of those channels (as described in https://hsc.mtk.nao.ac.jp/pipedoc/pipedoc_4_e/e_hsc/index.html#e-hsc). Therefore, the next steps to calibrate each CCD have to be performed on each channel separately before the CCD is assembled.
#### 2.1.1 Calibration files
Bias frames were taken the same night as part of the observing program of A85; they consist of $15$ bias frames per CCD. However, only $2$ dark frames were taken that night. In order to derive a robust master dark for the images, we also downloaded the darks taken on adjacent nights (2014-09-22, 2014-09-23, and 2014-09-25), for a total of $10$ dark frames per CCD. The master bias and dark frames were created as the sigma-clipped ($3\sigma$) median for each of the channels of each of the CCDs.
#### 2.1.2 Individual CCD processing
In this step, we perform the processing and assembly of each CCD for each of
the frames to produce a calibrated image. Each of the CCDs is processed
independently. For each channel, we compute a median overscan using the
corresponding overscan regions, and correct for overscan, dark and bias (as
derived in Sec. 2.1.1). We also correct each channel for nonlinearity as done
in the HSC pipeline (Bosch et al., 2018) by applying a polynomial with
coefficients determined per amplifier. Before assembling the final CCD image,
we applied the gains for each of the channels (provided in the headers). The
final size of the assembled CCD is $2048\times 4176$ pixels.
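A schematic version of this per-channel calibration (a minimal sketch; the channel and overscan geometry, nonlinearity coefficients and gains must be taken from the actual HSC headers) could look as follows:

```python
import numpy as np

def process_channel(raw, overscan, master_bias, master_dark, nl_coeffs, gain):
    """Calibrate one 512 x 4096 science channel: median overscan level,
    bias and dark subtraction (Sec. 2.1.1), polynomial nonlinearity
    correction per amplifier, and the gain provided in the header."""
    sci = raw - np.median(overscan)        # per-channel overscan level
    sci = sci - master_bias - master_dark  # calibration frames
    sci = np.polyval(nl_coeffs, sci)       # nonlinearity correction
    return sci * gain

# The assembled 2048 x 4176 CCD is the concatenation of its four processed
# channels, e.g. ccd = np.hstack([process_channel(...) for each channel]).
```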
#### 2.1.3 Flat-field correction
An accurate estimation of the flat-field correction is crucial to achieving
the goals of this study. Dome flats are not suitable for our goals due to
inhomogeneities in the dome illumination that can result in gradients across
the image (e.g., Trujillo & Fliri, 2016). Consequently, our flat-field
correction should be based on science exposures and, ideally, they should be
the same science exposures used in this work. However, the images of the
cluster are not appropriate for two reasons: 1) there are only 9 different
exposures meaning that the resulting flats will be very noisy and 2) the
offsets of the dithering pattern are not large enough for this purpose, so the
galaxies of the cluster occupy roughly the same physical area of the CCD in
all the exposures. The latter means that there will not be enough pixels to
average in those regions in the resulting flats.
To address this, we downloaded images of random fields from the HSC-SSP Wide
survey taken on adjacent nights to the A85 observations in order to derive the
most reliable flat-field correction possible. Using the SSP-Wide survey
reduces the probability of an extended object in the same physical space of
the CCD in all exposures. For the $g$ band the images were taken on 2014-10-01
(31), 2014-11-18 (9), 2014-11-25 (9), a total of 49 frames per CCD. The
exposure times are $150$s per frame.
For the $i$ band, taking the images from the adjacent nights resulted in
substructure remaining after the flat-field correction. This was found to be
due to differences in the rotation angle of the instrument in the different
set of images (see Appendix A for more details). Therefore, the final images
used were taken on 2014-03-27, 2014-09-17, 2015-01-22, 2015-07-11, 2015-07-20,
2014-09-22, 40 images per CCD in total. The exposure times are $200$s for each
frame.
The assembled CCDs show a steep gradient across the detector that can cause
detection algorithms to mistakenly detect and mask regions of the image that
do not correspond to any source. To account for this, the construction of the
flats was done in two steps.
We first derived a rough flat, or _preflat_, for each CCD by median-stacking the normalized HSC-SSP science images without any masking. Each of the images that went into the flats was visually inspected, to eliminate those presenting very bright, saturated stars or extended objects that might introduce errors in the derived flat-field frames. We normalized each CCD image to one using a region of $1000\times 1000$ pixels located at the same position in the middle of each CCD, and created the _preflats_ as the sigma-clipped ($3\sigma$) median of the normalized images. Once these _preflats_ are derived, we use them to correct the assembled CCD images. We then use these _preflat_-corrected CCD images to build an object mask with SExtractor (Bertin & Arnouts, 1996). The detection settings are optimized for faint objects, so as to better mask faint sources. Again, for each CCD the masked and normalized images are combined to create the final flats.
Finally, each CCD is divided by the corresponding final flat. In Appendix B,
we show a region of our $i$ band images where the improvement of using the
flats with the same rotation as the science images can be seen.
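The two-pass flat construction described above can be condensed into a short sketch (the normalization region is a hypothetical $1000\times 1000$ patch near the CCD center; the object masks come from the SExtractor segmentation maps of the second pass):

```python
import numpy as np
from astropy.stats import sigma_clip

def make_flat(images, masks=None, norm_region=np.s_[1588:2588, 524:1524]):
    """Sigma-clipped median flat from normalized frames of one CCD.
    Pass masks=None for the first-pass 'preflat'; in the second pass
    `masks` (True = source pixel) hides the detected objects."""
    stack = []
    for i, img in enumerate(images):
        norm = img / np.median(img[norm_region])     # normalize CCD to ~1
        if masks is not None:
            norm = np.where(masks[i], np.nan, norm)  # hide detected sources
        stack.append(norm)
    cube = np.ma.masked_invalid(np.array(stack))
    clipped = sigma_clip(cube, sigma=3, axis=0)      # 3-sigma clipping
    return np.ma.median(clipped, axis=0).filled(np.nan)

# preflat = make_flat(frames)         # first pass, no masks
# flat    = make_flat(frames, masks)  # second pass, with object masks
# science_ccd /= flat
```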
#### 2.1.4 Astrometric calibration and frame assembly
Before combining the CCDs into frames, we need to refine the rough astrometry
that the HSC camera provides. To do that, we use SCAMP (Bertin, 2006) to put
the science images into a common astrometric solution. SCAMP reads SExtractor
catalogs and computes astrometric solutions for each individual CCD. The
reference used is the stars of the SDSS DR9 catalogue (Ahn et al., 2012) in
our field of view. The number of stars used in each mosaic frame (40 CCDs) for
our astrometric solution is typically around a couple of hundred.
After computing the accurate astrometry for each CCD in each frame, we need to
make sure the CCDs are levelled before building the frame, i.e. all CCDs in
the frame have the same sky counts. For each CCD, we run SExtractor again. We
build a mask by using the segmentation map obtained, further expanding the
detected objects by 10 pixels. In addition, we masked all bright sources in
the CCDs. This includes bright stars to minimize the contamination of their
extended halos, large galaxies and $\sim 700\arcsec$ in radius around the BCG.
This constant correction is computed as the $3\sigma$-clipped median of the
remaining pixels and subtracted from the respective CCDs.
After levelling each CCD, we use SWarp (Bertin et al., 2002) to put together
the $40$ CCDs from each exposure into single mosaic frames. SWarp resamples
the CCDs putting them into a common grid using a LANCZOS3 interpolation
function. The result is $9$ mosaic frames for both $g$ and $i$ bands.
Figure 1: Image of the cluster A85 in the $g$-band. The area displayed is
$52\arcmin\times 52\arcmin$ around the centre of the cluster (RA =
00h42m01.2s, DEC = -09d18m18.9s). Two regions of the cluster are highlighted.
Zoom-in A ($390\arcsec\times 350\arcsec$, purple) shows an RGB image of the
BCG of A85 where the ICL can be seen. Zoom-in B ($220\arcsec\times
200\arcsec$, light green) shows a satellite galaxy of the cluster. Tidal
streams and other signs of interaction are easily seen when the images are
zoomed-in on. Three of these features are marked with a blue arrow (one in the
zoom-in B). The RGB images are a combination of the $g$, and $i$ bands whereas
a black and white $g$ image is used for the main image. North is up and East
is left.
#### 2.1.5 Sky subtraction
Sky subtraction is one of the most important steps in reducing low surface brightness data: if done incorrectly, it can introduce unwanted gradients or partially or entirely remove the object we want to study. The sky determination and subtraction is done for each of the mosaic frames individually before the final co-addition step. We first masked all sources in
the individual mosaics using the segmentation maps provided by SExtractor and
further dilated each object by 20 pixels to minimize contamination of the
fainter regions of objects that are not included in SExtractor’s segmentation
map. Separately, we generously masked all bright sources (stars and galaxies)
as well as the gaps between CCDs and created an additional mask to cover the
centre of the cluster to avoid contamination of the outer parts of the BCG (as
done in Sec. 2.1.4 but now for the full mosaic). Once the mosaic is masked, we
distributed $50,000$ boxes of $100\times 100$ pixels randomly through the
image and computed the 3-$\sigma$ clipped median of the counts. We subtract
this constant sky value from the respective mosaic.
In addition, we also fitted a first degree 2D polynomial to the masked
mosaics. As the size of the mosaics is larger than the physical extent of the
ICL in the images, this ensures the correction of any remaining gradients in
the image while preserving the diffuse light in this cluster. This 2D
polynomial is then subtracted from the entire mosaic.
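A compact sketch of this two-stage sky removal (the random-box constant followed by the first-degree 2D polynomial; the subsampling is an illustrative speed-up, not part of the original procedure):

```python
import numpy as np
from astropy.modeling import models, fitting
from astropy.stats import sigma_clipped_stats

def sky_constant(mosaic, mask, n_boxes=50_000, box=100, seed=0):
    """Constant sky: 3-sigma-clipped median over randomly placed boxes;
    masked pixels (True in `mask`) are excluded."""
    rng = np.random.default_rng(seed)
    ny, nx = mosaic.shape
    medians = []
    for _ in range(n_boxes):
        y0 = rng.integers(0, ny - box)
        x0 = rng.integers(0, nx - box)
        good = mosaic[y0:y0 + box, x0:x0 + box][~mask[y0:y0 + box, x0:x0 + box]]
        if good.size:
            medians.append(np.median(good))
    _, median, _ = sigma_clipped_stats(np.array(medians), sigma=3)
    return median

def sky_plane(mosaic, mask, subsample=100):
    """First-degree 2D polynomial fitted to the masked mosaic."""
    y, x = np.nonzero(~mask)
    z = mosaic[~mask]
    sel = slice(None, None, subsample)  # thin out the pixels for speed
    plane = fitting.LinearLSQFitter()(models.Polynomial2D(degree=1),
                                      x[sel], y[sel], z[sel])
    yy, xx = np.mgrid[:mosaic.shape[0], :mosaic.shape[1]]
    return plane(xx, yy)

# mosaic -= sky_constant(mosaic, mask); mosaic -= sky_plane(mosaic, mask)
```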
#### 2.1.6 Image co-addition
Once the science mosaics are sky-subtracted and in a common astrometric
solution, we use SWarp to co-add the mosaics into a final image. SWarp
projects the input images into the output frame and co-adds them in an optimum
way. The method used for the geometric resampling is LANCZOS3. The final
output is created as the median of the $9$ mosaic frames. Finally, we computed
and subtracted a constant sky value from the final co-added images.
The final exposure times of the images are 1800s (30 mins) for the $g$ band
and 2160s (36 mins) for the $i$ band. The final $g$ band mosaic is shown in
Fig. 1. The field of view is $52\arcmin\times 52\arcmin$. In Fig. 1, we also
show RGB zoom-in images of two regions of the cluster. Region A shows a
postage stamp of $390\arcsec\times 350\arcsec$ around the BCG of A85 (framed
in purple) and region B shows a $220\arcsec\times 200\arcsec$ region around a
massive galaxy belonging to one of the subclusters that are merging into A85
(Ichinohe et al. 2015; Owers et al. 2017; framed in green). The astrometric
calibration is not accurate at the corners of our field of view, likely due to
the lack of stars available to perform accurate astrometry there.
#### 2.1.7 Photometric calibration
The photometric calibration of our images is based on the photometry of non-
saturated stars in our field of view in common with the SDSS DR12 catalogue
(Alam et al., 2015). For each band, we chose stars within the magnitude range
(SDSS ‘psfMag’) 18 to 21 mag, to avoid saturated stars in our mosaics, as seen
in Fig. 2, and very faint and noisy sources in SDSS. For our images, we used
‘MAG_PETRO’, which provides an estimate of the total flux of the star. We matched the SDSS DR12 photometric catalogue to ours and multiplied the frames by the factor that makes the photometry in the two catalogues equal. The typical number of stars used for the photometric calibration of each individual mosaic image is $\sim 700$. The average dispersion in the photometry is $\sim 0.1$ mag for both the $g$ and $i$ bands.
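The multiplicative factor follows directly from the magnitude offsets of the matched stars; a minimal sketch (ours, with a robust median standing in for whatever averaging was actually used):

```python
import numpy as np

def frame_scale_factor(psfmag_sdss, mag_petro):
    """Factor applied to the mosaic so that SExtractor MAG_PETRO of the
    matched, non-saturated stars (18 < psfMag < 21) matches SDSS DR12
    psfMag: scaling the counts by f shifts all magnitudes by -2.5 log10(f)."""
    dm = np.asarray(psfmag_sdss) - np.asarray(mag_petro)
    return 10.0 ** (-0.4 * np.median(dm))  # median is robust to mismatches

# The scatter of dm (~0.1 mag here) tracks the calibration uncertainty.
```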
### 2.2 Modeling and subtraction of stars
The careful modeling and removal of stars in deep images is now a common
technique in low surface brightness science (e.g., Slater et al., 2009;
Trujillo & Fliri, 2016; Román et al., 2020). This is important in order to
minimize the contamination by light scattered by the stars, especially bright
stars, in our photometry of the faint ICL.
Figure 2: Magnitude as a function of the half-light radius, in pixels, for all
detected sources in the image of A85. The selection boxes for the stars used
for the core (light green) and intermediate (blue) parts of the PSF are drawn
and the selected stars are highlighted, for the $g$ (left panel) and $i$
(right panel) bands.
#### 2.2.1 Point spread function derivation
A robust and extended characterization of the point spread function (PSF) of
the image is crucial to remove the stars in the field of view, in particular
bright stars close to the object of interest. For example, Uson et al. (1991) showed that the total amount of diffuse light measured around the BCG of Abell 2029 would be overestimated without the removal of nearby stars (their figure 5).
In order to correct for this, we first construct the PSF of our images.
Generally, to derive PSFs, we need to use stars with a wide range of
brightnesses. The bright, saturated stars are used to characterize the outer
parts of the PSF, or wings of the PSF, while fainter stars are used to
characterize the core and intermediate parts.
The bright stars in Fig. 1 show asymmetries due to internal reflections in the
telescope and the non-uniform illumination through it. These asymmetries
become more significant further away from the centre of the camera. Given the
limited number of very bright stars in our image (N$\approx 10$), we cannot
build a PSF at every position of the camera. Fortunately, the object of
interest (BCG + ICL) is very close to the centre of the camera, so deriving a
symmetric PSF to subtract nearby stars is a good approximation in this case.
#### 2.2.2 Core and intermediate part of the PSF
In order to build the inner parts of the PSF, we followed a similar approach
to the one in PSFEx (Bertin, 2011). We first obtain a source catalog using
SExtractor. The SExtractor catalog provides the half-light radius
(‘FLUX_RADIUS’) and the magnitude (‘MAG_AUTO’) of the detected sources. It
also provides the stellarity index ‘CLASS_STAR’ for discerning between stars
and galaxies. A ‘CLASS_STAR’ close to 0 means that the object is very likely a
galaxy, and 1 that it is a star. We select the objects of the catalog with
‘CLASS_STAR’ greater than 0.98. To minimize the asymmetries that can smear the
structure of the PSF, we selected stars only in the inner $40\arcmin\times
40\arcmin$ of the image. Their magnitude and half-light radius distribution is
shown in Fig. 2. We selected non-saturated stars (light green box) to derive
the core, while brighter and saturated stars (blue box) are used to derive the
intermediate parts of the PSF.
We obtained the core and intermediate parts of the PSF by stacking the
corresponding stars following these steps. First, we cut postage stamps around
the selected stars of $100$ and $500$ pixels on a side for the core and
intermediate parts, respectively. In order to stack the stars, we need to
accurately estimate their center. To do that, we need to mask all sources
other than the selected star in the postage stamp. We use SExtractor’s
segmentation map for this. Then, we fitted a 2D Moffat model to the masked
postage stamp of the star. Once the center of the Moffat is obtained, we re-
centered the postage stamp.
Second, we normalized each star by measuring a 1-pixel width ring at a radial
distance of 15 pixels, avoiding the noisier outer parts (for the core stars)
and the central saturated parts (for the intermediate stars). We also
subtracted the sky around the stars in a 5 pixel-width ring at a radius of
$13\arcsec$ for the core stars and $75\arcsec$ for the intermediate
stars (these radii were defined to reach the background in each of the
postage stamps, i.e., to not include flux from the star, at SNR$\sim 1$).
Finally, we stacked the normalized stars using a 3-$\sigma$ clipped median.
The number of stars that were used for the stacking are 51 and 41, for the
core, and 29 and 73, for the intermediate parts for the $g$ and $i$ bands,
respectively.
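The stacking itself can be sketched as follows, assuming a list of equally
sized, re-centered and ring-normalized stamps with masked pixels set to NaN
(a simplified version of our procedure):

import numpy as np
from astropy.stats import sigma_clip

def stack_stars(stamps):
    # Pixel-by-pixel 3-sigma clipped median over the normalized stamps.
    cube = np.ma.masked_invalid(np.array(stamps))
    clipped = sigma_clip(cube, sigma=3, axis=0)  # mask outliers per pixel
    return np.ma.median(clipped, axis=0).filled(np.nan)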
#### 2.2.3 Outer parts of the PSF
As discussed above, we want a model PSF that is extended enough that we can
subtract the wings of stars close to the BCG + ICL system. However, in our
field of view there are not enough bright stars to properly derive the outer
parts of the PSF. This is also limited by the asymmetries that are more
evident as we move away from the center of the image.
For that reason, we selected a few very bright stars that are in our field of
view and derived their radial profiles. The profiles of these stars look very
similar despite the asymmetries, therefore we decided to use the radial
profile of the closest bright star ($m_{i}\approx 11$ mag, although saturated)
to the center to build the outer part of the PSF. We adopted this methodology
as a profile is more resistant to masking residuals or other artifacts that
could bias the resulting PSF. It also means that this PSF will be symmetrical
(i.e., we lose the spatial information). Note that the center parts of these
very bright stars are strongly saturated causing bleeding in the detector,
seen as spikes in Figure 1. We do not model these spikes.
We followed the same steps as for the core and intermediate parts. First, we
cut a postage stamp of $2000$ pix $\times$ $2000$ pix around the star. We
masked all sources that are not the selected star using the segmentation map.
In addition, to mask sources that are in the star’s halo, we ran SExtractor on
an unsharp-masked image (Sofue, 1993) of the postage stamp. The unsharp-masked
image was obtained by smoothing the stamp with a Gaussian of $\sigma=30$ pix,
which was then subtracted from the original. We combined both segmentation
maps, from the original and the unsharp-masked image, to create the final
mask. We re-centered the postage stamp by fitting a Moffat2D model and
shifting it to the new center given by the fit. In this case, the sky is
subtracted at a distance of $325\arcsec$ to avoid contamination from the star
flux (SNR$\sim 1$). Then, we measured the radial profile of the star.
After deriving the radial profile of the star, we build the 2D outer PSF by
assigning the value of each point of the profile to its corresponding radial
distance ring around the centre. We then convolved the whole stamp with a
$\sigma=1$ pix Gaussian to smooth the abrupt changes at each given radius.
This smoothing does not change the shape of the profile of the star.
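A sketch of how the 2D outer PSF can be built from the measured 1D profile;
here we interpolate the profile at each pixel's radial distance, which is one
way to realize the ring assignment described above:

import numpy as np
from scipy.ndimage import gaussian_filter

def profile_to_2d_psf(radii_pix, values, size=2000):
    # Distance of every pixel from the stamp center.
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size / 2, y - size / 2)
    # Assign the profile value corresponding to each radius...
    psf = np.interp(r, radii_pix, values)
    # ...and smooth the abrupt ring-to-ring transitions (sigma = 1 pix).
    return gaussian_filter(psf, sigma=1)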
Finally, we extend this outer part with a power-law in a similar way to Montes
& Trujillo (2018). This last step is to minimise any sky subtraction issues in
the outer parts of the star. We fit a power-law to the PSF image between
$95\arcsec$ to $141\arcsec$, for the $g$ band, and $221\arcsec$ to
$289\arcsec$, in the case of the $i$ band (the bending in the profile of the
$i$ band at $200\arcsec$ is seen in the bright stars’ postage stamps as well
as in their profiles and is not a consequence of the sky subtraction). This
power-law fit was used to extrapolate the outer regions of the PSF to a radius
of $420\arcsec$.
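The power-law extension amounts to a straight-line fit in log-log space; a
minimal sketch (radii in arcsec, with the $g$-band fit range quoted above):

import numpy as np

def powerlaw_extend(r, f, r_fit=(95.0, 141.0), r_max=420.0):
    sel = (r >= r_fit[0]) & (r <= r_fit[1])
    # Slope (alpha) and intercept of log10(f) versus log10(r).
    alpha, log_a = np.polyfit(np.log10(r[sel]), np.log10(f[sel]), 1)
    r_out = np.linspace(r_fit[1], r_max, 200)
    return r_out, 10.0 ** log_a * r_out ** alpha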
Figure 3: Radial PSF profile in the HSC $g$ band in black. The different
shaded regions correspond to the four different parts derived in Sec. 2.2.2
and 2.2.3 from which the final PSF was constructed. The colored lines are the
individual radial profiles of the four different parts. The vertical lines
that divide the shaded regions indicate the radii at which these different
parts were joined. Figure 4: Same as Fig. 3 but for the HSC $i$ band.
#### 2.2.4 Connecting the different parts of the PSF
Once we derived the four different parts described above, we constructed our
final PSF. We follow a similar approach to Infante-Sainz et al. (2020). We use
the radial profile of the bright star derived above as a reference for the
connection and multiply the other profiles by a factor so they match the
profile of the bright star at a given radius. The radius at which these
connections are made changes with the band. Fig. 3 and Fig. 4 show the
final PSF profiles (black thick line) for the $g$ and $i$ bands, respectively.
The shaded regions indicate the four different parts used to construct the
final PSF derived in Sec. 2.2.2 and 2.2.3. The profile of the bright star,
which was used for building the outer part of the PSF is labelled as _Outer 1_
in orange. The power-law extrapolation to the bright star profile is _Outer 2_
, in magenta. The core and intermediate parts are in teal and blue,
respectively. We also show the different individual profiles used to construct
the final PSF, in their respective colors. The radii where the connections
were made for each band and each of the different parts are indicated by the
vertical lines in the plots, in teal (connection between core and intermediate
part), orange (between intermediate and the bright star profile) and magenta
(between the bright star profile and the power-law extension). The total flux
of the final PSFs ($g$ and $i$) is normalized to 1.
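Schematically, each inner piece is rescaled to the reference (bright star)
profile at its joining radius; a minimal sketch with our own variable names:

import numpy as np

def match_profiles(r_ref, f_ref, r_part, f_part, r_join):
    # Factor that makes the piece equal the reference profile at r_join.
    scale = np.interp(r_join, r_ref, f_ref) / np.interp(r_join, r_part, f_part)
    return f_part * scale

# After joining all pieces, the assembled 2D PSF image is divided by its
# total flux so that it is normalized to 1.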
#### 2.2.5 Star subtraction
Figure 5: Example of the star subtraction and masking process in a
$500\arcsec\times 500\arcsec$ region of the image of A85 for the $i$ band. The
images shown have the same contrast.
To subtract the stars in our images, we follow similar steps to Román et al.
(2020). We started by building a catalogue of the positions of visually-
selected bright stars. There are two key aspects when fitting these stars: to
obtain an accurate centre of the star and to perform the flux calibration. We
produced postage stamps for each of the stars of $500\times 500$ pixels. Then,
we masked all sources that are not the central star to avoid contamination
that could affect the flux calibration and centering. This masking is done in
two steps: 1) a first run to detect all sources, and 2) a second run where
the detection image is an unsharp-masked image, with a Gaussian smoothing of
$\sigma=20$ pix. This second step allows us to mask sources that are covered
by the halo of the star and not properly detected in the first run. In both
cases, the detection was done with SExtractor.
To accurately center the star, we calculated the centroids for each star by
fitting a 2D Gaussian to the 2D flux distribution of the star using
centroid_sources in photutils. centroid_sources allows us to define a box for
fitting the Gaussian, useful in cases where the centre of the star is strongly
saturated.
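For example (the stamp file name and box size are illustrative):

from astropy.io import fits
from photutils.centroids import centroid_sources, centroid_2dg

stamp = fits.getdata("star_stamp.fits")  # masked 500x500 pix postage stamp
# Fit a 2D Gaussian inside a box around the initial guess; restricting the
# fit to a box is useful when the centre of the star is strongly saturated.
xc, yc = centroid_sources(stamp, xpos=250.0, ypos=250.0,
                          box_size=21, centroid_func=centroid_2dg)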
Once the star is masked and centered, we performed the flux calibration. We
first derived radial profiles for both star and the PSF. By using the profiles
rather than the stamps we are minimizing contamination due to masking
residuals or other artifacts. To fit each star, we selected a range in radius
for the calibration. The radial range is from $0.1$ times the saturation level
of the image to $4$ times the value of the background of each postage stamp.
This background was calculated as the standard deviation of the postage stamp
with all the sources masked (including the star).
We scaled the PSF profile to match the star profile, using the ratio between
the star and PSF values derived from the profiles. Once the PSF is centered and
calibrated we subtracted it from the image. We repeated the same process for
each of the stars in the catalogue for both $g$ and $i$ bands. Fig. 5 shows a
region of our image of A85 in the $i$ band. The original image is seen in the
left panel while the middle panel shows the same region with the stars
subtracted.
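The flux calibration step can be summarized as follows (variable names are
ours; both profiles are assumed to be measured on the same radial grid):

import numpy as np

def psf_scale(star_prof, psf_prof, sat_level, bkg):
    # Calibration range: below 0.1x the saturation level and above 4x the
    # background of the postage stamp, as described above.
    sel = (star_prof < 0.1 * sat_level) & (star_prof > 4.0 * bkg)
    return np.median(star_prof[sel] / psf_prof[sel])

# image -= psf_scale(...) * centered_psf_image   (per star, per band)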
As mentioned above, the stars in HSC images show asymmetries that become more
evident further away from the center of the image. However, we have built a
symmetric PSF. As the object of interest, the BCG, is centered in the image,
nearby stars that could affect our photometry should not present
significant asymmetries. We nevertheless note that this is a potential source of
error for this study.
### 2.3 Masking
The study of the ICL in clusters of galaxies requires a very careful masking
of foreground and background sources to reduce contamination that can affect
the determination of the color of this light. In the case of deep images, this
masking must be optimized not only for faint and small background objects but
also for those that are closer and large.
As a single setup for the detection and masking of both types of sources is
unfeasible, we used a two-step approach like Montes & Trujillo (2018); a
“hot+cold” mode (e.g., Rix et al., 2004). The “cold” mode will detect the
extended bright galaxies from the cluster while the “hot” mode is optimized to
detect the faint and small sources. We use this approach on a deep combined
$g+i$ image, after star subtraction. In the case of the “hot” mode, we
unsharp-masked the original image, to enhance the contrast, particularly in
the central parts of the BCG. To create the unsharp-masked image, we convolved
the image with a box filter with a side of $25$ pixels and then subtracted it
from the original. The threshold for detection is $1.1\sigma$ above the
background.
The “cold” mask was further expanded 10 pixels while the “hot” was expanded 5
pixels. Both masks were combined to create the final mask for our images.
Before combining them, we unmasked the BCG in the “cold” mask. The bleeding spikes were
manually masked as well as the residuals from the subtraction of stars and
their asymmetries.
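In code, the mask expansion and combination could look like this, assuming
boolean detection masks from the two SExtractor runs (the BCG unmasking and
the manual masking steps are omitted from the sketch):

import numpy as np
from scipy.ndimage import binary_dilation

# Grow the "cold" mask by 10 px and the "hot" mask by 5 px, then combine.
cold_grown = binary_dilation(cold_mask, iterations=10)
hot_grown = binary_dilation(hot_mask, iterations=5)
final_mask = cold_grown | hot_grown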
We created two masks for the cluster. In the first mask, all the objects of
the image are masked except for the members of the cluster contained in our
field of view and the diffuse ICL. For that, we use the spectroscopic
membership information obtained in Owers et al. (2017). The morphological
information obtained from SExtractor’s “cold” mask run is used to unmask the
members of the cluster.
For the second mask, all the objects are masked except for the BCG and ICL. As
SExtractor does a poor job detecting low surface brightness outskirts, we
manually extended the masks for the remaining objects after visual inspection.
The final mask was again visually inspected to manually mask any remaining
light that was missed by the process described above. In the right panel of
Fig. 5, we show an example of the mask in one region of our image.
### 2.4 Surface brightness limits
Our goal is to study the low surface brightness features in A85 down to the
faintest surface brightness possible. For this reason, we need to know how
deep our images are by estimating the surface brightness limits that they
reach. To obtain these limits, we calculated the r.m.s. of the final masked
images by randomly placing $20000$ boxes of $10\times 10$ arcsec2 ($\sim
10\times 10$ kpc2) across the images. In this case, we also masked the BCG and
ICL by adding an ellipse of semi-major axis of $672\arcsec$ centered in the
image.
The $3\sigma$ surface brightness limits are: $\mu_{g}^{limit}$(3$\sigma$,
$10\arcsec\times 10\arcsec$) = 30.9 mag/arcsec2, and
$\mu_{i}^{limit}$(3$\sigma$, $10\arcsec\times 10\arcsec$) = 29.7 mag/arcsec2.
These limits are calculated following Appendix A in Román et al. (2020).
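A sketch of this depth estimate, assuming a masked image in counts (NaN for
masked pixels), a photometric zero point zp and a pixel scale in arcsec/pix;
this is a simplified version of the Román et al. (2020, Appendix A) recipe:

import numpy as np

def sb_limit(img, zp, pixscale, box_arcsec=10.0, n_boxes=20000, n_sigma=3):
    rng = np.random.default_rng(0)
    b = int(round(box_arcsec / pixscale))
    ny, nx = img.shape
    means = []
    while len(means) < n_boxes:
        x0 = rng.integers(0, nx - b)
        y0 = rng.integers(0, ny - b)
        box = img[y0:y0 + b, x0:x0 + b]
        if np.isfinite(box).all():          # skip boxes touching the mask
            means.append(box.mean())
    sigma = np.std(means)                   # r.m.s. of the box means
    # Convert n_sigma times this noise per pixel into mag/arcsec^2.
    return zp - 2.5 * np.log10(n_sigma * sigma / pixscale ** 2)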
## 3 The intracluster light of A85
### 3.1 Radial surface brightness profiles
Figure 6: The left panel shows the inner $700\arcsec\times 700\arcsec$ of the
A85 image with the isophotes from ellipse. The middle panel presents the
surface brightness profiles as a function of the semi-major axis for the BCG +
ICL of A85 to a radius of $250\arcsec$ (258 kpc), for the $g$ (green) and $i$
(purple) bands. The profiles are $k$-corrected and corrected for the
extinction of the Milky Way and surface brightness dimming. The right panel
shows the ellipticity (top) and position angle (PA, bottom) profiles as a
function of the semi-major axis for the $g$ and $i$ bands. The vertical gray
lines indicate the FWHM of point sources in the $g$ (solid) and $i$ (dashed)
images.
The goal of this paper is to study the diffuse light in HSC images of A85. To
that end, we derived the radial profiles for the $g$ and $i$ bands using the
software ellipse in IRAF. ellipse fits elliptical isophotes to the 2-D images
of galaxies using the method described in Jedrzejewski (1987). It provides the
mean intensity, ellipticity, position angle and harmonic amplitudes
(deviations from perfect ellipticity) for each fitted isophote. By deriving
the 1-D profiles this way, we are not assuming any particular model or models
to describe the BCG+ICL, as they might be sensitive to the choice of the
particular model and prone to degeneracies between the different parameters.
ellipse was run on the star-subtracted, masked images. We first ran the task
allowing all parameters to vary freely. In the second run, we fixed the
centers to the median centers of the isophotes returned by ellipse in the
first iteration. We adopted the median setup in ellipse. The surface brightness
profiles reach a signal-to-noise ratio of $2.8$ ($3.0$) at $27.1$ ($25.7$)
mag/arcsec2 in $g$ ($i$), which corresponds to a radius of $200\arcsec$ or
$213$ kpc. Fig. 6 shows the output of ellipse for A85. The left panel shows
the $700\arcsec\times 700\arcsec$ region around the BCG (known as Holm 15A)
with the fitted ellipses. The 1-D radial surface brightness profiles as a
function of semi-major axis (SMA) for the $g$ (green) and $i$ (purple) bands
are shown in the middle panel, up to $250\arcsec$. These surface brightness
profiles are corrected for Galactic extinction (E(B$-$V) $=0.034$;
Schlafly & Finkbeiner, 2011) and surface brightness dimming. The profiles are
also $k$-corrected (Chilingarian et al., 2010; Chilingarian & Zolotukhin,
2012). The shaded regions represent the errors of the profiles computed as the
r.m.s scatter of each isophote.
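We used the IRAF task; an equivalent isophote fit can be sketched in Python
with photutils (the file name and initial geometry values are placeholders):

from astropy.io import fits
from photutils.isophote import Ellipse, EllipseGeometry

img = fits.getdata("a85_g_starsub_masked.fits")   # hypothetical file name
# Initial guess centered on the BCG; note pa is in radians in photutils.
geom = EllipseGeometry(x0=5000.0, y0=5000.0, sma=50.0, eps=0.3, pa=1.0)
isolist = Ellipse(img, geometry=geom).fit_image()
# isolist.sma, isolist.intens, isolist.eps and isolist.pa give the radial
# runs of intensity, ellipticity and position angle used for the profiles.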
The vertical gray lines in all the panels indicate the full-width-at-half-
maximum (FWHM) of the $g$ (solid; $1\farcs 07$) and $i$ (dashed; $0\farcs 78$)
bands. The FWHM of each image is given by twice the average ‘FLUX_RADIUS’ (the
half-light radius) of stars obtained from SExtractor (see Fig. 2). These lines
define the regions where the isophotal fits are not reliable. The right panel
shows the ellipticity (top) and position angle (PA; bottom) with SMA for both
bands.
The surface brightness profiles derived here show a flattening in the central
regions of the BCG ($\lesssim 10\arcsec$, $11$ kpc). This flattening in the
inner $\sim 10\arcsec$ has already been reported (e.g., López-Cruz et al.,
2014; Madrid & Donzelli, 2016). In fact, the BCG of A85 is known to host one
of the largest cores measured to date (López-Cruz et al., 2014).
Beyond the core, the surface brightness radial profiles roughly follow a
Sérsic (1968) profile. However, in the middle panel of Fig. 6, there appears
to be a break in the profile at a radius of $\sim 70\arcsec$ ($\sim 75$ kpc),
where the profiles become shallower.
In order to explore whether there is a break in the surface brightness radial
profiles, we fit a single Sérsic profile to both bands, excluding the inner
$10\arcsec$. These fits are performed using a least squares fitting method as
suggested in Seigar et al. (2007). The best fits to the profiles are shown in
Fig. 15 and the parameters are listed in Appendix C. We show the residuals of
subtracting the best Sérsic fit from the surface brightness profiles in the
top panel of Fig. 7. The figure shows that at a radius of $\sim 70\arcsec$
($\sim 75$ kpc) there is an excess of light with respect to the Sérsic
fit (note that the goal of this fit is to locate the break, not to describe
the light profile). This indicates that there is an extra component over the
galaxy: the ICL. The position of the break found here is consistent with
Zibetti et al. (2005), where a similar flattening is found at a radius of
$\sim 80$ kpc, in their stacked profiles of multiple clusters.
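The fit itself can be sketched with astropy, taking sma and intens to be the
isophote outputs above (the initial parameter guesses are placeholders):

import numpy as np
from astropy.modeling import models, fitting

sel = sma > 10.0                      # exclude the ~10 arcsec core
fitter = fitting.LevMarLSQFitter()    # least-squares fit
sersic = fitter(models.Sersic1D(amplitude=1.0, r_eff=40.0, n=4.0),
                sma[sel], intens[sel])
# Residuals in magnitudes; an excess of light shows up as negative values.
resid_mag = -2.5 * np.log10(intens[sel] / sersic(sma[sel]))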
The ellipticity of the diffuse light of A85 increases with radius up to a
value of $\epsilon\sim 0.55$ for both bands at a radius of $\sim 200\arcsec$
($\sim 213$ kpc), as shown in the top right panel of Fig. 6. Kluge et al.
(2020b) also observed an increase in ellipticity for A85. However, at a radius
of $\sim 250$ kpc, their ellipticity profile drops sharply to a value of
$0.1$. We do not see any evidence of such a decrease in our profiles (our
ellipticity profile remains constant at $\sim 0.5$ to a radius of $320$ kpc,
although the signal-to-noise at that radius is $\lesssim 1$). In contrast, the
PA does not show any significant change with radius.
Figure 7: The top panel shows the residuals from a Sérsic fit to the surface
brightness radial profiles in the $g$ (green) and $i$ (purple) bands. The
bottom panel shows the B4 coefficient (the fourth Fourier harmonic) as a function
of semi-major axis for both bands. The vertical grey line at $70\arcsec$
($\sim 75$ kpc), tentatively marks the radius where an extra component starts
to dominate and the isophotes become boxier.
Departures from perfect elliptical isophotes can be described as Fourier
harmonic perturbations (Jedrzejewski, 1987). The coefficients of these
harmonic series carry physical meaning. For example, B4, the fourth Fourier
amplitude, indicates the boxyness/diskyness of the isophotes. In the bottom
panel of Fig. 7, we show the B4 coefficient as a function of SMA. The radius
where the break of the surface brightness profile is located, $70\arcsec$,
also corresponds to where the B4 becomes negative, i.e. the ellipses start
showing a boxy shape. This radius is indicated in both panels of Fig. 7 by a
gray vertical line. This is a confirmation of the boxyness visible in the
outer parts of the BCG (inset A in Fig. 1). Boxyness has been found to be
related to galaxy interactions (e.g., Nieto & Bender, 1989).
### 3.2 Color profile of the BCG+ICL
Radial color gradients provide valuable constraints on the formation processes
of galaxies and, consequently, the BCG and ICL (e.g., Montes & Trujillo, 2014,
2018). The radial color profile was measured in $55$ logarithmic spaced bins
from 0 to $200\arcsec$. The distance to each pixel on the images is computed
as the elliptical distance to the BCG, where the morphological parameters
(ellipticity and PA) are the median values from the ellipse isophotes
excluding the inner $10\arcsec$: $0.37$ for the ellipticity and $56\deg$ for
the PA. For each radial bin, the surface brightness in each band was obtained
by averaging the pixel values. The errors are drawn from jackknife resampling,
i.e. repeating the photometry in a sub-sample of the data for each bin. The
number of sub-samples per bin was 100. Fig. 8 shows the $g-i$ color profile
for the BCG+ICL of A85 down to $200\arcsec$ ($213$ kpc; light blue line). The
color profile is $k$-corrected and corrected for the extinction of the Galaxy.
The error in the color profile, represented as the light blue area, is the sum
of the errors in the individual surface brightness radial profiles. We have
also plotted the $g-i$ color of the satellite galaxies in the cluster as
reported by Owers et al. (2017).
The color profile of the BCG + ICL shows three distinct regions: i) a flat
region out to $10\arcsec$ indicative of the core of the galaxy, ii) a negative
color gradient from $10$ to $\sim 70\arcsec$ and iii) a region from $\sim
70\arcsec$ to $\sim 200\arcsec$ where the color gradient of the diffuse light
becomes shallower. To see if there is a difference, we calculated the
gradients of each region as a linear fit to the color profile $g-i$ vs. log R
($\Delta gi$). The fits are shown in Fig. 8 as the dark blue lines. The
gradients for the different regions are: i) $-0.01\pm 0.01$ (dashed line), ii)
$-0.24\pm 0.01$ (dotted line) and iii) $-0.06\pm 0.04$ (dash-dotted line).
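Each gradient is the slope of a straight-line fit of $g-i$ against $\log$ R;
schematically:

import numpy as np

def color_gradient(r_arcsec, g_minus_i, r_min, r_max):
    sel = (r_arcsec >= r_min) & (r_arcsec <= r_max)
    # Slope of g-i versus log10(R): the Delta(g-i) values quoted above.
    slope, _ = np.polyfit(np.log10(r_arcsec[sel]), g_minus_i[sel], 1)
    return slope

# e.g. color_gradient(r, gi, 10, 70) for region ii.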
Figure 8: The $g-i$ color profile of BCG + ICL of A85 in blue. The errors are
indicated by the light blue shaded area. The red spirals indicate the average
color of member galaxies of the cluster as derived in Owers et al. (2017). The
color profile presents three different regions: a flattening at $<10\arcsec$,
a color gradient from $10\arcsec$ to $75\arcsec$ and a region from $70\arcsec$
to $200\arcsec$ where the gradient shallows. The linear fits to the color
profiles for each region are shown as the dark blue lines: dashed for
$<10\arcsec$, dotted for $10\arcsec$ to $75\arcsec$ and dash-dotted for
$75\arcsec$ to $242\arcsec$.
The flat color profile at SMA $<10\arcsec$ ($<11$ kpc) coincides with the size
of the core of the galaxy as seen by López-Cruz et al. (2014). This is
consistent with a mixing of the stellar populations in the centre of the
galaxy.
The region between $10\arcsec$ to $\sim 75\arcsec$ ($11$ to $\sim 80$ kpc)
presents a negative gradient in $g-i$ color from $1.45$ to $\sim 1.25$
($\Delta gi=-0.24\pm 0.01$). It is well known that massive early-type galaxies
have negative optical color gradients indicating gradients in their stellar
populations, generally metallicity (e.g., Peletier et al., 1990; Davies et
al., 1993; La Barbera et al., 2012; Huang et al., 2018; Santucci et al.,
2020).
Beyond $\sim 75\arcsec$ ($\sim 80$ kpc), the color profile becomes
significantly shallower ($\Delta gi=-0.06\pm 0.04$) with a median color of
$g-i=1.25$. The observed behaviour of the color profile of A85 is consistent
with the color profile in Zibetti et al. (2005) (also, Coccato et al., 2010;
Montes et al., 2014; Spavone et al., 2020). Zibetti et al. (2005) explored the
$g-r$ color profile of stacked clusters in SDSS. Their color profile also
shows a gradient down to $\sim 80$ kpc where it shallows.
Figure 9: The left panel shows the inner $700\arcsec\times 700\arcsec$ of A85
where the different sections are shaded in different colors: North (N,
orange), East (E, green), South (S, purple) and West (W, magenta). In the
right panel we show the corresponding $g-i$ color profiles to a radius of
$200\arcsec$. The Northern, Southern and Western profiles flatten at different
radii, possibly indicating the presence of accreted material at those
distances.
There are some nearby bright stars both East and West of the BCG. Inaccuracies
in the star subtraction process could bias the colors that we obtain,
particularly the colors of the faintest regions of the ICL. In order to assess
that potential issue, we derive the color profiles in 4 different directions
of the BCG: North, East, South and West.
The four different profiles were derived by masking the BCG+ICL except for
$90\deg$-wide sections as shown in the left panel of Fig. 9, labelled as North
(orange, N), East (green, E), South (purple, S) and West (magenta, W). The
profiles are derived in the same way as the overall color profile. The right
panel in Fig. 9 shows the color profiles color-coded by their respective
section. The color profiles behave similarly up to $\sim 50\arcsec$, where the
Southern profile flattens (purple line, $g-i\approx 1.3$) to a radius of $\sim
100\arcsec$ ($107$ kpc). Similarly, the Northern (orange) and Western
(magenta) profiles show flattening, and even reddening (North profile),
between $80\arcsec$ to $130\arcsec$.
While for the Southern profile there is not a clear origin for the observed
flattening, the shape of the Northern profile could be affected by the
presence of a large satellite (at a projected radius of $\sim 75\arcsec$,
zoomed-in in Fig. 11). This is also the case for the Western profile as there
are some galaxies at $\sim 145\arcsec$. We will discuss this in detail in the
following Section.
Given that the closest bright stars are only located East and West of the BCG,
these color profiles confirm that the change in gradient is not caused by the
presence of these stars but rather caused by the presence of diffuse light
associated with ongoing mergers.
### 3.3 Fraction of light in the ICL
Studying the amount of light in the ICL can provide information on the
efficiency of the interactions that form the ICL. This is given by the ICL
fraction, defined as the ratio between the ICL and the total (BCG + galaxies +
ICL) flux or luminosity of the cluster. This ICL fraction is an ill-defined
quantity in photometric-only studies as separating between BCG and ICL is not
obvious. To overcome this problem, astronomers have been using different ways
of defining the ICL component in deep photometry. In the following, we
describe two of the most widely used definitions. We derived the ICL fraction
for A85 using both of them, for ease of comparison with other studies.
#### 3.3.1 ICL fraction from surface brightness cuts
The most widely used definition is to apply a cut in surface brightness and
assume that the light fainter than a certain surface brightness limit is the
ICL (typically $\mu_{V}>26.5$ mag/arcsec2, e.g., Feldmeier et al., 2004;
Rudick et al., 2011). To derive the ICL fraction for the $g$ and $i$ bands, we
followed similar steps to Montes & Trujillo (2018). First, we applied the mask
where all the members of the cluster are unmasked, derived in Sec. 2.3, to
each of the images. In each of the bands, we summed all the pixels fainter
than a given ICL threshold. The fractions given have a fainter limit of
$\mu<29.5$ mag/arcsec2 in order to minimize the contamination from
inhomogeneities in the background. The ICL fractions are derived applying four
different surface brightness cuts: $\mu>$ 26, 26.5, 27 and 27.5 mag/arcsec2. We
provide the ICL fractions for both the $g$ and $i$ bands in Table 2.
The ICL fractions calculated this way account not only for the diffuse light
associated with the BCG but also with other galaxies in the cluster. Note that
defining the ICL this way means that the measured fractions are lower limits
on the true value; we miss light both at the bright end (e.g., ICL seen in
projection on top of galaxies) and beyond the faint limit.
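Schematically, the surface-brightness-cut fraction is computed as follows
(masked pixels are NaN; the members-only mask of Sec. 2.3 is applied first):

import numpy as np

def icl_fraction(img, zp, pixscale, mu_cut, mu_faint=29.5):
    # Surface brightness of each pixel in mag/arcsec^2.
    with np.errstate(invalid="ignore", divide="ignore"):
        mu = zp - 2.5 * np.log10(img / pixscale ** 2)
    icl = (mu > mu_cut) & (mu < mu_faint)   # pixels counted as ICL
    return np.nansum(img[icl]) / np.nansum(img)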
Table 2: ICL fraction ($\%$) for A85

Surface brightness cuts [mag/arcsec2]
| | $26<\mu<29.5$ | $26.5<\mu<29.5$ | $27<\mu<29.5$ | $27.5<\mu<29.5$
---|---|---|---|---
$f_{\mathrm{ICL}}(g)$ | $8.8\pm 0.5$ | $6.2\pm 0.7$ | $4.0\pm 0.9$ | $2.4\pm 0.9$
$f_{\mathrm{ICL}}(i)$ | $3.1\pm 0.7$ | $1.9\pm 0.7$ | $1.1\pm 0.7$ | $0.6\pm 0.7$

2D fit
| | $g$ | $i$
---|---|---
$f_{\mathrm{ICL}}$ | $11.0\pm 1.0$ | $11.5\pm 1.0$
$f_{\mathrm{BCG+ICL}}$ | $16.7\pm 2.0$ | $18.0\pm 2.0$
$f_{\mathrm{ICL}/\mathrm{BCG+ICL}}$ | $66.1\pm 2.2$ | $63.7\pm 2.2$
#### 3.3.2 ICL fraction assuming a functional form
Despite its simplicity, one of the limitations of the above definition is that
it does not account for the amount of ICL in projection on top of the BCG.
Another common approach is using functional forms to describe both BCG and ICL
(e.g., Gonzalez et al., 2005; Seigar et al., 2007; Spavone et al., 2018, to
name a few). In our case, we use GALFIT (Peng et al., 2002) to simultaneously
fit two two-dimensional Sérsic profiles: one to describe the BCG and one for
the ICL. The parameters for the two fitted Sérsic components are given in
Table 4 in Appendix C. Although the fits seem to describe well the overall
profile, they are not able to reproduce the inner core of the galaxy (as in
the case of the single Sérsic fit, Fig. 15). Contrary to the single Sérsic fit
performed in Sec. 3.1, we now find that the inner component is close to an
exponential ($n_{1}\sim 1$), with the outer component at $n_{2}\sim 2.15$. This
difference between the single and double Sérsic fits is probably caused by the
single component trying to fit the outer parts of the BCG+ICL profile.
As expected from the ellipse 1-D profiles in Sec. 3.1, the more extended
component (the ICL) has a higher ellipticity than the inner component (the
BCG, see also Kluge et al., 2020a). However, the PA in both models are not
significantly different ($\Delta\mathrm{PA}\sim 4^{\circ}$).
The 1-D surface brightness profiles obtained with ellipse for the double
Sérsic fits are shown in Fig. 10. As in Fig. 6, the observed surface
brightness profiles of the $g$ and $i$ bands are shown in mint green and
purple, respectively. The two different Sérsic models (inner and outer) are
shown with the dashed grey lines while the sum of both models is the solid
black line. As in Fig. 7, it can be seen that the outer component, the ICL,
dominates at around $\sim$ 60-70$\arcsec$.
The ICL fraction obtained using the outer Sérsic model is given in Table 2. We
have also derived the fraction of BCG+ICL with respect to the total and the
ratio between ICL and BCG+ICL.
Figure 10: ellipse 1-D profiles of the $g$ (top) and $i$ (bottom) bands of the
BCG + ICL of A85. The grey dashed lines correspond to the profiles of the two
Sérsic components fitted with GALFIT. The solid black line is the sum of both
components. A double Sérsic fit reproduces the light profile of A85 in both
bands and the outer component, the ICL, dominates at around $\sim$
60-70$\arcsec$.
The ICL fractions derived from assuming a double Sérsic to describe BCG+ICL
are higher than those from surface brightness cuts. This is expected because
we are extrapolating the contribution of the diffuse light in the line of
sight of the BCG which results in adding more ICL that surface brightness cuts
cannot account for. Fig. 10 shows that while the extended Sérsic component
begins to dominate at r $=70$ kpc, a surface brightness limit of $\mu_{g}>26$
mag/arcsec2 will measure all the light beyond $110$ kpc as ICL. Note that a
surface brightness cut also accounts for diffuse light that is associated with
the other galaxies in the cluster and might give a more complete picture of
the formation of ICL in clusters.
The fractions calculated in this Section include all the member galaxies of
the cluster in our images, that is, $r=0\fdg 42=0.67\times R_{200}$ (where
$R_{200}=0\fdg 63=2.42$ Mpc; Owers et al., 2017). Other studies measure
the ICL fraction within a smaller radius, typically $R_{500}$ (e.g., Gonzalez et
al., 2007). This means that, in comparison, we are including more galaxies and
therefore deriving a higher total luminosity for the cluster (while adding
almost no ICL). That yields lower ICL fractions than in other
studies with more limited fields of view (e.g., Burke et al., 2015; DeMaio et
al., 2020). For this reason, we have also calculated the fractions within
$R_{500}=1.2$ Mpc = $18\farcm 7$ (Ichinohe et al., 2015). The fractions within
$R_{500}$ can be found in Table 5 in Appendix F.
## 4 Discussion
In this work, we have used archival HSC data to explore the radial surface
brightness and color profile of the BCG of A85 to a radius of $200\arcsec$
($213$ kpc). We found that both the surface brightness and color profile
become shallower beyond $70\arcsec$ ($75$ kpc), indicating that an extra
component, the ICL, starts to dominate. In the following, we will discuss the
implications of our results.
### 4.1 The fraction of light in the ICL
The ICL is a product of the interactions between galaxies within the cluster
(Rudick et al., 2009), therefore its fraction can provide information on the
efficiency of those interactions, while the evolution of this component with
time gives an estimate of their timescales. However, measuring the ICL
fraction is difficult as the transition between BCG and ICL happens smoothly,
making it hard to separate both components. In addition, studies use different
bands and definitions for the ICL complicating direct comparison.
In general, the ICL fractions derived here using surface brightness cuts are
in agreement with those in the literature for clusters at similar redshifts
(although in different, adjacent, bands and surface brightness limits, e.g.,
Krick & Bernstein 2007). Our ICL fraction at $\mu_{g}>26$ mag/arcsec2 is $\sim
9.8\pm 0.5\%$ (Table 5). This is in agreement with the median ICL fraction
(using the same band and surface brightness cut) in Kluge et al. (2020a):
$13\pm 13\%$. It is also in agreement with the $\sim 11\%$ at $\mu_{V}>26.5$
mag/arcsec2 derived in the simulations of Rudick et al. (2011).
Simulations show that $70\%$ of the stellar mass of the BCG is accreted (Qu et
al., 2017; Pillepich et al., 2018). This means that most of the BCG is formed
in a similar way to the ICL, and therefore both components should be studied together to
understand the growth of BCGs. For this reason, we also measured the fraction
of BCG+ICL over the total luminosity of the cluster, $f_{BCG+ICL}$. This
fraction is $\sim 46\%$ at $r<R_{500}$ in agreement with Gonzalez et al.
(2007, 2013) for clusters at similar redshifts.
The fraction of ICL over the BCG+ICL component, $f_{\mathrm{ICL}/\mathrm{BCG+ICL}}$ ($\sim 64\%$),
indicates that most of the total light in the BCG+ICL system of A85 is in the
ICL (more specifically, in the stellar halo or envelope, bound to the BCG,
plus the ICL, bound to the cluster, as we cannot distinguish between the two
components using imaging alone). This result agrees with the fractions from
previous observations and simulations (e.g., Gonzalez et al., 2005; Zibetti et
al., 2005; Seigar et al., 2007; Cañas et al., 2020; Kluge et al., 2020a). In
the simulations of Conroy et al. (2007), similar fractions are achieved if all
the stars from disrupted satellites end up in the ICL (their Fig. 4). These
results from simulations, coupled with the observed mild evolution in mass of
BCGs (e.g., Whiley et al., 2008; Collins et al., 2009; Lidman et al., 2012;
Oliva-Altamirano et al., 2014; Bellstedt et al., 2016), suggest that
a significant fraction of the mass of infalling satellites goes to the stellar
halo + ICL instead of adding a significant fraction of mass to the BCG (e.g.,
Laporte et al., 2013; Contini et al., 2018).
### 4.2 Stellar populations of the BCG
Studying the colors of the ICL in clusters allows us to infer the properties
of the progenitor galaxies from which the ICL accreted its stars and,
consequently, the mechanisms at play in the formation of this component.
In Section 3.1, we presented the surface brightness radial profiles of the BCG
+ ICL for the $g$ and $i$ bands. Both surface brightness profiles show a flat
region in the inner $10\arcsec$ ($11$ kpc), denoting the presence of a core
(e.g., López-Cruz et al., 2014; Madrid & Donzelli, 2016). In the same way, the
measured color profile is flat in the inner $10\arcsec$, indicating that the
stellar populations in this region are well mixed. Mehrgan et al. (2019) used
MUSE data to infer the stellar kinematics of this BCG finding that this
central region hosts a supermassive black hole with a mass of $(4.0\pm
0.8)\times 10^{10}M_{\odot}$. They concluded that the BCG of A85 is a result of
the merger of two cored early-type galaxies.
Beyond $10\arcsec$, the surface brightness profiles follow a Sérsic (1968)
profile down to $\sim 70\arcsec$ ($\sim 75$ kpc). At the same time, the color
profile shows a negative gradient from $g-i=1.45$ to $g-i\approx 1.25$. The
central flattening and subsequent gradient in color is also observed in the
integral field spectroscopy observations of A85 in Edwards et al. (2020). They
find that out to $30$ kpc the metallicity of the galaxy shows the same
behaviour as our color profile: a flattening in the inner $\sim 10$ kpc ($\sim
10\arcsec$), followed by a decrease to $\sim 30$ kpc ($28\arcsec$).
At $\sim 70\arcsec$, the surface brightness profiles depart from the Sérsic
fit (top panel in Fig. 7). This corresponds to where the isophotes show a boxy
shape (indicated by the gray vertical line in Fig. 7). Simulations suggest
that boxyness is the result of a past dry merger event (e.g., Naab et al.,
2006). In addition, at this radius, the color profile becomes shallower. These
pieces of evidence point to an extra component originating from accreted
stars: the ICL (plus the stellar halo).
It is not possible to disentangle between stellar age and metallicity using
only one color. Previous deep observations of clusters of galaxies show clear
radial gradients in colors (e.g., Williams et al., 2007; Montes & Trujillo,
2014, 2018; DeMaio et al., 2015, 2018; Mihos et al., 2017; Iodice et al.,
2017) indicating radial gradients in metallicity while the ages of the ICL in
nearby systems are old ($>10$ Gyr, e.g., Williams et al., 2007; Coccato et
al., 2010). This is consistent with Edwards et al. (2020), who only found a
very mild decrease in age to $30$ kpc for A85, from $\sim 15$ to $10$ Gyr.
Therefore at $<30$ kpc, the color profiles likely mostly trace changes in
metallicity. However, we cannot test here whether the decrease in age becomes
significant beyond $30$ kpc.
The shape of the color profile is reminiscent of the three different regions
found in the metallicity profile of M87 in Montes et al. (2014) (see also
Coccato et al. 2010). In M87, the metallicity gradient becomes shallower in
the outer parts of the galaxy. This is the consequence of the mixing of the
stellar populations of the accreted galaxies. This also appears to be the case
for A85 and is supported by the change in the slope of the surface brightness
profiles, where the outer parts of the galaxy (the ICL) are built via the
accretion of satellite galaxies.
Figure 11: Inner $700\arcsec\times 700\arcsec$ RGB image of A85. The ellipse
model obtained in Sec. 3.1 has been subtracted. The teal cross marks the
centre of the BCG while the green arrow marks a collection of galaxies and
diffuse light to the South of the BCG. The colored arcs mark the area where
there is flattening in the color profiles derived in different directions as
seen in Fig. 9. The inset shows a zoom-in into a galaxy that presents a faint
tail towards the North as indicated by the purple arrow. The North (orange)
and West (magenta) arcs are highlighting diffuse light associated with
galaxies interacting with the BCG.
In Section 3.2, we derived color profiles of the BCG+ICL in four different 90
deg-wide sections, finding that the Southern color profile between $50\arcsec$
to $100\arcsec$ ($53$ to $106$ kpc) is redder than the other profiles ($\sim
1.3$, Fig. 9). Similarly, the North and West profiles become flat between
$80\arcsec$ to $130\arcsec$.
To explore whether there is any evidence of infalling material that might be
causing the flattening of the profiles, we subtracted the ellipse models from
the image for both bands to enhance any signs of interactions or asymmetries.
In Fig. 11, we show the inner $700\arcsec\times 700\arcsec$ region of A85 with
the model generated from the ellipse fits subtracted. We have drawn arcs to
demarcate the areas in the image corresponding to the flattening of the color
profiles, color-coded according to the direction of the corresponding profile
in Fig. 9. Towards the North, we found a faint tail associated with a large
satellite galaxy (zoomed-in in Fig. 11 and marked with a purple arrow). The
presence of this faint tail might explain the sudden reddening of the Northern
color profile at $\sim 130\arcsec$ in Fig. 9. As there are no other signs of
disturbance, this galaxy is probably just starting to interact with the BCG.
To the West, there is some diffuse light associated with two galaxies that are
likely interacting with the BCG. Even with our careful and conservative
masking, the diffuse light associated with these interactions might be
contaminating our profiles.
In addition, there is a collection of galaxies between $\sim 75\arcsec$ and
$\sim 160\arcsec$ ($80$ to $170$ kpc) South of the BCG with associated diffuse
light. This structure is marked with a green arrow in Fig. 11. These galaxies
appear to be interacting with each other rather than with the BCG. The color
of the diffuse light of the region, with the galaxies masked, is $g-i=1.24\pm
0.15$. However, this structure is not associated to any signature in the
Southern color profile (lies outside the purple arcs in Fig. 11).
To investigate whether these diffuse structures could be biasing our results,
we repeated the ellipse fitting and color profile derivation but, in this
case, generously masking all of these structures (North, South and West). The
resulting color profile and the $4th$ Fourier harmonic, B4, are shown in Fig.
16 in Appendix D. We also plotted the original color profile in dark blue for
reference. The new color profile is compatible with the original. The gradient
between $70\arcsec$ to $200\arcsec$ ($75$ to $213$ kpc) is now $-0.07\pm
0.04$, slightly steeper but compatible within errors with the gradient derived
in Sec. 3.2 ($-0.06\pm 0.04$). The boxyness is still present. Therefore, the
diffuse light associated with these interactions is not producing the change
of gradient nor the boxyness observed.
The lack of any obvious tidal feature related to the color flattening towards
the South means that any tidal feature has had time to smear out, leaving a
signature only in the color profile. However, given its preferential position
to the South, the accreted material has not yet had time to mix with the rest
of the galaxy. The orbital
period for a star around this BCG in the radial range from $50\arcsec$ to
$100\arcsec$ ($53$ to $106$ kpc, the approximate range of the flattening in
the Southern color profile) is between $1.5$ to $3.7$ Gyr. The calculations of
the orbital period are described in Appendix E. In the simulations of Rudick
et al. (2009), streams found in the core of the cluster are quickly destroyed
by the strong, evolving tidal field of the cluster with timescales of
$\lesssim 1$ Gyr. Therefore, for any stream to have been smeared while the
stars have not yet completed an orbit around the galaxy, we suggest that this
interaction likely happened a few Gyr ago.
#### 4.2.1 The ellipticity profile and the ICL as a tracer of dark matter
A significant anisotropy in the orientation of the orbits of the progenitors
of the ICL will produce an elongation in the ICL distribution. This
elongation, i.e. ellipticity, is expected to increase with increasing radius
up to the value of the original distribution, i.e. the cluster distribution.
The ellipticity of the diffuse light of A85 increases with radius up to a
value of $\sim 0.55$ at $\sim 200\arcsec$ ($\sim 213$ kpc), for $g$ and $i$
(Fig. 6). However, the PA does not change significantly with radius, i.e.,
inner and outer components are aligned.
This increase in ellipticity with radius was also observed in this cluster by
Kluge et al. (2020b). The same trend with radius has been measured in other
massive galaxies and clusters (e.g., Gonzalez et al., 2005; Tal & van Dokkum,
2011; Huang et al., 2018; Kluge et al., 2020b, a). The ellipticities of the
diffuse light in these systems tend to the typical values for the distribution
of galaxies within clusters (Shin et al., 2018). When fitting a double Sérsic
model to the 2-D distribution of light of the BCG+ICL, we also find that the
outer component, the ICL, has a higher ellipticity ($\sim
0.5$) than the inner component, the BCG ($\sim 0.2$).
The value of the ellipticity at large radii derived here is consistent with
the axis ratio measured for A85 using weak-lensing modeling by Cypriano et al.
(2004). That is, the ICL has the ellipticity of the dark matter halo of the
cluster. These results agree with the picture proposed in Montes & Trujillo
(2019) that the ICL is a good luminous tracer of the dark matter distribution
in clusters of galaxies.
### 4.3 The buildup of the ICL of A85
The change in slope of the surface brightness profile of the BCG, the boxyness
of the isophotes and the change in the slope of the color gradient at a radius
of $\sim 70\arcsec$ ($\sim 75$ kpc) strongly suggest that the BCG and ICL can
be considered as distinct stellar components with different assembly
histories, and that the accreted component (ICL) starts to dominate at that
radius. Integrated light spectroscopy (e.g., Dressler, 1979) and planetary
nebulae kinematics (e.g., Arnaboldi et al., 1996) of nearby clusters, show
that the radial velocity dispersion increases with radius to reach the value
of the velocity dispersion of the galaxies in the cluster (Longobardi et al.,
2018). That means that the stars forming the ICL are following the potential
of the cluster rather than the potential of the BCG. We can conclude that the
radius where the potential of the A85 cluster begins to dominate is $\sim
70\arcsec$ ($\sim 75$ kpc, see Fig. 10). Previous works have also shown that,
in massive clusters ($10^{14-15}M_{\odot}$), BCGs tend to show this break
radius at around $60-80$ kpc (e.g., Zibetti et al., 2005; Gonzalez et al.,
2005; Seigar et al., 2007; Iodice et al., 2016).
We can calculate an approximate mass of the progenitor of the merger using the
color of the Southern profile. If the average color of the reddening in the
Southern profile is around $g-i=1.3$ (Fig. 9), and assuming an age of $10$
Gyr, the metallicity of the progenitor would be [Z/H] $=-0.013$ (using the
models of Vazdekis et al., 2016), i.e. slightly subsolar metallicity. Using
the mass-metallicity relation from Gallazzi et al. (2005), this corresponds to
a galaxy of $\sim 3\times 10^{10}M_{\odot}$. The galaxies towards the North
and West that are interacting with the BCG have masses of the order of $\sim
7\times 10^{10}M_{\odot}$ (Owers et al., 2017). This is in agreement with
observations in other clusters (Montes & Trujillo, 2014, 2018; Morishita et
al., 2017; DeMaio et al., 2018) and with simulations (Purcell et al., 2007;
Cui et al., 2014; Contini et al., 2014, 2019). These studies conclude that
galaxies of $\sim 10^{10}M_{\odot}$ are the main contributors to the ICL in
massive clusters of galaxies.
## 5 Conclusions
In this work, we have presented deep observations in the $g$ and $i$ bands of
the central $52\arcmin\times 52\arcmin$ of the cluster Abell 85, obtained with
Hyper Suprime-Cam on the Subaru Telescope. The surface brightness limits
reached are $30.9$ and $29.7$ mag/arcsec2 ($3\sigma$, $10\times 10$ arcsec2),
for the $g$ and $i$ bands, respectively. Taking advantage of the depth of
these images, we are able to study the diffuse light of this cluster down to
$200\arcsec$ ($213$ kpc) from the BCG. At $\sim 70\arcsec$ ($\sim 75$ kpc),
the surface brightness profiles become shallower and the isophotes show a boxy
shape, strongly indicating the presence of an accreted component: the ICL. In
addition, at the same radius the color profile becomes shallower, a
consequence of the mixing of the stellar populations of the accreted galaxies.
Furthermore, the color profiles towards the North, West and South of the BCG
show redder colors compared to the other directions, as if material remains
in those directions from mergers that happened a few Gyr ago. This
work shows that even short exposure times ($\sim 30$ mins) on large telescopes
can unveil the assembly history of clusters of galaxies.
The results presented in this work show the extraordinary power of ground-
based observatories to address the origin and evolution of the ICL. In the
future, the LSST will be able to provide deep multi-wavelength observations of
the southern sky allowing the study of the ICL in a range of cluster masses
and redshifts (Montes, 2019; Brough et al., 2020). However, as demonstrated in
this work, careful data processing techniques are crucial in order to take the
maximum benefit from the upcoming data.
We thank the referee for constructive comments that helped to improve the
original manuscript. MM would like to thank Raúl Infante-Sáinz and Nacho
Trujillo for their very useful comments on the data reduction and Alexandre
Vazdekis for his comments on stellar populations. SB acknowledges funding
support from the Australian Research Council through a Future Fellowship
(FT140101166). Based on data collected at Subaru Telescope and obtained from
the SMOKA, which is operated by the Astronomy Data Center, National
Astronomical Observatory of Japan. This research includes computations using
the computational cluster Katana supported by Research Technology Services at
UNSW Sydney.
## Appendix A Effect of instrument rotation on the flat-fields
In Sec. 2.1.3, we discussed that we used HSC-SSP images of adjacent nights to
the observations of A85 in order to derive a sky flat. However, in the case of
the $i$ band, using the images of adjacent nights resulted in significant
background substructure in the individual CCDs, and consequently, a global
structure in the individual frames. The source of this structure seems to be
related to the rotation angle of the instrument (‘INR-STR’ in the header)
being considerably different from that of the observed images ($<$INR-STR$>$ =
$-5$ in the HSC-SSP Wide images compared to $<$INR-STR$>$ = $124$ for A85).
Figure 12: Effect of rotation angles of the HSC (‘INR-STR’) on the flat-fields
derived for CCD = 80 in the $i$ band. Left panel: Flat derived using the
images taken from adjacent nights to the images of A85 ($<$INR-STR$>$ = $-5$).
Middle panel: Flat derived using the images with similar rotation angles as
the A85 science images ($<$INR-STR$>$ = $114$). Right panel: Ratio of the two
flats, Flat/Flat$_{r}$, where a gradient of $\sim 1\%$ across the CCD can be seen.
For reference $<$INR-STR$>$ of the A85 images is $124$.
To test this hypothesis, we downloaded images from the SMOKA archive where
‘INR-STR’ was close to the angle of the A85 $i$ band images. As it was
difficult to find the same rotation angles as the A85 images, we downloaded
images with angles between 100 and 140. The average rotation angle of these
images are $<$INR-STR$>$ = 114. The dates when those images were taken are
listed in Sec. 2.1.3. In Fig. 12, we show a comparison of the two different
flats derived for CCD number 80 (a map of the CCD arrangement is available at
https://hsc.mtk.nao.ac.jp/pipedoc/pipedoc_4_e/_images/CCDPosition_20140811-1.png).
The flat derived from the images from adjacent nights is shown in the left
panel, labelled as _Flat_. The middle panel shows the flat derived from the
images with a median instrument rotation close to the A85 images, labelled as
_Flat r_. The right panel of Fig. 12 is the ratio of the two flats; _Flat_
divided by _Flat r_. The presence of a significant gradient across the CCD can
be seen. This gradient is of the order of $\sim 1\%$.
In Fig. 13, we show the comparison of the final co-added images for the $i$
band using the flats derived using science images of adjacent nights
(labelled: with Flat, left panel) and using the flats obtained with the
science images with the same rotation as the A85 image (the final image used
in this work labelled: with Flatr, right panel). In the left image we can see
inhomogeneities caused by the poor flat-field correction to the individual
CCDs.
Figure 13: Final $i$ band co-added image using the flat-field frames from
science images of adjacent nights to the A85 observations (‘with Flat’, left
panel) and using the flat-field frames from science images with the same
rotation as A85 (‘with Flatr’, right panel). The left image shows
inhomogeneities caused by the inaccuracy of the flat-field correction.
## Appendix B Example of the custom-made flat-field
An accurate flat-field correction is crucial to minimise errors in low surface
brightness science, especially in extended and diffuse objects such as the
ICL. For this reason, we derived the flats from science observations instead
of using the HSC master flats as inhomogeneities in the illumination can
introduce gradients in our images. During this work, we found that the HSC
master flat from CCD 75 does not contain a feature that is present in the data
(indicated by purple lines in Fig. 14). We do not know the reason for this
discrepancy. However, the master flat does seem to reproduce all other
features seen in the CCD image.
Figure 14: Left panel: Image from CCD 75 processed before flat-field
correction. Middle panel: Master flat-field for CCD 75 from 2014-09-17. Right
panel: Custom flat derived in this work. The purple lines highlight the
structure present in both the image and custom flat, but not in the HSC master
flat.
## Appendix C Sérsic fits 1D and 2D
In Fig. 15, we show the single Sérsic fits to the radial surface brightness
profiles of A85, for the $g$ (green) and $i$ (purple) bands. The best fit
effective radius ($r_{eff}$) and Sérsic index (n) parameters are listed in
Table 3, for both bands.
Figure 15: 1-D single Sérsic fits to the BCG+ICL profile of A85.

Table 3: Parameters of the single Sérsic fits for the surface brightness profiles of A85
Band | $r_{\mathrm{eff}}$ [arcsec] | n
---|---|---
g | $39\pm 6$ | $4.0\pm 0.7$
i | $36\pm 4$ | $4.9\pm 0.8$
Table 4: Parameters from the double Sérsic fit from GALFIT
Band | m1 [mag] | $r_{\mathrm{eff},1}$ [arcsec] | n1 | $\varepsilon_{1}$ | PA1 [deg] | m2 [mag] | $r_{\mathrm{eff},2}$ [arcsec] | n2 | $\varepsilon_{2}$ | PA2 [deg]
---|---|---|---|---|---|---|---|---|---|---
g | $14.37\pm 0.01$ | $15.02\pm 0.01$ | $1.08\pm 0.01$ | $0.20\pm 0.01$ | $54.40\pm 0.02$ | $13.49\pm 0.01$ | $173.0\pm 0.2$ | $2.14\pm 0.01$ | $0.49\pm 0.01$ | $58.63\pm 0.02$
i | $12.96\pm 0.01$ | $14.29\pm 0.01$ | $1.04\pm 0.01$ | $0.19\pm 0.01$ | $54.16\pm 0.02$ | $12.20\pm 0.01$ | $172.6\pm 0.2$ | $2.18\pm 0.01$ | $0.53\pm 0.01$ | $58.82\pm 0.02$
## Appendix D Color profiles of A85 after masking diffuse structures
In Sec. 4.2, we discussed the existence of diffuse structures North, South and
West of the BCG with associated diffuse light that might be causing the
flattening of the color profile. In Fig. 16, we show the $g-i$ color profile
(top panel) and the B4 coefficient (bottom panel) with these structures
masked. We do not find any significant change within errors.
Figure 16: Color profile and B4 coefficients as a function of SMA after
masking the diffuse structures North, South and West of the BCG. We do not find any
significant difference within errors with respect to the results from Sec. 3.2
(in dark blue).
## Appendix E Calculation of the orbital period around A85
To estimate the orbital period ($T$) of a star at a given radius ($R$) around
A85 we used Kepler’s third law:
$T=2\pi\sqrt{\frac{R^{3}}{GM_{R}}}$ (E1)
where $M_{R}$ is the mass of the BCG of A85 inside a radius $R$ and $G$ is the
gravitational constant (Binney & Tremaine, 1987).
In order to calculate $M_{R}$ for A85, we followed the prescription in Bell et
al. (2003) to calculate the mass-to-light ratio ($M/L$) as a function of the
$g-i$ color, assuming a Salpeter (1955) IMF. The radius is computed as the
elliptical distance to the BCG, where the morphological parameters
(ellipticity and PA) are the median values from the ellipse isophotes
excluding the inner $10\arcsec$: 0.37 for the ellipticity and 56 deg for the
PA. The expression we have used to estimate the $M/L$ in the $g$ band is:
$\log(M/L_{g})=-0.379+0.914\times(g-i)$ (E2)
from Table 7 in Bell et al. (2003). The color used is the median $g-i$ color
inside a radius $R$, $k$-corrected and corrected for Milky Way extinction.
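As a worked example, the sketch below (Python) chains Eqs. E2 and E1: a median $g-i$ color gives $M/L_{g}$, the enclosed $g$-band luminosity gives $M_{R}$, and Kepler's third law gives $T$. The input values are purely illustrative, not our measurements for A85.

```python
import numpy as np

G = 4.301e-6          # gravitational constant in kpc (km/s)^2 / Msun
KPC_TO_KM = 3.086e16  # kilometers per kiloparsec

def orbital_period_yr(R_kpc, L_g, g_minus_i):
    """Orbital period (yr) at radius R given the enclosed g-band luminosity (Lsun)."""
    m_to_l = 10 ** (-0.379 + 0.914 * g_minus_i)       # Eq. E2 (Bell et al. 2003)
    M_R = m_to_l * L_g                                # enclosed stellar mass (Msun)
    v_circ = np.sqrt(G * M_R / R_kpc)                 # circular velocity (km/s)
    T_sec = 2.0 * np.pi * R_kpc * KPC_TO_KM / v_circ  # Eq. E1, rewritten as 2*pi*R/v
    return T_sec / 3.154e7                            # seconds -> years

# Illustrative numbers only: R = 50 kpc, L_g = 1e11 Lsun, g-i = 1.1
print(f"T ~ {orbital_period_yr(50.0, 1e11, 1.1):.2e} yr")  # ~1.6e9 yr
```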
## Appendix F ICL fraction at $R<R_{500}$
In Table 5, we list the ICL fractions inside $R_{500}$ (= 1.2 Mpc =
18$\farcm$7). We also list the BCG+ICL fraction with respect to the total
luminosity of the cluster and the ICL fraction with respect to the BCG+ICL.
Table 5: ICL fraction (%) for A85 within $r<R_{500}$
Surface brightness cuts [mag/arcsec$^{2}$]
---
| $26<\mu<29.5$ | $26.5<\mu<29.5$ | $27<\mu<29.5$ | $27.5<\mu<29.5$
$f_{\mathrm{ICL}}(g)$ | $9.8\pm 0.5$ | $7.0\pm 0.8$ | $4.6\pm 1.0$ | $2.7\pm 1.0$
$f_{\mathrm{ICL}}(i)$ | $3.6\pm 0.8$ | $2.2\pm 0.8$ | $1.4\pm 0.8$ | $0.7\pm 0.8$
2D fit
| $g$ | $i$
$f_{\mathrm{ICL}}$ | $29.9\pm 1.0$ | $30.3\pm 1.0$
$f_{\mathrm{BCG+ICL}}$ | $45.4\pm 2.0$ | $47.7\pm 2.0$
$f_{\mathrm{ICL}/\mathrm{BCG+ICL}}$ | $66.1\pm 2.2$ | $63.7\pm 2.2$
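Schematically, each surface-brightness-cut ICL fraction in Table 5 is the flux in pixels whose surface brightness falls in the chosen $\mu$ range, divided by the total cluster flux inside $R_{500}$. The minimal NumPy sketch below illustrates this; the zero point and pixel scale are assumed values for illustration.

```python
import numpy as np

def icl_fraction(image, mu_lo, mu_hi, zp=27.0, pix_scale=0.168):
    """ICL fraction (%) from a surface brightness cut (mag/arcsec^2).

    image: background-subtracted counts inside R500, NaN outside/masked.
    zp, pix_scale: assumed photometric zero point and arcsec/pixel.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        mu = zp - 2.5 * np.log10(image / pix_scale**2)  # per-pixel surface brightness
    in_cut = (mu > mu_lo) & (mu < mu_hi)                # e.g. 26 < mu < 29.5
    return 100.0 * np.nansum(image[in_cut]) / np.nansum(image)

# Example call for the brightest cut in Table 5:
# f_icl_g = icl_fraction(g_band_image, 26.0, 29.5)
```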
## References
* Ahn et al. (2012) Ahn, C. P., Alexandroff, R., Allende Prieto, C., et al. 2012, ApJS, 203, 21, doi: 10.1088/0067-0049/203/2/21
* Aihara et al. (2018) Aihara, H., Armstrong, R., Bickerton, S., et al. 2018, PASJ, 70, S8, doi: 10.1093/pasj/psx081
* Aihara et al. (2019) Aihara, H., AlSayyad, Y., Ando, M., et al. 2019, PASJ, 71, 114, doi: 10.1093/pasj/psz103
* Alam et al. (2015) Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, ApJS, 219, 12, doi: 10.1088/0067-0049/219/1/12
* Arnaboldi et al. (1996) Arnaboldi, M., Freeman, K. C., Mendez, R. H., et al. 1996, ApJ, 472, 145, doi: 10.1086/178050
* Baba et al. (2002) Baba, H., Yasuda, N., Ichikawa, S.-I., et al. 2002, Astronomical Society of the Pacific Conference Series, Vol. 281, Development of the Subaru-Mitaka-Okayama-Kiso Archive System, ed. D. A. Bohlender, D. Durand, & T. H. Handley, 298
* Bell et al. (2003) Bell, E. F., McIntosh, D. H., Katz, N., & Weinberg, M. D. 2003, ApJS, 149, 289, doi: 10.1086/378847
* Bellstedt et al. (2016) Bellstedt, S., Lidman, C., Muzzin, A., et al. 2016, MNRAS, 460, 2862, doi: 10.1093/mnras/stw1184
* Bertin (2006) Bertin, E. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 112
* Bertin (2011) Bertin, E. 2011, Astronomical Society of the Pacific Conference Series, Vol. 442, Automated Morphometry with SExtractor and PSFEx, ed. I. N. Evans, A. Accomazzi, D. J. Mink, & A. H. Rots, 435
* Bertin & Arnouts (1996) Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
* Bertin et al. (2002) Bertin, E., Mellier, Y., Radovich, M., et al. 2002, in Astronomical Society of the Pacific Conference Series, Vol. 281, Astronomical Data Analysis Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H. Handley, 228
* Binney & Tremaine (1987) Binney, J., & Tremaine, S. 1987, Galactic dynamics
* Bosch et al. (2018) Bosch, J., Armstrong, R., Bickerton, S., et al. 2018, PASJ, 70, S5, doi: 10.1093/pasj/psx080
* Bradley et al. (2019) Bradley, L., Sipocz, B., Robitaille, T., et al. 2019, astropy/photutils: v0.7.1, v0.7.1, Zenodo, doi: 10.5281/zenodo.3478575
* Bravo-Alfaro et al. (2009) Bravo-Alfaro, H., Caretta, C. A., Lobo, C., Durret, F., & Scott, T. 2009, A&A, 495, 379, doi: 10.1051/0004-6361:200810731
* Brough et al. (2020) Brough, S., Collins, C., Demarco, R., et al. 2020, arXiv e-prints, arXiv:2001.11067. https://arxiv.org/abs/2001.11067
* Burke et al. (2015) Burke, C., Hilton, M., & Collins, C. 2015, MNRAS, 449, 2353, doi: 10.1093/mnras/stv450
* Cañas et al. (2020) Cañas, R., Lagos, C. d. P., Elahi, P. J., et al. 2020, MNRAS, 494, 4314, doi: 10.1093/mnras/staa1027
* Chilingarian et al. (2010) Chilingarian, I. V., Melchior, A.-L., & Zolotukhin, I. Y. 2010, MNRAS, 405, 1409, doi: 10.1111/j.1365-2966.2010.16506.x
* Chilingarian & Zolotukhin (2012) Chilingarian, I. V., & Zolotukhin, I. Y. 2012, MNRAS, 419, 1727, doi: 10.1111/j.1365-2966.2011.19837.x
* Coccato et al. (2010) Coccato, L., Gerhard, O., & Arnaboldi, M. 2010, MNRAS, 407, L26, doi: 10.1111/j.1745-3933.2010.00897.x
* Collins et al. (2009) Collins, C. A., Stott, J. P., Hilton, M., et al. 2009, Nature, 458, 603, doi: 10.1038/nature07865
* Conroy et al. (2007) Conroy, C., Wechsler, R. H., & Kravtsov, A. V. 2007, ApJ, 668, 826, doi: 10.1086/521425
* Contini et al. (2014) Contini, E., De Lucia, G., Villalobos, Á., & Borgani, S. 2014, MNRAS, 437, 3787, doi: 10.1093/mnras/stt2174
* Contini et al. (2018) Contini, E., Yi, S. K., & Kang, X. 2018, MNRAS, 479, 932, doi: 10.1093/mnras/sty1518
* Contini et al. (2019) —. 2019, ApJ, 871, 24, doi: 10.3847/1538-4357/aaf41f
* Cui et al. (2014) Cui, W., Murante, G., Monaco, P., et al. 2014, MNRAS, 437, 816, doi: 10.1093/mnras/stt1940
* Cypriano et al. (2004) Cypriano, E. S., Sodré, Laerte, J., Kneib, J.-P., & Campusano, L. E. 2004, ApJ, 613, 95, doi: 10.1086/422896
* Da Rocha & Mendes de Oliveira (2005) Da Rocha, C., & Mendes de Oliveira, C. 2005, MNRAS, 364, 1069, doi: 10.1111/j.1365-2966.2005.09641.x
* Davies et al. (1993) Davies, R. L., Sadler, E. M., & Peletier, R. F. 1993, MNRAS, 262, 650, doi: 10.1093/mnras/262.3.650
* De Lucia & Blaizot (2007) De Lucia, G., & Blaizot, J. 2007, MNRAS, 375, 2, doi: 10.1111/j.1365-2966.2006.11287.x
* DeMaio et al. (2015) DeMaio, T., Gonzalez, A. H., Zabludoff, A., Zaritsky, D., & Bradač, M. 2015, MNRAS, 448, 1162, doi: 10.1093/mnras/stv033
* DeMaio et al. (2018) DeMaio, T., Gonzalez, A. H., Zabludoff, A., et al. 2018, MNRAS, 474, 3009, doi: 10.1093/mnras/stx2946
* DeMaio et al. (2020) —. 2020, MNRAS, 491, 3751, doi: 10.1093/mnras/stz3236
* Dressler (1979) Dressler, A. 1979, ApJ, 231, 659, doi: 10.1086/157229
* Durret et al. (2005) Durret, F., Lima Neto, G. B., & Forman, W. 2005, A&A, 432, 809, doi: 10.1051/0004-6361:20041666
* Edwards et al. (2020) Edwards, L. O. V., Salinas, M., Stanley, S., et al. 2020, MNRAS, 491, 2617, doi: 10.1093/mnras/stz2706
* Feldmeier et al. (2004) Feldmeier, J. J., Mihos, J. C., Morrison, H. L., et al. 2004, ApJ, 609, 617, doi: 10.1086/421313
* Gallazzi et al. (2005) Gallazzi, A., Charlot, S., Brinchmann, J., White, S. D. M., & Tremonti, C. A. 2005, MNRAS, 362, 41, doi: 10.1111/j.1365-2966.2005.09321.x
* Gonzalez et al. (2013) Gonzalez, A. H., Sivanandam, S., Zabludoff, A. I., & Zaritsky, D. 2013, ApJ, 778, 14, doi: 10.1088/0004-637X/778/1/14
* Gonzalez et al. (2005) Gonzalez, A. H., Zabludoff, A. I., & Zaritsky, D. 2005, ApJ, 618, 195, doi: 10.1086/425896
* Gonzalez et al. (2007) Gonzalez, A. H., Zaritsky, D., & Zabludoff, A. I. 2007, ApJ, 666, 147, doi: 10.1086/519729
* Habas et al. (2018) Habas, R., Fadda, D., Marleau, F. R., Biviano, A., & Durret, F. 2018, MNRAS, 475, 4544, doi: 10.1093/mnras/sty005
* Huang et al. (2018) Huang, S., Leauthaud, A., Greene, J. E., et al. 2018, MNRAS, 475, 3348, doi: 10.1093/mnras/stx3200
* Ichinohe et al. (2015) Ichinohe, Y., Werner, N., Simionescu, A., et al. 2015, MNRAS, 448, 2971, doi: 10.1093/mnras/stv217
* Infante-Sainz et al. (2020) Infante-Sainz, R., Trujillo, I., & Román, J. 2020, MNRAS, 491, 5317, doi: 10.1093/mnras/stz3111
* Iodice et al. (2016) Iodice, E., Capaccioli, M., Grado, A., et al. 2016, ApJ, 820, 42, doi: 10.3847/0004-637X/820/1/42
* Iodice et al. (2017) Iodice, E., Spavone, M., Cantiello, M., et al. 2017, ApJ, 851, 75, doi: 10.3847/1538-4357/aa9b30
* Iodice et al. (2020) Iodice, E., Spavone, M., Cattapan, A., et al. 2020, A&A, 635, A3, doi: 10.1051/0004-6361/201936435
* Jedrzejewski (1987) Jedrzejewski, R. I. 1987, MNRAS, 226, 747, doi: 10.1093/mnras/226.4.747
* Jiménez-Teja et al. (2018) Jiménez-Teja, Y., Dupke, R., Benítez, N., et al. 2018, ApJ, 857, 79, doi: 10.3847/1538-4357/aab70f
* Kluge et al. (2020a) Kluge, M., Bender, R., Riffeser, A., et al. 2020a, arXiv e-prints, arXiv:2011.12992. https://arxiv.org/abs/2011.12992
* Kluge et al. (2020b) Kluge, M., Neureiter, B., Riffeser, A., et al. 2020b, ApJS, 247, 43, doi: 10.3847/1538-4365/ab733b
* Krick & Bernstein (2007) Krick, J. E., & Bernstein, R. A. 2007, AJ, 134, 466, doi: 10.1086/518787
* La Barbera et al. (2012) La Barbera, F., Ferreras, I., de Carvalho, R. R., et al. 2012, MNRAS, 426, 2300, doi: 10.1111/j.1365-2966.2012.21848.x
* Laporte et al. (2013) Laporte, C. F. P., White, S. D. M., Naab, T., & Gao, L. 2013, MNRAS, 435, 901, doi: 10.1093/mnras/stt912
* Lidman et al. (2012) Lidman, C., Suherli, J., Muzzin, A., et al. 2012, MNRAS, 427, 550, doi: 10.1111/j.1365-2966.2012.21984.x
* Longobardi et al. (2018) Longobardi, A., Arnaboldi, M., Gerhard, O., Pulsoni, C., & Söldner-Rembold, I. 2018, A&A, 620, A111, doi: 10.1051/0004-6361/201832729
* López-Cruz et al. (2014) López-Cruz, O., Añorve, C., Birkinshaw, M., et al. 2014, ApJ, 795, L31, doi: 10.1088/2041-8205/795/2/L31
* Madrid & Donzelli (2016) Madrid, J. P., & Donzelli, C. J. 2016, ApJ, 819, 50, doi: 10.3847/0004-637X/819/1/50
* Mehrgan et al. (2019) Mehrgan, K., Thomas, J., Saglia, R., et al. 2019, ApJ, 887, 195, doi: 10.3847/1538-4357/ab5856
* Merritt (1984) Merritt, D. 1984, ApJ, 276, 26, doi: 10.1086/161590
* Mihos (2016) Mihos, J. C. 2016, in IAU Symposium, Vol. 317, The General Assembly of Galaxy Halos: Structure, Origin and Evolution, ed. A. Bragaglia, M. Arnaboldi, M. Rejkuba, & D. Romano, 27–34, doi: 10.1017/S1743921315006857
* Mihos (2019) Mihos, J. C. 2019, arXiv e-prints, arXiv:1909.09456. https://arxiv.org/abs/1909.09456
* Mihos et al. (2005) Mihos, J. C., Harding, P., Feldmeier, J., & Morrison, H. 2005, ApJ, 631, L41, doi: 10.1086/497030
* Mihos et al. (2017) Mihos, J. C., Harding, P., Feldmeier, J. J., et al. 2017, ApJ, 834, 16, doi: 10.3847/1538-4357/834/1/16
* Miyazaki et al. (2012) Miyazaki, S., Komiyama, Y., Nakaya, H., et al. 2012, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Hyper Suprime-Cam, 84460Z, doi: 10.1117/12.926844
* Miyazaki et al. (2018) Miyazaki, S., Komiyama, Y., Kawanomoto, S., et al. 2018, PASJ, 70, S1, doi: 10.1093/pasj/psx063
* Montes (2019) Montes, M. 2019, arXiv e-prints, arXiv:1912.01616. https://arxiv.org/abs/1912.01616
* Montes & Trujillo (2014) Montes, M., & Trujillo, I. 2014, ApJ, 794, 137, doi: 10.1088/0004-637X/794/2/137
* Montes & Trujillo (2018) —. 2018, MNRAS, 474, 917, doi: 10.1093/mnras/stx2847
* Montes & Trujillo (2019) —. 2019, MNRAS, 482, 2838, doi: 10.1093/mnras/sty2858
* Montes et al. (2014) Montes, M., Trujillo, I., Prieto, M. A., & Acosta-Pulido, J. A. 2014, MNRAS, 439, 990, doi: 10.1093/mnras/stu037
* Morishita et al. (2017) Morishita, T., Abramson, L. E., Treu, T., et al. 2017, ApJ, 846, 139, doi: 10.3847/1538-4357/aa8403
* Naab et al. (2006) Naab, T., Jesseit, R., & Burkert, A. 2006, MNRAS, 372, 839, doi: 10.1111/j.1365-2966.2006.10902.x
* Nieto & Bender (1989) Nieto, J. L., & Bender, R. 1989, A&A, 215, 266
* Oliva-Altamirano et al. (2014) Oliva-Altamirano, P., Brough, S., Lidman, C., et al. 2014, MNRAS, 440, 762, doi: 10.1093/mnras/stu277
* Owers et al. (2017) Owers, M. S., Allen, J. T., Baldry, I., et al. 2017, MNRAS, 468, 1824, doi: 10.1093/mnras/stx562
* Peletier et al. (1990) Peletier, R. F., Davies, R. L., Illingworth, G. D., Davis, L. E., & Cawson, M. 1990, AJ, 100, 1091, doi: 10.1086/115582
* Peng et al. (2002) Peng, C. Y., Ho, L. C., Impey, C. D., & Rix, H.-W. 2002, AJ, 124, 266, doi: 10.1086/340952
* Pillepich et al. (2018) Pillepich, A., Nelson, D., Hernquist, L., et al. 2018, MNRAS, 475, 648, doi: 10.1093/mnras/stx3112
* Purcell et al. (2007) Purcell, C. W., Bullock, J. S., & Zentner, A. R. 2007, ApJ, 666, 20, doi: 10.1086/519787
* Qu et al. (2017) Qu, Y., Helly, J. C., Bower, R. G., et al. 2017, MNRAS, 464, 1659, doi: 10.1093/mnras/stw2437
* Rix et al. (2004) Rix, H.-W., Barden, M., Beckwith, S. V. W., et al. 2004, ApJS, 152, 163, doi: 10.1086/420885
* Román et al. (2020) Román, J., Trujillo, I., & Montes, M. 2020, A&A, 644, A42, doi: 10.1051/0004-6361/201936111
* Rudick et al. (2009) Rudick, C. S., Mihos, J. C., Frey, L. H., & McBride, C. K. 2009, ApJ, 699, 1518, doi: 10.1088/0004-637X/699/2/1518
* Rudick et al. (2006) Rudick, C. S., Mihos, J. C., & McBride, C. 2006, ApJ, 648, 936, doi: 10.1086/506176
* Rudick et al. (2011) Rudick, C. S., Mihos, J. C., & McBride, C. K. 2011, ApJ, 732, 48, doi: 10.1088/0004-637X/732/1/48
* Salpeter (1955) Salpeter, E. E. 1955, ApJ, 121, 161, doi: 10.1086/145971
* Santucci et al. (2020) Santucci, G., Brough, S., Scott, N., et al. 2020, arXiv e-prints, arXiv:2005.00541. https://arxiv.org/abs/2005.00541
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Seigar et al. (2007) Seigar, M. S., Graham, A. W., & Jerjen, H. 2007, MNRAS, 378, 1575, doi: 10.1111/j.1365-2966.2007.11899.x
* Sérsic (1968) Sérsic, J. L. 1968, Atlas de galaxias australes
* Shin et al. (2018) Shin, T.-h., Clampitt, J., Jain, B., et al. 2018, MNRAS, 475, 2421, doi: 10.1093/mnras/stx3366
* Slater et al. (2009) Slater, C. T., Harding, P., & Mihos, J. C. 2009, PASP, 121, 1267, doi: 10.1086/648457
* Sofue (1993) Sofue, Y. 1993, PASP, 105, 308, doi: 10.1086/133148
* Spavone et al. (2018) Spavone, M., Iodice, E., Capaccioli, M., et al. 2018, ApJ, 864, 149, doi: 10.3847/1538-4357/aad6e9
* Spavone et al. (2020) Spavone, M., Iodice, E., van de Ven, G., et al. 2020, A&A, 639, A14, doi: 10.1051/0004-6361/202038015
* Tal & van Dokkum (2011) Tal, T., & van Dokkum, P. G. 2011, ApJ, 731, 89, doi: 10.1088/0004-637X/731/2/89
* The Astropy Collaboration et al. (2018) The Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, arXiv e-prints, arXiv:1801.02634. https://arxiv.org/abs/1801.02634
* Thomas et al. (2005) Thomas, D., Maraston, C., Bender, R., & Mendes de Oliveira, C. 2005, ApJ, 621, 673, doi: 10.1086/426932
* Trujillo et al. (2011) Trujillo, I., Ferreras, I., & de La Rosa, I. G. 2011, MNRAS, 415, 3903, doi: 10.1111/j.1365-2966.2011.19017.x
* Trujillo & Fliri (2016) Trujillo, I., & Fliri, J. 2016, ApJ, 823, 123, doi: 10.3847/0004-637X/823/2/123
* Uson et al. (1991) Uson, J. M., Boughn, S. P., & Kuhn, J. R. 1991, ApJ, 369, 46, doi: 10.1086/169737
* van Kemenade et al. (2020) van Kemenade, H., wiredfool, Murray, A., et al. 2020, python-pillow/Pillow 7.1.1, 7.1.1, Zenodo, doi: 10.5281/zenodo.3738618
* Vazdekis et al. (2016) Vazdekis, A., Koleva, M., Ricciardelli, E., Röck, B., & Falcón-Barroso, J. 2016, MNRAS, 463, 3409, doi: 10.1093/mnras/stw2231
* Whiley et al. (2008) Whiley, I. M., Aragón-Salamanca, A., De Lucia, G., et al. 2008, MNRAS, 387, 1253, doi: 10.1111/j.1365-2966.2008.13324.x
* Williams et al. (2007) Williams, B. F., Ciardullo, R., Durrell, P. R., et al. 2007, ApJ, 656, 756, doi: 10.1086/510149
* Zhang et al. (2019) Zhang, Y., Yanny, B., Palmese, A., et al. 2019, ApJ, 874, 165, doi: 10.3847/1538-4357/ab0dfd
* Zibetti et al. (2005) Zibetti, S., White, S. D. M., Schneider, D. P., & Brinkmann, J. 2005, MNRAS, 358, 949, doi: 10.1111/j.1365-2966.2005.08817.x
# A Spatially-Resolved Survey of Distant Quasar Host Galaxies:
II. Photoionization and Kinematics of the ISM
Andrey Vayner Department of Physics, University of California San Diego, 9500
Gilman Drive La Jolla, CA 92093 USA Center for Astrophysics & Space Sciences,
University of California San Diego, 9500 Gilman Drive La Jolla, CA 92093 USA
Department of Physics and Astronomy, Johns Hopkins University, Bloomberg
Center, 3400 N. Charles St., Baltimore, MD 21218, USA Shelley A. Wright
Department of Physics, University of California San Diego, 9500 Gilman Drive
La Jolla, CA 92093 USA Center for Astrophysics & Space Sciences, University
of California San Diego, 9500 Gilman Drive La Jolla, CA 92093 USA Norman
Murray Canadian Institute for Theoretical Astrophysics, University of
Toronto, 60 St. George Street, Toronto, ON M5S 3H8, Canada Canada Research
Chair in Theoretical Astrophysics Lee Armus Spitzer Science Center,
California Institute of Technology, 1200 E. California Blvd., Pasadena, CA
91125 USA Anna Boehle ETH Zürich Wolfgang-Pauli-Str. 27 8093 Zürich,
Switzerland Maren Cosens Department of Physics, University of California San
Diego, 9500 Gilman Drive La Jolla, CA 92093 USA Center for Astrophysics &
Space Sciences, University of California San Diego, 9500 Gilman Drive La
Jolla, CA 92093 USA James E. Larkin Department of Physics and Astronomy,
University of California, Los Angeles, CA 90095 USA Etsuko Mieda National
Astronomical Observatory of Japan, Subaru Telescope, National Institutes of
Natural Sciences, Hilo, HI 96720, USA Gregory Walth Observatories of the
Carnegie Institution for Science 813 Santa Barbara Street Pasadena, CA 91101
USA
(Accepted January 15, 2021)
###### Abstract
We present detailed observations of photoionization conditions and galaxy
kinematics in eleven z$=1.39-2.59$ radio-loud quasar host galaxies. Data were
taken with the OSIRIS integral field spectrograph (IFS) and the adaptive optics
system at the W.M. Keck Observatory, targeting nebular emission lines
(H$\beta$, [OIII], H$\alpha$, [NII]) redshifted into the near-infrared (1-2.4
µm). We detect extended ionized emission on scales of 1-30 kpc, photoionized
by stars, shocks, and active galactic nuclei (AGN). Spatially resolved
emission-line ratios indicate that our systems reside off the low-redshift
star-forming and AGN-mixing sequences of the Baldwin, Phillips $\&$ Terlevich
(BPT) diagram. The difference between the line ratios of low-redshift galaxies
and our sample is driven mainly by lower gas-phase metallicities, which are
2-5$\times$ lower than in galaxies with AGN in the nearby Universe. Using gas
velocity dispersion as a proxy for stellar velocity dispersion, together with
dynamical mass measurements from inclined-disk modeling, we find that the
quasar host galaxies are under-massive relative to their central supermassive
black hole (SMBH) masses, with all systems residing off the local scaling
($M_{\bullet}-\sigma$, $M_{\bullet}-M_{*}$) relationships. These
quasar host galaxies require substantial growth, up to an order of magnitude
in stellar mass, to evolve into present-day massive elliptical galaxies.
Combining these results with part I of our survey (Vayner et al., 2021), we
find evidence for winds capable of providing feedback before the AGN host
galaxies land on the local scaling relation between black hole and galaxy
stellar mass, and before the ISM is enriched to the level observed in local
galaxies with AGN.
Journal: ApJ. Software: OSIRIS DRP (Larkin et al., 2013), Matplotlib
(Hunter, 2007), SciPy (Virtanen et al., 2020), NumPy (Harris et al., 2020),
Astropy (Astropy Collaboration et al., 2018), MAPPINGS (Alarie & Morisset,
2019), emcee (Foreman-Mackey et al., 2013)
## 1 Introduction
Today, feedback from supermassive black holes (SMBH) is an integral part of
galaxy evolution models. It is commonly used to explain the lack of observed
baryons in local massive galaxies (Behroozi et al., 2010), the enrichment of
the circumgalactic medium with metals (Prochaska et al., 2014) and the
observed local scaling relation between the mass of the galaxy/bulge and the
SMBH (Ferrarese & Merritt, 2000; Gebhardt et al., 2000; McConnell & Ma, 2013).
The latest observational and theoretical results point to a critical question:
at what point does the AGN drive an outflow powerful enough to clear the
galaxy of its gas into the surrounding CGM (King & Pounds, 2015)? According to
theoretical work, this typically happens once the galaxy reaches the
$M_{\bullet}-\sigma$ relationship (Zubovas & King, 2014). However, there has
been growing evidence for galaxies with massive SMBH and powerful outflows
that are offset from the local scaling relationship (Vayner et al., 2017). The
origin and evolution of the local scaling relationships with redshift have
been an active debate topic over the last decade. When are the local scaling
relations established? Are the local scaling relationships the end product of
galaxy evolution? That is, as galaxies form and evolve, do they fall in and
out of the relationships due to rapid growth or feedback processes? Do galaxies
eventually end up on the local scaling relations once the galaxy or the SMBH
catches up and finishes growing (Volonteri, 2012)? Alternatively, is there an
inherent evolution in the scaling relationships with redshift and a symbiosis
between galaxy and SMBH growth (i.e., evolution in slope, offset, and scatter)?
Finally, there is still an open question regarding the role of quasar feedback
in establishing the relationship and whether the merging of galaxies can
produce the $M_{\bullet}-\sigma~{}$relation following the central limit
theorem (Jahnke & Macciò, 2011). From a sample of AGN in the COSMOS field,
Merloni et al. (2010) find an offset in the local scaling relationship between
redshift 0 and 2. These authors use SED decomposition with numerous spectral
bands to measure the stellar masses of the AGN host galaxies in the redshift
range $1<z<2.2$. From a sample of lensed quasars at $1<z<4.5$ with broadband
HST imaging, Peng et al. (2006) find an offset in the local scaling
relationship. In contrast, Sun et al. (2015), using multi-band SED fitting of
galaxies in the COSMOS field, find that z$\sim 0.2-2$ galaxies are consistent
with lying on the local scaling relationship. Schramm & Silverman (2013),
using HST imaging in the Extended Chandra Deep Field-South, also find that
galaxies at z$\sim 0.6-1$ are consistent with lying on the local scaling
relationship. In the nearby Universe, there is tentative evidence that all of
the most massive black holes ($>10^{9}$ M⊙) are systematically over-massive
relative to their host galaxies (Martín-Navarro et al., 2018). Fields such as
COSMOS or the Extended Chandra Deep Field-South cover relatively small areas
of the sky; hence, they contain few luminous quasars with massive SMBHs.
Studies that explored the evolution of the local scaling relationships have
therefore generally focused on lower-mass black holes, with masses $<10^{9}$
M⊙. Furthermore, a large fraction of these studies used broadband HST imaging
to study the host galaxies of their quasars/AGN. It is often difficult to
disentangle the bright AGN emission from the host galaxy at small angular
separations ($<$0.5″), and these studies have a limited number of filters with
which to measure the stellar population’s age and mass-to-light ratio.
Alternatively, mm-interferometry
observations have become an essential tool in measuring the dynamical masses
of quasar host galaxies across different redshift ranges. At the highest
redshifts (z$>4$), the [CII] 158µm line has been the most commonly used tracer
of the dynamics of the ISM. There is growing evidence that the most massive
($>10^{9}$ M⊙) SMBHs in the highest-redshift quasars known to date (z$>6$)
are over-massive for their host galaxies (Wang et al., 2013; Venemans et al.,
2016; Decarli et al., 2018), indicating that the most massive SMBHs in
high-redshift quasars grow first relative to their host galaxies. At
intermediate redshifts ($1<z<3$), some systems also appear to have
over-massive SMBHs relative to their stellar/dynamical masses (Shields et al.,
2006; Trakhtenbrot et al., 2015; Vayner et al., 2017). While a significant
fraction of galaxies with lower-mass SMBHs ($<10^{9}$ M⊙) lie close to or
within the scatter of the local scaling relations, galaxies hosting luminous
quasars and massive SMBHs appear to be under-massive relative to their SMBHs.
As outlined by Lauer et al. (2007) and Schulze & Wisotzki (2014), the offset
from the local scaling relations for systems with the most massive black holes
is biased by the steep decline in the galaxy mass function at the massive end.
Integral field spectroscopy (IFS) behind adaptive optics is another method
with which it is possible to disentangle the bright quasar emission from the
extended emission of the host galaxy. A point spread function can be
constructed using data channels confined to the broad emission line of the
quasar. After the point spread function is normalized, it is subtracted from
the rest of the data channels in the cube. This technique was first shown to
be able to resolve host galaxies of low redshift ($z<0.2$) luminous type-1
quasars in seeing limited observations (Jahnke et al., 2004) and extended
Ly$\alpha$ emission around high redshift quasars (Christensen et al., 2006).
Later, when the first near-infrared IFS came online along with their own
adaptive optics system, this technique was expanded to samples of higher
redshift quasars in search for their host galaxies (Inskip et al., 2011;
Vayner et al., 2016) and has been shown to work on all the 8-10 m class
near-infrared IFSs (e.g., SINFONI, NIFS, and OSIRIS). This technique has seen
continued success in seeing-limited optical IFS data as well (Herenz et al.,
2015; Arrigoni Battaia et al., 2019). This PSF subtraction routine provides
better contrast at small angular separations than HST, with an inner working
angle of 0.1-0.2″, compared to $\sim$0.5″ for HST (Vayner et al., 2016).
Although today’s near-infrared IFSs are not sensitive enough to detect the
stellar continuum from the quasar/AGN host galaxies, they can still detect
extended ionized emission, enabling us to extract the dynamical properties of
the galaxy (Inskip et al., 2011; Vayner et al., 2017) and compare systems to
the local scaling relation. However, today, the largest fraction of quasar
host galaxy masses still comes from HST and mm-interferometric observations.
Most likely, selection effects play an important role in determining whether
there is a systematic offset from the local scaling relations among the
different studies.
Besides measuring the host galaxies and SMBH masses, there are vital open
questions regarding the gas phase properties. Galaxies exhibit a correlation
between the stellar mass and metallicity across a wide redshift range (Erb et
al., 2006a; Sanders et al., 2015). It is often difficult to place galaxies
with bright AGN on the mass-metallicity relationship due to limited contrast
and the fact that the AGN has a strong impact on the ISM’s ionization state.
What are the metallicities of the gas in quasar hosts? How does the
metallicity in quasar host galaxies evolve with redshift? What is the dominant
source of ionization in quasar hosts? What are the star formation rates? One
of the best ways to measure the ionization properties of the gas in galaxies
is through the BPT (Baldwin, Phillips $\&$ Terlevich) diagram (Baldwin et al.,
1981; Veilleux & Osterbrock, 1987). The traditional BPT diagram plots the
ratio of log([OIII]/H$\beta$ ) vs. log([NII]/H$\alpha$) and contains two
clearly defined sequences: the star-forming sequence and the mixing sequence.
The star-forming sequence provides information about the metallicity of HII
regions, the stellar ionizing radiation field as well as information on the
gas condition in star-forming regions. On the other hand, the mixing sequence
consists of gas photoionized by hot stars, AGN, and shocks. It can potentially
provide information on the hardness of the AGN ionizing radiation, the
metallicity of the gas photoionized by the quasar/AGN (Groves et al., 2006),
and shocks. Studies of high-redshift star-forming galaxies have shown evidence
for elevated line ratios relative to low redshift galaxies. At z$\sim$2, the
observed elevated line ratios have been attributed to denser ISM conditions
(Sanders et al., 2016) and harder ionizing radiation fields at fixed N/O and
O/H abundances relative to typical z=0 galaxies (Strom et al., 2017).
Evolutionary BPT models by Kewley et al. (2013a) are consistent with these
observations. The evolutionary BPT models also provide a prediction on the
evolution of the mixing sequence between z=0 and 3. The location of the mixing
sequence moves to lower log([NII]/H$\alpha$) values at a relatively fixed
log([OIII]/H$\beta$) value, primarily due to the lower average gas-phase
metallicity at higher redshift (Groves et al., 2006; Kewley et al., 2013a).
There is tentative evidence that gas photoionized by AGN is consistent with
this picture, as there are several galaxies with AGN, which have emission line
ratios offset from the local mixing sequence (Juneau et al., 2014; Coil et
al., 2015; Strom et al., 2017; Nesvadba et al., 2017b; Law et al., 2018).
Given the presence of AGN, young stars, and shocks in quasar host galaxies,
it is crucial to spatially resolve the quasar host galaxy to understand the
various contributions to gas ionization. In the distant Universe, this
generally requires observations with an IFS and adaptive optics. Resolved BPT
diagnostics in both nearby and distant AGN/quasar host galaxies have found
regions with distinct photoionization mechanisms (Davies et al., 2014;
Williams et al., 2017; Vayner et al., 2017). The question remains whether the
ISM conditions in the most luminous high-redshift quasar host galaxies differ
from those in local AGN, and where these systems lie relative to the
mass-metallicity relationship.
We have begun a survey to study the host galaxies of z$=1.4-2.6$ radio-loud
quasars, which are likely to evolve into the most massive elliptical galaxies
in the nearby Universe. The sample consists of eleven objects, selected to
have young jets with sizes up to 20 kpc, in order to study their impact on
galactic scales at early times. The observations consist of near-infrared IFS
observations behind laser-guide-star adaptive optics (LGS-AO) at the W.M. Keck
Observatory with the OSIRIS instrument. The survey aims to understand the
gas-phase conditions and ionization mechanisms in high-redshift quasar host
galaxies, to search for evidence of quasar feedback, and to weigh the masses
of the quasar hosts. The observations target nebular emission lines (H$\beta$,
[OIII], H$\alpha$, [NII], [SII]) redshifted into the near-infrared bands
($1-2.4$ µm); at the distance of our sample, the angular resolution of the
OSIRIS/LGS-AO mode corresponds to approximately 1.4 kpc in projection.
This paper is the second of two papers focusing on understanding the
photoionization mechanisms of the gas in radio-loud quasar host galaxies and
on weighing the masses of the galaxies and their SMBHs to compare them to the
local scaling relations. Refer to paper I (Vayner et al., 2021) for details on
the sample selection, properties, and data reduction. Details on the archival
HST imaging data set are presented in §2. Black hole masses are presented in
§3. We describe how we identify spatially-resolved dynamically quiescent
regions in each quasar host galaxy in §4.1; resolved BPT diagrams and our
interpretation of the line ratios are presented in §5; the dynamical masses of
the quasar host galaxies and their place relative to the local scaling
relations are presented in §7 and §6; we discuss our results in the broader
context of massive galaxy evolution in §8 and present our conclusions in §9.
Notes on individual sources are presented in §9. Throughout the paper we
assume a $\Lambda$-dominated cosmology (Planck Collaboration et al., 2014)
with $\Omega_{M}$=0.308, $\Omega_{\Lambda}$=0.692, and $H_{0}$=67.8 km s-1
Mpc-1. All magnitudes are on the AB scale unless otherwise stated.
## 2 Archival HST imaging
The sources within our sample have a rich set of multi-wavelength space and
ground-based data sets. To assist in our analysis and interpretation of
distinct regions within these quasar host galaxies, we utilize high angular
resolution images from the Hubble Space Telescope. We download fully-reduced
data from the Barbara A. Mikulski Archive for Space Telescopes (MAST). Table 1
lists the archival HST observations used in this study.
Table 1: Archival HST imaging
Object | Proposal ID | Instrument | Filter | Exposure time
---|---|---|---|---
| | | | (s)
3C446 | 12975 | ACS-WFC | F814W | 2200
3C298 | 13023 | WFC3-UV | F606W | 1100
3C268.4 | 13023 | WFC3-UV | F606W | 1100
4C09.17 | 5393 | WFPC2 | F555W | 2100
3C9 | 13945 | ACS-WFC | F814W | 2040
We construct a model of the PSF using stars in the vicinity of the quasar
within the FOV of each instrument. Images centered on each star are extracted
in a box region of roughly 5″ × 5″. We then subtract the local background for
each star and median-combine the stellar images into a normalized “master”
PSF. This PSF is then re-scaled to the quasar’s peak flux and subtracted at
the spatial location of the quasar. In cases where the quasar was saturated,
we instead scale to the flux in the diffraction pattern of the PSF.
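A minimal sketch of this empirical PSF construction and subtraction, using NumPy and Astropy; the cutout size, star positions, and file name are placeholders rather than values from our analysis.

```python
import numpy as np
from astropy.io import fits
from astropy.nddata import Cutout2D

def master_psf(image, star_positions, size=51):
    """Median-combine background-subtracted, peak-normalized stellar cutouts."""
    stamps = []
    for pos in star_positions:          # pos = (x, y) in pixels
        stamp = Cutout2D(image, pos, size).data.astype(float)
        stamp -= np.median(stamp)       # local background estimate
        stamp /= stamp.max()            # normalize to unit peak
        stamps.append(stamp)
    return np.median(stamps, axis=0)

image = fits.getdata("qso_field.fits")                 # placeholder file
psf = master_psf(image, [(120, 340), (410, 88)])       # placeholder star positions
qx, qy = 256, 256                                      # quasar pixel position (placeholder)
half = psf.shape[0] // 2
region = np.s_[qy - half:qy + half + 1, qx - half:qx + half + 1]
image[region] -= image[qy, qx] * psf    # re-scale master PSF to quasar peak, subtract
```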
## 3 Black hole mass measurement
Black hole masses are calculated from the broad H$\alpha$ line width and
luminosity, using the scaling relation of Greene & Ho (2005) for a
single-epoch SMBH mass estimate. We describe the details of the nuclear
spectrum fitting in Vayner et al. (2021); the fit comprises multi-Gaussian
models with a broad component for the BLR emission, a narrow Gaussian for the
narrow-line region, and an intermediate-width Gaussian for cases where there
is an outflow. We use the flux and width of the broadest Gaussian to compute
the black hole mass. For 3C9 and 3C298, there are strong telluric/filter
transmission issues that prevent an accurate measurement of the FWHM of the
emission line.
For these targets, we use the Mg II single epoch black hole mass estimate from
Shen et al. (2011). The black hole masses are provided in Table 2. We assume
an uncertainty of 0.4 dex on the SMBH masses.
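For reference, the Greene & Ho (2005) single-epoch estimator scales a virial mass with the broad H$\alpha$ luminosity and FWHM; a sketch follows, with purely illustrative input values.

```python
import numpy as np

def mbh_greene_ho05(L_halpha, fwhm):
    """Single-epoch SMBH mass (Msun) from Greene & Ho (2005).

    L_halpha: broad H-alpha luminosity (erg/s); fwhm: broad H-alpha FWHM (km/s).
    """
    return 2.0e6 * (L_halpha / 1e42) ** 0.55 * (fwhm / 1e3) ** 2.06

# Illustrative values only: L = 1e44 erg/s, FWHM = 5000 km/s -> log MBH ~ 8.8,
# with the 0.4 dex systematic uncertainty adopted in the text.
print(f"log MBH = {np.log10(mbh_greene_ho05(1e44, 5000.0)):.2f}")
```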
Table 2: QUART sample properties
Name | RA | DEC | z | Lbol | L178MHz | MBH
---|---|---|---|---|---|---
| J2000 | J2000 | | ($10^{46}$ erg s-1 ) | ($10^{44}$ erg s-1 ) | log(MBH/M⊙)
3C 9 | 00:20:25.22 | +15:40:54.77 | 2.0199 | 8.17$\pm$0.31 | 9.0 | 9.87
4C 09.17 | 04:48:21.74 | +09:50:51.46 | 2.1170 | 2.88$\pm$0.14 | 2.6 | 9.11
3C 268.4 | 12:09:13.61 | +43:39:20.92 | 1.3987 | 3.57$\pm$0.14 | 2.3 | 9.56
7C 1354+2552 | 13:57:06.54 | +25:37:24.49 | 2.0068 | 2.75$\pm$0.11 | 1.4 | 9.86
3C 298 | 14:19:08.18 | +06:28:34.76 | 1.439 | 7.80$\pm$0.30 | 12 | 9.51
3C 318 | 15:20:05.48 | +20:16:05.49 | 1.5723 | 0.79$\pm$0.04 | 4.0 | 9.30
4C 57.29 | 16:59:45.85 | +57:31:31.77 | 2.1759 | 2.1$\pm 0.1$ | 1.9 | 9.10
4C 22.44 | 17:09:55.01 | +22:36:55.66 | 1.5492 | 0.491$\pm$0.019 | 0.6 | 9.64
4C 05.84 | 22:25:14.70 | +05:27:09.06 | 2.320 | 20.3$\pm$1.00 | 4.5 | 9.75
3C 446 | 22:25:47.26 | -04:57:01.39 | 1.4040 | 7.76 | 4.4 | 8.87
4C 04.81 | 23:40:57.98 | +04:31:15.59 | 2.5883 | 0.62$\pm$0.02 | 9.3 | 9.58
## 4 Distinct regions within each quasar host galaxy
In this section we outline how we define various regions within the data cube
of each individual object.
### 4.1 Spatially-Resolved Dynamically “Quiescent” Regions
In the first survey paper, we outline our methodology for fitting the emission
lines in individual spaxels of our data cubes. From these fits, we derive
integrated intensity, velocity, and velocity dispersion maps. The errors on
the radial velocity and dispersion maps come directly from the least-squares
Gaussian model fit. The errors on the flux maps come from integrating a noise
spectrum in quadrature over the same wavelength range over which the emission
line is integrated. The fits are presented in the appendix of Vayner et al.
(2021). Here we utilize the radial velocity and dispersion maps to select
regions with low velocity dispersion, in order to search for gas in
gravitational motion and for regions where star formation may have recently
occurred.
We define a dynamically “quiescent” region of our data set as one that
contains gas with a velocity dispersion ($V_{\sigma}$) less than 250 km s-1 .
A quiescent region that belongs to the host galaxy of the quasar must also
have a radial velocity $|V_{r}|<400$ km s-1 , as we expect the maximum
rotational velocity for a given host galaxy to be at most 400 km s-1 ; the
maximum rotational velocity found for the most massive galaxies studied with
IFS at z$\sim$2 is about 400 km s-1 (Förster Schreiber et al., 2018). We
define gas with $|V_{r}|>400$ km s-1 and $V_{\sigma}<250$ km s-1 as belonging
to a merger system. A system is defined as a merger if there are components
with $|V_{r}|>400$ km s-1 or more than one distinct kinematic component. For
example, in the 3C298 system, two galactic disks are found to be offset by
less than 400 km s-1 . All radial
velocity and velocity dispersion measurements are relative to the redshift of
the quasar. The redshifts for the individual quasars are calculated in Vayner
et al. (2021) and are taken from the fit to the narrow-line region. For
sources with no spatially unresolved narrow emission, we use the redshift of
the broad-line region. We label quiescent regions in the following manner:
source name + direction + component A or B, where A = component associated
with the quasar and B = component associated with the galaxy merging with the
quasar host galaxy. We follow these with a one- or two-word comment about the
region; examples of description words are clump, diffuse, or tidal feature.
Here, clump refers to compact ionized emission, typically a few kpc in size,
of the kind commonly seen in high-redshift star-forming galaxies. Diffuse
refers to gas with a surface density lower than that of typical clumpy
star-forming regions. A tidal feature refers to ionized gas associated with a
tidal tail in a merging system, containing both diffuse and clumpy ionized gas
morphology.
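A minimal sketch of these kinematic cuts applied to the fitted maps (the 250 and 400 km s-1 thresholds are those quoted above; the array names are placeholders):

```python
import numpy as np

def classify_regions(v_r, v_sigma):
    """Boolean masks for quiescent host gas and quiescent merger gas.

    v_r, v_sigma: radial velocity and velocity dispersion maps (km/s),
    measured relative to the quasar redshift; NaN where no line was fit.
    """
    quiescent_host = (v_sigma < 250.0) & (np.abs(v_r) < 400.0)
    quiescent_merger = (v_sigma < 250.0) & (np.abs(v_r) > 400.0)
    return quiescent_host, quiescent_merger
```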
For each dynamically quiescent region, we construct a 1D spectrum by
integrating over its spaxels. We show an example of this for 4C09.17 in Figure
1; spectra of the distinct regions for the rest of the sources are presented
in the appendix (Figures 11-18). The emission lines in each spectrum are fit
with multi-Gaussian profiles. In these plots, we also present the outflow
regions from Vayner et al. (2021) to illustrate the location of dynamically
quiescent regions relative to turbulent regions in these quasar hosts. From
these fits, we derive integrated intensity and velocity dispersion that are
presented in Tables 3 and 5.
Figure 1: On the left, we present the spectra of distinct regions and fits to individual emission lines for the 4C09.17 system. On the right, we present the three-color composite where H$\alpha$ is color-coded red, [OIII] green, and [NII] blue. The contour outlines the spatial location of the region. The bar on the right in each stamp represents 1″ or approximately 8.6 kpc at the system's redshift.
Table 3: Fluxes of distinct “dynamically quiescent” regions in individual sources
Source | Region | $\rm F_{[OIII]}$ | $\rm F_{H\alpha}$ | $\rm F_{[NII]}$
---|---|---|---|---
| | 10-17 erg s-1 cm-2 | 10-17 erg s-1 cm-2 | 10-17 erg s-1 cm-2
3C9 | SE-SW component A | 199$\pm$20 | 65$\pm$7 | 21$\pm$2
| N component B | 127$\pm$13 | 40$\pm$4 | 15$\pm$1
4C09.17 | SW component A | 9.55$\pm$0.98 | 3.37$\pm$0.35 | 1.32$\pm$0.2
| W component B clumps | 26$\pm$3 | 10$\pm$1 | 0.77$\pm$0.13
| W component B diffuse | 92$\pm$9 | 25$\pm$2 | 3.5$\pm$0.4
3C268.4 | SW component B | 245$\pm$25 | 51$\pm$5 | 9$\pm$1
7C 1354+2552 | component A | 46$\pm$1 | 12$\pm$1 | –
| E component B | 6.2$\pm$0.6 | 4.7$\pm$0.5 | $<$0.7
3C298 | SE component B ENLR | 649$\pm$65 | 188$\pm$20 | 65$\pm$7
| SE component B tidal feature | 55$\pm$5 | 20$\pm$2 | 3.6$\pm$0.5
4C57.29 | NE component A | 26$\pm$3 | – | –
| N component A/B(?) | 12$\pm$1 | – | –
4C22.44 | N,S component A | 54$\pm$5 | 25$\pm$2 | 3.5$\pm$0.3
4C05.84 | SW component A clump | 7.7$\pm$0.8 | 3.3$\pm$0.3 | 0.48$\pm$0.05
3C446 | NW component A tidal feature | 11$\pm$1 | 5.9$\pm$0.6 | $<$0.15
| E-W component B | 132$\pm$10 | 48$\pm$4 | 6.9$\pm$1.0
### 4.2 Spatially unresolved narrow-line regions
We search for narrow, spatially unresolved emission in each object. To do so,
we first subtract a model of the extended emission, obtained from our fits to
each emission line in individual spaxels. We then perform aperture photometry
on the spatially unresolved emission and extract a spectrum. The emission
lines are fit with multiple Gaussian profiles. The fluxes of the narrow
($\sigma<250$ km s-1 ) emission lines from unresolved regions are presented in
Table 4. For sources where no unresolved narrow emission line is detected, we
place a 1$\sigma$ upper limit on the line flux. Based on the average angular
resolution of about 0.1″, the sizes of the unresolved narrow-line emitting
regions are $<$1 kpc.
Table 4: Fluxes of spatially unresolved narrow emission-line regions in individual sources
Source | $\rm F_{[OIII]}$ | $\rm F_{H\alpha}$ | $\rm F_{[NII]}$
---|---|---|---
| 10-17 erg s-1 cm-2 | 10-17 erg s-1 cm-2 | 10-17 erg s-1 cm-2
4C09.17 | 52$\pm$5 | 30$\pm$3 | 102$\pm$1
3C268.4 | 649$\pm$70 | 239$\pm$20 | 76$\pm$8
4C22.44 | 1521$\pm$200 | 102$\pm$10 | 3.5$\pm$0.3
3C318 | 35$\pm$4 | 66 $\pm$7 | 5$\pm$ 1
3C446 | 11$\pm$1 | 5.9$\pm$0.6 | $<$0.15
7C1354 | 65$\pm$10 | $<$4 | $<$4
4C57.29 | $<12$ | $<$5 | $<$ 10
4C04.81 | $<3$ | $<2$ | $<2$
4C05.84 | $<0.9$ | $<1$ | $<1$
## 5 Nebular Emission Line Diagnostics and Sources of Gas Excitation
In this section, we explore the photoionization mechanism in all distinct
regions of each quasar host galaxy. The Baldwin, Phillips $\&$ Terlevich (BPT)
diagram is used to differentiate between gas photoionization sources
(Baldwin et al., 1981). Here, we use the log([OIII]/H$\beta$ ) and
log([NII]/H$\alpha$) line flux ratios to distinguish heating from young stars,
AGN, and shocks.
To construct the BPT diagram for our sources, we integrated each emission line
over the same velocity width ($\Delta$V) and velocity offset relative to the
redshift derived from the [OIII] emission line at each spaxel. We integrate
the maps relative to [OIII] since it is typically the brightest emission line
in any given spaxel. The higher signal-to-noise [OIII] emission line leads to
a smaller spaxel-to-spaxel variation in the radial velocity and dispersion
maps, creating more consistent log([OIII]/H$\beta$) and log([NII]/H$\alpha$)
ratios between neighboring spaxels. We find that for the entire sample, the
standard deviation on the log([OIII]/H$\beta$ ) ratio decreases by 0.2 dex
compared to when integrating the cubes relative to the H$\alpha$ line.
A resolved BPT diagram allows us to investigate the source of ionization
throughout each quasar host galaxy. Due to sensitivity and, in some cases,
wavelength coverage, we cannot create an integrated emission-line map for
H$\beta$ on a similar scale to H$\alpha$ , [OIII] , or [NII] maps. For our BPT
diagrams, we construct our H$\beta$ map by assuming case B recombination
(H$\beta$ = H$\alpha$/2.86) with a gas temperature of $10^{4}$ K and an
electron density of $10^{2}$ cm-3. Assuming other recombination cases and ISM
conditions with reasonable temperatures and densities would not change our
results by a significant amount as the ratios between H$\beta$ and H$\alpha$
would only change at most by a factor of $\sim$1.3 (Osterbrock & Ferland,
2006). For sources with the brightest extended emission and wavelength
coverage of both H$\alpha$ and H$\beta$, we find a maximum V-band extinction
of 1 mag; however, in most cases the line ratios are consistent with case B
recombination. In regions where gas extinction is present, the
log([OIII]/H$\beta$ ) ratios are preferentially lower.
Only spaxels where at least H$\alpha$ and [OIII] were detected are analyzed
and presented here. Typically [NII] is detected in far fewer spaxels compared
to H$\alpha$ and [OIII] . For spaxels where only H$\alpha$ and [OIII] are
detected, we calculate a limit on [NII] by integrating a sky spectrum over the
same velocity width as [OIII] at the expected wavelength location of [NII] .
In Figure 2 we plot the ratios from each spaxel. Diamonds are regions where
[NII] , H$\alpha$ , and [OIII] were detected, and triangles are regions where
only H$\alpha$ and [OIII] were detected with a limit on the [NII] flux. A
total of 3160 spaxels are plotted, corresponding to 21 distinct galactic
regions.
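A sketch of how the two BPT axes are formed per spaxel under these choices, with case-B H$\beta$ and the [NII] limit substituted where the line is undetected (array names are placeholders):

```python
import numpy as np

def bpt_ratios(f_oiii, f_halpha, f_nii, f_nii_limit):
    """Per-spaxel BPT axes; H-beta is inferred from case B (H-alpha / 2.86)."""
    f_hbeta = f_halpha / 2.86                     # case B recombination
    y = np.log10(f_oiii / f_hbeta)                # log([OIII]/Hbeta)
    is_limit = np.isnan(f_nii)                    # spaxels plotted as triangles
    nii = np.where(is_limit, f_nii_limit, f_nii)  # use the limit if undetected
    x = np.log10(nii / f_halpha)                  # log([NII]/Halpha)
    return x, y, is_limit
```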
For each distinct region identified in Section 4.1 and in Vayner et al.
(2021), we overplot its line ratios and label it with a star. Individual
spaxels typically have high uncertainties in their ratios but tend to cluster
together on the BPT diagram. Integrating over each distinct region and
re-calculating the ratios from a high-SNR spectrum confirms that region’s
line ratios.
To conserve space, we do not over-plot the error bars on points from
individual spaxels in Figure 2; we only show the error bars of ratios computed
for integrated values of the distinct regions. In Figure 3, we plot points of
individual spaxels along with the error bars.
Figure 2: Line ratio diagnostics of individual resolved distinct regions. In
grey, we plot the line ratios of individual spaxels where at least [OIII] and
H$\alpha$ were detected at an SNR$>$3. Uncertainties on these line ratios are
generally large; hence, we also integrate over all spaxels in individual
regions to increase the SNR and lower the uncertainties on the line ratios. We
show region-integrated line ratios with the colored symbols where each object
has the same symbol, and each region has a different color. The names of the
distinct regions are given in the lower-left corner, and these match the
names given in Table 3. We present the evolutionary models of the mixing and
star-forming sequence with red and green curves from Kewley et al. (2013a). We
show the upper limit of a sequence with a straight line and the lower boundary
of each sequence with a dashed curve. Teal curves represent the bounds of the
two sequences where the majority of the line ratio in low redshift galaxies
fall. Our line ratios are consistent with a model where gas photoionized by
the quasar is denser, has lower metallicity, and experiences harder ionization
than the gas photoionized by AGN in nearby galaxies.
Figure 3: We present line ratio diagnostics for spaxels in each source where at least
[OIII] and H$\alpha$ were detected at an SNR greater than 3. We show the
uncertainties on the line ratios, which were omitted from Figure 2 to conserve
space. The dashed red line in each panel shows the theoretical separation
between gas photoionized by star formation and AGN or shocks from Kewley et
al. (2013a). Points above the line are photoionized by the quasar, while
regions below are photoionized by O and B stars. Solid black mesh represents
the location of radiative shocks from the astrophysical plasma modeling code
MAPPINGS V (Alarie & Morisset, 2019). The shock model uses solar abundances
from Gutkin et al. (2016). Either shocks or the quasar photoionizes the
majority of the gas within our systems.
### 5.1 Ionization Diagnostic Models
We find that a large portion of our line ratio values lies outside the two
typical sequences of the BPT diagram (Figure 2). At a fixed
log([NII]/H$\alpha$), nearly all values are above both the local mixing and
star-forming sequences. At a fixed log([OIII]/H$\beta$) value, nearly all
values are outside the local mixing sequence. A large portion of points falls
between the star-forming and mixing sequence, with relatively high
log([OIII]/H$\beta$ ) values. Metallicity, electron density, the hardness of
the ionization field, and the ionization parameter determines the location of
the galaxy/region on a given sequence. With changing conditions in the ISM
between z=0 and the median redshift of our sample, the locus of both the star-
forming and mixing sequence can change locations (Kewley et al., 2013a).
Galaxies at a fixed stellar mass have lower metallicities at high redshift
compared to galaxies today (Yuan et al., 2013). Near the peak of the star
formation rate density at z$\sim 1-3$, the ISM conditions and star formation
histories of star-forming galaxies may differ from local systems. Star
formation appears to happen in denser environments in the distant Universe
with higher electron densities (Sanders et al., 2016), akin to conditions seen
in local ULIRGs. According to Steidel et al. (2014) and Strom et al. (2017),
the ISM in high-redshift galaxies experiences harder ionization at fixed N/O
and O/H abundances than in z=0 star-forming galaxies. In addition, galaxies at
higher redshift have elevated N/O ratios (Shapley et al., 2015). Taken
together, Kewley et al. (2013a) show that such changes in ISM conditions can
alter the location of the star formation sequence between z=0 and z=2.5.
Notably, the combination of harder ionization, electron density, and
ionization parameter can shift the locus of the star-forming sequence to
higher log([NII]/H$\alpha$) and log([OIII]/H$\beta$ ) values. It appears that
UV/emission line selected galaxy samples tend to show a more significant shift
from the SDSS star formation locus, as evident in a large sample of 377 star-
forming galaxies explored by Strom et al. (2017). Nearly all their galaxies
have a higher log([OIII]/H$\beta$ ) value at a fixed log([NII]/H$\alpha$)
compared to local galaxies studied in SDSS. Various galaxy selection
techniques may lead to samples of galaxies with inherently different
ionization properties. However, the overall conclusion from studying
star-forming galaxies in the distant Universe is that the line ratios of these
systems lie on a different star-formation locus than in the local Universe.
Changes in the ISM conditions of distant galaxies may also lead to changes in
the location of the mixing sequence. Kewley et al. (2013a) and Groves et al.
(2006) show that for galaxies with lower metallicities, the mixing sequence
shifts to lower log([NII]/H$\alpha$) values with relatively small changes in
the log([OIII]/H$\beta$ ) value. We explore the various evolutionary models of
the star-forming and mixing sequence with redshift and ISM conditions proposed
by Kewley et al. (2013a). The best-fit model for our sample is the one where
the ISM of high-redshift galaxies has more extreme conditions (higher electron
density, a harder ionization field, and larger ionization parameters) and the
gas photoionized by the quasar has a lower metallicity than the gas ionized by
local AGN in the SDSS sample. The median
log([NII]/H$\alpha$) value is about 1.0 dex lower than that of the mixing
sequence at z=0. If the primary source in the shift of the mixing sequence
from z=0 to z=1.5-2.5 is a change in the gas-phase metallicity, then the gas
photoionized by the quasar in our sample has a metallicity 1/2-1/5 of that in
the narrow-line regions of z=0 AGN on the Kewley & Dopita (2002) metallicity
scale. One consequence of the shift in the mixing sequence is that it
becomes harder to distinguish between gas photoionized by AGN vs. star
formation, especially in systems with potentially multiple ionization sources.
Changes in the photoionization conditions likely also play a role in the
offset from the local mixing sequence. In a sample of local type-1 quasars,
Stern & Laor (2013) found that systems with higher bolometric luminosities and higher
Eddington ratios are systematically offset to lower log([NII]/H$\alpha$)
ratios.
Most of the gas in our quasar host galaxies lies in the mixing sequence where
the gas is photoionized by a combination of quasar ionization and radiative
shocks. In Figure 3, a significant fraction of the points in individual
objects match the predicted location of radiative shocks on the BPT diagram.
The radiative shock models assume solar metallicity and a preshock density of
215 cm-3 . With the present data, it is difficult to distinguish the
percentage of photoionization from shocks vs. AGN. However, given the overlap
of both line ratios and kinematics with shock models, we cannot rule out their
contribution to gas photoionization.
A number of our distinct regions appear to have low log([NII]/H$\alpha$)
values ($<$0.5) with low-velocity dispersion gas (V${}_{\sigma}<250$ km s-1 ).
Morphologically, these regions appear clumpy in their H$\alpha$ maps,
reminiscent of typical star-forming regions in galaxies at $z>1$. The line
ratios of these points do not coincide with regions of fast or slow shocks
photoionization on the BPT diagram (Allen et al., 2008; Newman et al., 2014).
Archival HST data for 3C9, 3C298, 4C09.17, 3C268.4, and 3C446 all show that the
dynamically “quiescent” regions in these galaxies have clumpy morphology in
rest-frame UV continuum data, similar to those of star-forming galaxies at
these redshifts. In Figure 4, we overlay the H$\alpha$ emission from
dynamically quiescent regions onto archival HST observations at rest-frame UV
wavelength. Combining these clues suggests that the quasar does not entirely
photoionize the gas in these regions. The elevated log([OIII]/H$\beta$ ) in
these regions compared to local and distant star-forming regions may be from
the mixing of photoionization from massive stars and the quasar. There is some
evidence for this based on the morphology of the ionized gas and their
respective log([OIII]/H$\beta$ ) ratios. For example, in 4C09.17, we see that
more diffuse emission with low-velocity dispersion tends to have a higher
log([OIII]/H$\beta$ ) value compared to clumpier regions where there is
evidence for recent star formation activity and potentially more shielding
from the quasar radiation field.
Using the empirical star formation rate-H$\alpha$ luminosity relationship from
Kennicutt (1998), we convert the H$\alpha$ luminosities of the distinct
extended quiescent regions into star formation rates. Most likely, the
majority of these regions are photoionized by a combination of AGN and star
formation; hence the derived star formation rates are upper limits. Regions
“3C298 SE component B Tidal feature” and “4C09.17 W component B clumps” have
line ratios most consistent with photoionization by O/B stars, so the star
formation rates derived in these regions are closer to their actual values. To
partially address the
contribution from AGN to photoionization in dynamically quiescent regions, we
also derive star formation rates using only spaxels whose line ratios fall,
within their errors, inside the star-forming region of the BPT diagram based on the
Kewley et al. (2013b) maximum separation between star formation and AGN
photoionization. Generally, we find lower (1/2 - 1/10) star formation rates
when using the BPT diagram criteria. We also measure the metallicities of
these regions using the Pettini & Pagel (2004) empirical gas-phase metallicity
- log([NII]/H$\alpha$) relationship. Given that log([NII]/H$\alpha$) is
elevated in the presence of an AGN/quasar ionization field, the metallicities
for the majority of the regions are also upper limits. We also calculate the
metallicity using theoretically derived chemical abundance calibration for
narrow-line regions of AGN (Storchi-Bergmann et al., 1998). We present
quantitative values of these regions in Table 5, where we show the H$\alpha$
luminosity of each quiescent region, along with the star formation rate,
metallicities, and velocity dispersion. Since nearly all of the unresolved
narrow line regions are consistent with quasar photoionization, we do not
include them in Table 5 with the exception of 3C318. In this object, the line
ratios are consistent with star formation, indicating a nuclear starburst on
scales $<$1 kpc. For cases where we do not detect any unresolved narrow
emission, we place an average upper limit on the star formation rate, in case
there is an ongoing nuclear starburst with a star formation rate below the
sensitivity of OSIRIS at our given exposure times. We find an average star
formation rate limit of 9 M⊙ yr-1 for the four objects (4C05.84, 4C04.81,
4C57.29, and 7C1354) where no strong narrow nuclear emission was detected.
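For concreteness, the two calibrations used above in sketch form; the input values in the example are illustrative, not measurements from Table 5.

```python
import numpy as np

def sfr_kennicutt98(L_halpha):
    """SFR (Msun/yr) from the Kennicutt (1998) H-alpha calibration (erg/s input)."""
    return 7.9e-42 * L_halpha

def oh_pp04_n2(f_nii, f_halpha):
    """12 + log(O/H) from the Pettini & Pagel (2004) N2 index."""
    return 8.90 + 0.57 * np.log10(f_nii / f_halpha)

# Illustrative only: L_Ha = 1e43 erg/s -> SFR ~ 79 Msun/yr;
# [NII]/Ha = 0.3 -> 12 + log(O/H) ~ 8.6.
print(sfr_kennicutt98(1e43), oh_pp04_n2(0.3, 1.0))
```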
Table 5: Star formation rates and metallicities of distinct dynamically
quiescent regions
Source | Region | L$_{\rm H\alpha}$ | SFRa | L$_{\rm H\alpha,BPT}$ | SFR$_{\rm BPT}$b | 12+log(O/H) | 12+log(O/H) | $\sigma_{gas}$
---|---|---|---|---|---|---|---|---
| | $10^{43}$ erg s-1 | M⊙ yr-1 | $10^{43}$ erg s-1 | M⊙ yr-1 | PP04 | SB98 | km s-1
3C9 | SE-SW A | 2$\pm$0.2 | 160$\pm$16 | 0.20$\pm$0.02 | 15$\pm$2 | 8.6 | 8.6 | 173.1$\pm$25.7
| N B | 1.2$\pm$0.1 | 99$\pm$10 | 0.06$\pm$0.006 | 5$\pm$1 | 8.6 | 8.5 | 200.5$\pm$5
4C09.17 | SW A | 0.12$\pm$0.01 | 9$\pm$1 | 0.04$\pm$0.01 | 3$\pm$1 | 8.6 | 8.5 | 126.6$\pm$3
| W B clumps | 0.35$\pm$0.04 | 28$\pm$3 | c | c | 8.2 | 8.4 | 136$\pm$4.7
| W B diffuse | 0.86$\pm$0.09 | 68$\pm$7 | c | c | 8.4 | 8.5 | 146.2$\pm$7
3C268.4 | SW B | 0.64$\pm$0.06 | 51$\pm$5 | 0.20$\pm$0.02 | 18$\pm$2 | 8.5 | 8.5 | 144.6$\pm$5
7C 1354+2552 | A | 0.37$\pm$0.04 | 29$\pm$3 | 0.4$\pm$0.04 | 33$\pm$3 | $<$8.5 | 8.4 | 182.16$\pm$38.2
3C298 | SE B TTd | 0.3$\pm$0.03 | 22$\pm$2 | c | c | 8.5 | 8.4 | 109.6$\pm$5.5
3C318 | Nuclear | 1.1$\pm$0.1 | 88$\pm$9 | c | c | $<$8.3 | $<$8.5 | 179.8$\pm$7.4
4C22.44 | N,S A | 0.40$\pm$0.04 | 32$\pm$3 | 0.2$\pm$0.02 | 20$\pm$2 | 8.4 | 8.4 | 184.6$\pm$6.5
4C05.84 | SW A clump | 0.14$\pm$0.01 | 11$\pm$1 | 0.05$\pm$0.01 | 5$\pm$0.5 | 8.5 | 8.4 | 198.7$\pm$16
3C446 | NW A TTd | 0.07$\pm$0.001 | 6$\pm$1 | 0.06$\pm$0.01 | 5$\pm$1 | $<$8.4 | $<$8.4 | 167.9$\pm$0.7
| E-W B | 0.48$\pm$0.05 | 38$\pm$4 | 0.15$\pm$0.02 | 12$\pm$2 | 8.5 | 8.5 | 204.3 $\pm$ 15
a: Star formation rate derived using the H$\alpha$ luminosity of the entire distinct quiescent region. b: Star formation rate derived using spaxels that fall within the star formation sequence on the BPT diagram. c: Value indistinguishable from the integrated value over the entire dynamically quiescent region. d: Tidal tail.
Figure 4: Detection of dynamically quiescent regions in archival Hubble Space
Telescope observation. In the background, we show PSF-subtracted images of
rest-frame UV emission in the quasar host galaxy. Overlaid in contours is the
extended H$\alpha$ emission of the dynamically quiescent regions detected with
OSIRIS. Note the similarities in both morphology and extent, indicating
massive young stars photoionize at least a portion of the gas. The bar
represents a spatial scale of 1″ or about 8.5 kpc.
## 6 SMBH-galaxy scaling relationships
In this section, we place our galaxies on the velocity dispersion and galaxy
mass vs. SMBH mass plots, comparing their locations to the local scaling
relations ($M_{\bullet}-\sigma~{}$and $M_{\bullet}-M_{*}~{}$). We calculate
the SMBH masses from the broad H$\alpha$ luminosity and line width using the
methodology presented in Greene & Ho (2005). The SMBH masses span a range of
$10^{8.87-9.87}$ M⊙ . The velocity dispersions are taken from dynamically
quiescent regions, while the galaxy masses are calculated from the virial
equation and from modeling the radial velocity of targets with rotating disks
and extracting a dynamical mass.
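A minimal sketch of this single-epoch estimate follows, with coefficients taken from the Greene & Ho (2005) calibration; the input luminosity and line width below are illustrative, not values from our sample.

```python
import numpy as np

# Sketch of the single-epoch SMBH mass from the broad Halpha line, following
# the Greene & Ho (2005) calibration:
# M_BH ~ 2.0e6 (L_Ha / 1e42 erg/s)^0.55 (FWHM_Ha / 1e3 km/s)^2.06 Msun.

def smbh_mass_greene_ho(L_halpha_erg_s, fwhm_km_s):
    """Single-epoch black hole mass (Msun) from broad Halpha."""
    return 2.0e6 * (L_halpha_erg_s / 1e42) ** 0.55 * (fwhm_km_s / 1e3) ** 2.06

# Example: a broad Halpha line with L = 1e45 erg/s and FWHM = 5000 km/s
m_bh = smbh_mass_greene_ho(1e45, 5000.0)
print(f"log10(M_BH/Msun) = {np.log10(m_bh):.2f}")  # ~9.4, within the quoted range
```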
### 6.1 Host Galaxy Velocity Dispersion
We identify several dynamically quiescent regions within most of the quasar
host galaxies in our sample. These regions show lower log([NII]/H$\alpha$)
line ratios and typically have clumpy morphology, reminiscent of the general
star-forming regions seen in nebular emission and UV continuum in high
redshift galaxies. In most galaxies, these regions lie away from any galactic-
scale outflows. Hence their observed dynamics could be a probe of the galactic
gravitational potential. These regions can be used to measure the velocity
dispersion of our quasar host galaxies. In combination with the measured black
hole masses, we can compare them to the local scaling relation between the
SMBH mass and the velocity dispersion of the galaxy/bulge. In Figure 5, we
plot the mass of the SMBH presented in Table 2 against the velocity dispersion
of distinct quiescent regions measured with the H$\alpha$ line. Also, we
include the velocity dispersion measured from CO (3-2) emission for 3C 298
from Vayner et al. (2017). We find a significant offset from the local scaling
relation between the SMBH mass and the velocity dispersion of the galaxy/bulge
($M_{\bullet}-\sigma~{}$) (Gültekin et al., 2009; McConnell & Ma, 2013). To
address the issue that the velocity dispersion may be systematically lower in
dynamically quiescent regions offset from the quasar (3C446) or regions where
the surface area of the narrow emission is significantly lower than the
outflow (4C09.17, 3C298), we recalculate the velocity dispersion in a larger
aperture that includes outflows and narrow emission. We see no strong
systematic difference in the velocity dispersion of the narrow gas. The
source-integrated narrow velocity dispersions for 3C298, 3C446, and 4C09.17 are 100.7
$\pm$ 16.6, 187.5 $\pm$ 1.0, and 144.0 $\pm$ 10.0 km s-1 , respectively. In the
nearby universe, the velocity dispersion is typically measured inside the
effective radius of the galactic bulge. The difference in our case is that we
do not know the true bulge sizes of our galaxies, as we have no way to measure
them with current data and telescope/instrument technology.
However, the extents of the dynamically quiescent regions are in the range of
the effective radii for bulges in massive ($10^{10.5-11.5}$ M⊙ ) galaxies
studied in the CANDELS survey (Bruce et al., 2014).
Figure 5: The location of our galaxies on the velocity dispersion vs. SMBH
mass plot compared to the local $M_{\bullet}-\sigma~{}$relationship. We use
the narrow H$\alpha$ emission line velocity dispersion of dynamically
quiescent regions as a proxy for the stellar velocity dispersion. Red stars
are the measurements from our sample, where we measure the velocity dispersion
from the narrow H$\alpha$ emission line. We measure the black hole masses
using the broad H$\alpha$ line from the broad-line-region discussed in section
3. The two blue stars represent the velocity dispersion measured in the disk
of the host galaxy of 3C 298 and the tidal tail feature 21 kpc away from the
quasar (Vayner et al., 2017). Blue circles are quasars from the Shields et al.
(2006) sample, where they measure the velocity dispersion from CO emission
lines. The yellow points are from quasars at z$>6$, where they measure the
velocity dispersion from the 158 µm [CII] emission line (Decarli et al.,
2018). Green points represent the local sample of all the bright cluster
galaxies with SMBH greater than $10^{9}$ M⊙ taken from McConnell & Ma (2013).
The blue curve represents the best fit to the entire galaxy sample from
McConnell & Ma (2013), with the blue shaded region representing the intrinsic
scatter on the $M_{\bullet}-\sigma~{}$relationship from the fit, while the
green curve is the fit to the sample studied in Gültekin et al. (2009). We
find a significant offset between the galaxies in our sample and both the local
BCGs and the general local $M_{\bullet}-\sigma~{}$relationship. This large offset
indicates that the host galaxies appear to be under-massive for their SMBHs.
Figure 6: We present the location of individual galaxies compared to the local
scaling relation between the mass of the SMBH and mass of the galaxy/bulge
shown with a blue curve. Blue points represent systems with virial dynamical
masses. Red points represent systems where we calculate the dynamical mass by
modeling the radial velocity maps with an inclined disk model. Gray points
show the location of galaxies at z$>0.5$, with lower SMBH masses and lower AGN
luminosity compared to our sample. The blue curve represents the local scaling
relationship as measured in McConnell & Ma (2013), with the shaded region
representing the intrinsic scatter. We find the majority of our points are
offset from the local scaling relationship, outside the observed scatter.
There have seen numerous discussions in the literature about whether the
velocity dispersion measured from gas traces the stellar velocity dispersion.
The gas and stars might not have the same uniform distribution, and winds can
broaden the nebular emission lines. Furthermore, the line of sight absorption
and emission lines from which the velocity dispersion is calculated are
luminosity weighted subject to galactic dust extinction. Because of the
different light distribution between stars and gas, the measured velocity
dispersion can differ. These differences can lead to increased scattering in
any correlation between $\sigma_{*}$ and $\sigma_{gas}$ . Data-sets that
spatially resolve the gas and stellar components and have enough resolving
power to separate multi-component emission from different regions (e.g.,
outflowing/in-flowing gas vs. galactic disk rotation) are important when
making a comparison between $\sigma_{*}$ and $\sigma_{gas}$ . Bennert et al.
(2018), for a large sample of local AGN, find that fitting a single Gaussian
component to the [OIII] emission line can overestimate the stellar velocity
dispersion by about 50-100$\%$. Only by fitting multiple
Gaussian components to account for both the narrow core and the broader wings
of the [OIII] line profile can they adequately match the velocity dispersion
from the narrow component of the [OIII] line to that of the stellar velocity
dispersion. For their entire sample, the average ratio between the velocity
dispersion of the narrow Gaussian component and the stellar velocity dispersion is
$\sim$1. The 1$\sigma$ scatter on the ratio between $\sigma_{[OIII],narrow}$
and $\sigma_{*}$ is about 0.32 with a maximum measured ratio of about a factor
of 2 which translates to a scatter in
$\Delta\sigma=\sigma_{[OIII]}-\sigma_{*}$ of 43.22 km s-1 with a maximum
difference of about $\pm$100 km s-1 . However, only a few sources show such
drastic velocity differences ($\sim 2.5\%$ of the entire sample, 82$\%$ of the
sources show $|\sigma_{[OIII]}-\sigma_{*}|<$ 50 km s-1 ). When fitting for the
$M_{\bullet}-\sigma~{}$relationship with the narrow [OIII] emission as a proxy
for stellar velocity dispersion, the resultant fit agrees with that of
quiescent galaxies and reverberation-mapped AGNs. These results indicate that,
for the sample as a whole, Bennert et al. (2018) find that both the stars and
gas follow the same gravitational potential.
Given the Bennert et al. (2018) results demonstrating that the gas velocity
dispersion can be used as a proxy for the stellar velocity dispersion, we
follow a similar analysis using our IFS data sets to explore the location of
our galaxies relative to the $M_{\bullet}-\sigma~{}$relation at high redshift.
We attempted, to the best of our ability, to separate regions that contain
galactic-scale winds from those with more quiescent kinematics, both
spectrally and spatially, with OSIRIS. Hence, similar to Bennert et al.
(2018), we consider the measured velocity dispersions in quiescent regions
to be good tracers of the galactic potential on average. Throughout the paper we
use the narrow velocity dispersion of [OIII] and H$\alpha$ emission lines of
dynamically quiescent regions as a proxy for the stellar velocity dispersion.
We still find a significant offset for our sample after applying the observed
scatter in the difference between $\sigma_{*}$ and $\sigma_{gas}$ . This is
also true when applied to the more distant quasar host galaxies studied with
158 µm [CII] emission. In nearby galaxies, the velocity dispersion depends on
the radius from the galaxy center (Bennert et al., 2018; Ene et al., 2019).
However, based on the local galaxy observations, the velocity dispersion is
unlikely to increase by the $\sim$ 200 km s-1 necessary to bring the galaxies
onto the local scaling relations.
Using N-body/smoothed-particle hydrodynamics simulations, Stickley & Canalizo
(2014) examine how the stellar velocity dispersion evolves in a binary galaxy
merger. At various stages in the merger (e.g., a close passage, nucleus
coalescence), they measure the stellar velocity dispersion along $10^{3}$
random lines of sight. Near each close passage and during coalescence, they
find that the scatter on the velocity dispersion significantly increases from
$\sim 5-11$ km s-1 to about 60 km s-1 with the average velocity dispersion a
factor of $\sim$1.7 higher than after the galaxies have finished merging. For
several sources in our sample (3C9, 3C298, and 3C446), the measured velocity
dispersion might be higher than what it will be once the galactic merger is
complete, adding uncertainty due to projection effects. Following the
simulation results, we add in quadrature an additional uncertainty on the
velocity dispersion of 60 km s-1 , given that the majority of our mergers are
near coalescence or a close passage ($\Delta R<10$ kpc).
It should be noted that this is near the maximum scatter seen in the
simulations on $\sigma$. These simulations also find that for merging galaxies
at their maximum separation, the measured velocity could be a factor of $\sim
1.7$ times smaller compared to the final system. They find that for a 1:1
merger, the maximum separation after the first passage is 10-100 kpc, which is
much larger than any separation that we find in our systems from observed
projected separations and measured relative velocities. No obvious merging
companions are found for 3C318, 4C22.44, or 4C05.84; hence, for these systems,
the mergers might be past their coalescence stage where the measured velocity
dispersion is close to its final value, and the scatter due to the line of
sight effects is minimal ($\sim$ a few km s-1 ). However, we still apply an
additional 60 km s-1 uncertainty in these regions.
Even after these corrections are made to approximate the stellar velocity
dispersion from the [OIII] emission lines in our sample, we still find that
all of our systems are offset from the local scaling relation between the mass
of the SMBH and the velocity dispersion of the bulge/galaxy. Given that we are
dealing with relatively small sample size, we performed statistical tests to
confirm the offset between the local scaling relation and our sample. We
measure the offset between the observed and predicted velocity dispersion for
the SMBH mass of our systems for each object. We use the local scaling
relation fit from McConnell & Ma (2013) and the H$\alpha$-measured SMBH masses.
We construct a data set consisting of velocity differences. From bootstrap re-
sampling of the velocity difference data set, we find that the average offset
of 188.7 km s-1 is significant at the 3.25$\sigma$ level. Similarly, using
jackknife re-sampling, we find that the offset is significant at the
3.3$\sigma$ level, with a 95$\%$ confidence interval of 154.4 km s-1 to 223.0
km s-1 on the velocity dispersion offset. Performing similar statistical tests
on the Decarli et al. (2018) sample, we find an average offset from the local
relationship of 178.8 km s-1 , with the shift significant at 2.7$\sigma$ and
2.8$\sigma$ for jackknife and bootstrap re-sampling, respectively. We also
measure the offsets of massive BCGs in the local Universe from the
$M_{\bullet}-\sigma~{}$relationship. Using a two-sided Kolmogorov-Smirnov
test, we can ask if the observed offsets of the local and high redshift data
sets are drawn from the same continuous distribution. We find a p-value of
5.7$\times 10^{-9}$, indicating that the two populations are not drawn from
the same distribution. Applying the Kolmogorov-Smirnov test to the velocity
dispersion offsets from our sample and in the higher redshift quasars, we find
a p-value of 0.84, indicating that these two data sets could be drawn from the
same continuous distribution. We find similar results by comparing the Shields
et al. (2006) sample at $z\sim 2$ to our own and that of Decarli et al.
(2018).
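A schematic version of these re-sampling tests is sketched below; the offset data are hypothetical stand-ins for the measured velocity differences, and the M-sigma fit inverted here is the McConnell & Ma (2013) relation quoted in the caption of Figure 10.

```python
import numpy as np
from scipy import stats

# Schematic re-sampling tests. The SMBH masses and observed dispersions are
# hypothetical; the M-sigma fit is the McConnell & Ma (2013) relation,
# log10(M_BH/Msun) = 8.32 + 5.64 log10(sigma / 200 km/s).

rng = np.random.default_rng(42)

def predicted_sigma(m_bh_msun):
    """Invert the M-sigma fit to predict sigma (km/s) for a given SMBH mass."""
    return 200.0 * 10 ** ((np.log10(m_bh_msun) - 8.32) / 5.64)

def bootstrap_significance(offsets, n_boot=10000):
    """Mean offset in units of its bootstrap standard error."""
    means = [rng.choice(offsets, size=len(offsets), replace=True).mean()
             for _ in range(n_boot)]
    return offsets.mean() / np.std(means)

# Hypothetical high-z sample: SMBH masses (Msun) and measured sigma (km/s)
m_bh = 10 ** rng.uniform(8.9, 9.9, size=9)
sigma_obs = rng.normal(160.0, 30.0, size=9)
offsets_highz = predicted_sigma(m_bh) - sigma_obs

# Hypothetical local comparison sample scattered around zero offset
offsets_local = rng.normal(0.0, 40.0, size=30)

print("bootstrap significance:", bootstrap_significance(offsets_highz))
# Two-sided KS test: are the two offset distributions drawn from the same parent?
print("KS p-value:", stats.ks_2samp(offsets_highz, offsets_local).pvalue)
```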
## 7 Dynamical mass measurements
We can also test whether these systems lie off the local scaling relationship
between the SMBH mass and the dynamical mass of the bulge/galaxy. First, we
use a virial estimator for the dynamical mass of the galaxy, $\rm
M_{virial}=\frac{C\sigma^{2}r}{G}$, where C=5 for a uniform rotating sphere
(Erb et al., 2006b). We assume 7 kpc for the radius, which is the median
effective radius of massive quiescent galaxies in the local Universe (Ene et
al., 2019). Here $\sigma$ is derived from a Gaussian fit to the integrated
spectra over the distinct region. For galaxies with multiple distinct regions,
we derive two or more dynamical masses, as the velocity dispersion may depend
on position within the galaxy. For systems in
a clear merger, the galactic component belonging to the quasar is used to
estimate the dynamical mass since we are interested in the correlation between
the SMBH and the velocity dispersion of the quasar host galaxy.
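A minimal sketch of the virial estimator, assuming astropy for unit handling; the dispersion value is illustrative, and r = 7 kpc follows the assumed radius above.

```python
from astropy import units as u
from astropy.constants import G

# Sketch of the virial mass estimator quoted above,
# M_virial = C * sigma^2 * r / G with C = 5 (uniform rotating sphere).

def virial_mass(sigma_km_s, r_kpc=7.0, C=5.0):
    sigma = sigma_km_s * u.km / u.s
    r = r_kpc * u.kpc
    return (C * sigma**2 * r / G).to(u.Msun)

print(virial_mass(180.0))  # ~2.6e11 Msun for sigma = 180 km/s
```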
For systems with velocity shear in the 2D radial velocity map, we fit a 2D
inclined disk model to the kinematics data to measure the dynamical mass. The
model is a 2D arctan function
$V(r)=\frac{2}{\pi}V_{max}\arctan\Big{(}\frac{r}{r_{dyn}}\Big{)},$ (1)
where $V(r)$ is the rotation velocity at radius $r$ from the dynamical center,
$V_{max}$ is the asymptotic velocity, and $r_{dyn}$ is the radius at which
the arctangent function transitions from rising to flat velocity. The
measured line-of-sight velocity from our observations relates to V(r) as
$V=V_{0}+\sin i\cos\theta V(r),$ (2)
where
$\cos\theta=\frac{(\sin\phi(x_{0}-x))+(\cos\phi(y_{0}-y))}{r}.$ (3)
Radial distance from the dynamical center to each spaxel is given by
$r=\sqrt{(x-x_{0})^{2}+\Big{(}\frac{y-y_{0}}{\cos i}\Big{)}^{2}},$ (4)
where $(x_{0},y_{0})$ is the spaxel location of the dynamical center (we quote
the value relative to the centroid of the quasar), $V_{0}$ is the velocity
offset at the dynamical center relative to the redshift of the quasar, $\phi$
is the position angle in spaxel space, and $i$ is the inclination of the disk.
Note that $V_{max}$ is not the true “plateau” velocity of the galaxy’s disk;
it can take arbitrarily large values, especially when $r_{dyn}$ is very small
(Courteau, 1997). To fit the data we use the MCMC code emcee. We construct the
model in a grid with a smaller plate scale than the observed data, which gets
convolved with a 2D Gaussian PSF with an FWHM measured from the quasar PSF
image. The image is then re-sized to the plate scale of the data. We construct
the priors on each of the seven free parameters. The prior on $V_{max}$ is
$300<V_{max}<1000$ km s-1 ; the prior on both $x_{0}$ and $y_{0}$ is the
boundary of the FOV of the imaged area; the prior on the position angle is
$0<\phi<2\pi$; the prior on the inclination angle is $0<i<\pi/2$; the prior on
the radius is $0.5<r_{dyn}<10$ pixels; and the prior on $V_{0}$ is
$-100<V_{0}<100$ km s-1 . We then sample this distribution with emcee. We
initialize 1000 walkers for each free parameter, using the best-fit values
from least-squares fitting as the starting point with a small random
perturbation in each walker. We run MCMC for 500 steps starting from the
perturbed initial values. The best-fit parameters, along with their
confidence intervals, are presented in Table 6 for the
quasar host galaxies of 7C 1354+2552 and 3C9. For 3C 298, we do not see the
disk in ionized emission in the OSIRIS data; it is detected solely in the CO
(3-2) observations from ALMA, so we present the best-fit values from Vayner
et al. (2017). Also, we present $\Delta v_{obs}/2$, the average between the
maximum and the minimum velocity along the kinematic major axis as determined
by the position angle ($\phi$). We also present the intrinsic velocity
dispersion ($\sigma_{0}$), measured along the kinematic major axis, towards
the outskirts, away from the steep velocity gradient near the center of the
disk.
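As a concrete illustration of Equations 1-4 and the sampling setup, a minimal sketch follows. It assumes a square field of view, omits the PSF convolution and re-gridding steps described above, and generates a hypothetical velocity map from the model itself; none of the variable names or numbers come from our pipeline.

```python
import numpy as np
import emcee

# Sketch of the inclined-disk radial velocity model (Eqs. 1-4) and a
# simplified emcee setup. PSF convolution and re-gridding are omitted.

def disk_model(params, x, y):
    v_max, x0, y0, phi, inc, r_dyn, v0 = params
    r = np.sqrt((x - x0) ** 2 + ((y - y0) / np.cos(inc)) ** 2)          # Eq. 4
    cos_theta = (np.sin(phi) * (x0 - x) + np.cos(phi) * (y0 - y)) / r   # Eq. 3
    v_rot = (2.0 / np.pi) * v_max * np.arctan(r / r_dyn)                # Eq. 1
    return v0 + np.sin(inc) * cos_theta * v_rot                         # Eq. 2

def log_prior(params):
    v_max, x0, y0, phi, inc, r_dyn, v0 = params
    if (300 < v_max < 1000 and 0 < phi < 2 * np.pi and 0 < inc < np.pi / 2
            and 0.5 < r_dyn < 10 and -100 < v0 < 100
            and 0 <= x0 < 64 and 0 <= y0 < 64):  # FOV bounds are placeholders
        return 0.0
    return -np.inf

def log_prob(params, x, y, vel_map, vel_err):
    lp = log_prior(params)
    if not np.isfinite(lp):
        return -np.inf
    model = disk_model(params, x, y)
    return lp - 0.5 * np.sum(((vel_map - model) / vel_err) ** 2)

# Hypothetical data: a 64x64 velocity map generated from the model itself
y, x = np.mgrid[0:64, 0:64]
truth = np.array([450.0, 30.3, 32.2, 1.3, 0.8, 3.0, -10.0])
vel_err = np.full(x.shape, 20.0)
vel_map = disk_model(truth, x, y) + np.random.normal(0, 20, x.shape)

p0 = truth + 1e-3 * np.random.randn(1000, 7)  # 1000 perturbed walkers
sampler = emcee.EnsembleSampler(1000, 7, log_prob, args=(x, y, vel_map, vel_err))
sampler.run_mcmc(p0, 500, progress=False)
```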
Table 6: Best fit values for each inclined disk model parameter
Parameters | 7C 1354+2552 | 3C9 | 3C298
---|---|---|---
$V_{max}$ [km s-1 ] | 449.67${}^{+0.24}_{-0.64}$ | 442.0${}^{+23.9}_{-5.7}$ | 392${}^{+65}_{-65}$
$x_{0}$ [kpc] | -2.37${}^{+0.04}_{-0.03}$ | 0.5${}^{+2}_{-1}$ | 0.43${}^{+0.1}_{-0.1}$
$y_{0}$ [kpc] | -0.93${}^{+0.08}_{-0.08}$ | -4.8${}^{+1.22}_{-1.5}$ | 0 ${}^{+0.1}_{-0.1}$
$\phi$ [∘] | 75.68${}^{+0.47}_{-0.48}$ | 74.10${}^{+3.5}_{-35.4}$ | 5.3${}^{+1.28}_{-1.28}$
$i$ [∘] | 47.6${}^{+0.8}_{-0.8}$ | 47.1${}^{+5.0}_{-3.7}$ | 54.37${}^{+6.4}_{-6.4}$
$r$ [kpc] | $<$0.017 | 0.26${}^{+0.49}_{-0.14}$ | 2.1${}^{+0.9}_{-0.9}$
$V_{0}$ [km s-1 ] | -93.9${}^{+1.2}_{-1.7}$ | -9.22${}^{+30.45}_{-86.46}$ | -13.0${}^{+3.15}_{-3.15}$
$\Delta v_{obs}/2$ [km s-1 ] | 309.84$\pm$20.47 | 370.84$\pm$45.4 | 150.0 $\pm$23.7
$\sigma_{0}$ [km s-1 ] | 61.3$\pm$7.9 | 186.9$\pm$32.7 | 42.35 $\pm$ 12.68
In addition, we measure $V_{rot}/\sigma_{0}$ to gauge whether these systems
are dynamically supported by rotation or dispersion. We measure a value of
6.8$\pm$1, 2.7$\pm$0.6, and 4.4$\pm$1.5 for 7C 1354, 3C9, and 3C298,
respectively. In all systems, rotation dominates over the velocity dispersion
for the dynamical support according to the criteria outlined by Förster
Schreiber et al. (2018), and hence the systems can be classified as true
disks.
Assuming a spherically symmetric system, we can compute the total enclosed
mass using the following formula:
$M(R)=2.33\times 10^{5}rV_{r}^{2}/\sin(i)^{2}$ (5)
where $V_{r}$ is the radial velocity in km s-1 , $r$ is the radius in kpc, and
$i$ is the inclination angle from the disk fit; the resulting mass is in M⊙ .
For the radial velocity we use $\Delta v_{obs}/2$. Similarly, we
assume a radius that is the median value of nearby BCGs (7.1 kpc). The
selected radius should give us an absolute upper limit on the dynamical mass
of the galaxy/bulge as this radius is much larger than the typical size of a
galactic bulge at this redshift and is larger than the observed extent of the
galactic disks. The reason for choosing a larger radius is to address the case
where the quasar host galaxy extends to a larger radius and is not captured in
our OSIRIS observations because they are not sensitive enough to low surface
brightness emission at larger separation from the quasar. Virial and dynamical
masses are presented in Table 7. However, it is not guaranteed that the extent
of the ionized gas will match that of the stars. We attempted to measure the size of
the stellar continuum from the HST observations but were unsuccessful. Using
the Galfit package, we were unable to constrain the radius due to the sources’
complex morphologies and the increased inner working angle due to the quasars’
brightness and saturated counts in the HST observations.
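A minimal sketch of this estimate, assuming the constant in Equation 5 applies for $r$ in kpc and $V_{r}$ in km s-1 (so that $M$ comes out in M⊙ ); the inputs below are illustrative, not fitted values.

```python
import numpy as np

# Sketch of the enclosed-mass estimate of Equation 5,
# M(R) = 2.33e5 * r * V_r^2 / sin(i)^2, with r in kpc and V_r in km/s.

def enclosed_mass(r_kpc, v_r_km_s, inc_rad):
    return 2.33e5 * r_kpc * v_r_km_s**2 / np.sin(inc_rad) ** 2  # Msun

# Example: delta-v_obs/2 = 310 km/s, i = 47.6 deg, r = 7.1 kpc (median BCG radius)
m_dyn = enclosed_mass(7.1, 310.0, np.radians(47.6))
print(f"M_dyn = {m_dyn:.2e} Msun")  # ~2.9e11 Msun
```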
Due to the limited sensitivity of OSIRIS to lower surface brightness emission,
we are missing an accurate measurement of the plateau velocity for the
galactic disks at large separations from the quasar. Hence, our fitting
routine is unable to constrain $V_{max}$ for 3C9 and 7C 1354. Also, it appears
that the turn over radius is very small for these two systems, smaller than
the resolution element of our observations. For this reason, we are unable to
constrain the turn over radius, and we only provide a limit. For 7C 1354,
there is a degeneracy between the maximum velocity, turn over radius, and
inclination; thus, the values that we provide are those that give the smallest
velocity residual.
Figure 7: Fitting an inclined disk model to the radial velocity map of the 3C9 quasar host galaxy. Far left, we plot the isolated radial velocity structure belonging to the quasar host galaxy of 3C9; middle left shows the best-fit model overlaid as contours on top of the radial velocity map; middle right is the best-fit model. On the right, we plot the residuals. Larger blue-shifted residuals at $-1$″ south from the quasar are from the outflow (3C9 SE component A outflow A).
Figure 8: Fitting an inclined disk model to the radial velocity map of the 7C1354 quasar host galaxy. Far left, we plot the isolated radial velocity structure belonging to the quasar host galaxy of 7C1354; middle left shows the best-fit model overlaid as contours on top of the radial velocity map; middle right is the best-fit model; and on the right, we plot the residuals.
Figure 9: Fitting an inclined disk model to the radial velocity map of the 3C298 quasar host galaxy. Far left, we plot the isolated radial velocity structure belonging to the quasar host galaxy of 3C298; middle left shows the best-fit model overlaid as contours on top of the radial velocity map; middle right is the best-fit model; and on the right, we plot the residuals.
Table 7: Virial and dynamical mass values.
Source | Virial Mass | Disk-fit Dynamical mass
---|---|---
| $\times 10^{11}$ M⊙ | $\times 10^{11}$ M⊙
3C9 | 2.5$\pm$0.7 | 4.3$\pm$0.8
4C09.17 | 1.3$\pm$0.1 | –
3C268.4 | 1.4$\pm$0.1 | –
7C1354+2552 | 1.5$\pm$0.3 | 3.0$\pm$0.4
3C298 | 0.45$\pm$0.13a | 0.6$\pm$0.1
3C318 | 2.9$\pm$0.5 | –
4C57.29 | 3.3$\pm$0.3 | –
4C22.44 | 2.8$\pm$0.1 | –
4C05.84 | 3.3$\pm$0.1 | –
3C446 | 2.3$\pm$0.1 | –
a Computed from the CO 3-2 velocity dispersion.
Using the measured virial and disk fit dynamical masses and the SMBH masses,
we can now compare our galaxies to the local
$M_{\bullet}-M_{*}~{}$relationship. Not only are these galaxies offset from
the local $M_{\bullet}-\sigma~{}$relationship, but we also find that these
galaxies are on average offset from the local
$M_{\bullet}-M_{*}~{}$relationship. The galaxies need about an order of
magnitude of stellar growth if they are to evolve into the present-day massive
elliptical galaxies.
We note that we have used two different methods for exploring the scaling
relationship between galaxy mass and SMBH mass. Both the gas velocity
dispersion method and the dynamical measurement imply that the SMBHs are
over-massive compared to their host galaxies relative to the local scaling
relationship. It will be
important to further compare these methods with larger samples, as well as
future observations with the James Webb Space Telescope that will be able to
directly measure the stellar velocity dispersion.
## 8 Discussion
Our survey aimed to study host galaxies of redshift 1.4 - 2.6 radio-loud
quasars through rest frame nebular emission lines redshifted into the near-
infrared.
We place distinct regions of each quasar host galaxy on the traditional BPT
diagram (log([OIII]/H$\beta$ ) vs. log([NII]/H$\alpha$) ). The majority of the
points for our sources lie outside the two local sequences (the mixing and
star-forming sequence). In section 5, we introduce evolutionary BPT models
from Kewley et al. (2013a) that indicate that changes in the photoionization
and metallicity conditions of the gas can shift both the star-forming and
mixing sequences. We fit these models to our data and find that the best-
fitting model is the one where the gas in our quasar host galaxies is at least
two to five times less metal-rich compared to the narrow line regions of
nearby (z$<$0.2) AGN. The best fit model also indicates that the gas is ten
times denser compared to nearby galaxies. In Figure 2, we show all of our
points on the BPT diagram along with the best fit model. Kewley et al. (2013b)
studied a sample of star-forming galaxies and galaxies with AGN in the
redshift range of 0.8$<$z$<$2.5. They also find that galaxies at z$>$2 show
elevated line ratios on average outside the local star formation and mixing
sequences. They find that normal ISM conditions similar to the SDSS sample
transition to the more extreme conditions with elevated line ratios somewhere
between redshift z=1.5 and z=2. This is in agreement with our results, as the
majority of our targets are at $z>1.5$.
High redshift radio galaxies also appear to show ISM conditions with
metallicities that are lower compared to local AGN. In a study of a large
sample of distant radio galaxies, Nesvadba et al. (2017a) finds that their
gas-phase metallicities are lower than those seen in local AGN by at least a factor of two. Nesvadba
et al. (2017a) finds the same best-fitting model from Kewley et al. (2013a) as
we do for our sample to explain their observed nebular line ratios. The
average log([NII]/H$\alpha$) value of our sample seems to be lower than that
of Nesvadba et al. (2017a); this could be due to the lower metallicity of our
sample. Alternatively, the discrepancy could arise from differences in how the
line ratios are computed. Nesvadba et al. (2017a) only presents
source-integrated line ratios, while we explore ratios of distinct regions because we
typically have a factor of 5-10 better angular resolution due to adaptive
optics and hence can resolve the different ionized/kinematics structures of
our galaxies. In the majority of our sources, we see significant variations in
log([NII]/H$\alpha$) and log([OIII]/H$\beta$ ) values across each system,
which is why we explore distinct regions. Line ratios from integrated spectra
that include regions with various ionization sources and from multiple
components of a merger system may shift towards higher log([NII]/H$\alpha$) ,
and log([OIII]/H$\beta$ ) values as the regions photoionized by the quasar/AGN
tend to be brighter. Line ratios of galaxies with lower luminosity AGN
compared to quasars/radio galaxies studied in Strom et al. (2017) are nearly
all outside the local mixing sequence. These points overlap with the location
of our line ratios and that of the radio galaxy sample. The MOSDEF survey
finds similar results for their AGN sample at a range of bolometric
luminosities (Coil et al., 2015). The ubiquity of elevated line ratios in host
galaxies of AGN, meaning they are typically above the local mixing or star-
forming sequence on the traditional BPT diagram (log([OIII]/H$\beta$ ) vs.
log([NII]/H$\alpha$) ), indicates that, regardless of the active galaxy
population selected at z$\sim$2, the conditions of the gas that is photoionized
by an AGN are different from those in the local Universe.
Overall, this suggests that the ISM conditions in high redshift galaxies with
AGN at a range of bolometric luminosities are different from those in local
systems. The ISM conditions appear to be far more extreme with gas-phase
metallicity lower than that of local AGN, suggesting an evolution in the ISM
gas that is photoionized by AGN from z=0 to z=2.5.
### 8.1 Star formation and dynamically “quiescent” regions in the host
galaxies
In 9/11 quasar host galaxies within our sample, we see the morphology of
clumpy star-forming regions seen in other galaxies at these redshifts. These
regions also typically show lower velocity dispersion and lower
log([NII]/H$\alpha$) values. We described them in more detail in section 4.1.
These regions lie 1 - 21 kpc from the quasar and generally do not coincide
with the location of galactic outflows. For sources with available HST imaging
of rest-frame UV continuum, these regions appear bright and clumpy (see Figure
4). Taking these two results together indicates that O and B stars could
photoionize a non-negligible fraction of the gas in these clumpy regions. In
section 5, we derive an upper limit on their star formation rates and gas-
phase metallicities.
Taking this together, there is evidence for very recent star formation
activity in 9/11 quasars within our sample. We find an average star formation
rate of 50 M⊙ yr-1 for the star-forming regions within our sample. The average
dynamical mass of our quasar host galaxies, $\sim 10^{11}$ M⊙ , indicates
that the galaxies sit near the galaxy star formation rate - stellar-mass
sequence at z$\sim$ 2 (Tomczak et al., 2016). Using the average metallicity of
8.5 measured in dynamically quiescent regions and the average stellar mass of
our sample, we find that our galaxies sit on the mass-metallicity
relationship at z$\sim$2 (Sanders et al., 2015).
Quasars at z$\sim$ 2 are found to reside in galaxies with a broad range of
star formation rates, spanning from quiescent to star-bursting galaxies.
However, our sample preferentially contains quasar host galaxies in a star-
burst phase. High specific accretion rate AGN are more likely to be found in
star-bursting galaxies with rates on or above the star formation rate -
stellar-mass sequence in the distant Universe (Aird et al., 2019). We selected
compact steep-spectrum radio-loud quasars for observation; this class of
objects tends to contain younger AGN. One of the mechanisms to trigger a luminous AGN
is through a massive gas-rich galaxy merger (Treister et al., 2012). During
the ongoing merger, the loss of angular momentum feeds gas into the centers of
galaxies, providing fuel for both star formation and SMBH growth. Since we
selected AGN that may have been recently triggered, they are more likely to be in
an ongoing merger, where star formation activity is enhanced. Indeed, about
7/11 of the quasar host galaxies in our sample are mergers. This can explain
why our sample preferentially contains galaxies with active or recent star
formation and rapid accretion onto the SMBH.
The measured star formation rates within our sample are significantly lower
than those measured through dust emission in the far-infrared by the Herschel
Space Observatory (Podigachoski et al., 2015; Barthel et al., 2017) for
4C04.81, 4C09.17, 3C318, and 3C298. The most likely explanation is that the
quasar itself could partially heat the dust, H$\alpha$ misses a significant
fraction of the obscured star formation, or the dust traces a different star
formation history. Interestingly, for 3C298 and 3C318, where high spatial
resolution imaging of both the dust and the H$\alpha$ emission is available, there is a
significant misalignment between the maps. In places where we see evidence for
recent star formation based on nebular emission-line ratios in 3C 298 and 3C
318, Barthel et al. (2018) and Barthel & Versteeg (2019) do not detect any dust
emission. For the case of 3C 298 in the location where we see recent star
formation traced by H$\alpha$ , we also detect a molecular reservoir; however,
no dust emission is present there. Furthermore, in the places where dust
emission exists in the case of 3C 298, the molecular gas at that location is
stable against gravitational collapse and has been for a time scale longer
than the propagation time of the observed outflow. For the cases of 4C09.17 and 4C04.81,
no high-resolution dust maps are available. The dust emission could originate
at any location within the $\sim$ 17″ Herschel SPIRE beam, which translates to
a physical scale of about 150 kpc. Future high spatial resolution dust and
molecular gas emission maps are necessary for proper comparison between the
obscured and unobscured star formation traces and the molecular gas dynamics.
### 8.2 Offset from local scaling relations
The majority of our systems appear to be offset from both local scaling
relationships between the mass of the SMBH and both the mass and the velocity
dispersion of the bulge (see Figures 5, 6). To explain the large offset from
the local $M_{\bullet}-\sigma~{}$and $M_{\bullet}-M_{*}~{}$relationship, we
could invoke a significant error in the estimated SMBH masses. The bolometric
luminosities of some of our quasars are far greater than those used for
reverberation mapping in the nearby Universe, which is used in calibrating the
single epoch SMBH mass (Greene & Ho, 2005). The SMBH masses would have to be
off by 2-3 orders of magnitude to explain the observed offsets. By assuming
that the SMBH grows primarily through gas accretion, we can use the Eddington
luminosity formula to estimate the SMBH mass. Given that our quasars are most
likely not all accreting at or close to the Eddington limit, this derived mass
is effectively a lower limit.
$\rm M_{SMBH,min}=\frac{L_{bol}}{1.26\times 10^{38}\,erg\,s^{-1}}\,M_{\odot}$ (6)
For the derived bolometric luminosities in Table 2 we find a range of minimum
SMBH masses of $10^{7.5-9}$ M⊙ , consistent with what we measure from
single-epoch SMBH masses using the H$\alpha$ emission line. Hence, there is
likely no significant error in our measured black hole masses.
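A one-line sketch of this lower bound, with an illustrative (not measured) bolometric luminosity:

```python
import numpy as np

# Sketch of the Eddington-limit lower bound on the SMBH mass (Eq. 6):
# assuming L_bol <= L_Eddington = 1.26e38 (M/Msun) erg/s, the measured
# bolometric luminosity gives M_min = L_bol / 1.26e38 Msun.

def m_smbh_min(L_bol_erg_s):
    return L_bol_erg_s / 1.26e38  # Msun

print(f"log10(M_min/Msun) = {np.log10(m_smbh_min(1e47)):.1f}")  # ~8.9 for 1e47 erg/s
```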
In Figure 10, we plot the offset from the local scaling relation against the
redshift of each object from our sample, the local galaxies sample with SMBH
$>10^{9}$ M⊙ and higher redshift quasars. Quasars with SMBH $>10^{9}$ M⊙
appear to be offset from the local scaling relationship, which indicates that
SMBH growth appears to outpace that of stars in these systems. The SMBHs may
grow rapidly up to a mass of several times $10^{9}$ M⊙ as early as 690 Myr
after the Big Bang (Bañados et al., 2018), matching in mass some of the
most massive SMBHs seen today. Some galaxies with lower luminosity AGN and
lower mass SMBH also appear to be offset from the local scaling relation at
z$>1$ (Merloni et al., 2010; Bennert et al., 2011). Given the typically large
uncertainty on the measured values and generally small sample sizes, it is
difficult at present to say whether different populations of AGN/galaxies are
offset differently from the local scaling relationships at z$>1$.
Figure 10: Measured offset of galaxies from the local
$M_{\bullet}-\sigma~{}$scaling relationship (McConnell & Ma (2013),
$\log_{10}(M_{BH}/M_{\odot})=8.32+5.64\log_{10}(\sigma/200\rm\enspace
km\enspace s^{-1})$). On the y-axis, we quantify the offset as the difference
between the observed and predicted velocity dispersion from the local scaling
relation based on the observed SMBH mass. We plot the observed offset from the
local scaling relation against the redshift for individual targets. The labels
are the same as in Figure 5. The shaded blue region represents the intrinsic scatter in
the $M_{\bullet}-\sigma~{}$relationship for black holes with a mass of
$10^{9.5}$ M⊙ . There is an overall offset for galaxies with massive SMBH at
z$>$1 from the local $M_{\bullet}-\sigma~{}$relationship. We find no
statistically significant difference in the offset between any of the high
redshift samples, while there is a statistically significant offset from the
local BCG points (green).
Under the assumption that SMBHs grow primarily through Eddington-limited gas
accretion, the growth is expected to be exponential. The e-folding or
“Salpeter” time scale is about 50-300 Myr, depending on the SMBH spin. At the
mean redshift of our sample (z=1.87), the SMBHs are expected to experience
30-200 e-folds in mass growth. However, for a duty cycle of around $10\%$
(Wang et al., 2006) the expected number of e-folds drops down to about 3-20.
Furthermore, the quasars in our sample are not accreting near the Eddington
limit and can eventually switch from high to low accretion-rate mode, further
decreasing the Eddington ratio. Hence, the SMBHs in our sample have nearly
finished forming and will only further grow by a factor of 1.2-7 under the
assumption of an Eddington ratio of 10$\%$, and a duty cycle of 10$\%$. If
these galaxies are to assemble onto the local scaling relation and to evolve
into the most massive early-type galaxies that we see today, then the rapid
SMBH growths at early times in the Universe must be followed by significant
stellar growth. On average, the galaxies within our sample need to grow the
stellar mass within a radius of 7 kpc at a constant rate of 100 M⊙ yr-1 from
z=2 to z=0.
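The sketch below reproduces this arithmetic under an assumed Planck 2018 cosmology; the exact e-fold counts shift slightly with the adopted cosmology and Salpeter timescale.

```python
import numpy as np
from astropy.cosmology import Planck18

# Sketch of the growth estimate above: the number of e-folds available to an
# Eddington-limited SMBH between z = 1.87 and z = 0 is the elapsed cosmic
# time divided by the Salpeter timescale. The 50-300 Myr range follows the
# text; the 10% duty cycle and 10% Eddington ratio scale the effective growth.

t_available = (Planck18.age(0) - Planck18.age(1.87)).to_value("Myr")

for t_salpeter in (50.0, 300.0):  # Myr, depending on SMBH spin
    n_efold = t_available / t_salpeter   # continuous Eddington-limited growth
    n_eff = n_efold * 0.1 * 0.1          # duty cycle 10%, Eddington ratio 10%
    print(f"t_S={t_salpeter:.0f} Myr: {n_efold:.0f} e-folds available, "
          f"effective growth factor {np.exp(n_eff):.1f}")
```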
In the host galaxy of 3C 298, there is currently insufficient molecular gas
for the galaxy to grow in stellar mass to match the mass predicted by the
local scaling relationship. Furthermore, the quasar 3C 298 does not appear to
reside in an over-dense environment based on the number counts of galaxies
seen in the Spitzer Space Telescope imaging data (Ghaffari et al., 2017). The
open question is, how do these galaxies obtain the stellar mass necessary to
grow into the massive galaxies we see today? Are minor mergers responsible for
growing these galaxies? Alternatively, is the accretion of cool gas from the
CGM responsible for providing the fuel necessary for future star formation?
The results we find for the host galaxy of 3C 298 favor the scenario where
cold accretion flows from the CGM will supply most of the fuel necessary for
future star formation. Another scenario could be that the Spitzer observations
are too shallow to see lower mass galaxies. If these systems are gas-rich,
they can supply future fuel for star formation from merging the gas in their
CGM and ISM with the quasar’s host. Indeed, recent hydrodynamical simulations
(Anglés-Alcázar et al., 2017) found that for dark matter halos with masses
$>10^{12.5}$ M⊙ , the majority of the mass build-up happens through gas
accreted from the CGM and the transfer/exchange of gas between the CGM and ISM
of cannibalized low-mass galaxies. These simulations also find that stellar
build-up from dry mergers and direct accretion of stars from merging galaxies
is not sufficient to grow the stellar mass of galaxies in massive halos. If this is the case for the
majority of our galaxies, it implies that they have enormous amounts of gas
inside their CGM.
Our results appear to be in stark contrast to the predicted evolutionary paths of
massive galaxies. In today’s theoretical framework (Di Matteo et al., 2005;
Hopkins et al., 2008; Zubovas & King, 2012, 2014), feedback from the SMBH is
predicted to happen once the galaxy reaches the local
$M_{\bullet}-\sigma~{}$relationship. However, our systems showcase
outflows that are capable of causing major feedback when the mass of the
galaxies is a fraction of their predicted final mass from the local scaling
relations. Also, the gas-phase metallicities are far lower than those observed
in nearby AGN. The kinetic luminosities for half of the outflows in our sample
are far lower than the values predicted in simulations for the bolometric
luminosities of our quasars (Vayner et al., 2021). Ionized outflows in other
samples show similar results, where about half the objects lie below the
predicted minimum energy-coupling between the quasar and the outflow of
0.1$\%$ at z$\sim 2$ (Vayner et al., 2021). If all these systems are offset
from the local scaling relationship, it would be easier to launch the outflows
because their masses are smaller than they would be if the galaxies were on the local scaling
relations. This could lead to lower energy coupling efficiency. On the other
hand, we might be missing a significant fraction of the gas within the
outflows because a large portion of the gas could be in either a molecular or
neutral phase.
In the quasar host galaxy of 3C298, we find the majority of the gas in the
outflow is in a molecular state, and once combined with the ionized kinetic
luminosity we find values closer to those predicted in simulations. The
kinetic luminosity in 3C298 is close to 1$\%$ of the quasar’s bolometric
luminosity. Regardless of whether we account for all the gas in the outflow,
outflows capable of causing feedback are occurring before the galaxies are on
the $M_{\bullet}-\sigma~{}$relationship. We might need to reconsider our
theoretical framework for massive galaxy formation, where the gas is not
cleared from the galaxy in a single “burst” of feedback once the galaxies
reach the $M_{\bullet}-\sigma~{}$relationship. Instead, the SMBH grows first
in massive dark matter haloes, followed by a delayed growth of the host galaxy
with regulatory feedback from the SMBH and near-continuous accretion of gas
from the CGM and nearby satellite galaxies. In such a scenario, the coupling
efficiency might be lower per outflow event, compared to a single burst model
where a single outflow-event clears all the gas. At later times, maintenance
mode feedback from jets can heat the CGM, preventing gas from cooling and
accreting onto the galaxy, keeping the galaxies “quenched”.
## 9 Conclusions
We have conducted a near diffraction-limited survey of 11 quasar host galaxies
to study the distribution, kinematics, and dynamics of the ionized ISM using
the OSIRIS IFS at the W.M. Keck Observatory. This survey paper aimed to
understand the source of gas ionization, the physical and chemical conditions
of the ISM, and to estimate the masses of radio-loud quasar host galaxies at
z$\sim$2. We detected extended emission in all objects on scales of 1-30
kpc and found that:
* •
The AGN photoionizes the majority of the extended gas. A significant fraction
of emission-line ratios are found to reside between the two sequences on the
traditional BPT diagram. By applying evolutionary models of the mixing and
star-forming sequence from z=0 to z=2.5, we find that the gas within our
systems is denser and has lower metallicity compared to the gas photoionized
in local AGN.
* •
In 9 objects, we find dynamically quiescent regions, with lower average
log([OIII]/H$\beta$ ) ratios. For systems where Hubble Space Telescope imaging
is available, their morphologies are consistent with clumpy star-forming
regions commonly observed in the distant Universe, indicating the presence of
recent star formation. We find these systems to be forming stars at a maximum
rate of 9-160 M⊙ yr-1 based on the H$\alpha$ luminosity.
* •
For nine objects, we are able to measure the mass of the SMBH, the stellar
velocity dispersion using the narrow component of H$\alpha$ emission line as a
proxy, and galaxy mass. We compare these nine objects to the local scaling
relation between the mass of the SMBH and the mass or velocity dispersion of
the galaxy. Our systems are both offset from the $M_{\bullet}-\sigma~{}$and
$M_{\bullet}-M_{*}~{}$relationship. Substantial growth is still necessary if
these systems are to evolve into the present-day massive elliptical galaxies.
Gas accretion from the CGM and gas-rich minor mergers are necessary to grow
the stellar mass and increase the metallicity of the ISM. On average, the
galaxies need to grow by at least an order of magnitude in stellar mass if
they are to assemble onto the local scaling relations. A near-constant mass
growth rate of $\sim$100 M⊙ yr-1 is necessary within a radius of 10 kpc from
the quasar from z$\sim 2$ to 0.
* •
Combining the results of this paper with Vayner et al. (2021), we find
evidence for the onset of feedback before the galaxies are on the local
$M_{\bullet}-\sigma~{}$relationship. Luminous type-1 quasars are not the end
phase of massive galaxy formation.
The authors wish to thank Jim Lyke, Randy Campbell, and the other support
astronomers for their assistance at the telescope in acquiring the Keck OSIRIS
data sets. We want to
thank the anonymous referee for their constructive comments that helped
improve the manuscript. The data presented herein were obtained at the W.M.
Keck Observatory, which is operated as a scientific partnership among the
California Institute of Technology, the University of California and the
National Aeronautics and Space Administration. The Observatory was made
possible by the generous financial support of the W.M. Keck Foundation. The
authors wish to recognize and acknowledge the very significant cultural role
and reverence that the summit of Maunakea has always had within the indigenous
Hawaiian community. We are most fortunate to have the opportunity to conduct
observations from this mountain. This research has made use of the NASA/IPAC
Extragalactic Database (NED) which is operated by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with the
National Aeronautics and Space Administration.
Figure 11: Spectra of distinct regions along with fits to individual emission lines for the 3C 9 system.
Figure 12: Spectra of distinct regions along with fits to individual emission lines for the 3C268.4 system.
Figure 13: Spectra of distinct regions along with fits to individual emission lines for the 7C 1354+2552 system.
Figure 14: Spectra of distinct regions along with fits to individual emission lines for the 3C 298 system.
Figure 15: Spectra of distinct regions along with fits to individual emission lines for the 3C 446 system.
Figure 16: Spectra of distinct regions along with fits to individual emission lines for the 4C 57.29 system.
Figure 17: Spectra of distinct regions along with fits to individual emission lines for the 4C 22.44 system.
Figure 18: Spectra of distinct regions along with fits to individual emission lines for the 4C 05.84 system.
## Appendix A 3C 9
3C9 is a luminous quasar at z = 2.019922 with a prominent blue rest-frame UV
continuum. For this source, we identify three distinct regions. “SE-SW
component A” is a region with a ring-like morphology associated with the 3C9
quasar host galaxy. We measure a velocity dispersion from a Gaussian fit to
the nebular emission lines of 407.6$\pm$12.9 km s-1 , with kinematics
resembling a rotating disk. “SE component A” is classified as an outflow
region with a very high emission line FWHM of 1362.7$\pm$60.5 km s-1 and
elevated log([OIII]/H$\beta$ ) and log([NII]/H$\alpha$) ratios relative to the
rest of the system. “N component B” is the merging galaxy in the 3C9 system
showcasing a line FWHM of 472.15$\pm$11.8 km s-1 and a velocity offset of
$\sim$200 km s-1 from the quasar. The projected spatial separation between the
two apparent nuclei is 9 kpc. The quasar lies in the galaxy with a ring-like
morphology showing the kinematic structure of a disk. Archival HST imaging of
rest-frame UV continuum shows the ring morphology as well (see Figure 4),
indicating very recent star formation activity in the ring. The merging galaxy
“N component B” appears to be a dispersion-dominated system with active star
formation and is detected in rest-frame UV emission. The 3C9 system best resembles
the local galaxy merger system Arp 148 (z=0.036), also known as Mayall’s
Object. The outflow in this system appears to be emanating from the ring of
the galaxy with the quasar.
## Appendix B 4C 09.17
4C 09.17 is a luminous quasar at z=2.117 with a blue UV continuum. For this
source, we identify four distinct regions. “SW component A” is a star-forming
clump associated with the quasar host galaxy. The spectrum of this region
shows a single narrow emission line with an FWHM of 312.0$\pm$7 km s-1 . “S/E
component A” is an outflow region driven by the quasar; the nebular emission
lines for this region have an FWHM of 887.2$\pm$22.4 km s-1 . A second narrow
component is required for a good fit for each emission line in this region,
with a line FWHM of 290.4$\pm$29.9 km s-1 . “W component B clumps” is a region
of the merging galaxy within the 4C09.17 system. The region consists of
clumpy emission selected by isolating spaxels with an H$\alpha$ line surface
density $>6\times 10^{-16}$ erg s-1 cm-2 arcsec-2. “W component B diffuse” is
emission associated with “diffuse” ionized emission in the merging galaxy
selected by isolating spaxels with an H$\alpha$ line surface density
$<6\times 10^{-16}$ erg s-1 cm-2 arcsec-2. The diffuse region shows higher
log([OIII]/H$\beta$ ) and log([NII]/H$\alpha$) line ratios associated with
both AGN and star formation photoionization while the clumpy regions of the
merging galaxy showcase lower ionization levels consistent with
photoionization by star formation. This region is associated with bright UV
emission in HST imaging of this object (Lehnert et al., 1999). “S/E component
A outflow” shows high log([NII]/H$\alpha$) and log([OIII]/H$\beta$ ) values
relative to the rest of the system, indicating this region is predominantly
photoionized by the quasar. The 4C09.17 system is a merger of two galaxies
with velocity offsets of $\sim$1000 km s-1 and a projected separation of
$\sim$ 4 kpc. HST imaging of rest-frame UV continuum (see Figure 4) shows
evidence for a population of young hot stars indicating recent star formation
activity. The majority of the star formation activity is confined to the
merging galaxy.
## Appendix C 3C 268.4
3C 268.4 is a luminous quasar at z=1.39, with a slightly reddened UV continuum
compared to the average type-1 quasar. For this target, we identified two
distinct regions. “SW component A” is an outflow driven by the quasar. The
FWHM of the emission lines is 2075$\pm$354 km s-1 as measured from the
Gaussian fit to the [OIII] line. The spectrum extracted over this region also
shows a narrow component with an FWHM of 603.7$\pm$54.9 km s-1 , most likely
signaling emission from an extended narrow-line region close to the quasar.
Because of issues with mis-assignment of flux in the OSIRIS pipeline
(Lockhart et al., 2019), the rows below and above the centroid of the quasar
do not have properly extracted spectra in the H band observations of this
object. Hence we do not have a good spectrum of the extended emission in a
0.2-0.3″ radius around the quasar in the H band, which covers the H$\alpha$
and [NII] emission lines of the ionized outflow. “SW component B” is a region
associated with the merging galaxy, showcasing clumpy morphology in ionized
gas emission. The emission lines have an FWHM of 367.7 $\pm$ 3.9 km s-1 and an
offset of $-300$ km s-1 relative to the redshift of the quasar. The
log([OIII]/H$\beta$ ) line ratios are lower for this region compared to the
rest of the system, consistent with a mixture of AGN and star formation
photoionization. This region is also associated with bright rest-frame UV
continuum emission, seen in HST observations of this target (Hilbert et al.,
2016).
## Appendix D 7C 1354+2552
7C 1354+2552 is a luminous quasar at z=2.0064 with a blue UV continuum. For
this target, we identify two distinct regions. “Component A” is the extended
emission associated with the quasar host galaxy. The kinematics show a smooth
velocity gradient, indicating the presence of a galactic disk. The size,
morphology, and kinematics of the disk are similar to that of star-forming
galaxies on the more massive end of the star formation main sequence at
$z\sim$2 (Förster Schreiber et al., 2018). We measure an emission line FWHM of
357.2$\pm$2.0 km s-1 on the redshifted side of the disk and 497.7$\pm$6.5 km
s-1 on the blue-shifted side of the disk. Although this region only has a
single label (“component A”), in Figure 13 rows one and two show the fits to
the red and blue-shifted sides of the disk that are part of this region. This
region is selected based on the location where H$\alpha$ emission is detected.
This is done to boost the SNR in the H$\alpha$ line as it appears to be
clumpier, more compact, and less extended than [OIII] . In Table 3 we provide
values integrated over the entire galactic disk. “E component B” is a region
associated with the merging galaxy at a projected separation of 6-7 kpc. The
kinematics are consistent with a dispersion dominated galaxy. The entire
“component A” is consistent with quasar photoionization. The gas in “E
component B” is photoionized by star formation.
## Appendix E 3C 298
3C298 is a luminous quasar at z=1.439 with a slightly reddened UV continuum.
For this target, we identify five distinct regions. We present a detailed
discussion of each region in Vayner et al. (2017). “W/E component A” are
outflow regions with a bi-conical morphology, where the western (W) is the
redshifted receding cone, and the eastern (E) is the approaching cone. In
Vayner et al. (2017), they are referred to as the red- and blue-shifted
outflow regions. The emission lines over the outflows are very broad, with FWHM up to
$\sim$1500 km s-1 . A combination of shocks and quasar activity is likely
responsible for photoionizing the gas. “SE component B outflow” is an outflow
region belonging to a merging galaxy. “SE component B ENLR” is an extended
narrow-line region belonging to the disk of the merging galaxy, with gas
photoionized by the quasar or secondary AGN. “SE component B Tidal feature” is
a region of the merging galaxy with active/recent star formation as evident by
lower log([NII]/H$\alpha$) and log([OIII]/H$\beta$ ) values compared to the
rest of the regions.
## Appendix F 3C318
3C318 is a luminous quasar at z=1.5723 with a reddened UV continuum. There is
evidence for a spatially unresolved nuclear star-burst with an upper limit on
the star formation rate of 88$\pm$9 M⊙ yr-1 . This star formation rate is far
lower than the far-infrared derived rate of 580 M⊙ yr-1 . The extinction
towards the nuclear region measured from Willott et al. (2000) alone cannot
explain the mismatch between the H$\alpha$ and far-infrared derived SFR.
Either a larger fraction of the far-infrared emission is from dust that is
being heated by the AGN, or the far-infrared emission traces a different star
formation history than H$\alpha$ (Calzetti, 2013). No narrow extended emission
is detected in this object.
The merger status of this object is unclear. Two nearby galaxies to the north
and west of the quasar are visible in archival HST imaging (Willott et al.,
2000). In our OSIRIS observations, we do not detect any line emission from the
western object 2″ away from the quasar. Willott et al. (2007) studied
this object with PdBI through CO 2-1 emission at a fairly coarse ($\sim$8
arcseconds) resolution. There appears to be CO emission that could plausibly
be associated with the western object. We have recently obtained much higher
angular resolution CO 3-2 spectroscopy of this target that will be discussed
in detail in a forthcoming paper. We confirm the existence of CO 3-2 emission
associated with the CO 2-1 emission. We resolve the molecular emission into
multiple components. However, the CO 3-2 emission is not associated with
either one of the galaxies seen in the HST data. We obtained
wide-field-of-view IFS observations of this target with KCWI, aimed at
measuring the redshifts of the nearby galaxies and confirming the merger scenario of
this object. We detect both the northern and western objects in the continuum.
We confirm that the northern target is at a different redshift than the quasar
from the detection of [OII] emission, while for the western object, a reliable
redshift is challenging to determine with the current data set. Hence, we find
no clear evidence that the brightest galaxies seen in optical imaging within a
few arcseconds of the quasar are companion galaxies in an ongoing merger.
## Appendix G 4C 57.29
4C 57.29 is a luminous quasar at z=2.1759 with a blue UV continuum. For this
target, we identify two regions. Region “NE component A” belongs to the host
galaxy of the quasar. The relatively high log([OIII]/H$\beta$ ) value
indicates that this region is consistent with being photoionized by the
quasar. [OIII] 500.7 nm is the only emission line detected for this region. The region is marginally resolved, making it hard to measure the kinematic structure. We require a double-Gaussian fit to the [OIII] emission in this region to obtain a good fit, and we measure FWHMs of 474.3 and 502.5 km s$^{-1}$ with offsets of 35.0 km s$^{-1}$ and -1050.1 km s$^{-1}$ relative to the redshift of the quasar. We identify a second region north of the quasar. It is unclear if it belongs to a merging galaxy or the quasar host galaxy. There is a $\sim 100$ km s$^{-1}$ offset from the quasar, and the line has an FWHM of 550.13$\pm$19 km s$^{-1}$. This region is also only detected in [OIII]. The SNR is too low to measure any kinematic structure.
## Appendix H 4C 22.44
4C22.44 is a luminous quasar at z=1.5492 with a reddened UV continuum. Similar
to 3C318, we do not detect any evidence for a merging galaxy for this system.
For this target, we identify a single region, “N, S component A”. The
kinematics of this region may be consistent with a galactic disk belonging to
the quasar host galaxy. We see evidence for a smooth gradient in the radial
velocity map; however, the region is only marginally resolved. We measure an emission-line FWHM of 434.8 km s$^{-1}$. The region is consistent with being
ionized by star formation with some contribution from quasar photoionization.
## Appendix I 4C 05.84
4C05.84 is a luminous quasar at z=2.323 with a slightly reddened UV continuum.
For this target, we identify three distinct regions. Regions “S component A”
and “NE component A” are the blueshifted and redshifted outflow regions, respectively, resembling a bi-conical outflow. They show broad extended emission with a line FWHM of $\sim$800 km s$^{-1}$. The quasar photoionizes these regions. Region “SW component A clump” shows a line FWHM of 467.9$\pm$3.0 km s$^{-1}$ and is photoionized by a
combination of star formation and the quasar. This clump is also detected in
NIRC2 imaging of this object studied by Krogager et al. (2016), where they
consider this clump to be associated with a damped Ly$\alpha$ system. However, here we confirm that this object is part of the quasar host galaxy. We find
no evidence for a merging galaxy within our OSIRIS observations.
## Appendix J 3C 446
3C446 is a quasar at z = 1.404. For this target, we identify two regions. “N component A tidal feature” is a region belonging to the quasar host galaxy,
resembling a tidal feature that is most likely induced by the merger. We
measure an FWHM of 395.14$\pm$2.0 km s$^{-1}$ for this region. “E-W component B”
belongs to the merging galaxy, a portion of it resembles a tidal feature,
counter to the tidal arm of “N component A tidal feature.” For this region, we
measure a line FWHM of 558.5$\pm$63 km s$^{-1}$; however, it appears to be a blend of two velocity components. It is unclear where the nucleus of the merging galaxy resides; it may have already merged with that of the quasar host galaxy. The two galaxies appear to be offset by at least 500 km s$^{-1}$ in velocity, and there is a possibility that a portion of the merging galaxy lies on top of the quasar host galaxy.
## Appendix K 4C 04.81
4C04.81 is a luminous quasar at z=2.5883 with a reddened UV continuum. For
this target, we identify a single region, “E component A outflow”. The
kinematics show blueshifted and redshifted broad (FWHM $\sim$800 km s$^{-1}$) emission. The gas is mainly photoionized by the quasar. We do not identify any narrow extended emission in this object.
## References
* Aird et al. (2019) Aird, J., Coil, A. L., & Georgakakis, A. 2019, MNRAS, 484, 4360, doi: 10.1093/mnras/stz125
* Alarie & Morisset (2019) Alarie, A., & Morisset, C. 2019, arXiv e-prints, arXiv:1908.08579. https://arxiv.org/abs/1908.08579
* Allen et al. (2008) Allen, M. G., Groves, B. A., Dopita, M. A., Sutherland, R. S., & Kewley, L. J. 2008, ApJS, 178, 20, doi: 10.1086/589652
* Anglés-Alcázar et al. (2017) Anglés-Alcázar, D., Faucher-Giguère, C.-A., Kereš, D., et al. 2017, MNRAS, 470, 4698, doi: 10.1093/mnras/stx1517
* Arrigoni Battaia et al. (2019) Arrigoni Battaia, F., Hennawi, J. F., Prochaska, J. X., et al. 2019, MNRAS, 482, 3162, doi: 10.1093/mnras/sty2827
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Bañados et al. (2018) Bañados, E., Venemans, B. P., Mazzucchelli, C., et al. 2018, Nature, 553, 473, doi: 10.1038/nature25180
* Baldwin et al. (1981) Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5, doi: 10.1086/130766
* Barthel et al. (2017) Barthel, P., Podigachoski, P., Wilkes, B., & Haas, M. 2017, ApJ, 843, L16, doi: 10.3847/2041-8213/aa7631
* Barthel & Versteeg (2019) Barthel, P., & Versteeg, J. 2019, The Messenger, 176, 37, doi: 10.18727/0722-6691/5140
* Barthel et al. (2018) Barthel, P. D., Versteeg, M. J. F., Podigachoski, P., et al. 2018, ApJ, 866, L3, doi: 10.3847/2041-8213/aae3e2
* Behroozi et al. (2010) Behroozi, P. S., Conroy, C., & Wechsler, R. H. 2010, ApJ, 717, 379, doi: 10.1088/0004-637X/717/1/379
* Bennert et al. (2011) Bennert, V. N., Auger, M. W., Treu, T., Woo, J.-H., & Malkan, M. A. 2011, ApJ, 742, 107, doi: 10.1088/0004-637X/742/2/107
* Bennert et al. (2018) Bennert, V. N., Loveland, D., Donohue, E., et al. 2018, MNRAS, 481, 138, doi: 10.1093/mnras/sty2236
* Bruce et al. (2014) Bruce, V. A., Dunlop, J. S., McLure, R. J., et al. 2014, MNRAS, 444, 1660, doi: 10.1093/mnras/stu1537
* Calzetti (2013) Calzetti, D. 2013, Star Formation Rate Indicators, ed. J. Falcón-Barroso & J. H. Knapen, 419
* Christensen et al. (2006) Christensen, L., Jahnke, K., Wisotzki, L., & Sánchez, S. F. 2006, A&A, 459, 717, doi: 10.1051/0004-6361:20065318
* Coil et al. (2015) Coil, A. L., Aird, J., Reddy, N., et al. 2015, ApJ, 801, 35, doi: 10.1088/0004-637X/801/1/35
* Courteau (1997) Courteau, S. 1997, AJ, 114, 2402, doi: 10.1086/118656
* Davies et al. (2014) Davies, R. L., Rich, J. A., Kewley, L. J., & Dopita, M. A. 2014, MNRAS, 439, 3835, doi: 10.1093/mnras/stu234
* Decarli et al. (2018) Decarli, R., Walter, F., Venemans, B. P., et al. 2018, ApJ, 854, 97, doi: 10.3847/1538-4357/aaa5aa
* Di Matteo et al. (2005) Di Matteo, T., Springel, V., & Hernquist, L. 2005, Nature, 433, 604, doi: 10.1038/nature03335
* Ene et al. (2019) Ene, I., Ma, C.-P., McConnell, N. J., et al. 2019, ApJ, 878, 57, doi: 10.3847/1538-4357/ab1f04
* Erb et al. (2006a) Erb, D. K., Shapley, A. E., Pettini, M., et al. 2006a, ApJ, 644, 813, doi: 10.1086/503623
* Erb et al. (2006b) Erb, D. K., Steidel, C. C., Shapley, A. E., et al. 2006b, ApJ, 646, 107, doi: 10.1086/504891
* Ferrarese & Merritt (2000) Ferrarese, L., & Merritt, D. 2000, ApJ, 539, L9, doi: 10.1086/312838
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Conley, A., Meierjurgen Farr, W., et al. 2013, emcee: The MCMC Hammer. http://ascl.net/1303.002
* Förster Schreiber et al. (2018) Förster Schreiber, N. M., Renzini, A., Mancini, C., et al. 2018, ApJS, 238, 21, doi: 10.3847/1538-4365/aadd49
* Gebhardt et al. (2000) Gebhardt, K., Bender, R., Bower, G., et al. 2000, ApJ, 539, L13, doi: 10.1086/312840
* Ghaffari et al. (2017) Ghaffari, Z., Westhues, C., Haas, M., et al. 2017, Astronomische Nachrichten, 338, 823, doi: 10.1002/asna.201713389
* Greene & Ho (2005) Greene, J. E., & Ho, L. C. 2005, ApJ, 630, 122, doi: 10.1086/431897
* Groves et al. (2006) Groves, B. A., Heckman, T. M., & Kauffmann, G. 2006, MNRAS, 371, 1559, doi: 10.1111/j.1365-2966.2006.10812.x
* Gültekin et al. (2009) Gültekin, K., Richstone, D. O., Gebhardt, K., et al. 2009, ApJ, 698, 198, doi: 10.1088/0004-637X/698/1/198
* Gutkin et al. (2016) Gutkin, J., Charlot, S., & Bruzual, G. 2016, MNRAS, 462, 1757, doi: 10.1093/mnras/stw1716
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357–362, doi: 10.1038/s41586-020-2649-2
* Herenz et al. (2015) Herenz, E. C., Wisotzki, L., Roth, M., & Anders, F. 2015, A&A, 576, A115, doi: 10.1051/0004-6361/201425580
* Hilbert et al. (2016) Hilbert, B., Chiaberge, M., Kotyla, J. P., et al. 2016, ApJS, 225, 12, doi: 10.3847/0067-0049/225/1/12
* Hopkins et al. (2008) Hopkins, P. F., Hernquist, L., Cox, T. J., & Kereš, D. 2008, ApJS, 175, 356, doi: 10.1086/524362
* Hunter (2007) Hunter, J. D. 2007, Computing in Science Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Inskip et al. (2011) Inskip, K. J., Jahnke, K., Rix, H.-W., & van de Ven, G. 2011, ApJ, 739, 90, doi: 10.1088/0004-637X/739/2/90
* Jahnke & Macciò (2011) Jahnke, K., & Macciò, A. V. 2011, ApJ, 734, 92, doi: 10.1088/0004-637X/734/2/92
* Jahnke et al. (2004) Jahnke, K., Sánchez, S. F., Wisotzki, L., et al. 2004, ApJ, 614, 568, doi: 10.1086/423233
* Juneau et al. (2014) Juneau, S., Bournaud, F., Charlot, S., et al. 2014, ApJ, 788, 88, doi: 10.1088/0004-637X/788/1/88
* Kennicutt (1998) Kennicutt, Jr., R. C. 1998, ApJ, 498, 541, doi: 10.1086/305588
* Kewley & Dopita (2002) Kewley, L. J., & Dopita, M. A. 2002, ApJS, 142, 35, doi: 10.1086/341326
* Kewley et al. (2013a) Kewley, L. J., Dopita, M. A., Leitherer, C., et al. 2013a, ApJ, 774, 100, doi: 10.1088/0004-637X/774/2/100
* Kewley et al. (2013b) Kewley, L. J., Maier, C., Yabe, K., et al. 2013b, ApJ, 774, L10, doi: 10.1088/2041-8205/774/1/L10
* King & Pounds (2015) King, A., & Pounds, K. 2015, ARA&A, 53, 115, doi: 10.1146/annurev-astro-082214-122316
* Krogager et al. (2016) Krogager, J.-K., Fynbo, J. P. U., Noterdaeme, P., et al. 2016, MNRAS, 455, 2698, doi: 10.1093/mnras/stv2346
* Larkin et al. (2013) Larkin, J., Wright, S., Weiss, J., et al. 2013, Keck OSIRIS Data Reduction Pipeline, https://github.com/Keck-DataReductionPipelines/OsirisDRP/tree/master, GitHub
* Lauer et al. (2007) Lauer, T. R., Tremaine, S., Richstone, D., & Faber, S. M. 2007, ApJ, 670, 249, doi: 10.1086/522083
* Law et al. (2018) Law, D. R., Steidel, C. C., Chen, Y., et al. 2018, ApJ, 866, 119, doi: 10.3847/1538-4357/aae156
* Lehnert et al. (1999) Lehnert, M. D., van Breugel, W. J. M., Heckman, T. M., & Miley, G. K. 1999, ApJS, 124, 11, doi: 10.1086/313252
* Lockhart et al. (2019) Lockhart, K. E., Do, T., Larkin, J. E., et al. 2019, AJ, 157, 75, doi: 10.3847/1538-3881/aaf64e
* Martín-Navarro et al. (2018) Martín-Navarro, I., Brodie, J. P., Romanowsky, A. J., Ruiz-Lara, T., & van de Ven, G. 2018, Nature, 553, 307, doi: 10.1038/nature24999
* McConnell & Ma (2013) McConnell, N. J., & Ma, C.-P. 2013, ApJ, 764, 184, doi: 10.1088/0004-637X/764/2/184
* Merloni et al. (2010) Merloni, A., Bongiorno, A., Bolzonella, M., et al. 2010, ApJ, 708, 137, doi: 10.1088/0004-637X/708/1/137
* Nesvadba et al. (2017a) Nesvadba, N. P. H., De Breuck, C., Lehnert, M. D., Best, P. N., & Collet, C. 2017a, A&A, 599, A123, doi: 10.1051/0004-6361/201528040
* Nesvadba et al. (2017b) Nesvadba, N. P. H., Drouart, G., De Breuck, C., et al. 2017b, A&A, 600, A121, doi: 10.1051/0004-6361/201629357
* Newman et al. (2014) Newman, S. F., Buschkamp, P., Genzel, R., et al. 2014, ApJ, 781, 21, doi: 10.1088/0004-637X/781/1/21
* Osterbrock & Ferland (2006) Osterbrock, D. E., & Ferland, G. J. 2006, Astrophysics of gaseous nebulae and active galactic nuclei
* Peng et al. (2006) Peng, C. Y., Impey, C. D., Rix, H.-W., et al. 2006, ApJ, 649, 616, doi: 10.1086/506266
* Pettini & Pagel (2004) Pettini, M., & Pagel, B. E. J. 2004, MNRAS, 348, L59, doi: 10.1111/j.1365-2966.2004.07591.x
* Planck Collaboration et al. (2014) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2014, A&A, 571, A16, doi: 10.1051/0004-6361/201321591
* Podigachoski et al. (2015) Podigachoski, P., Barthel, P. D., Haas, M., et al. 2015, A&A, 575, A80, doi: 10.1051/0004-6361/201425137
* Prochaska et al. (2014) Prochaska, J. X., Lau, M. W., & Hennawi, J. F. 2014, ApJ, 796, 140, doi: 10.1088/0004-637X/796/2/140
* Sanders et al. (2015) Sanders, R. L., Shapley, A. E., Kriek, M., et al. 2015, ApJ, 799, 138, doi: 10.1088/0004-637X/799/2/138
* Sanders et al. (2016) —. 2016, ApJ, 816, 23, doi: 10.3847/0004-637X/816/1/23
* Schramm & Silverman (2013) Schramm, M., & Silverman, J. D. 2013, ApJ, 767, 13, doi: 10.1088/0004-637X/767/1/13
* Schulze & Wisotzki (2014) Schulze, A., & Wisotzki, L. 2014, MNRAS, 438, 3422, doi: 10.1093/mnras/stt2457
* Shapley et al. (2015) Shapley, A. E., Reddy, N. A., Kriek, M., et al. 2015, ApJ, 801, 88, doi: 10.1088/0004-637X/801/2/88
* Shen et al. (2011) Shen, Y., Richards, G. T., Strauss, M. A., et al. 2011, ApJS, 194, 45, doi: 10.1088/0067-0049/194/2/45
* Shields et al. (2006) Shields, G. A., Menezes, K. L., Massart, C. A., & Vanden Bout, P. 2006, ApJ, 641, 683, doi: 10.1086/500542
* Steidel et al. (2014) Steidel, C. C., Rudie, G. C., Strom, A. L., et al. 2014, ApJ, 795, 165, doi: 10.1088/0004-637X/795/2/165
* Stern & Laor (2013) Stern, J., & Laor, A. 2013, MNRAS, 431, 836, doi: 10.1093/mnras/stt211
* Stickley & Canalizo (2014) Stickley, N. R., & Canalizo, G. 2014, ApJ, 786, 12, doi: 10.1088/0004-637X/786/1/12
* Storchi-Bergmann et al. (1998) Storchi-Bergmann, T., Schmitt, H. R., Calzetti, D., & Kinney, A. L. 1998, AJ, 115, 909, doi: 10.1086/300242
* Strom et al. (2017) Strom, A. L., Steidel, C. C., Rudie, G. C., et al. 2017, ApJ, 836, 164, doi: 10.3847/1538-4357/836/2/164
* Sun et al. (2015) Sun, M., Trump, J. R., Brandt, W. N., et al. 2015, ApJ, 802, 14, doi: 10.1088/0004-637X/802/1/14
* Tomczak et al. (2016) Tomczak, A. R., Quadri, R. F., Tran, K.-V. H., et al. 2016, ApJ, 817, 118, doi: 10.3847/0004-637X/817/2/118
* Trakhtenbrot et al. (2015) Trakhtenbrot, B., Urry, C. M., Civano, F., et al. 2015, Science, 349, 168, doi: 10.1126/science.aaa4506
* Treister et al. (2012) Treister, E., Schawinski, K., Urry, C. M., & Simmons, B. D. 2012, ApJ, 758, L39, doi: 10.1088/2041-8205/758/2/L39
* Vayner et al. (2016) Vayner, A., Wright, S. A., Do, T., et al. 2016, ApJ, 821, 64, doi: 10.3847/0004-637X/821/1/64
* Vayner et al. (2017) Vayner, A., Wright, S. A., Murray, N., et al. 2017, ApJ, 851, 126, doi: 10.3847/1538-4357/aa9c42
* Vayner et al. (2021) —. 2021, ApJ
* Veilleux & Osterbrock (1987) Veilleux, S., & Osterbrock, D. E. 1987, ApJS, 63, 295, doi: 10.1086/191166
* Venemans et al. (2016) Venemans, B. P., Walter, F., Zschaechner, L., et al. 2016, ApJ, 816, 37, doi: 10.3847/0004-637X/816/1/37
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
* Volonteri (2012) Volonteri, M. 2012, Science, 337, 544, doi: 10.1126/science.1220843
* Wang et al. (2006) Wang, J.-M., Chen, Y.-M., & Zhang, F. 2006, ApJ, 647, L17, doi: 10.1086/507271
* Wang et al. (2013) Wang, R., Wagg, J., Carilli, C. L., et al. 2013, ApJ, 773, 44, doi: 10.1088/0004-637X/773/1/44
* Williams et al. (2017) Williams, R. J., Maiolino, R., Krongold, Y., et al. 2017, MNRAS, 467, 3399, doi: 10.1093/mnras/stx311
* Willott et al. (2007) Willott, C. J., Martínez-Sansigre, A., & Rawlings, S. 2007, AJ, 133, 564, doi: 10.1086/510291
* Willott et al. (2000) Willott, C. J., Rawlings, S., & Jarvis, M. J. 2000, MNRAS, 313, 237, doi: 10.1046/j.1365-8711.2000.03267.x
* Yuan et al. (2013) Yuan, T.-T., Kewley, L. J., & Richard, J. 2013, ApJ, 763, 9, doi: 10.1088/0004-637X/763/1/9
* Zubovas & King (2012) Zubovas, K., & King, A. 2012, ApJ, 745, L34, doi: 10.1088/2041-8205/745/2/L34
* Zubovas & King (2014) Zubovas, K., & King, A. R. 2014, MNRAS, 439, 400, doi: 10.1093/mnras/stt2472
# Run-Time Safety Monitoring of Neural-Network-Enabled Dynamical Systems
Weiming Xiang Weiming Xiang is with the School of Computer and Cyber Sciences,
Augusta University, Augusta, GA, 30912 USA e-mail<EMAIL_ADDRESS>
###### Abstract
Complex dynamical systems rely on the correct deployment and operation of
numerous components, with state-of-the-art methods relying on learning-enabled
components in various stages of modeling, sensing, and control at both offline
and online levels. This paper addresses the run-time safety monitoring problem
of dynamical systems embedded with neural network components. A run-time
safety state estimator in the form of an interval observer is developed to
construct lower and upper bounds on the system state trajectories at run time. The developed run-time safety state estimator consists of two auxiliary neural networks derived from the neural network embedded in the dynamical system, together with observer gains that ensure positivity, namely the ability of the estimator to bound the system state at run time, and the convergence of the corresponding error dynamics. The design procedure is formulated in terms of a
family of linear programming feasibility problems. The developed method is
illustrated by a numerical example and is validated with evaluations on an
adaptive cruise control system.
###### Index Terms:
Dynamical systems, interval observer, neural networks, run-time monitoring.
## I Introduction
Complex dynamical systems, for instance, medical robotic systems, autonomous
vehicles and a variety of cyber-physical systems (CPS), have been increasingly
benefiting from the recent rapid development of machine learning (ML) and
artificial intelligence (AI) techniques in various aspects ranging from
modeling to control, including stabilizing neural network controllers and state observers [1, 2, 3], adaptive neural network controllers [4, 5], and a variety of other neural network controllers [6]. However, because of the well-known vulnerability of neural networks, systems equipped with them, also called neural-network-enabled systems, remain restricted to scenarios with the lowest safety requirements. As often observed, a slight perturbation imposed on the input of a well-trained neural network can lead to a completely incorrect and unpredictable result [7]. When neural network components are involved in dynamical system models, such as neural network controllers applied in feedback channels, noise and disturbances inevitably exist in the output measurements that are fed into those controllers. These undesired but unavoidable perturbations may cause significant safety issues for dynamical systems in run-time operation. Moreover, recently developed adversarial machine learning techniques can easily attack learning-enabled systems at run time, making the safety issue of such systems even worse. Therefore, to assure the safety of dynamical systems equipped with neural network components, there is a need to develop safety monitoring techniques that provide online information about the safety properties of neural-network-enabled dynamical systems.
To assure the safety properties of neural networks, a few safety verification methods have been developed recently. These approaches are mostly designed in the framework of offline computation; they usually exhibit high computational complexity and require substantial computational resources to conduct safety verification. For instance, the verification problem of a class of neural networks with rectified linear unit (ReLU) activation functions can be formalized as a variety of sophisticated computational problems. A geometric computational approach based on the manipulation of polytopes is proposed in [8, 9], which is able to compute the exact output set of a ReLU neural network. In the latest work [10, 11], a novel Star set representation is developed that significantly improves scalability. Optimization-based methods have also been developed for the verification of ReLU neural networks, such as the mixed-integer linear programming (MILP) approach [12, 13], the linear programming (LP) based approach [14], and the Reluplex algorithm proposed in [15], which stems from the classical Simplex algorithm. For neural networks with general activation functions, a simulation-based approach is introduced in [16], inspired by the maximal sensitivity concept proposed in [17]. The output reachable set estimation for feedforward neural networks with general activation functions is formulated in terms of a chain of convex optimization problems, and an improved version of the simulation-based approach is developed in the framework of interval arithmetic [18, 19]. These optimization and geometric methods require substantial computational effort to verify even a simple property of a neural network. For example, verifying some properties of the ACAS Xu neural networks in [15] takes more than 100 hours, which does not meet the real-time requirement of run-time safety monitoring for dynamical systems.
One way to resolve the real-time challenge of run-time monitoring is to develop more computationally efficient verification methods that can be executed sufficiently fast to satisfy the run-time requirement, as the specification-guided method and the Star set method do in [19, 10, 11]. However, these methods essentially have an open-loop computation structure, and computational limitations always remain when such offline algorithms are implemented online. On the other hand, inspired by observer design techniques in classical control theory, another way is to design a closed-loop structure for run-time monitoring using the instantaneous measurement of the system, which is illustrated in Figure 1. Recently, interval observer design techniques have been developed to provide lower and upper bounds on state trajectories during the system’s operation, which can be used to conduct run-time monitoring for dynamical systems [20, 21, 22, 23, 24, 25, 26]. Inspired by the idea of interval observer methods developed in the framework of positive systems [27, 28, 29, 30], a novel run-time safety state estimator is developed for neural-network-enabled dynamical systems. The run-time state estimator design consists of two essential elements: auxiliary neural networks and observer gains. Briefly speaking, the auxiliary neural networks, derived from the neural network in the original system, are designed to deal with the neural network components, while the observer gains handle the system dynamics, ensuring positivity of the error states and their convergence. The design process can be formulated in terms of a family of LP feasibility problems. Notably, if the neural network component is driven by measurements instead of the system state, the design process is independent of the neural network, which makes the developed method applicable to neural-network-enabled systems regardless of scalability concerns about the size of the neural network.
Figure 1: The generic structure of run-time safety monitoring of neural-
network-enabled dynamical systems considered in this paper.
The rest of this paper is organized as follows. In Section II, some
preliminaries and problem formulation are introduced. The main result, run-
time monitoring design, is proposed in Section III. Two auxiliary neural
networks are derived from the weights and biases of the neural network of the original system. Interval observers are designed in the framework of LP problems, and furthermore, the convergence of the error system is discussed. In
Section IV, the developed approach is applied to an adaptive cruise control
(ACC) system. Conclusions and future remarks are given in Section V.
_Notations:_ $\mathbb{R}$ and $\mathbb{R}_{+}$ stand for the sets of real
numbers and nonnegative real numbers respectively, and $\mathbb{R}^{n}$
denotes the vector space of all $n$-tuples of real numbers,
$\mathbb{R}^{n\times n}$ is the space of $n\times n$ matrices with real
entries. We denote $I_{n\times n}\in\mathbb{R}^{n\times n}$ as an
$n$-dimensional identity matrix and $\mathbf{1}_{n\times
1}=[1,\ldots,1]^{\top}\in\mathbb{R}^{n\times 1}$. Matrix
$A\in\mathbb{R}^{n\times n}$ is a Metzler matrix if its off-diagonal entries
are nonnegative, and $\mathbb{M}_{n}$ denotes the set of the Metzler matrices
of the size $n$. For $x\in\mathbb{R}^{n}$, $x_{i}$ denotes the $i$th component
of $x$, and the notation $x>0$ means $x_{i}>0$ for $1\leq i\leq n$.
$\mathbb{R}_{+}^{n}=\{x\in\mathbb{R}^{n}:x\geq 0\}$ denotes the nonnegative orthant in $\mathbb{R}^{n}$, and $\mathbb{R}_{+}^{n\times m}$ denotes the set of $n\times m$ real nonnegative matrices. For $x\in\mathbb{R}^{n}$, its 1-norm is $\left\|x\right\|=\sum\nolimits_{k=1}^{n}\left|x_{k}\right|$. Similarly, for $A\in\mathbb{R}^{n\times m}$, $a_{ij}$ denotes the element in the $(i,j)$ position of $A$, and $A>0$ means that $a_{ij}>0$ for all $i,j$. $A>B$ means that $A-B>0$. $\left|A\right|$ denotes the matrix with entries $\left|a_{ij}\right|$, and $A^{\top}$ is the transpose of $A$.
## II System Description and Problem Formulation
### II-A Neural-Network-Enabled Dynamical Systems
In this work, we consider an $L$-layer feedforward neural network
$\Phi:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{L}}$ defined by the following
recursive equations in the form of
$\displaystyle\begin{cases}\eta_{\ell}=\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell}),~{}\ell=1,\ldots,L\\\
\Phi(\eta_{0})=\eta_{L}\end{cases}$ (1)
where $\eta_{\ell}$ denotes the output of the $\ell$-th layer of the neural
network, and in particular $\eta_{0}\in\mathbb{R}^{n_{0}}$ is the input to the
neural network and $\eta_{L}\in\mathbb{R}^{n_{L}}$ is the output produced by
the neural network, respectively. $W_{\ell}\in\mathbb{R}^{n_{\ell}\times
n_{\ell-1}}$ and $b_{\ell}\in\mathbb{R}^{n_{\ell}}$ are weight matrices and
bias vectors for the $\ell$-th layer.
$\phi_{\ell}=[\psi_{\ell},\cdots,\psi_{\ell}]$ is the concatenation of
activation functions of the $\ell$-th layer in which
$\psi_{\ell}:\mathbb{R}\to\mathbb{R}$ is the activation function. The following assumptions related to the activation functions are made.
###### Assumption 1
Assume that the following properties hold for the activation functions
$\psi_{\ell}$, $\ell=1,\ldots,L$:
1. 1.
Given any two scalars $x_{1}$ and $x_{2}$, there exists an $\alpha>0$ such that
$\left|\psi_{\ell}(x_{1})-\psi_{\ell}(x_{2})\right|\leq\alpha\left|x_{1}-x_{2}\right|,~{}\forall\ell=1,\ldots,L.$
(2)
2. 2.
Given any two scalars $x_{1}\leq x_{2}$, the following inequality holds
$\psi_{\ell}(x_{1})\leq\psi_{\ell}(x_{2}),~{}\forall\ell=1,\ldots,L.$ (3)
###### Remark 1
The above two assumptions hold for most popular activation functions such as
ReLU, sigmoid, tanh, for instance. The maximum Lipschitz constant of all
$\psi_{\ell}$ can be chosen as the $\alpha$ for condition (2). In addition,
those popular activation functions are monotonically increasing so that
condition (3) is explicitly satisfied.
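As a concrete illustration of the recursion (1), the following minimal sketch (ours, not part of the paper; NumPy assumed) evaluates an $L$-layer network with a single monotone, Lipschitz activation shared by all layers, e.g., tanh, which satisfies Assumption 1 with $\alpha=1$:

```python
import numpy as np

def forward(weights, biases, eta0, act=np.tanh):
    """Evaluate the L-layer feedforward network Phi of eq. (1).

    weights = [W_1, ..., W_L], biases = [b_1, ..., b_L]; act is a
    monotone, Lipschitz activation satisfying Assumption 1.
    """
    eta = np.asarray(eta0, dtype=float)  # eta_0: network input
    for W, b in zip(weights, biases):
        eta = act(W @ eta + b)           # eta_l = phi_l(W_l eta_{l-1} + b_l)
    return eta                           # eta_L = Phi(eta_0)
```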
Neural-network-enabled dynamical systems are dynamical systems driven by
neural network components such as neural network feedback controllers. In
general, neural-network-enabled dynamical systems are in the form of
$\displaystyle\begin{cases}\dot{x}(t)=f(x(t),u(t),\Phi(x(t),u(t)))\\\
y(t)=g(x(t),u(t))\end{cases}$ (4)
where $x(t)\in\mathbb{R}^{n_{x}}$ is the state vector,
$u(t)\in\mathbb{R}^{n_{u}}$ is the system input and
$y(t)\in\mathbb{R}^{n_{y}}$ is the measurement of the system, respectively.
$f:\mathbb{R}^{n_{x}+n_{u}}\to\mathbb{R}^{n_{x}}$ and
$g:\mathbb{R}^{n_{x}+n_{u}}\to\mathbb{R}^{n_{y}}$ are nonlinear functions.
$\Phi:\mathbb{R}^{n_{x}+n_{u}}\to\mathbb{R}^{n_{x}}$ is the neural network
component embedded in the system dynamics. In the rest of this paper, the time
index $t$ in some variables may be omitted for brevity if no ambiguity is
introduced.
In this work, we focus on a class of neural-network-enabled systems with dynamics in the form of the Lipschitz nonlinear model described as
$\displaystyle\mathfrak{L}:\begin{cases}\dot{x}=Ax+f(x)+\Phi(x,u)\\\
y=Cx\end{cases}$ (5)
where $A\in\mathbb{R}^{n_{x}\times n_{x}}$, $C\in\mathbb{R}^{n_{y}\times n_{x}}$, and $f(x)$ is a Lipschitz nonlinearity satisfying the Lipschitz
inequality
$\displaystyle\left\|f(x_{1})-f(x_{2})\right\|\leq\beta\left\|x_{1}-x_{2}\right\|,~{}\beta>0.$
(6)
###### Remark 2
It is worth mentioning that any nonlinear system of the form $\dot{x}=f(x)+\Phi(x,u)$ can be expressed in the form of (5), as long as $f(x)$ is differentiable with respect to $x$. The neural network $\Phi(x,u)$ is an internal component affecting the system behavior. For example, if $\Phi(x,u)$ is trained as a neural network feedback controller, model (5) represents a state-feedback closed-loop system.
Finally, the nonlinearity $f(x)$ is assumed to have the following property.
###### Assumption 2
It is assumed that there exist functions
$\underline{f},\overline{f}:\mathbb{R}^{2n_{x}}\to\mathbb{R}^{n_{x}}$ such
that
$\displaystyle\underline{f}(\underline{x},\overline{x})\leq
f(x)\leq\overline{f}(\underline{x},\overline{x})$ (7)
holds for any $\underline{x}\leq x\leq\overline{x}$.
### II-B Problem Statement
The run-time safety monitoring problem considered in this paper is to design a run-time safety state estimator $\mathfrak{E}$ that estimates lower and upper bounds on the instantaneous value of $x(t)$ for the purpose of safety monitoring. The information about system $\mathfrak{L}$ available to estimator $\mathfrak{E}$ includes: the system matrices $A$, $C$, the nonlinearity $f$, and the neural network $\Phi$, namely its weight matrices $\{W_{\ell}\}_{\ell=1}^{L}$ and bias vectors $\{b_{\ell}\}_{\ell=1}^{L}$; bounds $\underline{u}$ and $\overline{u}$ such that the input $u(t)$ satisfies $\underline{u}\leq u(t)\leq\overline{u}$, $\forall t\geq 0$; and the instantaneous value of the measurement $y(t)$ at run time. The run-time safety monitoring problem for neural-network-enabled dynamical system (5) is summarized as follows.
###### Problem 1
Given a neural-network-enabled dynamical system $\mathfrak{L}$ in the form of
(5) with input $u(t)$ satisfying $\underline{u}\leq u(t)\leq\overline{u}$,
$\forall t\geq 0$, how does one design a run-time safety state estimator
$\mathfrak{E}$ to reconstruct two instantaneous values $\underline{x}(t)$ and
$\overline{x}(t)$ such that $\underline{x}(t)\leq x(t)\leq\overline{x}(t)$,
$\forall t\geq 0$?
Inspired by interval observers proposed in [20, 21, 22, 23, 24, 25, 26], the
run-time safety state estimator $\mathfrak{E}$ is developed in the following
Luenberger observer form
$\displaystyle\mathfrak{E}:\begin{cases}\dot{\underline{x}}=A\underline{x}+\underline{f}(\underline{x},\overline{x})+\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})+\underline{L}(y-C\underline{x})\\\
\dot{\overline{x}}=A\overline{x}+\overline{f}(\underline{x},\overline{x})+\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})+\overline{L}(y-C\overline{x})\end{cases}$
(8)
where initial states satisfy $\underline{x}(t_{0})\leq
x(t_{0})\leq\overline{x}(t_{0})$ and $\underline{f}$, $\overline{f}$ are
functions satisfying Assumption 2. Neural networks $\underline{\Phi}$,
$\overline{\Phi}$ and observer gains $\underline{L}$, $\overline{L}$ are to be
determined. Furthermore, letting the error states
$\underline{e}(t)=x(t)-\underline{x}(t)$ and
$\overline{e}(t)=\overline{x}(t)-x(t)$, the error dynamics can be obtained as
follows:
$\displaystyle\begin{cases}\dot{\underline{e}}=(A-\underline{L}C)\underline{e}+f(x)-\underline{f}(\underline{x},\overline{x})+\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})\\\
\dot{\overline{e}}=(A-\overline{L}C)\overline{e}+\overline{f}(\underline{x},\overline{x})-f(x)+\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})-\Phi(x,u)\end{cases}$
(9)
with initial states $\underline{e}(t_{0})\geq 0$ and $\overline{e}(t_{0})\geq
0$.
The problem of ensuring that the run-time value of $x(t)$ satisfies $\underline{x}(t)\leq x(t)\leq\overline{x}(t)$, $\forall t\geq 0$, is equivalent to requiring that the run-time values of the error states $\underline{e}(t)$ and $\overline{e}(t)$ remain nonnegative, that is, $\underline{e}(t)\geq 0$ and $\overline{e}(t)\geq 0$, $\forall t\geq 0$. Thus, with the run-time safety state estimator in the form of (8), the run-time safety monitoring problem for system (5) can be restated as follows.
###### Problem 2
Given a neural-network-enabled dynamical system $\mathfrak{L}$ in the form of
(5) with input $u(t)$ satisfying $\underline{u}\leq u(t)\leq\overline{u}$,
$\forall t\geq 0$, how does one construct proper neural networks
$\underline{\Phi}$, $\overline{\Phi}$ and observer gains $\underline{L}$,
$\overline{L}$ such that the error states $\underline{e}(t)$,
$\overline{e}(t)$ governed by (9) satisfy $\underline{e}(t)\geq 0$ and
$\overline{e}(t)\geq 0$, $\forall t\geq 0$?
As stated in Problem 2, run-time safety monitoring consists of two essential design tasks: neural network design and observer gain design. The following lemma recalls a standard result on the positivity of dynamical systems.
###### Lemma 1
[21] Consider a system $\dot{z}=Mz+p(t)$, $z\in\mathbb{R}^{n}$, where $M\in\mathbb{M}_{n}$ and $p:\mathbb{R}_{+}\to\mathbb{R}^{n}_{+}$. The system is called cooperative, and its solutions satisfy $z(t)\geq 0$, $\forall t\geq 0$, whenever $z(0)\geq 0$.
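A quick numerical illustration of Lemma 1 (ours, for intuition only): with a Metzler $M$, a nonnegative input $p(t)$, and a nonnegative initial state, a forward-Euler trajectory stays in the nonnegative orthant.

```python
import numpy as np

# M is Metzler: the off-diagonal entries (1.0 and 0.5) are nonnegative.
M = np.array([[-2.0, 1.0],
              [0.5, -3.0]])
p = lambda t: np.array([0.1, 0.2])    # nonnegative input p(t) >= 0
z, dt = np.array([1.0, 0.5]), 1e-3    # z(0) >= 0, small Euler step
for k in range(20000):
    z = z + dt * (M @ z + p(k * dt))  # Euler step of z' = M z + p(t)
    assert (z >= 0).all()             # positivity is preserved
print(z)  # settles near the nonnegative equilibrium -M^{-1} p
```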
Based on Lemma 1, and since Assumption 2 implies $f(x)-\underline{f}(\underline{x},\overline{x})\in\mathbb{R}_{+}^{n_{x}}$ and $\overline{f}(\underline{x},\overline{x})-f(x)\in\mathbb{R}_{+}^{n_{x}}$, the run-time safety monitoring problem can be resolved if the observer gains $\underline{L}$, $\overline{L}$ and the neural networks $\underline{\Phi}$, $\overline{\Phi}$ satisfy the conditions proposed in the following proposition.
###### Proposition 1
The run-time safety monitoring Problem 1 is solvable if there exist observer
gains $\underline{L}$, $\overline{L}$ and neural networks $\underline{\Phi}$
and $\overline{\Phi}$ such that
1. 1.
$A-\underline{L}C\in\mathbb{M}_{n_{x}}$,
$A-\overline{L}C\in\mathbb{M}_{n_{x}}$.
2. 2.
$\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})\in\mathbb{R}_{+}^{n_{x}}$,
$\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})-\Phi(x,u)\in\mathbb{R}^{n_{x}}_{+}$.
Proof. Due to Assumption 2, it holds that $f(x)-\underline{f}(\underline{x},\overline{x})\in\mathbb{R}_{+}^{n_{x}}$. Then, owing to $\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})\in\mathbb{R}_{+}^{n_{x}}$, one obtains $f(x)-\underline{f}(\underline{x},\overline{x})+\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})\in\mathbb{R}_{+}^{n_{x}}$. Together with $A-\underline{L}C\in\mathbb{M}_{n_{x}}$, this leads to $\underline{e}(t)\geq 0$, $\forall t\geq 0$, according to Lemma 1. The same argument applies to ensure $\overline{e}(t)\geq 0$. The proof is complete. $\hfill\hfill\square$
Observer gains $\underline{L}$, $\overline{L}$ and neural networks $\underline{\Phi}$, $\overline{\Phi}$ satisfying the conditions of Proposition 1 ensure that the system state $x(t)$ is bounded by the estimator states $\underline{x}(t)$ and $\overline{x}(t)$, but they provide no guarantee on the boundedness and convergence of the error states $\underline{e}(t)$ and $\overline{e}(t)$. The values of $\underline{e}(t)$ and $\overline{e}(t)$ may diverge, i.e., $\lim_{t\to\infty}\underline{e}(t)=\infty$ and $\lim_{t\to\infty}\overline{e}(t)=\infty$, which would render the monitoring meaningless in practice. The following notion of practical stability, concerned with the boundedness of the system state, is therefore introduced.
###### Definition 1
[31] Given $(\epsilon,\delta)$ with $0<\epsilon\leq\delta$, let $x(t,x(t_{0}))$, $t\geq t_{0}$, be a solution of the system $\dot{x}(t)=f(x(t),u(t))$. The trivial solution $x=0$ of the system is said to be practically stable with respect to $(\epsilon,\delta)$ if $\left\|x(t_{0})\right\|\leq\epsilon$ implies $\left\|x(t)\right\|\leq\delta$, $\forall t\geq t_{0}$. Furthermore, if there is a $T=T(t_{0},\epsilon,\delta)>0$ such that $\left\|x(t_{0})\right\|\leq\epsilon$ implies $\left\|x(t)\right\|\leq\delta$ for any $t\geq t_{0}+T$, then the system is practically asymptotically stable.
###### Remark 3
Practical stability ensures the boundedness of the state trajectories of a dynamical system. If the inequality
$\displaystyle\left\|x(t)\right\|\leq Ce^{-\lambda(t-t_{0})}\left\|x(t_{0})\right\|+r$ (10)
holds for any $x(t_{0})\in\mathbb{R}^{n_{x}}$, any $t\geq t_{0}$, and constants $C>0$, $r\geq 0$, then the state trajectories of the system are bounded in the sense that $\left\|x(t)\right\|\leq C\left\|x(t_{0})\right\|+r$, $\forall t\geq t_{0}$; moreover, the state trajectories converge to the ball $\mathcal{B}_{r}=\{x\in\mathbb{R}^{n_{x}}\mid\left\|x\right\|\leq r\}$, $r\geq 0$, exponentially at a decay rate of $\lambda$. In particular, the system is then said to be globally practically uniformly exponentially stable.
## III Run-Time Safety Monitoring Design
This section aims to design neural networks $\underline{\Phi}$,
$\overline{\Phi}$ and observer gains $\underline{L}$, $\overline{L}$
satisfying the conditions of Proposition 1. Furthermore, the convergence of the error states is also analyzed and ensured. First, we design neural networks
$\underline{\Phi}$ and $\overline{\Phi}$ based on the weight matrices
$W_{\ell}$ and bias vectors $b_{\ell}$ of neural network $\Phi$ in system (5).
Given a neural network $\Phi:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{L}}$ in the
form of (1) with weight matrices
$\displaystyle
W_{\ell}=[w_{\ell}^{i,j}]=\begin{bmatrix}w_{\ell}^{1,1}&w_{\ell}^{1,2}&\cdots&w_{\ell}^{1,n_{\ell-1}}\\\
w_{\ell}^{2,1}&w_{\ell}^{2,2}&\cdots&w_{\ell}^{2,n_{\ell-1}}\\\
\vdots&\vdots&\ddots&\vdots\\\
w_{\ell}^{n_{\ell},1}&w_{\ell}^{n_{\ell},2}&\cdots&w_{\ell}^{n_{\ell},n_{\ell-1}}\end{bmatrix}$
(11)
where $w_{\ell}^{i,j}$ denotes the element in the $i$-th row and $j$-th column, we define two auxiliary weight matrices as follows:
$\displaystyle\underline{W}_{\ell}$
$\displaystyle=[\underline{w}_{\ell}^{i,j}],~{}\underline{w}_{\ell}^{i,j}=\begin{cases}w_{\ell}^{i,j}&w_{\ell}^{i,j}<0\\\
0&w_{\ell}^{i,j}\geq 0\end{cases}$ (12) $\displaystyle\overline{W}_{\ell}$
$\displaystyle=[\overline{w}_{\ell}^{i,j}],~{}\overline{w}_{\ell}^{i,j}=\begin{cases}w_{\ell}^{i,j}&w_{\ell}^{i,j}\geq
0\\\ 0&w_{\ell}^{i,j}<0\end{cases}$ (13)
from which it is immediate that $W_{\ell}=\underline{W}_{\ell}+\overline{W}_{\ell}$. Then, we construct two
auxiliary neural networks
$\underline{\Phi}:\mathbb{R}^{2n_{0}}\to\mathbb{R}^{n_{L}}$,
$\overline{\Phi}:\mathbb{R}^{2n_{0}}\to\mathbb{R}^{n_{L}}$ with inputs
$\underline{\eta}_{0},\overline{\eta}_{0}\in\mathbb{R}^{n_{0}}$ in the
following form:
$\displaystyle\begin{cases}\underline{\eta}_{\ell}=\phi_{\ell}(\underline{W}_{\ell}\overline{\eta}_{\ell-1}+\overline{W}_{\ell}\underline{\eta}_{\ell-1}+b_{\ell}),~{}\ell=1,\ldots,L\\\
\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})=\underline{\eta}_{L}\end{cases}$
(14)
$\displaystyle\begin{cases}\overline{\eta}_{\ell}=\phi_{\ell}(\underline{W}_{\ell}\underline{\eta}_{\ell-1}+\overline{W}_{\ell}\overline{\eta}_{\ell-1}+b_{\ell}),~{}\ell=1,\ldots,L\\\
\overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})=\overline{\eta}_{L}\end{cases}$
(15)
Given $\underline{\eta}_{0}\leq\eta_{0}\leq\overline{\eta}_{0}$, the following theorem can be derived for auxiliary neural networks of the form (14) and (15); it establishes the nonnegativity of $\Phi(\eta_{0})-\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})$ and $\overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})-\Phi(\eta_{0})$.
###### Theorem 1
Given a neural network $\Phi:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{L}}$ and its
two auxiliary neural networks
$\underline{\Phi}:\mathbb{R}^{2n_{0}}\to\mathbb{R}^{n_{L}}$,
$\overline{\Phi}:\mathbb{R}^{2n_{0}}\to\mathbb{R}^{n_{L}}$ defined by (14) and
(15), the following condition
$\displaystyle\begin{bmatrix}\Phi(\eta_{0})-\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})\\\ \overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})-\Phi(\eta_{0})\end{bmatrix}\in\mathbb{R}_{+}^{2n_{L}}$ (16)
holds for any $\underline{\eta}_{0}\leq\eta_{0}\leq\overline{\eta}_{0}$.
Proof. Let us consider the $\ell$-th layer. For any $\underline{\eta}_{\ell-1}\leq\eta_{\ell-1}\leq\overline{\eta}_{\ell-1}$, it follows from (12), (13) that
$\displaystyle\underline{w}_{\ell}^{i,j}\overline{\eta}_{\ell-1}^{j}+\overline{w}_{\ell}^{i,j}\underline{\eta}_{\ell-1}^{j}\leq
w^{i,j}_{\ell}\eta_{\ell-1}^{j}\leq\underline{w}_{\ell}^{i,j}\underline{\eta}_{\ell-1}^{j}+\overline{w}_{\ell}^{i,j}\overline{\eta}_{\ell-1}^{j}$
which implies that
$\displaystyle
W_{\ell}\eta_{\ell-1}+b_{\ell}-(\underline{W}_{\ell}\overline{\eta}_{\ell-1}+\overline{W}_{\ell}\underline{\eta}_{\ell-1}+b_{\ell})\geq
0$
$\displaystyle\underline{W}_{\ell}\underline{\eta}_{\ell-1}+\overline{W}_{\ell}\overline{\eta}_{\ell-1}+b_{\ell}-(W_{\ell}\eta_{\ell-1}+b_{\ell})\geq
0.$
Under Assumption 1, the monotonic property (3) of activation function
$\phi_{\ell}$ leads to
$\displaystyle\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell})-\phi_{\ell}(\underline{W}_{\ell}\overline{\eta}_{\ell-1}+\overline{W}_{\ell}\underline{\eta}_{\ell-1}+b_{\ell})\geq
0$
$\displaystyle\phi_{\ell}(\underline{W}_{\ell}\underline{\eta}_{\ell-1}+\overline{W}_{\ell}\overline{\eta}_{\ell-1}+b_{\ell})-\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell})\geq
0.$
Using the definitions of neural networks $\Phi$, $\underline{\Phi}$ and
$\overline{\Phi}$ described by (1), (14) and (15), namely
$\eta_{\ell}=\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell})$,
$\underline{\eta}_{\ell}=\phi_{\ell}(\underline{W}_{\ell}\overline{\eta}_{\ell-1}+\overline{W}_{\ell}\underline{\eta}_{\ell-1}+b_{\ell})$
and
$\overline{\eta}_{\ell}=\phi_{\ell}(\underline{W}_{\ell}\underline{\eta}_{\ell-1}+\overline{W}_{\ell}\overline{\eta}_{\ell-1}+b_{\ell})$,
the above derivation implies that
$\displaystyle\begin{bmatrix}\eta_{\ell-1}-\underline{\eta}_{\ell-1}\\\
\overline{\eta}_{\ell-1}-\eta_{\ell-1}\end{bmatrix}\in\mathbb{R}_{+}^{2n_{\ell-1}}\Rightarrow\begin{bmatrix}\eta_{\ell}-\underline{\eta}_{\ell}\\\
\overline{\eta}_{\ell}-\eta_{\ell}\end{bmatrix}\in\mathbb{R}_{+}^{2n_{\ell}}.$
(17)
Thus, given any $\underline{\eta}_{0}\leq\eta_{0}\leq\overline{\eta}_{0}$, one can obtain
$\displaystyle\begin{bmatrix}\eta_{L}-\underline{\eta}_{L}\\\ \overline{\eta}_{L}-\eta_{L}\end{bmatrix}=\begin{bmatrix}\Phi(\eta_{0})-\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})\\\ \overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})-\Phi(\eta_{0})\end{bmatrix}\in\mathbb{R}_{+}^{2n_{L}}.$ (18)
The proof is complete. $\hfill\hfill\square$
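To make the construction concrete, the following sketch (ours; NumPy with tanh activations assumed) builds $\underline{W}_{\ell},\overline{W}_{\ell}$ via (12)-(13), evaluates $\underline{\Phi},\overline{\Phi}$ via (14)-(15), and numerically checks the bracketing property (16) on a random two-layer network:

```python
import numpy as np

def split_weights(W):
    # (12)-(13): W_under keeps the negative entries, W_over the
    # nonnegative ones, so that W = W_under + W_over.
    return np.minimum(W, 0.0), np.maximum(W, 0.0)

def aux_forward(weights, biases, eta_lo, eta_hi, act=np.tanh):
    """Auxiliary networks (14)-(15): propagate componentwise bounds
    eta_lo <= eta_0 <= eta_hi through every layer."""
    lo, hi = np.asarray(eta_lo, float), np.asarray(eta_hi, float)
    for W, b in zip(weights, biases):
        Wn, Wp = split_weights(W)
        lo, hi = (act(Wn @ hi + Wp @ lo + b),   # eq. (14)
                  act(Wn @ lo + Wp @ hi + b))   # eq. (15)
    return lo, hi

# Sanity check of Theorem 1 on a random network and a random input box.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((5, 3)), rng.standard_normal((2, 5))]
bs = [rng.standard_normal(5), rng.standard_normal(2)]
lo0, hi0 = np.array([-1.0, 0.0, -0.5]), np.array([0.0, 1.0, 0.5])
eta = lo0 + (hi0 - lo0) * rng.random(3)         # any point in the box
lo, hi = aux_forward(Ws, bs, lo0, hi0)
for W, b in zip(Ws, bs):
    eta = np.tanh(W @ eta + b)                  # Phi(eta_0) via (1)
assert (lo <= eta).all() and (eta <= hi).all()  # property (16)
```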
With the two auxiliary neural networks $\underline{\Phi}$, $\overline{\Phi}$,
we are ready to design observer gains $\underline{L}$ and $\overline{L}$ to
construct the run-time safety state estimator $\mathfrak{E}$ in the form of (8)
via the following theorem.
###### Theorem 2
The safety monitoring Problem 1 is solvable if the following conditions hold
for observer gains $\underline{L}$, $\overline{L}$ and neural networks
$\underline{\Phi}$ and $\overline{\Phi}$:
1. 1.
There exist $a\in\mathbb{R}$,
$\underline{L},\overline{L}\in\mathbb{R}^{n_{x}\times n_{y}}$ such that
$\displaystyle A-\underline{L}C$ $\displaystyle\geq aI_{n_{x}\times n_{x}}$
(19) $\displaystyle A-\overline{L}C$ $\displaystyle\geq aI_{n_{x}\times
n_{x}}.$ (20)
2. 2.
$\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})$,
$\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})$ are
in the form of (14) and (15) with
$\underline{\eta}_{0}=[\underline{x}^{\top},\underline{u}^{\top}]^{\top}$ and
$\overline{\eta}_{0}=[\overline{x}^{\top},\overline{u}^{\top}]^{\top}$.
Proof. First note that (19) and (20) imply that
$A-\underline{L}C\in\mathbb{M}_{n_{x}}$,
$A-\overline{L}C\in\mathbb{M}_{n_{x}}$. Then, Theorem 1 yields $\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})\in\mathbb{R}_{+}^{n_{x}}$ and $\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})-\Phi(x,u)\in\mathbb{R}^{n_{x}}_{+}$. Based on Proposition 1, the state $x(t)$ is bounded as $\underline{x}(t)\leq x(t)\leq\overline{x}(t)$, $\forall t\geq 0$; thus the safety monitoring problem is solvable. The proof is complete. $\hfill\hfill\square$
Theorem 2 provides a method to design a run-time safety state estimator in the interval observer form (8). The observer gains $\underline{L}$ and $\overline{L}$ can be obtained by solving the linear inequalities (19), (20), and the neural networks $\underline{\Phi}$ and $\overline{\Phi}$ are determined by (14) and (15) with the weight matrices $\underline{W}_{\ell}$, $\overline{W}_{\ell}$ defined by (12), (13). The bounds $\underline{x}(t)\leq x(t)\leq\overline{x}(t)$, $\forall t\geq 0$, can then be established during the system’s operation; however, the boundedness and convergence of the error states $\underline{e}(t)$ and $\overline{e}(t)$ are not guaranteed, which means the error dynamics (9) could be unstable. In that case, the estimated bounds $\underline{x}(t)$ and $\overline{x}(t)$ diverge from the system state $x(t)$ to infinite values, and consequently the run-time safety monitoring becomes useless in practice. In the following, the convergence of the run-time estimation bounds is discussed in the framework of the practical stability notion of Definition 1.
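As a concrete illustration of the gain-design step just described (ours; SciPy assumed): conditions (19)-(20) require $A-\underline{L}C$ and $A-\overline{L}C$ to dominate $aI_{n_{x}\times n_{x}}$ entrywise, so in particular their off-diagonal entries must be nonnegative, i.e., Metzler. The sketch below encodes only these off-diagonal constraints as an LP feasibility problem; $a$ can then be taken as the smallest diagonal entry of the resulting $A-LC$.

```python
import numpy as np
from scipy.optimize import linprog

def metzler_gain(A, C):
    """LP feasibility: find L with (A - L C)_{ij} >= 0 for i != j,
    i.e. A - L C Metzler.  Decision variables: the entries of L."""
    nx, ny = A.shape[0], C.shape[0]
    rows, rhs = [], []
    for i in range(nx):
        for j in range(nx):
            if i == j:
                continue
            # (L C)_{ij} = sum_k L_{ik} C_{kj} <= A_{ij}
            coeff = np.zeros(nx * ny)
            coeff[i * ny:(i + 1) * ny] = C[:, j]
            rows.append(coeff)
            rhs.append(A[i, j])
    res = linprog(c=np.zeros(nx * ny), A_ub=np.array(rows),
                  b_ub=np.array(rhs), bounds=[(None, None)] * (nx * ny))
    return res.x.reshape(nx, ny) if res.success else None

A = np.array([[-1.0, 0.5], [-3.0, -4.0]])
C = np.array([[1.0, 0.0]])
L = metzler_gain(A, C)  # here the binding constraint is L[1, 0] <= -3
```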
First, the following assumption is made for the nonlinearity $f(x)$ and the functions $\underline{f}(\underline{x},\overline{x})$, $\overline{f}(\underline{x},\overline{x})$ mentioned in Assumption 2.
###### Assumption 3
It is assumed that there exist scalars $\underline{\gamma}_{1}$,
$\overline{\gamma}_{1}$, $\underline{\gamma}_{2}$,
$\overline{\gamma}_{2}\in\mathbb{R}_{+}$ and vectors $\underline{\rho},\overline{\rho}\in\mathbb{R}^{n_{x}}_{+}$ such that
$\displaystyle
f(x)-\underline{f}(\underline{x},\overline{x})\leq\underline{\gamma}_{1}(x-\underline{x})+\underline{\gamma}_{2}(\overline{x}-x)+\underline{\rho}$
(21)
$\displaystyle\overline{f}(\underline{x},\overline{x})-f(x)\leq\overline{\gamma}_{1}(x-\underline{x})+\overline{\gamma}_{2}(\overline{x}-x)+\overline{\rho}$
(22)
hold for any $\underline{x}\leq x\leq\overline{x}$.
###### Remark 4
These parameters $\underline{\gamma}_{1}$, $\overline{\gamma}_{1}$,
$\underline{\gamma}_{2}$, $\overline{\gamma}_{2}$, $\underline{\rho}$, and
$\overline{\rho}$ in Assumption 3 can be estimated under Lipschitz condition
(6), using the results in [32], i.e., Lemma 6 in [32].
The following lemma is developed for neural network $\Phi$ and its auxiliary
neural networks $\underline{\Phi}$ and $\overline{\Phi}$.
###### Lemma 2
Given a feedforward neural network
$\Phi:\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{L}}$, there always exists a family of
matrices $\underline{S}_{\ell},\overline{S}_{\ell}\in\mathbb{R}^{n_{L}\times
n_{\ell}}_{+}$, $\ell=0,\ldots,L$, with
$\underline{S}_{L}=\overline{S}_{L}=I_{n_{L}\times{n_{L}}}$ such that
$\displaystyle\alpha\begin{bmatrix}\underline{S}_{\ell}\overline{W}_{\ell}-\overline{S}_{\ell}\underline{W}_{\ell}\\\
\overline{S}_{\ell}\overline{W}_{\ell}-\underline{S}_{\ell}\underline{W}_{\ell}\end{bmatrix}$
$\displaystyle\leq\begin{bmatrix}\underline{S}_{\ell-1}\\\
\overline{S}_{\ell-1}\end{bmatrix},~{}\ell=1,\ldots,L$ (23)
$\displaystyle\overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})-\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})$
$\displaystyle\leq\underline{S}_{0}(\eta_{0}-\underline{\eta}_{0})+\overline{S}_{0}(\overline{\eta}_{0}-\eta_{0})$
(24)
hold for any $\underline{\eta}_{0}\leq\eta_{0}\leq\overline{\eta}_{0}$, where
$\alpha$ is the Lipschitz constant of activation functions given in (2).
Proof. Starting from
$\underline{S}_{L}=\overline{S}_{L}=I_{n_{L}\times{n_{L}}}$, we can
recursively define
$\displaystyle\underline{S}_{\ell-1}=\alpha(\underline{S}_{\ell}\overline{W}_{\ell}-\overline{S}_{\ell}\underline{W}_{\ell})+\epsilon\mathbf{1}_{n_{L}\times
1}\mathbf{1}_{n_{\ell-1}\times 1}^{\top}$ (25)
$\displaystyle\overline{S}_{\ell-1}=\alpha(\overline{S}_{\ell}\overline{W}_{\ell}-\underline{S}_{\ell}\underline{W}_{\ell})+\epsilon\mathbf{1}_{n_{L}\times
1}\mathbf{1}_{n_{\ell-1}\times 1}^{\top}$ (26)
where $\epsilon>0$ could be any positive value. Thus, there always exist
$\underline{S}_{\ell}$, $\overline{S}_{\ell}$, $\ell=0,\ldots,L$ such that
(23) holds.
Then, we are going to establish (24). We consider the $\ell$-th layer
$\eta_{\ell}=\phi_{\ell}(W_{\ell}\eta_{\ell-1}+b_{\ell})$. Under Assumption 1,
it implies
$\displaystyle\eta_{\ell}-\underline{\eta}_{\ell}=$
$\displaystyle\phi_{\ell}({W}_{\ell}\eta_{\ell-1}+b_{\ell})-\phi_{\ell}(\underline{W}_{\ell}\overline{\eta}_{\ell-1}+\overline{W}_{\ell}\underline{\eta}_{\ell-1}+b_{\ell})$
$\displaystyle\leq$
$\displaystyle\alpha\left|{W}_{\ell}\eta_{\ell-1}+b_{\ell}-\underline{W}_{\ell}\overline{\eta}_{\ell-1}-\overline{W}_{\ell}\underline{\eta}_{\ell-1}-b_{\ell}\right|$
(27)
Following the same lines as in the proof of Theorem 1, one obtains
$\displaystyle{W}_{\ell}\eta_{\ell-1}+b_{\ell}-\underline{W}_{\ell}\overline{\eta}_{\ell-1}-\overline{W}_{\ell}\underline{\eta}_{\ell-1}-b_{\ell}\geq
0$ (28)
and using the fact that $W_{\ell}=\underline{W}_{\ell}+\overline{W}_{\ell}$, inequality (27) becomes
$\displaystyle\eta_{\ell}-\underline{\eta}_{\ell}\leq$
$\displaystyle\alpha\overline{W}_{\ell}(\eta_{\ell-1}-\underline{\eta}_{\ell-1})-\alpha\underline{W}_{\ell}(\overline{\eta}_{\ell-1}-\eta_{\ell-1})$
$\displaystyle=$
$\displaystyle\begin{bmatrix}\alpha\overline{W}_{\ell}&-\alpha\underline{W}_{\ell}\end{bmatrix}\begin{bmatrix}\eta_{\ell-1}-\underline{\eta}_{\ell-1}\\\
\overline{\eta}_{\ell-1}-\eta_{\ell-1}\end{bmatrix}.$ (29)
Similarly, one can obtain
$\displaystyle\overline{\eta}_{\ell}-\eta_{\ell}\leq\begin{bmatrix}-\alpha\underline{W}_{\ell}&\alpha\overline{W}_{\ell}\end{bmatrix}\begin{bmatrix}\eta_{\ell-1}-\underline{\eta}_{\ell-1}\\\
\overline{\eta}_{\ell-1}-\eta_{\ell-1}\end{bmatrix}.$ (30)
Based on inequalities (29) and (30), the following inequality can be
established
$\displaystyle\underline{S}_{\ell}(\eta_{\ell}-\underline{\eta}_{\ell})+\overline{S}_{\ell}(\overline{\eta}_{\ell}-\eta_{\ell})$
$\displaystyle\leq\alpha\begin{bmatrix}\underline{S}_{\ell}\overline{W}_{\ell}-\overline{S}_{\ell}\underline{W}_{\ell}&\overline{S}_{\ell}\overline{W}_{\ell}-\underline{S}_{\ell}\underline{W}_{\ell}\end{bmatrix}\begin{bmatrix}\eta_{\ell-1}-\underline{\eta}_{\ell-1}\\\
\overline{\eta}_{\ell-1}-\eta_{\ell-1}\end{bmatrix}$
with any $\underline{S}_{\ell},\overline{S}_{\ell}\in\mathbb{R}^{n_{L}\times
n_{\ell}}_{+}$.
Due to (23), which always holds for the $\underline{S}_{\ell}$, $\overline{S}_{\ell}$, $\ell=0,\ldots,L$, constructed in (25) and (26), the above inequality ensures
$\displaystyle\underline{S}_{\ell}(\eta_{\ell}-\underline{\eta}_{\ell})+\overline{S}_{\ell}(\overline{\eta}_{\ell}-\eta_{\ell})$
$\displaystyle\leq$
$\displaystyle\begin{bmatrix}\underline{S}_{\ell-1}&\overline{S}_{\ell-1}\end{bmatrix}\begin{bmatrix}\eta_{\ell-1}-\underline{\eta}_{\ell-1}\\\
\overline{\eta}_{\ell-1}-\eta_{\ell-1}\end{bmatrix}$ $\displaystyle=$
$\displaystyle\underline{S}_{\ell-1}(\eta_{\ell-1}-\underline{\eta}_{\ell-1})+\overline{S}_{\ell-1}(\overline{\eta}_{\ell-1}-\eta_{\ell-1})$
which can be iterated to yield
$\displaystyle\underline{S}_{L}(\eta_{L}-\underline{\eta}_{L})+\overline{S}_{L}(\overline{\eta}_{L}-\eta_{L})\leq\underline{S}_{0}(\eta_{0}-\underline{\eta}_{0})+\overline{S}_{0}(\overline{\eta}_{0}-\eta_{0}).$
Owing to the fact of $\eta_{L}=\Phi(\eta_{0})$,
$\underline{\eta}_{L}=\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})$,
$\overline{\eta}_{L}=\overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})$
and $\underline{S}_{L}=\overline{S}_{L}=I_{n_{L}\times n_{L}}$, the following
inequality can be established
$\overline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})-\underline{\Phi}(\underline{\eta}_{0},\overline{\eta}_{0})\leq\underline{S}_{0}(\eta_{0}-\underline{\eta}_{0})+\overline{S}_{0}(\overline{\eta}_{0}-\eta_{0})$
(31)
for any $\underline{\eta}_{0}\leq\eta_{0}\leq\overline{\eta}_{0}$. The proof
is complete. $\hfill\hfill\square$
###### Remark 5
Lemma 2 ensures the existence of $\underline{S}_{\ell}$, $\overline{S}_{\ell}$, $\ell=0,\ldots,L$, such that the input-output relationship (24) holds for the auxiliary neural networks $\underline{\Phi}$ and $\overline{\Phi}$. It also provides a method to compute $\underline{S}_{\ell}$, $\overline{S}_{\ell}$, $\ell=0,\ldots,L$, namely solving the linear inequalities (23) initialized with $\underline{S}_{L}=\overline{S}_{L}=I_{n_{L}\times{n_{L}}}$. In practice, an optimal solution for $\underline{S}_{0}$, $\overline{S}_{0}$, e.g., one minimizing $\mathrm{trace}(\mathrm{diag}\{\underline{S}_{0},\overline{S}_{0}\})$, is of interest. For a given objective function, optimization problems such as linear programming (LP) problems can be formulated to compute $\underline{S}_{0}$, $\overline{S}_{0}$. For instance, the following LP problem can be formulated
$\displaystyle\min~{}\mathrm{trace(diag}\\{\underline{S}_{0},\overline{S}_{0}\\})$
$\displaystyle\mathrm{s.t.~{}}$
$\displaystyle\alpha\begin{bmatrix}\underline{S}_{\ell}\overline{W}_{\ell}-\overline{S}_{\ell}\underline{W}_{\ell}\\\
\overline{S}_{\ell}\overline{W}_{\ell}-\underline{S}_{\ell}\underline{W}_{\ell}\end{bmatrix}\leq\begin{bmatrix}\underline{S}_{\ell-1}\\\
\overline{S}_{\ell-1}\end{bmatrix},~{}\ell=1,\ldots,L$
$\displaystyle\underline{S}_{L}=\overline{S}_{L}=I_{n_{L}\times{n_{L}}}.$ (32)
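Alternatively to solving the LP (32), a feasible (though not necessarily optimal) pair $\underline{S}_{0}$, $\overline{S}_{0}$ can be computed directly from the recursion (25)-(26) used in the proof of Lemma 2; a minimal sketch (ours, NumPy assumed):

```python
import numpy as np

def s_matrices(weights, alpha=1.0, eps=1e-6):
    """Backward recursion (25)-(26): starting from S_L = I, produce
    S_under_0, S_over_0 satisfying (23); alpha is the activation
    Lipschitz constant and eps > 0 the slack added at each layer."""
    nL = weights[-1].shape[0]
    S_lo, S_hi = np.eye(nL), np.eye(nL)
    for W in reversed(weights):
        Wn, Wp = np.minimum(W, 0.0), np.maximum(W, 0.0)       # (12)-(13)
        pad = eps * np.ones((nL, W.shape[1]))                 # eps * 1 1^T
        S_lo, S_hi = (alpha * (S_lo @ Wp - S_hi @ Wn) + pad,  # (25)
                      alpha * (S_hi @ Wp - S_lo @ Wn) + pad)  # (26)
    return S_lo, S_hi  # S_under_0, S_over_0 of Lemma 2
```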
Based on Lemma 2, we are ready to derive the following result ensuring the boundedness and convergence of the run-time error states, namely the practical stability of error system (9). Before presenting the result, we assume that the input vector of the neural network component $\Phi(\eta_{0})$ is $\eta_{0}=[x^{\top},u^{\top}]^{\top}\in\mathbb{R}^{n_{x}+n_{u}}$.
###### Theorem 3
Consider error system (9), if there exist a diagonal matrix
$X\in\mathbb{R}^{n_{x}\times n_{x}}$, matrices
$\underline{Y},\overline{Y}\in\mathbb{R}^{n_{x}\times n_{y}}$ and a scalar
$a\in\mathbb{R}$ such that
$\displaystyle X\mathbf{1}_{n_{x}\times 1}$ $\displaystyle>0$ (33)
$\displaystyle XA-\underline{Y}C$ $\displaystyle>aI_{n_{x}\times n_{x}}$ (34)
$\displaystyle XA-\overline{Y}C$ $\displaystyle>aI_{n_{x}\times n_{x}}$ (35)
$\displaystyle\begin{bmatrix}A^{\top}X-C^{\top}\underline{Y}^{\top}+\underline{\gamma}X+\underline{U}^{\top}X\\\
A^{\top}X-C^{\top}\overline{Y}^{\top}+\overline{\gamma}X+\overline{U}^{\top}X\end{bmatrix}\mathbf{1}_{n_{x}\times
1}$ $\displaystyle<0$ (36)
where $\underline{\gamma}=\underline{\gamma}_{1}+\underline{\gamma}_{2}$,
$\overline{\gamma}=\overline{\gamma}_{1}+\overline{\gamma}_{2}$ and
$\underline{U}$, $\overline{U}$ are defined by
$\underline{S}_{0}=[\underline{U},~{}\underline{V}]$ and
$\overline{S}_{0}=[\overline{U},~{}\overline{V}]$ where
$\underline{U},\overline{U}\in\mathbb{R}^{n_{x}\times n_{x}}$,
$\underline{V},\overline{V}\in\mathbb{R}^{n_{x}\times n_{u}}$, and
$\underline{S}_{0},\overline{S}_{0}\in\mathbb{R}^{n_{x}\times(n_{x}+n_{u})}$
are the solution of the following conditions:
$\displaystyle\alpha\begin{bmatrix}\underline{S}_{\ell}\overline{W}_{\ell}-\overline{S}_{\ell}\underline{W}_{\ell}\\\
\overline{S}_{\ell}\overline{W}_{\ell}-\underline{S}_{\ell}\underline{W}_{\ell}\end{bmatrix}$
$\displaystyle\leq\begin{bmatrix}\underline{S}_{\ell-1}\\\
\overline{S}_{\ell-1}\end{bmatrix},~{}\ell=1,\ldots,L$ (37)
with $\underline{S}_{L}=\overline{S}_{L}=I_{n_{x}\times n_{x}}$, then the
error system (9) is globally practically uniformly exponentially stable with
observer gains $\underline{L}=X^{-1}\underline{Y}$ and
$\overline{L}=X^{-1}\overline{Y}$.
Proof. First, we construct the co-positive Lyapunov function candidate
$V(\underline{e},\overline{e})=\underline{V}(\underline{e})+\overline{V}(\overline{e})$,
where $\underline{V}(\underline{e})=\underline{e}^{\top}v$,
$\overline{V}(\overline{e})=\overline{e}^{\top}v$ with
$v=X\mathbf{1}_{n_{x}\times 1}\in\mathbb{R}_{+}^{n_{x}}$.
Considering $\underline{V}(\underline{e})$, one can obtain
$\displaystyle\dot{\underline{V}}(\underline{e})=\underline{e}^{\top}(A^{\top}-C^{\top}\underline{L}^{\top})v+\underline{\mathcal{F}}^{\top}v+\underline{\mathcal{G}}^{\top}v$
(38)
where
$\underline{\mathcal{F}}=f(x)-\underline{f}(\underline{x},\overline{x})$,
$\underline{\mathcal{G}}=\Phi(x,u)-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})$.
Under Assumption 3, this implies that
$\displaystyle\underline{\mathcal{F}}^{\top}v\leq\underline{\gamma}_{1}\underline{e}^{\top}v+\underline{\gamma}_{2}\overline{e}^{\top}v+\underline{\rho}^{\top}v.$
(39)
Thus, we have
$\displaystyle\dot{\underline{V}}(\underline{e})\leq\underline{e}^{\top}\underline{\Theta}v+\underline{\rho}^{\top}v+\underline{\mathcal{G}}^{\top}v$
(40)
where
$\underline{\Theta}=A^{\top}-C^{\top}\underline{L}^{\top}+(\underline{\gamma}_{1}+\underline{\gamma}_{2})I_{n_{x}\times
n_{x}}$.
Similarly, the following inequality can be obtained for
${\overline{V}(\overline{e})}$:
$\displaystyle\dot{\overline{V}}(\overline{e})\leq\overline{e}^{\top}\overline{\Theta}v+\overline{\rho}^{\top}v+\overline{\mathcal{G}}^{\top}v$
(41)
where
$\overline{\Theta}=A^{\top}-C^{\top}\overline{L}^{\top}+(\overline{\gamma}_{1}+\overline{\gamma}_{2})I_{n_{x}\times
n_{x}}$ and
$\overline{\mathcal{G}}=\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})-\Phi(x,u)$.
From (40) and (41), we have
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq\begin{bmatrix}\underline{e}^{\top}&\overline{e}^{\top}\end{bmatrix}\begin{bmatrix}\underline{\Theta}\\\
\overline{\Theta}\end{bmatrix}v+(\underline{\rho}^{\top}+\overline{\rho}^{\top})v+\mathcal{G}^{\top}v$
(42)
where $\mathcal{G}=\underline{\mathcal{G}}+\overline{\mathcal{G}}$.
Due to
$\mathcal{G}=\underline{\mathcal{G}}+\overline{\mathcal{G}}=\overline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})-\underline{\Phi}(\underline{x},\overline{x},\underline{u},\overline{u})$
and using (37) based on Lemma 2, we obtain
$\displaystyle\mathcal{G}\leq$
$\displaystyle\begin{bmatrix}\underline{U}&\underline{V}\end{bmatrix}\left(\begin{bmatrix}x\\\
u\end{bmatrix}-\begin{bmatrix}\underline{x}\\\
\underline{u}\end{bmatrix}\right)+\begin{bmatrix}\overline{U}&\overline{V}\end{bmatrix}\left(\begin{bmatrix}\overline{x}\\\
\overline{u}\end{bmatrix}-\begin{bmatrix}{x}\\\ {u}\end{bmatrix}\right)$
$\displaystyle=$
$\displaystyle\underline{U}\underline{e}+\underline{V}(u-\underline{u})+\overline{U}\overline{e}+\overline{V}(\overline{u}-u)$
$\displaystyle\leq$
$\displaystyle\begin{bmatrix}\underline{U}&\overline{U}\end{bmatrix}\begin{bmatrix}\underline{e}\\\
\overline{e}\end{bmatrix}+(\underline{V}+\overline{V})(\overline{u}-\underline{u}).$
(43)
Therefore, one has
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq\begin{bmatrix}\underline{e}^{\top}&\overline{e}^{\top}\end{bmatrix}\begin{bmatrix}\underline{\Theta}+\underline{U}^{\top}\\\
\overline{\Theta}+\overline{U}^{\top}\end{bmatrix}v+\theta$ (44)
where
$\theta=(\overline{u}^{\top}-\underline{u}^{\top})(\underline{V}^{\top}+\overline{V}^{\top})v+(\underline{\rho}^{\top}+\overline{\rho}^{\top})v$.
Due to (36) and $v=X\mathbf{1}_{n_{x}\times 1}$, we obtain
$\displaystyle\begin{bmatrix}\underline{\Theta}+\underline{U}^{\top}\\\
\overline{\Theta}+\overline{U}^{\top}\end{bmatrix}v<0$ (45)
which implies that there always exists a sufficiently small $\lambda>0$ such
that
$\displaystyle\begin{bmatrix}\underline{\Theta}+\underline{U}^{\top}\\\
\overline{\Theta}+\overline{U}^{\top}\end{bmatrix}v<-\lambda\begin{bmatrix}I_{n_{x}\times
n_{x}}\\\ I_{n_{x}\times n_{x}}\end{bmatrix}v$ (46)
and that implies
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq-\lambda\underline{e}^{\top}v-\lambda\overline{e}^{\top}v+\theta=-\lambda
V(\underline{e},\overline{e})+\theta.$ (47)
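By the comparison lemma, (47) yields the standard intermediate estimate (stated here for completeness)
$\displaystyle V(\underline{e}(t),\overline{e}(t))\leq e^{-\lambda(t-t_{0})}V(\underline{e}(t_{0}),\overline{e}(t_{0}))+\frac{\theta}{\lambda}\left(1-e^{-\lambda(t-t_{0})}\right)\leq e^{-\lambda(t-t_{0})}V(\underline{e}(t_{0}),\overline{e}(t_{0}))+\frac{\theta}{\lambda}.$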
Defining $\underline{v}$ and $\overline{v}$ as the minimal and maximal elements
of $v$, (47) implies that
$\displaystyle\underline{v}\left\|\xi\right\|\leq
e^{-\lambda(t-t_{0})}\overline{v}\left\|\xi_{0}\right\|+\frac{\theta}{\lambda}\Rightarrow\left\|\xi\right\|\leq
Ce^{-\lambda(t-t_{0})}\left\|\xi_{0}\right\|+r$
where $\xi=[\underline{e}^{\top},\overline{e}^{\top}]^{\top}$,
$C={\overline{v}}/{\underline{v}}$ and $r={\theta}/{(\lambda\underline{v})}$.
Therefore, the error system is globally practically uniformly exponentially
stable, and the error state converges exponentially, at a decay rate of
$\lambda$, to the ball
$\mathcal{B}_{r}=\\{x\in\mathbb{R}^{n_{x}}\mid\left\|x\right\|\leq r\\}$,
$r\geq 0$. The proof is complete.
$\hfill\hfill\square$
The design process of the observer gains $\underline{L}$ and $\overline{L}$
allows one to use the coordinate transformation techniques of several works
such as [21, 33, 25] to relax the requirement that both the Metzler and Hurwitz
conditions be satisfied. Based on Theorem 3, a design algorithm is proposed in
Algorithm 1. The outputs of the algorithm, the auxiliary neural networks
$\underline{\Phi}$, $\overline{\Phi}$ and observer gains $\underline{L}$,
$\overline{L}$, ensure the run-time boundedness as well as the convergence of
the error state in terms of practical stability.
1 Compute $\underline{W}_{\ell}$, $\overline{W}_{\ell}$, $\ell=1,\ldots,L$ by
(12), (13) and obtain neural networks $\underline{\Phi}$, $\overline{\Phi}$;
2 Solve LP problem (32) to obtain $\underline{S}_{0}$ and $\overline{S}_{0}$ ;
3 Compute $\underline{U}$, $\overline{U}$ by
$\underline{S}_{0}=[\underline{U},~{}\underline{V}]$,
$\overline{S}_{0}=[\overline{U},~{}\overline{V}]$ where
$\underline{U},\overline{U}\in\mathbb{R}^{n_{x}\times n_{x}}$,
$\underline{V},\overline{V}\in\mathbb{R}^{n_{x}\times n_{u}}$ ;
4 Solve LP problem (33)-(36) to obtain $X$, $\underline{Y}$, $\overline{Y}$ ;
5 Compute observer gains $\underline{L}$, $\overline{L}$.
Algorithm 1 Run-Time Safety State Estimator Design
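As a companion to Steps 4 and 5 of Algorithm 1, the following is a hedged cvxpy sketch of the feasibility LP (33)–(36); treating the strict inequalities elementwise with a small slack `eps` is our assumption, not the authors' stated implementation.

```python
# A hedged cvxpy sketch of the gain LP (33)-(36); eps approximates the strict
# inequalities, which we treat elementwise (an assumption).
import cvxpy as cp
import numpy as np

def design_gains(A, C, g_lo, g_hi, U_lo, U_hi, eps=1e-6):
    nx, ny = A.shape[0], C.shape[0]
    x = cp.Variable(nx)            # diagonal entries of X
    X = cp.diag(x)
    Y_lo, Y_hi = cp.Variable((nx, ny)), cp.Variable((nx, ny))
    a = cp.Variable()
    one = np.ones((nx, 1))
    cons = [
        X @ one >= eps,                                                  # (33)
        X @ A - Y_lo @ C >= a * np.eye(nx) + eps,                        # (34)
        X @ A - Y_hi @ C >= a * np.eye(nx) + eps,                        # (35)
        (A.T @ X - C.T @ Y_lo.T + g_lo * X + U_lo.T @ X) @ one <= -eps,  # (36)
        (A.T @ X - C.T @ Y_hi.T + g_hi * X + U_hi.T @ X) @ one <= -eps,  # (36)
    ]
    cp.Problem(cp.Minimize(0), cons).solve()
    Xinv = np.linalg.inv(np.diag(x.value))
    return Xinv @ Y_lo.value, Xinv @ Y_hi.value  # L = X^{-1} Y as in Theorem 3
```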
A numerical example is presented to illustrate the design process of Algorithm
1.
###### Example 1
Consider a neural-network-enabled system in the form of
$\dot{x}=Ax+\Phi(x,u)$, $y=Cx$, where system matrices are
$\displaystyle A=\begin{bmatrix}-2&1\\\
3&-5\end{bmatrix},~{}C=\begin{bmatrix}0&1\end{bmatrix}$
and neural network $\Phi$ is determined by
$\displaystyle W_{1}=\begin{bmatrix}0.6266&0.8433&0.3241\\\
-0.2485&-1.5838&-0.5620\\\ 0.5243&-1.4939&1.1992\\\ -0.4300&-1.4659&0.1102\\\
0.2629&0.6789&-1.2695\end{bmatrix},~{}b_{1}=\begin{bmatrix}-1.0191\\\
-1.3852\\\ 0.9549\\\ -0.6011\\\ -1.1719\end{bmatrix}$ $\displaystyle
W_{2}^{\top}=\begin{bmatrix}-0.4617&-0.6691\\\ 0.6824&0.3819\\\
0.2419&0.3326\\\ 0.0344&-0.7591\\\
0.4333&-0.6569\end{bmatrix},~{}b_{2}=\begin{bmatrix}-1.0719\\\
-1.0741\end{bmatrix}$
and activation functions are tanh and purelin.
_Step 1. Design Auxiliary Neural Networks:_ By (12) and (13), matrices
$\underline{W}_{\ell}$, $\overline{W}_{\ell}$, $\ell=1,2$ are as follows:
$\displaystyle\underline{W}_{1}=\begin{bmatrix}0&0&0\\\
-0.2485&-1.5838&-0.5620\\\ 0&-1.4939&0\\\ -0.4300&-1.4659&0\\\
0&0&-1.2695\end{bmatrix}$
$\displaystyle\overline{W}_{1}=\begin{bmatrix}0.6266&0.8433&0.3241\\\ 0&0&0\\\
0.5243&0&1.1992\\\ 0&0&0.1102\\\ 0.2629&0.6789&0\end{bmatrix}$
$\displaystyle\underline{W}_{2}=\begin{bmatrix}-0.4617&0&0&0&0\\\
-0.6691&0&0&-0.7591&-0.6569\end{bmatrix}$
$\displaystyle\overline{W}_{2}=\begin{bmatrix}0&0.6824&0.2419&0.0344&0.4333\\\
0&0.3819&0.3326&0&0\end{bmatrix}.$
_Step 2: Design Observer Gains:_ By (33)–(37), the observer gains are computed
as
$\displaystyle\underline{L}=\begin{bmatrix}0\\\
12.0394\end{bmatrix},~{}\overline{L}=\begin{bmatrix}1\\\ 8.0044\end{bmatrix}.$
Assuming input $u=10\sin(5t)$, we have $\underline{u}=-10$ and
$\overline{u}=10$. The initial state is assumed to be bounded in $[-1,1]$. The
run-time safety monitoring of system states $x_{1}(t)$ and $x_{2}(t)$ is
illustrated in Figure 2. The run-time state trajectories $x_{1}(t)$,
$x_{2}(t)$ are bounded by the run-time estimated states $\underline{x}_{i}(t)$,
$\overline{x}_{i}(t)$, $i=1,2$; in other words, the safety of the system state
$x(t)$ can be monitored by $\underline{x}_{i}(t)$, $\overline{x}_{i}(t)$,
$i=1,2$ in run time.
Figure 2: State response of $x(t)$ (solid lines) and run-time safety
monitoring of $\underline{x}(t)$ and $\overline{x}(t)$ (dashed lines). The
state response $x(t)$ is bounded between the states $\underline{x}(t)$,
$\overline{x}(t)$ of the state estimator in run time.
In one of the most common classes of neural-network-enabled dynamical systems,
the neural network is driven by the measurement of the system; that is, the
input of the neural network is the measurement $y(t)$ instead of the system
state $x(t)$. For instance, neural network control systems use the measurement
$y(t)$ to compute the control input instead of the system state $x(t)$, since
the system state $x(t)$ may not be measurable. This class of systems with
neural network $\Phi(y,u)$ is described by
$\displaystyle\mathfrak{L}_{y}:\begin{cases}\dot{x}=Ax+f(x)+\Phi(y,u)\\\
y=Cx\end{cases}$ (48)
where the neural network $\Phi(y,u)$ is driven by the measured output. Since
the output $y(t)$ is measurable in run time, it can be employed in the safety
monitoring.
The run-time safety state estimator is developed in the form of
$\displaystyle\mathfrak{E}_{y}:\begin{cases}\dot{\underline{x}}=A\underline{x}+\underline{f}(\underline{x},\overline{x})+\underline{\Phi}(y,\underline{u},\overline{u})+\underline{L}(y-C\underline{x})\\\
\dot{\overline{x}}=A\overline{x}+\overline{f}(\underline{x},\overline{x})+\overline{\Phi}(y,\underline{u},\overline{u})+\overline{L}(y-C\overline{x})\end{cases}$
(49)
where $\underline{\Phi}$ and $\overline{\Phi}$ are defined by (14) and (15)
with $\underline{y}=\overline{y}=y$, respectively. Consequently, the error
dynamics is in the form of
$\displaystyle\begin{cases}\dot{\underline{e}}=(A-\underline{L}C)\underline{e}+f(x)-\underline{f}(\underline{x},\overline{x})+\Phi(y,u)-\underline{\Phi}(y,\underline{u},\overline{u})\\\
\dot{\overline{e}}=(A-\overline{L}C)\overline{e}+\overline{f}(\underline{x},\overline{x})-f(x)+\overline{\Phi}(y,\underline{u},\overline{u})-\Phi(y,u)\end{cases}$
(50)
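For readers implementing the estimator, a minimal numpy sketch of the right-hand side of (49) follows; the callables `f_lo`, `f_hi`, `Phi_lo`, `Phi_hi` stand in for $\underline{f}$, $\overline{f}$, $\underline{\Phi}$, $\overline{\Phi}$ and their interfaces are our assumptions, not the authors' code.

```python
# A minimal sketch of the right-hand side of estimator (49); the function
# interfaces are assumptions for illustration.
import numpy as np

def estimator_rhs(x_lo, x_hi, y, u_lo, u_hi, A, C, L_lo, L_hi,
                  f_lo, f_hi, Phi_lo, Phi_hi):
    dx_lo = A @ x_lo + f_lo(x_lo, x_hi) + Phi_lo(y, u_lo, u_hi) + L_lo @ (y - C @ x_lo)
    dx_hi = A @ x_hi + f_hi(x_lo, x_hi) + Phi_hi(y, u_lo, u_hi) + L_hi @ (y - C @ x_hi)
    return dx_lo, dx_hi
```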
The following result presents the observer gain design process for estimator (49).
###### Corollary 1
Consider the error system (50). If there exist a diagonal matrix
$X\in\mathbb{R}^{n_{x}\times n_{x}}$, matrices
$\underline{Y},\overline{Y}\in\mathbb{R}^{n_{x}\times n_{y}}$ and a scalar
$a\in\mathbb{R}$ such that
$\displaystyle X\mathbf{1}_{n_{x}\times 1}$ $\displaystyle>0$ (51)
$\displaystyle XA-\underline{Y}C$ $\displaystyle>aI_{n_{x}\times n_{x}}$ (52)
$\displaystyle XA-\overline{Y}C$ $\displaystyle>aI_{n_{x}\times n_{x}}$ (53)
$\displaystyle\begin{bmatrix}A^{\top}X-C^{\top}\underline{Y}^{\top}+\underline{\gamma}X\\\
A^{\top}X-C^{\top}\overline{Y}^{\top}+\overline{\gamma}X\end{bmatrix}\mathbf{1}_{n_{x}\times
1}$ $\displaystyle<0$ (54)
where $\underline{\gamma}=\underline{\gamma}_{1}+\underline{\gamma}_{2}$,
$\overline{\gamma}=\overline{\gamma}_{1}+\overline{\gamma}_{2}$, then the
error system (50) is globally practically uniformly exponentially stable with
observer gains $\underline{L}=X^{-1}\underline{Y}$ and
$\overline{L}=X^{-1}\overline{Y}$.
Proof. Construct a co-positive Lyapunov function candidate
$V(\underline{e},\overline{e})=\underline{e}^{\top}v+\overline{e}^{\top}v$
where $v=X\mathbf{1}_{n_{x}\times 1}\in\mathbb{R}^{n_{x}}_{+}$. Following the
same reasoning as in the proof of Theorem 3, the following inequality can be obtained
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq\begin{bmatrix}\underline{e}^{\top}&\overline{e}^{\top}\end{bmatrix}\begin{bmatrix}\underline{\Theta}\\\
\overline{\Theta}\end{bmatrix}v+(\underline{\rho}^{\top}+\overline{\rho}^{\top})v+\mathcal{G}^{\top}v$
(55)
where
$\overline{\Theta}=A^{\top}-C^{\top}\overline{L}^{\top}+(\overline{\gamma}_{1}+\overline{\gamma}_{2})I_{n_{x}\times
n_{x}}$,
$\underline{\Theta}=A^{\top}-C^{\top}\underline{L}^{\top}+(\underline{\gamma}_{1}+\underline{\gamma}_{2})I_{n_{x}\times
n_{x}}$, and
$\mathcal{G}=\overline{\Phi}(y,\underline{u},\overline{u})-\underline{\Phi}(y,\underline{u},\overline{u})$.
Based on Lemma 2 and using the fact that $\underline{y}=\overline{y}=y$ in
$\underline{\Phi}$ and $\overline{\Phi}$, we obtain
$\displaystyle\mathcal{G}\leq$
$\displaystyle\begin{bmatrix}\underline{U}&\underline{V}\end{bmatrix}\left(\begin{bmatrix}y\\\
u\end{bmatrix}-\begin{bmatrix}y\\\
\underline{u}\end{bmatrix}\right)+\begin{bmatrix}\overline{U}&\overline{V}\end{bmatrix}\left(\begin{bmatrix}y\\\
\overline{u}\end{bmatrix}-\begin{bmatrix}y\\\ {u}\end{bmatrix}\right)$
$\displaystyle\leq$
$\displaystyle\begin{bmatrix}\underline{U}&\overline{U}\end{bmatrix}\begin{bmatrix}0\\\
0\end{bmatrix}+(\underline{V}+\overline{V})(\overline{u}-\underline{u})$
$\displaystyle=$
$\displaystyle(\underline{V}+\overline{V})(\overline{u}-\underline{u})$ (56)
which is independent of $\underline{U}$ and $\overline{U}$. Then, we have
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq\begin{bmatrix}\underline{e}^{\top}&\overline{e}^{\top}\end{bmatrix}\begin{bmatrix}\underline{\Theta}\\\
\overline{\Theta}\end{bmatrix}v+\theta$ (57)
where
$\theta=(\overline{u}^{\top}-\underline{u}^{\top})(\underline{V}^{\top}+\overline{V}^{\top})v+(\underline{\rho}^{\top}+\overline{\rho}^{\top})v$.
Due to (54) and following the same arguments as in Theorem 3, the following
inequality can be derived
$\displaystyle\dot{V}(\underline{e},\overline{e})\leq-\lambda\underline{e}^{\top}v-\lambda\overline{e}^{\top}v+\theta=-\lambda
V(\underline{e},\overline{e})+\theta$ (58)
which ensures the practical stability of the error system. The proof is complete.
$\hfill\hfill\square$
###### Remark 6
As Corollary 1 indicates, the observer gain design process is independent of
the neural network. The observer gains $\underline{L}$ and $\overline{L}$ are
obtained by solving an LP problem in terms of (51)–(54), which depends only on
the system dynamics and does not involve the neural network components. This
is because the measurable output $y$ ensures that the portion of the output of
the neural network $\Phi$ driven by $y$ is completely compensated by the
outputs of the auxiliary neural networks $\underline{\Phi}$, $\overline{\Phi}$,
which are driven by the same measured values of $y$. This promising feature of
independence from the neural network enables the developed method to deal with
dynamical systems containing large-scale neural network components, such as
deep neural network controllers, regardless of the size of the neural
networks.
## IV Application to Adaptive Cruise Control Systems
Figure 3: Illustration of adaptive cruise control systems and simulink block
diagram of the closed-loop system.
In this section, the developed run-time safety monitoring approach will be
evaluated by an adaptive cruise control (ACC) system which is under control of
a neural network controller as depicted in Figure 3. Two cars are involved in
the ACC system: an ego car with the ACC module and a lead car. The ego car is
equipped with a radar sensor to measure the distance to the lead car in run
time; the run-time measured distance is denoted by $d_{\mathrm{rel}}$.
Moreover, the relative velocity with respect to the lead car is also measured
in run time and is denoted by $v_{\mathrm{rel}}$. There are two operating
modes, speed control and spacing control, which operate in run time. In speed
control mode, the ego car travels at a speed of
$v_{\mathrm{set}}$ set by the driver. In spacing control mode, the ego car has
to maintain a safe distance from the lead car denoted by $d_{\mathrm{safe}}$.
The system dynamics of the ACC system is expressed in the following form:
$\displaystyle\left\\{{\begin{array}[]{*{20}l}\dot{x}_{l}(t)=v_{l}(t)\\\
\dot{v}_{l}(t)=\gamma_{l}(t)\\\
\dot{\gamma}_{l}(t)=-2\gamma_{l}(t)+2\alpha_{l}(t)-\mu v^{2}_{l}(t)\\\
\dot{x}_{e}=v_{e}(t)\\\ \dot{v}_{e}(t)=\gamma_{e}(t)\\\
\dot{\gamma}_{e}(t)=-2\gamma_{e}(t)+2\alpha_{e}(t)-\mu
v^{2}_{e}(t)\end{array}}\right.$ (65)
where $x_{l}$, $v_{l}$ and $\gamma_{l}$ are the position, velocity and actual
acceleration of the lead car, and $x_{e}$, $v_{e}$ and $\gamma_{e}$ are the
position, velocity and actual acceleration of the ego car, respectively.
$\alpha_{l}$ and $\alpha_{e}$ are the acceleration control inputs applied to
the lead and ego cars, respectively. $\mu=0.0001$ is the friction parameter. A $2\times 20$
feedforward neural network controller with tanh and purelin is trained for the
ACC system. Specifically, the measurements of the ACC system, which also serve
as the inputs to the neural network ACC control module, are listed in Table I.
TABLE I: Measured Outputs of ACC System and Inputs to Neural Network Controller

Driver-set velocity | $v_{\mathrm{set}}$
---|---
Time gap | $t_{\mathrm{gap}}$
Velocity of the ego car | $v_{e}$
Relative distance to the lead car | $d_{\mathrm{rel}}=x_{l}-x_{e}$
Relative velocity to the lead car | $v_{\mathrm{rel}}=v_{l}-v_{e}$
The output for the neural network ACC controller is the acceleration of the
ego car, namely $\alpha_{e}$. In summary, the neural network controller for
the acceleration control of the ego car is in the form of
$\displaystyle\alpha_{e}(t)=\Phi(d_{\mathrm{rel}}(t),v_{\mathrm{rel}}(t),v_{e}(t),v_{\mathrm{set}}(t),t_{\mathrm{gap}}).$
(66)
Letting $x=[x_{l},v_{l},\gamma_{l},x_{e},v_{e},\gamma_{e}]^{\top}$,
$u=[\alpha_{l},v_{\mathrm{set}},t_{\mathrm{gap}}]^{\top}$ and
$y=[v_{e},d_{\mathrm{rel}},v_{\mathrm{rel}}]^{\top}$, the ACC system can be
rewritten as the following neural-network-enabled system
$\displaystyle\begin{cases}\dot{x}=Ax+f(y)+\tilde{\Phi}(y,u)\\\
y=Cx\end{cases}$ (67)
where the system matrices $A$, $C$, nonlinearity $f(y)$ and neural network
component $\tilde{\Phi}(y,u)$ are defined as below:
$\displaystyle A$ $\displaystyle=\begin{bmatrix}0&1&0&0&0&0\\\ 0&0&1&0&0&0\\\
0&0&-2&0&0&0\\\ 0&0&0&0&1&0\\\ 0&0&0&0&0&1\\\ 0&0&0&0&0&-2\end{bmatrix}$
$\displaystyle C$ $\displaystyle=\begin{bmatrix}0&0&0&0&1&0\\\ 1&0&0&-1&0&0\\\
0&1&0&0&-1&0\end{bmatrix}$ $\displaystyle f(y)$
$\displaystyle=\begin{bmatrix}0&0&-0.0001(y_{1}+y_{3})^{2}&0&0&-0.0001y_{1}^{2}\end{bmatrix}^{\top}$
$\displaystyle\tilde{\Phi}(y,u)$
$\displaystyle=\begin{bmatrix}0&0&2u_{1}&0&0&\Phi(y,u_{2},u_{3})\end{bmatrix}^{\top}$
where $\Phi(y,u_{2},u_{3})$ is the neural network controller (66). In
addition, considering the physical limitations of the vehicle dynamics, the
acceleration is constrained to the range $[-3,2]$ ($m/s^{2}$), thus input
$u_{1}\in[-3,2]$.
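For reference, the matrices of (67) can be transcribed directly in numpy (a plain transcription of the definitions above; the ordering $y=[v_{e},d_{\mathrm{rel}},v_{\mathrm{rel}}]^{\top}$ is taken from the text):

```python
import numpy as np

A = np.array([[0, 1,  0, 0, 0,  0],
              [0, 0,  1, 0, 0,  0],
              [0, 0, -2, 0, 0,  0],
              [0, 0,  0, 0, 1,  0],
              [0, 0,  0, 0, 0,  1],
              [0, 0,  0, 0, 0, -2]])
C = np.array([[0, 0, 0,  0,  1, 0],
              [1, 0, 0, -1,  0, 0],
              [0, 1, 0,  0, -1, 0]])

def f(y, mu=0.0001):
    # y = [v_e, d_rel, v_rel], so v_l = y[0] + y[2]; the drag terms enter
    # rows 3 and 6, matching the lead- and ego-car acceleration dynamics in (65)
    return np.array([0.0, 0.0, -mu * (y[0] + y[2]) ** 2,
                     0.0, 0.0, -mu * y[0] ** 2])
```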
Since the nonlinearity $f(y)$ can be evaluated from the measurement $y$, we can
let
$f(y)=\underline{f}(\underline{x},\overline{x})=\overline{f}(\underline{x},\overline{x})$
in the state estimator (49), which is thus constructed as follows
$\displaystyle\begin{cases}\dot{\underline{x}}=A\underline{x}+f(y)+\underline{\Phi}(y,\underline{u},\overline{u})+\underline{L}(y-C\underline{x})\\\
\dot{\overline{x}}=A\overline{x}+f(y)+\overline{\Phi}(y,\underline{u},\overline{u})+\overline{L}(y-C\overline{x})\end{cases}.$
(68)
The auxiliary neural networks $\underline{\Phi}$ and $\overline{\Phi}$ are
designed based on $\Phi$ according to (14) and (15). The observer gains
$\underline{L}$ and $\overline{L}$ can be computed by Corollary 1 via solving
a collection of LP problems.
Figure 4: Run-time safety monitoring of positions $x_{l}(t)$ and $x_{e}(t)$.
The state trajectories $x(t)$ (solid lines) are bounded within the estimated
bounds $\underline{x}(t)$, $\overline{x}(t)$ (dashed lines). Magnified time
windows are used for clarity. Figure 5: Run-time safety monitoring of
velocities $v_{l}(t)$ and $v_{e}(t)$. The state trajectories $v(t)$ (solid
lines) are bounded within the estimated bounds $\underline{v}(t)$,
$\overline{v}(t)$ (dashed lines). Magnified time windows are used for clarity.
Figure 6: Run-time safety monitoring of accelerations $\gamma_{l}(t)$ and
$\gamma_{e}(t)$. The state trajectories $\gamma(t)$ (solid lines) are bounded
within the estimated bounds $\underline{\gamma}(t)$, $\overline{\gamma}(t)$
(dashed lines). Magnified time windows are used for clarity.
The run-time boundary estimations of the state trajectories of positions
$\\{x_{l}(t),x_{e}(t)\\}$, velocities $\\{v_{l}(t),v_{e}(t)\\}$ and
accelerations $\\{\gamma_{l}(t),\gamma_{e}(t)\\}$ as the ACC system evolves
over the time interval $[0,100]$ are shown in Figures 4, 5 and 6. As shown in
these simulation results, the state trajectories are always bounded within the
lower and upper bounds of the observers, which can therefore be used for
run-time safety monitoring of the system state during operation.
## V Conclusions
The run-time safety monitoring problem of neural-network-enabled dynamical
systems is addressed in this paper. Online lower and upper bounds of the state
trajectories are provided by a run-time safety estimator in the form of an
interval Luenberger observer. The design process includes two essential parts,
namely two auxiliary neural networks and the observer gains. In summary, the
two auxiliary neural networks are derived from the neural network component
embedded in the original dynamical system, and the observer gains are computed
from a family of LP problems. For neural networks driven by measurements of
the system, the design process is independent of the neural network, so there
is no scalability concern regarding the size of the neural networks. An
application to an ACC system is presented to validate the developed method.
Further applications to complex dynamical systems, such as systems with
switching behaviors [34, 35, 36, 37, 38], should be considered in the future.
Beyond the run-time safety monitoring approach developed in this paper, future
work should address the run-time correction of neural networks once unsafe
behavior is detected. Moreover, run-time safety-oriented training of neural
networks, such as online neural network controller training with safety
guarantees, should also be considered based on the techniques developed in
this paper.
## References
* [1] Z.-G. Wu, P. Shi, H. Su, and J. Chu, “Exponential stabilization for sampled-data neural-network-based control systems,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 25, no. 12, pp. 2180–2190, 2014.
* [2] D. Li, C. P. Chen, Y.-J. Liu, and S. Tong, “Neural network controller design for a class of nonlinear delayed systems with time-varying full-state constraints,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 30, no. 9, pp. 2625–2636, 2019.
* [3] L. Zhang, Y. Zhu, and W. X. Zheng, “State estimation of discrete-time switched neural networks with multiple communication channels,” _IEEE Transactions on Cybernetics_ , vol. 47, no. 4, pp. 1028–1040, 2016.
* [4] B. Niu, Y. Liu, G. Zong, Z. Han, and J. Fu, “Command filter-based adaptive neural tracking controller design for uncertain switched nonlinear output-constrained systems,” _IEEE Transactions on Cybernetics_ , vol. 47, no. 10, pp. 3160–3171, 2017.
* [5] B. Niu, Y. Liu, W. Zhou, H. Li, P. Duan, and J. Li, “Multiple Lyapunov functions for adaptive neural tracking control of switched nonlinear nonlower-triangular systems,” _IEEE Transactions on Cybernetics_ , 2019.
* [6] K. J. Hunt, D. Sbarbaro, R. Żbikowski, and P. J. Gawthrop, “Neural networks for control systems—a survey,” _Automatica_ , vol. 28, no. 6, pp. 1083–1112, 1992.
* [7] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” in _International Conference on Learning Representations_ , 2014.
* [8] W. Xiang, H.-D. Tran, J. A. Rosenfeld, and T. T. Johnson, “Reachable set estimation and safety verification for piecewise linear systems with neural network controllers,” in _2018 Annual American Control Conference (ACC)_. IEEE, 2018, pp. 1574–1579.
* [9] W. Xiang, H.-D. Tran, and T. T. Johnson, “Reachable set computation and safety verification for neural networks with ReLU activations,” _arXiv preprint arXiv:1712.08163_ , 2017.
* [10] H.-D. Tran, D. M. Lopez, P. Musau, X. Yang, L. V. Nguyen, W. Xiang, and T. T. Johnson, “Star-based reachability analysis of deep neural networks,” in _International Symposium on Formal Methods_. Springer, 2019, pp. 670–686.
* [11] H.-D. Tran, P. Musau, D. M. Lopez, X. Yang, L. V. Nguyen, W. Xiang, and T. T. Johnson, “Parallelizable reachability analysis algorithms for feed-forward neural networks,” in _2019 IEEE/ACM 7th International Conference on Formal Methods in Software Engineering (FormaliSE)_. IEEE, 2019, pp. 51–60.
* [12] A. Lomuscio and L. Maganti, “An approach to reachability analysis for feed-forward ReLU neural networks,” _arXiv preprint arXiv:1706.07351_ , 2017.
* [13] S. Dutta, S. Jha, S. Sankaranarayanan, and A. Tiwari, “Output range analysis for deep feedforward neural networks,” in _NASA Formal Methods Symposium_. Springer, 2018, pp. 121–138.
* [14] R. Ehlers, “Formal verification of piece-wise linear feed-forward neural networks,” in _International Symposium on Automated Technology for Verification and Analysis_. Springer, 2017, pp. 269–286.
* [15] G. Katz, C. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, “Reluplex: An efficient SMT solver for verifying deep neural networks,” in _International Conference on Computer Aided Verification_. Springer, 2017, pp. 97–117.
* [16] W. Xiang, H.-D. Tran, and T. T. Johnson, “Output reachable set estimation and verification for multilayer neural networks,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 29, no. 11, pp. 5777–5783, 2018.
* [17] X. Zeng and D. S. Yeung, “Sensitivity analysis of multilayer perceptron to input and weight perturbations,” _IEEE Transactions on Neural Networks_ , vol. 12, no. 6, pp. 1358–1366, 2001.
* [18] W. Xiang, H.-D. Tran, X. Yang, and T. T. Johnson, “Reachable set estimation for neural network control systems: a simulation-guided approach,” _IEEE Transactions on Neural Networks and Learning Systems_ , 2020, DOI: 10.1109/TNNLS.2020.2991090.
* [19] W. Xiang, H.-D. Tran, and T. T. Johnson, “Specification-guided safety verification for feedforward neural networks,” _arXiv preprint arXiv:1812.06161_ , 2018.
* [20] J.-L. Gouzé, A. Rapaport, and M. Z. Hadj-Sadok, “Interval observers for uncertain biological systems,” _Ecological modelling_ , vol. 133, no. 1-2, pp. 45–56, 2000.
* [21] D. Efimov and T. Raïssi, “Design of interval observers for uncertain dynamical systems,” _Automation and Remote Control_ , vol. 77, no. 2, pp. 191–225, 2016.
* [22] C. Briat and M. Khammash, “Interval peak-to-peak observers for continuous-and discrete-time systems with persistent inputs and delays,” _Automatica_ , vol. 74, pp. 206–213, 2016.
* [23] X. Chen, M. Chen, J. Shen, and S. Shao, “$\ell_{1}$-induced state-bounding observer design for positive takagi–sugeno fuzzy systems,” _Neurocomputing_ , vol. 260, pp. 490–496, 2017.
* [24] X. Chen, M. Chen, and J. Shen, “State-bounding observer design for uncertain positive systems under $\ell_{1}$ performance,” _Optimal Control Applications and Methods_ , vol. 39, no. 2, pp. 589–600, 2018.
* [25] J. Li, Z. Wang, W. Zhang, T. Raïssi, and Y. Shen, “Interval observer design for continuous-time linear parameter-varying systems,” _Systems & Control Letters_, vol. 134, p. 104541, 2019.
* [26] W. Tang, Z. Wang, and Y. Shen, “Interval estimation for discrete-time linear systems: A two-step method,” _Systems & Control Letters_, vol. 123, pp. 69–74, 2019.
* [27] Y. Chen, J. Lam, Y. Cui, J. Shen, and K.-W. Kwok, “Reachable set estimation and synthesis for periodic positive systems,” _IEEE Transactions on Cybernetics_ , 2019, DOI: 10.1109/TCYB.2019.2908676.
* [28] W. Xiang, J. Lam, and J. Shen, “Stability analysis and $l_{1}$-gain characterization for switched positive systems under dwell-time constraint,” _Automatica_ , vol. 85, pp. 1–8, 2017.
* [29] J. Shen and J. Lam, “Static output-feedback stabilization with optimal $l_{1}$-gain for positive linear systems,” _Automatica_ , vol. 63, pp. 248–253, 2016.
* [30] C. Briat, “Robust stability and stabilization of uncertain linear positive systems via integral linear constraints: $l_{1}$-gain and $l_{\infty}$-gain characterization,” _International Journal of Robust and Nonlinear Control_ , vol. 23, no. 17, pp. 1932–1954, 2013.
* [31] V. Lakshmikantham, D. Bainov, and P. Simeonov, _Practical stability of nonlinear systems_. World Scientific, 1990.
* [32] G. Zheng, D. Efimov, and W. Perruquetti, “Design of interval observer for a class of uncertain unobservable nonlinear systems,” _Automatica_ , vol. 63, pp. 167–174, 2016.
* [33] T. Raissi, D. Efimov, and A. Zolghadri, “Interval state estimation for a class of nonlinear systems,” _IEEE Transactions on Automatic Control_ , vol. 57, no. 1, pp. 260–265, 2011.
* [34] Y. Zhu, W. X. Zheng, and D. Zhou, “Quasi-synchronization of discrete-time Lur’e-type switched systems with parameter mismatches and relaxed pdt constraints,” _IEEE Transactions on Cybernetics_ , vol. 50, no. 5, pp. 2026–2037, 2019.
* [35] W. Xiang, “Necessary and sufficient condition for stability of switched uncertain linear systems under dwell-time constraint,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 11, pp. 3619–3624, 2016.
* [36] L. Zhang and W. Xiang, “Mode-identifying time estimation and switching-delay tolerant control for switched systems: An elementary time unit approach,” _Automatica_ , vol. 64, pp. 174–181, 2016.
* [37] W. Xiang, H.-D. Tran, and T. T. Johnson, “Output reachable set estimation for switched linear systems and its application in safety verification,” _IEEE Transactions on Automatic Control_ , vol. 62, no. 10, pp. 5380–5387, 2017.
* [38] W. Xiang, J. Xiao, and M. N. Iqbal, “Robust observer design for nonlinear uncertain switched systems under asynchronous switching,” _Nonlinear Analysis: Hybrid Systems_ , vol. 6, no. 1, pp. 754–773, 2012.
# Text Line Segmentation for Challenging Handwritten Document Images Using
Fully Convolutional Network
Berat Barakat, Ahmad Droby, Majeed Kassis and Jihad El-Sana The Department of
Computer Science
Ben-Gurion University of the Negev
Email: {berat, drobya, majeek<EMAIL_ADDRESS>
###### Abstract
This paper presents a method for text line segmentation of challenging
historical manuscript images. These manuscript images contain narrow interline
spaces with touching components, interpenetrating vowel signs and inconsistent
font types and sizes. In addition, they contain curved, multi-skewed and
multi-directed side note lines within a complex page layout. Therefore,
bounding polygon labeling would be very difficult and time consuming. Instead
we rely on line masks that connect the components on the same text line. Then
these line masks are predicted using a Fully Convolutional Network (FCN). In
the literature, FCN has been successfully used for text line segmentation of
regular handwritten document images. The present paper shows that FCN is
useful with challenging manuscript images as well. Using a new evaluation
metric that is sensitive to over segmentation as well as under segmentation,
testing results on a publicly available challenging handwritten dataset are
comparable with the results of a previous work on the same dataset.
## I Introduction
Historical handwritten documents are valuable since they connect past and
present. Commonly they are converted into digital form to be easily available
to scholars worldwide. However, digital historical documents pose real
challenges for automatic writer identification, keyword searching, and
indexing. Text line segmentation of document images is an essential
pre-processing operation for these automation problems. The challenges in text
line segmentation of handwritten documents include touching, overlapping and
crowded characters and vowel signs across consecutive text lines, in addition
to narrow interline spacing (Figure 1).
In addition to the problems of handwritten documents, challenging handwritten
documents contain various writing styles with inconsistent font types and font
sizes through multi-skewed, multi-directed and curved text lines (Figure 2).
Several text line extraction methods for handwritten documents have been
proposed. The projection method was initially used for printed documents
[1, 2] and then modified for skewed [3, 4] and multi-skewed documents [5]. The
smearing method [6], which fills the space between consecutive foreground
pixels, can be used on skewed documents [7] as well. The grouping method
aggregates pixels or connected components in a bottom-up strategy and is
superior in the case of skewed and curved text lines [8, 9]. Machine learning
algorithms, a type of grouping method, handle text line segmentation as a
pixel classification problem. Pixel classification can be done in a
sliding-window manner [10, 11], which is not desirable due to the redundant
and expensive computation of overlapping areas in the sliding windows. On the
other hand, dense prediction does not suffer from redundant computation and
has been successfully used for text line segmentation of handwritten documents
[12, 13].
Figure 1: Text line segmentation problems with regular handwritten documents
Figure 2: Additional text line segmentation problems with challenging
handwritten documents. Various writing styles are also noticeable.
However, text line extraction of challenging documents has not been
extensively studied. This paper provides a dataset (https://www.cs.bgu.ac.il/
vml/) of challenging documents with multi-skewed, multi-directed and curved
handwritten text lines. It then describes text line segmentation of this
dataset using a Fully Convolutional Network (FCN). We also propose a new
evaluation metric that is sensitive to both over and under segmentation of
lines, in contrast to the available metrics. Using the new evaluation metric we
show that the FCN-based method is comparable to Cohen et al.’s method [9].
In the rest of the paper we describe our method and the new evaluation metric
in detail, and present the challenging dataset and report experimental
results. After comparing results we conclude and discuss the future
directions.
## II Method
The Fully Convolutional Network has brought great improvements to the object
segmentation field [14]. It is an end-to-end semantic segmentation framework
that extracts the features and learns the classifier function simultaneously.
The FCN takes as input the original images and their pixel-level annotations
for learning the hypothesis function that predicts whether a pixel belongs to
a text line label or not.
So the crucial question is how to annotate the text lines. Baseline labeling
is not applicable to all the alphabets whereas bounding polygon is applicable
but very cumbersome for crowded documents [15]. Instead of baseline or
bounding polygon, we used line mask labeling that connects the characters in
the same line (Figure 4). A line mask disregards diacritics and touching
components between lines.
### II-A FCN architecture
The FCN architecture (Figure 3) we used is based on the FCN proposed for
semantic segmentation [14]. The first five blocks, the encoder part, follow
the design of the VGG 16-layer network [16], except that the final layer is
discarded. The encoder consists of five convolutional blocks. Each
convolutional block contains a number of convolutional layers followed by a
max pooling layer. Through the encoder, the input image is downsampled, and
the filters see coarser information with a larger receptive field. Then the
decoder part of the FCN upsamples the coarse outputs to dense pixels.
Upsampling is done by transpose convolution, applying a convolution filter
with a stride equal to $\frac{1}{f}$ for upsampling by a factor $f$.
The FCN has two variants, FCN8 and FCN32, according to the upsampling factor
used in the last layer. FCN32 upsamples the last convolutional layer by $f=32$
in one step. We selected the FCN8 architecture in particular because it has
been successful in page layout analysis of a similar dataset [17]. FCN8 adds
the final layer of the encoder to the lower layers carrying finer information,
then upsamples the combined layer back to the input size. The default input
size of VGG is $224\times 224$, which does not cover more than 2 main text
lines and 3 side text lines. To include more context we changed the input size
to $320\times 320$ pixels. We also changed the number of output channels to 2,
which is the number of classes: text line or background.
Figure 3: The FCN architecture. Pooling and prediction layers are shown as
grids that indicate relative coarseness. Convolutional layers are shown as
vertical lines. FCN8 upsamples the final layer 4 times, upsamples the pool4
layer 2 times, and combines them with the pool3 layer, finally upsampling to
the input size.
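A minimal PyTorch sketch of the FCN8 fusion described in the caption is given below; the channel counts of pool3/pool4/pool5 follow standard VGG16, and the layer names are our assumptions rather than the authors' exact configuration.

```python
# Sketch of the FCN8 skip fusion (assumed channel counts from VGG16).
import torch.nn as nn

class FCN8Head(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.score3 = nn.Conv2d(256, n_classes, 1)  # pool3: input/8
        self.score4 = nn.Conv2d(512, n_classes, 1)  # pool4: input/16
        self.score5 = nn.Conv2d(512, n_classes, 1)  # final encoder layer: input/32
        self.up5 = nn.ConvTranspose2d(n_classes, n_classes, 4, stride=4)     # x4
        self.up4 = nn.ConvTranspose2d(n_classes, n_classes, 2, stride=2)     # x2
        self.up_out = nn.ConvTranspose2d(n_classes, n_classes, 8, stride=8)  # x8

    def forward(self, pool3, pool4, pool5):
        # all three score maps are brought to 1/8 resolution, summed,
        # then upsampled back to the input size
        fused = (self.score3(pool3) + self.up4(self.score4(pool4))
                 + self.up5(self.score5(pool5)))
        return self.up_out(fused)
```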
### II-B Pre-processing
We binarize the 30 document images, each with an approximate size of
$3000\times 4000$, by applying an adaptive binarization method for historical
documents [18]. Binarized images were inverted before inputting them to the
FCN. Then we manually annotated the line masks on the document images. A
sequence of original, binarized and labeled document images is demonstrated in
Figure 4. Finally, a total of 50,000 and 6,000 random patches of size
$320\times 320$ were generated for the training and validation sets of each
fold, respectively.
Figure 4: A sequence of original, binarized and labeled document images.
Random patches for training are generated from the binarized and labeled
images.
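A simple sketch of the random patch generation follows (patch size from the text; uniform sampling is our assumption):

```python
import numpy as np

def random_patches(image, label, n, size=320, rng=np.random.default_rng(0)):
    # yields n aligned (image, label) crops of size x size
    H, W = image.shape[:2]
    for _ in range(n):
        y = int(rng.integers(0, H - size + 1))
        x = int(rng.integers(0, W - size + 1))
        yield image[y:y + size, x:x + size], label[y:y + size, x:x + size]
```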
### II-C Training and testing
We applied a 6-fold cross-validation scheme for the experiments. Each fold was
split into train, validation and test sets. In each fold, training continued
for 80 epochs and the model with the least validation loss value was saved.
The best model was then used to predict the corresponding test set. This
training procedure ensures generalizability of the proposed model. The FCN was
trained with a batch size of 16, using Stochastic Gradient Descent (SGD) with
momentum equal to $0.9$ and learning rate equal to $0.001$. VGG was
initialized with its publicly available pre-trained weights.
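The training configuration above can be sketched in PyTorch as follows; the loss function and data loader are assumptions, since the text does not specify them:

```python
import torch

def train(model, train_loader, epochs=80):
    # hyperparameters from the text: batch size 16 (set in the loader),
    # SGD with momentum 0.9 and learning rate 0.001
    opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()  # assumed pixel-wise loss
    for _ in range(epochs):
        for patches, masks in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(patches), masks)
            loss.backward()
            opt.step()
    # the checkpoint with the lowest validation loss is kept (per the text)
```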
During testing, a sliding window of size $320\times 320$ was used for
prediction, but only the inner window of size $100\times 100$ was considered.
The page was padded with black pixels at its right and bottom sides if its
size was not an integer multiple of the sliding window size, in addition to
padding on all 4 sides so that only the central part of each sliding window is
considered.
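A sketch of this sliding-window prediction with a central crop is given below; variable names and the exact padding arithmetic are assumptions consistent with the description:

```python
import numpy as np

def predict_page(page, predict_fn, win=320, inner=100):
    pad = (win - inner) // 2            # 110 px of context on each side
    H, W = page.shape
    Hp = int(np.ceil(H / inner)) * inner
    Wp = int(np.ceil(W / inner)) * inner
    padded = np.zeros((Hp + 2 * pad, Wp + 2 * pad), dtype=page.dtype)
    padded[pad:pad + H, pad:pad + W] = page   # black (zero) padding elsewhere
    out = np.zeros((Hp, Wp))
    for y in range(0, Hp, inner):
        for x in range(0, Wp, inner):
            pred = predict_fn(padded[y:y + win, x:x + win])  # win x win output
            out[y:y + inner, x:x + inner] = pred[pad:pad + inner, pad:pad + inner]
    return out[:H, :W]
```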
### II-D Post-processing
Occasionally the predicted line masks were disconnected. Thus, we needed to
post-process the FCN output. Given a predicted line mask image, first the
orientation of each connected component was computed. Then the image was split
into $N$ layers, where each layer contains the connected components with the
same orientation. Next, a directional morphological operation was applied on
each layer. The resulting layers were finally combined back using a pixel-wise
OR operation.
Let $C=\\{c_{1},c_{2},...,c_{M}\\}$ be the set of connected components in the
predicted line mask image. $C$ is further divided into $N$ intersecting
subsets $B_{1},B_{2},...,B_{N}\subseteq C$ such that:
$B_{j}=\\{c_{i}:\alpha(c_{i})^{2}|v_{j}^{T}\cdot\theta(c_{i})|<\epsilon\\}$
(1) $i=1,2,\dots M,~j=1,2,\dots N$
$v_{j}=(\cos(j\frac{\pi}{N}),\sin(j\frac{\pi}{N}))$ (2)
$\alpha(c)=\frac{R_{maj}}{R_{maj}+R_{min}}$ (3)
where $v_{j}$ is the unit vector of a particular orientation in $[0,\pi]$ and
$\epsilon\in[0,1]$ is the threshold for selecting the connected components
perpendicular to this particular orientation. $R_{maj}$ and $R_{min}$ are the
major and minor axes of the ellipse fitted to the connected component $c$,
respectively. $\alpha(c)\in[0.5,1]$ indicates how certain we are about the
orientation of the component $c$. $\theta(c)$ is the unit vector that
represents the orientation of the ellipse fitted to the connected component
$c$. Ellipse fitting was done using the algorithm described in [19].
Eventually, for each subset $B_{j}$ a morphological operation with a narrow
kernel in the orientation of this subset was applied. Figure 5 shows the
result of post-processing on a sample predicted line mask image.
Figure 5: Post processing phases: (a) Predicted line mask may have
disconnected components. (b) For each component an ellipse (red) is fitted and
its orientation vector $\theta(c)$ (blue) is computed. (c) Morphological
dilation is applied to each component with a narrow kernel in the direction of
its fitted ellipse.
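A hedged sketch of the grouping test of Eqs. (1)–(3), using OpenCV ellipse fitting, is shown below; the handling of OpenCV's angle convention is a simplification on our part, and the thresholds $N=10$, $\epsilon=0.2$ are taken from the experiments reported later.

```python
import cv2
import numpy as np

def orientation_groups(contours, N=10, eps=0.2):
    # contours: point arrays of the connected components (>= 5 points each)
    groups = [[] for _ in range(N)]
    for c in contours:
        (cx, cy), axes, angle = cv2.fitEllipse(c)
        R_maj, R_min = max(axes) / 2.0, min(axes) / 2.0
        alpha = R_maj / (R_maj + R_min)               # Eq. (3)
        t = np.deg2rad(angle)
        theta = np.array([np.cos(t), np.sin(t)])      # orientation of the ellipse
        for j in range(1, N + 1):
            v = np.array([np.cos(j * np.pi / N), np.sin(j * np.pi / N)])  # Eq. (2)
            if alpha ** 2 * abs(v @ theta) < eps:     # Eq. (1): nearly perpendicular
                groups[j - 1].append(c)
    return groups
```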
### II-E Connectivity Component Based Line Extraction Accuracy Metric
Available evaluation methods for text line segmentation either use a
pixel-wise matching, mostly normalized by line length, or a maximum overlap
according to a certain threshold between the extracted and annotated lines.
These methods have their shortcomings. Thus, we present a different evaluation
method that provides a better picture of the results.
The theoretical basis is as follows. A line extraction algorithm succeeds in
extracting a complete text line if it has succeeded in finding all the
connected components of this line. That is if the algorithm labels all the
connected components of a line with the same label, then it has successfully
extracted this line without any errors. This is in contrast to having multiple
labels, over segmentation, or extracting part of the connected components,
under segmentation, along the same text line.
To describe the new metric, we define the term connectivity component. A
connectivity component is the connection between two consecutive components
with the same label. The number of connectivity components in a line is equal
to the number of connections between every two consecutive connected
components, plus an additional beginning-of-line connectivity component. The
extra connectivity component handles cases where a line contains one connected
component only. Complete extraction of a line with several connectivity
components is extracting all its connectivity components and assigning them
the same label.
To quantify the new metric we define recall and precision for calculating
F-measure. Recall is the number of connectivity components extracted by the
algorithm in a line, out of all connectivity components found in the
corresponding line in ground truth. Precision is the number of correct
connectivity components extracted by the algorithm in a line out of all
connectivity components extracted by the algorithm. Note that some
connectivity components extracted by the algorithm are not found in the ground
truth, and some connectivity components found in the ground truth are not
extracted by the algorithm. The first type of error is quantified in the
precision part of the metric, while the latter type is quantified in the
recall part of the metric.
Let $G=\\{g_{1},g_{2},g_{3},\dots g_{m}\\}$ be the set of connected components
of a line in the ground truth and $E_{i}\in\\{E_{1},E_{2},E_{3},\dots E_{n}\\}$
be the extracted lines such that $E_{i}\cap G\neq\emptyset$; then for this line
in the ground truth, the recall ($R$) and precision ($P$) are:
$R=\sum\limits_{i}{\frac{|E_{i}\cap G|-1}{|G|-1}}$ (4)
$P=\frac{\sum\limits_{i}{|E_{i}\cap G|-1}}{\sum\limits_{i}{|E_{i}|-1}}$ (5)
The recall definition penalizes over segmentation of a line where an
extraction algorithm assigns multiple labels to the components of a single
line. In contrast, the precision definition penalizes under segmentation where
an extraction algorithm incorrectly assigns a line label to the components
that are not in the ground truth of this line (Figure 6).
Figure 6: Connectivity component based metric penalizes under segmentation by
its precision definition and over segmentation by its recall definition.
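A direct sketch of Eqs. (4)–(5) follows; lines are represented as sets of connected-component ids, and the guard for single-component lines is our assumption standing in for the beginning-of-line convention.

```python
def line_scores(G, extracted):
    # G: set of component ids of one ground-truth line
    # extracted: list of sets, the extracted lines
    hits = [E & G for E in extracted if E & G]
    num = sum(len(o) - 1 for o in hits)                  # sum_i (|E_i ∩ G| - 1)
    recall = num / max(len(G) - 1, 1)                    # Eq. (4)
    denom = sum(len(E) - 1 for E in extracted if E & G)  # sum_i (|E_i| - 1)
    precision = num / max(denom, 1)                      # Eq. (5)
    return recall, precision
```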
## III Dataset
Although several benchmark datasets [20, 21, 22] of handwritten document
images are available, a challenging document dataset is absent. We collected a
set of challenging document images from the Islamic Heritage Project (IHP),
Harvard. This dataset is publicly available (https://www.cs.bgu.ac.il/ vml/).
The challenging dataset contains 30 pages from two different manuscripts. It
is written in the Arabic language and contains 2732 text lines, a considerable
amount of which are multi-directed, multi-skewed or curved. Ground truth, in
which text lines were manually labeled with line masks, is also available in
the dataset.
## IV Results
We tested the proposed system on the new challenging handwritten document
dataset. In each fold we trained the FCN on 50,000 patches randomly cropped
from 20 pages, validated on 6,000 patches randomly cropped from 5 pages and tested
on 5 whole pages using a sliding window. Predicted line mask images were then
post-processed with $N=10$ and $\epsilon=0.2$. Extracted text lines were
evaluated using the new metric to calculate the F-measure.
The entire training took around 9 days. Visualization of the first
convolutional layer filters shows that the network has learned and the filters
have converged (Figure 7). The model achieved $89\%$ training accuracy and
$88\%$ validation accuracy on average. Two characteristics of the dataset kept
the model from overfitting to the training set. First, it contains two
manuscripts with 6 and 24 pages, and the manuscript with 6 pages caused most
of the errors. Second, although the dataset contains a considerable amount of
multi-skewed, multi-directed and curved lines, they spatially cover a smaller
area due to their smaller font size. This led to fewer random patches with
skewed or curved lines relative to the number of random patches with regular
lines.
Figure 7: Visualization of the filters in the first convolutional layer.
Table I shows the performance of our method compared with the method of Cohen
et al. [9]. Their approach achieved outstanding results on the ICDAR2009 [20]
and ICDAR2013 [21] datasets. We ran their publicly available code
(http://www.cs.bgu.ac.il/ rafico/LineExtraction.zip) on the challenging
handwritten dataset.
TABLE I: Comparison with the method of Cohen et al.

Method | Recall | Precision | F-measure
---|---|---|---
Proposed | 0.82 | 0.78 | 0.80
Cohen et al. [9] | 0.74 | 0.60 | 0.66
Figure 8: Sample image of ground truth and corresponding outputs of Cohen et
al. [9] and the FCN. Lower precision values show that both methods tend to
under-segment. Most errors of the FCN method occur at curved areas, whereas
most errors of the method of Cohen et al. occur at the main text areas.
Our method outperforms the method of Cohen et al. in terms of both recall and
precision. Both methods have lower precision values than recall values, which
demonstrates that most of their errors are due to wrongly connected lines in
their output. Therefore both methods tend to under-segment more than
over-segment. We have noticed that in the output of our method, wrongly
connected lines mostly crop up at the curved areas, in contrast to the output
of Cohen et al., where the wrongly connected lines mostly crop up at the main
text areas. The former is a result of the small number of training patches
with curved lines: curved lines can be long, but their curved part covers a
relatively small spatial area, namely one or two corner parts of a page. The
latter is a result of the small number of main text lines relative to the
number of side text lines in a page, whereby the average height of text lines
converges to the height of the side text lines. Therefore the method of Cohen
et al., which operates according to the average height of text lines, makes
most of its errors in the main text areas. Figure 8 shows some qualitative
results for the latter and the former types of errors on the challenging
dataset.
## V Conclusion
This paper introduces challenging handwritten documents, presents a dataset of
challenging handwritten documents, and describes its text line segmentation
using an FCN. Line mask labeling is less cumbersome for challenging
handwritten documents and is a proper way to prepare FCN training data. We
have also defined a new evaluation metric based on the concept of the
connectivity component. This metric is sensitive to both over and under
segmentation. The new metric is used to validate the proposed method on the
challenging handwritten dataset. As future work, performance on curved text
lines can be improved by augmenting patches with curved lines.
## Acknowledgment
The authors would like to thank the support of the Frankel Center for Computer
Science at Ben-Gurion University of the Negev.
## References
* [1] J. Ha, R. M. Haralick, and I. T. Phillips, “Document page decomposition by the bounding-box project,” in _Document Analysis and Recognition, 1995., Proceedings of the Third International Conference on_ , vol. 2. IEEE, 1995, pp. 1119–1122.
* [2] R. Manmatha and N. Srimal, “Scale space technique for word segmentation in handwritten documents,” _Lecture notes in computer science_ , pp. 22–33, 1999.
* [3] M. Arivazhagan, H. Srinivasan, and S. Srihari, “A statistical approach to handwritten line segmentation,” _Document Recognition and Retrieval XIV, Proceedings of SPIE, San Jose, CA_ , pp. 6500T–1, 2007.
* [4] I. Bar-Yosef, N. Hagbi, K. Kedem, and I. Dinstein, “Line segmentation for degraded handwritten historical documents,” in _Document Analysis and Recognition, 2009. ICDAR’09. 10th International Conference on_. IEEE, 2009, pp. 1161–1165.
* [5] N. Ouwayed and A. Belaïd, “A general approach for multi-oriented text line extraction of handwritten documents,” _International Journal on Document Analysis and Recognition_ , pp. 1–18, 2012.
* [6] K. Y. Wong, R. G. Casey, and F. M. Wahl, “Document analysis system,” _IBM journal of research and development_ , vol. 26, no. 6, pp. 647–656, 1982.
* [7] A. Alaei, U. Pal, and P. Nagabhushan, “A new scheme for unconstrained handwritten text-line segmentation,” _Pattern Recognition_ , vol. 44, no. 4, pp. 917–928, 2011.
* [8] S. S. Bukhari, F. Shafait, and T. M. Breuel, “Script-independent handwritten textlines segmentation using active contours,” in _Document Analysis and Recognition, 2009. ICDAR’09. 10th International Conference on_. IEEE, 2009, pp. 446–450.
* [9] R. Cohen, I. Dinstein, J. El-Sana, and K. Kedem, “Using scale-space anisotropic smoothing for text line extraction in historical documents,” in _International Conference Image Analysis and Recognition_. Springer, 2014, pp. 349–358.
* [10] B. Moysset, C. Kermorvant, C. Wolf, and J. Louradour, “Paragraph text segmentation into lines with recurrent neural networks,” in _Document Analysis and Recognition (ICDAR), 2015 13th International Conference on_. IEEE, 2015, pp. 456–460.
* [11] J. Pastor-Pellicer, M. Z. Afzal, M. Liwicki, and M. J. Castro-Bleda, “Complete system for text line extraction using convolutional neural networks and watershed transform,” in _Document Analysis Systems (DAS), 2016 12th IAPR Workshop on_. IEEE, 2016, pp. 30–35.
* [12] Q. N. Vo and G. Lee, “Dense prediction for text line segmentation in handwritten document images,” in _Image Processing (ICIP), 2016 IEEE International Conference on_. IEEE, 2016, pp. 3264–3268.
* [13] G. Renton, C. Chatelain, S. Adam, C. Kermorvant, and T. Paquet, “Handwritten text line segmentation using fully convolutional network,” in _Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on_ , vol. 5. IEEE, 2017, pp. 5–9.
* [14] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 3431–3440.
* [15] T. Grüning, R. Labahn, M. Diem, F. Kleber, and S. Fiel, “Read-bad: A new dataset and evaluation scheme for baseline detection in archival documents,” _arXiv preprint arXiv:1705.03311_ , 2017.
* [16] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” _arXiv preprint arXiv:1409.1556_ , 2014.
* [17] B. Barakat and J. El-Sana, “Binarization free layout analysis for arabic historical documents using fully convolutional networks,” in _Arabic Script Analysis and Recognition (ASAR), 2nd International Workshop_. IEEE, 2018, pp. 26–30.
* [18] I. Bar-Yosef, I. Beckman, K. Kedem, and I. Dinstein, “Binarization, character extraction, and writer identification of historical hebrew calligraphy documents,” _International Journal of Document Analysis and Recognition (IJDAR)_ , vol. 9, no. 2-4, pp. 89–99, 2007.
* [19] A. W. Fitzgibbon, R. B. Fisher _et al._ , “A buyer’s guide to conic fitting,” _DAI Research paper_ , 1996.
* [20] B. Gatos, N. Stamatopoulos, and G. Louloudis, “Icdar2009 handwriting segmentation contest,” _International Journal on Document Analysis and Recognition (IJDAR)_ , vol. 14, no. 1, pp. 25–33, 2011.
* [21] N. Stamatopoulos, B. Gatos, G. Louloudis, U. Pal, and A. Alaei, “Icdar 2013 handwriting segmentation contest,” in _Document Analysis and Recognition (ICDAR), 2013 12th International Conference on_. IEEE, 2013, pp. 1402–1406.
* [22] M. Diem, F. Kleber, S. Fiel, T. Grüning, and B. Gatos, “cbad: Icdar2017 competition on baseline detection,” in _Document Analysis and Recognition (ICDAR), 2017 14th IAPR International Conference on_ , vol. 1. IEEE, 2017, pp. 1355–1360.
# Aesthetics, Personalization and Recommendation: A survey on Deep Learning in
Fashion
Wei Gong <EMAIL_ADDRESS>, University of Science and Technology of
China, No.96, JinZhai Road, Baohe District, Hefei, Anhui, China 230026 and
Laila Khalid <EMAIL_ADDRESS>, University of Science and Technology of
China, No.96, JinZhai Road, Baohe District, Hefei, Anhui, China 230026
(2020)
###### Abstract.
Machine learning is completely changing trends in the fashion industry. From
big to small, every brand is using machine learning techniques to improve
revenue, attract customers and stay ahead of the trend. People are into
fashion, and they want to know what looks best and how they can improve their
style and elevate their personality. And since systems are already monitoring
every sale and upcoming trends, why not utilize their power to obtain outfit
recommendations? Using deep learning technology and infusing it with computer
vision techniques, one can do so: brain-inspired deep networks,
neuroaesthetics, training GANs, working with unstructured data, and infusing
the transformer architecture are just some of the highlights that intersect
with the fashion domain. It is all about designing a system that can deliver
fashion-related information to meet the ever-growing demand. Personalization
is a big factor that impacts the spending choices of customers. The survey
also shows remarkable approaches to achieving that, delving deep into how
visual data can be interpreted and leveraged in different models and
approaches. Aesthetics play a vital role in clothing recommendation, as users'
decisions depend largely on whether the clothing is in line with their
aesthetics; however, conventional image features cannot portray this directly.
For that, the survey also highlights remarkable models, such as the tensor
factorization model and the conditional random field model, among others, that
cater to the need to acknowledge aesthetics as an important factor in apparel
recommendation. These AI-inspired deep models can pinpoint exactly which
styles resonate best with customers, and they can give an understanding of how
new designs will settle in with the community. With AI and machine learning,
businesses can stay ahead of fashion trends and deliver exactly what their
customers want, when they want it.
Deep Learning, neural networks, Fashion, Aesthetics, Recommendation,
Personalization
## 1\. Introduction
Looking back over the past decade, deep learning has achieved significant
success in many popular industries and areas. Perception tasks, including
visual object recognition, text understanding and speech recognition, have
been revolutionized. There is no question of how successful deep learning has
been. Still, if we want to discuss deep learning in the context of the fashion
industry, we see a lot of opportunities and research areas that are still
available to work on. As we all know, fashion is an ever-evolving industry;
new trends are setting in every passing second. Clothing design is one of the
most creative realms in the contemporary world (Insight, [n.d.]), whether
because of the considerable creative part of the design process or the
equivocal information about clothing. Internet shopping has also grown
incredibly in the last few years, and fashion has created immense
opportunities. Exciting applications for image understanding, retrieval and
tagging are surfacing, and there are many different application areas they can
be used in. For example, text analysis, image analysis, and similarity
retrieval can all be utilized in fashion. Deep learning is thus an approach we
can use to train computers to perform human-like tasks such as recognizing
speech, identifying images or making predictions. For example, results
described in the apparel design and fashion industry allow users to translate
an image into text that can be interpreted as a description of the garment
based on its sketch.
Images are essential because they both display content and convey emotions like sadness, excitement, or anger. Effective image classification is therefore valuable and is widely used in computer vision and multimedia. Still, research on fashion, and specifically on aesthetic features or personalization, follows only a few directions. One of them describes images using features inspired by art theories, which are intuitive, discriminative, and easily understandable; classification based on these features can achieve high accuracy compared with the state of the art. As an example, the paper (Wang, 2013) develops an emotion-guided image gallery to demonstrate the proposed feature collection: the authors mine interpretable visual features that directly affect human emotional perception, from the viewpoint of art theories.
Another example is the paper (Borràs et al., 2003), which performs content-based image retrieval in terms of people's appearance. It is a two-stage process composed of image segmentation and region-based interpretation: an image is modelled as an attributed graph, and a hybrid method follows a split-and-merge strategy. Much work in computer vision targets this field specifically, and image retrieval from databases is usually formulated in terms of descriptions that combine salient features such as colour, texture, and shape.
Today, more and more retail companies are trying to stay ahead of the trend curve and to reshape their business with tech-forward approaches and solutions. Data analysis brings diverse opportunities to companies, allowing them to reach their customer goals and offer a smarter experience. The major challenge fashion retailers face, however, is the lack of profound insights based on reliable statistics. Computer vision technologies and deep learning can be very handy here. Since computer vision is still an evolving technology, we can already speak of specific optimization and cost-reduction techniques: information about what people wear, how customers match their garments, and who influences their taste is essential for fashion retailers. Instagram influencers, for instance, are followed and copied by many people and inspire large audiences. Image recognition technology also helps business owners collect data, process it, and gain actionable insight for effective trend forecasting.
In the article (ELEKS, [n.d.]b), the dashboard the authors developed shows how frequently a specific type of garment appears each day: what type of apparel is popular within a particular age range, how people match their attire, why a specific jacket is trending among teenagers, or why a scarf is popular among the elderly. They also developed a graph showing how prevalent certain types of garments would be over the next seasons, which could broadly shape upcoming fashion trends. This kind of analysis aims to help fashion retailers and brands plan sales and avoid surplus. The author focuses on the visual search domain, mainly image similarity for e-commerce and online shops, and argues that understanding images of clothing means much more than classifying them into categories: without a meaningful description of the whole image, a lot of useful information is lost. In this way, one can gain reliable and timely insights into fashion trends across any location. What defines those trends is people's unique choices: how they choose something and what goes with their personality. Personalization is one of the biggest game-changers in apparel recommendation, and by targeting this factor businesses can attract more customers.
A notable aspect of deep learning is that it penetrates industries and activities where human creativity has traditionally dominated, adding a futuristic touch to fashion, art, architecture, music, and so on. Another paper's (ELEKS, [n.d.]a) key finding is that the representations of content and style in convolutional neural networks are separable, so both representations can be manipulated independently to produce new and perceptually meaningful images. Fashion is an entirely new direction for machine learning. To design clothes, one should understand the mechanics at work: how certain styles become famous, what attracts millions of followers, what causes fashion trends to spread, and how principles and patterns evolve, so that the task of designing or predicting trends can be simplified. The paper under discussion suggests that designing and predicting trends can now be simplified thanks to a new class of neural networks that can automatically identify shapes, elements, and types of clothing and combine them. This allows a fresh way to manipulate patterns and to see which patterns exert more influence than others. Aesthetics play a vital role in a user's picks, and even though personalization is tricky to play with, aesthetics are not: everyone appreciates eye-pleasing items, and if the role of aesthetics in fashion recommendation can be exploited, the payoff is substantial.
There are thus many aspects of fashion in which deep learning can help, and multiple domains where it can improve current practice and help predict and revolutionize the industry. This survey is organized as follows. Sec. 2 reviews the leading fashion recommendation systems and approaches that form the basis for future work. Sec. 3 examines the role of aesthetics in fashion and the various approaches to its analysis. Sec. 4 provides an overview of personalization in fashion, covering top approaches involving deep neural networks, GANs, and the handling of unstructured data. Sec. 5 presents selected applications and future directions. Last but not least, concluding remarks are given in Sec. 6.
## 2\. Recommendation Systems
Compared with object recognition, fashion sense is more subtle and sophisticated and can require specific domain expertise in outfit composition. If we define an outfit as a set of clothes working together, typically for a desired style, then finding a good outfit composition requires not only following the appropriate dressing codes but also a creative balancing of contrasting colours and styles. Although a fair amount of research has addressed clothes retrieval and recommendation, little of it considers the problem of fashion outfit composition. On the one hand, fashion concepts are often subtle and subjective, and it is non-trivial to obtain consensus from ordinary labellers who are not fashion experts. On the other hand, there may be a very large number of attributes for describing fashion.
### 2.1. End-to-End Deep Learning Approach on Set Data.
It is challenging to obtain exhaustive labels for training, so most existing studies are limited to the simple scenario of retrieving similar clothes or choosing individual clothes for a given event. The paper (Li et al., 2016) reviewed here proposes a data-driven approach to train a model that can automatically compose a suitable fashion outfit. This approach is motivated by the surge of online fashion platforms, including Pinterest and YouTube, and by how eagerly teenagers create new culture trends on these sites.
#### 2.1.1. Background
The authors developed a fully automatic composition system based on a scorer that iteratively evaluates all possible outfit candidates. The model faced some challenges for which solutions had to be found. The first was the complicated visual content of fashion images: there are potentially many kinds of attributes, like colour, texture, category, and spectrum, and it is impossible to label or even list all of them. The second was the rich context of a fashion outfit: clothing outfits can reflect personality and interests, so a style acceptable to one group or culture may be offensive to another. To infer such information, the authors take into account not only the pixel information but also the context information of the fashion outfit.
#### 2.1.2. Proposed Approach
For these challenges they proposed corresponding solutions. For the first, they built an end-to-end system that encodes visual features through a deep convolutional network, takes a fashion outfit as input, and predicts user engagement levels. For the second, a multimodal deep learning framework leverages the context information alongside the image itself; their experiments showed that the multimodal approach significantly outperforms the single-modality model and provides a more suitable and reliable solution for the outfit scoring task, and thus for the full composition task. Their stated contributions are an end-to-end trainable system that fuses signals from multi-level hybrid modalities, including the images and metadata of the fashion items, and a large-scale database collected for fashion-outfit research. Lastly, they propose a fashion outfit composition solution based on a reliable outfit quality predictor. Predicting fashion is never easy, since many interleaving factors, visible or hidden, contribute to the process; the combinatorial nature of the problem also makes it very interesting and a testbed for state-of-the-art machine learning systems.
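As a rough illustration of this kind of scorer, the sketch below (in PyTorch, with hypothetical layer sizes; the paper's exact architecture is not reproduced here) fuses a CNN image embedding with an embedded-metadata vector and regresses an engagement score:

```python
import torch
import torch.nn as nn

class OutfitScorer(nn.Module):
    """Toy multimodal outfit scorer: fuses image and metadata features."""
    def __init__(self, img_dim=512, meta_dim=64, hidden=128):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + meta_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # scalar engagement score
        )

    def forward(self, img_feat, meta_feat):
        # img_feat: (batch, img_dim) from a pretrained CNN backbone
        # meta_feat: (batch, meta_dim) from embedded item metadata
        return self.fuse(torch.cat([img_feat, meta_feat], dim=1))
```

To compose a full outfit, such a scorer would be applied to each candidate set and the highest-scoring composition kept.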
Figure 1. The proposed fashion outfit scoring model
### 2.2. Implicit Feedback Based
The fashion domain has several intriguing properties that make personalized recommendation even more difficult than in traditional domains. One would also like to avoid the potential bias of explicit user ratings, which are moreover expensive to obtain.
#### 2.2.1. Background
The paper (Nguyen et al., 2014) approaches fashion recommendation by analyzing the implicit feedback users leave in an app. The design criterion is that the system shall be completely unobtrusive, so the recommender cannot rely on explicit ratings; instead it builds on the rich history of interaction between the user and the app. In simple words, it relies on implicit feedback: user preference is inferred automatically from behavior.
There are still some challenges in this approach. Most notably, any interaction a user has with an item is a sign of interest, so the system never receives negative feedback. Feedback is also multi-faceted, since an item can be both clicked and loved, so the different types of feedback have to be combined into a single numerical value, the preference score used by the recommendation algorithms. Finally, such a system is harder to evaluate than explicit-rating systems, because there is no target rating to compare its predictions to. All in all, success relies on a well-defined strategy for inferring user preference from implicit feedback data, combining event types into implicit scores, and evaluating these scores and recommendations using a suitable metric.
#### 2.2.2. Proposed Approach
To build the recommendation system, the authors' first step was to generate implicit preference scores: the data captured from a user's interaction with an item is translated into a single number, the implicit preference score, which can later be used to rank the item against other items. The most important point when creating such numbers is to understand the available data and its implications for user preference; once the data is analyzed, suitable generalizations can be chosen. The second step was to define the penalisation functions. Important properties of the fashion domain that must be captured by the system include seasons and trends, price sensitivity, and popularity. In general, when a user $u$ triggers an event $e,$ e.g. a click, there is a range of possible scores to give this event. We use $S_{e}$ to denote this score, and let $m_{e}$ and $M_{e}$ denote the minimum and maximum score possible for event $e,$ respectively. A penalisation function $p_{u}(x)$ takes a feature value $x$ (e.g., related to an item's price) and returns a number between zero and one, adjusting the score inside the possible range:
$S_{e}=M_{e}-\left(M_{e}-m_{e}\right)\cdot p_{u}(x)$
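As a minimal sketch (the score bounds and penalty value below are illustrative, not the paper's calibrated settings), the score is a linear interpolation between $m_{e}$ and $M_{e}$:

```python
def event_score(m_e, M_e, penalty):
    """Interpolate between the max and min score for an event.

    penalty is p_u(x) in [0, 1]: 0 keeps the full score M_e,
    1 reduces it all the way down to m_e.
    """
    return M_e - (M_e - m_e) * penalty

print(event_score(1, 5, 0.3))  # a 'click' scored in [1, 5] -> 3.8
```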
As mentioned, fashion is all about trend and timing, so the recentness of an event is a natural feature for weighting its importance, penalising items the user has not considered recently. For this they look at the number of days since the user performed the event in question, denoted $x$, and compare it to the number of days since the oldest event the user has in the database, denoted $F_{u}.$
One could create a linear penalisation by letting $p_{u}(x)=x/F_{u},$ but this does not fit well with the idea of seasons. For example, even if a store sells warm clothes from November to March, the recommendations should focus on summer clothes once the season changes. To replicate this behavior they instead chose a sigmoid function, which treats recentness in a way that obscures the preferences of users who have been absent from the app for some time; a linear penalisation would make the difference in penalisation between the two most recent events equal to the difference between the two oldest ones, whereas the sigmoid concentrates the change in penalty around a chosen point in time.
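A rough sketch of such a sigmoid recency penalty (the steepness and midpoint are made-up parameters, not the paper's):

```python
import math

def recency_penalty(days_since_event, oldest_event_days,
                    steepness=10.0, midpoint=0.5):
    """Sigmoid penalty in [0, 1]: recent events are barely penalised,
    while old events approach the maximum penalty."""
    x = days_since_event / oldest_event_days  # x / F_u, normalised to [0, 1]
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))
```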
Figure 2. Screenshots from the application. From left to right: editorial information, product details, and a “love list”.
For price, the observation is that different users have different price ranges and tend to be price sensitive, so an item's price should also be taken into account. The user's typical price range is used to create a personalized score, penalising items outside the range the user prefers. This is done in two simple steps: first, the average price of all items related to a user is computed; second, the difference between the price of the item $i$ that triggered event $e$ and this average is calculated and used to penalise the item. The third aspect is popularity: it is used as a feature by comparing a user's behavior to that of the rest of the population. If a user's activities conform to common standards, those standards are likely to reflect his or her taste; when the behavior is more unique, it gives significant clues about the items to recommend. The authors judge each user's behavior by looking at the overall popularity of the items they interacted with, and use a linear penalty for items of different popularity.
Lastly, they combine all these penalisations into a weighted sum, which requires setting different weights for the different factors. To validate the approach, they verified that the scores were built from features relevant to the fashion domain and that the scores were distributed over the full range of valid values, with an average confirming the hypothesis.
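Putting the pieces together, a hypothetical combination (the weights below are placeholders; the paper tunes its own) could look like:

```python
def implicit_score(m_e, M_e, penalties, weights):
    """Combine several penalisation functions into one event score.

    penalties: dict of factor -> p_u(x) value in [0, 1]
    weights:   dict of factor -> non-negative weight, summing to 1
    """
    combined = sum(weights[k] * penalties[k] for k in penalties)
    return M_e - (M_e - m_e) * combined

score = implicit_score(
    1, 5,
    penalties={"recency": 0.2, "price": 0.5, "popularity": 0.1},
    weights={"recency": 0.5, "price": 0.3, "popularity": 0.2},
)
```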
### 2.3. Based on Weak Appearance Feature
Online shopping technology has been developing rapidly, and online fitting and other intelligent clothing equipment have been introduced in the fashion industry. Many advanced algorithms have been developed, with many more in progress.
#### 2.3.1. Background
For example, CRESA (Melo et al., 2015) combined textual attributes, visual features, and human visual attention to compose the clothes profile used in recommendation. Content-based recommendation is applicable in many settings: given a user's individual browsing records, the recommended results have been shown to be explicit and accessible. However, content-based recommendation is often unsuitable when applied in industry, since new users who have just signed up have no browsing record on which to base recommendations.
#### 2.3.2. Proposed Approach
The paper (Zhang et al., 2017) observes that the classification process usually needs to consider quarterly sales, clothing styles, and other factors. The authors divide fashion level into four categories. Fashion level is a subjective notion that usually requires subjective evaluation of image characteristics by an expert group, so the knowledge background and psychological motivation of the experts are involved. For researchers of visual psychological characteristics, there was no quantitative description method by which objective evaluation results could represent subjective evaluation results. The paper therefore aims to find a set of objective indexes that can be used to assess fashion level, by considering all the factors that usually affect the evaluation of personal scoring. It regards weak appearance features as important indexes that influence fashion level. There are many weak appearance features related to individual fashion level, but the three major categories are makeup, accessories, and hair color; these include blush, lip color, eyebrow color, hats, accessories on the hands and neck, and so on. Using these features, the SVM classification method is leveraged to evaluate whether the human body exhibits weak appearance features. There was previously no effective way to establish a fashion-level database, but the one established in this paper provides a basis for follow-up studies by future researchers; an image database is of great significance for the training and testing of algorithms.
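A minimal sketch of this classification step, assuming the weak-feature extraction has already produced fixed-length vectors (scikit-learn is used here for illustration; the paper does not name a library):

```python
import numpy as np
from sklearn.svm import SVC

# X: weak-appearance feature vectors (makeup, accessories, hair color, ...)
# y: fashion levels 1-4 assigned by the expert group
X = np.random.rand(200, 17)            # placeholder feature vectors
y = np.random.randint(1, 5, size=200)  # placeholder expert labels

clf = SVC(kernel="rbf").fit(X, y)
level = clf.predict(X[:1])  # predicted fashion level for a new image
```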
Figure 3. Fashion level classification framework based on weak appearance
feature.
For the extraction of the weak feature index, current face detection methods fall into two categories: knowledge-based and statistics-based. To extract the weak facial features, the authors first locate the facial feature points and then apply facial recognition. The paper adopts the adaptive boosting (AdaBoost) method for facial feature positioning: the algorithm repeatedly reweights the misclassified training samples so that learning focuses on the difficult samples in subsequent rounds, and finally the selected weak classifiers are weighted and summed into a strong classifier.
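Sketched with scikit-learn (an assumption for illustration; the paper builds its own boosted combination of weak classifiers):

```python
from sklearn.ensemble import AdaBoostClassifier

# Boosting reweights the hard training samples each round and sums the
# selected weak classifiers (decision stumps by default) into a strong one.
strong = AdaBoostClassifier(n_estimators=50)
# strong.fit(patch_features, contains_feature_point)
```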
Table 1. Customer fashion level classification.
Fashion level | Description
---|---
First level | Wonderful
Second level | Great
Third level | Good
Fourth level | Common
Table 2. Weak appearance features catalogue.
Category | Weak feature index
---|---
Make-up | Eyebrow, blush, lips, eye shadow
Accessories | Neck accessories, hand accessories, brooch, nail, hat
Hair color | Red, yellow, green, blue, brown, black, gray, white
In summary, the paper uses weak appearance features to characterize consumers' fashion level and draws its conclusions by comparing the classification experiment with expert evaluation, so both categories of evaluation are involved in the study. The fashion level of a user is determined from their makeup, the accessories they wear, and their hair color: a person with red hair color or heavy makeup, for instance, can be assessed as more fashion-oriented. Products are then recommended according to that level; someone wearing dark eye shadow and dark lip color, with streaks in their hair, may indicate a higher fashion level and will be recommended products accordingly.
### 2.4. Semantic Attribute Region Guided Approach
A fashion product is built up from multiple semantic attributes, for example sleeves and collars. When making decisions about clothes, many preferences over semantic attributes, such as a V-neck collar, a deep neck, pointed-toe shoes, or high heels, come into play. Semantic attributes not only support a comprehensive representation of products but also help us understand how user preferences work. Unfortunately, there are unique challenges inherent in designing efficient solutions that integrate semantic attribute information into fashion recommendation.
#### 2.4.1. Previous Methods
It is quite difficult to obtain semantic attribute features without manual attribute annotation, especially at large e-commerce scale. Moreover, user preferences over attributes are fine-grained, while traditional methods transform the item image into a single vector directly. These two aspects make it very hard to generate explainable recommendations with the current recommendation models (Wu et al., 2019; Kang et al., 2017; Xu Chen, 2018) used in the industry.
#### 2.4.2. Proposed Approach
To address this, the paper (Hou et al., 2019) proposes a novel Semantic Attribute Explainable Recommendation System (SAERS). A fine-grained interpretable space, the semantic attribute space, is introduced, in which each dimension corresponds to a semantic attribute, and users and items are projected into this space. Users' fine-grained preferences then enable explainable recommendations. Specifically, a Semantic Extraction Network first extracts region-specific attribute representations, so each item is projected into the semantic attribute space and the diversity of semantic attributes is easily captured. The design also contains a Fine-grained Preferences Attention (FPA) module, which automatically matches the user's preference for each given attribute in the space and aggregates all attributes with different weights, so each attribute carries a weight of its own. Finally, they optimize the SAERS model in the Bayesian Personalized Ranking (BPR) framework, which not only significantly outperforms several baselines on the visual recommendation task but also provides interpretable insights by highlighting attribute semantics in a personalized manner. Previous attempts to capture users' visual preferences could offer intuitive explanations only at the item level; this paper takes a further step and addresses user preferences at the visual attribute level.
Table 3. List of semantic attributes used in the method.
Category | Attribute: Class
---|---
Top | high neck: ruffle semi-high, turtle,…
| collar: rib collar, puritan collar,…
| lapel: notched, shawl, collarless,…
| neckline: V, square, round,…
| sleeves length: sleeveless, cap, short,…
| body length: high waist, long, regular,…
Bottom | skirt length: short, knee, midi, ankle,…
| trousers length: short, mid, 3/4, cropped,…
Shoes | heel height: flat, 1 in-7/4 in, under 1 in,…
| boots height: ankle, knee-high, mid-calf,…
| closure: lace-up, slip-on, zipper,…
| toe style: round, pointed, peep, open,…
With their semantic attribute explainable recommendation system, they bridge this gap using a new semantic attribute visual space in which each dimension represents an attribute corresponding to a region: the different regions of a clothing item are split into several semantic attributes via the extraction network and then projected into the visual space. Users are subsequently projected according to their fine-grained preferences for clothing attributes. This makes it easy to obtain the fashion item's projection in the semantic feature space, and from there the FPA, the fine-grained preference attention, projects users into the same semantic feature space. The item representation is learned jointly in both the global visual space and the semantic attribute visual space under a pairwise learning framework, and with this they are able to generate explainable recommendations.
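A rough sketch of attention-weighted attribute aggregation in the spirit of FPA (the softmax formulation and shapes are illustrative assumptions, not the paper's exact equations):

```python
import torch
import torch.nn.functional as F

def aggregate_attributes(user_vec, attr_feats):
    """Weight each semantic-attribute feature by its match with the user.

    user_vec:   (d,)   user embedding in the semantic attribute space
    attr_feats: (n, d) one feature per attribute region (collar, sleeve, ...)
    """
    scores = attr_feats @ user_vec          # (n,) preference per attribute
    weights = F.softmax(scores, dim=0)      # normalised attention weights
    return (weights.unsqueeze(1) * attr_feats).sum(dim=0)  # (d,) item repr.
```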
### 2.5. Complementary Recommendations Using Adversarial Feature Transformer
Traditional procedures for complementary product recommendation depend on behavioral and non-visual data such as customer co-views or co-purchases. However, certain domains such as fashion are predominantly visual. Recommendation algorithms are important to many business applications, especially online shopping: in domains such as fashion, customers seek clothing recommendations that visually complement their current outfits, styles, and wardrobe, which the conventional strategies do not cater to.
#### 2.5.1. Previous Methods
There are traditional content-based and collaborative recommendation algorithms (Adomavicius and Tuzhilin, 2005; Lew et al., 2006). Among these, collaborative filtering approaches (Koren and Bell, 2015; Melville et al., 2002) are the most common; they rely primarily on behavioral and historical data, such as co-purchases, views, and past purchases, to suggest new items to customers. This work instead focuses on providing complementary item recommendations for a given query item based on visual cues.
#### 2.5.2. Proposed Approach
The paper (Huynh et al., 2018) proposes a framework that harnesses visual cues in an unsupervised manner to learn the distribution of co-occurring complementary items in real-world images. The model learns a nonlinear transformation between the two manifolds of source and target complementary item categories, for example a top and a bottom in an outfit. Training on a large dataset, they fit a generative transformer network directly on the feature representation space by casting it as an adversarial optimization problem. Such a conditional generative model can produce multiple novel samples of complementary items in feature space for a given query item. They thus develop an unsupervised learning approach for Complementary Recommendation using Adversarial Feature Transform (CRAFT), learning the co-occurrence of item pairs in real images. The assumption is that the co-occurrence frequency of item pairs is a strong indication of the likelihood of their complementary relationship. The paper defines an adversarial process to train a conditional generative transformer network, which learns the joint distribution of item pairs by observing samples from the real distribution.
Their approach is novel in that it uses generative adversarial training with several advantages over traditional generative adversarial network (GAN) (Goodfellow et al., 2014) based image generation. The quality of visual image generation using GANs has improved considerably, but it still lacks the realism required for many real-world applications, fashion apparel among them. More importantly, the goal of a recommendation system in this type of application is often not to generate synthetic images but to recommend real images from a catalog of items. An approach that generates synthetic images would still need to perform a search, typically in feature space, to find the most visually similar image in the catalog. CRAFT directly generates the features of the recommended items, bypassing the need to generate synthetic images and enabling a simpler and more efficient algorithm. By working in feature space, they can use a simpler network architecture, which improves stability during training and avoids common pitfalls such as mode collapse (Che et al., 2016).
Figure 4. Generating recommendations using the proposed CRAFT network.
#### 2.5.3. Network Architecture
The network architecture comprises several steps. The first is the selection of appropriate visual representations for the source and target images: the encodings are fixed feature representations, generally derived from pre-trained CNNs. Application-specific feature representations, such as apparel feature embeddings for clothing recommendation, are typically advisable, but general representations trained on ImageNet (Deng et al., 2009) or MS-COCO (Lin et al., 2014) offer efficient alternatives. As shown in Figure 4, the source and target feature encoders $E_{s}$ and $E_{t},$ respectively, are fixed and used to generate feature vectors for training and inference. The architecture resembles traditional GAN designs with two main components: a conditional feature transformer and a discriminator. The role of the feature transformer is to transform the source feature $s_{q}$ into a complementary target feature $\hat{t}_{q}.$ The input to the transformer also includes a random noise vector $z$ sampled uniformly from a unit sphere in a $d_{z}$ -dimensional space. By design, the transformer is generative, since it can sample various features in the target domain.
The transformer consists of several fully-connected layers, each followed by batch normalization (Ioffe and Szegedy, 2015) and leaky ReLU (Maas, 2013) activation layers. The discriminator is commensurate to the transformer in capacity, consisting of the same number of layers; this balances the power between the transformer and the discriminator in the two-player game, leading to stable training and convergence.
From a query image, the query feature $f$ is extracted by the source encoder $E_{s},$ and multiple samples of transformed features $\left\\{\hat{t}_{i}\right\\}$ are generated by sampling random vectors $\left\\{z_{i}\right\\}.$ This allows them to generate a diverse set of complementary recommendations by sampling the underlying conditional probability distribution. They then perform a nearest-neighbor search within a set of pre-indexed target features extracted with the same target encoder $E_{t}$ used during training; actual recommendation images are retrieved by a reverse lookup that maps the selected features to the original target images.
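A condensed sketch of this inference path (layer sizes, the noise dimension, and the brute-force neighbor search are assumptions for illustration):

```python
import torch
import torch.nn as nn

d_feat, d_z = 512, 32

# Conditional feature transformer: fully-connected layers with
# batch normalization and leaky ReLU, as described above.
transformer = nn.Sequential(
    nn.Linear(d_feat + d_z, 512), nn.BatchNorm1d(512), nn.LeakyReLU(0.2),
    nn.Linear(512, d_feat),
)

def recommend(query_feat, catalog_feats, n_samples=5):
    """Sample complementary features, then retrieve nearest catalog items."""
    q = query_feat.expand(n_samples, -1)           # query_feat: (d_feat,)
    z = torch.randn(n_samples, d_z)
    z = z / z.norm(dim=1, keepdim=True)            # noise on the unit sphere
    t_hat = transformer(torch.cat([q, z], dim=1))  # transformed features
    dists = torch.cdist(t_hat, catalog_feats)      # (n_samples, catalog size)
    return dists.argmin(dim=1)                     # nearest real item per sample
```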
Figure 5. Complementary recommendation for a common query item (dark jeans)
#### 2.5.4. Performance evaluation
The feature transformer in CRAFT samples from a conditional distribution to
generate diverse and relevant item recommendations for a given query. The
recommendations generated by CRAFT are preferred by the domain experts over
those produced by competing approaches.
### 2.6. Neural Compatibility Modeling
These days fashion communities are online, and many fashion experts publicly share their fashion tips by showing how their outfit compositions work, where each item, a top or a bottom, usually has an image and contextual metadata (title and category). With such rich information, fashion data offers an opportunity to investigate the code behind clothing matching. Colors, materials, and shape are some of the aspects that affect the compatibility of fashion items; each fashion item also involves multiple modalities, and the composition relation between fashion items is rather sparse, which makes matrix factorization methods inapplicable.
Figure 6. Illustration of the proposed scheme. They employed a dual
autoencoder network to learn the latent compatibility space, where they
jointly model the coherent relation between visual and contextual modalities
and the implicit preference among items via the Bayesian personalized ranking.
#### 2.6.1. Previous Methods
Recent advances have been made on these fashion aspects, but the previously proposed models (Iwata et al., 2011; Hu et al., 2015; McAuley et al., 2015; Liu et al., 2012) fall short for this problem.
#### 2.6.2. Proposed Approach
The paper (Song et al., 2017) proposes a content-based neural scheme that models the compatibility between fashion items within the Bayesian Personalized Ranking (BPR) framework. The scheme jointly models the coherent relation between the modalities of items and their implicit matching preference. The authors focus on modeling the sophisticated compatibility between fashion items by seeking a nonlinear latent compatibility space with neural networks, and they aggregate the multimodal data of fashion items, exploiting the inherent relationships between different modalities to comprehensively model the compatibility between fashion items.
It is not sound to measure the compatibility between fashion items directly in their distinct spaces, due to their heterogeneity. The authors therefore assume that there exists a latent compatibility space that bridges the gap between heterogeneous fashion items, where highly compatible items share a similar style, material, or functionality and thus show high similarity. For example, a wide casual T-shirt goes well with black jeans but not with a black suit, while a pair of high boots prefers skinny jeans to flared pants. They further assume that the subtle compatibility factors lie in a highly nonlinear space that can be learned by advanced neural network models, so they employ autoencoder networks, which have proven effective for latent space learning (Wang et al., 2016), to learn this latent space.
To fully exploit the implicit relation between tops and bottoms, they adopt the BPR framework and assume that bottoms from the positive set $\mathcal{B}_{i}^{+}$ are more favorable to top $t_{i}$ than the unobserved neutral bottoms. Following BPR, they build a training set:
$\mathcal{D}_{S}:=\left\\{(i,j,k)\mid
t_{i}\in\mathcal{T},b_{j}\in\mathcal{B}_{i}^{+}\wedge
b_{k}\in\mathcal{B}\backslash\mathcal{B}_{i}^{+}\right\\}$
where the triple $(i,j,k)$ indicates that bottom $b_{j}$ is more compatible with top $t_{i}$ than bottom $b_{k}$.
Then, following (Rendle et al., 2012), they obtain the objective function
$\mathcal{L}_{bpr}=\sum_{(i,j,k)\in\mathcal{D}_{S}}-\ln\left(\sigma\left(m_{ij}-m_{ik}\right)\right)$
Taking the modality consistency into consideration, they obtain the overall objective:
$\mathcal{L}=\mathcal{L}_{bpr}+\gamma\mathcal{L}_{mod}+\mu\mathcal{L}_{rec}+\frac{\lambda}{2}\|\Theta\|_{F}^{2}$
where
$\mathcal{L}_{\text{rec}}=\mathcal{L}_{\text{rec}}^{v}+\mathcal{L}_{\text{rec}}^{c}$
with
$\mathcal{L}_{\text{rec}}^{v}=\sum_{(i,j,k)\in\mathcal{D}_{S}}\left(l\left(\mathbf{v}_{i}^{t}\right)+l\left(\mathbf{v}_{j}^{b}\right)+l\left(\mathbf{v}_{k}^{b}\right)\right)$
and
$\mathcal{L}_{\text{rec}}^{c}=\sum_{(i,j,k)\in\mathcal{D}_{S}}\left(l\left(\mathbf{c}_{i}^{t}\right)+l\left(\mathbf{c}_{j}^{b}\right)+l\left(\mathbf{c}_{k}^{b}\right)\right).$
Here $\mu,\gamma,\lambda$ are non-negative trade-off hyperparameters, and $\Theta$ refers to the set of network parameters (i.e., $\mathbf{W}_{k}$ and $\hat{\mathbf{W}}_{k}$). The last regularizer term is designed to avoid overfitting.
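A compact sketch of the BPR part of this objective (the compatibility score $m_{ij}$ is abstracted as a dot product of learned embeddings here, which is an assumption; the paper computes it in the learned latent space):

```python
import torch
import torch.nn.functional as F

def bpr_loss(top_emb, pos_bottom_emb, neg_bottom_emb):
    """L_bpr = -sum log sigmoid(m_ij - m_ik) over triples (i, j, k)."""
    m_ij = (top_emb * pos_bottom_emb).sum(dim=1)  # compatibility with b_j
    m_ik = (top_emb * neg_bottom_emb).sum(dim=1)  # compatibility with b_k
    return -F.logsigmoid(m_ij - m_ik).sum()
```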
## 3\. Aesthetics and Fashion
The word aesthetic (of Philosophy, 2009) was introduced in the 18th century, where it came to designate, among other things, a kind of object, a kind of judgment, a kind of attitude or experience, and a kind of value. The concept of the aesthetic descends from the concept of taste: in the 18th century, the theory of taste emerged in part as a corrective to the rise of rationalism, particularly as applied to beauty, and the rise of egoism, particularly as applied to virtue.
### 3.1. Mapping Association
How do people usually describe clothing? Words like informal, casual, formal, and party are commonly used, which is quite different from the recent focus on recognizing and extracting the features that are visually available in clothing.
#### 3.1.1. Background
To connect the two, the authors of (Jia et al., 2016) describe a way to bridge the gap between visual features and aesthetic words. They formulate a novel three-level framework, visual features (VF) - image-scale space (ISS) - aesthetic words space (AWS), leveraging the image-scale space from the art field as an intermediate layer. First, they propose a Stacked Denoising Auto-Encoder Guided by Correlative Labels (SDAE-GCL) to map the visual features to the image-scale space; then, using semantic distances computed with WordNet similarity (Pedersen et al., 2004), they map the aesthetic words most frequently used by people in online clothing shops to the image-scale space. They employ upper-body menswear images downloaded from several online shops as experimental data, and the proposed three-level framework captures the relationship between visual features and aesthetic words. It is important for people to dress aesthetically and properly: given a user-specified occasion, e.g. a wedding, shopping, or a date, a system should be able to suggest the most suitable clothing from the user's own wardrobe. A similar idea appears in another paper (Liu et al., 2012), where two criteria, wearing properly and wearing aesthetically, are explicitly considered, for example that a red T-shirt matches white pants better than green pants. To narrow the semantic gap between the low-level features of clothing and the high-level occasion categories, clothing attributes are treated as latent variables in a support vector machine based recommendation model. Nevertheless, such matching rules cannot reveal aesthetic effects holistically and lack interpretability.
Figure 7. Examples of clothing images and their corresponding aesthetic
words.
#### 3.1.2. Proposed Approach
The paper (Jia et al., 2016) thus aims to bridge the gap between the visual features and the aesthetic words of clothing. To capture the intrinsic and holistic relationship between them, the authors introduce an intermediate layer, forming the novel three-level framework, based on the theory proposed by Kobayashi (Kobayashi, [n.d.]), whose two-dimensional warm-cool and hard-soft image-scale space is used in art design.
The contributions of the paper are, first, an association between clothing images and aesthetic words via the three-level framework: the novel use of the 2D continuous image-scale space as an intermediate layer with strong descriptive ability facilitates a deep, high-level understanding of aesthetic effects. Second, the paper proposes the SDAE-GCL to implement the mapping of visual features to the image-scale space; it can correct random errors in the initial input, makes full use of both labeled and unlabeled data, and its stacking improves the representation capability of the model by adding more hidden layers.
Kobayashi proposed 180 keywords in 16 aesthetic categories and defined their coordinate values in the image-scale space. In fashion, some of these words, like alert, robust, sad, or happy, are unrelated, as we do not usually use them to describe clothing. The authors therefore first manually removed such rarely used words and established an aesthetic word space $Y$ for clothing containing 527 words.
To map an aesthetic word $y_{i}\left(\forall y_{i}\in Y\right)$ to the image-scale space $D$, i.e. to determine its coordinate value $D_{y_{i}}\left(wc_{y_{i}},hs_{y_{i}}\right)$, the authors first define the 180 keywords as keyword${}_{j}(j=1,2,\cdots,180)$ and calculate the semantic distances between $y_{i}$ and each keyword${}_{j}$ using WordNet::Similarity. They then pick the 3 keywords with the shortest distances $d_{i_{1}},d_{i_{2}}$ and $d_{i_{3}},$ marking the coordinate values of these 3 keywords as $D_{i_{1}}\left(wc_{i_{1}},hs_{i_{1}}\right),D_{i_{2}}\left(wc_{i_{2}},hs_{i_{2}}\right)$ and $D_{i_{3}}\left(wc_{i_{3}},hs_{i_{3}}\right).$ Taking the reciprocals of the distances $rec_{i_{1}},rec_{i_{2}},rec_{i_{3}}$ as weights (e.g. $rec_{i_{1}}=\frac{1}{d_{i_{1}}}$), the weighted arithmetic mean of $D_{i_{1}},D_{i_{2}}$ and $D_{i_{3}}$ is taken as the coordinate value $D_{y_{i}}\left(wc_{y_{i}},hs_{y_{i}}\right)$ of $y_{i}$:
$wc_{y_{i}}=\frac{\sum_{k=1}^{3}wc_{i_{k}}\cdot
rec_{i_{k}}}{\sum_{k=1}^{3}rec_{i_{k}}},hs_{y_{i}}=\frac{\sum_{k=1}^{3}hs_{i_{k}}\cdot
rec_{i_{k}}}{\sum_{k=1}^{3}rec_{i_{k}}}$
In this way, for each $y_{i}\in Y$ they calculate its coordinate value $D_{y_{i}}=\left(wc_{y_{i}},hs_{y_{i}}\right)$ in the image-scale space. To label an input clothing image $v$ with an aesthetic word, they use the proposed SDAE-GCL to predict its coordinate value $D_{v}\left(wc_{v},hs_{v}\right)$ in $D$, and then find the word $y_{v}\in Y$ whose coordinate value $D_{y_{v}}$ has the shortest Euclidean distance to $D_{v}$. Thus, $y_{v}$ can be regarded as the aesthetic word of image $v$.
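A small sketch of this two-step mapping (the keyword distances stand in for WordNet::Similarity scores, which is an assumption here):

```python
import numpy as np

def word_coordinates(dists_to_keywords, keyword_coords):
    """Weighted mean of the 3 closest Kobayashi keywords (weights = 1/d)."""
    idx = np.argsort(dists_to_keywords)[:3]   # 3 shortest semantic distances
    rec = 1.0 / dists_to_keywords[idx]        # reciprocal-distance weights
    return (rec[:, None] * keyword_coords[idx]).sum(0) / rec.sum()

def aesthetic_word(image_coord, word_coords, words):
    """Label an image with the word nearest to it in the image-scale space."""
    d = np.linalg.norm(word_coords - image_coord, axis=1)
    return words[int(d.argmin())]
```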
### 3.2. Brain-inspired Deep Network
Most existing methods rely on conventional features to represent an image, such as features extracted by convolutional neural networks, the scale-invariant feature transform, or color histograms. One important type of feature, however, is the aesthetic feature: as discussed before, it plays an important role in clothing recommendation, since users' choices depend largely on whether the clothing is in line with their aesthetics.
#### 3.2.1. Previous Methods
Now we have seen in some papers (Han et al., 2017; Hsiao and Grauman, 2017;
McAuley et al., 2015; Vasileva et al., 2018) in which there was a
recommendation for different fashion garments for an unfinished outfit. But
their goal was different from the one mentioned in this paper. That is
basically that they focused on clean per-garment catalog photos and the
recommendations were mostly restricted to retrieve garments from a specific
data set. Now the only feature in those recommendation systems was that they
were adding to the Garment. Most prior fashion work addresses recognition
problems, like matching street-to shop (Kalantidis et al., 2013; Kiapour et
al., 2015; Yan, 2012; Vittayakorn et al., 2015) But in this case, what they
are doing is that they are saying that some problems demand going beyond
seeking an existing garment and adding to it and for that, they said that
there are garments which are detrimental and it should be taken off. You know
like cuff the jeans above the ankle or how to adjust the presentation and
detail of them within a complete outfit to improve its style.
#### 3.2.2. Background
So in order to bridge the gap there are a lot of different methods but we are
going to discuss another one (Yu et al., 2018) which introduces the intense
static information. Which is highly relevant with user’s preference into the
clothing recommendation system. So what they basically do, is that the
aesthetic feature extracted by the pre-training on network, which is a brain
inspired deep structured trained for the assessment task of Aesthetics. So for
that they consider the aesthetic preference which varies significantly from
user to user as different people have different sorts of reference in
Aesthetics. So they proposed a new tensor factorization model that
incorporates the static features in a very personalized manner. So what they
do is that they conduct different experiments and demonstrate that the
approach they are putting forward captures the static preference of the user.
It significantly outperforms the already available state-of-the-art
recommendation methods.What happens is that usually when we are shopping for
clothing on the web. We used to look through product images before making a
certain decision before buying that thing and product images usually provide a
lot of information including design, color schemes ,patterns structure and so
on. We can get an estimation of the thickness and quality of a product from
its images. As such product images play a lot of key roles in the clothing
recommendation task. So what the authors in this paper do is that they
leverage this information and enhance the performance of the existing clothing
recommendation systems.
Figure 8. Brain-inspired Deep Network (BDN) architecture.
However, aesthetics have received little attention in previous research, even though most users' chief concern about clothing is that the product look good. The authors extract the relevant features by combining an aesthetic network with a CNN: they adopt the Brain-inspired Deep Network (BDN), a deep structure trained for image aesthetic assessment, which takes as input several raw features indicative of aesthetic feelings, such as hue, saturation, color, duotones, and complementary colors, and extracts high-level aesthetic features from these raw inputs.
#### 3.2.3. Proposed Approach
So the paper works on BDN that is utilized to strike the holistic feature in
order to represent the static elements of a clothing. And as different people
prefer different aesthetic tastes. So to capture the diversity of the
aesthetic preference among different consumers and over different times. They
exploit tensor factorization as a basic model. Now, there are several ways to
decompose a tensor however, there are certain drawbacks in existing models
(Kolda and Bader, 2009; Rendle and Schmidt-Thieme, 2010; Sidiropoulos et al.,
2016) . So what they do is that they address the clothing recommendation task
better and propose a dynamic collaborative filtering DCF model that is trained
with coupled matrices to mitigate the sparsity problem. And then afterwards
they combined the models with Bayesian personalized ranking optimization
criteria and evaluated the proper performance on an Amazon clothing dataset.
So basically what they are doing is that they are proposing an novel DCF model
to portray the purchase events in three dimensions: user, items, and time and
then incorporate the aesthetic features into DCF and train it. And of course,
they are leveraging the novel aesthetic features in recommendation to capture
consumers specific aesthetic preference and they compare the effect with
several conventional features to demonstrate the necessity of the aesthetic
features.
To introduce the hybrid model that integrates image features into the basic model (DCFA), they first present the basic tensor factorization model, DCF, a context-aware model that accounts for the impact of time on aesthetic preference. They use a $P\times Q\times R$ tensor $\mathrm{A}$ to record purchase events across the user, clothing, and time dimensions: if user $p$ purchased item $q$ in time interval $r$, then $\mathrm{A}_{pqr}=1$; otherwise it is 0. Tensor factorization is widely used to predict the missing (zero) entries of $\mathrm{A}$, which can then be used for recommendation. Since previous models have limitations, they propose a new tensor factorization method based on two primary factors in a purchase decision. The first is whether the product fits the user's preference and its appearance is appealing to that specific user; the second is whether the time is right, i.e. the item is in season and fashionable. Winter clothing, for example, cannot be recommended in the summer season, however aesthetically fine it may be. For user $p$, clothing $q$, and time interval $r$, they use the scores $S_{1}$ and $S_{2}$ to indicate how much the user likes the clothing and how well the clothing fits the time, respectively: $S_{1}=1$ when the user likes the clothing and $S_{1}=0$ otherwise; similarly, $S_{2}=1$ if the clothing fits the time and $S_{2}=0$ otherwise. The consumer will buy the clothing only if $S_{1}=1$ and $S_{2}=1,\mathrm{so}$ $\hat{\mathrm{A}}_{pqr}=S_{1}\&S_{2}.$ To make the formula differentiable, they approximate it as $\hat{\mathrm{A}}_{pqr}=S_{1}\cdot S_{2},$ and present $S_{1}$ and $S_{2}$ in the form of matrix factorization:
$\begin{array}[]{l}S_{1}=\sum_{i=1}^{K_{1}}\mathbf{U}_{ip}\mathbf{V}_{iq}\\\
S_{2}=\sum_{j=1}^{K_{2}}\mathbf{T}_{jr}\mathbf{W}_{jq}\end{array}$
where $\mathrm{U}\in\mathbb{R}^{K_{1}\times
P},\mathrm{V}\in\mathbb{R}^{K_{1}\times
Q},\mathrm{T}\in\mathbb{R}^{K_{2}\times R},$ and
$\mathrm{W}\in\mathbb{R}^{K_{2}\times Q}.$ The prediction is then given by:
$\hat{\mathrm{A}}_{pqr}=\left(\mathrm{U}_{*p}^{\mathrm{T}}\mathrm{V}_{*q}\right)\left(\mathrm{T}_{*r}^{\mathrm{T}}\mathrm{W}_{*q}\right)$
We can see from the equation that the latent features relating users and clothes are independent of those relating clothes and time. Though the $K_{1}$ -dimensional vector $\mathrm{V}_{*q}$ and the $K_{2}$ -dimensional vector $\mathrm{W}_{*q}$ are both latent features of clothing $q$, $\mathrm{V}_{*q}$ captures information about users' preferences, whereas $\mathrm{W}_{*q}$ captures the temporal information of the clothing. The model is thus more expressive in capturing the underlying patterns in purchases; moreover, it is efficient and easy to train compared with the Tucker decomposition.
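A tiny sketch of this prediction rule (random factor matrices stand in for the learned ones):

```python
import numpy as np

P, Q, R, K1, K2 = 100, 500, 12, 16, 8
U = np.random.rand(K1, P)  # user factors
V = np.random.rand(K1, Q)  # clothing factors, preference side
T = np.random.rand(K2, R)  # time factors
W = np.random.rand(K2, Q)  # clothing factors, temporal side

def predict(p, q, r):
    """A_hat[p, q, r] = (U_*p . V_*q) * (T_*r . W_*q) = S1 * S2."""
    s1 = U[:, p] @ V[:, q]  # how much user p likes clothing q
    s2 = T[:, r] @ W[:, q]  # how well clothing q fits time interval r
    return s1 * s2
```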
### 3.3. Minimalistic Approach
The physical attributes of a product strongly influence buying behavior (Streamoid, [n.d.]). Aesthetics also call to us intuitively while we shop: a person might not even be aware of making multiple decisions on every product, for example liking the style but not the color. Various aspects of our life influence the style of how we dress, and every look we wear tells a different story about us: it communicates an image that is decoded by others within their own cultural context. It is thus plausible that the aesthetics of a garment are perceived similarly by everyone in a particular society.
#### 3.3.1. Background
So when we look into a garment, what are the main things that we should or we
usually look into. Queries like. so can I wear it? , What occasion it would
suit and how does it make me feel? And also another precise preference is ,
you know included in this aspect and how does it reflect their own
personality. So these are just a few of the questions that we usually, ask
ourselves when we are out shopping and when we want to wear clothes that are
aesthetically pleasing. But as we have seen in this new modern era that
minimalism is getting into every aspect of life and people are tending to move
towards simpler versions, but aesthetically pleasing ones. As Coco Chanel has
said:
“before you leave the house look in the mirror and take one thing off”
So minimal outfit edits in an already used outfit they can use to change the
existing outfit and improve its fashionability. Whether it can be removing an
accessory selecting a blouse with a higher neckline or you know, just tucking
your shirt in or simply, you know, changing the pants to a darker color. So
these all small adjustments are accountable for a more stylish outfit that is
more aesthetically pleasing to a large group of people or to your own self as
well.
#### 3.3.2. Proposed Approach
Motivated by these observations, the authors of (Hsiao et al., 2019) pursue minimal edits for fashion outfit improvement: the algorithm must propose alterations to the garments and accessories that are slight, yet visibly improve the overall fashionability. A minimal edit need not strictly minimize the amount of change; rather, it incrementally adjusts an outfit, as opposed to starting from scratch. It can be a recommendation about which garment to replace, take off, or swap out, or simply how to wear the same garment in a better way.
It is also well known that clothing fashion is largely intuitive and often a
habitual trend in the style in which an individual dresses, but it is not
clear which visual stimulus places higher or lower significance or influence
on the update of clothing and fashion trends. Another paper (Zou et al., 2016)
employed machine learning techniques to analyze the influence that the visual
stimuli of different clothing fashions have on fashion trends. Specifically,
they proposed a classification-based model that quantified the influence of
each visual stimulus by its corresponding accuracy in fashion classification.
Their experimental results, which quantified style, color and texture,
demonstrated that for clothing fashion updates style holds a higher influence
than color, and color holds a higher influence than texture. All of these are
thus very important in determining aesthetics as well.
Figure 9. Overview of the Fashion++ framework. Latent features are first
obtained from texture and shape encoders $E_{t}$ and $E_{s}$. The editing
module $F^{++}$ operates on the latent texture feature $\mathbf{t}$ and shape
feature $\mathbf{s}$. After an edit, the shape generator $G_{s}$ first decodes
the updated shape feature $\mathbf{s}^{++}$ back to a 2D segmentation mask
$\mathbf{m}^{++}$, which is then used to region-wise broadcast the updated
texture feature $\mathbf{t}^{++}$ into a 2D feature map $\mathbf{u}^{++}$.
This feature map and the updated segmentation mask are passed to the texture
generator $G_{t}$ to generate the final updated outfit $x^{++}$.
The main idea of this model is an activation maximization method that works on
localized encodings from a deep image generation network. Given an original
outfit, its composing pieces (e.g., bag, boots, jeans, blouse) are mapped to
their respective codes. A discriminative fashionability model is then used for
the editing: it gradually updates the encodings in the direction that
maximizes the outfit score, hence improving its style. The update trajectory
offers a range of edits, from the least changed to the most fashionable, from
which users can choose a preferred endpoint. The approach provides its outputs
in two formats:
1. (1)
Retrieved garments from an inventory that would best achieve its
recommendation.
2. (2)
A rendering of the same person in the newly adjusted look, generated from the
edited outfit's encoding.
#### 3.3.3. System Working
The authors present an image generation framework which decomposes outfit
images into their garment regions and factorizes shape/fit and texture in
support of the later objectives. The framework builds on the observation that
the coordination of all composing pieces defines an outfit's look. It can
control which parts to change (e.g., the pants, skirt or shirt) and which
aspects (e.g., sleeve length, color, pattern and neckline), while keeping
identity and fashion-irrelevant factors unchanged. To explicitly model spatial
locality and to perform minimal edits, they need to control the pieces'
textures as well as their shapes. Textures shape an outfit's look: for
example, denim with solid patterns gives a more casual look, while leather
with red colors gives a more street-style look. With the same material, color
and pattern of a garment, how it is worn (tucked in or pulled out, skinny or
baggy pants) and what cut it has (v-neck, turtleneck or boatneck) will
complement a person's silhouette in different ways. They account for all of
these factors and devise an image generation framework that gives control
over individual pieces, accessories and body parts, and also factorizes the
shapes from the texture.
For computing an edit, the main steps are: calculating the desired edit, and
generating the edited image. For the calculation of an edit, they take an
activation maximization approach, where they iteratively alter the outfit's
features so as to increase the activation of the fashionable label according
to $f$. Formally, let
$\mathbf{z}^{(0)}:=\left\\{\mathbf{t}_{0},\mathbf{s}_{0},\ldots,\mathbf{t}_{n-1},\mathbf{s}_{n-1}\right\\}$
be the set of all features in an outfit, and
$\tilde{\mathbf{z}}^{(0)}\subseteq\mathbf{z}^{(0)}$ be the subset of features
corresponding to the target regions or aspects being edited (e.g., shirt
region, shape of skirt, texture of pants). The updated outfit's
representation is as follows:
$\tilde{\mathbf{z}}^{(k+1)}:=\tilde{\mathbf{z}}^{(k)}+\lambda\frac{\partial
p_{f}\left(y=1\mid\mathbf{z}^{(k)}\right)}{\partial\tilde{\mathbf{z}}^{(k)}},k=0,\ldots,K-1$
where $\tilde{\mathbf{z}}^{(k)}$ denotes the features after $k$ updates,
$\mathbf{z}^{(k)}$ denotes substituting only the target features in
$\mathbf{z}^{(0)}$ with $\tilde{\mathbf{z}}^{(k)}$ while keeping other
features unchanged, $p_{f}\left(y=1\mid\mathbf{z}^{(k)}\right)$ denotes the
probability of fashionability according to classifier $f$, and $\lambda$
denotes the update step size. Each gradient step yields an incremental
adjustment to the input outfit.
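A minimal sketch of this update loop, assuming a differentiable fashionability classifier `classifier` (the name and the feature layout are illustrative, not from the paper), might look as follows:

```python
import torch

def activation_max_edit(z_target, z_rest, classifier, lam=0.1, K=20):
    """Iteratively move the target features z_target in the direction that
    increases p_f(y=1 | z), keeping the remaining features z_rest fixed.
    classifier is assumed to map the full feature vector to class logits."""
    z = z_target.clone().requires_grad_(True)
    for _ in range(K):
        logits = classifier(torch.cat([z, z_rest], dim=-1))
        p_fashionable = torch.softmax(logits, dim=-1)[..., 1]
        grad, = torch.autograd.grad(p_fashionable.sum(), z)
        z = (z + lam * grad).detach().requires_grad_(True)  # ascent step
    return z.detach()
```

Each iteration corresponds to one gradient step of the equation above; stopping earlier along the trajectory yields a smaller edit.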
#### 3.3.4. Performance evaluation
This approach makes slight yet noticeable improvements over baseline methods,
in both quantitative evaluation and user studies. It effectively communicates
with users through image generation and, as shown by qualitative examples,
supports all possible edits, from swapping, adding and removing garments to
adjusting outfit presentations.
### 3.4. Neuroaesthetics
Mark Twain said that “the finest clothing made is a person's skin”, but of
course society demands something more than this. Fashion has a tremendous
impact on our society: clothing reflects a person's social status and thus
puts pressure on them to dress to fit a particular occasion. The authors of
(Simo-Serra et al., 2015) analyze the clothing fashion of a large social
website, with the main aim of learning to predict how fashionable a person
looks in a photograph and of suggesting subtle improvements they can make to
improve their image and appeal.
#### 3.4.1. Previous Methods
The approach these authors suggest is related to recent approaches (Dhar et
al., 2011; Gygli et al., 2013; Isola et al., 2013; Khosla et al., 2014) aimed
at modeling the human perception of beauty. Those papers address the questions
of what makes a particular image memorable, interesting or popular to viewers;
this line of work usually involves mining large image data sets to relate
visual cues to popularity scores. This paper instead tackles the problem of
predicting fashionability, going a step further than the previous work by
identifying the high-level semantic properties that cause a particular
aesthetic score, which can then be conveyed to the user so that they can
improve their outfit or look. The work is closest to (Khosla et al., 2013),
which infers whether faces are memorable or not and, based on that result,
modifies them to become more so; it differs, however, both in domain and in
formulation.
#### 3.4.2. Proposed Approach
To model the perception of fashionability, they propose a conditional random
field model that jointly reasons about several fashionability factors, such as
the type of outfit and garments an individual is wearing, the type of user,
the photograph setting (e.g., the scenery), and the fashionability score.
Based on that, they give recommendations to the user, conveying which garments
or scenery the individual should change in order to improve fashionability.
The paper predicts how fashionable a person looks in a particular photograph.
Fashionability is affected by the clothes the subject is wearing, but also by
a large number of other factors, such as how appealing the scene containing
the person is, how the image was taken, how visually appealing the person is,
and their age. Moreover, the garment itself being fashionable is not a perfect
indicator of someone's fashionability, as people typically judge how well the
garments align with someone's look, body characteristics or even personality.
The proposed model exploits several domain-inspired features, including
beauty, age and mood inferred from the image, the scene and type of photograph
and, if available, metadata in the form of where the user is from, how many
online followers they have, and the sentiment of comments by other users. For
this, the authors created their own data set from different online sources.
Looking at our daily lives, we can see how much of an impact fashion has on
them, which also explains the growing interest in clothing-related
applications in the vision community.
Early work (Jammalamadaka et al., 2013; Simo-Serra et al., 2014; Yamaguchi et
al., 2013, 2012; Yang et al., 2015) focused mainly on clothing parsing in
terms of a diverse set of garment types. The paper's objective was to predict
the fashionability of a given post, but the authors also wanted to build a
model that understands fashion at a higher level. For that purpose, they built
a Conditional Random Field (CRF) to learn about the different outfits, types
of people and settings. Here the word setting describes the location where the
picture is taken, both at a scenic and a geographic level. They use their own
fashion data set, Fashion144k, with images and metadata, to produce accurate
predictions of how fashionable a certain person is.
More formally, let $u\in\left\\{1,\cdots,N_{U}\right\\}$ be a random variable
capturing the type of user, $o\in\left\\{1,\cdots,N_{O}\right\\}$ the type of
outfit, and $s\in\left\\{1,\cdots,N_{S}\right\\}$ the setting. Further, denote
by $f\in\\{1,\cdots,10\\}$ the fashionability of a post $\mathbf{x}$. They
represent the energy of the CRF as a sum of energies encoding unaries for each
variable, as well as non-parametric pairwise potentials reflecting the
correlations between the different random variables. It is defined as:
$\displaystyle E(u,o,s,f)$
$\displaystyle=E_{user}(u)+E_{out}(o)+E_{set}(s)+E_{fash}(f)$
$\displaystyle+E_{np}^{uf}(u,f)+E_{np}^{of}(o,f)+E_{np}^{sf}(s,f)$
$\displaystyle+E_{np}^{uo}(u,o)+E_{np}^{so}(s,o)+E_{np}^{us}(u,s)$
Figure 10. An overview of the CRF model and the features used by each of the
nodes.
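As a minimal sketch of how this energy decomposes (the table sizes and random values below are illustrative assumptions, not the paper's learned potentials):

```python
import numpy as np

# Hypothetical state-space sizes and randomly initialised potential tables.
N_U, N_O, N_S, N_F = 10, 20, 15, 10
rng = np.random.default_rng(0)
E_user = rng.normal(size=N_U)          # unary energies
E_out  = rng.normal(size=N_O)
E_set  = rng.normal(size=N_S)
E_fash = rng.normal(size=N_F)
E_uf = rng.normal(size=(N_U, N_F))     # non-parametric pairwise tables
E_of = rng.normal(size=(N_O, N_F))
E_sf = rng.normal(size=(N_S, N_F))
E_uo = rng.normal(size=(N_U, N_O))
E_so = rng.normal(size=(N_S, N_O))
E_us = rng.normal(size=(N_U, N_S))

def energy(u, o, s, f):
    """Total CRF energy E(u, o, s, f) for one joint assignment."""
    return (E_user[u] + E_out[o] + E_set[s] + E_fash[f]
            + E_uf[u, f] + E_of[o, f] + E_sf[s, f]
            + E_uo[u, o] + E_so[s, o] + E_us[u, s])

print(energy(2, 5, 1, 7))
```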
#### 3.4.3. Performance Output
An exciting property of this model is that it can be used for outfit
recommendation. Taking a post as input, the authors estimate the outfit that
maximizes fashionability while keeping the other variables fixed; in other
words, they predict what the user should be wearing, instead of their current
outfit, in order to improve their look. This is just one example of the
flexibility of the approach: with the same model one can also ask, for
instance, which outfit would fit least well, what the best place to go with
the current outfit would be, or which type of user this outfit fits the most.
## 4\. Personalisation in Fashion
One of the key aspects in fashion is personalization. Personalization means
tailoring to a certain individual based on their likes and dislikes and what
they consider good for them. The fashion industry, including e-commerce, is
expected to hit the 35-billion-dollar mark worldwide by 2020, and there is a
need for applications that can help users make intelligent decisions on their
day-to-day purchases, or systems that can recommend items personalized to
their liking.
### 4.1. Personalized Outfit Recommendation with Deep Neural Network
For this purpose, deep neural networks are needed, and we discuss a system
dubbed FashionNet (He and Hu, 2018). It consists of two components: a feature
network for feature extraction and a matching network for compatibility
computation. The former is realised through a deep convolutional network; for
the latter, the authors adopt a multi-layer fully connected network structure.
They design and compare three alternative architectures for FashionNet. To
achieve personalized recommendations, they develop a two-stage training
strategy, which uses fine-tuning to transfer a general compatibility model to
a model that embeds personal preferences.
#### 4.1.1. Previous Methods
Existing recommender systems depend heavily on collaborative filtering (CF)
techniques, which use the historical ratings given to items by users as the
sole source of information for learning; their performance is very sensitive
to the sparsity level of the user-item matrix. The recent progress of deep
neural networks provides a promising solution to the representation problem
for image content (Lecun et al., 1998; Krizhevsky et al., 2012; Chatfield et
al., 2014; Szegedy et al., 2015).
#### 4.1.2. Background
This paper explores the use of deep neural networks for outfit recommendation,
and specifically for personalized outfit recommendation. Two key problems are
encountered: the first is modeling the compatibility among multiple fashion
items, and the second is capturing users' personal interests.
The former is solved by first mapping the item images to a latent semantic
space with a convolutional neural network; for the second, they adopt a
multi-layer fully-connected network structure. They also study alternative
architectures that combine feature learning and compatibility modeling in
different ways. To capture personal interest, they encode user-specific
information in the parameters of the network. Each user may have a unique
personal taste while still following some general rules for making outfits;
moreover, the usually small number of training samples for an individual user
makes it important to borrow training data from other users that share similar
tastes. With these observations in mind, they adopt a two-stage strategy for
training the network: the first stage learns a general compatibility model
from the outfits of all users, and the later stage fine-tunes the general
model with the data of the specific user. Fine-tuning is an important
technique for training deep neural networks in applications with a limited
number of training samples.
#### 4.1.3. Proposed Approach
In their approach, they assume that heterogeneous fashion items can be grouped
into $n$ categories; for example, the three most common categories are shoes,
tops and bottoms. An outfit is a collection of fashion items coming from
different categories, so an outfit can consist of a bottom, a top and a pair
of shoes. Given some historical data, for any user-outfit pair they assign a
rating score reflecting the level of affection the user has for the outfit:
the higher the score, the more appealing the outfit is for the user, and the
outfits with the highest scores are recommended to the users. The rating $s$
for a user-outfit pair is determined by how well the items in the outfit go
with each other; for example, a red shirt may go well with black slacks or
tight jeans, but not with a yellow skirt. The authors design appropriate deep
neural network structures to model the interactions among these items, and
they achieve personalization by developing a two-stage training strategy that
embeds user-specific preferences in the parameters of the network.
They explore three different network architectures, named FashionNet A, B and
C. Without loss of generality, they assume an outfit consists of three items:
a top, a bottom and a pair of shoes. In FashionNet A, the images of the items
are first concatenated to create a new image with nine color channels, and the
compound images are then forwarded to a widely used CNN model, VGGNet. The
output layer is a fully connected layer with the softmax function as its
activation function. In this architecture, the components of representation
learning and compatibility measurement are fully integrated: the two steps are
carried out simultaneously, right from the first convolutional layer.
Figure 11. Network architectures
In FashionNet B, representation learning and compatibility measurement are
applied sequentially: the images are first mapped to a feature representation
through a feature network, with the same CNN model used for items from
different categories. To model the compatibility, the features of all items
are concatenated and fed to three fully connected layers. The authors show
that this network structure also has the capacity to approximate the
underlying compatibility among multiple features.
Both FashionNet A and B try to directly model the compatibility among multiple
items. They run into difficulties when trying to capture the high-order
relationships, and the data dimensionality is significantly expanded when all
the items are concatenated. Due to this dimensionality issue, a huge number of
training samples may be required for a good model to be learned, and even
though users on the internet have contributed many outfit ideas, this number
is still minor compared to the number of all possible outfits. To overcome
this problem, the authors propose a prior constraint in FashionNet C: they
assume that the compatibility of a set of items is mainly determined by how
well each pair of these items goes together. The outputs of the final layers,
namely the probabilities that the item pairs match, are then added together to
obtain a final score for the whole outfit. The learning task is formulated as
a learning-to-rank problem. A training sample contains two outfits, e.g.
$\left\\{I_{t}^{+},I_{b}^{+},I_{s}^{+}\right\\}$ and
$\left\\{I_{t}^{-},I_{b}^{-},I_{s}^{-}\right\\}$, where the former is
preferable to the latter. A two-tower structure is used to train the networks,
and a rank loss minimizes the following equation:
$L=\frac{1}{M}\sum_{i=1}^{M}\log\left(1+\exp\left(-\left(s_{i}^{+}-s_{i}^{-}\right)\right)\right)$
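A minimal sketch of this rank loss over a batch of score pairs (the names and the example scores below are illustrative):

```python
import numpy as np

def rank_loss(s_pos, s_neg):
    """Pairwise rank loss over M outfit pairs: pushes the score of the
    preferred outfit s_pos above that of the less-preferred one s_neg.
    Equals mean(log(1 + exp(-(s_pos - s_neg))))."""
    return np.mean(np.log1p(np.exp(-(s_pos - s_neg))))

# Usage: scores produced by the two towers for M = 3 training pairs.
print(rank_loss(np.array([2.0, 1.5, 0.3]), np.array([1.0, 1.4, 0.9])))
```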
Regarding training, an individual user usually has only a small number of
training outfits. Furthermore, although each user may have their own
preferences, there are some rules followed by most people when making an
outfit; for example, t-shirts and jeans are usually paired up. With these
observations, the authors design a two-stage procedure to train the deep
network for personalized outfit recommendation. In the first stage, they learn
a general model for compatibility: here they discard the user information and
mix the outfits created by different users together. They then create new
neutral outfits by mixing randomly selected fashion items. It is reasonable to
assume that items in a user-created outfit are more compatible than those in a
neutral outfit, so training samples can be made by pairing a user-generated
outfit with a neutral one. They initialize the parameters of VGGNet with those
trained on ImageNet and initialize the other layers with random numbers drawn
from a Gaussian distribution.
These parameters are then optimized for the whole network using the mixed data
set. In the second stage, the authors train user-specific models for
personalized recommendations: for each user, they first initialize the network
with the parameters obtained by the previous general training, and then use
that user's own data to fine-tune the parameters. Fine-tuning is very
important in this respect, as it alleviates the data insufficiency problem in
many different applications. For FashionNet A, they fine-tune the whole
network in this stage. For FashionNet B and C, two strategies were used. The
first was to fine-tune the whole network, so that both the feature network and
the matching network have personalized parameters; this results in different
feature representations of each item for different users. The second was to
freeze the feature network and only fine-tune the matching network, so that
the features stay the same and the user-specific information is carried only
by the matching network; this saves a lot of computation during testing, which
is quite favorable in practice.
#### 4.1.4. Performance evaluation
In the end, they found that the performance of FashionNet A was inferior to
that of the other two architectures, FashionNet B and C. One possible reason
for FashionNet B and C obtaining such an advantage is that representation
learning and compatibility modeling are performed separately in them, so that
different network structures can be used to achieve the different
functionalities; such networks are also easier to design and optimize.
### 4.2. Generative Adversarial Training
Another approach to personalization is generative adversarial training. We go
over a paper (Yu et al., 2019) proposing an approach in which a convolutional
network is first used to map the query image into a latent vector
representation. This latent representation, together with another vector
characterizing the user's style preference, is taken as input by a generator
network in order to generate the target item image.
#### 4.2.1. Previous Methods
A few works (Hu et al., 2015; Xu Chen, 2018) have shown that a personalized
model is more capable of picking outfits that suit a user, or have generated
new item images of some category for a user in a personalized way. However, no
query item was provided in their settings, and they did not consider the
compatibility between items.
Figure 12. Network architecture for personalized fashion design. It contains
one generator and two discriminators. The generator uses an encoder-decoder
architecture. One of the discriminators provides real/fake supervision; the
other performs compatibility prediction.
#### 4.2.2. Proposed Method
Discriminator networks are built to guide the generation process. One of them
is the classic real/fake discriminator; the other is a matching network that
simultaneously models the compatibility between fashion items and learns the
preference representations. When the given inventory is limited, there may not
be enough good items to complement the query; when the inventory is too large,
generating recommendations may face efficiency problems. This paper therefore
suggests synthesizing images of new items that are compatible with a given
one. This solves the deficit problem for small inventories; for large
inventories, when targeting real items is necessary, one can instead search
for items that are similar to the synthesized ones, which is much more
efficient than exhaustive compatibility evaluation, since similarity search
can be very fast with techniques like hashing.
Aside from general compatibility, they also consider the personal aspect; this
is where personalization comes in, which is an important trend as we have
already discussed. Given the same query item, different people would like to
choose different items that go with their own personal style. While
personalized recommendations have been prevalent in areas like movie, song and
book recommendation, for fashion they are still not user-specific. The paper
proposes a system personalized using the generative adversarial training
framework (GANs). Generative adversarial networks have achieved great success
in synthesizing realistic images for different applications. They apply this
technique by first using an encoder network to map the query image into a
latent vector representation; this representation, together with another
vector that characterizes the user's style preference, is taken as input by
the generator network, which generates the target item. The task of
personalized fashion design is thus to create a fashion item for a specific
individual, given an input query item. There are two general requirements for
this design: the first is the realness requirement, meaning that the designed
item should look realistic, and the second is the compatibility requirement,
meaning that the designed item should be compatible with the query item.
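A minimal sketch of this conditioning step (the linear encoder/generator stand-ins and the dimensions below are illustrative assumptions; the actual model uses deep convolutional networks):

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_lat, d_pref = 512, 64, 16                 # illustrative dimensions
W_enc = rng.normal(size=(d_lat, d_img))            # stand-in for the encoder
W_gen = rng.normal(size=(d_img, d_lat + d_pref))   # stand-in for the generator

def design_item(query_feat, user_pref):
    """Encode the query item, concatenate the user's style-preference
    vector, and decode a target-item representation."""
    z_query = W_enc @ query_feat               # encoder: image -> latent
    z = np.concatenate([z_query, user_pref])   # condition on user style
    return W_gen @ z                           # generator: latent -> item

item = design_item(rng.normal(size=d_img), rng.normal(size=d_pref))
```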
### 4.3. Personalization in Unstructured Data
Many challenges in e-commerce arise from the fact that new products are
continuously being added to the catalog. The resulting challenges include
properly personalizing the customer experience, forecasting demand, and
planning the product range.
#### 4.3.1. Background
The paper in discussion (Ângelo Cardoso et al., 2018) concerns a global
e-commerce company that creates and curates clothing and beauty products for
fashion lovers. Over the years they have accumulated many products, amounting
to more than 1 million unique styles. For each product, different divisions
within the company produce and consume different product attributes; mostly
the attributes are manually curated, and there can be cases in which
information is missing or wrongly labeled. Even incomplete information,
however, still carries a lot of potential value for the business: the ability
to have a systematic and quantitative characterization of a product is one of
the key requirements for the company to make data-driven decisions across a
set of problems, including personalization. The paper shows how to predict a
consistent and complete set of product attributes and illustrates how this
enables them to personalize the customer experience by providing more relevant
products.
Figure 13. Schematic view of the multi-task attribute prediction network
#### 4.3.2. Proposed Approach
The proposed model extracts attribute values from product images and textual
descriptions. Regarding image processing, fashion is predominantly a visual
business, and visual features are at the core of many data science products;
the company uses image features for many of its applications. To minimize the
computational cost, they implemented a centralized visual feature generation
pipeline, which uses a pre-trained convolutional neural network to extract
product representations from images. For text processing, they note that CNNs
were originally applied to images, which are treated as matrices of pixel
color values, but it is possible to apply these convolutions to other types of
matrices as well, in particular paragraphs of text; so, similarly to how they
process images to produce product representations, they use the same technique
for text descriptions. For multi-modal fusion, the image and text
representations are simply concatenated together within a neural network
trained to predict the product attributes. This is a straightforward and
common way to fuse different inputs that works well in practice.
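A minimal sketch of this fusion step (the shared hidden layer and the per-attribute heads below are illustrative assumptions in the spirit of the multi-task setup described next):

```python
import numpy as np

def fuse_and_predict(img_feat, txt_feat, W_hidden, W_heads):
    """Concatenate image and text features, pass them through a shared
    hidden layer, and predict each attribute with its own linear head."""
    x = np.concatenate([img_feat, txt_feat])
    h = np.maximum(0.0, W_hidden @ x)                    # shared ReLU layer
    return {name: W @ h for name, W in W_heads.items()}  # one head per task

# Usage with illustrative random weights and feature sizes.
rng = np.random.default_rng(0)
d_img, d_txt, d_h = 128, 64, 32
W_hidden = rng.normal(size=(d_h, d_img + d_txt))
W_heads = {"colour": rng.normal(size=(10, d_h)),
           "neckline": rng.normal(size=(5, d_h))}
preds = fuse_and_predict(rng.normal(size=d_img), rng.normal(size=d_txt),
                         W_hidden, W_heads)
```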
The primary focus of the design was a solution that deals with missing labels
at scale because, as the paper argues, the foundational piece for solving all
of these problems is having consistent and detailed information about each
product, which is rarely available. They show how a quantitative understanding
of the products can be used to improve recommendations in a hybrid recommender
system approach. They could have chosen to build a separate model for each
attribute, but then they would have had to maintain multiple models in
production; moreover, independent models would be oblivious to the
correlations between attribute values and would only work well for common
attributes, where there is enough training data. Alternatively, they could
have built a single model to predict all attributes at once, but few products
are fully annotated, and there would not have been enough data to train such a
model. For these reasons, they chose to cast attribute prediction as a
multi-task learning problem: training a neural network for each attribute, but
sharing most of the parameters between networks.
#### 4.3.3. Hybrid Approach
The hybrid approach incorporates several state-of-the-art advances in
recommender systems; it not only handles new products, but also enhances the
recommendations that customers receive overall. Their approach creates an
embedding of products, i.e. a representation of all the products in their
catalogue in a high-dimensional vector space. In this vector space, products
with similar styles and attributes will be closer than unrelated ones. When
producing personalised recommendations, the algorithm also assigns a vector to
every customer. The items with the highest inner product with the customer
vector are the recommended ones. The position of products and customers in
this space is determined not only by the customer-product interactions, but
also by the augmented product attributes. This ensures that newly added
products are positioned correctly in the space and can be recommended to the
right customers.
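A minimal sketch of this retrieval step (the names and random embeddings below are illustrative; the real system learns the embeddings from interactions and augmented attributes):

```python
import numpy as np

def recommend(customer_vec, item_matrix, k=10):
    """Score every product by inner product with the customer embedding
    and return the indices of the k highest-scoring items."""
    scores = item_matrix @ customer_vec    # one score per product
    return np.argsort(-scores)[:k]

# Usage with illustrative random embeddings (50 products, 16-dim space).
rng = np.random.default_rng(0)
print(recommend(rng.normal(size=16), rng.normal(size=(50, 16)), k=5))
```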
### 4.4. POG: Personalized Outfit Generation
Another paper (Chen et al., 2019) proposes a personalized outfit generation
(POG) model, which connects user preferences regarding individual items and
outfits with a Transformer architecture. Extensive offline and online
experiments provide strong quantitative evidence that the proposed method
outperforms alternative methods on both compatibility and personalization
metrics. The model can generate compatible and personalized outfits based on a
user's recent behavior; specifically, it uses a Transformer encoder-decoder
architecture that models both signals from user preference and outfit
compatibility. Interestingly, this is one of the first studies to generate
personalized outfits based on user historical behavior within an
encoder-decoder framework. The authors also developed a platform named Dida,
where POG has been deployed to support outfit generation and recommendation at
very large scale in the application iFashion.
#### 4.4.1. Previous Methods
There are several methods for generating a fashion outfit likeable by the
user, and they usually fall into two types. The first type focuses on
calculating a pairwise compatibility metric (McAuley et al., 2015; Song et
al., 2018; Veit et al., 2015). The second type models an outfit as a set or an
ordered sequence; there are models (Li et al., 2016) that classify a given
outfit as popular or unpopular, or that train a bi-directional LSTM (Han et
al., 2017) to sequentially generate outfits. All of these methods generally
use a simple pooling of item vectors to represent an outfit, or have to rely
on the order of the outfit's items. It is noted that methods of either
category hardly consider all the interactions between the items in an outfit,
and it is quite unreasonable to consider an outfit as an ordered sequence,
since shuffling the items in an outfit should make no difference to its
compatibility.
#### 4.4.2. Proposed Approach
The authors want to explicitly incorporate this into their modeling
architecture by requiring that each item have a different interaction weight
with respect to each other item in an outfit; for example, a red shirt should
have a higher interaction weight with blue or black jeans, but a smaller
weight with a pair of white gloves.
The model they propose is built in a three-step process: in the first step the
items are embedded; in the second they build FOM, which learns the
compatibilities of items within an outfit; and in the third, once training is
completed, they use the pre-trained FOM to initialize the POG Transformer
architecture. The items are represented using a multi-modal embedding model:
for every fashion item $f$, a non-linear feature embedding $\boldsymbol{f}$ is
computed. The concept of fashion relies on both visual and textual
information; in previous models (Li et al., 2016; Han et al., 2017), the
authors used the image and text to learn multi-modal embeddings. In this
scenario, the multi-modal embedding model takes the following inputs for every
item:
1. (1)
Dense vector encoding the white-background picture of the item, obtained from
a CNN model;
2. (2)
Dense vector encoding the title of the item, obtained from a TextCNN network
that has been pre-trained to predict an item's leaf category based on its
title;
3. (3)
Dense vector encoding a collaborative-filtering signal for the item, using
Alibaba's proprietary Behemoth graph embedding platform; this platform
generates item embeddings based on the co-occurrence statistics of items in
recorded user click sessions in the Taobao application.
Figure 14. The architecture of POG: an encoder-decoder architecture with a Per
network and a Gen network. The outfit is generated item by item, according to
the user preference signal from the Per network and the compatibility signal
from the Gen network.
The generation model works as follows: it generates a personalized and
compatible outfit by incorporating user preference signals. Taking advantage
of the encoder-decoder structure, it translates a user's historical behavior
into a personalized outfit. Let $\mathcal{U}$ denote the set of all users and
$\mathcal{F}$ the set of all outfits. A sequence of user behaviors
$U=\left\\{u_{1},\ldots,u_{i},\ldots,u_{m}\right\\}$ characterizes a user,
where the $u_{i}$ are the items clicked by the user.
$F=\left\\{f_{1},\ldots,f_{t},\ldots,f_{n}\right\\}$ is a clicked outfit from
the same user, where the $f_{t}$ are the items in the outfit. At each time
step, the model predicts the next outfit item given the previous outfit items
and the user's click sequence $U$. Thus, for a pair $(U,F)$, the objective
function of $\mathrm{POG}$ can be written as:
$\mathcal{L}_{(U,F)}=-\frac{1}{n}\sum_{t=1}^{n}\log\operatorname{Pr}\left(f_{t+1}\mid
f_{1},\ldots,f_{t},U;\Theta_{(U,F)}\right)$
where $\Theta_{(U,F)}$ denotes the model parameters and
$\operatorname{Pr}(\cdot)$ is the probability of seeing $f_{t+1}$ conditioned
on both the previous outfit items and the user's clicked items.
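A minimal sketch of evaluating this objective, assuming the per-step conditional probabilities have already been produced by the decoder (the names and example values are illustrative):

```python
import numpy as np

def pog_nll(step_probs):
    """Average negative log-likelihood over the generation steps.
    step_probs[t] is Pr(f_{t+1} | f_1..f_t, U) as output by the decoder."""
    return -np.mean(np.log(step_probs))

# Usage: probabilities assigned to the ground-truth next item at each step.
print(pog_nll(np.array([0.4, 0.7, 0.2, 0.9])))
```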
In POG, the encoder takes the user's clicked items as input; generation starts
with a special token [start], and the decoder then generates an outfit one
item at a time. At each step, the model autoregressively consumes the
previously generated items as input, and the generation stops when a special
token [end] appears. In the end, an outfit is produced by composing the output
items. In the figure, the encoder is labeled as the Per network and the
decoder as the Gen network: the Per network provides the user preference
signal, while the Gen network generates outfits based on both the
personalization signal and the compatibility signal. The Gen network is
initialized using the aforementioned pre-trained FOM.
### 4.5. Item-to-Set Metric Learning Approach
Social media has been a great source for fashion recommendation and fashion
promotion, providing an open and new data source for personalized fashion
analysis.
#### 4.5.1. Background
This paper (Zheng et al., 2020) studies the problem of personalized fashion
recommendation with data gathered from social media: recommending to social
media users new outfits that fit their fashion preferences. The authors
present an item-to-set metric learning framework that learns to compute the
similarity between a set of a user's historical fashion items and a new
fashion item. For extracting features from multi-modal street-view fashion
items, they propose an embedding module that performs multi-modality feature
extraction and cross-modality gated fusion. By studying personalized fashion
recommendation with social media data, they seek to recommend new fashion
outfits based on the activities of social network users.
#### 4.5.2. Previous Methods
Many studies (Kiapour et al., 2015; Hu et al., 2015; Huang et al., 2015; Iwata
et al., 2011; Jagadeesh et al., 2014) address clothing retrieval and
recommendation, but leveraging users' interactions on social media as data for
fashion recommendation is still challenging and much less explored. What can
usually be gathered from social media are online activities such as
street-view selfies with additional word descriptions, so the granularity of
such data is much coarser, and most models (Li et al., 2016; Tangseng et al.,
2018) are not directly applicable to the task due to their lack of
supervision.
#### 4.5.3. Proposed Approach
The paper proposes a self-supervised approach for effective and personalized
fashion recommendation, in which the pictures are divided into two categories:
the selfie posts of users, a set that reveals their personal fashion
preferences, and the outfit items to be recommended. They propose learning an
item-to-set metric that measures the similarity between a set and an item for
personalized recommendation: they minimize the item-to-set distance between
the set and items of the same user, while maximizing such distances for items
of different users. Benefiting from this framework, they are able to perform
personalized recommendation without requiring any additional supervision.
While metric learning is well studied in the literature, learning such an
item-to-set metric was previously unexplored and poses new challenges: a user
can be interested in more than one fashion style, not only the one depicted in
a given picture, so the item-to-set similarity cannot be captured by an
oversimplified average of multiple item-to-item similarities; likewise, a
nearest-neighbor item-to-set metric is difficult to learn, as it is
susceptible to noise and outliers.
In summary, their contribution is a fashion recommendation system built on
personal social media data: the system recommends personalized outfits using a
few unconstrained street-view selfie posts of the users. They also propose a
self-supervised scheme that enables the training of the system. The approach
is based on a novel item-to-set metric learning framework that needs only the
user selfie posts as supervision. For this, they design a multi-modal
embedding module that better fuses the social media data for the extraction of
fashion features.
Built upon the item-wise measurement $d\left(f_{i},f_{j}\right)$, they propose
an item-to-set similarity metric $D(S,f)$, which measures how dissimilar an
item $f$ is to a set of items $S=\left\\{f_{1},\cdots,f_{K}\right\\}$. The
item-to-set metric aims to predict how similar an outfit candidate is to a set
of user selfies for personalized fashion recommendation.
To design a metric that better captures the multiple interests of a user while
facilitating robust training, the paper proposes a generalized item-to-set
distance. Specifically, given a set $S$ and a query $f$, they first assign an
importance weight $w_{i}$ to each item $f_{i}\in S$ before feature averaging
and distance computation. The importance weight is computed using an
importance estimator $w_{i}=K\left(f_{i};f,S\right)$. Such an item-to-set
distance is defined by:
$\displaystyle D(S,\boldsymbol{f})$
$\displaystyle=d\left(\sum_{i=1}^{K}\alpha_{i}f_{i},\boldsymbol{f}\right)$
$\displaystyle\alpha_{i}$
$\displaystyle=\frac{\exp\left(w_{i}\right)}{\sum_{j}\exp\left(w_{j}\right)}$
To reduce the influence of noise and outliers when computing the distance,
they further consider an intra-set importance weight:
$v\left(f_{i};S\right)=\operatorname{MLP}_{v}\left(\left[f_{i},\operatorname{stat}(S)\right]\right)$
where $\mathrm{MLP}_{v}$ outputs a scalar from an input vector, and
$\operatorname{stat}(S)$ is a vector capturing the statistics of the set $S$
along all feature dimensions. In this way, each item $f_{i}$ is compared with
the set $S$ to eliminate outliers from the set.
Different individuals focus on different aspects of fashion items, so the
item-to-set metric itself should be user-specific. For example, for users with
a minimalist fashion style, the item-to-set distance should be more sensitive
to the number of colors used, whereas for users of an artsy style it should
focus more on unusual prints and the complexity of accessories. They therefore
extend the similarity metric to a user-specific metric by performing a
user-specific space transformation before the distance computation. In
particular, given the set $S$, a scaling vector $t(S)$ is computed, which
indicates the scaling factor at each feature dimension:
$\boldsymbol{t}(S)=\operatorname{softmax}\left(\operatorname{MLP}_{t}(\operatorname{stat}(S))\right)$
Using this space transformation, they extend the item-to-set metric to a
set-specific metric. Specifically, they define a user-specific item-to-set
metric:
$D_{us}(S,f)=d\left(t(S)\odot\left(\sum_{i=1}^{K}\alpha_{i}f_{i}\right),t(S)\odot
f\right)$
where $\odot$ denotes elementwise multiplication of vectors. It filters out
the feature dimensions that a user focuses less on before the distance
computation, which helps make the recommendation system more user-specific.
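A minimal sketch of this user-specific distance, assuming the importance weights $w$ and the scaling vector $t$ have already been produced by the corresponding estimators (names are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def user_specific_distance(S, f, w, t):
    """User-specific item-to-set distance D_us(S, f).
    S: (K, d) set of a user's item features; f: (d,) query item;
    w: (K,) importance weights from the estimator; t: (d,) scaling t(S)."""
    alpha = softmax(w)                  # normalised importance weights
    proto = alpha @ S                   # weighted average of the set
    return np.linalg.norm(t * proto - t * f)  # scale, then Euclidean distance

# Usage with illustrative random features (K = 5 items, d = 8 dimensions).
rng = np.random.default_rng(0)
S, f = rng.normal(size=(5, 8)), rng.normal(size=8)
print(user_specific_distance(S, f, w=rng.normal(size=5),
                             t=softmax(rng.normal(size=8))))
```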
## 5\. Future Research
In the post-coronavirus era, one of the industries undoubtedly incorporating
advanced technologies at a much faster pace than ever before is fashion.
Thanks to AI and computer-vision-powered tools, new and engaging experiences
are being created for both retailers and consumers. The e-commerce customer
experience is thoroughly infused with AI solutions: online site navigation,
search and retrieval, targeted marketing, labeling, personalized offers, size
fitting, recommendations, online fitting rooms, style recommendation analytics
and much more. Using computer vision and AI, semantic data is automatically
generated from image pixels, which is crucial for e-commerce stores.
One basic requirement is product discovery: visual search should make it easy
enough for shoppers to find what they are looking for, and it should also
benefit retailers, who can take advantage of user behavior to show
recommendations and profit from this aspect as stores move further online in
the post-covid era. AI technology thus enables fashion brands to gain insight
into which product features their customers prefer.
An interesting aspect (Countants, 2020) is that the fashion industry, at over
3 trillion dollars, contributes a healthy portion of the global GDP, and in
the 21st century AI, machine learning and specifically deep learning are
changing every expectation of this forward-looking business. The use of AI in
the fashion industry of 2020 is so entrenched that 44 percent of fashion
retailers not using AI today are facing bankruptcy. As a result, global
spending on AI technologies by the fashion and retail industry is expected to
reach 7.3 billion dollars per year by 2022, just two years away.
AI-powered fashion design can be based on a customer's preferred colors,
textures and other style preferences, which can then be used to design the
apparel and the textile itself. Regarding the manufacturing process, AI tools
can identify super-fast-changing trends and supply the latest fashion
accessories to retail shelves much faster than the traditional retailing
system. Many leading fashion brands, such as Zara, Topshop and Achieve, are
already using this, and they are much quicker in providing instant
gratification to retail customers by recognizing seasonal demand and
manufacturing the right supply of the latest clothing. Virtual merchandising,
enabled by technologies like augmented and virtual reality, is now closing the
gap between online and in-store shopping. This is also something that can be
incorporated into recommendation systems, as many people would like to
experience virtual and augmented reality for clothes fitting, making the
online buying experience more human-like.
## 6\. Conclusion
As the advancements in deep learning, CV and AI grow stronger day by day,
their usage in the fashion industry has also become a very popular topic. From
product personalization to better design, there are multiple ways in which AI
and machine learning technologies are impacting the global fashion industry,
and the increasing investment by leading fashion brands in these technologies
is proof of their immense potential. They provide enhanced customer service,
virtual merchandising, smart manufacturing processes and improved inventory
management; they need less manpower through automation and reduce returned
products, which also improves customer satisfaction. One of the biggest things
is personalization, which is key to business success: deep learning
technologies, along with business analytics, enable fashion businesses to keep
track of fashion trends and of the purchasing behavior of individual
customers. Whether it is trend or season prediction, much can be done with
these powerful tools, and the fashion industry is magnified by them. This is a
field with the potential to grow and keep expanding, and any future research
in this line will pave the way for further impressive developments.
## References
* Adomavicius and Tuzhilin (2005) Gediminas Adomavicius and Alexander Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. _Knowledge and Data Engineering, IEEE Transactions on_ 17 (07 2005), 734–749. https://doi.org/10.1109/TKDE.2005.99
* Borràs et al. (2003) Agnés Borràs, Francesc Tous, Josep Lladós, and María Vanrell. 2003. High-Level Clothes Description Based on Colour-Texture and Structural Features. 108–116. https://doi.org/10.1007/978-3-540-44871-6_13
* Chatfield et al. (2014) Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Return of the Devil in the Details: Delving Deep into Convolutional Nets. _BMVC 2014 - Proceedings of the British Machine Vision Conference 2014_ (05 2014). https://doi.org/10.5244/C.28.6
* Che et al. (2016) Tong Che, Yanran Li, Athul Jacob, Y. Bengio, and Wenjie Li. 2016. Mode Regularized Generative Adversarial Networks. (12 2016).
* Chen et al. (2019) Wen Chen, Binqiang Zhao, Pipei Huang, Jiaming Xu, Xin Guo, Cheng Guo, Fei Sun, Chao Li, Andreas Pfadler, and Huan Zhao. 2019. POG: Personalized Outfit Generation for Fashion Recommendation at Alibaba iFashion. 2662–2670. https://doi.org/10.1145/3292500.3330652
* Countants (2020) Countants. 2020\. _AI and Machine Learning For Fashion Industry — Global Trends and Benefits_. https://medium.com/datadriveninvestor/ai-and-machine-learning-for-fashion-industry-global-trends-benefits-3fe11a17849e
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Fei Fei Li. 2009\. ImageNet: a Large-Scale Hierarchical Image Database. _IEEE Conference on Computer Vision and Pattern Recognition_ , 248–255. https://doi.org/10.1109/CVPR.2009.5206848
* Dhar et al. (2011) Sagnik Dhar, Vicente Ordonez, and Tamara Berg. 2011\. High level describable attributes for predicting aesthetics and interestingness. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 1657–1664. https://doi.org/10.1109/CVPR.2011.5995467
* ELEKS ([n.d.]a) ELEKS. [n.d.]a. _Designing Apparel with Neural Style Transfer_. https://labs.eleks.com/2016/09/designing-apparel-neural-style-transfer.html
* ELEKS ([n.d.]b) ELEKS. [n.d.]b. _Fashion and Technology: How Deep Learning Can Create an Added Value in Retail_. http://labs.eleks.com/2017/05/fashion-technology-deep-learning-can-create-added-value-retail.html
* Goodfellow et al. (2014) Ian J. Goodfellow, Jean Pouget-Abadie, M. Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. 2014\. Generative Adversarial Nets. In _NIPS_.
* Gygli et al. (2013) Michael Gygli, Helmut Grabner, Hayko Riemenschneider, Fabian Nater, and Luc Van Gool. 2013\. The Interestingness of Images. _Proceedings of the IEEE International Conference on Computer Vision_ , 1633–1640. https://doi.org/10.1109/ICCV.2013.205
* Han et al. (2017) Xintong Han, Zuxuan Wu, Yu-Gang Jiang, and Larry Davis. 2017\. Learning Fashion Compatibility with Bidirectional LSTMs. (07 2017). https://doi.org/10.1145/3123266.3123394
* He and Hu (2018) Tong He and Yang Hu. 2018\. FashionNet: Personalized Outfit Recommendation with Deep Neural Network.
* Hou et al. (2019) Min Hou, Le Wu, Enhong Chen, Zhi Li, Vincent Zheng, and Qi Liu. 2019\. Explainable Fashion Recommendation: A Semantic Attribute Region Guided Approach. 4681–4688. https://doi.org/10.24963/ijcai.2019/650
* Hsiao and Grauman (2017) Wei-Lin Hsiao and Kristen Grauman. 2017. Creating Capsule Wardrobes from Fashion Images. (12 2017).
* Hsiao et al. (2019) Wei-Lin Hsiao, Isay Katsman, Chao-Yuan Wu, Devi Parikh, and Kristen Grauman. 2019. Fashion++: Minimal Edits for Outfit Improvement. arXiv:1904.09261 [cs.CV]
* Hu et al. (2015) Yang Hu, Xi Yi, and Larry Davis. 2015. Collaborative Fashion Recommendation: A Functional Tensor Factorization Approach. 129–138. https://doi.org/10.1145/2733373.2806239
* Huang et al. (2015) Junshi Huang, Rogerio Feris, Qiang Chen, and Shuicheng Yan. 2015\. Cross-Domain Image Retrieval with a Dual Attribute-Aware Ranking Network. (05 2015). https://doi.org/10.1109/ICCV.2015.127
* Huynh et al. (2018) Cong Phuoc Huynh, Arridhana Ciptadi, Ambrish Tyagi, and Amit Agrawal. 2018. CRAFT: Complementary Recommendations Using Adversarial Feature Transformer. arXiv:1804.10871 [cs.CV]
* Insight ([n.d.]) First Insight. [n.d.]. _AI and Machine Learning for Fashion_. https://www.firstinsight.com/knowledge-base/machine-learning-ai-for-retail-fashion
* Ioffe and Szegedy (2015) Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. (02 2015).
* Isola et al. (2013) Phillip Isola, Jianxiong Xiao, Devi Parikh, Antonio Torralba, and Aude Oliva. 2013. What Makes a Photograph Memorable? _IEEE transactions on pattern analysis and machine intelligence_ 36 (10 2013). https://doi.org/10.1109/TPAMI.2013.200
* Iwata et al. (2011) Tomoharu Iwata, Shinji Wanatabe, and Hiroshi Sawada. 2011\. Fashion Coordinates Recommender System Using Photographs from Fashion Magazines. 2262–2267. https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-377
* Jagadeesh et al. (2014) Vignesh Jagadeesh, Robinson Piramuthu, Anurag Bhardwaj, Wei di, and Neel Sundaresan. 2014\. Large Scale Visual Recommendations From Street Fashion Images. _Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ (01 2014). https://doi.org/10.1145/2623330.2623332
* Jammalamadaka et al. (2013) Nataraj Jammalamadaka, Ayush Minocha, Digvijay Singh, and CV Jawahar. 2013. Parsing Clothes in Unrestricted Images. 88.1–88.11. https://doi.org/10.5244/C.27.88
* Jia et al. (2016) Jia Jia, Jie Huang, G. Shen, T. He, Zhiyuan Liu, H. Luan, and Chao Yan. 2016\. Learning to Appreciate the Aesthetic Effects of Clothing. In _AAAI_.
* Kalantidis et al. (2013) Yannis Kalantidis, Lyndon Kennedy, and Li-Jia Li. 2013\. Getting the Look: Clothing Recognition and Segmentation for Automatic Product Suggestions in Everyday Photos. https://doi.org/10.1145/2461466.2461485
* Kang et al. (2017) Wang-Cheng Kang, Chen Fang, Zhaowen Wang, and Julian McAuley. 2017. Visually-Aware Fashion Recommendation and Design with Generative Image Models. (11 2017).
* Khosla et al. (2013) Aditya Khosla, Wilma Bainbridge, Antonio Torralba, and Aude Oliva. 2013. Modifying the Memorability of Face Photographs. _Proceedings of the IEEE International Conference on Computer Vision_ , 3200–3207. https://doi.org/10.1109/ICCV.2013.397
* Khosla et al. (2014) A. Khosla, A. D. Sarma, and R. Hamid. 2014. What makes an image popular?. In _WWW_.
* Kiapour et al. (2015) M. Kiapour, Xufeng Han, Svetlana Lazebnik, Alexander Berg, and Tamara Berg. 2015. Where to Buy It: Matching Street Clothing Photos in Online Shops. 3343–3351. https://doi.org/10.1109/ICCV.2015.382
* Kobayashi ([n.d.]) S. Kobayashi. [n.d.]. _Art of Color Combinations_. Kodansha International.
* Kolda and Bader (2009) T. Kolda and B. Bader. 2009\. Tensor Decompositions and Applications. _SIAM Rev._ 51 (2009), 455–500.
* Koren and Bell (2015) Yehuda Koren and Robert Bell. 2015. _Advances in Collaborative Filtering_. 77–118. https://doi.org/10.1007/978-1-4899-7637-6_3
* Krizhevsky et al. (2012) Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. 2012\. ImageNet Classification with Deep Convolutional Neural Networks. _Neural Information Processing Systems_ 25 (01 2012). https://doi.org/10.1145/3065386
* Lecun et al. (1998) Yann Lecun, Leon Bottou, Y. Bengio, and Patrick Haffner. 1998\. Gradient-Based Learning Applied to Document Recognition. _Proc. IEEE_ 86 (12 1998), 2278 – 2324. https://doi.org/10.1109/5.726791
* Lew et al. (2006) Michael Lew, Nicu Sebe, Chaabane Djeraba, and Ramesh Jain. 2006\. Content-based multimedia information retrieval: State of the art and challenges. _TOMCCAP_ 2 (02 2006), 1–19. https://doi.org/10.1145/1126004.1126005
* Li et al. (2016) Yuncheng Li, LiangLiang Cao, Jiang Zhu, and Jiebo Luo. 2016\. Mining Fashion Outfit Composition Using An End-to-End Deep Learning Approach on Set Data. _IEEE Transactions on Multimedia_ PP (08 2016). https://doi.org/10.1109/TMM.2017.2690144
* Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Zitnick. 2014\. Microsoft COCO: Common Objects in Context. (05 2014).
* Liu et al. (2012) Si Liu, Tam Nguyen, Jiashi Feng, Meng Wang, and Shuicheng Yan. 2012. Hi, magic closet, tell me what to wear! 1333–1334. https://doi.org/10.1145/2393347.2396470
* Maas (2013) Andrew L. Maas. 2013\. Rectifier Nonlinearities Improve Neural Network Acoustic Models.
* McAuley et al. (2015) Julian McAuley, Christopher Targett, Qinfeng Shi, and Anton Hengel. 2015. Image-Based Recommendations on Styles and Substitutes. (06 2015). https://doi.org/10.1145/2766462.2767755
* Melo et al. (2015) Ernani Melo, Emilia Nogueira, and Denise Guliato. 2015\. Content-Based Filtering Enhanced by Human Visual Attention Applied to Clothing Recommendation. 644–651. https://doi.org/10.1109/ICTAI.2015.98
* Melville et al. (2002) Prem Melville, Raymond Mooney, and Ramadass Nagarajan. 2002\. Content-Boosted Collaborative Filtering for Improved Recommendations. _Proceedings of the National Conference on Artificial Intelligence_ (05 2002).
* Nguyen et al. (2014) Hai Nguyen, Martin Havig, Herman Schistad, Thomas Almenningen, Anders Kofod-Petersen, Helge Langseth, and Heri Ramampiaro. 2014. Learning to Rank for Personalised Fashion Recommender Systems via Implicit Feedback. https://doi.org/10.1007/978-3-319-13817-6_6
* of Philosophy (2009) Stanford Encyclopedia of Philosophy. 2009\. _The Concept of the Aesthetic_. https://plato.stanford.edu/entries/aesthetic-concept/
* Pedersen et al. (2004) Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004\. WordNet::Similarity - Measuring the Relatedness of Concepts. (04 2004).
* Rendle et al. (2012) Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2012. BPR: Bayesian Personalized Ranking from Implicit Feedback. _Proceedings of the 25th Conference on Uncertainty in Artificial Intelligence, UAI 2009_ (05 2012).
* Rendle and Schmidt-Thieme (2010) Steffen Rendle and Lars Schmidt-Thieme. 2010. Pairwise Interaction Tensor Factorization for Personalized Tag Recommendation. _WSDM 2010 - Proceedings of the 3rd ACM International Conference on Web Search and Data Mining_ , 81–90. https://doi.org/10.1145/1718487.1718498
* Sidiropoulos et al. (2016) N.D. Sidiropoulos, Lieven Lathauwer, Xiao Fu, Kejun Huang, Evangelos Papalexakis, and Christos Faloutsos. 2016. Tensor Decomposition for Signal Processing and Machine Learning. _IEEE Transactions on Signal Processing_ PP (07 2016). https://doi.org/10.1109/TSP.2017.2690524
* Simo-Serra et al. (2014) Edgar Simo-Serra, Sanja Fidler, Francesc Moreno-Noguer, and Raquel Urtasun. 2014. A High Performance CRF Model for Clothes Parsing. 64–81. https://doi.org/10.1007/978-3-319-16811-1_5
* Simo-Serra et al. (2015) Edgar Simo-Serra, Sanja Fidler, Francesc Moreno-Noguer, and Raquel Urtasun. 2015. Neuroaesthetics in fashion: Modeling the perception of fashionability. 869–877. https://doi.org/10.1109/CVPR.2015.7298688
* Song et al. (2018) X. Song, Fuli Feng, Xianjing Han, X. Yang, W. Liu, and L. Nie. 2018\. Neural Compatibility Modeling with Attentive Knowledge Distillation. _The 41st International ACM SIGIR Conference on Research &. Development in Information Retrieval_ (2018).
* Song et al. (2017) Xuemeng Song, Fuli Feng, Jinhuan Liu, Zekun Li, Liqiang Nie, and Jun Ma. 2017\. NeuroStylist: Neural Compatibility Modeling for Clothing Matching. 753–761. https://doi.org/10.1145/3123266.3123314
* Streamoid ([n.d.]) Streamoid. [n.d.]. _The Aesthetics of Fashion Part 2_. https://blog.streamoid.com/the-aesthetics-of-fashion-part-2-66deaaf349dc
* Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 1–9. https://doi.org/10.1109/CVPR.2015.7298594
* Tangseng et al. (2018) Pongsate Tangseng, Kota Yamaguchi, and Takayuki Okatani. 2018\. Recommending Outfits from Personal Closet.
* Vasileva et al. (2018) Mariya Vasileva, Bryan Plummer, Krishna Dusad, Shreya Rajpal, Ranjitha Kumar, and David Forsyth. 2018\. Learning Type-Aware Embeddings for Fashion Compatibility. (03 2018).
* Veit et al. (2015) Andreas Veit, Balazs Kovacs, Sean Bell, Julian McAuley, Kavita Bala, and Serge Belongie. 2015\. Learning Visual Clothing Style with Heterogeneous Dyadic Co-Occurrences. (09 2015). https://doi.org/10.1109/ICCV.2015.527
* Vittayakorn et al. (2015) Sirion Vittayakorn, Kota Yamaguchi, Alexander Berg, and Tamara Berg. 2015. Runway to Realway: Visual Analysis of Fashion. _Proceedings - 2015 IEEE Winter Conference on Applications of Computer Vision, WACV 2015_ (02 2015), 951–958. https://doi.org/10.1109/WACV.2015.131
* Wang et al. (2016) Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural Deep Network Embedding. 1225–1234. https://doi.org/10.1145/2939672.2939753
* Wang (2013) Xiaohui Wang. 2013\. Interpretable Aesthetic Features for Affective Image Classification. _Proceedings / ICIP … International Conference on Image Processing_ (09 2013), 3230–3234. https://doi.org/10.1145/1188913.1188915
* Wu et al. (2019) Le Wu, Lei Chen, Richang Hong, Yanjie Fu, Xing Xie, and Meng Wang. 2019\. A Hierarchical Attention Model for Social Contextual Image Recommendation. _IEEE Transactions on Knowledge and Data Engineering_ PP (04 2019), 1–1. https://doi.org/10.1109/TKDE.2019.2913394
* Xu Chen (2018) Hongteng Xu Yixin Cao Zheng Qin Hongyuan Zha Xu Chen, Yongfeng Zhang. 2018. Visually Explainable Recommendation. _CoRR_ abs/1801.10288 (2018). arXiv:1801.10288 http://arxiv.org/abs/1801.10288
* Yamaguchi et al. (2013) Kota Yamaguchi, M. Kiapour, and Tamara Berg. 2013\. Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items. _Proceedings of the IEEE International Conference on Computer Vision_ , 3519–3526. https://doi.org/10.1109/ICCV.2013.437
* Yamaguchi et al. (2012) Kota Yamaguchi, M.H. Kiapour, L.E. Ortiz, and T.L. Berg. 2012\. Parsing clothing in fashion photographs. _Proceedings / CVPR, IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ , 3570–3577. https://doi.org/10.1109/CVPR.2012.6248101
* Yan (2012) Shuicheng Yan. 2012\. Street-to-shop: Cross-scenario clothing retrieval via parts alignment and auxiliary set. 3330–3337.
* Yang et al. (2015) Wei Yang, Ping Luo, and Liang Lin. 2015. Clothing Co-Parsing by Joint Image Segmentation and Labeling. _Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition_ (02 2015). https://doi.org/10.1109/CVPR.2014.407
* Yu et al. (2019) Cong Yu, Yang Hu, Yan Chen, and Bing Zeng. 2019\. Personalized Fashion Design. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_.
* Yu et al. (2018) Wenhui Yu, Huidi Zhang, Xiangnan He, Xu Chen, Li Xiong, and Zheng Qin. 2018. Aesthetic-based Clothing Recommendation. _CoRR_ abs/1809.05822 (2018). arXiv:1809.05822 http://arxiv.org/abs/1809.05822
* Zhang et al. (2017) Yan Zhang, Xiang Liu, Yunyu Shi, Yunqi Guo, Chaoqun Xu, Erwen Zhang, Jiaxun Tang, and Zhijun Fang. 2017\. Fashion Evaluation Method for Clothing Recommendation Based on Weak Appearance Feature. _Scientific Programming_ 2017 (10 2017), 1–12. https://doi.org/10.1155/2017/8093057
* Zheng et al. (2020) Haitian Zheng, Kefei Wu, Jong Park, Wei Zhu, and Jiebo Luo. 2020. Personalized Fashion Recommendation from Personal Social Media Data: An Item-to-Set Metric Learning Approach.
* Zou et al. (2016) Qin Zou, Zheng Zhang, Qian Wang, Qingquan Li, Long Chen, and Song Wang. 2016\. Who Leads the Clothing Fashion: Style, Color, or Texture? A Computational Study. (08 2016).
* Ângelo Cardoso et al. (2018) Ângelo Cardoso, Fabio Daolio, and Saúl Vargas. 2018\. Product Characterisation towards Personalisation: Learning Attributes from Unstructured Data to Recommend Fashion Products. arXiv:1803.07679 [stat.ML]
|
Tweaking the Beukers Integrals In Search of More Miraculous Irrationality
Proofs À La Apéry
Robert DOUGHERTY-BLISS, Christoph KOUTSCHAN, and Doron ZEILBERGER
In honor of our irrational guru Wadim Zudilin, on his $\lfloor
50\,\zeta(5)\rfloor$-th birthday
[Actual] Historical Introduction: How Beukers’ Proofs Were ACTUALLY found
Hilbert’s 0-th problem
Before David Hilbert [H] stated his famous 23 problems, he mentioned two
problems that he probably believed to be harder still, and which indeed are
still wide open today. One of them was to prove that there are infinitely many
prime numbers of the form $2^{n}+1$, and the other was to prove that the
Euler-Mascheroni constant is irrational.
Two paragraphs later he stated his optimistic belief that “in mathematics
there is no ignorabimus.”
As we all know, he was proven wrong by Gödel and Turing in general; but even
for such concrete problems, like the irrationality of a specific, natural
constant, like the Euler-Mascheroni constant (that may be defined in terms of
the definite integral $\quad-\int_{0}^{\infty}e^{-x}\log x\,dx$), that is most
probably decidable in the logical sense (i.e. there probably exists a
(rigorous) proof), we lowly humans have not yet found one (and may never do!).
While the Euler-Mascheroni constant (and any other natural, explicitly-
defined constant that is not obviously rational) is surely irrational, in the
everyday sense of the word sure (like death and taxes), giving a proof, in the
mathematical sense of ‘proof’, is a different matter. While $e$ was proved
irrational a long time ago (trivial exercise), and $\pi$ was proved irrational
by Lambert around 1760, we have no clue how to prove that $e+\pi$ is
irrational. Ditto for $e\cdot\pi$. Exercise: Prove that at least one of them
is irrational.
Apéry’s Miracle
As Lindemann first proved in 1882, the number $\pi$ is more than just
irrational, it is transcendental; hence it follows that $\zeta(n)$ is
irrational for all even arguments $n$, since Euler proved that $\zeta(2n)$ is a
rational multiple of $\pi^{2n}$. But proving that $\zeta(3)$,
$\zeta(5)$, $\dots$ are irrational remained wide open.
Since such problems are so hard, it was breaking news, back in 1978, when
the 64-year-old Roger Apéry announced and sketched a proof that
$\zeta(3):=\sum_{n=1}^{\infty}{1\over n^{3}}$ is irrational. This was
beautifully narrated in a classic expository paper by Alf van der Poorten
[vdP], aided by details filled-in by Henri Cohen and Don Zagier. While
beautiful in our eyes, most people found the proof ad-hoc and too complicated,
and they did not like the heavy reliance on recurrence relations.
To those people who found Apéry’s original proof too magical, ad-hoc, and
computational, another proof, by a 24-year-old PhD student by the name of
Frits Beukers [B], was a breath of fresh air. It was a marvelous gem in human-
generated mathematics, and could be easily followed by a first-year student,
using partial fractions and very easy estimates of a certain triple integral,
namely
$\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{(x(1-x)y(1-y)z(1-z))^{n}\over(1-z+xyz)^{n+1}}\,dx\,dy\,dz\quad.$
The general approach of Apéry of finding concrete sequences of integers
$a_{n},b_{n}$ such that
$|\zeta(3)-{a_{n}\over b_{n}}|\,<\,{CONST\over b_{n}^{1+\delta}}\quad,$
(see below), for a positive $\delta$, was still followed, but the details were
much more palatable and elegant to the average mathematician in the street.
As a warmup, Beukers, like Apéry before him, gave a new proof of the already
proved fact that $\zeta(2)={\pi^{2}\over 6}$ is irrational, using the double
integral
$\int_{0}^{1}\,\int_{0}^{1}\,{(x(1-x)y(1-y))^{n}\over(1-xy)^{n+1}}\,dx\,dy\quad.$
Ironically, we will follow Beukers’ lead, but heavily use recurrence
relations, which will be the engine of our approach. Thus we will abandon the
original raison d’être of Beukers’ proof, namely getting rid of recurrences,
and bring them back with a vengeance.
[Alternative World] Historical Introduction: How Beukers’s Proofs Could (and
Should!) have been Discovered
Once upon a time, there was a precocious teenager, who was also a computer
whiz; let’s call him/her/it/they Alex. Alex had just gotten, as a birthday
present, a new laptop with Maple on it.
Alex typed, for no particular reason,
int(int(1/(1-x*y),x=0..1),y=0..1);
and immediately got the answer: ${\pi^{2}\over 6}$. Then Alex was wondering
about the sequence
$I(n):=\int_{0}^{1}\,\int_{0}^{1}\,{(x(1-x)y(1-y))^{n}\over(1-xy)^{n+1}}\,dx\,dy\quad.$
(why not, isn’t it a natural thing to try out for a curious teenager?), and
typed
I1:=n->int(int(1/(1-x*y)*(x*(1-x)*y*(1-y)/(1-x*y))**n,x=0..1),y=0..1);
(I is reserved in Maple for $\sqrt{-1}$, so Alex needed to use I1),
and looked at the first ten values by typing:
L:=[seq(I1(i),i=1..10)]; ,
getting after a few seconds
$[5-{\pi^{2}\over 2}\,,\,-{125\over 4}+{19\,{\pi}^{2}\over 6}\,,\,{8705\over 36}-{49\,{\pi}^{2}\over 2}\,,\,-{32925\over 16}+{417\,{\pi}^{2}\over 2}\,,\,{13327519\over 720}-{3751\,{\pi}^{2}\over 2}\,,$
$-{124308457\over 720}+{104959\,{\pi}^{2}\over 6}\,,\,{19427741063\over 11760}-{334769\,{\pi}^{2}\over 2}\,,\,-{2273486234953\over 141120}+{9793891\,{\pi}^{2}\over 6}\,,$
${202482451324891\over 1270080}-{32306251\,{\pi}^{2}\over 2}\,,\,-{2758128511985\over 1728}+{323445423\,{\pi}^{2}\over 2}]\quad.$
Alex immediately noticed that, at least for $n\leq 10$,
$I(n)=a_{n}-b_{n}{\pi^{2}\over 6}\quad,$
for some integers $b_{n}$ and some rational numbers $a_{n}$. By taking
evalf(L), Alex also noticed that the $I(n)$ get smaller and smaller. Knowing
that Maple cannot be trusted with floating-point calculations unless one
changes the value of Digits from its default to something higher (say, in this
case, Digits:=30), Alex double-checked by typing ‘evalf(L,30);’, and got:
$[0.06519779945532069058275450006\,,\,0.0037472701163022929758881663\,,$
$0.000247728866269394110526059\,,\,0.00001762713127202699137347\,,$
$0.0000013124634659314676853\,,\,0.000000100776323486001254\,,$
$0.00000000791212964371946\,,\,0.0000000006317437711206\,,$
$5.1111100706\times 10^{-11}\,,\,4.17922459\times 10^{-12}]\quad.$
Alex realized that $I(n)$ seems to go to zero fairly fast, and since
$I(10)/I(9)$ and $I(9)/I(8)$ were pretty close, Alex conjectured that
$I(n)/I(n-1)$ tends to a certain constant. But ten data points do not
suffice!
When Alex tried to find the first $2000$ terms, Maple got slower and slower.
Then Alex asked Alexa, the famous robot,
Alexa: how do I compute many terms of the sequence $I(n)$ given by that
double-integral?
and Alexa replied:
Go to Doron Zeilberger’s web-site and download the amazing program
https://sites.math.rutgers.edu/~zeilberg/tokhniot/MultiAlmkvistZeilberger.txt
,
that accompanied the article [ApaZ]. Typing
MAZ(1, 1/(1-x*y), x*(1-x)*y*(1-y)/(1-x*y), [x,y], n, N, {})[1];
immediately gave a recurrence satisfied by $I(n)$
$I(n)=-{11\,n^{2}-11\,n+3\over n^{2}}\cdot I\left(n-1\right)+{\left(n-1\right)^{2}\over n^{2}}\cdot I\left(n-2\right)\quad.$
Using this recurrence, Alex easily computed the first $2000$ terms, using the
following Maple one-liner (calling the sequence defined by the recurrence
I2(n)):
I2:=proc(n) option remember: # memoized; I2(n) equals I(n), seeded with I(0) and I(1)
if n=0 then Pi**2/6 elif n=1 then 5-Pi**2/2 else
-(11*n**2-11*n+3)/n**2*I2(n-1)+(n-1)**2/n**2*I2(n-2) fi: end:
and found out that indeed $I(n)/I(n-1)$ tends to a limit, about $0.09016994$.
Writing
$I(n)=a_{n}-b_{n}{\pi^{2}\over 6}\quad$
and realizing that $I(n)$ is small, Alex found terrific rational
approximations to ${\pi^{2}\over 6}$, $a_{n}/b_{n}$, that after clearing
denominators can be written as $a^{\prime}_{n}/b^{\prime}_{n}$ where now both
numerator $a^{\prime}_{n}$ and denominator $b^{\prime}_{n}$ are integers.
${\pi^{2}\over 6}\approx{a^{\prime}_{n}\over b^{\prime}_{n}}\quad.$
Alex also noticed that for all $n$ up to $2000$, for some constant $C$,
$|{\pi^{2}\over 6}-{a^{\prime}_{n}\over
b^{\prime}_{n}}|\leq{C\over(b^{\prime}_{n})^{1+\delta}}\quad,$
where $\delta$ is roughly $0.09215925467$. Then Alex concluded that this
proves that ${\pi^{2}\over 6}$ is irrational, since if it were rational the
left side would have been $\geq{C_{1}\over b^{\prime}_{n}}$, for some constant
$C_{1}$. Of course, some details would still need to be filled-in, but that
was not too hard.
The General Strategy
Let’s follow Alex’s lead. (Of course our fictional Alex owes a lot to the real
Beukers and also to Alladi and Robinson [AR]).
Start with a constant, let’s call it $C$, given by an explicit integral
$\int_{0}^{1}K(x)\,dx\quad,$
for some integrand $K(x)$, or, more generally, a $d$-dimensional integral
$\int_{0}^{1}\dots\int_{0}^{1}K(x_{1},\dots,x_{d})\,dx_{1}\dots dx_{d}\quad.$
Our goal in life is to prove that $C$ is irrational. Of course $C$ may turn
out to be rational (that happens!), or, more likely, an algebraic number, or
expressible in terms of a logarithm of an algebraic number, for which there
already exist irrationality proofs (albeit not always effective ones). But who
knows? Maybe this constant has never been proved irrational, and if it
happens to be famous (e.g. Catalan’s constant, or $\zeta(5)$, or the Euler-
Mascheroni constant mentioned above), we will be famous too. But even if it is
a nameless constant, it is still extremely interesting if ours is the first
irrationality proof, since these proofs are so hard; witness that, in spite of
great efforts by experts like Wadim Zudilin, the irrationality proofs for the
famous constants just mentioned are still wide open.
In this article we will present numerous candidates. Our proofs of
irrationality are modulo a ‘divisibility lemma’ (see below), which we are sure
someone like Wadim Zudilin, to whom this paper is dedicated, can fill in.
Our only doubts are whether these constants have not already been proved
irrational because they happen to be algebraic (probably not, since Maple was
unable to identify them), or more complicated numbers (like logarithms of
algebraic numbers). Recall that Maple’s identify can’t (yet) identify
everything that God can.
Following Beukers and Alladi-Robinson, we introduce a sequence of integrals,
parameterized by a non-negative integer $n$
$I(n)=\int_{0}^{1}K(x)\,(x(1-x)K(x))^{n}\,dx\quad,$
and analogously for multiple integrals, or more generally
$I(n)=\int_{0}^{1}K(x)\,(x(1-x)S(x))^{n}\,dx\quad,$
for another function $S(x)$. Of course $I(0)=C$, our constant that we want to
prove irrational.
It so happens that, for a wide class of functions $K(x)$, $S(x)$ (for single
or multivariable $x$), by the holonomic ansatz [Ze1], implemented for the
single-variable case in [AlZ], for the multi-variable case in [ApaZ], and much
more efficiently in [K], the sequence $I(n)$ satisfies a linear recurrence
equation with polynomial coefficients that can be actually computed (always in
theory, but also often in practice, unless the dimension is high). In other
words, we can find a positive integer $L$, the order of the recurrence, and
polynomials $p_{0}(n),p_{1}(n),\dots,p_{L}(n)$, such that
$p_{0}(n)I(n)+p_{1}(n)I(n+1)+\dots+p_{L}(n)I(n+L)\,=\,0\quad.$
If we are lucky (and all the cases in this paper fall into this case) the
order $L$ is $2$. Furthermore, it would be evident in all the examples in this
paper that $p_{0}(n),p_{1}(n),p_{2}(n)$ can be taken to have integer
coefficients.
Another ‘miracle’ that happens in all the examples in this paper is that
$I(0)$ and $I(1)$ are rationally-related, i.e. there exist integers
$c_{0},c_{1},c_{2}$ such that
$c_{0}I(0)+c_{1}I(1)=c_{2}\quad,$
that our computers can easily find.
It then follows, by induction, that one can write
$I(n)=b_{n}C-a_{n}\quad,$
for some sequences of rational numbers $\\{a_{n}\\}$ and $\\{b_{n}\\}$ that
both satisfy the same recurrence as $I(n)$.
Either using trivial bounds on the integral, or using the so-called Poincaré
lemma (see, e.g. [vdP], [ZeZu1],[ZeZu2]) it turns out that
$a_{n}\,=\,\Omega(\alpha^{n})\quad,\quad b_{n}\,=\,\Omega(\alpha^{n})\quad,$
for some constant $\alpha>1$, and
$|I(n)|=\Omega(\,{1\over\beta^{n}}\,)\quad,$
for some constant $\beta>1$.
[Please note that we use $\Omega$ in a looser-than-usual sense: for us
$x(n)=\Omega(\alpha^{n})$ means that $\lim_{n\rightarrow\infty}\,{\log
x(n)\over n}=\log\alpha$.]
In the tweaks of Beukers’ integrals for $\zeta(2)$ and $\zeta(3)$ coming up
later, $\alpha$ and $\beta$ are equal, but in the tweaks of the Alladi-
Robinson integrals, $\alpha$ is usually different than $\beta$.
It follows that
$|C-{a_{n}\over b_{n}}|=\Omega({1\over(\alpha\beta)^{n}})\quad.$
Note that $a_{n}$ and $b_{n}$ are, usually, not integers, but rather rational
numbers (in the original Beukers/Apéry cases, the $b_{n}$ were integers, but
the $a_{n}$ were not; in the more general cases in this article, usually
neither of them is an integer).
It so happens, in all the cases that we discovered, that there exists another
sequence of rational numbers $E(n)$ such that
$a^{\prime}_{n}:=a_{n}\,E(n)\quad,\quad b^{\prime}_{n}:=b_{n}\,E(n)\quad,$
are always integers, and, of course
$gcd(a^{\prime}_{n}\,,\,b^{\prime}_{n})=1$. We call $E(n)$ the integer-ating
factor.
In some cases we were able to conjecture $E(n)$ exactly, in terms of products
of primes satisfying certain conditions (see below), but in other cases we can
only conjecture that such an explicitly-describable sequence exists.
In either case there exists a real number, that sometimes can be described
exactly, and other times only estimated, let’s call it $\nu$, such that
$\lim_{n\rightarrow\infty}{\log E(n)\over n}\,=\,\nu\quad,$
or, in our notation, $E(n)=\Omega(\,e^{n\nu}\,)$ .
We thus have
$|C-{a^{\prime}_{n}\over
b^{\prime}_{n}}|=\Omega({1\over(\alpha\beta)^{n}})\quad,$
where $b^{\prime}_{n}=\Omega(e^{\nu\,n}\alpha^{n})$. We need a positive
$\delta$ such that
$(e^{\nu\,n}\alpha^{n})^{1+\delta}=(\alpha\beta)^{n}\quad.$
Taking $\log$ (and dividing by $n$) we have
$(\nu+\log\alpha)(1+\delta)=\log\alpha+\log\beta\quad,$
giving
$\delta={\log\beta-\nu\over\log\alpha+\nu}\quad.$
If we are lucky, and $\log\beta>\nu$, then we have $\delta>0$, and an
irrationality proof! Yea! We also, at the same time, have determined an
irrationality measure (see [vdP])
$1+{1\over\delta}\,=\,{\log\alpha+\log\beta\over\log\beta-\nu}\quad.$
If we are unlucky, and $\delta<0$, it is still an exponentially fast way to
compute our constant $C$ to any desired accuracy.
Summarizing: For each specific constant defined by a definite integral, we
need to exhibit
$\bullet$ A second-order recurrence equation for the numerator and denominator
sequences $a_{n}$ and $b_{n}$ that feature in $I(n)=b_{n}C-a_{n}$.
$\bullet$ The initial conditions $a_{0},a_{1}$, $b_{0},b_{1}$, enabling a very
fast computation of many terms of $a_{n},b_{n}$.
$\bullet$ The constants $\alpha$ and $\beta$.
$\bullet$ Exhibit a conjectured integer-ating factor $E(n)$, or else
conjecture that one exists, and find, or estimate (respectively),
$\nu:=\,\lim_{n\rightarrow\infty}{\log E(n)\over n}$ .
$\bullet$ Verify that $\beta>e^{\nu}$ and get (potentially) famous.
The Three Classical Cases
${\bf\log 2}$ ([AR])
$C\,=\,\int_{0}^{1}\,{1\over 1+x}\,dx\,=\,\log 2\quad.$
$I(n)=\int_{0}^{1}{(x(1-x))^{n}\over(1+x)^{n+1}}\,dx\quad.$
Recurrence:
$\left(n+1\right)X\left(n\right)+\left(-6\,n-9\right)X\left(n+1\right)+\left(n+2\right)X\left(n+2\right)\,=0\,\quad.$
$\alpha\,=\,\beta=3+2\sqrt{2}\quad.$
Initial conditions
$a_{0}=0\,,\,a_{1}=2\quad;\quad b_{0}=1\,,\,b_{1}=3\quad.$
Integer-ating factor $E(n)=lcm(1\dots n)$, $\nu=1$.
$\delta={\log\beta-\nu\over\log\alpha+\nu}={\log\beta-1\over\log\alpha+1}={\log(3+2\sqrt{2})-1\over\log(3+2\sqrt{2})+1}=0.276082871862633587\quad.$
Implied irrationality measure: $1+1/\delta=4.622100832454231334\dots$.
$\bf{\zeta(2)}$ ([B])
$C=\int_{0}^{1}\,\int_{0}^{1}\,{1\over 1-xy}\,dx\,dy\,=\,\zeta(2)\quad.$
$I(n)=\int_{0}^{1}\,\int_{0}^{1}\,{(x(1-x)y(1-y))^{n}\over(1-xy)^{n+1}}\,dx\,dy\quad.$
Recurrence:
$-\left(1+n\right)^{2}X\left(n\right)+\left(11\,{n}^{2}+33\,n+25\right)X\left(n+1\right)+\left(2+n\right)^{2}X\left(n+2\right)\,=0\quad.$
$\alpha\,=\,\beta={11\over 2}+{5\sqrt{5}\over 2}\quad.$
Initial conditions
$a_{0}=0\,,\,a_{1}=-5\quad;\quad b_{0}=1\,,\,b_{1}=-3\quad.$
Integer-ating factor $E(n)=lcm(1\dots n)^{2}$, $\nu=2$.
$\delta={\log\beta-\nu\over\log\alpha+\nu}={\log\beta-2\over\log\alpha+2}={\log(11/2+5\sqrt{5}/2)-2\over\log(11/2+5\sqrt{5}/2)+2}=0.09215925473323\dots\quad.$
Implied irrationality measure: $1+1/\delta=11.8507821910523426959528\dots$.
$\bf{\zeta(3)}$ ([B])
$C=\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{1\over
1-z+xyz}\,dx\,dy\,\,dz=\,\zeta(3)\quad.$
$I(n)=\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{(x(1-x)y(1-y)z(1-z))^{n}\over(1-z+xyz)^{n+1}}\,dx\,dy\,dz\quad.$
Recurrence:
$\left(1+n\right)^{3}X\left(n\right)-\left(2\,n+3\right)\left(17\,{n}^{2}+51\,n+39\right)X\left(n+1\right)+\left(n+2\right)^{3}X\left(n+2\right)\,=0\quad.$
$\alpha\,=\,\beta=17+12\,\sqrt{2}\quad.$
Initial conditions
$a_{0}=0\,,\,a_{1}=12\quad;\quad b_{0}=1\,,\,b_{1}=5\quad.$
Integer-ating factor $E(n)=lcm(1\dots n)^{3}$, $\nu=3$.
$\delta={\log\beta-\nu\over\log\alpha+\nu}={\log\beta-3\over\log\alpha+3}={\log(17+12\,\sqrt{2})-3\over\log(17+12\,\sqrt{2})+3}=0.080529431189061685186\dots\quad.$
Implied irrationality measure: $1+1/\delta=13.41782023335376578458\dots$.
Accompanying Maple packages
This article is accompanied by three Maple packages, GenBeukersLog.txt,
GenBeukersZeta2.txt, GenBeukersZeta3.txt all freely available from the front
of this masterpiece
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/beukers.html ,
where one can find ample sample input and output files, which readers are
welcome to extend.
Zudilin’s Tweak of the Beukers $\zeta(2)$ integral to get the Catalan constant
The inspiration for our tweaks came from Wadim Zudilin’s brilliant discovery
[Zu1] that the famous Catalan constant, that may be defined by the innocent-
looking alternating series of the reciprocals of the odd perfect-squares
$C:=1-{1\over 3^{2}}+{1\over 5^{2}}-{1\over
7^{2}}+\dots=\sum_{n=0}^{\infty}{(-1)^{n}\over(2n+1)^{2}}\quad,$
can be written as the double integral
${1\over 8}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{-{1\over 2}}(1-y)^{-{1\over
2}}\over 1-xy}\,dx\,dy\quad.$
This led him to consider the sequence of Beukers-type double-integrals
$I(n)\,=\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{-{1\over 2}}(1-y)^{-{1\over 2}}\over
1-xy}\cdot\left({x(1-x)y(1-y)\over 1-xy}\right)^{n}\,dx\,dy\quad.$
Using the Zeilberger algorithm, Zudilin derived a three term recurrence for
$I(n)$ leading to good diophantine approximations to the Catalan constant,
alas not good enough to prove irrationality. This was elaborated and extended
by Yu. V. Nesterenko [N]. See also [Zu2].
Using the multivariable Almkvist-Zeilberger algorithm we can derive the
recurrence much faster. Using Koutschan’s package [K], it is yet faster.
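For instance, mimicking the MAZ call displayed in the introduction, the recurrence for Zudilin’s integral should be obtainable (this is our hedged sketch of the call, mutatis mutandis, not a quote from [ApaZ]) by typing
MAZ(1, x^(-1/2)*(1-y)^(-1/2)/(1-x*y), x*(1-x)*y*(1-y)/(1-x*y), [x,y], n, N, {})[1];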
Our Tweaks
Inspired by Zudilin’s Beukers-like integral for the Catalan constant, we
decided to use our efficient tools for quickly manufacturing recurrences.
We systematically investigated the following families.
Generalizing the Alladi-Robinson-Like Integral for $\log 2$
Alladi and Robinson [AR] gave a Beukers-style new proof of the irrationality
of $\log 2$ using the elementary fact that
$\log 2\,=\,\int_{0}^{1}\,{1\over 1+x}\,dx\quad,$
and more generally,
${1\over c}\,\log(1+c)\,=\,\int_{0}^{1}\,{1\over 1+cx}\,dx\quad.$
They used the sequence of integrals
$I(n):=\int_{0}^{1}\,{1\over 1+cx}\left({x(1-x)\over
1+cx}\right)^{n}\,dx\quad,$
and proved that for a wide range of choices of rational $c$, this leads to
irrationality proofs and irrationality measures (see also [ZeZu1]).
Our generalized version is the three-parameter family of constants
$I_{1}(a,b,c):={1\over B(1+a,1+b)}\,\int_{0}^{1}\,{x^{a}(1-x)^{b}\over
1+cx}\,dx$
that is easily seen to equal ${}_{2}F_{1}(1,a+1;a+b+2;-c)$.
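As a quick numeric sanity check of this evaluation (our own throwaway snippet, not part of the packages), one can compare the two sides in Maple for, say, $a=1/3$, $b=1/2$, $c=1/2$:
aa:=1/3: bb:=1/2: cc:=1/2:
evalf(Int(x^aa*(1-x)^bb/(1+cc*x), x=0..1)/Beta(1+aa,1+bb));
evalf(hypergeom([1,aa+1],[aa+bb+2],-cc));   # the two outputs should agree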
We use the sequence of integrals
$I_{1}(a,b,c)(n):=\,{1\over B(1+a,1+b)}\,\int_{0}^{1}\,{x^{a}(1-x)^{b}\over
1+cx}\cdot\left({x(1-x)\over 1+cx}\right)^{n}\,dx\quad.$
Using the (original!) Almkvist-Zeilberger algorithm [AlZ], implemented in the
Maple package
https://sites.math.rutgers.edu/~zeilberg/tokhniot/EKHAD.txt ,
we immediately get a second-order recurrence that can be gotten by typing
‘OpeL(a,b,c,n,N);’ in the Maple package
https://sites.math.rutgers.edu/~zeilberg/tokhniot/GenBeukersLog.txt .
This enabled us to conduct a systematic search, and we found many cases of
${}_{2}F_{1}$ evaluations that lead to irrationality proofs, i.e. for which
the $\delta$ mentioned above is positive. Many of them turned out to be
(conjecturally) expressible in terms of algebraic numbers and/or logarithms of
rational numbers, hence proving them irrational is not that exciting, but we
have quite a few not-yet-identified (and inequivalent) cases. See the output
file
https://sites.math.rutgers.edu/~zeilberg/tokhniot/oGenBeukersLog1.txt ,
for many examples. Whenever Maple was able to (conjecturally) identify a
constant explicitly, this is mentioned. If nothing is mentioned, then these
are, potentially, new constants, expressible as hypergeometric ${}_{2}F_{1}$
series, for which this would be the first irrationality proof, once the
details are filled in.
We also considered the four-parameter family of constants
$I^{\prime}_{1}(a,b,c,d):={\int_{0}^{1}\,{x^{a}(1-x)^{b}\over(1+cx)^{d+1}}\,dx\over\int_{0}^{1}\,{x^{a}(1-x)^{b}\over(1+cx)^{d}}\,dx}\quad,$
and, using the more general recurrence, also obtained using the Almkvist-
Zeilberger algorithm (to see it type ‘OpeLg(a,b,c,d,n,Sn);’ in
GenBeukersLog.txt), found many candidates for irrationality proofs that Maple
was unable to identify. See the output file
https://sites.math.rutgers.edu/~zeilberg/tokhniot/oGenBeukersLog2.txt .
Generalizing the Beukers Integral for $\zeta(2)$
Define
$I_{2}(a_{1},a_{2},b_{1},b_{2})(n)\,=\,{1\over
B(1-a_{1},1-a_{2})B(1-b_{1},1-b_{2})}\cdot$
$\int_{0}^{1}\,\int_{0}^{1}\,{x^{-a_{1}}(1-x)^{-a_{2}}y^{-b_{1}}(1-y)^{-b_{2}}\over
1-xy}\cdot\left({x(1-x)y(1-y)\over 1-xy}\right)^{n}\,dx\,dy\quad,$
that happens to satisfy a linear-recurrence equation of second order, yielding
Diophantine approximations to the constant
$I_{2}(a_{1},a_{2},b_{1},b_{2})(0)$, let’s call it
$C_{2}(a_{1},a_{2},b_{1},b_{2})$
$C_{2}(a_{1},a_{2},b_{1},b_{2})\,=\,{1\over
B(1-a_{1},1-a_{2})B(1-b_{1},1-b_{2})}\cdot\int_{0}^{1}\,\int_{0}^{1}\,{x^{-a_{1}}(1-x)^{-a_{2}}y^{-b_{1}}(1-y)^{-b2}\over
1-xy}\,dx\,dy\quad.$
It is readily seen that
$C_{2}(a_{1},a_{2},b_{1},b_{2})\,=\,{}_{3}F_{2}\left({{1\,,\,1-a_{1}\,,\,-b_{1}+1}\atop{2-a_{1}-a_{2}\,,\,2-b_{1}-b_{2}}}\,;1\,\right)\quad.$
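Again, this identity can be spot-checked numerically; the following snippet (ours, independent of the packages) does it for $(a_{1},a_{2},b_{1},b_{2})=(0,0,{1\over 2},0)$, whose value $2\log 2$ appears below (the numeric double integral may take a moment):
a1:=0: a2:=0: b1:=1/2: b2:=0:
evalf(Int(Int(x^(-a1)*(1-x)^(-a2)*y^(-b1)*(1-y)^(-b2)/(1-x*y),
x=0..1), y=0..1)/(Beta(1-a1,1-a2)*Beta(1-b1,1-b2)));
evalf(hypergeom([1,1-a1,1-b1],[2-a1-a2,2-b1-b2],1)); evalf(2*ln(2));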
Most choices of random $a_{1},a_{2},b_{1},b_{2}$ yield disappointing, negative
$\delta$’s, just like $C_{2}({1\over 2},0,0,{1\over 2})$ (alias twice the
Catalan constant: the Beta-function normalization absorbs a factor of $4$ from
Zudilin’s $8$), but a systematic search yielded several hundred candidates
that produce positive $\delta$’s and hence would produce irrationality proofs.
Alas, many of them were conjecturally equivalent to each other via a
fractional-linear transformation with integer coefficients,
$C\rightarrow{a+bC\over c+dC}$, with $a,b,c,d$ integers, hence the facts that
they are irrational are equivalent. Nevertheless we found quite a few that are
(conjecturally) not equivalent to each other. Modulo filling-in some details,
they lead to irrationality proofs. Amongst them, some were (conjecturally)
identified by Maple to be either algebraic, or logarithms of rational numbers,
for which irrationality proofs have existed for thousands of years (in the
case of $\sqrt{2}$ and $\sqrt{3}$, etc.), or a few hundred years (in the case
of $\log 2$, etc.).
But some of them Maple was unable to identify, so potentially our (sketches
of) proofs would be the first irrationality proofs.
Beukers $\zeta(2)$ Tweaks That produced Irrationality Proofs with Identified
Constants
Denominator 2
We first searched for $C_{2}(a_{1},a_{2},b_{1},b_{2})$ where the parameters
$a_{1},a_{2},b_{1},b_{2}$ have denominator $2$, there were quite a few of
them, but they were all conjecturally equivalent to each other. Here is one of
them:
$\bullet$ $C_{2}(0,0,{1\over 2},0)={}_{3}F_{2}(1,1,1/2;2,3/2;1)$, alias $2\log
2$.
Denominator 3
There were also quite a few where the parameters $a_{1},a_{2},b_{1},b_{2}$
have denominator $3$, but again they were all equivalent to each other,
featuring $\pi\sqrt{3}$. Here is one of them.
$\bullet$ $C_{2}(0,0,{1\over 3},-{2\over 3})={}_{3}F_{2}(1,1,2/3;2,7/3;1)$,
alias (conjecturally) $-6+4\pi\sqrt{3}/3$.
Denominator 4
There were also quite a few where the parameters $a_{1},a_{2},b_{1},b_{2}$
have denominator $4$, but again they were all equivalent to each other,
featuring $\sqrt{2}$, yielding a new proof of the irrationality of $\sqrt{2}$
(for what it is worth). Here is one of them.
$\bullet$ $C_{2}(-{3\over 4},-{3\over 4},-{1\over 4},-{3\over
4})={}_{3}F_{2}(1,7/4,5/4;7/2,3;1)$, alias (conjecturally) $-240\,+\,{512\over
3}\,\sqrt{2}$.
Denominator 5
There were also quite a few where the parameters $a_{1},a_{2},b_{1},b_{2}$
have denominator $5$, but again they were all equivalent to each other,
featuring $\sqrt{5}$, yielding a new proof of the irrationality of $\sqrt{5}$
(for what it is worth). Here is one of them.
$\bullet$ $C_{2}(-{4\over 5},-{4\over 5},-{2\over 5},-{3\over
5})={}_{3}F_{2}(1,9/5,7/5;18/5,3;1)$, alias (conjecturally) $-{845\over
2}\,+\,{2275\over 12}\,\sqrt{5}$
Denominator 6 with identified constants
We found two equivalence classes where the parameters
$a_{1},a_{2},b_{1},b_{2}$ have denominator $6$, for which the constants were
identified. Here are one from each class.
$\bullet$ $C_{2}(-5/6,-5/6,-1/2,-1/2)={}_{3}F_{2}(1,11/6,3/2;11/3,3;1)$, alias
(conjecturally) $-{{1344\over 5}}+{{16384\,\sqrt{3}\over 105}}$
$\bullet$ $C_{2}(-5/6,-5/6,-1/3,-2/3)={}_{3}F_{2}(1,11/6,4/3;11/3,3;1)$, alias
(conjecturally) ${{972\,{2}^{2/3}\over 5}}-{{1536\over 5}}$
Denominator 7 with identified constants
We found two cases where the parameters $a_{1},a_{2},b_{1},b_{2}$ have
denominator $7$, for which the constants were identified.
$\bullet$ $C_{2}(-6/7,-6/7,-4/7,-3/7)={}_{3}F_{2}(1,13/7,11/7;26/7,3;1)$,
alias (conjecturally) the positive root of
$13824\,{x}^{3}-2757888\,{x}^{2}-10737789048\,x+16108505539=0$ .
$\bullet$ $C_{2}(-6/7,-1/7,4/7,2/7)={}_{3}F_{2}(1,13/7,3/7;3,8/7;1)$, alias
(conjecturally) the positive root of
$2299968\,{x}^{3}+7074144\,{x}^{2}-11234916\,x-12663217=0$
Beukers $\zeta(2)$ Tweaks That produced Irrationality Proofs with Not-Yet-
Identified Constants (and Hence Candidates for First Irrationality Proofs)
Maple was unable to identify the following constants, so we have, potentially,
the first irrationality proofs of these constants.
Denominator 6 with not yet identified constants
We found two cases (up to equivalence):
$\bullet$ $C_{2}(0,-1/2,1/6,-1/2)={}_{3}F_{2}(1,1,5/6;5/2,7/3;1)$
While Maple was unable to identify this constant, Mathematica came up with
$-24\,-\,{81\sqrt{\pi}\Gamma(7/3)\over\Gamma(-1/6)}$.
$\bullet$ $C_{2}(-2/3,-1/2,1/2,-1/2)={}_{3}F_{2}(1,5/3,1/2;19/6,2;1)$
While Maple was unable to identify this constant, Mathematica came up with
${13\over 2}\,-\,{6\Gamma(19/6)\over\sqrt{\pi}\Gamma(8/3)}$.
Denominator 7 with not yet identified constants
We found six cases (up to equivalence):
$\bullet$ $C_{2}(-6/7,-6/7,-4/7,-5/7)={}_{3}F_{2}(1,13/7,11/7;26/7,23/7;1)$
$\bullet$ $C_{2}(-6/7,-5/7,-3/7,-5/7)={}_{3}F_{2}(1,13/7,10/7;25/7,22/7;1)$
$\bullet$ $C_{2}(-6/7,-5/7,-2/7,-1/7)={}_{3}F_{2}(1,13/7,9/7;25/7,17/7;1)$
$\bullet$ $C_{2}(-6/7,-4/7,-1/7,-1/7)={}_{3}F_{2}(1,13/7,8/7;24/7,16/7;1)$
$\bullet$ $C_{2}(-6/7,-3/7,-5/7,-3/7)={}_{3}F_{2}(1,13/7,12/7;23/7,22/7;1)$
$\bullet$ $C_{2}(-5/7,-3/7,-4/7,-2/7)={}_{3}F_{2}(1,12/7,11/7;22/7,20/7;1)$
For each of them, to get the corresponding theorem and proof, use procedure
TheoremZ2 in the Maple package GenBeukersZeta2.txt.
To get a statement and full proof (modulo a divisibility lemma) type, in
GenBeukersZeta2.txt,
TheoremZ2(a1,a2,b1,b2,K,0):
with K at least $2000$. For example, for the last constant in the above list,
${}_{3}F_{2}(1,12/7,11/7;22/7,20/7;1)$, type
TheoremZ2(-5/7, -3/7, -4/7, -2/7, 3000, 0):
For more details (the recurrences, the estimated irrationality measures, the
initial conditions) see the output file
https://sites.math.rutgers.edu/~zeilberg/tokhniot/oGenBeukersZeta2g.txt .
Generalizing the Beukers Integral for $\zeta(3)$
The natural extension would be the six-parameter family (but now we make the
exponents positive)
${1\over B(1+a_{1},1+a_{2})B(1+b_{1},1+b_{2})B(1+c_{1},1+c_{2})}\cdot$
$\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{a_{1}}(1-x)^{a_{2}}y^{b_{1}}(1-y)^{b_{2}}z^{c_{1}}(1-z)^{c_{2}}\over
1-z+xyz}\cdot\left({x(1-x)y(1-y)z(1-z)\over
1-z+xyz}\right)^{n}\,dx\,dy\,dz\quad.$
However, for arbitrary $a_{1},a_{2},b_{1},b_{2},c_{1},c_{2}$ the recurrence is
third order. (Wadim Zudilin pointed out that this may be related to the work
of Rhin and Viola in [RV]).
Also, empirically, we did not find many promising cases. Instead, let’s define
$J_{3}(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2};e)(n):=$
$\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{a_{1}}(1-x)^{a_{2}}y^{b_{1}}(1-y)^{b_{2}}z^{c_{1}}(1-z)^{c_{2}}\over(1-z+xyz)^{e}}\cdot\left({x(1-x)y(1-y)z(1-z)\over
1-z+xyz}\right)^{n}\,dx\,dy\,dz\quad,$
and
$I_{3}(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2};e)(n):={J_{3}(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2};e+1)(n)\over
J_{3}(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2};e)(0)}$
The family of constants whose members we hope to prove irrational is then
$I_{3}(a_{1},a_{2},b_{1},b_{2},c_{1},c_{2};e)(0)$
$={\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{a_{1}}(1-x)^{a_{2}}y^{b_{1}}(1-y)^{b_{2}}z^{c_{1}}(1-z)^{c_{2}}\over(1-z+xyz)^{e+1}}\,dx\,dy\,dz\over\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{a_{1}}(1-x)^{a_{2}}y^{b_{1}}(1-y)^{b_{2}}z^{c_{1}}(1-z)^{c_{2}}\over(1-z+xyz)^{e}}\,dx\,dy\,dz}\quad.$
Of course, for this more general, $7$-parameter, family, there is no second-
order recurrence, but rather a third-order one. But to our delight, we found a
five-parameter family, let’s call it
$K(a,b,c,d,e)(n):=I_{3}(b,c,e,a,a,c,d)(n)\quad.$
Spelled-out, our five-parameter family of constants is
$K(a,b,c,d,e)(0)=$
${\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{b}(1-x)^{c}y^{e}(1-y)^{a}z^{a}(1-z)^{c}\over(1-z+xyz)^{d+1}}\,dx\,dy\,dz\over\int_{0}^{1}\,\int_{0}^{1}\,\int_{0}^{1}\,{x^{b}(1-x)^{c}y^{e}(1-y)^{a}z^{a}(1-z)^{c}\over(1-z+xyz)^{d}}\,dx\,dy\,dz}\quad.$
Now we found (see the section on finding recurrences below) a general second-
order recurrence, which is too complicated to display here in full generality,
but can be seen by typing
OPEZ3(a,b,c,d,e,n,Sn);
in the Maple package GenBeukersZeta3.txt. This enabled us, for each specific,
numeric specialization of the parameters $a,b,c,d,e$ to quickly find the
relevant recurrence, and systematically search for those that give positive
$\delta$. Once again, many of them turned out to be (conjecturally) equivalent
to each other.
Denominator 2:
We only found one class, up to equivalence, all related to $\log 2$. One of
them is
$K(0,0,0,1/2,1/2)=I_{3}(0,0,1/2,0,0,0,1/2)\quad,$
which is not that exciting, since it is (conjecturally) equal to
$-{{2-4\,\log\left(2\right)\over 3-4\,\log\left(2\right)}}$.
For details, type TheoremZ3(0,0,0,1/2,1/2,3000,0); in GenBeukersZeta3.txt .
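A direct, purely numeric spot-check of this conjectured evaluation (our own sketch, not part of GenBeukersZeta3.txt; the triple numerical integrals are slow):
F:=d->evalf(Int(Int(Int(sqrt(y)/(1-z+x*y*z)^d, x=0..1), y=0..1), z=0..1)):
F(3/2)/F(1/2);                     # the constant K(0,0,0,1/2,1/2)
evalf(-(2-4*ln(2))/(3-4*ln(2)));   # conjectured closed form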
Denominator 3:
We found three inequivalent classes, none of them Maple was able to identify.
$K(0,0,0,1/3,2/3)=I_{3}(0,0,2/3,0,0,0,1/3)\quad,$
for details, type TheoremZ3(0,0,0,1/3,2/3,3000,0); in GenBeukersZeta3.txt.
$K(0,0,0,2/3,1/3)=I_{3}(0,0,1/3,0,0,0,2/3)\quad,$
for details, type TheoremZ3(0,0,0,2/3,1/3,3000,0); in GenBeukersZeta3.txt.
$K(0,1/3,2/3,1/3,2/3)=I_{3}(0,0,1/3,0,0,0,2/3)\quad,$
for details, type TheoremZ3(0,1/3,2/3,1/3,2/3,3000,0); in GenBeukersZeta3.txt,
These three constants are candidates for ‘first-ever-irrationality proof’.
Denominator 4: We only found one family, all expressible in terms of $\log 2$.
For example,
$K(0,1/2,0,1/4,3/4)=I_{3}(1/2,0,3/4,0,0,0,1/4)\quad,$
which conjecturally equals
$-{{-30+45\,\log\left(2\right)\over-11+15\,\log\left(2\right)}}$.
For details, type TheoremZ3(0,1/2,0,1/4,3/4,3000,0); in GenBeukersZeta3.txt.
Denominator 5: We only found one family, up to equivalence, but Maple was
unable to identify the constant. So it is potentially the first irrationality
proof of that constant
$K(0,1/5,0,3/5,2/5)=I_{3}(1/5,0,2/5,0,0,0,3/5)\quad.$
For details, type TheoremZ3(0,1/5,0,3/5,2/5,3000,0); in GenBeukersZeta3.txt.
Denominator 6: We found three families, up to equivalence, none of which Maple
was able to identify. Once again, these are candidates for first-ever
irrationality proofs for these constants.
$K(0,1/2,1/2,1/3,1/6)=I_{3}(1/2,1/2,1/6,0,0,1/2,1/3)\quad.$
For details, type TheoremZ3(0,1/2,1/2,1/3,1/6,3000,0); in GenBeukersZeta3.txt.
$K(0,1/2,1/2,1/6,1/3)=I_{3}(1/2,1/2,1/3,0,0,1/2,1/6)\quad.$
For details, type TheoremZ3(0,1/2,1/2,1/6,1/3,3000,0); in GenBeukersZeta3.txt.
$K(1/3,0,2/3,1/2,5/6)=I_{3}(0,2/3,5/6,1/3,1/3,2/3,1/2)\quad.$
For details, type TheoremZ3(1/3,0,2/3,1/2,5/6,3000,0); in GenBeukersZeta3.txt.
Denominator 7: We found five families, up to equivalence, none of which Maple
was able to identify. Once again, these are candidates for first-ever
irrationality proofs for these constants.
$K(1/7,0,2/7,3/7,4/7)=I_{3}(0,2/7,4/7,1/7,1/7,2/7,3/7)\quad.$
For details, type TheoremZ3(1/7,0,2/7,3/7,4/7,3000,0); in GenBeukersZeta3.txt.
$K(1/7,0,2/7,5/7,3/7)=I_{3}(0,2/7,3/7,1/7,1/7,2/7,5/7)\quad.$
For details, type TheoremZ3(1/7,0,2/7,5/7,3/7,3000,0); in GenBeukersZeta3.txt.
$K(1/7,0,3/7,4/7,5/7)=I_{3}(0,3/7,5/7,1/7,1/7,3/7,4/7)\quad.$
For details, type TheoremZ3(1/7,0,3/7,4/7,5/7,3000,0); in GenBeukersZeta3.txt.
$K(1/7,0,4/7,2/7,5/7)=I_{3}(0,4/7,5/7,1/7,1/7,4/7,2/7)\quad.$
For details, type TheoremZ3(1/7,0,4/7,2/7,5/7,3000,0); in GenBeukersZeta3.txt.
$K(2/7,0,3/7,4/7,5/7)=I_{3}(0,3/7,5/7,2/7,2/7,3/7,4/7)\quad.$
For details, type TheoremZ3(2/7,0,3/7,4/7,5/7,3000,0); in GenBeukersZeta3.txt.
If you don’t have Maple, you can look at the output file
https://sites.math.rutgers.edu/~zeilberg/tokhniot/oGenBeukersZeta3All.txt ,
that gives detailed sketches of irrationality proofs of all the above
constants, some with conjectured integer-ating factors.
Guessing an INTEGER-ating factor
In the original Beukers cases the integer-ating factor was easy to conjecture,
and even to prove. For $\zeta(2)$ it was $lcm(1\dots n)^{2}$, and for
$\zeta(3)$ it was $lcm(1\dots n)^{3}$. For the Alladi-Robinson case of $\log
2$ it was even simpler, $lcm(1\dots n)$.
But in other cases it is much more complicated. A natural ‘atomic’ object is,
given a modulus $M$, a subset $C$ of $\{0,\dots,M-1\}$, rational numbers
$e_{1}$, $e_{2}$ between $0$ and $1$, and rational numbers $e_{3},e_{4}$, the
following quantity, for positive integers $n$ (a Maple sketch of it appears
right after the three conditions below):
$Pp(e_{1},e_{2},e_{3},e_{4},C,M;n):=\prod_{p}p\quad,$
where $p$ ranges over all primes such that (let $\{a\}$ denote the fractional
part of $a$, i.e. $a-\lfloor a\rfloor$)
$\bullet$ $e_{1}<\{n/p\}<e_{2}$
$\bullet$ $e_{3}<p/n<e_{4}$
$\bullet$ $p\,\,mod\,\,M\in C$
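Here is the promised sketch (our illustrative code, independent of the accompanying packages, which compute the asymptotics via PpGlimit rather than the product itself):
Pp:=proc(e1,e2,e3,e4,C,M,n) local p,P,f:
P:=1: p:=2:
while p/n < e4 do
 f:=frac(n/p):                      # the fractional part {n/p}
 if e1 < f and f < e2 and e3 < p/n and member(p mod M, C) then P:=P*p fi:
 p:=nextprime(p):
od: P: end:
# for example: Pp(0, 1/2, 0, 1, {1,3}, 4, 100);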
Using the prime number theorem, it follows (see e.g. [Zu2]) that
$\lim_{n\rightarrow\infty}{\log Pp(e_{1},e_{2},e_{3},e_{4},C,M;n)\over
n}\quad,$
can be evaluated exactly, in terms of the function
$\Psi(x)={\Gamma^{\prime}(x)\over\Gamma(x)}$ (see procedure PpGlimit in the
Maple packages) thereby giving an exact value for the quantity $\delta$ whose
positivity implies irrationality.
Of course, one still needs to rigorously prove that the conjectured integer-
ating factor is indeed correct.
Looking under the hood: On Recurrence Equations
For ‘secrets from the kitchen’ on how we found the second-order, four-
parameter recurrence operator OPEZ2(a1,a2,b1,b2,n,N) in the Maple package
GenBeukersZeta2.txt, that was the engine driving the $\zeta(2)$ tweaks, and
more impressively, the five-parameter second-order recurrence operator
OPEZ3(a,b,c,d,e,n,N) in the Maple package GenBeukersZeta3.txt, that was the
engine driving the $\zeta(3)$ tweaks, the reader is referred to the stand-
alone appendix available from the following url:
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/beukersAppendix.pdf
.
Other Variations on Apéry’s theme
Other attempts to use Apéry’s brilliant insight are [Ze2][Ze3][ZeZu1].
Recently Marc Chamberland and Armin Straub [CS] explored other fascinating
aspects of the Apéry numbers, not related to irrationality.
Conclusion and Future Work
We believe that symbolic computational methods have great potential in
irrationality proofs, in particular, and number theory in general. In this
article we confined attention to approximating sequences that arise from
second-order recurrences. The problem with higher order recurrences is that
one gets linear combinations with rational coefficients of several constants,
but if you can get two different such sequences coming from third-order
recurrences, both featuring the same two constants, then the present method
may be applicable. More generally, if you have a $k$-th order recurrence, you
need $k-1$ different integrals.
The general methodology of this article can be called Combinatorial Number
Theory, not in the usual sense, but rather as an analog of Combinatorial
Chemistry, where one tries out many potential chemical compounds, most of them
useless; but since computers are so fast, we can afford to generate lots of
cases and separate the wheat from the chaff.
Encore: Hypergeometric challenges
As a tangent, we (or rather Maple) discovered many exact ${}_{3}F_{2}(1)$
evaluations. Recall that the Zeilberger algorithm can prove hypergeometric
identities only if there is at least one free parameter. For a specific
${}_{3}F_{2}(a_{1}\,a_{2}\,a_{3}\,;b_{1}\,b_{2};1)$, with numeric parameters,
it is useless. Of course, it is sometimes possible to introduce such a
parameter in order to conjecture a general identity, valid for ‘infinitely’
many $n$, and then specialize $n$ to a specific value, but this remains an art
rather than a science. The output file
https://sites.math.rutgers.edu/~zeilberg/tokhniot/oGenBeukersZeta2f.txt
contains many such conjectured evaluations, (very possibly many of them are
equivalent via a hypergeometric transformation rule) and we challenge Wadim
Zudilin, the birthday boy, or anyone else, to prove them.
References
[AR] Krishna Alladi and Michael L. Robinson, Legendre polynomials and
irrationality, J. Reine Angew. Math. 318 (1980), 137-155.
[AlZ] Gert Almkvist and Doron Zeilberger, The method of differentiating under
the integral sign, J. Symbolic Computation 10, 571-591 (1990).
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/duis.html .
[ApaZ] Moa Apagodu and Doron Zeilberger, Multi-variable Zeilberger and
Almkvist-Zeilberger algorithms and the sharpening of Wilf-Zeilberger Theory ,
Adv. Appl. Math. 37 (2006)(Special Regev issue), 139-152.
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/multiZ.html .
[Ape] Roger Apéry, “Interpolation de fractions continues et irrationalité de
certaines constantes”, Bulletin de la section des sciences du C.T.H.S. #3, p.
37-53, 1981.
[B] Frits Beukers, A note on the irrationality of $\zeta(2)$ and $\zeta(3)$,
Bull. London Math. Soc. 11 (1979), 268-272.
[CS] Marc Chamberland and Armin Straub, Apéry limits: Experiments and Proofs,
arXiv:2011.03400v1, 6 Nov 2020. https://arxiv.org/abs/2011.03400 .
[H] Professor David Hilbert, Mathematical Problems [Lecture delivered before
the International Congress of Mathematicians at Paris in 1900], translated by
Dr. Mary Winston Newson, Bulletin of the American Mathematical Society 8
(1902), 437-479.
https://www.ams.org/journals/bull/2000-37-04/S0273-0979-00-00881-8/S0273-0979-00-00881-8.pdf
.
[K] Christoph Koutschan, Advanced applications of the holonomic systems
approach, PhD thesis, Research Institute for Symbolic Computation (RISC),
Johannes Kepler University, Linz, Austria, 2009.
http://www.koutschan.de/publ/Koutschan09/thesisKoutschan.pdf,
http://www.risc.jku.at/research/combinat/software/HolonomicFunctions/ .
[N] Yu. V. Nesterenko, On Catalan’s constant, Proceedings of the Steklov
Institute of Mathematics 292 (2016), 153-170.
[vdP] Alf van der Poorten, A proof that Euler missed… Apéry’s proof of the
irrationality of $\zeta(3)$, Math. Intelligencer 1 (1979), 195-203.
[RV] Georges Rhin and Carlo Viola, The group structure of $\zeta(3)$, Acta
Arithmetica, 97(2001), 269-293.
[Ze1] Doron Zeilberger, A Holonomic systems approach to special functions
identities, J. of Computational and Applied Math. 32, 321-368 (1990).
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/holonomic.html .
[Ze2] Doron Zeilberger, Computerized deconstruction, Adv. Applied Math. 30
(2003), 633-654.
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/derrida.html .
[Ze3] Doron Zeilberger, Searching for Apéry-style miracles [using, inter-alia,
the amazing Almkvist-Zeilberger algorithm], Personal Journal of Shalosh B.
Ekhad and Doron Zeilberger,
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/apery.html .
[ZeZu1] Doron Zeilberger, and Wadim Zudilin, Automatic discovery of
irrationality proofs and irrationality measures, International Journal of
Number Theory , published on-line before print, volume and page tbd. Also to
appear in a book dedicated to Bruce Berndt.
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/gat.html .
[ZeZu2] Doron Zeilberger, and Wadim Zudilin, The irrationality measure of Pi
is at most 7.103205334137…, Moscow J. of Combinatorics and Number Theory 9
(2020), 407-419.
https://sites.math.rutgers.edu/~zeilberg/mamarim/mamarimhtml/pimeas.html .
[Zu1] Wadim Zudilin, Apéry-like difference equations for Catalan’s constant
https://arxiv.org/abs/math/0201024 .
[Zu2] Wadim Zudilin, Arithmetic of linear forms involving odd zeta values, J.
Théorie Nombres Bordeaux 16 (2004), 251-291.
https://arxiv.org/abs/math/0206176 .
Robert Dougherty-Bliss, Department of Mathematics, Rutgers University (New
Brunswick), Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway, NJ
08854-8019, USA. Email: Robert.w.Bliss at gmail dot com .
Christoph Koutschan, Johann Radon Institute of Computational and Applied
Mathematics (RICAM), Austrian Academy of Sciences, Altenberger Strasse 69,
A-4040 Linz, Austria Email: christoph.koutschan at ricam dot oeaw dot ac dot
at .
Doron Zeilberger, Department of Mathematics, Rutgers University (New
Brunswick), Hill Center-Busch Campus, 110 Frelinghuysen Rd., Piscataway, NJ
08854-8019, USA. Email: DoronZeil at gmail dot com .
# Chest X-ray lung and heart segmentation based on minimal training sets
Balázs Maga Eötvös Loránd University, Pázmány Péter sétány 1/C, Budapest,
H-1117 Hungary, email<EMAIL_ADDRESS>
###### Abstract
As the COVID-19 pandemic aggravated the excessive workload of doctors
globally, the demand for computer aided methods in medical imaging analysis
increased even further. Such tools can result in more robust diagnostic
pipelines which are less prone to human errors. In our paper, we present a
deep neural network to which we refer as Attention BCDU-Net, and apply it
to the task of lung and heart segmentation from chest X-ray (CXR) images, a
basic but arduous step in the diagnostic pipeline, for instance for the
detection of cardiomegaly. We show that the fine-tuned model exceeds previous
state-of-the-art results, reaching $98.1\pm 0.1\%$ Dice score and $95.2\pm
0.1\%$ IoU score on the dataset of Japanese Society of Radiological Technology
(JSRT). Besides that, we demonstrate the relative simplicity of the task by
attaining surprisingly strong results with training sets of size 10 and 20: in
terms of Dice score, $97.0\pm 0.8\%$ and $97.3\pm 0.5\%$, respectively, while in
terms of IoU score, $92.2\pm 1.2\%$ and $93.3\pm 0.4\%$, respectively. To
achieve these scores, we capitalize on the mixup augmentation technique, which
yields a remarkable gain of above $4\%$ in IoU score in the size-10 setup.
## 1 Introduction
All around the world, a plethora of radiographic examinations are performed
day to day, producing images using different imaging modalities such as X-ray,
computed tomography (CT), diagnostic ultrasound and magnetic resonance imaging
(MRI). According to the publicly available, official data of the National
Health Service ([13]), in the period from February 2017 to February 2018, the
count of imaging activity was about 41 million in England alone. The thorough
examination of the vast quantity of these images imposes a huge workload on
radiologists, which increases the number of avoidable human mistakes.
Consequently, automated methods aiding the diagnostic processes are sought-
after.
The examination of medical images customarily includes various segmentation
tasks, in which detecting and pixelwise annotating different tissues and
certain anomalies are vital. Common examples include lung nodule segmentation
in the diagnosis of lung cancer, lung and heart segmentation in the diagnosis
of cardiomegaly, or plaque segmentation in the diagnosis of thrombosis. Even
in the case of 2-dimensional modalities, such segmentation tasks can be
extremely time-demanding, and the situation gets even worse in three
dimensions. Taking into consideration that these tasks are easier to formalize
as a standard computer vision exercise than the identification of a particular
disease, it is not surprising that they sparked much activity in the field of
automated medical imaging analysis.
Semantic segmentation – that is assigning a pre-defined class to each pixel of
an image – requires a high level of visual understanding, in which state-of-
the-art performance is attained by methods utilizing Fully Convolutional
Networks (FCN) [4]. An additional challenge of the field is posed by the
strongly limited quantity of training data on which one train machine learning
models, as annotating medical imaging requires specialists in contrast to
“real-life” images. To overcome this difficulty, the so-called U-Net
architecture was proposed: its capability to being efficiently trained on
small datasets has been demonstrated in [5]. Over the past few years several
modifications and improvements have been proposed on the original
architecture, some of which involved different attention mechanisms designed
to help the network to detect the important parts of the images.
In the present paper we introduce a new network, primarily based on the ideas
of [12] and [8], to which we refer as Attention BCDU-Net. We optimize its
performance through hyperparameter tests on the depth of the architecture and
the loss function, and we demonstrate the superiority of the resulting model
compared to the state-of-the-art network presented in [15] in the task of lung
and heart segmentation on chest X-rays. Besides that, we will also give an
insight into two interesting phenomena arising during our research which might
be interesting for the AI medical imaging community: one of them is the very
small data requirement of this particular task, while the other one is the
peculiar evolution of the loss curves during training.
## 2 Deep learning approach
### 2.1 Related work
As already mentioned in Section 1, [5] introducing U-Nets is of paramount
importance in the field. Since then U-Nets have been used to cope with diverse
medical segmentation tasks, and numerous papers aimed to design U-Net variants
and mechanisms such that the resulting models better tackle the problem
considered. Some of these paid primary attention to the structure of the
encoder and the decoder – that is the downsampling and the upsampling path –
of the original architecture. For example in [18], the authors developed a
network (CoLe-CNN) with multiple decoder branches and an Inception-v4-inspired
encoder to achieve state-of-the-art results in 2-dimensional lung nodule
segmentation. In [10] and [14], the authors introduced U-Net++, a network
equipped with intermediate upsampling paths and additional convolutional
layers, leading to essentially an efficient ensemble of U-Nets of varying
depths, and demonstrated its superiority compared to the standard U-Net in
many image domains. Other works put emphasis on the design of skip connections
and the way the higher resolution semantic information joins the features
coming through the upsampling branch. In [12], the authors proposed the
architecture BCDU-Net, in which instead of the simple concatenation of the
corresponding filters, the features of different levels are fused using a
bidirectional ConvLSTM layer, which introduces nonlinearity into the model at
this point and makes more precise segmentations available. In [8] it has been
shown that for medical image analysis tasks the integration of so-called
Attention Gates (AGs) improved the accuracy of the segmentation models, while
preserving computational efficiency. In [15], this network was enhanced by a
critic network in a GAN-like scheme following [9], and achieved state-of-the-
art results in the task of lung and heart segmentation. Other attention
mechanisms were introduced in [17] and in [16].
### 2.2 Our proposal
The network architecture Attention BCDU-Net we propose is a modification of
the Attention U-Net, shown in Figure 1.
Figure 1: Schematic architecture of Attention U-Net [8].
Figure 2: Schematic figure of the attention gate used in Attention U-Net [8];
the tensor addition to be altered is highlighted by an arrow.
In [12], the authors demonstrated that it is beneficial to use bidirectional
ConvLSTM layers to introduce nonlinearity in the step of merging semantic
information gained through skip connections and the features arriving through
the decoder. This inspired us to modify the attention gates (see Figure 2) in
a similar manner, in which these pieces of information are merged via tensor
addition, which is a linear operation as well. This addition is replaced by a
bidirectional ConvLSTM layer, to which the outputs of $W_{g}$ and $W_{x}$ – the
processed features and the semantic information, respectively – are fed in this
order. We note that, to the best of our knowledge, there is a slight ambiguity about
the structure of the resampling steps in the attention gate: while the
official implementation is in accordance with the figure, there are widely
utilized implementations in which the output of $W_{g}$ is upsampled instead
of the output of $W_{x}$ being downsampled in order to match their shapes. We tested
both solutions and did not experience a measurable difference in the
performance. We also experimented with the usage of additional spatial and
channel attention layers as proposed by [17]; however, we found that they do
not improve the performance of our model.
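To make the modification concrete, the following is a minimal Keras sketch of such a gate. It is an illustration rather than our exact implementation: the $1\times 1$ convolutions for $W_{g}$ and $W_{x}$, the stride-2 downsampling, the kernel sizes and the stacking of the two feature maps into a length-2 sequence for the bidirectional ConvLSTM are all assumed choices.
```python
import tensorflow as tf
from tensorflow.keras import layers

def bconvlstm_attention_gate(x, g, inter_channels):
    """x: skip-connection features; g: gating signal from the coarser level."""
    theta_x = layers.Conv2D(inter_channels, 1, strides=2, padding="same")(x)  # W_x
    phi_g = layers.Conv2D(inter_channels, 1, padding="same")(g)               # W_g
    # Replace the tensor addition of the original gate by a bidirectional
    # ConvLSTM: feed W_g's and W_x's outputs, in this order, as a 2-step sequence.
    seq = layers.Lambda(lambda t: tf.stack(t, axis=1))([phi_g, theta_x])
    fused = layers.Bidirectional(
        layers.ConvLSTM2D(inter_channels, 3, padding="same"))(seq)
    psi = layers.Conv2D(1, 1, activation="sigmoid")(layers.ReLU()(fused))
    alpha = layers.UpSampling2D(size=2, interpolation="bilinear")(psi)
    return layers.Multiply()([x, alpha])  # attention-weighted skip features
```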
The depth of the network is to be determined by hyperparameter testing. Our
tests confirmed that four downsampling steps result in the best performance;
however, the differences are minuscule.
### 2.3 Loss function
A standard score to compare segmentations is the Intersection over Union
(IoU): given two sets of pixels $X,Y$, their IoU is
$IoU(X,Y)=\frac{|X\cap Y|}{|X\cup Y|}.$
In the field of medical imaging, Dice Score Coefficient (DSC) is probably the
most widespread and simple way to measure the overlap ratio of the masks and
the ground truth, and hence to compare and evaluate segmentations. It is a
slight modification of IoU: given two sets of pixels $X,Y$, their DSC is
$DSC(X,Y)=\frac{2|X\cap Y|}{|X|+|Y|}.$
If $Y$ is in fact the result of a test about which pixels are in $X$, we can
rewrite it using the usual notation of true/false positives (TP/FP) and false
negatives (FN) as
$DSC(X,Y)=\frac{2TP}{2TP+FN+FP}.$
We would like to use this concept in our setup. The class $c$ we would like to
segment corresponds to a set, but it is more appropriate to consider its
indicator function $g$, that is, $g_{i,c}\in\\{0,1\\}$ equals 1 if and only if
the $i$-th pixel belongs to the object. On the other hand, our prediction is a
probability for each pixel denoted by $p_{i,c}\in[0,1]$. Then the Dice Score
of the prediction in the spirit of the above description is defined to be
$DSC=\frac{2\sum_{i=1}^{N}p_{i,c}g_{i,c}+\varepsilon}{\sum_{i=1}^{N}\left(p_{i,c}+g_{i,c}\right)+\varepsilon},$
where $N$ is the total number of pixels, and $\varepsilon$ is introduced for
the sake of numerical stability and to avoid division by 0. The IoU of the
prediction can be calculated in a similar manner. The linear Dice Loss (DL) of
the multiclass prediction is then
$DL=\sum_{c}\left(1-DSC_{c}\right).$
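As an illustration, a direct NumPy transcription of the soft Dice Score and the resulting Dice Loss follows (the value of $\varepsilon$ is an assumed choice):
```python
import numpy as np

def dice_loss(p, g, eps=1.0):
    """Soft multiclass Dice loss. p holds predicted probabilities and g the
    one-hot ground truth, both of shape (num_pixels, num_classes)."""
    intersection = (p * g).sum(axis=0)
    denom = (p + g).sum(axis=0)
    dsc = (2.0 * intersection + eps) / (denom + eps)  # per-class soft DSC
    return float((1.0 - dsc).sum())                   # DL = sum_c (1 - DSC_c)
```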
A deficiency of Dice Loss is that it penalizes false negative and false
positive predictions equally, which results in high precision but low recall.
For example, practice shows that if the regions of interest (ROIs) are small,
false negative pixels need to have a higher weight than false positive ones.
Mathematically this obstacle is easily overcome by introducing weights
$\alpha,\beta$ as tuneable parameters, resulting in the definition of Tversky
similarity index [1]:
$TI_{c}=\frac{\displaystyle\sum_{i=1}^{N}p_{i,c}g_{i,c}+\varepsilon}{\displaystyle\sum_{i=1}^{N}p_{i,c}g_{i,c}+\alpha\displaystyle\sum_{i=1}^{N}p_{i,\overline{c}}g_{i,c}+\beta\displaystyle\sum_{i=1}^{N}p_{i,c}g_{i,\overline{c}}+\varepsilon},$
where $p_{i,\overline{c}}=1-p_{i,c}$ and $g_{i,\overline{c}}=1-g_{i,c}$, that
is the overline simply stands for describing the complement of the class.
Tversky Loss is obtained from Tversky index as Dice Loss was obtained from
Dice Score Coefficient:
$TL=\sum_{c}\left(1-TI_{c}\right).$
Another issue with the Dice Loss is that it struggles to segment small ROIs as
they do not contribute to the loss significantly. This difficulty was
addressed in [11], where the authors introduced the quantity Focal Tversky
Loss in order to improve the performance of their lesion segmentation model:
$FTL=\sum_{c}\left(1-TI_{c}\right)^{\gamma^{-1}},$
where $\gamma\in[1,3]$. In practice, if a pixel is misclassified with a
high Tversky index, the Focal Tversky Loss is unaffected. However, if the
Tversky index is small and the pixel is misclassified, the Focal Tversky Loss
will decrease significantly.
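The Tversky index and the Focal Tversky Loss translate to code just as directly; the sketch below uses the parameter values quoted in the next paragraph and is again only an illustration:
```python
import numpy as np

def focal_tversky_loss(p, g, alpha=0.6, beta=0.4, inv_gamma=0.675, eps=1.0):
    """Focal Tversky loss; p and g are shaped (num_pixels, num_classes) as
    above. alpha weighs false negatives, beta false positives."""
    tp = (p * g).sum(axis=0)              # sum_i p_{i,c} g_{i,c}
    fn = ((1.0 - p) * g).sum(axis=0)      # sum_i p_{i,c-bar} g_{i,c}
    fp = (p * (1.0 - g)).sum(axis=0)      # sum_i p_{i,c} g_{i,c-bar}
    ti = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return float(((1.0 - ti) ** inv_gamma).sum())  # FTL = sum_c (1-TI_c)^(1/gamma)
```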
In our work we use multiclass DSC and IoU to evaluate segmentation
performance. As our initial tests demonstrated that training our network with
Focal Tversky loss results in better scores, we will use this loss function.
The optimal $\alpha,\beta,\gamma$ parameters should be determined by extensive
hyperparameter testing and grid search. We worked below with
$\alpha=0.6,\beta=0.4,\frac{1}{\gamma}=0.675$.
### 2.4 Dataset and preprocessing
For training- and validation data, we used the public Japanese Society of
Radiological Technology (JSRT) dataset ([3]), available at [2]. The JSRT
dataset contains a total of 247 images, all of them are in $2048\times 2048$
resolution, and have 12-bit grayscale levels. Both lung and heart segmentation
masks are available for this dataset.
In terms of preprocessing, similarly to [15], the images were resized to the
resolution $512\times 512$ first. X-rays are grayscale images with typically
low contrast, which makes their analysis a difficult task. This
obstacle might be overcome by using some sort of histogram equalization
technique. The idea of standard histogram equalization is spreading out the
most frequent intensity values to a higher range of the intensity domain
by modifying the intensities so that their cumulative distribution function
(CDF) on the complete modified image is as close to the CDF of the uniform
distribution as possible. Improvements might be made by using adaptive
histogram equalization, in which the above method is not utilized globally,
but separately on pieces of the image, in order to enhance local contrasts.
However, this technique might overamplify noise in near-constant regions,
hence our choice was to use Contrast Limited Adaptive Histogram Equalization
(CLAHE), which counteracts this effect by clipping the histogram at a
predefined value before calculating the CDF, and redistributing the clipped
part of the histogram equally among all the histogram bins.
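In code, this preprocessing amounts to a few OpenCV calls; the file name, clip limit and tile grid below are assumed values for illustration:
```python
import cv2

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
img = cv2.resize(img, (512, 512))
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
img_eq = clahe.apply(img)  # per-tile equalization with a clipped histogram
```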
### 2.5 Data augmentation
Concerning data augmentation, we follow [7], in which the method mixup was
used to improve glioma segmentation on brain MRIs. This slightly counter-
intuitive augmentation technique was introduced by [6]: training data samples
are obtained by taking random convex combinations of original image-mask
pairs. That is, for $(x_{1},y_{1})$ and $(x_{2},y_{2})$ image-mask pairs, we
create a random mixed up pair $x=\lambda x_{1}+(1-\lambda)x_{2}$, $y=\lambda
y_{1}+(1-\lambda)y_{2}$, where $\lambda$ is chosen from the beta distribution
$B(\delta,\delta)$ for some $\delta\in(0,\infty)$. In each epoch, the original
samples are paired randomly, hence during the course of the training, a
multitude of training samples are fed to the network. (From the mathematical
point of view, as the coefficient $\lambda$ is chosen independently in each
case from a continuous probability distribution, the network will encounter
pairwise distinct mixed up training samples with probability 1, modulo
floating point inaccuracy.) In [6], the authors argue that generating training
samples via this method encourages the network to behave linearly in-between
training examples, which reduces the amount of undesirable oscillations when
predicting outside the training examples.
The choice of $\delta$ should be determined by hyperparameter testing for any
network and task considered. In [6], $\delta\in[0.1,0.4]$ is proposed, while
in [7] $\delta=0.4$ is applied.
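A minimal NumPy sketch of mixup for image-mask pairs (the default $\delta$ follows [7]; other values are tested in Section 3):
```python
import numpy as np

rng = np.random.default_rng()

def mixup_pair(x1, y1, x2, y2, delta=0.4):
    """Convex combination of two image-mask pairs; delta is the Beta
    concentration parameter."""
    lam = rng.beta(delta, delta)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```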
## 3 Experiments
### 3.1 Training schedule
In our main tests, the JSRT dataset was randomly split so that 85% of it was
used for training and the rest for validation and testing. This split was
carried out independently in each case, enhancing the robustness of our
results. Besides that, we also experimented with small dataset training, in
which rather modest sets of 10 and 20 X-rays were utilized as training sets.
(The test set remained the same.) This enabled us to measure the benefits of
mixup more transparently. In each of these cases, we trained our network with
Adam optimizer: in the former case, for 50 epochs, while in the latter cases
for 1000 and 500 epochs, respectively. As these epoch numbers are
approximately inversely proportional to the size of the training sets, these
choices correspond to each other in terms of training steps.
### 3.2 Results
Table 1 summarizes the numerical results we obtained during the testing of
Attention BCDU-Net with different train sizes and choices of $\delta$, while
Figures 3-5 display visual results. Note that the highest DSC scores slightly
exceed the ones attained by the state-of-the-art, adversarially enhanced
Attention U-Net introduced in [15] ($97.6\pm 0.5\%$) and admit higher
stability. The effect of augmentation is the most striking in the case of
training on an X-ray set of size 10, where the choice $\delta=0.2$ results in a
5% increase in IoU compared to the no-mixup case. In general, we found this
case particularly interesting: it was surprising that we could achieve IoU and
DSC scores of this magnitude using such a small training set. Nevertheless the
predictions have some imperfections, displayed by Figure 3: the contours of
the segmentations are less clear and both the heart and the lung segmentations
tend to contain small spots far from the ground truth. However, such
conspicuous faults are unlikely to occur in the case of the best models for 20
train X-rays (Figure 4), which is still remarkable. The sufficiency of such
small training sets is probably due to the relative simplicity of the task.
Notably, lung and heart regions exhibit large similarity across a set of chest
X-rays, and they are strongly correlated with simple intensity thresholds.
Consequently, even small datasets have high representing potential. We note
that as $\delta$ gets smaller, the probability density function of
$B(\delta,\delta)$ gets more strongly skewed towards the endpoints of the
interval $[0,1]$, which results in mixed up samples being closer to original
samples in general. The perceived optimality of $\delta=0.2$ in the small
dataset cases shows that considerable augmentation is beneficial and
desirable, yet it is inadvisable to use too wildly modified samples.
The beneficial effect of mixup gets more obscure as we increase the size of
the training set. Notably, the results of different augmentation setups are
almost indistinguishable from each other. We interpret this phenomenon as
another consequence of the similarity of masks from different samples, which
inherently drives the network towards simpler representations in the case of a
sufficiently broad training set, even without using mixup.
We also note that in the case of 10 training samples, while the IoU
differences between the no mixup and the mixup regime are striking, the gain
in DSC is less remarkable. This hints that it is inadvisable to rely merely on
DSC when evaluating and comparing segmentation models.
Figure 3: Ground truth (left) compared to the prediction of the Attention
BCDU-Net (right), train size: 10.
Figure 4: Ground truth (left) compared to the prediction of the Attention
BCDU-Net (right), train size: 20.
Figure 5: Ground truth (left) compared to the prediction of the Attention
BCDU-Net (right), train size: complete.
| Train size: 10 | Train size: 20 | Train size: 209 (Complete)
---|---|---|---
| IoU | DSC | IoU | DSC | IoU | DSC
No mixup | $87.2\pm 1.9$% | $94.8\pm 1.1$% | $91.9\pm 0.6$% | $96.9\pm 0.5$% | $94.9\pm 0.2$% | $98.0\pm 0.1$%
$\delta=0.1$ | $91.9\pm 1.3$% | $96.8\pm 0.9$% | $92.5\pm 0.5$% | $97.1\pm 0.5$% | $95.2\pm 0.2$% | $98.1\pm 0.1$%
$\delta=0.2$ | $92.2\pm 1.2$% | $97.0\pm 0.8$% | $93.3\pm 0.4$% | $97.3\pm 0.5$% | $95.0\pm 0.1$% | $98.0\pm 0.1$%
$\delta=0.3$ | $91.3\pm 1.2$% | $96.5\pm 1.0$% | $92.9\pm 0.5$% | $97.2\pm 0.5$% | $94.9\pm 0.1$% | $98.0\pm 0.1$%
$\delta=0.4$ | $91.3\pm 1.4$% | $96.4\pm 1.0$% | $93.0\pm 0.5$% | $97.2\pm 0.4$% | $94.8\pm 0.1$% | $97.9\pm 0.1$%
Table 1: Dice scores and IoU scores of Attention BCDU-Net with different mixup
parameters
We would also like to draw attention to the peculiar loss curves we primarily
encountered during the small dataset trainings, as displayed in Figure 6.
Notably, the curve of the validation DSC flattens far below the also
flattening curve of the train DSC, strongly suggesting the usage of early
stopping. (Train DSC in fact reaches essentially 1, which is unsurprising with
such a small training set.) However, in the later stages the validation DSC
catches up, even though the train DSC does not have any room for further
improvement. We were especially puzzled by this behaviour in the 10-sized
training setup, in which both the train and validation DSC seem completely
stabilized from epoch 50 to epoch 400, yet the validation DSC skyrockets in
the later stages in a very short amount of time. The same behaviour was
experienced during each test run. We have yet to find an intuitive or
theoretical explanation for this phenomenon: how the generalizing ability
of the model can improve further when it seems to be in a perfect state from
the training perspective. We note that these observations naturally led us to
experiment with even longer trainings, but to no avail.
Figure 6: From left to right: the evolution of the train DSC (blue) and the
validation DSC (orange) with 10 training samples, 20 training samples, and the
complete training dataset, respectively. The IoU curves exhibit similar
patterns.
## 4 Conclusion
In the present work, we addressed the problem of automated lung and heart
segmentation on chest X-rays. We introduced a new model, Attention BCDU-Net, a
variant of Attention U-Net equipped with modified attention gates, and
surpassed previous state-of-the-art results. We also demonstrated its ability
to attain surprisingly reasonable results with strongly limited training sets.
Performance in these cases was enhanced using the mixup augmentation
technique, resulting in a highly notable improvement in the IoU score.
Concerning future work, a natural extension of this work would be adding a
structure correcting adversarial network to the training scheme, similarly to
[9] and [15], and measuring its effect on the performance, especially in the
setup of limited training sets. We would also like to give some kind of
explanation to the phenomenon of peculiar loss curves.
## Acknowledgements
The project was supported by the grant EFOP-3.6.3-VEKOP-16-2017-00002.
## References
* [1] Amos Tversky “Features of similarity.” In _Psychological review_ 84.4 American Psychological Association, 1977, pp. 327
* [2] JSRT Database, http://db.jsrt.or.jp/eng.php, 2000
* [3] Junji Shiraishi et al. “Development of a digital image database for chest radiographs with and without a lung nodule: receiver operating characteristic analysis of radiologists’ detection of pulmonary nodules” In _American Journal of Roentgenology_ 174.1 Am Roentgen Ray Soc, 2000, pp. 71–74
* [4] Jonathan Long, Evan Shelhamer and Trevor Darrell “Fully convolutional networks for semantic segmentation” In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2015, pp. 3431–3440
* [5] Olaf Ronneberger, Philipp Fischer and Thomas Brox “U-Net: Convolutional Networks for Biomedical Image Segmentation” In _Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III_ Springer International Publishing, 2015, pp. 234–241 DOI: 10.1007/978-3-319-24574-4˙28
* [6] Hongyi Zhang, Moustapha Cisse, Yann Dauphin and David Lopez-Paz “mixup: Beyond Empirical Risk Minimization”, 2017
* [7] Zach Eaton-Rosen, Felix J. S. Bragman, Sébastien Ourselin and M. Jorge Cardoso “Improving Data Augmentation for Medical Image Segmentation”, 2018
* [8] Ozan Oktay et al. “Attention U-Net: Learning where to look for the pancreas” In _arXiv:1804.03999_ , 2018
* [9] Nanqing Dong Wei Dai B et al. “SCAN: Structure Correcting Adversarial Network for Organ Segmentation in Chest X-Rays” In _Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: 4th International Workshop, DLMIA 2018, and 8th International Workshop, ML-CDS 2018, Held in Conjunction with MICCAI 2018, Granada, Spain, September 20, 2018, Proceedings_ 11045, 2018, pp. 263 Springer
* [10] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh and Jianming Liang “UNet++: A Nested U-Net Architecture for Medical Image Segmentation”, 2018
* [11] Nabila Abraham and Naimul Mefraz Khan “A novel focal Tversky loss function with improved attention U-Net for lesion segmentation” In _2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019)_ , 2019, pp. 683–687 IEEE
* [12] Reza Azad, Maryam Asadi-Aghbolaghi, M. Fathy and S. Escalera “Bi-Directional ConvLSTM U-Net with Densley Connected Convolutions” In _2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)_ , 2019, pp. 406–415
* [13] NHS England and NHS Improvement “Diagnostic imaging dataset statistical release”, 2019
* [14] Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh and Jianming Liang “UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation” In _IEEE transactions on medical imaging_ , 2019 DOI: 10.1109/TMI.2019.2959609
* [15] Gusztáv Gaál, Balázs Maga and András Lukács “Attention U-Net Based Adversarial Architectures for Chest X-ray Lung Segmentation” In _arXiv:2003.10304_ , 2020
* [16] Trinh Khanh et al. “Enhancing U-Net with Spatial-Channel Attention Gate for Abnormal Tissue Segmentation in Medical Imaging” In _Applied Sciences_ 10, 2020 DOI: 10.3390/app10175729
* [17] Peng Zhao, Jindi Zhang, Weijia Fang and Shuiguang Deng “SCAU-Net: Spatial-Channel Attention U-Net for Gland Segmentation” In _Frontiers in Bioengineering and Biotechnology_ 8, 2020, pp. 670 DOI: 10.3389/fbioe.2020.00670
* [18] Giuseppe Pezzano, Vicent Ribas Ripoll and Petia Radeva “CoLe-CNN: Context-learning convolutional neural network with adaptive loss function for lung nodule segmentation” In _Computer Methods and Programs in Biomedicine_ 198, 2021, pp. 105792 DOI: https://doi.org/10.1016/j.cmpb.2020.105792
|
Partitions of an Integer into Powers
Matthieu Latapy
liafa, Université Paris 7, 2 place Jussieu, 75005 Paris.
<EMAIL_ADDRESS>
###### Abstract
In this paper, we use a simple discrete dynamical model to study partitions of
integers into powers of another integer. We extend and generalize some known
results about their enumeration and counting, and we give new structural
results. In particular, we show that the set of these partitions can be
ordered in a natural way which gives the distributive lattice structure to
this set. We also give a tree structure which allows efficient and simple
enumeration of the partitions of an integer.
## 1 Introduction
We study here the problem of writing a non-negative integer $n$ as the sum of
powers of another positive integer $b$:
$n=p_{0}b^{0}+p_{1}b^{1}+\dots+p_{k-1}b^{k-1}$
with $p_{k-1}\not=0$ and $p_{i}\in\mathbb{N}$ for all $i$. Following [Rod69],
we call the $k$-tuple $(p_{0},p_{1},\dots,p_{k-1})$ a _$b$ -ary partition_ of
$n$. The integers $p_{i}$ are called the _parts_ of the partition and $k$ is
the _length_ of the partition. A $b$-ary partition of $n$ can be viewed as a
representation of $n$ in the basis $b$, with digits in $\mathbb{N}$.
Conversely, given a $k$-tuple $(p_{0},\dots,p_{k-1})$ and a basis $b$, we will
denote by $v_{b}(p_{0},\dots,p_{k-1})$ the integer
$p_{0}b^{0}+p_{1}b^{1}+\dots+p_{k-1}b^{k-1}$. There is a unique $b$-ary
partition such that $p_{i}<b$ for all $i$, and it is the usual (canonical)
representation of $n$ in the basis $b$. Here, we consider the problem without
any restriction over the parts: $p_{i}\in\mathbb{N}$, which is actually
equivalent to saying that $p_{i}\in\\{0,1,\dots,n\\}$ for all $i$. We will mainly
be concerned with the enumeration and counting of the $b$-ary partitions of
$n$, for given integers $n$ and $b$.
This natural combinatorial problem has been introduced by Mahler [Mah40], who
showed that the logarithm of the number of $b$-ary partitions of $n$ grows as
$\frac{(\log n)^{2}}{2\log b}$. This asymptotic approximation was later
improved by de Bruijn [dB48] and Pennington [Pen53]. Knuth [Knu66] studied the
special case where $b=2$. In this case, the function counting the $b$-ary
partitions for a given $n$ is called the _binary partition function_. This
function has been widely studied. Euler and Tanturri [Eul50, Tan18a, Tan18b]
studied its exact computation and Churchhouse [Chu69, Chu71] studied its
congruence properties, while Fröberg [Fro77] gave a final solution to its
asymptotical approximation. Later, Rödseth [Rod69] generalized some of these
results to $b$-ary partitions for any $b$. Finally, Pfaltz [Pfa95] studied the
subcase of the binary partitions of integers which are powers of two.
We are concerned here with the exact computation of the number of $b$-ary
partitions of a given integer $n$, for any $b$. We will use a powerful
technique we developed in [LP99] and [LMMP98]: incremental construction of
the set of $b$-ary partitions of $n$, infinite extension and coding by an
infinite tree. This method gives a deep understanding of the structure of the
set of $b$-ary partitions of $n$. We will obtain this way a tree structure
which permits the enumeration of all the $b$-ary partitions of $n$ in linear
time with respect to their number. We will also order these partitions in a
natural way which gives the distributive lattice structure to this set. We
recall that a lattice is a partially ordered set such that any two elements
$a$ and $b$ have a least upper bound (called supremum of $a$ and $b$ and
denoted by $a\vee b$) and a greatest lower bound (called infimum of $a$ and
$b$ and denoted by $a\wedge b$). The element $a\vee b$ is the smallest element
among the elements greater than both $a$ and $b$. The element $a\wedge b$ is
defined dually. A lattice is _distributive_ if for all $a$, $b$ and $c$:
$(a\vee b)\wedge(a\vee c)=a\vee(b\wedge c)$ and $(a\wedge b)\vee(a\wedge
c)=a\wedge(b\vee c)$. A distributive lattice is a strongly structured set, and
many general results, for example efficient coding and algorithms, are known
about such sets. For more details, see for example [DP90].
Notice that if we consider $b=1$ and restrict the problem to partitions of
length at most $n$, then we obtain the _compositions_ of $n$, i.e. the
sequences of at most $n$ integers, the sum of which equals $n$. Many studies already
deal with this special case. In particular, the (infinite) distributive
lattice $R_{1}(\infty)$ which we will introduce in Section 4 is isomorphic to
the well known Young lattice [Ber71]. Therefore, we will suppose $b>1$ in the
following. Notice however that some of the results we present here are already
known in this special case (for example the distributive lattice structure),
therefore they can be seen as an extension of the existing ones.
## 2 The lattice structure
In this section, we define a simple dynamical model which generates _all_ the
$b$-ary partitions of an integer. We will show that the set of $b$-ary
partitions, ordered by the reflexive and transitive closure of the successor
relation, has the distributive lattice structure.
Let us consider a $b$-ary partition $p=(p_{0},p_{1},\dots,p_{k-1})$ of $n$,
and let us define the following transition (or rewriting) rule:
$p\stackrel{{\scriptstyle i}}{{\longrightarrow}}q$ if and only if for all
$j\not\in\\{i,i+1\\}$, $q_{j}=p_{j}$, $p_{i}\geq b$, $q_{i}=p_{i}-b$ and
$q_{i+1}=p_{i+1}+1$ (with the assumption that $p_{k}=0$). In other words, if
$p_{i}$ is at least equal to $b$ then $q$ is obtained from $p$ by removing $b$
units from $p_{i}$ and adding one unit to $p_{i+1}$. We call this operation
_firing $i$_. The important point is to notice that $q$ is then a $b$-ary
partition of $n$. We call $q$ a _successor_ 111Notice that the term
_successor_ can have many different meanings. We follow here the standard
usage in discrete dynamical models, but in order theory the term has another
meaning, and one may also consider that a _successor_ of an integer $n$ should
be the integer $n+1$, which is not the case here. of $p$, and we denote by
$Succ_{b}(p)$ the set of all the successors of $p$, with respect to the rule.
We denote by $R_{b}(n)$ the set of $b$-ary partitions of $n$ reachable from
$(n)$ by iterating the evolution rule, ordered by the reflexive and transitive
closure of the successor relation. Notice that the successor relation is the
covering relation of the order: the order is defined as the reflexive and
transitive closure of the successor relation, and one can easily verify that
the successor relation has no reflexive ($x\longrightarrow x$) and no transitive
($x\longrightarrow z$ with $x\longrightarrow y$ and $y\longrightarrow z$)
edges. See Figure 1 for some examples.
Figure 1: From left to right, the sets $R_{2}(9)$, $R_{3}(9)$, $R_{3}(10)$,
$R_{3}(11)$, $R_{3}(12)$ and $R_{3}(15)$. From Theorem 1, each of these sets
is a distributive lattice.
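The rule is easy to experiment with; the following Python sketch (an illustration, not part of the original development) generates the successors of a partition and the whole set $R_{b}(n)$ by exhaustive firing:
```python
def successors(p, b):
    """All successors of the b-ary partition p under the firing rule."""
    succs = []
    for i, part in enumerate(p):
        if part >= b:                                   # position i can be fired
            q = list(p) + ([0] if i == len(p) - 1 else [])
            q[i] -= b
            q[i + 1] += 1
            succs.append(tuple(q))
    return succs

def R(b, n):
    """All b-ary partitions reachable from (n), i.e. the elements of R_b(n)."""
    seen, todo = {(n,)}, [(n,)]
    while todo:
        for q in successors(todo.pop(), b):
            if q not in seen:
                seen.add(q)
                todo.append(q)
    return seen

# For instance, R(2, 9) yields all binary partitions of 9 (cf. Theorem 1).
```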
Given a sequence $f$ of firings, we denote by $|f|_{i}$ the number of firings
of $i$ during $f$. Now, consider an element $p$ of $R_{b}(n)$, and two
sequences $f$ and $f^{\prime}$ of firings which transform $(n)$ into $p$.
Then,
$p_{i}=|f|_{i-1}-b\cdot|f|_{i}=|f^{\prime}|_{i-1}-b\cdot|f^{\prime}|_{i}$.
Suppose that there exists an integer $i$ such that
$|f|_{i}\not=|f^{\prime}|_{i}$, and let $i$ be the smallest such integer.
Then, $|f|_{i-1}=|f^{\prime}|_{i-1}$ and the equality
$|f|_{i-1}-b\cdot|f|_{i}=|f^{\prime}|_{i-1}-b\cdot|f^{\prime}|_{i}$ is
impossible. Therefore, we have $|f|_{i}=|f^{\prime}|_{i}$ for all $i$. This
leads to the definition of the _shot vector_ $s(p)$: $s(p)_{i}$ is the number
of times one has to fire $i$ in order to obtain $p$ from $(n)$. Now we can
prove:
###### Lemma 1
For all $p$ and $q$ in $R_{b}(n)$, $p\leq q$ if and only if for all $i$,
$s(p)_{i}\geq s(q)_{i}$.
Proof : If $p\leq q$, i.e. $p$ is reachable from $q$, then it is clear that for
all $i$, $s(p)_{i}\geq s(q)_{i}$. Conversely, suppose that $s(p)_{i}\geq
s(q)_{i}$ for all $i$ and $p\not=q$, and let $j$ be the smallest integer such
that $s(p)_{j}>s(q)_{j}$. Then, $q_{j}\geq p_{j}+b\geq b$ and so $q$ can be
fired at $j$. By iterating this process, we finally obtain $p$, and so $p\leq q$.
###### Theorem 1
For all integers $b$ and $n$, the order $R_{b}(n)$ is a _distributive lattice_
which contains _all_ the $b$-ary partitions of $n$, with the infimum and
supremum of any two elements $p$ and $q$ defined by:
$s(p\vee q)_{i}=\min(s(p)_{i},s(q)_{i})\mbox{ and }s(p\wedge
q)_{i}=\max(s(p)_{i},s(q)_{i}).$
Proof : We first show that $R_{b}(n)$ contains all the $b$-ary partitions of
$n$. Consider $p$ a $b$-ary partition of $n$. If $p=(n)$, then $p\in
R_{b}(n)$, so we suppose that $p\not=(n)$. Therefore, there must be an integer
$i>0$ such that $p_{i}>0$. Let us define $q$ such that $q_{j}=p_{j}$ for all
$j\not\in\\{i-1,i\\}$, $q_{i-1}=p_{i-1}+b$ and $q_{i}=p_{i}-1$. It is clear
that $q$ is a $b$-ary partition of $n$, and that if $q\in R_{b}(n)$ then $p\in
R_{b}(n)$ since $q\stackrel{{\scriptstyle i-1}}{{\longrightarrow}}p$. It is
also obvious that, if we iterate this process, we go back to $(n)$, and so
$p\in R_{b}(n)$.
We now prove the formula for the infimum and the supremum. Let $p$ and $q$ be
in $R_{b}(n)$, and $r$ such that $s(r)_{i}=\min(s(p)_{i},s(q)_{i})$. From
Lemma 1, $p$ and $q$ are reachable from $r$. Moreover, if $p$ and $q$ are
reachable from $t\in R_{b}(n)$, then, from Lemma 1, $r$ is reachable from $t$
since we must have $s(t)_{i}\leq\min(s(p)_{i},s(q)_{i})$ (else one can not
transform $t$ into $p$ or $q$). Therefore, $r$ is the supremum of $p$ and $q$,
as claimed in the theorem. The argument for the infimum is symmetric. Finally,
to prove that the lattice is _distributive_ , we only have to check that the
formulae satisfy the distributivity laws.
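The infimum and supremum can be computed directly from the shot vectors; the sketch below (illustrative only) recovers $s(p)$ from $p$ using $p_{i}=s(p)_{i-1}-b\cdot s(p)_{i}$, with the convention $s(p)_{-1}=n$:
```python
def shot_vector(p, b, n):
    """Shot vector of a b-ary partition p of n: s_i = (s_{i-1} - p_i)/b."""
    s, prev = [], n
    for part in p:
        prev = (prev - part) // b
        s.append(prev)
    return s

def from_shot_vector(s, b, n):
    """Recover the partition from its shot vector (trailing zero parts kept)."""
    p, prev = [], n
    for cur in s:
        p.append(prev - b * cur)
        prev = cur
    return tuple(p)

def supremum(p, q, b, n):
    """Supremum of p and q in R_b(n): componentwise min of shot vectors."""
    sp, sq = shot_vector(p, b, n), shot_vector(q, b, n)
    k = max(len(sp), len(sq))
    sp, sq = sp + [0] * (k - len(sp)), sq + [0] * (k - len(sq))
    return from_shot_vector([min(a, c) for a, c in zip(sp, sq)], b, n)

# e.g. supremum((0, 2), (2, 1), 2, 4) == (2, 1)
```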
We will now show that the dynamical model defined here can be viewed as a
special Chip Firing Game (CFG). A CFG [BLS91, BL92] is defined over a directed
multigraph. A configuration of the game is a distribution of a number of chips
over the vertices of the graph, and it obeys the following evolution rule: if
a vertex $\nu$ contains at least as many chips as its outgoing degree $d$, then one can
transfer one chip along each of its outgoing edges. In other words, the number
of chips at $\nu$ is decreased by $d$ and, for each vertex $v\not=\nu$, the
number of chips at $v$ is increased by the number of edges from $\nu$ to $v$.
This model is very general and has been introduced in various contexts, such
as physics, computer science, economics, and others. It is in particular very
close to the famous Abelian Sandpile Model [LP00].
It is known that the set of reachable configurations of such a game, ordered
with the reflexive and transitive closure of the transition rule, is a Lower
Locally Distributive (LLD) lattice (see [Mon90] for a definition and
properties), but it is not distributive in general [BL92, LP00, MPV01].
However, if a lattice is LLD and its dual, i.e. the lattice obtained by
reversing the order relation, also is LLD, then the lattice is distributive.
Therefore, we can give another proof of the fact that $R_{b}(n)$ is a
distributive lattice by showing that it is the set of reachable configurations
of a CFG, and that its dual too 222This idea is due to Clémence Magnien, who
introduced this new way to prove that a set is a distributive lattice using
two Chip Firing Games..
Given two integers $n$ and $b$, let us consider the following multigraph
$G=(V,E)$ defined by: $V=\\{0,\dots,n\\}$ and there are $b^{i+1}$ edges from
the $i$-th vertex to the $(i+1)$-th, for all $0\leq i<n$. Now, let us consider
the CFG $C$ defined over $G$ by the initial configuration where the vertex $0$
contains $n$ chips, the other ones being empty. Now, given a configuration $c$
of the CFG, where $c_{i}$ denotes the number of chips in the vertex number
$i$, let us denote by $\bar{c}$ the vector such that
$\bar{c}_{i}=\frac{c_{i}}{b^{i}}$. Then, if the CFG is in the configuration
$c$, an application of the rule to the vertex number $i$ gives the
configuration $c^{\prime}$ such that $c^{\prime}_{i}=c_{i}-b^{i+1}$,
$c^{\prime}_{i+1}=c_{i+1}+b^{i+1}$ and $c^{\prime}_{j}=c_{j}$ for all
$j\not\in\\{i,i+1\\}$. Notice that this means exactly that $\bar{c}_{i}$ is
decreased by $b$ and that $\bar{c}_{i+1}$ is increased by $1$, therefore an
application of the CFG rule corresponds exactly to an application of the
evolution rule we defined above, and so the set of reachable configurations of
the CFG is isomorphic to $R_{b}(n)$. This leads to the fact that $R_{b}(n)$ is
a LLD lattice.
Conversely, let $G^{\prime}$ be the multigraph obtained from $G$ by reversing
each edge, and let us consider the CFG $C^{\prime}$ over $G^{\prime}$ such
that the initial configuration of $C^{\prime}$ is the final configuration of
$C$. Then it is clear that the set of reachable configurations of $C^{\prime}$
is nothing but the dual of the one of $C$, therefore it is isomorphic to the
dual of $R_{b}(n)$. This leads to the fact that the dual of $R_{b}(n)$ is a
LLD lattice, which allows us to conclude that $R_{b}(n)$ is a distributive
lattice.
## 3 From $R_{b}(n)$ to $R_{b}(n+1)$
In this section, we give a method to construct the transitive reduction (i.e.
the successor relation) of $R_{b}(n+1)$ from the one of $R_{b}(n)$. In the
following, we will simply call this the _construction of $R_{b}(n+1)$ from
$R_{b}(n)$_. This will show the self-similarity of these sets, and give a new
way, purely structural, to obtain a recursive formula for $|R_{b}(n)|$, which
was previously known from [Rod69] (the special case where $b=2$ is due to Euler
[Eul50]). This construction will also show the special role played by certain
$b$-ary partitions, which will be widely used in the rest of the paper.
Therefore, we introduce a few notations about them. We denote by $P_{i}(b,n)$
the set of the partitions $p$ in $R_{b}(n)$ such that
$p_{0}=p_{1}=\dots=p_{i-1}=b-1$. Notice that for all $i$ we have
$P_{i+1}(b,n)\subseteq P_{i}(b,n)$ and that $P_{0}(b,n)=R_{b}(n)$. If
$p=(p_{0},\dots,p_{k-1})$ is in $P_{i}(b,n)$, we denote by
$p^{\hookrightarrow_{i}}$ the $k$-tuple
$(0,\dots,0,p_{i}+1,p_{i+1},\dots,p_{k-1})$. In other words,
$p^{\hookrightarrow_{i}}$ is obtained from $p$ by switching all the $i$ first
components of $p$ from $b-1$ to $0$ and adding one unit to its $i$-th
component333This operator is known in numeration studies as an odometer. See
[PJG95] for more details. Notice that the $k$-tuple
$p^{\hookrightarrow_{0}}$, which is simply obtained from $p$ by adding one
unit to its first component, is always a $b$-ary partition of $n+1$. If $S$ is
a subset of $P_{i}(b,n)$, we denote by $S^{\hookrightarrow_{i}}$ the set
$\\{p^{\hookrightarrow_{i}}|\ p\in S\\}$.
Notice that, if $p\stackrel{{\scriptstyle i}}{{\longrightarrow}}q$ in
$R_{b}(n)$, then $p^{\hookrightarrow_{0}}\stackrel{{\scriptstyle
i}}{{\longrightarrow}}q^{\hookrightarrow_{0}}$ in $R_{b}(n+1)$. This remark
makes it possible to construct $R_{b}(n+1)$ from $R_{b}(n)$: the construction
procedure starts with the lattice $R_{b}(n)^{\hookrightarrow_{0}}$ given by
its diagram. Then, we look for those elements in
$R_{b}(n)^{\hookrightarrow_{0}}$ that have a successor out of
$R_{b}(n)^{\hookrightarrow_{0}}$. The set of these elements will be denoted by
$I_{0}$, with $I_{0}\subseteq R_{b}(n)^{\hookrightarrow_{0}}$. At this point,
we add all the missing successors of the elements of $I_{0}$. The set of these
new elements will be denoted by $C_{0}$. Now, we look for the elements in
$C_{0}$ that have a successor out of the constructed set. The set of these
elements is denoted by $I_{1}$. More generally, at the $i$-th step of the
procedure we look for the elements in $C_{i-1}$ with missing successors and
call $I_{i}$ the set of these elements. We add the new successors of the
elements of $I_{i}$ and call the set of these new elements $C_{i}$. At each
step, when we add a new element, we also add its covering relations. Since
$R_{b}(n+1)$ is a finite set, this procedure terminates. At the end, we obtain
the whole set $R_{b}(n+1)$. In the rest of this section, we study more
precisely this construction process.
###### Lemma 2
Let $p$ be a $b$-ary partition in $P_{i}(b,n)$. If $p_{i}\not=b-1$ then
$Succ_{b}(p^{\hookrightarrow_{i}})={Succ_{b}(p)}^{\hookrightarrow_{i}}$. Else,
$Succ_{b}(p^{\hookrightarrow_{i}})={Succ_{b}(p)}^{\hookrightarrow_{i}}\cup\\{p^{\hookrightarrow_{i+1}}\\}$.
Proof : If a transition $p\stackrel{{\scriptstyle j}}{{\longrightarrow}}q$ is
possible, then $p^{\hookrightarrow_{i}}\stackrel{{\scriptstyle
j}}{{\longrightarrow}}q^{\hookrightarrow_{i}}$ is obviously possible.
Moreover, an additional transition is possible from $p^{\hookrightarrow_{i}}$
if and only if $p_{i}=b-1$. In this case,
$p^{\hookrightarrow_{i}}\stackrel{{\scriptstyle
i}}{{\longrightarrow}}p^{\hookrightarrow_{i+1}}$.
###### Lemma 3
For all integer $b$, $n$ and $i$, we define the function
$r_{i}:P_{i}(b,n)\rightarrow R_{b}(\frac{n+1}{b^{i}}-1)$ by: $r_{i}(p)$ is
obtained from $p\in P_{i}(b,n)$ by removing its $i$ first components (which
are equal to $b-1$). Then, $r_{i}$ is a bijection.
Proof : Let us consider $p$ in $P_{i}(b,n)$:
$p=(b-1,b-1,\dots,b-1,p_{i},\dots,p_{k})$. Then, it is clear that
$r_{i}(p)=(p_{i},\dots,p_{k})$ is in
$R_{b}(\frac{n-(b-1)-(b-1)b-\dots-(b-1)b^{i-1}}{b^{i}})=R_{b}(\frac{n+1-b^{i}}{b^{i}})=R_{b}(\frac{n+1}{b^{i}}-1)$.
Conversely, if we consider $p$ in $R_{b}(\frac{n+1}{b^{i}}-1)$, then
$r_{i}^{-1}(p)=(b-1,b-1,\dots,b-1,p_{0},p_{1},\dots,p_{k})$ is a $b$-ary
partition of $m=(b-1)+(b-1)b+\dots+(b-1)b^{i-1}+b^{i}\left(\frac{n+1}{b^{i}}-1\right)$,
which is nothing but $n$. Therefore, $r_{i}^{-1}(p)$ is in $R_{b}(n)$.
###### Lemma 4
For all integer $b$, $n$ and $i$, we have
$I_{i}=P_{i+1}(b,n)^{\hookrightarrow_{i}}$ and
$C_{i}=P_{i+1}(b,n)^{\hookrightarrow_{{i+1}}}$.
Proof : By induction over $i$. For $i=0$, it is clear from Lemma 2 that the
set of elements in $R_{b}(n)^{\hookrightarrow_{0}}$ with a missing successor,
namely $I_{0}$, is exactly $P_{1}(b,n)^{\hookrightarrow_{0}}$. Moreover, the
set of these missing successors, namely $C_{0}$, is clearly
$P_{1}(b,n)^{\hookrightarrow_{1}}$. Now, let us suppose that the claim is
proved for $i$ and let us prove it for $i+1$. The set $I_{i+1}$ is the set of
elements in $C_{i}$ with missing successors. By induction hypothesis, we
have $C_{i}=P_{i+1}(b,n)^{\hookrightarrow_{{i+1}}}$ and so, from Lemma 2,
$I_{i+1}=P_{i+2}(b,n)^{\hookrightarrow_{{i+1}}}$. Then, by application of the
evolution rule, it is clear that the set $C_{i+1}$ of these missing successors is
$P_{i+2}(b,n)^{\hookrightarrow_{{i+2}}}$, which proves the claim.
###### Theorem 2
For any positive integer $b$ and $n$, we have:
$R_{b}(n)=\bigsqcup_{i\geq
0}r_{i}^{-1}\left(R_{b}\left(\frac{n}{b^{i}}-1\right)\right)^{\hookrightarrow_{i}}$
$|R_{b}(n)|=\sum_{i=0}^{\lfloor n/b\rfloor}\left|R_{b}(i)\right|$
where $\bigsqcup$ denotes the disjoint union, where $R_{b}(n)$ is taken as
$\emptyset$ when $n$ is not a positive integer, and with $R_{b}(0)=\\{(0)\\}$.
Proof : From the construction procedure described above, we have
$R_{b}(n)=R_{b}(n-1)^{\hookrightarrow_{0}}\sqcup\bigsqcup_{i\geq 0}C_{i}$,
where the $C_{i}$ come from the construction of $R_{b}(n)$ from $R_{b}(n-1)$.
From Lemma 4, we obtain
$R_{b}(n)=R_{b}(n-1)^{\hookrightarrow_{0}}\sqcup\bigsqcup_{i\geq
0}P_{i+1}(b,n-1)^{\hookrightarrow_{{i+1}}}$. Moreover, since
$R_{b}(n-1)^{\hookrightarrow_{0}}$ is nothing but
$P_{0}(b,n-1)^{\hookrightarrow_{0}}$, this is equivalent to
$R_{b}(n)=\bigsqcup_{i\geq 0}P_{i}(b,n-1)^{\hookrightarrow_{i}}$. Finally, from
Lemma 3, we obtain the announced formula.
From this formula, we have $R_{b}(\frac{n}{b})=\bigsqcup_{i\geq
0}r_{i}^{-1}\left(R_{b}\left(\frac{n}{b^{i+1}}-1\right)\right)^{\hookrightarrow_{i}}$. Therefore,
$|R_{b}(n)|=\sum_{i\geq 0}|R_{b}(\frac{n}{b^{i}}-1)|=|R_{b}(n-1)|+\sum_{i\geq
0}|R_{b}(\frac{n}{b^{i+1}}-1)|=|R_{b}(n-1)|+|R_{b}(\frac{n}{b})|$. We obtain
the claim by iterating this last formula.
The first formula given in this theorem can be used to compute the sets
$R_{b}(n)$ efficiently since it only involves _disjoint_ unions. We will give
in Section 5 another method to compute $R_{b}(n)$ which is much simpler, as
it gives $R_{b}(n)$ a tree structure. However, the formula is interesting
since it points out the self-similar structure of the set (see Figure 4).
The second formula was previously known from [Rod69], and from [Eul50] in the
special case where $b=2$. Notice that this does not give a way to compute
$|R_{b}(n)|$ in linear time with respect to $n$, which is an unsolved problem
in the general case, but it gives a very simple way to compute $|R_{b}(n)|$
recursively.
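For instance, a memoized transcription of the recursion $|R_{b}(n)|=|R_{b}(n-1)|+|R_{b}(\frac{n}{b})|$ reads as follows (illustrative sketch; `R` refers to the enumeration function sketched in Section 2):
```python
from functools import lru_cache

@lru_cache(maxsize=None)
def card_R(b, n):
    """|R_b(n)| via |R_b(n)| = |R_b(n-1)| + |R_b(n/b)|, with |R_b(0)| = 1."""
    if n == 0:
        return 1
    return card_R(b, n - 1) + (card_R(b, n // b) if n % b == 0 else 0)

# Sanity check against explicit enumeration:
# assert all(card_R(2, n) == len(R(2, n)) for n in range(30))
```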
## 4 Infinite extension
$R_{b}(n)$ is the lattice of the $b$-ary partitions of $n$ reachable from
$(n)$ by iteration of the evolution rule. We now define $R_{b}(\infty)$ as the
set of all $b$-ary partitions reachable from $(\infty)$. The order on
$R_{b}(\infty)$ is the reflexive and transitive closure of the successor
relation. For $b=2$, the first $b$-ary partitions in $R_{b}(\infty)$ are given
in Figure 2 along with their covering relation (the first component, which is
always infinity, is not represented on this diagram). Notice that it is still
possible to define the shot vector $s(p)$ of an element $p$ of $R_{b}(\infty)$
by: $s(p)_{i}$ is the number of times one has to fire $i$ in order to obtain
$p$ from $(\infty)$.
Figure 2: The first $b$-ary partitions obtained in $R_{b}(\infty)$ when
$b=2$. Two parts isomorphic to $R_{2}(4)$ are distinguished, as well as two
parts isomorphic to $R_{2}(7)$.
###### Theorem 3
The set $R_{b}(\infty)$ is a distributive lattice with:
$s(p\vee q)_{i}=\min(s(p)_{i},s(q)_{i})\mbox{ and }s(p\wedge
q)_{i}=\max(s(p)_{i},s(q)_{i})$
for all $p$ and $q$ in $R_{b}(\infty)$. Moreover, for all $n$ the functions
$\pi:s=(s_{1},s_{2},\dots,s_{k})\longrightarrow\pi(s)=(\infty,s_{2},\dots,s_{k})$
and
$\tau:s=(s_{1},s_{2},\dots,s_{k})\longrightarrow\tau(s)=(\infty,s_{1},s_{2},\dots,s_{k})$
are lattice embeddings of $R_{b}(n)$ into $R_{b}(\infty)$.
Proof : The proof for the distributive lattice structure and for the formulae
of the infimum and supremum is very similar to the proof of Theorem 1.
Therefore, it is left to the reader.
Given $p$ and $q$ in $R_{b}(n)$, we now prove that $\pi(p)\vee\pi(q)=\pi(p\vee
q)$. From Theorem 1, we have $s(p\vee q)_{i}=\min(s(p)_{i},s(q)_{i})$.
Moreover, it is clear that $s(\pi(x))_{i}=s(x)_{i}$ for all $x$ in $R_{b}(n)$.
Therefore, $s(\pi(p\vee q))_{i}=\min(s(\pi(p))_{i},s(\pi(q))_{i}))$, which
shows that $\pi$ preserves the supremum. The proof of
$\pi(p)\wedge\pi(q)=\pi(p\wedge q)$ is symmetric. Therefore, $\pi$ is a
lattice embedding.
The proof for $\tau$ is very similar when one has noticed that the shot vector
of $\tau(s)$ is obtained from the one of $s$ by adding a new first component
equal to $n$.
With similar arguments, one can easily show that $\pi(R_{b}(n))$ is a
sublattice of $\pi(R_{b}(n+1))$, and so we have an infinite chain of
distributive lattices:
$\pi(R_{b}(0))\leq\pi(R_{b}(1))\leq\dots\leq\pi(R_{b}(n))\leq\pi(R_{b}(n+1))\leq\dots\leq
R_{b}(\infty),$
where $\leq$ denotes the sublattice relation. Moreover, one can use the self-
similarity established here to construct filters of $R_{b}(\infty)$ (a _filter_
of a poset is an upward closed subset of the poset). Indeed, if one defines
$R_{b}(\leq n)$ as the sub-order of $R_{b}(\infty)$ over $\cup_{i\leq
n}R_{b}(i)$, then one can construct efficiently $R_{b}(\leq n+1)$ from
$R_{b}(\leq n)$ by extracting from $R_{b}(\leq n)$ a part isomorphic to
$R_{b}(n+1)$ and pasting it to $R_{b}(\leq n)$. See Figures 2 and 4.
Notice that, for all integers $b$, $R_{b}(\infty)$ contains exactly all the
finite sequences of integers, since any such sequence can be viewed as a
$b$-ary partition of an integer $n$. Therefore, we provide infinitely many
ways to give the set of finite sequences of integers the distributive lattice
structure.
## 5 Infinite tree
As shown in our construction of $R_{b}(n+1)$ from $R_{b}(n)$, each $b$-ary
partition $p$ in $R_{b}(n+1)$ is obtained from another one $p^{\prime}\in
R_{b}(n)$ by application of the ↪ operator:
$p=\mbox{$p^{\prime}$}^{\hookrightarrow_{i}}$ with $i$ an integer between $0$
and $l(p^{\prime})$, where $l(p^{\prime})$ denotes the number of $b-1$ at the
beginning of $p^{\prime}$. Thus, we can define an infinite tree
$T_{b}(\infty)$ whose nodes are the elements of $\bigsqcup_{n\geq
0}{R_{b}(n)}$ and in which the fatherhood relation is defined by:
$q\mbox{ is the $(i+1)$-th son of $p$ if and only if
}q=p^{\hookrightarrow_{i}}\mbox{ for some }i,\ 0\leq i\leq l(p).$
The root of this tree is $(0)$ and each node $p$ of $T_{b}(\infty)$ has
$l(p)+1$ sons. The first levels of $T_{b}(\infty)$ when $b=2$ are shown in
Figure 3 (we call the set of elements of depth $n$ the “level $n$” of the
tree).
Figure 3: The first levels of $T_{b}(\infty)$ when $b=2$. We distinguished
some special subtrees, which will play an important role in the following.
###### Proposition 1
The level $n$ of $T_{b}(\infty)$ contains exactly the elements of $R_{b}(n)$.
Proof : Straightforward from the construction of $R_{b}(n+1)$ from $R_{b}(n)$
given above and the definition of the tree.
If we define $\overline{R_{b}(n)}$ as $\\{(s_{2},\dots,s_{k})\ |\
(s_{1},s_{2},\dots,s_{k})\in R_{b}(n)\\}$, then:
###### Proposition 2
For all integers $n$, the elements of $\overline{R_{b}(n)}$ are exactly the
elements of the first $\lfloor\frac{n}{b}\rfloor+1$ levels (levels $0$ to
$\lfloor\frac{n}{b}\rfloor$) of $T_{b}(\infty)$.
Proof : Let us first prove that the elements of $\overline{R_{b}(n)}$ are the
nodes of a subtree of $T_{b}(\infty)$ that contains its root. This is obviously
true for $n=0$. The general case follows by induction, since by construction
the elements of $\overline{R_{b}(n+1)}\setminus\overline{R_{b}(n)}$ are sons of
elements of $\overline{R_{b}(n)}$.
Now, let us consider an element $e$ of the $l$-th level of $T_{b}(\infty)$. If
there is a $b$-ary partition $p$ of $n$ such that $\overline{p}=e$, then
clearly $p_{i}=e_{i-1}$ for all $i>0$ and $p_{0}=n-b\cdot l$. Therefore, if
$e$ is in $\overline{R_{b}(n)}$ then all the elements of the $l$-th level are
in $\overline{R_{b}(n)}$, and this is clearly the case exactly when $0\leq
l\leq\lfloor\frac{n}{b}\rfloor$ (i.e. when $p_{0}=n-b\cdot l\geq 0$). This ends the proof.
Notice that this proposition gives a simple way to enumerate the elements of
$R_{b}(n)$ for any $n$ in linear time with respect to their number, since it
gives this set a tree structure. Algorithm 1 achieves this.
Input: An integer $n$ and a basis $b$
Output: The elements of $R_{b}(n)$
begin
$\mbox{Resu}\leftarrow\\{(n)\\}$;
$\mbox{CurrentLevel}\leftarrow\\{()\\}$;
$\mbox{OldLevel}\leftarrow\emptyset$; $l\leftarrow 0$;
while _$l <\lfloor\frac{n}{b}\rfloor$_ do
$\mbox{OldLevel}\leftarrow\mbox{CurrentLevel}$;
$\mbox{CurrentLevel}\leftarrow\emptyset$;
$l\leftarrow l+1$;
foreach _$p$ in OldLevel_ do
$i\leftarrow 0$;
repeat
Add $p^{\hookrightarrow_{i}}$ to CurrentLevel;
$i\leftarrow i+1$;
until _$p_{i-1}\not=b-1$_ ;
end foreach
foreach _$e$ in CurrentLevel_ do
Create $p$ such that $p_{i}=e_{i-1}$ for all $i>0$ and $p_{0}=n-b\cdot l$;
Add $p$ to Resu;
end foreach
end while
Return(Resu);
end
Algorithm 1 Efficient enumeration of the elements of $R_{b}(n)$.
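A direct Python transcription of Algorithm 1 follows (illustrative; the helper `hook` implements the operator $p\mapsto p^{\hookrightarrow_{i}}$):
```python
def enumerate_R(b, n):
    """Enumerate R_b(n) level by level in T_b(infinity), as in Algorithm 1."""
    def hook(p, i):
        # switch the i first components from b-1 to 0 and add one unit at index i
        q = list(p) + [0] * (i + 1 - len(p))
        for j in range(i):
            q[j] = 0
        q[i] += 1
        return tuple(q)

    result, level = [(n,)], [()]
    for l in range(1, n // b + 1):
        nxt = []
        for p in level:
            i = 0
            while True:                    # add p^{hook_i} for i = 0..l(p)
                nxt.append(hook(p, i))
                if i >= len(p) or p[i] != b - 1:
                    break
                i += 1
        level = nxt
        result.extend((n - b * l,) + e for e in level)  # prepend p_0 = n - b*l
    return result
```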
We will now show that $T_{b}(\infty)$ can be described recursively, which
allows us to give a new recursive formula for $|R_{b}(n)|$. In order to do
this, we will use a series known as the $b$-ary carry sequence [Slo73]:
$c_{b}(n)=k$ if $b^{k}$ divides $n$ but $b^{k+1}$ does not. Notice that this
function is defined only for $n>0$ (or one can consider that
$c_{b}(0)=\infty$). These series appear in many contexts, and have many
equivalent definitions 444 For example, if one defines the series $C_{b,0}=0$
and $C_{b,i}=C_{b,i-1},\overbrace{i,C_{b,i-1}}^{b-1\text{ times}}$, then
$c_{b}(i)$ is nothing but the $i$-th integer of the series $C_{b,i}$. The ten
first values for $c_{2}(i)$ are $0,1,0,2,0,1,0,3,0,1$ and the ten first ones
for $c_{3}(i)$ are $0,0,1,0,0,1,0,0,2,0$. Here, we will mainly use the fact
that the first $n$
such that $c_{b}(n)=k$ is $n=b^{k}$, and the fact that $c_{b}(n)$ is nothing
but the number of components equal to $b-1$ at the beginning of the canonical
representation of $n-1$ in the basis $b$.
###### Definition 1
Let $p\in T_{b}(\infty)$. Let us consider the rightmost branch of
$T_{b}(\infty)$ rooted at $p$ ($p$ is considered as the first node of the
branch). We say that $p$ is the root of a _$X_{b,k}$ subtree (of
$T_{b}(\infty)$)_ if this rightmost branch is as follows: for $i\leq b^{k-1}$,
the $i$-th node on the branch has $j=c_{b}(i)+1$ sons, and the $l$-th ($1\leq
l<j$) of these sons is the root of a $X_{b,l}$ subtree. Moreover, the
$(b^{k-1}+1)$-th node of the branch is itself the root of a $X_{b,k}$ subtree.
For example, we show in Figure 3 a $X_{2,2}$ subtree of $T_{2}(\infty)$,
composed of a $X_{2,1}$ subtree and another $X_{2,2}$ subtree. Notice that a
$X_{b,1}$ subtree is simply a chain.
###### Proposition 3
Let $p=(0,0,\dots,0,p_{k},\dots)$ in $T_{b}(\infty)$ with $p_{k}>b-1$. Then,
$p$ is the root of a $X_{b,k+1}$ subtree of $T_{b}(\infty)$.
Proof : The proof is by induction over $k$ and the depth of $p$. Let us
consider the rightmost branch rooted at $p$. Since, for all $q$ in
$T_{b}(\infty)$, the rightmost son of $q$ is $q^{\hookrightarrow_{i}}$ with
$i$ the number of $b-1$ at the beginning of $q$, it is clear that the $j$-th
node of this branch for $j\leq b^{k}$ is $q=(q_{0},\dots,q_{k-1},p_{k},\dots)$
where $(q_{0},\dots,q_{k-1})$ is the canonical representation of $j-1$ in the
basis $b$. Therefore, $q$ begins with $c_{b}(j)$ components equal to $b-1$,
and so, for $l=1,\dots,c_{b}(j)$, the $l$-th son of $q$ starts with $l-1$
zeroes followed by a component equal to $b>b-1$. By induction hypothesis, we
then have that the sons of $q$ are the roots of $X_{b,l}$ subtrees. Moreover,
the $(b^{k}+1)$-th node on the rightmost branch begins with exactly $k$ zeroes
followed by a component greater than $b-1$, and so it is the root of a
$X_{b,k+1}$ subtree by induction hypothesis.
###### Theorem 4
The infinite tree $T_{b}(\infty)$ is a $X_{b,\infty}$ tree: it is a chain (its
rightmost branch) such that its $i$-th node has $c_{b}(i)$ sons and the $j$-th
of these sons, $1\leq j\leq c_{b}(i)$, is the root of a $X_{b,j}$ subtree.
Moreover, the $i$-th node of the chain is the canonical representation of
$i-1$ in the basis $b$.
Proof : Since the rightmost son of $p\in T_{b}(\infty)$ is
$p^{\hookrightarrow_{i}}$, where $i$ is the number of $b-1$ at the beginning
of $p$, and since the root of $T_{b}(\infty)$ is nothing but the canonical
representation of $0$, it is clear by induction that the $i$-th node of the
rightmost branch of $T_{b}(\infty)$ is the canonical representation of $i-1$
in the basis $b$. Then, the theorem follows from Proposition 3.
We now have a recursive description of $T_{b}(\infty)$, which allows us to
give recursive formulae for the cardinality of some special sets. Let us denote by
$\pi_{b}(l,k)$ the number of paths of length exactly $l$ starting from the
root of a $X_{b,k}$ subtree of $T_{b}(\infty)$. We have:
###### Theorem 5
$\pi_{b}(l,k)=\left\\{\begin{array}[]{ll}1&\mbox{if $0\leq l<b$}\\\
1+\sum_{i=1}^{l}\sum_{j=1}^{c_{b}(i)}\pi_{b}(l-i,j)&\mbox{if $b\leq l\leq
b^{k-1}$}\\\
\pi_{b}(l-b^{k-1},k)+\sum_{i=1}^{b^{k-1}}\sum_{j=1}^{c_{b}(i)}\pi_{b}(l-i,j)&\mbox{otherwise
($l>b^{k-1}$)}\end{array}\right.$
Moreover, $|R_{b}(n)|=\pi_{b}(n,n)$ and the number of $b$-ary partitions of
$n$ into exactly $l$ parts is $\pi_{b}(n-(b-1)^{l},l)$.
Proof : The formula for $\pi_{b}(l,k)$ is directly deduced from the definition
of the $X_{b,k}$ subtrees. The other formulae derive from Theorem 4 and from
the fact that all the $b$-ary partitions of length $l$ are in a $X_{b,l}$
subtree of $T_{b}(\infty)$, which is rooted at the $(b-1)^{l}$-th node of the
rightmost branch of $T_{b}(\infty)$.
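The recursion of Theorem 5, together with the carry sequence $c_{b}$, is short to implement; the following sketch (illustrative only) reproduces, for example, the four binary partitions of $4$ via $\pi_{2}(4,4)=4$:
```python
from functools import lru_cache

def c(b, n):
    """b-ary carry sequence: the largest k such that b**k divides n (n > 0)."""
    k = 0
    while n % b == 0:
        n //= b
        k += 1
    return k

@lru_cache(maxsize=None)
def pi(b, l, k):
    """pi_b(l, k): paths of length l from the root of an X_{b,k} subtree."""
    if 0 <= l < b:
        return 1
    if l <= b ** (k - 1):
        return 1 + sum(pi(b, l - i, j)
                       for i in range(1, l + 1) for j in range(1, c(b, i) + 1))
    return pi(b, l - b ** (k - 1), k) + sum(
        pi(b, l - i, j)
        for i in range(1, b ** (k - 1) + 1) for j in range(1, c(b, i) + 1))

# |R_b(n)| = pi_b(n, n); e.g. pi(2, 4, 4) == 4.
```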
## 6 Perspectives
The results presented in this paper mainly point out the strong self-
similarity and the structure of the sets $R_{b}(n)$. As already noticed, it is
an open question to compute the cardinality of $R_{b}(n)$ in linear time with
respect to $n$, and one may expect to obtain a solution using these results.
Another interesting direction is to investigate how one can extend the
dynamics we study. A first idea is to consider non-integer bases, in
particular complex bases or the Fibonacci base. For example, if we consider the
complex basis $b=i-1$ then we can obtain all the ways to write an integer $n$
as the sum of powers of $b$ by iterating the following evolution rule from
$(n)$: $q$ is a successor of $p$ if $p-q=(0,\dots,0,2,0,-1,-1,0\dots,0)$. In
other words, we can decrease by two the $j$-th component of $p$ and increase
by one its $(j+2)$-th and its $(j+3)$-th components for some integer $j$. This
gives to the set of representations of $n$ in the complex basis $b=i-1$ the
lattice structure, since this can be encoded by a Chip Firing Game [LP00]
(notice however that in this case the lattice is no longer distributive).
Another interesting case is when $b=1$. As already noticed, we obtain the
Young lattice, or equivalently the lattice of the compositions of $n$.
## 7 Acknowledgments
I thank Christiane Frougny and Clémence Magnien for many useful comments on
preliminary versions, which deeply improved the manuscript.
## References
* [Ber71] Claude Berge. Principles of Combinatorics, volume 72 of Mathematics in science and engineering. Academic Press, 1971.
* [BL92] Anders Björner and László Lovász. Chip-firing games on directed graphs. J. Algebraic Comb., 1(4):305–328, December 1992.
* [BLS91] A. Bjorner, L. Lovász, and W. Shor. Chip-firing games on graphs. E.J. Combinatorics, 12:283–291, 1991.
* [Chu69] R.F. Churchhouse. Congruence properties of the binary partition function. Proc. Camb. Phil. Soc, 66:371–375, 1969.
* [Chu71] R.F. Churchhouse. Binary partitions. In A.O.L. Atkin and B.J. Birch, editors, Computers in Number Theory, pages 397–400. Academic Press, 1971.
* [dB48] N.G. de Bruijn. Nederl. Akad. Wetensch. Proc., 51:659–669, 1948.
* [DP90] B.A. Davey and H.A. Priestley. Introduction to Lattices and Orders. Cambridge university press, 1990.
* [Eul50] L. Euler. Novi Comm. Petrop., III, 1750.
* [Fro77] C.-E. Froberg. Accurate estimation of the number of binary partitions. BIT, 17:386–391, 1977.
* [Knu66] D.E. Knuth. An almost linear recurrence. Fib. Quart., 4:117–128, 1966.
* [LMMP98] M. Latapy, R. Mantaci, M. Morvan, and H.D. Phan. Structure of some sand piles model. 1998. To appear in Theoretical Computer Science. Preprint available at `http://www.liafa.jussieu.fr/~latapy/`.
* [LP99] M. Latapy and H.D. Phan. The lattice of integer partitions and its infinite extension. 1999. To appear in DMTCS, special issue, proceedings of ORDAL’99. Preprint available at `http://www.liafa.jussieu.fr/~latapy/`.
* [LP00] M. Latapy and H.D. Phan. The lattice structure of chip firing games. 2000. To appear in Physica D. Preprint available at `http://www.liafa.jussieu.fr/~latapy/`.
* [Mah40] Kurt Mahler. On a special functional equation. J. London Math. Soc, 15:115–123, 1940.
* [Mon90] Bernard Monjardet. The Consequences of Dilworth’s Work on Lattices with Unique Irreducible Decompositions, pages 192–199. Birkhäuser Boston, Boston, MA, 1990.
* [MPV01] Clémence Magnien, Ha Duong Phan, and Laurent Vuillon. Characterization of lattices induced by (extended) chip firing games. In Discrete Models: Combinatorics, Computation, and Geometry, DM-CCG 2001, volume AA of DMTCS Proceedings, pages 229–244, 2001\.
* [Pen53] W.B. Pennington. On Mahler’s partition problem. Annals of Math., 57:531–546, 1953.
* [Pfa95] J.L. Pfaltz. Partitions of $2^{n}$. Congressus Numerantium, 109:3–12, 1995.
* [PJG95] Peter J. Grabner, Pierre Liardet, and Robert F. Tichy. Odometers and systems of numeration. Acta Arithmetica, 70(2):103–123, 1995.
* [Rod69] Öystein Rodseth. Some arithmetical properties of $m$-ary partitions. Proc. Camb. Phil. Soc, 68:447–453, 1969.
* [Slo73] N.J.A. Sloane. A Handbook of Integer Sequences. Academic Press, 1973. On-line version at `http://www.research.att.com/%7Enjas/`.
* [Tan18a] A. Tanturri. Atti R. Acad. Sci. Torino, 54:69–82, 1918.
* [Tan18b] A. Tanturri. Atti R. Acad. Lincei, 27:399–403, 1918.
Figure 4: The distributive lattice $R_{2}(80)$, which contains $4124$ elements
and $12484$ edges. The self-similarity of the set clearly appears on this
diagram.
|
# On-state commutativity of measurements
and joint distributions of their outcomes
Jan Czajkowski<EMAIL_ADDRESS>QuSoft, University of Amsterdam Alex B.
Grilo<EMAIL_ADDRESS>Sorbonne Université, CNRS, LIP6
###### Abstract
In this note, we analyze joint probability distributions that arise from
outcomes of sequences of quantum measurements performed on sets of quantum
states. First, we identify some properties of these distributions that must be
fulfilled to obtain classical behavior. Second, we prove that a joint
distribution exists iff the measurement operators permute “on-state”
(permutability is the generalization of commutativity to more than two
operators). By “on-state” we mean properties of operators that hold only on a
subset of states in the Hilbert space. Then, we disprove a conjecture proposed
by Carstens, Ebrahimi, Tabia, and Unruh (eprint 2018), which states that
partial on-state permutation implies full on-state permutation: we exhibit a
counterexample where pairwise on-state commutativity does not imply on-state
permutability, unlike in the case of commutativity on all states in the
Hilbert space.
Finally, we explore the new concept of on-state commutativity by showing a
simple proof that if two projections almost on-state commute, then there is a
commuting pair of operators that are on-state close to the originals. This
result was originally proven by Hastings (Communications in Mathematical
Physics, 2019) for general operators.
## 1 Introduction
In this work we propose a basic formalism for studying classical distributions
that come from joint measurements on quantum states.
Our initial motivation comes from studying a conjecture proposed in a recent
paper by Carstens, Ebrahimi, Tabia, and Unruh [CETU18]. Their result on
quantum indifferentiability (indifferentiability is a strong security notion
capturing the security of cryptographic constructions such as hash functions:
we require that no polynomial-time adversary can distinguish whether it has
access to a cryptographic hash function or to an ideal random function, even
given access to the internal auxiliary functions used to construct the hash
function; the quantum version lets the adversary make quantum queries) relies
on a conjecture proposed by them, which informally states that commutation of
projectors with respect to a fixed quantum state implies a classical joint
distribution of their measurement outcomes. More concretely, they conjecture
the following (see Conjecture 2 for the formal statement).
###### Conjecture 1 (Informal).
If we have a set of $N$ measurements $P_{1},\dots,P_{N}$ that commute on a
quantum state $|\psi\rangle$ (informally, two operators $A$ and $B$ commute
on $|\psi\rangle$ if $[A,B]|\psi\rangle=0$), then there exist random variables
$X_{1},\dots,X_{N}$ drawn from a distribution $D$ such that for any $t>1$ and
any indices $i_{1},\dots,i_{t}$, the marginals of this distribution on
$X_{i_{1}},\dots,X_{i_{t}}$ correspond to measuring $|\psi\rangle$ with
measurements $P_{i_{1}},\dots,P_{i_{t}}$.
Motivated by this conjecture, our goal is to study the behavior of $N$ random
variables $X_{1},X_{2},\dots,X_{N}$ corresponding to the outcomes of a
sequence of quantum measurements that commute on a set of quantum states
$\mathcal{F}\subseteq\mathcal{D}(\mathcal{H})$. Surprisingly, such results
have only been studied for $\mathcal{F}=\mathcal{D}(\mathcal{H})$, i.e.
measurements commuting on all quantum states.
The focal point of this note is to study the necessary and sufficient
properties of the quantum setup under which such a probability distribution is
well-defined. With this in hand, we present two applications. First, we
disprove Conjecture 1. Second, we show a simpler proof for a
variant of the result by Hastings [Has09] on operators that almost-commute on
specific states.
To be able to explain our contributions in more detail, we first take a detour
through very basic properties of probability distributions that arise
from classical processes. Then, we discuss how these properties could be
defined in the quantum setting (but, unfortunately, they do not hold for
general quantum setups), and finally we state our results and discuss related
works.
### 1.1 Classical Distributions
We discuss here properties of classical distributions that may be obvious at
first but are crucial and not trivial in the quantum world.
In the following, we let $A,B,C$ be events that come from a classical
experiment. We denote the event corresponding to $A$ not happening as
$\overline{A}$, the probability that $A$ and $B$ both happen as
$\mathbb{P}[A,B]$, and the probability that $A$ happens conditioned on event
$B$ happening as
$\mathbb{P}[A|B]=\frac{\mathbb{P}[A,B]}{\mathbb{P}[B]}$ (assuming
$\mathbb{P}[B]\neq 0$).
The first property that we want to recall on classical distributions is that
we can compute the marginals of the distribution when given the joint
distribution:
###### Property 1 (Classical Marginals).
$\mathbb{P}[A\mid C]=\mathbb{P}[A,B\mid C]+\mathbb{P}[A,\overline{B}\mid C]$.
A second property that we want to recall is that the probability that $A$ and
$\overline{A}$ occur is $0$, even when considering other events:
###### Property 2 (Classical Disjointness).
$\mathbb{P}[A,B\mid\overline{A}]\mathbb{P}[\overline{A}]=\mathbb{P}[A,B,\overline{A}]=0$.
Another property that we have classically is _reducibility_ , which says that
the probability of $A$, $B$, and then $A$ again all happening is the same as
the probability of $A$ and $B$.
###### Property 3 (Classical Reducibility).
$\mathbb{P}[A,B\mid A]\mathbb{P}[A]=\mathbb{P}[A,B,A]=\mathbb{P}[A,B]$.
Finally, the last property we study is _sequential independence_ of random
variables (sequential independence was originally defined in [GN01] in the
context of quantum measurements). Roughly, this property says that the
probability that event $A$ happens and then event $B$ happens is the same as
the probability that event $B$ happens and then event $A$ happens.
###### Property 4 (Classical Sequential Independence).
$\mathbb{P}[A\mid B]\mathbb{P}[B]=\mathbb{P}[A,B]=\mathbb{P}[B\mid
A]\mathbb{P}[A]$.
We stress that these properties hold trivially for all classical distributions
and all events such that $\mathbb{P}[A]\neq 0$, $\mathbb{P}[\overline{A}]\neq
0$, $\mathbb{P}[B]\neq 0$, and $\mathbb{P}[C]\neq 0$.
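As a quick sanity check, the following minimal Python sketch (our
illustration, not part of the original text) verifies Properties 1 and 4
numerically for a randomly generated joint distribution over three binary
events.

```python
# Minimal sanity check (ours) of Properties 1 and 4 for a classical joint
# distribution over three binary events A, B, C; indices 0/1 encode
# "event does not happen" / "event happens".
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 2, 2))            # unnormalized joint pmf P[A=a, B=b, C=c]
p /= p.sum()

# Property 1 (marginals): P[A | C] = P[A, B | C] + P[A, not-B | C]
P_C = p.sum(axis=(0, 1))[1]
lhs = p.sum(axis=1)[1, 1] / P_C
rhs = p[1, 1, 1] / P_C + p[1, 0, 1] / P_C
assert np.isclose(lhs, rhs)

# Property 4 (sequential independence): P[A | B] P[B] = P[A, B] = P[B | A] P[A]
P_AB = p.sum(axis=2)
P_A, P_B = P_AB.sum(axis=1)[1], P_AB.sum(axis=0)[1]
assert np.isclose((P_AB[1, 1] / P_B) * P_B, (P_AB[1, 1] / P_A) * P_A)
```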
### 1.2 Quantum Distributions and their Properties
Our goal is to find necessary conditions for the existence of a classical
description of the experiment where we perform a sequence of $N$ general
measurements, irrespective of the order. More concretely, we aim to find the
properties of measurement operators $Q_{1},...,Q_{N}$ on specific subsets of
quantum states $\mathcal{F}$ so that there exists a joint distribution of
$X_{1},X_{2},\dots,X_{N}$ such that all marginals of this distribution on
$X_{i_{1}},\dots,X_{i_{t}}$ correspond to measuring a state
$|\psi\rangle\in\mathcal{F}$ with measurements $Q_{i_{1}},\dots,Q_{i_{t}}$. In
this case, we call it a _quantum distribution_.
The main obstacle in this task is the fact that quantum measurements do not
necessarily commute, unlike in the classical world: the chosen order for
performing the measurements influences the final probability distribution of
the joint measurement outcomes. Because of that, we will consider the quantum
analog of Properties 1 to 4, and study when such properties hold in the
quantum case, and their implication for having such a joint distribution. Our
connections closely follow [ME84], where they show that the existence of a
joint distribution for two arbitrary quantum observables (Hermitian operators)
on every quantum state is equivalent to their commutation. In this work, we
show how to extend their analysis in two ways: we are interested in multiple
observables and we consider specific sets of quantum states. In order to carry
out this analysis, we extend the properties described in Section 1.1 to
quantum measurements and study their relations to each other. We leave the
formal definitions of the quantum analogs of these classical properties to
Section 3.1.
### 1.3 Our Results
Using the formalism described in the previous section, we prove the following
connections between joint quantum distributions and the measurement operators.
First, we show that, quantumly, the marginals property also implies the
sequential independence property.
###### Result 1 (Informal statement of Theorem 1).
If a joint distribution has the quantum marginal property, then it also has
the quantum sequential independence property.
Then, we show that in the on-state case there is a quantum joint distribution
iff all operators permute (informally, a set of operators permutes on
$|\psi\rangle$ if applied in any order they yield the same state:
$A_{1}\cdots A_{N}|\psi\rangle=A_{\sigma(1)}\cdots A_{\sigma(N)}|\psi\rangle$,
where $\sigma$ is a permutation). This result is a generalization of the
classic results from [Nel67, Fin73, Fin82, ME84] to the on-state case.
###### Result 2 (Informal statement of Theorem 4).
Fix a set of quantum states $\mathcal{F}$. A set of measurements yields a
quantum joint distribution on each state in $\mathcal{F}$ iff these operators
permute on every state in $\mathcal{F}$.
Then, we show that pairwise on-state commutation does not imply full on-state
permutation, unlike in the case of permutation on all states. This fact, which
we prove via a numerical example, together with Result 2 implies that
Conjecture 1 is false.
###### Result 3.
Conjecture 1 is false.
Finally, our last result is a simpler proof for a restricted version of
Theorem 1 in [Has09], which states that if two operators $A$ and $B$ almost-
commute, we can find commuting operators $A^{\prime}$ and $B^{\prime}$ that
are close to $A$ and $B$, respectively. In our case, we consider on-state
commutation instead of the regular one, and unlike in [Has09], our proof works
only for projectors.
###### Result 4 (Making almost commuting projectors commute).
Given any two projectors $P_{1}$ and $P_{2}$ and a state $|\psi\rangle$, if
$\left\|(P_{1}P_{2}-P_{2}P_{1})|\psi\rangle\right\|=\epsilon$, then there is a
projector $P_{2}^{\prime}$ with $[P_{1},P_{2}^{\prime}]=0$ that is close to
the original projector on the state:
$\left\|(P_{2}^{\prime}-P_{2})|\psi\rangle\right\|\leq\sqrt{2}\epsilon$.
### 1.4 Related Work
A prominent result in the literature is that a joint distribution for a set of
measurements exists iff all the operators pairwise commute. Different versions
of this result were previously proven: In [Nel67] the author considers the
case of continuous variables and $N$ observables. A similar result but without
specifying the Hilbert space is achieved with different mathematical tools in
[Fin82]. In the specific case where we have only two observables, we mention
three works; In [Fin73] and [ME84] the authors prove the classic problem in a
way similar to each other, but using different mathematical tools. All but the
first work mentioned here focus on the joint distribution as a functional from
the space of states. An approach using $*$-algebras was presented by Hans
Maassen in [Maa06, Maa10].
The authors of [GN02] analyze the case of general measurements and prove that
the measurement operators pairwise commute iff the square-root operators
permute (Corollaries 3 and 6 in [GN02]), in the sense of our Definition 3 (for
all states in $\mathcal{H}$). The general problem of conditional probabilities
in quantum mechanics was discussed by Cassinelli and Zanghi in [CZ83].
The related problems of measurement incompatibility of devices and joint
measurability of quantum effects are covered in [HMZ16] and [BN18],
respectively.
In [Lin97, FR96] the authors prove that for any two Hermitian matrices whose
commutator has small norm, there are operators close to the originals that
fully commute. In [Has09] Hastings quantifies how close the new operators are
in terms of the norm of the commutator.
### Organization
In Section 2, we provide some preliminaries. Then in Section 3, we discuss
quantum distributions and their properties. Finally, in Section 4, we discuss
the almost-commuting case.
### Acknowledgements
JC thanks Dominique Unruh and Christian Schaffner for helpful discussions. JC
was supported by an NWO VIDI grant (Project No. 639.022.519). Most of this
work was done while AG was affiliated with CWI and QuSoft.
## 2 Preliminaries
### 2.1 Notation
In this work, we are going to use calligraphic letters
($\mathcal{S},\mathcal{R},...$) to denote sets. We denote
$[N]:=\\{1,2,\dots,N\\}$. For $\mathcal{S}\subseteq[N]$, we denote by
$\mathcal{S}(i)$ the $i$-th element of the set $\mathcal{S}$ in ascending
order. For some fixed sets $\mathcal{X}_{1},...,\mathcal{X}_{N}$, we denote by
$\vec{x}$ an element of $\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}$ and for
$\mathcal{S}\subseteq[N]$ we have
$\vec{x}_{\mathcal{S}}:=(x_{\mathcal{S}(1)},\dots,x_{\mathcal{S}(|\mathcal{S}|)})$.
We denote the set of all permutations of $t$ elements by $\Sigma_{t}$. For some
complex number $c=a+b\textnormal{i}$, we define $\mathfrak{Re}(c)=a$ as its
real part.
### 2.2 Quantum Measurements
We briefly review some concepts in quantum computation/information and we
refer to [NC11] for a more detailed introduction to these topics.
Quantum states are represented by positive semi-definite operators with unit
trace, i.e., $\rho\succeq 0,\textnormal{Tr}(\rho)=1$. We denote the set of all
density operators on some Hilbert space $\mathcal{H}$ by
$\mathcal{D}(\mathcal{H})$.
To describe general measurements, we use the notion of Positive Operator
Valued Measure (POVM). The only requirement of POVMs is that they consist of
positive operators and sum up to the identity operator. More formally, a POVM
with set of outcomes $\mathcal{X}$ is described by a set of operators
$\mathcal{M}=\\{Q^{x}\\}_{x\in\mathcal{X}}$, where $\forall
x\in\mathcal{X}:Q^{x}\succeq 0$, and
$\sum_{x\in\mathcal{X}}Q^{x}=\mathbbm{1}$.
We denote the probability of getting the outcome $x$ when measuring $\rho$
with the measurement $\mathcal{M}$ by
$\mathbb{P}[x\leftarrow\mathcal{M}(\rho)]:=\textnormal{Tr}(Q^{x}\rho)$. To
describe the post-measurement state, we can write down operators of
$\mathcal{M}$ as products of linear operators on $\mathcal{H}$ (denoted by
$\mathcal{L}(\mathcal{H})$), $Q^{x}=A^{x\dagger}A^{x}$, where
$A^{x}\in\mathcal{L}(\mathcal{H})$ (such a decomposition is always possible
since $Q^{x}\succeq 0$). The post-measurement state when the outcome of
$\mathcal{M}$ on $\rho$ is $x$ is given by
$\rho_{x}:=\frac{A^{x}\rho A^{x\dagger}}{\textnormal{Tr}(Q^{x}\rho)}.$ (1)
The operator $A^{x}$ is called the _square root_ of $Q^{x}$.
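For illustration, here is a minimal Python sketch (ours, with made-up
operators) of a two-outcome POVM on one qubit, computing the outcome
probabilities $\textnormal{Tr}(Q^{x}\rho)$ and the post-measurement states of
Equation (1), with $A^{x}$ taken to be the positive square root of $Q^{x}$.

```python
# A small sketch (ours) of a two-outcome POVM {Q^0, Q^1} on one qubit:
# outcome probabilities Tr(Q^x rho) and post-measurement states as in
# Equation (1), with A^x chosen as the positive square root of Q^x.
import numpy as np
from scipy.linalg import sqrtm

Q0 = np.diag([0.8, 0.3])                   # positive semidefinite
Q1 = np.eye(2) - Q0                        # operators sum to the identity
rho = 0.5 * np.ones((2, 2))                # density operator |+><+|

for Q in (Q0, Q1):
    A = sqrtm(Q)                           # one valid square root of Q
    prob = np.trace(Q @ rho).real
    post = A @ rho @ A.conj().T / prob     # post-measurement state
    print(round(prob, 3), round(np.trace(post).real, 3))   # post has trace 1
```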
## 3 Quantum Distributions
In this section, we study the description of the statistics of outcomes
derived from a sequence of measurements. Our approach is to consider the
quantum version of the classical properties described in Section 1.1. Since
quantum measurements do not commute in general, these quantum properties do
not always hold. We then study the connection between properties of the
measurements and the properties of their outcome distribution.
The structure of our proofs follows [ME84], where they show that for two
Hermitian observables there is a joint distribution for the outcomes of their
joint measurement iff they commute. We stress that the result in [ME84] only
works for measurements that commute on every quantum state and our result
extends it to the case of joint distributions defined on a limited set of
states.
In the following, we denote the observables with $Q$ and their square-roots
with $R$. In Section 3.1, we define the quantum analogues of the classical
properties of distributions defined in Section 1.1. Then, in Section 3.2, we
state and prove the main result of this section, where we show a connection
between existence of a distribution and permutability—a generalization of
commutativity—of the corresponding measurement operators.
### 3.1 Quantum Distributions
We analyse a functional that maps a density operator together with outcomes of
the $N$ random variables $X_{1},X_{2},\dots,X_{N}$ to the reals,
$\mathbf{W}_{[N]}:\mathcal{D}(\mathcal{H})\times(\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N})\to[0,1]$.
We define this functional (note that the superscript of
$\mathbf{W}^{\rho}_{[N]}(\vec{x})$ denotes the first input to the functional,
so $\mathbf{W}^{\rho}_{[N]}(\vec{x})=\mathbf{W}_{[N]}(\rho,\vec{x})$) as
$\mathbf{W}_{[N]}^{\rho}(\vec{x}):=\textnormal{Tr}\left(Q_{[N]}^{\vec{x}}\rho\right),$
(2)
where $Q_{[N]}^{\vec{x}}$ is a positive semidefinite operator corresponding to
the outcome $\vec{x}\in\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}$. The
subscript $[N]$ of $\mathbf{W}$ denotes the set of indices of random variables
that the distribution is defined on. This definition is similar to the one
proposed by [ME84].
The starting point of our discussion is that every random variable $X_{i}$
corresponds to the measurement $\mathcal{M}_{i}=\\{Q_{i}^{x_{i}}\\}$. So we
have to keep in mind that these operators are fixed throughout this note.
Given the definition of $\mathbf{W}$, we can state conditions so that it can
be seen as a joint quantum distribution:
* •
Normalization: $\sum_{\vec{x}}Q^{\vec{x}}_{[N]}=\mathbbm{1}$, which implies
that for all $\rho\in\mathcal{D}(\mathcal{H})$,
$\sum_{\vec{x}}\mathbf{W}^{\rho}_{[N]}(\vec{x})=1$.
* •
Linearity: for every
$\vec{x}\in\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}$,
$\rho_{1},\rho_{2}\in\mathcal{D}(\mathcal{H})$ and
$\lambda_{1},\lambda_{2}\in[0,1]$, we have that
$\mathbf{W}^{\lambda_{1}\rho_{1}+\lambda_{2}\rho_{2}}_{[N]}(\vec{x})=\lambda_{1}\mathbf{W}^{\rho_{1}}_{[N]}(\vec{x})+\lambda_{2}\mathbf{W}^{\rho_{2}}_{[N]}(\vec{x}).$
* •
Non-negativity: for every
$\vec{x}\in\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}$ and
$\rho\in\mathcal{D}(\mathcal{H})$, we have
$\mathbf{W}^{\rho}_{[N]}(\vec{x})\geq 0$.
We describe the quantum analogues of the properties described in Section 1.1.
#### Marginals.
We start with a sequence of general measurements (i.e. POVMs):
$\\{\mathcal{M}_{i}\\}_{i\in[N]}$, where
$\mathcal{M}_{i}:=\\{Q_{i}^{x}\\}_{x\in\mathcal{X}_{i}}$ for all $i$ and all
$Q_{i}^{x}\succeq 0$ and $\sum_{x\in\mathcal{X}_{i}}Q^{x}_{i}=\mathbbm{1}$.
Moreover, for $\mathcal{S}\subseteq[N]$ we define $Q^{\vec{y}}_{\mathcal{S}}$
to be measurement operators where
$\vec{y}\in\mathcal{X}_{\mathcal{S}(1)}\times\cdots\times\mathcal{X}_{\mathcal{S}(|\mathcal{S}|)}$.
Given $Q^{\vec{y}}_{\mathcal{S}}$ and their corresponding square-roots
$R^{\vec{y}}_{\mathcal{S}}$ and $\rho$, we have that if
$\textnormal{Tr}\left(R^{\vec{y}}_{\mathcal{S}}\rho
R^{\vec{y}\dagger}_{\mathcal{S}}\right)\neq 0$, then we define the conditional
distribution for any sequence $\vec{x}$ as
$\mathbf{W}_{[N]}^{\rho}(\vec{x}\mid\vec{y}):=\mathbf{W}_{[N]}^{R^{\vec{y}}_{\mathcal{S}}\rho
R^{\vec{y}\dagger}_{\mathcal{S}}}(\vec{x})/\textnormal{Tr}\left(R^{\vec{y}}_{\mathcal{S}}\rho
R^{\vec{y}\dagger}_{\mathcal{S}}\right).$ (3)
For all those measurements, for a set
$\mathcal{F}\subseteq\mathcal{D}(\mathcal{H})$, and for
$\mathcal{U}\subseteq[N]$, we define the “orbit” of the post-measurement
states. For any $\mathcal{T}\subseteq[N]$ of size $t$, we take $s\leq t$ sets
$\mathcal{S}_{1},...,\mathcal{S}_{s}$ that are a partition of $\mathcal{T}$.
We then consider the post-measurement states generated by sequences of
measurements corresponding to $\mathcal{S}_{i}$:
$\displaystyle\mathcal{G}_{\mathcal{U}}(\mathcal{F}):=\left\\{R^{\vec{y}_{s}}_{\mathcal{S}_{s}}\cdots
R^{\vec{y}_{1}}_{\mathcal{S}_{1}}\psi
R^{\vec{y}_{1}\dagger}_{\mathcal{S}_{1}}\cdots
R^{\vec{y}_{s}\dagger}_{\mathcal{S}_{s}}/\textnormal{Tr}\left(R^{\vec{y}_{s}}_{\mathcal{S}_{s}}\cdots
R^{\vec{y}_{1}}_{\mathcal{S}_{1}}\psi
R^{\vec{y}_{1}\dagger}_{\mathcal{S}_{1}}\cdots
R^{\vec{y}_{s}\dagger}_{\mathcal{S}_{s}}\right):\psi\in\mathcal{F},\mathcal{T}\subseteq\mathcal{U},\right.$
$\displaystyle\left.s\leq\left|\mathcal{T}\right|,\;\mathcal{S}_{1},...,\mathcal{S}_{s}\subseteq\mathcal{T},\bigcup_{i=1}^{s}\mathcal{S}_{i}=\mathcal{T},\forall
i\neq
j\,\mathcal{S}_{i}\cap\mathcal{S}_{j}=\emptyset,\vec{y}_{i}\in\mathcal{X}_{\mathcal{S}_{i}(1)}\times\cdots\times\mathcal{X}_{\mathcal{S}_{i}(|\mathcal{S}_{i}|)}\right\\},$
(4)
where $R^{\vec{y}_{i}}_{\mathcal{S}_{i}}$ are the square-root operators of
$Q^{\vec{y}_{i}}_{\mathcal{S}_{i}}=R^{\vec{y}_{i}\dagger}_{\mathcal{S}_{i}}R^{\vec{y}_{i}}_{\mathcal{S}_{i}}$.
The subscript of $\mathcal{G}$ denotes the set whose subsets we take; usually
it is $[N]$, but later we also consider $[N]\setminus\mathcal{S}$ for some
$\mathcal{S}$.
With our quantum marginals property, we require that the operator we get after
we sum over a subset of variables is still a valid measurement operator.
###### Property 5 (Quantum Marginals).
We say that the joint distribution $\mathbf{W}$ has the _quantum marginals
property on set $\mathcal{F}$_ if for every $\mathcal{S}\subseteq[N]$, there
is a measurement
$\mathcal{M}_{\mathcal{S}}=\\{Q^{\vec{y}}\\}_{\vec{y}\in(\mathcal{X}_{i})_{i\in\mathcal{S}}}$
such that for every value
$\vec{x}\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{N}$,
denoting $\vec{x}:=(x_{1},x_{2},\dots,x_{N})$, and for every density operator
$\rho\in\mathcal{G}_{[N]}(\mathcal{F})$, with $\mathcal{G}$ defined as in
Equation (4), we have
that
$\displaystyle\mathbf{W}^{\rho}_{\mathcal{S}}(\vec{x}_{\mathcal{S}}):=\textnormal{Tr}\left(Q^{\vec{x}_{\mathcal{S}}}_{\mathcal{S}}\rho\right)=\sum_{x_{i}\in\mathcal{X}_{i},i\in[N]\setminus\mathcal{S}}\mathbf{W}^{\rho}_{[N]}(\vec{x}).$
(5)
Additionally for $|\mathcal{S}|=1$ the operators $Q^{x_{i}}_{i}$ are the
operators from $\mathcal{M}_{i}$.
#### Disjointness.
It follows from the definition of POVMs that quantum measurement operators
need not be orthogonal, which implies that the disjointness property
(Property 2) does not hold in general quantumly.
Disjointness is a property that concerns a post-measurement state of a set
$\mathcal{S}$ of variables. To ensure the existence of a measurement operator
corresponding to $\mathcal{S}$, we need to assume Property 5.
###### Property 6 (Quantum Disjointness).
Let $\mathbf{W}$ be a joint distribution for which Property 5 holds. We say
that $\mathbf{W}$ has the _quantum disjointness property on set $\mathcal{F}$_
if for every subset $\mathcal{S}\subseteq[N]$, for every density operator
$\rho\in\mathcal{G}_{[N]\setminus\mathcal{S}}(\mathcal{F})$, and for every
value
$\vec{x}\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{N}$
and $\vec{y}\in\prod_{i\in\mathcal{S}}\mathcal{X}_{i}$, we have that if
$\vec{y}\neq\vec{x}_{\mathcal{S}}$, then
$\displaystyle\mathbf{W}^{\rho}_{[N]}(\vec{x}\mid\vec{y})\mathbf{W}^{\rho}_{[N]}(\vec{y})=\textnormal{Tr}\left(Q^{\vec{x}}_{[N]}R^{\vec{y}}_{\mathcal{S}}\rho
R^{\vec{y}\dagger}_{\mathcal{S}}\right)=0.$ (6)
#### Reducibility.
Reducibility (Property 3) is a similar property to disjointness but with the
key difference that we condition on the same event:
###### Property 7 (Quantum Reducibility).
Let $\mathbf{W}$ be a joint distribution for which Property 5 holds. We say
that $\mathbf{W}$ has the _quantum reducibility property on set $\mathcal{F}$_
if for every subset $\mathcal{S}\subset[N]$, every density operator
$\rho\in\mathcal{G}_{[N]\setminus\mathcal{S}}(\mathcal{F})$, and value
$\vec{x}\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{N}$,
we have that
$\displaystyle\mathbf{W}^{\rho}_{[N]}(\vec{x}\mid\vec{x}_{\mathcal{S}})\mathbf{W}^{\rho}_{[N]}(\vec{x}_{\mathcal{S}})=\textnormal{Tr}\left(Q^{\vec{x}}_{[N]}R^{\vec{x}_{\mathcal{S}}}_{\mathcal{S}}\rho
R^{\vec{x}_{\mathcal{S}}\dagger}_{\mathcal{S}}\right)=\textnormal{Tr}\left(Q^{\vec{x}}_{[N]}\rho\right).$
(7)
Note that the last two properties together allow us to conclude that the
operators are (morally) _on-state projections_ : Property 6 plays the role of
different projectors being orthogonal, and Property 7 plays the role of the
fact that projecting twice onto the same space does not change the resulting
state.
More concretely, we say that the $R_{i}$’s are on-state projectors on
$\psi\in\mathcal{F}$ if for all $\mathcal{S}\subseteq[N]$, all
$\vec{x}\in\mathcal{X}_{1}\times\cdots\times\mathcal{X}_{N}$, all
$\vec{y}\in\prod_{i\in\mathcal{S}}\mathcal{X}_{i}$, and for
$R_{\mathcal{S}}^{\vec{y}}:=R^{y_{1}}_{\mathcal{S}(1)}R^{y_{2}}_{\mathcal{S}(2)}\cdots
R^{y_{t}}_{\mathcal{S}(t)}$ and
$Q_{\mathcal{S}}^{\vec{y}}:=R_{\mathcal{S}}^{\vec{y}\dagger}R_{\mathcal{S}}^{\vec{y}}$
(similarly for $[N]$), we have
$\displaystyle\textnormal{Tr}\left(R^{\vec{y}\dagger}_{\mathcal{S}}Q^{\vec{x}}_{[N]}R^{\vec{y}}_{\mathcal{S}}\psi\right)=\delta_{\vec{y},\vec{x}_{\mathcal{S}}}\textnormal{Tr}\left(Q^{\vec{x}}_{[N]}\psi\right).$
(8)
#### Sequential Independence.
As previously discussed, the notion of time order in the quantum setting is
much more delicate, since the probabilistic events no longer commute. Let us go
back to the example of the simple sequence $(A,B)$ from Section 1.1 but now
consider $A$ and $B$ as quantum observables measured on the state $\rho$. Let
us assume for simplicity that $A$ and $B$ are projections. The probability of
measuring $a$ with $A$ is $\textnormal{Tr}\left(A\rho\right)$ and the state
after this measurement is $\rho_{a}:=\frac{A\rho
A}{\textnormal{Tr}\left(A\rho\right)}$ so the probability of measuring the
sequence $(a,b)$ equals
$\mathbb{P}[b\leftarrow B(\rho_{a})]\mathbb{P}[a\leftarrow
A(\rho)]=\textnormal{Tr}\left(B\frac{A\rho
A}{\textnormal{Tr}(A\rho)}\right)\textnormal{Tr}\left(A\rho\right)=\textnormal{Tr}\left(ABA\rho\right).$
(9)
On the other hand, the probability of measuring the sequence $(b,a)$ equals
$\textnormal{Tr}\left(BAB\rho\right)$, which in general differs from
Equation (9). This simple example shows that sequential independence is not
attained by all quantum joint probabilities. More formally, the notion of
sequential independence from [GN02] for quantum joint probabilities can be
stated as follows.
###### Property 8 (Quantum Sequential Independence).
Let $\mathbf{W}$ be a joint distribution for which Property 5 holds. We say
that $\mathbf{W}$ has the _quantum sequential independence property on set
$\mathcal{F}$_ if for every density operator $\psi\in\mathcal{F}$, for any
$\mathcal{T}\subseteq[N]$ of size $t$, for all $s\leq t$, partition
$\mathcal{S}_{1},...,\mathcal{S}_{s}$ of $\mathcal{T}$, permutation
$\sigma\in\Sigma_{s}$, and
$\vec{x}\in\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{N}$
such that $\vec{x}:=(x_{1},x_{2},\dots,x_{N})$ and
$\vec{y}_{i}:=\left(x_{\mathcal{S}_{i}(1)},x_{\mathcal{S}_{i}(2)},\dots,x_{\mathcal{S}_{i}(\left|\mathcal{S}_{i}\right|)}\right)$,
we have that
$\displaystyle\textnormal{Tr}\left(Q^{\vec{y}_{s}}_{\mathcal{S}_{s}}R^{\vec{y}_{s-1}}_{\mathcal{S}_{s-1}}\cdots
R^{\vec{y}_{1}}_{\mathcal{S}_{1}}\psi
R^{\vec{y}_{1}\dagger}_{\mathcal{S}_{1}}R^{\vec{y}_{2}\dagger}_{\mathcal{S}_{2}}\cdots
R^{\vec{y}_{s-1}\dagger}_{\mathcal{S}_{s-1}}\right)$
$\displaystyle=\textnormal{Tr}\left(Q^{\vec{y}_{\sigma(s)}}_{\mathcal{S}_{\sigma(s)}}R^{\vec{y}_{\sigma(s-1)}}_{\mathcal{S}_{\sigma(s-1)}}\cdots
R^{\vec{y}_{\sigma(1)}}_{\mathcal{S}_{\sigma(1)}}\psi
R^{\vec{y}_{\sigma(1)}\dagger}_{\mathcal{S}_{\sigma(1)}}R^{\vec{y}_{\sigma(2)}\dagger}_{\mathcal{S}_{\sigma(2)}}\cdots
R^{\vec{y}_{\sigma(s-1)}\dagger}_{\mathcal{S}_{\sigma(s-1)}}\right).$ (10)
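To make the order dependence of Equation (9) concrete, the following minimal
numerical check (ours) uses two non-commuting rank-one projectors and shows
that the two measurement orders yield different sequence probabilities.

```python
# Numerical illustration (ours) of Equation (9): for non-commuting projectors
# A, B the probability Tr(ABA rho) of the sequence (a, b) differs from the
# probability Tr(BAB rho) of the sequence (b, a).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
v = np.array([np.cos(0.3), np.sin(0.3)])
B = np.outer(v, v)                        # projector onto cos|0> + sin|1>
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

print(np.trace(A @ B @ A @ rho))   # measure A first: ~0.456
print(np.trace(B @ A @ B @ rho))   # measure B first: ~0.714
```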
###### Theorem 1 (Marginals imply Sequential Independence).
If a joint distribution $\mathbf{W}$ has Properties 5, 6, and 7, then it also
has Property 8.
###### Proof.
First note that Properties 5, 6, and 7 directly imply Equation (8).
We now prove the statement for two sets of indices and then argue how to
extend it for $s$ sets. In our restricted case, we have that
$\displaystyle\textnormal{Tr}\left(Q^{\vec{y}_{1}}_{\mathcal{S}_{1}}R^{\vec{y}_{2}}_{\mathcal{S}_{2}}\psi
R^{\vec{y}_{2}\dagger}_{\mathcal{S}_{2}}\right)=\textnormal{Tr}\left(\sum_{\vec{y}^{\prime}}Q^{\vec{y}^{\prime}}_{\mathcal{S}_{1}\cup\mathcal{S}_{2}}R^{\vec{y}_{2}}_{\mathcal{S}_{2}}\psi
R^{\vec{y}_{2}\dagger}_{\mathcal{S}_{2}}\right)$ (11)
$\displaystyle=\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{S}_{1}\cup\mathcal{S}_{2}}R^{\vec{y}_{2}}_{\mathcal{S}_{2}}\psi
R^{\vec{y}_{2}\dagger}_{\mathcal{S}_{2}}\right)$ (12)
$\displaystyle=\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{S}_{1}\cup\mathcal{S}_{2}}\psi\right),$
(13)
where the first equality comes from the marginals property, the second
equality comes from the disjointness property if we take $\vec{y}$ that agrees
with $\vec{y}_{2}$ on $\mathcal{S}_{2}$, and the last equality comes from the
reducibility property.
The above derivation can be repeated for any other two sets
$\mathcal{S}^{\prime}_{1}$ and $\mathcal{S}^{\prime}_{2}$ such that
$\mathcal{S}^{\prime}_{1}\cup\mathcal{S}^{\prime}_{2}=\mathcal{S}_{1}\cup\mathcal{S}_{2}$.
Any pair like this will yield
$\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{S}_{1}\cup\mathcal{S}_{2}}\psi\right)$
that equals
$\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{S}^{\prime}_{1}\cup\mathcal{S}^{\prime}_{2}}\psi\right)$
for all other sets $\mathcal{S}^{\prime}_{1}$ and $\mathcal{S}^{\prime}_{2}$,
hence we have proven Property 8 for any two subsets.
In general, we prove a slightly stronger statement than Property 8. We prove
that not only different sequences give the same probabilities but also that
these probabilities equal
$\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{T}}\psi\right)$ for some operator
$Q^{\vec{y}}_{\mathcal{T}}$.
The general case holds by taking
$\rho\in\mathcal{G}_{[N]\setminus\mathcal{S}_{s}}(\mathcal{F})$ instead of
$\psi$ and using it in the above calculation. To prove Property 8 for any $s$
sets we consider $\rho=R^{\vec{y}_{s-1}}_{\mathcal{S}_{s-1}}\cdots
R^{\vec{y}_{1}}_{\mathcal{S}_{1}}\psi
R^{\vec{y}_{1}\dagger}_{\mathcal{S}_{1}}R^{\vec{y}_{2}\dagger}_{\mathcal{S}_{2}}\cdots
R^{\vec{y}_{s-1}\dagger}_{\mathcal{S}_{s-1}}$ and
$\textnormal{Tr}\left(Q^{\vec{y}_{s}}_{\mathcal{S}_{s}}\rho\right)$. We shave
off operators from $\rho$ one by one using Equation (13). After $s$ steps we
have that
$\textnormal{Tr}\left(Q^{\vec{y}_{s}}_{\mathcal{S}_{s}}\rho\right)=\textnormal{Tr}\left(Q^{\vec{x}_{\mathcal{T}}}_{\mathcal{T}}\psi\right)$.
Again, repeating this procedure for a different $\rho^{\prime}$ with sets that
sum to the same $\mathcal{T}$ yields the claimed result. ∎
### 3.2 Joint Distributions and Permutability
In this section, we state the definition of a joint distribution that
describes a sequence of quantum measurements done on states from a small set.
We also define a generalization of commutativity and prove that a joint
distribution exists if and only if the measurement operators are permutable.
###### Definition 2 (Quantum Joint Distribution On State).
A joint distribution of $N$ random variables $X_{1},\dots,X_{N}$ that describe
outcomes of general quantum measurements of states $\psi\in\mathcal{F}$ is
defined as a positive, normalized, and linear functional
$\displaystyle\mathbf{W}_{[N]}:\mathcal{D}(\mathcal{H})\times(\mathcal{X}_{1}\times\mathcal{X}_{2}\times\cdots\times\mathcal{X}_{N})\to[0,1],$
(14)
for which
1. (1)
Quantum Marginals on states in $\mathcal{F}$, Property 5, holds,
2. (2)
Quantum Disjointness on states in $\mathcal{F}$, Property 6, holds,
3. (3)
Quantum Reducibility on states in $\mathcal{F}$, Property 7, holds,
4. (4)
Quantum Sequential Independence on states in $\mathcal{F}$, Property 8, holds.
Next we show the connection between the existence of joint distributions and
requirements on the measurement operators. But first let us define the notion
of on-state permutability.
###### Definition 3 (On-state permutator, (fully) permutable operators).
For any $s$ operators $R_{i}$ and a permutation of the $s$-element set
$\sigma\in\Sigma_{s}$ the permutator on $\psi$ is defined as
$\displaystyle[R_{1},R_{2},\dots,R_{s}]_{\psi}(\sigma):=\textnormal{Tr}\left(R_{s}R_{s-1}\cdots
R_{1}\psi R^{\dagger}_{1}R^{\dagger}_{2}\cdots
R^{\dagger}_{s}\right)-\textnormal{Tr}\left(R_{\sigma(s)}R_{\sigma(s-1)}\cdots
R_{\sigma(1)}\psi R^{\dagger}_{\sigma(1)}R^{\dagger}_{\sigma(2)}\cdots
R^{\dagger}_{\sigma(s)}\right).$ (15)
We say that the operators $R_{1},\dots,R_{s}$ are _permutable_ if
$[R_{1},\dots,R_{s}]_{\psi}(\sigma)=0$ for all $\sigma\in\Sigma_{s}$.
Moreover, we call a set of measurements $\\{\mathcal{M}_{i}\\}_{i}$ _fully
permutable on $\mathcal{F}$_ if all square-root operators $R^{\vec{y}}_{i}$ of
these measurements $\mathcal{M}_{i}$ permute on $\psi\in\mathcal{F}$ for all
$\sigma\in\Sigma_{s}$.
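For concreteness, the permutator of Equation (15) can be transcribed directly
into code; the sketch below (ours) also checks permutability on a state by
enumerating all permutations.

```python
# Direct transcription (ours) of the on-state permutator of Equation (15):
# sigma is a tuple listing the order in which the operators R_i are applied.
import numpy as np
from itertools import permutations

def permutator(Rs, psi, sigma):
    def seq_trace(order):
        state = psi                              # density operator
        for i in order:                          # innermost operator first
            state = Rs[i] @ state @ Rs[i].conj().T
        return np.trace(state).real
    return seq_trace(range(len(Rs))) - seq_trace(sigma)

def permutable_on(Rs, psi, tol=1e-12):
    return all(abs(permutator(Rs, psi, s)) < tol
               for s in permutations(range(len(Rs))))
```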
Now we state the theorem connecting existence of joint distributions with
permutability of the measurement operators. This statement extends Theorem
$3.2$ of [ME84], where they prove that if the joint distribution satisfies the
marginals property (they use the name “nondisturbance”), then the operators
pairwise commute.
###### Theorem 4 (Quantum Joint Distribution and Permutability).
There is a quantum joint distribution on states $\psi\in\mathcal{F}$
describing the outcomes $X_{1},\dots,X_{N}$ of the $N$ measurements if and
only if all square roots $R^{x}_{i}$ of the operators $Q^{x}_{i}$ of the
measurements $\mathcal{M}_{i}$ permute on every $\psi\in\mathcal{F}$ and are
on-state projectors according to Equation (8).
###### Proof.
$(\Rightarrow)$ Permutability follows from Property 8 for $N$ single-element
sets $\mathcal{S}_{i}=\\{i\\}$. Being on-state projectors (Equation (8))
follows from Property 6 and Property 7.
$(\Leftarrow)$ The other direction of the proof follows by setting the
measurement operators to
$Q_{\mathcal{S}}^{\vec{y}}:=R^{y_{t}\dagger}_{\mathcal{S}(t)}R^{y_{t-1}\dagger}_{\mathcal{S}(t-1)}\cdots
R^{y_{1}\dagger}_{\mathcal{S}(1)}R^{y_{1}}_{\mathcal{S}(1)}R^{y_{2}}_{\mathcal{S}(2)}\cdots
R^{y_{t}}_{\mathcal{S}(t)}$, for every set $\mathcal{S}\subseteq[N]$ with
$|\mathcal{S}|=t$. Similarly we define
$R_{\mathcal{S}}^{\vec{y}}:=R^{y_{1}}_{\mathcal{S}(1)}R^{y_{2}}_{\mathcal{S}(2)}\cdots
R^{y_{t}}_{\mathcal{S}(t)}$.
The marginals property follows from the fact that
$\sum_{x_{i}\in\mathcal{X}_{i}}Q_{i}^{x_{i}}=\mathbbm{1}$:
$\displaystyle\sum_{x_{i}\in\mathcal{X}_{i},i\in[N]\setminus\mathcal{S}}\textnormal{Tr}\left(Q^{x_{i_{1}}}_{i_{1}}R^{\vec{y}}_{[N]\setminus\\{i_{1}\\}}\rho
R^{\vec{y}\dagger}_{[N]\setminus\\{i_{1}\\}}\right)$
$\displaystyle=\sum_{x_{i}\in\mathcal{X}_{i},i\in[N]\setminus(\mathcal{S}\cup\\{i_{1}\\})}\textnormal{Tr}\left(Q^{x_{i_{2}}}_{i_{2}}R^{\vec{y}}_{[N]\setminus\\{i_{1},i_{2}\\}}\rho
R^{\vec{y}\dagger}_{[N]\setminus\\{i_{1},i_{2}\\}}\right)=\cdots=\textnormal{Tr}\left(Q^{\vec{y}}_{\mathcal{S}}\rho\right).$
(16)
Properties 6 and 7 are natural consequences of Equation (8). The only
difference between the properties and on-state projections is the set
$\mathcal{G}_{[N]\setminus\mathcal{S}}(\mathcal{F})$ versus just
$\mathcal{F}$. Nonetheless, with our definition of
$R_{\mathcal{S}}^{\vec{y}}$, Equation (8) implies disjointness and
reducibility.
In Theorem 1 we have already proved that marginals together with disjointness
and reducibility imply sequential independence, which concludes our proof. ∎
### 3.3 Pairwise on-state commutation does not imply full permutation
We now investigate whether full permutability is the weakest assumption we can
have for joint distributions to exist.
For the full Hilbert space $\mathcal{F}=\mathcal{D}(\mathcal{H})$, this
question has been considered by a number of works in the literature [Nel67,
Fin73, Fin82, ME84], and it is well-known that it suffices for the measurement
operators to pairwise commute, i.e., pairwise commutation on all possible
quantum states implies permutability of the operators.
Our goal in this section is to consider the case where
$\mathcal{F}\subsetneq\mathcal{D}(\mathcal{H})$. In particular, in [CETU18],
in order to connect perfect quantum indifferentiability to classical
indifferentiability with stateless simulators (roughly, we say that $A$ is
classically (quantumly) indifferentiable from $B$ iff we can map classical
(resp. quantum) attacks on $A$ to classical (resp. quantum) attacks on $B$
using simulators; the simulator is stateless if it does not store any internal
state), they rely on the following conjecture.
###### Conjecture 2 (Conjecture 2 from [CETU18]).
Consider $N$ binary measurements described by projectors $P_{1},\dots,P_{N}$,
and a quantum state $|\Psi\rangle$.
Assume that any $t$ out of the $N$ measurements permute on state
$|\Psi\rangle$. That is, for any $I$ with $|I|=t$, if
$P^{\prime}_{1},\dots,P^{\prime}_{t}$ and
$P^{\prime\prime}_{1},\dots,P^{\prime\prime}_{t}$ are the projectors
$\\{P_{i}\\}_{i\in I}$ (possibly in different order), then
$P^{\prime}_{t}\cdots P^{\prime}_{1}|\Psi\rangle=P^{\prime\prime}_{t}\cdots P^{\prime\prime}_{1}|\Psi\rangle$.
Then there exist random variables $X_{1},\dots,X_{N}$ with a joint
distribution $D$ such that for any $I=\\{i_{1},\dots,i_{t}\\}$ the joint
distribution of $X_{i_{1}},\dots,X_{i_{t}}$ is the distribution of the
outcomes when we measure $|\Psi\rangle$ with measurements
$P_{i_{1}},\dots,P_{i_{t}}$.
Conjecture 2 states that if any $t$ measurement operators permute on state
$|\Psi\rangle$ then there is a joint distribution. From Theorem 4, we know
that if there is a joint distribution then the operators fully permute. Hence,
the key point of the conjecture is that if any $t$ operators permute on a
state, then they fully permute on it. However, we show here that Conjecture 2
is not true in general.
###### Theorem 5.
There is a set of four projectors $\\{P_{1},P_{2},P_{3},P_{4}\\}$ and a state
$|\phi\rangle\in\mathbb{C}^{8}$ such that the projectors are 2-permutable
(they pairwise commute) on state $|\phi\rangle$ and they are _not_
4-permutable on $|\phi\rangle$.
###### Proof.
To prove the statement, we found an example of such operators and a state
numerically by a random constrained search. We consider 4 projectors $P_{i}$
and a state $|\phi\rangle$ of dimension 8. The constraints we impose are
$\displaystyle\forall i,j\neq i\;[P_{i},P_{j}]|\phi\rangle=0,$ (17)
and moreover the operators $P_{i}$ are projectors, $\forall
i\;P_{i}^{2}=P_{i}$, and $|\phi\rangle$ is a unit-norm complex vector.
We look for an example witnessing that 2-permutability (commutativity) does
not imply 4-permutability, i.e., such that
$\displaystyle(P_{1}P_{2}P_{3}P_{4}-P_{3}P_{4}P_{1}P_{2})|\phi\rangle\neq 0.$
(18)
To find such an example we used symbolic-computation software to define the
problem and to maximize
$\|(P_{1}P_{2}P_{3}P_{4}-P_{3}P_{4}P_{1}P_{2})|\phi\rangle\|$ over the
operators $P_{i}$ and the state $|\phi\rangle$.
The result of our optimization can be found in Appendix A. Note that since we
consider vector equalities, instead of just traces as in Theorem 4, our
example provides a stronger argument for the necessity of full permutability.
∎
One can notice by looking at the optimization problem that it is not a
semidefinite program, nor does it have any other structure that is easy to
exploit. For that reason, finding larger instances is computationally very
expensive.
We notice that Theorem 5 actually disproves a slightly stronger version of
Conjecture 2. In the use of Conjecture 2 in [CETU18], they implicitly assume
that we can replace $P_{i}$ by $\mathbbm{1}-P_{i}$ and the permutation still
holds. While this modification gives a slightly stronger assumption, our
counterexample in Theorem 5 works just as well.
For the joint distribution in Conjecture 2 to exist, we know from Theorem 4
that all operators must on-state permute. An important observation is that
Conjecture 2 concerns vector equalities while Theorem 4 concerns measurement
outcomes. The latter condition is “easier” than the former, and our
counterexample works with vector equalities, hence we indeed disprove
Conjecture 2.
In [Ebr18] the author uses a different conjecture and different reasoning to
prove the existence of the joint distribution. Our counterexample does not
disprove this other approach and we refer interested readers to [Ebr18].
## 4 Almost On-State Commutation
In the last part of the paper we discuss almost commutativity in the on-state
case. In particular, we show here that if we have two projectors that almost
commute on a state then we can define a projector that fully commutes with one
of the original operators and is on-state close to the second one.
The main tool that we need to prove this result is Jordan's lemma.
###### Lemma 6 (Jordan’s lemma [Jor75]).
Let $P_{1}$ and $P_{2}$ be two projectors with rank
$r_{i}:=\textnormal{rank}(P_{i})$ for $i\in\\{1,2\\}$. Then both projectors
can be decomposed simultaneously in the form
$P_{i}=\bigoplus_{k=1}^{r_{i}}P_{i}^{k}$, where $P_{i}^{k}$ denote rank-1
projectors acting on one- or two-dimensional subspaces. We denote the
one-dimensional subspaces by $S_{1},\dots,S_{l}$ and the two-dimensional
subspaces by $T_{1},\dots,T_{l^{\prime}}$, respectively. On the
two-dimensional subspaces, the eigenvectors $|v_{k,1}\rangle$ and
$|v_{k,2}\rangle$ of $P_{1}^{k}$ and $P_{2}^{k}$, respectively, are related
by:
$\displaystyle|v_{k,2}\rangle=\cos\theta_{k}|v_{k,1}\rangle+\sin\theta_{k}|v^{\perp}_{k,1}\rangle,\qquad|v_{k,1}\rangle=\cos\theta_{k}|v_{k,2}\rangle-\sin\theta_{k}|v^{\perp}_{k,2}\rangle.$
(19)
We can now prove our result.
###### Theorem 7 (Making almost commuting projectors commute).
Given any two projectors $P_{1}$ and $P_{2}$ and a state $|\psi\rangle$, if
$\left\|(P_{1}P_{2}-P_{2}P_{1})|\psi\rangle\right\|=\epsilon$, then there is a
projector $P_{2}^{\prime}$ with $[P_{1},P_{2}^{\prime}]=0$ that is close to
the original projector on the state:
$\left\|(P_{2}^{\prime}-P_{2})|\psi\rangle\right\|\leq\sqrt{2}\epsilon$.
###### Proof.
By Jordan’s lemma (Lemma 6), there exist bits
$\lambda_{i,1},\lambda_{i,2}\in\\{0,1\\}$ and vectors
$|u_{1}\rangle,...,|u_{m}\rangle$ and
$|v_{1,1}\rangle,|v_{1,2}\rangle,...,|v_{\ell,1}\rangle,|v_{\ell,2}\rangle$,
such that
1. 1.
$P_{1}=\sum_{i\in[m]}\lambda_{i,1}|u_{i}\rangle\langle
u_{i}|+\sum_{i\in[\ell]}|v_{i,1}\rangle\langle v_{i,1}|$ and
$P_{2}=\sum_{i\in[m]}\lambda_{i,2}|u_{i}\rangle\langle
u_{i}|+\sum_{i\in[\ell]}|v_{i,2}\rangle\langle v_{i,2}|$;
2. 2.
$\langle u_{i}|u_{k}\rangle=0$ and $\langle u_{i}|v_{j,b}\rangle=0$ for all
$b,i,j$ and $k\neq i$;
3. 3.
$\langle v_{j,b^{\prime}}|v_{i,b}\rangle=0$ for $i\neq j$ and any
$b,b^{\prime}$;
4. 4.
$0<\langle v_{i,1}|v_{i,2}\rangle<1$.
Let $\theta_{i}$ be the angle between $|v_{i,1}\rangle$ and $|v_{i,2}\rangle$
(i.e. $\cos\theta_{i}=\langle v_{i,1}|v_{i,2}\rangle$), and
$|v_{i,1}^{\perp}\rangle$ be the state orthogonal to $|v_{i,1}\rangle$ in the
subspace spanned by these two vectors. Since the non-commuting part of $P_{1}$
and $P_{2}$ must come from the pairs $|v_{i,1}\rangle,|v_{i,2}\rangle$, we
will define $P_{2}^{\prime}$ by removing the non-commuting part of $P_{2}$,
shifting the vector $|v_{i,2}\rangle$, to either $|v_{i,1}\rangle$ or
$|v_{i,1}^{\perp}\rangle$:
$P_{2}^{\prime}=\sum_{i\in[m]}\lambda_{i,2}|u_{i}\rangle\langle
u_{i}|+\sum_{i\in[\ell]:\theta_{i}\leq\frac{\pi}{4}}|v_{i,1}\rangle\langle
v_{i,1}|+\sum_{i\in[\ell]:\theta_{i}>\frac{\pi}{4}}|v_{i,1}^{\perp}\rangle\langle
v_{i,1}^{\perp}|.$
We have clearly that $[P_{1},P_{2}^{\prime}]=0$ since the two projectors are
simultaneously diagonalizable and we now want to prove that
$\displaystyle\left\|(P_{2}^{\prime}-P_{2})|\psi\rangle\right\|\leq\sqrt{2}\varepsilon.$
Notice that
$\displaystyle\left\|(P_{2}^{\prime}-P_{2})|\psi\rangle\right\|^{2}$
$\displaystyle=\left\|\sum_{i\in[l^{\prime}]:\theta_{i}\leq\frac{\pi}{4}}\left(|v_{i,1}\rangle\langle
v_{i,1}|\psi\rangle-|v_{i,2}\rangle\langle
v_{i,2}|\psi\rangle\right)+\sum_{i\in[l^{\prime}]:\theta_{i}>\frac{\pi}{4}}\left(|v_{i,1}^{\perp}\rangle\langle
v_{i,1}^{\perp}|\psi\rangle-|v_{i,2}\rangle\langle
v_{i,2}|\psi\rangle\right)\right\|^{2}$ (20)
$\displaystyle=\sum_{i\in[l^{\prime}]:\theta_{i}\leq\frac{\pi}{4}}\left\||v_{i,1}\rangle\langle
v_{i,1}|\psi\rangle-|v_{i,2}\rangle\langle
v_{i,2}|\psi\rangle\right\|^{2}+\sum_{i\in[l^{\prime}]:\theta_{i}>\frac{\pi}{4}}\left\||v_{i,1}^{\perp}\rangle\langle
v_{i,1}^{\perp}|\psi\rangle-|v_{i,2}\rangle\langle
v_{i,2}|\psi\rangle\right\|^{2},$ (21)
where in the last step we used that $\langle
v_{i,b^{\prime}}|v_{j,b}\rangle=0$ for $i\neq j$.
Using that
$|v_{i,2}\rangle=\cos{\theta_{i}}|v_{i,1}\rangle+\sin{\theta_{i}}|v_{i,1}^{\perp}\rangle$,
we have that if $\theta_{i}\leq\frac{\pi}{4}$, then
$\displaystyle\left\|\langle v_{i,1}|\psi\rangle|v_{i,1}\rangle-\langle
v_{i,2}|\psi\rangle|v_{i,2}\rangle\right\|^{2}$
$\displaystyle=\sin^{4}\theta_{i}|\langle
v_{i,1}|\psi\rangle|^{2}-2\sin^{3}\theta_{i}\cos\theta_{i}\mathfrak{Re}(\langle
v_{i,1}^{\perp}|\psi\rangle\langle
v_{i,1}|\psi\rangle)+\sin^{2}\theta_{i}\cos^{2}\theta_{i}|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}$ $\displaystyle+\sin^{4}\theta_{i}|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}+2\sin^{3}\theta_{i}\cos\theta_{i}\mathfrak{Re}(\langle
v_{i,1}|\psi\rangle\langle
v_{i,1}^{\perp}|\psi\rangle)+\sin^{2}\theta_{i}\cos^{2}\theta_{i}|\langle
v_{i,1}|\psi\rangle|^{2}$ $\displaystyle\leq
2\sin^{2}\theta_{i}\cos^{2}\theta_{i}(|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}+|\langle v_{i,1}|\psi\rangle|^{2}),$ (22)
where in the inequality we used our assumption that
$\theta_{i}\leq\frac{\pi}{4}$ which implies that
$\sin\theta_{i}\leq\cos\theta_{i}$.
Using similar calculations, we have that if $\theta_{i}\geq\frac{\pi}{4}$
$\displaystyle\left\|\langle
v_{i,1}^{\perp}|\psi\rangle|v_{i,1}^{\perp}\rangle-\langle
v_{i,2}|\psi\rangle|v_{i,2}\rangle\right\|^{2}\leq
2\sin^{2}\theta_{i}\cos^{2}\theta_{i}(|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}+|\langle v_{i,1}|\psi\rangle|^{2}).$ (23)
We will show now that
$\sum_{i}\sin^{2}\theta_{i}\cos^{2}\theta_{i}(|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}+|\langle
v_{i,1}|\psi\rangle|^{2})=\varepsilon^{2},$
which finishes the proof:
$\displaystyle\varepsilon^{2}$
$\displaystyle=\left\|(P_{2}P_{1}-P_{1}P_{2})|\psi\rangle\right\|^{2}$
$\displaystyle=\left\|\sum_{i\in[l^{\prime}]}|v_{i,1}\rangle\langle
v_{i,1}|v_{i,2}\rangle\langle v_{i,2}|\psi\rangle-|v_{i,2}\rangle\langle
v_{i,2}|v_{i,1}\rangle\langle v_{i,1}|\psi\rangle\right\|^{2}$
$\displaystyle=\left\|\sum_{i\in[l^{\prime}]}\cos{\theta_{i}}\left(\left(\cos{\theta_{i}}\langle
v_{i,1}|\psi\rangle+\sin{\theta_{i}}\langle
v_{i,1}^{\perp}|\psi\rangle\right)|v_{i,1}\rangle-\langle
v_{i,1}|\psi\rangle\left(\cos{\theta_{i}}|v_{i,1}\rangle+\sin{\theta_{i}}|v_{i,1}^{\perp}\rangle\right)\right)\right\|^{2}$
$\displaystyle=\left\|\sum_{i\in[l^{\prime}]}\sin{\theta_{i}}\cos{\theta_{i}}\left(\langle
v_{i,1}^{\perp}|\psi\rangle|v_{i,1}\rangle-\langle
v_{i,1}|\psi\rangle|v_{i,1}^{\perp}\rangle\right)\right\|^{2}$
$\displaystyle=\sum_{i\in[l^{\prime}]}\sin^{2}\theta_{i}\cos^{2}\theta_{i}\left(|\langle
v_{i,1}^{\perp}|\psi\rangle|^{2}+|\langle v_{i,1}|\psi\rangle|^{2}\right).$
where in the second equality we again use that
$|v_{i,2}\rangle=\cos{\theta_{i}}|v_{i,1}\rangle+\sin{\theta_{i}}|v_{i,1}^{\perp}\rangle$
and in the fourth equality we use the fact that $\langle
v_{i,b^{\prime}}|v_{j,b}\rangle=0$ for $i\neq j$. ∎
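The construction in the proof can also be reproduced numerically. The sketch
below (ours, with hypothetical helper names) recovers the two-dimensional
Jordan blocks from the spectrum of $P_{1}P_{2}P_{1}$, builds $P_{2}^{\prime}$
as in the proof, and checks the claimed bound on a random instance; it assumes
generic projectors, i.e., nondegenerate Jordan angles.

```python
# Numerical sketch (ours) of the proof of Theorem 7: build P2' from the
# Jordan blocks of (P1, P2) and verify ||(P2' - P2)|psi>|| <= sqrt(2) eps.
import numpy as np

def rand_projector(n, r, rng):
    q, _ = np.linalg.qr(rng.standard_normal((n, r)))
    return q @ q.T                                # rank-r orthogonal projector

def commuting_approximation(P1, P2, tol=1e-9):
    # Eigenvectors of P1 P2 P1 with eigenvalue cos^2(theta) in (0,1) are the
    # |v_{i,1}> of the two-dimensional Jordan blocks.
    vals, vecs = np.linalg.eigh(P1 @ P2 @ P1)
    P2p = P2.copy()
    for c, v1 in zip(vals, vecs.T):
        if tol < c < 1 - tol:
            v2 = P2 @ v1
            v2 /= np.linalg.norm(v2)              # |v_{i,2}>
            theta = np.arccos(np.clip(np.sqrt(c), 0.0, 1.0))
            v1p = (v2 - np.cos(theta) * v1) / np.sin(theta)   # |v_{i,1}^perp>
            target = v1 if theta <= np.pi / 4 else v1p
            P2p += np.outer(target, target) - np.outer(v2, v2)
    return P2p

rng = np.random.default_rng(1)
n = 6
P1, P2 = rand_projector(n, 3, rng), rand_projector(n, 2, rng)
psi = rng.standard_normal(n); psi /= np.linalg.norm(psi)
eps = np.linalg.norm((P1 @ P2 - P2 @ P1) @ psi)
P2p = commuting_approximation(P1, P2)
print(np.linalg.norm(P1 @ P2p - P2p @ P1))              # ~0: full commutation
print(np.linalg.norm((P2p - P2) @ psi), np.sqrt(2) * eps)   # bound holds
```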
Our proof relies solely on Jordan's lemma. Note that Jordan's lemma suffices
only because we analyze commutation of projectors. Results that show how to
make arbitrary Hermitian matrices commute [FR96, Has09] are much more
complicated to prove, and it is not clear how to translate them to the
“on-state” case.
We stress that our proof only works for two projectors, since Jordan's lemma
does not generalize to three or more projectors. Therefore, we leave as an
open problem (dis)proving a generalized version of Theorem 7 for more
projectors.
In [Ebr18] the author proves Theorem 7 for $\varepsilon=0$, but with a
different proof, using Halmos' two projections theorem instead of Jordan's
lemma.
## References
* [BN18] Andreas Bluhm and Ion Nechita. Joint measurability of quantum effects and the matrix diamond. Journal of Mathematical Physics, 59(11):112202, 2018.
* [CETU18] Tore Vincent Carstens, Ehsan Ebrahimi, Gelo Noel Tabia, and Dominique Unruh. On quantum indifferentiability. Cryptology ePrint Archive, Report 2018/257, 2018. https://eprint.iacr.org/2018/257.
* [CZ83] Gianni Cassinelli and N. Zanghi. Conditional probabilities in quantum mechanics. I. Conditioning with respect to a single event. Il Nuovo Cimento B (1971-1996), 73(2):237–245, 1983.
* [Cza21] Jan Czajkowski. Github repository “joints-counterexample”, 2021.
* [Ebr18] Ehsan Ebrahimi. Post-quantum security in the presence of superposition queries. 2018.
* [Fin73] Arthur Fine. Probability and the interpretation of quantum mechanics. The British Journal for the Philosophy of Science, 24(1):1–37, 1973.
* [Fin82] Arthur Fine. Joint distributions, quantum correlations, and commuting observables. Journal of Mathematical Physics, 23(7):1306–1310, 1982.
* [FR96] Peter Friis and Mikael Rørdam. Almost commuting self-adjoint matrices: a short proof of Huaxin Lin's theorem. Journal für die reine und angewandte Mathematik, 479:121–132, 1996.
* [GN01] Stan Gudder and Gabriel Nagy. Sequential quantum measurements. Journal of Mathematical Physics, 42(11):5212–5222, 2001.
* [GN02] Stan Gudder and Gabriel Nagy. Sequentially independent effects. Proceedings of the American Mathematical Society, 130(4):1125–1130, 2002.
* [Has09] Matthew B. Hastings. Making almost commuting matrices commute. Communications in Mathematical Physics, 291(2):321–345, 2009.
* [HMZ16] Teiko Heinosaari, Takayuki Miyadera, and Mário Ziman. An invitation to quantum incompatibility. Journal of Physics A: Mathematical and Theoretical, 49(12):123001, 2016.
* [Jor75] Camille Jordan. Essai sur la géométrie à $n$ dimensions. Bulletin de la Société mathématique de France, 3:103–174, 1875.
* [Lin97] Huaxin Lin. Almost commuting selfadjoint matrices and applications. Fields Inst. Commun, 13:193–233, 1997.
* [Maa06] Hans Maassen. Quantum Probability and Quantum Information Theory. https://www.math.ru.nl/~maassen/lectures/Trieste.pdf, 2006.
* [Maa10] Hans Maassen. Quantum probability and quantum information theory. In Quantum information, computation and cryptography, pages 65–108. Springer, 2010.
* [ME84] W.M. de Muynck and J.P.H.W. van den Eijnde. A derivation of local commutativity from macrocausality using a quantum mechanical theory of measurement. Foundations of Physics, 14(2):111–146, 1984.
* [NC11] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 10th edition, 2011.
* [Nel67] Edward Nelson. Dynamical theories of Brownian motion, volume 3. Princeton University Press, 1967.
## Appendix A Numerical values
Below we present the state and the projectors that are claimed in the proof of
Theorem 5. The script used to generate these values can be found in [Cza21].
Before we write out the state and the projectors that we found, let us state
the violation of the permutator that we achieve:
$\displaystyle\|(P_{1}P_{2}P_{3}P_{4}-P_{3}P_{4}P_{1}P_{2})|\phi\rangle\|=0.25\pm
3\cdot 10^{-8}.$ (24)
All the constraints listed in the proof of Theorem 5 are fulfilled up to the
seventh decimal digit of precision, so up to $10^{-7}$. Internal computations
of the algorithm are performed with machine precision of $10^{-15}$.
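The following sketch (ours, not the script of [Cza21]) summarizes the checks
behind these claims; the names `Ps` and `phi` are assumptions standing for the
numerical values printed below.

```python
# Sketch (ours) of the validation of the counterexample; Ps = [P1, P2, P3, P4]
# and phi are assumed to hold the numerical values printed in this appendix.
import numpy as np
from itertools import combinations

def check_counterexample(Ps, phi, tol=1e-6):
    assert np.isclose(np.linalg.norm(phi), 1.0, atol=tol)    # unit-norm state
    for P in Ps:
        assert np.allclose(P, P.conj().T, atol=tol)          # Hermitian
        assert np.allclose(P @ P, P, atol=tol)               # projector
    for P, Q in combinations(Ps, 2):                         # Equation (17)
        assert np.linalg.norm((P @ Q - Q @ P) @ phi) < tol
    P1, P2, P3, P4 = Ps
    # the 4-permutator violation of Equation (18); ~0.25 for the values below
    return np.linalg.norm((P1 @ P2 @ P3 @ P4 - P3 @ P4 @ P1 @ P2) @ phi)
```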
The state is
$\displaystyle|\phi\rangle:=\left(\begin{array}[]{c}-0.135381-0.0503468\text{i}\\\
0.325588\,-0.222403\text{i}\\\ -0.209447-0.0404665\text{i}\\\
-0.418336+0.130098\text{i}\\\ -0.503693-0.299414\text{i}\\\
0.379842\,+0.205081\text{i}\\\ -0.179291-0.0381456\text{i}\\\
0.0840381\,-0.125995\text{i}\end{array}\right).$ (33)
We define the projectors by their eigenvectors:
$\displaystyle P_{1}=|\pi^{1}\rangle\langle\pi^{1}|,$ $\displaystyle
P_{2}=|\pi^{2}_{1}\rangle\langle\pi^{2}_{1}|+|\pi^{2}_{2}\rangle\langle\pi^{2}_{2}|,$
(34) $\displaystyle
P_{3}=|\pi^{3}_{1}\rangle\langle\pi^{3}_{1}|+|\pi^{3}_{2}\rangle\langle\pi^{3}_{2}|+|\pi^{3}_{3}\rangle\langle\pi^{3}_{3}|,$
$\displaystyle
P_{4}=|\pi^{4}_{1}\rangle\langle\pi^{4}_{1}|+|\pi^{4}_{2}\rangle\langle\pi^{4}_{2}|.$
(35)
The eigenvector of $P_{1}$ is:
$\displaystyle|\pi^{1}\rangle:=\left(\begin{array}[]{c}0.440777\,+0.168408\text{i}\\\
0.208781\,-0.37351\text{i}\\\ 0.247514\,+0.0276065\text{i}\\\
-0.297971+0.0252308\text{i}\\\ 0.118798\,+0.112225\text{i}\\\
-0.293428+0.270889\text{i}\\\ -0.193073+0.218869\text{i}\\\
-0.41405\end{array}\right).$ (44)
The eigenvectors of $P_{2}$ are:
$\displaystyle|\pi^{2}_{1}\rangle:=\left(\begin{array}[]{c}-0.497016-0.094035\text{i}\\\
0.417527\,-0.0737062\text{i}\\\ -0.000125303+0.35123\text{i}\\\
0.166569\,-0.187245\text{i}\\\ -0.373202+0.205633\text{i}\\\
0.318452\,-0.251475\text{i}\\\ -0.107473-0.123987\text{i}\\\
-0.0711523\end{array}\right),|\pi^{2}_{2}\rangle:=\left(\begin{array}[]{c}0.365906\,+0.0620997\text{i}\\\
0.418728\,-0.2059\text{i}\\\ 0.229457\,+0.0557421\text{i}\\\
-0.140393+0.0945029\text{i}\\\ -0.199205-0.188139\text{i}\\\
0.103617\,+0.279644\text{i}\\\ -0.546498+0.147197\text{i}\\\
0.275295\end{array}\right).$ (61)
The eigenvectors of $P_{3}$ are:
$\displaystyle|\pi^{3}_{1}\rangle:=\left(\begin{array}[]{c}-0.453059+0.181543\text{i}\\\
-0.452841+0.0154095\text{i}\\\ -0.17948-0.222827\text{i}\\\
-0.230355-0.0526756\text{i}\\\ -0.0918752-0.250754\text{i}\\\
0.242416\,-0.126917\text{i}\\\ 0.300832\,-0.287566\text{i}\\\
0.315259\end{array}\right),|\pi^{3}_{2}\rangle:=\left(\begin{array}[]{c}-0.0586669-0.269559\text{i}\\\
-0.280155+0.373271\text{i}\\\ -0.150758-0.158539\text{i}\\\
0.158793\,-0.0454731\text{i}\\\ 0.165888\,+0.362832\text{i}\\\
-0.110453-0.310755\text{i}\\\ 0.353894\,-0.00811586\text{i}\\\
-0.487537\end{array}\right),$ (78)
$\displaystyle|\pi^{3}_{3}\rangle:=\left(\begin{array}[]{c}-0.182739-0.114718\text{i}\\\
0.246775\,-0.134678\text{i}\\\ -0.513357-0.193655\text{i}\\\
-0.10451+0.421294\text{i}\\\ 0.111183\,+0.122625\text{i}\\\
-0.200917-0.25897\text{i}\\\ -0.0290851+0.398494\text{i}\\\
0.30081\end{array}\right).$ (87)
The eigenvectors of $P_{4}$ are:
$\displaystyle|\pi^{4}_{1}\rangle:=\left(\begin{array}[]{c}-0.464187+0.213035\text{i}\\\
-0.364421+0.119836\text{i}\\\ -0.324984-0.23097\text{i}\\\
-0.256841+0.0478513\text{i}\\\ -0.0700499-0.192822\text{i}\\\
0.146148\,-0.225755\text{i}\\\ 0.243944\,-0.284786\text{i}\\\
0.331272\end{array}\right),|\pi^{4}_{2}\rangle:=\left(\begin{array}[]{c}0.111757\,+0.151275\text{i}\\\
0.236223\,-0.323279\text{i}\\\ 0.157312\,-0.115385\text{i}\\\
-0.30864+0.0990552\text{i}\\\ -0.260931-0.236239\text{i}\\\
0.240497\,+0.13559\text{i}\\\ -0.453404+0.12357\text{i}\\\
0.490125\end{array}\right).$ (104)
# Extremal Laws for Laplacian Random Matrices ††thanks: Research supported by
Conacyt Grant A1-S-976
Santiago Arenas-Velilla, CIMAT, Guanajuato, Mexico, <EMAIL_ADDRESS>;
Victor Pérez-Abreu, Politécnico Nacional, Mexico, <EMAIL_ADDRESS>
###### Abstract
For an $n\times n$ Laplacian random matrix $L$ with Gaussian entries it is
proven that the fluctuations of the largest eigenvalue and the largest
diagonal entry of $L/\sqrt{n-1}$ are Gumbel. We first establish suitable non-
asymptotic estimates and bounds for the largest eigenvalue of $L$ in terms of
the largest diagonal element of $L$. An expository review of existing results
for the asymptotic spectrum of a Laplacian random matrix is also presented,
with the goal of noting the differences from the corresponding classical
results for Wigner random matrices. Extensions to Laplacian block random
matrices are indicated.
## 1 Introduction
The largest eigenvalue of a random Laplacian matrix plays an important role in
applications to complex graphs [11], the analysis of algorithms for
$\mathbb{Z}_{2}$ synchronization, and community detection in the Stochastic
Block Model [1, 13], among other optimization problems in semidefinite
programming [4]. Its limiting behavior both almost surely and in probability
has been considered by Bryc, Dembo and Jiang [8] and Ding and Jiang [12]. The
present paper considers the limiting law of the largest eigenvalue of a random
Laplacian matrix.
The famous Tracy–Widom distribution appears in random matrix theory as the
limiting law of the largest eigenvalue for a large class of random matrices,
including classical Gaussian [25], [26], Wigner [24], Wishart [15], and sparse
matrices [17].
However, there are examples of structured random matrices constructed from
fewer independent random variables than are used to build Wigner matrices, and
for which the limiting law of the largest eigenvalue is Gumbel.
by Bryc and Sethuraman [9], and for Gaussian Hermitian circulant matrices, as
shown by Meckes [19]. The Gumbel distribution is also the limiting law of the
spectral radius of Ginibre ensembles [23].
One of the purposes of the present paper is to show that the Gumbel
distribution is also the limiting law of the largest eigenvalue of a Laplacian
matrix constructed from independent real Gaussian random variables. This kind
of random matrix appears in the $\mathbb{Z}_{2}$ synchronization problem of
recovering binary labels under Gaussian noise [4].
More specifically, let $X=(X_{ij})$ be an $n\times n$ symmetric matrix. The
Laplacian matrix $L=L_{X}=(L_{ij})$ of $X$ is defined by
$L=D_{X}-X$ (1.1)
where $D_{X}$ is the diagonal matrix whose entries are given by
$(D_{X})_{ii}=\sum_{j=1}^{n}X_{ij}.$ (1.2)
Matrices whose row sums are zero are often called Markov matrices, so a
Laplacian matrix is an example of a Markov matrix.
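To make (1.1)-(1.2) concrete, the following minimal sketch (ours, not from the paper; the size $n$ and seed are arbitrary) builds the Laplacian of a symmetric Gaussian matrix and checks the zero-row-sum property.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Symmetric Gaussian matrix: off-diagonal entries of variance one and,
# matching the Wigner definition below, diagonal entries of variance two.
G = rng.standard_normal((n, n))
X = (G + G.T) / np.sqrt(2)
np.fill_diagonal(X, np.sqrt(2) * rng.standard_normal(n))

# Laplacian L = D_X - X with (D_X)_{ii} = sum_j X_{ij}, cf. (1.1)-(1.2).
L = np.diag(X.sum(axis=1)) - X

# Every row of L sums to zero, i.e. L is a Markov matrix.
assert np.allclose(L.sum(axis=1), 0.0)
```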
Let $X=(X_{ij})$ be a Wigner random matrix, that is, an $n\times n$ symmetric
random matrix whose off-diagonal entries $\\{X_{ij},1\leq i<j\leq n\\}$ are
real-valued, independent and identically distributed random variables with
zero mean and unit variance, and the diagonal entries $\\{X_{ii},1\leq i\leq
n\\}$ are independent random variables with zero mean and variance two that
are independent of the off-diagonal entries.
We observe that for a Wigner matrix $X$, the corresponding Laplacian matrix
$L$ has $n(n-1)/2$ independent random variables and its diagonal entries are
not independent of the non-diagonal entries, while $X$ has $n(n+1)/2$
independent random variables.
Let $\lambda_{\max}(X)$ denote the largest eigenvalue of $X$ and
$\lambda_{\max}(L)$ denote that of $L$. It is well known that under suitable
moment conditions, the limiting spectral distribution of $X/\sqrt{n}$ is the
semicircle law in $[-2,2]$, that $\lambda_{\max}(X)/\sqrt{n}$ converges to $2$
almost surely as $n$ goes to infinity, and that the limiting law of
$\lambda_{\max}(X)$, after appropriate rescaling, obeys the Tracy–Widom
distribution.
In contrast, for the Laplacian matrix $L$, under suitable conditions on the
entries of $X$, the limiting spectral law of $L/\sqrt{n}$ is a distribution
with unbounded support which is the free convolution of the semicircular law
with the standard Gaussian distribution [8], [12]. Moreover,
$\lambda_{\max}(L)/\sqrt{n\log n}$ converges to $\sqrt{2}$ in probability as
$n$ goes to infinity [12]. In Section 2 we provide a brief summary of the
above results for Laplacian random matrices, and also indicate some extensions
to block matrices.
Our main result is the weak convergence of $\lambda_{\max}(L)/\sqrt{n-1}$ with
an appropriate rescaling to the Gumbel distribution, as presented in Theorem
4. In order to prove this result, we first show in Section 3 that the limiting law of the largest entry $\max_{i}L_{ii}/\sqrt{n-1}$ of the triangular array $\\{L_{ii}=L_{ii}^{(n)}:i=1,\ldots,n\\}$ is a Gumbel distribution; to do so, we use the form of the non-stationary covariance matrix of $\\{L_{ii}\\}$ to produce a useful approximating sequence of independent Gaussian random variables. Then we present in Section 4 suitable non-
asymptotic estimates and bounds for the largest eigenvalue of $L$ in terms of
the largest diagonal element of $L$ for a class of Laplacian matrices, namely,
those such that $\max_{i}L_{ii}/\sqrt{n-1}$ grows faster than $\sqrt{\log n}$.
Finally, we prove in Section 5 that, after appropriate rescaling,
$\max_{i}L_{ii}/\sqrt{n-1}$ and $\lambda_{\max}(L)/\sqrt{n-1}$ have the same
limiting law under Gaussian assumptions.
## 2 On the asymptotic spectrum of the Laplacian
In this section we present a summary of two kinds of known results for the
asymptotic spectrum of a random Laplacian matrix, namely, the weak convergence
of the empirical spectral distribution of the Laplacian matrix and the almost
sure behavior of its largest eigenvalue. We remark on the differences from the
corresponding results for Wigner matrices and also indicate some new useful
extensions to block matrices.
### 2.1 Weak convergence of the empirical spectral distribution
As is by now well known, the empirical spectral distribution plays an
important role in the theory of large dimensional random matrices. For a
symmetric $n\times n$ matrix $A_{n}$, let
$\lambda_{1}(A_{n})\geq\cdot\cdot\cdot\geq\lambda_{n}(A_{n})$ denote the
eigenvalues of the matrix $A_{n}$ and let
$\tilde{F}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\left\\{\lambda_{i}(A_{n})\leq
x\right\\},x\in\mathbb{R}$ be the empirical distribution of its eigenvalues.
When $A_{n}$ is a random matrix, $\tilde{F}_{n}$ is a random distribution
function in $\mathbb{R}.$
In his pioneering work, Wigner [27] proved that for a Wigner matrix $X$, the empirical spectral distribution of $\frac{1}{\sqrt{n}}X$ converges weakly, with probability one, to the semicircle law $S$ on $[-2,2]$,
$S(dx)=\frac{1}{2\pi}\sqrt{4-x^{2}}\mathbb{I}_{|x|\leq 2}dx.$
In the case when $X_{n}$ is a Wigner matrix with Gaussian entries and the
sequence $(X_{n})_{n\geq 1}$ is independent of a sequence of diagonal random
matrices $(\widetilde{D}_{n})_{n\geq 1}$ also with independent Gaussian
entries of zero mean, Pastur and Vasilchuk [21] proved that with probability
one the empirical spectral distribution of
$\frac{1}{\sqrt{n}}(\widetilde{D}_{n}-X_{n})$ converges weakly to a non-random
distribution $\gamma_{M}$, which is the free convolution $S\boxplus\gamma$ of
the semicircle law $S$ in $(-2,2)$ with the standard Gaussian distribution
$\gamma$.
This result is a consequence of a pioneering result by Voiculescu, who
established that, under suitable conditions, if $A=(A_{n})_{n\geq 1}$ and
$B=(B_{n})_{n\geq 1}$ are independent sequences of $n\times n$ random matrices
with empirical spectral distributions $F_{A}$ and $F_{B}$, then $A$ and $B$
become asymptotically free and the asymptotic spectral distribution of the sum
$(A_{n}+B_{n})_{n\geq 1}$ is the free convolution $F_{A}\boxplus F_{B}.$ This
remarkable fact shows the usefulness of free probability in large dimensional random matrices: even knowing the eigenvalues of $A_{n}$ and $B_{n}$, it is not easy to find the eigenvalues of the sum $A_{n}+B_{n}$, unless $A_{n}$ and $B_{n}$ commute. We refer to the books [2] and [20] for the
definition and properties of the free convolution $\boxplus$ and the
asymptotic freeness of random matrices, and to [7] for a study of
distributions that are free convolutions with the semicircle distribution.
A natural question is whether a corresponding spectral asymptotic result holds for the Laplacian matrix $L_{n}$ given by (1.1). The result in [21] suggested that the same limit should hold for $L_{n}$, but the problem is nontrivial because the diagonal $D_{n}$ of $L_{n}$ strongly depends on $X_{n}$: its entries are not independent of the off-diagonal entries of $X_{n}$. The result was proved by Bryc, Dembo and Jiang [8] in the context of Markov random matrices, when the entries $X_{ij}$ are independent and identically distributed random variables. We summarize some of their results in what follows.
###### Theorem 1.
Let $L_{n}=L_{X_{n}}$ be the Laplacian matrix of $X_{n}=(X_{ij}^{(n)})$ where
$\left\\{X_{ij}^{(n)}:1\leq i<j\leq n\right\\}_{n\geq 1}$ is a collection of
independent identically distributed random variables of mean $m$ and finite
variance $\sigma^{2}$. Let $\tilde{F}_{n}$ be the empirical spectral
distribution of $\frac{1}{\sqrt{n}}L_{n}.$
i) If $m=0$ and $\sigma^{2}=1$, then, with probability one, as
$n\rightarrow\infty$, $\tilde{F}_{n}$ converges weakly to the free convolution
$\gamma_{M}=S\boxplus\gamma$ of the semicircle law $S$ in $[-2,2]$ with the
standard Gaussian distribution $\gamma.$
ii) The measure $\gamma_{M}$ is a nonrandom symmetric probability measure with
unbounded support which does not depend on the distribution of $X_{ij}^{(n)}$.
iii) If $m$ is not zero and $\sigma^{2}<\infty$, then $\tilde{F}_{n}$
converges weakly to the degenerate distribution $\delta_{-m}$ as
$n\rightarrow\infty$.
The above result was extended by Ding and Jiang [12] to the case when the
random variables $X_{ij}^{(n)}$ are independent but not necessarily
identically distributed. More specifically, they make the following
assumption.
###### Condition 1.
Let $X_{n}=(X_{ij}^{(n)})$ be an $n\times n$ symmetric matrix with
$\\{X_{ij}^{(n)};1\leq i<j\leq n,n\geq 2\\}$ random variables defined on the
same probability space and $\\{X_{ij}^{(n)};1\leq i<j\leq n\\}$ independent
for each $n\geq 2$ (not necessarily identically distributed) with
$X_{ij}^{(n)}=X_{ji}^{(n)}$, $\mathbb{E}(X_{ij}^{(n)})=\mu_{n}$,
$\text{Var}(X_{ij}^{(n)})=\sigma_{n}^{2}>0$ for all $1\leq i<j\leq n$, $n\geq 2$, and
$\sup_{1\leq i<j\leq n,n\geq
2}\mathbb{E}|(X_{ij}^{(n)}-\mu_{n})/\sigma_{n}|^{p}<\infty$
for some $p>0$.
Assuming this condition for some $p>4,$ it is proved in [12] that if
$\tilde{F}_{n}(x)=\frac{1}{n}\sum_{i=1}^{n}\mathbb{I}\left\\{\frac{\lambda_{i}(L_{n})-n\mu_{n}}{\sqrt{n}\sigma_{n}}\leq
x\right\\},\qquad x\in\mathbb{R},$
then, as $n\rightarrow\infty$, with probability one, $\tilde{F}_{n}$ converges weakly to the free convolution
$\gamma_{M}=S\boxplus\gamma$ of the semicircle law $S$ in $[-2,2]$ with the
standard Gaussian distribution $\gamma$.
### 2.2 On the distribution of $\gamma_{M}=S\boxplus\gamma$
Several other properties of the limiting spectral distribution
$\gamma_{M}=S\boxplus\gamma$ are known. In addition to being symmetric and
with unbounded support, it is also shown in [8] that this distribution has a
smooth bounded density $f_{\gamma_{M}}$ and is determined by its moments. Some
of these properties are based on [7], where the author considers distributions
that are free convolutions with the semicircle distribution. It is also proved
in [8] that $\gamma_{M}$ has even moments given by the combinatorial formula
$m_{2k}=\sum_{\omega:\left|\omega\right|=2k}2^{h(\omega)}$
where $h(\omega)$ is the so-called height function that assigns to a pair
partition $\omega$ the number of connected blocks which are of cardinality 2.
However, an explicit expression for $\gamma_{M}$ is not known. One can easily simulate this distribution using large dimensional random matrices and predict the form of the empirical spectral distribution. Figure 1 shows a representative outcome of a large simulation study we have carried out, which suggests that the density $f_{\gamma_{M}}$ can be statistically approximated by a mixture of a Gaussian distribution and a semicircle distribution, although, in this approximation, the derivative of the density has two singularities.
More specifically, if a Laplacian matrix $L$ is constructed from independent
identically distributed random variables $\\{X_{ij}^{(n)};1\leq i<j\leq
n,n\geq 2\\},$ with zero mean and variance $\sigma^{2},$ a good approximation
$\widehat{f}_{\gamma_{M}}$ is
$\displaystyle\widehat{f}_{\gamma_{M}}(x)$
$\displaystyle=\frac{\alpha}{2\sigma\sqrt{2}}f_{S}\left(\frac{x}{\sigma\sqrt{2}}\right)+\frac{1-\alpha}{2\sigma\sqrt{2}}f_{\gamma}\left(\frac{x}{\sigma\sqrt{2}}\right)$
(2.1)
$\displaystyle=\frac{\alpha}{2\sigma\sqrt{2}}f_{S_{\sigma}}\left(x\right)+\frac{1-\alpha}{2\sigma\sqrt{2}}f_{\gamma_{\sigma}}\left(x\right),\text{
\ \ }x\in\mathbb{R},$ (2.2)
where $S_{\sigma}$ is the semicircle distribution on
$[-\sqrt{2}\sigma,\sqrt{2}\sigma]$, $\gamma_{\sigma}$ is the Gaussian
distribution of zero mean and variance $2\sigma^{2}$, and $\alpha=\sqrt{2}/2$
is the weight of the mixture.
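The following simulation sketch (our illustration, not the paper's code; the overlay follows one reading of (2.2) with $\sigma=1$ and $\alpha=\sqrt{2}/2$) histograms the empirical spectrum of $\frac{1}{\sqrt{n}}L$ for comparison with the mixture of the semicircle density on $[-\sqrt{2}\sigma,\sqrt{2}\sigma]$ and the Gaussian density of variance $2\sigma^{2}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, alpha = 3000, 1.0, np.sqrt(2) / 2

# Laplacian of a symmetric matrix with centered off-diagonal variance sigma^2.
G = rng.standard_normal((n, n))
X = sigma * (G + G.T) / np.sqrt(2)
L = np.diag(X.sum(axis=1)) - X
eigs = np.linalg.eigvalsh(L) / np.sqrt(n)      # empirical spectrum of L / sqrt(n)

# Mixture: alpha * semicircle on [-sqrt(2) sigma, sqrt(2) sigma]
#        + (1 - alpha) * Gaussian with mean 0 and variance 2 sigma^2.
x = np.linspace(-4, 4, 401)
R = np.sqrt(2) * sigma
f_semi = 2 * np.sqrt(np.clip(R**2 - x**2, 0.0, None)) / (np.pi * R**2)
f_gauss = np.exp(-x**2 / (4 * sigma**2)) / np.sqrt(4 * np.pi * sigma**2)
mix = alpha * f_semi + (1 - alpha) * f_gauss

hist, edges = np.histogram(eigs, bins=80, density=True)  # compare hist with mix
```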
Figure 1: Simulated empirical spectral distribution of $\frac{1}{\sqrt{n}}L$ together with the mixture approximation $\widehat{f}_{\gamma_{M}}$ of (2.1).
### 2.3 Almost sure behavior of the largest eigenvalue
Bai and Yin [3] proved that the largest eigenvalue $\lambda_{\max}(X)$ of a
Wigner matrix $X$ satisfies
$\lim_{n\rightarrow\infty}\frac{1}{\sqrt{n}}\lambda_{\max}(X)=2\quad\text{a.s.}\quad\text{
as }n\rightarrow\infty.$ (2.3)
The next result, from Ding and Jiang [12], gives the same almost sure convergence of the largest eigenvalue for symmetric matrices as in the Wigner case, but with unit variance on the diagonal of $X$. The second part gives the same almost sure convergence for symmetric matrices with zero diagonal entries, as is the case for adjacency matrices of random graphs.
###### Proposition 1.
Let $\mathbf{U}_{n}=(u_{ij}^{(n)})$ be an $n\times n$ symmetric random matrix
and assume that for each $n\geq 1$, $\\{u_{ij}^{(n)}:1\leq i\leq j\leq n\\}$
are independent random variables with $\mathbb{E}u_{ij}^{(n)}=0$,
$\text{Var}(u_{ij}^{(n)})=1$ for all $1\leq i,j\leq n$, and $\sup_{1\leq i,j\leq n,n\geq 1}\mathbb{E}|u_{ij}^{(n)}|^{6+\delta}<\infty$ for some
$\delta>0$. Then:
1. (i)
$\lim_{n\to\infty}\frac{\lambda_{\max}(\mathbf{U}_{n})}{\sqrt{n}}=2\quad\text{a.s.}$
2. (ii)
The statement in (i) still holds if $\mathbf{U}_{n}$ is replaced by
$\mathbf{U}_{n}-\text{diag}(u_{ij}^{(n)})$.
However, for Laplacian matrices the asymptotic empirical spectral distribution has unbounded support, so the largest eigenvalue cannot be expected to converge to a finite limit under the same normalization $1/\sqrt{n}$. A description of the asymptotic behavior of the
largest eigenvalue of Laplacian matrices is also given in [12, Theorem 1]. It
turns out that when normalized by $1/\sqrt{n\log n}$, the largest eigenvalue
of Laplacian matrices converges to a finite limit in probability.
###### Theorem 2.
Let $L_{n}=L_{X_{n}}$ be the Laplacian matrix of an $n\times n$ symmetric
matrix $X_{n}$ whose entries satisfy Condition 1 for some $p>6$. If
$\mu_{n}=0$ and $\sigma_{n}=1$ for all $n\geq 2$, then
1. (a)
$\frac{\lambda_{\max}(L_{n})}{\sqrt{n\log n}}\to\sqrt{2}$
in probability as $n\to\infty$.
2. (b)
Furthermore, if $\\{L_{2},L_{3},\ldots\\}$ are independent Laplacian matrices,
then
$\liminf_{n\rightarrow\infty}\frac{\lambda_{\max}(L_{n})}{\sqrt{n\log
n}}=\sqrt{2}\quad\text{a.s.}$
and
$\limsup_{n\rightarrow\infty}\frac{\lambda_{\max}(L_{n})}{\sqrt{n\log
n}}=2\quad\text{a.s.}$
and the sequence $\\{\lambda_{\max}(L_{n})/\sqrt{n\log n};n\geq 2\\}$ is dense
in $[\sqrt{2},2]\quad\text{a.s.}$
As a consequence of Theorem 2, we have the following result (which will be
used in Section 4), giving upper and lower bounds for the largest eigenvalue
of a Laplacian matrix.
We say that an event $\mathcal{C}_{n}$ occurs with high probability
($\mathrm{w.h.p}$) if for every $\eta>0$, there exists an $n_{0}\in\mathbb{N}$
such that
$\mathbb{P}(\mathcal{C}_{n})\geq 1-\eta\qquad\text{for }\quad n>n_{0}.$
###### Corollary 1.
Let $L=L_{X}$ be the Laplacian matrix of an $n\times n$ symmetric matrix $X$
whose entries satisfy Condition 1 for some $p>6$. If $\mu_{n}=0$ and
$\sigma_{n}=1$ for all $n\geq 2$, then for all $\epsilon>0$
$\lambda_{\max}(L)\leq\sqrt{(2+\epsilon)n\log n}$ (2.4)
with high probability, and
$\lambda_{\max}(L)\geq(2\sqrt{2}-\sqrt{2+\epsilon})\sqrt{n\log n}.$ (2.5)
###### Proof.
This follows from Theorem 2, which ensures that
$\mathbb{P}\left(\left|\frac{\lambda_{\max}(L)}{\sqrt{n\log
n}}-\sqrt{2}\right|>\delta\right)\to 0\qquad\text{as}\quad n\to\infty$
taking $\delta=\sqrt{2+\epsilon}-\sqrt{2}>0$. ∎
Observe that the right side of (2.5) is non-negative if $0<\epsilon<6$.
### 2.4 Block Laplacian matrices
We now indicate some extensions of Theorem 2 to certain block diagonal
Laplacian matrices
$L=\left[\begin{array}[]{cccc}L_{1}&0&\ldots&0\\\ 0&L_{2}&\ldots&0\\\
\vdots&\vdots&\ddots&\vdots\\\ 0&0&\cdots&L_{k}\end{array}\right],$ (2.6)
where $\\{L_{1},L_{2},\ldots,L_{k}\\}$ are independent Laplacian random square
matrices of the same size. These block matrices and the behavior of the
largest eigenvalue of $L$ are important in the optimization problems of
Stochastic Block Models: see [4].
As an application of Theorem 2, we find the asymptotic convergence of the
largest eigenvalue for block diagonal Laplacian matrices.
###### Proposition 2.
Let $L$ be a $k$-block Laplacian diagonal matrix in which each block is of
size $n/k$, and the off-diagonal entries in each block are independent.
Suppose that Condition 1 holds for some $p>6$. If whenever $L_{ij}$ is a
nonzero entry of $L$, $\mathbb{E}[L_{ij}]=0$ and $\text{Var}(L_{ij})=1$ for
all $n\geq 2$, then
$\frac{\lambda_{\max}(L)}{\sqrt{n\log n}}\to\sqrt{\frac{2}{k}}$ (2.7)
in probability as $n\to\infty$.
###### Proof.
Let $L_{1},L_{2},\ldots,L_{k}$ be the $k$ blocks of $L$. For $i=1,\ldots,k$
let
$\Omega_{i}^{(n)}=\\{\omega\in\Omega:\lambda_{\max}(L_{j})\leq\lambda_{\max}(L_{i})\text{
for all }j\\}.$
Hence
$\displaystyle\mathbb{P}\left(\left|\frac{\lambda_{\max}(L)}{\sqrt{n\log n}}-\sqrt{\frac{2}{k}}\right|>\epsilon\right)$ $\displaystyle\leq\sum_{i=1}^{k}\mathbb{P}\left(\left\\{\left|\frac{\lambda_{\max}(L)}{\sqrt{n\log n}}-\sqrt{\frac{2}{k}}\right|>\epsilon\right\\}\bigcap\Omega_{i}^{(n)}\right)$ $\displaystyle\leq\sum_{i=1}^{k}\mathbb{P}\left(\left|\frac{\lambda_{\max}(L_{i})}{\sqrt{n\log n}}-\sqrt{\frac{2}{k}}\right|>\epsilon\right)$ $\displaystyle=\sum_{i=1}^{k}\mathbb{P}\left(\left|\frac{\lambda_{\max}(L_{i})}{\sqrt{\frac{n}{k}\log\frac{n}{k}}}\frac{\sqrt{\frac{n}{k}\log\frac{n}{k}}}{\sqrt{n\log n}}-\sqrt{\frac{2}{k}}\right|>\epsilon\right).$
Note that each matrix $L_{i}$ has size $n/k\times n/k$ and satisfies the
hypotheses of Theorem 2, so, for $i=1,\ldots,k$,
$\frac{\lambda_{\max}(L_{i})}{\sqrt{\frac{n}{k}\log{\frac{n}{k}}}}\to\sqrt{2}$
in probability as $n\to\infty$ and
$\frac{\sqrt{\frac{n}{k}\log\frac{n}{k}}}{\sqrt{n\log
n}}\to\frac{1}{\sqrt{k}}\qquad\text{as}\qquad n\to\infty.$
Then, by Slutsky’s theorem, we have that for each $i=1,\ldots,k$,
$\frac{\lambda_{\max}(L_{i})}{\sqrt{\frac{n}{k}\log{\frac{n}{k}}}}\frac{\sqrt{\frac{n}{k}\log\frac{n}{k}}}{\sqrt{n\log
n}}\to\sqrt{\frac{2}{k}}$
in probability as $n\to\infty$. This guarantees that for all $\epsilon>0$,
$\mathbb{P}\left(\left|\frac{\lambda_{\max}(L)}{\sqrt{n\log
n}}-\sqrt{\frac{2}{k}}\right|>\epsilon\right)\to 0.$
∎
From Proposition 2, and following ideas similar to those in Corollary 1, we obtain upper and lower bounds for the largest eigenvalue of a Laplacian block diagonal matrix.
###### Proposition 3.
Let $L$ be a $k$-block Laplacian diagonal matrix in which each block is of size $n/k$, where the off-diagonal entries in each block are independent. Suppose that Condition 1 holds for some $p>6$. If, whenever $L_{ij}$ is a nonzero entry of $L$, $\mathbb{E}[L_{ij}]=0$ and $\text{Var}(L_{ij})=\sigma^{2}$ for all $n\geq 2$, then, for all $\epsilon>0$,
$\lambda_{\max}(L)\leq\sigma\sqrt{\left(\frac{2}{k}+\epsilon\right)n\log n}$ (2.8)
with high probability, and
$\lambda_{\max}(L)\geq\sigma\left(2\sqrt{\frac{2}{k}}-\sqrt{\frac{2}{k}+\epsilon}\right)\sqrt{n\log n}.$ (2.9)
###### Proof.
The proof follows from Proposition 2, which implies that
$\mathbb{P}\left(\left|\frac{\lambda_{\max}(L)}{\sqrt{n\log
n}}-\sqrt{\frac{2}{k}}\right|>\delta\right)\to 0\qquad\text{as}\qquad
n\to\infty$
taking
$\delta=\sqrt{\frac{2}{k}+\epsilon}-\sqrt{\frac{2}{k}}>0.$
∎
## 3 Limiting law for the largest diagonal entry
Let $L=\left(L_{ij}\right)$ be an $n\times n$ Laplacian matrix whose off-
diagonal entries are independent random variables with zero mean and variance
one, and let
$D^{(n)}=\frac{1}{\sqrt{n-1}}(L_{11},L_{22},\ldots,L_{nn}).$
The random variables $\\{D_{i}^{(n)}\\}_{i=1,\ldots,n}$ are not independent
but rather have the covariance matrix
$\Sigma_{D^{(n)}}=(\Sigma_{D^{(n)}})_{ij}=\left\\{\begin{array}[]{ccc}1&\text{
if }&i=j\\\ \frac{1}{n-1}&\text{ if }&i\neq j.\end{array}\right.$ (3.1)
To find the limiting law of the largest element of $D^{(n)}$ we first derive a useful representation involving $\Sigma_{D^{(n)}}$, which holds for any distribution of the $L_{ij}$ with this covariance matrix.
The eigenvalues of $\Sigma_{D^{(n)}}$ are $(n-2)/(n-1)$ with multiplicity
$n-1$ and $2$ with multiplicity $1$.
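These eigenvalues are easy to confirm numerically; a quick check (ours, with an arbitrary small $n$):

```python
import numpy as np

n = 10
Sigma = np.full((n, n), 1.0 / (n - 1))   # off-diagonal entries 1/(n-1), cf. (3.1)
np.fill_diagonal(Sigma, 1.0)

eigs = np.sort(np.linalg.eigvalsh(Sigma))
# Eigenvalue (n-2)/(n-1) with multiplicity n-1 and eigenvalue 2 with multiplicity 1.
assert np.allclose(eigs[:-1], (n - 2) / (n - 1))
assert np.isclose(eigs[-1], 2.0)
```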
Consider the spectral decomposition of $\Sigma_{D^{(n)}}$
$\Sigma_{D^{(n)}}=U\Lambda U^{t}$
where $\Lambda=(\Lambda_{ii})$ is the diagonal matrix with the eigenvalues of
$\Sigma_{D^{(n)}}$ and $U$ is an orthogonal matrix whose first $n-1$ columns form an orthonormal basis of the eigenspace associated with the repeated eigenvalue $(n-2)/(n-1)$, and whose last column is the normalized eigenvector of the eigenvalue $2$, namely, $(1/\sqrt{n},\ldots,1/\sqrt{n})^{t}$.
Taking
$\tilde{D}^{(n)}=\Sigma_{D^{(n)}}^{-1/2}D^{(n)}$ (3.2)
we have that the covariance matrix of $\tilde{D}^{(n)}$ is the identity
matrix, so the random variables $\\{\tilde{D}_{i}^{(n)}\\}_{i=1}^{n}$ are
uncorrelated.
Then
$\displaystyle\left(\Sigma_{D^{(n)}}^{1/2}\right)_{ij}$
$\displaystyle=\sum_{l=1}^{n}U_{il}\Lambda_{ll}^{1/2}U_{lj}^{t}$
$\displaystyle=\sum_{l=1}^{n-1}U_{il}\sqrt{\frac{n-2}{n-1}}U_{lj}^{t}+\sqrt{2}U_{in}U_{nj}^{t}$
$\displaystyle=\sqrt{\frac{n-2}{n-1}}\sum_{l=1}^{n}U_{il}U_{lj}^{t}+\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)U_{in}U_{nj}^{t},$
and since $U$ is orthogonal and $U_{in}=1/\sqrt{n}$ for $i=1,\ldots,n$, we
obtain
$\left(\Sigma_{D^{(n)}}^{1/2}\right)_{ij}=\left\\{\begin{array}[]{ccc}\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}U_{nj}^{t}&\text{ if }&i\neq j\\\ \sqrt{\frac{n-2}{n-1}}+\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}U_{ni}^{t}&\text{ if }&i=j.\end{array}\right.$ (3.3)
Thus
$\displaystyle D_{i}^{(n)}$ $\displaystyle=\sum_{\begin{subarray}{c}j=1\\\ j\neq i\end{subarray}}^{n}\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}U_{nj}^{t}\tilde{D}_{j}^{(n)}+\left(\sqrt{\frac{n-2}{n-1}}+\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}U_{ni}^{t}\right)\tilde{D}_{i}^{(n)}$ $\displaystyle=\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}\sum_{j=1}^{n}U_{nj}^{t}\tilde{D}_{j}^{(n)}+\sqrt{\frac{n-2}{n-1}}\tilde{D}_{i}^{(n)},$ (3.4)
which is a useful explicit representation of the elements of $D^{(n)}$ in
terms of the elements of $\tilde{D}^{(n)}$, $\Lambda$, and $U$.
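As a sanity check of (3.3) (our sketch, using scipy's matrix square root; the closed form uses $U_{ni}^{t}=1/\sqrt{n}$):

```python
import numpy as np
from scipy.linalg import sqrtm

n = 8
Sigma = np.full((n, n), 1.0 / (n - 1))
np.fill_diagonal(Sigma, 1.0)

c = np.sqrt((n - 2) / (n - 1))
off = (np.sqrt(2) - c) / n            # (sqrt(2) - c) * (1/sqrt(n)) * U_{nj}^t
closed = np.full((n, n), off)
np.fill_diagonal(closed, c + off)     # diagonal case of (3.3)

assert np.allclose(sqrtm(Sigma).real, closed)
```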
We now assume that the entries $L_{ij}$ of $L$ are Gaussian. The following
result shows that the Gumbel distribution is the limiting law of the largest
diagonal entry of $L/\sqrt{n-1}$.
###### Proposition 4.
Let $L=L_{X}$ be an $n\times n$ Laplacian matrix with $X=(X_{ij})$ a symmetric
matrix whose off-diagonal entries are independent standard Gaussian random
variables. Let
$a_{n}=\sqrt{2\log n}\qquad\text{and}\qquad b_{n}=\sqrt{2\log
n}-\frac{\log\log n+\log 4\pi}{2\sqrt{2\log n}}.$ (3.5)
Then
$M_{n}=a_{n}\left(\frac{\max_{i}L_{ii}}{\sqrt{n-1}}-\sqrt{\frac{n-2}{n-1}}b_{n}\right)$
(3.6)
converges in distribution when $n\rightarrow\infty$ to a Gumbel random
variable.
###### Proof.
Since we are assuming Gaussianity, the uncorrelated random variables $\\{\tilde{D}_{i}^{(n)}\\}_{i=1}^{n}$ given by (3.2) are independent standard Gaussian
random variables. Then
$Z=Z_{n}=\sum_{j=1}^{n}U_{nj}^{t}\tilde{D}_{j}^{(n)}$
has the standard Gaussian distribution for all $n$, and by (3.4)
$D_{i}^{(n)}=\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}Z+\sqrt{\frac{n-2}{n-1}}\tilde{D}_{i}^{(n)}.$
(3.7)
We write
$\displaystyle a_{n}\left(\max_{1\leq i\leq
n}D_{i}^{(n)}-\sqrt{\frac{n-2}{n-1}}b_{n}\right)$
$\displaystyle=a_{n}\left(\max_{1\leq i\leq
n}\tilde{D}_{i}^{(n)}-b_{n}\right)$ $\displaystyle+a_{n}\left(\max_{1\leq
i\leq n}D_{i}^{(n)}-\max_{1\leq i\leq
n}\tilde{D}_{i}^{(n)}-\sqrt{\frac{n-2}{n-1}}b_{n}+b_{n}\right).$ (3.8)
The first term on the right side of (3.8) converges in distribution to the Gumbel law by the classical extreme value result for maxima of i.i.d. Gaussian random variables; see [16] or [22].
On the other hand, using (3.7) we obtain
$\displaystyle\max_{1\leq i\leq n}D_{i}^{(n)}$ $\displaystyle=\max_{1\leq
i\leq
n}\left(\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}Z+\sqrt{\frac{n-2}{n-1}}\tilde{D}_{i}^{(n)}\right)$
$\displaystyle=\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}Z+\sqrt{\frac{n-2}{n-1}}\max_{1\leq
i\leq n}\tilde{D}_{i}^{(n)}.$
Thus
$\displaystyle\left|\max_{1\leq i\leq n}D_{i}^{(n)}-\max_{1\leq i\leq
n}\tilde{D}_{i}^{(n)}-\left(\sqrt{\frac{n-2}{n-1}}-1\right)b_{n}\right|$
$\displaystyle\leq\left|\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}Z\right|+\left|\sqrt{\frac{n-2}{n-1}}\max_{1\leq
i\leq n}\tilde{D}_{i}^{(n)}-\max_{1\leq i\leq
n}\tilde{D}_{i}^{(n)}-\left(\sqrt{\frac{n-2}{n-1}}-1\right)b_{n}\right|$
$\displaystyle=\left(\sqrt{2}-\sqrt{\frac{n-2}{n-1}}\right)\frac{1}{\sqrt{n}}\left|Z\right|+\left(1-\sqrt{\frac{n-2}{n-1}}\right)\left|\max_{1\leq i\leq n}\tilde{D}_{i}^{(n)}-b_{n}\right|.$
Then by Slutsky’s theorem
$a_{n}\left|\max_{1\leq i\leq n}D_{i}^{(n)}-\max_{1\leq i\leq n}\tilde{D}_{i}^{(n)}-\left(\sqrt{\frac{n-2}{n-1}}-1\right)b_{n}\right|\to 0\qquad\text{in probability}$
as $n$ goes to infinity. Hence, the second term on the right side of (3.8) also goes to zero in probability as $n$ goes to infinity.
Then, applying Slutsky’s theorem again in (3.8), we obtain that $M_{n}$ converges in distribution to a Gumbel random variable as $n$ goes to infinity. ∎
###### Remark.
The last result is similar to Theorem 3.1 in [6]. However, [6] considers a fixed sequence $\\{X_{n}:n=0,\pm 1,\ldots\\}$ of stationary random variables, while Proposition 4 deals with a triangular array $\\{L_{ii}^{(n)}:i=1,\ldots,n\\}$, as is the case in the study of the largest eigenvalue of ensembles of random matrices.
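A Monte Carlo sketch of Proposition 4 (our illustration; the matrix size and number of replications are arbitrary) compares the empirical distribution of $M_{n}$ with the Gumbel law $G(x)=\exp(-e^{-x})$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 300, 2000
a_n = np.sqrt(2 * np.log(n))
b_n = a_n - (np.log(np.log(n)) + np.log(4 * np.pi)) / (2 * a_n)   # (3.5)

M = np.empty(reps)
for r in range(reps):
    G = rng.standard_normal((n, n))
    X = (G + G.T) / np.sqrt(2)              # off-diagonal variance one
    L_diag = X.sum(axis=1) - np.diag(X)     # L_ii = sum_{j != i} X_ij
    M[r] = a_n * (L_diag.max() / np.sqrt(n - 1)
                  - np.sqrt((n - 2) / (n - 1)) * b_n)             # (3.6)

x = np.linspace(-2.0, 4.0, 7)
print(np.mean(M[:, None] <= x, axis=0))  # empirical CDF of M_n
print(np.exp(-np.exp(-x)))               # Gumbel CDF
```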
## 4 Non-asymptotic estimates for the largest eigenvalue
A consequence of the classical Courant–Fischer min-max theorem [14] is that
$\max_{i}L_{ii}\leq\lambda_{\max}(L),$ (4.1)
which requires no assumption beyond the symmetry of the matrix $L$. This section
establishes a converse inequality, which is valid $\mathrm{w.h.p.}$, for a
class of Laplacian matrices of symmetric Wigner matrices.
The intuition for this comparison is the following. From Theorem 2(a),
$\lambda_{\max}(L)$ grows like $\sqrt{2n\log n}$. On the other hand, the
diagonal entries of a Laplacian matrix are sums of independent random
variables, so by the Central Limit Theorem they would have an approximately
Gaussian distribution for large $n$. It can be proved that for Gaussian random
variables $\gamma_{i}$ (not necessarily independent as is the case of
$L_{ii}$) $\max_{i}\gamma_{i}$ grows like $\sqrt{n\log n}$. This and (4.1)
suggests that $\max_{i}L_{ii}$ and $\lambda_{\max}(L)$ behave similarly when
$n$ is large.
This comparison of $\max_{i}L_{ii}$ and $\lambda_{\max}(L)$ is known when the
Wigner matrix has Gaussian or bounded entries [4, 5]. Our Theorem 3 below makes the above intuition rigorous; it is a general result under the assumption (4.2) and uses Corollary 1, which is a consequence of Theorem 2(a). For the sake of completeness we give a proof that this assumption is satisfied when the Laplacian matrix $L$ is constructed from a Wigner matrix with Gaussian entries, using concentration results for the maximum of Gaussian random variables together with Sudakov minorization, which applies to correlated Gaussian random variables.
###### Theorem 3.
Let $L=(L_{ij})$ be the Laplacian matrix of an $n\times n$ symmetric Wigner
matrix $X=(X_{ij})$ constructed from independent random variables with zero
mean, variance $\sigma^{2}$, and finite $p$-th moment for some $p>6$. If there
exists a $c>0$ such that
$\sigma\sqrt{(n-1)\log n}\leq c\max_{i}L_{ii}\qquad\mathrm{w.h.p}.,$ (4.2)
then for all $\epsilon>0$, setting
$K=c\sqrt{2+\epsilon}>0,$ (4.3)
we have
$\lambda_{\max}(L)\leq K\left(1+\frac{1}{\sqrt{n-1}}\right)\max_{i}L_{ii}\qquad\mathrm{w.h.p}.$ (4.4)
###### Proof.
Let $\hat{L}$ be the matrix given by
$\hat{L}=\frac{1}{\sigma}L.$
We note that for all $1\leq i<j\leq n$, $\mathbb{E}\hat{L}_{ij}=0$ and
$\mathbb{E}\hat{L}_{ij}^{2}=1$.
From Corollary 1, we have that
$\lambda_{\max}(\hat{L})\leq\sqrt{(2+\epsilon)n\log n}\qquad\mathrm{w.h.p}.$
and hence
$\lambda_{\max}(L)\leq\sigma\sqrt{(2+\epsilon)n\log n}\qquad\mathrm{w.h.p}.$
Hence, with high probability
$\displaystyle\lambda_{\max}(L)$
$\displaystyle\leq\sigma\sqrt{(2+\epsilon)(n-1)\log
n}+\sigma\sqrt{(2+\epsilon)\log n}$ $\displaystyle\leq
c\sqrt{2+\epsilon}\max_{i}L_{ii}+\sigma\sqrt{(2+\epsilon)\log n}$
$\displaystyle=\left(c\sqrt{2+\epsilon}+\frac{1}{\max_{i}L_{ii}}\sigma\sqrt{(2+\epsilon)\log
n}\right)\max_{i}L_{ii}.$ (4.5)
From (4.2) we have that
$\frac{1}{\max_{i}L_{ii}}\sigma\sqrt{(2+\epsilon)\log
n}\leq\frac{c\sigma\sqrt{(2+\epsilon)\log n}}{\sigma\sqrt{(n-1)\log
n}}=\frac{c\sqrt{2+\epsilon}}{\sqrt{n-1}}.$
Now, using this bound for the last term in (4.5), we obtain that
$\displaystyle\lambda_{\max}(L)$
$\displaystyle\leq\left(c\sqrt{2+\epsilon}+\frac{c\sqrt{2+\epsilon}}{\sqrt{n-1}}\right)\max_{i}L_{ii}$
$\displaystyle=c\sqrt{2+\epsilon}\left(1+\frac{1}{\sqrt{n-1}}\right)\max_{i}L_{ii}.$
Hence for $\epsilon>0$, if we take $K=c\sqrt{2+\epsilon}$, it follows that
$\lambda_{\max}(L)\leq
K\left(1+\frac{1}{\sqrt{n-1}}\right)\max_{i}L_{ii}\qquad\mathrm{w.h.p}.$
∎
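For Gaussian entries, the comparison in Theorem 3 with $K=\sqrt{2}$ (see the Remark after Proposition 5) is easy to illustrate numerically; in the following sketch (ours, with an arbitrary $n$) the ratio is typically close to one and well below the bound.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
G = rng.standard_normal((n, n))
X = (G + G.T) / np.sqrt(2)            # off-diagonal entries standard Gaussian
L = np.diag(X.sum(axis=1)) - X

lam_max = np.linalg.eigvalsh(L)[-1]   # largest eigenvalue
ratio = lam_max / np.diag(L).max()
bound = np.sqrt(2) * (1 + 1 / np.sqrt(n - 1))
print(ratio, bound)                   # (4.1) gives ratio >= 1; (4.4) bounds it above
```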
The inequality (4.2) is satisfied by several distributions; in particular, it holds under Gaussian assumptions, as we now show.
###### Proposition 5.
Let $L=(L_{ij})$ be the Laplacian matrix of an $n\times n$ symmetric Wigner
matrix $X=(X_{ij})$ constructed from i.i.d. standard Gaussian random
variables. Then (4.2) is satisfied.
###### Proof.
We note that
$\max_{i}L_{ii}=\sqrt{n-1}\max_{i}\frac{L_{ii}}{\sqrt{n-1}}=\sqrt{n-1}\max_{i}\gamma_{i},$
where $\gamma_{i}$ are standard Gaussian random variables with covariance
matrix given by (3.1).
Now, using concentration results (Theorem 3.12 in [18]) we obtain that for all
$\alpha>0$
$\max_{i}\gamma_{i}\geq\mathbb{E}\max_{i}\gamma_{i}-\sqrt{\alpha\log
n}\qquad\text{w.h.p.}.$ (4.6)
From the structure of the covariance matrix, it follows that for all $i\neq j$
and $n>3$,
$\mathbb{E}\left(\gamma_{i}-\gamma_{j}\right)^{2}=2-2\frac{1}{n-1}=2\left(\frac{n-2}{n-1}\right)\geq
1.$
So, Sudakov minorization (Lemma A.3 in [10]) implies
$\mathbb{E}\max_{i}\gamma_{i}\geq C\sqrt{\log n},$ (4.7)
where $C$ is a positive constant that depends neither on $n$ nor on the covariance structure of the Gaussian random variables $\gamma_{i}$. Hence, combining (4.6) and (4.7),
with high probability
$\displaystyle\max_{i}L_{ii}$ $\displaystyle\geq\sqrt{n-1}\left(C\sqrt{\log
n}-\sqrt{\alpha\log n}\right)$ $\displaystyle=(C-\sqrt{\alpha})\sqrt{(n-1)\log
n}.$
Choosing $\alpha<C^{2}$, it follows that (4.2) is satisfied with $c=(C-\sqrt{\alpha})^{-1}$. ∎
###### Remark.
1. (a)
It is well known (see for example Lemma 2.3 in [18] or Appendix A in [10]) that the expected value of the maximum of $n$ standard Gaussian random variables, even if they are not independent, cannot be much larger than $\sqrt{2\log n}$. For that reason, the universal constant in the Sudakov minorization is less than or equal to $\sqrt{2}$. We can then take $\alpha$ such that $C-\sqrt{\alpha}\geq 1$, and taking $\epsilon=2((C-\sqrt{\alpha})^{2}-1)\geq 0$ we obtain that in the Gaussian case the constant $K$ can be taken equal to $\sqrt{2}$.
2. (b)
The constant $K$ might not be sharp but it is useful for our weak convergence
result in Section 5.
For the non-Gaussian case, when the random variables $X_{ij}$ have bounded
support, Bandeira proved (proof of Theorem 2.1 in [4]) that condition (4.2) is
satisfied. Moreover, the following bound can be established as a consequence
of Theorem 2.1 in [4], which considers Laplacian matrices constructed from
Wigner matrices with bounded entries but not necessarily with the same
distribution.
###### Proposition 6.
Let $L=(L_{ij})$ be the Laplacian matrix of an $n\times n$ symmetric Wigner
matrix $X=(X_{ij})$ constructed from independent random variables bounded by
$M$, with zero mean and variance $\sigma^{2}.$ If there exists a $c>0$ such
that
$\sigma\sqrt{n-1}\geq cM\sqrt{\log n},$
then there exist positive constants $c_{1},C_{1},\beta$ depending only on $c,$
such that
$\lambda_{\max}(L)\leq\left(1+\frac{C_{1}}{\sqrt{\log
n}}\right)\max_{i}L_{ii}\qquad$ (4.8)
with probability at least $1-c_{1}n^{-\beta}$.
## 5 Limiting law for the largest eigenvalue
We are now ready to prove that the Gumbel distribution is the limiting law of
the largest eigenvalue of a Laplacian random matrix constructed from Gaussian
entries. We write
$a_{n}^{\prime}=\frac{a_{n}}{\sqrt{2}}=\sqrt{\log n}\qquad\text{and}\qquad b_{n}^{\prime}=\sqrt{2}b_{n}=2\sqrt{\log n}-\frac{\log\log n+\log 4\pi}{2\sqrt{\log n}}.$ (5.1)
###### Theorem 4.
Let $L=L_{X}$ be an $n\times n$ Laplacian matrix with $X=(X_{ij})$ a symmetric
matrix whose off-diagonal entries are independent standard Gaussian random
variables. Then
$R_{n}=a_{n}^{\prime}\left(\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-b_{n}^{\prime}\sqrt{\frac{n-2}{n-1}}\right)$
(5.2)
converges in distribution when $n\to\infty$ to a Gumbel random variable with
distribution $G(x)=\exp(-e^{-x})$ for all $x\in\mathbb{R}$.
We observe that the rescaling sequences $a_{n}^{\prime}$ and $b_{n}^{\prime}$ given by (5.1) have the same order as the appropriate choices in the extreme value theorem for the maxima of a sequence of Gaussian random variables, in the i.i.d. and the stationary cases; see [22] and [6], respectively, and also [16].
###### Proof.
We first note, with the notation of Section 3,
$\displaystyle
a_{n}^{\prime}\left(\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-b_{n}^{\prime}\sqrt{\frac{n-2}{n-1}}\right)$
$\displaystyle=a_{n}\left(\max_{1\leq i\leq
n}D_{i}^{(n)}-b_{n}\sqrt{\frac{n-2}{n-1}}\right)$
$\displaystyle+a_{n}^{\prime}\left(\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-\sqrt{2}\max_{1\leq
i\leq n}D_{i}^{(n)}\right).$ (5.3)
From Proposition 4, the first term on the right side of (5.3) converges in distribution to a Gumbel random variable. On the other hand, taking $K=\sqrt{2}$ in Theorem 3 and Proposition 5, we have that
$\left|\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-\sqrt{2}\max_{1\leq i\leq
n}D_{i}^{(n)}\right|\leq\left|\frac{\sqrt{2}}{\sqrt{n-1}}\max_{1\leq i\leq
n}D_{i}^{(n)}\right|$
and
$\displaystyle\frac{a_{n}}{\sqrt{2}}\left|\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-\sqrt{2}\max_{1\leq
i\leq n}D_{i}^{(n)}\right|$
$\displaystyle\leq\frac{a_{n}}{\sqrt{2}}\left|\frac{\sqrt{2}}{\sqrt{n-1}}\max_{1\leq
i\leq n}D_{i}^{(n)}\right|$
$\displaystyle\leq\frac{a_{n}}{\sqrt{n-1}}\left|\max_{1\leq i\leq
n}D_{i}^{(n)}-b_{n}\sqrt{\frac{n-2}{n-1}}+b_{n}\sqrt{\frac{n-2}{n-1}}\right|$
$\displaystyle\leq\frac{a_{n}}{\sqrt{n-1}}\left|\max_{1\leq i\leq
n}D_{i}^{(n)}-b_{n}\sqrt{\frac{n-2}{n-1}}\right|$
$\displaystyle\quad\qquad\qquad+\frac{a_{n}b_{n}}{\sqrt{n-1}}\sqrt{\frac{n-2}{n-1}}.$
(5.4)
Now, using Proposition 4 and Slutsky’s theorem, the first term on the right side of (5.4) goes to zero in probability as $n$ goes to infinity. Hence, using the forms of $a_{n}$ and $b_{n}$ in (3.5), the second term in (5.4) goes to zero as $n$ goes to infinity. Therefore
$\frac{a_{n}}{\sqrt{2}}\left|\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-\sqrt{2}\max_{1\leq
i\leq n}D_{i}^{(n)}\right|\to 0\qquad\text{as}\qquad n\to\infty$
in probability. Thus, the second term on the right side of (5.3) tends to zero as $n$ goes to infinity, and we obtain that $R_{n}$ in (5.2) converges in distribution to a Gumbel random variable as $n$ goes to infinity. ∎
###### Remark.
It is easy to find the asymptotic distribution of the largest eigenvalue of a $k$-block Laplacian diagonal matrix $L$ given by (2.6) when the blocks are independent and all the entries are Gaussian. Indeed, since $\lambda_{\max}(L)$ is the maximum of the largest eigenvalues of the $k$ independent blocks,
$R_{n}^{k}=a_{n}^{\prime}\left(\frac{\lambda_{\max}(L)}{\sqrt{n-1}}-b_{n}^{\prime}\sqrt{\frac{n-2}{n-1}}\right)$
converges in distribution when $n\to\infty$ to a random variable with distribution $G_{k}(x)=\exp(-ke^{-x})$ for all $x\in\mathbb{R}$, where $a_{n}^{\prime}$ and $b_{n}^{\prime}$ are given by (5.1).
Acknowledgments. The authors would like to thank the anonymous referee of our
manuscript for the valuable and constructive suggestions that improved the
revised version of our work.
## References
* [1] E. Abbe, A. S. Bandeira, A. Bracher, and A. Singer, Decoding binary node labels from censored edge measurements: Phase transition and efficient recovery, IEEE Transactions on Network Science and Engineering, 1 (2014), pp. 10–22.
* [2] G. W. Anderson, A. Guionnet, and O. Zeitouni, An Introduction to Random Matrices, Cambridge University Press, 2010.
* [3] Z.-D. Bai and Y.-Q. Yin, Necessary and sufficient conditions for almost sure convergence of the largest eigenvalue of a Wigner matrix, Ann. Probab., 16 (1988), pp. 1729–1741.
* [4] A. Bandeira, Random Laplacian matrices and convex relaxations, Found. Comput. Math., 18 (2018), pp. 345–379.
* [5] A. Bandeira, A. Singer, and T. Strohmer, Mathematics of Data Science, in preparation.
* [6] S. M. Berman, Limit theorems for the maximum term in stationary sequences, Ann. Math. Statist., 35 (1964), pp. 502–516.
* [7] P. Biane, On the free convolution with a semi-circular distribution, Indiana Univ. Math. J., 46 (1997), pp. 705–718.
* [8] W. Bryc, A. Dembo, and T. Jiang, Spectral measure of large random Hankel, Markov and Toeplitz matrices, Ann. Probab., 34 (2006), pp. 1–38.
* [9] W. Bryc and S. Sethuraman, A remark on the maximum eigenvalue for circulant matrices, in High Dimensional Probability V: The Luminy Volume, Institute of Mathematical Statistics, 2009, pp. 179–184.
* [10] S. Chatterjee, Superconcentration and Related Topics, Springer-Verlag, Berlin, 2014.
* [11] F. Chung and L. Lu, Complex Graphs and Networks, American Mathematical Society, Providence, RI, USA, 2006.
* [12] X. Ding and T. Jiang, Spectral distributions of adjacency and Laplacian matrices of random graphs, Ann. Appl. Probab., 20 (2010), pp. 2086–2117.
* [13] M. X. Goemans and D. P. Williamson, Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming, J. ACM, 42 (1995), pp. 1115–1145.
* [14] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, 2012.
* [15] I. M. Johnstone, On the distribution of the largest eigenvalue in principal components analysis, Ann. Statist., 29 (2001), pp. 295–327.
* [16] M. R. Leadbetter, G. Lindgren, and H. Rootzén, Extremes and Related Properties of Random Sequences and Processes, Springer-Verlag, Berlin, 1983.
* [17] J. O. Lee and K. Schnelli, Local law and Tracy–Widom limit for sparse random matrices, Probab. Theory and Related Fields, 171 (2018), pp. 543–616.
* [18] P. Massart, Concentration Inequalities and Model Selection. Ecole d’Eté de Probabilités de Saint-Flour XXXIII.2003, Springer-Verlag, Berlin, 2007.
* [19] M. W. Meckes, Some results on random circulant matrices, in High Dimensional Probability V: The Luminy Volume, Institute of Mathematical Statistics, 2009, pp. 213–223.
* [20] J. A. Mingo and R. Speicher, Free Probability and Random Matrices, Springer-Verlag, Berlin, 2017.
* [21] L. Pastur and V. Vasilchuk, On the law of addition of random matrices, Comm. Math. Phys., 214 (2000), pp. 249–286.
* [22] S. I. Resnick, Extreme Values, Regular Variation, and Point Processes, Springer-Verlag, Berlin, 1987.
* [23] B. Rider and C. D. Sinclair, Extremal laws for the real Ginibre ensemble, Ann. Appl. Probab., 24 (2014), pp. 1621–1651.
* [24] A. Soshnikov, Universality at the edge of the spectrum in Wigner random matrices, Comm. Math. Phys., 207 (1999), pp. 697–733.
* [25] C. A. Tracy and H. Widom, Level-spacing distributions and the Airy kernel, Comm. Math. Phys., 159 (1994), pp. 151–174.
* [26] C. A. Tracy and H. Widom, On orthogonal and symplectic matrix ensembles, Comm. Math. Phys., 177 (1996), pp. 727–754.
* [27] E. P. Wigner, On the distribution of the roots of certain symmetric matrices, Ann. Math., 67(2) (1958), pp. 325–327.
††institutetext: 1Institut für Experimentalphysik, Universität Hamburg,
Germany
2Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720,
USA
3Berkeley Institute for Data Science, University of California, Berkeley, CA
94720, USA
4NHETC, Department of Physics & Astronomy, Rutgers University, Piscataway, NJ
08854, USA
5Department of Physics & Astronomy, The Johns Hopkins University, Baltimore,
MD 21211, USA
6Google, Mountain View, CA 94043, USA
7Physics Department, Reed College, Portland, OR 97202, USA
8Jožef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia
9Nevis Laboratories, Columbia University, 136 S Broadway, Irvington NY, USA
10Physik Institut, University of Zurich, Winterthurerstrasse 190, 8057 Zurich,
Switzerland
11SLAC National Accelerator Laboratory, Stanford University, Stanford, CA
94309, USA
12Berkeley Center for Cosmological Physics, University of California, Berkeley
13Departamento de Física da Universidade de Aveiro and CIDMA Campus de
Santiago, 3810-183 Aveiro, Portugal
14Institute for Theoretical Physics, University of Heidelberg, Heidelberg,
Germany
15Department of Physics & Astronomy, University of Kansas, 1251 Wescoe Hall
Dr., Lawrence, KS 66045, USA
16Laboratoire de Physique de Clermont, Université Clermont Auvergne, France
17University of California San Diego, La Jolla, CA 92093, USA
18Laboratory for Nuclear Science, MIT, 77 Massachusetts Ave, Cambridge, MA
02139
19Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19,
1000 Ljubljana, Slovenia
20Department of Physics and Astronomy, University of Sussex, Brighton BN1 9QH,
UK
21Center for Theoretical Physics, MIT, 77 Massachusetts Ave, Cambridge, MA
02139
22Physics Department, University of Michigan, Ann Arbor, MI 48109, USA
23Instituto de Física Teórica, IFT-UAM/CSIC, Universidad Autónoma de Madrid,
28049 Madrid, Spain
24Laboratory of Instrumentation and Experimental Particle Physics, Lisbon,
Portugal
25European Organization for Nuclear Research (CERN), CH-1211, Geneva 23,
Switzerland
26Instituto de Física Corpuscular (IFIC), Universidad de Valencia-CSIC,
E-46980, Valencia, Spain
27CSAIL, Massachusetts Institute of Technology, 32 Vassar Street, Cambridge,
MA 02139, USA
28International Center for Advanced Studies and CONICET, UNSAM, CP1650, Buenos
Aires, Argentina
29Division Office Physics, Math and Astronomy, California Institute of
Technology, Pasadena, CA 91125, USA
30Department of Physics, University of Genova, Via Dodecaneso 33, 16146
Genova, Italy
# The LHC Olympics 2020
A Community Challenge for Anomaly
Detection in High Energy Physics
Gregor Kasieczka (ed),1 Benjamin Nachman (ed),2,3 David Shih (ed),4 Oz Amram,5
Anders Andreassen,6 Kees Benkendorfer,2,7 Blaz Bortolato,8 Gustaaf
Brooijmans,9 Florencia Canelli,10 Jack H. Collins,11 Biwei Dai,12 Felipe F. De
Freitas,13 Barry M. Dillon,8,14 Ioan-Mihail Dinu,5 Zhongtian Dong,15 Julien
Donini,16 Javier Duarte,17 D. A. Faroughy10 Julia Gonski,9 Philip Harris,18
Alan Kahn,9 Jernej F. Kamenik,8,19 Charanjit K. Khosa,20,30 Patrick Komiske,21
Luc Le Pottier,2,22 Pablo Martín-Ramiro,2,23 Andrej Matevc,8,19 Eric
Metodiev,21 Vinicius Mikuni,10 Inês Ochoa,24 Sang Eon Park,18 Maurizio
Pierini,25 Dylan Rankin,18 Veronica Sanz,20,26 Nilai Sarda,27 Uros̆
Seljak,2,3,12 Aleks Smolkovic,8 George Stein,2,12 Cristina Mantilla Suarez,5
Manuel Szewc,28 Jesse Thaler,21 Steven Tsan,17 Silviu-Marian Udrescu,18 Louis
Vaslin,16 Jean-Roch Vlimant,29 Daniel Williams,9 Mikaeel Yunus18
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
A new paradigm for data-driven, model-agnostic new physics searches at
colliders is emerging, and aims to leverage recent breakthroughs in anomaly
detection and machine learning. In order to develop and benchmark new anomaly
detection methods within this framework, it is essential to have standard
datasets. To this end, we have created the LHC Olympics 2020, a community
challenge accompanied by a set of simulated collider events. Participants in
these Olympics have developed their methods using an R&D dataset and then
tested them on black boxes: datasets with an unknown anomaly (or not). This
paper will review the LHC Olympics 2020 challenge, including an overview of
the competition, a description of methods deployed in the competition, lessons
learned from the experience, and implications for data analyses with future
datasets as well as future colliders.
## 1 Introduction
The field of high energy physics (HEP) has reached an exciting stage in its
development. After many decades of searching, the Standard Model (SM) of
particle physics was completed in 2012 with the discovery of the Higgs boson
Aad:2012tfa ; Chatrchyan:2012ufa . Meanwhile, there are strong motivations for
physics beyond the Standard Model (BSM). For example, the nature of dark
matter and dark energy, the mass of neutrinos, the minuteness of the neutron
dipole moment, and the baryon-anti-baryon asymmetry in the universe are all
well-established problems that do not have solutions in the Standard Model.
Furthermore, the Higgs boson mass is unstable with respect to quantum
corrections, and a consistent theory of quantum gravity remains mysterious.
The Large Hadron Collider (LHC) at CERN has the potential to shed light on all
of these fundamental challenges.
Searching for BSM physics is a major part of the research program at the LHC
across experiments atlasexoticstwiki ; atlassusytwiki ; atlashdbspublictwiki ;
cmsexoticstwiki ; cmssusytwiki ; cmsb2gtwiki ; lhcbtwiki . The current
dominant search paradigm is top-down, meaning searches target specific models.
Nearly all of the existing BSM searches at the LHC pick a signal model that
addresses one or more of the above experimental or theoretical motivations for
BSM physics. Then, high-fidelity synthetic or simulated data are generated
using this signal model. These signal events are then often combined with
synthetic background events to develop an analysis strategy which is
ultimately applied to data. An analysis strategy requires a proposal for
selecting signal-like events as well as a method for calibrating the
background rate to ensure that the subsequent statistical analysis is
unbiased. Many searches provide “model-independent” results, in the form of a
limit on cross-section or cross-section times acceptance ungoverned by any
theoretical calculation. However, the event selection and background
estimation are still strongly model-dependent.
These search efforts are constantly improving and are important to continue
and expand with new data. However, it is also becoming clear that a
complementary search paradigm is critical for fully exploring the complex LHC
data. One possible explanation for why we have not discovered new physics yet
is that the model dependence of the current search paradigm has created blind
spots to unconventional new physics signatures. In fact, despite thousands of
BSM searches to date, much of phase space and countless possible signals
remain unexplored at present (for many examples just in the realm of 2-body
resonances, see Craig:2016rqv ; 1907.06659 ).
Model independent searches for new particles have a long history in high
energy physics. With a venerable history dating back at least to the discovery
of the $\rho$ meson Button:1962bnf , generic searches like the bump hunt (a search where signal events present as a localized enhancement on top of a smoothly falling background distribution) assume little about the signal and have been used to discover many new particles, including the Higgs boson Aad:2012tfa ; Chatrchyan:2012ufa . While generic, the bump hunt is not
particularly sensitive because it usually does not involve other event
properties aside from the resonant feature. More differential signal model
independent searches have been performed by D0 sleuth ; Abbott:2000fb ;
Abbott:2000gx ; Abbott:2001ke , H1 Aaron:2008aa ; Aktas:2004pz , ALEPH
Cranmer:2005zn , CDF Aaltonen:2007dg ; Aaltonen:2007ab ; Aaltonen:2008vt , CMS
CMS-PAS-EXO-14-016 ; CMS-PAS-EXO-10-021 ; CMS:2020ohc ; Sirunyan:2020jwk , and
ATLAS Aaboud:2018ufy ; ATLAS-CONF-2014-006 ; ATLAS-CONF-2012-107 . The general
strategy in these analyses is to directly compare data with simulation in a
large number of exclusive final states (bins). Aside from the feature
selection, these approaches are truly signal model independent. The cost of signal model independence is sensitivity: with a large number of bins, significance is diluted by the look elsewhere effect Gross:2010qma . Also, these approaches rely heavily on simulation (a form of background model dependence), so they are typically only as sensitive as the simulation is accurate, and characterizing systematic uncertainties across thousands of final states can be challenging.
Machine learning offers great potential to enhance and extend model
independent searches. In particular, semi-, weak-, or un-supervised training
can be used to achieve sensitivity to weak or complex signals with fewer model
assumptions than traditional searches. Anomaly detection is an important topic
in applied machine learning research, but HEP challenges require dedicated
approaches. In particular, single events often contain no useful information —
it is only when considering a statistical ensemble that an anomaly becomes
apparent. This is a contrast between anomaly detection that is common in
industry (“off manifold” or “out-of-sample” anomalies) and that which is the
target of searches in high energy physics (“over-densities”). Furthermore, HEP
data are systematically different than natural images and other common data
types used for anomaly detection in applied machine learning. In order to test
the resulting tailored methods, it is essential to have public datasets for
developing and benchmarking new approaches.
For this purpose, we have developed the LHC Olympics 2020 challenge and
corresponding datasets lhco . The name of this community effort is inspired by
the first LHC Olympics that took place over a decade ago before the start of
the LHC lhco_old1 ; lhco_old2 ; lhco_old3 ; lhco_old4 . In those Olympics,
researchers prepared ‘black boxes’ (BBs) of simulated signal events and
contestants had to examine these simulations to infer the underlying signal
process. These boxes were nearly all signal events and many of the signatures
were dramatic (e.g. dilepton mass peaks) and all were findable with simple
analysis procedures. While this was an immensely useful exercise, we are now faced with the reality that new physics is rare or at least hard to find, so characterizing its BSM properties will not be our biggest challenge.
The LHC Olympics 2020 challenge is also composed of black boxes. In contrast
to the previous Olympics, these contain mostly simulated SM events. The goal
of the challenge is to determine if there is new physics in the box and then
to identify its properties. As stressed above, calibrating the background
prediction is an essential aspect of BSM searches and so we have restricted
this search to high energy hadronic final states where sideband methods can be
used to estimate the background. We provide lists of the detector-
reconstructed final state particles in order to allow contestants to test
methods that can process low-level event information. To aid in method
development and testing, we also provide a simulated dataset (with no
anomalies) and a benchmark signal dataset. These two are combined to form the
R&D dataset, and provided in addition to the black box datasets. The goal of
this paper is to describe the Winter winterolympics and Summer summerolympics
Olympics 2020 competitions. Well over one hundred researchers participated in
these events, with over a dozen teams submitting their results for the black
boxes.
This paper is organized as follows. Section 2 introduces the LHC Olympics
competition, including the R&D and black box datasets. A brief description of
methods deployed in the competition are provided in Secs. 3, 4, and 5. Each
contribution includes an introduction to the method, a concise statement of
the results, as well as lessons learned before, during, and/or after the
challenge. The results and lessons learned are synthesized in Sec. 6.
Implications for data analyses with future datasets as well as future
colliders are discussed in Sec. 7 and the paper concludes in Sec. 8.
## 2 Dataset and Challenge
The portal for the LHC Olympics dataset can be found at the challenge website
lhco . The datasets described below are all publicly available and
downloadable from Zenodo lhc_bb1 . Contestants entered their results in a
Google form. On the form, participants were asked to state:
* •
The black box number (1-3) corresponding to their submission.
* •
A short abstract describing their method.
* •
A $p$-value associated with the dataset having no new particles (null
hypothesis).
* •
As complete a description of the new physics as possible. For example: the
masses and decay modes of all new particles (and uncertainties on those
parameters).
* •
How many signal events (with the associated uncertainty) are in the dataset
(before any selection criteria).
Additionally, contestants were encouraged to submit plots or a Jupyter
notebook PER-GRA:2007 . The LHC Olympics website includes a basic Jupyter
notebook for reading in the data and running basic preprocessing using the
pyjet software noel_dawe_2020_4289190 ; Cacciari:2011ma ; Cacciari:2005hq .
Further details of the R&D and three black box datasets can be found below.
### 2.1 R&D Dataset
The R&D dataset consisted of one million SM events each comprised of two jets
produced through the strong interaction, referred to as quantum chromodynamics
(QCD) dijet events, and 100,000 $Z^{\prime}\to XY$ events, with $X\to q\bar{q}$ and $Y\to q\bar{q}$, whose topology is shown in Fig. 1. The masses
of the new BSM particles $Z^{\prime}$, $X$, and $Y$ are 3.5 TeV, 500 GeV and
100 GeV, respectively. The events were produced using Pythia 8.219
Sjostrand:2006za ; Sjostrand:2014zea and Delphes 3.4.1 deFavereau:2013fsa ;
Mertens:2015kba ; Selvaggi:2014mya , with default settings, and with no pileup
or multiparton interactions included. They are selected using a single large-
radius ($R=1$) anti-$k_{\mathrm{T}}$ Cacciari:2008gp jet trigger with a
$p_{\text{T}}$ threshold of 1.2 TeV.
The signal model was discussed in Ref. 1907.06659 and has the feature that
existing generic searches for dijet resonances or targeted searches for
diboson resonances may not be particularly sensitive. For example, existing
searches may register a low significance ($<2\,\sigma$) while automated
methods may be able to identify the signal with a high significance.
Figure 1: Feynman diagram for the signal of the R&D dataset and Black Box 1: $Z^{\prime}\to XY$ with $X\to q\bar{q}$ and $Y\to q\bar{q}$.
These events are stored as pandas dataframes mckinney-proc-scipy-2010 saved
to compressed HDF5 koranne2011hierarchical format. For each event, all
Delphes reconstructed particles in the event are assumed to be massless and
are recorded in detector coordinates ($p_{\text{T}}$, $\eta$, $\phi$). More
detailed information such as particle charge is not included. Events are zero
padded to constant size arrays of 700 particles, with the truth bit appended
at the end to dictate whether the event is signal or background. The array
format is therefore ($N_{\text{events}}$=1.1 M, 2101).
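A minimal loading and clustering sketch (ours, not the official notebook; we assume the Zenodo file name events_anomalydetection.h5, and reading the full 1.1M-event file at once may require chunking in practice):

```python
import numpy as np
import pandas as pd
from pyjet import cluster, DTYPE_PTEPM

# Load the R&D dataset: each row is 700 x (pT, eta, phi) plus a truth bit.
data = pd.read_hdf("events_anomalydetection.h5").to_numpy()
labels = data[:, -1]                        # 1 = signal, 0 = background
events = data[:, :-1].reshape(-1, 700, 3)

# Cluster one event into R = 1 anti-kT jets (p = -1) with pyjet.
evt = events[0]
evt = evt[evt[:, 0] > 0]                    # drop the zero padding
particles = np.zeros(len(evt), dtype=DTYPE_PTEPM)
particles["pT"], particles["eta"], particles["phi"] = evt.T  # mass left at zero
jets = cluster(particles, R=1.0, p=-1).inclusive_jets(ptmin=20)
```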
### 2.2 Black Box 1
Setting | R&D | BB1 | BB3
---|---|---|---
Tune:pp | 14 | 3 | 10
PDF:pSet | 13 | 12 | 5
TimeShower:alphaSvalue | 0.1365 | 0.118 | 0.16
SpaceShower:alphaSvalue | 0.1365 | 0.118 | 0.16
TimeShower:renormMultFac | 1 | 0.5 | 2
SpaceShower:renormMultFac | 1 | 0.5 | 2
TimeShower:factorMultFac | 1 | 1.5 | 0.5
SpaceShower:factorMultFac | 1 | 1.5 | 0.5
TimeShower:pTmaxMatch | 1 | 2 | 1
SpaceShower:pTmaxMatch | 0 | 2 | 1
Table 1: Pythia settings for the different datasets. For R&D the settings were
the Pythia defaults while for BB1 and BB3 they were modified. BB2 is not shown
here because it was produced using Herwig++ with default settings.
This box contained the same signal topology as the R&D dataset (see Fig. 1)
but with masses $m_{Z^{\prime}}=3.823$ TeV, $m_{X}=732$ GeV and $m_{Y}=378$
GeV. A total of 834 signal events were included (out of a total of 1M events
in all). This number was chosen so that the approximate inclusive local significance is not significant. In order to emulate reality, the background events in Black Box 1 are different from the ones in the R&D dataset. The background still uses the same generators as the R&D dataset, but a number of Pythia and Delphes settings were changed from their defaults. For the Pythia settings, see Table 1. (Setting pTmaxMatch = 2 in Pythia invokes a “power shower”, where emissions are allowed to occur all the way to the kinematical limit. With a phase space cut on the hard scattering process, this sculpts a bump-like feature in the multijet background, which was flagged as anomalous by the authors of Section 5.2. Identification of this bump is labeled as “Human NN” in Figure 51.) For the Delphes settings, we changed
EfficiencyFormula in the ChargedHadronTrackingEfficiency module,
ResolutionFormula in the ChargedHadronMomentumSmearing module, and
HCalResolutionFormula in the hadronic calorimeter (HCal) module. The tracking
variations are approximated using the inner-detector measurements from Ref. Aad:2016jkr and the calorimeter energy resolutions are varied by 10%, inspired
by measurements from Ref. Aaboud:2016hwh .
### 2.3 Black Box 2
This sample of 1M events was background only. The background was produced
using Herwig++ Bahr:2008pv instead of Pythia, and used a modified Delphes
detector card that is different from Black Box 1 but with similar
modifications on top of the R&D dataset card.
### 2.4 Black Box 3
The signal was inspired by Ref. Agashe:2016rle ; Agashe:2016kfr and consisted
of a heavy resonance (the KK graviton) with mass $m=4.2$ TeV which had two
different decay modes. The first is just to dijets (gluons), while the second
is to a lighter neutral resonance $R$ (the IR radion) of mass $m_{R}=2.217$
TeV plus a gluon, with $R\to gg$. 1200 dijet events and 2000 trijet events
were included along with QCD backgrounds in Black Box 3. These numbers were
chosen so that an analysis that found only one of the two modes would not
observe a significant excess. The background events were produced with
modified Pythia and Delphes settings (different from those used for the R&D
dataset and the other black boxes). For the Pythia settings, see Table 1.
Figure 2: Feynman diagrams for the signal of Black Box 3.
# Individual Approaches
The following sections describe a variety of approaches to anomaly detection.
In addition to an explanation of the method, each section includes a set of
results on the LHC Olympics datasets as well as a brief description of lessons
learned.
We have grouped the various methods into three loose categories: Unsupervised
(Sec. 3), Weakly Supervised (Sec. 4), and (Semi)-Supervised (Sec. 5).
Supervision refers to the type of label information provided to the machine
learning algorithms during training. Unsupervised methods do not provide any
label information and learn directly from background-dominated data.
Typically, these methods try to look for events with low
$p(\text{background})$. (Exceptions exist, see e.g. ANODE in Sec. 3.2 and GIS
in Sec. 3.5 which use likelihood ratios.) Weakly supervised methods have noisy
labels.333Such a categorisation is not unique, see e.g. zhou2018brief for an
alternative way of defining weak supervision. We follow the established usage
in applications of machine learning for particle physics. Many of these
approaches operate by comparing two datasets with different amounts of a
potential signal. The labels are noisy because instead of being pure ‘signal’
and ‘background’, the labels are ‘possibly signal-depleted’ and ‘possibly
signal-enriched’. The goal of these methods is to look for events with high
$p(\text{possibly signal-depleted})/p(\text{possibly signal-enriched})$.
Supervised methods have labels for each event. Semi-supervised methods have
labels for some events. Methods that are labeled as (Semi-)Supervised use
signal simulations in some way to build signal sensitivity. These three
categories are not exact and the boundaries are not rigid. However, this
categorization may help to identify similarities and differences between
approaches. Within each category, the methods are ordered alphabetically by
title.
Furthermore, the results on the datasets can be grouped into three types: (i)
blinded contributions using the black boxes, (ii) unblinded results or updates
on blinded results (and thus, also unblinded) on the black boxes, and (iii)
results only using the R&D dataset. All three of these contribution types
provide valuable insight, but each serves a different purpose. The first
category (i) corresponds to the perspective of a pure challenge that is
analogous to a real data analysis. The organizers of the LHCO challenge could
not participate in this type of analysis. Section 6.1 provides an overview of
the challenge results. The LHC Olympics datasets have utility beyond the
initial blinded challenge as well and so contributions of type (ii) and (iii)
are also important. Some of the results of these types came from
collaborations with the challenge organizers, and some came from other groups
who did not manage (for whatever reason) to deploy their methods on the
blinded black boxes.
A summary of all of the methods and results can be found in Table 2. Note that
in some cases, blinded results (of type (i)) were presented at the LHC
Olympics workshops, but only a subset (sometimes of type (iii)) appear in the
subsequent sections. The table gives precedence to the workshop results,
which are also discussed in Sec. 6.1.
Section | Short Name | Method Type | Results Type
---|---|---|---
3.1 | VRNN | Unsupervised | (i) (BB2,3) and (ii) (BB1)
3.2 | ANODE | Unsupervised | (iii)
3.3 | BuHuLaSpa | Unsupervised | (i) (BB2,3) and (ii) (BB1)
3.4 | GAN-AE | Unsupervised | (i) (BB2-3) and (ii) (BB1)
3.5 | GIS | Unsupervised | (i) (BB1)
3.6 | LDA | Unsupervised | (i) (BB1-3)
3.7 | PGA | Unsupervised | (ii) (BB1-2)
3.8 | Reg. Likelihoods | Unsupervised | (iii)
3.9 | UCluster | Unsupervised | (i) (BB2-3)
4.1 | CWoLa | Weakly Supervised | (ii) (BB1-2)
4.2 | CWoLa AE Compare | Weakly/Unsupervised | (iii)
4.3 | Tag N’ Train | Weakly Supervised | (i) (BB1-3)
4.4 | SALAD | Weakly Supervised | (iii)
4.5 | SA-CWoLa | Weakly Supervised | (iii)
5.1 | Deep Ensemble | Semisupervised | (i) (BB1)
5.2 | Factorized Topics | Semisupervised | (iii)
5.3 | QUAK | Semisupervised | (i) (BB2,3) and (ii) (BB1)
5.4 | LSTM | Semisupervised | (i) (BB1-3)
Table 2: A categorization in terms of method and result type for all of the
results presented in the Sec. 3, Sec. 4, and Sec. 5.
## 3 Unsupervised
### 3.1 Anomalous Jet Identification via Variational Recurrent Neural
Network444Authors: Alan Kahn, Julia Gonski, Inês Ochoa, Daniel Williams, and
Gustaaf Brooijmans.
#### 3.1.1 Method
The method described here employs a Variational Recurrent Neural Network
(VRNN) to perform jet-level anomaly detection by modeling jets as a sequence
of constituents. A VRNN is a sequence-modeling architecture which replaces the
standard encoder-decoder architecture of a Recurrent Neural Network with a
Variational Autoencoder (VAE) chung2016recurrent . This allows the VRNN to
perform sequence modeling in addition to variational inference, which has been
shown to be a very powerful tool for anomaly detection
An2015VariationalAB . A sequence-modeling architecture is well-motivated as it
is capable of accommodating variable-length inputs, such as lists of
constituent four-vectors in a jet, while suppressing the ability of the model
to learn correlations with the jet’s constituent multiplicity. By contrast,
fixed-length architectures such as VAEs rely on a loss function that is
computed between the input layer and the reconstructed output layer. As a
result, zero-padded inputs directly affect the value of the loss function,
leading to correlations that are difficult to remove when using inputs that
are naturally variable in length, but forced to work in a fixed-length
framework.
Figure 3 shows a diagram of one VRNN cell. The VAE portion of the architecture
is displayed on the top row of layers in the diagram, where a constituent’s
four-momentum components are input as a vector $x(t)$, which is encoded into a
multivariate Gaussian distribution in the latent space $z$, and then decoded
to produce a reconstruction of the same input constituent’s components $y(t)$.
The variable $t$ refers to the time-step, which advances as the sequence is
processed, and can be interpreted as the constituent number currently being
processed by the model.
Inputs to the VRNN consist of sequences of jet four-vector constituent
components $p_{\text{T}}$, $\eta$, and $\phi$, where constituents are assumed
to be massless. Jets are reconstructed with FastJet Cacciari:2011ma ;
Cacciari:2005hq using the anti-$k_{t}$ algorithm with a radius parameter of
1.0 Cacciari:2008gp . Before training, a pre-processing method is applied
which boosts each jet to the same reference mass, energy, and orientation in
$\eta-\phi$ space, such that all input jets differ only by their substructure.
In addition, our pre-processing method includes a choice of sequence ordering,
in which the constituent sequence input into the model is sorted by
$k_{t}$-distance instead of by the typical constituent $p_{\text{T}}$. In more
detail, the $n^{\text{th}}$ constituent in the list, $c_{n}$, is determined by
Eq. 1 to be the constituent with the highest $k_{t}$-distance relative to the
previous constituent, with the first constituent in the list being the highest
$p_{\text{T}}$ constituent.
$c_{n}=\operatorname*{arg\,max}_{c}\left(p_{\mathrm{T},c}\,\Delta R_{c,\,c_{n-1}}\right)$ (1)
This ordering is chosen such that non-QCD-like substructure, characterized by
two or more separate prongs of constituents within the jet, is more easily
characterized by the sequence. When compared to $p_{T}$-sorted constituent
ordering, the $k_{t}$-sorted sequence consistently travels back and forth
between each prong, making their existence readily apparent and easy to model.
As a result, a significant boost in performance is observed.
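As an illustration of this ordering, a minimal (unoptimized) Python sketch of the greedy selection in Eq. 1 might look as follows; the function names are ours, not part of any published code:

```python
import numpy as np

def delta_r(a, b):
    """Angular distance between two constituents given as (pT, eta, phi)."""
    deta = a[1] - b[1]
    dphi = np.mod(a[2] - b[2] + np.pi, 2 * np.pi) - np.pi
    return np.hypot(deta, dphi)

def kt_order(constituents):
    """Greedy ordering of Eq. 1: start from the highest-pT constituent, then
    repeatedly append the remaining constituent with the largest
    pT * DeltaR relative to the previously selected one."""
    remaining = sorted(constituents, key=lambda c: c[0], reverse=True)
    ordered = [remaining.pop(0)]
    while remaining:
        prev = ordered[-1]
        i = max(range(len(remaining)),
                key=lambda k: remaining[k][0] * delta_r(remaining[k], prev))
        ordered.append(remaining.pop(i))
    return ordered
```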
The loss function $\mathcal{L}(t)$ for each constituent, defined in Eq. 2, is
very similar to that of an ordinary VAE. It consists of a mean-squared-error
(MSE) loss between input constituents and generated output constituents as a
reconstruction loss, as well as a weighted KL-Divergence from the learned
latent space prior to the encoded approximate posterior distribution. Since
softer constituents contribute less to the overall classification of jet
substructure, each KL-Divergence term, computed constituent-wise, is weighted
by the constituent’s $p_{T}$-fraction with respect to the jet’s total
$p_{\text{T}}$, averaged over all jets in the dataset to avoid correlations
with constituent multiplicity. The weight coefficient of the KL-Divergence
term is enforced as a hyperparameter, and has been optimized to a value of 0.1
in dedicated studies.
$\mathcal{L}(t)=\text{MSE}+0.1\times\overline{p_{T}}(t)D_{\text{KL}}$ (2)
After a jet is fully processed by the VRNN, a total loss function
$\mathcal{L}$ is computed as the average of the individual constituent losses
over the jet: $\mathcal{L}=\frac{1}{N}\sum_{t=1}^{N}\mathcal{L}(t)$.
The architecture is built with 16-dimensional hidden layers, including the
hidden state, with a two-dimensional latent space. All hyperparameters used
are determined by a hyperparameter optimization scan.
The model is trained on the leading and sub-leading jets of each event, where
events are taken from the LHC Olympics datasets. After training, each jet in
the dataset is assigned an Anomaly Score, defined in Eq. 3, where
$D_{\text{KL}}$ is the KL-Divergence from the learned prior distribution to
the encoded posterior distribution.
$\text{Anomaly Score}=1-e^{-\overline{D_{\text{KL}}}}$ (3)
Since the LHC Olympics challenge entails searching for a signal on the event
level instead of the jet level, an overall Event Score is determined by
choosing the most anomalous score between the leading and sub-leading jets in
an event. To ensure consistency between training scenarios, Event Scores are
subject to a transformation in which the mean of the resulting distribution is
set to a value of 0.5, and Event Scores closer to 1 correspond to more
anomalous events.
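Putting Eqs. 2 and 3 together, the scoring step can be sketched as below; the mean shift used to centre the Event Score distribution at 0.5 is one plausible reading of the transformation described above:

```python
import numpy as np

def anomaly_score(mean_kl):
    """Eq. 3: maps the jet-averaged KL divergence onto [0, 1)."""
    return 1.0 - np.exp(-mean_kl)

def event_scores(kl_lead, kl_sublead):
    """Take the more anomalous of the two jets per event, then shift the
    distribution so its mean sits at 0.5 (an assumed reading of the
    transformation described in the text)."""
    scores = np.maximum(anomaly_score(kl_lead), anomaly_score(kl_sublead))
    return scores + (0.5 - scores.mean())
```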
Figure 3: A Variational Recurrent Neural Network cell. The $x(t)$ and $y(t)$
layers represent respectively the input constituent and reconstructed
constituents’ four-momentum components $p_{\text{T}}$, $\eta$, and $\phi$. The
$\phi_{x}$ and $\phi_{z}$ layers are feature-extracting layers which encode a
representation of the features in the input layer $x(t)$ and latent space $z$
respectively. $h(t-1)$ represents the hidden state carried into the current
time-step, which is updated at each iteration via a transition function
between $h(t-1)$, $\phi_{x}$, and $\phi_{z}$, carried out by a Gated
Recurrent Unit (GRU). At
each time-step, the prior distribution defined by $\mu_{t}$ and $\sigma_{t}$
is determined from the current hidden state.
#### 3.1.2 Results on LHC Olympics
The performance of the VRNN was first assessed with the LHC Olympics R&D
dataset, which includes a known signal of a beyond-the-Standard-Model
$Z^{\prime}$ boson with a mass of 3500 GeV that decays to two hadronically
decaying $X$ and $Y$ particles, each reconstructed as an $R=1.0$ jet. This
study was used as a validation of the method, with a goal of directly
investigating the ability of the Event Score selection to reconstruct the
$Z^{\prime}$ mass. Therefore, no selections beyond those described in Section
3.1.1 are applied.
The VRNN was trained over a contaminated dataset consisting of 895113
background events and 4498 signal events, corresponding to a signal
contamination level of 0.5%. A selection on the Event Score is applied as the
sole discriminator, and the invariant mass $m_{JJ}$ of the two jets is then
scanned to assess the prominence of the $Z^{\prime}$ mass peak. In this
validation analysis, the Event Score is required to exceed a value of 0.65.
This value is chosen to significantly enrich the fraction of anomalous jet
events over the background, while retaining enough statistics in the
background to display its smoothly falling behavior.
Figure 4 shows the dijet invariant mass distributions before and after the
Event Score selection, along with the local significance of the signal
computed in each bin using the BinomialExpZ function from RooStats with a
relative background uncertainty of 15% moneta2011roostats . Applying this
selection dramatically increases the significance of the excess from
$0.18\sigma$ to $2.2\sigma$ without significantly sculpting the shape of the
background.
Figure 4: Dijet invariant mass distributions before (left) and after (right) a
selection on the Event Score, with a two-prong Z’ signal contamination of
0.5%.
Once the method was validated in the R&D dataset, it was applied to Black Box
1, with a re-optimized tighter selection on the Event Score of 0.75, as well
as a requirement on the pseudorapidity of the leading and sub-leading jets to
be less than 0.75, to ensure that central, high momentum transfer events are
considered. Figure 5 shows the dijet invariant mass for both the Black Box 1
and Background datasets. The Event Score selection reveals an enhancement in
$m_{JJ}$ just below 4000 GeV. This is consistent with the Black Box 1 signal,
which is a new $Z^{\prime}$ boson with a mass of 3800 GeV decaying to two new
particles, each decaying hadronically.
Figure 5: Dijet invariant mass distributions before (left) and after (right) a
selection on the Event Score from the Black Box 1 dataset. The signal present
is a $Z^{\prime}$ boson with a mass of 3800 GeV.
The same method applied to Black Box 2, shown in Fig. 6, results in no
significant excesses in the invariant mass distribution. Additionally, the
effect of the Event Score selection on the $m_{JJ}$ shapes is similar between
the Black Box 2 and Background datasets. Black Box 2 does not contain any
beyond-the-Standard-Model events, and therefore these results are consistent
with a QCD-only sample. It is important to note that the model was trained
independently on each dataset, and the resulting Event Scores are from
entirely unique sets of network weights.
Figure 6: Dijet invariant mass distributions before (left) and after (right) a
selection on the Event Score from the Black Box 2 dataset. No signal is
present, and the dataset shown consists entirely of multijet background
events.
Figure 7 shows results for Black Box 3. The signal in Black Box 3 consists of
a new 4200 GeV particle, with varied final states beyond the two-prong
large-$R$ jets described earlier. As the model described here is specifically
sensitive to substructure within a large-$R$ jet, it is insensitive to the
signal present in this Black Box.
Figure 7: Dijet invariant mass distributions before (left) and after (right) a
selection on the Event Score from the Black Box 3 dataset. The signal present
is a new boson with a mass of 4200 GeV.
#### 3.1.3 Lessons Learned
This challenge presented a highly useful avenue for the development of our
model. Results from the R&D and Black Box dataset analyses indicate that the
VRNN is capable of identifying anomalies via sequence modeling, as we have
shown in the context of searching for anomalous substructure within boosted
hadronically decaying objects. We learned that the pre-processing method is
hugely influential on the performance of the model, in particular the choice
of $k_{t}$-ordered sequencing. We feel that this is a generalizable conclusion
from our study which can be applied to the understanding and use of jet
substructure in future analyses.
Given these lessons, there are a variety of future opportunities with this
application of the VRNN architecture to jet-level anomaly detection. Since the
VRNN takes constituent information as input and learns jet substructure
without explicit reliance on high level variables, it is expected to have less
correlation with jet mass than standard substructure variables such as
$n$-subjettiness. Further characterization of this point could reveal a key
advantage in using such an approach in an analysis context. While we limited
our scope in this study to be entirely unsupervised with no signal or
background model information, the RNN and VAE elements of the VRNN give
potential for accommodating more supervised training scenarios. Furthermore, a
number of advancements to the architecture, such as a dedicated adversarial
mass de-correlation network, or an additional input layer representing high-
level features, are worthwhile avenues of exploration to enhance performance
while minimizing unwanted correlations.
### 3.2 Anomaly Detection with Density Estimation555Authors: Benjamin Nachman
and David Shih.
This section introduces an approach called ANOmaly detection with Density
Estimation (ANODE) that is complementary to existing methods and aims to be
largely background and signal model agnostic. Density estimation, especially
in high dimensions, has traditionally been a difficult problem in unsupervised
machine learning. The objective of density estimation is to learn the
underlying probability density from which a set of independent and identically
distributed examples were drawn. In the past few years, there have been a
number of breakthroughs in density estimation using neural networks and the
performance of high dimensional density estimation has greatly improved. The
idea of ANODE is to make use of these recent breakthroughs in order to
directly estimate the probability density of the data. Assuming the signal is
localized somewhere, one can attempt to use sideband methods and interpolation
to estimate the probability density of the background. Then, one can use this
to construct a likelihood ratio generally sensitive to new physics.
#### 3.2.1 Method
This section will describe the ANODE proposal for an unsupervised method to
search for resonant new physics using density estimation.
Let $m$ be a feature in which a signal (if it exists) is known to be localized
around some $m_{0}$. The value of $m_{0}$ will be scanned for broad
sensitivity and the following procedure will be repeated for each window in
$m$. It is often the case that the width of the signal in $m$ is fixed by
detector properties and is signal model independent. A region $m_{0}\pm\delta$
is called the signal region (SR) and $m\not\in[m_{0}-\delta,m_{0}+\delta]$ is
defined as the sideband region (SB). A traditional, unsupervised, model-
agnostic search is to perform a bump hunt in $m$, using the SB to interpolate
into the SR in order to estimate the background.
Let $x\in\mathbb{R}^{d}$ be some additional discriminating features in which
the signal density is different from the background density. If we could find
the region(s) where the signal differs from the background and then cut on $x$
to select these regions, we could improve the sensitivity of the original bump
hunt in $m$. The goal of ANODE is to accomplish this in an unsupervised and
model-agnostic way, via density estimation in the feature space $x$.
More specifically, ANODE attempts to learn two densities:
$p_{\text{data}}(x|m)$ and $p_{\text{background}}(x|m)$ for $m\in{\rm SR}$.
Then, classification is performed with the likelihood ratio
$\displaystyle R(x|m)=\frac{p_{\text{data}}(x|m)}{p_{\text{background}}(x|m)}.$ (4)
In the ideal case that
$p_{\text{data}}(x|m)=\alpha\,p_{\text{background}}(x|m)+(1-\alpha)\,p_{\text{signal}}(x|m)$
for $0\leq\alpha\leq 1$ and $m\in\text{SR}$, Eq. 4 is the optimal test
statistic for identifying the presence of signal. In the absence of signal,
$R(x|m)=1$; so as long as $p_{\text{signal}}(x|m)\neq
p_{\text{background}}(x|m)$, $R(x|m)$ evaluated on the data has a non-zero
density away from 1, in a region with no predicted background.
In practice, both $p_{\text{data}}(x|m)$ and $p_{\text{background}}(x|m)$ are
approximations and so $R(x|m)$ is not unity in the absence of signal. The
densities $p(x|m)$ are estimated using conditional neural density estimation.
The function $p_{\text{data}}(x|m)$ is estimated in the signal region and the
function $p_{\text{background}}(x|m)$ is estimated using the sideband region
and then interpolated into the signal region. The interpolation is done
automatically by the neural conditional density estimator. Effective density
estimation will result in $R(x|m)$ in the SR that is localized near unity and
then one can enhance the presence of signal by applying a threshold
$R(x|m)>R_{\text{cut}}$, for $R_{\text{cut}}>1$. The interpolated
$p_{\text{background}}(x|m)$ can then also be used to estimate the background.
The ANODE procedure as described above is completely general with regards to
the method of density estimation. In this work we will demonstrate a proof-of-
concept using normalizing flow models for density estimation. Since
normalizing flows were proposed in Ref. pmlr-v37-rezende15 , they have
generated much activity and excitement in the machine learning community,
achieving state-of-the-art performance on a variety of benchmark density
estimation tasks.
#### 3.2.2 Results on LHC Olympics
The conditional MAF (Masked Autoregressive Flow) is optimized666Based on code
from https://github.com/ikostrikov/pytorch-flows. using the log-likelihood
loss function, $\log(p(x|m))$. All of the neural networks are written in
PyTorch NEURIPS2019_9015 . For the hyperparameters, there are 15 MADE blocks
(one layer each) with 128 hidden units per block. Networks are optimized with
Adam adam using a learning rate of $10^{-4}$ and weight decay of $10^{-6}$. The SR
and SB density estimators are each trained for 50 epochs. No systematic
attempt was made to optimize these hyperparameters and it is likely that
better performance could be obtained with further optimization. For the SR
density estimator, the last epoch is chosen for simplicity and it was verified
that the results are robust against this choice. The SB density estimator
significantly varies from epoch to epoch. Averaging the density estimates
point-wise over 10 consecutive epochs results in a stable result. Averaging
over more epochs does not further improve the stability. All results with
ANODE present the SB density estimator with this averaging scheme for the last
10 epochs.
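For illustration, the averaging scheme and the resulting likelihood ratio of Eq. 4 can be sketched as follows, where `sb_log_density` is a hypothetical helper returning the SB estimator's log-density from the checkpoint saved at a given epoch:

```python
import numpy as np

def averaged_background_density(x, m, last_epoch=50, n_avg=10):
    """Point-wise average of the SB density estimate over the last 10 epochs;
    sb_log_density(x, m, epoch) is a hypothetical helper returning
    log p_background(x|m) from the checkpoint saved at `epoch`."""
    epochs = range(last_epoch - n_avg + 1, last_epoch + 1)
    return np.mean([np.exp(sb_log_density(x, m, e)) for e in epochs], axis=0)

def likelihood_ratio(p_data, p_background):
    """Eq. 4; events with R(x|m) > R_cut (for some R_cut > 1) are kept."""
    return p_data / p_background
```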
Figure 8: Scatter plot of $R(x|m)$ versus $\log p_{\text{background}}(x|m)$
across the test set in the SR. Background events are shown (as a two-
dimensional histogram) in grayscale and individual signal events are shown in
red. Figure reproduced from Ref. Nachman:2020lpy .
Figure 8 shows a scatter plot of $R(x|m)$ versus $\log
p_{\text{background}}(x|m)$ for the test set in the SR. As desired, the
background is mostly concentrated around $R(x|m)=1$, while there is a long
tail for signal events at higher values of $R(x|m)$, in the region $-2<\log
p_{\text{background}}(x|m)<2$. This is exactly what is expected for this
signal: it is an over-density ($R>1$) in a region of phase space that is
relatively rare for the background ($p_{\text{background}}(x|m)\ll 1$).
The background density in Fig. 8 also shows that the $R(x|m)$ is narrower
around $1$ when $p_{\text{background}}(x|m)$ is large and more spread out when
$p_{\text{background}}(x|m)\ll 1$. This is evidence that the density
estimation is more accurate when the densities are high and worse when the
densities are low. This is also to be expected: if there are many data points
close to one another, it should be easier to estimate their density than if
the data points are very sparse.
Figure 9: Receiver Operating Characteristic (ROC) curve (left) and
Significance Improvement Characteristic (SIC) curve (right). Figure reproduced
from Ref. Nachman:2020lpy .
The performance of $R$ as an anomaly detector is further quantified by the
Receiver Operating Characteristic (ROC) and Significance Improvement
Characteristic (SIC) curves in Fig. 9. These metrics are obtained by scanning
$R$ and computing the signal efficiency (true positive rate) and background
efficiency (false positive rate) after a threshold requirement on $R$. The
Area Under the Curve (AUC) for ANODE is 0.82. For comparison, the CWoLa
hunting approach is also shown in the same plots. The CWoLa classifier is
trained using sideband regions that are 200 GeV wide on either side of the SR.
The sidebands are weighted to have the same number of events as each other and
in total, the same as the SR. A single NN with four hidden layers with 64
nodes each is trained using Keras keras and TensorFlow tensorflow . Dropout
JMLR:v15:srivastava14a of 10% is used for each intermediate layer.
Intermediate layers use rectified linear unit activation functions and the
last layer uses a sigmoid. The classifier is optimized using binary cross
entropy and is trained for 300 epochs. As with ANODE, 10 epochs are averaged
for the reported results777A different regularization procedure was used in
Ref. Collins:2018epr ; Collins:2019jip based on the validation loss and
$k$-folding. The averaging here is expected to serve a similar purpose..
The performance of ANODE is comparable to CWoLa hunting in Fig. 9, which does
slightly better at higher signal efficiencies and much better at lower signal
efficiencies. This may be a reflection of the fact that CWoLa makes use of
supervised learning and directly approaches the likelihood ratio, while ANODE
is unsupervised and attempts to learn both the numerator and denominator of
the likelihood ratio. With this dataset, ANODE is able to enhance the signal
significance by about a factor of 7 and would therefore be able to achieve a
local significance above $5\sigma$, given that the starting value of
$S/\sqrt{B}$ is 1.6 (a factor of 7 brings $1.6\sigma$ to roughly $11\sigma$).
#### 3.2.3 Lessons Learned
While ANODE appears to be robust to correlations in the data (see Ref.
Nachman:2020lpy ), it is challenging to obtain precise estimates of the
background density down to very small values of $S/B$. Another challenge is
extending the density estimation to higher dimensions. While the
demonstrations here were based on the innovative MAF density estimation
technique, the ANODE method can be used in conjunction with any density
estimation algorithm. Indeed, there are numerous other neural density
estimation methods from the past few years that claim state-of-the-art
performance, including Neural Autoregressive Flows
DBLP:journals/corr/abs-1804-00779 and Neural Spline Flows durkan2019neural ;
exploring these would be an obvious way to attempt to improve the results in
this section.
### 3.3 BuHuLaSpa: Bump Hunting in Latent Space888Authors: Blaz Bortolato,
Barry M. Dillon, Andrej Matevc, Jernej F. Kamenik, Aleks Smolkovic. The code
used in this project can be found at
https://github.com/alekssmolkovic/BuHuLaSpa.
#### 3.3.1 Method
The BuHuLaSpa method assumes that the LHCO event data was generated through a
stochastic process described by an underlying probabilistic generative model
with continuous latent variables. We use neural networks as approximators to
the likelihood and posterior distributions of the model, and use the
variational autoencoder (VAE) architecture as a means of optimising these
neural networks. For each event in the dataset we cluster the hadrons, select
the resulting two leading $p_{\text{T}}$ jets, and order these by mass,
$m_{j_{1}}>m_{j_{2}}$. The data representation we use for the LHCO consists of
the following observables for each jet: jet mass $m_{j}$, the $n$-subjettiness
observables $\tau_{2}/\tau_{1}$ and $\tau_{3}/\tau_{2}$, and an observable
similar to soft drop defined by clustering the jets with the C/A algorithm,
then de-clustering them branch by branch, and summing the ratios of parent to
daughter subjet masses along the way, stopping at some pre-defined mass scale
which we have chosen to be $20$ GeV. We denote these input measurements for
the $i^{\text{th}}$ event in the sample by a vector $\vec{x}_{i}$.
The probabilistic model underlying the VAE architecture can be viewed as a
generative process through which the event data is generated from some
underlying distributions. The generative process for one event $\vec{x}_{i}$
starts with the sampling of a latent vector $\vec{z}_{i}$ from a prior
distribution $p(\vec{z})$. Given this latent vector, the data for a single
event is then sampled from the likelihood function
$p(\vec{x}_{i}|\vec{z}_{i})$. The goal is then to approximate the posterior
distribution, $p(\vec{z}_{i}|\vec{x}_{i})$, i.e. perform posterior inference,
which maps a single event back to its representation in latent space.
The neural networks used as approximators to the posterior and likelihood
functions are denoted by, $q_{\phi}(\vec{z}_{i}|\vec{x}_{i})$ and
$p_{\theta}(\vec{x}_{i}|\vec{z}_{i})$, where $\phi$ and $\theta$ represent the
weights and biases (i.e. the free parameters) of the encoder and decoder
networks, respectively. The sampling procedure is re-formulated using the
re-parameterisation technique, which allows the neural networks to be optimised
through traditional back-propagation methods. Specifically the encoder network
consists of dim$(\vec{x})$ neurons in the input layer, followed by some number
of hidden layers, and $2\times$dim$(\vec{z})$ neurons in the output layer.
Each element in $\vec{z}_{i}$ corresponds to two neurons in the output layer
of the encoder network, one representing the mean and one representing the
variance. Elements of the latent vector $\vec{z}_{i}$ are then sampled from
Gaussian distributions parameterised by these means and variances. The
resulting latent vector $\vec{z_{i}}$ is then fed to the decoder network which
consists of dim$(\vec{z})$ neurons in the input layer, some number of hidden
layers, and dim$(\vec{x})$ neurons in the output layer.
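A minimal PyTorch sketch of such an encoder/decoder pair with the re-parameterisation step is given below; the layer sizes follow the training description later in this section (eight input observables, two hidden layers of 100 SELU units, one latent dimension), but the code is illustrative rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class EventVAE(nn.Module):
    """Illustrative encoder/decoder pair with the re-parameterisation step."""
    def __init__(self, dim_x=8, dim_z=1, hidden=100):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(dim_x, hidden), nn.SELU(),
            nn.Linear(hidden, hidden), nn.SELU(),
            nn.Linear(hidden, 2 * dim_z))   # mean and log-variance per latent dim
        self.decoder = nn.Sequential(
            nn.Linear(dim_z, hidden), nn.SELU(),
            nn.Linear(hidden, hidden), nn.SELU(),
            nn.Linear(hidden, dim_x))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # re-parameterisation: z = mu + sigma * eps keeps sampling differentiable
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar
```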
The VAE method is important because it allows us to frame the posterior
inference task as an optimisation problem, and the loss function that is
optimised is the Stochastic Gradient Variational Bayes (SGVB) estimator:
$\mathcal{L}=-D_{\text{KL}}(q_{\phi}(\vec{z}_{i}|\vec{x}_{i})|p(\vec{z}_{i}))+\beta_{\text{reco}}\log
p_{\theta}(\vec{x}_{i}|\vec{z}_{i})\,,$ (5)
where the first term is the KL divergence between the posterior approximation
for event $i$ and the prior, and the second term is the reconstruction loss
term. We have added a re-scaling term $\beta_{\text{reco}}$ which alters how
much influence the reconstruction loss has over the KL divergence term in the
gradient updates. We fix $\beta_{\text{reco}}=5000$ for this work, but our
studies indicate that the results are insensitive to order-of-magnitude
changes in this number.
##### Invariant mass as latent dimension
Once we have a fully trained VAE, the goal is then to use the latent
representation of the data obtained from the posterior approximation to
perform classification on the LHCO events. To search for anomalies we
typically look for excesses in the invariant mass distribution of the events.
Thus it is important to understand any observed correlations between the
latent vectors $\vec{z}_{i}$ and the invariant mass. The latent space
dimensions are each some non-linear function of the input observables. In the
presence of correlations between the input observables and the invariant mass
of the events, the latent dimensions are expected to encode some information
on the invariant mass of the events. Crucially though, if signal events are
localised in the invariant mass distribution and the VAE learns how to
accurately encode and reconstruct the signal events, then part of the
correlation the VAE networks learn must indeed correspond to the presence of
the signal events in the data.
We then propose to make the invariant mass dependence of the VAE network
explicit by imposing that one of the latent dimensions corresponds exactly to
the invariant mass of the events. We do this by modifying the generative
process for a single event $\vec{x}_{i}$ with mass $m_{i}$ such that
$\vec{z}_{i}$ is sampled from $p(\vec{z}_{i})$, while $\tilde{m}_{i}$ is
sampled from a Gaussian prior, centered at $m_{i}$ and with a width
$\sigma(m_{i})$ reflecting a realistic uncertainty of the invariant mass
reconstruction. In the LHCO case we take $\sigma(m_{i})=0.1m_{i}$ for
definiteness. Both the latent vector $\vec{z}_{i}$ and the sampled mass
variable $\tilde{m}_{i}$ are fed to the decoder which now has dim$(\vec{z})+1$
neurons in the input layer. The encoder remains exactly the same as in the
original VAE set-up and in particular can be made completely agnostic to
invariant mass by decorrelating the input variables $\vec{x}_{i}$ from $m_{i}$
using standard techniques. Now however the decoder is able to use the
invariant mass information for each event to help in the reconstruction of the
event data $\vec{x}_{i}$. At the same time the encoder network is no longer
incentivized to learn the correlations between $\vec{x}_{i}$ and $m_{i}$ even
if these are present in the data. This has a number of potential benefits:
1. The optimisation occurs locally in the invariant mass variable. Events with
similar latent representations, i.e. similar $\vec{z}$, but very different
invariant masses will now be treated differently by the decoder; therefore the
network will no longer be forced to use the latent vector $\vec{z}$ to
distinguish between events with different invariant masses.
2. We can visualise the correlations between the latent space and the
invariant mass explicitly without relying on data. By scanning over
$\vec{z}_{i}$ and $\tilde{m}_{i}$ and feeding the values into the decoder, we
can visualise the latent space structure in terms of different observables at
different invariant masses. This novel way of inferring what the network has
learned could lead to new approaches to bump hunting with machine learning at
colliders, or even more broadly to machine learning applications in
high-energy physics.
##### Optimization and classification
Using the R&D dataset we investigated how best to train the VAE, and then
applied what we learned here to the analysis on the black box datasets. After
an extensive scan over the hyper-parameters of the model, and monitoring the
behaviour of the network throughout the training, we have come to the
following conclusions regarding optimization and classification:
* The Adagrad and Adadelta optimizers consistently outperform momentum-based
optimizers like Adam and Nadam, which we expect is due to the smoothing of
gradients in the latter, which in effect reduces the sensitivity of the
gradient updates to outliers in the data.
* The norm of the latent vector $|\vec{z}_{i}|$ performs best as a classifier
for the signal events.
* Classification performance does not converge throughout training; instead it
peaks and then dies off at larger epochs. The epoch at which the peak
performance occurs is correlated with a minimum in the reconstruction loss of
the signal-only events, indicating that the network begins to ignore outliers
in the data in order to reduce the overall reconstruction loss.
* The reason for this appears to be that at some point during the training the
network learns to reconstruct just one or two of the eight observables well,
while mostly ignoring the others. We have found that this can be avoided if we
monitor the variance of the per-observable reconstruction losses during
training and stop the training at the minimum of this variance, which is very
strongly correlated with the peak in the classification performance.
For the training we used just one latent dimension, SeLU activation functions,
two layers of 100 nodes each, the Adadelta optimizer with a learning rate of
$0.001$, Mean-Squared-Error reconstruction loss, and batch sizes of $10$k. The
correlations used in the early-stopping procedure are more robust and precise
when using larger batch sizes.
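A NumPy sketch of the variance-based early-stopping monitor described above (our illustrative reading, not the exact implementation):

```python
import numpy as np

def loss_variance_monitor(x, x_reco):
    """Variance, across the eight observables, of their per-observable MSE
    reconstruction losses, averaged over the batch."""
    per_obs_mse = np.mean((x - x_reco) ** 2, axis=0)  # one value per observable
    return np.var(per_obs_mse)

def best_epoch(monitor_history):
    """Stop at (or roll back to) the epoch where the monitor is minimal."""
    return int(np.argmin(monitor_history))
```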
#### 3.3.2 Results on LHC Olympics
For the black box datasets and the R&D dataset we trained the VAE networks on
the whole event sample, without any cuts or binning in invariant mass, and
followed the early stopping procedure outlined above. In Fig. 10 we show an
example of a ROC curve obtained by training on the R&D data with an S/B of
$0.1\%$. In Fig. 11 we show a bump in the invariant mass spectrum in the Black
Box 1 data after applying a classifier trained with this method. The bump is
at a mass of $\sim 3.8$ TeV, and if we study the jet mass (Fig. 12) and
$\tau_{2}/\tau_{1}$ distributions of the events that pass the cuts we clearly
see that they correspond to events with jet masses $\sim 750$ GeV and $\sim
400$ GeV, with $\tau_{2}/\tau_{1}$ values from the lower end of the spectrum.
Our analyses of the Black Box 2 and Black Box 3 data did not result in any
clear signals in the data.
Figure 10: ROC curve obtained with the VAE classifier on the R&D data. Figure
11: The invariant mass distribution for the Black Box 1 data after applying
the VAE classifier.
Figure 12: The jet mass distributions for the Black Box 1 data after applying
the VAE classifier and restricting to the invariant mass range $[3.6,4.0]$
TeV.
#### 3.3.3 Lessons Learned
The first interesting lesson learned through this analysis was that the choice
of the optimizer can play an important role in different machine-learning
tasks. While in standard classification tasks the momentum-based optimizers
such as Adam perform very well, we found when using a VAE for anomaly
detection this was not the case. Instead, when the VAE is tasked with learning
an effective latent representation of the dataset, including a small subset of
anomalous signal events, it performs much better when using either the Adagrad
or Adadelta optimizers. The reason for this appears to be that the momentum
updates in the Adam optimizer tend to smooth out the effects of anomalous
events in the gradient updates, in turn ignoring the signal events in the
data. This may also be the case for other anomaly detection techniques, but
has not been tested here.
The second lesson we learned was that after some number of epochs the VAE has
a tendency to ‘over-train’ on just one or two of the eight inputs we used.
This results in an overall reduction in the loss function, but interestingly
it also results in an increase in the loss of signal-only events. This
increase in the reconstruction loss of signal-only events is inevitably
correlated with a reduction in the performance of the classifier. We remedied
this by introducing an early-stopping procedure in which we stop the training
when the variance of the per-observable reconstruction losses reaches a
minimum. This allowed us to
achieve the optimal performance in an entirely unsupervised manner.
### 3.4 GAN-AE and BumpHunter999Authors: Louis Vaslin and Julien Donini. All
the scripts used to train and apply the GAN-AE algorithm are given at this
link: https://github.com/lovaslin/GAN-AE_LHCOlympics. The implementation of
the BumpHunter algorithm used in this work can be found at this link:
https://github.com/lovaslin/pyBumpHunter. In the near future, it is planned
that this implementation of BumpHunter becomes an official package to be
included in the scikit-HEP toolkit.
#### 3.4.1 Method
The method presented in this section combines two independent anomaly
detection algorithms. The objective is to have a full analysis workflow that
can give a global $p$-value and evaluate the number of signal events in any
black-box dataset.
##### GAN-AE
The GAN-AE method is an attempt at associating an Auto-Encoder architecture
with a discriminant neural network in a GAN-like fashion. The reason for this
particular setting is to use information that does not come only from the
“reconstruction error” usually used to train AEs. This could be seen as an
alternative way to constrain the training of an AE. As discriminant network, a
simple feed-forward Multi-Layer Perceptron (MLP) is used.
This method being inspired by the GAN algorithm, the two participants (AE and
MLP) are trained alternately with opposite objectives:
* The MLP is trained for a few epochs using the binary cross-entropy (BC) loss
function on a labeled mixture of original and reconstructed events, the
objective being to expose the weaknesses of the AE.
* The AE is trained for a few epochs using a loss function combining the
reconstruction error (here, the Mean Euclidean Distance between the input and
output, or MED for short) and the BC loss of the MLP. In order to decorrelate
as much as possible the reconstruction error and the invariant mass, the
distance correlation (DisCo) term is used DiscoFever . The loss is then given
by:
$\text{loss}_{\text{AE}}=\text{BC}+\varepsilon\times\text{MED}+\alpha\times\text{DisCo}$
where $\varepsilon$ and $\alpha$ are two hyperparameters used to balance the
weight of each term. In this case, the BC term is evaluated by giving
reconstructed events to the MLP, but this time with the “wrong label”, the
objective being to mislead the MLP.
* The AE is then evaluated on a validation set using a Figure of Merit (FoM)
that also combines the reconstruction error and some information from the MLP.
The FoM used is given by:
$\text{FoM}=\text{MED}+(1-\text{Mean}~{}\text{MLP}~{}\text{output})$
This second term is preferred over the binary cross-entropy because it seems
to be more stable, which makes it more suitable for setting an early-stopping
condition. As with the reconstruction error,
$1-(\text{Mean}~{}\text{MLP}~{}\text{output})$ must be minimized: the closer
this term is to zero, the better the AE is at misleading the MLP.
These three steps are repeated in a loop until the FoM fails to improve for
five cycles. Once the AE has been trained, the MLP can be discarded since it
is no longer needed. The AE can then be used by taking the reconstruction
error (Euclidean distance) as the discriminative feature.
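Schematically, the alternating loop looks as follows; `fit_mlp`, `fit_ae`, and `figure_of_merit` are hypothetical helpers standing in for the three steps above:

```python
def gan_ae_loop(ae, mlp, train_data, val_data, patience=5):
    """Alternating GAN-AE training sketch; fit_mlp, fit_ae and
    figure_of_merit are hypothetical helpers for the steps described above."""
    best_fom, stall = float("inf"), 0
    while stall < patience:
        fit_mlp(mlp, ae, train_data)              # expose the AE's weaknesses
        fit_ae(ae, mlp, train_data)               # BC + eps*MED + alpha*DisCo
        fom = figure_of_merit(ae, mlp, val_data)  # MED + (1 - mean MLP output)
        if fom < best_fom:
            best_fom, stall = fom, 0
        else:
            stall += 1
    return ae  # the MLP is discarded once training ends
```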
The GAN-AE hyperparameters used for the LHC Olympics are shown in Tab. 3.
| AE | MLP
---|---|---
Neurons per hidden layer | 30/20/10/20/30 | 150/100/50
Number of epochs per cycle | 4 | 10
Activation function | ReLU (sigmoid for output) | LeakyReLU (sigmoid for output)
Dropout | 0.2 (hidden layers only)
Early-stopping condition | 5 cycles without improvement
Table 3: Hyperparameters used for the GAN-AE algorithm.
##### BumpHunter
The BumpHunter algorithm is a hypothesis test that compares a data
distribution with a reference and evaluates the $p$-value and significance of
any deviation. To do so, BumpHunter scans the two distributions with a sliding
window of variable width. For each position and width of the scan window, the
local $p$-value is calculated. The window corresponding to the most
significant deviation is then defined as the one with the smallest local
$p$-value.
In order to deal with the look-elsewhere effect and evaluate a global
$p$-value, BumpHunter generates pseudo-experiments by sampling from the
reference histogram. The scan is then repeated for each pseudo-data histogram
by comparing with the original reference. This gives a local $p$-value
distribution that can be compared with the local $p$-value obtained for the
real data. Thus, a global $p$-value and significance are obtained. The
BumpHunter hyperparameters used for the LHC Olympics are shown in Tab. 4.
min/max window width | 2/7 bins
---|---
width step | 1 bin
scan step | 1 bin
number of bins | 40
number of pseudo-experiments | 10000
Table 4: Hyperparameters used for the BumpHunter algorithm.
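For illustration, the core of the local scan could be sketched as below; this toy version only handles excesses and omits the pseudo-experiment machinery that the actual pyBumpHunter package uses for the global $p$-value:

```python
import numpy as np
from scipy.stats import poisson

def bumphunter_local_scan(data, ref, min_width=2, max_width=7):
    """Slide windows of 2 to 7 bins over the binned spectra and return the
    smallest local p-value (Poisson probability of an excess at least as
    large as observed) together with its window."""
    best_p, best_window = 1.0, None
    for width in range(min_width, max_width + 1):
        for start in range(len(data) - width + 1):
            d = data[start:start + width].sum()
            b = ref[start:start + width].sum()
            p = poisson.sf(d - 1, b)  # P(n >= d | b); excesses only
            if p < best_p:
                best_p, best_window = p, (start, width)
    return best_p, best_window
```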
##### Full analysis workflow
The objective of this work is to use the Auto-Encoder trained with the GAN-AE
algorithm to reduce the background, and then use the BumpHunter algorithm to
evaluate the (global) $p$-value of a potential signal. However, this second
algorithm requires a “reference background” to be expected in the data.
Unfortunately, such a reference is not always available, as is the case for
the LHC Olympics black-box datasets. Thus, in order to use BumpHunter, one
must first extract a background model from the data.
Another point that has to be taken into account is the fact that, despite the
use of the DisCo term, the dijet mass spectrum is not totally independent of
the reconstruction error. Thus, simply rescaling the precut dataset to fit the
postcut mass spectrum will not work.
One way to do this is to use a small subset of the data to compute a shaping
function. The objective of this function is to capture how the mass spectrum
behaves when a cut on the reconstruction error is applied. This function is
computed bin by bin on the dijet mass histogram by taking the ratio of the bin
yields postcut and precut.
Of course, the presence of signal in the subset used for this calculation
might impact this shaping function. In order to mitigate this effect, the
shaping function can be fitted using the tools available in the scikit-learn
toolkit. This will minimize the effect of the signal on the shaping function.
Once the shaping function is defined, it can be used to reshape the precut
mass spectrum in order to reproduce the behaviour of the background postcut.
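A possible sketch of this construction, with a NumPy polynomial fit standing in for the scikit-learn fitting tools mentioned above:

```python
import numpy as np

def shaping_function(mjj, distance, cut, bins=40, deg=5):
    """Bin-by-bin ratio of the postcut to precut dijet-mass histograms,
    smoothed by a polynomial fit (degree chosen for illustration)."""
    precut, edges = np.histogram(mjj, bins=bins)
    postcut, _ = np.histogram(mjj[distance > cut], bins=edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    ratio = np.divide(postcut, precut,
                      out=np.zeros(len(precut)), where=precut > 0)
    return np.polyval(np.polyfit(centers, ratio, deg), centers)

# reference background = precut histogram * shaping function, bin by bin
```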
With this final step, the full analysis workflow is the following:
* Data preprocessing (anti-$k_{t}$ clustering, precut on dijet mass)
* Training of the GAN-AE algorithm on the R&D background
* Application of the trained AE to the black-box dataset
* Use of 100k events from the black box to compute a shaping function
* Use of the shaping function to build a reference for the BumpHunter algorithm
#### 3.4.2 Results on LHC Olympics
The results shown were obtained with an AE trained with the GAN-AE algorithm
on 100k events from the R&D background. Note that before the training and
application, cuts were applied on the dijet mass at 2700 GeV and 7000 GeV.
##### R&D dataset
Here we discuss the results obtained on the R&D dataset. The trained AE has
been tested on 100k background events (not used during the training), as well
as on the two signals provided. Fig. 13 shows the Euclidean distance
distributions (left) and the corresponding ROC curves (right).
This result illustrates the potential of the GAN-AE algorithm to obtain a good
discrimination between the background and signals, even though only the
background was used during the training. However, although the obtained AUC is
good, it also appears that the Euclidean distance is still strongly correlated
with the dijet mass. This might have a negative impact on the performance of
the bump hunting algorithm.
Figure 13: Euclidean distance distributions and ROC curves obtained for the
R&D dataset.
##### Black Box datasets
Here we discuss the results obtained for the black box datasets provided for
the LHC Olympics challenge.
Figure 14 shows the Euclidean distance distribution obtained for each black
box. Compared to what was obtained with the R&D background, the distributions
appear broader and globally shifted to the right. This is most likely due to
the difference between the R&D background and the background generated in the
black boxes. This shows that the method is quite sensitive to the modeling of
the background.
Figure 14: Euclidean distance distributions and ROC curves obtained for the
black box datasets.
Figure 15 shows the shaping function obtained using 100k events from each
black box dataset. A preliminary fit was made to each of the distributions.
Since the fit is suboptimal, this might lead to the appearance of fake bumps
or fake deficits during the BumpHunter scan.
Figure 15: Shaping functions obtained for each black box. From left to right,
Black Box 1, 2 and 3.
Finally, Fig. 16 shows the results obtained with BumpHunter for all black
boxes. As foreseen from the poor fit of the shaping functions, the constructed
reference backgrounds do not fit the data well after the cut on the Euclidean
distance. Under these conditions, and at the current stage of this work, we
cannot evaluate a meaningful $p$-value for a potential signal. While the
results were good on the R&D dataset, it seems that the method is more
challenging to apply without a good modeling of the background shape.
Figure 16: Result of the BumpHunter scan obtained for each black box. From
left to right, Black Box 1, 2 and 3.
#### 3.4.3 Lessons Learned
The LHC Olympics challenge has been a good opportunity to test the potential
of the GAN-AE algorithm that we have been developing. The good results on the
R&D dataset show the potential of this method, but also its limits. The
results obtained revealed the sensitivity of GAN-AE to the modeling of the
background and to the correlation of the distance distribution with the dijet
mass, despite the use of the DisCo term. In addition, the fact that no
background simulation matching the black-box data was available made the
BumpHunter algorithm difficult to apply.
### 3.5 Gaussianizing Iterative Slicing (GIS): Unsupervised In-distribution
Anomaly Detection through Conditional Density Estimation101010Authors: George
Stein, Uroš Seljak, Biwei Dai. The Gaussianizing Iterative Slicing (GIS) used
in this work was an early form of what is now called Sliced Iterative
Generation (SIG). More details on SIG can be found at sig , and code will be
made publicly available when ready. The results discussed in this section were
also presented in Ref. stein2020unsupervised .
We approached the LHC signal detection challenge as an example of in-
distribution anomaly detection. Rather than searching for samples near the
tails of various data distributions as is typically done in out-of-
distribution anomaly detection applications, the strategy we pursue is to look
for excess density in a narrow region of a parameter of interest, such as the
invariant mass. We term this in-distribution anomaly detection. We perform
conditional density estimation with Gaussianizing Iterative Slicing (GIS) sig
, and construct a local over-density based in-distribution anomaly score to
reveal the signal in a completely blind manner. The results presented here are
unchanged from our blind submission to the LHC Olympics in January 2020.
In parallel with, and independently of, our development and application of our
conditional density estimation method, a similar one was applied in
Nachman:2020lpy , with great results on the R&D dataset.
The R&D dataset lhc_randd was used for constructing and testing the method,
while the first of the ‘black boxes’ lhc_bb1 was the basis of our submission
to the winter Olympics challenge. As the up to 700 particles given for each
event are likely the result of hadronic decays we expect them to be spatially
clustered in a number of jets. By focusing on the jet summary statistics
rather than the particle data from an event we are able to vastly reduce the
dimensionality of the data space. We note that this form of dimensionality
reduction requires a small amount of prior knowledge and understanding of the
data, and the assumption that the detected jets contain the anomaly, and other
data-agnostic dimensionality reduction methods could instead be used. We used
the python interface of FastJet Cacciari:2011ma ; Cacciari:2005hq (pyjet
pyjet ) to perform jet clustering, setting $R=1.0$ as the jet radius and
keeping all jets with $|\eta|<2.5$. Each jet $J$ is described by a mass
$m_{J}$, a linear momentum $p=(p_{\text{T}},\eta,\phi)$, and n-subjettiness
ratios $\tau^{J}_{n,n-1}$ Thaler:2010tr ; Thaler:2011gf , which describe the
number of sub-jets within each jet. A pair of jets has an invariant mass
$M_{JJ}$. Additional parameters beyond these few may be necessary in certain
scenarios, or at minimum useful, but our lack of familiarity with the field
limited our search to only these standard jet statistics. To construct
images of the jets we binned each particle’s transverse momentum $p_{\text{T}}$
in $(\eta,\phi)$ and oriented using the moment of inertia. For the final black
box 1 run we limited events to 2250 GeV $<$ $M_{JJ}$ $<$ 4750 GeV, resulting
in 744,217 events remaining after all data cuts.
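For reference, the clustering step might be written as follows with pyjet; the (pT, $\eta$, $\phi$) event layout assumed here matches the dataset format described earlier, and the function is a sketch rather than the exact analysis code:

```python
import numpy as np
from pyjet import cluster

def cluster_event(event):
    """Anti-kt clustering (R = 1.0) of one event's real (non-padded)
    constituents; `event` is an (n, 3) array of (pT, eta, phi), massless."""
    particles = np.zeros(len(event), dtype=[('pT', 'f8'), ('eta', 'f8'),
                                            ('phi', 'f8'), ('mass', 'f8')])
    particles['pT'], particles['eta'], particles['phi'] = event.T
    sequence = cluster(particles, R=1.0, p=-1)  # p = -1 selects anti-kt
    return [j for j in sequence.inclusive_jets() if abs(j.eta) < 2.5]
```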
#### 3.5.1 Method
Our in-distribution anomaly detection method relies on a framework for
conditional density estimation. Current state-of-the-art density estimation
methods are those of flow-based models, popularized by realnvp and
comprehensively reviewed in normalizing_flows . A conditional normalizing flow
(NF) aims to model the conditional distribution $p(x|x_{c})$ of input data $x$
with conditional parameter $x_{c}$ by introducing a sequence of $N$
differentiable and invertible transformations $f=f_{1}\circ
f_{2}\circ\dots\circ f_{N}$ to a random variable $z$ with a simple probability
density function $\pi(z)$, generally a unit Gaussian. Through the change of
variables formula the probability density of the data can be evaluated as the
product of the density of the transformed sample and the associated change in
volume introduced by the sequence of transformations:
$p(x|x_{c})=\pi(f_{x_{c}}(x))\left|\mathrm{det}\left(\frac{\partial
f_{x_{c}}(x)}{\partial
x}\right)\right|=\pi(f_{x_{c}}(x))\prod_{i=1}^{i=N}\left|\mathrm{det}\left(\frac{\partial
f_{x_{c},i}(x)}{\partial x}\right)\right|.$ (6)
While various NF implementations make different choices for the form of the
transformations $f_{i}$ and their inverse $f_{i}^{-1}$, they are generally
chosen such that the determinant of the Jacobian, $\mathrm{det}(\partial
f_{x_{c},i}(x)/\partial x)$, is easy to compute. Mainstream NF methods follow
the deep learning paradigm: parametrize the transformations using neural
networks, train by maximizing the likelihood, and optimize the large number of
parameters in each layer through back-propagation.
In this work we use an alternative approach to the current deep learning
methodology, a new type of normalizing flow - Gaussianizing Iterative Slicing
(GIS) sig . GIS works by iteratively matching the 1D marginalized distribution
of the data to a Gaussian. At iteration $i$, the transformation of data
$X_{i}$, $f_{x_{c},i}$, can be written as
$X_{i+1}=X_{i}-W_{i}W_{i}^{T}X_{i}+W_{i}\mathbf{\Psi}_{x_{c},i}(W_{i}^{T}X_{i}),$
(7)
where $W_{i}$ is the weight matrix that satisfies $W_{i}^{T}W_{i}=I$, and
$\mathbf{\Psi}_{x_{c},i}$ is the 1D marginal Gaussianization of each dimension
of $W_{i}^{T}X_{i}$. To improve the efficiency, the directions of the 1D
slices $W_{i}$ are chosen to maximize the PDF difference between the data and
Gaussian using the Wasserstein distance at each iteration. The conditional
dependence on $x_{c}$ is modelled by binning the data in $x_{c}$ and
estimating a 1D mapping $\mathbf{\Psi}_{i}$ for each $x_{c}$ bin, then
interpolating ($W_{i}$ is the same for different $x_{c}$ bins). GIS can
perform an efficient parametrization and calculation of the transformations in
Equation 6, with little hyperparameter tuning. We expect that standard
conditional normalizing flow methods would also work well for this task, but
did not perform any comparisons.
With the GIS NF trained to calculate the conditional density, our
in-distribution anomaly detection method, illustrated in Fig. 17, works as
follows:
1. Calculate the conditional density at each data point $p(x|M_{JJ})$,
denoting this $\mathrm{p_{signal}}$, using the jet masses and n-subjettiness
ratios as the data $x$ and the invariant mass of a pair of jets $M_{JJ}$ as
the conditional parameter.
2. Calculate the density at neighbouring regions along the conditional
dimension, $p(x|M_{JJ}\pm\Delta)$, and interpolate to get a density estimate
in the absence of any anomaly. This is denoted $\mathrm{p_{background}}$.
Explore various values of $\Delta$ and interpolation/smoothing methods.
3. The local over-density ratio (or anomaly score $\alpha$),
$\mathrm{\alpha=p_{signal}/p_{background}}$, will be $\approx 1$ in the
presence of a smooth background with no anomaly. A sign of an anomalous event
is $\alpha>1$. Individual events can also be selected based on the desired
$\alpha$ characteristic, as sketched below.
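A compact sketch of step 3, using a simple side-band average in place of the Gaussian-convolved interpolation used in practice; `log_density` is a hypothetical interface to the trained conditional GIS:

```python
import numpy as np

def anomaly_ratio(x, mjj, delta=250.0, n_points=10):
    """alpha = p_signal / p_background for one event: the density at M_JJ
    divided by an estimate interpolated from slices at M_JJ +/- delta;
    log_density(x, m) is a hypothetical interface to the trained GIS."""
    p_signal = np.exp(log_density(x, mjj))
    offsets = np.linspace(delta / n_points, delta, n_points)
    side_masses = np.concatenate([mjj - offsets, mjj + offsets])
    p_background = np.mean([np.exp(log_density(x, m)) for m in side_masses])
    return p_signal / p_background
```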
Figure 17: In-distribution anomaly detection through conditional density
estimation. Consider samples of a 1D feature $x$ and a conditional parameter
of interest $M$ (left panel), drawn from a smooth Gaussian ‘background’ with a
small number of anomalous ‘signal’ events added (inside red circle for
clarity). The conditional density values at each data point do not allow the
anomaly to be distinguished from the background (center left panel), as they
only identify the outliers. However, the local over-density anomaly ratio
$\mathrm{\alpha}$ peaks at the anomalous data points (center right panel), and
implementing a minimum cut on the anomaly ratio reveals the anomalous events
(right panel).
#### 3.5.2 Results on LHC Olympics
We reasoned that if there is an anomalous particle decay in the data, its jet
decay products would likely be located in a narrow range of masses
corresponding to the mass of the particle itself. For this reason we chose the
invariant mass $M_{JJ}$ of two jets as the conditional parameter to conduct
the anomaly search along. We iterated on selections of jets $i$ and $k$, and
selections of n-subjettiness ratios, and found the most significant anomaly
when investigating the lead two jets and the first n-subjettiness ratio, so we
used {$M_{JJ}$, $m_{J_{1}}$, $m_{J_{1}}-m_{J_{2}}$, $\tau_{21}^{J_{1}}$,
$\tau_{21}^{J_{2}}$} as the 5 parameters describing each event.
We also experimented with training a convolutional autoencoder on the jet
images, reasoning that rare events (anomalies) would have a higher
reconstruction error and different latent space variables than more common
ones, as seen in Farina:2018fyg . While we found a larger than average
reconstruction error for signal events, and latent space parameters to be
noticeably different between background and signal events, on the R&D dataset,
these autoencoder-based variables introduced more noise in the density
estimation than the physics-based parameters, so they were not used in our
final submission.
Simple investigations of the dataset showed that it was smoothly distributed,
and no anomalies were apparent by eye. We trained the conditional GIS on all
events, and evaluated the anomaly score $\alpha$ for each datapoint. On the
R&D set we found that point estimates of the conditional densities resulted in
a larger noise level than convolving the conditional density with a Gaussian
PDF of width $\sigma=\Delta$ (1-PDF convolution for the background),
discretely sampled at 10 points, so we used the Gaussian-convolved probability
estimates. $\mathrm{\sigma=250\ GeV}$ provided the most strongly peaked
signal.
As seen in Fig. 18, the anomaly score strongly peaks around
$\mathrm{M_{JJ}\approx 3750\ GeV}$. If these events are truly from a particle
decay we expect that their resulting jet statistics will be clustered around
some mean value, unlike if it is simply a result of noise in the model or
background. To investigate the anomaly we remove data outside of
$\mathrm{3600\ GeV<M_{JJ}<3900\ GeV}$, and look at the events that remain
after a series of cuts on the anomaly score $\alpha$.
Figure 18: The anomaly score for each event as a function of the invariant
mass of the leading two jets. A number of anomalous events are clearly seen
near $\mathrm{M_{JJ}\approx 3750\ GeV}$. Figure 19: Parameter distributions of
the events that remain after imposing cuts on the anomaly score $\alpha$, and
limiting the mass range to $\mathrm{3600\ GeV<M_{JJ}<3900\ GeV}$. Vertical
dashed lines are the true anomalous events that were unveiled after the close
of the competition.
In Fig. 19 we show the parameter distributions of the events that remain after
imposing $\alpha>[1.5,2.5,5.0]$ cuts in the right four panels, and find that
the most anomalous events are centered in $M_{J1}$ and $M_{J1}-M_{J2}$, and
have small values of n-subjettiness $\tau_{21}$. This strongly indicates that
we found a unique over-density of events that do not have similar counterparts
at neighbouring $M_{JJ}$ values - i.e. an anomaly.
Figure 20: The eight most anomalous events in the black box. Each pair of
images visualizes the particles belonging to the lead two jets. Images were
constructed by binning the transverse momentum of each particle belonging to
the jet in ($\eta$, $\phi$), and oriented along the y axis using the
$p_{\text{T}}$ weighted moment of inertia. Color is log scaled.
We visualized the events ranked by decreasing anomaly score in Fig. 20, and
found that each of the leading two jets for events with a high anomaly score
additionally have very similar visual appearances. Using the events that
remain after an $\alpha>2.0$ cut we can summarize the anomalous events as
follows: a $\mathrm{3772.9\pm 8.3\ GeV}$ particle decays into 2 particles, one
with $\mathrm{M_{1}=727.8\pm 3.8\ GeV}$, and the other with
$\mathrm{M_{2}=374.8\pm 3.5\ GeV}$. Each of these decayed into two-pronged
jets. Based on the corresponding analysis of the R&D data, by limiting the
number of signal events until the results visually resembled Fig. 18, we
estimated that there were a total of $1000\pm 200$ of these events included in
the black box of a million total events. While this is not a robust technique
to estimate the number of events in all cases, as the anomaly characteristics
may be much more broad or peaked in a black box than they were in the R&D set,
it nevertheless gave an accurate result here.
#### 3.5.3 Lessons Learned
The availability of a low-noise and robust density estimation method such as GIS was key throughout this work, as the lack of hyperparameter tuning allowed us to focus on the blind search rather than worrying that failing to detect an anomaly may purely stem from some parameters in the method. We also learned
plenty of interesting particle physics along the way, and thank the organizers
greatly for taking the time to design and implement this challenge.
### 3.6 Latent Dirichlet Allocation
Authors: B. M. Dillon, D. A. Faroughy, J. F. Kamenik, M. Szewc. The implementation of LDA used here for the unsupervised jet-substructure algorithm is available at http://github.com/barrydillon89/LDA-jet-substructure.
Latent Dirichlet allocation (LDA) is a generative probabilistic model for
discrete data first introduced to particle physics for unsupervised jet
tagging and event classification in Refs. Dillon:2019cqt ; 1797846 . In
general, a single collider event can be represented by a set of measurements
$(o_{1},o_{2},\ldots)$. For example, the set of all particle four-momenta in
the space $(p_{\text{T}},\eta,\phi)$, or any set of substructure observables
extracted while declustering jets. The basic assumption of LDA is that
individual events can be modelled by mixtures of a finite number of latent
distributions, referred to as themes or topics. These themes are multinomial
distributions over the binned space of observables from which event measurements are generated. Sampling a single measurement $o_{i}$ from a theme therefore consists of drawing, from a discretized phase space, the bin containing the particular measurement. The simplest case is to assume two underlying themes,
the two-theme LDA model. In this case the generative process for a single
event goes as follows: (i) from a suitable prior distribution draw a random
number $\omega$ between zero and one, (ii) select a theme by drawing from the
binomial distribution with bias $\omega$, (iii) sample one measurement from
the selected theme’s multinomial space of observables. Repeat steps (ii-iii)
until all measurements in the event are generated. Repeat the procedure above
for each event in the event sample. The above setting can be generalized to
more than two themes by replacing the two-theme mixing proportion $\omega$
with a set of mixing proportions $(\omega_{1},\ldots,\omega_{T})$ living in a
$(T-1)$-dimensional simplex (the space of all $T$-dimensional vectors satisfying $0\leq\omega_{t}\leq 1$ and $\sum_{t=1}^{T}\omega_{t}=1$), where $T$ is the number of themes. The
$\omega_{t}\,$’s reflect the preponderance of each theme within an individual
event. The themes are then drawn from the multinomial distribution with biases $\omega_{t}$. In contrast to a mixture model (in which all measurements from an individual event are drawn from a single underlying distribution), in a mixed-membership model like LDA different measurements within an event can originate from different themes, leading to a more flexible probabilistic model. LDA has a set of hyper-parameters $\alpha$
parametrizing the prior distribution from which the theme mixing proportions
$\omega_{t}$ are to be drawn for each event (step (i) of the generative
process described above). In particular, the prior is the Dirichlet
distribution $\mathcal{D}(\alpha_{1},\ldots,\alpha_{T})$. Different choices of the concentration parameters $\alpha_{t}>0$ yield different shapes over the simplex. For the two-theme model, the Dirichlet reduces to a beta distribution $\mathcal{D}(\alpha_{1},\alpha_{2})$ over the unit interval.
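As a minimal sketch, the two-theme generative process reads as follows in Python; the theme distributions and concentration parameters below are illustrative placeholders, not values fitted to any data.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_event(alpha, themes, n_measurements):
    """Two-theme LDA generative process for a single event.

    themes: (2, n_bins) array of multinomial theme distributions over the
    discretized space of observables; alpha: beta (Dirichlet) parameters.
    """
    omega = rng.beta(alpha[0], alpha[1])                 # (i) mixing proportion
    event = []
    for _ in range(n_measurements):
        t = rng.binomial(1, omega)                       # (ii) select a theme
        o = rng.choice(len(themes[t]), p=themes[t])      # (iii) draw a bin
        event.append(o)
    return event

themes = np.array([[0.7, 0.2, 0.1],    # placeholder "background" theme
                   [0.1, 0.3, 0.6]])   # placeholder "signal" theme
event = generate_event(alpha=(0.5, 0.5), themes=themes, n_measurements=20)
```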
Once the Dirichlet hyper-parameter $\alpha$ and the number of themes $T$ are fixed, we can train an LDA model by “reversing” the generative process
described above to infer from unlabelled collider data the latent parameters,
namely the mixing proportions $\omega_{t}$ of each theme and the multinomial
parameters $0\leq\beta_{t,m}\leq 1$ of the theme distributions $p(o|\beta)$,
where $t$ labels the theme and $m$ labels the bins in the space of
observables. To learn these parameters in this work we use the standard method
of stochastic variational inference (SVI). Once these parameters are learned
from the data, we can then use LDA to classify events in an unsupervised
fashion. In the case of a two-theme LDA model ($T=2$) we can conveniently use
the likelihood ratio of the learned themes of an event $e=(o_{1},\ldots
o_{N})$:
$L(o_{1},\ldots,o_{N}|\alpha)=\prod_{i=1}^{N}\frac{p(o_{i}|\hat{\beta}_{1}(\alpha))}{p(o_{i}|\hat{\beta}_{2}(\alpha))}\,.$
Here $\hat{\beta}_{t}$ are the estimators of the theme parameters extracted
from SVI. Notice that the above expression depends on the Dirichlet hyper-parameter $\alpha$, leading to a landscape of classifiers. In principle there are no hard criteria for choosing one set of hyper-parameters over another. One way to guide the choice is by using the resulting model’s
perplexity, see Ref. 1797846 for details. After training LDA models for
different points in the landscape, the LDA classifier with the lowest
perplexity (corresponding to the LDA model that best fits the data) has been
shown in examples to be correlated with truth-level performance measures like
the AUC.
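For a two-theme model, the per-event classifier score is then just the (log of the) likelihood ratio above; a minimal sketch, with the SVI estimates $\hat{\beta}_{1},\hat{\beta}_{2}$ assumed given as arrays over the vocabulary bins:

```python
import numpy as np

def lda_score(event_bins, beta1, beta2):
    """Log of the likelihood ratio L(o_1, ..., o_N | alpha): the sum over
    measurements of log p(o_i | beta1) - log p(o_i | beta2)."""
    event_bins = np.asarray(event_bins)
    return float(np.sum(np.log(beta1[event_bins]) - np.log(beta2[event_bins])))
```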
#### 3.6.1 Method
As shown in Refs. Dillon:2019cqt ; 1797846 , the two-theme LDA model can be
used for anomaly detection in events with large radius jets. The jets are
declustered, and at each splitting a set of substructure observables is
extracted and binned. We refer to these binned measurements as $o_{j,i}$, with
an added categorical variable that tags the jet to which the splitting belongs. In the limit of exchangeable splittings, De Finetti’s theorem allows us to
derive, with the help of some additional assumptions, the latent substructure
of such jets, characteristic of a mixed-membership model. In practice,
exchangeability is a reasonable approximation since most of the interesting
physical information contained in jet substructure is in the kinematical
properties of the splittings, not in their ordering.
The choice of data representation and suitable binning are fundamental for LDA
performance. Here we refer to data representation as both the kinematical
information we use from each splitting as well as the kinematic cuts
determining the splittings to be considered. As shown in Ref. 1797846 , the
data representation and binning must on the one hand allow for discrimination between signal and background, while at the same time producing co-occurrences of measurements within the same event. The former is obvious considering the classification task at hand, while the latter is needed for the SVI procedure to be able to extract the latent distributions. This results in a trade-off: relatively coarse binning is used to ensure co-occurrence of measurements without sacrificing too much discriminatory power.
unsupervised setting, one does not know a priori which data representation is
best for any given possible signal, and any data representation carries some
assumptions on how the signal is imprinted in jet substructure. In this work
we consider two fairly general bases of jet substructure observables, the so
called mass-basis and the Lund-basis. In the mass basis we only include
splittings from subjets of mass above $30$ GeV. In the Lund basis we only
include splittings from subjets which lie in the primary Lund plane. We
emphasise that the resulting two data representations do not only differ in
the observables included, but also in the set of splittings kept for each jet
due to the different declustering cuts. In our current setting, the number of
considered jets in an event is fixed to two (of highest
$p_{\text{T}}$). (When considering a variable number of jets, LDA tends to cluster together events based on jet multiplicity rather than jet substructure.)
After choosing a suitable data representation and binning, the procedure is as
follows: We first split the dataset into overlapping invariant mass bins. In
each bin, we perform a hyper-parameter optimization using perplexity to find
the best LDA model. Selecting the signal and background themes in the model by
looking at the latent distributions of the themes over the vocabulary and the
weight distributions of the events, we build a test statistic and define a
threshold for data selection. Finally, we perform a bump hunt on the selected
data invariant mass distribution. In order to provide a background-only
hypothesis, we consider the uncut invariant mass distribution as a background
template and fix the total number of background events using the sideband
regions. We can then produce a local p-value after also estimating the
systematic errors due to possible classifier correlation with the invariant
mass using the simulated background sample.
#### 3.6.2 Results on LHC Olympics
For Black Box 1 we assumed a di-jet resonance and consequently applied the LDA
method to the two leading jets in each event using the mass-basis data
representation. The invariant mass bin of 2.5-3.5 TeV yields the themes shown in Fig. 21. We deem the signal theme to be the one with resonant substructure, uncharacteristic of QCD.
Figure 21: Best inferred latent distributions of the two themes (left and
right column) for Black Box 1 with the LDA method. Shown is the
$m_{0},m_{1}/m_{0}$ plane of the mass-basis for the heavier (top row) and the
lighter (bottom row) of the two jets.
We perform a bump hunt with this model on Black Box 1 and on the simulated
background sample. We show the invariant mass distribution after cutting using this LDA and the resulting BumpHunter excess in Fig. 22. In both cases we also
show the background estimation used to compute the p-value. The reported
significances are 1.8$\sigma$ and 3.8$\sigma$ for the background sample and
the Black Box 1 sample respectively.
Figure 22: Invariant mass event distribution of the simulated background
(left) and Black Box 1 (right) after performing an LDA-based cut along with
the background estimation using the uncut invariant mass distribution. Bottom
row displays the corresponding excess found by BumpHunter.
When comparing our estimates to the unveiled results, the LDA inferred di-jet
resonance mass is not incompatible with the actual value of 3.8 TeV. However,
the two decay products of this resonance have masses which are significantly
above LDA estimates (732 and 378 GeV). The discrepancy is possibly due to an
unfortunate choice of binning, since having bins narrow in $m_{0}$ may have
reduced the strength of the co-occurrences, which in turn may have caused the
signal features to be washed out by sculpting effects in the jet mass
distribution coming from the $p_{\text{T}}$ cut. On the other hand, we could
not find compelling new physics candidates in either Black Box 2, where no signal was present, or Black Box 3.
#### 3.6.3 Lessons Learned
The main lesson we take from the LHCO challenges is that a realistic LDA
implementation should consider several different data representations and
binnings. As we limited ourselves to di-jet jet-substructure observables we
missed the characteristics of a rare signal which does not produce a rich jet
substructure in the two leading jets. In the future, the search pipeline
should allow for a larger number of jets and also include data representations which are not focused exclusively on jet substructure, e.g. by
considering global jet or event variables.
### 3.7 Particle Graph Autoencoders
Authors: Steven Tsan, Javier Duarte, Jean-Roch Vlimant, Maurizio Pierini. All code is publicly available at https://github.com/stsan9/AnomalyDetection4Jets.
#### 3.7.1 Method
We propose particle graph autoencoders (PGAEs) based on graph neural networks
1808887 for unsupervised detection of new physics in multijet final states at
the LHC. By embedding particle jet showers as a graph, GNNs are able to
exploit particle-particle relationships to efficiently encode and reconstruct
particle-level information within jets. We posit that this can improve the
capacity of autoencoders to learn a compressed representation of a jet and
consequently help identify anomalous beyond-the-standard-model (BSM) multijet
signal events from LHC data.
In our PGAE model, we represent each input jet as a graph in which each
particle of the jet is a node, and each node has an edge connecting it to
every other particle in the jet (i.e. a fully-connected particle graph). When
encoding and decoding, the graph structure of the data remains the same, but
the nodes’ features, initially the particle’s four-momentum
$(E,p_{x},p_{y},p_{z})$, have their dimensionality reduced during the encoding
phase. We note the model can be expanded to consider additional particle-level
information, such as particle type, electromagnetic charge, and pileup
probability weight Bertolini:2014bba . For the encoder and decoder, we use the
edge convolution layer from Ref. DGCNN , which performs message passing along
the edges and aggregation of messages at the nodes of the graphs. A schematic
of this is shown in Fig. 23.
Figure 23: Schematic of the particle graph autoencoder model proposed. Each
input jet is represented as a graph in which each particle of the jet is a
node, and each node has an edge connecting it to every other particle in the
jet. After an edge convolution layer DGCNN , each particle is encoded in a
reduced two-dimensional latent space, before another edge convolution layer
reconstructs each particle’s four-momentum $(E,p_{x},p_{y},p_{z})$.
The PGAE model is constructed using the PyTorch Geometric library
PyTorchGeometric . In this model, the input node features are first processed
by a batch normalization layer batchnorm . The encoder is an edge convolution
layer DGCNN , built from a fully connected neural network $\phi_{\mathrm{e}}$
with layers of sizes $(8,32,32,2)$ and rectified linear activation unit (ReLU)
activation functions relu . The first layer of dimension $8$ represents the
input, which is given by $(\bm{p}_{i},\bm{p}_{j}-\bm{p}_{i})$, where
$\bm{p}_{i}$ ($\bm{p}_{j}$) is the four-momentum for particle $i$ ($j$) and
$i\neq j$. The final layer produces a two-dimensional message vector from each
pair of distinct particles. These two-dimensional message vectors are
aggregated (using a mean function) for each receiving particle
$\bm{h}_{i}=\frac{1}{|\mathcal{N}(i)|}\sum_{j\in\mathcal{N}(i)}\phi_{\mathrm{e}}(\bm{p}_{i},\bm{p}_{j}-\bm{p}_{i})\,,$
(8)
where $\mathcal{N}(i)$ is the neighborhood of particles connected to the $i$th
particle, which corresponds to all other particles in this case. This aggregated message $\bm{h}_{i}$ is the bottleneck or encoded representation for the
$i$th particle. The decoder is also an edge convolution layer, containing a
network $\phi_{\mathrm{d}}$ with layers of sizes $(4,32,32,4)$ and ReLU
activation functions, except for the final layer, which reconstructs each
particle’s momentum. We note that the architecture itself is insensitive to
the ordering of the input particles. PyTorch Geometric supports variable-size
input graphs so there is no need for zero-padding.
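The following PyTorch Geometric sketch mirrors the architecture described above; the layer sizes follow the text, while details such as initialization and the training loop are left out, and library defaults may differ from the authors' implementation.

```python
import torch
from torch import nn
from torch_geometric.nn import EdgeConv

def mlp(sizes, final_activation=True):
    """Fully connected network with ReLU activations; the decoder's final
    layer is linear so it can output arbitrary four-momentum components."""
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [nn.Linear(a, b), nn.ReLU()]
    return nn.Sequential(*(layers if final_activation else layers[:-1]))

class PGAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.bn = nn.BatchNorm1d(4)
        # EdgeConv feeds (x_i, x_j - x_i) to its MLP, so the input size is
        # 2 * 4 = 8 for the encoder and 2 * 2 = 4 for the decoder (Eq. 8).
        self.encoder = EdgeConv(mlp([8, 32, 32, 2]), aggr='mean')
        self.decoder = EdgeConv(mlp([4, 32, 32, 4], final_activation=False),
                                aggr='mean')

    def forward(self, x, edge_index):
        h = self.encoder(self.bn(x), edge_index)  # per-particle bottleneck
        return self.decoder(h, edge_index)        # reconstructed (E, px, py, pz)
```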
The model is trained on the QCD background dataset with two different loss
functions. The first is the mean squared error (MSE) between the input and
output particles. This choice of loss function violates the permutation
invariance of the algorithm because the particles must be reconstructed in the
same order as they are input to achieve a small value of the loss function.
For this reason, we also investigate a second, alternative loss function, the
Chamfer distance loss, whose value does not depend on either the order of the
input particles or the reconstructed particles 10.5555/1622943.1622971 ;
Fan_2017_CVPR ; Zhang2020FSPool . Given two input sets of particles
$\mathcal{M}$ and $\mathcal{N}$, expressed in terms of the momentum vectors
$\bm{p}_{i}$ and $\bm{p}_{j}$ (with $i\in\mathcal{M}$ and $j\in\mathcal{N}$),
the loss function is defined as
$D^{\mathrm{NN}}(\mathcal{M},\mathcal{N})=\frac{1}{|\mathcal{M}|}\sum_{i\in\mathcal{M}}\min_{j\in\mathcal{N}}\left(||\bm{p}_{i}-\bm{p}_{j}||\right)^{2}+\frac{1}{|\mathcal{N}|}\sum_{j\in\mathcal{N}}\min_{i\in\mathcal{M}}\left(||\bm{p}_{i}-\bm{p}_{j}||\right)^{2}\,,$
(9)
where $||\bm{p}_{i}-\bm{p}_{j}||$ is the Euclidean distance.
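Eq. 9 can be written compactly in PyTorch; this sketch handles a single pair of particle sets and omits batching over jets.

```python
import torch

def chamfer_loss(p_in, p_out):
    """Permutation-invariant Chamfer distance (Eq. 9) between two particle
    sets of shapes (n_in, 4) and (n_out, 4)."""
    d2 = torch.cdist(p_in, p_out) ** 2   # pairwise squared Euclidean distances
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
```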
Figure 24: Comparison of input and reconstructed features $E$ (far left),
$p_{x}$ (center left), $p_{y}$ (center right), and $p_{z}$ (far right) for the
models trained with MSE (top) and Chamfer (bottom) loss functions on the QCD
testing dataset.
#### 3.7.2 Results on LHC Olympics
First, we studied our algorithm on the R&D dataset. As the truth information
is provided, we can create a receiver operating characteristic (ROC) curve to
determine the effectiveness of the PGAE to identify a signal
($\mathrm{W}^{\prime}\to\mathrm{X}\mathrm{Y}$,
$\mathrm{X}\to\mathrm{q}\mathrm{q}$, and $\mathrm{Y}\to\mathrm{q}\mathrm{q}$
with $m_{\mathrm{W}^{\prime}}=3.5$ TeV, $m_{\mathrm{X}}=500$ GeV, and
$m_{\mathrm{Y}}=100$ GeV) that it did not observe during training. The ROC
curves for both the MSE and Chamfer loss functions are shown in Fig. 25.
Although the MSE loss is not permutation invariant, we find it provides better
discrimination for a new unseen signal.
Figure 25: ROC curves for the PGAE trained with the MSE (left) and Chamfer
loss (right).
To evaluate our model’s performance for anomaly detection, we perform a
resonance search (or “bump hunt”) in the dijet invariant mass
$m_{\mathrm{jj}}$, computed from the two jets with highest $p_{\mathrm{T}}$ in
the event. We perform this dijet search in black box (BB) 1, which contains a
resonant dijet signal at $m_{\mathrm{jj}}\sim 3.8$ TeV, and BB 2, which
contains no signal. We require both of the jets to be “outliers,” which we
define as jets with a reconstruction loss exceeding a threshold corresponding
to the 90% quantile of the loss distribution for the leading two jets in the
corresponding evaluation dataset. We note that because our algorithm is jet-
focused, it is straightforward to generalize this search to multijet events.
For the background prediction in the signal-enriched outlier region, we
perform a simplified analysis using the shape of the data in the background-
enriched nonoutlier region. Specifically, we fit the ratio of the nonoutlier-
to-outlier dijet mass distribution with a fourth-order polynomial to derive a
transfer factor (TF). We take nonoutlier data distribution weighted by the TF
as an estimate of the expected background in the outlier region. We do not
consider systematic uncertainties associated to the TF although these could be
taken into account in a more complete analysis in the future. The procedure is
illustrated in Fig. 26 for BB 2.
Figure 26: Illustration of the simplified background estimation procedure in
BB 2 for the GAE trained with MSE loss. A comparison between the nonoutlier
and outlier jet mass distribution is shown (upper left). The ratio of the two
distributions is fit with a fourth-order polynomial to derive a transfer
factor (lower left). The corresponding postfit prediction is also shown (upper
right). The postfit ratio is randomly scattered around one as expected for BB
2, which contains no signal.
To derive the observed significance with the simplified background prediction,
we use the bump hunter (BH) algorithm Choudalakis:2011qn , recently
implemented in Python pybumphunter . We choose the variable-width mass binning
from the CMS dijet searches Sirunyan:2018xlo in the range from 2659 GeV to
6099 GeV. We look for resonances in windows spanning two to five bins. With
the MSE model in BB 1, we identify a possible resonance around $3.9$ TeV with
a local significance of $2.1\,\sigma$, which is close to the region of the
injected dijet resonance with $m_{\mathrm{Z}^{\prime}}=3823$ GeV. In BB 2 using the same model, the most discernible bump lies around $3.3$ TeV with a
small local significance of $0.8\,\sigma$, which agrees with the fact that BB
2 has no injected signal. For the model trained with the Chamfer loss, a
$1.5\,\sigma$ excess is seen at $2.8$ TeV in BB 1 and a $-1.4\,\sigma$ excess
at $5.1$ TeV in BB 2. Neither is significant. As noted previously, the
permutation invariant Chamfer loss performs worse at the unsupervised anomaly
detection task. This may be due to the minimization, which will often return a
smaller loss value than MSE even for poorly reconstructed, anomalous jets.
Fig. 27 shows the BH results for BBs 1 and 2 using the models trained with
both losses.
Figure 27: Bump hunt in the dijet invariant mass in BB 1 (left) and 2 (right) using MSE (top) and Chamfer (bottom) as the loss functions: BB 1, MSE, $2.1\,\sigma$ at $3.9$ TeV; BB 2, MSE, $0.8\,\sigma$ at $3.3$ TeV; BB 1, Chamfer, $1.5\,\sigma$ at $2.8$ TeV; BB 2, Chamfer, $-1.4\,\sigma$ at $5.1$ TeV. Outlier jets have a reconstruction loss in the top 10% with respect to the corresponding BB. Outlier events are required to have both jets be outliers. BB 1 has an anomalous large-radius dijet signal $\mathrm{Z}^{\prime}\to\mathrm{X}\mathrm{Y}\to(\mathrm{q}\mathrm{q})(\mathrm{q}\mathrm{q})$ injected at $m_{\mathrm{Z}^{\prime}}=3823$ GeV (with $m_{\mathrm{X}}=732$ GeV and $m_{\mathrm{Y}}=378$ GeV), while BB 2 has no injected anomalies.
#### 3.7.3 Lessons Learned
Graph neural networks, like our proposed particle graph autoencoder, are
promising methods for anomaly detection. However, further work is needed to
define a permutation-invariant loss function for use with such architectures
that is more performant for anomaly detection. In addition, a more generic
resonance search procedure, such a multimensional fit in the trijet, dijet,
trijet, and single-jet mass distributions possibly using methods like Gaussian
process fitting Frate:2017mai , would be appropriate to use in combination
with this algorithm. In our experience, the R&D dataset was extremely helpful
in preparing our anomaly detection algorithms and gauging whether the
algorithm we were developing was on the right track. In the future, more
extensive R&D datasets, together with additional black boxes with different
signals, may be useful. Finally, it may be productive to host a future
competition on a well-known platform, such as Kaggle, to increase engagement
with the broader machine learning community.
### 3.8 Regularized Likelihoods
Authors: Ioan-Mihail Dinu. Most of the machine learning heavy lifting was done with the help of the existing code base from the original $\mathcal{M}$-flow model introduced in Ref. Brehmer:2020vwc by Johann Brehmer and Kyle Cranmer. https://github.com/johannbrehmer/manifold-flow.
#### 3.8.1 Method
The method presented in this section attempts to use the power of generative
models for the downstream task of anomaly detection. We have mainly explored the possible applications of flow-based methods, since they have the advantage of providing an explicit likelihood.
Normalizing Flows (NF) are one of the best methods available at the moment for density estimation in high-dimensional data (Ref. pmlr-v37-rezende15 ). These models work by learning a bijective mapping between the data distribution and a multivariate Gaussian (with the same number of dimensions).
Experience shows that, unfortunately, the likelihood that NF models provide is
not sufficient as a stand-alone anomaly detection metric.
In an attempt to regularize the likelihood obtained with such density
estimation techniques we have explored several alternatives to the vanilla NF
models. One particularly interesting approach is the $\mathcal{M}$-flow model
introduced originally in Ref. Brehmer:2020vwc .
##### $\mathcal{M}$-flows
The $\mathcal{M}$-flow model combines the idea of reconstruction error from
autoencoders with the tractable density of NF. If there exists a lower-
dimensional data manifold embedded in the data space, this method attempts to
learn both the shape of this data manifold $\mathcal{M}$ and the density over
that manifold.
In order to create an $\mathcal{M}$-flow we start with a bijective mapping $\mathrm{f}$ from the latent space $\mathrm{U}\times\mathrm{V}$ to the data space $\mathrm{X}$, as in Eq. 10. The latent space is split into two components:
$\mathbf{u}$, which is the latent space representation that maps to the
learned manifold, and $\mathbf{v}$, which represents the remaining latent
variables that are “off the manifold”.
$\begin{split}\mathrm{f}:\mathrm{U}\times\mathrm{V}&\rightarrow\mathrm{X}\\ u,v&\mapsto\mathrm{f}(u,v)\end{split}$ (10)
The transition from the space $\mathrm{U}\times\mathrm{V}$ to the space $\mathrm{U}$ is implemented as a projection, with the $\mathbf{v}$ component simply discarded. The inverse transition is implemented with zero-padding: $\mathbf{u}$ remains unchanged and $\mathbf{v}$ is filled with zeros. We denote by $\mathrm{g}$ the resulting transformation of a latent representation $\mathbf{u}$ to a data point $\mathbf{x}$ (shown in Eq. 11).
$\begin{split}\mathrm{g}:\mathrm{U}&\rightarrow\mathcal{M}\subset\mathrm{X}\\ u&\mapsto\mathrm{g}(u)=\mathrm{f}(u,0)\end{split}$ (11)
Finally the density in the space $\mathrm{U}$ is learned using a regular NF
model denoted as $\mathrm{h}$. A schematic representation of those operations
is presented in Fig. 28.
Figure 28: An example representation of dependencies between the data
$\mathbf{x}$, latent variables $\mathbf{u}$, $\mathbf{v}$ and the normally
distributed variable $\mathbf{z}$. Here the example data has 8 dimensions and
the latent space has 5 dimensions. The bijective transformations are learned
with Masked Autoregressive Flows (MAFs).
The training of this model is split in two phases completed sequentially for
every batch. Firstly, the parameters of $\mathrm{f}$ are updated by minimizing
reconstruction error from the projection onto the manifold (loss function in
Eq. 12). The second phase of training consists in updating the parameters of
$\mathrm{h}$ by minimizing the negative log likelihood from Eq. 13.
$\mathcal{L}_{manifold}=\|x-g(g^{-1}(x))\|^{2}$ (12)
$\mathcal{L}_{density}=-\log p_{u}(g^{-1}(x))$ (13)
Regarding the preprocessing steps, the LHC Olympics datasets have been
clustered and the following features have been selected for each of the two
leading jets: $p_{T}$, $\eta$, $E$, $m$, $\tau_{3}/\tau_{2}$,
$\tau_{2}/\tau_{1}$, where $\tau_{n}$ is the n-subjettiness. For these 12
features, the best performing manifold size was 8.
This model offers the possibility to calculate both the density on the
manifold and the reconstruction error from the projection on the manifold. We
tried to use both of those metrics in order to construct a robust anomaly score
as in Eq. 14. This metric performs the anomaly detection task better on the
R&D dataset than its components and better than a basic normalizing flow model
trained on the same data, judging by the ROC curves in Fig. 29.
$\mathcal{R}_{exp}(x)=\frac{\|x-g(g^{-1}(x))\|^{2}}{1+p_{u}(g^{-1}(x))}$ (14)
While experimenting with this anomaly score, it became apparent that it
generates a bias towards events with high dijet mass ($m_{jj}$). In order to
decouple $\mathcal{R}_{exp}$ from $m_{jj}$ we included the marginal likelihood
of $m_{jj}$, that was modeled using Kernel Density Estimation (KDE), as a term
into the anomaly score. The resulting metric, denoted $\mathcal{R}_{m_{jj}}$, uses the
ratio between the likelihood on the manifold and marginal $m_{jj}$ likelihood
as in Eq. 15.
$\mathcal{R}_{m_{jj}}(x)=\frac{\|x-g(g^{-1}(x))\|^{2}}{1+\frac{p_{u}(g^{-1}(x))}{p_{KDE}(m^{x}_{jj})}}$ (15)
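A sketch of the score in Eq. 15, assuming hypothetical interfaces `mflow.project(x)` for $g(g^{-1}(x))$, `mflow.manifold_density(x)` for $p_{u}(g^{-1}(x))$, and `kde.pdf(mjj)` for the KDE marginal; none of these names are taken from the actual code base.

```python
import numpy as np

def r_mjj(x, mjj, mflow, kde):
    """Anomaly score R_mjj (Eq. 15): reconstruction error divided by one
    plus the ratio of manifold density to marginal m_jj likelihood."""
    reco_error = np.sum((x - mflow.project(x)) ** 2)
    return reco_error / (1.0 + mflow.manifold_density(x) / kde.pdf(mjj))
```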
Translating the performance obtained on the R&D data to the black boxes proved
to be a big challenge. The small differences in modeling from one black box to another are often enough to introduce significant biases. The only apparent
solution seems to be training and applying the method on the same dataset.
#### 3.8.2 Results on LHC Olympics
The R&D dataset was heavily used for benchmarking different approaches, Fig.
29 shows the anomaly detection performance of different metrics on the R&D
dataset.
Figure 29: Signal detection ROC curves in the R&D dataset for different
anomaly scores.
In order to evaluate the performance of this method in the absence of pure
background training data, a small fraction ($\sim 1\%$) of signal was
introduced into a subsample from the R&D dataset. The resulting data sample
was used both for training and evaluation of the model.
Several cuts have been applied on $\mathcal{R}_{m_{jj}}$ while trying to find
any indication of a resonance in the $m_{jj}$ spectrum. Although less
apparent, there is still a bias towards identifying higher $m_{jj}$ events as
being anomalous. The right plot in Fig. 30 shows the $m_{jj}$ distribution for
events above the $50^{th}$ percentile of $\mathcal{R}_{m_{jj}}$ vs events
above the $70^{th}$ percentile of $\mathcal{R}_{m_{jj}}$. Taking the $50^{th}$-percentile cut as a baseline, it is clear that increasing the threshold has the effect of selecting events with slightly higher $m_{jj}$. Unfortunately there is no sharp peak in the $m_{jj}$ distribution that would indicate a possible resonance; rather, the tail of the distribution grows heavier.
Figure 30: Overlapping $m_{jj}$ distributions below (left) and above (right)
two threshold cuts on $\mathcal{R}_{m_{jj}}$. Distributions for a $50^{th}$
percentile cut are in blue, while distributions for a $70^{th}$ percentile cut
are in orange. The $x$ axis is in $GeV/c^{2}$.
The results so far suggest that this method cannot be used reliably to find the hidden signal within the black boxes. This behavior is consistent regardless of the choice of $\mathcal{R}_{m_{jj}}$ thresholds.
#### 3.8.3 Lessons Learned
One of the main lessons learned during this challenge is that, in the absence of a good background model, neural networks by themselves cannot achieve good anomaly detection performance.
For the winter LHC Olympics, we approached the problem with a simple
autoencoder that was trained on the full background black box. Applying that
model on Black Box 1 (BB1) introduced a lot of bias that ended up acting like
a fake signal. Special precautions should always be taken in order to avoid
this scenario.
With the experience gained from studying BB1 we were a lot more careful to avoid creating a fake signal. The subsequent problem proved to be the lack of a
good background model. Since we could not rely on the full background black
box, the alternative was to train on data, but this comes with its own issues.
All of the attempts so far fell short of providing good background modeling, and therefore the current anomaly detection performance leaves a lot to be desired. Those trials taught us that a good machine learning anomaly detection algorithm is not just about the neural network itself: many other analysis details should be treated with the same amount of attention.
### 3.9 UCluster: Unsupervised Clustering
Authors: Vinicius Mikuni and Florencia Canelli. UCluster is available at: https://github.com/ViniciusMikuni/UCluster.
#### 3.9.1 Method
The properties of physics beyond the Standard Model (BSM) are not yet known.
However, we can expect anomalous events, from the same physics processes, to
carry similar event signatures. In this section, we introduce a method for
Unsupervised Clustering (UCluster). The goal of UCluster is to reduce the data
dimensionality using a neural network that retains the main properties of the
event collision. In this reduced representation, a clustering objective is
added to the training to encourage points embedded in this space to be close
together when they share similar properties and far apart otherwise.
To create meaningful event embeddings, a per-particle jet mass classification
is chosen. We first cluster particles into jets with the FastJet
implementation of the anti-$k_{t}$ algorithm with $R=1.0$ for the jet radius.
Each particle associated to a clustered jet receives a label, proportional to
the mass of the associated jet. For this task, we require the model to learn
the mass of the associated jet the particle belongs to, and which particles
should belong to the same jet. This approach is motivated by the fact that the
invariant mass of a jet is correlated with jet substructure observables, which
often contain useful information for distinguishing different physics
processes. The mass labels are then created by defining 20 equidistant
intervals from 10 to 1000 GeV. For simplicity, the first 100 particles
associated to the two heaviest jets in the event are considered. If a smaller
number of particles are found, events are zero-padded up to 100, otherwise
truncated.
The classification task is achieved by means of a classification loss
($L_{\mathrm{focal}}$), defined by the focal loss
DBLP:journals/corr/abs-1708-02002 . The focal loss is usually applied to
classification problems with unbalanced labels. This choice was made since a
different number of events is expected for different mass intervals. The focal
loss expression for a multiclass classification is defined as:
$L_{\mathrm{focal}}=-\frac{1}{N}\sum_{j}^{N}\sum_{m}^{M}y_{j,m}(1-p_{\theta,m}(x_{j}))^{\gamma}\log(p_{\theta,m}(x_{j}))$
(16)
where $p_{\theta,m}(x_{j})$ is the network’s confidence, for event $x_{j}$
with trainable parameters $\theta$, to be classified as class $m$. The term
$y_{j,m}$ is 1 if class $m$ is the correct assignment for event $x_{j}$ and 0
otherwise. In this implementation, the parameter $\gamma=2$ is used. Different
values of $\gamma$ were tested resulting in no significant changes in
performance.
To cluster events with similar properties, a clustering loss
($L_{\mathrm{cluster}}$) is added to the overall loss function.
$L_{\mathrm{cluster}}$ was introduced in DBLP:journals/corr/abs-1806-10069 ,
defined as:
$L_{\mathrm{cluster}}=\frac{1}{N}\sum_{k}^{K}\sum_{j}^{N}\left\|f_{\theta}(x_{j})-\mu_{k}\right\|^{2}\pi_{jk}.$
(17)
The distance between each event $x_{j}$ and cluster centroid $\mu_{k}$ is
calculated in the embedding space $f_{\theta}$, created by the classification
task. The function $\pi_{jk}$ weights the contribution of each event to the clustering objective and takes the form:
$\pi_{jk}=\frac{e^{-\alpha\left\|f_{\theta}(x_{j})-\mu_{k}\right\|}}{\sum_{k^{\prime}}e^{-\alpha\left\|f_{\theta}(x_{j})-\mu_{k^{\prime}}\right\|}},$ (18)
with hyperparameter $\alpha$. Since $L_{\mathrm{cluster}}$ is differentiable,
stochastic gradient descent can be used to optimize jointly the trainable
parameters $\theta$ and the centroid positions $\mu_{k}$.
The combined loss to be minimized is then:
$L=L_{\mathrm{focal}}+\beta L_{\mathrm{cluster}}.$ (19)
The hyperparameter $\beta$ controls the relative importance of the two losses. The value $\beta=10$ is used to give the two components the same relative order of magnitude.
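Eqs. 16-19 combine into a single differentiable objective; the following PyTorch sketch (with our own tensor-shape conventions, not the original TensorFlow code) illustrates it.

```python
import torch

def ucluster_loss(logits, labels, emb, centroids, alpha=1.0, beta=10.0, gamma=2.0):
    """Combined loss L = L_focal + beta * L_cluster (Eqs. 16-19).

    logits: (N, M) class scores; labels: (N,) mass-bin labels;
    emb: (N, E) event embeddings f_theta(x); centroids: (K, E)."""
    # Focal loss (Eq. 16)
    p = torch.softmax(logits, dim=1)
    p_true = p[torch.arange(len(labels)), labels]
    l_focal = (-(1 - p_true) ** gamma * torch.log(p_true)).mean()
    # Soft assignments pi_jk (Eq. 18) and clustering loss (Eq. 17)
    dist = torch.cdist(emb, centroids)          # (N, K) distances to centroids
    pi = torch.softmax(-alpha * dist, dim=1)
    l_cluster = (pi * dist ** 2).sum(dim=1).mean()
    return l_focal + beta * l_cluster
```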
As defined in Eq. 17, $L_{\mathrm{cluster}}$ requires an initial value for the
cluster centers. While the initial value can be corrected during training, a
more stable performance is observed when the model is first pre-trained with
only $L_{\mathrm{focal}}$ for 10 epochs. After the pre-training, the centroids
are initialized by applying the K-Means algorithm 10.2307/2346830 to the
object embeddings. The full training is then carried out with the combined
loss defined in Eq. 19 for 100 epochs. The $\alpha$ parameter controls the importance of the initial cluster assignment and is set to a starting value of 1, increasing by a factor of 2 for every following epoch.
UCluster was designed to be independent of the ML architecture. For these
studies, ABCNet Mikuni:2020wpr is used as the backbone network. ABCNet is a
graph-based implementation where each reconstructed particle is taken as a
node in a graph. The importance of each node is then learned by the addition
of attention mechanisms described in velikovi2017graph .
The 10 nearest neighbors from each particle are used to calculate the
GAPLayers 2019arXiv190508705C . The initial distances are calculated in the
pseudorapidity-azimuth ($\eta-\phi$) space of the form $\Delta
R=\sqrt{\Delta\eta^{2}+\Delta\phi^{2}}$. The second GAPLayer uses the
Euclidean distances in the space created by subsequent fully connected layers.
The architecture used and the layer where the embedding space is defined are depicted in Fig. 31. No significant changes were observed when varying the number of neighbors and the maximum number of training epochs. Additional hyperparameters of ABCNet were kept as-is to avoid fine-tuning.
Figure 31: ABCNet architecture used in UCluster for a batch size N, F input
features, and embedding space of size E. Fully connected layers and encoding
node sizes are denoted inside “{}”. For each GAPLayer, the number of k-nearest
neighbors (k) and heads (H) are given. Full lines represent direct connections
while dotted lines denote skip connections.
UCluster and ABCNet are implemented in v1.14 of Tensorflow tensorflow . The
loss is optimized with Adam adam , using back-propagation to compute gradients. The learning rate starts at 0.001 and decreases by a factor of 2 every three epochs until reaching a minimum of $10^{-5}$. The batch size is fixed to 1024.
#### 3.9.2 Results on LHC Olympics
Results are presented on the R&D data set created for the LHC Olympics 2020.
From this data set, 300k events are used for training, 150k for testing and
300k events used to evaluate the performance. The signal fraction in each of
these samples is fixed at 1% of the total amount of events.
The distributions used as input features for ABCNet are described in Tab. 5.
Table 5: Descriptions of each feature used to define a point in the point cloud implementation for multiclass classification. The last two lines are the global information added to parameterize the network.
Variable | Description
---|---
$\Delta\eta$ | Pseudorapidity difference between the constituent and the associated jet
$\Delta\phi$ | Azimuthal angle difference between the constituent and the associated jet
$\log(p_{\text{T}})$ | Logarithm of the constituent’s $p_{\text{T}}$
$\log\mathrm{E}$ | Logarithm of the constituent’s E
$\log\frac{p_{\text{T}}}{p_{\text{T}}(\mathrm{jet})}$ | Logarithm of the ratio between the constituent’s $p_{\text{T}}$ and the associated jet $p_{\text{T}}$
$\log\frac{\mathrm{E}}{\mathrm{E}(\mathrm{jet})}$ | Logarithm of the ratio between the constituent’s E and the associated jet E
$\Delta\mathrm{R}$ | Distance in the $\eta-\phi$ space between the constituent and the associated jet
$\log m_{J\\{1,2\\}}$ | Logarithm of the masses of the two heaviest jets in the event
$\tau_{21}^{\\{1,2\\}}$ | Ratio of $\tau_{2}$ to $\tau_{1}$ for the two heaviest jets in the event
We first evaluate the performance of UCluster by requiring the presence of two
clusters in an embedding space of the same size. Fig. 32 shows the result of the
event embeddings, superimposed for 1000 events of the evaluation sample.
Figure 32: Visualisation of the embedding space created for anomaly detection
for 1000 events. The true labels are shown on the left, while the cluster labels created by UCluster are shown on the right. Figure from Ref.
Mikuni:2020qds .
A large fraction of BSM events are found in the same cluster, confirming the
earlier assumption that anomalous events would end up close together in the
embedding space. However, the QCD background contamination in the same cluster
only leads to a signal-to-background ratio (S/B) increase from 1% to 2.5%. The
S/B can be further enhanced by partitioning the events into more clusters.
This assumption is correct if the properties of the anomalous events are
different from the QCD signatures. To verify this behavior, the cluster size
is varied while keeping all other network parameters fixed. In Fig. 33 (left),
the maximum S/B found in a single cluster is shown as a function of the
cluster multiplicity. The S/B increases up to around 28% as the number of
clusters increases. The effect of the sample size used for training was also
checked by varying the amount of training and evaluation examples while
keeping the initial S/B fixed. In Fig. 33 (right), the approximate
significance (S/$\sqrt{\mathrm{B}}$) is shown as a function of the different
sample sizes for UCluster trained with a fixed cluster size of 30. The red
markers show the maximum significance found in a single cluster, compared to
the initial significance of the sample shown in blue. For initial
significances in the range 2-6, we observe enhancements by factors 3-4. The
training stability is tested by retraining each model five times. The standard
deviation of the independent trainings is shown by the error bars in Fig. 33.
When many clusters are used, the clustering stability starts to decrease, as
evidenced by larger error bars.
Figure 33: Maximum signal-to-background ratio found for different clustering
sizes (left) and maximum approximate significance found for UCluster trained
and evaluated on different number of events with cluster size fixed to 30
(right). The uncertainty corresponds to the standard deviation of five
trainings with different random weight initialization. Figure from Ref.
Mikuni:2020qds .
#### 3.9.3 Lessons Learned
The development of UCluster was carried out based on the R&D data set. While the
conceptual implementation to cluster events with similar properties was
achieved in this data set, an additional step to identify interesting clusters
for further inspection was also required. The latter step, while important,
was not fully investigated by the time the results were announced, leading to
no conclusive results when the method was applied to the black boxes. For
future endeavors, an automatic procedure to evaluate the cluster importance
will be necessary. The classification task, paired with the clustering
objective, is paramount to the ability of UCluster to reduce the data
dimensionality while providing meaningful event embeddings. During the
development of the method, the substructure observables of each jet in the
dijet event carried information to characterize the anomaly. Because of that,
a classification task that took advantage of this property was defined.
However, for different decay topologies, like the one presented in BB3, this
approach would not necessarily be optimal. The reason is that only one of the
decay modes presented jets with substructure properties that would differ from
the main QCD background. To alleviate this issue, a different classification
task could be adapted. Alternatively, a more general approach to creating the embedding space could be used. In particular, auto-encoders applied to
particle physics are suitable candidates for a summary statistic that can
encapsulate the event information in a lower dimensional representation.
## 4 Weakly Supervised
### 4.1 CWoLa Hunting
Authors: Jack H Collins and Benjamin Nachman. The code can be found at https://github.com/Jackadsa/CWoLa-Hunting/tree/tf2/LHCO-code.
#### 4.1.1 Method
CWoLa (Classification Without Labels) Hunting is a strategy for searching for
resonant anomalies, in which the signal is hypothesized to be localized in one
chosen resonant variable (e.g. some invariant mass, $m_{\text{res}}$) and the
background is known to follow some smooth and simple distribution in that
variable. Given a hypothesis resonance mass $m_{\text{hyp}}$ and width, a
signal region is constructed by selecting events in a window around the
resonance mass hypothesis, and upper and lower sideband regions are
constructed by selecting events in windows adjacent to the signal region.
Additional features $\\{y\\}$ orthogonal to the resonance mass (e.g. jet
substructures) are used to distinguish a potential signal from the background.
A binary classifier is trained to distinguish signal region events from
sideband events using these additional features. If the features are chosen
such that the distribution of background events in the signal region is
indistinguishable from those in the sideband, then in the absence of a signal
the classifier will be driven by statistical fluctuations between the two
event samples and will have poor performance on test data. If, however, the
signal region contains an additional population of signal events that is not
present or is very rare in the sideband, then the classifier may learn the
distribution of the signal events in $\\{y\\}$.
Given that the black-box data is simulated with a di-jet invariant mass
trigger, we use as our resonant mass variable the invariant mass between the
two highest $p_{\text{T}}$ $R=1$ anti-$k_{t}$ jets in the event, and the
orthogonal features will be the jet substructure variables
$\mathrm{Features:}~{}~{}m_{J,A},\;m_{J,B},\;\tau_{21,A}^{(1)},\;\tau_{21,B}^{(1)},\;\tau_{32,A}^{(1)},\;\tau_{32,B}^{(1)},$
(20)
where $A,B$ refer to the two $p_{\text{T}}$-ordered jets. In order to remove
some amount of correlation between the jet masses and $m_{JJ}$ in background
QCD events, we rescale the jet masses before they are input into the
classifiers
$m_{J}\rightarrow
m_{J}^{\prime}=\frac{m_{J}-30\;\text{GeV}}{m_{JJ}}+\frac{30\;\text{GeV}}{3000\;\text{GeV}}.$
(21)
The key part is the rescaling by dividing by $m_{JJ}$, since the $m_{J}$
distributions have a strong scaling with $m_{JJ}$. The additional offset by
$30\;\text{GeV}$ is not important, but was judged by eye to result in smaller
correlation between $m_{J}$ and $m_{JJ}$.
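For reference, Eq. 21 in code (masses in GeV):

```python
def rescale_jet_mass(m_j, m_jj):
    """Decorrelation transform of Eq. 21; both masses in GeV."""
    return (m_j - 30.0) / m_jj + 30.0 / 3000.0
```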
By construction, this strategy is sensitive only to signals that result from
the decay of a heavy resonance into two significantly lighter particles which
each decay into largely hadronic boosted final states. This still covers a
broad range of phenomenological possibilities, as the space of possible jet
substructures is large. This has the potential to be sensitive to the
signal in the R&D dataset and BB1, but not to that in BB3. We attempted to
apply a modified form to the signal in BB3 without success, as briefly
described at the end.
The statistical independence of training and test sets is critical, and in
order to retain as much statistical power as possible we perform a nested
cross-validation procedure to select signal-like events. A detailed explanation follows; a code skeleton of the outer loop is given after the list. There are four training loops in total: the scan over $m_{\text{hyp}}$, plus nested loops indexed by the labels $k,l,i$.
1. $k$
We split the entire dataset (including events outside the signal and sideband
regions) randomly into five $k$-folds, and when searching for a signal in the
$k$th fold we train a classifier using the remaining four folds. Given a pre-determined threshold efficiency $\epsilon_{\text{sel}}$, that fraction of highest-scoring events is chosen from the $k$th fold as judged by the
classifier trained on the other folds. The selected events from each fold are
then combined into a single histogram in $m_{\text{res}}$. A bump hunt is then
performed at $m_{\text{hyp}}$ using a fit of a simple function to data outside
the signal region to predict the expected background in the signal region. A
simple Poisson hypothesis test is performed on the observed event rate in the
signal region compared to the background expectation, with uncertainties in
the fit parameters assumed to follow a Gaussian distribution.
2. $l$
The training of the classifier for a single $k$-fold involves another layer of
cross-validation. This is due to the difficulty of a single classifier
learning a small difference between two distributions that are otherwise
identical besides statistical fluctuations, and overfitting to these
fluctuations is unavoidable. Multiple classifiers are liable to overfit in
different ways, and an ensemble model consisting of an average of multiple
individually-trained neural networks tends to be more robust, due to
destructive interference of overfitting and constructive interference of a
true signal. For each $k$, four classifiers are trained, labelled by $1\leq l\leq 5$, $l\neq k$. The $l$th classifier uses the $l$th fold of data as a
validation set and the remaining three folds as training data. The ensemble
model consists of the mean of the outputs of the individual neural networks.
3. $i$
For each $l$, multiple networks (in this work, three) are trained on the same
data and the best performing one is chosen as the corresponding input to the
ensemble model. The performance metric (evaluated on validation data) is the
selection efficiency on the signal region events of a selection cut on the
neural network output above a threshold determined to have a given efficiency
$\epsilon_{\text{cut}}$ on sideband events. In the present study
$\epsilon_{\text{cut}}$ is chosen to be 0.01, in order to be as small as
possible while avoiding being dominated by statistical fluctuations when the
number of validation events is small.
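The outer $k$-loop can be summarized by the following skeleton, where `train_ensemble` stands in for the inner $l$ and $i$ loops (ensembling over $l$, best-of-three over $i$) and all interfaces are hypothetical simplifications.

```python
import numpy as np

def select_signal_like(data, folds, k, train_ensemble, eps_sel):
    """Outer k-loop: train on the other four folds, then keep the fraction
    eps_sel of highest-scoring events in fold k."""
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != k])
    model = train_ensemble(data[train_idx])     # inner l and i loops
    scores = model.predict(data[folds[k]])
    cut = np.quantile(scores, 1.0 - eps_sel)
    return folds[k][scores >= cut]              # events entering the m_res histogram
```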
The neural networks are coded in Keras keras with Tensorflow tensorflow
backend. The architecture consists of four hidden layers each with 128 nodes.
The activation function of the first hidden layer is Leaky ReLU with inactive
gradient of 0.1, while the other hidden layers have ELU activation functions.
Dropout layers with probability 0.1 are added between each pair of hidden
layers. Adam adam is used for optimization with hyperparameters:
$\text{lr}=0.001$, $\beta_{1}=0.8$, $\beta_{2}=0.99$, $\text{decay}=5\times
10^{-4}$. The model is trained with batch size of 5000, the large number being
chosen to increase the chance of true signal events being in each batch. The
metric $\epsilon_{\text{cut}}$ is monitored on validation data; the model is
saved at the maximum value and training is halted if 250 epochs pass without
improvement. Training and validation events are reweighted so that the lower
and upper sidebands each have equal weight (which ensures that one is not
favoured over the other in training), and together they have the same total
weight as the signal region.
No scan or systematic optimization of hyperparameters was performed and many
of these choices are likely to be suboptimal.
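A sketch of the classifier just described in Keras; the six input features correspond to Eq. 20, and the `decay` argument follows the older Keras optimizer API in use at the time.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(n_features=6):
    """Four hidden layers of 128 nodes: LeakyReLU(0.1) on the first, ELU on
    the rest, dropout 0.1 between hidden layers, sigmoid output."""
    model = keras.Sequential([
        layers.Input(shape=(n_features,)),
        layers.Dense(128), layers.LeakyReLU(0.1), layers.Dropout(0.1),
        layers.Dense(128, activation='elu'), layers.Dropout(0.1),
        layers.Dense(128, activation='elu'), layers.Dropout(0.1),
        layers.Dense(128, activation='elu'),
        layers.Dense(1, activation='sigmoid'),
    ])
    opt = keras.optimizers.Adam(learning_rate=0.001, beta_1=0.8,
                                beta_2=0.99, decay=5e-4)
    model.compile(optimizer=opt, loss='binary_crossentropy')
    return model
```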
Data is selected in the window $2632\;\text{GeV}\leq m_{JJ}\leq
6000\;\text{GeV}$, and split into 16 equally log-spaced bins. A signal region
is defined as three adjacent bins, which corresponds to a width of around 15%.
The two bins adjacent above and below the signal region are defined as the
upper and lower sidebands. There are therefore ten overlapping signal regions,
starting centered at the fourth bin and ending centered at the 13th bin. This
strategy was chosen so that a signal cannot hide by being centered at a bin
boundary, split equally between signal region and sideband. The signal region
background is determined by a fit of the following function to the $m_{JJ}$
distribution in bins outside the signal region
$\frac{dN}{dm_{JJ}}=p_{0}\frac{(1-y)^{p_{1}}}{y^{p_{2}+p_{3}\log(y)}},~{}~{}~{}y=\frac{m_{JJ}}{13\;\text{TeV}}$
(22)
where $p_{i}$ are four free fit parameters. This function is used in ATLAS and
CMS diboson searches Aaboud:2017eta ; Sirunyan:2016cao .
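For illustration, Eq. 22 as a fit function usable with SciPy (the `bin_centers` and `counts` arrays in the commented example are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def dijet_shape(mjj, p0, p1, p2, p3):
    """Four-parameter dijet background shape of Eq. 22; mjj in GeV."""
    y = mjj / 13000.0                  # y = m_JJ / 13 TeV
    return p0 * (1 - y) ** p1 / y ** (p2 + p3 * np.log(y))

# Fit to binned counts outside the signal region, e.g.:
# popt, pcov = curve_fit(dijet_shape, bin_centers, counts, p0=[1e5, 10.0, 5.0, 0.0])
```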
#### 4.1.2 Results on LHC Olympics
This study was performed on BB1 and BB2 after the signal was unblinded.
However, compared to the original study Collins:2018epr ; Collins:2019jip , no changes to the algorithm were chosen on the basis of knowledge of the signal. The $p$-values obtained are shown in Fig. 34, for cuts at
efficiency 10%, 1%, and 0.2% (the solid black line is the result before any
selection). We find no significant excess in BB2, but a large $5\sigma$ excess
in BB1 at a resonance mass of $3500\;\mathrm{GeV}$. Fig. 35 shows the
distributions in $m_{JJ}$ obtained for the signal region centered around
$3500\,\mathrm{GeV}$ for BB2 (left) and BB1 (right) after a series of cuts.
Figure 34: $p$-values obtained from the analysis in the resonance mass scan
for BB2 (left) and BB1 (right) at selection efficiencies 10%, 1%, 0.2%. The
dashed black line is the result with no selection cut.
Figure 35: $m_{JJ}$ distributions obtained for BB2 (left) and BB1 (right) for
the signal region centered around $3500\,\mathrm{GeV}$ after a series of
selection cuts. The top line and data points correspond to no selection cut.
We can study the signal observed in BB1 in more detail by plotting
substructure distributions of selected events in the anomalous signal region,
Fig. 36. Grey points are the distribution of all events in the signal region
sample, while red points are the events in that sample that have been selected
by a cut on the classifier output with efficiency 0.5%. In the leftmost plot,
we see two clusters of events with jet masses of around $400$ and
$750\;\text{GeV}$ (and the reverse assignment), indicating that the two fat
jets are produced from the decays of boosted particles of these masses. The
middle plot indicates that the
signal-like events all have small $\tau_{21}$ for both jets, indicating that
they have a two-pronged structure. No strong clustering is observed in
$\tau_{32}$ (right plot).
Figure 36: Substructure distributions in the anomalous BB1 signal region for
signal-like (red), and background-like (grey) events. For this figure, signal-
like is defined by a selection on the classifier output with efficiency 0.5%.
#### 4.1.3 Lessons Learned
Compared to the original study Collins:2018epr ; Collins:2019jip , we found
that rescaling $m_{J}$ by $m_{JJ}$ is effective at sufficiently suppressing
the correlation between these variables. In the original study we had instead
removed events with jet mass above $500\;\text{GeV}$, since this is where the
neural networks focussed on finding these correlations, and a cut on high jet
masses severely distorts the QCD background shape by rejecting a very high
fraction of events at low $m_{JJ}$. The same strategy applied to BB1 would
have missed the signal.
Of course, the method strictly defined is clearly limited to finding signals
that look like two fat jets with substructure, and would therefore fail to
identify the signal in BB3. We attempted to apply a modified version of the
strategy, called ‘Tossed CWoLa SALAD’ (a variation on CWoLa SALAD (Simulation
Assisted Likelihood-free Anomaly Detection) 1815227 ). In this attempt, the
4-momenta of the ten hardest jets in each event act as the inputs to the
classifiers (with zero-padding in the case of fewer jets in an event), and the
total invariant mass of the system acts as the
resonant variable. The jet-momenta are rescaled by this mass in an attempt to
avoid correlations. The classifiers are trained simultaneously on BB3 data and
also on QCD simulation from the R&D dataset, but in this second dataset the
sideband and signal region labels are reversed (‘tossed’). If the simulated
background is similar to the true background, then this training strategy
penalizes attempts to learn background correlations. Nonetheless, all attempts
to apply this strategy and the original CWoLa SALAD strategy on BB3 led to
heavy sculpting of the background $m_{JJ}$ distribution. A more global
decorrelation strategy is apparently needed.
### 4.2 CWoLa and Autoencoders: Comparing Weak- and Unsupervised methods for
Resonant Anomaly Detection191919Authors: Jack H. Collins, Pablo Martín-Ramiro,
Benjamin Nachman, and David Shih.
#### 4.2.1 Machine Learning Setup
There are two techniques that show great potential at model-independent
anomaly detection: Classification Without Labels (CWoLa) Metodiev:2017vrx ;
Collins:2018epr ; Collins:2019jip and deep autoencoders Farina:2018fyg ;
Heimel:2018mkt ; Cerri:2018anq ; Roy:2019jae ; Blance:2019ibf . These
techniques have two important advantages over supervised methods. First, they
are model independent and therefore make it possible to extend the sensitivity
of current new physics searches to model-agnostic BSM scenarios. Second, they can learn
directly from real data and thus do not rely on simulations that may suffer
from potentially large mismodeling effects. In this section, we provide a
comparative study of CWoLa and an autoencoder (AE) using a signal similar to
the one released in the R&D dataset: two jets with masses
$m_{j_{1}}=m_{j_{2}}=500\;\mathrm{GeV}$. We examine the ability of the two
methods to identify the new physics signal at different cross sections and to
increase the significance of the signal excess. CWoLa is expected to reach an
excellent performance for large amounts of signal, while the AE should show a
robust performance in the limit of low signal statistics. Therefore, these two
approaches may exhibit a crossover in performance as the cross section varies,
which would be of great interest for real experimental searches.
The R&D dataset represents a general new physics scenario where a signal is
localized in one known dimension of phase space (in this case, the dijet
invariant mass $m_{JJ}$) on top of a smooth background. In this scenario,
CWoLa and the AE can be trained to exploit the information in the substructure
of the two jets to gain discriminating power between the signal and background
events. From the full dataset, we select all of the events in the range
$m_{JJ}\in[2800,5200]\;\mathrm{GeV}$ and split them uniformly in
$\log(m_{JJ})$ into $15$ bins. After selecting this range, $537304$ background
events remain in our sample. We consider the following set of input features
for each jet:
$Y_{i}=\left\\{m_{J},\,\sqrt{\tau_{1}^{(2)}}/\tau_{1}^{(1)},\,\tau_{21},\,\tau_{32},\,\tau_{43},\,n_{\text{trk}}\right\\}\,,$
(23)
where $\tau_{ij}^{(\beta)}=\tau_{i}^{(\beta)}/\tau_{j}^{(\beta)}$ denote ratios of $N$-subjettiness variables
(with angular exponent $\beta=1$ unless otherwise specified in the
superscript), $n_{\text{trk}}$ denotes the number of tracks of a given jet,
and jets are ordered by mass in descending order. For the autoencoder we add
two extra input features for each jet:
$\left\\{p_{\text{T}_{1}},\,p_{\text{T}_{2}},\eta_{1},\,\eta_{2}\right\\}$,
which lead to a significant performance improvement. For CWoLa, using these
extra input features produces an undesirable correlation between the jet
$p_{\text{T}}$ and $m_{JJ}$, which may help CWoLa learn $m_{JJ}$ and sculpt
artificial bumps in this distribution in the absence of signal.
##### Classification Without Labels (CWoLa)
The strategy that we follow to implement CWoLa is similar to the approach
described in Ref. Collins:2019jip . First, we build a signal region and a
sideband region to test for a signal hypothesis with mass
$m_{JJ}=m_{\text{peak}}$, where $m_{\text{peak}}$ is the mean mass of the
injected signal. The signal region contains all of the events in the three
bins centered around $m_{\text{peak}}$, while the sideband region includes all
of the events in the two bins below and above the signal region. The width of
the signal region is $435\;\mathrm{GeV}$, and the lower and upper sidebands
have a width of $262\;\mathrm{GeV}$ and $322\;\mathrm{GeV}$, respectively.
Note that in a real search the location of the mass peak of any potential
signal would be unknown, and therefore the mass hypothesis must be scanned as
described in Ref. Collins:2019jip .
After defining these two regions, the CWoLa approach is used to train a fully
supervised classifier to distinguish the events of the signal region from the
events of the sideband using the set of twelve input features that describe
the jet substructure of each event, presented in Eq. (23). If a signal is
present in the signal region with anomalous jet substructure, CWoLa should
learn the information that is useful to distinguish the signal and sideband
regions. This classifier can then be used to select signal-like events,
producing a new distribution in the dijet mass that may enhance the
significance of the signal excess. Note that the CWoLa performance should be
poor when no signal is present in the signal region; in this case, the signal
and sideband regions will be statistically identical and thus the classifier
should not be able to distinguish between the two regions.
The classifier that we use is a dense neural network with four hidden layers.
The first layer has $64$ nodes with ReLU activation, and the second through
fourth layers have $32$, $16$ and $4$ nodes respectively, with ELU activation.
The output layer has a sigmoid activation. The first three hidden layers are
followed by dropout layers with a $20\,\%$ dropout rate. We use the binary
cross-entropy loss function and the Adam optimizer with learning rate of
$0.001$ and learning rate decay of $5\cdot 10^{-4}$, and a batch size of
$20480$. The training data is reweighted such that the two sidebands have the
same total weight, the signal region has the same total weight as the sum of
the sidebands, and the sum of all event weights in the training data is equal
to the total number of training events. Although the two sideband regions have
different event rates, this reweighting procedure ensures that they contribute
equally to the training process and that the classifier output peaks around
$0.5$ if no signal is present in data.
In order to reduce any potential overfitting, a $5$-fold cross-validation
procedure is implemented. After standardizing all the input features, we
divide each bin of the full dataset into five parts to build five samples of
events of equal size. First, four of these samples are used to perform four
rounds of training and validation, using three different subsets for training
and one for validation each time, and the other sample is saved for testing.
For each cross-validation round, ten neural networks are trained for $200$
epochs on the same training and validation data using different
initializations. The performance of each classifier is measured on validation
data according to the metric $\epsilon_{\text{val}}$. This metric is defined
as the true positive rate for classifying signal region events as such,
calculated at a threshold with a false positive rate $z=0.5\,\%$ for
incorrectly classifying sideband region events. The best of the ten models is
saved at the end of each round, and the four selected models are used to build
an ensemble model, which is used to classify the events in the test set. The
output of this classifier can then be used to select the $x\,\%$ most signal-
like events in the test set. We repeat the same procedure for the five choices
of test set and combine the signal-like event subsamples into a final signal-
like sample. If a signal is present in data and CWoLa is able to find it, the
selected sample of signal-like events will show an enhanced excess in the
signal region on the $m_{JJ}$ plane.
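As a concrete illustration of the $\epsilon_{\text{val}}$ metric, the sketch
below is a minimal NumPy version, with arrays of classifier outputs as assumed
inputs: the threshold is fixed by the sideband scores and the signal-region
pass rate is evaluated at that threshold.

```python
# Minimal sketch of epsilon_val: the signal-region true positive rate at the
# threshold where a fraction z = 0.5% of sideband events is (mis)classified
# as signal-region-like.
import numpy as np

def epsilon_val(scores_sr, scores_sb, z=0.005):
    cut = np.quantile(scores_sb, 1.0 - z)  # sideband false positive rate = z
    return np.mean(scores_sr > cut)        # signal-region pass rate
```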
##### Autoencoder
In order to use all the available information from the events, we build two
different autoencoders, Autoencoder I and II, which are trained on Jet 1 and
Jet 2, respectively. Both autoencoders are trained and tested on a mixed
sample of signal and background events. The reason for this is that the signal
contamination ratio in the full sample for the $S/B$ benchmarks that we
consider is small enough that the AE does not learn the potentially anomalous
feature distribution of the signal events. For each jet, we build an autoencoder
ensemble that is trained on a randomly selected sample of $50000$ events for
only $1$ epoch. We train twenty different models (i.e. the ensemble
components) and compute the reconstruction error for each event. The final
reconstruction error of an event is obtained by computing the mean over the
twenty different ensemble components. The autoencoder ensembles are then used
to classify events in the test set, by selecting the $x\,\%$ most signal-like
events by applying a simultaneous cut on the reconstruction losses of the
autoencoders trained on Jet $1$ and Jet $2$. Since the autoencoders are
trained mostly on background events, signal events are expected to yield
larger reconstruction losses if the signal is sufficiently different from the
background.
In this work, the two autoencoders that we consider are dense neural networks
with seven hidden layers. The autoencoders have an input layer with $8$ nodes.
The encoder has three hidden layers of $64$, $32$ and $16$ nodes, and is
followed by a bottleneck layer with $1$ node and linear activation. Finally,
the decoder has 3 hidden layers of $16$, $32$ and $64$ nodes. All of the
hidden layers have ReLU activation. The output layer is made of $8$ nodes and
has linear activation. We use the Mean Squared Error (MSE) loss function,
the Adam optimizer with learning rate of $0.001$ and a batch size of $5120$.
We standardize all the input features from the training and test sets using
training information only.
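A minimal Keras sketch of one such autoencoder and of the twenty-model
ensembling is given below; aside from the layer sizes, activations, loss,
optimizer, batch size, and single training epoch quoted above, all details are
illustrative assumptions.

```python
# Minimal sketch of the dense autoencoder (8 -> 64 -> 32 -> 16 -> 1 -> 16 ->
# 32 -> 64 -> 8, ReLU hidden layers, linear bottleneck and output, MSE loss)
# and of the twenty-model, one-epoch ensemble described above.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_autoencoder():
    return keras.Sequential([
        keras.Input(shape=(8,)),
        layers.Dense(64, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="linear"),    # bottleneck
        layers.Dense(16, activation="relu"),
        layers.Dense(32, activation="relu"),
        layers.Dense(64, activation="relu"),
        layers.Dense(8, activation="linear"),
    ])

def ensemble_reco_error(x_train, x_test, n_models=20):
    errors = []
    for _ in range(n_models):
        ae = build_autoencoder()
        ae.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
        ae.fit(x_train, x_train, batch_size=5120, epochs=1, verbose=0)
        errors.append(np.mean((ae.predict(x_test) - x_test) ** 2, axis=1))
    return np.mean(errors, axis=0)  # per-event mean over ensemble components
```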
#### 4.2.2 Results on LHC Olympics
The goal of this work is to compare the performance of CWoLa and the AE at
different cross sections. For this purpose, we define a set of eight
benchmarks with the same number of background events and different amounts of
injected signal events. In particular, we consider a set of benchmarks
distributed over the range $S/B\in[1.2\cdot 10^{-3},6\cdot 10^{-3}]$ in the
signal region. To test the consistency of both models in the absence of
signal, we consider a final benchmark with no signal events. For each $S/B$
benchmark, we present results averaged over five independent runs using a
random subset of signal events each time. The $S/B$ range that we consider is
chosen to exhibit the complementarity of the two methods for different amounts
of signal, and the observed behaviors persist beyond these limits.
We analyze the performance of CWoLa and the AE according to two different
metrics. First, we measure the performance of the two methods according to the
AUC metric. The AUC score is computed using all the available signal events to
reduce any potential overfitting. Second, we compare the performance of CWoLa
and the AE at increasing the significance of the signal region excess. For
this purpose, we use the following $4$-parameter function Aad:2019hjw ;
Sirunyan:2018xlo to fit the smooth background distribution:
$d\sigma/dm_{JJ}=(p_{0}(1-x)^{p_{1}})/(x^{p_{2}+p_{3}\ln(x)})$, with $x=m_{JJ}/13\;\text{TeV}$ as in Eq. (22). This function
is used to estimate the background density outside of the signal region and
then the fit result is interpolated into the signal region. The number of
expected and observed events in the signal region are compared and a p-value
is calculated to evaluate the significance of any potential excess.
We present results showing the performance of CWoLa and the AE for different
$S/B$ ratios according to the two previously defined metrics in Fig. 37. The
left plot shows results for the AUC metric, while the right plot shows the
models' performance at increasing the significance of the signal region excess.
First, the AUC metric shows that CWoLa achieves very good discrimination power
between signal and background events for large $S/B$ ratios, reaching AUC
values above $0.90$ and approaching the $0.98$ score from a fully supervised
classifier. As the amount of injected signal in the signal region is
decreased, the amount of useful information that allows CWoLa to discriminate
between the signal and sideband regions during training is reduced. As a
consequence, the classifier struggles to learn the signal features and its
performance drops in testing. By contrast, the AE shows a solid performance in
the full $S/B$ range. This is caused by the fact that once the AE learns to
reconstruct the event sample, its performance remains independent of the
amount of signal present in this sample as long as the contamination ratio is
sufficiently small. Interestingly, the performance of the AE trained on Jet 2
is superior to the one trained on Jet 1, which suggests that using full event
information can be very important. Note that the AUC scores from CWoLa and the
AE cross at $S/B\sim 3\cdot 10^{-3}$.
The p-values analysis shows two interesting patterns. First, CWoLa is able to
enhance the significance of the signal region excess by $3\sigma-8\sigma$ for
$S/B$ ratios above $\sim 3\cdot 10^{-3}$, even when the fit to the full event
sample shows no deviation from the background-only hypothesis. Second, the AE
shows a superior performance below this range, increasing the significance of
the excess by at least $2\sigma-3\sigma$ in the low $S/B$ region where CWoLa
is not sensitive to the signal. Crucially, there is again an intersection in
the performance of CWoLa and the AE as measured by their ability to enhance
the significance of the signal region excess. Therefore, our results show that
the two methods are complementary for resonant anomaly detection depending on
the amount of signal.
Figure 37: Left plot: Performance of CWoLa (blue), the Autoencoder trained on
Jet 1 (brown) and Jet 2 (green), and their average (orange), as measured by
the AUC metric. The error bars denote the standard deviation on the AUC
metric. Right plot: Significance of the signal region excess after applying
different cuts for CWoLa (blue) and the Autoencoder (orange). The best cuts
for CWoLa and the AE ensemble correspond to the $0.3\%$ and the (Jet 1, Jet 2)
= $(80\,\%,2.5\,\%)$ event selections, respectively. The initial significance
of the excess ($100\,\%$ selection) is shown in green. Note that the fit to
the raw distribution (i.e. no cut applied) is lower than the naive expected
significance $S/\sqrt{B}$ due to a downward fluctuation in the number of
background events in the signal region.
#### 4.2.3 Lessons Learned
We have compared weakly-supervised and unsupervised anomaly detection methods
in a fully hadronic dijet resonance search in the context of the LHC Olympics
$2020$. We used CWoLa and deep autoencoders as representative models of the
two classes, and examined their ability to identify the signal and enhance the
sensitivity of the signal excess at different cross sections. Our results
demonstrate that CWoLa is very effective for sizable amounts of signal,
increasing the significance of a negligible excess above the $5\sigma$
discovery limit. The AE showed a solid performance at low signal rates,
raising the significance of the excess by up to $3\sigma$ in a region where
CWoLa was not sensitive to the signal. Therefore, both techniques are
complementary and can be used together for anomaly detection at the LHC and
beyond.
We feel the LHC Olympics $2020$ has been a very enriching experience that
allowed us to deepen our understanding of machine learning methods for LHC
physics and learn more about the work that has been done in this field. We
really hope to repeat this experience next year.
### 4.3 Tag N’ Train202020Authors: Oz Amram and Cristina Mantilla Suarez.
Code to reproduce all of our results can be found on
https://github.com/OzAmram/TagNTrain.
#### 4.3.1 Method
Tag N’ Train Amram:2020ykb is a technique to train classifiers on unlabeled
events that is naturally employed in an anomaly search. The Tag N’ Train (TNT)
approach is based on the premise that signal events contain two or more
anomalous objects (hereafter called Object-1 and Object-2) that can be used
independently for classification. If this is the case, one can use Object-1 in
each event to tag examples as signal-like or background-like.
These signal-rich and background-rich samples can then be used to train a
classifier for Object-2. This training step uses the Classification Without
Labels (CWoLa) method Metodiev:2017vrx , in which a classifier is trained by
using mixed samples of signal and background rather than fully labeled events.
One can then repeat the procedure to train a classifier for Object-1 as well.
In order to perform the initial tagging, one must be able to at least weakly
classify the anomalous objects to begin with, and so the technique must be
seeded by initial classifiers. In a jet-based anomaly search, autoencoders can
be used as the initial classifiers because they were previously shown to be
effective unsupervised classifiers of anomalous jets Farina:2018fyg ;
Heimel:2018mkt . Overall, TNT takes as input a set of unlabeled data events
and two initial classifiers, and outputs two new classifiers designed to have
improved performance. Because the technique works better if the initial
classifier can create a larger separation between signal and background in the
mixed samples, multiple iterations of this technique (where the output
classifiers are used with a new data sample to train new classifiers) can
further improve classification performance until a plateau is reached. The
technique is summarized graphically in Fig. 38.
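A schematic sketch of a single TNT iteration is given below; the
`train_classifier` helper and the quantile cuts are illustrative assumptions
(the cut values used in the actual search are quoted later in this section).

```python
# Schematic sketch of one Tag N' Train iteration: Object-1 scores tag events
# into signal-rich and background-rich samples, and a CWoLa-style classifier
# is then trained on the Object-2 features of those samples.
import numpy as np

def tnt_iteration(scores_o1, features_o2, sig_frac=0.20, bkg_frac=0.40):
    sig_cut = np.quantile(scores_o1, 1.0 - sig_frac)  # most signal-like O1
    bkg_cut = np.quantile(scores_o1, bkg_frac)        # most background-like O1
    x = np.concatenate([features_o2[scores_o1 > sig_cut],
                        features_o2[scores_o1 < bkg_cut]])
    y = np.concatenate([np.ones(np.sum(scores_o1 > sig_cut)),
                        np.zeros(np.sum(scores_o1 < bkg_cut))])
    # CWoLa step: sample membership stands in for true labels.
    return train_classifier(x, y)  # hypothetical helper returning an O2 tagger
```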
Figure 38: An illustration of the Tag N’ Train technique. Here O1 and O2
represent Object-1 and Object-2, the two components of the data one wishes to
train classifiers for.
The usage of TNT in an anomaly search requires data events to be partitioned
into three subsets. The first subset is used to train the autoencoders, the
second subset is used to perform the TNT technique that trains improved
classifiers, which are then used on the third subset to select anomalous
events and search for a signal. A nested cross validation approach, where the
different subsets are swapped from being used for training or searching, can
be used in order to achieve maximum sensitivity.
Our search only targeted dijet resonances, where we took the two highest $p_{\text{T}}$
jets as the dijet candidate. In order to apply the Tag N’ Train technique, we
treat our Object-1 as the more massive jet and Object-2 as the less massive
jet of these two jets. We found that one can incorporate the assumption of a
resonant signal by requiring that signal-like events fall in a particular dijet
mass window and scanning this window over the full range during a search (as in
Collins:2018epr ; Collins:2019jip ). This requirement helps to better isolate
resonant signals and improves the performance of the resulting classifier. Our
implementation of the TNT-based anomaly search used jet images as the inputs
for both the autoencoders and the TNT classifiers, with CNN-based model
architectures trained with the Adam optimizer. We chose a latent size of 6 for
the autoencoder based on results of the previous studies in the literature
Farina:2018fyg ; Heimel:2018mkt . Based on results on the R&D Dataset we found
that the second iteration of the Tag N’ Train technique generally reached the
plateau performance and so we used 2 iterations in our search. No optimization
of the model architectures and optimizer hyperparameters was attempted. A
rough optimization of the selection of signal-like and background-like samples
in the TNT technique was performed using the R&D dataset. In the first
iteration, we used the 40% of events with the lowest autoencoder
reconstruction losses as the background-like sample and the 20% with the
highest as the signal-like sample. In the second
iteration, we once again used the 40% of events with the lowest scores as the
background-rich sample, but tightened the signal-like cut to the top 10% of
events. On the R&D dataset we found the performance was quite insensitive to
the exact background-like cut used (as the resulting sample was always nearly
pure background) and moderately sensitive to the signal-like cut used.
On the Blackboxes we used 200k events to train the autoencoders, 400k to run
Tag N’ Train (200k for each iteration) and searched for a signal in remaining
400k events. Due to limited computational resources, we did not run the full
cross validation, but rather switched the 400k events used for training and
searching and kept the same autoencoders. Thus only 800k out of the 1M events
were actually used to determine the significance of the anomaly. We used the
alteration of TNT that assumes a resonance by requiring that signal events fall
in a dijet mass window, and scanned over the dijet mass range of 3000 to 5000
GeV with window sizes of 500 GeV. In searching for a signal, we selected events
where both jets' scores were in the top 3% most signal-like (for an overall
efficiency of roughly 0.1%) and then did a bump hunt. We generally found that
cutting as tightly as possible while still having enough statistics for a
stable fit maximized our sensitivity. We did not mix events from the two sub-
samples but rather fit each simultaneously to produce a single p-value.
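For concreteness, a minimal sketch of this final selection is given below,
with the arrays of per-jet classifier scores as assumed inputs.

```python
# Keep events where both jet scores are in the top 3% most signal-like,
# for an overall efficiency of roughly 0.1%.
import numpy as np

def select_anomalous(scores_j1, scores_j2, frac=0.03):
    cut1 = np.quantile(scores_j1, 1.0 - frac)
    cut2 = np.quantile(scores_j2, 1.0 - frac)
    return (scores_j1 > cut1) & (scores_j2 > cut2)  # boolean event mask
```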
#### 4.3.2 Results on LHC Olympics
On the R&D dataset we compared the performance of the Tag N’ Train classifiers
to autoencoders and the CWoLa hunting Collins:2018epr ; Collins:2019jip
method for various amounts of signal in the dataset (9%, 1%, 0.3% and 0.1% of
the total dataset). We generally found the Tag N’ Train approach
to be competitive with these other methods. For the 1% signal test, TNT
produced a classifier that is somewhat worse than the one produced with TNT
with an additional dijet mass cut (TNT + Mjj), but still had significantly
improved performance with respect to the autoencoder. For the 0.3% and 0.1%
signal tests, there was too little signal for the TNT classifier to learn
from, and TNT performs significantly worse than the autoencoder. The TNT + Mjj
classifier performs similarly to the one trained using CWoLa hunting for the 3
tests with larger signal. For the 0.1% test the TNT + Mjj classifier is able
to achieve performance better than that of the CWoLa hunting method,
but does not improve with respect to the autoencoders approach. More details
along with ROC curves are in the TNT paper Amram:2020ykb .
When applying the Tag N’ Train search to Blackbox 1 we found a resonance at
around 3800 $\pm$ 50 GeV with a local significance of 4$\sigma$. The bump-hunt
plot for one of the subsets is shown in Fig. 39.
Figure 39: Events in the first data subset after final selection for Blackbox
1. The signal peak can be seen slightly above 3800 GeV. The local p-value for
just this subset of the data was around 3$\sigma$.
We had difficulty characterizing the nature of the signal as that was not
extensively tested on the R&D dataset. We reported that one of the resonance’s
daughters had a mass of 270 $\pm$ 40 GeV (this was meant to be the lighter
daughter) and made a very rough guess of the total number of signal events
present.
In an initial run over Blackboxes 2 and 3 we did not see any significant
evidence of a signal, and we did not revisit Blackboxes 2 and 3 once the
results of Blackbox 1 were revealed.
#### 4.3.3 Lessons Learned
It is not surprising that our technique was not able to find the signal in
Blackbox 3, because our implementation of TNT focused only on dijet resonances
where both jets had anomalous substructure, while Blackbox 3 had three-jet
decays and its dijet decays involved gluon jets.
We were happy to find the correct resonance on Blackbox 1 but had difficulty
characterizing the signal. Because we were using jet images and CNNs, it was
not straightforward to interpret what our models had learned to be signal-like.
For future studies it may be interesting to explore using more sophisticated
techniques to attempt to understand what a model like a CNN has learned, or
use models with higher level features that are more interpretable.
Additionally, we tried plotting the distribution of jet masses of signal-like
events, but we knew that our technique distorted the jet mass distribution
(selecting higher jet masses as preferentially signal-like). However we had
not extensively studied this effect, making it difficult to extract the signal
jet masses from the distributions of most signal-like events. We think with
more deliberate study of these effects and/or using more interpretable model
architectures, characterizing basic aspects of the signal (number of prongs,
jet masses, etc.) should be possible. What poses a more significant challenge
is trying to estimate the signal cross section (total amount of signal present
in the dataset) with an anomaly detection search that features a cut meant to
isolate signal events. One can always set a lower bound based on the estimated
number of signal events in the final fit, however because these events are
selected with quite a low selection efficiency it will usually be a poor lower
bound. Without specifying a particular model, one cannot know the signal
efficiency of the selection imposed so it is difficult to estimate how far
this lower bound is from the true amount of signal. An approach that could be
taken would be to try to calibrate the sensitivity of a technique in mock
experiments on simulated datasets where the amount of signal is known. However
it is likely that such a calibration, a mapping from observed p-value to total
amount of signal present, depends on the nature of the signal and will not be
universal. Some signals (e.g. those containing more exotic substructures) may
be easier to find than others. Thus such a procedure would face
difficult-to-estimate modeling uncertainties even if performed after signal
characterization has been attempted.
### 4.4 Simulation Assisted Likelihood-free Anomaly Detection212121Authors:
Anders Andreassen, Benjamin Nachman, and David Shih. The code can be found at
https://github.com/bnachman/DCTRHunting.
While learning directly from data can mitigate model biases, it is also useful
to incorporate information from background simulations. These simulations are
only an approximation to the Standard Model, but they include a wealth of
physics knowledge at all energy scales relevant for collider reactions. This
section describes an approach that uses a background simulation in a way that
depends as little as possible on the simulations. In particular, a neural
network based on the Deep neural networks using Classification for Tuning and
Reweighting (Dctr) protocol Andreassen:2019nnm is trained in a region of
phase space that is largely devoid of signals. In a resonance search, this
region can be isolated using sidebands in the resonant feature. The
reweighting function morphs the simulation into the data and is parameterized
in the resonant feature(s). The model is then interpolated to the signal
region, and the reweighted background simulation can be used for both
enhancing signal sensitivity and estimating the background. As deep learning
classifiers can naturally probe high dimensional spaces, this reweighting
model can in principle exploit the full phase space for both enhancing signal
sensitivity and estimating the Standard Model background.
#### 4.4.1 Method
Let $m$ be a feature (or set of features) that can be used to localize a
potential signal in a signal region (SR). Furthermore, let $x$ be another set
of features which are useful for isolating a potential signal. For the LHC
Olympics, $m$ will be the invariant mass of two jets and $x$ includes
information about the substructure of the two jets. The Simulation Assisted
Likelihood-free Anomaly Detection (Salad) method then proceeds as follows:
1. 1.
Train a classifier $f$ to distinguish data and simulation for
$m\not\in\text{SR}$. This classifier is parameterized in $m$ by simply
augmenting $x$ with $m$, $f=f(x,m)$ Cranmer:2015bka ; Baldi:2016fzo . If $f$
is trained using the binary cross entropy or the mean squared error loss, then
asymptotically, a weight function $w(x|m)$ is defined by
$\displaystyle w(x|m)\equiv\frac{f(x,m)}{1-f(x,m)}=\frac{p(x|\text{data})}{p(x|\text{simulation})}\times\frac{p(\text{data})}{p(\text{simulation})},$
(24)
where the last factor in Eq. 24 is an overall constant that is the ratio of
the total amount of data to the total amount of simulation. This property of
neural networks to learn likelihood ratios has been exploited for a variety of
full phase space reweighting and parameter estimation proposals in high energy
physics (see e.g. Andreassen:2019nnm ; Brehmer:2018hga ; Brehmer:2018eca ;
Brehmer:2018kdj ; Cranmer:2015bka ; Andreassen:2019cjw ).
2. 2.
Simulated events in the SR are reweighted using $w(x|m)$. The function
$w(x|m)$ is interpolated automatically by the neural network. A second
classifier $g(x)$ is used to distinguish the reweighted simulation from the
data. This can be achieved in the usual way with a weighted loss function such
as the binary cross-entropy:
$\displaystyle\text{loss}(g(x))=-\sum_{m_{i}\in\text{SR}_{\text{data}}}\log
g(x_{i})-\sum_{m_{i}\in\text{SR}_{\text{simulation}}}w(x_{i}|m_{i})\log(1-g(x_{i})).$
(25)
Events are then selected with large values of $g(x)$.
Asymptotically222222Sufficiently flexible neural network architecture, enough
training data, and an effective optimization procedure., $g(x)$ will be
monotonically related to the optimal classifier:
$\displaystyle\frac{g(x)}{1-g(x)}\propto\frac{p(x|\text{signal+background})}{p(x|\text{background})}.$
(26)
It is important that the same data are not used for training and testing. The
easiest way to achieve this is using different partitions of the data for
these two tasks. One can make use of more data with a cross-validation
procedure Collins:2018epr ; Collins:2019jip .
3. 3.
One could combine the previous step with a standard data-driven background
estimation technique like a sideband fit or the ABCD method. However, one can
also directly use the weighted simulation to predict the number of events that
should pass a threshold requirement on $g(x)$:
$\displaystyle
N_{\text{predicted}}(c)=\sum_{m_{i}\in\text{SR}_{\text{simulation}}}w(x_{i}|m_{i})\mathbb{I}[g(x_{i})>c],$
(27)
for some threshold value $c$ and where $\mathbb{I}[\cdot]$ is the indicator
function that is one when its argument is true and zero otherwise. The
advantage of Eq. 27 over other data-based methods is that $g(x)$ is allowed to
be correlated with $m$; for sideband fits, by contrast, threshold requirements
on $g$ must not sculpt local features in the $m$ spectrum. A minimal sketch of
the weights of Eq. 24 and of this background prediction follows.
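The sketch below is a minimal NumPy rendering of these two ingredients, with
the trained classifier outputs as assumed inputs.

```python
# Minimal sketch of the SALAD ingredients: likelihood-ratio weights from the
# parameterized classifier output (Eq. 24) and the weighted background
# prediction of Eq. 27.
import numpy as np

def dctr_weights(f_out):
    # w(x|m) = f(x, m) / (1 - f(x, m)), up to an overall constant
    return f_out / (1.0 - f_out)

def predicted_background(g_out_sim, w_sim, c):
    # N_predicted(c) = sum over SR simulation of w * I[g(x) > c]
    return np.sum(w_sim * (g_out_sim > c))
```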
#### 4.4.2 Results on LHC Olympics
The R&D dataset was used for the results presented in this section. The first
step of the Dctr reweighting procedure is to train a classifier to distinguish
the ‘data’ (Pythia) from the ‘simulation’ (Herwig) in a sideband region. The
next step for Salad is to interpolate the reweighting function. The neural
network is trained conditional on $m_{jj}$ and so it can be evaluated in the
SR for values of the invariant mass that were not available during the network
training. Note that the signal region must be chosen large enough so that the
signal contamination in the sideband does not bias the reweighting function.
Figure 40 shows a classifier trained to distinguish ‘data’ and ‘simulation’ in
the signal region before and after the application of the interpolated Dctr
model as well as $\tau_{21}$. As expected, the neural network is a linear
function of the likelihood ratio (as seen in the ratio panels), and the
closure after the interpolated reweighting is excellent.
Figure 40: A histogram of the classifier output (left) and the subleading
$\tau_{21}$ (right) for a neural network trained to distinguish ‘data’
(Pythia) and ‘simulation’ (Herwig) in the signal region. The ratio between the
‘simulation’ (Herwig) or ‘simulation + Dctr’ and ‘data’ (Pythia) is depicted
by orange circles (green squares) in the lower panels. Figure from Ref.
Andreassen:2020nkr .
After reweighting the signal region to match the data, the next step of the
search is to train a classifier to distinguish the reweighted simulation from
the data in the signal region. If the reweighting works exactly, then this new
classifier will asymptotically learn
$p(\text{signal}+\text{background})/p(\text{background})$. If the reweighting
is suboptimal, then some of the classifier capacity will be diverted to
learning the residual difference between the simulation and background data.
If the reweighted simulation is nothing like the data, then all of the
capacity will go towards this task and it will not be able to identify the
signal. There is therefore a tradeoff between how different the (reweighted)
simulation is from the data and how different the signal is from the
background. If the signal is much more different from the background than the
simulation is from the background data, it is possible that a sub-optimally
reweighted simulation will still be able to identify the signal.
Figure 41 shows the sensitivity of the Salad tagger to signal as a function of
the signal-to-background ratio ($S/B$) in the signal region. In all cases, the
background is the QCD simulation using Pythia. The Pythia lines correspond to
the case where the simulation follows the same statistics as the data ($=$
Pythia). When the $S/B\sim\mathcal{O}(1)$, then the performance in Fig. 41 is
similar to a fully supervised classifier. As $S/B\rightarrow 0$, the Pythia
curves approach the random classifier, with a max significance improvement of
unity. The significance improvement quickly drops to unity for Herwig when
$S/B\lesssim 1\%$, indicating that the network is spending more capacity on
differentiating Pythia from Herwig than finding signal. Salad significantly
improves the performance of the Herwig-only approach. In particular, the Salad
tagger is effective down to about $S/B\sim 0.5\%$, whereas the Herwig-only
tagger is only able to provide useful discrimination power down to about
$S/B\sim 1\%$.
The performance gains can be combined with a sideband background estimation
strategy, as long as threshold requirements on the classifier do not sculpt
bumps in the $m_{jj}$ spectrum. However, there is also an opportunity to use
Salad to directly estimate the background from the interpolated simulation.
The right plot of Fig. 41 illustrates the efficacy of the background
estimation for a single classifier trained in the absence of signal. Without
the Dctr reweighting, the predicted background rate is too low by a factor of
two or more below 10% data efficiency. With the interpolated reweighting
function, the background prediction is accurate within a few percent down to
about 1% data efficiency.
Figure 41: Left: the significance improvement at a fixed 50% signal
efficiency as a function of the signal-to-background ratio ($S/B$) in the
signal region. The evaluation of these metrics requires signal labels, even
though the training of the classifiers themselves does not use signal labels.
Error bars correspond to the standard deviation from training five different
classifiers. Each classifier is itself the truncated mean over ten random
initializations. Right: The predicted efficiency normalized to the true data
efficiency in the signal region for various threshold requirements on the NN.
The $x$-axis is the data efficiency from the threshold. The error bars are due
to statistical uncertainties. Figure from Ref. Andreassen:2020nkr .
#### 4.4.3 Lessons Learned
In practice, the difficulty in using Salad to directly estimate the background
is the estimation of the residual bias. One may be able to use validation
regions between the signal region and sideband region, but such a region will
never require as much interpolation as the signal region itself. One can rely on
simulation variations and auxiliary measurements to estimate the systematic
uncertainty from the direct Salad background estimation, but estimating high-
dimensional uncertainties is challenging Nachman:2019dol ; Nachman:2019yfl .
With a low-dimensional reweighting or with a proper high-dimensional
systematic uncertainty estimate, the parameterized reweighting used in Salad
should result in a lower uncertainty than directly estimating the uncertainty
from simulation. In particular, any nuisance parameters that affect the
sideband region and the signal region in the same way will cancel when
reweighting and interpolating.
While the numerical Salad results presented here did not fully achieve the
performance of a fully supervised classifier trained directly with inside
knowledge about the data, there is room for improvement. In particular, a
detailed hyperparameter scan could improve the quality of the reweighting.
Additionally, calibration techniques could be used to further increase the
accuracy Cranmer:2015bka . Future work will investigate the potential of Salad
to analyze higher-dimensional feature spaces as well as classifier features
that are strongly correlated with the resonant feature. It will also be
interesting to compare Salad with other recently proposed model independent
methods. When the nominal background simulation is an excellent model of
nature, Salad should perform similarly to the methods presented in Ref.
DAgnolo:2018cun ; DAgnolo:2019vbw and provide a strong sensitivity to new
particles. In other regimes where the background simulation is biased, Salad
should continue to provide a physics-informed but still mostly
background/signal model-independent approach to extend the search program for
new particles at the LHC and beyond.
### 4.5 Simulation-Assisted Decorrelation for Resonant Anomaly
Detection232323Authors: Kees Benkendorfer, Luc Le Pottier, Benjamin Nachman.
The code can be found at https://github.com/bnachman/DCTRHunting.
In this section, two weakly supervised approaches are studied: Classification
without Labels (CWoLa) Metodiev:2017vrx ; Collins:2018epr ; Collins:2019jip ;
collaboration2020dijet and Simulation Assisted Likelihood-free Anomaly
Detection (Salad) Andreassen:2020nkr . CWoLa is a method that does not depend
on simulation and achieves signal sensitivity by comparing a signal region
with nearby sideband regions in the resonance feature. As a result, CWoLa is
particularly sensitive to dependencies between the classification features and
the resonant feature. Salad uses a reweighted simulation to achieve signal
sensitivity. Since it never directly uses the sideband region, Salad is
expected to be more robust than CWoLa to dependencies. In order to recover the
performance of CWoLa in the presence of significant dependence between the
classification features and the resonant feature, a new method called
simulation augmented CWoLa (SA-CWoLa) is introduced. The SA-CWoLa approach
augments the CWoLa loss function to penalize the classifier for learning
differences between the signal region and the sideband region in simulation,
which is signal-free by construction. All of these methods will be
investigated using the correlation test proposed in Ref. Nachman:2020lpy .
#### 4.5.1 Method
For a set of features $(m,x)\in\mathbb{R}^{n+1}$, let
$f:\mathbb{R}^{n}\rightarrow[0,1]$ be parameterized by a neural network. The
observable $m$ is special, for it is the resonance feature that should be
relatively independent of $f(x)$. The signal region (SR) is defined by an
interval in $m$ and the sidebands (SB) are neighboring intervals.
All neural networks were implemented in Keras keras with the Tensorflow
backend tensorflow and optimized with Adam adam . Each network is composed of
three hidden layers with 64 nodes each and uses the rectified linear unit
(ReLU) activation function. The sigmoid function is used after the last layer.
Training proceeds for 10 epochs with a batch size of 200. None of these
parameters were optimized; it is likely that improved performance could be
achieved with an in-situ optimization based on a validation set.
##### Simulation Assisted Likelihood-free Anomaly Detection (SALAD)
The Salad network Andreassen:2020nkr is optimized using the following loss:
$\displaystyle\mathcal{L}_{\text{SALAD}}[f]=-\sum_{i\in\text{SR,data}}\log(f(x_{i}))-\sum_{i\in\text{SR,sim.}}w(x_{i},m)\log(1-f(x_{i}))\,,$
(28)
where $w(x_{i},m)=g(x_{i},m)/(1-g(x_{i},m))$ are a set of weights using the
Classification for Tuning and Reweighting (Dctr) Andreassen:2019nnm method.
The function $g$ is a parameterized classifier Cranmer:2015bka ; Baldi:2016fzo
trained to distinguish data and simulation in the sideband:
$\displaystyle\mathcal{L}[g]=-\sum_{i\in\text{SB,data}}\log(g(x_{i},m))-\sum_{i\in\text{SB,sim.}}\log(1-g(x_{i},m))\,.$
(29)
The above neural networks are optimized with binary cross entropy, but one
could use other functions as well, such as the mean-squared error.
Intuitively, the idea of Salad is to train a classifier to distinguish data
and simulation in the SR. However, there may be significant differences
between the background in data and the background simulation, so a reweighting
function is learned in the sidebands that makes the simulation look more like
the background in data.
##### Simulation Augmented Classification without Labels (SA-CWoLa)
The idea of CWoLa Metodiev:2017vrx is to construct two mixed samples of data
that are each composed of two classes. Using CWoLa for resonant anomaly
detection Collins:2018epr ; Collins:2019jip , one can construct the mixed
samples using the SR and SB. In the absence of signal, the SR and SB should be
statistically identical and therefore the CWoLa classifier does not learn
anything useful. However, if there is a signal, then it can detect the
presence of a difference between the SR and SB. In practice, there are small
differences between the SR and SB because there are dependencies between $m$
and $x$, and so CWoLa will only be able to find signals that introduce a bigger
difference than is already present in the background. The CWoLa anomaly detection
strategy was recently used in a low-dimensional application by the ATLAS
experiment collaboration2020dijet .
We propose a modification of the usual CWoLa loss function in order to
construct a simulation-augmented (SA) CWoLa classifier:
$\displaystyle\mathcal{L}_{\text{SA-CWoLa}}[f]=-\sum_{i\in\text{SR,data}}\log(f(x_{i}))-\sum_{i\in\text{SB,data}}\log(1-f(x_{i}))+\lambda\left(\sum_{i\in\text{SR,sim.}}\log(f(x_{i}))+\sum_{i\in\text{SB,sim.}}\log(1-f(x_{i}))\right)\,,$
(30)
where $\lambda>0$ is a hyper-parameter. The limit $\lambda\rightarrow 0$ is
the usual CWoLa approach and for $\lambda>0$, the classifier is penalized if
it can distinguish the SR from the SB in the (background-only) simulation. In
order to help the learning process, the upper and lower sidebands are given
the same total weight as each other and, together, the same total weight as the SR.
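A minimal TensorFlow sketch of the loss in Eq. (30) is given below; the
integer sample encoding (0: SB data, 1: SR data, 2: SB sim., 3: SR sim.) and
the batching scheme are illustrative assumptions.

```python
# Minimal sketch of the SA-CWoLa loss of Eq. (30). f is the classifier output
# on a batch; y encodes the sample each event came from (see lead-in).
import tensorflow as tf

def sa_cwola_loss(f, y, lam=0.5):
    eps = 1e-7
    log_f = tf.math.log(tf.clip_by_value(f, eps, 1.0))
    log_1mf = tf.math.log(tf.clip_by_value(1.0 - f, eps, 1.0))
    mask = lambda k: tf.cast(tf.equal(y, k), f.dtype)
    # Usual CWoLa terms: SR data vs. SB data ...
    data_term = -tf.reduce_sum(mask(1) * log_f + mask(0) * log_1mf)
    # ... plus the penalty: distinguishing SR from SB in the (background-only)
    # simulation makes this term larger, increasing the loss.
    sim_term = tf.reduce_sum(mask(3) * log_f + mask(2) * log_1mf)
    return data_term + lam * sim_term
```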
#### 4.5.2 Results on LHC Olympics
The R&D dataset is used for the results presented here. For anomaly detection,
the dijet invariant mass $m_{JJ}$ is used as the resonant feature. The
classification features used are the invariant mass of the lighter jet, the
mass difference between the two leading jets, and the $N$-subjettiness
$\tau_{21}$ of the two leading jets. As a benchmark, 1500 signal events
corresponding to a fitted significance of about $2\sigma$ are injected into
the data for training. For evaluation, the entire signal sample (except for
the small number of injected events) is used. In order to demonstrate the
breakdown of CWoLa in the presence of dependencies between the classification
features and the resonant feature, we strengthen the dependence between the
jet masses $m_{J}$ and invariant dijet mass $m_{JJ}$ by setting $m_{J}\mapsto
m_{J}+0.1m_{JJ}$, as in Nachman:2020lpy .
Figure 42 shows the performance of various configurations. The fully
supervised classifier uses high statistics signal and background samples in
the SR with full label information. Since the data are not labeled, this is
not achievable in practice. A solid red line labeled ‘Optimal CWoLa’
corresponds to a classifier trained using two mixed samples, one composed of
pure background in the signal region and the other composed of mostly
background (independent from the first sample) in the SR with the 1500 signal
events. This is optimal in the sense that it removes the effect from phase
space differences between the SR and SB for the background. The Optimal CWoLa
line is far below the fully supervised classifier because the neural network
needs to identify a small difference between the mixed samples over the
natural statistical fluctuations in both sets. The actual CWoLa method is
shown with a dotted red line. By construction, there is a significant
difference between the phase space of the SR and SB and so the classifier is
unable to identify the signal. At low efficiency, the CWoLa classifier
actually anti-tags because the SR-SB differences are such that the signal is
more SB-like than SR-like. Despite this drop in performance, the
simulation-augmented modification (solid orange) with $\lambda=0.5$ nearly recovers the
full performance of CWoLa.
Figure 42: A Receiver Operating Characteristic (ROC) curve (left) and
significance improvement curve (right) for various anomaly detection methods
described in the text. The significance improvement is defined as the ratio of
the signal efficiency to the square root of the background efficiency. A
significance improvement of 2 means that the initial significance would be
amplified by about a factor of two after employing the anomaly detection
strategy. The supervised line is unachievable unless there is no mismodeling
and one designed a search for the specific $W^{\prime}$ signal used in this
paper. The curve labeled ‘Random’ corresponds to equal efficiency for signal
and background. Figure from Ref. 1815227 .
For comparison, a classifier trained using simulation directly is also
presented in Figure 42. The line labeled ‘Data vs. Sim.’ directly trains a
classifier to distinguish the data and simulation in the SR without
reweighting. Due to the differences between the background in data and the
simulated background, this classifier is not effective. In fact, the signal is
more like the background simulation than the data background and so the
classifier is worse than random (preferentially removes signal). The
performance is significantly improved by adding in the parameterized
reweighting, as advocated by Ref. Andreassen:2020nkr . With this reweighting,
the Salad classifier is significantly better than random and is comparable to
SA-CWoLa. The Optimal CWoLa line also serves as the upper bound in performance
for Salad because it corresponds to the case where the background simulation
is statistically identical to the background in data.
#### 4.5.3 Lessons Learned
This section has investigated the impact of dependencies between $m_{jj}$ and
classification features for the resonant anomaly detection methods Salad and
CWoLa. A new simulation-augmented approach has been proposed to remedy
challenges with the CWoLa method. This modification is shown to recover the
performance that CWoLa attains in the ideal case where dependences are absent
from the training. In both the Salad and SA-CWoLa methods, background-
only simulations provide a critical tool for mitigating the sensitivity of the
classifiers to dependences between the resonant feature and the classifier
features.
Each of the methods examined here has advantages and weaknesses, and it is
likely that multiple approaches will be required to achieve broad sensitivity
to BSM physics. Therefore, it is critical to study the sensitivity of each
technique to dependencies and propose modifications where possible to build
robustness. This paper is an important step in the decorrelation program for
automated anomaly detection with machine learning. Tools like the ones
proposed here may empower higher-dimensional versions of the existing ATLAS
search collaboration2020dijet as well as other related searches by other
experiments in the near future.
## 5 (Semi)-Supervised
### 5.1 Deep Ensemble Anomaly Detection242424Authors: Felipe F. De Freitas,
Charanjit K. Khosa, Veronica Sanz. The codes can be found at the following
link https://github.com/FFFreitas/Deep-Ensemble-Anomaly-Detection.
#### 5.1.1 Method
For the LHC Olympics challenge we opted for a semi-supervised approach. This
was partly motivated by lack of time, and partly by the way the challenge
itself was set up. Indeed, previous to the releasing of the blackboxes, the
organisers had provided warm-up examples including signal and background
labels. At the end we focused on a mixture of neural networks, with
convolutional layers, and Boosted Decision Trees (BDTs). This hybrid approach
was based on previous studies by one of the authors Alves:2019ppy , which
proposes to use a two step “pipeline” to assign event-by-event probabilities
in categories signal or background. This model uses event input in two forms;
raw data as an image as well as high level features (kinematic variables). We
train the model for the labelled background and signal data sets (R $\&$ D
data set). ResNet is used as a pre-classifier for the $\eta$-$\phi$ 2d images
(weighted by pt) of the un-clustered particles of the event. Along with ResNet
predictions of signal/background (event-by-event), we used the kinematics of
fat jets (zero-padded in case of only one) for the BDTs. This two-step
approach provides an AUC increase of about 5$\%$ over the BDT trained only on
kinematic observables.
##### Data sets
We start by describing the data preparation procedure
1. 1.
We first create event images from the data. The images are generated by
displaying the unclustered data in a $224\times 224$ 2-D grid, with the $x$
and $y$ positions given by the $\eta$ and $\phi$ of the particles in an event.
The 2-D grid is further converted into an RGB image with $224\times 224$
pixels, whose pixel colour values are normalized according to
$W_{p_{\text{T}}}=\frac{p_{\text{T}}(i)}{\max(p_{\text{T}}(i))}$, where $i$
runs over all the particles found in an event (see the sketch after this list).
2. 2.
The tabular data is a comma-separated values (CSV) file, where each row
corresponds to an event and the columns are the kinematic and angular
observables of the final-state particles and jets of the event. In our
analysis, we cluster inclusive fat jets with $p_{\text{T}}^{\min}>500$ GeV and
$R=1$ using the anti-$k_{T}$ algorithm, and consider the two leading jets. If
there is only one jet, all the kinematics of the second jet are set to zero.
We then recluster the constituents of the fat jets with $R=0.4$ using the
$k_{T}$ algorithm with a minimum $p_{\text{T}}$ of 20 GeV. With these jets we
construct the following observables:
$p_{\text{T}}^{j_{1}}$, $m_{j_{1}}$, $\eta_{j_{1}}$, $\phi_{j_{1}}$,
$E_{j_{1}}$, $p_{\text{T}}^{j_{2}}$, $m_{j_{2}}$, $\eta_{j_{2}}$,
$\phi_{j_{2}}$, $E_{j_{2}}$, $\delta\eta$, $\delta\phi$, ${m/E}_{j_{1}}$,
${m/E}_{j_{2}}$, $m_{jj}$, $P_{T}^{A}(j_{1}j_{2})$, $\delta R^{j_{1}}_{12}$,
$\delta R^{j_{1}}_{13}$, $\delta R^{j_{1}}_{14}$, $\delta R^{j_{1}}_{23}$,
$\delta R^{j_{1}}_{24}$, $\delta R^{j_{1}}_{34}$, $\delta R^{j_{2}}_{12}$,
$\delta R^{j_{2}}_{13}$, $\delta R^{j_{2}}_{14}$, $\delta R^{j_{2}}_{23}$,
$\delta R^{j_{2}}_{24}$, $\delta R^{j_{2}}_{34}$,
$n_{subjets1}$, $n_{subjets2}$. Some of the observables are constructed from
the fat-jet kinematics and some from their sub-jets.
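A minimal NumPy sketch of the image construction of step 1 is given below; the
$(\eta,\phi)$ ranges are illustrative assumptions, and the replication into
three RGB channels is left out.

```python
# Bin particles in a 224 x 224 (eta, phi) grid with per-particle weights
# W_pT = pT(i) / max(pT(i)), as in step 1 above.
import numpy as np

def event_image(eta, phi, pt, n_pix=224,
                eta_range=(-2.5, 2.5), phi_range=(-np.pi, np.pi)):
    # eta, phi, pt: NumPy arrays of particle coordinates and transverse momenta
    w = pt / pt.max()  # W_pT normalization
    img, _, _ = np.histogram2d(eta, phi, bins=n_pix,
                               range=[eta_range, phi_range], weights=w)
    return img
```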
After we have a trained CNN model for the classification of the image dataset,
we include in the tabular data the predicted scores from the CNN model for a
given event. This additional information helps to improve further the
classification power of the BDT model, allowing our framework to predict with
fairly good confidence if an event is background or an anomaly.
##### The CNN architecture and training methodology
We use a modified pre-trained ResNet-34 as a pre-classifier; the ResNet-34
consists of 34 convolutional (Conv2D) layers. Between the Conv2D layers there
are batch normalizations, average pooling, and rectified linear (ReLU)
activations. For our task, we replace the last fully connected layers
of the ResNet-34, responsible for the classification, with the following
sequence of layers:
* •
An adaptive concatenated pooling layer (AdaptiveConcatPool2d),
* •
A flatten layer,
* •
A block with batch normalization, dropout, linear, and ReLU layers,
* •
A dense linear layer with 2 output units, corresponding to $signal$ and
$background$, with a softmax activation function.
The AdaptiveConcatPool2d layer uses adaptive average pooling and adaptive max
pooling and concatenates them. This procedure provides the model with the
information from both pooling methods and improves the overall performance of the CNN.
We also make use of the label smoothing methodology
DBLP:journals/corr/SzegedyVISW15 as well as the MixUp 2017arXiv171009412Z
training method.
Due to the high imbalance between the number of signal and background events
in the full R & D data set, we generate a smaller sample with 93390 images,
with equal proportions of signal and background images. We
further separate these images into a training set containing 74712 (80%)
images and 18678 (20%) for the validation and test sets.
We train the ResNet-34 for 25 epochs using the fit-one-cycle method
2018arXiv180309820S . Our modified ResNet-34 achieves an accuracy of 92%. We
then use this CNN to get the predictions for each image and append these
values to the tabular data, so we have both the kinematic information for a
given event and the CNN score for the same event to belong to the $signal$ or
$background$ class.
##### The BDT model
After we gather the predictions from our modified ResNet-34 and the kinematic
information described in Sec. 5.1.1, we use the appended tabular dataset
together with scikit-learn scikit-learn and DEAP DEAP_JMLR2012 to build an
evolutionary search algorithm that finds the BDT hyper-parameters which
maximize the accuracy. After performing this evolutionary search over the
space of hyper-parameters, we found that a BDT accuracy of 95.38% was achieved
by the following configuration (a minimal sketch follows the list):
* •
Estimators : 700
* •
Learning rate : 0.1
* •
Algorithm : SAMME.R
* •
Max depth : 3
* •
Criterion : entropy
* •
Splitter : random
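The configuration above maps naturally onto AdaBoost with decision-tree base
learners in scikit-learn; the sketch below is an assumed reconstruction of
that configuration, not the authors' exact code.

```python
# Minimal scikit-learn sketch of the BDT found by the evolutionary search:
# 700 SAMME.R-boosted decision trees (max depth 3, entropy criterion,
# random splitter) with learning rate 0.1.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

bdt = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3, criterion="entropy",
                                     splitter="random"),  # `base_estimator`
                                                          # in older versions
    n_estimators=700,
    learning_rate=0.1,
    algorithm="SAMME.R",  # as found by the search; deprecated in recent
                          # scikit-learn releases
)
# bdt.fit(X_train, y_train); scores = bdt.predict_proba(X_test)[:, 1]
```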
The BDT model gives us the final classification of events, from which we
estimate all the metrics presented in Sec. 5.1.2. In Figs. 43 and 44 we show
the BDT scores and ROC curves for the test data sets.
#### 5.1.2 Results on LHC Olympics
Figure 43: Left: BDT scores using the kinematic observables and the scores
from ResNet-34. Right: BDT scores using the kinematic observables only.
Figure 44: Left: ROC curve for a BDT using the kinematic observables and the
scores from ResNet-34. Right: ROC curve for a BDT using the kinematic
observables only.
Figure 45: Data features for Black Box 1. The dark blue line (background)
refers to the labeled dataset, whereas the other three lines are distributions
from the black box.
Figure 46: Data features for Black Box 1 (continued).
Using the training from the previous section and applying it to the black box,
we found that the new physics signal is likely to be a heavy resonance of mass
3.5 TeV which further decays to two particles of mass 100 GeV and 500 GeV, as
in the labeled data. We show the feature distributions in Figs. 45 and 46.
Note that, due to lack of time, we have not analysed the whole black box, just
a subsample of 200K events.
#### 5.1.3 Lessons Learned
Although the method outlined above is able to identify the new physics events,
it is not robust enough to predict the correct mass range of the heavy
resonance: it is specific to the type of new physics it was trained on.
This exercise, and the issue of lack of generalization to new black boxes, led
two of the authors to develop a semi-supervised algorithm, called Anomaly
Awareness Khosa:2020qrz , with a focus on robustness with respect to the types
of anomalies.
Initially we spent a significant amount of time on preliminary investigations
to get a feeling for what the best approach might be. A better strategy might
have been to try different methods in parallel: different approaches have
different systematic errors and sensitivities, and we would have liked to
develop further the other proposals we conceived for the challenge. Alas, we
had to settle for the quickest analysis to meet the deadline.
Another point we would like to mention is the following: in the list of
observables we included correlated variables such as $m_{j_{1}}$ and
$m_{j_{2}}$. A more appropriate option would have been for example $m_{j_{1}}$
and $m_{j_{1}}-m_{j_{2}}$.
### 5.2 Factorized Topic Modeling
Authors: Patrick Komiske, Eric Metodiev, Nilai Sarda, and Jesse Thaler. Code
will be made available at the following link:
https://github.com/nilais/factorized-topic-modeling
In this contribution, we propose and evaluate a new technique to statistically
model and discriminate between dijet configurations, called “factorized topic
modeling”. The cross section for dijet production satisfies a leading-order
factorization theorem, which implies that both jets in a dijet event are
statistically independent, conditioned on their joint types. This is an
extremely powerful statement, motivated from first principles, which we
convert into a statistical constraint on a generative model. Starting from the
framework of “jet topics” Metodiev:2018ftz , we leverage a factorization
theorem for jet substructure to build a generative model, and demonstrate a
procedure for optimizing it to recover both the relative fractions of
different types of jets within a mixed sample and the component distributions
of a given observable. We use factorized topic modeling in the context of
anomaly detection to identify exotic jet types in the LHC Olympics R&D
dataset.
#### 5.2.1 Method
##### Review of factorization
Factorization is the statement that the cross-section for dijet production can
be decomposed into the product of independent probability functions. Each
component of the cross-section corresponds to a different physical process
contributing to the observed jet pair. Concretely, to leading order in a power
expansion, the cross-section for dijet production in proton-proton collisions
can be written as Ellis:1991qcd :
$d\sigma=\sum_{ab\to cd}f_{a}(\xi_{a})\otimes
f_{b}(\xi_{b})\otimes\mathcal{H}_{ab\to
cd}\otimes\mathcal{J}_{c}(z_{c})\otimes\mathcal{J}_{d}(z_{d}),$ (31)
where $f_{a}$ is the parton distribution function for parton $a$ inside the
proton, $\xi_{a}$ is that parton’s momentum fraction, $\mathcal{H}$ is the
partonic cross section for the short-range hard scattering process $(ab\to
cd)$, and $\mathcal{J}$ are the jet branching functions. In this work, as our
goal is jet tagging, we focus on the part of this equation that governs the
substructure of the two jets:
$d\sigma\propto\sum_{cd}\mathcal{H}_{cd}\otimes\mathcal{J}_{c}(z_{c})\otimes\mathcal{J}_{d}(z_{d}).$
(32)
Our goal is to translate this physical, leading-order factorization theorem
into a statistical constraint on the probability distribution over jet
observables. For dijets, we consider each observation to be a pair
$(\mathbf{x}_{1},\mathbf{x}_{2})$, corresponding to the value of a given
observable for the hardest and second-hardest jet in the event, respectively.
Using eq. (32) as a starting point, we will write down a generative model for
dijet production – more specifically, a topic model.
##### Review of topic modeling
Topic modeling was first applied to jet physics in Metodiev:2018ftz and has
since been studied in both phenomenological and experimental contexts
Komiske:2018vkc ; Aad:2019onw ; Sirunyan:2019jud ; Aad:2020cws . This body of
work leverages the statistical connection between themes in text corpora and
jet flavors in event samples to propose a new data-driven method for defining
classes of jets. We first consider an unfactorized topic model in a single
observable $\mathbf{x}$. For a mixed sample $\mathcal{M}$, this corresponds to
a generative process with the following structure:
$p_{\mathcal{M}}(\mathbf{x})=\sum_{k}f_{\mathcal{M}}(k)\cdot
p_{k}(\mathbf{x}),\qquad\text{s.t.}\quad\int_{\mathcal{X}}d\mathbf{x}\,p_{k}(\mathbf{x})=1\quad\forall
k,\quad\sum_{k}f_{\mathcal{M}}(k)=1.$ (33)
Each component $k$ corresponds to a jet class (i.e. an anomalous jet or an
ordinary quark/gluon jet from QCD). The mixture components $\{p_{k}\}$
correspond to the distributions of a given jet observable $\mathbf{x}$, while
the fractions $f_{\mathcal{M}}(k)$ represent the fraction of the total sample
belonging to each component. The goal of a topic model is to simultaneously
learn the components $\{p_{k}\}$ and fractions $f_{\mathcal{M}}(k)$ from a set
of samples $\{\mathcal{M}_{i}\}$.
##### Factorized topic models
Unlike the univariate topic model described in eq. (33), factorized topic
modeling operates on pairs of observables $\mathbf{x}_{1},\mathbf{x}_{2}$,
corresponding to the leading and subleading jets in an event. The multivariate
formula for the sample distribution is then given by:
$p_{\mathcal{M}}(\mathbf{x}_{1},\mathbf{x}_{2})=\sum_{k}f_{\mathcal{M}}(k)p_{k}(\mathbf{x}_{1},\mathbf{x}_{2}).$
(34)
To specify the form for $p(\mathbf{x}_{1},\mathbf{x}_{2})$, we must explicitly
state our constraints in a statistical language:
1. 1.
_Sample independence_ : The model assumes that, to leading order, the jet
observable $\mathbf{x}$ depends only on the initiating parton. In reality,
there is some dependence on the process in addition to the parton flavor.
However, experimental studies have shown a high degree of empirical
independence, and we suggest that these differences can be considered
negligible for our model Komiske:2018vkc . Defining $p^{(1)},p^{(2)}$ as the
distribution functions for the hardest and second-hardest jet, respectively,
sample independence implies:
$p^{(1)}_{k}(\mathbf{x})=p^{(2)}_{k}(\mathbf{x}).$ (35)
2. 2.
_Factorization:_ The leading-order factorization theorem tells us that the two
jets in an event are statistically independent, conditioned on convolution
through the matrix element describing the short-range scattering. From a
statistical perspective, the factorization theorem given above is
mathematically equivalent to stating that our topic model for dijets must be
a _mixture of products_. In other words,
$(\mathbf{x}_{1}|k_{1},k_{2})\text{ and }(\mathbf{x}_{2}|k_{1},k_{2})\text{
are conditionally independent.}$ (36)
Note that by imposing these constraints on the sample-level probability
distribution given above, the mapping between the factorization theorem and
statistical language directly yields a topic model, which can be expressed as
follows:
$p_{\mathcal{M}}(\mathbf{x}_{1},\mathbf{x}_{2})=\sum_{k_{1},k_{2}}f_{\mathcal{M}}(k_{1},k_{2})\cdot
p_{k_{1}}(\mathbf{x}_{1})\cdot p_{k_{2}}(\mathbf{x}_{2}).$ (37)
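To make the generative reading of eq. (37) concrete, the following hypothetical numpy sketch samples observable pairs from such a mixture of products; the fraction matrix `F` and the per-topic samplers `components` are placeholders.

```python
import numpy as np

def sample_dijet(n, F, components, seed=0):
    """Draw n (x1, x2) pairs from the factorized topic model of eq. (37).

    F: (k, k) array of joint fractions f(k1, k2), summing to 1.
    components: list of k zero-argument samplers, one per topic p_k.
    """
    rng = np.random.default_rng(seed)
    k = F.shape[0]
    pairs = rng.choice(k * k, size=n, p=F.ravel())   # draw (k1, k2) jointly
    k1, k2 = np.unravel_index(pairs, (k, k))
    x1 = np.array([components[i]() for i in k1])     # x1 | k1
    x2 = np.array([components[i]() for i in k2])     # x2 | k2
    return x1, x2
```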
##### Algorithm to solve the model
Figure 47: The paired observables from a dijet sample can be represented as a
histogram, shown as the matrix $\mathbf{D}$. The generative process we
describe can be visualized as the matrix product
$\mathbf{PFP}^{\mathsf{\scriptscriptstyle T}}$, shown as a decomposition on
the right. This example is for separating dijet events into quark and gluon
categories, where the observable is jet constituent multiplicity.
Our goal is to find the parameters of the topic model that give the best fit
to the true distribution for the mixed sample $p_{\mathcal{M}}$. First, we
discretize the sample distribution into binned histograms, which allows us to
reformulate eq. (37) as a matrix decomposition. Define the matrix $\mathbf{D}$
to be the 2-dimensional histogram generated by jointly binning the sampled
data across $\mathbf{x}_{1}$ and $\mathbf{x}_{2}$. Similarly, let $\mathbf{P}$
be the matrix whose columns are $n$-bin histograms representing each component
$\mathbf{p}_{k}$. By rewriting the model in terms of histograms and bins, we
arrive at the following non-convex program:
###### Problem 5.1
$\displaystyle\min_{\begin{subarray}{c}\mathbf{F}\in\mathbbm{R}^{k\times k}\\ \mathbf{P}\in\mathbbm{R}^{n\times k}\end{subarray}}\left\|\mathbf{D}-\mathbf{PFP}^{\mathsf{\scriptscriptstyle T}}\right\|^{2}_{F},\qquad\text{s.t.}\quad\mathbf{P}^{\mathsf{\scriptscriptstyle T}}\mathbbm{1}_{n}=\mathbbm{1}_{k},\quad\mathbbm{1}_{k}^{\mathsf{\scriptscriptstyle T}}\mathbf{F}\mathbbm{1}_{k}=1,\quad\mathbf{P,F}\geq 0,$ where
$\mathbbm{1}_{n}$ is the $n$-dimensional vector of all ones, and we have taken
the Frobenius norm
$\|\mathbf{A}-\mathbf{B}\|_{F}=\sqrt{\sum_{ij}(\mathbf{A}_{ij}-\mathbf{B}_{ij})^{2}}$
as our loss function. A pictorial representation of this discretization is
given in figure 47. While this problem is non-convex, and thus finding global
optima is not guaranteed, we employed a scheme based on alternating
minimization to recover locally optimal $\mathbf{P},\mathbf{F}$.
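The exact minimization scheme is not spelled out above, so the following numpy sketch should be read as one plausible instantiation: alternating projected-gradient steps on P and F, with crude projections back onto the constraint sets.

```python
import numpy as np

def _simplex_columns(M):
    # crude projection: clip negatives, renormalize each column to sum to 1
    M = np.clip(M, 1e-12, None)
    return M / M.sum(axis=0, keepdims=True)

def fit_factorized_topics(D, k, n_iter=5000, lr=1e-4, seed=0):
    """Alternating projected gradient for min ||D - P F P^T||_F^2."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    P = _simplex_columns(rng.random((n, k)))
    F = rng.random((k, k))
    F /= F.sum()
    for _ in range(n_iter):
        R = P @ F @ P.T - D                                  # residual
        P = _simplex_columns(P - lr * 2 * (R @ P @ F.T + R.T @ P @ F))
        R = P @ F @ P.T - D
        F = np.clip(F - lr * 2 * (P.T @ R @ P), 1e-12, None)
        F /= F.sum()                                         # total mass 1
    return P, F
```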
#### 5.2.2 Results on the LHC Olympics
Figure 48: The components retrieved from factorized topic modeling of the LHC
Olympics R&D dataset, using jet mass as our observable. Shown are the
anomalous and background components at 10% signal (top row) and at 1% signal
(bottom row). Our method shows good agreement between the learned topics and
the ground truth on the jet mass observable. We are able to recover both of
the new physics resonant masses (at 100 GeV and 500 GeV) with signal fractions
of 10% and 1%. The dips in the background model at the resonance masses arise
because the topic-finding procedure attempts to identify the most orthogonal
components.
In figure 48, we demonstrate the performance of our algorithm in recovering
the jet mass distributions for dijet events from the R&D dataset, using jet
mass as our observable. We learn a model with 3 topics, corresponding to
$p_{X},p_{Y},p_{\text{QCD}}$, respectively. To generate these figures, we
consider a signal fraction of 10%, and 1% respectively, solve the topic model,
and then re-weight the component distributions by subtracting the overall
background distribution and renormalizing. As the algorithms used to optimize
the model return extremal points in the polytope of all feasible solutions,
the solution forces the components to be as orthogonal as possible, which is
why we see characteristic dips in the background components near the $m_{X}$
and $m_{Y}$ resonance masses.
For signal fractions of both 10% and 1%, we are able to recover the known
exotic jet masses of $m_{X}=100~{}\text{GeV}$ and $m_{Y}=500~{}\text{GeV}$. As
expected, the noise in the recovered distributions is noticeably larger at the
lower signal fractions. (The behavior of this model in the low-signal regime
with $<0.1\%$ signal is still under investigation – as we currently formulate
it, performance degrades considerably, most likely due to our choice of
histogram bins.) Even in the presence of this noise, our model has significant
discriminative power. In particular, the model can infer which process any
event was generated from using the likelihood ratio:
$\displaystyle\mathcal{L}(\mathbf{x}_{1},\mathbf{x}_{2})$
$\displaystyle=\frac{f(\text{signal})\cdot
p_{\text{signal}}(\mathbf{x}_{1},\mathbf{x}_{2})}{f(\text{background})\cdot
p_{\text{background}}(\mathbf{x}_{1},\mathbf{x}_{2})}$ (38)
$\displaystyle=\frac{f(X,Y)\,p_{X}(\mathbf{x}_{1})\,p_{Y}(\mathbf{x}_{2})+f(Y,X)\,p_{Y}(\mathbf{x}_{1})\,p_{X}(\mathbf{x}_{2})}{f(\text{QCD,
QCD})\,p_{\text{QCD}}(\mathbf{x}_{1})\,p_{\text{QCD}}(\mathbf{x}_{2})}.$ (39)
Using this likelihood ratio as a discriminant, we can test the ability of our
model to classify events relative to the ground truth in the dataset. In both
cases, the model performs well, with AUCs of 0.88 and 0.81, respectively.
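Given the learned binned components, scoring an event by eq. (39) reduces to a few histogram lookups; a hypothetical sketch, where `i1` and `i2` are the bin indices of the two jets' observables and the fraction arguments are placeholders:

```python
def likelihood_ratio(i1, i2, p_X, p_Y, p_QCD, f_XY, f_YX, f_QQ):
    """Binned version of eq. (39); p_* are 1D component histograms."""
    num = f_XY * p_X[i1] * p_Y[i2] + f_YX * p_Y[i1] * p_X[i2]
    den = f_QQ * p_QCD[i1] * p_QCD[i2]
    return num / den
```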
#### 5.2.3 Lessons Learned
Leveraging the leading-order factorization theorem in eq. (32), we designed a
statistically constrained, non-parametric, generative model to disentangle
anomalous jet components from the LHC Olympics R&D dataset. For large enough
signal fractions, our minimization algorithm finds a robust solution to
Problem 5.1, though performance degrades at lower signal fractions. Since the
input to our model is simply a 2-dimensional histogram, an interesting
direction for future research could be to use this as a drop-in replacement
for density estimation steps in other anomaly detection methods. More
crucially, we see this model as a proof-of-concept for the idea of encoding
physical constraints on scattering processes into a statistical language.
### 5.3 QUAK: Quasi-Anomalous Knowledge for Anomaly Detection
Authors: Sang Eon Park, Dylan Rankin, Silviu-Marian Udrescu, Mikaeel Yunus,
Philip Harris. Further details can be found in Ref. Park:2020pak .
Deep-learning-based anomaly detection within physics has largely focused on
searching for anomalous signatures in the complete absence of a signal prior.
In this scenario, two fundamental approaches have been considered:
* •
Isolate two distinct datasets, one which may contain a signal, and one which
does not; try to find a deviation between the two.
* •
Embed our knowledge of known physics processes into a simulation or a deep
learning algorithm and look for events with a low likelihood of being a known
physics process.
In the first approach, colloquially referred to as classification without
labels (CWoLa), conventional discrimination algorithms are used to separate
the two datasets Metodiev:2017vrx ; Collins:2018epr ; Collins:2019jip ;
Nachman:2020lpy . Care must be taken to ensure that selection biases are
mitigated so that the only discernible difference within the discrimination
algorithm is the presence of an unknown physics signal. The second approach
attempts to embed a complete knowledge of physics processes within a selected
region into a likelihood discriminant. An excess of events with a low
likelihood of being from the selected region constitutes a new physics
signature. Complete knowledge of all expected physical processes within a
large, high dimensional dataset can become quite complicated and can lead to
reduced sensitivity Heimel:2018mkt ; Farina:2018fyg ; Cerri:2018anq ;
Kuusela_2012 . This method broadly comprises models that utilize deep learning
autoencoders.
When comparing the two approaches, the CWoLa approach is often more sensitive,
provided a signal region is present. This increase in sensitivity results from
the implicit signal assumption placed on this type of anomaly search: the
signal is localized within a specific kinematic region. This constitutes a
signal prior for the model and enhances discrimination power. For many new
physics models, there are fundamental assumptions that broadly apply to all
anomalies. For example, if a massive particle decays, its decay products fall
within a cone determined by the particle’s energy and Lorentz invariance.
Neural network algorithms, on the other hand, have to learn about Lorentz
invariance Butter:2019cae .
By relying on one anomaly metric that measures the deviation from the
background, we miss the chance to apply fundamental physical laws about how
new physics may appear at the LHC, thus wasting our prior knowledge about
existing physics. However, if we can incorporate this prior knowledge into the
search, it should be possible to either improve the sensitivity of our search
or restrict the size of the network, since additional constraints help to
limit the scope of the exploration needed to construct the model.
In this section, we extend the concept of placing signal priors on anomaly
searches by developing a mechanism to add signal priors without degrading the
sensitivity of the pre-existing model-independent search. Through our
approach, signal priors, which may or may not be accurate signal descriptions,
can be embedded within an anomaly search. By inserting additional signal
priors, we enhance sensitivity to signal models with characteristics similar
to the embedded signal priors. Since priors are systematically added to
construct information, we refer to this technique as Quasi-Anomalous
Knowledge, or simply QUAK.
#### 5.3.1 Method
Figure 49: The QUAK approach
In natural language processing, new models have emerged that utilize semi-
supervised learning to embed such constraints on models in a natural way
chen2020big ; ouali2020overview . Semi-supervision works by training on both
labeled and unlabeled data. With the unlabeled data, an unsupervised network
is used. With the labeled data, a supervised classification is applied using
the intermediate latent space of the unsupervised network. The unsupervised
network constructs a latent space of self-organized patterns; the labeled data
identifies the most useful characteristics of this space. The use of a self-
supervised space has been found to be robust against variations in the data,
and the resulting classifiers, in some cases, exceed supervised training
hendrycks2019using . Semi-supervision has been found to be effective for
anomaly detection ruff2020deep ; hendrycks2019deep , even, very recently,
within physics Cheng:2020dal . Our approach differs from this previous work in
the construction of the network architecture and the abstraction of the latent
space.
To construct QUAK, we follow a multi-step procedure whereby we
* •
choose a background and set of approximate signal priors that capture features
of a new physics signature,
* •
train $N$ separate unsupervised probabilistic models for each signal prior or
background prior,
* •
construct an $N$-dimensional “QUAK” space consisting of the loss on each
unsupervised probabilistic model, and
* •
exploit QUAK space to search for anomalies.
The construction is semi-supervised in that we use the signal priors as labels
for the construction of QUAK space.
Figure 49 illustrates the concept of QUAK. By constructing an N-dimensional
space, we allow for the placement of certain physics processes within specific
regions of the space. A background event will have low background loss and
high signal loss. An anomalous event similar to the chosen signal prior will
have a low signal loss and large background loss. An anomalous event that is
different from both will have large signal and background losses. By searching
in the region where other proxy signals are present, we aim to isolate
anomalies that are broadly similar, though not necessarily identical, to the
priors. In the following sections, we
present this idea in various setups, including the LHC Olympics dataset and
the MNIST dataset.
#### 5.3.2 Results on LHC Olympics
To perform the anomaly search, we first construct high-level jet features and
then feed these inputs into the network for training. The high-level features
consist of n-subjettiness ratios ranging from 1-prong to 4-prong and the raw
jet mass of the individual jets Thaler:2010tr ; Datta:2017rhs . Training and
testing are performed with 12 variables for each event, 6 variables for each
jet (4 n-subjettiness ratios, the total number of tracks, and the jet mass).
In the construction of the unsupervised probabilistic model, an optimization
scan for the studies with the LHC Olympics dataset converged on variational
autoencoders (VAEs) kingma2014autoencoding with normalizing flows
rezende2015variational for improved posterior approximation. Among a wide
variety of normalizing flow architectures, we find that Masked Autoregressive
Flow papamakarios2017masked yields the best results. For the training, we use
a loss consisting of the mean squared reconstruction error on each of the 12
variables plus a KL-divergence term to regularize the sampling parameters. We
choose a KL-divergence scale of $\beta=10$ Higgins2017betaVAELB .
Additionally, we choose a latent dimension $z_{dim}=4$, with a fully connected
neural network with 3 hidden layers on either end of the VAE layer. In
computing the loss within QUAK space, we remove the KL-divergence term.
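Once the per-prior models are trained, an event's QUAK coordinates are simply its losses under each model. A minimal sketch, assuming trained autoencoders exposing a hypothetical `reconstruct` method and using the mean squared reconstruction error (the KL term being dropped, as described above):

```python
import numpy as np

def quak_coordinates(events, models):
    """Map events (n_events, 12 features) to N-dimensional QUAK space."""
    coords = []
    for model in models:  # e.g. [qcd_vae, two_prong_vae, three_prong_vae]
        recon = model.reconstruct(events)               # hypothetical method
        coords.append(np.mean((events - recon) ** 2, axis=1))
    return np.stack(coords, axis=1)                     # one loss per prior
```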
With QUAK applied to the BSM search, we train multiple separate autoencoders
on the QCD (background) and individual signal priors. We first utilize the
single QCD autoencoder loss (1D QUAK). We then progressively add approximate
priors to the search: 2D QUAK includes one approximate signal prior, and 3D
QUAK includes two. To construct the ROC curve, we systematically scan the 2D
space, integrating from the region of minimum signal loss and maximum QCD
loss. Alternative, more sophisticated approaches, such as a fit within the
n-dimensional space, are not investigated here.
The performance comparison of adding additional priors to the search is shown
in Fig. 50. By comparing dotted, dashed, and solid lines, we see that we can
increase the sensitivity of the search by adding more approximate priors in
training. The addition of the approximate priors approaches, and, in some
places, exceeds, a fully supervised discriminator computed by training the
same inputs on the known signal. Interestingly, much of the gain in
discrimination of the 3-prong signal arises by adding a 2-prong signal prior.
Therefore, we observe that the addition of signal priors preserves the model-
independent sensitivity of the search. Even if the signal priors are not
accurate, we gain sizable performance improvement. We interpret this to mean
that the added information present in the signal loss helps isolate
“signal”-like anomalies from other anomalous features present within the
background. Through the construction of QUAK space, we also demonstrate that
incorrect signal priors, whether they result from inaccurate simulation or
different signal model choice, can still be a powerful discriminant when
searching for new physics.
Figure 50: (Left) Receiver Operating Characteristic (ROC) for signal versus
background selection for different test priors. Performance comparison of the
1D (QCD prior only), 2D (QCD prior and two prong
$(m_{jj},m_{j1},m_{j2})=(4500,500,150)$), 3D (QCD prior, two prong
$(m_{jj},m_{j1},m_{j2})=(4500,500,150)$ prior, and three prong
$(m_{jj},m_{j1},m_{j2})=(5000,500,500)$) with fully supervised training on the
correct signal prior (red). Jet masses $(m_{j1},m_{j2})$ are excluded in the
training of the supervised classifier to mitigate model dependence and to
allow for the potential of signal extraction through mass fits. (Right) ROC
for signal versus background selection for 2D QUAK (solid) and a fixed
supervised network (dashed). For both QUAK and the supervised network a signal
prior of $(m_{jj},m_{j1},m_{j2})=(4500,500,150)$ is used in the training.
To contrast this with conventional new physics searches, we consider
constructing a supervised classifier where we choose a signal prior and apply
it to a broad range of different signal models; due to uncertainties in signal
simulation and detector modeling, nearly every signal model is inconsistent
with data to a certain degree.
Figure 50 compares two-dimensional QUAK, trained with QCD events and a 2-prong
prior, with a supervised classifier trained on the same raw inputs and signal
prior. A fully-connected network is used both for the learnable mapping from
data to the latent space of the VAEs and for the supervised classifier (4
hidden layers with batch normalization batchnorm and dropout
hinton2012improving ). With the supervised training, we observe a general
trend where the supervised classifier performs gradually worse as the test
data deviates further from the 2-prong prior used to train the supervised
classifier. With the 3-prong signal, we find abysmal performance with the
supervised model. With QUAK, we observe relatively stable discrimination
against background as the test signal further deviates from the signal prior.
We conclude that QUAK incorporates signal priors in a more efficient way than
supervised classifiers, and by using QUAK, we can do a more efficient scan of
the whole possible BSM space. For searches where the signal prior is partially
known (to within uncertainties), QUAK has the potential to mitigate loss in
sensitivity.
#### 5.3.3 Lessons Learned
In summary, we propose the exploration of a new algorithm, QUAK, to perform
model-independent searches. We demonstrate this work in the context of new
physics searches at the LHC. We observe that the addition of approximate
priors to the anomaly loss allows for enhanced identification of anomalies by
providing generic “signal-like” or “background-like” features to help in
identification. With QUAK, we have presented an approach that effectively adds
these priors without degrading the sensitivity of a prior-free model. QUAK is
broadly applicable to a number of different problems and can help to improve
both anomaly searches and searches where large uncertainties are present on
the signal modeling.
### 5.4 Simple Supervised Learning with LSTM Layers
Authors: Zhongtian Dong.
#### 5.4.1 Method
Recurrent neural networks have had some success in jet-tagging tasks
Guest:2018yhq . The goal of this section is to examine how a simple model with
a small number of parameters performs on the complicated anomaly detection
task. As a first step, the raw hadron data were clustered into jets using the
anti-$k_{t}$ algorithm with pyjet. The radius $R$ is varied during the
training phase to obtain optimal test performance and is set around $R=0.7$ as
a result. The input to the network is the sequence of jet four-momenta
($p_{\text{T}}$, $\eta$, $\phi$, mass). The jets are ordered by their
$p_{\text{T}}$, from largest to smallest. The length of the sequence $N$ is
varied for the best performance on each data set; events with fewer jets were
zero-padded to the same length. Typically, $N$ is chosen between $6$ and $10$.
The neural network model has four hidden layers: two LSTM layers with $64$ and
$128$ units, followed by two dense layers with $256$ units each before a
single output node. The intermediate layers have ReLU activations, and the
output has a sigmoid activation. All training was done using TensorFlow
through Keras with the Adam optimizer. $10\%$ of the R&D data set is used as
the test data set; the rest is used for training and validation. The training
runs for about $30$ epochs, at which point the model is able to successfully
identify $95\%$ of the signals in the test data.
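The architecture above translates directly into a few lines of Keras; a sketch, assuming a sequence length of $N=8$ jets (the text varies $N$ between 6 and 10):

```python
import tensorflow as tf
from tensorflow.keras import layers

N = 8  # jets per event; varied between 6 and 10 in practice
model = tf.keras.Sequential([
    tf.keras.Input(shape=(N, 4)),            # (pT, eta, phi, mass) per jet
    layers.LSTM(64, return_sequences=True),  # first LSTM layer
    layers.LSTM(128),                        # second LSTM layer
    layers.Dense(256, activation="relu"),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # single output node
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```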
#### 5.4.2 Results on LHC Olympics
The model performs well on the R&D data set, which is unsurprising for a
supervised learning model, but it performs relatively poorly on the black
boxes. In Black Box 1, it identifies some events as signals but with
relatively low confidence, i.e. the output scores given by the model are not
as high as those for the R&D data. Compared to the actual signals present in
the data sets, the number of events identified as signals by the model is
relatively large, with a higher average mass. It is possible that the model
does not actually capture any real signal and incorrectly labels events as
signals or backgrounds. In Black Boxes 2 and 3, the output results are similar
to the results when running the model over the pure background data set. In
retrospect, this happens perhaps for a good reason: the signal type in Black
Box 3 is quite different from what is present in the training set, which is
probably why the neural network cannot identify it correctly.
#### 5.4.3 Lessons Learned
Supervised models tend to perform well at identifying specific signals against
a consistent background, and are probably inappropriate for anomaly detection
with varying backgrounds. In general, it seems such a model is incapable of
handling data that differ from those presented in the training set. Perhaps we
can still use the network structure to learn important features of the events
rather than performing classification tasks as such. Rather than continuing
with this type of model, I would like to pursue a different direction next.
Suppose we can obtain $n$ independent features for each event and divide each
feature into $2$ regions. We would then have $2^{n}$ regions, some populated
by “background” generic QCD events and some not. We can focus on the regions
that are rarely populated by QCD events; if many events occupy one such region
in a particular data set, we may consider this an anomaly. Perhaps it is even
wrong to divide each feature into only two regions: more regions per feature
would result in a high-dimensional grid, generic QCD events would rarely
appear in some cells, and those cells could be considered our anomaly cells.
We can use machine learning to find novel features as mentioned in some
previous works Datta:2017lxt . We may also use methods such as distance
correlation to make sure the features we find are independent of each other
DiscoFever . The majority of what has been done so far has been studied with
dense-layer networks; it would be interesting to see whether we can find
exotic features with more complicated network structures. Of course, there
will be many potential problems with this approach, one being how to make sure
that the data used for training cover all of the regions that are supposed to
be background, and do not falsely label a signal region as background.
## 6 Discussion
The important thing in the Olympic Games is not to win, but to take part;
the important thing in Life is not triumph, but the struggle;
the essential thing is not to have conquered but to have fought well.
(Pierre de Coubertin, founder of the International Olympic Committee, as
quoted in The Olympian (1984) by Peter L. Dixon, p. 210)
The results of the LHCO are to be understood in a similar way. The goal is not
to declare one superior method, but to foster the development of novel tools
for the unsupervised detection of anomalies. With this in mind, we now turn to
a discussion and comparison of the various algorithms’ performance on the
LHCO2020 Black Box datasets. Knowing which algorithms achieved accurate
results in blinded and unblinded tests is important information, as it will
provide crucial feedback for the further refinement and improvement of each
approach. Also, it is important to keep in mind that an approach which did not
perform well in this challenge may have its strengths elsewhere and may turn
out to be better suited for a different part of phase space.
We discuss the results in Sec. 6.1 and review the lessons learned — both in
terms of anomaly detection as well as in future directions for the LHCO — in
Sec. 6.2.
### 6.1 Overall Results
In the following we will review the results submitted during the two LHC
Olympics sessions as well as additional contributions received for this paper.
As approaches were allowed to change and improve between the sessions and in
preparation of this document, we chronologically walk through results at these
three stages.
As discussed in Sec. 2.2, the signal in Black Box 1 consists of 834 anomalous
events with the same topology as the R&D dataset and masses
$m_{W^{\prime}}=3.823$ TeV, $m_{X}=732$ GeV and $m_{Y}=378$ GeV; it was
unblinded during the LHCO session at the 2020 ML4Jets workshop winterolympics
. Nine blind approaches were submitted and are summarised in Fig. 51: ResNet +
BDT (Sec. 5.1), PCA (Principal component analysis used as an outlier
detector), LSTM (Sec. 5.4.1), High-level features AE (encoding kinematic and
substructure variables using an autoencoder, selecting signal as events with
high reconstruction MSE loss), Tag N Train (Sec. 4.3), Density Estimation
(Sec. 3.5), VRNN (Sec. 3.1), Latent Dirichlet Allocation (Sec. 3.6), and Human
NN (manual search).
Of these submissions, four approaches identified the correct resonance mass
either within the claimed error (PCA) or within a window of $\pm 200$ GeV
(LSTM, Tag N Train, Density Estimation). Accurate predictions for the other
observables were achieved only by the Density Estimation method.
Figure 51: Results of unblinding the first black box. Shown are the predicted
resonance mass (top left), the number of signal events (top right), the mass
of the first daughter particle (bottom left), and the mass of the second
daughter particle (bottom right). Horizontal bars indicate the uncertainty
(only if provided by the submitting groups). In a smaller panel the pull
(answer-true)/uncertainty is given. Descriptions of the tested models are
provided in the text.
Next, Black Boxes 2 and 3 were unblinded in Summer 2020 summerolympics . For
Black Box 2, resonances at 4.8 TeV (PCA), 4.2 TeV (VRNN, Sec. 3.1), 4.6 TeV
(embedding clustering, Sec. 3.9), and 5 TeV (QUAK, Sec. 5.3) were predicted.
For LDA (Sec. 3.6), the absence of a signal in the form of a di-jet resonance
was reported. As Black Box 2 did not contain any injected signal, these
results highlight a possible vulnerability of anomaly detection methods in the
tails of statistical distributions.
For Black Box 3, a resonance decaying to hadrons and invisible particles
(PCA), a resonance with a mass between 5.4 and 6.4 TeV (LDA), at 3.1 TeV
(embedding clustering), and between 5 and 5.5 TeV (QUAK) were reported. No
signal was observed by one approach (VRNN). The true injected resonance with a
mass of 4.2 TeV and two competing decay modes was not detected by any
approach.
After unveiling the black boxes, further submissions and improvements to the
anomaly detectors were made. The VRNN and BuHuLaSpa (Sec. 3.3) approaches now
report an enhancement at an invariant mass below 4 TeV for Black Box 1, while
no signal is observed for the other two black boxes. With deep ensemble
anomaly detection (Sec. 5.1), a resonance at 3.5 TeV is seen for the first
black box, and for Latent Dirichlet Allocation a resonance not incompatible
with 3.8 TeV is observed. Another new submission was Particle Graph
Autoencoders (Sec. 3.7), which detected a resonance at 3.9 TeV for the first
black box. Finally, a resonance at 3.5 TeV was seen using CWoLa hunting (Sec.
4.1). For Black Boxes 2 and 3, no additional observations of a signal were
reported after unblinding.
### 6.2 Overall Lessons Learned
This large and diverse number of submissions on the blinded and unblinded
datasets is very encouraging. Even better, the resonance in the first black
box was successfully detected multiple times even before unblinding. Of the
three methods finding a resonance mass closest to the true value, two were
based on building a signal-to-background likelihood ratio (Tag N Train,
Density Estimation) while one used a signal likelihood (LSTM), and likely
benefitted from the shared topology between the provided development signal
and the first black box.
However, there still is substantial room for improvement for anomaly detection
in the realm of particle physics. First, no confident statement of the absence
of signal for the second black box could be made, with a number of false
positives at high values of the invariant mass.
Second, the resonance in Black Box 3 was not detected. The structure of this
signal was different from the shared topology of the development data and
Black Box 1, which likely caused issues for models too finely tuned on these
signals. Furthermore, Black Box 3 featured two different decay modes which
needed to be combined to achieve a significant observation. Finally,
substructure offered a less useful handle here, as one decay mode involved the
direct production of a pair of gluon jets. Despite all this, the signal in
Black Box 3 still decayed as a hadronic resonance with a well-defined mass
peak and visible particles in the final state. Future developments therefore
will need to improve both the sensitivity and the statistical interpretation
of anomaly detectors.
Beyond the reported results on the black box datasets, we also observe the use
of the initial dataset for the development of new approaches. Overall, the
volume of work and results shows the value of targeted community studies. For
anomaly detection, a new frontier would lie in the inclusion of more complex
detector effects and observables such as track and vertex information,
although first a credible detection or rejection of anomalies similar to Black
Box 3 would be desirable. While toy studies will play an important role in
developing new methods, we keenly await experimental results with these
tools.
## 7 Outlook: Anomaly Detection in Run 3, the HL-LHC and for Future Colliders
### 7.1 Prospects for the (HL)-LHC
While there are already many search results from the LHC collaborations using
the full Run 2 dataset, many more will be published in the coming years.
Notably, almost none of the methods described in this paper have been applied
yet to collider data. The ATLAS Collaboration has produced a first version of
the CWoLa hunting analysis using low-dimensional features
collaboration2020dijet , which is likely the start of a growing set of
searches. At this juncture, it is useful to consider what is possible with the
full LHC dataset and what is the best way of organizing these efforts going
forward.
Figure 52: The organization of physics analysis groups in ATLAS and CMS. The
large circles on the left represent analysis groups that are primarily focused
on measuring properties of the Standard Model. The group called SM is focused
on the electroweak and QCD aspects of the SM that are not covered by the other
groups. The large circles on the right represent the analysis groups primarily
focused on searches for new particles. Selected supporting organizations that
are connected to both measurement and search groups are depicted in smaller
circles in the middle. The ATLAS CWoLa hunting search was performed in the
HDBS analysis group in ATLAS (as a ‘model agnostic extension of the diboson
resonance search’) and the ATLAS and CMS data-versus-simulation analyses are
performed in the Exotics/Exotica groups.
First of all, it is clear that there is no one approach which is strictly
better than every other approach. Therefore, we envision a group of searches
using complementary methodologies and targeting a variety of final states.
Currently, analyses in ATLAS and CMS are organized by physics models: there is
a group focusing on supersymmetry (SUSY) models, one focused on Higgs-like
particles (HDBS in ATLAS) and 3rd generation particles (B2G in CMS), and one
focused on exotic particles (Exotics in ATLAS and Exotica in CMS). It is not
obvious that a model agnostic search program fits within the scope of this
existing model-dependent group structure. At the same time, the commonalities
across model agnostic methods would benefit from a coherent strategy.
Therefore, a new analysis group or at least a new analysis subgroup may be
required. This is illustrated in Fig. 52. There are clearly strong connections
with supporting groups as well, including the statistics and machine learning
fora. The analysis group home of these searches is not just a sociological
question — the technical development and physics review is primarily carried
out by the analysis groups so this choice can have important implications for
the success of this program.
The LHC Olympics focused on resonant new physics because there is a natural
scheme for estimating backgrounds. However, there is still a non-trivial
relationship between classification and background estimation. In particular,
if the classifier is dependent on the resonant feature (e.g. the invariant
mass of pairs of jets), then an artificial bump could be sculpted in the
absence of any signal. This is related to the challenge of exploring higher
dimensional feature spaces, which is required to achieve the broadest
sensitivity. In some cases, automated decorrelation techniques for model-
dependent searches can be adapted; in other cases, these methods would mask
potential signals and so new approaches are required. None of the methods
deployed for the LHC Olympics were able to find anomalies using the full list
of hadron four-vectors directly — successful approaches all used some model-
inspired dimensionality reduction. Scaling up these approaches to high
dimensional feature spaces is a key challenge for the next years and will
require both methodological and computational innovation.
Exploring anomaly detection in the non-resonant case is more challenging
because there is no general approach for estimating the background. Some of
the methods deployed for the LHC Olympics can achieve signal sensitivity for
non-resonant signals, but significant research is required in order to combine
these and possibly new approaches with background estimation strategies.
Strategies that directly compare data and background simulation are promising
for final states where the background model is accurate and when the
uncertainty is well-known. A key challenge for these approaches is scaling up
to high-dimensional features where the full systematic uncertainty covariance
matrix may not be known. This is a general challenge that is also faced by
model-dependent approaches, where signal model uncertainties in many
dimensions may not be well constrained.
Another independent dimension to consider is when in the data processing
pipeline the anomaly detection happens. The LHC Olympics is designed as an
offline analysis, where standard trigger algorithms are used to collect the
data. There is significant unexplored phase space from existing triggers, but
there is also a vast phase space that is deleted in real time before it can be
explored. The development of online anomaly detection will be a significant
innovation to complement offline analysis. Recent innovations have shown that
machine learning inference can be fast enough to fit within the strict trigger
latency requirements (see e.g. Duarte:2018ite ; CERN-LHCC-2020-004 ). However,
the same algorithms applied offline may not be applicable online. For example,
offline methods can make multiple passes through the dataset in order to
identify anomalous regions of the phase space. In contrast, the trigger only
sees each collision once before a decision to save the event or not must be
made.
Even if a method could identify anomalous events within the required
bandwidth, this is only a partial solution because strange collisions are only
useful if we can quantify their level of strangeness. This is one key
difference between anomaly detection in high energy physics and the typical
anomaly detection protocols developed in industry; we are almost never able to
declare a discovery with a single collision. Our expectation is that new
physics will manifest as an ‘over-density’ in phase space rather than being
‘off-manifold’. By analogy, we are not looking for flying elephants, but
instead a few extra elephants than usual at the local watering hole. The only
way to know that the number of elephants is anomalous is to have a precise
understanding of the usual rate of elephants.
In addition to the rich research and development program required to fully
exploit the potential of these searches, there are a variety of complications
involved in the interpretation of results. The most pressing question is what
to do in the case of a positive signal detection. No fundamental particle that
was not already precisely predicted by an existing theory has been discovered
in decades. Would the high energy physics community believe a significant
anomaly? It is important to start a conversation about the community plan in
anomaly? It is important to start a conversation about the community plan in
the case of a significant anomaly detected by one of these approaches. If an
anomaly is found before the full high-luminosity LHC (HL-LHC) dataset is
recorded, then a targeted search could be conducted using an independent
dataset. What if the anomaly is only identified using the full HL-LHC dataset?
What post-hoc analysis can and should be done? It is also important to ensure
sensitivity to complex signals, where there may be multiple possible final
states (as exemplified by Black Box 3).
Figure 53: An illustration of the nested loops required for signal model-
dependent interpretation of a model-agnostic search. The parenthetical remark
for the signal cross section refers to the fact that if the number of
predicted signal events is small, one may need to repeat the injection many
times due to the large statistical fluctuations in the phase space. This is
not a problem for model-dependent search where one can use all simulated
signal events and scale by the predicted cross section. Unsupervised
approaches may be able to avoid certain steps if they do not change in
response to modifications in the data.
In the absence of a signal detection, there is a significant challenge to
quantify the sensitivity of a search. For a model-dependent search,
quantifying what was not seen is relatively straightforward: for a given
model, one can provide an upper limit on the cross section. However, model
agnostic
methods are sensitive to many models all at once and it is challenging to
define the sensitive volume. This is particularly challenging in many
dimensions. One way to map out the sensitivity is to pick a small set of
representative signal models. Signal model dependent limits can be
significantly more difficult to compute for these searches than for standard
searches. In particular, any time the anomaly classifier depends directly on
the data in the signal-sensitive region, the entire analysis procedure must be
repeated for every variation of the signal hypothesis. This is represented
schematically in Fig. 53. Since the analysis selection depends on the data,
the classifier must be retrained every time a different signal model cross
section is injected into the data. For example, the final exclusion plots in
Ref. collaboration2020dijet required training tens of thousands of neural
networks. Computing may become a bottleneck in the future when there is more
data and higher dimensional features. Heterogeneous High Performance Computing
Centers with significant GPU resources may provide a solution to this
challenge.
The dependence of the event selection on the data also complicates the
usability of these data for reanalysis and reinterpretation. One cannot simply
recast published results because if a new signal was in the data, then the
event selection would have been different. If the network training and
statistical analysis can be automated, then a system like RECAST
Cranmer:2010hk may be possible whereby signal models could be submitted to
collaborations for inference. Note that this is one more level of automation
beyond typical analysis preservation: in addition to preserving the analysis
selection, we also need to preserve the analysis optimization procedure which
itself needs to be reproducible.
### 7.2 Prospects for Future Colliders and Beyond
All of the challenges described in the previous section also apply to future
colliders beyond the HL-LHC. However, a new machine opens up the possibility
to imagine the co-design of accelerator/detector and analysis procedure. What
operational conditions and detector configurations are most interesting for
anomaly detection?
The methods developed for colliders may also be more broadly applicable.
Anomaly detection at other fundamental physics experiments shares many
features with collider physics. In fact, a presentation at the Summer Olympics
described an anomaly detection method developed using the LHC Olympics that is
now being studied for astrophysical data streams.
### 7.3 The Role of Theory and Theorists
While this paper is about making a search program that is model agnostic, this
does not mean we should proceed without theory and without theorists. The most
powerful methods will likely employ physics-informed machine learning
techniques, whereby symmetries and other physical principles are part of the
learning. These tools may allow us to find rarer signals and design procedures
that are interpretable and robust. Furthermore, there is a continuum of model
independence. Building theoretically motivated, but still relatively broad
priors may achieve more powerful sensitivity to a wide class of models.
Machine learning is in general a unifying subject, where there have been many
rich collaborations between experimentalists and theorists as well as between
high energy physicists and machine learning researchers and practitioners.
About half of the participants in the LHC Olympics are ‘experimentalists’ and
half are ‘theorists’. It is critical for the success of the anomaly detection
program that model agnostic method development be a respected form of theory
work and that machine learning method development and implementation be an
appreciated form of experimental work. Furthermore, barriers between theory,
experiment, and computation/statistics should be as low as possible so we can
make the best use of our data. Public challenges like the LHC Olympics are an
important step in this direction, but this is only one part of a bigger
program of engagement.
### 7.4 The Future of the LHC Olympics
This round of the LHC Olympics was driven by a need from the community to
develop and test a growing number of machine learning anomaly detection
methods. With a diverse set of submissions, we believe that this exercise has
succeeded and has added value to the community. However, there is always room
for improvement. In no particular order:
* •
Unlike other challenges in high energy physics such as the top tagging
competition Kasieczka:2019dbj and the challenges on the Kaggle platform like
the HiggsML Challenge pmlr-v42-cowa14 , the Flavours of Physics Challenge
flavorofphyiscs , and the TrackML Challenge Amrouche:2019wmx , there was no
single metric for determining a winner, and therefore it was not possible to
directly compare methods. (See Rousseau:2020rnz for a recent overview of
these competitions.) This is similar to the correlation aspect of the Flavours
of Physics Challenge and the efficiency-versus-fake-rate aspect of the TrackML
challenge, but it is even more acute for the LHC Olympics, in part because the
estimation of the false positive rate is non-trivial.
* •
Without a platform like Kaggle that offers broad exposure and a monetary
prize, few ML experts outside of HEP participated in the LHC Olympics.
Additionally, accessibility to non-experts could be improved. Code to read in
the data and cluster jets was provided to participants, but the fact that
nearly every group first performed additional dimensionality reduction
suggests that further guidance or tooling could have been useful.
* •
One of the biggest difficulties with selecting the Black Boxes was that the
anomalies should be easy enough to find that the challenge is doable, but not
so easy that one could find them without new methods. Some checks were
performed before releasing the Black Boxes, but with significant work, this
process could have been made more robust and streamlined.
There are many possibilities for variations on the LHC Olympics 2020.
Additional signal models could be considered as black boxes and more signal
topologies could be studied including final state leptons, heavy flavor
quarks, and long-lived particles. We look forward to the deployment and impact
of new methods developed from the LHC Olympics 2020 as well as future
iterations.
## 8 Conclusions
Given the current lack of convincing evidence for new fundamental particles or
new forces of nature from HEP experiments, it is critical that the program of
dedicated searches be complemented with more model agnostic methods. While
there has been a long history of signal model agnostic methods based on binned
data-simulation comparisons, there has been a recent explosion of new ideas
for less model dependent approaches. Many of the new proposals make use of
machine learning to aid in the less-than-supervised exploration of collider
data. The methods presented in this paper represent a snapshot of the rapidly
developing area of machine learning for anomaly detection in HEP (see Ref.
livingreview for a more up-to-date list of papers in this area).
To address this challenge, we introduced the LHC Olympics, a community effort
to develop and test anomaly detection methods in a relatively realistic
setting. A set of datasets was produced to emulate the typical situation where
data are unlabeled but a corresponding labeled dataset is available for
research and development. In the LHC Olympics, three black boxes were the
analog of
collider data, each with a different SM background simulation and a different
potential anomaly. Many teams developed and implemented a variety of
techniques on these datasets covering at least 18 different methods (some
submissions compared multiple distinct methods).
In addition to results with the R&D dataset, many teams deployed their
techniques on the black boxes. At the Winter and Summer Olympics workshops,
teams presented their results on these boxes before even knowing the nature of
the signal in the datasets analyzed. While some strategies were closer to the
correct answer than others, every team followed the scientific method and
gained valuable insight and experience. In several cases, strategies were
refined between the two workshops using feedback from the unveiling of the
first black box. Many of these strategies continue to be refined as they are
prepared for the application to collider data in the near future.
These methods use a variety of unsupervised, semisupervised, and fully
supervised machine learning methods based on neural networks and other
approaches. While there are unique advantages and disadvantages to each
method, there are also common challenges across techniques, such as scaling to
higher dimensions. The ultimate performance is likely to include a combination
of approaches, and new method development will be required to reach the full
physics potential of the data.
A data-driven revolution has started with machine learning as its catalyst. We
are well-equipped to explore the complex LHC data in new ways with immense
potential for discovery. The Run 2 data collection is over, but our
exploration of these precious collisions in their natural high dimensionality
is only beginning. This LHC Olympics has been a starting point for a new
chapter in collider physics that will produce exciting physics results from
the current datasets as well as from the datasets of the future at the LHC and
beyond.
## Acknowledgments
We thank the organizers and participants in the ML4Jets2020 workshop hosted at
New York University and at the anomaly detection workshop hosted (virtually)
by the University of Hamburg for many interesting discussions at the Winter
and Summer Olympics, respectively. B. Nachman and G. Kasieczka are grateful to
the NHETC Visitor Program at Rutgers University for the generous support and
hospitality during the spring of 2019 where the idea for the LHC Olympics 2020
was conceived.
A. Kahn, J. Gonski, D. Williams, and G. Brooijmans are supported by the
National Science Foundation (NSF) under Grant No. PHY-2013070. I. Ochoa is
supported by the fellowship LCF/BQ/PI20/11760025 from “la Caixa” Foundation
(ID 100010434) and by the European Union’s Horizon 2020 research and
innovation programme under the Marie Skłodowska-Curie grant agreement No
847648. S. E. Park, S. Udrescu, M. Yunus, P. Harris are supported by the NSF
Grants #1934700 and #1931469. Cloud credits for training were supported by the
Internet2/NSF Grant #190444. V. Mikuni and F. Canelli are supported in part by
the Swiss National Science Foundation (SNF) under contract No. 200020-182037.
F. F. Freitas is supported by the Center for Research and Development in
Mathematics and Applications (CIDMA) through the Portuguese Foundation for
Science and Technology (FCT - Fundação para a Ciência e a Tecnologia),
references UIDB/04106/2020 and UIDP/04106/2020 and the project PTDC/FIS-
PAR/31000/2017. C. K. Khosa is supported by the Royal Society, UK under the
Newton International Fellowship programme (NF171488). K. Benkendorfer was
supported in part by NSF PHY REU Grant 1949923. B. Bortolato, B. Dillon, A.
Matevc, J. Kamenik, A. Smolkovic acknowledge the financial support from the
Slovenian Research Agency (research core funding No. P1-0035 and J1-8137). D.
A. Faroughy is supported by SNF under contract 200021-159720. M. Szewc would
like to thank the Jozef Stefan Institute for its enormous hospitality. P.
Komiske, E. Metodiev, N. Sarda, and J. Thaler are supported by the Office of
Nuclear Physics of the U.S. Department of Energy (DOE) under grant DE-
SC-0011090 and by the DOE Office of High Energy Physics under grant DE-
SC0012567. N. Sarda was additionally supported by the QCRI-CSAIL Computer
Research Program. P. Komiske, E. Metodiev, N. Sarda, and J. Thaler are
grateful to Benjamin Nachman and Justin Solomon for helpful conversations. B.
Nachman and J. Collins were supported by the DOE under contracts DE-
AC02-05CH11231 and DE-AC02-76SF00515, respectively. P. Martín-Ramiro
acknowledges Berkeley LBNL, where part of this work has been developed. P.
Martín-Ramiro further acknowledges support from the Spanish Research Agency
(Agencia Estatal de Investigación) through the contract FPA2016-78022-P and
IFT Centro de Excelencia Severo Ochoa under grant SEV-2016-0597. P. Martín-
Ramiro also received funding/support from the European Union’s Horizon 2020
research and innovation programme under the Marie Skłodowska-Curie grant
agreement No 690575 (RISE InvisiblesPlus). S. Tsan, J. Duarte, J.-R. Vilmant,
and M. Pierini thank the University of California San Diego Triton Research
and Experiential Learning Scholars (TRELS) program for supporting this
research, CENIC for the 100 Gbps networks, and Joosep Pata for helpful
discussions. They are additionally supported in part by NSF awards
CNS-1730158, ACI-1540112, ACI-1541349, OAC-1826967, the University of
California Office of the President, and the University of California San
Diego’s California Institute for Telecommunications and Information
Technology/Qualcomm Institute. J. Duarte is supported by the DOE, Office of
Science, Office of High Energy Physics Early Career Research program under
Award No. DE-SC0021187. M. Pierini is supported by the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
program (Grant Agreement No. 772369). J-R. Vilmant is partially supported by
the European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation program (Grant Agreement No. 772369) and by the DOE,
Office of Science, Office of High Energy Physics under Award No. DE-SC0011925,
DE-SC0019227, and DE-AC02-07CH11359. D. Shih is supported by DOE grant DOE-
SC0010008. GK acknowledges support by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation) under Germany’s Excellence Strategy – EXC
2121 “Quantum Universe” – 390833306.
# Observations and predictions from past lightcones
Martin Lesourd Black Hole Initiative, Harvard University
<EMAIL_ADDRESS>
###### Abstract.
In a general Lorentzian manifold $M$, the past lightcone of a point is a
proper subset of $M$ that does not carry enough information to determine the
rest of $M$. That said, if $M$ is a globally hyperbolic Cauchy development of
vacuum initial data on a Cauchy surface $S$ and there is a point whose past
lightcone contains $S$, then the contents of such a lightcone determines all
of $M$ (up to isometry). We show some results that describe what properties of
$M$ guarantee that past lightcones do indeed determine all or at least
significant portions of $M$. Null lines and observer horizons, which are well
known features of the de-Sitter spacetime, play a prominent role.
## 1\. Introduction
In Lorentzian geometry, an observer at a given time in a spacetime $(M,g)$ is
represented by a timelike curve with future endpoint $p\in M$, and the past
lightcone $J^{-}(p)\subset M$ of $p$ represents all signals in $M$ that can
reach the observer at $p$. One can then ask the following.
1. (A)
Can an observer at $p\in M$ know the global structure of $M$ on the basis of
$J^{-}(p)$?
2. (B)
Can an observer at $p\in M$ make predictions about $M\backslash J^{-}(p)$ on
the basis of $J^{-}(p)$?
In this short note, we describe some of what is known about (A) and (B), we
prove various further results, and we list some natural further questions.
Part of the appeal in (A) and (B) is that they are subject to somewhat
surprising examples. As described below, two (inextendible and globally
hyperbolic) spacetimes $(M^{\prime},g^{\prime})$ and $(M,g)$ can be non-
isometric, in spite of the fact that each member of the countable collection
of past lightcones $\{I^{-}(p_{i})\}$ that covers $M$ can be isometrically
embedded into $(M^{\prime},g^{\prime})$ and likewise with $M$ and $M^{\prime}$
interchanged. Here we will show when this cannot happen.
We now recall some basic definitions of causal theory, cf. [1], [14], [16] for
some classic references and [15] for a more recent authoritative survey.
A spacetime $(M,g)$ is a connected $C^{\infty}$ Hausdorff manifold $M$ of
dimension two or greater with a Lorentzian metric $g$ of signature
$(-,+,+,...)$, and we will assume a time and space orientation. Since
regularity is not the issue here, for simplicity of expression we take $g$ to
be smooth, but many of the arguments can be extended to lower regularity.
The lightcone structure inherited on the tangent space $T_{p}M$ at each $p$
leads to the notion of a causal curve, which in turn leads to defining
$J^{-}(p)$ (or $I^{-}(p)$) as the collection of all points $q\in M$ from which
there exists a causal (or timelike) curve with future endpoint $p$ and past
endpoint $q$. A causal curve between $p$ and $q\in J^{+}(p)$ is achronal iff
$q\notin I^{+}(p)$. A null line is an inextendible achronal causal curve.
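For concreteness, recall the standard Minkowski picture: in $(\mathbb{R}^{1,n},\eta)$ with $p=(t_{p},\vec{x}_{p})$ one has
$J^{-}(p)=\{(t,\vec{x}):t_{p}-t\geq|\vec{x}-\vec{x}_{p}|\},\qquad I^{-}(p)=\{(t,\vec{x}):t_{p}-t>|\vec{x}-\vec{x}_{p}|\},$
and every inextendible null geodesic is achronal, hence a null line.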
A set is achronal (acausal) iff no two members of it can be connected by a
timelike (causal) curve. The domain of dependence $D(S)$ of a set $S\subset M$
is given by $D(S)=D^{+}(S)\cup D^{-}(S)$ and $D^{+}(S)$ is defined as the
collection of all points $q$ in $M$ such that any inextendible ($C^{1}$) past
directed causal curve passing through $q$ intersects $S$. $\tilde{D}(S)$ is
defined identically except that curves are timelike, rather than causal. From
the perspective of the Cauchy problem of general relativity, $D^{+}(S)$
represents the maximal portion of $M$ that could be determined by initial data
on $S$. If $S$ is closed as a subset of $M$ then
$\overline{D^{+}(S)}=\tilde{D}^{+}(S)$. The future Cauchy horizon is defined
as $H^{+}(S)\equiv\overline{D^{+}(S)}\backslash I^{-}(D^{+}(S))$ and
$H(S)=H^{+}(S)\cup H^{-}(S)$.
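As a standard worked example in $(\mathbb{R}^{1,n},\eta)$, take the closed unit ball $S=\{t=0,\ |\vec{x}|\leq 1\}$. Then
$D^{+}(S)=\{(t,\vec{x}):0\leq t,\ |\vec{x}|+t\leq 1\},\qquad H^{+}(S)=\{(t,\vec{x}):t=1-|\vec{x}|,\ 0\leq t\leq 1\},$
since a past directed causal curve starting above the null cone $t=1-|\vec{x}|$ can reach $t=0$ outside of $S$.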
$S$ is a partial Cauchy hypersurface if it is an edgeless acausal set, cf.
Definition 14.27 of [1] for the definition of $\text{edge}(S)$ for an achronal
set $S$. A spacetime $(M,g)$ is globally hyperbolic iff there exists a partial
Cauchy hypersurface $S$ such that $M=D(S)$. By [2], a smooth globally
hyperbolic spacetime $(M,g)$ is isometric to $(\mathbb{R}\times
S,-f(t)dt^{2}+h(t))$ where $f(t)$ is a smooth positive function and $h(t)$ a Riemannian metric on
$S$. From the perspective of causal structure, global hyperbolicity is
equivalent to the spacetime being causal (i.e., admitting no closed causal curves) and
$J^{+}(p)\cap J^{-}(q)$ being compact for all $p,q\in M$.
If $S$ is acausal and $S\cap\text{edge}(S)=\emptyset$ (e.g., $S$ is a partial
Cauchy hypersurface), then $D(S)$ is non-empty, open, and $D(S)\cap
H(S)=\emptyset$. Note that the openness of $D(S)$ may be ruined if we take $S$
to be achronal rather than acausal.
A spacetime $(M,g)$ is causally simple iff it is causal and $J^{+(-)}(p)$ is
closed for all $p\in M$. Global hyperbolicity is strictly stronger than causal
simplicity.
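A standard pair of examples shows that the gap is strict: removing the origin from two-dimensional Minkowski spacetime leaves a causal spacetime which is not causally simple, since for $p=(-1,-1)$ the point $(1,1)$ lies in $\overline{J^{+}(p)}\backslash J^{+}(p)$ (the only causal curve connecting them would be the null ray through the deleted origin), while anti-de-Sitter spacetime is causally simple but not globally hyperbolic.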
An isometric embedding of $(M,g)$ into $(M^{\prime},g^{\prime})$ is an
injective (but not necessarily surjective) map $\phi:M\hookrightarrow
M^{\prime}$ such that $\phi$ is a diffeomorphism onto its image $\phi(M)$ and
$\phi^{*}(g^{\prime})=g$. If $\phi$ maps $M$ surjectively onto $M^{\prime}$
then we write $\phi:M\to M^{\prime}$ and we say that $(M,g)$ and
$(M^{\prime},g^{\prime})$ are isometric. A spacetime $(M,g)$ is inextendible
when there exists no isometric embedding $\phi$ of $M$ into $M^{\prime}$ such
that $M^{\prime}\backslash\phi(M)\neq\emptyset$.
A spacetime $(M^{\prime},g^{\prime})$ is future holed if there are a partial Cauchy
hypersurface $S^{\prime}\subset M^{\prime}$, a spacetime $(M,g)$, and an isometric
embedding $\phi:\tilde{D}(S^{\prime})\hookrightarrow M$ such that $\phi(S^{\prime})$ is
acausal in $(M,g)$ and $\phi(H^{+}(S^{\prime}))\cap D^{+}(\phi(S^{\prime}))\neq\emptyset$.
A spacetime is hole-free if it lacks future and past holes. Minguzzi [12] shows
that causally simple, inextendible spacetimes are hole-free.
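A simple instance of a future holed spacetime: let $(M^{\prime},g^{\prime})$ be Minkowski spacetime with the point $(1,\vec{0})$ removed, let $S^{\prime}=\{t=0\}$, and let $\phi$ be the inclusion of $\tilde{D}(S^{\prime})$ into full Minkowski spacetime $(M,g)$. In $M^{\prime}$ we have $H^{+}(S^{\prime})\neq\emptyset$, since past inextendible causal curves running into the puncture never reach $S^{\prime}$, whereas in $M$ the image $\phi(H^{+}(S^{\prime}))$ lies inside $D^{+}(\phi(S^{\prime}))=\{t\geq 0\}$, so the defining condition is met.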
Acknowledgements. We thank the Gordon and Betty Moore Foundation and John
Templeton Foundation for their support of Harvard’s Black Hole Initiative. We
thank Professors Erik Curiel, JB Manchak, Ettore Minguzzi, Chris Timpson, and
James Weatherall for valuable comments that improved the paper.
## 2\. Previous Work
(A). A natural definition coming from [8] (whose terminology is slightly
different) is the following.
###### Definition 2.1.
A spacetime $(M,g)$ is weakly observationally indistinguishable from
$(M^{\prime},g^{\prime})$ just in case there is an isometric embedding
$\phi:I^{-}(p)\hookrightarrow M^{\prime}$ for every point $p\in M$. If this is
also true with $M$ and $M^{\prime}$ interchanged, then the spacetimes are
strongly observationally indistinguishable.
One could replace $I^{-}(p)$ with $J^{-}(p)$, and although this makes no real
difference to any of the results in [8] and [9] or indeed to what follows,
that would capture a somewhat more honest sense of indistinguishability because
observers can certainly receive signals from $J^{-}(p)\backslash I^{-}(p)$.
One can also take observers to be inextendible timelike curves $\sigma$ (as is
also done in [8]), but in that case $J^{-}(\sigma)\subset M$ would represent
“all possible observations that could be made supposing that the observer
lasts forever with respect to $M$”, which, though interesting, is stronger
than what happens in practice.
Malament (pp. 65–66 of [8]) gives examples of spacetimes that are strongly
observationally indistinguishable but non-isometric. His examples are globally
hyperbolic, inextendible, and exploit the presence of observer horizons in the
de-Sitter spacetime.
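To recall the mechanism behind such horizons: global de-Sitter spacetime can be written as $M=\mathbb{R}\times S^{n}$ with
$g=-dt^{2}+\cosh^{2}(t)\,g_{S^{n}},$
and in conformal time $\eta$, defined by $dt=\cosh(t)\,d\eta$, the whole spacetime corresponds to $\eta\in(-\pi/2,\pi/2)$. Since null geodesics on the unit round $S^{n}$ sweep out an angle equal to the elapsed conformal time, the past lightcone of a point $p$ meets the Cauchy surface at time $t$ in a ball of angular radius $\eta(t_{p})-\eta(t)<\pi$; no observer's past lightcone ever contains an entire Cauchy surface.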
Based on an explicit cut and paste argument, Manchak [9] shows the following.
###### Proposition 2.2 (Manchak [9]).
Given any non-causally bizarre spacetime $(M,g)$ (a spacetime is causally bizarre
if there is a point $p\in M$ such that $M\subset I^{-}(p)$), there exists
a spacetime $(M^{\prime},g^{\prime})$ that is weakly observationally
indistinguishable from $(M,g)$ but not isometric to $(M,g)$.
Although Manchak’s construction of $(M^{\prime},g^{\prime})$ works for any
(non-causally bizarre) spacetime $(M,g)$, it relies on introducing a countably
infinite collection of holes in $(M^{\prime},g^{\prime})$. It is unknown to us
whether Proposition 2.2 holds for strong observational indistinguishability.
Viewed together, [8] and [9] lead one to the following.
Question. Find conditions $\{A,B,...\}$ satisfied by $(M,g)$ and
$(M^{\prime},g^{\prime})$ such that:
‘Weakly (or strongly) observationally indistinguishable $+A+B+...$’
$\Leftrightarrow$
‘$(M,g)$ and $(M^{\prime},g^{\prime})$ are isometric’
Proposition 3.6 and Corollary 3.11 below are in this direction.
(B). Geroch [7] defines prediction in general relativity as follows.
###### Definition 2.3.
$p\in M$ is predictable from $q$, written as $p\in P(q)$, iff $p\in D^{+}(S)$
for some closed, achronal set $S\subset J^{-}(q)$. $p\in M$ is verifiable iff
$p\in P(q)$ and $p\in I^{+}(q)\backslash J^{-}(q)$.
Manchak observes the following.
###### Proposition 2.4 (Manchak [10]).
If $P(q)\cap(I^{+}(q)\backslash J^{-}(q))\neq\emptyset$, then $(M,g)$ admits
an edgeless compact achronal set.
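For instance, an edgeless achronal set in Minkowski spacetime is the graph of a $1$-Lipschitz function defined on all of $\mathbb{R}^{n}$, hence never compact, so Proposition 2.4 gives $P(q)\cap(I^{+}(q)\backslash J^{-}(q))=\emptyset$ for every $q$: nothing is verifiable in Minkowski spacetime. Spacetimes with compact Cauchy surfaces are, by contrast, not ruled out.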
A slightly different notion of prediction considered in [10] is as follows.
###### Definition 2.5.
$p\in M$ is genuinely predictable from $q$, written $p\in\mathcal{P}(q)$, iff
$p\in P(q)$ and for all inextendible spacetimes $(M^{\prime},g^{\prime})$, if
there is an isometric embedding $\phi:J^{-}(q)\hookrightarrow M^{\prime}$,
then there is an isometric embedding $\phi^{\prime}:J^{-}(q)\cup
J^{-}(p)\hookrightarrow M^{\prime}$ such that $\phi=\phi^{\prime}_{\mid
J^{-}(q)}$.
The idea here is that genuine predictions guarantee that observers with the
same past make the same predictions. By a short cut-and-paste argument,
Manchak observes the following.
###### Proposition 2.6 (Manchak [10]).
Let $(M,g)$ be any spacetime and $q$ a point in $M$. Then
$\mathcal{P}(q)\subseteq\partial J^{-}(q)$.
Thus the domain of genuine predictions from $q$, if non-empty, is on the verge
of being a retrodiction.
## 3\. Some Observations
We assume that spacetimes satisfy the field equations
(3.1)
$G_{g}\equiv\text{Ric}_{g}-\frac{1}{2}g\>\text{Scal}_{g}=T_{\phi,...}-\Lambda\>g$
where $\Lambda\in\mathbb{R}$ is a constant, and with $T_{\phi,...}$ the
stress-energy tensor associated with possible matter fields $\{\phi,...\}$ in
$M$. (Here we think of the Cauchy problem from the perspective of ‘initial
data’, as opposed to ‘initial and boundary data’, though the latter is more
natural for $\Lambda<0$.) The system (3.1) leads to the formulation of a Cauchy
problem on a spacelike initial data set $S$. In the vacuum setting $T=0$,
$\Lambda=0$, the Cauchy problem was shown [3] to be well posed in the sense
that there exists a unique, up to isometry, maximal globally hyperbolic
development of $S$ obtained by Cauchy evolution of the initial data on $S$
according to (3.1) with $T=0$, $\Lambda=0$. This well posedness has been
extended to more general settings and we take it as a pre-condition for the
spacetimes we consider.
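For orientation, in the vacuum setting $T=0$, $\Lambda=0$ the initial data on $S$ consist of a Riemannian metric $h$ and a symmetric $2$-tensor $k$ (the prospective second fundamental form) subject to the standard constraint equations
$R_{h}+(\mathrm{tr}_{h}k)^{2}-|k|_{h}^{2}=0,\qquad\mathrm{div}_{h}k-d(\mathrm{tr}_{h}k)=0$
(up to sign conventions for $k$), and well posedness says in particular that any two maximal globally hyperbolic developments of $(S,h,k)$ are isometric.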
###### Definition 3.1.
Fix the constant $\Lambda$ and fix an expression for $T_{\phi,...}$. Given a
spacetime $(M,g)$ and an acausal edgeless connected set $S\subset M$, we say
that $(\tilde{D}(S),g|_{\tilde{D}(S)})$ is a faithful development if it is
uniquely determined, up to isometry, by Cauchy evolution of the initial data
on $S$ according to (3.1). If $(M,g)$ admits a connected acausal edgeless set,
we say that it is locally Cauchy if, for any connected acausal edgeless set
$S$, $(\tilde{D}(S),g|_{\tilde{D}(S)})$ is a faithful development.
###### Remark 3.2.
Note that this is slightly unorthodox in the sense that it is usually $D(S)$,
rather than $\tilde{D}(S)$, which we think of as being determined by $S$.
Given that $S$ is closed for the definition of locally Cauchy, we have
$\overline{D(S)}=\tilde{D}(S)$, and so being locally Cauchy is only slightly
stronger than asking for $D(S)$ to be determined up to isometry.
###### Remark 3.3.
Since we want to guarantee isometric embeddings, we want to rule out examples
of regions that are globally hyperbolic but not determined by Cauchy
evolution. To see a trivial example built from smooth conformal
transformations $\Omega$ (examples like this suggest the following problem:
given a manifold $M$ with an arbitrary geodesically complete Lorentzian metric
$g$, what conditions on $M$ and $g$ make it possible to solve for a function
$\Omega:M\to\mathbb{R}$ such that $(M,\Omega^{2}g)$ is complete and vacuum?),
start with Minkowski spacetime $(\mathbb{R}^{1,n},\eta)$, identify some open
set $O\subset\mathbb{R}^{1,n}$ lying above $t=0$, and modify it by a conformal
transformation $\eta\to\Omega^{2}\eta$. In that case, in spite of
$M\backslash O$ being vacuum, $O$ will in general have a non-vanishing
Einstein tensor $G_{g}$. But since there are Cauchy hypersurfaces (in
$M\backslash O$) for $(\mathbb{R}^{1,n},\Omega^{2}\eta)$ which are exactly
flat (in the language of the constraints, initial data sets of the form
$(\mathbb{R}^{n},g_{E},0)$), $(\mathbb{R}^{1,n},\Omega^{2}\eta)$ is not
locally Cauchy if it is not isometric to $(\mathbb{R}^{1,n},\eta)$.
We also make use of the following.
###### Definition 3.4.
Given a partial Cauchy hypersurface $S$ in a spacetime $M$ we say that an open
subset of $M$ is an $\epsilon$-development of $S$, denoted
$D_{\epsilon}(S)\,(\supset S)$, if there exists an $\epsilon>0$ such that
$D_{\epsilon}(S)$ admits a Cauchy surface $S_{\epsilon}$ every point of which
lies at distance $\geq\epsilon$ in the future of $S$, as measured by a
normalized timelike vector field orthogonal to $S$.
###### Remark 3.5.
The non-empty interior of a causal diamond $J^{+}(p)\cap J^{-}(q)$ with
$p,q\in(\mathbb{R}^{1,n},\eta)$ is not an $\epsilon$-development because its
Cauchy surfaces are anchored at $\partial J^{+}(p)\cap\partial J^{-}(q)$.
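A simple positive example, by contrast: in Minkowski spacetime with $S=\{t=0\}$, the slab $D_{\epsilon}(S)=\{-\epsilon<t<2\epsilon\}$ is an $\epsilon$-development, with Cauchy surface $S_{\epsilon}=\{t=\epsilon\}$ whose points all lie at proper time $\epsilon$ to the future of $S$ along the unit normal $\partial_{t}$.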
We now observe the following.
###### Proposition 3.6.
Let $(M,g)$ and $(M^{\prime},g^{\prime})$ be inextendible and locally Cauchy.
Suppose that
1. (i)
$(M,g)$ has a compact Cauchy surface and no null lines,
2. (ii)
$(M^{\prime},g^{\prime})$ is causal and hole-free.
Then $(M,g)$ and $(M^{\prime},g^{\prime})$ are isometric iff they are weakly
observationally indistinguishable.
Note that by the examples in [8], Proposition 3.6 is false without the
assumption that $(M,g)$ lacks null lines.
In either the weak or the strong case of observational indistinguishability,
it would be interesting to settle whether the compactness in (i) is necessary,
cf. Proposition 3.13 below.
###### Proof.
The proof of Proposition 3.6 starts by strengthening Theorem 1 of [6]. (In
[6], the authors assume that $(M,g)$ is null geodesically complete and
satisfies the null energy condition and the null generic condition; these
assumptions imply the absence of null lines. It was then observed by Galloway
that one can prove the theorem by instead assuming that $(M,g)$ lacks null
lines, cf. the footnote to Theorem 1 of [6], but since those details never
appeared we include them here for completeness.)
###### Lemma 3.7.
Let $(M,g)$ be a spacetime without null lines. Then given any compact region
$K$, there exists another compact $K^{\prime}\supset K$ such that if
$p,q\notin K^{\prime}$ and $q\in J^{+}(p)-I^{+}(p)$, then any causal curve
$\gamma$ connecting $p$ to $q$ cannot intersect $K$.
###### Proof.
Suppose otherwise: there is a compact set $K_{0}$ for which no compact
$K^{\prime}\supset K_{0}$ has the stated property. Consider an increasing
sequence of compact sets $K_{0}\subset K_{1}\subset K_{2}\subset\cdots$
exhausting $M$. By assumption, each $K_{i}$ admits horismos-related (the
future horismos of $p$ is defined as $J^{+}(p)\backslash I^{+}(p)$) outer
points $p_{i},q_{i}\notin K_{i}$, $q_{i}\in J^{-}(p_{i})-I^{-}(p_{i})$, that
are connected by a causal curve, necessarily achronal, intersecting $K_{0}$.
As the compact sets grow, these causal curves become longer in the sense of an
auxiliary Riemannian metric, while all of them intersect $K_{0}$. By
compactness of $K_{0}$, the intersection points with $K_{0}$ accumulate at
some limit point in $K_{0}$. By a standard limit curve argument, cf.
Proposition 3.1 of [1], an inextendible limit curve passes through this limit
point; the limit curve is also straightforwardly seen to be achronal (see [11]
for significantly stronger limit curve statements). Thus we have a null line,
a contradiction. ∎
With the same proof as in Corollary 1 of [6], Lemma 3.7 implies the following
lemma, which strengthens the main result of [6].
###### Lemma 3.8.
Let $(M,g)$ be a spacetime with compact Cauchy surface that does not admit any
null lines. Then $(M,g)$ admits a point $p\in M$ such that $S\subset I^{-}(p)$
for some Cauchy surface $S$.
###### Proof.
We include this for completeness, but this argument is exactly as in [6]
except that Lemma 3.7 plays the role of Theorem 1 of [6]. Since $(M,g)$ is
globally hyperbolic, there exists a continuous global time function
$t:M\to\mathbb{R}$, such that each surface of constant $t$ is a Cauchy
surface. Let $K=\Sigma$ and $K^{\prime}$ be as in Lemma 3.7. Let $t_{1}$ and
$t_{2}$ denote, respectively, the minimum and maximum values of $t$ on
$K^{\prime}$. Let $\Sigma_{1}$ be any Cauchy surface with $t<t_{1}$ and let
$\Sigma_{2}$ denote the Cauchy surface $t=t_{2}$. Let $q\in
I^{+}(\Sigma_{2})$, $p\in\Sigma_{1}$ and suppose that $p\in\partial I^{-}(q)$.
Since $(M,g)$ is globally hyperbolic, $J^{-}(q)$ is closed and so $p\in
J^{-}(q)\backslash I^{-}(q)$ and thus there is a causal curve connecting $p$
and $q$. It follows from Lemma 3.7 that this causal curve does not intersect
$\Sigma$. However, this contradicts the fact that $\Sigma$ is a Cauchy
surface. Consequently, there cannot exist a $p\in\Sigma_{1}$ such that
$p\in\partial I^{-}(q)$, i.e., $\partial I^{-}(q)\cap\Sigma_{1}=\emptyset$.
But $I^{-}(q)$ is open and since $\partial I^{-}(q)\cap\Sigma_{1}=\emptyset$,
the complement of $I^{-}(q)$ in $\Sigma_{1}$ is also open. Since
$I^{-}(q)\cap\Sigma_{1}\neq\emptyset$ and $\Sigma_{1}$ is connected, this
implies $\Sigma_{1}\subset I^{-}(q)$. ∎
We now finish the proof of Proposition 3.6. By Lemma 3.8 there is a point
$p\in M$ with $S\subset I^{-}(p)$, where $S$ is a Cauchy surface of $M$. By
weak observational indistinguishability, there is an isometric embedding
$\phi:I^{-}(p)\hookrightarrow M^{\prime}$.
$\phi(S)$ is compact. Because $\phi$ is a diffeomorphism onto its image, the
restriction $\phi|_{S}$ is a diffeomorphism of $S$ onto its image, and thus
$\phi(S)$ is compact in $M^{\prime}$.
$\phi(S)$ is achronal. Suppose otherwise that $\gamma^{\prime}$ is a past
directed timelike curve from $x^{\prime}$ to $y^{\prime}$ with
$x^{\prime},y^{\prime}\in\phi(S)$. Extend $\gamma^{\prime}$ to
$\sigma^{\prime}$ so that $\sigma^{\prime}$ is a past directed timelike curve
from $q^{\prime}$ to $x^{\prime}$ to $y^{\prime}$. Note that the isometric
embedding forces $\phi(S)$ to be locally achronal; that is, there is an open
neighborhood $O$ around $S$ with two connected boundary components
$\partial_{+}O(\subset I^{+}(S))$ and $\partial_{-}O$ (labelled using the time
orientation in $M$), such that no two distinct points of $S$ can be joined by
a timelike curve in $O$. Since $O$ isometrically embeds into $M^{\prime}$, a
locally achronal neighborhood $O^{\prime}$ exists around $\phi(S)$ in
$M^{\prime}$. As such, the curve $\sigma^{\prime}$ must leave $O^{\prime}$
through $\partial_{-}O^{\prime}$ and re-enter $O^{\prime}$ via either
$\partial_{-}O^{\prime}$ or $\partial_{+}O^{\prime}$.
In the former case, we can build a closed piecewise smooth timelike curve from
$q^{\prime}$ and back, which violates causality of $M^{\prime}$.
In the latter case, we will obtain a contradiction with the inextendibility of
$M$. Since $I^{-}(S)\subset D^{-}(S)\subset I^{-}(p)$, we must have that
$\sigma^{\prime}$ leaves $\phi(D^{-}(S))$, say at some point
$r^{\prime}\in\partial\phi(D^{-}(S))$, and re-enter $\phi(I^{-}(p))\cap
I^{+}(\phi(S))$. Since the global hyperbolicity of $M$ implies that every
future directed timelike curve in $\phi(I^{-}(p))$ must eventually leave
$\phi(I^{-}(p))$ when sufficiently extended in the future direction, we can
take the re-entry point to lie on the ‘future’ boundary of $\phi(I^{-}(p))$;
that is, the endpoint of a timelike curve whose $\phi^{-1}$ pre-image
has endpoint on $\partial I^{-}(p)$. Now consider an open neighborhood
$Z^{\prime}$ of $r^{\prime}$ in $M^{\prime}\cap\partial\phi(D^{-}(S))$.
Consider the open subset $\phi(I^{-}(p))\cup Z^{\prime}$ of $M^{\prime}$. Now
define a new spacetime $M^{\prime\prime}$ by $\phi(I^{-}(p))\cup
Z^{\prime}\cup J^{+}(\partial I^{-}(p))$. We know this can be done because the
global hyperbolicity of $M$ and the locally Cauchy property of $M$ and
$M^{\prime}$ imply that the regions $\overline{I^{-}(p)}$ and
$\overline{\phi(I^{-}(p))}\backslash\partial\phi(D^{-}(S))$ are isometric. We
now have a spacetime $M^{\prime\prime}$ into which $M$ can be isometrically
embedded as a proper subset (in virtue of the extra $Z^{\prime}$ beyond
$\phi(D^{-}(S))$), contradicting the inextendibility of $M$.
$\phi(S)$ is edgeless. Compactness of $S$ and achronality of $\phi(S)$
straightforwardly imply that $\phi(S)$ is edgeless.
###### Remark 3.9.
At this point if $(M^{\prime},g^{\prime})$ is assumed globally hyperbolic,
$\phi(S)$ being an edgeless compact connected achronal set means that
$\phi(S)$ can be taken to be a Cauchy surface of $M^{\prime}$. In that case,
both $(M,g)$ and $(M^{\prime},g^{\prime})$ are representatives of the unique,
up to isometry, maximal globally hyperbolic development of $S$, and are thus
isometric. We will instead show that $(M^{\prime},g^{\prime})$ is globally
hyperbolic.
In $(M,g)$ we can consider an $\epsilon$-development $D_{\epsilon}(S)\subset
M$. Within $D_{\epsilon}(S)$, we can then find an acausal hypersurface
$S_{\epsilon}$ which is still Cauchy for $M$. We know that
$D_{\epsilon}(S)\subset M$ isometrically embeds in $M^{\prime}$ as a small
neighborhood around $\phi(S)$. The image $\phi(S_{\epsilon})$ of
$S_{\epsilon}$, now denoted $S^{\prime}_{\epsilon}$, is acausal in
$M^{\prime}$ (by a causal version of the argument for the achronality of
$\phi(S)$) and edgeless. We now have partial Cauchy surfaces $S_{\epsilon}$
and $S^{\prime}_{\epsilon}$ in $M$ and $M^{\prime}$ respectively.
We have the inclusion $D^{-}(S^{\prime}_{\epsilon})\supseteq
M^{\prime}\backslash I^{+}(S^{\prime}_{\epsilon})$. Since $(M,g)$ is globally
hyperbolic and inextendible, we have $D^{-}(S_{\epsilon})\supseteq
J^{-}(S_{\epsilon})$. Since $J^{-}(S_{\epsilon})$ isometrically embeds into
$(M^{\prime},g^{\prime})$, if $D^{-}(S^{\prime}_{\epsilon})$ fails to cover
$M^{\prime}\backslash I^{+}(S^{\prime}_{\epsilon})$, then
$S^{\prime}_{\epsilon}$ must have a past Cauchy horizon
$H^{-}(S^{\prime}_{\epsilon})\neq\emptyset$ in $M^{\prime}$. In that case, by
the locally Cauchy property we can isometrically embed $S^{\prime}_{\epsilon}$
and $\tilde{D}^{-}(S^{\prime}_{\epsilon})$ back into $M$ using $\phi^{-1}$,
and by the global hyperbolicity and inextendibility of $M$, we have that
$D^{-}(\phi^{-1}(S^{\prime}_{\epsilon}))\supset\phi^{-1}(H^{-}(S^{\prime}_{\epsilon}))$,
which contradicts the past hole-freeness of $(M^{\prime},g^{\prime})$.
We have the inclusion $D^{+}(S^{\prime}_{\epsilon})\supseteq
M^{\prime}\backslash I^{-}(S^{\prime}_{\epsilon})$. This follows by the same
argument.
Since we now have $D(S_{\epsilon})=M$ and
$D(S^{\prime}_{\epsilon})=M^{\prime}$, the conclusion follows from the locally
Cauchy property. ∎
In view of the role of null lines, we note the rigidity theorem of Galloway-
Solis [5].
###### Theorem 3.10 (Galloway-Solis [5]).
Assume that the $4$-dimensional spacetime $(M^{4},g)$
1. (i)
satisfies (3.1) with $T=0$ (generalizations to Einstein-Maxwell are possible)
and $\Lambda>0$,
2. (ii)
is asymptotically de-Sitter (cf. [5] for definitions of asymptotically
de-Sitter and the associated hypersurfaces $\mathcal{J}^{+(-)}$ in that context),
3. (iii)
is globally hyperbolic,
4. (iv)
admits a null line with endpoints on $\mathcal{J}^{+}$ and
$\mathcal{J}^{-}$.
Then $(M^{4},g)$ isometrically embeds as an open subset of the de-Sitter
spacetime containing a Cauchy surface.
Together, Proposition 3.6 and Theorem 3.10 imply the following.
###### Corollary 3.11.
Given two $4$-dimensional spacetimes $(M,g)$ and $(M^{\prime},g^{\prime})$
assume that
1. (i)
$(M,g)$ and $(M^{\prime},g^{\prime})$ satisfy (3.1) with $T=0$ and
$\Lambda>0$,
2. (ii)
$(M,g)$ is inextendible and has a compact Cauchy surface,
3. (iii)
$(M,g)$ is asymptotically de-Sitter but not isometric to de-Sitter,
4. (iv)
$(M^{\prime},g^{\prime})$ is inextendible, causal and hole-free.
Then $(M,g)$ and $(M^{\prime},g^{\prime})$ are isometric iff they are weakly
observationally indistinguishable.
Note that the assumptions of Corollary 3.11 just fall short of astrophysical
relevance on account of the assumption that $(M,g)$ be past asymptotically
de-Sitter, which is not supported by current data. A more desirable statement
would be welcome ([13] contains results precluding the existence of null lines
based on astrophysically interesting assumptions).
(B). We say that a spacetime $(M,g)$ is Cauchy friendly if it is weakly
locally Cauchy (weakly locally Cauchy replaces $\tilde{D}(S)$ with $D(S)$ in
Definition 3.1) and there are no points $p\in M$ such that
$J^{-}(p)\supseteq M$. We now show
the following, which guarantees that genuine predictions extend a little
beyond what is suggested in Proposition 2.6.
###### Proposition 3.12.
Given two Cauchy friendly spacetimes $(M,g)$ and $(M^{\prime},g^{\prime})$,
assume that
1. (i)
there is a partial Cauchy surface $S\subset J^{-}(q)\subseteq D(S)$ for some
point $q\in M$,
2. (ii)
there is an isometry $\phi:J^{-}(q)\to J^{-}(q^{\prime})$.
Then there is an isometric embedding $\psi:A\hookrightarrow M^{\prime}$ for
some $A\supsetneq J^{-}(q)$ such that
* •
$\psi_{\mid J^{-}(q)}=\phi$,
* •
$A$ and $\psi(A)$ contain points in the domain of verifiable prediction of $q$
and $q^{\prime}$.
###### Proof.
First we make some basic observations, and throughout we denote
$S^{\prime}\equiv\phi(S)$.
We have $J^{-}(q)\subsetneq D(S)$. Since $J^{-}(q)$ lies in a globally
hyperbolic set $D(S)$, $J^{-}(q)$ is closed. Moreover, since $S$ is acausal
and edgeless, $D(S)$ must be open. From this it follows that the inclusion of
$J^{-}(q)\subseteq D(S)$ is strict. Suppose otherwise that $J^{-}(q)=D(S)$. In
that case $D(S)$ is both open and closed in $M$, and since $M$ is connected,
that implies $D(S)=M$. But then $M=J^{-}(q)$, in contradiction with Cauchy
friendly. Thus $J^{-}(q)$ is a closed proper subset of the open set $D(S)$.
$S^{\prime}$ is a partial Cauchy hypersurface. Since $\phi$ is a
diffeomorphism onto its image, we know that $S^{\prime}$ is compact. Unlike in
the arguments given in Proposition 3.6, $\phi(S)$ is acausal and edgeless by
the fact that $\phi$ is an isometry (as opposed to merely an embedding).
Indeed, since $S^{\prime}$ belongs to $J^{-}(q^{\prime})$, any causal curve
joining two points of $S^{\prime}$ would remain in $J^{-}(q^{\prime})$ and
pull back under $\phi^{-1}$ to a causal curve joining two points of $S$; so if
$S^{\prime}$ failed to be acausal in $M^{\prime}$, then $S$ would fail to be
acausal in $M$, contradicting (i). Thus $S^{\prime}$ is a partial Cauchy
surface in $M^{\prime}$.
We have $J^{-}(x^{\prime})\cap J^{+}(S^{\prime})\subseteq D^{+}(S^{\prime})$
for any $x^{\prime}\in\phi(J^{-}(q))$. We seek to show that any past
inextendible causal curve in $J^{-}(x^{\prime})$ with future endpoint
$x^{\prime}\in\phi(J^{-}(q))$ intersects $S^{\prime}$. By assumption, we have
an isometric embedding $\phi:J^{-}(q)\cap D^{+}(S)\hookrightarrow M^{\prime}$
where $S$ is a closed achronal set. Consider any point $x\in J^{-}(q)\cap
J^{+}(S)$ and set $x^{\prime}\equiv\phi(x)$. By the isometry $\phi$, we have $\phi(J^{-}(x))=J^{-}(x^{\prime})$.
Note first that by definition, there is a neighborhood of $0\in T_{x}M$ such
that the image under the exponential map of the past non-spacelike vectors in
that neighborhood is contained in $D^{+}(S)$. By the isometry $\phi$, the same is
true for $x^{\prime}$ with respect to $\phi(D^{+}(S))$, in particular there is
a neighborhood $U_{x^{\prime}}$ of $x^{\prime}$ such that $U_{x^{\prime}}\cap
J^{-}(x^{\prime})\subset\phi(D^{+}(S))$. Seeking a contradiction, suppose
there is a past inextendible causal curve $\gamma^{\prime}$ with future
endpoint $x^{\prime}$ that does not intersect $S^{\prime}$. By the
aforementioned property, at least a segment of $\gamma^{\prime}$ is contained in
$\phi(D^{+}(S))$. The curve defined by
$\gamma\equiv\phi^{-1}(\gamma^{\prime})$ is causal and ends at $x$, and is
thus entirely contained in $D^{+}(S)$. But then $\gamma$ does not intersect
$S$, which is a contradiction.
Similarly, we have $J^{-}(S^{\prime})\subseteq D^{-}(S^{\prime})$. This follows from
(ii), the isometry $\phi$, and the argument just above.
We have $J^{-}(q^{\prime})\subsetneq D(S^{\prime})$. The argument proceeds as
above: equality would force $M^{\prime}=J^{-}(q^{\prime})$, contradicting the
Cauchy friendliness of $M^{\prime}$.
We have two closed sets $J^{-}(q^{\prime})$ and $J^{-}(q)$, each strictly
contained in the open sets $D(S^{\prime})$ and $D(S)$, respectively. We now
seek to show the existence of $\psi$. Although there may be no isometric
embedding of $D(S)$ into $M^{\prime}$, we need only show that it is possible
to non-trivially extend $\phi$ beyond $J^{-}(q)$, so that its domain enters
$D(S)\backslash J^{-}(q)$.
Consider now the unique (up to isometry) maximal globally hyperbolic
development $X(S)$ of $S$, where $X(S)$ denotes one representative among all
isometric developments. By the isometry $\phi$, we know that $S$ and
$S^{\prime}$ are isometric as initial data sets, and thus that $X(S)$ is the
unique (up to isometry) maximal globally hyperbolic development of both $S$
and $S^{\prime}$. By locally Cauchy, it follows that both $D(S)$ and
$D(S^{\prime})$ can be isometrically embedded into $X(S)$ (note that we could
use a weaker version of locally Cauchy here, one that involves $D(S)$ rather
than $\tilde{D}(S)$). Denote these isometries by
$\rho:D(S)\hookrightarrow X(S)$ and
$\rho^{\prime}:D(S^{\prime})\hookrightarrow X(S)$.
It is obvious that $\rho(D(S))\cap\rho^{\prime}(D(S^{\prime}))\neq\emptyset$,
and since $\rho$ and $\rho^{\prime}$ are local diffeomorphisms, both
$\rho(D(S))$ and $\rho^{\prime}(D(S^{\prime}))$ are open in $X(S)$; moreover,
both $\rho(J^{-}(q))$ and $\rho^{\prime}(J^{-}(q^{\prime}))$ are closed in $X(S)$.
We now define the following set in $X(S)$:
$I\equiv[\rho(D(S))\backslash\rho(J^{-}(q))]\cap[\rho^{\prime}(D(S^{\prime}))\backslash\rho^{\prime}(J^{-}(q^{\prime}))]$
The openness of $\rho(D(S))$ and $\rho^{\prime}(D(S^{\prime}))$, together with
the fact that we may choose $\rho,\rho^{\prime}$ such that
$\rho(J^{-}(q))=\rho^{\prime}(J^{-}(q^{\prime}))$, means that the nonemptiness
of $D(S)\backslash J^{-}(q)$ and $D(S^{\prime})\backslash J^{-}(q^{\prime})$
carries over to the images, i.e.,
$\rho(D(S))\backslash\rho(J^{-}(q))\neq\emptyset$ and
$\rho^{\prime}(D(S^{\prime}))\backslash\rho^{\prime}(J^{-}(q^{\prime}))\neq\emptyset$.
It follows that $I\neq\emptyset$.
We can now identify the set $A\equiv\rho^{-1}[\rho(J^{-}(q))\cup I]$ as having
the desired properties, i.e. there exists an isometric embedding
$\psi:A\hookrightarrow M^{\prime}$ such that $\psi|_{J^{-}(q)}=\phi$. ∎
We can also consider what happens after lifting the compactness assumption on
$S$.
###### Proposition 3.13.
Let $S$ be a partial Cauchy hypersurface in $M$ and $D_{\epsilon}(S)$ an
$\epsilon$-development of $S$. Let $\phi:D_{\epsilon}(S)\hookrightarrow
M^{\prime}$ be an isometric embedding into a hole-free spacetime $M^{\prime}$.
Then either
* •
$\phi(S)$ is not acausal in $M^{\prime}$,
* •
or $\phi(S)$ is a partial Cauchy hypersurface in $M^{\prime}$.
In the latter case, if $M,M^{\prime}$ are locally Cauchy and $M^{\prime}$ is
inextendible, then there is an isometric embedding $\psi:D(S)\hookrightarrow
M^{\prime}$ with $\psi|_{D_{\epsilon}(S)}=\phi$.
Thus, granted basic assumptions like hole-freeness, locally Cauchy and
inextendibility, the only obstruction concerns the acausality of $\phi(S)$. It
may be that $\phi(S)$ is always acausal if $M^{\prime}$ satisfies some
causality assumption, e.g. causal simplicity or causal continuity (cf. pg. 59
of [1]).
Note also that $\phi(S)$ need not be a partial Cauchy hypersurface if we
replace $D_{\epsilon}(S)$ by ‘an open globally hyperbolic subset of $D(S)$’
(e.g., delete a half-space at $t=0$ from $(\mathbb{R}^{1,n},\eta)$).
###### Proof.
First we show the second statement. If $\phi(S)$ is a partial Cauchy
hypersurface, we know that $D(S)$ and $D(\phi(S))$ are both open subsets of
$M$ and $M^{\prime}$ respectively, and since $M,M^{\prime}$ are locally
Cauchy, $D(S)$ and $D(\phi(S))$ share a common (isometric) subset
extending beyond $D_{\epsilon}(S)$. Let $\mathcal{D}(S)$ denote the maximal
open globally hyperbolic subset of $D(S)$ for which there is an isometric
embedding $\psi^{\prime}:\mathcal{D}(S)\hookrightarrow M^{\prime}$.
Now suppose that $D(S)$ does not isometrically embed into $M^{\prime}$, i.e.
$\mathcal{D}(S)\subsetneq D(S)$. If $H(\psi^{\prime}(S))\neq\emptyset$ then by
locally Cauchy we can use $\psi^{\prime-1}$ to embed
$\tilde{D}(\psi^{\prime}(S))$ into $M$ and contradict the hole-freeness of
$M^{\prime}$. If $H(\psi^{\prime}(S))=\emptyset$, then
$M^{\prime}=D(\psi^{\prime}(S))$. But in that case, by locally Cauchy,
$M^{\prime}$ isometrically embeds into $M$ as a proper subset of $M$,
contradicting the inextendibility of $M^{\prime}$.
Now we show the first part: if $\phi(S)$ is acausal, then it is a partial
Cauchy hypersurface in $M^{\prime}$. Supposing that
$\text{edge}(\phi(S))\neq\emptyset$, we will show that
$\text{edge}({\phi(S)})\cap\phi(S)=\emptyset$, and we then show that this
implies $(M^{\prime},g^{\prime})$ is holed.
Let $q^{\prime}$ be a point in $\text{edge}(\phi(S))\cap\phi(S)$. In that case
denote $q=\phi^{-1}(q^{\prime})\in S$, take a future directed timelike curve
$\sigma$ from $q$ to $q_{\epsilon}\in I^{+}(S)\cap D_{\epsilon}(S)$, and
extend $\sigma$ to the past so that it is past inextendible in $M$. Then there is a timelike curve
$\sigma^{\prime}=\phi(\sigma)\subset M^{\prime}$ passing through $q^{\prime}$.
Take $U_{i}(q^{\prime})$ to be a system of increasingly small neighborhoods
$U_{i}(q^{\prime})\supsetneq U_{i+1}(q^{\prime})$, each containing points in
$I^{+}(q^{\prime})$ and $I^{-}(q^{\prime})$, such that
$\{U_{i}(q^{\prime})\}$ has accumulation point $q^{\prime}$. Define a
collection of curves $\{\gamma_{i}^{\prime}\}$ by taking $\sigma^{\prime}$,
removing from $\sigma^{\prime}$ the portion $\sigma^{\prime}\cap
U_{i}(q^{\prime})$, and replacing that portion with timelike segments with
endpoints in $I^{+}(q^{\prime})$ and $I^{-}(q^{\prime})$ which miss $\phi(S)$.
Although this might produce only piecewise smooth timelike curves, the curves
$\{\gamma_{i}^{\prime}\}$ can be approximated by $C^{1}$ causal curves
(still missing $\phi(S)$), which we relabel as $\{\gamma_{i}^{\prime}\}$.
Now consider $\{\phi^{-1}(\gamma_{i}^{\prime})\}$. This defines a collection
of $C^{1}$ causal curves in $D_{\epsilon}(S)$ that approach $\sigma$ but which
do not intersect $S$.
We now recall a well-known fact: in any spacetime $L$, every point $x$ admits
a sufficiently small neighborhood $N(x)\subset L$ such that $J^{+}(y)\cap
J^{-}(z)$ is compact for all $y,z\in N(x)$. Since a sufficiently small $N(x)$
is causal, every point in a spacetime lies in a small globally hyperbolic
neighborhood.
Consider such a globally hyperbolic neighborhood $N(q)\subset M$ centered at
$q$. Without loss of generality, we can take $N(q)$ to have Cauchy surface
$S\cap N(q)$. For some sufficiently large $n\in\mathbb{N}$, the causal curves
$\{\phi^{-1}(\gamma_{i\geq n}^{\prime})\}$ lie in $N(q)$ and are
inextendible therein. But since these do not intersect $S$, we contradict the
global hyperbolicity of $N(q)$.
It now follows that $\text{edge}(\phi(S))$, if not empty, lies outside of
$\phi(S)$. By standard results in causal theory, $H(\phi(S))$ is ruled by null
geodesics intersecting $\text{edge}(\phi(S))$. By the definition of
$D_{\epsilon}(S)$, we can use $\phi$ to pull back
$H(\phi(S))\cap\phi(D_{\epsilon}(S))$ into $D_{\epsilon}(S)\subset M$, and it
is then clear that
$\phi^{-1}\left[H(\phi(S))\cap\phi(D_{\epsilon}(S))\right]\cap
D(S)\neq\emptyset$; thus $(M^{\prime},g^{\prime})$ is holed. ∎
## References
* [1] Beem, J., Ehrlich, P., Easley, K., Global Lorentzian geometry, second ed., Monographs and Textbooks in Pure and Applied Mathematics, vol. 202, Marcel Dekker Inc., New York, 1996.
* [2] Bernal, A. N., Sánchez, M., On smooth Cauchy hypersurfaces and Geroch’s splitting theorem, 2003, Commun.Math.Phys. 243:461-470.
* [3] Choquet-Bruhat, Y., Geroch, R., Global aspects of the Cauchy problem in general relativity, 1969, Commun.Math.Phys. 14:329–335.
* [4] Earman, J., Glymour, C., Stachel, J. (eds.), Foundations of Space-Time Theories. Minnesota Studies in the Philosophy of Science, vol. 8., University of Minnesota Press 1977.
* [5] Galloway, G. J., Solis, D., Uniqueness of de Sitter space, 2007, Class.Quant.Grav. 24:3125-3138.
* [6] Gao, S., Wald, R. M., Theorems on gravitational time delay and related issues, 2000, Class. Quant.Grav. 17:4999-5008.
* [7] Geroch, R., Prediction in General Relativity, in Earman, J., Glymour, C., Stachel, J. (eds.) Foundations of Space-Time Theories, Minnesota Studies in the Philosophy of Science, vol. 8., University of Minnesota Press 1977.
* [8] Malament, D., Observationally Indistinguishable Space-Times, in Foundations of Space-Time Theories, Minnesota Studies in the Philosophy of Science, vol. 8., University of Minnesota Press 1977.
* [9] Manchak, JB., Can We Know the Global Structure of Spacetime ?, 2009, Studies in History and Philosophy of Modern Physics 40:53–56.
* [10] Manchak, JB., Is prediction possible in general relativity ?, 2008, Foundations of Physics 38(4):317-321.
* [11] Minguzzi, E., Limit curve theorems in Lorentzian geometry, 2008, J.Math.Phys. 49:092501.
* [12] Minguzzi, E., Causally simple inextendible spacetimes are hole free, 2012, J.Math.Phys. 53:062501.
* [13] Minguzzi, E., Chronological spacetimes without lightlike lines are stably causal, 2009, Commun.Math.Phys. 288:801-819.
* [14] Minguzzi, E., Sanchez, M., The Causal Hierarchy of Spacetimes, 2008, arXiv:gr-qc/0609119.
* [15] Minguzzi, E., Lorentzian causality theory, 2019, Living Rev Relativ 22, 3.
* [16] O’Neill, B., Semi-Riemannian Geometry, San Diego: Academic Press 1983.
# “This Whole Thing Smacks of Gender”: Algorithmic Exclusion in Bioimpedance-
based Body Composition Analysis
Kendra Albert <EMAIL_ADDRESS> Harvard Law School, Cambridge, MA and
Maggie Delano <EMAIL_ADDRESS> Swarthmore College, Swarthmore, PA
(2021)
###### Abstract.
Smart weight scales offer bioimpedance-based body composition analysis as a
supplement to pure body weight measurement. Companies such as Withings and
Fitbit tout composition analysis as providing self-knowledge and the ability
to make more informed decisions. However, these aspirational statements elide
the reality that these numbers are a product of proprietary regression
equations that require a binary sex/gender as their input. Our paper combines
transgender studies-influenced personal narrative with an analysis of the
scientific basis of bioimpedance technology used as part of the Withings smart
scale. Attempting to include nonbinary people reveals that bioelectrical
impedance analysis has always rested on physiologically shaky ground. White
nonbinary people are merely the tip of the iceberg of those who may find that
their smart scale is not so intelligent when it comes to their bodies. Using
body composition analysis as an example, we explore how the problem of trans
and nonbinary inclusion in personal health tech goes beyond the issues of
adding a third “gender” box or slapping a rainbow flag on the packaging. We
also provide recommendations as to how to approach creating more inclusive
technologies even while still relying on exclusionary data.
data collection and curation, sex/gender, bioelectrical impedance analysis,
body composition, critical data/algorithm studies, science and technology
studies, critical HCI and the design of algorithmic systems
Copyright: rights retained. Journal year: 2021. Conference: Conference on
Fairness, Accountability, and Transparency (FAccT ’21), March 3–10, 2021,
Virtual Event, Canada. DOI: 10.1145/3442188.3445898. ISBN: 978-1-4503-8309-7/21/03.
CCS concepts: Human-centered computing (Ubiquitous and mobile computing);
Social and professional topics (Gender; Race and ethnicity; People with
disabilities; Geographic characteristics; Cultural characteristics; Remote
medicine); Applied computing (Consumer health).
## 1\. Introduction
Kendra: As a nonbinary person who was assigned female at birth, I’ve always
had an uncomfortable relationship with my body weight. To be honest, before I
stepped on the Withings scale, I hadn’t weighed myself in some time. But after
undergoing some medical transition steps, I found myself much more curious
about what my body fat percentage was. Especially since I have put on a fair
amount of muscle, I was interested to see how the numbers matched or didn’t
match my self-perception.
Because of this discomfort with weight numbers, I asked Maggie to make a
profile for me when I was getting started with the scale. She started going
through the registration flow for a new user, and quickly encountered a
roadblock - the system required a gender.
Wait, that’s not right. It didn’t require a gender, it required you to pick
one of two images labeled gender - a person wearing pants or a person wearing
a skirt (see Figure 1). When Maggie asked me which one I preferred, I told her
to pick one and not tell me.
When I did finally step on the scale (with my shiny new profile), I looked at
the numbers and felt…well, some sort of way. To be honest, I’m happier with my
body now than I remember ever being before, but the numbers seemed very off.
Figure 1. The Withings profile view in their Health Mate mobile application,
showing an option for Gender (a person wearing pants or a person wearing a
skirt), an option for height (5 ft 3), and an option for Athlete mode, which
is set.
The next morning, we were talking about the scale at breakfast, and that was
when Maggie first told me that the fat percentage was an estimation based on a
technique called bioelectrical impedance analysis (BIA). There was an equation
behind that number - it didn’t actually measure my body fat percentage
directly. I was shocked, and asked how that related to my gender. We decided
to do some testing.
When Maggie changed my “gender” from skirt-wearing to pants-wearing, my body
fat percentage dropped by 10 percentage points. Ten! The huge difference
seemed to confirm everything I’d thought about the numbers feeling wonky. But
which one was right? Or failing that, which one was closer? Could we look at
how the algorithm calculated the percentage in order to see which one was
likely to be more accurate for me?
Maggie: Kendra’s reaction to the scale was an interesting experience because I
realized that the knowledge I had about how BIA worked was non-obvious to
anyone not familiar with the tech. I had already been thinking about how sex
is often coded in the underlying BIA equations as 1 or 0 because I had given a
guest lecture about it recently, but I hadn’t fully thought through the
implications for trans and nonbinary people.
Seeing the numbers on the scale jump by so much was jarring. We talked a lot
about how to interpret the results and what the exact percentages really mean.
There is a little progress bar on the scale that indicated that the numbers
(regardless of gender) were above the “normal range.” But that normal range
itself appears to be a “normal range” for an athlete because we were both well
above that line. In terms of making the scale work for Kendra, the answer I
was proposing was to pick skirt-wearing or pants-wearing and then track that
number over time.
Kendra’s interest in using the Withings smart scale, and figuring out what
“gender option” was right drove us to reexamine how BIA works as a technology,
and how assumptions about gender and sex are built into the fundamental
equations that drive the algorithmic models used for estimating body fat
percentage.
This paper, the result of that analysis, combines transgender studies-
influenced personal narratives with an analysis of the scientific basis behind
the bioimpedance technology used as part of the Withings smart scale. With
these different forms of knowledge, we explore how the problem of trans and
nonbinary inclusion in personal health tech goes beyond adding a third
“gender” option or slapping a rainbow flag on some packaging. Rather,
nonbinary exclusion is the tip of an iceberg of flawed assumptions and
exclusionary clinical testing, resulting in algorithms that are advertised as
tools for self-knowledge but deliver anything but.
## 2\. Background
This paper draws on previous work related to self-tracking, transgender
studies, human-computer interaction, and the study of sex and gender in
biomedical research. In this section, we provide a brief summary of related
work in these disciplines to situate our findings.
While the Withings weight scale is not the first commercially available scale
to estimate body fat percentage, the device was one of the original “smart”
(i.e. connected) weight scales (Bod, 2020). The device was first sold in the
early 2010s at the beginning of a surge of interest in self-tracking and the
advent of the “quantified self” movement (Lupton, 2016; Neff and Nafus, 2016;
Nafus, 2016). The quantified self movement included a variety of stakeholders
including individual self-trackers, organizations such as Quantified Self,
companies and toolmakers, academic researchers, and physicians (with
considerable overlap between these categories) (Boesel, 2013). Participants
are broadly interested in the capabilities of self-tracking to provide unique,
actionable, and personalized insights (Lupton, 2020; Wolf and De Groot, 2020).
Self-trackers engage deeply with their data through a process sociologist
Deborah Lupton refers to as “data sense-making” (Lupton, 2018b). Many self-
trackers believe that data can “speak for itself” and should be involved in
medical care (Fiore-Gartland and Neff, 2015; Omer, 2016). However, the use of
data collected from commercial “wellness” devices such as the Withings scale
or activity trackers like Fitbits is controversial as these devices don’t
always perform well in third-party validations, and often involve proprietary
algorithms and metrics (e.g. steps, sleep scores).
Previous research investigating self-tracking devices and wearable monitors
has shown that these devices, like devices in other categories, are designed
primarily for an unmarked user (Cifor and Garcia, 2020; Costanza-Chock, 2020).
That is, the user is assumed to be a White, cisgender, non-disabled man. Cifor
and Garcia, for example, use duoethnography to evaluate gendered assumptions
in the design and app of the Jawbone UP3 fitness tracker. They illustrate that
while the device itself appeared “genderless,” the design of the device and
the app reinforced masculinist values such as competition (Cifor and Garcia,
2020). Such issues are also present in the design of algorithms - for example,
Fitbit devices count steps less reliably at slower gait speeds and with softer
steps, which decreases step count accuracy for older people or people with
mobility-related disabilities (Feehan et al., 2018; Javorina, 2020). Early
implementations of the hardware and algorithms used to estimate heart rate on
wearables were less accurate for users with darker skin tones (Shcherbina et
al., 2017; Hailu, 2019), though recent evidence suggests these disparities may
have been addressed by improvements to the device’s algorithms (Bent et al.,
2020).
The development of algorithms without a diverse set of users creates
algorithmic exclusion. Populations are excluded from the use of algorithmic
tools because they were not included as part of the original data used in
development, or because information was not gathered in such a way as to make
their needs visible. This algorithmic exclusion means that the performance of
these algorithms for individuals not in the original dataset is unknown; the
practical implication is that these algorithms likely work less well for those
not included in the original dataset. Algorithmic exclusion can have real
world impacts as individuals rely more and more on these data, especially when
these data are used by physicians. For example, pulse oximeter devices that
measure blood oxygenation (using a more involved technique similar to that
used by wearables manufacturers for measuring heart rate) overestimate blood
oxygenation in individuals with darker skin tones (Bickler et al., 2005;
Feiner et al., 2007). Renewed interest in these disparities during the
COVID-19 pandemic led to a study showing that Black patients had nearly three
times the frequency of occult hypoxemia (oxygen deprivation not detected by
pulse oximetry) as White patients (Moran-Thomas, 2020; Sjoding et al., 2020),
potentially leading to higher mortality rates for Black patients when the
seriousness of their COVID-19 cases was underestimated.
These issues have not escaped notice within communities that build
technological tools. There has been increasing discussion in different design
communities about how to create technology that is more inclusive and/or
addresses some of the disparities discussed above. In human-computer
interaction (HCI) and artificial intelligence (AI) research, for example,
there have been efforts including analytical reviews, software analysis of
datasets, and guidelines about increasing “gender sensitivity” (Burtscher and
Spiel, 2020; Scheuerman et al., 2019; Scheuerman et al., 2020; Hamidi et al.,
2018; Keyes, 2018) and more intersectional approaches to addressing
disparities, such as the intersection of gender and race (Buolamwini and
Gebru, 2018; Scheuerman et al., 2018). There have been multiple guides and
recommendations for including transgender people in user interface design
(Morgan Klaus Scheuerman et al., 2020; Burtscher and Spiel, 2020) and in
surveys (Spiel et al., 2019). These recommendations include allowing all
individuals to self-identify their gender, not assuming binary gender, using
the language users use, and protecting the privacy of participants. In the
case of dealing with medical research and “embodiment,” the guidelines
recommend measuring physiological parameters such as hormone levels directly,
rather than assuming them based on gender.
However, embodiment is a tricky line to draw. When one considers the terms
“sex” and “gender,” the common assumption is that sex is biological and gender
is social. If there is any relationship between the two, it is assumed that
sex influences gender, and transgender and intersex people are seen as
outliers whose needs vary from “normal” populations. However, Springer et al.
argue that it is _sex_ that cannot be purely decoupled from social factors
(i.e. gender) (Springer et al., 2012). A “biosocial turn” is now beginning in
the study of sex and gender (Shattuck-Heidorn and Richardson, 2019). Many
mechanisms that were previously thought to be due to biological “sex”
differences, are in fact mechanisms that involve differences based on
socialization that manifest in biological differences. Springer et al.
recommend using “gender” to refer to social relations and gender roles and the
term “sex/gender” to refer to those biological and biosocial factors
associated with individual physiology. In this paper, we will use the terms
sex/gender and gender, unless we are referring to how these terms are used in
a specific work.
## 3\. Approach
Our work draws heavily from transgender studies as an approach, while having
some similarity to Black feminist methods, specifically Jennifer Nash’s love
politics in the form of witnessing (Nash, 2019). We include conversations
between the two of us throughout the paper. Personal narrative,
especially dialogue, can help uncover “common pain points and overlooked
opportunities” (Cifor and Garcia, 2020). Where duoethnography, used by
previous studies, is a research method that employs personal narrative to
“simultaneously generate, interpret, and articulate data” about a shared
experience (Norris, 2008), we include personal narratives throughout this
paper to combine, in the words of Susan Stryker, “the embodied experience of
the speaking subject” (i.e. our experiences using the weight scale) with “the
specialized domain of scholarship” (i.e. the specifics of the theory and
practice of BIA for at-home body composition monitoring) (Stryker, 2013;
Spade, 2003). Personal narrative provides a starting point for a broader
conversation about smart weight scales and the implications the system and
algorithm design have for technology and biomedical research more broadly.
We are approaching this topic as a White nonbinary person (Kendra), and as a
White cisgender woman (Maggie). Neither of us are disabled in ways that are
likely to affect our readings from or interactions with the Withings scale.
Both of us have considerable background in technology and gender. Kendra is a
lawyer teaching at a technology clinic who also teaches in transgender
studies. They have, at times, engaged in self-tracking, although not
previously around weight. Maggie is an assistant professor at a small liberal
arts school where she teaches digital/embedded systems and inclusive
engineering design. Her research involves using bioimpedance to help patients
manage fluid overload. She is also a self-tracker and has presented her work
at several Quantified Self conferences.
## 4\. Bioelectrical Impedance Analysis
Critical analysis of the sort that we deploy in this paper requires the
knowledge of how the measurement technology inside smart weight scales works.
In this section, we present a background on Bioelectrical Impedance Analysis
(BIA). We should note, however, that because the specific testing and
equations used by the Withings scale are not publicly available, this
background will leverage knowledge from public and peer-reviewed sources and
may or may not reflect the specific approaches that the Withings or other
consumer-facing scales employ.
At the most basic level, the body can be divided into two main “compartments:”
fat mass (FM) and fat free mass (FFM) (Kyle et al., 2004; Lukaski, 2013;
Khalil et al., 2014). FM includes all the fat in the human body, including
what we think of as body fat and also visceral fat around vital organs. FFM is
the rest of the tissue; it is about 73% water, about 7% bone, and the rest is
proteins.
BIA leverages the fact that the water in FFM is conductive; by driving a
small, painless current through the body via a pair of electrodes (in weight
scales these are two of the electrodes on the scale), the resulting voltage
can be measured by another pair of electrodes (also on the scale) and related
to the electrical properties of the underlying tissue.
If one assumes the body is a homogeneous cylinder with cross-sectional area A,
the measured resistance $R$ (defined as the real part of the measured voltage
divided by the current) is equal to:
(1) $R=\frac{\rho L}{A}$
where $\rho$ is the resistivity of the cylinder, $L$ is the length of the
cylinder, and $A$ is the cross-sectional area of the cylinder. Most BIA
equations assume that $L$ is proportional to the body height $H$. Multiplying
both sides of the equation by $L/L$, the resistance can be related to the
inverse of the volume, assuming that $V=L\times A$:
(2) $R=\frac{\rho L}{A}\cdot\frac{L}{L}=\frac{\rho L^{2}}{V}$
Solving for the volume then gives:
(3) $V=\frac{\rho L^{2}}{R}$
This volume $V$ corresponds to what is called the “total body water”, the
volume of all water in the body, which is assumed to be about 73% of the
volume of the FFM. Converting this volume into the FFM via the assumed water
fraction and the presumed density of the FFM, the FM and body fat percentage
(BF%) can then be calculated as:
(4) $FM=Weight-FFM$
(5) $BF\%=\frac{FM}{FM+FFM}\cdot 100$
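To make the chain from Equation (1) through Equation (5) concrete, here is a minimal Python sketch of the idealized single-cylinder model. The resistivity constant, the unit conventions (height in cm, resistance in ohms, weight in kg), and the volume-to-mass conversion are illustrative assumptions added for exposition, not values that Withings or any other manufacturer actually uses.

```python
# Minimal sketch of the idealized single-cylinder BIA model of
# Eqs. (1)-(5). RHO_OHM_CM and the unit conventions are illustrative
# assumptions, not any real scale's constants.

RHO_OHM_CM = 600.0      # hypothetical effective resistivity of body water
WATER_FRACTION = 0.73   # assumed water fraction of fat free mass

def naive_body_fat_percent(height_cm: float, resistance_ohm: float,
                           weight_kg: float) -> float:
    # Eq. (3): total body water volume, V = rho * L^2 / R,
    # taking body height H as a stand-in for cylinder length L.
    tbw_cm3 = RHO_OHM_CM * height_cm ** 2 / resistance_ohm
    tbw_kg = tbw_cm3 / 1000.0       # ~1 g/cm^3 for water
    # Treat TBW as ~73% of the FFM by mass.
    ffm_kg = tbw_kg / WATER_FRACTION
    # Eqs. (4)-(5): fat mass and body fat percentage.
    fm_kg = weight_kg - ffm_kg
    return fm_kg / weight_kg * 100.0

print(naive_body_fat_percent(170.0, 500.0, 70.0))
```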
### 4.1. Assumptions of BIA
The methods described above require a number of assumptions related to the
body. In order for these assumptions to be valid, the resistivity $\rho$ of
the measured volume must be homogeneous, and the cross-sectional area must be
constant throughout the body such that $V=L\times A$. The assumption that the
FFM is 73% water must also hold, along with the assumed density of the FFM.
Finally, it must be assumed that the current penetrates through the whole body
in a uniform manner such that the estimated volume is truly reflective of the
total body water, and not just a fraction of it.
Of course, these assumptions are not realistic; the body is not a single
homogeneous cylinder with precisely known body composition. Instead, BIA
leverages different variables that correlate with “gold standard” estimations
of the FFM to estimate the FFM based on the BIA itself. An example BIA
equation for estimating FFM might look like the following (Kyle et al., 2001a):
(6) $FFM=-4.104+(0.518\times H^{2}/R)+(0.231\times weight)+(0.130\times X)+(4.229\times sex)$, where $sex$ is coded as $men=1$, $women=0$.
This equation involves a number of key terms: the $H^{2}/R$ term, the weight
term, the $X$ term (reactance or imaginary part of measured bioimpedance), and
sex. Each of these terms is associated with a coefficient in front (along with
a constant at the beginning of the equation) that are calculated based on the
best fit of the regression equation that minimizes the error between the
estimation via BIA and the estimation via the gold standard for the population
under test (in this case, the gold standard used was a technology called dual
x-ray absorptiometry or DXA).
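To illustrate how a regression like Equation (6) turns raw measurements into a reported body fat percentage, here is a short Python sketch. The coefficients are those quoted above from (Kyle et al., 2001a); the unit conventions (height in cm, impedance components in ohms, weight in kg) are our assumption based on the wider BIA literature, and the example inputs are made up.

```python
def ffm_kyle_2001(height_cm: float, resistance_ohm: float,
                  weight_kg: float, reactance_ohm: float,
                  sex_code: int) -> float:
    """Fat free mass (kg) via the regression of Eq. (6)
    (Kyle et al., 2001a). sex_code follows the study's coding:
    1 for 'men', 0 for 'women'. Units (cm, ohms, kg) are our
    assumption based on the wider BIA literature."""
    return (-4.104
            + 0.518 * height_cm ** 2 / resistance_ohm
            + 0.231 * weight_kg
            + 0.130 * reactance_ohm
            + 4.229 * sex_code)

def body_fat_percent(weight_kg: float, ffm_kg: float) -> float:
    fm_kg = weight_kg - ffm_kg        # Eq. (4)
    return fm_kg / weight_kg * 100.0  # Eq. (5)

# Made-up inputs for illustration. Flipping only sex_code shifts
# the FFM estimate by a constant 4.229 kg.
for sex_code in (1, 0):
    ffm = ffm_kyle_2001(170.0, 500.0, 70.0, 55.0, sex_code)
    print(sex_code, round(body_fat_percent(70.0, ffm), 1))
```

Note how the sex term acts purely as a constant offset: changing it moves the FFM estimate by 4.229 kg regardless of the actual measurement, which at typical body weights translates into several body fat percentage points.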
Precisely which parameters are included in the regression equations and their
corresponding coefficients depends on the population used to calculate the
equations and researcher preference. Other researchers also include factors
such as the age of the participants and whether or not participants are
athletes (Khalil et al., 2014). In some cases these parameters are all
incorporated into a single equation (such as the one above that has “options”
for participant sex), or multiple equations are generated, such as one for
“lean” participants, one for “average” participants, and one for “obese”
participants (Segal et al., 1988).
Parameters included in FFM estimation equations often do a lot of “work” and
their role is not always clearly understood. These parameters and their
coefficients “stand in” for things such as body density, which can vary
depending on both factors included in the equations and those typically
excluded from the equations such as ethnicity. For example, age is sometimes
included as a parameter because there tends to be a decrease in FFM and an
increase in FM with age, and sex is included because males on average have
lower FM than females. We unpack these assumptions and coefficient parameters
in more depth in Section 6.
## 5\. How BIA Is Marketed
Kendra: Of course, I didn’t know how BIA worked before using the scale. Nor
would looking at the Withings website have revealed any of the fraughtness of
BIA to me - when I look at their ads now, they call the technology “body
composition.” It’s not obvious from their advertising that it’s estimating
body fat percentage based on a set of assumptions and an algorithm, rather
than providing an individual-level ground truth. If you don’t know how the
technology works, it’s actually quite easy to draw the conclusion that the
scale just magically knows your actual body fat percentage.
Even if I review the “technical specifications,” the information contained
requires quite a bit of context to determine that what is produced isn’t an
individualized number. The bullet points say “Bioelectrical Impedance Analysis
/ Athlete and non-athlete mode / Unit: body fat %, total body water %, muscle
mass kg or lb, bone mass kg or lb.” There’s nothing there that tells me, as an
end-user without a lot of expertise in BIA, that it’s engaged in an estimation
based on plugging particular values into an equation.
That brings me to the question, Maggie, what were you thinking when you bought
the Withings scale? How did the body composition stuff play into it?
Maggie: I’ve had the scale for a long time - since 2012. That was also the
time when the number of people talking about self-tracking was growing, and
organizations such as Quantified Self were facilitating some of the first
meetups and conferences in this area. Quantified Self emphasized self-
knowledge through numbers, often using the frame “self-research” or “personal
science” (Wolf and De Groot, 2020). Over the next few years, the idea of self-
tracking would become very hyped, and an entire commercial ecosystem that
Whitney Erin Boesel dubs “quantified self” (i.e. little q, little s, vs big Q,
big S) was formed (Boesel, 2013). Looking back at my data, my first weigh-in was
March 23rd, 2012. I wanted to learn more about the tech that was out there and
see what I could do to make sense of things, and was also inspired by a
Quantified Self “Show & Tell” talk by Amelia Greenhall about how she weighed
herself everyday and sustained her weight loss long term (Greenhall, 2013). I
was excited about the possibility of self-tracking and consistent habits
improving my fitness and my life. I wanted to learn more and then translate
that knowledge to help others.
Kendra: This idea of self-knowledge is really exciting. That’s what I was
hoping for when I stepped on the scale as well - some numbers to help me
quantify what I was feeling about my body. But of course, that’s not what I
got. As a White nonbinary person, what I learned is that this tech isn’t built
for me - in part because of the choices that technology companies make, and in
part because of the failure to meaningfully account for transgender experience
as part of the underlying clinical testing. And it’s worse for non-White
nonbinary or intersex folks, who are both not represented in the studies in
terms of sex/gender or race/ethnicity. So much for smart scales.
## 6\. Critiques of BIA
Many limitations of BIA have been well established in the medical literature
(Khalil et al., 2014), though some researchers argue that the techniques are
still appropriate to use under the correct circumstances (Ward, 2019).
Researchers suggest caution when using BIA, especially when working with
populations that have altered body composition, such as patients with fluid
overload. In these cases, some researchers have developed equations
specifically for a particular patient population, or have used alternative
methods of body composition assessment that don’t rely on regressions (see
e.g. (Keane et al., 2017)).
A major challenge with body composition using BIA is that the two compartment
model of “fat mass” (FM) vs “fat free mass” (FFM) inherently requires
assumptions about the composition of the two different compartments (in
addition to other assumptions such as homogeneous composition as discussed
previously in Section 4.1). Uniformity of the FM is a fairly reasonable
assumption across individuals (Martin et al., 1994), but assumptions about the
composition of the FFM are not (Côté and Adams, 1993). Additionally, when
using the regression equations, the assumptions about the composition of the
FFM are obscured within the coefficients associated with the included variables
such as age, weight, and sex.
Because the linear regressions are optimized at a population level and most
studies do not examine the accuracy of the estimations at an individual level,
there is no guarantee that a specific equation is accurate for any given
individual. Additionally, once one considers any individual who is not well
matched to the population that was used to create the equations, the role of
these variables becomes increasingly murky, yet also increasingly important
for designing equations that work for populations historically excluded from
those used to generate the equations.
This includes non-White people, trans and nonbinary people, intersex people,
people with chronic illnesses, and those at the intersections of these
categories.
### 6.1. Unpacking “Sex”
We begin with the assumptions and role of “sex” in the equations. (We use the
term “sex” here given that this is the term used by the researchers; however,
sex and gender are entangled as described in Section 2.) Sex in BIA is either
coded as 0 for female and 1 for male (effectively changing the offset of the
equations, as in Equation 6) or there are separate equations created for male
participants and female participants, as in Sun et al. (2003).
What the literature means by “male” or “female” is unclear, and these terms
are often confounded with gender identities of “man” and “woman” as in
Equation 6. As we discussed in Section 2, “sex” as a concept is just as
fraught and contingent as gender (Fausto-Sterling, 2000; Springer et al.,
2012; Davis and Preves, 2017). This is not a problem unique to (or caused by)
the existence of trans (or intersex) people.
Although the methods by which “sex” was evaluated in the BIA literature is
unclear, it is common for reported sex to be a participant’s sex assigned at
birth. And “sex assigned at birth” is generally only a proxy for someone’s
external genitalia at birth, which is only one of the many characteristics
that are often encompassed under the word sex (Davis and Preves, 2017; Fausto-
Sterling, 2000). Others include hormone balance, chromosomal makeup, and
internal reproductive organs. We could not find an example of a study that
produces BIA estimates and discusses which sex characteristics are rounded up
into a determination of sex, and it is generally not clear how the
identification of sex was made (i.e. whether self-identified or identified by
the researchers). This lack of specificity is one of the first and most
significant barriers to creating a more inclusive algorithm for transgender
people. Given how large the sample sizes were for some of the populations used
to create BIA equations (upwards of 5,000 in (Kyle et al., 2001b)), it is
statistically unlikely that no transgender people were involved. But we don’t
know how they were accounted for or counted in the calculations.
The use of “sex” itself is also a “stand in” for other parameters. What the
researchers presumably intend is for “sex” to stand in for the roughly
dimorphic variation in body composition between those assigned male at birth
and those assigned female at birth (Davis and Preves, 2017). However, it is
not the fact that a person is assigned male at birth (i.e., they have genital
erectile tissue of a certain size) that makes those assigned male at birth
have lower FM on average compared with those assigned female at birth. In
fact, FM and the distribution of fat may actually be biosocial phenomena: they
may depend on both sex AND gender (sex/gender). For example, one effect of
larger quantities of testosterone is changes in fat distribution and increases
in muscle production, resulting in lower body fat percentages (Spanos et al.,
2020). However, most research into sex differences in body composition does not
explore other explanations, such as the differences in diet between men and
women in some cultural contexts (Prättälä et al., 2007; Holm and Møhl, 2000).
The BIA literature also does not disaggregate sociocultural context that might
be caught up in the word “sex”, nor account for the myriad ways in which
someone assigned male at birth might not match the presumed physiological
composition associated with those assigned male at birth on average.
With only two sex or gender options, some intersex and most nonbinary users
experience what D.E. Wittkower has termed a dysaffordance – they have to
pretend to be something they are not in order to access the service
(Wittkower, 2016; Costanza-Chock, 2020). The lack of transparency in the BIA
equations makes it difficult to tell exactly what adjustments might need to be
made to make equations more inclusive, or to at the least advise individuals
as to which “sex” they should choose when using consumer technologies that
incorporate BIA. On the other hand, the current setup of the Withings scale
that asks for user gender may actually produce more accurate results for trans
women and trans men who have undergone gender-affirming hormone therapy, as
their body composition may resemble that of a cis woman or cis man, rather
than their sex assigned at birth (Klaver and T’Sjoen, 2018; Spanos et al.,
2020).
### 6.2. Other Issues With Published Studies
White nonbinary or intersex people, despite experiencing a dysaffordance, may
be better off than their siblings of color because the composition of the
study populations used in the BIA equations in the Withings scale is unclear,
and there is no way to enter racial or ethnic information. (We use both the
terms race and ethnicity because the literature is mixed on which are relevant
factors, likely because of the entanglement of biological and social factors
similar to sex/gender, as discussed in Sections 2 and 6.) The majority of
measurement studies involving BIA include primarily Caucasian subjects and
assume that Caucasian subjects are to serve as a “reference” for other
ethnicities. (The use of White people as a default mirrors Kimberlé
Crenshaw’s critiques of US law, as well as Ruha Benjamin’s critique of
photographic film (Benjamin, 2019; Crenshaw, 1989).) This is an issue because
body composition can vary among ethnic and racial groups due to the
environment, diet, cultural factors, and anthropometric measurements such as
limb length and body size (Deurenberg et al., 1991). Researchers or device
manufacturers interested in using BIA equations validated in a Caucasian
population therefore need to cross-validate the equations in the population of
study. As with sex, race or ethnicity is likely standing in as a proxy for
some other variables, such as environmental racism, socioeconomic status, or
some actual relationship between FFM composition and certain genetic factors.
Some studies suggest that ethnicity specific compensations are needed to use
body composition equations in different ethnic groups (Cohn et al., 1977;
Schutte et al., 1984), whereas other studies have shown that stratification
based on adiposity (i.e. body fat percentage) is more important than race
(Ainsworth et al., 1997). Regardless, cross-validation studies are necessary
to ensure that the assumptions of equations produce accurate results for all
populations.
Another major issue with BIA is that the regression equations themselves are
validated against “gold standards” that in turn have their own assumptions.
For example, BIA equations are often compared against air displacement
plethysmography or hydrostatic weighing. But both of these techniques require
assumptions about the density of the FFM. The density of the FFM depends on
numerous factors such as age, sex, and ethnicity (Ellis, 2000). Therefore any
“gold standards” that likewise depend on the two-compartment model (i.e. FM
and FFM) are themselves only valid in the same population, and most of
those are also primarily tested on White (presumably cis and binary gender)
subjects.
Even techniques that do not depend on the two-compartment model such as dual
x-ray absorptiometry (DXA) are found to significantly underestimate fat mass
when compared with CT scans (Kullberg et al., 2009). Ultimately, the only way
to validate a body composition device is using cadaver studies and chemical
analysis, which have some of the same issues - the results would then only be
validated for those similar to the cadavers (Shepherd et al., 2017). Although
measurement problems of this type are common in all areas of science, the lack
of basic physiological science about how and why FFM varies with respect to
both social and biological factors in an intersectional way means that it is
difficult to determine which assumptions will hold across populations. Because
of this, the lack of population-specific testing in each individual “gold
standard” testing regime complicates the possibility of meaningful validation
for folks who do not fit the default.
### 6.3. Why (anti-)Black Boxes? The Lack of Regulatory Oversight
The gap between promises and reality in the smart medical device space is in
part due to the limited scope of review by the United States’ Food and Drug
Administration (FDA). The vast majority of medical devices, including “smart”
and “connected” devices, are approved through a program at the FDA known as
the 510(k) approval process. This process requires no clinical trials and very
little oversight - device manufacturers merely need to prove that their
product is “substantially equivalent” to that of an already approved medical
device. The Withings scale discussed in this paper received 510(k) clearance
(Wit, 2012); similar scales on the market such as the Fitbit Aria scale
received approval even after the devices were already on the market (Fit,
2014). In 2014, the FDA announced their intentions to cease requiring even
510(k) approval for devices such as smart weight scales. This means that there
is very little regulation of these devices, and certainly no required clinical
validation of the algorithms used to calculate body composition, despite the
label of “clinically tested” on the Withings website.
This lack of regulatory oversight results in most consumer-focused deployments
of technologies like BIA being “black box” algorithms. Popularized in the
context of technology studies by scholar Bruno Latour, an algorithm is
considered a black box “when a statement is simply presented as raw fact
without any reference to its genesis or even its author” (Harman, 2010). When
using the Withings scale, only the final body fat percentage is made
available, and there is no explicit reference to an algorithm in the app or in
most of the marketing materials. Additionally, Withings does not release the
equations its scales use to calculate FFM or the populations that it used to
calculate those equations. The power of the black box is that we cannot
thoroughly investigate a subject about which nothing is known (Pasquale,
2015). We are left with the assumption that, unless proven otherwise,
Withings’ internal algorithm production mirrors the biases of the research at
large, but again, it is impossible to tell. All that we know are the inputs
that Withings asks users for (binary gender, height, age and athlete status).
We can draw some conclusions even with just publicly available information.
The binary approach to the gender question without any explanatory text both
erases nonbinary people, and does not ask the right question about the
embodiment of the user. The Withings scale does not prompt the user to enter
information related to ethnicity, so it is not possible that the scale is
using an equation that adjusts variables to compensate for different FFM
factors in racial or ethnic groups. Because of the marketing of the
technology, non-White users may not even know that it might be relevant. The
Withings scale’s algorithm is, in the words of Ruha Benjamin, an anti-Black
box (Benjamin, 2019).
Furthermore, what evidence we do have points to BIA being unreliable even on
the populations that it is theoretically well positioned for. We reviewed the
list of studies that Withings shared on their page for researchers and found
only one study that specifically evaluated the body composition aspects of the
scales (Collier et al., 2020). The results of the study compared the
performance of the newer Body Cardio scales with an air displacement
plethysmography device called the Bod Pod (a “gold standard” discussed in
Section 6.2). The mean absolute percentage error of the body fat estimation of
the Body Cardio weight scale compared with the Bod Pod was greater than 25%,
well above a previously established 1.5% threshold deemed as an acceptable
error (Collier et al., 2020). (It is important to note, however, that while
air displacement plethysmography is often used as a comparison device/gold
standard, it relies on assumptions that can compromise its effectiveness, as
we discuss in Section 4.1.) Withings also argues that these scales should be used
to indicate trends rather than for absolute assessment (Worley and Messer,
2016). This suggests that any results from the Withings scales should be
interpreted with extreme caution, even on the target population who is most
well represented in the studies likely used to create equations.
## 7\. Denying Self-Knowledge
Kendra: The distance between Withings’ promise of self-knowledge and the
reality of regression equations is upsetting. They advertise all of these
benefits to self-quantification, but it’s actually limited by the technology
(Felber, 2012). As with many algorithmic technologies, the creation of
regression equations based on a limited sample cannot and will not create
accurate self-knowledge amongst those who do not fit within those samples.
If the scale were actually calculating an individual ground-truth number, as
opposed to using a regression based on height and skirt-wearing vs. pants-
wearing, being nonbinary wouldn’t matter! To be honest, 90% of the time when
I’m asked my gender or sex, it doesn’t matter. It’s an arbitrary box checking
exercise. So why would the Withings scale be different?
There’s this sleight of hand involved in not revealing to people how the
technology works that creates the situation in which a nonbinary person could
go in expecting self-knowledge and get a prediction totally disconnected
from what they themselves could tell you about their body. I could have told
the scale more about me to make the answer more accurate, but that wasn’t an
option. I don’t necessarily mind sharing information about my hormone status,
my sex assigned at birth, or other facts about my body if they’re actually
useful. And although my views on volunteering race are shaped by my membership
in a privileged racial group, I suspect many users would prefer to share race
or ethnicity information if it meant that they would get more accurate
results.
Withings asks for my gender, but it doesn’t want it, even aside from the app’s
confusion between gender and sex. I know things about myself that are relevant
to its guessing, but there’s no way to translate this knowledge within the
limited frame of reference produced by the clinical trials. There’s no way to
come out with a more accurate picture or contest the bounded understanding of
the system. That feels erasing, even more than the mere existence of the
binary prompt.
It makes me wonder about all of the other random gender/sex requests that I’ve
encountered when using technologies around health. Does Fitbit want my
gender/sex for calorie burning calculation? Does Apple? What about the Ring
Fit? How deep does the rabbit hole go?
## 8\. Paths Forward and Recommendations
Trans and nonbinary people of all races deserve to have access to inclusive
self-tracking technologies that do not collapse their identities or force them
to ignore relevant self-knowledge. What can be done to improve these
technologies? We evaluate three options for how to handle sex/gender under the
hood of a BIA-calculating device such as the Withings scale, and then provide
overall recommendations as to how to handle the use of regression equations
based on limited medical testing.
It would be inappropriate to continue without noting that BIA as a technology
deployed in smart scales may be fundamentally inseparable from the fatphobic
medical and cultural context that has created concerns about body fat in the
first place (Lupton, 2018a). Withings may claim that “there is more to you
than just weight” (see Figure 2), but the subtext of its advertising indicates
that you should want to weigh less. That is not fixable through
recommendations around the handling of sex (or gender, or hormone status). It
might be reasonably asked - given all its flaws, should BIA be used in
consumer devices at all? We don’t seek to answer that question in this paper.
Our aims are more modest. Nonbinary and transgender folks deserve access
to technologies of self-knowledge, even as those technologies may be used both
by companies and individuals to suggest harmful body standards.
We provide a series of options for making weight scales based on BIA more
inclusive, with recommendations for users, and both manufacturers and
researchers. Most of our recommendations specifically focus on sex/gender, but
it is worth noting that overall, BIA also has a long way to go when it comes
to race/ethnicity, which we leave for future work to explore in more depth.
Although our recommendations are designed based on the specific context of BIA
smart scales, they could potentially apply to many areas where binary
sex/gender is used as part of an algorithmic system.
Figure 2. A portion of the front page of the Withings website advertising a
square version of their smart scale; the text reads “There is more to you
than just weight.”
### 8.1. Option 1: Eliminate Sex/Gender as A Variable
One option for making systems more inclusive of nonbinary people would be the
elimination of sex/gender as a variable, using one equation for all users.
Practically speaking, this is not difficult. Manufacturers would not have to
acquire new data, only re-run the regression to find the best fit without
sex/gender as a variable in the regression equation(s).
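As a rough sketch of what “re-run the regression” means in practice, the following Python snippet fits ordinary least squares once with and once without a sex column. The data here are synthetic, randomly generated stand-ins purely for illustration (a manufacturer would rerun the fit on its original study measurements); the point is that only the design matrix changes, not the data collection.

```python
import numpy as np

# Synthetic stand-in data purely for illustration; a manufacturer
# would rerun this on its original study measurements instead.
rng = np.random.default_rng(0)
n = 200
h2_over_r = rng.uniform(20, 60, n)   # height^2 / resistance term
weight = rng.uniform(45, 110, n)     # kg
reactance = rng.uniform(30, 80, n)   # ohms
sex = rng.integers(0, 2, n)          # 0/1 coding as in Eq. (6)
# Pretend gold-standard FFM, generated from Eq. (6) plus noise.
ffm_gold = (-4.104 + 0.518 * h2_over_r + 0.231 * weight
            + 0.130 * reactance + 4.229 * sex
            + rng.normal(0, 2, n))

# Refit once with sex and once without: only the design matrix
# changes, no new measurements are needed.
X_with = np.column_stack([np.ones(n), h2_over_r, weight, reactance, sex])
X_without = X_with[:, :-1]
for name, X in (("with sex", X_with), ("without sex", X_without)):
    coef, res, *_ = np.linalg.lstsq(X, ffm_gold, rcond=None)
    print(name, "RMS error:", float(np.sqrt(res[0] / n)))
```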
However, a drawback of this option is that eliminating sex/gender as a
variable for all users would result in readings that are less accurate in
aggregate, as the inclusion of sex does reduce the mean error of the
regression equations (Khalil et al., 2014). Given the lack of accuracy of the
technology as a whole, this may or may not be hugely significant - however,
users who find a sex/gender binary appropriate for their bodies might be upset
to lose accuracy in order to make the technology more inclusive.
### 8.2. Option 2: Add a Third Option
The next possibility is to add a third option to the menu where one currently
chooses sex/gender. One way to implement a nonbinary option would be to
eliminate sex/gender as a variable, as described above, but only for people
who select the third option. There would then be three options: “male,”
“female,” and “sex/gender neutral.”
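A minimal sketch of what such a three-option menu could look like under the hood, reusing the coefficients of Equation (6) for the binary options; the “neutral” intercept below is a hypothetical placeholder, not a value from any published study.

```python
def estimate_ffm(height_cm: float, resistance_ohm: float,
                 weight_kg: float, reactance_ohm: float,
                 option: str) -> float:
    """Hypothetical three-option dispatch. 'male'/'female' use the
    sex-coded regression of Eq. (6); 'neutral' uses a refit with
    the sex term dropped. The neutral intercept is a placeholder,
    not a value from any published study."""
    base = (0.518 * height_cm ** 2 / resistance_ohm
            + 0.231 * weight_kg + 0.130 * reactance_ohm)
    if option == "male":
        return -4.104 + base + 4.229
    if option == "female":
        return -4.104 + base
    if option == "neutral":
        return -2.0 + base           # placeholder refit intercept
    raise ValueError(f"unknown option: {option!r}")
```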
This option could be helpful for some intersex people, nonbinary people who
have not medically transitioned but who would prefer potentially less accurate
results to having to volunteer a binary marker, nonbinary people who have
taken some medical transition steps, and anyone else for whom the binary sex
options are unlikely to produce accurate results. A cautionary note: having a
third option in the menu does not a nonbinary-inclusive app make. As Anna
Lauren Hoffman explains in her generative work on data violence and inclusion,
without taking meaningful steps to change the power dynamics present in the
product, inclusion is at best, lip service (Hoffmann, 2020). For example, when
Facebook allowed for self-identification with a wider variety of gender
identities on their platform, they did not fundamentally change the binary
advertising logic of male or female, making their claims of nonbinary
inclusion questionable (Bivens, 2017). Thus, adding a third option is only
appropriate if there is an underlying algorithmic change.
### 8.3. Option 3: Stop Using Sex/Gender as a Proxy
Ultimately, the ideal outcome of this work would be for the field to take a
step back and consider the role that sex/gender are playing as a “stand in”
for things like body fat distribution and anthropometric information. This is
exactly the kind of work that HCI researchers have recommended when
considering trans embodiment (Morgan Klaus Scheuerman et al., 2020; Burtscher
and Spiel, 2020). But it is more difficult in this case than many others
because of how pervasive assumptions about sex and gender are in clinical
research (Tannenbaum et al., 2019; Springer et al., 2012; Moseson et al.,
2020). The full, complicated role that sex and gender play in BIA equations
and beyond is not well understood. Significant fundamental research is
necessary to begin to understand which additional factors to measure and how
to measure them in cost-effective and reliable ways.
A deeper understanding of sex/gender and body composition will require “slow
science” (Stengers, 2018). With more information about the role that these
factors are playing, additional information could be provided by end users -
everything from body shape (e.g., “apple”, “pear”) to hormone status. This
information could even be made optional so as not to place an additional
burden on those unfamiliar with the specifics or who only want the basics.
Fundamentally, this approach is the most well aligned with the promises that
companies such as Withings make about their technologies, but would also
require the most fundamental research.
Table 1. Recommended best practices for trans, nonbinary and intersex inclusion in regression-based technologies such as BIA. Table design inspired by (Moseson et al., 2020).
Context | Marginalizing Practices | Inclusive Practices
---|---|---
For manufacturers and researchers | Assume sex is purely biological and gender is purely social. | Review literature on sex/gender and select the appropriate measures for the specific project. If none are available, consider conducting more basic research, and consider and articulate the limitations of the state of the art.
| | Assume a biosocial explanation for physiological differences unless evidence clearly suggests otherwise.
| Ignore the existence of trans, non-binary, and intersex people. | Acknowledge the existence of trans, non-binary, and intersex people and how their physiology or experiences might be different from other users.
| | Acknowledge, if needed, the limitations of the current results and how trans and nonbinary people can still obtain the best results for them.
| Analyze results only at the population level. | Analyze results for sub-groups and at the individual level.
| | Evaluate results across racial and ethnic groups, implementing race/ethnicity selection or inclusion of non-proxy variable as appropriate.
For manufacturers | Elide the measurement precision and assumptions. | Explicitly state the precision of the measurement system, along with assumptions and constants used.
| Represent sex and/or gender with pictograms. | Use clear terminology based on the underlying research or known physiology to select a term.
| | Be explicit why sex and/or gender are being used so that trans, non-binary and/or intersex users can choose the best option for them.
For researchers | Assume “gold standard” has no built-in assumptions. | Discuss the assumptions embedded in the gold standard methods themselves, and how those assumptions influence the results.
| Assume the collection of sex and/or gender information is obvious or straightforward, and therefore does not need to be discussed in depth. | Include whether reported sex and/or gender was based on self-report, and if so, what options were available for participants to choose between. Were there only two options? Was there a free response option?
| | Explicitly state how sex and gender are being used and what they stand in for. Example: “sex is a stand-in for the dimorphic distribution of body fat in the human population.”
### 8.4. Recommendations
First, some advice to transgender, nonbinary, and intersex people who wish to
use technology that incorporates BIA but presents binary sex options. Based on
studies of changes in body composition, hormone status is a very significant
variable for body composition, perhaps more significant than the other
variables encompassed by the word sex (Spanos et al., 2020; Elbers et al.,
1997). So we would recommend that if
folks must pick a binary category, they pick the one most closely aligned with
their hormonal balance - male if in a normal male range for testosterone,
female if not. In any case, because of the study populations used to produce
equations and the black box nature of these algorithms, the actual value
produced is unlikely to be accurate and should be used primarily to track
change over time rather than for its absolute value. (This recommendation also
holds true for any person of any gender using the scale.)
In Table 1, we lay out additional recommendations for researchers and
manufacturers who wish to build more inclusive regression-based technologies.
Elaboration on some of these recommendations and additional recommendations
for different contexts can be found in the following references: (Tannenbaum
et al., 2019; Moseson et al., 2020; Morgan Klaus Scheuerman et al., 2020;
Springer et al., 2012; Spiel et al., 2019; Burtscher and Spiel, 2020; Gebru,
2019).
In general our recommendations can be summarized as a) acknowledge the
existence of non-gender normative people, b) make fewer assumptions, and c)
explain in more detail the limitations of technology. Of course, it may be
difficult for companies to be fully honest about measurement accuracy and
precision. But if being explicit and honest with customers about these errors
and assumptions would make them think twice about purchasing a product,
perhaps the best next step is for companies to reconsider their business
model.
Of course, including trans people after the fact is not ideal. Participatory
methods that incorporate transgender people in problem definition around
medical devices - to design, in the words of Haimson et al., trans
technologies - would be preferable to all of these stopgap measures (Haimson
et al., 2020; Costanza-Chock, 2020; Shamas and Afsaneh Rigot, 2019; Gebru,
2019). However, as practitioners who work with participatory methods, we
understand that such practices are unlikely to arise overnight. Until it is
widely accepted that designing for those on the margins can create better
medical devices, participatory design may never be fully adopted by those who
commercialize them.
## 9\. Conclusion
It can be easy to assume that the use of “sex” in quasi-medical applications
is neutral, just another fact about one’s body that allows for a more accurate,
complete picture. But, in the immortal words of dril, “this whole thing smacks
of gender” (dril, 2012). When the lived realities of nonbinary folks cause us
to scratch below the surface, the lack of careful thought around assumptions
that go into technologies like smart scales becomes clear. Cultural beliefs
about gender are driving the bus when it comes to engagement with “sex
differences.” And because of that, sex, even in clinically tested BIA
equations, is holding space for too many variables, supported by too little
basic research.
When inadequately validated, (anti)-Black box algorithms built on these shaky
foundations deceive their users. They harm people who do not line up with the
assumed user base, promising knowledge of self but instead merely reproducing
the violence of erasure found in clinical systems (Hoffmann, 2020).
It doesn’t have to be this way. Even without additional clinical testing or
regulation, there are clear steps that manufacturers can take to mitigate some
of the harms caused by these systems. First, they can educate users as to the
population-level accuracy of metrics like BIA, rather than advertising body
composition analysis as if it were accurate on an individual basis. Second, as
discussed above, they can make clear how the technology does or does not work
on transgender or nonbinary people, while also identifying other factors (such
as race and/or ethnicity) that make results more or less accurate. Finally,
manufacturers could release as much information about their equations as
possible, including validation studies, in order to facilitate cross-
validation by independent researchers.
Admittedly, these solutions may reify technical expertise and serve to
legitimize the ideas of these types of body measurement, as Greene, Stark, and
Hoffmann point out in their work on technological ethics (Greene et al.,
2019). Ultimately, BIA is just one example of how regression-based body
measurements, whether implemented in technologies like smart scales or
described in scientific papers, harm those who are not presumed to be the
ideal. And that whole thing is worth overturning.
###### Acknowledgements.
Thank you to Siobhan Kelly for their perceptive comments on lip service, and
to the many essential workers, particularly those at Philly Foodworks and
South Square Market, whose labor allowed us to write (and eat).
## References
* (1)
* Wit (2012) 2012\. Withings 510(k) Summary of Safety and Effectiveness. https://www.accessdata.fda.gov/cdrh_docs/pdf12/K121971.pdf.
* Fit (2014) 2014\. Fitbit Aria 510(k) Summary of Safety and Effectiveness. https://www.accessdata.fda.gov/cdrh_docs/pdf13/K133872.pdf.
* Bod (2020) 2020\. Body Composition Wi-Fi Scale - Body+ — Withings. https://www.withings.com/us/en/body-plus.
* Ainsworth et al. (1997) Barbara E. Ainsworth, Lisa M. Stolarczyk, Vivian H. Heyward, Carolynn B. Berry, Melinda L. Irwin, and Laura M. Mussulman. 1997. Predictive Accuracy of Bioimpedance in Estimating Fat-Free Mass of African-American Women. _Medicine & Science in Sports & Exercise_ 29, 6 (June 1997), 781–787.
* Benjamin (2019) Ruha Benjamin. 2019\. _Race after Technology: Abolitionist Tools for the New Jim Code_. Polity Press, Newark.
* Bent et al. (2020) Brinnae Bent, Benjamin A. Goldstein, Warren A. Kibbe, and Jessilyn P. Dunn. 2020. Investigating Sources of Inaccuracy in Wearable Optical Heart Rate Sensors. _npj Digital Medicine_ 3, 1 (Feb. 2020), 1–9. https://doi.org/10.1038/s41746-020-0226-6
* Bickler et al. (2005) Philip E. Bickler, John R. Feiner, and John W. Severinghaus. 2005\. Effects of Skin Pigmentation on Pulse Oximeter Accuracy at Low Saturation. _Anesthesiology_ 102, 4 (April 2005), 715–719. https://doi.org/10.1097/00000542-200504000-00004
* Bivens (2017) Rena Bivens. 2017\. The Gender Binary Will Not Be Deprogrammed: Ten Years of Coding Gender on Facebook. _New Media & Society_ 19, 6 (2017), 880–898. https://doi.org/10.1177/1461444815621527
* Boesel (2013) Whitney Erin Boesel. 2013\. What Is the Quantified Self Now? https://thesocietypages.org/cyborgology/2013/05/22/what-is-the-quantified-self-now/.
* Buolamwini and Gebru (2018) Joy Buolamwini and Timnit Gebru. 2018. Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In _Proceedings of Machine Learning Research_ , Vol. 81. 15.
* Burtscher and Spiel (2020) Sabrina Burtscher and Katta Spiel. 2020. ”But Where Would I Even Start?”: Developing (Gender) Sensitivity in HCI Research and Practice. In _Proceedings of the Conference on Mensch Und Computer_. ACM, Magdeburg Germany, 431–441. https://doi.org/10.1145/3404983.3405510
* Cifor and Garcia (2020) Marika Cifor and Patricia Garcia. 2020. Gendered by Design: A Duoethnographic Study of Personal Fitness Tracking Systems. _ACM Transactions on Social Computing_ 2, 4 (Jan. 2020), 1–22. https://doi.org/10.1145/3364685
* Cohn et al. (1977) S. H. Cohn, C. Abesamis, I. Zanzi, J. F. Aloia, S. Yasumura, and K. J. Ellis. 1977\. Body Elemental Composition: Comparison between Black and White Adults. _The American Journal of Physiology_ 232, 4 (April 1977), E419–422. https://doi.org/10.1152/ajpendo.1977.232.4.E419
* Collier et al. (2020) Scott R. Collier, Conner McCraw, Megan Campany, Austin Lubkemann, Price StClair, Hong Ji, Kathryn Sandberg, Joseph W. Morgan, and Caroline J. Smith. 2020. Withings Body Cardio Versus Gold Standards of Pulse-Wave Velocity and Body Composition. _Journal of Personalized Medicine_ 10, 1 (March 2020), 17. https://doi.org/10.3390/jpm10010017
* Costanza-Chock (2020) Sasha Costanza-Chock. 2020\. _Design Justice: Community-Led Practices to Build the Worlds We Need_. MIT Press, Cambridge.
* Côté and Adams (1993) K. D. Côté and W. C. Adams. 1993. Effect of Bone Density on Body Composition Estimates in Young Adult Black and White Women. _Medicine and Science in Sports and Exercise_ 25, 2 (Feb. 1993), 290–296.
* Crenshaw (1989) Kimberle Crenshaw. 1989\. Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. _University of Chicago Legal Forum_ 1989, 1 (1989), 31\.
* Davis and Preves (2017) Georgiann Davis and Sharon Preves. 2017. Intersex and the Social Construction of Sex. _Contexts_ 16, 1 (Feb. 2017), 80–80. https://doi.org/10.1177/1536504217696082
* Deurenberg et al. (1991) Paul Deurenberg, Jan A. Weststrate, and Jaap C. Seidell. 1991\. Body Mass Index as a Measure of Body Fatness: Age- and Sex-Specific Prediction Formulas. _British Journal of Nutrition_ 65, 2 (March 1991), 105–114. https://doi.org/10.1079/BJN19910073
* dril (2012) dril. 2012. ”’This Whole Thing Smacks Of Gender,’ i Holler as i Overturn My Uncle’s Barbeque Grill and Turn the 4th of July into the 4th of Shit”. https://twitter.com/dril/status/213849618415484929.
* Elbers et al. (1997) Jolanda M H Elbers, Henk Asscheman, and Jacob C Seidell. 1997\. Reversal of the Sex Difference in Serum Leptin Levels upon Cross-Sex Hormone Administration in Transsexuals. _Journal of Clinical Endocrinology and Metabolism_ 82, 10 (1997), 4.
* Ellis (2000) Kenneth J. Ellis. 2000\. Human Body Composition: In Vivo Methods. _Physiological Reviews_ 80, 2 (Jan. 2000), 649–680. https://doi.org/10.1152/physrev.2000.80.2.649
* Fausto-Sterling (2000) Anne Fausto-Sterling. 2000\. _Sexing the Body: Gender Politics and the Construction of Sexuality_ (1st ed.). Basic Books, New York, NY.
* Feehan et al. (2018) Lynne M. Feehan, Jasmina Geldman, Eric C. Sayre, Chance Park, Allison M. Ezzat, Ju Young Yoo, Clayon B. Hamilton, and Linda C. Li. 2018\. Accuracy of Fitbit Devices: Systematic Review and Narrative Syntheses of Quantitative Data. _JMIR mHealth and uHealth_ 6, 8 (Aug. 2018), e10527. https://doi.org/10.2196/10527
* Feiner et al. (2007) John R. Feiner, John W. Severinghaus, and Philip E. Bickler. 2007\. Dark Skin Decreases the Accuracy of Pulse Oximeters at Low Oxygen Saturation: The Effects of Oximeter Probe Type and Gender. _Anesthesia & Analgesia_ 105, 6 (Dec. 2007), S18. https://doi.org/10.1213/01.ane.0000285988.35174.d9
* Felber (2012) Susie Felber. 2012\. 5 Reasons to Quantify Yourself.
* Fiore-Gartland and Neff (2015) Brittany Fiore-Gartland and Gina Neff. 2015. Communication, Mediation, and the Expectations of Data: Data Valences Across Health and Wellness Communities. _International Journal of Communication_ 9 (2015), 1466.
* Gebru (2019) Timnit Gebru. 2019\. Oxford Handbook on AI Ethics Book Chapter on Race and Gender. _arXiv:1908.06165 [cs]_ (Aug. 2019). arXiv:1908.06165 [cs]
* Greene et al. (2019) Daniel Greene, Anna Lauren Hoffmann, and Luke Stark. 2019\. Better, Nicer, Clearer, Fairer: A Critical Assessment of the Movement for Ethical Artificial Intelligence and Machine Learning. _Proceedings of the 52nd Hawaii International Conference on System Sciences_ (2019), 10.
* Greenhall (2013) Amelia Greenhall. 2013\. ”Weight, Weight - Don’t Tell Me”. https://www.youtube.com/watch?v=Qni8RcbwV1c.
* Hailu (2019) Ruth Hailu. 2019\. Fitbits, Other Wearables May Not Accurately Track Heart Rates in People of Color. https://www.statnews.com/2019/07/24/fitbit-accuracy-dark-skin/.
* Haimson et al. (2020) Oliver L. Haimson, Dykee Gorrell, Denny L. Starks, and Zu Weinger. 2020. Designing Trans Technology: Defining Challenges and Envisioning Community-Centered Solutions. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. ACM, Honolulu HI USA, 1–13. https://doi.org/10.1145/3313831.3376669
* Hamidi et al. (2018) Foad Hamidi, Morgan Klaus Scheuerman, and Stacy M. Branham. 2018\. Gender Recognition or Gender Reductionism?: The Social Implications of Embedded Gender Recognition Systems. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_. ACM, Montreal QC Canada, 1–13. https://doi.org/10.1145/3173574.3173582
* Harman (2010) Graham Harman. 2010\. _Prince of Networks: Bruno Latour and Metaphysics_. re. press.
* Hoffmann (2020) Anna Lauren Hoffmann. 2020\. Terms of Inclusion: Data, Discourse, Violence. _New Media & Society_ (Sept. 2020), 146144482095872. https://doi.org/10.1177/1461444820958725
* Holm and Møhl (2000) Lotte Holm and M. Møhl. 2000. The Role of Meat in Everyday Food Culture: An Analysis of an Interview Study in Copenhagen. _Appetite_ 34, 3 (2000), 277–283.
* Javorina (2020) Dragana Javorina. 2020\. _Investigating the Validity of the Fitbit ChargeHRTM for Measuring Physical Activity in Children and Youth with Mobility-Related Disabilities_. Thesis.
* Keane et al. (2017) D. F. Keane, P. Baxter, E. Lindley, U. Moissl, S. Pavitt, L. Rhodes, and S. Wieskotten. 2017. The Body Composition Monitor: A Flexible Tool for Routine Fluid Management across the Haemodialysis Population. _Biomedical Physics & Engineering Express_ 3, 3 (2017), 035017\. https://doi.org/10.1088/2057-1976/aa6f45
* Keyes (2018) Os Keyes. 2018. The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. _Proceedings of the ACM on Human-Computer Interaction_ 2, CSCW (Nov. 2018), 1–22. https://doi.org/10.1145/3274357
* Khalil et al. (2014) Sami Khalil, Mas Mohktar, and Fatimah Ibrahim. 2014. The Theory and Fundamentals of Bioimpedance Analysis in Clinical Status Monitoring and Diagnosis of Diseases. _Sensors_ 14, 6 (June 2014), 10895–10928. https://doi.org/10.3390/s140610895
* Klaver and T’Sjoen (2018) M Klaver and G T’Sjoen. 2018. Changes in Regional Body Fat, Lean Body Mass and Body Shape in Trans Persons Using Cross- Sex Hormonal Therapy: Results from a Multicenter Prospective Study. _European Journal of Endocrinology_ 178 (2018), 163–171.
* Kullberg et al. (2009) J. Kullberg, J. Brandberg, J-E. Angelhed, H. Frimmel, E. Bergelin, L. Strid, H. Ahlström, L. Johansson, and L. Lönn. 2009. Whole-Body Adipose Tissue Analysis: Comparison of MRI, CT and Dual Energy X-Ray Absorptiometry. _The British Journal of Radiology_ 82, 974 (Feb. 2009), 123–130. https://doi.org/10.1259/bjr/80083156
* Kyle et al. (2004) Ursula G. Kyle, Ingvar Bosaeus, Antonio De Lorenzo, Paul Deurenberg, Marinos Elia, Jose Manuel Gomez, Berit Lilienthal Heitmann, Luisa Kent-Smith, and Jean-Claude Melchior. 2004\. Bioelectrical Impedance Analysis-Part I: Review of Principles and Methods. _Clinical Nutrition_ 23, 5 (Oct. 2004), 1226–1243. https://doi.org/10.1016/j.clnu.2004.06.004
* Kyle et al. (2001a) Ursula G. Kyle, Laurence Genton, Laurie Karsegard, Daniel O. Slosman, and Claude Pichard. 2001a. Single Prediction Equation for Bioelectrical Impedance Analysis in Adults Aged 20 –94 Years. _Nutrition_ 17, 3 (2001), 6.
* Kyle et al. (2001b) Ursula G. Kyle, Laurence Genton, Daniel O. Slosman, and Claude Pichard. 2001b. Fat-Free and Fat Mass Percentiles in 5225 Healthy Subjects Aged 15 to 98 Years. _Nutrition_ 17, 7-8 (July 2001), 534–541. https://doi.org/10.1016/S0899-9007(01)00555-X
* Lukaski (2013) H. C. Lukaski. 2013\. Evolution of Bioimpedance: A Circuitous Journey from Estimation of Physiological Function to Assessment of Body Composition and a Return to Clinical Research. _European Journal of Clinical Nutrition_ 67, S1 (Jan. 2013), S2–S9. https://doi.org/10.1038/ejcn.2012.149
* Lupton (2016) Deborah Lupton. 2016\. _The Quantified Self_. Polity, Cambridge, UK.
* Lupton (2018a) Deborah Lupton. 2018a. _Fat_. Routledge.
* Lupton (2018b) Deborah Lupton. 2018b. How Do Data Come to Matter? Living and Becoming with Personal Data. _Big Data & Society_ 5, 2 (July 2018), 2053951718786314\. https://doi.org/10.1177/2053951718786314
* Lupton (2020) Deborah Lupton. 2020\. Data Mattering and Self-Tracking: What Can Personal Data Do? _Continuum_ 34, 1 (Jan. 2020), 1–13. https://doi.org/10.1080/10304312.2019.1691149
* Martin et al. (1994) A. D. Martin, M. Z. Daniel, D. T. Drinkwater, and J. P. Clarys. 1994. Adipose Tissue Density, Estimated Adipose Lipid Fraction and Whole Body Adiposity in Male Cadavers. _International Journal of Obesity and Related Metabolic Disorders: Journal of the International Association for the Study of Obesity_ 18, 2 (Feb. 1994), 79–83.
* Moran-Thomas (2020) Amy Moran-Thomas. 2020\. How a Popular Medical Device Encodes Racial Bias. http://bostonreview.net/science-nature-race/amy-moran-thomas-how-popular-medical-device-encodes-racial-bias.
* Morgan Klaus Scheuerman et al. (2020) Morgan Klaus Scheuerman, Katta Spiel, Oliver L. Haimson, Foad Hamidi, and Stacy M. Branham. 2020\. HCI Guidelines for Gender Equity and Inclusivity. https://www.morgan-klaus.com/gender-guidelines.html.
* Moseson et al. (2020) Heidi Moseson, Noah Zazanis, Eli Goldberg, Laura Fix, Mary Durden, Ari Stoeffler, Jen Hastings, Lyndon Cudlitz, Bori Lesser-Lee, Laz Letcher, Aneidys Reyes, and Juno Obedin-Maliver. 2020. The Imperative for Transgender and Gender Nonbinary Inclusion. _Obstetrics & Gynecology_ 135, 5 (2020), 1059–1068. https://doi.org/10.1097/AOG.0000000000003816
* Nafus (2016) Dawn Nafus (Ed.). 2016\. _Quantified: Biosensing Technologies in Everyday Life_. MIT Press, Cambridge, MA.
* Nash (2019) Jennifer C. Nash. 2019\. _Black Feminism Reimagined: After Intersectionality_. Duke University Press, Durham.
* Neff and Nafus (2016) Gina Neff and Dawn Nafus. 2016. _Self-Tracking_. MIT Press, Cambridge, MA.
* Norris (2008) Joe Norris. 2008\. Duoethnography. _The SAGE encyclopedia of qualitative research methods_ 1 (2008), 233–236.
* Omer (2016) Timothy Omer. 2016\. Empowered Citizen ‘Health Hackers’ Who Are Not Waiting. _BMC Medicine_ 14, 1 (Aug. 2016), 118\. https://doi.org/10.1186/s12916-016-0670-y
* Pasquale (2015) Frank Pasquale. 2015\. _The Black Box Society: The Secret Algorithms That Control Money and Information_. Harvard University Press, Cambridge, MA.
* Prättälä et al. (2007) Ritva Prättälä, Laura Paalanen, Daiga Grinberga, Ville Helasoja, Anu Kasmel, and Janina Petkeviciene. 2007. Gender Differences in the Consumption of Meat, Fruit and Vegetables Are Similar in Finland and the Baltic Countries. _European Journal of Public Health_ 17, 5 (2007), 520–525.
* Scheuerman et al. (2018) Morgan Klaus Scheuerman, Stacy M. Branham, and Foad Hamidi. 2018\. Safe Spaces and Safe Places: Unpacking Technology-Mediated Experiences of Safety and Harm with Transgender People. _Proc. ACM Hum.-Comput. Interact._ 2, CSCW, Article 155 (Nov. 2018). https://doi.org/10.1145/3274424
* Scheuerman et al. (2019) Morgan Klaus Scheuerman, Jacob M. Paul, and Jed R. Brubaker. 2019\. How Computers See Gender: An Evaluation of Gender Classification in Commercial Facial Analysis Services. _Proceedings of the ACM on Human-Computer Interaction_ 3, CSCW (Nov. 2019), 1–33. https://doi.org/10.1145/3359246
* Scheuerman et al. (2020) Morgan Klaus Scheuerman, Kandrea Wade, Caitlin Lustig, and Jed R. Brubaker. 2020. How We’ve Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis. _Proceedings of the ACM on Human-Computer Interaction_ 4, CSCW1 (May 2020), 1–35. https://doi.org/10.1145/3392866
* Schutte et al. (1984) J. E. Schutte, E. J. Townsend, J. Hugg, R. F. Shoup, R. M. Malina, and C. G. Blomqvist. 1984\. Density of Lean Body Mass Is Greater in Blacks than in Whites. _Journal of Applied Physiology: Respiratory, Environmental and Exercise Physiology_ 56, 6 (June 1984), 1647–1649. https://doi.org/10.1152/jappl.1984.56.6.1647
* Segal et al. (1988) K. R. Segal, M. Van Loan, P. I. Fitzgerald, J. A. Hodgdon, and T. B. Van Itallie. 1988. Lean Body Mass Estimation by Bioelectrical Impedance Analysis: A Four-Site Cross-Validation Study. _The American Journal of Clinical Nutrition_ 47, 1 (Jan. 1988), 7–14. https://doi.org/10.1093/ajcn/47.1.7
* Shamas and Afsaneh Rigot (2019) Norman Shamas and Afsaneh Rigot. 2019. Looking At The Margins: Incorporating Harm Reduction Into Tech - from Radical Networks 2019.
* Shattuck-Heidorn and Richardson (2019) Heather Shattuck-Heidorn and Sarah S. Richardson. 2019. Sex/Gender and the Biosocial Turn. _The Scholar and Feminist Online_ 15 (2019).
* Shcherbina et al. (2017) Anna Shcherbina, C. Mikael Mattsson, Daryl Waggott, Heidi Salisbury, Jeffrey W. Christle, Trevor Hastie, Matthew T. Wheeler, and Euan A. Ashley. 2017. Accuracy in Wrist-Worn, Sensor-Based Measurements of Heart Rate and Energy Expenditure in a Diverse Cohort. _Journal of Personalized Medicine_ 7, 2 (June 2017), 3. https://doi.org/10.3390/jpm7020003
* Shepherd et al. (2017) John Shepherd, Bennett Ng, Markus Sommer, and Steven B. Heymsfield. 2017. Body Composition by DXA. _Bone_ 104 (Nov. 2017), 101–105. https://doi.org/10.1016/j.bone.2017.06.010
* Sjoding et al. (2020) Michael W. Sjoding, Robert P. Dickson, Theodore J. Iwashyna, Steven E. Gay, and Thomas S. Valley. 2020\. Racial Bias in Pulse Oximetry Measurement. _New England Journal of Medicine_ 383, 25 (Dec. 2020), 2477–2478. https://doi.org/10.1056/NEJMc2029240
* Spade (2003) Dean Spade. 2003\. Resisting Medicine/Remodeling Gender. _Berkeley Women’s Law Journal_ 18 (2003), 15–37.
* Spanos et al. (2020) Cassandra Spanos, Ingrid Bretherton, Jeffrey D Zajac, and Ada S Cheung. 2020. Effects of Gender-Affirming Hormone Therapy on Insulin Resistance and Body Composition in Transgender Individuals: A Systematic Review. _World Journal of Diabetes_ 11, 3 (March 2020), 66–77. https://doi.org/10.4239/wjd.v11.i3.66
* Spiel et al. (2019) Katta Spiel, Oliver L. Haimson, and Danielle Lottridge. 2019\. How to Do Better with Gender on Surveys: A Guide for HCI Researchers. _Interactions_ 26, 4 (June 2019), 62–65. https://doi.org/10.1145/3338283
* Springer et al. (2012) Kristen W. Springer, Jeanne Mager Stellman, and Rebecca M. Jordan-Young. 2012. Beyond a Catalogue of Differences: A Theoretical Frame and Good Practice Guidelines for Researching Sex/Gender in Human Health. _Social Science & Medicine_ 74, 11 (June 2012), 1817–1824. https://doi.org/10.1016/j.socscimed.2011.05.033
* Stengers (2018) Isabelle Stengers. 2018\. _Another Science Is Possible: A Manifesto for Slow Science_. John Wiley & Sons.
* Stryker (2013) Susan Stryker. 2013\. (De)Subjugated Knowledges. _The Transgender Studies Reader_ (2013), 1–17.
* Sun et al. (2003) Shumei S Sun, W Cameron Chumlea, Steven B Heymsfield, Henry C Lukaski, Dale Schoeller, Karl Friedl, Robert J Kuczmarski, Katherine M Flegal, Clifford L Johnson, and Van S Hubbard. 2003\. Development of Bioelectrical Impedance Analysis Prediction Equations for Body Composition with the Use of a Multicomponent Model for Use in Epidemiologic Surveys. _The American Journal of Clinical Nutrition_ 77, 2 (Feb. 2003), 331–340. https://doi.org/10.1093/ajcn/77.2.331
* Tannenbaum et al. (2019) Cara Tannenbaum, Robert P. Ellis, Friederike Eyssel, James Zou, and Londa Schiebinger. 2019. Sex and Gender Analysis Improves Science and Engineering. _Nature_ 575, 7781 (Nov. 2019), 137–146. https://doi.org/10.1038/s41586-019-1657-6
* Ward (2019) Leigh C. Ward. 2019\. Bioelectrical Impedance Analysis for Body Composition Assessment: Reflections on Accuracy, Clinical Utility, and Standardisation. _European Journal of Clinical Nutrition_ 73, 2 (Feb. 2019), 194–199. https://doi.org/10.1038/s41430-018-0335-3
* Wittkower (2016) D.E. Wittkower. 2016\. Principles of Anti-Discriminatory Design. In _2016 IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS)_. IEEE, Vancouver, BC, Canada, 1–7. https://doi.org/10.1109/ETHICS.2016.7560055
* Wolf and De Groot (2020) Gary Isaac Wolf and Martijn De Groot. 2020. A Conceptual Framework for Personal Science. _Frontiers in Computer Science_ 2 (2020), 21. https://doi.org/10.3389/fcomp.2020.00021
* Worley and Messer (2016) Becky Worley and Sarah Messer. 2016. Comparing the Accuracy of Body Fat Scales. _ABC News_ (June 2016).
# The H$\alpha$ Dots Survey. IV. A Fourth List of Faint Emission–Line Objects
Joseph D. Watkins Department of Astronomy, Indiana University, 727 East Third
Street, Bloomington, IN 47405, USA John J. Salzer Department of Astronomy,
Indiana University, 727 East Third Street, Bloomington, IN 47405, USA Angela
Van Sistine Department of Astronomy, Indiana University, 727 East Third
Street, Bloomington, IN 47405, USA Center for Gravitation, Cosmology, and
Astrophysics, University of Wisconsin-Milwaukee, 3135 N Maryland Ave,
Milwaukee, Wisconsin 53211, USA Ana Hayslip Lowell Observatory, 1400 W. Mars
Hill Rd., Flagstaff, Arizona 86001, USA Eric Hoar School of Materials
Science & Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Rayna Rampalli Department of Physics and Astronomy, Dartmouth College,
Hanover, NH 03755, USA
###### Abstract
We present the fourth catalog of serendipitously discovered compact
extragalactic emission-line sources – H$\alpha$ Dots. A total of 454 newly
discovered objects are included in the current survey list. These objects have
been detected in searches of moderately deep narrow-band images acquired for
the ALFALFA H$\alpha$ project (Van Sistine et al., 2016). The catalog of
H$\alpha$ Dots presented in the current paper was derived from searches
carried out using ALFALFA H$\alpha$ images obtained with the KPNO 2.1 m
telescope. This results in a substantially deeper sample of Dots compared to
our previous lists, which were all discovered in images taken with the WIYN
0.9 m telescope. The median R-band magnitude of the current catalog is 21.59,
more than 1.6 magnitudes fainter than the median for the 0.9 m sample (factor
of 4.4$\times$ fainter). Likewise, the median emission-line flux of the
detected sources is a factor of 4.3$\times$ fainter. The line-flux
completeness limit of the current sample is approximately 3 $\times$ 10$^{-16}$ erg
s$^{-1}$ cm$^{-2}$. We present accurate coordinates, apparent magnitudes and narrow-band
line fluxes for each object in the sample. Unlike our previous lists of
H$\alpha$ Dots, the current sample does not include follow-up spectroscopy.
Journal: Astrophysical Journal Supplement Series. Facilities: KPNO:2.1m.
Software: IRAF
## 1 Introduction
We present the latest installment of the H$\alpha$ Dots survey. H$\alpha$ Dots
are compact emission-line galaxies (ELGs) discovered in narrow-band images
(e.g., Salzer et al., 2020). The nature of our selection method results in
catalogs of strong-lined objects that identify very specific types of star-
forming and active galaxies detected via a number of different emission lines.
In particular, the H$\alpha$ Dots detected via the H$\alpha$ line are all
dwarf star-forming galaxies (including many blue compact dwarf (BCD) galaxies)
or outlying H II regions in nearby spiral galaxies, while those detected via
the [O III]$\lambda$5007 line are typically Green Pea galaxies (e.g.,
Cardamone et al., 2009; Brunker et al., 2020) or Seyfert 2 galaxies (Salzer et
al., 2020). The H$\alpha$ Dots survey also detects high redshift QSOs via one
of several UV emission lines (e.g., Mg II $\lambda$2798, C IV $\lambda$1549 or
Ly$\alpha$).
The H$\alpha$ Dots survey is carried out using images acquired for the ALFALFA
H$\alpha$ project (Van Sistine et al., 2016, hereafter AHA). The goal of the
AHA project was to measure accurately the star-formation rate density in the
local universe by obtaining H$\alpha$ images of a volume-limited sample of H
I-selected galaxies from the ALFALFA survey (Giovanelli et al., 2005; Haynes
et al., 2011, 2018). The H$\alpha$ Dots project originated with the
serendipitous discovery of point-like emission-line sources located in the
narrow-band AHA images (Kellar et al., 2012, hereafter K12). The initial
discovery was made by an undergraduate student who was processing early AHA
data in collaboration with the senior author. All subsequent searches for
H$\alpha$ Dots in the AHA images have been carried out exclusively by
undergraduate students.
Previous H$\alpha$ Dots survey lists were derived from AHA images taken with
the WIYN 0.9 m telescope (K12, Salzer et al., 2020). A third list of H$\alpha$
Dots detected using WIYN 0.9 m images is in preparation. The current survey
catalog has been created by analyzing narrow-band images obtained with the
Kitt Peak National Observatory (KPNO) 2.1 m telescope. (The Kitt Peak National
Observatory is part of the National Optical-Infrared Research Laboratory
(NOIRLab); NOIRLab is operated by the Association of Universities for Research
in Astronomy (AURA) under a cooperative agreement with the National Science
Foundation.) As we show in the subsequent sections of this paper, the use of a
larger telescope naturally results in a sample of ELGs that reaches to
substantially fainter flux levels compared to the previous survey lists.
Our paper is organized as follows: Section 2 describes the observational data
and the preliminary data processing steps, while Section 3 details our
selection methodology. The latter follows closely the methods adopted in the
previous H$\alpha$ Dots papers. Section 4 presents our new list of H$\alpha$
Dots and provides an overview of the properties of the samples using the
available survey data. Section 5 presents the limited spectroscopic
information available in the literature for the newly cataloged H$\alpha$ Dots
and discusses potential applications of this deep sample of ELGs once follow-
up spectra are obtained. Section 6 summarizes the main results of this study.
## 2 Observational Data
The current study analyzes all ten observing runs of AHA data obtained with
the KPNO 2.1 m telescope.
Images were taken using three different CCD detectors during the course of the
AHA project: T2KB, with a FOV of 10 by 10 arcmin, and STA2 and STA3, which
both had FOVs of 10 by 6.6 arcmin. All three CCDs had a pixel scale of 0.30
arcsec/pixel. Both continuum $R$-band images and narrow-band images were
acquired, the latter using the KPNO narrow-band filter set (see Figure 1).
Three of the four narrow-band filters were used in the observational data set
analyzed in this paper (KP1564, KP1565 and KP1566), although the majority of
the AHA targets observed with the KPNO 2.1 m telescope are located in the two
higher redshift filters (KP1565 and KP1566). For each target AHA galaxy, three
images were obtained. First, a 15 minute narrow-band image was taken at the
beginning of the observing sequence, then a 3 minute $R$-band image, followed
by another 15 minute narrow-band image.
Figure 1: The filter transmission curves for the narrow-band filter set used
for the current survey. The filters are designated in the survey catalog
(Table 2) as filters 1 through 4: filter 1 = KP1563, filter 2 = KP1564, filter
3 = KP1565, filter 4 = KP1566. The vast majority of the AHA targets observed
with the 2.1 m telescope were observed in filters 3 and 4.
The AHA project required accurate flux calibration for the narrow-band data.
Multiple observations of spectrophotometric standard stars were acquired each
night of the program. If the photometric conditions were at all dubious on any
night where data were obtained, fields taken on those nights were flagged and
re-observed on a later night using short “post-calibration” observations. All
narrow-band fluxes measured from AHA images have the zero-point of the flux
scale measured to better than 2% accuracy (and usually $\sim$1%). Full details
are included in Van Sistine et al. (2016). Because of the careful approach to
flux calibration employed by the AHA program, the narrow-band fluxes measured
for the H$\alpha$ Dots should be quite accurate.
Standard corrections for instrumental signatures were performed on each image.
This included overscan correction, mean bias subtraction, flat field
correction, and bad pixel cleaning (see K12 for details). All preliminary
image processing utilized the Image Reduction and Analysis Facility (IRAF).
Cosmic rays were removed from the images using the L.A. Cosmic script (van
Dokkum, 2001).
A software pipeline was developed to process the AHA images; details of this
pipeline are given in AHA. First, the code aligns the $R$-band and two narrow-
band images to a common center and applies an astrometric solution to each
image. They are then Gaussian filtered to a common stellar FWHM, after which
all images have their fluxes scaled to the flux level of the first narrow-band
image using bright, unsaturated stars in the field. The scaled $R$-band image
is subtracted from each narrow-band image, producing the continuum-subtracted
images. Finally, the two continuum-subtracted images are added together to get
the combined continuum-subtracted image.
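The arithmetic core of that sequence can be summarized as follows; this is a minimal numpy sketch assuming already-aligned, PSF-matched frames and pre-measured star fluxes, not the actual AHA pipeline (which is described fully in Van Sistine et al. 2016).

```python
import numpy as np

def scale_to_reference(img, star_flux_img, star_flux_ref):
    """Scale an image so its bright, unsaturated star fluxes match those
    measured in the reference (first narrow-band) frame."""
    return img * np.median(np.asarray(star_flux_ref) / np.asarray(star_flux_img))

def combined_subtracted(nb1, nb2, rband, stars_nb1, stars_nb2, stars_r):
    """Scale nb2 and the R-band frame to nb1, subtract the continuum from
    each narrow-band frame, and sum the two subtracted frames."""
    nb2_s = scale_to_reference(nb2, stars_nb2, stars_nb1)
    r_s = scale_to_reference(rband, stars_r, stars_nb1)
    return (nb1 - r_s) + (nb2_s - r_s)
```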
Figure 2: H$\alpha$ Dot 1010 is located in the upper-left portion of the field
of AGC 330247. The upper image is the $R$-band continuum image, and the lower
image is the continuum-subtracted narrow-band image. The isolated point source
of residual narrow-band flux is circled in red. It has an R magnitude of
20.67. The target AHA galaxy, AGC 330247 = CGCG 476-011, is located in the
bottom-left of the field (below the two bright stars), and is seen to possess
strong H$\alpha$ emission. The spiral galaxy in the lower right is UGC 12514;
it exhibits H$\alpha$ emission from a number of disk H II regions. The total
field-of-view of these images is 10.0 $\times$ 6.3 arcminutes.
Examples of a continuum $R$-band image and a continuum-subtracted narrow-band
image are shown in Figure 2. The images show a cut-out of the field for AHA
target AGC 330247, taken with the T2KB detector. Circled in the upper-left
corner of both images is an isolated, point-like source of residual narrow-
band flux. It is unresolved in our images and is located far from either of
the two larger galaxies in this field. This object is what we call an
“H$\alpha$ Dot.” When viewed in SDSS color images it exhibits a greenish tint,
strongly suggesting that it is an [O III]-detected galaxy located at z $\sim$
0.34. The target AHA galaxy is located in the lower-left of this image; it is
a strong H$\alpha$ emitter. The spiral galaxy located in the lower right is
UGC 12514. It is also an ALFALFA H I source, and exhibits emission from many H
II regions in its spiral disk.
Table 1: Observing Run Summary for KPNO 2.1 m H$\alpha$ Dot Catalog Observing Run | Detector | Number Nights | Number Fields | H$\alpha$ Dots
---|---|---|---|---
September 2010 | T2KB | 4 | 25 | 24
November 2010 | T2KB | 5 | 45 | 26
March 2011 | T2KB | 7 | 72 | 78
May 2011 | T2KB | 7 | 48 | 53
October 2011 | T2KB | 8 | 88 | 54
March 2012 | T2KB | 9 | 87 | 89
September 2012 | STA2 | 8 | 80 | 45
March 2013 | T2KB | 5 | 45 | 28
April 2013 | T2KB | 8 | 68 | 34
March 2014 | STA3 | 5 | 53 | 23
Totals | | 66 | 611 | 454
A total of 611 ALFALFA H$\alpha$ fields were observed over the course of 10
observing runs (Table 1). These 10 runs can be broken up into Fall and
Spring samples. The Fall sample is approximately contained within a region
between R.A. of $22^{h}$ to $3^{h}$ and Dec. of $+24^{\circ}$ to
$+29^{\circ}$, and the Spring sample is approximately contained between R.A.
of $7^{h}30^{m}$ to $16^{h}40^{m}$ and Dec. of $+3^{\circ}$ to $+17^{\circ}$.
The images collected during the course of these ten observing runs were
searched for H$\alpha$ Dots. Our methodology for detecting H$\alpha$ Dots in
the processed AHA narrow-band images is described in the next section.
## 3 2.1 m H$\alpha$ Dot Survey: Identifying H$\alpha$ Dots
For an object in the field to be considered an H$\alpha$ Dot, it must
satisfy two primary criteria. First, it must have a statistically significant
excess of flux in the narrow-band filter relative to the R-band flux. Second,
it must be morphologically compact. This usually means the object is either
unresolved or barely resolved in our CCD images. The first criterion is
readily quantified (see below), while the second is admittedly somewhat
subjective. In particular, it will vary from field-to-field depending on the
size of the point-spread function (a.k.a. “seeing”) associated with each
image. All H$\alpha$ Dot candidates are evaluated by at least two members of
the project team in an effort to apply a uniform assessment of
compactness.
A software package was developed in order to automatically and systematically
search for H$\alpha$ Dots in the AHA images. The software employs routines
designed to identify every object in an image, compare their fluxes in the
continuum and narrow-band filters, and then calculate a magnitude difference
and its uncertainty for each source. Potential candidates are reviewed by
members of the H$\alpha$ Dot team before the list of ELG candidates is
finalized. For more details about the software package, see K12.
The software takes as input the $R$-band and composite narrow-band images and
identifies every object present in the field using a modified version of
DAOFIND (Stetson, 1987). Next the software performs photometry with a constant
aperture size on each object in the scaled $R$-band and the unsubtracted
narrow-band images to construct a magnitude difference, calculated as
$\Delta m=m_{NB}-m_{R}.$ (1)
Here the magnitudes used are simple instrumental magnitudes. Because the
images being used have all been scaled to a common flux level, objects with no
emission lines (e.g., most stars) will have $\Delta$m = 0.0. Large negative
values of $\Delta$m indicate an object with a significant excess of flux in
the narrow-band image. The software also computes the ratio of the absolute
value of the magnitude difference to the error in the magnitude difference, as
$ratio=\frac{|\Delta m|}{\sigma_{\Delta m}},$ (2)
where $\sigma_{\Delta m}$ is generated by taking the errors associated with
the $R$-band and narrow-band magnitudes and summing them in quadrature:
$\sigma_{\Delta m}={\sqrt{\sigma_{NB}^{2}+\sigma_{R}^{2}}}.$ (3)
The $ratio$ parameter serves as a pseudo signal-to-noise (SNR) ratio. Small
values of $ratio$ represent either objects with little or no emission-line
flux (small $\Delta$m) or noisy sources (large $\sigma_{\Delta m}$).
For each field analyzed, the software generates a diagnostic plot (see Figure
3 for an example). Each “$\mathsf{X}$” in the plot indicates a single object
in the images. The left-hand graph plots $\Delta m$ against the instrumental
R-band magnitude. The bright stars in the field are clumped at $\Delta m$ = 0
on the left side of the plot; the locus of stars remains centered on zero but
spreads out to larger values of $\Delta m$ for the fainter stars with large
photometric errors. Objects with a negative magnitude difference indicate more
flux in the narrow-band image than in the $R$-band image. The right-hand graph
plots $\Delta m$ against the $ratio$ parameter. The vertical and horizontal
lines drawn on the diagnostic plot represent the threshold values for $\Delta
m$ and $ratio$ that are used to select emission-line candidates. We set the
values for inclusion in the H$\alpha$ Dot sample at $\Delta m$ $<$ $-$0.4 and
$ratio$ $>$ 4.5. These values were found by K12 to optimize the detection of
faint objects and minimize the number of false detections. Objects located in
the lower-right quadrant of the right-hand plot of Figure 3 represent
candidate H$\alpha$ Dots.
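Equations 1 through 3 and the two thresholds reduce to a few lines of array arithmetic. The sketch below is our own Python paraphrase of the selection step, not the actual K12 software; it flags objects falling in the lower-right quadrant of Figure 3.

```python
import numpy as np

def select_candidates(m_nb, m_r, sig_nb, sig_r, dm_cut=-0.4, ratio_cut=4.5):
    """Apply eqs. 1-3 and the K12 thresholds to instrumental magnitudes."""
    dm = np.asarray(m_nb) - np.asarray(m_r)                   # eq. 1
    sig_dm = np.hypot(np.asarray(sig_nb), np.asarray(sig_r))  # eq. 3
    ratio = np.abs(dm) / sig_dm                               # eq. 2 (pseudo-SNR)
    keep = (dm < dm_cut) & (ratio > ratio_cut)  # lower-right quadrant of Fig. 3
    return keep, dm, ratio
```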
Figure 3: The diagnostic plot produced after using the dot-finding software on
the field of AGC 330247 (see Figure 2). The left panel plots $\Delta m$ (eq.
1) vs. instrumental R-band magnitude. Brighter stars are located to the left
and lie along a line around $\Delta m=0$ since the narrow-band and broad-band
flux levels are normalized to a common value by our software. Objects with a
negative $\Delta m$ indicate residual narrow-band flux. The right panel plots
$\Delta m$ vs. $ratio$ (eq. 2). Objects of interest have a large negative
$\Delta m$ and large $ratio$ values, corresponding to the bottom-right
quadrant of the plot. The solid red lines indicate the limiting values for
$\Delta m$ and $ratio$ for inclusion in the sample (see text). The large
number of putative detections in this field is caused by the many H II regions
located in AGC 330247 and UGC 12514 (Figures 2 and 5). The H II regions (n =
52) are marked in both panels with red squares, while the single H$\alpha$ Dot
candidate (H$\alpha$ Dot 1010) is shown as a green circle. The remaining
objects in the lower right quadrant, indicated by the “$\mathsf{X}$” symbol,
are all image artifacts that have been rejected. We also point out the many
bright (R$_{Inst}$ $<$ $-$8) sources with large positive $\Delta m$ in the upper
left portion of the left panel. These are all image artifacts caused by the
long saturation “bleed” trail from the bright star visible in Figure 2.
Once the software has selected all possible candidates, these objects must be
visually examined to ascertain their nature. This verification step is
essential, since the automated software and our high-quality data combine to
yield numerous sources that are not true emission-line objects. Our survey
images typically include numerous sources that can lead to false detections,
including uncleaned cosmic rays, saturated stars, satellite or meteor trails,
and noise spikes. It is also common for star-forming regions in the AHA
target to be selected by passing the $\Delta m$ and $ratio$ criteria listed
above. For example, many of the emission-line candidates in the diagnostic
plot shown in Figure 3 are H II regions in UGC 12514 (see Figures 2 and 5).
The object review process is necessary to separate these three types of
detections: real H$\alpha$ Dots, H II regions, or image artifacts.
Figure 4: Example of image cutouts used to evaluate each H$\alpha$ Dot
candidate. The field is centered on H$\alpha$ Dot 1447. From left to right,
these images are the $R$-band continuum image, the combined narrow-band image,
and the continuum-subtracted narrow-band image. The compact appearance and
significant residual flux present in the continuum-subtracted image is
characteristic of an H$\alpha$ Dot. Each sub-image is 200 $\times$ 200 pixels,
or 60 arcseconds on a side. Figure 5: This set of image cutouts shows an
example where an H II region in a large spiral galaxy has been detected by the
dot-finding software (indicated by the red circle). The software often detects
dozens of H II regions in the central AHA galaxy because they surpass the
$\Delta m$ and $ratio$ thresholds. In the case of this particular galaxy (UGC
12514) a total of 37 H II regions were selected by our software. The review of
all objects located in the lower right quadrant of the diagnostic diagram
(e.g., Figure 3) allows the user to categorize this detection properly as an H
II region and not an H$\alpha$ Dot.
Our software produces image cut-outs for each object found in the bottom-right
quadrant of the diagnostic plot. These are three sub-images (200 $\times$ 200
pixels in size) centered on the object in question and displayed horizontally.
The cut-outs contain the object as seen in the $R$-band image, the combined
narrow-band image, and the continuum-subtracted narrow-band image (see Figures
4 and 5). Using the image cut-outs, the user visually examines each object
located in the bottom–right quadrant of the right plot in Figure 3 and
categorizes them into one of the three object types specified above. Objects
that are classified as image artifacts are discarded. Objects flagged as
H$\alpha$ Dots or HII regions are cataloged into separate lists for subsequent
analysis.
An object must be compact in appearance, spatially separate from the central
AHA galaxy, and contain significant emission in the narrow-band image in order
to be selected as an H$\alpha$ Dot candidate. These criteria are discussed in
detail in the previous H$\alpha$ Dots survey papers (Kellar et al., 2012;
Salzer et al., 2020). The compactness criterion was instituted to avoid
cataloging large, extended galaxies that were already known. In particular, we
did not wish to include the target AHA galaxy in our survey. The compactness
and separation requirements are somewhat subjective, although the visual
checking by at least two independent team members helps to ensure a fairly
uniform approach. We note that the separation criterion does not prevent the
survey from detecting isolated / outlying H II regions that are associated
with the AHA target galaxy. Several examples are included in Salzer et al.
(2020), who emphasize that the identification of such objects requires follow-
up spectroscopy.
Examples of objects detected in our emission-line searches are shown in
Figures 4 and 5. The object in Figure 4 is H$\alpha$ Dot 1447, which has a
measured R-band magnitude of 19.87 $\pm$ 0.03. While very compact in nature,
it is seen to be resolved in the broad-band image. This implies that the
emission line detected in our narrow-band filter is most likely H$\alpha$.
Since it was observed in filter KP1566 (i.e., filter 4), its redshift is most
likely to fall between 5300 and 7800 km/s (Van Sistine et al., 2016). This
would make H$\alpha$ Dot 1447 a dwarf star-forming galaxy, which is consistent
with its blue appearance in the Sloan Digital Sky Survey (SDSS) color images
(York et al., 2000; Abolfathi et al., 2018). Figure 5 shows a number of H II
regions located in the nearly face-on spiral UGC 12514 (see Figure 2). Large
spiral galaxies like this are commonly detected multiple times by our
software. In the example shown the detected H II region is the faint object
located at the center of the cut-outs. As mentioned above, these H II regions
are cataloged separately from the H$\alpha$ Dots. They are not discussed
further as part of the current study.
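As a quick check on the filter assignment just described, the quoted velocity window maps to observed H$\alpha$ wavelengths via $\lambda_{\mathrm{obs}}=\lambda_{\mathrm{H}\alpha}(1+v/c)$:

$6563~\mathrm{\AA}\times(1+5300/299792)\approx 6679~\mathrm{\AA}\quad\mathrm{to}\quad 6563~\mathrm{\AA}\times(1+7800/299792)\approx 6734~\mathrm{\AA},$

a range that would lie within the KP1566 bandpass (an inference from Figure 1, not a value stated explicitly in the text).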
After the review process is completed, objects flagged as H$\alpha$ Dots are
remeasured more carefully in order to obtain accurate $R$-band magnitudes and
narrow-band line fluxes for each source. The photometric calibrations derived
for the AHA project are utilized to place the measurements on an accurate flux
scale.
Once the list of H$\alpha$ Dots discovered in the AHA 2.1 m images was
finalized, the entire catalog was cross-matched with objects in the SDSS
Data Release 14 (Abolfathi et al., 2018). Using the coordinates for the
H$\alpha$ Dots obtained from our astrometric solution (see Section 2), SDSS
positions were retrieved for all H$\alpha$ Dots that were within the SDSS
footprint. We then visually compared the location given by the coordinates for
each H$\alpha$ Dot to verify that the query had returned the correct object.
If it returned the wrong object, the correct object was located using the SDSS
navigate window, and new SDSS coordinates were obtained by centering the
cursor on the object in the navigate window and reading off the corresponding
Right Ascension and Declination. Fully 83.7% of the 2.1 m H$\alpha$ Dots (380
of 454) were included in the SDSS photometric catalog. The H$\alpha$ Dots not
in SDSS were either too faint (65 of 454, or 14.3%) or are located outside of
the SDSS footprint (9 of 454, or 2.0%). For the objects in common between the
two surveys we queried SDSS DR14 again to obtain the full set of SDSS ugriz
magnitudes and errors. This information was then merged into the H$\alpha$
Dots database.
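A cross-match of this kind can be reproduced with astropy's catalog matching. The sketch below is one reasonable implementation under our own assumptions (in particular, the 2 arcsec match radius is an illustrative choice, not a value quoted above), not the authors' actual procedure.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_to_sdss(dot_ra, dot_dec, sdss_ra, sdss_dec, max_sep=2.0 * u.arcsec):
    """Nearest-neighbor match of H-alpha Dot positions (deg) to an SDSS
    coordinate list; returns SDSS indices, a match flag, and separations."""
    dots = SkyCoord(ra=dot_ra * u.deg, dec=dot_dec * u.deg)
    sdss = SkyCoord(ra=sdss_ra * u.deg, dec=sdss_dec * u.deg)
    idx, sep2d, _ = dots.match_to_catalog_sky(sdss)
    return idx, sep2d < max_sep, sep2d
```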
## 4 2.1 m H$\alpha$ Dot Survey: Results
The final catalog of H$\alpha$ Dots detected using the KPNO 2.1 m AHA images
contains 454 newly discovered ELGs. This list of objects was arrived at after
searching 611 AHA fields. The total sky coverage of these images represents
15.494 square degrees, resulting in a surface density of new H$\alpha$ Dots of
29.30 deg$^{-2}$. This number, which is a key figure of merit for such surveys, is
substantially higher than the surface densities of objects detected in the
previous H$\alpha$ Dots survey lists (K12, Salzer et al., 2020). The latter
had surface densities of 5.22 and 5.24 ELGs/deg$^{2}$, respectively, more than a
factor of 5.5$\times$ lower.
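As a quick arithmetic check of the figures quoted above:

$\frac{454}{15.494~\mathrm{deg}^{2}}\approx 29.30~\mathrm{deg}^{-2},\qquad\frac{29.30}{5.24}\approx 5.6,$

consistent with the stated factor of more than 5.5$\times$.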
Table 2: Fourth List of H$\alpha$ Dots
H$\alpha$ Dot # | RA(2000) | DEC(2000) | Obs. Run | Filter | $\Delta$m | Ratio | mR | NB Line Flux | SDSS r
---|---|---|---|---|---|---|---|---|---
| degrees | degrees | | | | | mag | $\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$ | mag
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10)
1001 | 330.44398 | 24.18998 | Sep2010 | 4 | -1.21 | 23.91 | 21.84 $\pm$ 0.06 | 0.062 $\pm$ 0.003 | 22.05 $\pm$ 0.10
1002 | 330.44587 | 24.20362 | Sep2010 | 4 | -0.92 | 72.05 | 19.27 $\pm$ 0.02 | 0.329 $\pm$ 0.004 | 19.45 $\pm$ 0.02
1003 | 330.45553 | 24.21585 | Sep2010 | 4 | -0.60 | 34.08 | 19.69 $\pm$ 0.02 | 0.179 $\pm$ 0.004 |
1004 | 334.14431 | 27.94493 | Sep2010 | 4 | -1.20 | 17.60 | 21.39 $\pm$ 0.04 | 0.058 $\pm$ 0.004 |
1005 | 340.46256 | 27.76333 | Sep2010 | 4 | -0.49 | 10.80 | 21.13 $\pm$ 0.05 | 0.043 $\pm$ 0.004 |
1006 | 340.55283 | 27.69124 | Sep2010 | 4 | -1.25 | 18.51 | 22.13 $\pm$ 0.10 | 0.059 $\pm$ 0.004 | 21.87 $\pm$ 0.11
1007 | 340.59996 | 27.81721 | Sep2010 | 4 | -1.89 | 12.94 | 22.94 $\pm$ 0.18 | 0.057 $\pm$ 0.004 |
1008 | 344.40155 | 26.40989 | Sep2010 | 4 | -0.72 | 30.60 | 19.98 $\pm$ 0.03 | 0.135 $\pm$ 0.005 | 20.19 $\pm$ 0.03
1009 | 349.23530 | 27.93332 | Sep2010 | 4 | -1.10 | 7.39 | 22.97 $\pm$ 0.28 | 0.033 $\pm$ 0.006 | 22.87 $\pm$ 0.19
1010 | 350.08029 | 26.09913 | Sep2010 | 4 | -1.64 | 52.78 | 20.67 $\pm$ 0.05 | 0.348 $\pm$ 0.007 | 20.66 $\pm$ 0.04
1011 | 352.82919 | 25.12090 | Sep2010 | 4 | -0.54 | 17.84 | 20.48 $\pm$ 0.03 | 0.092 $\pm$ 0.006 | 20.82 $\pm$ 0.05
1012 | 355.59838 | 27.96453 | Sep2010 | 4 | -0.42 | 7.10 | 21.37 $\pm$ 0.05 | 0.026 $\pm$ 0.004 | 21.81 $\pm$ 0.16
1013 | 355.70529 | 27.97838 | Sep2010 | 4 | -1.63 | 106.59 | 19.44 $\pm$ 0.02 | 0.762 $\pm$ 0.008 | 19.78 $\pm$ 0.04
1014 | 355.70809 | 28.07243 | Sep2010 | 4 | -1.87 | 7.21 | 23.75 $\pm$ 0.58 | 0.017 $\pm$ 0.004 |
1015 | 355.99726 | 27.15461 | Sep2010 | 4 | -2.17 | 8.94 | 22.51 $\pm$ 0.12 | 0.053 $\pm$ 0.006 |
1016 | 356.00110 | 27.15570 | Sep2010 | 4 | -1.63 | 7.17 | 22.02 $\pm$ 0.11 | 0.057 $\pm$ 0.006 |
1017 | 356.00053 | 27.12553 | Sep2010 | 4 | -0.50 | 11.08 | 21.16 $\pm$ 0.04 | 0.054 $\pm$ 0.006 | 21.07 $\pm$ 0.09
1018 | 356.05400 | 27.19430 | Sep2010 | 4 | -0.72 | 7.08 | 22.38 $\pm$ 0.10 | 0.033 $\pm$ 0.004 | 22.00 $\pm$ 0.15
1019 | 357.03115 | 24.32929 | Sep2010 | 4 | -1.48 | 6.31 | 24.87 $\pm$ 2.26 | 0.041 $\pm$ 0.006 |
1020 | 357.18135 | 27.98432 | Sep2010 | 4 | -0.41 | 4.92 | 21.38 $\pm$ 0.09 | 0.017 $\pm$ 0.003 | 21.43 $\pm$ 0.09
1021 | 357.18551 | 27.94486 | Sep2010 | 4 | -1.28 | 17.00 | 21.27 $\pm$ 0.11 | 0.152 $\pm$ 0.007 | 21.13 $\pm$ 0.07
1022 | 357.38800 | 27.91175 | Sep2010 | 4 | -1.46 | 9.06 | 22.53 $\pm$ 0.24 | 0.055 $\pm$ 0.006 |
1023 | 17.12467 | 24.68193 | Sep2010 | 4 | -0.55 | 13.64 | 20.81 $\pm$ 0.04 | 0.053 $\pm$ 0.006 | 20.77 $\pm$ 0.06
1024 | 17.17512 | 24.68174 | Sep2010 | 4 | -0.59 | 9.91 | 21.75 $\pm$ 0.07 | 0.034 $\pm$ 0.004 | 21.91 $\pm$ 0.10
1025 | 331.97791 | 27.02233 | Nov2010 | 4 | -0.42 | 6.76 | 20.30 $\pm$ 0.05 | 0.032 $\pm$ 0.007 |
1026 | 332.20645 | 24.72709 | Nov2010 | 4 | -1.17 | 17.20 | 21.23 $\pm$ 0.05 | 0.071 $\pm$ 0.004 | 21.87 $\pm$ 0.18
1027 | 332.27611 | 24.61840 | Nov2010 | 4 | -0.46 | 7.05 | 21.30 $\pm$ 0.07 | 0.028 $\pm$ 0.005 | 21.94 $\pm$ 0.08
1028 | 332.53514 | 25.44871 | Nov2010 | 4 | -1.04 | 10.36 | 21.55 $\pm$ 0.12 | 0.062 $\pm$ 0.006 | 22.15 $\pm$ 0.11
1029 | 334.34985 | 27.68723 | Nov2010 | 4 | -0.45 | 5.71 | 21.97 $\pm$ 0.07 | 0.019 $\pm$ 0.005 | 22.12 $\pm$ 0.14
1030 | 334.51783 | 27.55964 | Nov2010 | 4 | -1.33 | 11.42 | 23.14 $\pm$ 0.22 | 0.033 $\pm$ 0.004 |
1031 | 350.37149 | 24.25764 | Nov2010 | 4 | -0.91 | 21.42 | 21.68 $\pm$ 0.05 | 0.045 $\pm$ 0.003 | 21.90 $\pm$ 0.09
1032 | 351.29051 | 25.85756 | Nov2010 | 4 | -0.69 | 8.87 | 20.90 $\pm$ 0.10 | 0.050 $\pm$ 0.006 | 21.07 $\pm$ 0.12
1033 | 351.68935 | 25.68903 | Nov2010 | 4 | -1.21 | 20.38 | 22.72 $\pm$ 0.12 | 0.045 $\pm$ 0.004 | 22.25 $\pm$ 0.19
1034 | 352.94941 | 25.83481 | Nov2010 | 4 | -0.74 | 47.50 | 19.11 $\pm$ 0.02 | 0.274 $\pm$ 0.006 | 19.24 $\pm$ 0.03
1035 | 354.34425 | 25.64395 | Nov2010 | 4 | -1.98 | 8.47 | 23.51 $\pm$ 0.43 | 0.046 $\pm$ 0.004 |
1036 | 354.49840 | 25.69983 | Nov2010 | 4 | -1.93 | 9.46 | 22.71 $\pm$ 0.27 | 0.051 $\pm$ 0.004 |
1037 | 357.03530 | 27.60221 | Nov2010 | 4 | -0.65 | 15.31 | 20.87 $\pm$ 0.06 | 0.057 $\pm$ 0.006 | 20.76 $\pm$ 0.07
1038 | 357.31563 | 25.53299 | Nov2010 | 4 | -1.51 | 16.02 | 21.68 $\pm$ 0.14 | 0.164 $\pm$ 0.008 |
1039 | 358.17888 | 27.30178 | Nov2010 | 4 | -1.08 | 5.74 | 22.26 $\pm$ 0.17 | 0.016 $\pm$ 0.004 |
1040 | 358.24607 | 27.29628 | Nov2010 | 4 | -0.89 | 10.03 | 22.06 $\pm$ 0.14 | 0.042 $\pm$ 0.006 | 22.18 $\pm$ 0.14
Note. — Table 2 is published in its entirety in the machine-readable format. A
portion is shown here for guidance regarding its form and content.
After the analysis is completed on the data from each of the AHA observing
runs, every H$\alpha$ Dot candidate in the final list is assigned a unique
H$\alpha$ Dot number. Following the convention established in K12, the
H$\alpha$ Dots are ordered by increasing right ascension within each observing
run. The observing runs are ordered chronologically, with the H$\alpha$ Dot
numbering sequence proceeding continuously from one run to the next. To avoid
any confusion with the numbering sequence established for the 0.9 m H$\alpha$
Dot catalogs, the 2.1 m H$\alpha$ Dot numbers start with 1001. Hence, the
first 24 H$\alpha$ Dot candidates from the September 2010 observing
run are numbered 1001 through 1024, the H$\alpha$ Dot numbers for the 26 ELGs
discovered in the November 2010 data run from 1025 to 1050, and so on.
Our fourth catalog of H$\alpha$ Dots is presented in Table 2. The table
includes, for each H$\alpha$ Dot, its identifying H$\alpha$ Dot number (column
1), SDSS Right Ascension and Declination (epoch J2000) (columns 2 and 3), the
observing run from which the AHA data originates (column 4; see also Table 1),
and the narrow-band filter used for the observation of that field (column 5,
see Figure 1). Columns 6 and 7 give the magnitude difference $\Delta m$ and
$ratio$ as defined in Section 3. These represent two of the primary selection
parameters used in creating the H$\alpha$ Dots survey. The measured R-band
magnitude and its associated error are given in column 8, while column 9 gives
the measured narrow-band line flux and its error. These measured quantities
are derived directly from our survey images. Finally, column 10 lists the SDSS
r-band magnitude and its error for objects where this has been measured.
We make a few notes about the data presented in Table 2. The decision to use
SDSS coordinates as opposed to our own astrometry was based on a direct
comparison of the two sets of positional data. The accuracy of our astrometry
is generally quite good. The median positional offset between our
astrometry and SDSS positions is 1.02 arcsec for the full sample. However, we
found that while the positional agreement is good at the centers of our
images, it tends to be less reliable near the edges of our frames. Given the
wide-field accuracy of the SDSS astrometry, we have opted to adopt it whenever
possible. In cases where the H$\alpha$ Dot is located outside the footprint of
the SDSS (nine galaxies), the coordinates are those derived from our analysis.
We include the SDSS r-band magnitudes for those H$\alpha$ Dots that match with
an object in the SDSS photometric catalog (380 objects). In general, there is
good agreement between the SDSS and H$\alpha$ Dots photometry for objects with
R $<$ 22 (with some notable exceptions). The median SDSS r magnitudes are
systematically 0.10-0.15 magnitudes fainter than our R-band measurements due
to well-known offsets in the two photometric systems (e.g., Jordi et al.,
2006). For R $>$ 22.5, the photometric errors in both measurements get quite
large (typically larger than 0.2 magnitudes). As expected, the R-band
magnitudes for the 65 H$\alpha$ Dots that were not cataloged by SDSS are
substantially fainter than those that are: $\langle$R$\rangle_{In\ SDSS}$ =
21.34, compared with $\langle$R$\rangle_{Not\ in\ SDSS}$ = 22.32.
Figure 6 presents a histogram of the R-band magnitudes for the full sample of
454 H$\alpha$ Dots listed in Table 2 (red histogram). The brightest object in
the catalog is H$\alpha$ Dot 1144, with R = 17.07, while the faintest object,
H$\alpha$ Dot 1117, has R = 25.35. For comparison, the corresponding histogram
for the entire catalog of H$\alpha$ Dots discovered using the 0.9 m telescope
(K12, Salzer et al., 2020, plus additional objects from the upcoming third
catalog list) is shown in black.
The median R magnitude in the 2.1 m catalog is 21.59, 1.62 magnitudes fainter
than the median R magnitude found in the 0.9 m catalog. This corresponds to a
factor of 4.45 in brightness. The two sample medians are indicated with arrows
in the figure. The relative photometric depths of the two samples of ELGs are
very similar to the ratio of the surface density of objects discussed above,
and are consistent with the expectations based on the ratio of the light-
collecting areas of the two telescopes of (84 inches/37 inches)$^{2}$ = 5.15.
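As a worked check using only the numbers quoted above, the magnitude offset
maps onto the flux ratio through the standard relation
$F_{0.9\,\mathrm{m}}/F_{2.1\,\mathrm{m}}$ = 10$^{0.4\Delta m}$ =
10$^{0.4\times 1.62}$ $\approx$ 4.45, to be compared with the light-collecting
area ratio (84/37)$^{2}$ $\approx$ 5.15.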
Figure 6: Histogram of the R-band magnitudes for all 454 2.1 m H$\alpha$ Dots
(red histogram). For comparison we plot the corresponding histogram for the
full sample of 0.9 m detected H$\alpha$ Dots (black histogram). The median
values for the two distributions are given in the legend and are indicated by
the two arrows. The median apparent magnitude for the 2.1 m H$\alpha$ Dots is
seen to be 1.62 magnitudes fainter (factor of 4.45$\times$ fainter).
Figure 7:
Histogram of the emission-line flux measured in our narrow-band images for the
full set of 454 2.1 m H$\alpha$ Dots (red histogram). For comparison we plot
the corresponding histogram for the entire sample of 358 0.9 m detected
H$\alpha$ Dots (black histogram). The median values for the two distributions
are given in the legend and are indicated by the two arrows. The median line
flux for the 2.1 m H$\alpha$ Dots is seen to be 0.63 dex fainter (factor of
4.27$\times$ fainter).
While the distribution of apparent magnitudes is a useful indicator of the
depth of the H$\alpha$ Dots survey, this is a sample of galaxies detected
based on the strength of their emission lines located within the bandpasses of
our narrow-band filters. Hence, the proper way to evaluate the depth of the
survey is by examination of the distribution of emission-line fluxes. This
distribution is shown in Figure 7.
As with Figure 6, we plot line-flux histograms for both the 2.1 m H$\alpha$
Dots from the current study (red histogram) as well as the sample of H$\alpha$
Dots detected in the 0.9 m data (black histogram) in Figure 7. The median
values of the two distributions are indicated in the legend, and denoted by
arrows in the plot. Once again, the 2.1 m H$\alpha$ Dots sample is seen to be
substantially deeper than that found with the 0.9 m telescope. The median line
flux found for the 2.1 m data is 4.57 $\times$ 10$^{-16}$ erg s$^{-1}$
cm$^{-2}$, which is a factor of 4.27 times fainter than the median for the 0.9
m H$\alpha$ Dots. The flux distribution of the current sample peaks at
log(flux) = $-$15.5 (3.2 $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$), beyond
which the distribution falls off rapidly.
We adopt this value as the approximate line-flux completeness limit for the
2.1 m H$\alpha$ Dots sample.
## 5 Discussion
### 5.1 Spectroscopic Follow-up of the New H$\alpha$ Dots
Previous H$\alpha$ Dot survey lists (K12, Salzer et al., 2020) included
information from follow-up spectra. These spectra provide confirmation of the
emission-line detection, as well as identifying the line present in the
narrow-band images. Typically these follow-up spectra yield accurate redshifts
and emission-line ratios that allow for the determination of the activity type
of each ELG (e.g., star forming vs. AGN). A similar spectroscopic campaign for
objects in the current H$\alpha$ Dot catalog was not possible, in part because
of the larger number of objects, and in part because the objects are, on
average, significantly fainter than those in the previous catalogs. Hence, we
are presenting our latest and deepest list of H$\alpha$ Dots without benefit
of spectroscopic confirmations.
During our cross-matching with the SDSS, we noted a small number of H$\alpha$
Dots with SDSS spectra. A search of Data Release 16 (Ahumada et al., 2020)
reveals that 17 H$\alpha$ Dots from the current survey list (3.7% of the
total) have existing spectra in SDSS. These objects are listed in Table 3.
There are four H$\alpha$ Dots with low redshifts where the emission line in
our narrow-band filter was H$\alpha$ (based on our inspection of the SDSS
spectra). These four objects are all brighter than R = 18.0, and are among the
brightest of the H$\alpha$ Dots in the current sample (e.g., Figure 6). One of
these is H$\alpha$ Dot 1144, the brightest object in our catalog. All four of
these were observed as part of the legacy SDSS redshift survey (e.g., Strauss
et al., 2002). The remaining H$\alpha$ Dots with SDSS spectra were observed as
part of the BOSS (Dawson et al., 2013) or eBOSS (Dawson et al., 2016)
projects. One of these, H$\alpha$ Dot 1021, was detected via strong [O
III]$\lambda$5007 being located in our survey filter. This object has a
spectrum similar to the Green Pea galaxies (Cardamone et al., 2009; Brunker et
al., 2020). All of the remaining 12 H$\alpha$ Dots with SDSS spectra are QSOs.
Of these, nine are detected due to redshifted Mg II emission, one via C III],
and two via C IV. Table 3 lists the H$\alpha$ Dot number, the emission line
detected in our narrow-band filter, the spectroscopic redshift, and the
activity type (either QSO or star-forming galaxy (SFG)) for each object.
Table 3: H$\alpha$ Dots with SDSS Spectra
H$\alpha$ Dot # | Detected Line | z | ELG Type
---|---|---|---
(1) | (2) | (3) | (4)
1011 | Mg II $\lambda$2798 | 1.3955 | QSO
1021 | [O III] $\lambda$5007 | 0.3406 | SFG
1110 | H$\alpha$ | 0.0165 | SFG
1142 | H$\alpha$ | 0.0241 | SFG
1144 | H$\alpha$ | 0.0239 | SFG
1192 | Mg II $\lambda$2798 | 1.4085 | QSO
1193 | Mg II $\lambda$2798 | 1.3925 | QSO
1214 | Mg II $\lambda$2798 | 1.3858 | QSO
1216 | C IV $\lambda$1549 | 3.2969 | QSO
1217 | C IV $\lambda$1549 | 3.3161 | QSO
1220 | Mg II $\lambda$2798 | 1.3930 | QSO
1260 | C III] $\lambda$1909 | 2.4814 | QSO
1329 | Mg II $\lambda$2798 | 1.3896 | QSO
1341 | Mg II $\lambda$2798 | 1.3803 | QSO
1356 | Mg II $\lambda$2798 | 1.3837 | QSO
1358 | Mg II $\lambda$2798 | 1.3717 | QSO
1392 | H$\alpha$ | 0.0214 | SFG
The distribution of redshifts for the 17 H$\alpha$ Dots from the current
catalog with SDSS spectra is dramatically different from the one for the 0.9 m
Dots (e.g., Salzer et al., 2020). The latter sample is dominated by objects
detected via their H$\alpha$ or [O III] lines. Specifically, 92% of the
H$\alpha$ Dots from the first two catalogs were detected either by H$\alpha$
(55%) or [O III]$\lambda$5007 (37%). The remaining galaxies were discovered
either by the [O II]$\lambda$3727 doublet (2%) or one of the UV lines common
to QSO spectra (6%).
The fact that the “emission-line detection function” for the H$\alpha$ Dots
found with the KPNO 2.1 m telescope is so different from the one observed for
the previous H$\alpha$ Dot survey lists (WIYN 0.9 m component) should be no
surprise. Many of the objects with SDSS spectra were pre-selected for the BOSS
and eBOSS surveys as having broad-band colors consistent with QSOs (e.g.,
Dawson et al., 2013, 2016). Hence, the large percentage of QSOs is no
accident. However, even before seeing the SDSS spectra, we were anticipating
that the new catalog of H$\alpha$ Dots would be different. The increased depth
of the 2.1 m sample compared to the 0.9 m Dots, coupled with the fixed
redshift range accessible for each emission line, implies that the fainter
H$\alpha$ Dots should preferentially be objects selected at higher redshifts
rather than being lower luminosity versions of the objects found with the
0.9 m telescope. We expect a higher portion of [O III]-detected Dots in the
current catalog than were present on the first two survey lists, as well as
substantially higher numbers of [O II]-detected galaxies and high redshift
QSOs. As time and telescope resources allow, we will hopefully get the
opportunity to test these hypotheses as we obtain follow-up spectra for these
new H$\alpha$ Dots.
### 5.2 Applications of the H$\alpha$ Dots
As highlighted in the previous survey lists based on sources detected in WIYN
0.9 m images (K12, Salzer et al., 2020), the H$\alpha$ Dots have a number of
interesting science applications. These include studying large samples of
dwarf star-forming galaxies, including Blue Compact Dwarfs (BCDs), and the
detection of strong [O III] emitters like Green Peas and Seyfert 2 galaxies.
The lack of existing follow-up spectroscopy for the current sample of
H$\alpha$ Dots naturally limits its immediate impact on addressing relevant
science questions. Nonetheless, we draw attention to the high-impact
scientific applications these objects can be used to address.
#### 5.2.1 Low Luminosity Star-forming Galaxies
As outlined above, we anticipate that a lower percentage of the H$\alpha$ Dots
in the current catalog will be low redshift H$\alpha$ detections. Still, we
expect that a significant fraction will have been detected via the H$\alpha$
line. Given the wavelength coverage of the narrow-band filters employed, the
resulting redshift range of the H$\alpha$-selected galaxies will be 0.005 –
0.026 (velocity range 1460 – 7810 km/s). This redshift range, coupled with the
apparent magnitude distribution of the current sample (e.g., Figure 6),
implies that the H$\alpha$-detected H$\alpha$ Dots will all be dwarf star-
forming systems (e.g., see Figure 8 in Salzer et al., 2020). Since this
catalog of galaxies represents a statistically complete, line-flux-limited
sample, it could be used for accurately establishing the volume density of
low-luminosity star-forming galaxies in the local universe.
Since the H$\alpha$ Dots are pre-selected as possessing strong emission lines,
this sample of dwarf galaxies will also be ideal for measuring metal
abundances for a large, statistically-complete sample of objects. A similar
study is currently underway utilizing the brighter H$\alpha$ Dots from K12 and
Salzer et al. (2020) (A. Hirschauer, in preparation). An important feature
of the current catalog is that it extends the detection of strong-
lined ELGs to substantially fainter magnitudes. Hence, we expect that the 2.1
m survey of H$\alpha$ Dots will include lower-luminosity dwarfs than those
found in the previous survey lists. This in turn should result in a larger
yield of extremely low abundance systems, similar to H$\alpha$ Dot 303 = AGC 198691
(a.k.a. the Leoncino Dwarf; Hirschauer et al., 2016; McQuinn et al., 2020).
#### 5.2.2 [O III]-detected Star-forming Galaxies
The H$\alpha$ Dots discovered with the WIYN 0.9 m telescope included a large
number of [O III]-detected galaxies (37% of the objects in the first two
survey lists). The expectation is that the 2.1 m Dots will include a
comparable or somewhat larger fraction of systems detected by the [O
III]$\lambda$5007 line. In fact, the current catalog might well be dominated
by such objects. The relevant redshift range for detection by [O
III]$\lambda$5007 is z=0.317-0.345.
The [O III]-detected systems found in the previous H$\alpha$ Dots catalogs
included a mix of Green Pea-like galaxies (Cardamone et al., 2009; Brunker et
al., 2020) and Seyfert 2 galaxies. We expect that the current catalog will
detect many additional Green Pea candidates. Additionally, given the increased
depth of the current list of H$\alpha$ Dots, we fully expect that many less
extreme [O III]-selected star-forming galaxies will also come within reach of
detection. It is well known that the strength of the [O III]$\lambda$5007 line
peaks at metal abundances of $\sim$10% solar (log(O/H)+12 $\sim$ 7.7). In the
local universe (z $<$ 0.1), actively star-forming galaxies with B-band
absolute magnitudes in the range $-$16 $\leq$ M$_{B}$ $\leq$ $-$18 are often found
in [O III] line-selected samples such as the UM survey (e.g., MacAlpine et
al., 1977; MacAlpine & Williams, 1981; Salzer et al., 1989), the Case survey
(e.g., Sanduleak & Pesch, 1982; Pesch & Sanduleak, 1983; Salzer et al., 1995),
and the KISSB survey (Salzer et al., 2002). The depth of the current sample
should allow for the detection of this population of objects at the higher
redshifts probed by the [O III]$\lambda$5007 line with our filters.
A key attribute of both the Green Pea galaxies and the intermediate luminosity
[O III]-detected SFGs is that they very often exhibit spectra with such strong
emission lines that the temperature-sensitive [O III]$\lambda$4363 auroral
line is present in their follow-up spectra. Hence, we expect that the [O
III]-detected H$\alpha$ Dots in the current list will include dozens of
sources from which accurate direct abundances will be measurable.
#### 5.2.3 [O III]-detected Seyfert 2 Galaxies
The additional depth of the current H$\alpha$ Dot catalog will likely result
in a deeper and presumably more comprehensive sample of [O III]-selected
Seyfert 2 galaxies in the redshift window z=0.317-0.345. While the previous
H$\alpha$ Dots lists include a significant number of Seyfert 2s (11% of the
sample overall, and 30% of the [O III]-detected Dots), they tend to be objects
with extreme spectral characteristics. The Seyfert 2s included in the previous
H$\alpha$ Dots catalogs tend to exhibit very high [O III]/H$\beta$ ratios;
they are nearly all very high-excitation objects (see Figure 7 in Salzer et
al. (2020)). Because of the great depth of the current survey, the Seyfert 2s
cataloged in the current paper should include both the high-excitation objects
as well as many with lower [O III]/H$\beta$ ratios. Overall we expect an even
higher percentage of Seyfert 2s compared to the previous lists. We also expect
that this new sample of Seyferts will be more representative, rather than
being biased toward the more extreme examples.
Once again, the line-flux limited nature of the H$\alpha$ Dots survey method
will allow us to generate an accurate volume density for the Seyfert 2 population
in the redshift window covered by our filters. The strong-lined nature of the
Seyfert 2 sample will also allow for the study of the metallicity of the AGN
at these redshifts (which represents a lookback time of $\sim$3.7 Gyr). A
preliminary analysis of the [O III]-detected Seyfert 2s from the previous
survey catalogs has indicated the possibility of a modest drop in the average
metal abundance compared to low redshift counterparts (D. Carr, in
preparation).
#### 5.2.4 [O II]-detected SFGs and AGN
Another expectation of the current list of H$\alpha$ Dots is that it will
include a higher proportion of [O II]-detected SFGs and AGN at z =
0.770-0.807. This population of ELGs is just barely detectable with the 0.9 m
telescope portion of the H$\alpha$ Dots survey. Only three [O II]-detected
galaxies are included in the first two survey lists, and these are all found
in the fainter portion of the sample (average line flux of 1.3 $\times$
10$^{-15}$ erg s$^{-1}$ cm$^{-2}$). The increased depth of the 2.1 m survey
list should result in a
substantial increase in the number of [O II]-selected ELGs. This will allow
the survey to probe the star-forming and AGN galaxy populations at these
cosmologically interesting distances (lookback times of $\sim$6.8 Gyr, or 50%
the age of the universe).
#### 5.2.5 QSOs
Finally, we mention the rather obvious presence of numerous QSOs within the
current H$\alpha$ Dots catalog. While our survey is not capable of producing a
comprehensive QSO survey, it does detect substantial numbers of quasars in
specific, narrow redshift windows: z = 1.357-1.408 for the Mg II $\lambda$2798
line, z = 2.454-2.528 for C III] $\lambda$1909, z = 3.257-3.347 for C IV
$\lambda$1549, and z = 4.428-4.543 for Ly$\alpha$ $\lambda$1215. If detected
in sufficient numbers, the H$\alpha$ Dots QSOs could provide accurate “hard
points” for their volume densities in these redshift windows. While it is
clear that the large fraction of QSOs among the H$\alpha$ Dots with available
SDSS spectra is a selection effect, these spectra provide a glimpse of the
potential science applications of the line-flux limited H$\alpha$ Dots survey
for probing the QSO population at a range of redshifts.
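These detection windows follow directly from the filter bandpass through the
relation z = $\lambda_{\mathrm{obs}}/\lambda_{\mathrm{rest}}$ $-$ 1. As a
worked check (assuming an effective bandpass of roughly 6595-6738 Å, inferred
here from the quoted windows rather than stated explicitly), Mg II
$\lambda$2798 gives z = 6595/2798 $-$ 1 $\approx$ 1.357 at the blue edge and
6738/2798 $-$ 1 $\approx$ 1.408 at the red edge, matching the quoted range.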
## 6 Summary & Conclusions
We present the latest list of H$\alpha$ Dots, based on images obtained for the
ALFALFA H$\alpha$ project (Van Sistine et al., 2016). H$\alpha$ Dots are
compact emission-line sources detected serendipitously in narrow-band images.
Previous survey catalogs have presented lists of H$\alpha$ Dots detected in
images obtained with the WIYN 0.9 m telescope (K12, Salzer et al., 2020). Our
new list of H$\alpha$ Dots has been created by analyzing 611 ALFALFA H$\alpha$
fields observed with the KPNO 2.1 m telescope.
The current H$\alpha$ Dot catalog contains 454 unique H$\alpha$ Dots. All the
new H$\alpha$ Dots were identified in 2.1 m images using the same software
packages developed for the previous H$\alpha$ Dots catalogs. Hence, the only
significant difference with the previous survey lists is in the depth of the
sample. The 2.1 m H$\alpha$ Dots survey is sensitive to fainter objects,
detecting sources with a median apparent R magnitude of 21.59 and a median
line flux of 4.57 $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$. In both metrics,
the current survey list probes a factor of $\sim$4.4 times fainter than the
0.9 m H$\alpha$ Dots catalogs. The approximate emission-line flux completeness
limit of the current sample is 3 $\times$ 10$^{-16}$ erg s$^{-1}$ cm$^{-2}$.
While the previous H$\alpha$ Dots catalogs included information from follow-up
spectroscopy, we do not have corresponding spectral data for the current list
of ELGs. We speculate that the additional depth of the H$\alpha$ Dots list
generated using 2.1 m telescope images will result in a significantly
different mix of objects being discovered, relative to the previous catalogs.
While we expect that the current catalog includes numerous low-luminosity
star-forming dwarf galaxies detected via their H$\alpha$ lines, we expect that
this population will account for a much smaller fraction of the overall ELG
catalog when compared to the lists generated from the 0.9 m data (where the
H$\alpha$-detected fraction was 55%). We anticipate that large fractions of the
current catalog will be found to have been detected via their [O III] or [O
II] lines. Plans for carrying out follow-up spectroscopy of the 2.1 m
H$\alpha$ Dots are being formulated.
We would like to thank the anonymous referee who made a number of suggestions
that have improved the paper. We gratefully acknowledge the financial support
of the College of Arts and Sciences and the Department of Astronomy at Indiana
University, which helped make this ongoing undergraduate research project
possible. The H$\alpha$ Dots survey project is based on data obtained for the
ALFALFA H$\alpha$ project, which was carried out with the support of a
National Science Foundation grant to JJS (NSF-AST-0823801). We would also like
to acknowledge the Maria Mitchell Association, which provided a summer
research internship to RR as part of their NSF-funded REU program (with JJS
serving as an associate mentor). This project made use of Sloan Digital Sky
Survey data. Funding for the SDSS and SDSS-II has been provided by the Alfred
P. Sloan Foundation, the Participating Institutions, the National Science
Foundation, the U.S. Department of Energy, the National Aeronautics and Space
Administration, the Japanese Monbukagakusho, the Max Planck Society, and the
Higher Education Funding Council for England. The SDSS Web Site is
http://www.sdss.org/. The SDSS is managed by the Astrophysical Research
Consortium for the Participating Institutions. The Participating Institutions
are the American Museum of Natural History, Astrophysical Institute Potsdam,
University of Basel, University of Cambridge, Case Western Reserve University,
University of Chicago, Drexel University, Fermilab, the Institute for Advanced
Study, the Japan Participation Group, Johns Hopkins University, the Joint
Institute for Nuclear Astrophysics, the Kavli Institute for Particle
Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of
Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute
for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New
Mexico State University, Ohio State University, University of Pittsburgh,
University of Portsmouth, Princeton University, the United States Naval
Observatory, and the University of Washington.
## References
* Abolfathi et al. (2018) Abolfathi, B., Aguado, D. S., Aguilar, G., et al. 2018, ApJS, 235, 42
* Ahumada et al. (2020) Ahumada, R., Allende Prieto, C., Almeida, A., et al. 2020, ApJS, 249, 3
* Brunker et al. (2020) Brunker, S. W., Salzer, J. J., Janowiecki, S., et al. 2020, ApJ, 898, 68
* Cardamone et al. (2009) Cardamone, C., Schawinski, K., Sarzi, M., et al. 2009, MNRAS, 399, 1191
* Dawson et al. (2013) Dawson, K. S., Schlegel, D. J., Ahn, C. P., et al. 2013, AJ, 145, 10
* Dawson et al. (2016) Dawson, K. S., Kneib, J.-P., Percival, W. J., et al. 2016, AJ, 151, 44
* Hirschauer et al. (2016) Hirschauer, A. S., Salzer, J. J., Skillman, E. D., et al. 2016, ApJ, 822, 108
* Giovanelli et al. (2005) Giovanelli, R., Haynes, M. P., Kent, B. R. et al. 2005, AJ, 130, 2598
* Haynes et al. (2011) Haynes, M. P., Giovanelli, R., Martin, A. M., et al. 2011, AJ, 142, 170
* Haynes et al. (2018) Haynes, M. P., Giovanelli, R., Kent, B. R. et al. 2018, ApJ, 861, 49
* Jordi et al. (2006) Jordi, K., Grebel, E. K., Ammon, K. 2006, A&A, 460, 339
* Kellar et al. (2012) Kellar, J. A., Salzer, J. J., Wegner, G., et al. 2012, AJ, 143, 145 (K12)
* MacAlpine et al. (1977) MacAlpine, G. M., Smith, S. B., & Lewis, D. W. 1977, ApJS, 34, 95
* MacAlpine & Williams (1981) MacAlpine, G. M., & Williams, G. A. 1981, ApJS, 45, 113
* McQuinn et al. (2020) McQuinn, K. B. W., Berg, D. A., Skillman, E. D., et al. 2020, ApJ, 891, 181
* Pesch & Sanduleak (1983) Pesch, P., & Sanduleak, N. 1983, ApJS, 51, 171
* Salzer et al. (1989) Salzer, J. J., MacAlpine, G. M., & Boroson, T. A. 1989, ApJS, 70, 479
* Salzer et al. (1995) Salzer, J. J., Moody, J. W., Rosenberg, J. L., et al. 1995, AJ, 109, 2376
* Salzer et al. (2000) Salzer, J. J., Gronwall, C., Lipovetsky, V. A., et al. 2000, AJ, 120, 80
* Salzer et al. (2001) Salzer, J. J., Gronwall, C., Lipovetsky, V. A., et al. 2001, AJ, 121, 66
* Salzer et al. (2002) Salzer, J. J., Gronwall, C., Sarajedini, V. L., et al. 2002, AJ, 123, 1292
* Salzer et al. (2020) Salzer, J. J., Feddersen, J. R., Derloshon, K., et al. 2020, AJ, 160, 242
* Sanduleak & Pesch (1982) Sanduleak, N., & Pesch, P. 1982, ApJ, 258, L11
* Stetson (1987) Stetson, P. B. 1987, PASP, 99, 191
* Strauss et al. (2002) Strauss, M. A., Weinberg, D. H., Lupton, R. H., et al. 2002, AJ, 124, 1810
* van Dokkum (2001) van Dokkum, P. 2001, PASP, 113, 1420
* Van Sistine et al. (2016) Van Sistine, A., Salzer, J. J., Sugden, A., et al. 2016, ApJ, 824, 25 (AHA)
* York et al. (2000) York, D. G., Adelman, J., Anderson, J. E., et al. 2000, AJ, 120, 1579
# Density-based clustering of social networks
Giovanna Menardi Department of Statistical Sciences, University of Padova,
Italy<EMAIL_ADDRESS>Domenico De Stefano Department of Political and
Social Sciences, University of Trieste, Italy<EMAIL_ADDRESS>
###### Abstract
The idea underlying the modal formulation of density-based clustering is to
associate groups with the regions around the modes of the probability density
function underlying the data. This correspondence between clusters and dense
regions in the sample space is here exploited to discuss an extension of this
approach to the analysis of social networks. Such extension seems particularly
appealing: conceptually, the notion of a high-density cluster fits well the
notion of a community in a network, regarded as a collection of individuals
with dense local ties in its neighbourhood. The lack of a probabilistic notion
of density in networks is turned into a major strength of the proposed method,
where node-wise measures that quantify the role and position of actors may be
used to derive different community configurations. The approach allows for the
identification of a hierarchical structure of clusters, which may capture
different degrees of resolution of the clustering structure. This feature fits
the nature of social networks well, disentangling the different degrees of
involvement of individuals in social aggregations.
## 1 Introduction
### 1.1 Background and motivation
Within large social communities, individuals interact sparsely with each
other and usually establish tight relationships with a limited number of
subjects. Interactions encourage individuals to aggregate into groups, where the
relationships are stronger and the information flow is more intense than
outside.
The generating mechanism of these groups, albeit pervasive, is complex and
often difficult to uncover. On one hand, different kinds of relationship
may be established, from friendship to professional collaboration, each of
them possibly with different levels of intensity. On the other hand,
aggregation may be driven by diverse, sometimes unobserved, social mechanisms
– homophily, popularity, ranking or influence. Depending on the context,
cohesive communities may be formed, where evenly distributed relationships
connect each actor with most of the other actors. This configuration
characterizes, for instance, individual interactions, communication systems,
and sport and team relationships (Carron and Brawley, 2000). A different
dynamic arises when one
or few influential actors drive the aggregation and shape the whole
organization of the community (Ghalmane et al., 2019). Examples of this latter
behaviour are opinion or news spreading in online communities where followers
are attached to influencers (e.g. Wang et al., 2017b); epidemic diffusion
where few prominent actors govern the outbreak (Medo et al., 2009), scientific
collaborations and citations where communities develop around the so-called
star scientists (De Stefano et al., 2013). Here, the nature of leadership may
be associated with various roles that actors carve out within the groups,
acting for instance as hubs or brokers.
In this context, Social network analysis (SNA) exploits the framework offered
by graph theory to translate these ideas into operational tools: any community
is suitably described by a graph where nodes represent the actors and the
links between them their interactions, possibly of different strength. A wide
range of methods, of which centrality and equivalence measures are just
simple examples, has been developed to express notions of social role and
position. A standard account is Wasserman and Faust (1994).
While the search for groups in networks may follow different routes, groups
are usually defined as locally densely connected sets of nodes.
The correspondence between groups of subjects and their inner connection
density, as well as the possible role of influential individuals within
communities, suggests extending the ideas underlying the density-based
approach for clustering nonrelational data to the network framework. The
_modal_ formulation of this approach associates clusters with the domains of
attraction of the modes of the density function underlying the observed data,
namely clusters correspond to dense regions of the sample space. While network
data unarguably prevent the definition of a probabilistic density function on
the nodes, the two notions of group agree conceptually. Operationally, modal
clustering often resorts to graph theory to
detect clusters, which further favours the extension of this formulation to
network data. As a fortunate side effect the modal approach allows for the
identification of a hierarchical structure of clusters, which may capture
different degrees of resolution of the clustering structure.
Based on these ideas, the aim of this work is to discuss a method to find
clusters of nodes within a network structure, while accounting for
relationships of different strength. Consistently with the cluster notion
shared by the nonrelational density-based approach, we focus on aggregation
mechanisms driven by the attraction exerted by influential actors, on the
basis of different “leadership” roles as detected by means of alternative
node-wise measures. Note that this perspective is largely neglected by the
relevant literature, which mostly focuses on the concept of mutual cohesiveness
within communities.
The paper is organised as follows. After a brief review of clustering
approaches for networks, we overview the modal clustering formulation in
metric spaces. Then, we discuss its extension to network data, in both the
unweighted and the weighted network frameworks. The procedure is illustrated on
some simple archetypal networks characterized by different community
configurations, on a number of benchmark examples with a known community
structure, and on a complex original dataset to identify
groups of researchers within the community of the Italian academic
statisticians. A discussion concludes the paper.
### 1.2 Overview of the related literature
_Community detection_ refers to the general aim of partitioning networks in
subsets of nodes, which share some common properties and play similar roles in
the relational structure. Similarly to the nonrelational framework, this task
is, in fact, far from being accurately defined. Thus, while the general
purpose usually translates into the task of identifying assortative groups
with dense inner connections, a different perspective would also include the
search for disassortative structures with weaker interactions within, rather
than between communities.
The lack of a precise definition of cluster, along with the unsupervised
nature of the problem, has led on one hand to the proliferation of a
voluminous literature on this topic and, on the other hand, to confusing
taxonomies of methods designed for the task. The lack of a consistent
terminology has caused expressions such as _network_ or _graph clustering_,
_module_, _block_ or _community detection_ to be either used interchangeably
or to carry slightly different, yet ambiguous, connotations. In this
confounding panorama, methods are more easily classified on the basis of their
technical details and algorithmic implementations (e.g., Fortunato, 2010;
Azaouzi et al., 2019), which, however, obscures the more relevant notion of
cluster underlying them. Reviewing all these methods is then an awkward task
which we cannot undertake without exceeding the scope of the paper. For our
purpose, we limit ourselves to setting some boundaries by providing a coarse
overview of the main goals and motivations for finding groups in networks, and
refer the reader to the insightful review of Rosvall et al. (2019) for further
details and references. At the same time, we use the terms cluster, community,
group and so on interchangeably in the rest of the paper.
The first, perhaps most widespread approach to find clusters in networks aims
at identifying nodes that are densely interconnected compared to the rest of the network. Due
to the generality of this principle, methods differ in the way it is
translated into operational implementations. Several formulations rely on
detection of actors or edges with high centrality, as for instance, the very
popular method of Girvan-Newman (GN, Newman and Girvan, 2004), a divisive
algorithm for undirected and unweighted graphs based on edge-betweenness,
afterwards generalized by Chen and Yuan (2006). Further methods relying on a
similar ground build on the optimisation of the cluster modularity (Danon et
al., 2005), so that each community will include a larger number of inner edges
than expected by chance. The Louvain method is unarguably one of the most
popular representatives of this category (Blondel et al., 2008). The
aforementioned methods result in cohesive communities where transitivity is
high and each actor is highly connected to each other inside the group.
Nevertheless, the idea of high density within a group may also be intended
as the one arising in star-shaped clusters, where density is concentrated in
the figure of some hubs attracting less prominent actors. Evidence of such a
theoretical mechanism of aggregation has been explained by Goyal et al. (2006)
as a combination of small-world behavior guided by the presence of interlinked
stars. In fact, this principle has been largely neglected by SNA, with the
works of Kloster and Gleich (2014), based on the local optimization of the so-
called conductance, and, to some extent, Falkowski et al. (2007) representing
exceptions. This is also the route we follow.
A further facet of the clustering problem in networks, known as _cut-based_
perspective, aims at partitioning networks in a fixed number of balanced
groups with a small number of edges between them, and no guarantees about a
possible denser structure of inner connection. In this context, networks are
often of a mesh- or grid-like form. Methods in this class refer back to the
seminal work of Kernighan and Lin (1970) and often build on the spectrum of
the data. Examples are Pothen et al. (1990) and Wang et al. (2017a).
The block-modeling approach follows a completely different purpose, relying on
the fundamental concept of node equivalence, of which _structural equivalence_
is the most used. Disregarding the similarity of nodes, groups are here based
on more general patterns that allow for disassortative communities and gather
nodes that serve, within the network, a similar structural role in terms of
their connectivity profile. A first formalization in terms of non-stochastic
blocks can be found in Lorrain and White (1971), while Holland et al. (1983)
gave rise to the stochastic counterpart, later generalized to the weighted
framework (Aicher et al., 2015) and largely applied in various contexts. See
Lee and Wilkinson (2019) for a recent review.
## 2 Clusters as dense sets
### 2.1 Modal clustering of non-relational data
Modal clustering delineates a class of methods for grouping non-relational
data defined on a metric, continuous space, and building on the concept of
clusters as “regions of high density separated from other such regions by
regions of low density” (Hartigan, 1975, p. 205). Formally, the observed data
$(x_{1},\ldots,x_{n})^{\prime}$, $x_{i}\in\mathbb{R}^{d}$, $i=1,\ldots,n$, are
supposed to be a sample from a random vector with (unknown) probability
density function $f$. The modes of $f$ are regarded as the archetypes of
the clusters, which are in turn represented by their domains of attraction.
The practical identification of the modal regions may proceed along different
directions. One of them associates the clusters with disconnected
density level sets of the sample space, without attempting explicitly the
difficult task of mode detection. The key idea is that, when there is no
clustering structure, $f$ is unimodal, and any section of $f$, at a given
level $\lambda$, singles out a connected (upper) level set:
$L(\lambda)=\\{x\in\mathbb{R}^{d}:f(x)\geq\lambda\\}$. Conversely, when $f$ is
multimodal, $L(\lambda)$ may be either connected or disconnected, depending on
$\lambda$. In the latter case, it is formed by a number of connected
components, each of them associated with a region of the sample space
including at least one mode of $f$. Since a single section of $f$ could not
reveal all the modes of $f$, $\lambda$ is moved along its feasible range,
giving rise to a hierarchical structure, known as the _cluster tree_ , which
provides the number of connected components for each $\lambda$. Each leaf of
the tree describes a _cluster core_ , defined as the largest connected
component of the density level sets which includes one mode. Figure 1
illustrates a simple example of these ideas: cluster cores associated with the
two highest modes are identified by the smallest $\lambda$ larger than
$\lambda_{3}$, while the smallest $\lambda$ larger than $\lambda_{1}$
identifies two connected components, one of which is the cluster core
associated with the lowest mode.
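To make the construction concrete, the following is a minimal R sketch of the
level-set counting underlying the cluster tree, run on a hypothetical
univariate trimodal sample (not data from the paper); on a grid, the connected
components of the upper level set are simply the maximal runs of grid points
above the threshold.

```r
# Minimal sketch: number of connected components of {x : f(x) >= lambda}
# from a kernel density estimate, for a hypothetical trimodal sample.
set.seed(1)
x <- c(rnorm(100, 0), rnorm(100, 4), rnorm(50, 8))  # toy trimodal sample
f <- density(x, n = 512)                             # kernel estimate of f

n_components <- function(f, lambda) {
  runs <- rle(f$y >= lambda)   # maximal runs of grid points above lambda
  sum(runs$values)             # number of TRUE runs = connected components
}

lambdas <- seq(1e-4, max(f$y), length.out = 100)
plot(lambdas, sapply(lambdas, n_components, f = f), type = "s",
     xlab = expression(lambda), ylab = "connected components")
```

The resulting step function is exactly the profile summarised by the cluster
tree: each change in the count marks either the split of a connected component
or the disappearance of a component above a mode.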
Note that while the cluster tree resembles a dendrogram, the whole procedure
cannot be included in the class of hierarchical techniques. These explore,
within the same run, all the partitions with a number of clusters ranging from
one to $n,$ by subsequent splits (divisive algorithms) or aggregations
(agglomerative algorithms). Conversely, in the cluster tree, the leaves are
themselves veritable clusters, instead of single observations, and their
number is then an estimate of the number of clusters. Hence, with respect to a
dendrogram, the cluster tree enjoys a different, more insightful
interpretation. The height of the leaves corresponds to the density level at
which the associated mode appears, thus providing an indication of the cluster
prominence. Finally, the hierarchical structure of the tree allows for
catching possible different degrees of resolution of the clustering. In the
example illustrated in Figure 1 the number of modes is three, but the two
highest ones pertain to the same macro-group, at a lower level of resolution,
hence the leaves associated to the two groups collapse to a single branch
accordingly.
Figure 1: A sample from three subpopulations and the associated contour set at
a level $\lambda_{0}$ (left). The threshold $\lambda_{0}$ defines a section of
the trimodal underlying density function (center) and identifies two connected
regions. On the right, the cluster tree indicates the number of connected
components for varying $\lambda$ and the total number of clusters,
corresponding to the leaves.
As the union of the cluster cores is not a partition of the sample space,
unallocated points are assigned to the cluster cores according to a supervised
scheme of classification, generally accounting for their density.
Operationally, clustering involves two main choices: first, a density
estimator is required and this is typically selected among the nonparametric
methods. Second, for each examined threshold $\lambda$ it is to establish
whether the associated level set is connected and what are its components.
Since there is no obvious method to identify connected sets in a
multidimensional space, graph theory comes to this aid. A graph is built on
the sample points and the connected components of the subgraphs induced by the
level sets are then easily detected. The reader is referred to Menardi (2016)
for further details about modal clustering.
### 2.2 Modal clustering of social networks
#### 2.2.1 Defining density on networks
For the current formulation, we regard social networks as undirected graphs
$\mathcal{G}=\\{\mathcal{V},\mathcal{E}\\}$ consisting of a set
$\mathcal{V}=\\{v_{1},\dots,v_{n}\\}$ of nodes – the actors of the network–
and a set $\mathcal{E}=\\{e_{ij}\\}$ of $m$ links or edges, $i\neq
j=1,\ldots,n,$ representing relations between pairs of nodes. Depending on the
nature of the observed relationships, the elements of $\mathcal{E}$ assume
different forms: in binary networks the $e_{ij}$ will take values in
$\\{0,1\\}$, denoting the absence and the presence of a link, respectively,
while real nonnegative values of $e_{ij}$ will account for different strengths
of the relationship in weighted networks. In order to represent a given
network $\mathcal{G}$ it is possible to define an $n\times n$ adjacency matrix
$\mathbf{A}$ whose elements are $a_{ij}=e_{ij}$.
The notion of high-density regions highlighted in the previous section
suggests a natural counterpart in network analysis, where clusters are often
referred to as sets of actors with dense relationship patterns (see, among
others, Moody, 2001). However, network objects are subject to an inherent
limitation, as their properties can be established in geodesic terms only. In
particular, a probabilistic notion of density cannot be defined and shall be
intended in a less formal way, reflecting some intuitive meaning of
cohesiveness.
We are naturally tempted to borrow the concept of density and akin notions
from graph theory. The density of a subgraph $\mathcal{H}\subseteq\mathcal{G}$
is defined as the proportion of all possible edges of $\mathcal{H}$ which are
actually observed. In fact, density definition as a node-wise measure is
arbitrary as a subgraph $\mathcal{H}_{v}$ is required to be associated to each
node $v$. For instance, one could set
$\mathcal{H}_{v}=\\{\mathcal{V}_{v},\mathcal{E}_{v}\\}$ as the subgraph having
the nearest neighbours of $v$ as nodes, or focus on the single node
$\mathcal{V}_{v}=v$ and its incident edges $\mathcal{E}_{v}$ thus recasting to
the notion of (possible weighted) degree. In fact, consistently with the
previous one, a wider set of candidates to quantify local density is
represented by measures of connectivity or measures of centrality, which
evaluate, somehow, the role as well as the prominence of each actor in a
network. It is worthwhile to observe that the choice of a node-wise density
measure is not inconsequential with regard to the subsequent interpretation of
clusters, and different choices would entail a different concept of cluster.
For example, the notion of degree accounts for the rate of the activity of
individual nodes in the network, so that high-degree actors act as “hubs” and
play a central role in the overall connectivity. Alternatively, by measuring
the proportion of times a node works as a broker connecting nodes otherwise
disconnected in the network, betweenness evaluates the influence of the actors
with respect to the information flow in the network. Despite in the following
we adopt well-known node centrality measures only, any function defined on the
node set $\mathcal{V}$ or alternative node-wise measures that allow to
quantify the role and/or position of each node in the network can be used.
This allows our procedure to be more flexible than other methods based on
optimisation of a given node (or edge) function.
While, in general, the above-mentioned measures do not sum to one, as would be
required of a density function, they can easily be normalised to this purpose;
for the subsequent developments, however, this is not strictly necessary.
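As an illustration, the following R sketch (based on the igraph package; the
toy network and object names are ours, not part of the method's specification)
computes three such candidate measures.

```r
# Candidate node-wise density measures delta on a toy network, with igraph.
library(igraph)

g <- sample_pa(50, power = 1, directed = FALSE)  # hypothetical toy network

delta_degree      <- degree(g)       # activity: number of ties per actor
delta_betweenness <- betweenness(g)  # brokerage: presence on shortest paths

# Local density: edge density of the subgraph induced by each node and its
# nearest neighbours (geodesic distance one).
delta_local <- sapply(seq_len(vcount(g)), function(v) {
  nb <- c(v, as.integer(neighbors(g, v)))
  edge_density(induced_subgraph(g, nb))
})
```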
#### 2.2.2 Clustering of unweighted networks
Consider a binary network $\mathcal{G}=\\{\mathcal{V},\mathcal{E}\\}$, where
$\mathcal{E}=\\{e_{ij}\\}$ and $e_{ij}\in\\{0,1\\}.$ To perform clustering, we
select a node-wise measure of density
$\delta:\mathcal{V}\mapsto\mathbb{R}^{+}\cup\\{0\\}$ as discussed in the
previous section. Afterwards, we may proceed to cluster the nodes according to
the modal formulation illustrated in Section 2.1, i.e. actors are clustered
together when they have density above the examined threshold and they are
connected. With respect to the nonrelational framework above, we further
benefit from the fact that the connected components of the high-density level
sets may be identified as the connected components of the induced subgraphs,
namely the maximal sets of nodes such that each pair of nodes is connected by a
path. An operational route is represented by the following scheme:
1. 1.
Compute the density of the relationships of each actor:
$\delta(v_{1}),\dots,\delta(v_{i}),$ $\dots,\delta(v_{n})$. Clusters will be
formed around the modal actors, namely actors with the densest relationship
patterns.
2. 2.
For $0<\lambda<\max_{i}\delta(v_{i}):$
* •
Determine the upper level set
$\mathcal{V}_{\lambda}=\\{v_{i}\in\mathcal{V}:\delta(v_{i})\geq\lambda\\},$
* •
Build the subgraph
$\mathcal{G}_{\lambda}=(\mathcal{V}_{\lambda},\mathcal{E}_{\lambda})\subset\mathcal{G}$
where $\mathcal{E}_{\lambda}=\\{e_{ij}(\lambda)\\}$ and
$e_{ij}(\lambda)=\left\\{\begin{array}[]{ll}e_{ij}&\mbox{if
}(v_{i},v_{j}\in\mathcal{V}_{\lambda})\\\
0&\mbox{otherwise}\end{array}\right.$
* •
Find the connected components of $\mathcal{G}_{\lambda}$.
3. 3.
Build the cluster tree by associating each level $\lambda$ to the number of
connected components of $\mathcal{G}_{\lambda}$.
4. 4.
Identify all the lowest $\lambda$ for which the branches of the tree represent
the leaves, and form the cluster cores as the connected components of the
different associated $\mathcal{G}_{\lambda}$.
Essentially, at each threshold $\lambda$ we evaluate the connected components
of $\mathcal{G}_{\lambda}$, the subgraph formed by the nodes with density
above $\lambda$ and only the connections between them. The scheme usually
leaves unallocated a number of actors with low-density patterns, when they do
not univocally pertain to a modal actor. Depending on the aim of clustering
and on subject-matter considerations, some or all of them may be either left
unallocated or assigned to the cluster for which they present the highest
density $\delta(\cdot)$.
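A minimal R sketch of steps 1-4 follows; this is not the DeCoDe
implementation, and it assumes a graph g together with a node-wise density
vector delta (for instance, the hypothetical delta_degree of the earlier
sketch).

```r
# Sketch of the unweighted scheme: connected components of the induced
# subgraph G_lambda, scanned over all observed density values.
library(igraph)

modal_clusters <- function(g, delta) {
  lambdas <- sort(unique(delta))
  lapply(lambdas, function(lambda) {
    V_lambda <- which(delta >= lambda)          # upper level set
    g_lambda <- induced_subgraph(g, V_lambda)   # subgraph G_lambda
    comp <- components(g_lambda)                # its connected components
    split(V_lambda, comp$membership)            # members of each component
  })
}

tree <- modal_clusters(g, delta_degree)
sapply(tree, length)   # cluster-tree profile: components at each lambda
```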
The described way of proceeding entails the early identification of clusters
as formed by actors with the highest density, i.e. the leaders of the
community, and the subsequent aggregation to the formed clusters of actors
with a less prominent role. In this sense, and consistently with the non-
relational version of modal clustering, the final clusters are then described
by the domains of attraction of the community leaders.
#### 2.2.3 Clustering of weighted networks
Let us now consider a weighted network
$\mathcal{G}=\\{\mathcal{V},\mathcal{E}\\}$, where $\mathcal{E}=\\{e_{ij}\\}$
and $e_{ij}\in\mathbb{R}^{+}\cup\\{0\\}$, _i.e._ the link weight is
proportional to the strength of the relationship between the two incident
nodes and it is set to zero when the two nodes are not linked.
As a first natural ploy to account for real-valued edges, we consider density
measures for weighted networks. Indeed, the generalisation of these measures
to weighted networks has been historically a somewhat controversial matter
which cannot be tackled without considering the nature of the data, the goal
of the analysis, and subject-matter knowledge. However, for most of the
mentioned candidate measures $\delta$, there exists a reasonable weighted
counterpart. The degree, for instance, is easily extended to measure
centrality in weighted networks by summing up the weights incident with each
node. This allows an actor to be considered prominent not only when he has
many connections, but also when the strength of these connections is large. We refer
the reader to the existing literature for a discussion about the specification
of descriptive measures for weighted networks (Opsahl et al., 2010).
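In igraph, for instance, the weighted degree is available directly as the node
strength (a one-line sketch, assuming the weights of a graph g are stored in
the standard weight edge attribute):

```r
# Node strength: sum of the weights of the edges incident to each node;
# strength() uses the "weight" edge attribute by default when present.
library(igraph)
delta_strength <- strength(g)
```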
In the presence of relationships of different strengths, we need to further
adjust the presented procedure. Indeed, a possibly weak connection between two
high-density actors does not appear to be a sufficient condition for them to be
clustered. Thus, we account for the weights on the basis of the following
simple idea: two actors are clustered together when they have density above
the examined threshold and they are _strongly_ connected. Actors presenting a
weak relationship with their neighbours are merged into the same cluster at a
lower level of density. Here, the strength of the connection is intended as
relative to the set of connections of each node. While this is consistent with
the natural idea that prominent actors exercise more influence over their
strong connections and less influence over their weak connections, its
implementation may take various forms. The following scheme provides two
possible operational routes:
1. 1.
Compute the density of each actor, $\delta(v_{1}),\dots,\delta(v_{i}),$
$\dots,\delta(v_{n})$, with $\delta$ an appropriate measure of node-wise
density accounting for the weights of the edges;
2. 2.
For each node $v_{i},i=1,\ldots,n$, identify the incident edge with maximum
weight $e_{im}=\underset{j:\mbox{ }e_{ij}\in\mathcal{E}}{\max}{e_{ij}};$
3. 3.
For $0<\lambda<\max_{i}\delta(v_{i}):$
* •
Determine the upper level set
$\mathcal{V}(\lambda)=\\{v_{i}\in\mathcal{V}:\delta(v_{i})\geq\lambda\\}$
* •
Build the subgraph
$\mathcal{G}_{\lambda}=\\{\mathcal{V}_{\lambda},\mathcal{E}_{\lambda}\\}$,
where $\mathcal{E}_{\lambda}=\\{e_{ij}(\lambda)\\}$ and $e_{ij}(\lambda)$ can
be defined according to the two alternative options, denoted by ‘AND’ and ‘OR’
respectively.
option AND
$e_{ij}(\lambda)=\left\\{\begin{array}[]{ll}e_{ij}&\mbox{if
}(v_{i},v_{j}\in\mathcal{V}_{\lambda})\cap((e_{im}=e_{ij})\cap(e_{jm}=e_{ij}))\\\
0&\mbox{otherwise}\end{array}\right.$
option OR
$e_{ij}(\lambda)=\left\\{\begin{array}[]{ll}e_{ij}&\mbox{if
}(v_{i},v_{j}\in\mathcal{V}_{\lambda})\cap((e_{im}=e_{ij})\cup(e_{jm}=e_{ij}))\\\
0&\mbox{otherwise}\end{array}\right.$
* •
find the connected components of $\mathcal{G}_{\lambda}$
* •
update $e_{im}=\underset{j:\mbox{
}e_{ij}\in\mathcal{E}\setminus\mathcal{E}_{\lambda}}{\max}{e_{ij}};$
4. 4.
Find the connected components of $\mathcal{G}_{\lambda}$.
5. 5.
Build the cluster tree by associating each level $\lambda$ to the number of
connected components of $\mathcal{G}_{\lambda}$.
6. 6.
Identify all the lowest $\lambda$ for which the branches of the tree represent
the leaves, and form the cluster cores as connected components of the
different associated $\mathcal{G}_{\lambda}$.
Essentially, at each $\lambda$, we identify the connected components of
$\mathcal{G}_{\lambda}$ which are formed by the nodes with density above
$\lambda$. According to “option AND” the additional condition for aggregation
is that these nodes represent their reciprocal strongest connection among
those not examined yet; conversely, according to “option OR” the condition is
loosened by requiring that such a connection is the strongest for just one of the
actors. The two options, albeit not exhaustive, correspond to different ways
of disentangling network complexity and defining the underlying network group
structure. With the tight AND option, aggregation occurs less readily, hence
leading to a large number of highly homogeneous clusters. The resulting
partition is mostly driven by the importance of the relations among nodes
rather than by their relative importance within the whole network. According
to the “OR option”, where the aggregation condition is more frequently
satisfied, more parsimonious partitions are created, with clusters mostly
driven by the attraction exerted by the high-density nodes, namely the leaders,
on the lower-density ones. Note that this way of proceeding does not guarantee
that all the weights are scanned while scanning the density values, i.e. at
the lowest considered $\lambda$, the weakest connections between some pairs of
actors might not be accounted for. Since in practice these connections are
negligible, being by construction the weakest ones, we simply circumvent this
problem by identifying, at the end of the density scan, the connected
components of the network while disregarding the weights of the connections.
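The following R sketch isolates the AND/OR edge-retention rule at a single
threshold $\lambda$; it is not the DeCoDe implementation, and in particular
the running update of $e_{im}$ across thresholds prescribed by the scheme is
omitted for brevity.

```r
# AND/OR edge retention at one threshold lambda, for a weighted igraph
# graph g (weights in E(g)$weight) and a node-wise density vector delta.
library(igraph)

filter_edges <- function(g, delta, lambda, rule = c("AND", "OR")) {
  rule <- match.arg(rule)
  # e_im: weight of the strongest edge incident to each node
  w_max <- sapply(seq_len(vcount(g)), function(v) {
    inc <- as.integer(incident(g, v))
    if (length(inc) == 0) 0 else max(E(g)$weight[inc])
  })
  keep <- sapply(seq_len(ecount(g)), function(e) {
    ij <- ends(g, e, names = FALSE)             # the two incident nodes
    w  <- E(g)$weight[e]
    both_dense <- delta[ij[1]] >= lambda && delta[ij[2]] >= lambda
    strongest  <- if (rule == "AND")
      w == w_max[ij[1]] && w == w_max[ij[2]]    # strongest for both nodes
    else
      w == w_max[ij[1]] || w == w_max[ij[2]]    # strongest for at least one
    both_dense && strongest
  })
  subgraph.edges(g, E(g)[keep], delete.vertices = FALSE)
}

# components(filter_edges(g, delta, lambda, "OR")) then yields the
# level-lambda clusters under the looser aggregation condition.
```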
The clustering procedure may entail the formation of singleton clusters:
suppose that three connected nodes $u,v,$ and $z$ all have density above a
given $\lambda$, but while the strongest relationship of $u$ is with $v$, the
strongest relationship of $v$ is with $z$ and _vice versa_. Then, with the AND
option, $v$ and $z$ will fall in the same cluster while $u$ will form a
singleton cluster, which will be aggregated to the other at a lower $\lambda$.
Unallocated actors are finally assigned to the cluster core for which they
present the highest density, as in the unweighted setting.
## 3 Empirical analysis
### 3.1 Aims and implementation details
The current section aims to illustrate the aggregation mechanism at the basis
of the proposed method for different community configurations, also with
respect to the selected node-wise measure. We consider as density measures
three alternative indexes of centrality designed to catch different roles and
community configurations within a network: degree centrality evaluates the
actor importance in terms of number of relationships with other members of the
community; betweenness centrality, on the other hand, by counting the number
of times actors work as bridges to connect other members, evaluates their
strategic role in terms of brokerage; finally, local density, by shifting the
focus from single actors to their nearest neighbourhood (i.e., nodes at
geodesic distance equal to one from the focal actor), relaxes the focus on
centralized groups and identifies shared leaderships. The considered measures
have been consistently adjusted for their use in weighted networks.
For the sake of comparison, the considered examples are also run with a few
competing community detection methods, mostly selected
because of their wide popularity: the Girvan-Newman (GN) method and its
extension to weighted networks, the Louvain method and Stochastic Block Models
(SBM).
All the analyses are run in the R computing environment (R Core Team, 2020)
with the aid of the libraries igraph (Csardi and Nepusz, 2006), sna (Butts,
2020), and sbm (Chiquet et al., 2020). The proposed method has been
implemented within the DeCoDe package (Density-based Community Detection),
available on the author's webpage
(https://homes.stat.unipd.it/giovannamenardi/content/software).
### 3.2 A simple illustrative example
For the sake of illustration, we consider as a first example of our empirical
analysis some unweighted archetypal networks where the community structure is
determined by the presence of high-density nodes.
The simple network displayed in the top row of Figure 2 highlights 4 hubs
standing out among 28 actors, labelled as $5,8,15$, and $22$. Each of the four
hubs drives the information flow from and towards six actors having a less
prominent role. Density-based clustering built on degree centrality reflects
the hub dominance by identifying 4 clusters headed by the leaders (Figure 2
a1). In fact, if the leaders were connected (middle panel of the figure,
where a tie links actors $1$ and $8$), the clustering configuration would
change accordingly, and a single group would be formed by all the followers of
the leader dyad (Figure 2 b1). In the absence of hubs (bottom row of Figure 2,
where actors $5,8,15,22$ have been removed from the network), density-based
clustering built on the degree fails to identify groups (Figure 2 c1), which
are better identified by alternative node-wise measures accounting for a
decentralized leadership. By considering, for example, the local density based
on the nearest neighbourhood, modal clustering detects four clusters in all
three versions of the network (second column of Figure 2). Conversely, if the
analysis focuses on the strategic role of the actors, the leadership is rather
held by actors $12$ and $25$, acting as brokers that connect nodes otherwise
disconnected in the network. With this changed aim in mind, a different
structure characterizes the network, as the whole community is compact around
the leaders. Consistently, density-based clustering built on betweenness
detects just one cluster in all three versions of the network, led by the
connected brokers (third column of Figure 2).
The cluster trees provide further information by identifying the hierarchy of
the communities. Thus, in the four-cluster configurations the more central
communities aggregate first, whereas in the three-cluster configuration the
first merge occurs between the largest cluster and the closest one (bottom
panel of Figure 2).
Compared to density-based clustering, GN and the Louvain method, mostly driven
by the idea of modularity within a community, identify 4 clusters in all
variants of the network, thus behaving like modal clustering with local
density (Figure 2, fourth and fifth columns). SBM identifies an optimal
partition in two clusters in the presence of hubs, and one cluster only in the
absence of hubs (Figure 2 a6, b6, and c6 respectively).
Figure 2: Each row shows a slightly modified version of the same toy network:
in the first row, the four hubs are not directly linked; in the second row,
two of them are connected by a link; in the third row, the hubs have been
removed. Each column shows the clustering produced by the density-based
procedure built on the different measures, and by GN, Louvain, and SBM.
Clusters are marked with different colors. In the first three columns, actor
size is proportional to density. In the bottom panel, the cluster trees
associated with the density-based partitions into 4 and 3 clusters.
### 3.3 Benchmark examples
As a second step of the empirical analysis, we explore the behaviour of our
method on some popular real datasets where a ground-truth community membership
is assigned. The choice of evaluating results in terms of a true labelling,
rather common in clustering, is motivated by our intent not to be biased
towards specific community configurations. On the other hand, it is worth
highlighting that the possible identification of community structures diverse
from the defined true labels would not necessarily imply a failure of the
applied clustering method. Such a result would just reflect that the true
clusters have a configuration different from the one that each method is
designed to detect.
The agreement between the true and the detected membership has been measured
in terms of normalised mutual information (NMI, Danon et al., 2005), which
increases with the quality of the partition and attains its maximum value of
1 for perfect agreement.
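For illustration, NMI behaves as follows on toy labellings (a sketch assuming scikit-learn; the paper follows Danon et al., 2005):

```python
# NMI equals 1 for identical partitions up to relabelling, and approaches 0
# as the detected membership becomes unrelated to the true one.
from sklearn.metrics import normalized_mutual_info_score

truth = [0, 0, 0, 1, 1, 1]
print(normalized_mutual_info_score(truth, [1, 1, 1, 0, 0, 0]))  # 1.0
print(normalized_mutual_info_score(truth, [0, 1, 0, 1, 0, 1]))  # about 0.08
```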
##### Zachary Karate Club network
The well-known Karate Club data (Zachary, 1977) describe the network of
friendships between 34 members of a karate club at a US university in the
1970s. The network is in principle weighted, with the strength of connections
given by the number of common activities of the club members. In fact, we run
the empirical analysis both on the weighted network and on its binary version,
built by neglecting the strength of connections. Due to a dispute between the
administrator ‘John A’ and the instructor ‘Mr Hi’, the club split into two
factions, here representing the benchmark membership. The two factions are
then built around the leadership of John A and Mr Hi, who play a special
role in terms of both direct influence on the club members and influence on
the information flow to and from the actors.
binary network
---
G-N | Louvain | SBM | Density-based clustering
 | | | degree | loc. density | betw.
0.58 | 0.59 | 0.01 | 1 | 0.36 | 1
weighted network
---
G-N | Louvain | SBM | Density-based clustering
 | | | OR option | AND option
 | | | degree | loc. density | betw. | degree | loc. density | betw.
0.56 | 0.69 | 0.01 | 1 | 0.44 | 0.85 | 0.61 | 0.36 | 0.42
Figure 3: Zachary Karate Club network with true communities marked with
different colours. Below, NMI results of different community detection
methods.
In agreement with these considerations, in the binary setting, an essentially
perfect agreement is found between the two factions and the density-based
partitions detected with both degree and betweenness as node-wise measures.
Conversely, since local density accounts for the maximum number of ties each
actor can set in its neighbourhood, it ends up weakening the leaders of
star-shaped communities, thus proving inadequate as a node-wise measure for
recovering the true factions. A somewhat better performance than local
density arises from the application of both the Louvain and GN methods, while
SBM cannot reconstruct the benchmark factions. Note, however, that none of
the competitors is designed to detect hub-headed communities. See Figure 3.
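The competitor baselines on the binary network are easy to reproduce in spirit. The sketch below (our reproduction, not the paper's R pipeline, so figures need not match the table exactly) runs Louvain on networkx's built-in Karate Club graph, whose node attribute `club` stores the true factions.

```python
# Sketch of one competitor baseline on the binary Karate Club network.
import networkx as nx
from networkx.algorithms.community import louvain_communities
from sklearn.metrics import normalized_mutual_info_score

G = nx.karate_club_graph()
truth = [G.nodes[v]["club"] for v in G]  # "Mr. Hi" vs "Officer"

parts = louvain_communities(G, weight=None, seed=0)  # weight=None: binary run
detected = [next(i for i, c in enumerate(parts) if v in c) for v in G]
print(round(normalized_mutual_info_score(truth, detected), 2))
```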
When considering the weighted network with the OR option, the two factions are
again perfectly recovered with the degree used as node-wise measure.
Betweenness does a remarkable job overall as well, although it identifies
three communities instead of two. Accounting for the link weights, in fact,
allows us to distinguish a new leader beyond Mr Hi and John A, namely actor
32, with a highly prominent bridging role. The inadequacy of local density for
finding clusters arising from a leadership is confirmed in the weighted
setting as well. The AND option gives rise, by construction, to a larger
number of homogeneous clusters, with the highest-density ones still led by
John A and Mr Hi. For this reason the NMI attains lower values. Regardless of
the employed density, the presence of the more peripheral actors is generally
enhanced, as with the AND option individual connections, rather than the
leaders' influence, are accounted for in cluster formation. In fact, despite
the true cluster membership being driven by the forced choice of each actor
to line up with one of the leaders, the data show that the relationships among
the peripheral actors are generally stronger than the ones they have with the
leaders.
The Louvain method produces improved results with respect to the binary case,
while SBM and GN stand at about the same level as in their binary counterparts.
##### Les Misérables character network
binary network
---
G-N | Louvain | SBM | Density-based clustering
 | | | degree | loc. density | betw.
0.76 | 0.63 | 0.54 | 0 | 0.76 | 0
weighted network
---
G-N | Louvain | SBM | Density-based clustering
 | | | OR option | AND option
 | | | degree | loc. density | betw. | degree | loc. density | betw.
0.35 | 0.63 | 0.59 | 0.48 | 0.43 | 0.61 | 0.76 | 0.78 | 0.78
Figure 4: Les Misérables character network. Cf. Figure 3.
This popular network describes the interactions between 77 characters of
Victor Hugo's novel Les Misérables (Knuth, 1993). The network is in principle
weighted, with edge strength set to the number of co-appearances of characters
in one or more scenes of the novel. As in the previous example, we also
analyse its binary version. With the aim of an objective evaluation, we pursue
the assignment of a ground-truth membership by associating each character with
the book of his/her first appearance. This eventually results in 20 small
communities having an assorted attachment mechanism, with some communities
formed around the more relevant characters and other, more cohesive
communities (Figure 4). The partition provides an overall fair summary of the
novel's plot, yet we must account for some limitations. Beyond three ambiguous
references to unnamed actors, the cluster membership of a few main characters
would rather have an overlapping nature. Hence, the evaluation of results
requires some further insight beyond the mere observation of the NMI values.
Neglecting the strength of relationships has little impact on the minor
characters, who generally claim a small number of weak interactions. Thus, in
the binary version of the network, cohesive communities are easily detected
anyway, whereby GN, Louvain, and modal clustering based on local density stand
out from the other methods at high values of accuracy. Conversely, the loss of
information on the weights affects the classification of the main characters,
who are all connected to each other, yet to a different extent which pinpoints
their role. Hence, modal clustering with degree and betweenness identifies in
the binary network just one community, built around the protagonist Jean
Valjean.
When the weight strength is accounted for, modal clustering works remarkably
well with the AND option, which tends to inflate the segmentation and
highlights minor groups. The OR option underperforms the AND version when
compared with the true labels, yet its results are anyway highly
interpretable. The use of both degree and betweenness gives rise to 6
clusters. In the former case most communities are built around one main
character, whereas when betweenness centrality kicks in, its values being
larger for those characters having a protagonist role in multiple books, modal
clustering is able to isolate all the main characters in just one group (that
is, the “main plot” cluster), together with 5 other smaller-sized groups of
actors whose stories are standalone within the whole plot.
##### US politics books co-purchasing network
G-N | Louvain | SBM | Density-based clustering
---|---|---|---
| | | degree | loc. density | betw.
0.56 | 0.51 | 0.45 | 0.60 | 0.31 | 0.07
Figure 5: US politics books co-purchasing network. Cf. Figure 3.
As a further example of star-shaped communities in networks, the US politics
books co-purchasing data (http://www.orgnet.com/) include 105 books about US
politics published around the 2004 presidential election and sold online at
Amazon.com. The 441 ties between them represent co-purchasing of books by the
same buyers. Community membership is given by the book's political alignment:
liberal, neutral, or conservative. Within communities there exists a slightly
centralized organization of links, especially among the liberal and
conservative orientations, with bestsellers representing high-density nodes,
often bought in a bundle with a variety of less popular books.
Results (Figure 5) reflect this behaviour, as the density-based partition
built on degree centrality outperforms both the other centrality measures and
the competitors. While the latter tend to oversegment the network, yet
achieving acceptable results, the other centrality measures prove
inappropriate to describe the community configuration. Without exception, the
methods are not able to identify the least characterized neutral books.
##### Email-EU-core network
G-N | Louvain | SBM | Density-based clustering
---|---|---|---
| | | degree | loc. density | betw.
0.56 | 0.53 | - | 0.26 | 0.58 | 0.26
Figure 6: Email-EU-core network. Cf. Figure 3.
The Email-EU-core network (Leskovec et al., 2007; Yin et al., 2017) describes
the email exchange between the members of 42 departments of a European
research institution. The network is regarded as undirected by setting an
edge whenever there has been at least one outgoing or incoming email between
two members. True clusters are the departments of affiliation. The
distribution of actors among departments is rather unbalanced, ranging from 1
to 107 individuals. Since the network includes a few isolated nodes, we focus
on the giant component only, consisting of 986 individuals (98% of the total)
connected by 25552 ties (99.9%).
The network is far more complex than the ones examined above. Although
difficult to inspect, Figure 6 shows that the community configuration is
hardly captured by the link description. Research collaborations, indeed, are
possibly conducted by email as well, and are likely not limited to the members
of a single department. Additionally, there is little evidence of an
attachment mechanism guided by the presence of individuals prominent in terms
of their degree; conversely, also due to the unavailability of weights, it is
reasonable to expect quite a homogeneous distribution of links within each
department, and possibly clusters not built around some leaders.
Results confirm the expectations, as local density is the only centrality
measure able to catch the gross community structure via density-based
clustering. The GN and Louvain methods stand at about the same level of
classification accuracy. The application of SBM is computationally unfeasible
on this network, due to an inner limitation of the R routines included in the
package sbm, which require the joint estimation of models for every number of
communities and the subsequent selection of the best one. Hence, networks with
a large number of clusters, as in this case, run into a memory error.
##### American college football network
The American college football network, described by Girvan and Newman (2002),
represents the schedule of Division I American football games for the 2000
season. Nodes represent teams and ties between two teams represent regular-
season games they dispute. The 115 teams are divided into 12 conferences,
representing the benchmark community memberships. In most conferences, inner
games are more frequent than games with external teams, with an average of
about seven intraconference games and four interconference games in the
reference season (Girvan and Newman, 2002, p.7824). The example is here
explored to show the inadequacy of density-based community detection in the
lack of leadership. Games configuration, indeed, leads to a grid-like
organization of links within communities. In this situation the competing
methods are able to recover accurately the community structure, while our
proposal fails by setting either degree or betwenness as node-wise measure.
Local density in this case, allows just for a slight improvement. See Figure 7
for details.
GN | Louvain | SBM | Density-based clustering
---|---|---|---
| | | degree | loc.dens. | betw.
0.88 | 0.89 | 0.89 | 0.33 | 0.55 | 0.13
Figure 7: American college football network. Cf. Figure 3.
### 3.4 Finding clusters within the community of Italian academic
statisticians
The aim of the case study considered here is to characterise the scientific
community of the Italian academic scholars in Statistics and related fields,
via the identification of the clusters formed on the basis of the
relationships between them, possibly of different nature and strength, and of
the leading aggregation mechanism. This can be useful, for instance, for the
creation of new projects and synergies or, more generally, to understand who
are, within the community, the leading actors with respect to specific topics.
The main hypothesis underlying the data collection is that, to characterise a
researcher within the community, we broadly answer the questions: _Where does
he/she work? What is his/her macro-area of research? Who does he/she work
with? What does he/she work on?_ As a consequence, we have built a weighted
network having in principle a multiplex structure, divided into four layers
associated with the questions above: (1) affiliation adjacency matrix (AFF):
two actors are connected when they share the same university department
affiliation; (2) macro-sector adjacency matrix (MS): two actors are connected
when they belong to the same macro-sector within the area of Statistics and
related fields, as defined by the Italian Ministry of Education, Universities,
and Research, MIUR (statistics; economic statistics; demography and social
statistics; mathematical methods for economy, actuarial and financial
sciences); (3) co-authorship network (PUBS): two actors are connected with a
link weighted as the number of publications they co-authored; (4) common
keywords adjacency matrix (KW): two actors are connected with a link weighted
as the number of common keywords in their publications.
Data have been collected in November 2019 and refer to 1160 professors and
researchers of the academic community of statisticians, as recorded in the
MIUR database (http://cercauniversita.cineca.it), from which information about
the university affiliation and the scientific macro-sector has been drawn.
Information about the publications and the keywords has been extracted from
the ISI-WoS database (https://apps.webofknowledge.com). Handling the latter
has been troublesome, due to the awkward operation of author matching,
especially in cases of homonymy or when a researcher changed affiliation at
some point and the WoS database does not record it. In fact, we shall live
with the likely, hopefully not relevant, distortion in the assessment of both
the publications and the inherent keywords.
A summarising description of the single layers is provided in Table 1. All
networks at the individual layers are composed of the 1160 nodes representing
the members of the scientific community under study. Given the exclusivity of
the affiliation, the associated network is composed of as many components as
the number of observed university departments (namely, 194), within which
every actor is connected with all the other actors. The number of researchers
within departments is quite heterogeneous, ranging from 1 to 54. A similar
behaviour is observed in the network associated with the macro-sector, where
each actor is connected with all other researchers in the same macro-sector.
The number of connected components in this layer is equal to the number of
considered macro-sectors, and these components have diverse sizes (both the
statistics and the mathematical methods for economy, actuarial and financial
sciences areas count more than 400 researchers, while each of the two further
sectors counts about 150 researchers). The co-authorship layer represents an
updated, enriched version of one of the databases employed by De Stefano et
al. (2013). We observe 255 isolated researchers, either because they have not
published in ISI journals, or because their publications have never been co-
authored by any other Italian academic statistician currently on the MIUR
list. Their publications have in any case been considered to extract the
keywords for the fourth layer of the network, where the number of isolated
researchers reduces to 109.
Table 1: Italian academic statisticians network: descriptive statistics for the individual layer networks (AFF - department affiliation, MS - macro-sector, PUBS - co-authorship network, KW - common keywords) and the overall weighted network.
 | AFF | MS | PUBS | KW | Overall
---|---|---|---|---|---
# of isolated nodes | 67 | 0 | 255 | 109 | 0
# of components | 194 | 4 | 292 | 110 | 1
Network density | 0.014 | 0.308 | 0.002 | 0.523 | 0.734
Global transitivity | 1 | 1 | 0.306 | 0.820 | 0.833
Degree centralization | 0.032 | 0.082 | 0.016 | 0.346 | 0.240
In order to aggregate the four layers into a single weighted network, we have
first normalised the edge weights, which are measured on different scales
depending on the represented relationship. In principle, there are many
procedures to choose among for the purpose. We opt for the simple idea of
dividing each weight by the sum of the weights within the layer. Then,
stemming from the four normalised networks, we have built the associated
_overlapping_ network (Battiston et al., 2014) by simply summing up the edge
weights associated with the same pair of actors across the different layers.
The overall network is relatively dense and cohesive, with no isolates, since
all nodes belong to the unique component (see Table 1). The strength of the
links in the overall network is largely governed by the sparsest co-authorship
layer because of the adopted weighting system.
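A minimal sketch of this normalise-and-overlay step follows; the two toy layers stand in for the four collected ones.

```python
# Each weight is divided by its within-layer sum, then edge weights are
# summed across layers to form the overlapping network. Toy values only.
layers = {
    "AFF":  {("a", "b"): 1, ("b", "c"): 1},
    "PUBS": {("a", "b"): 3, ("a", "c"): 1},
}

overlap = {}
for layer in layers.values():
    total = sum(layer.values())
    for edge, w in layer.items():
        overlap[edge] = overlap.get(edge, 0.0) + w / total

print(overlap)  # {('a','b'): 1.25, ('b','c'): 0.5, ('a','c'): 0.25}
```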
Among the community detection methods, we have been able to run the Louvain
method only, whereas the application of both GN and SBM has turned out to be
computationally unfeasible. It is worth reporting, however, that we ran SBM up
to the maximum number of communities allowed by our computational resources,
i.e. 64. The detected partition is unarguably suboptimal, with one community
gathering about $2/3$ of the actors, although the remaining 63 clusters do not
differ much from the ones obtained with our procedure. The Louvain algorithm
identifies $23$ communities, of size ranging from 13 to 102 researchers.
Cluster homogeneity with respect to the scientific macro-sector and the
affiliation has been evaluated via the complement to one of the Gini index. As
for the publications, for each researcher the proportion of works coauthored
by members of the same cluster has been evaluated, and the cluster average
used as a summarising measure of cluster homogeneity. The same index has been
computed for the keywords. To look at the assortative mixing within the
detected communities, and to find whether actors within clusters tend to
exhibit dense connections among themselves rather than with actors in
different clusters, the modularity of the clusters has also been evaluated.
Results are summarized in Figure 8. Due to the large size of the detected
clusters, clusters are somewhat homogeneous with respect to the scientific
sector and the affiliation only, while communities are scarcely associated
with co-authorship and research topics. While, by construction, the modularity
of the Louvain-based partition is maximised, it does not show a remarkably
high value. In this respect, it is worth noting that even if the detected
partition corresponds to the global maximum of the modularity, in scientific
applications this solution is not guaranteed to be more meaningful than the
ones obtained by local maxima (Good et al., 2010). Furthermore, results are
affected by the so-called resolution limit, for which small, plausible
communities cannot be identified if the network is large, and heterogeneous
clusters tend to be formed (Fortunato and Barthélemy, 2007).
Modal clustering has been run building on the degree of the actors, as it
appears to be the most sensible and most easily interpretable choice in such a
complex application. Both the OR and the AND options have been run.
Summarising results in terms of cluster size, modularity, and homogeneity with
respect to the considered relationships are reported in Figure 8. The OR
option identifies 139 groups of size ranging from 4 to 49 scholars. Some
heterogeneity with respect to the considered relationships is unavoidable, but
clusters are far more homogeneous than those identified by the Louvain method.
In fact, while actors working either together or on similar topics tend to be
aggregated into the same cluster, the same membership is often shared by other
researchers. In this case, cluster aggregation is mostly driven by the
attraction exerted by a few leaders on minor actors, which often exhibit quite
diverse characteristics. Conversely, the AND option gives rise to a very
sensible partition, counting 499 clusters, overall a realistic value in the
overview of the statistical community where, excluding applied
interdisciplinary scholars, researchers tend to work within very small teams,
and publications are generally co-authored by two or three researchers at
most. The majority of clusters are fully homogeneous with respect to the
scientific macro-sector and affiliation, and gather researchers who are known
to belong to the same research group. Note that this result is largely
acknowledged to occur in social contexts, where social groups are limited in
size even if social actors are embedded in relatively large networks (Dunbar,
1992).
 | Louvain | Density-based (OR option) | Density-based (AND option)
---|---|---|---
size | 45.69 (22.90) | 7.56 (7.97) | 2.11 (1.42)
modularity | 0.42 | 0.24 | 0.18
Figure 8: Italian community of statisticians. Top panel: average size (and
standard deviation) of the clusters found via the Louvain method and via both
the OR/AND options of the density-based method, together with modularity. The
boxplots display the homogeneity of actors across clusters with respect to the
considered relationships.
Figure 9: Cluster tree of the Italian academic statisticians (rotated for
better readability, with high density levels at the right side). A closer view
of the highlighted area is provided in Figure 10.
However, the clustering is not to be interpreted solely in terms of final
group membership: a not too different partition could be trivially obtained by
aggregating pairs of maximally connected actors. In fact, the generation
process of collaboration among researchers is quite peculiar: there may be
solitary researchers, sparsely collaborating with other subjects; also, there
are researchers who mostly focus on a specific research topic, but also
collaborate with different groups of people on a variety of different areas.
Either keeping these researchers separate or merging them into the same group
may be a stretch. To this aim, a relevant interpretation derives from the
exploration of the cluster tree, where clusters are subsequently aggregated at
lower levels of the hierarchy to form larger clusters with a lower resolution
(Figure 9). For the sake of interpretation, one of its branches, including 17
clusters and a total of 30 researchers, is detailed in Figure 10, along with
the associated subnetwork highlighting cluster aggregations at the different
levels of the cluster tree. The forming leaves of the branch mostly include
either researchers affiliated with the Department of Statistical Sciences at
the University of Padova, or scholars who have spent part of their academic
career at that Department. Actor aggregation in clusters mostly relies on the
strength of connections, hence it is led by co-authorship, which weighs most
in the overlapping network. At a lower level of the tree, cluster merging is
driven by research topics, with the largest branch on the left associated with
likelihood theory, and the other branches including scholars working on its
more applied developments. Links between branches derive from the eclecticism
of some of the researchers, who work on different research topics. The size of
the tree prevents an overall interpretation, but similar traits of homogeneity
can be easily identified by picking any branch of the tree. Of course, the
lower the level of aggregation of the branches, the lower the homogeneity of
the branch.
Figure 10: Detailed visualization of the subtree highlighted in Figure 9 and
of the associated subnetwork, with clusters marked in different colors and the
cluster aggregations at the different levels of the cluster tree superimposed.
Actor size is proportional to their density, and different shapes are
associated with different macro-sectors. Actor colour is associated with the
affiliation. Edge width is proportional to the number of common keywords and
coauthored publications.
## 4 Discussion
Due to the unsupervised nature of the problem, and to the further lack of a
ground truth against which to evaluate the quality of a partition, clustering
is an ill-posed task, which cannot be performed fully automatically, i.e.,
without some amount of human intervention and without regard for
subject-matter considerations.
The methodology presented here is no exception in the clustering panorama, as
it both required a few thorny choices during its design and still requires the
user to make some. A first choice concerns the density measure. The lack of a
probabilistic notion of density at the node-wise level implies the loss, for
network data, of the probabilistic framework of the original approach defined
for non-relational data. Hence, the proposed procedure cannot enjoy the
mathematical rigour of other well-known stochastic procedures. On the other
hand, there follows the opportunity to select the measure of density from a
wide set of candidates which quantify connectivity or centrality roles of the
actors. Different group structures arise according to the chosen density
measure, and those structures account for different aspects of subnetwork
cohesiveness. In fact, we believe that leaving this measure unspecified
represents a strength of the procedure. Depending on subject-matter
considerations, this provides the procedure with the flexibility to adapt to
different notions of clusters, each of them associated with a specific
selection of the density, consistently with the intrinsic ill-posedness of the
clustering problem.
A second choice concerns the way to handle relationships of different
strength. Unlike in the unweighted framework, there is no obvious way to
extend modal clustering in the presence of weighted links. Our strategy
aggregates strongly connected individuals at a higher density level than
weakly connected individuals. While this choice is consistent with the
considered aggregation mechanism, based on the most prominent actors exerting
influence over their neighbours, the actual implementation of this idea may
take various forms. The AND option aggregates two actors with density above a
threshold when they represent their reciprocal strongest connection among
those not examined yet. Alternatively, the condition may be loosened via the
OR option, by requiring that such a connection is the strongest for just one
of the actors. A further alternative route would consist in proceeding in a
block-sequential manner, aggregating several actors with density above the
threshold at a time, as long as their relationship has at least a given
strength. The possibility of choosing among these options in cluster formation
allows for looking at a given network structure at different granularities of
representation. As shown in the proposed applications, the OR mechanism tends
to summarise the network in a smaller number of internally densely connected
clusters with loose connections to other clusters. On the other hand, the AND
mechanism maximizes the internal homogeneity of clusters, detecting a larger
number of smaller groups. Here again, the choice of the mechanism to handle
weights depends on the purpose of the analysis. For instance, in the Karate
network, the choice of the OR option would reflect an interest in the big
picture after collapsing the relations in the community. Conversely, in the
Italian statisticians network, the choice of the AND option would entail
small-scale groups of actors and reflect the purpose of looking for cohesive
research clusters.
Although featuring these different options of analysis, the proposed
density-based procedure does not suffer from the arbitrariness issues which
are typical of standard clustering procedures. While the number of clusters is
determined within the procedure, the partitioning accounts for different
levels of cluster resolution via the group hierarchy provided by the cluster
tree. In this sense, the cluster tree represents a somewhat formal instrument
to emulate the human cognitive system, and allows one to get over the
resolution limit of modularity-based methods.
From a computational point of view, the algorithm requires $O((V+E)V)$
operations on a binary network, in addition to the ones needed to compute the
density, which depend on the selected node-wise measure. While the quadratic
growth discourages the use of the procedure on huge networks, we have not
experienced system crashes in any of the examples run in the manuscript, and
the procedure has proven its feasibility on networks having $V$ in the order
of the thousands. In our experience, detecting communities in networks of such
size is conversely precluded for popular competitors like Girvan-Newman and
SBM.
## References
* Aicher et al. (2015) Aicher, C., Jacobs, A. Z. and Clauset, A. (2015) Learning latent block structure in weighted networks. Journal of Complex Networks, 3, 221–248.
* Azaouzi et al. (2019) Azaouzi, M., Rhouma, D. and Romdhane, L. B. (2019) Community detection in large-scale social networks: state-of-the-art and future directions. Social Network Analysis and Mining, 9, 23.
* Battiston et al. (2014) Battiston, F., Nicosia, V. and Latora, V. (2014) Structural measures for multiplex networks. Physical Review E, 89, 032804.
* Blondel et al. (2008) Blondel, V. D., Guillaume, J.-L., Lambiotte, R. and Lefebvre, E. (2008) Fast unfolding of communities in large networks. Journal of Statistical Mechanics: Theory and Experiment, 2008, P10008.
* Butts (2020) Butts, C. T. (2020) sna: Tools for Social Network Analysis. URL: https://CRAN.R-project.org/package=sna. R package version 2.6.
* Carron and Brawley (2000) Carron, A. V. and Brawley, L. R. (2000) Cohesion: Conceptual and measurement issues. Small group research, 31, 89–106.
* Chen and Yuan (2006) Chen, J. and Yuan, B. (2006) Detecting functional modules in the yeast protein–protein interaction network. Bioinformatics, 22, 2283–2290.
* Chiquet et al. (2020) Chiquet, J., Donnet, S. and Barbillon, P. (2020) sbm: Stochastic Blockmodels. URL: https://CRAN.R-project.org/package=sbm. R package version 0.2.2.
* Csardi and Nepusz (2006) Csardi, G. and Nepusz, T. (2006) The igraph software package for complex network research. InterJournal, Complex Systems, 1695. URL: https://igraph.org.
* Danon et al. (2005) Danon, L., Díaz-Guilera, A., Duch, J. and Arenas, A. (2005) Comparing community structure identification. Journal of Statistical Mechanics: Theory and Experiment, 2005, 9008.
* De Stefano et al. (2013) De Stefano, D., Fuccella, V., Vitale, M. and Zaccarin, S. (2013) The use of different data sources in the analysis of co-authorship networks and scientific performance. Social Networks, 35, 370–381.
* Dunbar (1992) Dunbar, R. (1992) Neocortex size as a constraint on group size in primates. Journal of Human Evolution, 22, 469 – 493. URL: http://www.sciencedirect.com/science/article/pii/004724849290081J.
* Falkowski et al. (2007) Falkowski, T., Barth, A. and Spiliopoulou, M. (2007) Dengraph: A density-based community detection algorithm. In IEEE/WIC/ACM International Conference on Web Intelligence (WI’07), 112–115. IEEE.
* Fortunato (2010) Fortunato, S. (2010) Community detection in graphs. Physics Reports, 486, 75 – 174.
* Fortunato and Barthélemy (2007) Fortunato, S. and Barthélemy, M. (2007) Resolution limit in community detection. Proceedings of the National Academy of Sciences of the United States of America, 104, 36–41. URL: http://www.jstor.org/stable/25426046.
* Ghalmane et al. (2019) Ghalmane, Z., El Hassouni, M., Cherifi, C. and Cherifi, H. (2019) Centrality in modular networks. EPJ Data Science, 8, 15.
* Girvan and Newman (2002) Girvan, M. and Newman, M. E. J. (2002) Community structure in social and biological networks. Proceedings of the National Academy of Sciences, 99, 7821–7826.
* Good et al. (2010) Good, B. H., de Montjoye, Y.-A. and Clauset, A. (2010) Performance of modularity maximization in practical contexts. Physical Review E, 81, 046106.
* Goyal et al. (2006) Goyal, S., Van Der Leij, M. J. and Moraga-González, J. L. (2006) Economics: An emerging small world. Journal of political economy, 114, 403–412.
* Hartigan (1975) Hartigan, J. (1975) Clustering Algorithms. New York: J. Wiley & Sons.
* Holland et al. (1983) Holland, P. W., Laskey, K. B. and Leinhardt, S. (1983) Stochastic blockmodels: First steps. Social networks, 5, 109–137.
* Kernighan and Lin (1970) Kernighan, B. W. and Lin, S. (1970) An efficient heuristic procedure for partitioning graphs. The Bell system technical journal, 49, 291–307.
* Kloster and Gleich (2014) Kloster, K. and Gleich, D. F. (2014) Heat kernel based community detection. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 1386–1395.
* Knuth (1993) Knuth, D. E. (1993) The Stanford GraphBase: a platform for combinatorial computing. AcM Press New York.
* Lee and Wilkinson (2019) Lee, C. and Wilkinson, D. J. (2019) A review of stochastic block models and extensions for graph clustering. Applied Network Science, 4, 122.
* Leskovec et al. (2007) Leskovec, J., Kleinberg, J. and Faloutsos, C. (2007) Graph evolution: Densification and shrinking diameters. ACM Trans. Knowl. Discov. Data, 1, 2–es.
* Lorrain and White (1971) Lorrain, F. and White, H. (1971) Structural equivalence of individuals in social networks. The Journal of Mathematical Sociology, 1, 49–80.
* Medo et al. (2009) Medo, M., Zhang, Y.-C. and Zhou, T. (2009) Adaptive model for recommendation of news. EPL (Europhysics Letters), 88, 38005.
* Menardi (2016) Menardi, G. (2016) A review on modal clustering. International Statistical Review, 84, 413–433.
* Moody (2001) Moody, J. (2001) Peer influence groups: identifying dense clusters in large networks. Social Networks, 23, 261–283.
* Newman and Girvan (2004) Newman, M. E. J. and Girvan, M. (2004) Finding and evaluating community structure in networks. Phys. Rev. E, 69, 026113.
* Opsahl et al. (2010) Opsahl, T., Agneessens, F. and Skvoretz, J. (2010) Node centrality in weighted networks: Generalizing degree and shortest paths. Social Networks, 32, 245–251.
* Pothen et al. (1990) Pothen, A., Simon, H. D. and Liou, K.-P. (1990) Partitioning sparse matrices with eigenvectors of graphs. SIAM Journal on Matrix Analysis and Applications, 11, 430–452.
* R Core Team (2020) R Core Team (2020) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna. URL: https://www.R-project.org/.
* Rosvall et al. (2019) Rosvall, M., Delvenne, J.-C., Schaub, M. T. and Lambiotte, R. (2019) Different approaches to community detection. Advances in network clustering and blockmodeling, 105–119.
* Wang et al. (2017a) Wang, T.-S., Lin, H.-T. and Wang, P. (2017a) Weighted-spectral clustering algorithm for detecting community structures in complex networks. Artificial Intelligence Review, 47, 463–483.
* Wang et al. (2017b) Wang, Z., Moreno, Y., Boccaletti, S. and Perc, M. (2017b) Vaccination and epidemics in networked populations–an introduction.
* Wasserman and Faust (1994) Wasserman, S. and Faust, K. (1994) Social network analysis: Methods and applications. Cambridge University Press.
* Yin et al. (2017) Yin, H., Benson, A. R., Leskovec, J. and Gleich, D. F. (2017) Local higher-order graph clustering. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 555–564.
* Zachary (1977) Zachary, W. (1977) An information flow model for conflict and fission in small groups. Journal of Anthropological Research, 452–473.
Proof of a Conjecture on the Wiener Index of Eulerian Graphs
Peter Dankelmann (University of Johannesburg)
Financial support by the South African National Research Foundation, grant
118521, is gratefully acknowledged.
###### Abstract
The Wiener index of a connected graph is the sum of the distances between all
unordered pairs of vertices. A connected graph is Eulerian if its vertex
degrees are all even. In [Gutman, Cruz, Rada, Wiener index of Eulerian Graphs,
Discrete Applied Mathematics 132 (2014), 247-250] the authors proved that the
cycle is the unique graph maximising the Wiener index among all Eulerian
graphs of given order. They also conjectured that for Eulerian graphs of order
$n\geq 26$ the graph consisting of a cycle on $n-2$ vertices and a triangle
that share a vertex is the unique Eulerian graph with second largest Wiener
index. The conjecture is known to hold for all $n\leq 25$ with the exception
of six values. In this paper we prove the conjecture.
Keywords: Wiener index; average distance; mean distance; total distance;
Eulerian graph; degree
MSC-class: 05C12 (primary) 92E10 (secondary)
## 1 Introduction
Let $G=(V,E)$ be a finite, connected graph. The Wiener index of $G$ is defined
by
$W(G)=\sum_{\\{u,v\\}\subseteq V}d_{G}(u,v),$
where $d_{G}(u,v)$ denotes the usual distance between vertices $u$ and $v$ of
$G$, i.e., the minimum number of edges on a $(u,v)$-path in $G$.
The Wiener index, originally conceived by the chemist Wiener [25], has been
investigated extensively in the mathematical and chemical literature, often
under different names, such as transmission, distance, total distance or gross
status. Several of these results were originally obtained for the closely
related average distance, also called mean distance, which is defined as
$\binom{n}{2}^{-1}W(G)$, where $n$ is the order of the graph $G$. For some of
its chemical applications see, for example, [21].
One of the most basic results on the Wiener index states that
$W(G)\leq\binom{n+1}{3}$
for every connected graph on $n$ vertices, and equality holds if and only if
$G$ is a path.
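As a quick numeric sanity check, not part of the original argument, the bound and its equality case for paths can be verified computationally (Python with networkx assumed here and in the later sketches):

```python
# W(P_n) = binom(n+1, 3): the path attains the maximum among connected graphs.
import networkx as nx
from math import comb

assert all(nx.wiener_index(nx.path_graph(n)) == comb(n + 1, 3)
           for n in range(2, 30))
```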
A path has only vertices of degree one and two, so it is reasonable to expect
that better bounds can be obtained if restrictions are placed on the values of
the degrees. Upper bounds on the Wiener index that take into account not only
the order, but also the minimum degree were given, for example, in [2, 7, 17],
and it was shown in [1] that stronger bounds hold in the presence of a vertex
of large degree. The Wiener index in relation to the inverse degree, i.e., the
sum of the inverses of all vertex degrees, was considered by Erdös, Pach, and
Spencer [11].
Bounds on the Wiener index of trees in terms of vertex degree have also been
considered extensively. Every tree has minimum degree $1$, so it is natural to
ask how large or small the Wiener index can be in trees of given maximum
degree. Answering this question for the maximum value of the Wiener index is
fairly straightforward (see [20] and [23]), however the determination of the
minimum Wiener index by Fischermann, Hoffmann, Rautenbach, Székely and
Volkmann [12] required much more effort. For the more general problem of
determining the extremal values of the Wiener index of a tree with given
degree sequence see, for example, [6], [22] and [24]. A good survey of results
on the Wiener index of trees before 2000 was given in [10].
Not only the actual value, but also the parity of the degrees has been used to
bound the Wiener index. Trees in which all vertices have odd degree were
considered by Lin [18], who determined the smallest and largest possible
Wiener index of such trees. This result was extended in [14] with the
determination of all such trees of order $n$ with the largest
$\lfloor\frac{n}{4}\rfloor+1$ values of the Wiener index; see also [13]. The
smallest and largest Wiener index of a tree whose order and number of vertices
of even degree are given were determined in [19].
The Wiener index of connected graphs in which all vertices have even degrees,
that is, Eulerian graphs, was considered by Gutman, Cruz and Rada [15], who
obtained the following theorem.
###### Theorem 1 (Gutman, Cruz and Rada [15]).
Let $G$ be an Eulerian graph of order $n$. Then
$W(G)\leq W(C_{n}),$
where $C_{n}$ is the cycle on $n$ vertices. Equality holds if and only if
$G=C_{n}$.
The authors of Theorem 1 gave a direct proof of their result. However, since
every Eulerian graph is $2$-edge-connected, Theorem 1 can also be obtained as
a consequence of Theorem 3(a) below, which states that the cycle is the
unique graph maximising the Wiener index among all $2$-edge-connected graphs
of given order.
Gutman, Cruz and Rada [15] also presented a conjecture on the question of
which Eulerian graph of given order has the second largest Wiener index. For
$n\geq 5$ let $C_{n,3}$ be the graph of order $n$ obtained from the disjoint
union of two cycles on $n-2$ vertices and $3$ vertices, respectively, by
identifying two vertices, one from each cycle. Their conjecture states that
$C_{n,3}$ is the unique graph that has the second largest Wiener index among
all Eulerian graphs of order $n$ for $n\geq 26$. It is the aim of this paper
to give a proof of this conjecture.
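For concreteness, the conjectured extremal graph is easy to construct and check computationally; the following sketch (our illustration, with networkx assumed) builds $C_{n,3}$ and verifies that it is Eulerian.

```python
# C_{n,3}: a cycle on n-2 vertices and a triangle identified at one vertex.
import networkx as nx

def c_n3(n):
    G = nx.cycle_graph(n - 2)  # vertices 0, ..., n-3
    G.add_edges_from([(0, n - 2), (n - 2, n - 1), (n - 1, 0)])  # triangle at 0
    return G

G = c_n3(26)
assert nx.is_eulerian(G)  # connected with all vertex degrees even
print(nx.wiener_index(G), nx.wiener_index(nx.cycle_graph(26)))  # 2065 vs 2197
```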
It was verified in [15] that the conjecture holds for all values of $n$ up to
$25$ except $n\in\\{7,9\\}$, for which there are other extremal graphs of
larger Wiener index than $C_{n,3}$, and $n\in\\{8,10,11,13\\}$, for which
$C_{n,3}$ has second largest Wiener index, but there exists another graph of
the same Wiener index. All Eulerian graphs of order $7,8,9,10,11,13$ that have
the second largest Wiener index are shown in Figure 1.
The main result of this paper reads as follows.
###### Theorem 2.
Let $G$ be an Eulerian graph of order $n$ with $n\geq 26$ that is not a cycle.
Then
$W(G)\leq W(C_{n,3}).$
Equality holds if and only if $G=C_{n,3}$.
Figure 1: The Eulerian graphs of second largest Wiener index for
$n=7,8,9,10,11,13$.
The determination of the unique Eulerian graph with second largest Wiener
index is reminiscent of the corresponding problem for $2$-connected graphs. A
cutvertex of a connected graph is a vertex whose removal disconnects the
graph. A connected graph with no cutvertex is said to be $2$-connected, and a
block of a graph is a maximal subgraph that is $2$-connected. Plesník [20]
proved that among all $2$-connected graphs of given order, the cycle is the
unique graph maximising the Wiener index. While the proof of this result is
relatively straightforward, determining the $2$-connected graph of given order
with the second largest Wiener index requires significantly more effort; see
the paper by Bessy, Dross, Knor and Škrekovski [3]. The proof of Theorem 2
given in the present paper suggests that the situation is no different for
Eulerian graphs. We note that Plesník’s result on $2$-connected graphs was,
asymptotically, extended to $k$-connected graphs in [8].
We note also a certain analogy between our result and the determination of
the largest Wiener index of a connected graph with given order and number of
cutvertices in [4] and [5]. If the number of cutvertices is sufficiently small
relative to the order, then the extremal graph consists of a path, one end of
which is attached to a cycle. So among all graphs with exactly one cutvertex,
the extremal graph has two blocks whose orders are as unequal as possible.
Similarly, the extremal graph $C_{n,3}$ has two blocks, which are as unequal
as possible, given the restriction that every block of an Eulerian graph has
at least three vertices.
The notation we use is as follows. If $G$ is a graph, then $V(G)$ and $E(G)$
denote the vertex set and the edge set, respectively, of $G$. The order $n(G)$
and the size $m(G)$ are the number of vertices and edges, respectively, of
$G$. If $G$ and $H$ are graphs with $V(H)\subseteq V(G)$ and $E(H)\subseteq
E(G)$, then we say that $H$ is a subgraph of $G$ and write $H\leq G$. If
$A\subseteq V(G)$, then $G[A]$ denotes the subgraph of $G$ induced by $A$,
i.e., the graph whose vertex set is $A$, and whose edges are exactly the edges
of $G$ joining two vertices of $A$.
If $v$ is a vertex of $G$, then $N_{G}(v)$ denotes the neighbourhood of $v$,
i.e., the set of all vertices of $G$ adjacent to $v$. For $i\in\mathbb{N}$ we
define the $i$-th neighbourhood of $v$, $N_{i}(v)$, to be the set of vertices
at distance exactly $i$ from $v$, and we let $n_{i}(v)=|N_{i}(v)|$. The degree
of $v$ in $G$, i.e., the value $n_{1}(v)$, is denoted by ${\rm deg}_{G}(v)$.
A cutset of $G$ is a set $S\subseteq V$ such that $G-S$, the graph obtained
from deleting all vertices in $S$ and all edges incident with vertices in $S$
from $G$, is disconnected. An edge-cut of $G$ is a set $E_{1}\subseteq E(G)$
such that $G-E_{1}$, the graph obtained from $G$ by deleting all edges in
$E_{1}$, is disconnected. Let $k\in\mathbb{N}$. We say that $G$ is
$k$-connected ($k$-edge-connected) if $G$ contains no cutset (no edge-cut)
with fewer than $k$ elements. A cutvertex is a vertex $v$ with the property
that $\\{v\\}$ is a cutset. An endblock a graph $G$ is a block of $G$ that
contains only one cutvertex. It is known that every connected graph that is
not $2$-connected has at least two endblocks.
If $S$ is a cutset of $G$ and $H$ a component of $G-S$, then we say that
$G[V(H)\cup S]$ is a branch of $G$ at $S$. If $S=\\{v\\}$, then we say that
$H$ is a branch at $v$.
The total distance of a vertex $v$, $\sigma_{G}(v)$, is defined as the sum
$\sum_{y\in V(G)}d_{G}(v,y)$. By $\sigma_{G}(A)$ we mean $\sum_{y\in
V(G)-A}d_{G}(y,A)$, where $d_{G}(y,A)$ is defined as $\min_{a\in
A}d_{G}(y,a)$.
The eccentricity $e(v)$ of a vertex $v$ of $G$ is the distance from $v$ to a
vertex farthest from $v$ in $G$.
## 2 Preliminary Results
In this section we present definitions and results that will be used in the
proof of Theorem 2. We begin with some bounds on the Wiener index and on the
total distance of vertices in $2$-connected and $2$-edge-connected graphs.
###### Theorem 3 (Plesník [20]).
(a) Let $G$ be a $2$-edge-connected graph of order $n$. Then
$W(G)\leq\left\\{\begin{array}[]{cc}\frac{n^{3}}{8}&\textrm{if $n$ is
even,}\\\ \frac{n^{3}-n}{8}&\textrm{if $n$ is odd.}\end{array}\right.$
Equality holds if and only if $G$ is a cycle.
(b) Let $G$ be a $2$-connected graph of order $n$ and $v$ a vertex of $G$.
Then
$\sigma_{G}(v)\leq\left\\{\begin{array}[]{cc}\frac{n^{2}}{4}&\textrm{if $n$ is
even,}\\\ \frac{n^{2}-1}{4}&\textrm{if $n$ is odd.}\end{array}\right.$
Equality holds if $G$ is a cycle.
(c) Let $G$ be a $2$-edge-connected graph of order $n$ and $v$ a vertex of
$G$. Then
$\sigma_{G}(v)\leq\frac{n(n-1)}{3}.$
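A quick numeric check of the equality case in part (a), in the same illustrative spirit as before:

```python
# The cycle attains n^3/8 (n even) and (n^3 - n)/8 (n odd).
import networkx as nx

for n in range(3, 30):
    w = nx.wiener_index(nx.cycle_graph(n))
    assert w == (n**3 / 8 if n % 2 == 0 else (n**3 - n) / 8)
```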
###### Corollary 1.
Let $G$ be a $2$-connected graph of order $n$ and $u,w$ two vertices of $G$.
Let $u_{1},u_{2}$ be two adjacent vertices of the cycle $C_{n}$. Then
$\sigma_{G}(\\{u,w\\})\leq\sigma_{C_{n}}(\\{u_{1},u_{2}\\}).$
Proof: Let $G^{\prime}$ be the $2$-connected graph obtained from $G$ by adding
a new vertex $z$ and joining it to $u$ and $w$. Then
$\sigma(z,G^{\prime})=\sum_{x\in
V(G)}\big{(}1+d_{G}(x,\\{u,w\\})\big{)}=n+\sigma_{G}(\\{u,w\\}).$
Let $C_{n}^{\prime}$ be the graph obtained from $C_{n}$ by adding a new
vertex $y$ and joining it to two adjacent vertices $u_{1}$ and $u_{2}$ of
$C_{n}$. As above,
$\sigma(y,C_{n}^{\prime})=\sum_{x\in
V(C_{n})}\big{(}1+d_{C_{n}}(x,\\{u_{1},u_{2}\\})\big{)}=n+\sigma_{C_{n}}(\\{u_{1},u_{2}\\}).$
Clearly, removing the edge $u_{1}u_{2}$ from $C_{n}^{\prime}$ does not change
$\sigma(y)$. But $C_{n}^{\prime}-u_{1}u_{2}$ is $C_{n+1}$, so by Theorem 3(b),
we have $\sigma(z,G^{\prime})\leq\sigma(y,C_{n}^{\prime})$, which implies the
statement of the corollary. $\Box$
Figure 2: The graphs $C_{8,3}$ (left) and $F_{7,4}$ (right) defined in
Definition 1.
###### Definition 1.
(a) Let $n,a\in\mathbb{N}$ with $3\leq a\leq n-2$. Then $C_{n,a}$ denotes the
graph of order $n$ obtained from two disjoint cycles $C_{a}$ and $C_{n+1-a}$
by identifying a vertex of $C_{a}$ with a vertex of $C_{n+1-a}$.
(b) Let $n,a\in\mathbb{N}$ with $3\leq a\leq n-1$. Then $F_{n,a}$ denotes the
graph of order $n$ obtained from two disjoint cycles $C_{a}$ and $C_{n+2-a}$
by choosing two adjacent vertices $u,v$ of $C_{a}$ and two adjacent vertices
$u^{\prime},v^{\prime}$ of $C_{n+2-a}$ and identifying $u$ with $u^{\prime}$
and $v$ with $v^{\prime}$.
The Wiener index of the graph $C_{n,a}$ was evaluated in [15]. Specifically
for $C_{n,3}$ we have
$W(C_{n,3})=\left\\{\begin{array}[]{cc}\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{3}{2}n-2&\textrm{if
$n$ is even,}\\\
\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{11}{8}n-\frac{9}{4}&\textrm{if $n$ is
odd.}\end{array}\right.$ (1)
In our proofs below we make use of the fact that
$\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{11}{8}n-\frac{9}{4}\leq
W(C_{n,3})\leq\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{3}{2}n-2$ for all
$n\in\mathbb{N}$ with $n\geq 5$, irrespective of the parity of $n$.
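Formula (1) can be checked numerically against a direct computation on the graph built in the earlier sketch:

```python
# Compare (1) with a direct Wiener index computation of C_{n,3}.
import networkx as nx

def c_n3(n):
    G = nx.cycle_graph(n - 2)
    G.add_edges_from([(0, n - 2), (n - 2, n - 1), (n - 1, 0)])
    return G

def w_formula(n):  # right-hand side of (1)
    if n % 2 == 0:
        return n**3 / 8 - n**2 / 4 + 3 * n / 2 - 2
    return n**3 / 8 - n**2 / 4 + 11 * n / 8 - 9 / 4

assert all(nx.wiener_index(c_n3(n)) == w_formula(n) for n in range(5, 40))
```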
###### Lemma 1.
(Gutman, Cruz, Rada [15]) If $n\in\mathbb{N}$ is even, $n\geq 6$, then
$W(C_{n,3})>W(C_{n,4})>\ldots>W(C_{n,n/2-1})>W(C_{n,n/2}).$
If $n\in\mathbb{N}$ is odd, $n\geq 11$ and $n=4k+3$ for some $k\in\mathbb{N}$,
then
$W(C_{n,3})>W(C_{n,4})>\ldots>W(C_{n,2k})>W(C_{n,2k+2})>W(C_{n,2k+1}).$
If $n\in\mathbb{N}$ is odd, $n\geq 11$ and $n=4k+1$ for some $k\in\mathbb{N}$,
then
$W(C_{n,3})>W(C_{n,4})>\ldots>W(C_{n,2k-2})>W(C_{n,2k})>W(C_{n,2k-1})>W(C_{n,2k+1}).$
For $n=7,9$ we have $W(C_{7,4})>W(C_{7,3})$ and
$W(C_{9,4})>W(C_{9,3})>W(C_{9,5})$.
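The chains above lend themselves to a numeric spot-check; the sketch below verifies the even case for one value of $n$ (the constructor name is ours):

```python
# Spot-check of the even-n chain: W(C_{n,3}) > W(C_{n,4}) > ... > W(C_{n,n/2}).
import networkx as nx

def c_na(n, a):
    G = nx.cycle_graph(a)                          # first cycle: 0, ..., a-1
    nx.add_path(G, [0] + list(range(a, n)) + [0])  # second cycle through 0
    return G

n = 20  # any even n >= 6
ws = [nx.wiener_index(c_na(n, a)) for a in range(3, n // 2 + 1)]
assert all(x > y for x, y in zip(ws, ws[1:]))
```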
###### Lemma 2.
Let $n\geq 26$ and $4\leq a\leq n-2$. Then
$W(F_{n,a})\leq W(C_{n,3}),$
with equality only if $a=4$ or $a=n-2$.
Proof: A tedious but straightforward calculation yields that
$W(F_{n,a})=\left\\{\begin{array}[]{cc}\frac{1}{8}\big{[}a(n-2)(a-n-2)+n(n^{2}+2n-4)\big{]}&\textrm{if
$n$ even, $a$ even,}\\\
\frac{1}{8}\big{[}a(n-2)(a-n-2)+n(n^{2}+2n-4)-3n+6\big{]}&\textrm{if $n$ even,
$a$ odd,}\\\
\frac{1}{8}\big{[}a(n-2)(a-n-2)+n(n^{2}+2n-4)-n-a+2\big{]}&\textrm{if $n$ odd,
$a$ even,}\\\
\frac{1}{8}\big{[}a(n-2)(a-n-2)+n(n^{2}+2n-4)+a-2n\big{]}&\textrm{if $n$ odd,
$a$ odd.}\end{array}\right.$
Since $F_{n,a}=F_{n,n+2-a}$, we may assume that
$a\leq\lfloor\frac{n+2}{2}\rfloor$. The derivative with respect to $a$ of the
four terms on the right hand side above equals $\frac{1}{8}((n-2)(2a-n-2))$ if
$n$ is even, $\frac{1}{8}((n-2)(2a-n-2)-1)$ if $n$ is odd and $a$ is even, and
$\frac{1}{8}((n-2)(2a-n-2)+1)$ if $n$ is odd and $a$ is odd. Hence each of
these four terms is strictly decreasing in $a$. It thus follows that
$W(F_{n,a})\leq W(F_{n,4})$ if $a$ is even, with equality only if $a=4$, and
$W(F_{n,a})\leq W(F_{n,5})$ if $a$ is odd, with equality only if $a=5$. Now an
easy calculation shows that $W(F_{n,5})<W(F_{n,4})=W(C_{n,3})$. Hence the
lemma follows. $\Box$
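Lemma 2, including its equality cases $a=4$ and $a=n-2$ and the identity $W(F_{n,4})=W(C_{n,3})$ used at the end of the proof, can be spot-checked numerically in the same way:

```python
# F_{n,a}: cycles C_a and C_{n+2-a} glued along the common edge {0, a-1}.
import networkx as nx

def f_na(n, a):
    G = nx.cycle_graph(a)
    nx.add_path(G, [0] + list(range(a, n)) + [a - 1])  # second cycle
    return G

def c_n3(n):
    G = nx.cycle_graph(n - 2)
    G.add_edges_from([(0, n - 2), (n - 2, n - 1), (n - 1, 0)])
    return G

n = 26
w = {a: nx.wiener_index(f_na(n, a)) for a in range(4, n - 1)}
assert w[4] == w[n - 2] == nx.wiener_index(c_n3(n))  # equality cases
assert all(w[a] < w[4] for a in range(5, n - 2))     # strict otherwise
```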
###### Corollary 2.
Let $G$ be a graph of order $n\geq 26$ obtained from a cycle $C_{n}$ by adding
three edges between vertices of $C_{n}$ that are not in $E(C_{n})$ and that
form a triangle. Then $W(G)<W(C_{n,3})$.
Proof: Let the three edges added to $C_{n}$ be $uv,vw,wu$. Then for at least
one of these three edges, $uv$ say, we have $C_{n}+uv=F_{n,a}$ for some $a$
with $4\leq a\leq\frac{n+1}{2}$. Applying Lemma 2 yields that
$W(G)<W(F_{n,a})\leq W(C_{n,3})$. $\Box$
## 3 Excluding $2$-Connected Counterexamples
The goal of this section is to prove that an Eulerian graph of given order
that is not a cycle, and which has maximum Wiener index among such graphs,
cannot be $2$-connected. Hence it will suffice to prove Theorem 2 for graphs
that have a cutvertex.
We begin by showing that Theorem 2 holds for graphs that are obtained from two
$2$-connected graphs by gluing them together at two vertices, provided one of
them contains a spanning cycle.
###### Lemma 3.
(a) Let $G$ be a $2$-connected graph of order $n$. If $G$ contains a cutset
$\\{u,w\\}$ with the property that the union of some, but not all, branches of
$G$ at $\\{u,w\\}$ has exactly $a$ vertices and contains a spanning cycle, and
the union of the remaining branches is $2$-connected, then
$W(G)\leq W(F_{n,a}).$
Equality implies that $G=F_{n,a}$.
(b) If, in addition, $G$ is Eulerian, $n\geq 26$ and $4\leq a\leq n-2$, then
$W(G)<W(C_{n,3})$.
Proof: Let $A$ be the vertex set of the union of the branches at $\\{u,w\\}$
that contains a spanning cycle, and let $B$ be the vertex set of the union of
the remaining branches. Let $a=|A|$ and $b=|B|$. Then $A\cap B=\\{u,w\\}$. Let
$H=G[B]$ and let $C$ be a spanning cycle of $G[A]$. We denote the set of
vertices $x$ of $A-\\{u,w\\}$ for which $d_{C}(u,x)<d_{C}(w,x)$
($d_{C}(u,x)>d_{C}(w,x)$, $d_{C}(u,x)=d_{C}(w,x)$) by $U$ ($W$, $S$). Then
$\displaystyle W(G)$ $\displaystyle=$ $\displaystyle
W_{G}(A)+W_{G}(B)-d_{G}(u,w)+\sum_{x\in U\cup W\cup S,y\in
B-\\{u,w\\}}d_{G}(x,y)$ (2) $\displaystyle\leq$ $\displaystyle
W(C)+W(H)-d_{G}(u,w)+\sum_{x\in U,y\in
V(H)-\\{u,w\\}}\big{(}d_{C}(x,u)+d_{H}(u,y)\big{)}$ $\displaystyle+\sum_{x\in
W,y\in V(H)-\\{u,w\\}}\big{(}d_{C}(x,w)+d_{H}(w,y)\big{)}+\sum_{x\in S,y\in
V(H)-\\{u,w\\}}\big{(}d_{C}(x,\\{u,w\\})+d_{H}(\\{u,w\\},y)\big{)}$
$\displaystyle=$ $\displaystyle
W(C)+W(H)-d_{G}(u,w)+(b-2)\sigma_{C}(u,U)+|U|(\sigma_{H}(u)-d_{H}(u,w))$
$\displaystyle+(b-2)\sigma_{C}(w,W)+|W|(\sigma_{H}(w)-d_{H}(u,w))+(b-2)\sigma_{C}(\\{u,w\\},S)$
$\displaystyle+|S|\sigma_{H}(\\{u,w\\}).$
Let $C_{a}$ and $C_{b}$ be the two cycles of the graph $F_{n,a}$ defined
above, where $b=n+2-a$; for brevity we write $F_{a,b}$ for this graph. Let
$u^{\prime}$ and $w^{\prime}$ be the two adjacent vertices of
$F_{a,b}$ shared by $C_{a}$ and $C_{b}$. Let $U^{\prime}$ ($W^{\prime}$,
$S^{\prime}$) be the set of vertices $x$ of
$C_{a}-\\{u^{\prime},w^{\prime}\\}$ with
$d(u^{\prime},x)<d(w^{\prime},x)$ ($d(u^{\prime},x)>d(w^{\prime},x)$,
$d(u^{\prime},x)=d(w^{\prime},x)$). As above, we have
$\displaystyle W(F_{a,b})$ $\displaystyle=$ $\displaystyle
W(C_{a})+W(C_{b})-d_{F_{a,b}}(u^{\prime},w^{\prime})+(b-2)\sigma_{C_{a}}(u^{\prime},U^{\prime})+|U^{\prime}|(\sigma_{C_{b}}(u^{\prime})-d_{F_{a,b}}(u^{\prime},w^{\prime}))$
(3)
$\displaystyle+(b-2)\sigma_{C_{a}}(w^{\prime},W^{\prime})+|W^{\prime}|(\sigma_{C_{b}}(w^{\prime})-d_{F_{a,b}}(u^{\prime},w^{\prime}))+(b-2)\sigma_{C_{a}}(\\{u^{\prime},w^{\prime}\\},S^{\prime})$
$\displaystyle+|S^{\prime}|\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\}).$
Since $C$ and $C_{a}$ are cycles, we have $|U|=|W|$ and
$|U^{\prime}|=|W^{\prime}|$. Subtracting (3) from (2) yields thus
$\displaystyle W(F_{a,b})-W(G)$ $\displaystyle\geq$
$\displaystyle\big{(}W(C_{a})-W(C)\big{)}+\big{(}W(C_{b})-W(H)\big{)}+\big{(}d_{G}(u,w)-d_{F_{a,b}}(u^{\prime},w^{\prime})\big{)}$
$\displaystyle+(b-2)\big{[}\sigma_{C_{a}}(u^{\prime},U^{\prime})+\sigma_{C_{a}}(w^{\prime},W^{\prime})+\sigma_{C_{a}}(\\{u^{\prime},w^{\prime}\\},S^{\prime})-\sigma_{C}(u,U)$
$\displaystyle-\sigma_{C}(w,W)-\sigma_{C}(\\{u,w\\},S)\big{]}+|U^{\prime}|\big{[}\sigma_{C_{b}}(u^{\prime})+\sigma_{C_{b}}(w^{\prime})-2d_{F_{a,b}}(u^{\prime},w^{\prime})\big{]}$
$\displaystyle+|S^{\prime}|\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\})-|U|\big{[}\sigma_{H}(u)+\sigma_{H}(w)-2d_{H}(u,w)\big{]}-|S|\sigma_{H}(\\{u,w\\})$
We now argue that the right hand side of the last inequality is nonnegative.
Clearly, $C$ and $C_{a}$ are isomorphic, so $W(C)-W(C_{a})=0$. Since $H$ is
$2$-connected, and thus $2$-edge-connected, we have $W(C_{b})-W(H)\geq 0$ by
Theorem 3(a). Since $d_{F_{a,b}}(u^{\prime},w^{\prime})=1$ we have
$d_{G}(u,w)-d_{F_{a,b}}(u^{\prime},w^{\prime})\geq 0$. Also
$\sigma_{C_{a}}(u^{\prime},U^{\prime})+\sigma_{C_{a}}(w^{\prime},W^{\prime})+\sigma_{C_{a}}(\\{u^{\prime},w^{\prime}\\},S^{\prime})-\sigma_{C}(u,U)-\sigma_{C}(w,W)-\sigma_{C}(\\{u,w\\},S)=\sigma_{C_{a}}(\\{u^{\prime},w^{\prime}\\})-\sigma_{C}(\\{u,w\\})$,
but $\sigma_{C_{a}}(\\{u^{\prime},w^{\prime}\\})-\sigma_{C}(\\{u,w\\})\geq 0$
by Lemma 1.
We now bound the remaining expression,
$|U^{\prime}|\big{(}\sigma_{C_{b}}(u^{\prime})+\sigma_{C_{b}}(w^{\prime})-2d_{C_{b}}(u^{\prime},w^{\prime})\big{)}+|S^{\prime}|\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\})-|U|\big{(}\sigma_{H}(u)+\sigma_{H}(w)-2d_{H}(u,w)\big{)}-|S|\sigma_{H}(\\{u,w\\})$,
which we denote by $f$. In order to complete the proof of the lemma it remains
to show that $f\geq 0$.
We have $a=2|U^{\prime}|+|S^{\prime}|=2|U|+|S|$. In $C_{a}$, vertices
$u^{\prime}$ and $w^{\prime}$ are adjacent, so there is exactly one vertex
equidistant from $u^{\prime}$ and $w^{\prime}$ if $a$ is odd, and there is no
vertex equidistant from $u^{\prime}$ and $w^{\prime}$ if $a$ is even. Hence
$|S^{\prime}|=1$ if $a$ is odd, and $|S^{\prime}|=0$ if $a$ is even. In $C$
the vertices $u$ and $w$ are not necessarily adjacent, so we have $|S|=1$ if
$a$ is odd, and $|S|\in\\{0,2\\}$ if $a$ is even. We conclude that if $a$ is
odd, then $|U|=|U^{\prime}|$ and $|S|=|S^{\prime}|$, and if $a$ is even then
either $|U|=|U^{\prime}|$ and $|S|=|S^{\prime}|$, or $|U^{\prime}|=|U|+1$,
$|S^{\prime}|=0$, and $|S|=2$. If $|U^{\prime}|=|U|$ and $|S^{\prime}|=|S|$, then
$f=|U|\big{(}\sigma_{C_{b}}(u^{\prime})-\sigma_{H}(u)+\sigma_{C_{b}}(w^{\prime})-\sigma_{H}(w)+2d_{H}(u,w)-2d_{C_{b}}(u^{\prime},w^{\prime})\big{)}+|S|\big{(}\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\})-\sigma_{H}(\\{u,w\\})\big{)}$.
Each of the terms $|U|$, $|S|$, $\sigma_{C_{b}}(u^{\prime})-\sigma_{H}(u)$,
$\sigma_{C_{b}}(w^{\prime})-\sigma_{H}(w)$,
$2d_{H}(u,w)-2d_{C_{b}}(u^{\prime},w^{\prime})$, and
$\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\})-\sigma_{H}(\\{u,w\\})$ is
nonnegative, hence $f\geq 0$ in this case. If $|U^{\prime}|=|U|+1$ and
$|S|=2$, $|S^{\prime}|=0$, then
$f=|U|\big{(}\sigma_{C_{b}}(u^{\prime})-\sigma_{H}(u)+\sigma_{C_{b}}(w^{\prime})-\sigma_{H}(w)+2d_{H}(u,w)-2d_{C_{b}}(u^{\prime},w^{\prime})\big{)}+\sigma_{C_{b}}(u^{\prime})+\sigma_{C_{b}}(w^{\prime})-2d_{C_{b}}(u^{\prime},w^{\prime})-2\sigma_{H}(\\{u,w\\})$.
As above, each of the terms $|U|$, $\sigma_{C_{b}}(u^{\prime})-\sigma_{H}(u)$,
$\sigma_{C_{b}}(w^{\prime})-\sigma_{H}(w)$,
$2d_{H}(u,w)-2d_{C_{b}}(u^{\prime},w^{\prime})$ is nonnegative. We also have
$\sigma_{C_{b}}(u^{\prime})+\sigma_{C_{b}}(w^{\prime})-2d_{C_{b}}(u^{\prime},w^{\prime})-2\sigma_{H}(\\{u,w\\})\geq
0$ since
$\sigma_{C_{b}}(u^{\prime})+\sigma_{C_{b}}(w^{\prime})-2d_{C_{b}}(u^{\prime},w^{\prime})=\sum_{x\in
V(C_{b})-\\{u^{\prime},w^{\prime}\\}}\big{(}d_{C_{b}}(u^{\prime},x)+d_{C_{b}}(w^{\prime},x)\big{)}\geq\sum_{x\in
V(C_{b})-\\{u^{\prime},w^{\prime}\\}}2\min\\{d_{C_{b}}(u^{\prime},x),d_{C_{b}}(w^{\prime},x)\\}=2\sigma_{C_{b}}(\\{u^{\prime},w^{\prime}\\})$.
Hence $f\geq 0$ also in this case. This proves the desired bound on $W(G)$.
Now assume that $W(G)=W(F_{n,a})$. Then we have equality between the
corresponding terms in (2) and (3), in particular $W(G[A])=W(C_{a})$ and
$W(H)=W(C_{b})$. This implies by Theorem 3(a) that $G[A]$ and $H$ are cycles
of length $a$ and $b$, respectively. We also have $d_{C}(u,w)=1$. It follows
that $G=F_{n,a}$.
(b) If $G$ is Eulerian, then $G\neq F_{n,a}$ and so $W(G)<W(F_{n,a})$. By
Lemma 2 we have $W(F_{n,a})\leq W(C_{n,3})$, and (b) follows. $\Box$
###### Lemma 4.
Let $n\in\mathbb{N}$ with $n\geq 26$. Among all Eulerian graphs of order $n$
that are not cycles, let $G$ be one that has maximum Wiener index. Then $G$
has a cutvertex.
Proof: Suppose to the contrary that $G$ is $2$-connected. We first prove that
every triangle of $G$ contains a vertex of degree $2$. (4)
Suppose to the contrary that $G$ contains a triangle $u_{1}u_{2}u_{3}$ with
${\rm deg}(u_{i})>2$ for $i=1,2,3$. Let $E^{\prime}$ be the edge set of this
triangle. Then $G-E^{\prime}$ is connected since otherwise, if $G-E^{\prime}$
is disconnected, the vertices $u_{1},u_{2}$ and $u_{3}$ are not all in the
same component of $G-E^{\prime}$, so there exists a component of
$G-E^{\prime}$ containing only one vertex, $u_{1}$ say, of the triangle. This
implies that $u_{1}$ is a cutvertex of $G$, a contradiction to $G$ being
$2$-connected. Hence $G-E^{\prime}$ is connected. Clearly, $G-E^{\prime}$ is
also Eulerian, and $W(G-E^{\prime})>W(G)$. By our choice of $G$, the graph
$G-E^{\prime}$ is a cycle. But then $G$ is obtained from a cycle by adding the
edges of a triangle, and so $W(G)<W(C_{n,3})$ by Corollary 2. This contradicts
the choice of $G$ as having maximum Wiener index, and so (4) follows.
Figure 3: Cases 1, 2a, 2b, and 2c in the proof of Lemma 4.
Since $G$ is not a cycle, it has a vertex of degree greater than $2$. For
$v\in V(G)$ we define $\overline{v}$ to be a nearest vertex of degree greater
than $2$ (with ties broken arbitrarily) and let $f(v)=d(v,\overline{v})$. Note
that $v=\overline{v}$ if and only if ${\rm deg}(v)>2$. Since $G$ is Eulerian,
we have ${\rm deg}(\overline{v})\geq 4$ for every $v\in V$. For $v\in V$, we
have $\overline{v}\in N_{f(v)}(v)$, and thus $N(\overline{v})\subseteq
N_{f(v)-1}(v)\cup N_{f(v)}(v)\cup N_{f(v)+1}(v)$. We claim that
$\textrm{If ${\rm deg}(v)=2$, then}\ |N(\overline{v})\cap N_{f(v)-1}(v)|=1.$
(5)
Indeed, if ${\rm deg}(v)=2$ then the neighbour, $w$ say, of $\overline{v}$ on
a shortest $(v,\overline{v})$-path is in $N_{f(v)-1}(v)$. If there was a
second neighbour $w^{\prime}$ of $\overline{v}$ in $N_{f(v)-1}(v)$, then the
vertices in $\bigcup_{i=0}^{f(v)-1}N_{i}(v)\cup\\{\overline{v}\\}$ would
induce a cycle as a subgraph whose only vertex of degree greater than $2$ in
$G$ is $\overline{v}$, implying that $\overline{v}$ is a cutvertex of $G$,
contradicting the $2$-connectedness of $G$. This proves (5).
Let $v$ be a vertex of degree $2$. By the definition of $f(v)$, all vertices
in $\bigcup_{i=0}^{f(v)-1}N_{i}(v)$ have degree $2$. It is easy to see that,
since $G$ is $2$-connected, this implies
$n_{1}(v)=n_{2}(v)=\cdots=n_{f(v)}(v)=2.$ (6)
Let $N_{f(v)}(v)=\\{\overline{v},w\\}$. It follows from (5) that
$n_{f(v)}(v)+n_{f(v)+1}(v)\geq{\rm deg}(\overline{v})\geq 4$, so
$n_{f(v)+1}(v)\geq 2$. We consider three cases, depending on the value
$n_{f(v)+1}(v)$.
Case 1: There exists $v\in V(G)$ with $n_{f(v)+1}(v)=2$.
Let $N_{f(v)+1}(v)=\\{u_{1},u_{2}\\}$. Since ${\rm deg}_{G}(\overline{v})\geq
4$, and the only possible neighbours of $\overline{v}$ are $u_{1}$, $u_{2}$,
$w$ and a vertex in $N_{f(v)-1}(v)$, it follows that $\overline{v}$ is adjacent
to all four of these vertices and ${\rm deg}_{G}(\overline{v})=4$. Since $w$
is adjacent to a vertex in $N_{f(v)+1}(v)$, otherwise $\overline{v}$ would be
a cutvertex, to a vertex in $N_{f(v)-1}(v)$, and also to $\overline{v}$, it
follows that ${\rm deg}_{G}(w)>2$, and thus ${\rm deg}_{G}(w)=4$, so $w$ is
also adjacent to $u_{1}$ and $u_{2}$.
Now $\overline{v}$, $w$ and $u_{i}$ form a triangle for $i=1,2$. Since ${\rm
deg}_{G}(\overline{v})={\rm deg}_{G}(w)=4$, it follows by (4) that ${\rm
deg}_{G}(u_{1})={\rm deg}_{G}(u_{2})=2$. So the vertices in $N_{f(v)+1}(v)$
have only neighbours in $N_{f(v)+1}(v)\cup N_{f(v)}(v)$. This implies that
$e_{G}(v)=f(v)+1$, and so $V(G)=\bigcup_{i=0}^{f(v)+1}N_{i}(v)$. It follows
that $G$ consists of the cycle induced by $\bigcup_{i=0}^{f(v)}N_{i}(v)$ and
the two additional vertices $u_{1}$ and $u_{2}$ of degree two, both adjacent
to $\overline{v}$ and $w$. Hence $G$ is the first graph depicted in Figure 3.
Applying Lemma 3 to the cutset $\\{\overline{v},w\\}$ now yields that
$W(G)<W(C_{n,3})$. This contradiction to the maximality of $W(G)$ proves the
lemma in Case 1.
Case 2: There exists $v\in V(G)$ with $n_{f(v)+1}(v)=3$.
Let $N_{f(v)+1}(v)=\\{u_{1},u_{2},u_{3}\\}$. We consider subcases as follows.
Case 2a: $\overline{v}w\in E(G)$.
The set $\\{\overline{v},w\\}$ is a cutset. Its branch containing $v$ is a
cycle of length $2f(v)+1$, and the union of the other branches is
$2$-connected since $\overline{v}$ and $w$ are adjacent. Hence we have
$W(G)\leq W(F_{n,2f(v)+1})$ by Lemma 3(a). If $f(v)\geq 2$, then $4\leq
2f(v)+1\leq n-2$ and so $W(F_{n,2f(v)+1})<W(C_{n,3})$ by Lemma 3(b), a
contradiction to the maximality of $W(G)$. Hence $f(v)=1$ and $\overline{v}\in
N_{1}(v)$.
Then ${\rm deg}(w)>2$, since otherwise $w$ would not have neighbours in
$N_{2}(v)$, and $\overline{v}$ would be a cutvertex, a contradiction. Hence
${\rm deg}(w)\geq 4$. From (5) and $n_{1}(v)+n_{2}(v)=5$ we conclude that
both $\overline{v}$ and $w$ have degree $4$, and both are adjacent to
exactly two vertices in $N_{2}(v)$. We may assume that $\overline{v}$ is
adjacent to $u_{1}$ and $u_{2}$, while $w$ is adjacent to $u_{2}$ and $u_{3}$.
The situation is depicted in the second graph of Figure 3. Since
$\overline{v}$, $w$ and $u_{2}$ form a triangle, we have ${\rm deg}(u_{2})=2$
by (4). This implies that $\\{\overline{v},w\\}$ is a cutset of adjacent
vertices with at least three branches, and the branches containing $v$ and
$u_{2}$ are $3$-cycles, so their union contains a spanning cycle, whose length
is $4$. Since $\overline{v}$ and $w$ are adjacent, the union of the remaining
branches is $2$-connected. Hence it follows by Lemma 3 that $W(G)<W(C_{n,3})$.
This contradiction to the maximality of $W(G)$ proves the lemma in this case.
Case 2b: $\overline{v}w\notin E(G)$ and ${\rm deg}(w)=2$.
Then $w$ has a unique neighbour, $u_{3}$ say, in $N_{f(v)+1}(v)$. The set
$\bigcup_{i=0}^{f(v)}N_{i}(v)\cup\\{u_{3}\\}$ induces a cycle in $G$ in which
only $\overline{v}$ and possibly $u_{3}$ have degree greater than $2$. The
situation is depicted in the third graph of Figure 3. The set
$\\{\overline{v},u_{3}\\}$ is a cutset. The branch containing $v$ and $w$
induces a cycle of length $2f(v)+2$, and the union of the remaining branches
is $2$-connected since $\overline{v}$ and $u_{3}$ are adjacent. Since $4\leq
2f(v)+2\leq n-2$, we have $W(G)<W(C_{n,3})$ by Lemma 3, a contradiction to the
maximality of $W(G)$.
Case 2c: $\overline{v}w\notin E(G)$ and ${\rm deg}(w)>2$.
By (5) and $n_{f(v)+1}(v)=3$ it follows that $\overline{v}$ and $w$ are both
adjacent to $u_{1}$, $u_{2}$ and $u_{3}$. If at least one vertex in
$\\{u_{1},u_{2},u_{3}\\}$, $u_{1}$ say, has degree $2$, then the union of the
two branches at the cutset $\\{\overline{v},w\\}$ containing $v$ and $u_{1}$
has at least four vertices and a spanning cycle, while the union of the
remaining branches is $2$-connected. Hence it follows from Lemma 3 that
$W(G)<W(C_{n,3})$, contradicting the maximality of $W(G)$. So we may assume
that $u_{1}$, $u_{2}$ and $u_{3}$ all have degree greater than $2$. The
situation is depicted in the fourth graph of Figure 3. Let $E^{\prime}$ be the
edge set of the $4$-cycle $\overline{v},u_{1},w,u_{2},\overline{v}$. Then
$G-E^{\prime}$ is connected since otherwise, similarly to the proof of (4),
one of the vertices $u_{1}$, $u_{2}$ or $u_{3}$ would be a cutvertex of $G$.
Since all vertices in $G-E^{\prime}$ have even degree, it follows that
$G-E^{\prime}$ is Eulerian. Since at least one vertex of $G-E^{\prime}$ has
degree greater than $2$, namely $u_{3}$, we conclude that $G-E^{\prime}$ is not a
cycle. But $W(G-E^{\prime})>W(G)$, a contradiction to the maximality of
$W(G)$.
Case 3: $n_{f(v)+1}(v)\geq 4$ for all $v\in V$.
Let $v\in V(G)$ be fixed. We first show that
$\sigma_{G}(v)\leq\left\\{\begin{array}[]{cc}\frac{1}{4}n^{2}-n+\frac{11}{4}+2f(v)&\textrm{if
$n$ is odd,}\\\ \frac{1}{4}n^{2}-n+3+2f(v)&\textrm{if $n$ is
even}\end{array}\right.$ (7)
We note that $n_{0}(v)=1$, $n_{1}(v)=n_{2}(v)=\ldots=n_{f(v)}(v)=2$, and
$n_{f(v)+1}(v)\geq 4$ imply that $n\geq 5+2f(v)$, so
$f(v)\leq\lfloor\frac{n-5}{2}\rfloor$. Let $k=e(v)$. Then
$\sigma_{G}(v)=\sum_{i=0}^{k}in_{i}(v)$. The values $n_{i}(v)$ satisfy the
following conditions: (i) $n_{0}(v)=1$ and (ii) $\sum_{i=0}^{k}n_{i}(v)=n$.
Since $G$ is $2$-connected, we have (iii) $n_{i}(v)\geq 2$ for
$i=1,2,\ldots,k-1$, and (iv) $n_{f(v)+1}(v)\geq 4$ by the defining condition
of Case 3.
In order to bound $\sum_{i=1}^{k}in_{i}(v)$ from above, assume that $n$ and
$f(v)$ are fixed, and that integers $k,n_{0},n_{1},\ldots,n_{k}$ are chosen to
maximise $\sum_{i=1}^{k}in_{i}$ subject to conditions (i)-(iv). Then
$n_{0}=1$, and $n_{i}=2$ for all $i\in\\{1,2,\ldots,k-1\\}-\\{f(v)+1\\}$,
since otherwise, if $n_{i}>2$, we can modify the sequence $n_{0},\ldots,n_{k}$
by decreasing $n_{i}$ by $1$ and increasing $n_{i+1}$ by $1$ to obtain a new
sequence $n_{0}^{\prime},\ldots,n_{k}^{\prime}$ which satisfies (i)-(iv), but
for which $\sum_{i=0}^{k}in_{i}^{\prime}>\sum_{i=0}^{k}in_{i}$, a
contradiction. The same argument yields that $n_{f(v)+1}=4$, and also that
$n_{k}\in\\{1,2\\}$ if $k>f(v)+1$. Therefore, if $n$ is odd we have
$k=\frac{n-3}{2}$ and
$\sum_{i=0}^{k}in_{i}=\frac{1}{4}n^{2}-n+\frac{11}{4}+2f(v)$, and if $n$ is
even we have $k=\frac{n-2}{2}$, $n_{k}=1$ and
$\sum_{i=0}^{k}in_{i}=\frac{1}{4}n^{2}-n+3+2f(v)$, which is (7).
Summation of (7) over all $v\in V(G)$ yields
$2W(G)=\sum_{v\in
V(G)}\sigma_{G}(v)\leq\left\\{\begin{array}[]{cc}\frac{1}{4}n^{3}-n^{2}+\frac{11}{4}n+2\sum_{v\in
V(G)}f(v)&\textrm{if $n$ is odd,}\\\ \frac{1}{4}n^{3}-n^{2}+3n+2\sum_{v\in
V(G)}f(v)&\textrm{if $n$ is even.}\end{array}\right.$ (8)
We now bound $\sum_{v\in V(G)}f(v)$. Since $G$ is an Eulerian graph but not a
cycle, $G$ contains a vertex $w$ of degree at least $4$. Since for
$i\in\\{0,1,\ldots,e(w)\\}$ every vertex $v\in N_{i}(w)$ satisfies $f(v)\leq
d(v,w)$, we have
$\sum_{v\in V(G)}f(v)\leq\sum_{v\in V(G)}d(v,w)=\sigma_{G}(w).$
Now $G$ has more than one vertex of degree greater than two, since otherwise
such a vertex would be a cutvertex, contradicting the $2$-connectedness of
$G$. That implies that the strict inequality $\sum_{v\in
V(G)}f(v)<\sigma_{G}(w)$ holds. Noting that $f(w)=0$, we obtain by (7) that
$\sum_{v\in
V(G)}f(v)<\sigma_{G}(w)\leq\left\\{\begin{array}[]{cc}\frac{1}{4}n^{2}-n+\frac{11}{4}&\textrm{if
$n$ is odd,}\\\ \frac{1}{4}n^{2}-n+3&\textrm{if $n$ is
even}.\end{array}\right.$ (9)
From (8) and (9) we get
$W(G)<\left\\{\begin{array}[]{cc}\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{3}{8}n+\frac{11}{4}&\textrm{if
$n$ is odd,}\\\ \frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{1}{2}n+3&\textrm{if
$n$ is even.}\end{array}\right.$
But the right hand side of the last inequality equals $W(C_{n,3})$. This
contradiction to the maximality of $W(G)$ completes the proof. $\Box$
## 4 Completing the proof of Theorem 2
Proof of Theorem 2: Suppose to the contrary that the theorem is false, and let
$n$ be the smallest value with $n\geq 26$ for which the theorem fails. Let $G$
be an Eulerian graph of order $n$ that is not a cycle, and that has maximum
Wiener index among all such graphs. By Lemma 4, $G$ has a cutvertex, so $G$ is
not $2$-connected. Then $G$ has at least two endblocks. Let $H$ be a smallest
endblock of $G$, let $v$ be the cutvertex of $G$ contained in $H$, and let $K$
be the union of the branches at $v$ distinct from $H$. Let $A$ and $B$ be the
vertex set of $H$ and $K$, respectively, and let $a=|A|$ and $b=|B|$. Then
$b=n-a+1$, and since $H$ is a smallest endblock we have $a\leq\frac{n+1}{2}$.
We have
$\displaystyle W(G)$ $\displaystyle=$ $\displaystyle\sum_{\\{x,y\\}\subseteq
A}d_{H}(x,y)+\sum_{\\{x,y\\}\subseteq B}d_{K}(x,y)+\sum_{x\in
A-\\{v\\}}\sum_{y\in B-\\{v\\}}\big{(}d_{H}(x,v)+d_{K}(v,y)\big{)}$ (10)
$\displaystyle=$ $\displaystyle
W(H)+W(K)+(a-1)\sigma_{K}(v)+(b-1)\sigma_{H}(v).$
Since $H$ is an endblock, $H$ is $2$-connected, but $K$ may or may not be
$2$-connected.
Case 1: $K$ is $2$-connected.
Similarly to (10) we obtain for the graph $C_{n,a}$ and its two blocks $C_{a}$
and $C_{b}$ that
$W(C_{n,a})=W(C_{a})+W(C_{b})+(a-1)\sigma_{C_{b}}(w)+(b-1)\sigma_{C_{a}}(w),$
where $w$ is the cutvertex of $C_{n,a}$. Since $H$ and $K$ are $2$-connected,
we have by Theorem 3 that $W(H)\leq W(C_{a})$, $W(K)\leq W(C_{b})$,
$\sigma_{K}(v)\leq\sigma_{C_{b}}(w)$ and $\sigma_{H}(v)\leq\sigma_{C_{a}}(w)$.
Hence we have $W(G)\leq W(C_{n,a})$. By Lemma 1 we have $W(C_{n,a})\leq
W(C_{n,3})$, and so we have $W(G)\leq W(C_{n,3})$, as desired.
Assume that $W(G)=W(C_{n,3})$. Then $W(K)=W(C_{b})$, and so $K=C_{b}$, and
similarly $H=C_{a}$ by Theorem 3. Now Lemma 1 implies that $a=3$. It follows
that $G=C_{n,3}$, and so the theorem holds in Case 1.
Case 2: $K$ is not $2$-connected.
We now bound each term on the right hand side of (10) separately. Clearly, $K$
is an Eulerian graph of order $n-a+1$ but not a cycle. Since $G$ is a smallest
counterexample to Theorem 2, the bound in Theorem 2 holds for $K$ unless $b<5$
or $b\in\\{7,9\\}$. However, since $b\geq\frac{n+1}{2}$ and $n\geq 26$, $b$ is
not one of these exceptional values and Theorem 2 holds for $K$. Therefore,
$W(K)\leq
W(C_{n-a+1,3})\leq\frac{1}{8}(n-a+1)^{3}-\frac{1}{4}(n-a+1)^{2}+\frac{3}{2}(n-a+1)-2.$
(11)
It follows from Theorem 3(c) that
$\sigma_{K}(v)\leq\frac{1}{3}(n-a+1)(n-a).$ (12)
As in Case 1, Theorem 3 yields the following bounds for $W(H)$ and
$\sigma_{H}(v)$
$W(H)\leq W(C_{a})\leq\frac{1}{8}a^{3}\ \ \textrm{and}\ \
\sigma_{H}(v)\leq\frac{1}{4}a^{2}.$ (13)
Substituting (13), (11) and (12) into (10) yields that
$\displaystyle W(G)$ $\displaystyle\leq$
$\displaystyle\frac{1}{8}a^{3}+\frac{1}{8}(n-a+1)^{3}-\frac{1}{4}(n-a+1)^{2}+\frac{3}{2}(n-a+1)-2$
$\displaystyle+\frac{1}{3}(a-1)(n-a+1)(n-a)+\frac{1}{4}(n-a)a^{2}.$
From equation (1) and the remark following it, we have
$W(C_{n,3})\geq\frac{1}{8}n^{3}-\frac{1}{4}n^{2}+\frac{11}{8}n-\frac{9}{4}.$
Subtracting these two bounds we obtain, after simplification,
$W(C_{n,3})-W(G)\geq\frac{1}{24}\Big{\\{}(a-1)n^{2}+(a^{2}-18a+8)n-2a^{3}+13a^{2}+25a-39\Big{\\}}.$
Denote the right hand side of the above inequality by $f(n,a)$. To complete
the proof of the theorem it suffices to show that $f(n,a)>0$ for $n\geq 26$ and
$3\leq a\leq\frac{n+1}{2}$. Now $\frac{\partial f}{\partial
a}=\frac{1}{24}\big{\\{}n^{2}+(2a-18)n-6a^{2}+26a+25\big{\\}}$. For constant
$n$, this is a quadratic function of $a$ which is concave down and thus it
attains its minimum for $a\in[3,\frac{n+1}{2}]$ at $a=3$ or $a=\frac{n+1}{2}$.
Since for $a=3$ we have $\frac{\partial f}{\partial
a}=\frac{1}{24}(n^{2}-12n+49)>0$, and for $a=\frac{n+1}{2}$ we have
$\frac{\partial f}{\partial a}=\frac{1}{48}(n^{2}-14n+73)>0$, the derivative
$\frac{\partial f}{\partial a}$ is positive for $3\leq a\leq\frac{n+1}{2}$. It
follows that the function $f$ is increasing in $a$ for constant $n$, and thus
$W(C_{n,3})-W(G)\geq f(n,3)=\frac{1}{24}\big{(}2n^{2}-37n+99\big{)},$
which is greater than $0$ for $n\geq 26$. This completes the proof of Theorem
2. $\Box$
## 5 Eulerian Graphs with Small Wiener Index
A natural question that arises in the context of the Wiener index of
Eulerian graphs is how small the Wiener index of an Eulerian graph can be. For
Eulerian graphs of given order, this was answered in [15].
###### Proposition 1 (Gutman, Cruz and Rada [15]).
Let $G$ be an Eulerian graph of order $n$, where $n\geq 3$. Then
$W(G)\geq\left\\{\begin{array}[]{cc}\binom{n}{2}&\textrm{if $n$ is odd,}\\\
\binom{n}{2}+\frac{n}{2}&\textrm{if $n$ is even.}\end{array}\right.$
Equality holds if and only if $G$ is complete (for odd $n$), or $G$ is
obtained from the complete graph by removing the edges of a perfect matching
(for even $n$).
Finding the minimum value of the Wiener index of Eulerian graphs becomes more
challenging if not only the order, but also the size of the graph is
considered. We have the following lower bound on the Wiener index due to
Plesník [20].
###### Proposition 2.
Let $G$ be a connected graph with $n$ vertices and $m$ edges. Then
$W(G)\geq 2\binom{n}{2}-m.$
Equality holds if and only if the diameter of $G$ is at most $2$.
Proposition 2 yields a lower bound on the Wiener index of Eulerian graphs of
given order and size. However, if $m$ is so small relative to $n$ that there
is no Eulerian graph of diameter two of order $n$ and size $m$, then this
bound is not sharp. The following result determines the minimum size of an
Eulerian graph of order $n$ and diameter $2$. In the proof we use the fact
that the minimum size of a graph of order $n$ and diameter $2$ not containing
a vertex of degree $n-1$ is $2n-5$, which was proved by Erdős and Rényi [9],
see also [16].
###### Proposition 3.
Let $G$ be an Eulerian graph of order $n$ and diameter two. Then
$m(G)\geq\left\\{\begin{array}[]{cc}\frac{3}{2}(n-1)&\textrm{if $n$ is
odd,}\\\ 2n-5&\textrm{if $n$ is even.}\end{array}\right.$
This bound is sharp for $n\geq 9$.
Proof: First let $n$ be even. Since $G$ contains only vertices of even degree,
$G$ has no vertex of degree $n-1$. The above-mentioned result by Erdős and
Rényi [9] now proves that $m(G)\geq 2n-5$, as desired.
To see that the bound is sharp consider the graph obtained from a triangle
with vertices $a$, $b$ and $c$ and a star $K_{1,n-4}$ by joining two of the
leaves of the star to $a$, joining two other leaves to $b$, and joining the
remaining $n-8$ leaves to $c$. (We note that this graph was already described
in [16].)
Now let $n$ be odd. If $G$ contains no vertex of degree $n-1$, then we have
$m\geq 2n-5$ as above, and the result follows. If $G$ contains a vertex of
degree $n-1$, then all other vertices have degree at least $2$, and so the
degree sum of $G$ is at least $n-1+2(n-1)$, and so $m\geq\frac{3}{2}(n-1)$, as
desired.
The graph obtained from $\frac{n-1}{2}$ copies of the graph $K_{2}$ by adding
a new vertex and joining it to each of the $n-1$ vertices shows that the
bound is sharp. $\Box$
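The two extremal constructions in the proof are easily assembled and checked by computer. The sketch below (our own illustration, assuming networkx; the helper names are hypothetical) confirms the order, size, diameter and Eulerian property for one even and one odd order.

```python
# Sketch: build the extremal graphs from the proof of Proposition 3 and check
# order, size, diameter two, and that all degrees are even (Eulerian).
import networkx as nx

def even_extremal(n):
    """Triangle a,b,c plus a star K_{1,n-4}; two leaves joined to a, two to b,
    and the remaining n-8 leaves to c. Size 2n-5."""
    G = nx.Graph()
    a, b, c, center = "a", "b", "c", "s"
    G.add_edges_from([(a, b), (b, c), (c, a)])
    leaves = list(range(n - 4))
    G.add_edges_from((center, v) for v in leaves)
    G.add_edges_from((a, v) for v in leaves[:2])
    G.add_edges_from((b, v) for v in leaves[2:4])
    G.add_edges_from((c, v) for v in leaves[4:])
    return G

def odd_extremal(n):
    """(n-1)/2 disjoint edges plus a new vertex joined to all other vertices.
    Size 3(n-1)/2."""
    G = nx.Graph((2 * i, 2 * i + 1) for i in range((n - 1) // 2))
    G.add_edges_from(("hub", v) for v in range(n - 1))
    return G

for G, n, size in [(even_extremal(12), 12, 2 * 12 - 5),
                   (odd_extremal(11), 11, 3 * (11 - 1) // 2)]:
    assert G.number_of_nodes() == n and G.number_of_edges() == size
    assert nx.diameter(G) == 2 and nx.is_eulerian(G)
print("extremal graphs verified")
```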
This leads naturally to the following question which we pose as a problem.
###### Question 1.
Let $n\geq 9$, and let $m<2n-5$ if $n$ is even and $m<\frac{3}{2}(n-1)$ if $n$
is odd. What is the minimum Wiener index of an Eulerian graph of order $n$ and
size $m$, and which graphs attain it?
## References
* [1] A. Alochukwu, P. Dankelmann, Wiener index in graphs with given minimum degree and maximum degree. ArXiv preprint arXiv:2011.13970 (2020).
* [2] R.A. Beezer, J.F. Riegsecker, B.A. Smith, Using minimum degree to bound average distance. Discrete Math. 226 no. 1-3 (2001), 365-371.
* [3] S. Bessy, F. Dross, M. Knor, R. Škrekovski, Graphs with the second and third maximum Wiener indices over the $2$-vertex connected graphs. Discrete Appl. Math. 284 (2020), 195-200.
* [4] S. Bessy, F. Dross, M. Knor, R. Škrekovski, The structure of graphs with given number of blocks and maximum Wiener index. J. Combin. Optim. 39 no. 1 (2020), 170-184.
* [5] S. Bessy, F. Dross, K. Hriňáková, M. Knor, R. Škrekovski, Maximal Wiener index for graphs with prescribed number of blocks. Appl. Math. Comput. 380 (2020), article 125274.
* [6] E. Cela, N.S. Schmuck, S. Wimer, G.J. Woeginger, The Wiener maximum quadratic assignment problem. Discrete Optim. 8 no. 3 (2011), 411-416.
* [7] P. Dankelmann, R.C. Entringer, Average distance, minimum degree and spanning trees. J. Graph Theory 33 no. 1 (2000), 1-13.
* [8] P. Dankelmann, S. Mukwembi, H.C. Swart, Average distance and vertex connectivity. J. Graph Theory 62 (2009), 157-177.
* [9] P. Erdős, A. Rényi, On a problem in the theory of graphs (Hungarian. Russian, English summaries). Magyar Tud. Akad. Mat. Kutató Int. Közl. 7 (1962), 623-641.
* [10] A.A. Dobrynin, R.C. Entringer, I. Gutman, Wiener index of trees: theory and applications. Acta Applicandae Mathematicae 66 no. 3 (2001), 211-249.
* [11] P. Erdős, J. Pach, J. Spencer, On the mean distance between points of a graph. Congr. Numer. 64 (1988), 121-124.
* [12] M. Fischermann, A. Hoffmann, D. Rautenbach, L.A Székely, L. Volkmann, Wiener index versus maximum degree in trees. Discrete Appl. Math. 122 no. 1-3 (2002), 127-137.
* [13] B. Furtula, Odd-vertex-degree trees maximizing Wiener index. Kragujevac J. Math. 37 (2013), 129-134.
* [14] B. Furtula, I. Gutman, H. Lin, More trees with all degrees odd having extremal Wiener index. MATCH Commun. Math. Comput. Chem. 70 (2013), 293-296.
* [15] I. Gutman, R. Cruz, J. Rada, Wiener index of Eulerian graphs. Discrete Appl. Math. 162 (2014), 247-250.
* [16] M.A. Henning, J. Southey, A characterization of the non-trivial diameter two graphs of minimum size. Discrete Appl. Math. 187 (2015), 91-95.
* [17] M. Kouider, P. Winkler, Mean distance and minimum degree. J. Graph Theory 25 no. 1 (1997), 95-99.
* [18] H. Lin, Extremal Wiener index of trees with all degrees odd. MATCH Commun. Math. Comput. Chem. 70 (2013), 287-292.
* [19] H. Lin, Extremal Wiener index of trees with given number of vertices of even degree. MATCH Commun. Math. Comput. Chem. 72 (2014), 311-320.
* [20] J. Plesník, On the sum of all distances in a graph or digraph. J. Graph Theory 8 no. 1, (1984), 1-21.
* [21] D.H. Rouvray, The rich legacy of half century of the Wiener index. In: Topology in Chemistry - Discrete Mathematics of Molecules, D.H. Rouvray and R.B. King (eds.) Horwood, Chichester (2002), 16-37.
* [22] N.S. Schmuck, S.G. Wagner, H. Wang, Greedy trees, caterpillars, and Wiener-type graph invariants. MATCH Commun. Math. Comput. Chem. 68 (2012), 273-292.
* [23] D. Stevanović, Maximizing Wiener index of graphs with fixed maximum degree. MATCH Commun. Math. Comput. Chem. 60 (2008), 71-83.
* [24] H. Wang, The extremal values of the Wiener index of a tree with given degree sequence. Discrete Appl. Math. 156 no. 14 (2008), 2647-2654.
* [25] H. Wiener, Structural determination of paraffin boiling points. J. Amer. Chem. Soc. 69 (1947), 17-20.
# Structured Time-Delay Models for Dynamical Systems
with Connections to Frenet-Serret Frame
Seth M. Hirsh, Department of Physics, University of Washington, Seattle, WA
([email protected]). Sara M. Ichinaga, Applied and Computational Mathematical
Sciences Program, University of Washington, Seattle, WA ([email protected]).
Steven L. Brunton, Department of Mechanical Engineering, University of
Washington, Seattle, WA. J. Nathan Kutz, Department of Applied Mathematics,
University of Washington, Seattle, WA. Bingni W. Brunton, Department of
Biology, University of Washington, Seattle, WA ([email protected]).
###### Abstract
Time-delay embeddings and dimensionality reduction are powerful techniques for
discovering effective coordinate systems to represent the dynamics of physical
systems. Recently, it has been shown that models identified by dynamic mode
decomposition (DMD) on time-delay coordinates provide linear representations
of strongly nonlinear systems, in the so-called Hankel alternative view of
Koopman (HAVOK) approach. Curiously, the resulting linear model has a matrix
representation that is approximately antisymmetric and tridiagonal with a zero
diagonal; for chaotic systems, there is an additional forcing term in the last
component. In this paper, we establish a new theoretical connection between
HAVOK and the Frenet-Serret frame from differential geometry, and also develop
an improved algorithm to identify more stable and accurate models from less
data. In particular, we show that the sub- and super-diagonal entries of the
linear model correspond to the intrinsic curvatures in Frenet-Serret frame.
Based on this connection, we modify the algorithm to promote this
antisymmetric structure, even in the noisy, low-data limit. We demonstrate
this improved modeling procedure on data from several nonlinear synthetic and
real-world examples.
Keywords: Dynamic mode decomposition, Time-delay coordinates, Frenet-Serret,
Koopman operator, Hankel matrix.
## 1 Introduction
Discovering meaningful models of complex, nonlinear systems from measurement
data has the potential to improve characterization, prediction, and control.
Focus has increasingly turned from first-principles modeling towards data-
driven techniques to discover governing equations that are as simple as
possible while accurately describing the data [1, 2, 3, 4]. However, available
measurements may not be in the right coordinates for which the system admits a
simple representation. Thus, considerable effort has gone into learning
effective coordinate transformations of the measurement data [5, 6, 7],
especially those that allow nonlinear dynamics to be approximated by a linear
system. These coordinates are related to eigenfunctions of the Koopman
operator [8, 9, 10, 11, 12, 13], with dynamic mode decomposition (DMD) [14]
being the leading computational algorithm for high-dimensional spatiotemporal
data [11, 15, 13]. For low-dimensional data, time-delay embedding [16] has
been shown to provide accurate linear models of nonlinear systems [5, 17, 18].
Linear time-delay models have a rich history [19, 20], and recently, DMD on
delay coordinates [15, 21] has been rigorously connected to these linearizing
coordinate systems in the Hankel alternative view of Koopman (HAVOK) approach
[5, 17, 7]. In this work, we establish a new connection between HAVOK and the
Frenet-Serret frame from differential geometry, which inspires an extension to
the algorithm that improves the stability of these models.
Time-delay embedding is a widely used technique to characterize dynamical
systems from limited measurements. In delay embedding, incomplete measurements
are used to reconstruct a representation of the latent high-dimensional system
by augmenting the present measurement with a time-history of previous
measurements. Takens showed that under certain conditions, time-delay
embedding produces an attractor that is diffeomorphic to the attractor of the
latent system [16]. Time-delay embeddings have also been extensively used for
signal processing and modeling [20, 19, 22, 23, 24, 25, 26, 27], for example,
in singular spectrum analysis (SSA) [19, 22] and the eigensystem realization
algorithm (ERA) [20]. In both cases, a time history of augmented delay vectors
are arranged as columns of a Hankel matrix, and the singular value
decomposition (SVD) is used to extract _eigen_ -time-delay coordinates in a
dimensionality reduction stage. More recently, these historical approaches
have been connected to the modern DMD algorithm [15], and it has become
commonplace to compute DMD models on time delay coordinates [15, 21]. The
HAVOK approach established a rigorous connection between DMD on delay
coordinates and eigenfunctions of the Koopman operator [5]; HAVOK [5] is also
referred to as Hankel DMD [17] or delay DMD [15].
HAVOK produces linear models where the matrix representation of the dynamics
has a peculiar and particular structure. These matrices tend to be skew-
symmetric and dominantly tridiagonal, with zero diagonal (see Fig. 2 for an
example). In the original HAVOK paper, this structure was observed in some
systems, but not others, with the structure being more pronounced in noise-
free examples with an abundance of data. It has been unclear how to interpret
this structure and whether or not it is a universal feature of HAVOK models.
Moreover, the eigen-time-delay modes closely resemble Legendre polynomials;
these polynomials were explored further in Kamb et al. [28]. The present work
directly resolves this mysterious structure by establishing a connection to
the Frenet-Serret frame from differential geometry.
The structure of HAVOK models may be understood by introducing intrinsic
coordinates from differential geometry [29]. One popular set of intrinsic
coordinates is the Frenet-Serret frame, which is formed by applying the Gram-
Schmidt procedure to the derivatives of the trajectory
$\dot{\bm{x}}(t),\ddot{\bm{x}}(t),\dddot{\bm{x}}(t),\ldots$ [30, 31, 32].
Alvarez-Vizoso et al. [33] showed that the SVD of trajectory data converges
locally to the Frenet-Serret frame in the limit of an infinitesimal time step.
The Frenet-Serret frame results in an orthogonal basis of polynomials, which
we will connect to the observed Legendre basis of HAVOK [5, 28]. Moreover, we
show that the dynamics, when represented in these coordinates, have the same
tridiagonal structure as the HAVOK models. Importantly, the terms along the
sub- and super-diagonals have a specific physical interpretation as intrinsic
curvatures. By enforcing this structure, HAVOK models are more robust to noisy
and limited data.
In this work, we present a new theoretical connection between time-delay
embedding models and the Frenet-Serret frame from differential geometry. Our
unifying perspective sheds light on the antisymmetric, tridiagonal structure
of the HAVOK model. We use this understanding to develop _structured_ HAVOK
models that are more accurate for noisy and limited data. Section 2 provides a
review of dimensionality reduction methods, time delay embeddings, and the
Frenet-Serret frame. This section also discusses current connections between
these fields. In Section 3, we establish the main result of this work,
connecting linear time-delay models with the Frenet-Serret frame, explaining
the tridiagonal, antisymmetric structure seen in Figure 2. We then illustrate
this theory on a synthetic example. In Section 4, we explore the limitations
and requirements of the theory, giving recommendations for achieving this
structure in practice. In Section 5, based on this theory, we develop a
modified HAVOK method, called _structured_ HAVOK (sHAVOK), which promotes
tridiagonal, antisymmetric models. We demonstrate this approach on three
nonlinear synthetic examples and two real-world datasets, namely measurements
of a double pendulum experiment and measles outbreak data, and show that
sHAVOK yields more stable and accurate models from significantly less data.
Figure 1: In this work, we unify key results from dimensionality reduction,
time-delay embedding and the Frenet-Serret frame to show that a dynamical
system may be decomposed into a sparse linear model plus a forcing term.
Further, this linear model has a particular structure: it is an antisymmetric
tridiagonal matrix with nonzero elements only along the super- and sub-
diagonals. These nonzero elements are interpretable as they are intrinsic
curvatures of the system in the Frenet-Serret frame.
## 2 Related Work
Our work relates and extends results from three fields: dimensionality
reduction, time-delay embedding, and the Frenet-Serret coordinate frame from
differential geometry. There is an extensive literature on each of these
fields, and here we give a brief introduction of the related work to establish
a common notation on which we build a unifying framework in Section 3.
### 2.1 Dimensionality Reduction
Recent advancements in sensor and measurement technologies have led to a
significant increase in the collection of time-series data from complex,
spatio-temporal systems. Although such data is typically high dimensional, in
many cases it can be well approximated with a low dimensional representation.
One central goal is to learn the underlying structure of this data. Although
there are many data-driven dimensionality reduction methods, here we focus on
linear techniques because of their effectiveness and analytic tractability. In
particular, given a data matrix $\bm{X}\in\mathbb{R}^{m\times n}$, the goal of
these techniques is to decompose $\bm{X}$ into the matrix product
$\bm{X}=\bm{U}\bm{V}^{\intercal},$ (1)
where $\bm{U}\in\mathbb{R}^{m\times k}$ and $\bm{V}\in\mathbb{R}^{n\times k}$
are low rank ($k<\min(m,n)$). The task of solving for $\bm{U}$ and $\bm{V}$ is
highly underdetermined, and different solutions may be obtained when different
assumptions are made.
Here we review two popular linear dimensionality reduction techniques:
singular value decomposition (SVD) [34, 35] and dynamic mode decomposition
(DMD) [36, 15, 13]. Both of these methods are key components of the HAVOK
algorithm and play a key role in determining the underlying tridiagonal
antisymmetric structure in Figure 2.
#### 2.1.1 Singular Value Decomposition (SVD)
The SVD is one of the most popular dimensionality reduction methods, and it
has been applied in a wide range of applications, including genomics [37],
physics [38], and image processing [39]. SVD is the underlying algorithm for
principal component analysis (PCA).
Given the data matrix $\bm{X}\in\mathbb{R}^{m\times n}$, the SVD decomposes
$\bm{X}$ into the product of three matrices,
$\bm{X}=\bm{U}\bm{\Sigma}\bm{V}^{\intercal},$
where $\bm{U}\in\mathbb{R}^{m\times m}$ and $\bm{V}\in\mathbb{R}^{n\times n}$
are unitary matrices, and $\bm{\Sigma}\in\mathbb{R}^{m\times n}$ is a diagonal
matrix with nonnegative entries [34, 35]. We denote the $i$th columns of
$\bm{U}$ and $\bm{V}$ as $\bm{u}_{i}$ and $\bm{v}_{i}$, respectively. The
diagonal elements of $\bm{\Sigma}$, $\sigma_{i}$, are known as the singular
values of $\bm{X}$, and they are written in descending order.
The rank of the data is defined to be $R$, which equals the number of nonzero
singular values. Consider the low rank matrix approximation
$\bm{X}_{r}=\sum_{j=1}^{r}\bm{u}_{j}\sigma_{j}\bm{v}_{j}^{T},$
with $r\leq R$. An important property of $\bm{X}_{r}$ is that it is the best
rank $r$ approximation to $\bm{X}$ in the least squares sense. In other words,
$\bm{X}_{r}=\operatorname*{argmin}_{\bm{Y}}\left\lVert\bm{X}-\bm{Y}\right\rVert\quad\text{such
that}\text{ rank}(\bm{Y})=r,$
with respect to both the $l_{2}$ and Frobenius norms. Further, the relative
error in this rank-$r$ approximation using the $l_{2}$ norm is
$\frac{\left\lVert\bm{X}-\bm{X}_{r}\right\rVert_{l_{2}}}{\left\lVert\bm{X}\right\rVert_{l_{2}}}=\frac{\sigma_{r+1}}{\sigma_{1}}.$
(2)
From (2), we immediately see that if the singular values decay rapidly,
($\sigma_{j+1}\ll\sigma_{j}$), then $\bm{X}_{r}$ is a good low-rank
approximation to $\bm{X}$. This property makes the SVD a popular tool for
compressing data.
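As a concrete illustration, the short numpy sketch below (our own; the matrix is synthetic) computes the rank-$r$ truncation and confirms the error identity (2).

```python
# Sketch: best rank-r approximation via the SVD and the error identity (2).
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 50, 200, 5
X = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # exactly rank r
X += 1e-3 * rng.standard_normal((m, n))                        # small noise

U, s, Vt = np.linalg.svd(X, full_matrices=False)
Xr = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]                     # rank-r truncation

rel_err = np.linalg.norm(X - Xr, 2) / np.linalg.norm(X, 2)     # spectral norm
print(rel_err, s[r] / s[0])                                    # agree, as in (2)
```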
#### 2.1.2 Dynamic Mode Decomposition (DMD)
DMD [14, 15, 13] is another linear dimensionality reduction technique that
incorporates an assumption that the measurements are time series data
generated by a linear dynamical system in time. DMD has become a popular tool
for modeling dynamical systems in such diverse fields, including fluid
mechanics [11, 14], neuroscience [21], disease modeling [40], robotics [41],
plasma modeling [42], resolvent analysis [43], and computer vision [44, 45].
Like the SVD, for DMD we begin with a data matrix
$\bm{X}\in\mathbb{R}^{m\times n}$. Here we assume that our data is generated
by an unknown dynamical system so that the columns of $\bm{X}$,
$\bm{x}(t_{k})$, are time snapshots related by the map
$\bm{x}(t_{k+1})=\bm{F}(\bm{x}(t_{k}))$. While $\bm{F}$ may be nonlinear, the
goal of DMD is to determine the best-fit linear operator
$\bm{A}:\mathbb{R}^{m}\to\mathbb{R}^{m}$ such that
$\bm{x}(t_{k+1})\approx\bm{A}\bm{x}(t_{k}).$
If we define the two time-shifted data matrices,
$\bm{X}_{1}^{n-1}=\begin{bmatrix}|&|&\cdots&|\\\
\bm{x}(t_{1})&\bm{x}(t_{2})&\cdots&\bm{x}(t_{n-1})\\\ |&|&\cdots&|\\\
\end{bmatrix}\text{, and }\bm{X}_{2}^{n}=\begin{bmatrix}|&|&\cdots&|\\\
\bm{x}(t_{2})&\bm{x}(t_{3})&\cdots&\bm{x}(t_{n})\\\ |&|&\cdots&|\\\ \end{bmatrix},$
then we can equivalently define $\bm{A}\in\mathbb{R}^{m\times m}$ to be the
operator such that
$\bm{X}_{2}^{n}\approx\bm{A}\bm{X}_{1}^{n-1}.$
It follows that $\bm{A}$ is the solution to the minimization problem
$\bm{A}=\operatorname*{argmin}_{\bm{A}^{\prime}}\left\lVert\bm{X}_{2}^{n}-\bm{A}^{\prime}\bm{X}_{1}^{n-1}\right\rVert_{F},$
where $\left\lVert\cdot\right\rVert_{F}$ denotes the Frobenius norm.
A unique solution to this problem can be obtained using the exact DMD method
and the Moore-Penrose pseudo-inverse
$\hat{\bm{A}}=\bm{X}_{2}^{n}\left(\bm{X}_{1}^{n-1}\right)^{\dagger}$ [15, 13].
Alternative algorithms have been shown to perform better for noisy measurement
data, including optimized DMD [46], forward-backward DMD [47], and total-least
squares DMD [48].
One key benefit of DMD is that it builds an explicit temporal model and
supports short-term future state prediction. Defining
$\left\\{\lambda_{j}\right\\}$ and $\left\\{\bm{v}_{j}\right\\}$ to be the
eigenvalues and eigenvectors of $\bm{A}$, respectively, then we can write
$\bm{x}(t_{k})=\sum_{j=1}^{r}\bm{v}_{j}e^{\omega_{j}t_{k}},$ (3)
where $\omega_{j}=\ln(\lambda_{j})/\Delta t$ are eigenvalues normalized by the
sampling interval $\Delta t$, and the eigenvectors are normalized such that
$\sum_{j=1}^{r}\bm{v}_{j}=\bm{x}(t_{1})$. Thus, to compute the state at an
arbitrary time $t$, we can simply evaluate (3) at that time. Further, letting
$\bm{v}_{j}$ be the columns of $\bm{U}$ and $\\{\exp(\omega_{j}t_{k})\text{
for }k=1,\ldots,n\\}$ be the columns of $\bm{V}$, then we can express the data in
the form of (1).
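To make the exact-DMD computation concrete, here is a minimal numpy/scipy sketch (our own illustration; the system and parameter values are arbitrary). It fits $\hat{\bm{A}}$ by the pseudo-inverse and reconstructs the data via the expansion (3); note that we carry explicit mode amplitudes $b_{j}$ rather than normalizing the eigenvectors.

```python
# Sketch: exact DMD on data from a known linear system, plus reconstruction (3).
import numpy as np
from scipy.linalg import expm

dt, n = 0.01, 500
t = dt * np.arange(n)
A_true = np.array([[0.0, 1.0], [-4.0, 0.0]])            # harmonic oscillator
X = np.stack([expm(A_true * tk) @ np.array([1.0, 0.0]) for tk in t], axis=1)

X1, X2 = X[:, :-1], X[:, 1:]                            # time-shifted matrices
A_hat = X2 @ np.linalg.pinv(X1)                         # best-fit linear map

lam, V = np.linalg.eig(A_hat)
omega = np.log(lam) / dt                                # continuous eigenvalues
b = np.linalg.solve(V, X[:, 0])                         # amplitudes at t_1 = 0
X_dmd = (V * b) @ np.exp(np.outer(omega, t))            # evaluate (3) at all t
print(np.max(np.abs(X - X_dmd)))                        # near machine precision
```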
### 2.2 Time Delay Embedding
Suppose we are interested in a dynamical system
$\frac{d\bm{\xi}}{dt}=\bm{F}(\bm{\xi}),$
where $\bm{\xi}(t)\in\mathbb{R}^{l}$ are states whose dynamics are governed by
some unknown nonlinear differential equation. Typically, we measure some
possibly nonlinear projection of $\bm{\xi}$,
$\bm{x}(\bm{\xi})\in\mathbb{R}^{d}$ at discrete time points $t=0,\Delta
t,\ldots,q\Delta t$. In general, the dimensionality of the underlying dynamics
is unknown, and the choice of measurements are limited by practical
constraints. Consequently, it is difficult to know whether the measurements
$\bm{x}$ are sufficient for modeling the system. For example, $d$ may be
smaller than $l$. In this work we are primarily interested in the case of
$d=1$; in other words, we have only a single one-dimensional time series
measurement for the system.
We can construct an embedding of our system using successive time delays of
the measurement $x$, i.e., delayed values $x(t-\tau)$. Given a single measurement of our
dynamical system $x(t)\in\mathbb{R}$, for $t=0,\Delta t,\ldots q\Delta t$, we
can form the Hankel matrix $\bm{H}\in\mathbb{R}^{m\times n}$ by stacking time
shifted snapshots of $x$ [49],
$\bm{H}=\begin{bmatrix}x_{1}&x_{2}&x_{3}&x_{4}&\cdots&x_{n}\\\
x_{2}&x_{3}&x_{4}&x_{5}&\cdots&x_{n+1}\\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots\\\
x_{m}&x_{m+1}&x_{m+2}&x_{m+3}&\cdots&x_{q+1}\end{bmatrix}.$ (4)
Each column may be thought of as an augmented state space that includes a
short, $m$-dimensional trajectory in time. Our data matrix $\bm{H}$ is then
this $m$-dimensional trajectory measured over $n$ snapshots in time.
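For concreteness, the Hankel matrix (4) can be assembled in a few lines of numpy (a sketch of our own; scipy.linalg.hankel offers an equivalent constructor):

```python
# Sketch: build the Hankel matrix (4) from a scalar time series x.
import numpy as np

def hankel_matrix(x, m):
    """Stack m time-shifted copies of x; column j is the delay vector at time j."""
    n = len(x) - m + 1
    return np.stack([x[i:i + n] for i in range(m)], axis=0)  # H[i, j] = x_{i+j}

x = np.sin(0.1 * np.arange(100))
H = hankel_matrix(x, m=10)
print(H.shape)   # (10, 91): an m-dimensional trajectory over n snapshots
```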
There are several key benefits of using time delay embeddings. Most notably,
given a chaotic attractor, Takens’ embedding theorem states that a
sufficiently high dimensional time delay embedding of the system is
diffeomorphic to the original attractor [16], as illustrated in Figure 1. In
addition, recent results have shown that time delay matrices are guaranteed to
have strongly decaying singular value spectra. In particular, Beckermann et al.
[50] prove the following theorem:
###### Theorem 1.
Let $\bm{H}_{n}\in\mathbb{R}^{n\times n}$ be a positive definite Hankel
matrix, with singular values $\sigma_{1},\ldots,\sigma_{n}$. Then
$\sigma_{j}\leq C\rho^{-j/\log{n}}\sigma_{1}$ for constants $C$ and $\rho$ and
for $j=1,\ldots,n$.
Equivalently, $\bm{H}_{n}$ can be approximated up to an accuracy of
$\epsilon\left\lVert\bm{H}_{n}\right\rVert_{2}$ by a rank
$\mathcal{O}(\log{n}\log{1/\epsilon})$ matrix. From this, we see that
$\bm{H}_{n}$ can be well-approximated by a low-rank matrix.
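A quick numerical illustration of this decay (our own sketch, not from [50]; the two-tone signal is arbitrary) is given below.

```python
# Sketch: singular values of a Hankel matrix built from a two-tone signal
# decay rapidly (here the numerical rank is exactly 4, two per sinusoid).
import numpy as np

x = np.sin(0.07 * np.arange(400)) + 0.5 * np.sin(0.19 * np.arange(400))
H = np.stack([x[i:i + 301] for i in range(100)], axis=0)   # 100 x 301 Hankel
s = np.linalg.svd(H, compute_uv=False)
print(s[:8] / s[0])   # only the first four ratios are non-negligible
```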
Many methods have been developed to take advantage of this structure of the
Hankel matrix, including the eigensystem realization algorithm (ERA) [20],
singular spectrum analysis (SSA) [19], and nonlinear Laplacian spectrum
analysis [22]. DMD may also be computed on delay coordinates from the Hankel
matrix [15, 51, 21], and it has been shown that this approach may provide a
Koopman invariant subspace [52, 5]. In addition, this structure has also been
incorporated into neural network architectures [53].
### 2.3 HAVOK: Dimensionality Reduction and Time Delay Embeddings
Leveraging dimensionality reduction and time delay embeddings, the Hankel
alternative view of Koopman (HAVOK) algorithm constructs low dimensional
models of dynamical systems [5]. Specifically, HAVOK learns effective
measurement coordinates of the system and estimate its intrinsic
dimensionality. Remarkably, HAVOK models are simple, consisting of a linear
model and a forcing term that can be used for short term forecasting.
Figure 2: Outline of steps in the HAVOK method. First, given a dynamical system, a
single variable $x(t)$ is measured. Time-shifted copies of $x(t)$ are stacked
to form a Hankel matrix $\bm{H}$. The singular value decomposition (SVD) is
applied to $\bm{H}$, producing a low dimensional representation $\bm{V}$. The
dynamic mode decomposition (DMD) is then applied to $\bm{V}$ to form a linear
dynamical model and a forcing term.
We illustrate this method in Figure 2 for the Lorenz system (see section 5.2
for details about this system). To do so, we begin with a one dimensional time
series $x(t)$ for $t=0,\Delta t,\ldots,q\Delta t$. We construct a higher
dimensional representation using time delay embeddings, producing a Hankel
matrix $\bm{H}\in\mathbb{R}^{m\times n}$ as in (4), and compute its SVD,
$\bm{H}=\bm{U}\bm{\Sigma}\bm{V}^{\intercal}.$
If $\bm{H}$ is sufficiently low rank (with rank $r$), then we need only
consider the reduced SVD,
$\bm{H}_{r}=\bm{U}_{r}\bm{\Sigma}_{r}\bm{V}_{r}^{\intercal},$
where $\bm{U}_{r}\in\mathbb{R}^{m\times r}$ and
$\bm{V}_{r}\in\mathbb{R}^{n\times r}$ are orthogonal matrices and
$\bm{\Sigma}_{r}\in\mathbb{R}^{r\times r}$ is diagonal. Rearranging the terms,
$\bm{V}_{r}^{\intercal}=\bm{\Sigma}_{r}^{-1}\bm{U}_{r}^{\intercal}\bm{H}_{r}$
and we can think of
$\bm{V}_{r}^{\intercal}=\begin{bmatrix}\bm{v}_{1}&\bm{v}_{2}&\cdots&\bm{v}_{n}\end{bmatrix}$
(5)
as a lower dimensional representation of our high dimensional trajectory. For
quasi-periodic systems, the SVD of the Hankel matrix results in
principal component trajectories (PCT) [54], which reconstruct dynamical
trajectories in terms of periodic orbits.
To discover the linear dynamics, we apply DMD. In particular, we construct the
time shifted matrices,
$\bm{V}_{1}=\begin{bmatrix}\bm{v}_{1}&\bm{v}_{2}&\cdots&\bm{v}_{n-1}\end{bmatrix}\mbox{
and
}\bm{V}_{2}=\begin{bmatrix}\bm{v}_{2}&\bm{v}_{3}&\cdots&\bm{v}_{n}\end{bmatrix}.$
(6)
We then compute the linear approximation $\hat{\bm{A}}$ such that
$\bm{V}_{2}\approx\hat{\bm{A}}\bm{V}_{1}$, where
$\hat{\bm{A}}=\bm{V}_{2}\bm{V}_{1}^{\dagger}$. This yields a model
$\bm{v}_{i+1}=\hat{\bm{A}}\bm{v}_{i}$.
In the continuous case,
$\dot{\bm{v}}(t)=\bm{A}\bm{v}(t)$ (7)
which is related to first order in $\Delta t$ to the discrete case by
$\bm{A}\approx\left(\hat{\bm{A}}-\bm{I}\right)/\Delta t.$
For a general nonlinear dynamical system, this linear model yields a poor
reconstruction. Instead, [5] proposed a linear model plus a nonlinear forcing
term in the last component of $\bm{v}$ (Figure 2):
$\dot{\bm{v}}(t)=\bm{A}\bm{v}(t)+\bm{B}v_{r}(t),$ (8)
where $\bm{v}(t)\in\mathbb{R}^{r-1}$, $\bm{A}\in\mathbb{R}^{r-1\times r-1}$,
and $\bm{B}\in\mathbb{R}^{r-1}$. In this case, $\bm{V}_{2}$ is defined as
columns $2$ to $n$ of the SVD singular vectors with an $r-1$ rank truncation
$\bm{V}_{r-1}^{\intercal}$. $\hat{\bm{A}}\in\mathbb{R}^{r-1\times r-1}$ and
$\hat{\bm{B}}\in\mathbb{R}^{r-1\times 1}$ are computed as
$\left[\hat{\bm{A}},\hat{\bm{B}}\right]=\bm{V}_{2}\bm{V}_{1}^{\dagger}$. The
continuous analog of $\hat{\bm{B}}$, $\bm{B}$, is computed by
$\bm{B}\approx(\hat{\bm{B}}-\bm{I})/\Delta t$.
HAVOK was shown to be a successful model for a variety of systems, including a
double pendulum and switchings of Earth’s magnetic field. In addition, the
linear portion of the HAVOK model has been observed to adopt a very particular
structure: the dynamics matrix was antisymmetric, with nonzero elements only
on the superdiagonal and subdiagonal (Figure 2).
Much work has been done to study the properties of HAVOK. Arbabi et al. [17]
showed that, in the limit of an infinite number of time delays ($m\to\infty$),
$\bm{A}$ converges to the Koopman operator for ergodic systems. Bozzo et al.
[55] showed that in a similar limit, for periodic data, HAVOK converges to the
temporal discrete Fourier transform. Kamb et al. [28] connect HAVOK to the
use of convolutional coordinates. The primary goal of this current work is to
connect HAVOK to the concept of curvature in differential geometry, and with
these new insights, improve the HAVOK algorithm to take advantage of this
structure in the dynamics matrix. In contrast with much of the previous work,
we focus on the limit where only small amounts of noisy data are available.
### 2.4 The Frenet-Serret Coordinate Frame
Suppose we have a smooth curve $\bm{\gamma}(t)\in\mathbb{R}^{m}$ measured over
some time interval $t\in[a,b]$. As before, we would like to determine an
effective set of coordinates in which to represent our data. When using SVD or
DMD, the basis discovered corresponds to the spatial modes of the data and is
constant in time. However, for many systems, it is sometimes natural to
express both the coordinates and basis as functions of time [56, 57]. One
popular method for developing this noninertial frame is the Frenet-Serret
coordinate system, which has been applied in a wide range of fields, including
robotics [58, 59], aerodynamics [60], and general relativity [61, 62].
Let us assume that $\bm{\gamma}(t)$ has $r$ nonzero continuous derivatives,
$\bm{\gamma}^{\prime}(t),\bm{\gamma}^{\prime\prime}(t),\ldots,\bm{\gamma}^{(r)}(t)$.
We further assume that these derivatives are linearly independent and
$\left\lVert\bm{\gamma}^{\prime}(t)\right\rVert\neq 0$ for all $t$. Using
the Gram-Schmidt process, we can form the orthonormal basis,
$\bm{e}_{1},\bm{e}_{2},\ldots,\bm{e}_{r}$,
$\displaystyle\begin{split}\bm{e}_{1}(t)&=\frac{\bm{\gamma}^{\prime}(t)}{\left\lVert\bm{\gamma}^{\prime}(t)\right\rVert},\\\
\bm{e}_{2}(t)&=\frac{\bm{\gamma}^{\prime\prime}(t)-\langle\bm{\gamma}^{\prime\prime}(t),\bm{e}_{1}(t)\rangle\bm{e}_{1}(t)}{\left\lVert\bm{\gamma}^{\prime\prime}(t)-\langle\bm{\gamma}^{\prime\prime}(t),\bm{e}_{1}(t)\rangle\bm{e}_{1}(t)\right\rVert},\\\
&\leavevmode\nobreak\ \kern 1.66672pt\vdots\\\
\bm{e}_{r}(t)&=\frac{\bm{\gamma}^{(r)}(t)-\sum_{k=1}^{r-1}\langle\bm{\gamma}^{(r)}(t),\bm{e}_{k}(t)\rangle\bm{e}_{k}(t)}{\left\lVert\bm{\gamma}^{(r)}(t)-\sum_{k=1}^{r-1}\langle\bm{\gamma}^{(r)}(t),\bm{e}_{k}(t)\rangle\bm{e}_{k}(t)\right\rVert}.\end{split}$
(9)
Here $\langle\cdot,\cdot\rangle$ denotes an inner product, and we choose
$r\leq m$ so that these vectors are linearly independent and hence form an
orthonormal basis. This set of basis vectors defines the Frenet-Serret
frame.
To derive the evolution of this basis, let us define the matrix formed by
stacking these vectors
$\bm{Q}(t)=[\bm{e}_{1}(t),\bm{e}_{2}(t),\ldots,\bm{e}_{r}(t)]^{\intercal}\in\mathbb{R}^{r\times
m}$, so that $\bm{Q}(t)$ satisfies the following time-varying linear dynamics,
$\frac{d\bm{Q}}{dt}=\left\lVert\bm{\gamma}^{\prime}(t)\right\rVert\bm{K}(t)\bm{Q},$
(10)
where $\bm{K}(t)\in\mathbb{R}^{r\times r}$.
By factoring out the term $\left\lVert\bm{\gamma}^{\prime}(t)\right\rVert$
from $\bm{K}(t)$, it is guaranteed that $\bm{K}(t)$ does not depend on the
parametrization of the curve (i.e. the speed of the trajectory), but only on
its geometry. The matrix $\bm{K}(t)$ is highly structured and sparse; the
nonzero elements $\kappa_{i}(t)$ of $\bm{K}(t)$ are defined to be the curvatures of the
trajectory. The curvatures $\kappa_{i}(t)$ combined with the basis vectors
$\bm{e}_{i}(t)$ define the Frenet-Serret apparatus, which fully characterizes
the trajectory up to translation [33].
To understand the structure of $\bm{K}(t)$ we derive two key properties:
1. 1.
$\bm{K}_{i,j}(t)=-\bm{K}_{j,i}(t)$ (antisymmetry):
###### Proof.
Since the rows of $\bm{Q}(t)$ are orthonormal by construction, we have
$\bm{QQ}^{\intercal}=\bm{I}$. Taking the derivative with respect to $t$,
$\frac{d\bm{Q}}{dt}\bm{Q}^{\intercal}+\bm{Q}\frac{d\bm{Q}^{\intercal}}{dt}=0$, or
equivalently
$\frac{d\bm{Q}}{dt}\bm{Q}^{\intercal}=-\left(\frac{d\bm{Q}}{dt}\bm{Q}^{\intercal}\right)^{\intercal}.$
Right-multiplying (10) by $\bm{Q}^{\intercal}$ and using $\bm{QQ}^{\intercal}=\bm{I}$ gives
$\bm{K}(t)=\frac{1}{\left\lVert\bm{\gamma}^{\prime}(t)\right\rVert}\frac{d\bm{Q}}{dt}\bm{Q}^{\intercal},$
from which we immediately see that $\bm{K}(t)=-\bm{K}(t)^{\intercal}$. ∎
2. 2.
$\bm{K}_{i,j}(t)=0$ for $j\geq i+2$:
We first note that since
$\bm{e}_{i}(t)\in\text{span}\\{\bm{\gamma}^{\prime}(t),\ldots,\bm{\gamma}^{(i)}(t)\\}$,
its derivative must satisfy
$\bm{e}_{i}^{\prime}(t)\in\text{span}\\{\bm{\gamma}^{\prime}(t),\ldots,\bm{\gamma}^{(i+1)}(t)\\}$.
Now by construction, using the Gram-Schmidt method, $\bm{e}_{j}$ is orthogonal
to $\text{span}\\{\bm{\gamma}^{\prime}(t),\ldots,\bm{\gamma}^{(i+1)}(t)\\}$
for $j\geq i+2$. Since $\bm{e}_{i}^{\prime}(t)$ is in the span of this set,
then $\bm{e}_{j}$ must be orthogonal to $\bm{e}^{\prime}_{i}$ for $j\geq i+2$.
Thus, $\bm{K}_{i,j}(t)=\langle\bm{e}_{i}^{\prime}(t),\bm{e}_{j}\rangle=0$ for
$j\geq i+2$.
With these two constraints, $\bm{K}(t)$ takes the form,
$\bm{K}(t)=\begin{bmatrix}0&\kappa_{1}(t)&&0\\\
-\kappa_{1}(t)&\ddots&\ddots&&\\\ &\ddots&0&\kappa_{r-1}(t)\\\
0&&-\kappa_{r-1}(t)&0\end{bmatrix}.$ (11)
Thus $\bm{K}(t)$ is antisymmetric with nonzero elements only along the
superdiagonal and subdiagonal, and the values
$\kappa_{1}(t),\ldots,\kappa_{r-1}(t)$ are defined to be the curvatures of the
trajectory.
From a geometric perspective, $\bm{e}_{1}(t),\ldots,\bm{e}_{r}(t)$ form an
instantaneous (local) coordinate frame, which moves with the trajectory. The
curvatures define how quickly this frame changes with time. If the trajectory
is a straight line the curvatures are all zero. If $\kappa_{1}$ is constant
and nonzero, while all other curvatures are zero, then the trajectory lies on
a circle. If $\kappa_{1}$ and $\kappa_{2}$ are constant and nonzero with all
other curvatures zero, then the trajectory lies on a helix. Comparing the
structure of (11) to Figure 2 we immediately see a similarity. Over the
following sections we will shed light on this connection.
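These definitions are straightforward to verify numerically. The sketch below (our own construction, assuming numpy) applies the Gram-Schmidt procedure (9) to finite-difference derivatives of the helix $(\cos t,\sin t,ct)$ and recovers $\bm{K}(t)$ from $\frac{1}{\lVert\bm{\gamma}^{\prime}\rVert}\frac{d\bm{Q}}{dt}\bm{Q}^{\intercal}$; the exact curvatures of this helix are $\kappa_{1}=1/(1+c^{2})$ and $\kappa_{2}=c/(1+c^{2})$.

```python
# Sketch: Frenet-Serret frame and curvatures of a helix via Gram-Schmidt (9).
import numpy as np

c, h = 0.5, 1e-3
def gamma(t):
    return np.array([np.cos(t), np.sin(t), c * t])

def frame(t):
    # finite-difference derivatives gamma', gamma'', gamma'''
    d = [(gamma(t + h) - gamma(t - h)) / (2 * h),
         (gamma(t + h) - 2 * gamma(t) + gamma(t - h)) / h**2,
         (gamma(t + 2*h) - 2*gamma(t + h) + 2*gamma(t - h) - gamma(t - 2*h))
         / (2 * h**3)]
    Q = []
    for v in d:                              # Gram-Schmidt, as in (9)
        for e in Q:
            v = v - (v @ e) * e
        Q.append(v / np.linalg.norm(v))
    return np.array(Q)                       # rows are e_1(t), e_2(t), e_3(t)

t = 1.0
dQ = (frame(t + h) - frame(t - h)) / (2 * h)
speed = np.linalg.norm((gamma(t + h) - gamma(t - h)) / (2 * h))
K = dQ @ frame(t).T / speed
print(np.round(K, 3))   # antisymmetric, tridiagonal: kappa_1=0.8, kappa_2=0.4
```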
### 2.5 SVD and Curvature
Given time series data, the SVD constructs an orthonormal basis that is fixed
in time, whereas the Frenet-Serret frame constructs an orthonormal basis that
moves with the trajectory. In recent work, Alvarez-Vizoso et al. [33] showed
how these frames are related. In particular, the Frenet-Serret frame converges
to the SVD frame in the limit as the time interval of the trajectory goes to
zero.
To understand this further, consider a trajectory
$\bm{\gamma}(t)\in\mathbb{R}^{m}$ as described in Section 2.4. If we assume
that our measurements are from a small neighborhood $t\in(-\epsilon,\epsilon)$
(where $\epsilon\ll 1$), then $\bm{\gamma}(t)$ is well-approximated by its
Taylor expansion,
$\bm{\gamma}(t)-\bm{\gamma}(0)=\bm{\gamma}^{\prime}(0)t+\frac{\bm{\gamma}^{\prime\prime}(0)}{2}t^{2}+\frac{\bm{\gamma}^{\prime\prime\prime}(0)}{6}t^{3}+\cdots$
Writing this in matrix form, we have that
$\bm{\gamma}(t)-\bm{\gamma}(0)=\underbrace{\begin{bmatrix}|&|&|&|\\\
\bm{\gamma}^{\prime}(0)&\bm{\gamma}^{\prime\prime}(0)&\bm{\gamma}^{\prime\prime\prime}(0)&\cdots\\\
|&|&|&|\end{bmatrix}}_{\bm{\Gamma}}\underbrace{\begin{bmatrix}1&&&\\\
&\frac{1}{2}&&\\\ &&\frac{1}{6}&\\\
&&&\ddots\end{bmatrix}}_{\bm{\Sigma}}\underbrace{\begin{bmatrix}-&t&-\\\
-&t^{2}&-\\\ -&t^{3}&-\\\ -&\vdots&-\end{bmatrix}}_{\bm{T}^{\intercal}}.$ (12)
Recall that one key property of the SVD is that the rank-$r$ truncation of the
expansion is the best rank-$r$ approximation to the data in the least-squares
sense. Since $\epsilon\ll 1$, each subsequent term in this expansion is
much smaller than the previous one,
$\left\lVert\bm{\gamma}^{\prime}(0)t\right\rVert_{2}\ll\left\lVert\frac{\bm{\gamma}^{\prime\prime}(0)}{2}t^{2}\right\rVert_{2}\ll\left\lVert\frac{\bm{\gamma}^{\prime\prime\prime}(0)}{6}t^{3}\right\rVert_{2}\ll\ldots.$
(13)
From this, we see that the expansion in (12) is strongly related to the SVD.
However, in the SVD we have the constraint that the $\bm{U}$ and $\bm{V}$
matrices are orthogonal, while for the Taylor expansion $\bm{\Gamma}$ and
$\bm{T}$ have no such constraint. Alvarez-Vizoso et al. [33] show that in the
limit as $\epsilon\to 0$, $\bm{U}$ is the result of applying the Gram-Schmidt
process to the columns of $\bm{\Gamma}$, and $\bm{V}$ is the result of
applying the Gram-Schmidt process to the columns of $\bm{T}$. Comparing this
to the expansion above, we see that
$\bm{U}=\begin{bmatrix}|&|&|&|\\\
\bm{e}_{1}(0)&\bm{e}_{2}(0)&\bm{e}_{3}(0)&\cdots\\\ |&|&|&|\end{bmatrix}\mbox{
and }\bm{V}=\begin{bmatrix}|&|&|&|\\\ p_{1}(t)&p_{2}(t)&p_{3}(t)&\cdots\\\
|&|&|&|\end{bmatrix},$
where $\bm{e}_{1}(t),\bm{e}_{2}(t),\ldots,\bm{e}_{r}(t)$ is the basis for the
Frenet-Serret frame defined in (9) and
$p_{i}(t)=\frac{t^{i}-\sum_{j=1}^{i-1}\left\langle t^{i},p_{j}(t)\right\rangle
p_{j}(t)}{\left\lVert t^{i}-\sum_{j=1}^{i-1}\left\langle
t^{i},p_{j}(t)\right\rangle p_{j}(t)\right\rVert}\text{ for }i=1,2,3,\ldots$
(14)
We note that the $p_{i}(t)$’s form a set of orthogonal polynomials independent
of the dataset. In this limit, the curvatures depend solely on the singular
values,
$\kappa_{i}(t)=\sqrt{a_{i}}\frac{\sigma_{i+1}(t)}{\sigma_{1}(t)\sigma_{i}(t)}\text{, where }a_{i}=\left(\frac{i}{i+(-1)^{i}}\right)^{2}\frac{4i^{2}-1}{3}.$
## 3 Unifying Singular Value Decomposition, Time Delay Embeddings, and the
Frenet-Serret Frame
Figure 3: An
illustration of how a highly structured, antisymmetric linear model arises
from time delay data. Starting with a one-dimensional time series, we
construct an $m\times n$ Hankel matrix using time-shifted copies of the data.
Assume that $n\gg m$, in which case $\bm{H}$ can be thought of as an $m$
dimensional trajectory over a long period ($n$ snapshots in time). Similarly,
the transpose of $\bm{H}$ may be thought of as a high dimensional ($n$
dimensional) trajectory over a short period ($m$ snapshots) in time. With this
interpretation, by the results of [33], the singular vectors of $\bm{H}$ after
applying centering yield the Frenet-Serret frame. Regression on the dynamics
in the Frenet-Serret frame yields the tridiagonal antisymmetric linear model
with an additional forcing term, which is nonzero only in the last component.
In this section, we show that time series data from a dynamical system may be
decomposed into a sparse linear dynamical model with nonlinear forcing, and
the nonzero elements along the sub- and super-diagonals of the linear part of
this model have a clear geometric meaning: they are curvatures of the system.
In Section 3.1, we combine key results about the Frenet-Serret frame, time
delays, and SVD to explain this structure. Following this theory, Section 3.2
illustrates this approach with a simple synthetic example. The decomposition
yields a set of orthogonal polynomials that form a coordinate basis for the
time-delay embedding. In Section 3.3, we explicitly describe these polynomials
and compare their properties to the Legendre polynomials.
### 3.1 Connecting SVD, Time Delay Embeddings, and Frenet-Serret Frame
Here we connect the properties of the SVD, time delay embeddings, and the
Frenet-Serret frame to decompose a dynamical model into a linear dynamical model
with nonlinear forcing, where the linear model is both antisymmetric and
tridiagonal. To do this, we follow the steps of the HAVOK method with slight
modifications and show how they give rise to these structured dynamics. This
process is illustrated in Figure 3.
Following the notation introduced in Section 2.3, we begin with the time
series $x(t)$ for $t=0,\Delta t,\ldots,q\Delta t$. We construct a time delay
embedding $\bm{H}\in\mathbb{R}^{m\times n}$, where we assume $m\ll n$.
Next we compute the SVD of $\bm{H}$ and show that the singular vectors
correspond to the Frenet-Serret frame at a fixed point in time. In particular,
to compute the SVD of this matrix, we consider the transpose
$\bm{H}^{\intercal}\in\mathbb{R}^{n\times m}$, which is also a Hankel
matrix. Thus, the columns of $\bm{H}^{\intercal}$ can be thought of as a trajectory
$\bm{h}(t)\in\mathbb{R}^{n}$ for $t=0,\Delta t,\ldots,(m-1)\Delta t$. For
simplicity, we shift the origin of time so that $\bm{h}(t)$ spans
$t=-(m-1)\Delta t/2,\ldots,0,\ldots,(m-1)\Delta t/2$, and we denote
$\bm{h}(i\Delta t)$ as $\bm{h}_{i}$. In this form,
$\bm{H}^{\intercal}=\begin{bmatrix}|&\cdots&|&\cdots&|\\\
\bm{h}_{(-m+1)/2}&\cdots&\bm{h}_{0}&\cdots&\bm{h}_{(m-1)/2}\\\
|&\cdots&|&\cdots&|\\\ \end{bmatrix}.$
Subtracting the central column $\bm{h}_{0}$ from $\bm{H}^{\intercal}$ (or
equivalently, the central row of $\bm{H}$) yields the centered matrix
$\bar{\bm{H}}^{\intercal}=\bm{H}^{\intercal}-\bm{h}_{0}\bm{1}^{\intercal}.$
(15)
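For concreteness, a short sketch of the Hankel construction and the centering step (15) (our helper code; the names are ours, and the example series anticipates Section 3.2):

```python
import numpy as np

def centered_hankel(x, m):
    """Build the m x n Hankel matrix of time-shifted copies of x and
    subtract the central row h_0 from every row, as in (15)."""
    n = len(x) - m + 1
    H = np.stack([x[i:i + n] for i in range(m)])  # H[i, j] = x[i + j]
    h0 = H[(m - 1) // 2]                          # central row, i.e. h_0
    return H, H - h0                              # H and the centered H-bar

t = np.linspace(0, 10, 10001)                     # dt = 0.001
x = np.sin(t) + np.sin(2 * t)
H, H_bar = centered_hankel(x, m=41)               # shapes (41, 9961)
```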
We can then express $\bm{h}_{i}$ as a Taylor expansion about $\bm{h}_{0}$,
$\bm{h}_{i}-\bm{h}_{0}=\bm{h}_{0}^{\prime}\,i\Delta t+\frac{1}{2}\bm{h}_{0}^{\prime\prime}(i\Delta t)^{2}+\frac{1}{3!}\bm{h}_{0}^{\prime\prime\prime}(i\Delta t)^{3}+\cdots.$
With this in mind, applying the results of [33] described in Section 2.5
yields the SVD (we define the left singular matrix as $\bm{V}$ and the right
singular matrix as $\bm{U}$; this can be thought of as taking the SVD of the
transpose of the matrix $\bm{H}-\bm{1}\bm{h}_{0}^{\intercal}$, which keeps the
definitions of the matrices more in line with the notation used in HAVOK),
$\bar{\bm{H}}^{\intercal}=\underbrace{\begin{bmatrix}|&|&|&\\ \bm{e}_{0}^{1}&\bm{e}_{0}^{2}&\bm{e}_{0}^{3}&\cdots\\ |&|&|&\end{bmatrix}}_{\bm{V}}\underbrace{\begin{bmatrix}\sigma_{1}&&&\\ &\sigma_{2}&&\\ &&\sigma_{3}&\\ &&&\ddots\end{bmatrix}}_{\bm{\Sigma}}\underbrace{\begin{bmatrix}-&\bm{p}_{1}&-\\ -&\bm{p}_{2}&-\\ -&\bm{p}_{3}&-\\ &\vdots&\end{bmatrix}}_{\bm{U}^{\intercal}}.$ (16)
The singular vectors in $\bm{V}$ correspond to the Frenet-Serret frame (the
Gram-Schmidt method applied to the vectors
$\bm{h}^{\prime}_{0},\bm{h}^{\prime\prime}_{0},\bm{h}^{\prime\prime\prime}_{0},\ldots$),
$\displaystyle\bm{e}_{0}^{1}$
$\displaystyle=\frac{\bm{h}_{0}^{\prime}}{\left\lVert\bm{h}_{0}^{\prime}\right\rVert}$
$\displaystyle\bm{e}_{0}^{i}$
$\displaystyle=\frac{\bm{h}_{0}^{(i)}-\sum_{j=1}^{i-1}\langle\bm{h}_{0}^{(i)},\bm{e}_{0}^{j}\rangle\bm{e}_{0}^{j}}{\left\lVert\bm{h}_{0}^{(i)}-\sum_{j=1}^{i-1}\langle\bm{h}_{0}^{(i)},\bm{e}_{0}^{j}\rangle\bm{e}_{0}^{j}\right\rVert}.$
The matrix $\bm{U}$ is similarly defined by the discrete orthogonal
polynomials
$\displaystyle\bm{p}_{1}$ $\displaystyle=\frac{1}{c_{1}}\bm{p}$
$\displaystyle\bm{p}_{i}$
$\displaystyle=\frac{1}{c_{i}}\left(\bm{p}^{i}-\sum_{j=1}^{i-1}\langle\bm{p}^{i},\bm{p}_{j}\rangle\bm{p}_{j}\right),$
where $\bm{p}$ is the vector
$\bm{p}=\begin{bmatrix}(-m+1)/2&(-m+3)/2&\cdots&0&\cdots&(m-3)/2&(m-1)/2\end{bmatrix},$
(17)
and where $c_{i}$ is a normalization constant so that
$\langle\bm{p}_{i},\bm{p}_{i}\rangle=1$. Note that $\bm{p}^{i}$ here means
raise $\bm{p}$ to the power $i$ element-wise. These polynomials are similar to
the discrete orthogonal polynomials defined in [63], except that there the first
polynomial is the normalized ones vector $\frac{1}{c_{1}}\left[1\cdots 1\right]$. These
polynomials will be discussed further in Section 3.3.
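These discrete polynomials are easy to generate numerically; the sketch below (our code) applies Gram-Schmidt, via QR, to elementwise powers of the index vector in (17):

```python
import numpy as np

def delay_polynomials(m, k):
    """Discrete orthonormal polynomials p_1, ..., p_k: Gram-Schmidt on
    the elementwise powers p, p**2, ..., p**k of the centered index vector."""
    p = np.arange(m) - (m - 1) / 2          # the vector in (17)
    P = np.column_stack([p ** i for i in range(1, k + 1)])
    Q, R = np.linalg.qr(P)                  # orthonormalize the columns
    return Q * np.sign(np.diag(R))          # columns are p_1, ..., p_k
```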
Next, we build a regression model of the dynamics. We first consider the case
where the system is closed (i.e., $\bar{\bm{H}}$ has rank $r$). Thinking of
$\bm{V}$ as the Frenet-Serret frame at a fixed point in time and following
the Frenet-Serret equations (10),
$\frac{d\bm{V}^{\intercal}}{dt}=\bm{A}\bm{V}^{\intercal},$ (18)
where $\bm{A}=\left\lVert\bm{h}^{\prime}_{0}\right\rVert\bm{K}$. Here $\bm{K}$
is a constant tridiagonal and antisymmetric matrix, which corresponds to the
curvatures at $t=0$.
From the dual perspective, we can think of the set of vectors $\bm{e}_{0}^{i}$
as an $r$-dimensional time series over $n$ snapshots in time,
$\bm{V}^{\intercal}=\begin{bmatrix}-&v_{1}(t)&-\\\ -&v_{2}(t)&-\\\ &\vdots&\\\
-&v_{r}(t)&-\\\ \end{bmatrix}=\begin{bmatrix}-&\bm{e}_{0}^{1}&-\\\
-&\bm{e}_{0}^{2}&-\\\ &\vdots&\\\ -&\bm{e}_{0}^{r}&-\\\
\end{bmatrix}\in\mathbb{R}^{r\times n}.$ (19)
Here $\bm{v}(t)=[v_{1}(t),v_{2}(t),\ldots,v_{r}(t)]^{\intercal}\in\mathbb{R}^{r}$ denotes the $r$-dimensional
trajectory, which corresponds to the $r$-dimensional coordinates considered in
(5) for HAVOK. From (18), these dynamics must therefore satisfy
$\dot{\bm{v}}(t)=\bm{A}\bm{v}(t),$
where $\bm{A}$ is a skew-symmetric tridiagonal matrix. If the system is not
closed, the dynamics take the form
$\begin{bmatrix}\dot{v}_{1}\\\ \dot{v}_{2}\\\ \vdots\\\ \dot{v}_{r}\\\
\dot{v}_{r+1}\\\
\vdots\end{bmatrix}=\left\lVert\bm{h}^{\prime}_{0}\right\rVert\begin{bmatrix}0&\kappa_{1}&&&&\\\
-\kappa_{1}&\ddots&\ddots&&&&\\\ &\ddots&0&\ddots&\\\
&&-\kappa_{r-1}&0&\kappa_{r}&\\\ &&&-\kappa_{r}&0&\ddots\\\
&&&&\ddots&\ddots&\end{bmatrix}\begin{bmatrix}v_{1}\\\ v_{2}\\\ \vdots\\\
v_{r}\\\ v_{r+1}\\\ \vdots\end{bmatrix}.$
We note that, due to the tridiagonal structure of $\bm{K}$, the governing
dynamics of the first $r-1$ coordinates $v_{1}(t),\ldots,v_{r-1}(t)$ are the
same as in the unforced case. The dynamics of the last coordinate includes an
additional term: $\dot{v}_{r}=-\kappa_{r-1}v_{r-1}+\kappa_{r}v_{r+1}$. The
dynamics therefore take the form,
$\frac{d\bm{v}}{dt}=\bm{A}\bm{v}(t)+\bm{B}v_{r+1}(t),$
where $\bm{B}$ is a vector that is nonzero only in its last coordinate. Thus, we
recover a model as in (8), but with the desired tridiagonal skew-symmetric
structure. The matrix of curvatures is simply given by
$\bm{K}=\bm{A}/\left\lVert\bm{h}_{0}^{\prime}\right\rVert$.
To compute $\bm{A}$, similar to (6), we define two time shifted matrices
$\bm{V}_{1}=\begin{bmatrix}\bm{v}(t_{1})&\bm{v}(t_{2})&\cdots&\bm{v}(t_{n-1})\end{bmatrix}\quad\bm{V}_{2}=\begin{bmatrix}\bm{v}(t_{2})&\bm{v}(t_{3})&\cdots&\bm{v}(t_{n})\end{bmatrix}.$
(20)
The matrix $\bm{A}$ may then be approximated as
$\bm{A}=\frac{d\bm{V}^{\intercal}}{dt}\left(\bm{V}^{\intercal}\right)^{\dagger}\approx\left(\frac{\bm{V}_{2}-\bm{V}_{1}}{\Delta t}\right)\bm{V}_{1}^{\dagger}.$ (21)
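In code, assuming $\bm{V}$ holds the first $r$ singular vectors as columns (one row per snapshot) and dt is the sampling period, (20)-(21) reduce to a finite difference followed by a pseudoinverse; this is a sketch, not the authors' implementation:

```python
import numpy as np

V1 = V[:-1].T                              # v(t_1), ..., v(t_{n-1}) as columns
V2 = V[1:].T                               # v(t_2), ..., v(t_n)
A = ((V2 - V1) / dt) @ np.linalg.pinv(V1)  # forward difference + least squares, (21)
```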
In summary, we have shown here that the trajectories of singular vectors
$\bm{v}(t)$ from a time-delay embedding are governed by approximately
tridiagonal antisymmetric dynamics, with a forcing term nonzero only in the
last component. Comparing these steps to those described in Section 2.3, we
see that the estimation of $\bm{K}$ is nearly identical to the steps in HAVOK.
In particular, $\left\lVert\bm{h}^{\prime}_{0}\right\rVert\bm{K}$ is the linear
dynamics matrix $\bm{A}$ in HAVOK. The only difference is the centering step
in (15), which is further discussed in Section 3.3.
### 3.2 HAVOK Computes Approximate Curvatures in a Synthetic Example
To illustrate the correspondence between nonzero elements of the HAVOK
dynamics matrix and curvatures, we start by considering an analytically
tractable synthetic example. We apply the steps of HAVOK as
described in [5] with an additional centering step. The resultant modes and
terms on the sub- and superdiagonals of the dynamics matrix are then compared
to curvatures computed with an analytic expression, and we show that they are
approximately the same, scaled by a factor of
$\left\lVert\bm{h}^{\prime}_{0}\right\rVert$.
We consider data from the one dimensional system governed by
$x(t)=\sin(t)+\sin(2t),$
for $t\in[0,10]$ and sampled at $\Delta t=0.001$. Following HAVOK, we form the
time delay matrix $\bm{H}\in\mathbb{R}^{41\times 9961}$, then center the data by
subtracting the middle row $\bm{h}_{0}$ from each row, which forms
$\bar{\bm{H}}$. We next apply the SVD to
$\bar{\bm{H}}^{\intercal}=\bm{V}\bm{\Sigma}\bm{U}^{\intercal}$.
Figure 4 shows the columns of $\bm{U}\in\mathbb{R}^{41\times 4}$ and the
columns of $\bm{V}\in\mathbb{R}^{9961\times 4}$. The columns of $\bm{U}$
correspond to the orthogonal polynomials described in Section 3.3 and the
columns of $\bm{V}$ are the instantaneous basis vectors $\bm{e}_{i}$ for the
$9961$ dimensional Frenet-Serret frame.
To compute the derivative of the state, we now treat $\bm{V}$ as a $4$-dimensional
trajectory with $9961$ snapshots. Applying DMD to $\bm{V}$ yields
the $\bm{A}$ matrix,
$\bm{A}=\begin{bmatrix}-1.245\times 10^{-3}&1.205\times 10^{-2}&4.033\times 10^{-6}&1.444\times 10^{-7}\\ -1.224\times 10^{-2}&3.529\times 10^{-4}&4.458\times 10^{-3}&2.283\times 10^{-6}\\ -9.390\times 10^{-4}&-3.467\times 10^{-3}&5.758\times 10^{-4}&6.617\times 10^{-3}\\ 3.970\times 10^{-4}&-6.568\times 10^{-4}&-7.451\times 10^{-3}&2.835\times 10^{-4}\end{bmatrix}.$ (22)
This matrix is approximately antisymmetric and tridiagonal as we expect.
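For reproducibility, a compact end-to-end sketch of this example (our code; the authors' implementation may differ in details such as the differentiation scheme):

```python
import numpy as np

dt = 0.001
t = np.linspace(0, 10, 10001)
x = np.sin(t) + np.sin(2 * t)

m, r = 41, 4
n = len(x) - m + 1
H = np.stack([x[i:i + n] for i in range(m)])   # 41 x 9961 Hankel matrix
H_bar = H - H[(m - 1) // 2]                    # centering step (15)

# paper's convention: the left singular matrix of the transposed H-bar is V
V, s, Ut = np.linalg.svd(H_bar.T, full_matrices=False)
V = V[:, :r]                                   # rank-4 trajectory v(t)

V1, V2 = V[:-1].T, V[1:].T                     # time-shifted snapshots (20)
A = ((V2 - V1) / dt) @ np.linalg.pinv(V1)      # regression (21)
print(np.round(A, 5))   # approximately antisymmetric and tridiagonal, cf. (22)
```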
Next, we compute the Frenet-Serret frame for the time delay embedding using
analytic expressions and show that HAVOK indeed extracts the curvatures of the
system multiplied by $\left\lVert\bm{h}^{\prime}_{0}\right\rVert$. Forming the
time delay matrix, we can easily compute the central row $\bm{h}_{0}=[x_{0.02},x_{0.02+\Delta t},\ldots,x_{9.98}]$, that is,
$\bm{h}_{0}=\begin{bmatrix}\sin(t)+\sin(2t)\text{ for }t\in[0.02,0.021,\ldots,9.98]\end{bmatrix},$
and the corresponding derivatives,
$\displaystyle\dot{\bm{h}}_{0}=\begin{bmatrix}\cos(t)+2\cos(2t)\text{ for
}t\in[0.02,0.021,\ldots,9.98]\end{bmatrix}$
$\displaystyle\ddot{\bm{h}}_{0}=\begin{bmatrix}-\sin(t)-4\sin(2t)\text{ for
}t\in[0.02,0.021,\ldots,9.98]\end{bmatrix}$
$\displaystyle\dddot{\bm{h}}_{0}=\begin{bmatrix}-\cos(t)-8\cos(2t)\text{ for
}t\in[0.02,0.021,\ldots,9.98]\end{bmatrix}$
$\displaystyle\bm{h}^{(4)}_{0}=\begin{bmatrix}\sin(t)+16\sin(2t)\text{ for
}t\in[0.02,0.021,\ldots,9.98]\end{bmatrix}.$
The $5$th derivative $\bm{h}^{(5)}$ is given by $\cos(t)+32\cos(2t)$ and can
be expressed as a linear combination of the previous derivatives, namely,
$\bm{h}_{0}^{(5)}=-5\dddot{\bm{h}}_{0}-4\dot{\bm{h}}_{0}$. This can also be
shown using the fact that $x(t)$ satisfies the $4$th order ordinary
differential equation $x^{(4)}+5\ddot{x}+4x=0$.
Since only the first four derivatives are linearly independent, only the first
three curvatures are nonzero. Further, exact values of the first three
curvatures can be computed analytically using the following formulas from
[64],
$\kappa_{1}=\frac{\sqrt{\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix})}}{\left\lVert\dot{\bm{h}}_{0}\right\rVert^{3}}\mbox{, }\kappa_{2}=\frac{\sqrt{\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}\end{bmatrix})}}{\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix})},$
$\kappa_{3}=\frac{\sqrt{\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}&\bm{h}^{(4)}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}&\bm{h}^{(4)}_{0}\end{bmatrix})\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}\end{bmatrix})}}{\det(\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}\end{bmatrix}^{\intercal}\begin{bmatrix}\dot{\bm{h}}_{0}&\ddot{\bm{h}}_{0}&\dddot{\bm{h}}_{0}\end{bmatrix})\left\lVert\dot{\bm{h}}_{0}\right\rVert}.$
These formulas yield the values $\kappa_{1}=1.205\times 10^{-2}$,
$\kappa_{2}=4.46\times 10^{-3}$, and $\kappa_{3}=6.62\times 10^{-3}$.
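These determinant expressions are straightforward to evaluate numerically. The sketch below (our code) implements the general Gram-determinant formula from [64], of which, under our reading and the convention $\det G_{0}=1$, the expressions above are special cases:

```python
import numpy as np

def gram_curvatures(D):
    """Curvatures from Gram determinants [64]: D has the derivative vectors
    h', h'', ... as columns; kappa_i = sqrt(det G_{i+1} det G_{i-1})
    / (det G_i ||h'||), where G_i is the Gram matrix of the first i columns."""
    speed = np.linalg.norm(D[:, 0])
    detG = [1.0] + [np.linalg.det(D[:, :i].T @ D[:, :i])
                    for i in range(1, D.shape[1] + 1)]
    return [np.sqrt(detG[i + 1] * detG[i - 1]) / (detG[i] * speed)
            for i in range(1, D.shape[1])]
```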
As expected, these curvature values are very close to the corresponding
super- and subdiagonal entries of the HAVOK matrix in (22). In particular, the
superdiagonal entries of the matrix appear to be very good approximations to
the curvatures. The reason why the superdiagonal, but not the subdiagonal, is
so close in value to the true curvatures is not yet well understood. Further,
in Section 5, we use the
theoretical insights from Section 3.1 to propose a modification to the HAVOK
algorithm that yields an even better approximation to curvatures in the
Frenet-Serret frame.
Figure 4:
Frenet-Serret frame (left) and corresponding orthogonal polynomials (right)
for HAVOK applied to time-series generated by $x(t)=\sin(t)+\sin(2t)$. The
orthogonal polynomials and the Frenet-Serret frame are the right singular
vectors $\bm{U}$ and left singular vectors $\bm{V}$ of $\bar{\bm{H}}$,
respectively.
### 3.3 Orthogonal Polynomials and Centering
In the decomposition in (16), we define a set of orthonormal polynomials. Here
we discuss the properties of these polynomials, comparing them to the Legendre
polynomials and providing explicit expressions for the first several terms in
this series.
In Section 3.1, we apply the SVD to the centered matrix $\bar{\bm{H}}$, as in
(16). The columns of $\bm{U}$ in this decomposition yield a set of orthonormal
polynomials, which are defined by (14). In the continuous case, the inner
product in (14) is $\langle a(t),b(t)\rangle=\int_{-p}^{p}a(t)b(t)dt$, while
in the discrete case $\langle a,b\rangle=\sum_{j=-p}^{p}a_{j}b_{j}$. The first
five polynomials in the discrete case may be found in Appendix A. The first
five of these polynomials $p_{i}(x)$ in the continuous case are:
$\displaystyle p_{1}(x)=\frac{x}{c_{1}(p)}\text{, where
}c_{1}(p)=\frac{\sqrt{6}\,\sqrt{p^{3}}}{3}$ $\displaystyle
p_{2}(x)=\frac{x^{2}}{c_{2}(p)}\text{, where
}c_{2}(p)=\frac{\sqrt{10}\,\sqrt{p^{5}}}{5}$ $\displaystyle
p_{3}(x)=\frac{1}{c_{3}(p)}\left(x^{3}-\frac{3}{5}p^{2}x\right)\text{, where
}c_{3}(p)=\frac{2\,\sqrt{14}\,\sqrt{p^{7}}}{35}$ $\displaystyle
p_{4}(x)=\frac{1}{c_{4}(p)}\left(x^{4}-\frac{5}{7}p^{2}x^{2}\right)\text{,
where }c_{4}(p)=\frac{2\,\sqrt{2}\,\sqrt{p^{9}}}{21}$ $\displaystyle
p_{5}(x)=\frac{1}{c_{5}(p)}\left(x^{5}+\frac{5}{21}p^{4}x-\frac{10}{9}p^{2}x^{3}\right)\text{,
where }c_{5}(p)=\frac{8\,\sqrt{22}\,\sqrt{p^{11}}}{693}.$
By construction, the $p_{i}(t)$ form a set of orthonormal polynomials, where
$p_{i}(t)$ has degree $i$.
Interestingly, these orthogonal polynomials are similar to the Legendre
polynomials $\bm{l}_{i}$ [65, 66], which are defined by the recursive relation
$\displaystyle\bm{l}_{1}=\frac{1}{c_{1}}\begin{bmatrix}1&1&\cdots&1\end{bmatrix}$
$\displaystyle\bm{l}_{i}=\frac{1}{c_{i}}\left(\bm{p}^{i}-\sum_{k=1}^{i-1}\langle\bm{p}^{i},\bm{l}_{k}\rangle\bm{l}_{k}\right),$
where $\bm{p}$ is as defined in (17). For the corresponding Legendre
polynomials normalized over $[-p,p]$, we refer the reader to [63].
The key difference between these two sets of polynomials is that the first
polynomial $\bm{p}_{1}$ is linear, while the first Legendre polynomial is
constant (i.e., corresponding in the discrete case to the normalized ones
vector). In particular, if $\bm{H}$ is not centered before decomposition by
SVD, the resulting columns of $\bm{U}$ will be the Legendre polynomials.
However, without centering, the resulting $\bm{V}$ will no longer be the
Frenet-Serret frame. Instead, the resulting frame corresponds to applying the
Gram-Schmidt method to the set
$\left\\{\bm{\gamma}(t),\bm{\gamma}^{\prime}(t),\bm{\gamma}^{\prime\prime}(t),\ldots\right\\}$
instead of
$\left\\{\bm{\gamma}^{\prime}(t),\bm{\gamma}^{\prime\prime}(t),\bm{\gamma}^{\prime\prime\prime}(t),\ldots\right\\}$.
Recently it has been shown that using centering as a preprocessing step is
beneficial for the dynamic mode decomposition [67]. That being said, since the
derivation of the tridiagonal and antisymmetric structure seen in the Frenet-
Serret frame is based on the properties of the derivatives and orthogonality,
this same structure can be computed without the centering step.
## 4 Limits and Requirements
Section 3.1 has shown how HAVOK yields a good approximation to the Frenet-
Serret frame in the limit that the time interval spanned by each column of
$\bm{H}$ goes to zero. To be more precise, HAVOK yields the Frenet-Serret frame if (13)
is satisfied. However, this property can be difficult to check in practice.
Here we establish several rules for choosing and structuring the data so that
the HAVOK dynamics matrix adopts the structure we expect from theory.
Figure 5: Increasing sampling frequency and number of columns yields more
structured HAVOK models for the Lorenz system. Given the Hankel matrix
$\bm{H}$, the linear dynamical model is plotted for values of sampling period
$\Delta t$ equal to $0.01,0.005,0.001,0.0005$ for a fixed number of rows and
fixed time span of measurement (top). Similarly, the model is plotted for
values of the number of columns $n$ equal to $1001,2001,5001,$ and $10001$ for
fixed sampling frequency and time span of measurement $q\Delta t$ (bottom). As
we increase the sampling frequency and the number of columns of the data,
$\bm{A}$ becomes more antisymmetric with nonzero elements only on the super-
and sub-diagonals. These trends illustrate the results in Section 4.
Choose $\Delta t$ to be small. The specific constraint we have from (13) is
$\left\lVert\bm{h}^{\prime}_{0}t_{i}\right\rVert\gg\left\lVert\frac{\bm{h}^{\prime\prime}_{0}}{2}t_{i}^{2}\right\rVert\gg\left\lVert\frac{\bm{h}_{0}^{\prime\prime\prime}}{6}t_{i}^{3}\right\rVert\gg\cdots\gg\left\lVert\frac{\bm{h}_{0}^{(k)}}{k!}t_{i}^{k}\right\rVert,$
for $-m\Delta t/2\leq t_{i}\leq m\Delta t/2$ or, more simply, $\lvert
t_{i}\rvert\leq m\Delta t/2$, where $\Delta t$ is the sampling period of the
data and $m$ is the number of delays in the Hankel matrix $\bm{H}$. If we
assume that $m\Delta t<1$, then rearranging,
$m\Delta
t\ll\frac{2\left\lVert\bm{h}_{0}^{\prime}\right\rVert}{\left\lVert\bm{h}_{0}^{\prime\prime}\right\rVert},\frac{3\left\lVert\bm{h}_{0}^{\prime\prime}\right\rVert}{\left\lVert\bm{h}_{0}^{\prime\prime\prime}\right\rVert},\ldots,\frac{k\left\lVert\bm{h}_{0}^{(k-1)}\right\rVert}{\left\lVert\bm{h}_{0}^{(k)}\right\rVert}.$
(23)
In practice, since the series of ratios of derivatives defined in (23) grows,
it is only necessary to check the first inequality. By choosing the sampling
period of the data to be small, we can constrain the data to satisfy this
inequality. To illustrate the effect of decreasing $\Delta t$, Figure 5 (top)
shows the dynamics matrices $\bm{A}$ computed by the HAVOK algorithm for the
Lorenz system for a fixed number of rows of data and fixed time span of the
simulation. As $\Delta t$ becomes smaller, $\bm{A}$ becomes more structured in
that it is antisymmetric and tridiagonal.
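This check can be done directly from data; a short sketch (our code, with h0, m, and dt as in the earlier sketches) of the first inequality in (23):

```python
import numpy as np

h1 = np.gradient(h0, dt)    # finite-difference estimate of h_0'
h2 = np.gradient(h1, dt)    # estimate of h_0''
bound = 2 * np.linalg.norm(h1) / np.linalg.norm(h2)
print(m * dt, '<<', bound)  # the first inequality of (23) should hold with margin
```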
Choose the number of columns $n$ to be large. The number of columns comes into
the Taylor expansion through the derivatives
$\left\lVert\bm{h}_{0}^{(k)}\right\rVert$, since
$\bm{h}_{0}^{(k)}\in\mathbb{R}^{n}$.
For the synthetic example $x(t)=\sin(t)+\sin(2t)$, we can show that the ratio
$2\left\lVert\bm{h}^{\prime}_{0}\right\rVert/\left\lVert\bm{h}^{\prime\prime}_{0}\right\rVert$
saturates to a fixed value in the limit as $n$ goes to infinity (see Appendix
B). However, for short time series (small values of $n$), this ratio can be
arbitrarily small, and hence (23) will be difficult to satisfy.
We illustrate this in Figure 5 using data from the Lorenz system. We compute
and plot the HAVOK linear dynamics matrix for a varying number of columns $n$,
while fixing the sampling frequency and time span of measurements $q\Delta t$.
We see that as we increase the number of columns, the dynamics matrix becomes more
skew-symmetric and tridiagonal. In general, due to practical constraints and
restrictions, it may be difficult to guarantee that given data satisfies these
two requirements. In Sections 4.1 and 5, we propose methods to tackle this
challenge.
### 4.1 Interpolation
From the first requirement, we see that the sampling period $\Delta t$
needs to be sufficiently small to recover the antisymmetric structure in
$\bm{A}$. However, in practice, it is not always possible to satisfy this
sampling criterion.
One solution to remedy this is to use data interpolation. To be precise, we
can increase the sampling rate by spline interpolation, then construct
$\bm{H}$ from the interpolated data so that (13) is satisfied. The
ratio of the derivatives
$\left\lVert\bm{h}^{\prime}_{0}\right\rVert/\left\lVert\bm{h}^{\prime\prime}_{0}\right\rVert,\left\lVert\bm{h}^{\prime\prime}_{0}\right\rVert/\left\lVert\bm{h}^{\prime\prime\prime}_{0}\right\rVert,\ldots$
may also contain some dependence on $\Delta t$, but we observe that this
dependence is not significantly affected in practice.
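A sketch of this preprocessing step (our code; SciPy's CubicSpline is one of several reasonable choices, and the stand-in series below replaces real measurements):

```python
import numpy as np
from scipy.interpolate import CubicSpline

t_coarse = np.arange(0, 10, 0.1)                     # sparse sampling, dt = 0.1
x_coarse = np.sin(t_coarse) + np.sin(2 * t_coarse)   # stand-in for measured data
spline = CubicSpline(t_coarse, x_coarse)
t_fine = np.arange(0, t_coarse[-1], 0.001)           # resample at dt = 0.001
x_fine = spline(t_fine)                              # build the Hankel matrix from x_fine
```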
Figure 6: In the case where a dynamical system is sparsely sampled,
interpolation can be used to recover a more tridiagonal and antisymmetric
matrix for the linear model in HAVOK. First, we simulate the Lorenz system,
measuring $x(t)$ with a sampling period of $\Delta t=0.1$. The resulting
dynamics model $\bm{A}$ and corresponding singular vectors of $\bm{U}$ are
plotted. Due to the low sampling frequency these values do not satisfy the
requirements in (23). Consequently the dynamics matrix is not antisymmetric
and the singular vectors do not correspond to the orthogonal polynomials in
Section 3.3. Next, the data is interpolated using cubic splines and
subsequently sampled using a sampling period of $\Delta t=0.001$. In this case
the data satisfies the assumptions in (23), which yields the tridiagonal
antisymmetric structure for $\bm{A}$ and orthogonal polynomials for $\bm{U}$
as predicted.
As an example, we consider a set of time series measurements generated from
the Lorenz system (see Section 5 for more details about this system). We start
with a sampling period of $\Delta t=0.1$ (Figure 6, top row). Note that here
we have simulated the Lorenz system at high temporal resolution and then
subsampled to produce this time series data. Applying HAVOK with centering and
$m=201$, we see that $\bm{A}$ is not antisymmetric and the columns of $\bm{U}$
are not the orthogonal polynomials like in the synthetic example shown in
Figure 4.
Next, we apply cubic spline interpolation to this data, evaluating at a
sampling rate of $\Delta t=0.001$ (Figure 6, bottom row). We note that,
especially for real-world data with measurement noise, this interpolation
procedure also serves to smooth the data, making the computation of its
derivatives more tractable [68]. Applying HAVOK to this interpolated data
yields a new antisymmetric $\bm{A}$ matrix, and $\bm{U}$ now corresponds to the
orthogonal polynomials described in Section 3.3.
## 5 Promoting structure in the HAVOK decomposition
HAVOK yields a linear model of a dynamical system explained by the Frenet-
Serret frame, and by leveraging these theoretical connections, here we propose
a modification of the HAVOK algorithm to promote this antisymmetric structure.
We refer to this algorithm as structured HAVOK (sHAVOK) and describe it in
Section 5.1. Compared to HAVOK, sHAVOK yields structured dynamics matrices
that better approximate the Frenet-Serret frame and more closely estimate the
curvatures. Importantly, sHAVOK also produces better models of the system
using significantly less data. We demonstrate its application to three
nonlinear synthetic example systems in Section 5.2 and two real-world datasets
in Section 5.3.
### 5.1 The Structured HAVOK (sHAVOK) Algorithm
We propose a modification to the HAVOK algorithm that more closely induces the
antisymmetric structure in the dynamics matrix, especially for shorter datasets
with a smaller number of snapshots $n$. The key innovation in sHAVOK is the
application of two SVDs to time-shifted Hankel matrices
(compare Figure 2 and Figure 7). This simple modification enforces that the
singular vector bases on which the dynamics matrix is computed are orthogonal,
and thus more closely approximate the Frenet-Serret frame.
Figure 7: Outline of steps in structured HAVOK (sHAVOK). First, given a
dynamical system, a single variable $x(t)$ is measured. Time-shifted copies of
$x(t)$ are stacked to form a Hankel matrix $\bm{H}$. $\bm{H}$ is split into
two time-shifted matrices, $\bm{H}_{1}$ and $\bm{H}_{2}$. The singular value
decomposition (SVD) is applied to these two matrices individually. This
results in reduced order representations, $\bm{V}_{1}$ and $\bm{V}_{2}$, of
$\bm{H}_{1}$ and $\bm{H}_{2}$, respectively. The matrices $\bm{V}_{1}$ and
$\bm{V}_{2}$ are then used to construct an approximation to this low
dimensional state and its derivative. Finally, linear regression is performed
on these two matrices to form a linear dynamical model with an additional
forcing term in the last component.
Building on the HAVOK algorithm as summarized in Section 2.3, we focus on the
step where the singular vectors $\bm{V}$ are split into $\bm{V}_{1}$ and
$\bm{V}_{2}$. In the Frenet-Serret framework, we are interested in the
evolution of the orthonormal frame
$\bm{e}_{1}(t),\bm{e}_{2}(t),\ldots,\bm{e}_{r}(t)$. In HAVOK, $\bm{V}_{1}$ and
$\bm{V}_{2}$ correspond to instances of this orthonormal frame.
Although $\bm{V}$ is a unitary matrix, $\bm{V}_{1}$ and $\bm{V}_{2}$, which are
each obtained by removing a column from $\bm{V}$, are not. To enforce this
orthogonality, we propose to split $\bar{\bm{H}}$ into two time-shifted
matrices $\bar{\bm{H}}_{1}$ and $\bar{\bm{H}}_{2}$ (Figure 7) and then compute
two SVDs with rank truncation $r$,
$\bar{\bm{H}}_{1}=\bm{U}_{1}\bm{\Sigma}_{1}\bm{V}_{1}^{\intercal}\text{ and
}\bar{\bm{H}}_{2}=\bm{U}_{2}\bm{\Sigma}_{2}\bm{V}_{2}^{\intercal}.$
By construction, $\bm{V}_{1}$ and $\bm{V}_{2}$ are now orthogonal matrices.
Like in HAVOK, our goal is to estimate the dynamics matrix $\bm{A}$ such that
$\dot{\bm{v}}(t)=\bm{A}\bm{v}(t).$
To do so, we use the matrices $\bm{V}_{1}$ and $\bm{V}_{2}$ to construct the
state and its derivative,
$\displaystyle\bm{V}=\bm{V}_{1}$
$\displaystyle\frac{d\bm{V}}{dt}=\frac{\bm{V}_{2}-\bm{V}_{1}}{\Delta t}.$
$\bm{A}$ then satisfies
$\bm{A}=\frac{d\bm{V}^{\intercal}}{dt}\left(\bm{V}^{\intercal}\right)^{\dagger}\approx\left(\frac{\bm{V}_{2}^{\intercal}-\bm{V}_{1}^{\intercal}}{\Delta t}\right)\left(\bm{V}_{1}^{\intercal}\right)^{\dagger}.$ (24)
If this system is not closed (nonzero forcing term), then $\bm{V}_{2}$ is
defined as columns $2$ to $n-1$ of the SVD singular vectors with an $r-1$ rank
truncation $\bm{V}_{r-1}^{\intercal}$, and
$\hat{\bm{A}}\in\mathbb{R}^{(r-1)\times(r-1)}$ and
$\hat{\bm{B}}\in\mathbb{R}^{(r-1)\times 1}$ are computed as
$\left[\hat{\bm{A}},\hat{\bm{B}}\right]=\bm{V}_{2}^{\intercal}\bm{V}_{1}$. The
corresponding pseudocode is elaborated in Appendix C.
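A compact sketch of the closed-system case (our code, not the pseudocode of Appendix C; the sign alignment between the two SVDs is our practical addition, since singular vectors are only defined up to sign):

```python
import numpy as np

def shavok(x, m, r, dt):
    """Structured HAVOK: two SVDs on time-shifted, centered Hankel
    matrices, then least-squares regression for the dynamics matrix A."""
    n = len(x) - m + 1
    H = np.stack([x[i:i + n] for i in range(m)])
    H_bar = H - H[(m - 1) // 2]                  # centering, as in (15)
    H1, H2 = H_bar[:, :-1], H_bar[:, 1:]         # time-shifted split
    V1 = np.linalg.svd(H1.T, full_matrices=False)[0][:, :r]
    V2 = np.linalg.svd(H2.T, full_matrices=False)[0][:, :r]
    V2 = V2 * np.sign(np.sum(V1 * V2, axis=0))   # align singular-vector signs
    return ((V2 - V1).T / dt) @ np.linalg.pinv(V1.T)   # as in (24)
```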
As a simple analytic example, we apply sHAVOK to the same system described in
Section 3.2 generated by $x(t)=\sin(t)+\sin(2t)$. The resulting dynamics
matrix is
$\bm{A}=\begin{bmatrix}-1.116\times 10^{-5}&1.204\times 10^{-2}&-1.227\times 10^{-5}&8.728\times 10^{-8}\\ -1.204\times 10^{-2}&-1.269\times 10^{-5}&4.458\times 10^{-3}&4.650\times 10^{-6}\\ 2.053\times 10^{-5}&-4.458\times 10^{-3}&-4.897\times 10^{-6}&6.617\times 10^{-3}\\ -9.956\times 10^{-8}&-1.118\times 10^{-7}&-6.617\times 10^{-3}&-3.368\times 10^{-6}\end{bmatrix}.$
We see immediately that, with this small modification, $\bm{A}$ has become
much more structured compared to (22). Specifically, the estimates of the
curvatures both below and above the diagonal are now equal, and the rest of
the elements in the matrix, which should be zero, are almost all smaller by an
order of magnitude. In addition, the curvatures are equal to the true analytic
values up to three decimal places.
### 5.2 Comparison of HAVOK and sHAVOK for Three Synthetic Examples
The results of HAVOK and sHAVOK converge in the limit of infinite data, and
the models they produce are most different in cases of shorter time series
data, where we may not have measurements over long periods of time. Using
synthetic data from three nonlinear example systems, we compute models using
both methods and compare the corresponding dynamics matrices $\bm{A}$ (Figure
8). In every case, the $\bm{A}$ matrix computed using the sHAVOK algorithm is
more antisymmetric and has a stronger tridiagonal structure than the
corresponding matrix computed using HAVOK.
In addition to the dynamics matrices, we also show in Figure 8 the eigenvalues
of $\bm{A}$, $\omega_{k}\in\mathbb{C}$ for $k=1,\ldots,r$, for HAVOK (teal) and
sHAVOK (maroon). We additionally plot the eigenvalues (black crosses)
corresponding to those computed from the data measured in the large data
limit, but at the same sampling frequency. In this large data limit, both
sHAVOK and HAVOK yield the same antisymmetric tridiagonal dynamics matrix and
corresponding eigenvalues. Comparing the eigenvalues, we immediately see that
eigenvalues from sHAVOK more closely match those computed in the large data
limit. Thus, even with a short trajectory, we can still recover models and key
features of the underlying dynamics. Below, we describe each of the systems
and their configurations.
Figure 8: Structured HAVOK (sHAVOK) yields more structured models from short
trajectories than HAVOK on three examples. For each system, we simulated a
trajectory extracting a single coordinate in time (gray). We then apply HAVOK
and sHAVOK to data $x(t)$ from a short subset of this trajectory, shown in
black. The middle columns show the resulting dynamics matrices $\bm{A}$ from
the models. Compared to HAVOK, the resulting models for sHAVOK consistently
show stronger structure in that they are antisymmetric with nonzero elements
only along the sub- and super-diagonals. The corresponding eigenvalue spectra
of $\bm{A}$ for HAVOK and sHAVOK are plotted in teal and maroon, respectively,
in addition to eigenvalues from HAVOK for the full (gray) trajectory. In all
cases, the sHAVOK eigenvalues are much closer in value to those from the long
trajectory limit than HAVOK.
Lorenz Attractor: We first illustrate these two methods on the Lorenz system.
Originally developed in the fluids community, the Lorenz (1963) system is
governed by three first order differential equations [69]:
$\displaystyle\dot{x}=\sigma(y-x)$ $\displaystyle\dot{y}=x(\rho-z)-y$
$\displaystyle\dot{z}=xy-\beta z.$
The Lorenz system has since been used to model systems in a wide variety of
fields, including chemistry [70], optics [71], and circuits [72].
We simulate $3,000$ samples with initial condition $[-8,8,27]$ and a stepsize
of $\Delta t=0.001$, measuring the variable $x(t)$. We use the common
parameters $\sigma=10,\rho=28$, and $\beta=8/3$. This trajectory is shown in
Figure 8 and corresponds to a few oscillations about a fixed point. We compare
the spectra to that of a longer trajectory containing $300,000$ samples, which
we take to be an approximation of the true spectrum of the system.
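A sketch of the data generation for this example (our code, using SciPy's solve_ivp; the tolerances are our choice):

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.001
t_eval = np.arange(0, 3000 * dt, dt)              # 3,000 samples
sol = solve_ivp(lorenz, (0, t_eval[-1]), [-8, 8, 27],
                t_eval=t_eval, rtol=1e-10, atol=1e-10)
x = sol.y[0]                                      # the measured variable x(t)
```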
Rössler Attractor: The Rössler attractor is given by the following nonlinear
differential equations [73, 74]:
$\displaystyle\dot{x}=-y-z$ $\displaystyle\dot{y}=x+ay$
$\displaystyle\dot{z}=b+z(x-c).$
We choose to measure the variable $x(t)$. This attractor is a canonical
example of chaos, like the Lorenz attractor. Here we perform a simulation with
$70,000$ samples and a stepsize of $\Delta t=0.001$. We choose the following
common values of $a=0.1$, $b=0.1$ and $c=14$ and the initial condition
$x_{0}=y_{0}=z_{0}=1$. We similarly plot the trajectory and dynamics matrices.
We compare the spectra in this case to a longer trajectory using a simulation
for $300,000$ samples.
Double Pendulum: The double pendulum is another canonical nonlinear system,
which models the motion of a pendulum connected at its end to a second
pendulum [75]. This system is typically represented by its
Lagrangian,
$\mathcal{L}=\frac{1}{6}ml^{2}\left(\dot{\theta}_{2}^{2}+4\dot{\theta}_{1}^{2}+3\dot{\theta}_{1}\dot{\theta}_{2}\cos{\left(\theta_{1}-\theta_{2}\right)}\right)+\frac{1}{2}mgl\left(3\cos{\theta_{1}}+\cos{\theta_{2}}\right),$
(25)
where $\theta_{1}$ and $\theta_{2}$ are the angles between the top and bottom
pendula and the vertical axis, respectively. $m$ is the mass at the end of
each pendulum, $l$ is the length of each pendulum and $g$ is the acceleration
constant due to gravity. Using the Euler-Lagrange equations,
$\frac{d}{dt}\frac{\partial\mathcal{L}}{\partial\dot{\theta}_{i}}-\frac{\partial\mathcal{L}}{\partial\theta_{i}}=0\text{
for }i=1,2,$
we can construct two second order differential equations of motion.
The trajectory is computed using a variational integrator to approximate
$\delta\int_{a}^{b}\mathcal{L}(\theta_{1},\theta_{2},\dot{\theta}_{1},\dot{\theta}_{2})dt=0.$
We simulate this system with a stepsize of $\Delta t=0.001$ for $1200$
samples. We choose $m_{1}=m_{2}=l_{1}=l_{2}=1$ and $g=10$, and use initial
conditions $\theta_{1}=\theta_{2}=\pi/2$, $\dot{\theta}_{1}=-0.01$ and
$\dot{\theta}_{2}=-0.005$. As our measurement for HAVOK and sHAVOK we use
$x(t)=\sin(\theta_{1}(t))$ and compare our data to a long trajectory
containing $100,000$ samples.
### 5.3 sHAVOK Applied to Real-world Datasets
Here we apply sHAVOK to two real world time series datasets, the trajectory of
a double pendulum and measles outbreak data. Similar to the synthetic
examples, we find that the dynamics matrix from sHAVOK is much more
antisymmetric and tridiagonal compared to the dynamics matrix for HAVOK. In
both cases, some of the HAVOK eigenvalues contain positive real components; in
other words, these models have unstable dynamics. However, the sHAVOK spectra
do not contain positive real components, resulting in much more accurate and
stable models (Figure 9).
Figure 9: Comparison of HAVOK and structured HAVOK (sHAVOK) for two real world
systems: a double pendulum and measles outbreak data. For each system, we
measure a trajectory extracting a single coordinate (gray). We then apply
HAVOK and sHAVOK to a subset of this trajectory, shown in black. The $\bm{A}$
matrices for the resulting linear dynamical models are shown. sHAVOK yields
models with an antisymmetric structure, with nonzero elements only along the
subdiagonal and superdiagonal. The corresponding eigenvalue spectra for HAVOK
and sHAVOK are additionally plotted in teal and maroon, respectively, along
with eigenvalues from HAVOK for a long trajectory. In both cases, the
eigenvalues of sHAVOK are much closer in value to those in the long trajectory
limit than HAVOK. Some of the eigenvalues of HAVOK are unstable and have
positive real components. The corresponding reconstructions of the first
singular vector of the corresponding Hankel matrices are shown along with the
real data. Note that the HAVOK models are unstable, growing exponentially due
to the unstable eigenvalues, while the sHAVOK models do not. Credit for images
on left: (double pendulum) [76] and (measles) CDC/Cynthia S. Goldsmith;
William Bellini, Ph.D.
Double Pendulum: We first look at measurements of a double pendulum [76]. A
picture of the setup can be found in Figure 9. The Lagrangian in this case is
very similar to that in (25). One key difference in the synthetic case is that
all of the mass is contained at the joints, while in this experiment, the mass
is spread over each arm. To accommodate this, the Lagrangian can be slightly
modified,
$\mathcal{L}=\frac{1}{2}\left(m_{1}(\dot{x}_{1}^{2}+\dot{y}_{1}^{2})+m_{2}(\dot{x}_{2}^{2}+\dot{y}_{2}^{2})\right)+\frac{1}{2}\left(I_{1}\dot{\theta}_{1}^{2}+I_{2}\dot{\theta}_{2}^{2}\right)-\left(m_{1}y_{1}+m_{2}y_{2}\right)g,$
where $x_{1}=a_{1}\sin(\theta_{1})$,
$x_{2}=l_{1}\sin(\theta_{1})+a_{2}\sin(\theta_{2})$,
$y_{1}=a_{1}\cos(\theta_{1})$, and
$y_{2}=l_{1}\cos(\theta_{1})+a_{2}\cos(\theta_{2})$. $m_{1}$ and $m_{2}$ are
the masses of the pendula, $l_{1}$ and $l_{2}$ are the lengths of the pendula,
$a_{1}$ and $a_{2}$ are the distances from the joints to the center of masses
of each arm, and $I_{1}$ and $I_{2}$ are the moments of inertia for each arm.
When $m_{1}=m_{2}=m$, $a_{1}=a_{2}=l_{1}=l_{2}$, and $I_{1}=I_{2}=ml^{2}$ we
recover (25). We sample the data at $\Delta t=0.001$s and plot
$\sin(\theta_{2}(t))$ over a 15s time interval. The data over this interval
appears approximately periodic.
Measles Outbreaks: As a second example, we consider measles outbreak data from New
York City between 1928 and 1964 [77]. The case history of measles over time has
been shown to exhibit chaotic behavior [78, 79], and [5] applied HAVOK to
measles data and successfully showed that the method could extract transient
behavior.
For both systems, we apply sHAVOK to a subset of the data corresponding to the
black trajectories $x(t)$ shown in Figure 9. We then compare that to HAVOK
applied over the same interval. We use $m=101$ delays with a $r=5$ rank
truncation for the double pendulum, and $m=51$ delays and a $r=6$ rank
truncation for the measles data. For the measles data, prior to applying
sHAVOK and HAVOK, the data is first interpolated and sampled at a rate of
$\Delta t=0.0018$ years. As in previous examples, the resulting sHAVOK
dynamics matrix is tridiagonal and antisymmetric, while the HAVOK dynamics matrix is
not. Next, we plot the corresponding spectra for these two methods, in
addition to the eigenvalues applied to HAVOK over the entire time series. Most
noticeably, the eigenvalues from sHAVOK are closer to the long data limit
values. In addition, two of the HAVOK eigenvalues lie to the right of the
imaginary axis, and thus have positive real components. All of the sHAVOK eigenvalues,
on the other hand, have negative real components. This difference is most
prominent in the reconstructions of the first singular vector. In particular,
since two of the eigenvalues from HAVOK have positive real part, the reconstructed
time series grows exponentially. In contrast, for sHAVOK the corresponding time
series remains bounded, providing a much better model of the true data.
## 6 Discussion
In this paper, we describe a new theoretical connection between models
constructed from time-delay embeddings, specifically using the HAVOK approach,
and the Frenet-Serret frame from differential geometry. This unifying
perspective explains the peculiar antisymmetric, tridiagonal structure of
HAVOK models: namely, the sub- and super-diagonal entries of the linear model
correspond to the intrinsic curvatures in the Frenet-Serret frame. Inspired by
this theoretical insight, we develop an extension we call _structured_ HAVOK
that effectively yields models with this structure. Importantly, we
demonstrate that this modified algorithm improves the stability and accuracy
of time-delay embedding models, especially when data is noisy and limited in
length. All code is available at https://github.com/sethhirsh/sHAVOK.
Establishing theoretical connections between time-delay embedding,
dimensionality reduction, and differential geometry opens the door for a wide
variety of applications and future work. By understanding this new
perspective, we now better understand the requirements and limitations of
HAVOK and have proposed simple modifications to the method which improve its
performance on data. However, the full implications of this theory remain
unknown. Differential geometry, dimensionality reduction and time delay
embeddings are all well-established fields, and by understanding these
connections we can develop more robust and interpretable methods for modeling
time series.
For instance, by connecting HAVOK to the Frenet-Serret frame, we recognized the
importance of enforcing orthogonality for $\bm{V}_{1}$ and $\bm{V}_{2}$, which
inspired the development of sHAVOK. With this theory, we can incorporate further
improvements on the method. For example, sHAVOK can be thought of as a first
order forward difference method, approximating the derivative and state by
$\left(\bm{V}_{2}-\bm{V}_{1}\right)/\Delta t$ and $\bm{V}_{1}$, respectively.
By employing a central difference scheme, such as approximating the state by
$\bm{V}$, we have observed that this further enforces the antisymmetry in the
dynamics matrix and moves the corresponding eigenvalues towards the imaginary
axis.
Throughout this analysis, we have focused purely on linear methods. In recent
years, nonlinear methods for dimensionality reduction, such as autoencoders
and diffusion maps, have gained popularity [80, 81, 7]. Nonlinear models
similarly benefit from promoting sparsity and interpretability. By
understanding the structures of linear models, we hope to generalize these
methods to create more accurate and robust methods that can accurately model a
greater class of functions.
## Acknowledgments
We are grateful for discussions with S. H. Singh and K. D. Harris, and to K.
Kaheman for providing the double pendulum dataset. We wish to thank A. G.
Nair for providing valuable insights and feedback in designing the analysis.
This work was funded by the Army Research Office (W911NF-17-1-0306 to SLB);
Air Force Office of Scientific Research (FA9550-17-1-0329 to JNK); the Air
Force Research Lab (FA8651-16-1-0003 to BWB); the National Science Foundation
(award 1514556 to BWB); the Alfred P. Sloan Foundation and the Washington
Research Foundation to BWB.
## References
* [1] M. Schmidt and H. Lipson, “Distilling free-form natural laws from experimental data,” _Science_ , vol. 324, no. 5923, pp. 81–85, 2009.
* [2] J. Bongard and H. Lipson, “Automated reverse engineering of nonlinear dynamical systems,” _Proceedings of the National Academy of Sciences_ , vol. 104, no. 24, pp. 9943–9948, 2007.
* [3] S. L. Brunton, J. L. Proctor, and J. N. Kutz, “Discovering governing equations from data by sparse identification of nonlinear dynamical systems,” _Proceedings of the National Academy of Sciences_ , vol. 113, no. 15, pp. 3932–3937, 2016.
* [4] S. L. Brunton and J. N. Kutz, _Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control_. Cambridge University Press, 2019.
* [5] S. L. Brunton, B. W. Brunton, J. L. Proctor, E. Kaiser, and J. N. Kutz, “Chaos as an intermittently forced linear system,” _Nature communications_ , vol. 8, no. 1, p. 19, 2017.
* [6] B. Lusch, J. N. Kutz, and S. L. Brunton, “Deep learning for universal linear embeddings of nonlinear dynamics,” _Nature Communications_ , vol. 9, no. 1, p. 4950, 2018.
* [7] K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton, “Data-driven discovery of coordinates and governing equations,” _Proceedings of the National Academy of Sciences_ , vol. 116, no. 45, pp. 22 445–22 451, 2019.
* [8] B. O. Koopman, “Hamiltonian systems and transformation in Hilbert space,” _Proceedings of the National Academy of Sciences_ , vol. 17, no. 5, pp. 315–318, 1931.
* [9] I. Mezić and A. Banaszuk, “Comparison of systems with complex behavior,” _Physica D: Nonlinear Phenomena_ , vol. 197, no. 1, pp. 101–133, 2004.
* [10] I. Mezić, “Spectral properties of dynamical systems, model reduction and decompositions,” _Nonlinear Dynamics_ , vol. 41, no. 1-3, pp. 309–325, 2005.
* [11] C. W. Rowley, I. Mezić, S. Bagheri, P. Schlatter, D. Henningson _et al._ , “Spectral analysis of nonlinear flows,” _Journal of fluid mechanics_ , vol. 641, no. 1, pp. 115–127, 2009.
* [12] I. Mezić, “Analysis of fluid flows via spectral properties of the Koopman operator,” _Annual Review of Fluid Mechanics_ , vol. 45, pp. 357–378, 2013.
* [13] J. N. Kutz, S. L. Brunton, B. W. Brunton, and J. L. Proctor, _Dynamic mode decomposition: data-driven modeling of complex systems_. SIAM, 2016.
* [14] P. J. Schmid, “Dynamic mode decomposition of numerical and experimental data,” _Journal of fluid mechanics_ , vol. 656, pp. 5–28, 2010.
* [15] J. H. Tu, C. W. Rowley, D. M. Luchtenburg, S. L. Brunton, and J. N. Kutz, “On dynamic mode decomposition: Theory and applications,” _Journal of Computational Dynamics_ , vol. 1, no. 2, pp. 391–421, 2014.
* [16] F. Takens, “Detecting strange attractors in turbulence,” in _Dynamical systems and turbulence, Warwick 1980_. Springer, 1981, pp. 366–381.
* [17] H. Arbabi and I. Mezic, “Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator,” _SIAM Journal on Applied Dynamical Systems_ , vol. 16, no. 4, pp. 2096–2126, 2017.
* [18] K. P. Champion, S. L. Brunton, and J. N. Kutz, “Discovery of nonlinear multiscale systems: Sampling strategies and embeddings,” _SIAM Journal on Applied Dynamical Systems_ , vol. 18, no. 1, pp. 312–333, 2019.
* [19] D. S. Broomhead and R. Jones, “Time-series analysis,” _Proceedings of the Royal Society of London. A. Mathematical and Physical Sciences_ , vol. 423, no. 1864, pp. 103–121, 1989.
* [20] J.-N. Juang and R. S. Pappa, “An eigensystem realization algorithm for modal parameter identification and model reduction,” _Journal of guidance, control, and dynamics_ , vol. 8, no. 5, pp. 620–627, 1985.
* [21] B. W. Brunton, L. A. Johnson, J. G. Ojemann, and J. N. Kutz, “Extracting spatial–temporal coherent patterns in large-scale neural recordings using dynamic mode decomposition,” _Journal of neuroscience methods_ , vol. 258, pp. 1–15, 2016.
* [22] D. Giannakis and A. J. Majda, “Nonlinear Laplacian spectral analysis for time series with intermittency and low-frequency variability,” _Proceedings of the National Academy of Sciences_ , vol. 109, no. 7, pp. 2222–2227, 2012.
* [23] S. Das and D. Giannakis, “Delay-coordinate maps and the spectra of Koopman operators,” _Journal of Statistical Physics_ , vol. 175, no. 6, pp. 1107–1145, 2019.
* [24] N. Dhir, A. R. Kosiorek, and I. Posner, “Bayesian delay embeddings for dynamical systems,” in _NIPS Timeseries Workshop_ , 2017.
* [25] D. Giannakis, “Delay-coordinate maps, coherence, and approximate spectra of evolution operators,” _arXiv preprint arXiv:2007.02195_ , 2020.
* [26] W. Gilpin, “Deep learning of dynamical attractors from time series measurements,” _arXiv preprint arXiv:2002.05909_ , 2020.
* [27] S. Pan and K. Duraisamy, “On the structure of time-delay embedding in linear models of non-linear dynamical systems,” _Chaos: An Interdisciplinary Journal of Nonlinear Science_ , vol. 30, no. 7, p. 073135, 2020.
* [28] M. Kamb, E. Kaiser, S. L. Brunton, and J. N. Kutz, “Time-delay observables for Koopman: Theory and applications,” _arXiv preprint arXiv:1810.01479_ , 2018.
* [29] M. P. Do Carmo, _Differential geometry of curves and surfaces: revised and updated second edition_. Courier Dover Publications, 2016.
* [30] B. O’Neill, _Elementary differential geometry_. Academic press, 2014.
* [31] J.-A. Serret, “Sur quelques formules relatives à la théorie des courbes à double courbure.” _Journal de mathématiques pures et appliquées_ , pp. 193–207, 1851.
* [32] M. D. Spivak, _A comprehensive introduction to differential geometry_. Publish or perish, 1970.
* [33] J. Álvarez-Vizoso, R. Arn, M. Kirby, C. Peterson, and B. Draper, “Geometry of curves in $\mathbb{R}^{n}$ from the local singular value decomposition,” _Linear Algebra and its Applications_ , vol. 571, pp. 180–202, 2019.
* [34] G. H. Golub and C. Reinsch, “Singular value decomposition and least squares solutions,” in _Linear Algebra_. Springer, 1971, pp. 134–151.
* [35] I. Joliffe and B. Morgan, “Principal component analysis and exploratory factor analysis,” _Statistical methods in medical research_ , vol. 1, no. 1, pp. 69–95, 1992.
* [36] P. Schmid and J. Sesterhenn, “Dynamic mode decomposition of numerical and experimental data,” _APS_ , vol. 61, pp. MR–007, 2008.
* [37] O. Alter, P. O. Brown, and D. Botstein, “Singular value decomposition for genome-wide expression data processing and modeling,” _Proceedings of the National Academy of Sciences_ , vol. 97, no. 18, pp. 10 101–10 106, 2000.
* [38] O. Santolík, M. Parrot, and F. Lefeuvre, “Singular value decomposition methods for wave propagation analysis,” _Radio Science_ , vol. 38, no. 1, 2003.
* [39] N. Muller, L. Magaia, and B. M. Herbst, “Singular value decomposition, eigenfaces, and 3d reconstructions,” _SIAM review_ , vol. 46, no. 3, pp. 518–545, 2004.
* [40] J. L. Proctor and P. A. Eckhoff, “Discovering dynamic patterns from infectious disease data using dynamic mode decomposition,” _International health_ , vol. 7, no. 2, pp. 139–145, 2015.
* [41] E. Berger, M. Sastuba, D. Vogt, B. Jung, and H. B. Amor, “Estimation of perturbations in robotic behavior using dynamic mode decomposition,” _Journal of Advanced Robotics_ , vol. 29, no. 5, pp. 331–343, 2015.
* [42] A. A. Kaptanoglu, K. D. Morgan, C. J. Hansen, and S. L. Brunton, “Characterizing magnetized plasmas with dynamic mode decomposition,” _Physics of Plasmas_ , vol. 27, p. 032108, 2020.
* [43] B. Herrmann, P. J. Baddoo, R. Semaan, S. L. Brunton, and B. J. McKeon, “Data-driven resolvent analysis,” _arXiv preprint arXiv:2010.02181_ , 2020\.
* [44] J. Grosek and J. N. Kutz, “Dynamic mode decomposition for real-time background/foreground separation in video,” _arXiv preprint arXiv:1404.7592_ , 2014.
* [45] N. B. Erichson, S. L. Brunton, and J. N. Kutz, “Compressed dynamic mode decomposition for background modeling,” _Journal of Real-Time Image Processing_ , vol. 16, no. 5, pp. 1479–1492, 2019.
* [46] T. Askham and J. N. Kutz, “Variable projection methods for an optimized dynamic mode decomposition,” _SIAM Journal on Applied Dynamical Systems_ , vol. 17, no. 1, pp. 380–416, 2018.
* [47] S. T. Dawson, M. S. Hemati, M. O. Williams, and C. W. Rowley, “Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition,” _Experiments in Fluids_ , vol. 57, no. 3, pp. 1–19, 2016.
* [48] M. S. Hemati, C. W. Rowley, E. A. Deem, and L. N. Cattafesta, “De-biasing the dynamic mode decomposition for applied Koopman spectral analysis,” _Theoretical and Computational Fluid Dynamics_ , vol. 31, no. 4, pp. 349–368, 2017.
* [49] J. R. Partington, _An introduction to Hankel operators_. Cambridge University Press, 1988, vol. 13.
* [50] B. Beckermann and A. Townsend, “Bounds on the singular values of matrices with displacement structure,” _SIAM Review_ , vol. 61, no. 2, pp. 319–344, 2019\.
* [51] Y. Susuki and I. Mezić, “A prony approximation of Koopman mode decomposition,” in _Decision and Control (CDC), 2015 IEEE 54th Annual Conference on_. IEEE, 2015, pp. 7022–7027.
* [52] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, “Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control,” _PLoS ONE_ , vol. 11, no. 2, p. e0150171, 2016.
* [53] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang, “Phoneme recognition using time-delay neural networks,” _IEEE transactions on acoustics, speech, and signal processing_ , vol. 37, no. 3, pp. 328–339, 1989\.
* [54] D. Dylewsky, E. Kaiser, S. L. Brunton, and J. N. Kutz, “Principal component trajectories (PCT): Nonlinear dynamics as a superposition of time-delayed periodic orbits,” _arXiv preprint arXiv:2005.14321_ , 2020.
* [55] E. Bozzo, R. Carniel, and D. Fasino, “Relationship between singular spectrum analysis and Fourier analysis: Theory and application to the monitoring of volcanic activity,” _Computers & Mathematics with Applications_, vol. 60, no. 3, pp. 812–820, 2010.
* [56] V. I. Arnol’d, _Mathematical methods of classical mechanics_. Springer Science & Business Media, 2013, vol. 60.
* [57] L. Meirovitch, _Methods of analytical dynamics_. Courier Corporation, 2010.
* [58] J. Colorado, A. Barrientos, A. Martinez, B. Lafaverges, and J. Valente, “Mini-quadrotor attitude control based on hybrid backstepping & Frenet-Serret theory,” in _2010 IEEE International Conference on Robotics and Automation_. IEEE, 2010, pp. 1617–1622.
* [59] R. Ravani and A. Meghdari, “Velocity distribution profile for robot arm motion using rational Frenet–Serret curves,” _Informatica_ , vol. 17, no. 1, pp. 69–84, 2006.
* [60] M. Pilté, S. Bonnabel, and F. Barbaresco, “Tracking the Frenet-Serret frame associated to a highly maneuvering target in 3D,” in _2017 IEEE 56th Annual Conference on Decision and Control (CDC)_. IEEE, 2017, pp. 1969–1974.
* [61] D. Bini, F. de Felice, and R. T. Jantzen, “Absolute and relative Frenet-Serret frames and Fermi-Walker transport,” _Classical and Quantum Gravity_ , vol. 16, no. 6, p. 2105, 1999.
* [62] B. R. Iyer and C. Vishveshwara, “Frenet-Serret description of gyroscopic precession,” _Physical Review D_ , vol. 48, no. 12, p. 5706, 1993.
* [63] J. F. Gibson, J. D. Farmer, M. Casdagli, and S. Eubank, “An analytic approach to practical state space reconstruction,” _Physica D: Nonlinear Phenomena_ , vol. 57, no. 1-2, pp. 1–30, 1992.
* [64] E. Gutkin, “Curvatures, volumes and norms of derivatives for curves in Riemannian manifolds,” _Journal of Geometry and Physics_ , vol. 61, no. 11, pp. 2147–2161, 2011.
* [65] M. Abramowitz and I. A. Stegun, _Handbook of mathematical functions with formulas, graphs, and mathematical tables_. US Government printing office, 1948, vol. 55.
* [66] E. T. Whittaker and G. N. Watson, _A course of modern analysis_. Cambridge university press, 1996.
* [67] S. M. Hirsh, K. D. Harris, J. N. Kutz, and B. W. Brunton, “Centering data improves the dynamic mode decomposition,” _arXiv preprint arXiv:1906.05973_ , 2019.
* [68] F. van Breugel, J. N. Kutz, and B. W. Brunton, “Numerical differentiation of noisy data: A unifying multi-objective optimization framework,” _IEEE Access_, vol. 8, pp. 196865–196877, 2020.
* [69] E. N. Lorenz, “Deterministic nonperiodic flow,” _Journal of the atmospheric sciences_ , vol. 20, no. 2, pp. 130–141, 1963.
* [70] D. Poland, “Cooperative catalysis and chemical chaos: a chemical model for the Lorenz equations,” _Physica D: Nonlinear Phenomena_ , vol. 65, no. 1-2, pp. 86–99, 1993.
* [71] C. Weiss and J. Brock, “Evidence for Lorenz-type chaos in a laser,” _Physical review letters_ , vol. 57, no. 22, p. 2804, 1986.
* [72] N. Hemati, “Strange attractors in brushless DC motors,” _IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications_ , vol. 41, no. 1, pp. 40–45, 1994.
* [73] O. E. Rössler, “An equation for continuous chaos,” _Physics Letters A_ , vol. 57, no. 5, pp. 397–398, 1976.
* [74] ——, “An equation for hyperchaos,” _Physics Letters A_ , vol. 71, no. 2-3, pp. 155–157, 1979.
* [75] T. Shinbrot, C. Grebogi, J. Wisdom, and J. A. Yorke, “Chaos in a double pendulum,” _American Journal of Physics_, vol. 60, no. 6, pp. 491–499, 1992.
* [76] K. Kaheman, E. Kaiser, B. Strom, J. N. Kutz, and S. L. Brunton, “Learning discrepancy models from experimental data,” in _58th IEEE Conference on Decision and Control_. IEEE, 2019.
* [77] W. P. London and J. A. Yorke, “Recurrent outbreaks of measles, chickenpox and mumps: I. seasonal variation in contact rates,” _American journal of epidemiology_ , vol. 98, no. 6, pp. 453–468, 1973.
* [78] W. M. Schaffer and M. Kot, “Do strange attractors govern ecological systems?” _BioScience_ , vol. 35, no. 6, pp. 342–350, 1985.
* [79] G. Sugihara and R. M. May, “Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series,” _Nature_ , vol. 344, no. 6268, pp. 734–741, 1990.
* [80] R. R. Coifman and S. Lafon, “Diffusion maps,” _Applied and computational harmonic analysis_ , vol. 21, no. 1, pp. 5–30, 2006.
* [81] A. Ng, “Sparse autoencoder,” _CS294A Lecture notes_ , vol. 72, no. 2011, pp. 1–19, 2011.
* [82] M. P. Knapp, “Sines and cosines of angles in arithmetic progression,” _Mathematics magazine_ , vol. 82, no. 5, p. 371, 2009.
## Appendix A Discrete Orthogonal Polynomials
In Section 3.3 we introduced a set of orthogonal polynomials that appear in
HAVOK, and listed these polynomials in the continuous case. The first five
polynomials in the discrete case are listed below.
$\displaystyle p_{1}(n)=\frac{n}{c_{1}}$

$\displaystyle p_{2}(n)=\frac{n^{2}}{c_{2}}$

$\displaystyle p_{3}(n)=\frac{1}{c_{3}}\left(n^{3}-\frac{n\left(3p^{2}+3p-1\right)}{5}\right)$

$\displaystyle p_{4}(n)=\frac{1}{c_{4}}\left(n^{4}-\frac{5n^{2}\left(3p^{4}+6p^{3}-3p+1\right)}{7\left(3p^{2}+3p-1\right)}\right)$

$\displaystyle p_{5}(n)=\frac{1}{c_{5}}\left(n^{5}-\frac{n\left(3p^{4}+6p^{3}-3p+1\right)}{7}+\frac{5\left(2p^{2}+2p-3\right)}{9}\left(\frac{n\left(3p^{2}+3p-1\right)}{5}-n^{3}\right)\right)$

with normalizing constants

$\displaystyle c_{1}=\sqrt{\frac{p\left(2p+1\right)\left(p+1\right)}{3}}$

$\displaystyle c_{2}=\sqrt{\frac{p\left(2p+1\right)\left(p+1\right)\left(3p^{2}+3p-1\right)}{15}}$

$\displaystyle c_{3}=\sqrt{\frac{p\left(2p-1\right)\left(2p+1\right)\left(2p+3\right)\left(p-1\right)\left(p+1\right)\left(p+2\right)}{175}}$

$\displaystyle c_{4}=\sqrt{\frac{p\left(2p-1\right)\left(2p+1\right)\left(2p+3\right)\left(p-1\right)\left(p+1\right)\left(p+2\right)\left(15p^{4}+30p^{3}-35p^{2}-50p+12\right)}{2205\left(3p^{2}+3p-1\right)}}$

$\displaystyle c_{5}=\sqrt{\frac{4p\left(2p-1\right)\left(2p+1\right)\left(2p-3\right)\left(2p+3\right)\left(2p+5\right)\left(p-1\right)\left(p+1\right)\left(p-2\right)\left(p+2\right)\left(p+3\right)}{43659}}$
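As a quick sanity check, the orthonormality of these polynomials can be verified numerically. The sketch below (in `R`; the evaluation grid $n=-p,\ldots,p$ is our assumption, consistent with the normalizations above since $c_{1}^{2}=\sum_{n=-p}^{p}n^{2}$) evaluates $p_{1}$, $p_{2}$, and $p_{3}$ and checks their Gram matrix; the higher-order polynomials check in the same way.

```r
# Sketch: verify that p_1, p_2, p_3 are pairwise orthonormal on the
# (assumed) grid n = -p, ..., p.
p <- 20
n <- -p:p
c1 <- sqrt(p * (2 * p + 1) * (p + 1) / 3)
c2 <- sqrt(p * (2 * p + 1) * (p + 1) * (3 * p^2 + 3 * p - 1) / 15)
c3 <- sqrt(p * (2 * p - 1) * (2 * p + 1) * (2 * p + 3) *
             (p - 1) * (p + 1) * (p + 2) / 175)
P <- cbind(n / c1,                                        # p_1(n)
           n^2 / c2,                                      # p_2(n)
           (n^3 - n * (3 * p^2 + 3 * p - 1) / 5) / c3)    # p_3(n)
round(crossprod(P), 10)   # Gram matrix t(P) %*% P: the 3 x 3 identity
```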
## Appendix B Column Rule for Synthetic Example
In Section 4, we state that when applying HAVOK to the synthetic example of Section 3.2, the derivatives in (23) converge to fixed values in the limit as the number of columns $n$ in the Hankel matrix $\bm{H}$ goes to infinity. Here we prove that the first ratio in the sequence,
$\frac{2\left\lVert\bm{h}^{\prime\prime}_{0}\right\rVert}{\left\lVert\bm{h}^{\prime}_{0}\right\rVert},$
approaches a constant as $n\to\infty$. Further terms in the sequence can be shown to have the same behavior using a similar proof.
We start with the signal $x(t)=\sin(t)+\sin(2t)$. The central row $\bm{h}_{0}$ of the Hankel matrix is of the form $x(t)$ for some $a,b\in\mathbb{Z}$ such that
$t=\begin{bmatrix}a\Delta t&(a+1)\Delta t&(a+2)\Delta t&\dots&b\Delta
t\end{bmatrix}.$
In particular, $b=n+a$, so taking the limit as $n\to\infty$ is equivalent to taking the limit as $b\to\infty$ with $a$ fixed.
$\begin{split}\frac{||\bm{h}_{0}^{\prime\prime}||}{||\bm{h}^{\prime}_{0}||}&=\frac{||-\sin(t)-4\sin(2t)||}{||\cos(t)+2\cos(2t)||}\\\
&=\frac{||\begin{bmatrix}-\sin(a\Delta t)-4\sin(2a\Delta
t)&\dots&-\sin(b\Delta t)-4\sin(2b\Delta
t)\end{bmatrix}||}{||\begin{bmatrix}\cos(a\Delta t)+2\cos(2a\Delta
t)&\dots&\cos(b\Delta t)+2\cos(2b\Delta t)\end{bmatrix}||}\\\
&=\sqrt{\frac{\sum_{k=a}^{b}(\sin(k\Delta t)+4\sin(2k\Delta
t))^{2}}{\sum_{k=a}^{b}(\cos(k\Delta t)+2\cos(2k\Delta t))^{2}}}\\\
&=\sqrt{\frac{\sum_{k=a}^{b}(\sin^{2}(k\Delta t)+8\sin(k\Delta t)\sin(2k\Delta
t)+16\sin^{2}(2k\Delta t))}{\sum_{k=a}^{b}(\cos^{2}(k\Delta t)+4\cos(k\Delta
t)\cos(2k\Delta t)+4\cos^{2}(2k\Delta t))}}\\\
&=\sqrt{\frac{\sum_{k=a}^{b}(\frac{17}{2}+4\cos(k\Delta
t)-\frac{1}{2}\cos(2k\Delta t)-4\cos(3k\Delta t)-8\cos(4k\Delta
t))}{\sum_{k=a}^{b}(\frac{5}{2}+2\cos(k\Delta t)+\frac{1}{2}\cos(2k\Delta
t)+2\cos(3k\Delta t)+2\cos(4k\Delta t))}}.\end{split}$
In the last step we have used the trigonometric identities $\sin^{2}(a)=\frac{1}{2}(1-\cos(2a))$ and $\cos^{2}(a)=\frac{1}{2}(1+\cos(2a))$, together with the product-to-sum identities for $\sin(a)\sin(b)$ and $\cos(a)\cos(b)$.
Using [82], we have the identity
$\sum_{k=0}^{q}\cos(Bk)=\frac{\sin\big{[}(\frac{q+1}{2})B\big{]}\cos\big{[}(\frac{q}{2})B\big{]}}{\sin(\frac{B}{2})},\quad B\in\mathbb{R},\ q\in\mathbb{N}.$
Differencing two such sums gives, for integers $a\leq b$,
$\sum_{k=a}^{b}\cos(Bk)=\frac{\sin\big{[}(\frac{b+1}{2})B\big{]}\cos\big{[}(\frac{b}{2})B\big{]}-\sin\big{[}(\frac{a}{2})B\big{]}\cos\big{[}(\frac{a-1}{2})B\big{]}}{\sin(\frac{B}{2})},\quad B\in\mathbb{R}.$
Define $g(b)$ and $h(b)$ as the oscillatory (cosine-sum) parts of the numerator and denominator under the radical; applying the identity above term by term gives the closed forms
$\displaystyle g(b)$ $\displaystyle=\sum_{k=a}^{b}(4\cos(k\Delta
t)-\frac{1}{2}\cos(2k\Delta t)-4\cos(3k\Delta t)-8\cos(4k\Delta t))$
$\displaystyle=\frac{4(\sin[(b+1)(\frac{\Delta t}{2})]\cos[b(\frac{\Delta
t}{2})]-\sin[a(\frac{\Delta t}{2})]\cos[(a-1)(\frac{\Delta
t}{2})])}{\sin[\frac{\Delta t}{2}]}$ $\displaystyle-\frac{\sin[(b+1)\Delta
t]\cos[b\Delta t]-\sin[a\Delta t]\cos[(a-1)\Delta t]}{2\sin[\Delta t]}$
$\displaystyle-\frac{4(\sin[(b+1)(\frac{3\Delta t}{2})]\cos[b(\frac{3\Delta
t}{2})]-\sin[a(\frac{3\Delta t}{2})]\cos[(a-1)(\frac{3\Delta
t}{2})])}{\sin[\frac{3\Delta t}{2}]}$
$\displaystyle-\frac{8(\sin[(b+1)(2\Delta t)]\cos[b(2\Delta t)]-\sin[a(2\Delta
t)]\cos[(a-1)(2\Delta t)])}{\sin[2\Delta t]}$ $\displaystyle h(b)$
$\displaystyle=\sum_{k=a}^{b}(2\cos(k\Delta t)+\frac{1}{2}\cos(2k\Delta
t)+2\cos(3k\Delta t)+2\cos(4k\Delta t))$
$\displaystyle=\frac{2(\sin[(b+1)(\frac{\Delta t}{2})]\cos[b(\frac{\Delta
t}{2})]-\sin[a(\frac{\Delta t}{2})]\cos[(a-1)(\frac{\Delta
t}{2})])}{\sin[\frac{\Delta t}{2}]}$ $\displaystyle+\frac{\sin[(b+1)\Delta
t]\cos[b\Delta t]-\sin[a\Delta t]\cos[(a-1)\Delta t]}{2\sin[\Delta t]}$
$\displaystyle+\frac{2(\sin[(b+1)(\frac{3\Delta t}{2})]\cos[b(\frac{3\Delta
t}{2})]-\sin[a(\frac{3\Delta t}{2})]\cos[(a-1)(\frac{3\Delta
t}{2})])}{\sin[\frac{3\Delta t}{2}]}$
$\displaystyle+\frac{2(\sin[(b+1)(2\Delta t)]\cos[b(2\Delta t)]-\sin[a(2\Delta
t)]\cos[(a-1)(2\Delta t)])}{\sin[2\Delta t]}.$
Since $g(b)$ and $h(b)$ are bounded in $b$ (each term is a ratio of bounded trigonometric expressions, provided the denominators $\sin(\frac{\Delta t}{2})$, $\sin(\Delta t)$, $\sin(\frac{3\Delta t}{2})$, and $\sin(2\Delta t)$ are nonzero), we have
$\lim_{b\to\infty}\frac{g(b)}{b}=0\quad\text{and}\quad\lim_{b\to\infty}\frac{h(b)}{b}=0.$
Using this fact, we obtain
$\begin{split}\lim_{b\to\infty}\frac{2||\bm{h}_{0}^{\prime\prime}||}{||\bm{h}^{\prime}_{0}||}&=2\lim_{b\to\infty}\sqrt{\frac{\sum_{k=a}^{b}\frac{17}{2}+\sum_{k=a}^{b}(4\cos(k\Delta
t)-\frac{1}{2}\cos(2k\Delta t)-4\cos(3k\Delta t)-8\cos(4k\Delta
t))}{\sum_{k=a}^{b}\frac{5}{2}+\sum_{k=a}^{b}(2\cos(k\Delta
t)+\frac{1}{2}\cos(2k\Delta t)+2\cos(3k\Delta t)+2\cos(4k\Delta t))}}\\\
&=2\lim_{b\to\infty}\sqrt{\frac{\frac{17}{2}(b-a+1)+g(b)}{\frac{5}{2}(b-a+1)+h(b)}}\\\
&=2\lim_{b\to\infty}\sqrt{\frac{\frac{17}{2}-\frac{17a}{2b}+\frac{17}{2b}+\frac{g(b)}{b}}{\frac{5}{2}-\frac{5a}{2b}+\frac{5}{2b}+\frac{h(b)}{b}}}\\\
&=2\sqrt{\frac{17}{5}}.\end{split}$
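The limit is easy to confirm numerically. The sketch below (in `R`, with illustrative values $a=0$ and $\Delta t=0.01$, which are our assumptions rather than values fixed in the paper) evaluates the ratio directly from the finite sums above; it approaches $2\sqrt{17/5}\approx 3.6878$ as $b$ grows.

```r
# Numerical check of lim_{b -> Inf} 2 * ||h0''|| / ||h0'|| = 2 * sqrt(17/5);
# a = 0 and dt = 0.01 are illustrative choices only.
ratio <- function(a, b, dt) {
  k <- a:b
  num <- sum((sin(k * dt) + 4 * sin(2 * k * dt))^2)   # ||h0''||^2
  den <- sum((cos(k * dt) + 2 * cos(2 * k * dt))^2)   # ||h0'||^2
  2 * sqrt(num / den)
}
sapply(c(1e2, 1e3, 1e4, 1e5), function(b) ratio(a = 0, b = b, dt = 0.01))
2 * sqrt(17 / 5)   # limiting value, approximately 3.687818
```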
## Appendix C Structured HAVOK (sHAVOK) algorithms
Here we present pseudocode for the sHAVOK algorithms with and without forcing terms. In both, $n$ denotes the number of columns of the Hankel matrix $\bm{H}$, and $\text{SVD}(\cdot,r)$ the rank-$r$ truncated singular value decomposition.
Algorithm 1 Structured HAVOK (sHAVOK) without forcing
Input: Measured signal $x(t)$, number of delays $m$, and rank $r$ of the Hankel matrix.
Output: Dynamics matrix $\hat{\bm{A}}\in\mathbb{R}^{r\times r}$.
$\bm{H}:=\text{Hankel}(x(t),m)$
$\bm{H}_{1}:=\bm{H}[:,1:n-1]$
$\bm{H}_{2}:=\bm{H}[:,2:n]$
$\bm{U}_{1}\bm{\Sigma}_{1}\bm{V}_{1}^{\intercal}:=\text{SVD}(\bm{H}_{1},r)$
$\bm{U}_{2}\bm{\Sigma}_{2}\bm{V}_{2}^{\intercal}:=\text{SVD}(\bm{H}_{2},r)$
$\hat{\bm{A}}:=\bm{V}_{2}^{\intercal}\bm{V}_{1}$
Algorithm 2 Structured HAVOK (sHAVOK) with forcing
Input: Measured signal $x(t)$, number of delays $m$, and rank $r$ of the Hankel matrix.
Output: Dynamics matrix $\hat{\bm{A}}\in\mathbb{R}^{(r-1)\times(r-1)}$ and forcing term $\hat{\bm{B}}\in\mathbb{R}^{r-1}$.
$\bm{H}:=\text{Hankel}(x(t),m)$
$\bm{H}_{1}:=\bm{H}[:,1:n-1]$
$\bm{H}_{2}:=\bm{H}[:,2:n]$
$\bm{U}_{1}\bm{\Sigma}_{1}\bm{V}_{1}^{\intercal}:=\text{SVD}(\bm{H}_{1},r)$
$\bm{U}_{2}\bm{\Sigma}_{2}\bm{V}_{2}^{\intercal}:=\text{SVD}(\bm{H}_{2},r-1)$
$[\hat{\bm{A}},\hat{\bm{B}}]:=\bm{V}_{2}^{\intercal}\bm{V}_{1}$
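A minimal executable sketch of Algorithm 1 follows (in `R`; the function and variable names are ours, and base `svd` with `nv = r` stands in for the rank-$r$ truncated SVD of the pseudocode).

```r
# Sketch of sHAVOK without forcing (Algorithm 1); names are illustrative.
shavok <- function(x, m, r) {
  n  <- length(x) - m + 1                                 # Hankel columns
  H  <- sapply(seq_len(n), function(j) x[j:(j + m - 1)])  # m x n Hankel matrix
  H1 <- H[, 1:(n - 1)]                                    # H_1 = H[:, 1:n-1]
  H2 <- H[, 2:n]                                          # H_2 = H[:, 2:n]
  V1 <- svd(H1, nu = 0, nv = r)$v   # rank-r right singular vectors of H_1
  V2 <- svd(H2, nu = 0, nv = r)$v   # rank-r right singular vectors of H_2
  t(V2) %*% V1                      # A_hat = V_2^T V_1
}
tt <- seq(0, 20, by = 0.01)
A_hat <- shavok(sin(tt) + sin(2 * tt), m = 40, r = 4)
```

Reading Algorithm 2 the same way, the variant with forcing only differs in truncating the second SVD at rank $r-1$ and splitting the $(r-1)\times r$ matrix $\bm{V}_{2}^{\intercal}\bm{V}_{1}$ into its first $r-1$ columns ($\hat{\bm{A}}$) and its last column ($\hat{\bm{B}}$).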
*Giovanna Menardi,
via C. Battisti 241, 35121 Padova, Italy
# Nonparametric clustering for image segmentation
Giovanna Menardi Department of Statistical Sciences, University of Padova,
Padova, Italy<EMAIL_ADDRESS>
###### Abstract
Image segmentation aims at identifying regions of interest within an image by grouping pixels according to their properties. This task resembles the statistical one of clustering, yet many standard clustering methods fail to meet the basic requirements of image segmentation: segments are often biased toward predetermined shapes, and their number is rarely determined automatically. Nonparametric clustering is, in principle, free from these limitations and turns out to be particularly suitable for image segmentation, as also witnessed by several operational analogies, such as the use of topological data analysis and spatial tessellation in both frameworks.
We discuss the application of nonparametric clustering to image segmentation
and provide an algorithm specific for this task. Pixel similarity is evaluated
in terms of density of the color representation and the adjacency structure of
the pixels is exploited to introduce a simple, yet effective method to
identify image segments as disconnected high-density regions. The proposed
method works both to segment an image and to detect its boundaries and can be
seen as a generalization to color images of the class of thresholding methods.
A revised version of this manuscript appeared as: Menardi, G. (2020), Nonparametric clustering for image segmentation, Statistical Analysis and Data Mining: The ASA Data Science Journal, 13(1), pp. 83–97.
###### keywords:
Image Segmentation, Kernel smoothing, Mode, Nonparametric Density Estimation
## 1 Introduction and motivation
In recent years, the need to analyze large amounts of image information has become relevant in several contexts and applications. Daily examples include medical diagnosis based on X-ray or magnetic resonance images, video surveillance, geographic information system applications, and image tagging. A possible goal of image analysis is _segmentation_, the automatic process of identifying salient regions and single objects in an image, with the purpose of content retrieval, object detection or recognition, occlusion boundary detection, image compression, or editing.
Digital images are created by a variety of input devices, such as cameras or scanners, and they usually have a fixed resolution, _i.e._ they are represented by a fixed number of digital values, known as _pixels_. Pixels are the smallest individual elements of an image, holding quantized values that represent the brightness of a given color at a specific location of the image. When an image is segmented, a label is assigned to each pixel, so that pixels with the same label share similar characteristics in terms of color, intensity, or texture.
This task closely recalls the aim of cluster analysis, and clustering methods have thereby been featured as a standard tool to segment images. Within this framework, an approach which naturally lends itself to the task of image segmentation is known as _nonparametric_ or _modal_ clustering. With respect to most clustering methods, which rely on heuristic ideas of similarity between objects, the nonparametric formulation claims a sounder theoretical ground, since the definition of a precise notion of cluster allows for a “ground truth” to aim at when evaluating a clustering configuration or comparing alternatives. Specifically, a probability density function is assumed to underlie the data, and clusters are defined as the domains of attraction of the modes of the density function, estimated nonparametrically.
The correspondence between clusters and regions around the modes of the data distribution entails several reasons of attractiveness. First, the number of clusters is an intrinsic property of the data-generating mechanism, thereby conceptually well defined, and its determination is itself an integral part of the clustering procedure. Additionally, modal regions comply with the geometric intuition about the notion of cluster, also because they are not bound to any particular shape. These reasons make nonparametric clustering particularly suitable for the segmentation of digital images, as segments should be allowed to assume arbitrary shapes and an automatic determination of their number is desirable.
In this work the use of nonparametric clustering for image segmentation is
discussed. Pixel similarity is evaluated in terms of density of the color
representation and the adjacency structure of the pixels is exploited to
introduce a simple method to assess the connectedness of the modal density
regions.
In the following, an overview of nonparametric clustering is provided, along with its connections to methods for image segmentation. Within this framework, a novel method specifically conceived for image segmentation is proposed and discussed, and several applications are illustrated.
## 2 Background
### 2.1 Overview of nonparametric clustering
Figure 1: A section of a density function at a level $\lambda_{0}$ (left); the identified level set, formed by two disconnected regions (middle panel); and the associated cluster tree, with leaves corresponding to the modes and a horizontal line at the level $\lambda_{0}$ (right panel).
Nonparametric, or modal, clustering hinges on the assumption that the observed
data $(x_{1},\ldots,x_{n})^{\prime}$ are a sample from a probability density
function $f:\mathbb{R}^{d}\mapsto\mathbb{R}^{+}$. The modes of $f$ are
regarded as the archetypes of the clusters, which are in turn represented by
the surrounding regions. The identification of the modal regions is performed along two alternative directions. One strand of methods looks for an explicit representation of the modes of the density and associates each cluster with the set of points along the steepest ascent path towards a mode. Optimization methods, such as the _mean-shift_ algorithm 13 and a number of its variants 9, 30, 5, are applied to find the local maxima of the density. We
consider, alternatively, a second strand, which associates the clusters to
disconnected density level sets of the sample space, without attempting the
explicit task of mode detection. Specifically, a section of $f$ at a given
level $\lambda$ singles out the (upper) level set
$L(\lambda)=\\{x\in\mathbb{R}^{d}:f(x)\geq\lambda\\},\quad
0\leq\lambda\leq\max f$
which may be connected or disconnected. In the latter case, it consists of a
number of connected components, each of them associated with a cluster at the
level $\lambda$.
While there may not exist a single $\lambda$ which catches all the modal regions, any connected component of $L(\lambda)$ includes at least one mode of the density and, on the other hand, for each mode there exists some $\lambda$ for which one of the connected components of the associated $L(\lambda)$ includes that mode only. Hence, not only is it unnecessary to fix a specific level $\lambda$ to identify the groups, which would be difficult and often ineffective in providing the overall number of modes, but, conversely, all the modal regions may be detected by identifying the connected components of $L(\lambda)$ for different values of $\lambda$. Varying $\lambda$ along its range
gives rise to a hierarchical structure of the high-density sets, known as the
_cluster tree_. For each $\lambda$, it provides the number of connected
components of $L(\lambda),$ and each of its leaves corresponds to a _cluster
core_ , _i.e._ the largest connected component of $L(\lambda)$ including one
mode only. Figure 1 illustrates a simple example of this idea: cluster cores
associated with the highest modes $2$ and $3$ are identified by the smallest
$\lambda$ larger than $\lambda_{3}$, while the smallest $\lambda$ larger than
$\lambda_{1}$ identifies the cluster core associated with mode $1$. In some sense, the cluster tree provides a tool to inspect data with different degrees of sharpness: clusters 2 and 3 are distinct, but they merge together to create a lower-resolution cluster. Instead of indexing the tree with $\lambda,$ it is equivalent to consider the probability associated with $L(\lambda)$, which varies inversely with $\lambda$. For this reason the tree is usually plotted upside down, and this convention will also be followed in the rest of the paper.
From the operational point of view, two choices are required to implement the
ideas underlying nonparametric clustering. Since $f$ is unknown, a
nonparametric estimator $\hat{f}$ is employed to obtain its representation. A
common choice for multidimensional data is the product kernel estimator (see, e.g., 22):
$\hat{f}(x;h)=\sum_{i=1}^{n}\frac{1}{nh_{1}\cdot\ldots\cdot
h_{d}}\prod_{j=1}^{d}K\left(\frac{x^{(j)}-x_{i}^{(j)}}{h_{j}}\right)$ (1)
where $x^{(j)}$ is the $j$-th component of $x$, the univariate kernel $K$ is usually taken to be a unimodal, nonnegative function centered at zero and integrating to one, and a different smoothing parameter $h_{j}$ is chosen for each component. In fact, for the development of the method, it does not really matter which specific estimator is adopted, provided that $\hat{f}$ is positive and finite at the observed points.
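For concreteness, a direct (quadratic-time) sketch of the estimator (1) with a Gaussian kernel is given below; it is meant only to make the formula operational, not as the implementation used in the paper.

```r
# Sketch of the product kernel estimator (1) with Gaussian kernel K.
# X: n x d data matrix; x: evaluation point of length d; h: d bandwidths.
product_kde <- function(x, X, h) {
  u <- sweep(sweep(X, 2, x, "-"), 2, h, "/")   # (x_i^(j) - x^(j)) / h_j
  mean(apply(dnorm(u), 1, prod)) / prod(h)     # average of kernel products
}
set.seed(1)
X <- cbind(rnorm(500), rnorm(500, mean = 3))   # toy two-dimensional sample
product_kde(c(0, 3), X, h = c(0.4, 0.4))       # density estimate at (0, 3)
```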
A second choice derives from the lack, in multidimensional sample spaces, of an obvious method to identify the connected components of a level set. For this reason, the literature has mainly focused on developing efficient methods for this task (_e.g._ 25, 18).
Note that the union of the cluster cores does not produce a partition of the sample space, as regions at the tails or at the valleys of $f$, where the attraction of each mode is low, are initially left unallocated. However, the availability of a density measure allows each unallocated observation to be provided with a degree of confidence of belonging to the cluster cores. Depending on the application, the evaluation of such confidence may be exploited to force the assignment or may open the way to fuzzy clustering schemes.
### 2.2 Related works on image segmentation
Owing also to the extensiveness of its applicability, several different methods have been proposed to pursue the task of image segmentation. These are broadly ascribable to two alternative routes (see, for a review, 11): _noncontextual_ techniques ignore the relationships among the features in the image and assign the same label to pixels sharing some global attribute, such as the gray level or the color brightness. _Thresholding_, for instance, compares the intensity of each pixel with a suitable threshold and associates higher values with the foreground of the image, of main interest, and lower values with the background. Recent contributions within this framework are 1, 12. _Contextual_ techniques, conversely, also account for pixel location or color gradient. Within this
conversely, also account for pixel location or color gradient. Within this
class, _region-based methods_ mainly rely on the assumption that the
neighboring pixels within one region have similar value see, e.g. 8, 16, 19.
Boundary-based methods as _edge detection_ and _active contours_ build on
finding pixel differences rather than similarities, to determine a closed
boundary between the foreground and the background of the image. Examples of
recent procedures following these approaches are 15, 10 and, respectively, 29,
31. _Watershed_ segmentation builds a distance map of a gray-scale image or of its gradient and regards it as a topographic relief, to be flooded from its minima. When two lakes merge, a dam is built, representing the boundary between two segments (e.g. 23).
Within the framework of clustering methods, $K$-means clustering is widely used for image segmentation 26, perhaps due to its simplicity, but a few severe limitations hamper its effectiveness. First, $K$-means clustering is known to produce sub-optimal solutions, as it highly depends on the initialization of the centroids. Additionally, it requires a prior specification of the number of clusters. In image segmentation this operation is undoubtedly easier than in other clustering applications; on the other hand, the need for human intervention undermines the effort to automate the segmentation procedure. Finally, $K$-means is known to be biased toward the identification of spherical clusters, which can be restrictive for image data, where segments may assume arbitrarily odd shapes.
While nonparametric clustering is rarely mentioned as the underlying approach to perform image analysis, it features some connections with a number of segmentation algorithms. By exploiting some notions from Morse theory, 7 provides an elegant formalization of the notion of modal (nonparametric) cluster which closely recalls the ideas of watershed segmentation: intuitively, if the density underlying the data is pictured as a mountainous landscape, and modes are its peaks, clusters are the “regions that would be flooded by a fountain emanating from a peak of the mountain range”. Furthermore, segmentation _thresholding_ is often performed by looking for the minimum of the histogram of the grey intensities, i.e. the antimode of a histogram-based density estimator of the grey intensities. Since any antimode lies between two modes, the approach can be interpreted as a naive, single-$\lambda$, implementation of the density level set formulation mentioned above, where gray intensities are employed as a measure of density. Although without a specific reference to nonparametric clustering, gradient ascent algorithms in the guise of the mean-shift have also been applied to image segmentation 27, 17, 32. Indeed, by climbing the modes of a kernel density estimate, the mean-shift is rightly ascribable to the class of nonparametric clustering methods. Similar instruments are also at the basis of active contour models, where a suitable measure of energy is iteratively minimized by a gradient descent algorithm to identify the segment contours.
As a further link, even when applied to different goals, image analysis and nonparametric clustering share several tools: an example is provided by spatial tessellations, such as the Voronoi and Delaunay diagrams, which have been used in nonparametric clustering to identify the connected components of density level sets 3 and are frequently employed in image analysis for thinning and skeletonization.
## 3 A nonparametric method for image segmentation
### 3.1 Description of the procedure
Let $\mathcal{I}=\\{p_{1},\ldots,p_{n}\\}$ be a digital image, where each pixel of the ordered set $p_{i}=((x_{i},y_{i}),z_{i}),i=1,\ldots,n,$ is described by the pair $(x_{i},y_{i})$, denoting the coordinates of the pixel location, and by the vector $z_{i}$ of values in the adopted color model, _e.g._ $z_{i}=(z_{i}^{(r)},z_{i}^{(g)},z_{i}^{(b)})$ in the RGB color representation 24. In grayscale images, $z_{i}$ is a scalar quantifying the gray intensity. The particularization of nonparametric clustering to the framework of image analysis requires a density function to be defined at the pixels. A sensible choice builds $\hat{f}$ on the color coordinates $z_{i}$. The specification of (1) is then:
$\hat{f}(z)=\frac{1}{nh_{r}h_{g}h_{b}}\sum_{i=1}^{n}K\left(\frac{z^{(r)}-z_{i}^{(r)}}{h_{r}}\right)K\left(\frac{z^{(g)}-z_{i}^{(g)}}{h_{g}}\right)K\left(\frac{z^{(b)}-z_{i}^{(b)}}{h_{b}}\right).$
(2)
For example, if the Uniform kernel $K(u)=\tfrac{1}{2}\mathbf{1}_{\\{|u|<1\\}}$
is selected and $h_{j}\to 0$, each pixel is provided with a density
proportional to the frequency of its color in the image. Similar
interpretations hold with different kernel functions.
Consider, as an example, the image in the top panel of Figure 2. For the sake of illustration, the image has been built by using colors entirely characterized by the red and blue RGB channels (_i.e._ each pixel has the green channel set to 0). The image is clearly composed of 4 segments, each featured by a main color pattern appearing in different shades. The bottom panel displays the color intensities of each pixel, disregarding their location within the image. Pixels sharing a similar color have similar red and blue intensities, and cluster together in different areas of the plane. Hence, the color density estimate exhibits 4 modes, associated with the 4 color patterns.
As will be discussed in Section 3.2, an alternative to (2) would also account for the spatial coordinates of the pixels.
Once the color density has been estimated, the associated upper level
sets
$\hat{L}(\lambda)=\\{p_{i}\in\mathcal{I}:\hat{f}(z_{i})\geq\lambda\\}\qquad
0\leq\lambda<\max\hat{f}$
are easily determined for a grid of $\lambda$ values. The next step is the identification of the connected components of the $\hat{L}(\lambda)$’s. Unlike the above-mentioned case of clustering
data on $\mathbb{R}^{d}$, where the identification of connected regions is
ambiguous, the notion of connectedness is (almost) unequivocally determined in
the image framework, due to the spatial structure of the pixels. This
justifies the procedure here proposed, which builds on the level set
formulation of nonparametric clustering but naturally exploits the adjacency
structure of the pixels to identify the connected components of the modal
regions. For a given $\lambda$, the connected components of $\hat{L}(\lambda)$
are approximated as follows:
* (i)
for each pixel of $\hat{L}(\lambda)$, identify the adjacent pixels forming its
_4-neighbourhood_ , _i.e._ a central pixel has four connected neighbors - top,
bottom, right and left.
* (ii)
approximate the connected components of $\hat{L}(\lambda)$ by the union of
adjacent pixels in $\hat{L}(\lambda)$.
Hence, in order to identify a segment, not only must the pixels share a similar color, but a constraint of spatial connectedness is also imposed by the procedure. For example, if a black spot were embedded in the violet segment of the toy image of Figure 2, the associated pixels would contribute to form (and hence raise) the mode of the density at the bottom corner of the lowermost panel of the Figure. However, at a level $\lambda$ at which the black pixels have high enough density, $\hat{L}(\lambda)$ would be disconnected, being formed by both the pixels of the black square already in the image and the pixels of the embedded, non-adjacent, black spot.
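Steps (i) and (ii) can be made concrete with a short sketch: given a logical matrix marking which pixels belong to $\hat{L}(\lambda)$, a stack-based flood fill over 4-neighbourhoods labels the connected components. The implementation below is our own illustration, not the routine used in the empirical work.

```r
# Sketch: label the 4-connected components of a level set L_hat(lambda).
# inL: logical matrix, TRUE where fhat(z_i) >= lambda.
connected_components <- function(inL) {
  lab <- matrix(0L, nrow(inL), ncol(inL))
  cur <- 0L
  for (i in seq_len(nrow(inL))) for (j in seq_len(ncol(inL))) {
    if (inL[i, j] && lab[i, j] == 0L) {
      cur <- cur + 1L
      stack <- list(c(i, j))                     # seed a new component
      while (length(stack) > 0) {
        q <- stack[[length(stack)]]
        stack[[length(stack)]] <- NULL           # pop
        if (q[1] < 1 || q[1] > nrow(inL) ||
            q[2] < 1 || q[2] > ncol(inL)) next   # outside the image
        if (!inL[q[1], q[2]] || lab[q[1], q[2]] != 0L) next
        lab[q[1], q[2]] <- cur                   # label, then push the
        stack <- c(stack,                        # 4-neighbourhood
                   list(q + c(1L, 0L)), list(q - c(1L, 0L)),
                   list(q + c(0L, 1L)), list(q - c(0L, 1L)))
      }
    }
  }
  lab   # 0 = outside L_hat(lambda); 1, ..., cur = component labels
}
```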
Algorithm 1 Nonparametric density-based segmentation: main steps of the
procedure
1:$h=(h_{r},h_{g},h_{b})$; $K(\cdot)$; $\epsilon$; Class $\in$ {TRUE, FALSE} (set Class := FALSE to leave pixels outside the segment cores unallocated)
2:Identify the set $N_{4}(p_{i})$ of pixels in the 4-neighborhood of
$p_{i},i=1,\ldots,n$
3:Compute
$\hat{f}(z)=\sum_{i=1}^{n}\frac{1}{nh_{r}h_{g}h_{b}}\prod_{j=\\{r,g,b\\}}K\left(\frac{z^{(j)}-z_{i}^{(j)}}{h_{j}}\right)$,
$\quad\forall z\in\\{z_{i}\\}_{i=1,\ldots,n}$
4:Set $\lambda:=0$
5:while $0\leq\lambda\leq\max\hat{f}$ do
6: identify $\hat{L}(\lambda)=\\{p_{i}:\hat{f}(z_{i})\geq\lambda\\}$;
7: find the connected components of $\hat{L}(\lambda)$ (as the union of
adjacent pixels in $\hat{L}(\lambda)$)
8: $\lambda:=\lambda+\epsilon$
9:end while
10:Build the hierarchy of the connected components of the $\hat{L}(\lambda)$’s and obtain the cluster tree
11:Denote core pixels as $p_{c}$ and unallocated pixels as $p_{u}$
12:Assign the label $\ell_{c}\in\\{1,\ldots,M\\}$ to each core pixel $p_{c}$
13:Set Isolated:= $\emptyset$
14:if Class = TRUE then
15: while $\\{p_{u}\\}\neq\emptyset\,\,\wedge$ {Isolated} $\neq\\{p_{u}\\}$ do
16: Set Isolated:= $\emptyset$
17: for all $p_{u}$ do
18:
$\hat{f}_{m}(z_{u})=\sum_{c:\ell_{c}=m}\frac{1}{nh_{r}h_{g}h_{b}}\prod_{j=\\{r,g,b\\}}K\left(\frac{z^{(j)}-z_{c}^{(j)}}{h_{j}}\right)$,
$m=1\ldots,M$
19: set $m_{0}:=\mbox{argmax}_{m}\hat{f}_{m}(z_{u})$
20: if $\exists$ $p_{c}$ such that ($p_{c}$ $\in$
$N_{4}(p_{u})\wedge\ell_{c}=m_{0}$) then
21: assign the label $\ell_{c}:=m_{0}$ to $p_{u}$
22: else
23: Isolated := Isolated $\cup\,\\{p_{u}\\}$
24: end if
25: end for
26: update $\\{p_{u}\\}$ and $\\{p_{c}\\}$
27: end while
28:else
29: set $\ell_{u}:=0$
30:end if
31:RETURN: $\ell_{1},\ldots,\ell_{n}$
For varying $\lambda$, the procedure described so far creates $M$ groups of
pixels $\ell_{m}$ ($m=1,\ldots,M$), which we call, in analogy with the
clustering problem, (segment) _cores_ , and it leaves a number of pixels
unlabeled. Depending on the application at hand, we can either decide to force
their assignment to the existing segment cores or to leave these pixels
unallocated. In fact, a peculiar aspect is that the unlabeled points are not positioned randomly in the image, but lie inevitably on the outskirts of the existing segment cores. As will be illustrated in Section 4, unallocated pixels include (or correspond to) the contours of the segments.
The possible allocation of the unlabeled pixels to the existing groups is essentially a classification problem that may be addressed according to a wide range of choices. To remain within the same kind of approach pursued so far,
and consistently with the purpose of identifying segments as connected sets of
pixels, we propose the following idea to classify an unallocated pixel
$p_{u}$: compute the $M$ estimated densities $\hat{f}_{m}(z_{u})$, each based
on the pixels already assigned to the $m^{th}$ core only ($m=1,\ldots,M)$;
then, set
$m_{0}=\mbox{argmax}_{m}\hat{f}_{m}(z_{u})$ (3)
and assign $p_{u}$ to the group with label $m_{0}$ provided that at least one
of the pixels already assigned to the $m_{0}^{th}$ segment core is adjacent to
$p_{u}$. The operational implementation of this idea is here performed in a
sequential manner, as detailed in the reported pseudo-code, along with the
main steps of the whole segmentation procedure.
It may be the case that none of the pixels already assigned to the segment with maximum density (3) is adjacent to $p_{u}$ (_i.e._ the color of $p_{u}$ is not similar to the color of any adjacent segment). These pixels usually lie at the borders of the segments and may either be left unallocated, or their assignment can be forced to the highest-density segment, disregarding spatial adjacency, or to a novel further segment.
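A sketch of this allocation step for a single unallocated pixel is given below (our own illustration; note that, as in step 18 of the pseudocode, each $\hat{f}_{m}$ sums over the pixels of the $m$-th core, so it is proportional to the sum of kernel weights rather than to their average).

```r
# Sketch of allocation rule (3): assign the colour z_u of an unallocated
# pixel to the core maximising f_m(z_u), provided a 4-neighbour of the
# pixel already carries that label.
allocate_pixel <- function(z_u, cores, h, neighbour_labels) {
  # cores: list of matrices, one per core; rows = colours of core pixels
  f_m <- sapply(cores, function(Z) {
    u <- sweep(sweep(Z, 2, z_u, "-"), 2, h, "/")
    sum(apply(dnorm(u), 1, prod))   # proportional to f_m(z_u) in step 18
  })
  m0 <- which.max(f_m)
  if (m0 %in% neighbour_labels) m0 else NA_integer_  # NA: still unallocated
}
```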
Figure 2: A 4-segment toy image entirely characterized by red and blue channels (top); red and blue intensities of the pixels and, superimposed, their kernel density estimate, highlighting 4 clear modes corresponding to the 4 segments (bottom).
Figure 3: A simple gray-scale image (left), and the estimate of the color density superimposed on the spatial coordinates of the pixels, represented both as a level set plot (middle panel) and as a perspective plot (right panel).
### 3.2 Discussion
Since the procedure illustrated so far accounts for both the colors and the connectivity of the image patterns, it emulates, in some sense, the behavior of the human eye, which instinctively perceives different segments in a picture as either disconnected sets of pixels or image patterns with a diverse color intensity. A simple illustration of this latter aspect is witnessed by the grayscale image in Figure 3. Even if the gray intensities of the foreground (the inner square) and the background are similar, the density estimator (2) perfectly distinguishes the two density levels and the isoline identifies the contours of the foreground segment. Conversely, with respect to the former aspect, a major limitation of nonparametric clustering, in principle inherited by the proposed segmentation procedure, derives from the definition of mode itself, which requires a separating “gap” between dense patterns. In Figure 3, the density looks like a squared hole, and there is no lower-density area between the background and the foreground. This prevents $\hat{L}(\lambda)$ from being a disconnected set for any $\lambda$, which would guarantee the identification of two segments. This behavior is somewhat
paradoxical, as the neater the image, the less ideal the setting for the procedure to work effectively: within an image, dripped contours of a segment manifest themselves as small changes of color at the borders with respect to the interior. Since the perimeter of a shape is always smaller than its area, and the density of a pixel is positively associated with the frequency of its color, dripped contours would guarantee that the color density along the contour of a segment is lower than its inner density, and hence a valley would arise between a segment and its background. In fact, the considered example has been built ad hoc by setting the gray intensity of each pixel. In practice, many images have segment contours not defined with such neatness, no matter what the image resolution is. This is especially true for segments having either curved or sloped contours, since the shades of color along the borders of the segments help prevent a sawtoothed rendering. See Figure 4 for an illustration.
Figure 4: A simple gray-scale image (left), and a zoomed detail (middle), showing that when a segment has either curved or sloped contours, the color is dripped along the border to prevent a sawtoothed rendering. Due to this feature, the perspective plot of the density estimate based on (2) highlights a valley at the border of the foreground (right).
When the image does not feature dripped contours, it is possible to overcome the issue of lacking valleys in the density by introducing some blurring of the image. To this aim, given that the identification of pixel neighbors is required anyway for the identification of disconnected regions, a simple strategy is to replace each pixel value with an average of the values in its neighborhood. In fact, a quite common practice prior to thresholding segmentation is to smooth an image using an averaging or median mask (see 11, §10.2.1).
A further, somewhat related, issue concerns the choice of the density measure.
While we choose to build $\hat{f}$ based on the color intensities only, an
alternative route would consist in exploiting the whole available information
in terms of both the color coordinates and the spatial coordinates, _i.e._ :
$\hat{f}(p)=\frac{1}{n\prod_{j}h_{j}}\sum_{i=1}^{n}K\left(\frac{z^{(r)}-z_{i}^{(r)}}{h_{r}}\right)K\left(\frac{z^{(g)}-z_{i}^{(g)}}{h_{g}}\right)K\left(\frac{z^{(b)}-z_{i}^{(b)}}{h_{b}}\right)K\left(\frac{x-x_{i}}{h_{x}}\right)K\left(\frac{y-y_{i}}{h_{y}}\right).$
(4)
Note that, depending on the choice of both the kernel and the bandwidth, (4) can be a function of the distance between $(x,y)$ and $(x_{i},y_{i})$. Let, for instance, $h_{x}=h_{y}=1.$ Then the last two factors are proportional to $e^{-d_{2}((x,y),(x_{i},y_{i}))^{2}/2}$ or to $e^{-d_{1}((x,y),(x_{i},y_{i}))}$ when a Gaussian or, respectively, a Laplace kernel is selected, with $d_{p}(\cdot,\cdot)$ the $L_{p}$ distance.
Estimating the density as in (4) would also overcome the above-mentioned problem of lacking valleys at the borders of the segments: in (2), the largest contribution to the density of a generic pixel is provided by all the pixels having similar colors; conversely, if the spatial coordinates are also involved in $\hat{f}$, the density of a generic pixel depends on pixels with similar colors _and_ spatially close. Hence, at the borders of a segment, where part of the adjacent pixels have a different color, the density turns out to be lower than at interior pixels (see Figure 5 for an illustration). While this behavior is desirable for the purpose of segmentation, the interpretation of $\hat{f}$ in terms of color frequency fails and, indeed, a higher computational effort is required. Additionally, some empirical work not included in the manuscript has shown that estimating $f$ via (4) results in oversegmenting the image, hence using the color coordinates only to estimate the density is overall preferable.
In fact, two further aspects concerning the estimation of $\hat{f}$ need to be accounted for: the selection of the kernel function $K$ and of the smoothing vector $h$. With respect to the former choice, it is well established that it does not have a strong impact on the density estimate (see, e.g., 28, 6). However, the use of a bounded-support kernel, such as the uniform one (which would be overall more easily interpretable in this context), is in general not advisable for image segmentation, as it entails that, in the evaluation of the density of a pixel, other pixels have weight either constant and positive, or null, depending on $h$ and on the difference between color intensities. In this sense, different hues of the same color could be evaluated as either the same color or as completely different colors. For this reason, and especially when colors are not homogeneous within the image, a longer-tailed kernel is safer. Concerning the choice of $h$, on the other hand, in clustering it is not as critical as in density estimation, since a rough indication of the high-density locations may suffice. To this aim, one may resort to one of the numerous bandwidth selectors proposed in the literature on kernel density estimation, such as the ones based on cross-validation or plug-in strategies. The reader may refer to 6 for a recent review. In any case, both choices are issues to be tackled, and will be the object of empirical investigation in the next section.
As a final note of this section, observe that, since the proposed procedure relies on the definition of a segment as a cluster of connected pixels sharing the same color intensity, it cannot identify segments which configure themselves as isolated pixels within some differently colored background. Depending on the application and on the image resolution, this may or may not represent a limitation. For example, the stars in the image of a starry sky might appear as single isolated yellow pixels if the image has low resolution (_i.e._ it is small), and would then likely need to be identified as small segments. On the other hand, single differently-colored pixels might just be spots or imperfections, which would preferably not be segmented.
Figure 5: Density estimate superimposed on the grayscale image in Figure 3 when $\hat{f}$ is based on both the colors and the spatial coordinates: level set representation (top panel) and perspective plot (bottom panel).
## 4 Empirical work
### 4.1 Simulation study
As a first step of the empirical analysis, we present the results of a compendious simulation study aimed at understanding and systematizing the strengths and the limits of the proposed segmentation method, also with respect to the sensitivity of the density function to different kernels and smoothing amounts. To provide the images with the randomness required to run simulations, yet guaranteeing a given segmentation structure, images are generated as follows: each pixel is associated a priori with a nominal color intensity, which defines the segment membership; then, the actual intensity is randomly drawn from a normal distribution having the nominal intensity as its mean.
Four main simulation settings are considered. Stemming from a reference image with two convex-shaped segments and no clefts (setting A1), one characteristic is changed at a time for each simulation setting, in order to control for its effect on the resulting segmentation. A larger number of smaller segments is considered (setting B1), non-convex shapes (setting C1), as well as the presence of clefts in the image, where the color intensity might mix with the color intensity of neighbouring segments (setting D1). Within each of these settings, different degrees of color homogeneity are simulated by increasing the variance of the normal distribution from which the color intensity of each pixel is drawn (settings A2, B2, C2, D2). Additionally, shaded segment contours are considered, determined by setting the border color intensity at the mean of the nominal intensities of the adjacent segments (settings A3, B3, C3, D3). One example of an image generated from each setting and subsetting is displayed at the top of Tables 1, 2, 3, 4.
Table 1: Simulation results for setting A: each entry displays the Monte Carlo
average of the Adjusted Rand Index (ARI) and of the number of detected
segments (the true number of segments is 2). Standard deviations are reported
in brackets.
 | | A1: benchmark | A2: shaded contours | A3: heterogeneous colors
---|---|---|---|---
Normal K, $h=h_{N}$ | ARI | 0.99 (0.08) | 0.98 (0.03) | 0.91 (0.14)
Normal K, $h=h_{N}$ | # segments | 3.10 (1.54) | 2.92 (1.52) | 2.71 (1.22)
Normal K, $h=0.75h_{N}$ | ARI | 1.00 (0.00) | 0.99 (0.02) | 0.60 (0.20)
Normal K, $h=0.75h_{N}$ | # segments | 2.00 (0.00) | 2.00 (0.09) | 5.60 (2.22)
Normal K, $h=1.25h_{N}$ | ARI | 1.00 (0.09) | 0.96 (0.06) | 0.77 (0.22)
Normal K, $h=1.25h_{N}$ | # segments | 2.00 (0.00) | 2.27 (0.83) | 4.07 (2.22)
Uniform K, $h=h_{N}$ | ARI | 1.00 (0.00) | 0.54 (0.45) | 0.57 (0.21)
Uniform K, $h=h_{N}$ | # segments | 2.00 (0.00) | 1.86 (1.08) | 5.68 (2.06)
Uniform K, $h=0.75h_{N}$ | ARI | 1.00 (0.00) | 0.95 (0.11) | 0.66 (0.23)
Uniform K, $h=0.75h_{N}$ | # segments | 2.00 (0.00) | 2.18 (0.73) | 4.92 (2.33)
Uniform K, $h=1.25h_{N}$ | ARI | 0.55 (0.16) | 0.48 (0.38) | 0.48 (0.16)
Uniform K, $h=1.25h_{N}$ | # segments | 6.72 (1.86) | 2.82 (1.84) | 6.55 (1.78)
Table 2: Simulation results for setting B: each entry displays the Monte Carlo
average of the Adjusted Rand Index (ARI) and of the number of detected
segments (the true number of segments is 9). Standard deviations are reported
in brackets.
 | | B1: benchmark | B2: shaded contours | B3: heterogeneous colors
---|---|---|---|---
Normal K, $h=h_{N}$ | ARI | 0.99 (0.02) | 0.83 (0.04) | 0.87 (0.10)
Normal K, $h=h_{N}$ | # segments | 9.51 (0.70) | 9.31 (0.79) | 8.75 (1.36)
Normal K, $h=0.75h_{N}$ | ARI | 0.99 (0.02) | 0.84 (0.02) | 0.93 (0.06)
Normal K, $h=0.75h_{N}$ | # segments | 9.16 (0.39) | 9.19 (0.44) | 9.95 (1.18)
Normal K, $h=1.25h_{N}$ | ARI | 0.84 (0.13) | 0.78 (0.08) | 0.82 (0.10)
Normal K, $h=1.25h_{N}$ | # segments | 6.99 (1.13) | 9.00 (1.34) | 8.66 (1.53)
Uniform K, $h=h_{N}$ | ARI | 1.00 (0.02) | 0.78 (0.06) | 0.91 (0.07)
Uniform K, $h=h_{N}$ | # segments | 8.98 (0.20) | 9.30 (0.98) | 9.68 (1.23)
Uniform K, $h=0.75h_{N}$ | ARI | 0.65 (0.00) | 0.78 (0.07) | 0.96 (0.04)
Uniform K, $h=0.75h_{N}$ | # segments | 5.00 (0.00) | 8.97 (0.83) | 9.34 (0.64)
Uniform K, $h=1.25h_{N}$ | ARI | 0.94 (0.11) | 0.65 (0.12) | 0.85 (0.09)
Uniform K, $h=1.25h_{N}$ | # segments | 8.48 (1.42) | 7.99 (1.54) | 8.96 (1.50)
For the analyses, the density is estimated via (2), with both Gaussian and uniform kernels. The bandwidth $h$ is selected as the asymptotically optimal solution for data following a normal distribution (in the following, $h=h_{N}$). This selection criterion is, by construction, sub-optimal. Indeed, any assumption of normality of the color distribution of the data cannot hold, since multi-segment images have, in principle, a multimodal density. Nevertheless, this rule of thumb has proved quite effective in clustering applications and has been used in this analysis due to its simplicity and low computational burden. In order to understand the sensitivity of the procedure to different amounts of smoothing, the bandwidth has been both shrunk and enlarged slightly, to the values $h=0.75h_{N}$ and $h=1.25h_{N}$, to evaluate the effect on the segmentation.
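For reference, the normal reference bandwidth can be sketched as follows; the assumed form is the standard $d$-dimensional normal-scale rule for a Gaussian product kernel, and the exact constant used by the original routines may differ.

```r
# Sketch of a normal reference rule h_N for a Gaussian product kernel:
# h_j = sd_j * (4 / ((d + 2) * n))^(1 / (d + 4))   (assumed standard form).
h_normal <- function(X) {
  n <- nrow(X); d <- ncol(X)
  apply(X, 2, sd) * (4 / ((d + 2) * n))^(1 / (d + 4))
}
X   <- matrix(runif(900), ncol = 3)   # stand-in for n x 3 colour coordinates
h_N <- h_normal(X)
rbind(0.75 * h_N, h_N, 1.25 * h_N)    # the three smoothing levels compared here
```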
The agreement between the true and the detected segment memberships has been measured in terms of the number of detected segments and the Adjusted Rand Index 14, which takes increasing values for improved quality of the segmentation (the maximum value $1$ corresponds to a perfect segmentation).
In each simulation setting 500 grayscale images of $320$ pixels are generated, as this keeps the computational burden feasible. All the empirical work has been performed in the `R` environment 21. Images have been handled via the packages `EBImage` 20 and `biOps` 4, while the segmentation routines have been built as adjustments of the clustering routines available in the package `pdfCluster` 2.
Simulation results are summarized in Tables 1, 2, 3, 4. The procedure works in a comprehensively satisfactory way. With a normal kernel, it provides almost perfect segmentations in the simplest settings A1, A2, A3, where results are robust to different amounts of smoothing. In the more challenging settings featuring shaded contours and heterogeneous color segments the performance of the segmentation worsens, yet it remains remarkably good in most of the considered frameworks. Neither non-convex segment shapes nor the presence of many small segments compromises the segmentation. In the latter case, segmentation may worsen for large smoothing, due to the risk of cluster merging when the density is oversmoothed. The greatest challenge is represented by setting D (Table 4), since within interstices of the segments (like the thin frame of the main image) the color intensity mixes with the intensity of the neighbouring segments and prevents edge detection. Increasing the smoothing amount may help reduce the entailed oversegmentation. In fact, this behaviour is exacerbated by the small size of the segmented images, as thin clefts in high-resolution images can anyway be associated with relatively large spatial areas, and the color mixing is not expected to affect the whole of such areas. It shall be noticed, however, that oversegmentation appears as a general tendency of the proposed method, since the number of detected segments is, on average, higher than the true one.
Both choices of the kernel appear appropriate at first sight, and the choice is overall not relevant in the easiest settings. However, the use of a bounded uniform kernel is riskier, as expected, and results show higher variability and high sensitivity to a bad selection of the smoothing amount. Hence, the general preference for unbounded-support kernels is confirmed.
Table 3: Simulation results for setting C: each entry displays the Monte Carlo
average of the Adjusted Rand Index (ARI) and of the number of detected
segments (the true number of segments is 2). Standard deviations are reported
in brackets.
 | | C1: benchmark | C2: shaded contours | C3: heterogeneous colors
---|---|---|---|---
Normal K, $h=h_{N}$ | ARI | 1.00 (0.00) | 1.00 (0.01) | 0.80 (0.20)
Normal K, $h=h_{N}$ | # segments | 2.00 (0.00) | 2.00 (0.00) | 3.87 (2.14)
Normal K, $h=0.75h_{N}$ | ARI | 1.00 (0.00) | 0.94 (0.18) | 0.57 (0.17)
Normal K, $h=0.75h_{N}$ | # segments | 2.00 (0.00) | 2.19 (0.78) | 6.21 (2.02)
Normal K, $h=1.25h_{N}$ | ARI | 1.00 (0.00) | 1.00 (0.01) | 0.67 (0.20)
Normal K, $h=1.25h_{N}$ | # segments | 2.00 (0.00) | 2.00 (0.04) | 5.15 (2.18)
Uniform K, $h=h_{N}$ | ARI | 1.00 (0.00) | 0.98 (0.07) | 0.94 (0.11)
Uniform K, $h=h_{N}$ | # segments | 2.00 (0.00) | 2.13 (0.58) | 2.54 (1.10)
Uniform K, $h=0.75h_{N}$ | ARI | 1.00 (0.00) | 0.97 (0.15) | 0.83 (0.17)
Uniform K, $h=0.75h_{N}$ | # segments | 2.00 (0.00) | 2.01 (0.29) | 3.55 (1.67)
Uniform K, $h=1.25h_{N}$ | ARI | 0.29 (0.32) | 0.59 (0.46) | 0.71 (0.37)
Uniform K, $h=1.25h_{N}$ | # segments | 2.56 (1.55) | 2.02 (1.23) | 2.82 (1.88)
Table 4: Simulation results for setting D: each entry displays the Monte Carlo average of the Adjusted Rand Index (ARI) and of the number of detected segments (the true number of segments is 3). Standard deviations are reported in brackets.
 | | D1: benchmark | D2: shaded contours | D3: heterogeneous colors
---|---|---|---|---
Normal K, $h=h_{N}$ | ARI | 0.49 (0.09) | 0.24 (0.25) | 0.38 (0.07)
Normal K, $h=h_{N}$ | # segments | 7.16 (1.47) | 2.77 (2.01) | 9.28 (1.85)
Normal K, $h=0.75h_{N}$ | ARI | 0.42 (0.09) | 0.12 (0.19) | 0.36 (0.06)
Normal K, $h=0.75h_{N}$ | # segments | 8.94 (1.69) | 2.55 (2.28) | 10.59 (1.72)
Normal K, $h=1.25h_{N}$ | ARI | 0.61 (0.34) | 0.13 (0.20) | 0.39 (0.12)
Normal K, $h=1.25h_{N}$ | # segments | 3.12 (1.97) | 2.93 (2.24) | 7.97 (2.01)
Uniform K, $h=h_{N}$ | ARI | 1.00 (0.00) | 0.28 (0.24) | 0.36 (0.07)
Uniform K, $h=h_{N}$ | # segments | 3.00 (0.00) | 3.60 (2.53) | 9.87 (1.90)
Uniform K, $h=0.75h_{N}$ | ARI | 1.00 (0.00) | 0.42 (0.17) | 0.42 (0.13)
Uniform K, $h=0.75h_{N}$ | # segments | 3.00 (0.00) | 3.70 (2.22) | 7.53 (1.89)
Uniform K, $h=1.25h_{N}$ | ARI | 0.81 (0.26) | 0.39 (0.30) | 0.36 (0.06)
Uniform K, $h=1.25h_{N}$ | # segments | 3.63 (1.88) | 3.72 (2.31) | 10.37 (1.85)
### 4.2 Real images illustration
In this section, some examples are presented to illustrate the proposed procedure in action and to evaluate its effectiveness in identifying the salient regions of more challenging images with varying characteristics. Further examples are presented in the online Supplementary Material. Grayscale and multichannel images are considered, selected as featuring either shaded or neat colors and possibly highlighted contours; both convex- and nonconvex-shaped segments have been analyzed. The procedure is compared here with the performance of some competitors: as benchmark methods, $K$-means clustering is considered, along with thresholding based on the Otsu algorithm and an edge-detection algorithm based on the Sobel filter (see, for details, 11, §10.2.1 and §4.1, respectively). The first method has been given a head start by setting the number of clusters to its true number, as intuitively perceived by the author. This choice is not always obvious, especially for photos or, in general, shaded-color images. The second algorithm assumes that the image contains two classes of pixels (black and white), roughly corresponding to two modal regions of the histogram built on the gray intensities. It calculates the optimal threshold separating the two classes based on the minimization of the intra-class variance. Although it is designed for grayscale images only, thus requiring a prior preprocessing of multi-channel images, it has been considered for comparison as it represents a rough, yet effective, binary variant of the proposed procedure. The third method follows a slightly different logic, as it is aimed at edge rather than segment detection. Segments are then assumed to be the sets of pixels within the linked edges.
A further aim of the empirical work is to investigate the use of the density function as a tool to identify the main features of an image: one question of interest is its ability to detect the edges of the segments. Also, in agreement with the previous section, its sensitivity to different smoothing amounts is evaluated by testing $h\in\\{h_{N},0.75h_{N},1.25h_{N}\\}$. Due to the considerations arising from the simulation study, only a Gaussian kernel has been considered here. Additionally, the ability of the cluster tree to identify a hierarchy of cluster merging that is meaningful with respect to the image has been tested. Finally, the allocation of low-density pixels not belonging to the segment cores has been observed and commented upon.
The top left panels of Figures 6 and 7 display two examples of grayscale images, selected for the analysis due to their different characteristics and different degrees of difficulty in segmentation. Multichannel images are displayed in the top left panels of Figures 8 and 9.
Figure 6: Segmentation results (top row: original image; $K$-means segmentation, $K=5$; thresholding segmentation; Sobel segmentation; nonparametric segmentation, $h=h_{N}$; bottom row: segment cores, $h=h_{N}$; cluster tree, $h=h_{N}$; density contours, $h=h_{N}$; nonparametric segmentation, $h=0.75h_{N}$; nonparametric segmentation, $h=1.25h_{N}$). Segments have been assigned arbitrary colors, except for the thresholding segmentation, where segments are either black or white by construction.
The benchmark procedures work satisfactorily on grayscale and multichannel images. Thresholding is intrinsically limited, as it identifies two segments only by construction. It is, nevertheless, able to reconstruct the broad features of the image, even when it is particularly challenging. See, _e.g._, the famous Einstein grimace in Figure 7, by no means easy to segment, yet very well recognizable even in the binary reconstruction via thresholding. A self-evident limitation occurs when similar color shades characterize adjacent segments, since the dichotomous choice between black and white segments unavoidably determines a corrupted reconstruction of the image. See Figure 6 for an example of this behaviour.
Despite its simplicity, $K$-means behaves very well, accurately reconstructing
most of the images where the segment distinction is unarguable (Figures 6, 8).
On the other hand, the procedure requires the number of segments to be known
in advance, and a remarkable head start has thus been granted by providing the
true number of segments as an input. In some applications such a number might
be known a priori; consider, for example, X-ray or magnetic resonance medical
images, where the number of segments can be set based on anatomical knowledge.
However, if the purpose of image segmentation were to attempt a first
automatization of the diagnostic process, setting the number of segments to
its ‘normal’ value would prevent detection of fractures or anomalies, thus
going in the opposite direction. A further feature of $K$-means is its
complete noncontextuality; since distances from the cluster centroids are
computed on the basis of the color only, disconnected segments might be
assigned to the same cluster just for sharing a similar color, disregarding
the adjacency structure. Depending on the application, this characteristic may
or may not be sensible. For instance, in Figure 8, assigning head and body to
the same segment is particularly consistent. On the other hand, complete
noncontextuality may lead to the identification of meaningless clusters, as
happens for Figure 7, whose segmentation by $K$-means results in a pixelated
image.
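The noncontextuality is easy to demonstrate: clustering the pixel colors alone assigns two disconnected bright regions to the same cluster. A minimal sketch on synthetic data (all names and values hypothetical):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
image = rng.normal(0.2, 0.03, (64, 64))
image[5:20, 5:20] += 0.6     # two disconnected bright squares
image[40:60, 40:60] += 0.6   # sharing the same color shade

# K-means sees only the color of each pixel, not its position:
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    image.reshape(-1, 1)).reshape(image.shape)

# Both bright squares end up in the same cluster despite being disconnected.
print(labels[10, 10] == labels[50, 50])  # True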
The Sobel algorithm generally recognizes the edges between different segments
in a very detailed way. This is particularly evident not only in simple
examples with neat contours, like Bart Simpson (Figure 8), but also in
challenging images like the fish (Figure 9). On the other hand, the detected
edges can be even too detailed (Figure 7), with a risk of not identifying the
salient regions of the image. Similarly to thresholding, it fails when there
is little change in the color shades between adjacent segments, as occurs with
Figure 6. Furthermore, the edges do not form closed boundaries, thus possibly
requiring the application of some further algorithm to close the contours and
define the segments.
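A minimal sketch of Sobel edge detection on synthetic data; the thresholding of the gradient magnitude below is an illustrative choice, not the benchmark's exact rule:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
image = rng.normal(0.3, 0.02, (64, 64))
image[16:48, 16:48] += 0.5   # one high-contrast segment

# Gradient magnitude from the two Sobel derivatives.
gx = ndimage.sobel(image, axis=1)
gy = ndimage.sobel(image, axis=0)
magnitude = np.hypot(gx, gy)

# Edge pixels: large gradient; these need not form closed boundaries.
edges = magnitude > magnitude.mean() + 2 * magnitude.std()
print(f"{edges.sum()} edge pixels detected")
```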
[Figure 7 panels: same layout as Figure 6, with $K=14$ for the $K$-means segmentation.]
Figure 7: Segmentation results. Segments have been assigned arbitrary colors, except for the thresholding segmentation, where segments are either black or white by construction.
[Figure 8 panels: same layout as Figure 6, with $K=5$.]
Figure 8: Segmentation results. Segments have been assigned arbitrary colors, except for the thresholding segmentation, where segments are either black or white by construction.
[Figure 9 panels: same layout as Figure 6, with $K=5$.]
Figure 9: Segmentation results. Segments have been assigned arbitrary colors,
except for the thresholding segmentation, where segments are either black or
white by construction.
The performance of the proposed nonparametric procedure is generally
satisfactory on both multichannel and grayscale images. The procedure is not
challenged by the need to distinguish the contours of assorted segments, in
terms of both size and nonconvex shape, as especially evidenced by Figures 7
and 8. Consistently with the results of the simulation study, it is especially
able to identify segments as connected regions characterized by uniformity of
color, but it also performs well when applied to images featuring shaded
colors. On the downside, the procedure is somewhat prone to perceiving color
differences even when they are not immediately distinguishable to the unaided
eye, thus oversegmenting the image. As the method mainly hinges on the
density, which is built on the image colors, it is in principle framed within
the class of noncontextual algorithms. However, it takes in some information
about the spatial relationship between the pixels, since each segment is, by
construction, a (high-density) connected set which is disconnected from the
other segments. This characteristic prevents pixelated segmentations like
those of $K$-means but, on the other hand, does not allow the identification
of single segments formed by disconnected regions sharing the same color (see
Figure 8, where Bart’s body and head are classified as different segments).
Note that most of the segmented images include a number of black-colored
regions, corresponding to unallocated pixels, typically located at the borders
of the segments. These are associated with the low-density areas evaluated in
the second step of the segmentation procedure. As discussed at the end of
Section 3.1, in those cases none of the pixels already assigned to the segment
presenting maximum density (3) is adjacent to the ones under evaluation,
_i.e._, their color is not similar to the color of any adjacent segment. In
the analysis, these pixels are left unallocated, to highlight the low degree
of confidence in their classification. Related to this aspect, and depending
on subject-matter considerations, the procedure allows the option of not
allocating pixels that do not belong to the cluster cores. The values of
$\hat{f}_{m}$ in (3), suitably normalized, provide a degree of confidence in
the allocation, in the guise of fuzzy clustering schemes. This is especially
useful in all the images where colors are homogeneous and segments are well
separated, as unallocated pixels mostly identify the boundaries of the
segments (first bottom panel of Figures 6 to 9).
With regard to the cluster tree (second bottom panel of Figures 6 to 9), it
works effectively with multichannel images in establishing a hierarchy of
cluster merging which can be associated with different levels of resolution.
In general, when scanning the density for varying $\lambda$, two segments
appear disconnected because they are separated by some pixels whose color has
lower density. At a lower level of $\lambda$, these latter pixels also have
density above the threshold, and merging can occur due to the spatial
connectedness of all the involved pixels. In the Bart image, for example,
clusters that are kept separated due to small color differences (see the face
and the neck) are the first to be aggregated through the cluster tree.
Similarly, in the fish image (Figure 9), at some high density level the
different segments composing the sea are aggregated. Similar conclusions may
be drawn for grayscale images. In the Einstein image, for instance, the
highest-density aggregations of the tree branches entail merging the segments
of the background. Yet, establishing a meaningful hierarchy of the segments is
less easy with grayscale images, where even the human eye is challenged to
aggregate clusters on the basis of the color only, without resorting to
subject-matter considerations.
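The level-scanning behind the cluster tree can be illustrated in a few lines: counting the connected components of the super-level set $\\{\hat{f}\geq\lambda\\}$ as $\lambda$ decreases shows two high-density regions merging across the valley (a synthetic one-dimensional density is used here):

```python
import numpy as np
from scipy import ndimage

x = np.linspace(-3, 3, 200)
# A toy one-dimensional density with two modes separated by a shallow valley.
density = np.exp(-(x - 1.2) ** 2) + np.exp(-(x + 1.2) ** 2)

# Scan the level lambda: connected components of {density >= lambda}.
for lam in (0.9, 0.6, 0.4):
    _, n_regions = ndimage.label(density >= lam)
    print(f"lambda = {lam}: {n_regions} high-density region(s)")
# Two regions at lambda = 0.9 and 0.6; they merge into one at lambda = 0.4.
```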
In general, the density function proves an effective tool to identify the
main features of the images, and density contours work well as edge detectors
of the segments when $h=h_{N}$. See the third bottom panel of Figures 6 to 9.
Additionally, the segmentation is quite robust to variations of $h$. This is
especially true for simple images with neat contours, such as the grayscale
rows and Bart Simpson (IV and V bottom panels of Figures 6 and 8), where
comparable segmentations are produced over the range of considered values of
$h$. As expected, more challenging images tend to be oversegmented for small
$h$, as seen, for instance, in the background of the Einstein and fish images
(IV bottom panel of Figures 7 and 9). A large value of $h$, on the other hand,
smoothes the density and results in segment aggregation, as seen in the last
bottom panel of Figures 7 and 9. This can be seen as a way to reduce the risk
of oversegmentation.
## 5 Concluding remarks
Image segmentation is a complicated task whose implementation cannot, in
general, leave aside subject-matter considerations. All this considered, the
proposed procedure is framed halfway between contextual and noncontextual
segmentation algorithms, and may then be applied to a variety of situations.
It can either be applied fully automatically or be richly customized,
depending on the goals of the segmentation. It is provided with some useful
tools that may integrate the output of the segmentation, such as an estimate
of the density of the pixels, which may be used to determine the degree of
confidence in the segment allocation as well as for edge detection, and the
cluster tree, which allows for displaying different levels of resolution of
the segmentation itself.
## Supporting information
Some further examples are available in the Supplementary Material, as part of
the online article.
## Acknowledgments
I wish to thank Fabio Barbaro, who first approached the embryonic ideas of
this work when drafting his degree thesis.
Generalized Tilings with Height Functions
Olivier Bodini 111LIP, École Normale Supérieure de Lyon, 46, Allée d’Italie,
69364 Lyon Cedex 05, France<EMAIL_ADDRESS>and Matthieu Latapy
222LIAFA, Université Paris 7, 2, place Jussieu, 75005 Paris, France.
<EMAIL_ADDRESS>
“Height functions are cool!”
Cris Moore, DM-CCG’01 talk [LMN01].
Abstract: In this paper, we introduce a generalization of a class of tilings
which appear in the literature: the tilings over which a height function can
be defined (for example, the famous tilings of polyominoes with dominoes). We
show that many properties of these tilings can be seen as the consequences of
properties of the generalized tilings we introduce. In particular, we show
that any tiling problem which can be modelled in our generalized framework
has the following properties: the tilability of a region can be constructively
decided in polynomial time, the number of connected components in the
undirected flip-accessibility graph can be determined, and the directed flip-
accessibility graph induces a distributive lattice structure. Finally, we give
a few examples of known tiling problems which can be viewed as particular
cases of the new notions we introduce.
Keywords: Tilings, Height Functions, Tilability, Distributive Lattices, Random
Sampling, Potentials, Flows.
## 1 Preliminaries
Given a finite set of elementary shapes, called _tiles_ , a _tiling_ of a
given region is a set of translated tiles such that the union of the tiles
covers exactly the region, and such that there is no overlapping between any
tiles. See for example Figure 1 for a tiling of a polyomino (set of squares on
a two-dimensional grid) with dominoes ($1\times 2$ and $2\times 1$
rectangles). Tilings are widely used in physics to model natural objects
and phenomena. For example, quasicrystals are modelled by Penrose tilings
[CF96] and dimers on a lattice are modelled by domino tilings [Ken00].
Tilings appeared in computer science with the famous undecidability of the
question of whether the plane is tilable or not using a given finite set of
tiles [Ber66]. Since then, many studies appeared concerning these objects,
which are also strongly related to many important combinatorial problems
[Lat00].
Figure 1: From left to right: the two possible tiles (called _dominoes_), a
polyomino (_i.e._ a set of squares) to tile, and a possible tiling of the
polyomino with dominoes.
A local transformation is often defined over tilings. This transformation,
called _flip_, is a local rearrangement of some tiles which makes it possible
to obtain a new tiling from a given one. One then defines the (undirected) _flip-
accessibility graph_ of the tilings of a region $R$, denoted by
$\overline{A_{R}}$, as follows: the vertices of $\overline{A_{R}}$ are all the
tilings of $R$, and $\\{t,t^{\prime}\\}$ is an (undirected) edge of
$\overline{A_{R}}$ if and only if there is a flip between $t$ and
$t^{\prime}$. See Figure 2 for an example. The flip notion is a key element
for the enumeration of the tilings of a given region, and for many
algorithmical questions. For example, we will see in the following that the
structure of $\overline{A_{R}}$ may give a way to sample randomly a tiling of
$R$, which is crucial for physicists. This notion is also a key element to
study the entropy of the physical object [LR93], and to examine some of its
properties like frozen areas, weaknesses, and others [JPS01].
Figure 2: From left to right: the flip operation over dominoes, and two
examples of tilings which can be obtained from the one shown in Figure 1 by
one flip. In these tilings, we shaded the tiles which moved during the flip.
On some classes of tilings which can be drawn on a regular grid, it is
possible to define a _height function_ which associates an integer to any node
of the grid (it is called the _height_ of the point). For example, one can
define such a function over domino tilings as follows. As already noticed, a
domino tiling can be drawn on a two dimensional square grid. We can draw the
squares of the grid in black and white like on a chessboard. Let us consider a
polyomino $P$ and a domino tiling $T$ of $P$, and let us distinguish a
particular point $p$ on the boundary of $P$, say the one with the smallest
coordinates. We say that $p$ is of height $0$, and that the height of any
other point $p^{\prime}$ of $P$ is computed as follows: initialize a counter
to zero, and go from $p$ to $p^{\prime}$ using a path composed only of edges
of dominoes in $T$, increasing the counter when the square on the right is
black and decreasing it when the square is white. The height of $p^{\prime}$
is the value of the counter when one reaches $p^{\prime}$. One can prove that
this definition is consistent and can be used as the height function for
domino tilings [Thu90]. See Figure 3 for an example.
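A minimal Python sketch of this counting rule, for a $2\times 2$ polyomino tiled by two horizontal dominoes; the chessboard coloring convention chosen here is an arbitrary assumption, and any fixed convention yields a consistent height function:

```python
from collections import deque

# A 2x2 polyomino tiled by two horizontal dominoes. Squares are indexed by
# their lower-left corner; (hypothetical convention) square (i, j) is black
# when i + j is even.
squares = {(0, 0), (1, 0), (0, 1), (1, 1)}
dominoes = [((0, 0), (1, 0)), ((0, 1), (1, 1))]  # pairs of covered squares

def internal_edge(d):
    """The grid edge hidden inside a domino (its two endpoint vertices)."""
    (i, j), (k, l) = d
    if k == i + 1:                       # horizontal domino
        return ((i + 1, j), (i + 1, j + 1))
    return ((i, j + 1), (i + 1, j + 1))  # vertical domino

hidden = {frozenset(internal_edge(d)) for d in dominoes}

def square_edges(i, j):
    c = [(i, j), (i + 1, j), (i + 1, j + 1), (i, j + 1)]
    return [frozenset((c[a], c[(a + 1) % 4])) for a in range(4)]

# Edges of dominoes in T: all square edges minus the hidden internal ones.
walkable = {e for s in squares for e in square_edges(*s)} - hidden

def right_square(u, v):
    """Square on the right when walking the edge u -> v."""
    (x, y), (dx, dy) = u, (v[0] - u[0], v[1] - u[1])
    return {(1, 0): (x, y - 1), (0, 1): (x, y),
            (-1, 0): (x - 1, y), (0, -1): (x - 1, y - 1)}[(dx, dy)]

def is_black(s):
    return (s[0] + s[1]) % 2 == 0

# Breadth-first search from the origin, applying the +1/-1 counting rule.
height = {(0, 0): 0}
queue = deque([(0, 0)])
while queue:
    u = queue.popleft()
    for dx, dy in ((1, 0), (0, 1), (-1, 0), (0, -1)):
        v = (u[0] + dx, u[1] + dy)
        if frozenset((u, v)) in walkable and v not in height:
            r = right_square(u, v)
            height[v] = height[u] + (1 if is_black(r) else -1)
            queue.append(v)

print(height)
```

For a valid tiling the breadth-first search never assigns two conflicting values to the same vertex, which is exactly the consistency property stated above.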
Figure 3: The directed flip-accessibility graph of the tilings of a polyomino
by dominoes. The height of each point of the polyomino is shown for each
tiling. The set of all the tilings of this polyomino is ordered by the flip
relation directed with respect to the height functions.
These height functions make it possible to define $A_{R}$, the _directed_
flip-accessibility graph of the tilings of $R$: the vertices of $A_{R}$ are
the tilings of $R$ and there is a directed edge $(t,t^{\prime})$ if and only
if $t$ can be transformed into $t^{\prime}$ by a flip which decreases the sum
of the heights of all the points. See Figure 3 for an example with domino
tilings. The generalized tilings we introduce in this paper are based on these
height functions, and most of our results are induced by them.
These notions of height functions are close to classical notions of flow
theory in graphs. Let $G=(V,E)$ be a directed graph. A flow on $G$ is a map
from $E$ into $\mathbb{C}$ (actually, we will only use flows with values in
$\mathbb{Z}$). Given two vertices $s$ and $s^{\prime}$ of $G$, a travel from
$s$ to $s^{\prime}$ is a set of edges of $G$ such that, if one forgets their
orientations, then one obtains a path from $s$ to $s^{\prime}$. Given a flow
$C$, the flux of $C$ on the travel $T$ is
$F_{T}(C)\ =\ \sum_{e\in T^{+}}C(e)\ -\ \sum_{e\in T^{-}}C(e)$
where $T^{+}$ is the set of edges of $T$ which are traveled in their own
direction when one goes from $s$ to $s^{\prime}$, and $T^{-}$ is the set of
edges traveled in the reverse direction. One can easily notice that the
flux is additive by concatenation of travels: if $T_{1}$ and $T_{2}$ are two
travels such that the ending point of $T_{1}$ is equal to the starting point
of $T_{2}$, then $F_{T_{1}\cdot T_{2}}(C)\ =\ F_{T_{1}}(C)+F_{T_{2}}(C)$. See
[Ahu93] for more details about flow theory in graphs.
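For concreteness, a small Python sketch of the flux of a flow along a travel (edge and vertex names hypothetical); turning around a 3-cycle whose unique negative edge carries the value $1-k=-2$ yields flux $0$, a fact exploited when tilings are encoded as flows in the next section:

```python
def flux(flow, travel):
    """Flux F_T(C) of a flow C along a travel T.

    `flow` maps directed edges (u, v) to values; `travel` lists the vertices
    visited in order, ignoring edge orientations.
    """
    total = 0
    for u, v in zip(travel, travel[1:]):
        if (u, v) in flow:         # edge traveled in its own direction (T+)
            total += flow[(u, v)]
        else:                      # stored as (v, u): traveled reversed (T-)
            total -= flow[(v, u)]
    return total

# Turning around a 3-cycle (k = 3) whose tiling edge carries 1 - k = -2:
flow = {("a", "b"): 1, ("b", "c"): 1, ("c", "a"): -2}
print(flux(flow, ["a", "b", "c", "a"]))  # 0
```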
Since there is no circuit in the graph $A_{R}$ (there exists no nonempty
sequence of flips which transforms a tiling into itself), it induces an order
relation over all the tilings of $R$: $t\leq t^{\prime}$ if and only if
$t^{\prime}$ can be obtained from $t$ by a sequence of (directed) flips. In
Section 3, we will study $A_{R}$ under the order theory point of view, and we
will meet some special classes of orders, which we introduce now. A lattice is
an order $L$ such that any two elements $x$ and $y$ of $L$ have a greatest
lower bound, called the _infimum_ of $x$ and $y$ and denoted by $x\wedge y$,
and a least upper bound, called the _supremum_ of $x$ and $y$ and denoted
by $x\vee y$. The infimum of $x$ and $y$ is nothing but the greatest element
among the ones which are lower than both $x$ and $y$. The supremum is defined
dually. A lattice $L$ is _distributive_ if for all $x$, $y$ and $z$ in $L$,
$x\vee(y\wedge z)=(x\vee y)\wedge(x\vee z)$ and $x\wedge(y\vee z)=(x\wedge
y)\vee(x\wedge z)$. For example, it is known that the flip-accessibility graph
of the domino tilings of a polyomino without holes is always a distributive
lattice [Rem99b]. Therefore, this is the case of the flip-accessibility graph
shown in Figure 3 (notice that the maximal element of the order is at the
bottom, and the minimal one at the top of the diagram since we used the
discrete dynamical models convention: the flips go from top to bottom).
Lattices (and especially distributive lattices) are strongly structured sets.
Their study is an important part of order theory, and many results about them
exist. In particular, various codings and algorithms are known about lattices
and distributive lattices. For example, there exists a generic algorithm to
sample randomly an element of any distributive lattice [Pro98]. For more
details about orders and lattices, we refer to [DP90].
Finally, let us introduce a useful notation about graphs. Given a directed
graph $G=(V,E)$, the undirected graph
$\overline{G}=(\overline{V},\overline{E})$ is the graph obtained from $G$ by
removing the orientations of the edges. In other words, $\overline{V}=V$, and
$\overline{E}$ is the set of undirected edges $\\{v,v^{\prime}\\}$ such that
$(v,v^{\prime})\in E$. We will also call $\overline{G}$ the _undirected
version_ of $G$. Notice that this is consistent with our definitions of
$A_{R}$ and $\overline{A_{R}}$.
In this paper, we introduce a generalization of tilings on which a height
function can be defined, and show how some known results may be understood in
this more general context. All along this paper, like we did in the present
section, we will use the tilings with dominoes as a reference to illustrate
our definitions and results. We used this unique example because it is very
famous and simple, and permits to give clear figures. We emphasize however on
the fact that our definitions and results are much more general, as explained
in the last section of the paper.
## 2 Generalized tilings.
In this section, we give all the definitions of the generalized notions we
introduce, starting from the objects we tile to the notions of tilings, height
functions, and flips. The first definitions are very general, therefore we
will only consider some classes of the obtained objects, in order to make the
more specific notions (mainly height functions and flips) relevant in this
context. However, the general objects introduced may be useful in other cases.
Let $G$ be a simple directed graph, that is, a graph with no multiple edges,
no loops, and such that if $(v,v^{\prime})$ is an edge then $(v^{\prime},v)$
cannot be an edge. We consider a set $\Theta$ of elementary circuits of $G$,
which we will
call cells. Then, a polycell is any set of cells in $\Theta$. Given a polycell
$P$, we call the edges of cells in $P$ the _edges of $P$_, and their vertices
the _vertices of $P$_. Given an integer $k$, a polycell $P$ is $k$-regular if
each cell of $P$ is a circuit of length $k$.
Given a polycell $P$, the _boundary_ of $P$, denoted by $\partial P$, is a
(arbitrarily) distinguished set of edges of $P$. We say that a vertex of $P$
is on the boundary of $P$ if it is the extremity of an edge in $\partial P$. A
polycell $P$ is full if the undirected boundary $\overline{\partial P}$ is
connected.
Given an edge $e$ of $P$ which is not in $\partial P$, we call the set of all
the cells in $P$ which have $e$ in common a tile. We will always suppose that,
given any set of cells of $P$, they have at most one edge in common. A
_tiling_ $Q$ of a polycell $P$ is then a partition of the set of cells of $P$
into tiles. A polycell $P$ which admits at least one tiling is tilable.
Notice that, if one considers a tiling $Q$, then one has a natural bijection
$\pi$ between the tiles of $Q$ and a set of edges of $G$: if $t$ is a tile in
$Q$, then $\pi(t)$ is nothing but the edge which is in each of the cells which
define $t$ (recall that we have made the assumption that this edge is unique).
The edges in $\pi(Q)=\\{\pi(t),\ t\in Q\\}$ are called the tiling edges of
$Q$. See Figure 4 and Figure 5 for some examples. Notice that if we
distinguish exactly one edge of each cell of a polycell $P$, then the
distinguished edges can be viewed as the tiling edges of $P$. Indeed, each
edge induces a tile (the set of cells which have this edge in common), and
each cell is in exactly one tile.
Figure 4: From left to right: a polycell $P$ (the boundary $\partial P$ is
composed of all the edges except the dotted ones), the three tiles of $P$, and
a tiling of $P$ represented by its tiling edges (the dotted edges).
Figure 5: Left: a $4$-regular polycell $P$, the boundary of which is composed
of those edges which belong to only one cell. Right: a tiling of $P$
represented by its tiling edges (the dotted edges). Notice that this figure is
very close to Figure 1.
Let $P$ be a $k$-regular tilable polycell and $Q$ be a tiling of $P$. We
associate to $Q$ a flow $C_{Q}$ on $\Theta$ (seen as a graph):
$C_{Q}(e)=\begin{cases}1-k&\text{if the edge $e$ is a tiling edge of $Q$,}\\ 1&\text{otherwise.}\end{cases}$
For each cell $c$, we define $T_{c}$ as the travel which contains exactly the
edges in $c$ (in other words, it consists in turning around $c$). Notice that
the flux of $C_{Q}$ on the travel $T_{c}$ is always null: $F_{T_{c}}(C_{Q})=0$
since each cell contains exactly a tiling edge, valued $1-k$, and $k-1$ other
edges, valued $1$. Moreover, for each edge $e\in\partial P$, we have
$C_{Q}(e)=1$.
Let us consider a polycell $P$ and a flow $C$ on the edges of $P$ such that
$C(e)=1$ for every edge $e$ in $\partial P$. If $F_{T}(C)=0$ for every closed
travel $T$ (_i.e._, a cycle when one forgets the orientation of each edge) on
the boundary of $P$, then we say that $P$ has a balanced boundary. More
specifically, if $F_{T}(C)=0$ for every closed travel $T$ in $P$ (not only on
the boundary), then the flow $C$ is called a tension.
Finally, a polycell $P$ is contractible if it satisfies the two following
properties:
* •
$P$ has a balanced boundary.
* •
every flow $C$ with $C(e)=1$ on $\partial P$ is a tension if and only if $F_{T_{c}}(C)=0$ for every cell $c$.
Notice that if $P$ is a contractible $k$-regular polycell and $Q$ is a tiling
of $P$, then the flow $C_{Q}$ is a tension.
Now, if we (arbitrarily) distinguish a vertex $\nu$ on the boundary of $P$, we
can associate to the tension $C_{Q}$ a potential $\varphi_{Q}$, defined over
the vertices of $P$:
* •
$\varphi_{Q}(\nu)=0$.
* •
for all vertices $x$ and $y$ of $P$,
$\varphi_{Q}(y)-\varphi_{Q}(x)=F_{T_{x,y}}(C_{Q})$ where $T_{x,y}$ is a travel
between $x$ and $y$.
The distinguished vertex is needed since otherwise $\varphi_{Q}$ would only be
defined up to an additive constant, but one can choose any vertex on the boundary.
this potential can be viewed as a _height function_ associated to $Q$, and we
will see that it indeed plays this role in the following. Therefore, we will
call the potential $\varphi_{Q}$ the _height function_ of $Q$. See Figure 6
for an example.
Figure 6: From left to right: a tiling $Q$ of a polycell (represented by its
tiling edges, the dotted ones), the tension $C_{Q}$ and the height function
(or potential) $\varphi_{Q}$ it induces. Again, this figure may be compared to
Figure 3 (topmost tiling).
We now have all the main notions we need about tilings of polycells, including
height functions, except the notion of flips. In order to introduce it, we
need to prove the following:
###### Theorem 2.1.
Let $P$ be a $k$-regular contractible polycell. There is a bijection between
the tilings of $P$ and the tensions $C$ on $P$ which verify:
* •
for every edge $e$ in $\partial P$, $C(e)=1$,
* •
and for every edge $e$ of $P$, $C(e)\in\\{1-k,1\\}$.
###### Proof.
For every tiling $Q$ of $P$, we have defined above a flow $C_{Q}$ which verifies
the property in the claim, and such that for all cell $c$,
$F_{T_{c}}(C_{Q})=0$. Since $P$ is contractible, this last point implies that
$C_{Q}$ is a tension. Conversely, let us consider a tension $C$ which
satisfies the hypotheses. Since each cell is of length $k$, and since
$C(e)\in\\{1-k,1\\}$, the fact that $F_{T_{c}}(C)=0$ implies that each cell
has exactly one negative edge. These negative edges can be considered as the
tiling edges of a tiling of $P$, which ends the proof. ∎
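A sketch of the converse direction of this proof, extracting the tiling edges from a $\\{1-k,1\\}$-valued tension (data structures hypothetical):

```python
def tiling_edges_from_tension(cells, tension, k):
    """Given a k-regular polycell (cells = lists of directed edges) and a
    tension with values in {1 - k, 1}, return the tiling edges: the unique
    negative edge of each cell (as in Theorem 2.1)."""
    edges = set()
    for cell in cells:
        negative = [e for e in cell if tension[e] == 1 - k]
        assert len(negative) == 1, "each cell must contain one tiling edge"
        edges.add(negative[0])
    return edges

# The 3-cycle example from above, seen as a single cell with k = 3.
cells = [[("a", "b"), ("b", "c"), ("c", "a")]]
tension = {("a", "b"): 1, ("b", "c"): 1, ("c", "a"): -2}
print(tiling_edges_from_tension(cells, tension, k=3))  # {('c', 'a')}
```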
Given a $k$-regular contractible polycell $P$ defined over a graph $G$, this
theorem allows us to make no distinction between a tiling $Q$ and the
associated tension $C_{Q}$. This makes it possible to define the notion of
flip as follows. Suppose there is a vertex $x$ in $P$ which is not on the
boundary and such that its height, with respect to the height function of $Q$,
is greater than the height of each of its neighbors in $\overline{G}$. We will
call such a vertex a _maximal_ vertex. The neighbors of $x$ in $\overline{G}$
have a smaller height than $x$, therefore the outgoing edges of $x$ in $G$ are
tiling edges of $Q$ and the incoming edges of $x$ in $G$ are not. Let us
consider the function $C_{Q^{\prime}}$ defined as follows:
$C_{Q^{\prime}}(e)=\begin{cases}1-k&\text{if $e$ is an outgoing edge of $x$,}\\ 1&\text{if $e$ is an incoming edge of $x$,}\\ C_{Q}(e)&\text{otherwise.}\end{cases}$
Each cell $c$ which contains $x$ contains exactly one outgoing edge of $x$ and
one incoming edge of $x$, therefore we still have
$F_{T_{c}}(C_{Q^{\prime}})=0$. Therefore, $C_{Q^{\prime}}$ is a tension, and
so it induces from Theorem 2.1 a tiling $Q^{\prime}$. We say that $Q^{\prime}$
is obtained from $Q$ by a _flip around $x$_, or simply by a _flip_. Notice
that $Q^{\prime}$ can also be defined as the tiling associated to the height
function obtained from the one of $Q$ by decreasing the height of $x$ by $k$,
and without changing anything else. This corresponds to what happens with
classical tilings (see for example [Rem99b]). See Figure 7 for an example.
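A compact sketch of the flip seen on height functions (the adjacency data is hypothetical; the rule is the one just described: decrease the height of a maximal interior vertex by $k$):

```python
def flip(height, neighbors, boundary, k):
    """Apply one flip: find an interior vertex whose height exceeds all of
    its neighbors' heights and decrease it by k; return None if none applies."""
    for v, h in height.items():
        if v not in boundary and all(h > height[w] for w in neighbors[v]):
            new_height = dict(height)
            new_height[v] = h - k       # the flip around v
            return v, new_height
    return None

# Hypothetical example: one interior vertex 'x' above its three neighbors.
height = {"x": 4, "a": 3, "b": 1, "c": 3}
neighbors = {"x": ["a", "b", "c"]}
print(flip(height, neighbors, boundary={"a", "b", "c"}, k=4))
# -> ('x', {'x': 0, 'a': 3, 'b': 1, 'c': 3})
```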
Figure 7: A flip which transforms a tiling $Q$ of a polycell $P$ into another
tiling $Q^{\prime}$ of $P$. From left to right, the flip is represented
between the tilings, then between the associated tensions, and finally between
the associated height functions.
We now have all the material needed to define and study $A_{P}$, the
(directed) flip-accessibility graph of the tilings of $P$:
$A_{P}=(V_{P},E_{P})$ is the directed graph where $V_{P}$ is the set of all
the tilings of $P$ and $(Q,Q^{\prime})$ is an edge in $E_{P}$ if $Q$ can be
transformed into $Q^{\prime}$ by a flip. We will also study the undirected
flip-accessibility graph $\overline{A_{P}}$. The properties of these graphs
are crucial for many questions about tilings, like enumeration, generation and
sampling.
## 3 Structure of the flip-accessibility graph.
Let us consider a $k$-regular contractible polycell $P$ and a tiling $Q$ of
$P$. Let $h$ be the maximal value among the heights of all the points with
respect to the height function of $Q$. If $Q$ is such that all the vertices of
height $h$ are on the boundary of $P$, then it is said to be a _maximal
tiling_. For a given $P$, we denote by $\mbox{Tmax}_{P}$ the set of the
maximal tilings of $P$. We will see that these tilings play a particular role
in the graph $A_{P}$. In particular, we will give an explicit relation between
them and the number of connected components of $\overline{A_{P}}$. Recall that
we defined the maximal vertices of $Q$ as the vertices which have a height
greater than the height of each of their neighbors, with respect to the height
function of $Q$ (they are _local_ maxima).
###### Lemma 3.1.
Let $P$ be a $k$-regular tilable contractible polycell ($P$ is not necessarily
full). There exists a maximal tiling $Q$ of $P$.
###### Proof.
Let $V$ be the set of vertices of $P$, and let $Q$ be a tiling of $P$ such
that for every tiling $Q^{\prime}$ of $P$, we have:
$\sum_{x\in V}\varphi_{Q}(x)\leq\sum_{x\in V}\varphi_{Q^{\prime}}(x).$
We will prove that $Q$ is a maximal tiling. Suppose there is a maximal vertex
$x_{m}$ which is not on the boundary. Therefore, one can transform $Q$ into
$Q^{\prime}$ by a flip around $x_{m}$. Then $\sum_{x\in
V}\varphi_{Q^{\prime}}(x)\ =\ \sum_{x\in V}\varphi_{Q}(x)-k$, which is in
contradiction with the hypothesis. ∎
###### Lemma 3.2.
For every tiling $Q$ of a $k$-regular contractible polycell $P$, there exists a
unique tiling in $\mbox{Tmax}_{P}$ reachable from $Q$ by a sequence of flips.
###### Proof.
It is clear that at least one tiling in $\mbox{Tmax}_{P}$ can be reached from
$Q$ by a sequence of flips, since the flip operation decreases the sum of the
heights, and since we know from the proof of Lemma 3.1 that a tiling such that
this sum is minimal is always in $\mbox{Tmax}_{P}$. We now have to prove that
the tiling in $\mbox{Tmax}_{P}$ we obtain does not depend on the order in
which we flip around the successive maximal vertices. Since making a flip
around a maximal vertex $x$ is nothing but decreasing its height by $k$ while
keeping the other values, if we have two maximal vertices $x$ and
$x^{\prime}$, then flipping first around $x$ and then around $x^{\prime}$ is
equivalent to flipping in the converse order. Since the flip operation
strictly decreases the sum of the heights, every flip sequence terminates, and
this local commutation implies that the maximal tiling reached is unique. ∎
###### Lemma 3.3.
Let $P$ be a $k$-regular contractible and tilable polycell. A tiling $Q$ in
$\mbox{Tmax}_{P}$ is totally determined by the values of $\varphi_{Q}$ on
$\partial P$.
###### Proof.
The proof is by induction over the number of cells in $P$. Let $x$ be a
maximal vertex for $\varphi_{Q}$ in $\partial P$. For all outgoing edges $e$
of $x$, $C_{Q}(e)=1-k$ (otherwise $\varphi_{Q}(x)$ would not be maximal).
Therefore, these edges can be considered as tiling edges, and they determine
some tiles of $Q$. Iterating this process, one finally obtains $Q$.
See Figure 8 for an example. ∎
###### Theorem 3.4.
Let $P$ be a $k$-regular contractible and tilable polycell. The number of
connected components in $\overline{A_{P}}$ is equal to the cardinality of
$\mbox{Tmax}_{P}$.
###### Proof.
Immediate from Lemma 3.2. ∎
This theorem is very general and can explain many results which appeared in
previous papers. We obtain, for example, the following corollary, which
generalizes the result stating that any domino tiling of a polyomino can be
transformed into any other one by a sequence of flips [BNRR95].
###### Corollary 3.5.
Let $P$ be a full $k$-regular contractible and tilable polycell. There is a
unique element in $\mbox{Tmax}_{P}$, which implies that $\overline{A_{P}}$ is
connected.
###### Proof.
Since $\overline{\partial P}$ is connected, the heights of the points in
$\partial P$ are totally determined by the orientation of the edges of
$\partial P$ and do not depend on any tiling $Q$. Therefore, from Lemma 3.3,
there is a unique tiling in $\mbox{Tmax}_{P}$. ∎
As a consequence, if $P$ is a full tilable and contractible polycell, the
height of a vertex $x$ on the boundary of $P$ is independent of the considered
tiling. In the case of full polyominoes, this restriction of $\varphi_{Q}$ to
the boundary of $P$ is called height on the boundary [Fou97] and has been
introduced in [Thu90]. Notice that this height on the boundary can be defined
in the more general case where $P$ has a balanced boundary.
Notice also that the proof of Lemma 3.3 gives an algorithm to build the unique
maximal tiling of any $k$-regular contractible and tilable full polycell $P$,
since the height function on the boundary of $P$ can be computed without
knowing any tiling of $P$. See Algorithm 1 and Figure 8. This algorithm gives
in polynomial time a tiling of $P$ if it is tilable. It can also be used to
decide whether $P$ is tilable or not. Therefore, it generalizes the result of
Thurston [Thu90] saying that it can be decided in polynomial time if a given
polyomino is tilable with dominoes.
Input: A full $k$-regular contractible polycell $P$, its boundary $\partial P$
and a distinguished vertex $\nu$ on this boundary.
Output: An array _tension_ of integers indexed by the edges of $P$ and another
one _height_ indexed by the vertices of $P$. The first gives the tension
associated to the maximal tiling, and the second gives its height function.
begin
  $P^{\prime}\leftarrow P$;
  $\mbox{height}[\nu]\leftarrow 0$;
  for each edge $e=(v,v^{\prime})$ in $\partial P^{\prime}$ do
    $\mbox{tension}[e]\leftarrow 1$;
  for each vertex $v$ on the boundary of $P^{\prime}$ do
    compute $\mbox{height}[v]$ using the values in tension;
  repeat
    for each vertex $v$ on the boundary of $P^{\prime}$ which has the minimal
    height among the heights of all the vertices on the boundary do
      for each incoming edge $e$ of $v$ do
        $\mbox{tension}[e]\leftarrow 1-k$;
        for each edge $e^{\prime}\neq e$ in a cell containing $e$ do
          $\mbox{tension}[e^{\prime}]\leftarrow 1$;
    for each edge $e=(v,v^{\prime})$ such that $\mbox{tension}[e]$ has newly
    been computed do
      compute $\mbox{height}[v]$ and $\mbox{height}[v^{\prime}]$ using the
      values in tension;
    remove from $P^{\prime}$ the cells which contain a negative edge;
    compute the boundary of $P^{\prime}$: it is composed of all the vertices
    of $P^{\prime}$ which have a computed height;
  until $P^{\prime}$ is empty;
end
Algorithm 1: Computation of the maximal tiling of a full $k$-regular
contractible polycell.
Figure 8: An example of execution of Algorithm 1. From left to right, we give
the polycell, the result of the computation of the height on the boundary, and
then the results of each iteration of the tile addition and removal process.
In this example, the first iteration of the algorithm gives one vertical tile,
and the second (and last) iteration gives four horizontal tiles.
With these results, we obtained much information concerning a central question
of tilings: the connectivity of the undirected flip-accessibility graph. We
not only gave a condition under which this graph is connected, but also
related the number of its connected components to some
special tilings. We will now deepen the study of the structure induced by the
flip relation by studying the directed flip-accessibility graph, and in
particular the partial order it induces over the tilings: $t\leq t^{\prime}$
if and only if $t^{\prime}$ can be obtained from $t$ by a sequence of
(directed) flips.
###### Lemma 3.6.
Let $Q$ and $Q^{\prime}$ be two tilings in the same connected component of
$A_{P}$ for a given $k$-regular contractible polycell $P$. Let us consider
$x_{m}$ such that
$\left|\varphi_{Q}(x_{m})-\varphi_{Q^{\prime}}(x_{m})\right|$ is maximal in
$\\{\left|\varphi_{Q}(x)-\varphi_{Q^{\prime}}(x)\right|,\ x\mbox{ is a vertex
of $P$}\\}$. Then, one can make a flip around $x_{m}$ from $Q$ or
$Q^{\prime}$.
###### Proof.
We can suppose that $\varphi_{Q^{\prime}}(x_{m})<\varphi_{Q}(x_{m})$
(otherwise we exchange $Q$ and $Q^{\prime}$). We will show that the height
function $\varphi$ defined by $\varphi(x_{m})=\varphi_{Q}(x_{m})-k$ and
$\varphi(x)=\varphi_{Q}(x)$ for all vertex $x\not=x_{m}$ defines a tiling of
$P$ (which is therefore obtained from $Q$ by a flip around $x_{m}$). Let us
consider any circuit which contains $x_{m}$. Therefore, it contains an
incoming edge $(x_{p},x_{m})$ and an outgoing edge $(x_{m},x_{s})$ of $x_{m}$.
We will prove that $\varphi_{Q}(x_{p})=\varphi_{Q}(x_{m})-1$ and
$\varphi_{Q}(x_{s})=\varphi_{Q}(x_{m})-k+1$, which will prove the claim since
it proves that $x_{m}$ is a maximal vertex.
The couple $(\varphi_{Q}(x_{p}),\varphi_{Q}(x_{s}))$ can have three values:
$(\varphi_{Q}(x_{m})-1,\varphi_{Q}(x_{m})+1)$,
$(\varphi_{Q}(x_{m})-1,\varphi_{Q}(x_{m})-k+1)$, or
$(\varphi_{Q}(x_{m})+k-1,\varphi_{Q}(x_{m})+1)$. But, if
$\varphi_{Q}(x_{s})=\varphi_{Q}(x_{m})+1$ then
$\varphi_{Q^{\prime}}(x_{s})=\varphi(x_{m})+1$, and so
$\varphi_{Q^{\prime}}(x_{m})=\varphi(x_{m})+k$, which is a contradiction. If
$\varphi_{Q}(x_{p})=\varphi_{Q}(x_{m})+k-1$ then
$\varphi_{Q^{\prime}}(x_{p})=\varphi_{Q}(x_{m})+k-1$, and so
$\varphi_{Q^{\prime}}(x_{m})>\varphi_{Q}(x_{m})$, which is a contradiction
again. Therefore, $(\varphi_{Q}(x_{p}),\varphi_{Q}(x_{s}))$ must be equal to
$(\varphi_{Q}(x_{m})-1,\varphi_{Q}(x_{m})-k+1)$ for every circuit which
contains $x_{m}$, which is what we needed to prove. ∎
Let us now consider two tilings $Q$ and $Q^{\prime}$ of a $k$-regular
contractible polycell $P$. Let us define
$\max(\varphi_{Q},\varphi_{Q^{\prime}})$ as the height function such that its
value at each point is the maximum of the values of $\varphi_{Q}$ and
$\varphi_{Q^{\prime}}$ at this point. Let us define
$\min(\varphi_{Q},\varphi_{Q^{\prime}})$ dually. Then, we have the following
result:
###### Lemma 3.7.
Given two tilings $Q$ and $Q^{\prime}$ of a $k$-regular contractible polycell
$P$, $\max(\varphi_{Q},\varphi_{Q^{\prime}})$ and
$\min(\varphi_{Q},\varphi_{Q^{\prime}})$ are the height functions of tilings
of $P$.
###### Proof.
We can see that $\max(\varphi_{Q},\varphi_{Q^{\prime}})$ is the height
function of a tiling of $P$ by iterating Lemma 3.6:
$\sum\limits_{x}\left|\varphi_{Q}(x)-\varphi_{Q^{\prime}}(x)\right|$ can be
decreased until $Q=Q^{\prime}$. The proof for
$\min(\varphi_{Q},\varphi_{Q^{\prime}})$ is symmetric. ∎
###### Theorem 3.8.
If $P$ is a $k$-regular contractible polycell, then $A_{P}$ induces a
distributive lattice structure over the tilings of $P$.
###### Proof.
Given two tilings $Q$ and $Q^{\prime}$ in $A_{P}$, let us define the following
binary operations:
$\varphi_{Q}\wedge\varphi_{Q^{\prime}}=\min(\varphi_{Q},\varphi_{Q^{\prime}})$
and
$\varphi_{Q}\vee\varphi_{Q^{\prime}}=\max(\varphi_{Q},\varphi_{Q^{\prime}})$.
It is clear from the previous results that this defines the infimum and
supremum of $Q$ and $Q^{\prime}$. To show that the obtained lattice is
_distributive_, it now suffices to verify that these two operations distribute
over each other. ∎
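Since $\wedge$ and $\vee$ act pointwise on integer heights, distributivity reduces to a scalar identity over a total order; a short verification (standard, and not spelled out in the original proof):
$\min(a,\max(b,c))\ =\ \max(\min(a,b),\min(a,c))\qquad\mbox{for all }a,b,c\in\mathbb{Z}.$
If $b\leq c$, both sides equal $\min(a,c)$, since $\min(a,b)\leq\min(a,c)$; the case $c\leq b$ is symmetric, and the dual law follows by exchanging $\min$ and $\max$.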
As already discussed, this last theorem gives much information on the
structure of the flip-accessibility graphs of tilings of polycells. It also
makes it possible to use, in the context of tilings, the numerous results
known about distributive lattices, in particular the generic random sampling
algorithm described in [Pro98].
To finish this section, we give another proof of Theorem 3.8 using only
notions from discrete dynamical models. This proof is very simple and has the
advantage of putting two combinatorial objects in a relation which may help
in understanding them. However, the reader not interested in discrete
dynamical models may skip the end of this section.
An Edge Firing Game (EFG) is defined by a connected undirected graph $G$ with
a distinguished vertex $\nu$, and an orientation $O$ of $G$. In other words,
$\overline{O}=G$. We then consider the set of obtainable orientations when we
iterate the following rule: if a vertex $v\not=\nu$ only has incoming edges
(it is a _sink_) then one can reverse all these edges. This set of
orientations is ordered by the reflexive and transitive closure of the
evolution rule, and it is proved in [Pro93] that it is a distributive lattice.
We will show that the tilings of any $k$-regular contractible polycell $P$
can be encoded as the orientations of such a game, which gives another proof
of Theorem 3.8.
Let us consider a $k$-regular contractible polycell $P$ defined over a graph
$G$, and $G^{\prime}$ the sub-graph of $G$ which contains exactly the vertices
and edges in $P$. Let us now consider the height function $\varphi_{Q}$ of a
tiling $Q$ of $P$, and let us define the orientation $\pi(Q)$ of
$\overline{G^{\prime}}$ as follows: each undirected edge $\\{v,v^{\prime}\\}$
in $\overline{G^{\prime}}$ is directed from $v$ to $v^{\prime}$ in $\pi(Q)$ if
$\varphi_{Q}(v^{\prime})>\varphi_{Q}(v)$. Then, the maximal vertices of $Q$
are exactly the ones which have only incoming edges in $\pi(Q)$, and applying
the EFG rule to a vertex of $\pi(Q)$ is clearly equivalent to making a flip
around this vertex in $Q$. Therefore, the configuration space of the EFG is
isomorphic to the flip-accessibility graph $A_{P}$, which proves Theorem 3.8.
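A minimal sketch of the EFG move (graph data hypothetical); by the isomorphism above, firing a sink corresponds exactly to flipping around a maximal vertex:

```python
def efg_step(orientation, nu):
    """One Edge Firing Game move: find a sink v != nu (a vertex with only
    incoming edges) and reverse all of its edges; None if no move applies."""
    vertices = {x for e in orientation for x in e}
    for v in vertices:
        incoming = [e for e in orientation if e[1] == v]
        outgoing = [e for e in orientation if e[0] == v]
        if v != nu and incoming and not outgoing:
            return (orientation - set(incoming)) | {(v, u) for (u, _) in incoming}
    return None

# A directed triangle whose sink is 'c' (edges a->c and b->c, plus a->b).
orientation = {("a", "b"), ("a", "c"), ("b", "c")}
print(efg_step(orientation, nu="a"))
# -> {('a', 'b'), ('c', 'a'), ('c', 'b')} (the sink 'c' fired; set order varies)
```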
## 4 Some applications.
In this section, we study some examples which appear in the literature with
the help of our generalized framework. We show how these classes of tiling
problems can be seen as special cases of $k$-regular contractible polycells
tilings. We therefore obtain as corollaries some known results about these
problems, as well as some new results.
### 4.1 Polycell drawn on the plane.
Let us consider a set of vertices $V$ and a set $\Theta$ of elementary
(undirected) cycles of length $k$, with vertices in $V$, such that any couple
of cycles in $\Theta$ has at most one edge in common. Now let us consider the
undirected graph $G=(V,E)$ such that $e$ is an edge of $G$ if and only if it
is an edge of a cycle in $\Theta$. Moreover, let us restrict ourselves to the
case where $G$ is a planar graph which can be drawn in such a way that no
cycle of $\Theta$ is drawn inside another one. $G$ is 2-dual-colorable if one
can color in black and white each bounded face in such a way that two faces
which have an edge in common have different colors. See for example Figure 9.
Figure 9: Two examples of graphs which satisfy all the properties given in
the text. The leftmost is composed of cycles of length $3$ and has a hole. The
rightmost one is composed of cycles of length $4$.
Figure 10: A tiling of each of the objects shown in Figure 9, obtained using
the polycell formalism.
The fact that $G$ has the properties above, including being 2-dual-colorable,
makes it possible to encode tilings with bifaces (the tiles are two adjacent
faces) as tilings of polycells. This includes tilings with dominoes, and
tilings with calissons. Following Thurston [Thu90], let us define an oriented
version of $G$ as follows: the edges on the boundary of a white cycle are
directed so as to travel the cycle clockwise, and the edges on the boundary of
a black cycle are directed counterclockwise. One can then verify that a
balanced-boundary polycell
defined this way is always contractible. Therefore, our results can be
applied, which generalizes the results of Chaboud [Cha96] and Thurston
[Thu90].
### 4.2 Rhombus tiling in higher dimension.
Let us consider the canonical basis
$\\{\varepsilon_{1},\dots,\varepsilon_{d}\\}$ of the $d$-dimensional affine
space ${\mathbb{R}}^{d}$, and let us define
$\varepsilon_{d+1}=\sum_{i=1}^{d}\varepsilon_{i}$. For all $\alpha$ between
$1$ and $d+1$, let us define the zonotope $Z_{d,d}^{\alpha}$ as the following
set of points:
$Z_{d,d}^{\alpha}\ =\ \\{x\in{\mathbb{R}}^{d}\mbox{ such that }x=\sum_{i=1,i\not=\alpha}^{d+1}\lambda_{i}\varepsilon_{i},\mbox{ with }-1\leq\lambda_{i}\leq 1\\}.$
In other words, $Z_{d,d}^{\alpha}$ is the zonotope defined by all the
vectors $\varepsilon_{i}$ except the $\alpha$-th. We are interested in the
tilability of a given solid $S$ when the set of allowed tiles is
$\\{Z_{d,d}^{\alpha},\ 1\leq\alpha\leq d+1\\}$. These tilings are called
_codimension one rhombus tilings_, and they are very important as a physical
model of quasicrystals [DMB97]. If $d=2$, they are nothing but the tilings of
regions of the plane with the three parallelograms which tile a hexagon, which
have been widely studied. See Figure 11 for an example in dimension $2$, and
Figure 13 for an example in dimension $3$.
In order to encode this problem by a problem over polycells, let us consider
the directed graph $G$ with vertices in ${\mathbb{Z}}^{d}$ and such that
$e=(x,y)$ is an edge if and only if $y=x+\varepsilon_{j}$ for an integer $j$
between $1$ and $d$ or $y=x-\varepsilon_{d+1}$. We will call diagonal edges
the edges which correspond to the second case. This graph can be viewed as a
$d$-dimensional directed grid (the direction are given by the order on the
coordinates), to which we add a diagonal edge in the reverse direction, in
each element of the grid. An example in dimension $3$, is given in Figure 12.
Figure 11: If one forgets the orientations and removes the dotted edges, then
the rightmost object is a classical codimension one rhombus tiling of a part
of the plane ($d=2$). From the polycell point of view, the leftmost object
represents the underlying graph $G$, the middle object represents a polycell
$P$ (the boundary of which is the set of the edges which belong to only one
cell), and the rightmost object represents a tiling of $P$ (the dotted edges
are the tiling edges).
Figure 12: The $3$-dimensional grid is obtained by a concatenation of cubes
like this one.
Figure 13: A codimension one rhombus tiling with $d=3$ (first line, rightmost
object). It is composed of four different three-dimensional tiles, and the
first line shows how it can be constructed by adding successive tiles. The
second line shows the position of each tile with respect to the cube.
Each edge is clearly in a one-to-one correspondence with a copy of a
$Z_{d,d}^{\alpha}$ translated by an integer vector: this is the copy on the
$d$-dimensional grid of which it is a diagonal. The set $\Theta$ of the cells
we will consider is the set of all the circuits of length $d+1$ which contain
exactly one diagonal edge. Therefore, each edge belongs to $d!$ cells, and
so the tiles will be themselves composed of $d!$ cells. Given a polycell $P$
defined over $\Theta$, we define $\partial P$ as the set of the edges of $P$
which do not belong to $d!$ circuits of $P$.
First notice that a full polycell defined over $G$ is always contractible.
Therefore, our previous results can be applied, which generalizes some results
presented in [DMB97] and [LM99, LMN01]. We also generalize some results about
the 2-dimensional case, which has been widely studied.
## 5 Conclusion and Perspectives.
In conclusion, we gave in this paper a generalized framework to study the
tiling problems over which a height function can be defined. This includes the
famous tilings of polyominoes with dominoes, as well as various other classes,
like codimension one rhombus tilings, tilings on the torus, on spheres, three-
dimensional tilings, and others we did not detail here. We gave some results
on our generalized tilings which made it possible to obtain a large set of
known results as corollaries, as well as to obtain new results on tiling
problems which appear in the scientific literature. Many other problems may
exist which can be modelled in the general framework we have introduced, and
we hope that this paper will help in understanding them.
Many tiling problems, however, do not lead to the definition of any height
function. The key element to make such a function exist is the presence of a
strong underlying structure (the $k$-regularity of the polycell, for example).
Some important tiling problems (for example tilings of zonotopes) do not have
this property, and so we can not apply our results in this context. Some of
these problems do not have the strong properties we obtained on the tilings of
$k$-regular contractible polycells, but may be included in our framework,
since our basic definitions of polycells and tilings are very general. This
would lead to general results on more complex polycells, for example polycells
which are not $k$-regular.
Acknowledgments: The authors thank Frédéric Chavanon for useful comments on
preliminary versions, which deeply improved the manuscript quality.
Further author information: (Send correspondence to P.B.)
P.B.: E-mail<EMAIL_ADDRESS>
S.L.: E-mail<EMAIL_ADDRESS>
# Physical Reservoir Computing with Origami and its Application to Robotic
Crawling
Priyanka Bhovad Department of Mechanical Engineering, Clemson University,
Clemson, SC, US Suyi Li Department of Mechanical Engineering, Clemson
University, Clemson, SC, US
###### Abstract
A new paradigm called physical reservoir computing has recently emerged, where
the nonlinear dynamics of high-dimensional and fixed physical systems are
harnessed as a computational resource to achieve complex tasks. Via extensive
simulations based on a dynamic truss-frame model, this study shows that an
origami structure can perform as a dynamic reservoir with sufficient computing
power to emulate high-order nonlinear systems, generate stable limit cycles,
and modulate outputs according to dynamic inputs. This study also uncovers the
linkages between the origami reservoir’s physical designs and its computing
power, offering a guideline to optimize the computing performance.
Comprehensive parametric studies show that selecting optimal feedback crease
distribution and fine-tuning the underlying origami folding designs are the
most effective approach to improve computing performance. Furthermore, this
study shows how origami’s physical reservoir computing power can apply to soft
robotic control problems by a case study of earthworm-like peristaltic
crawling without traditional controllers. These results can pave the way for
origami-based robots with embodied mechanical intelligence.
###### keywords:
Physical reservoir computing, Origami, Morphological computation, Soft
robotics, Peristaltic locomotion
## 1 INTRODUCTION
The animal kingdom is an endless source of inspiration for soft robotics [1,
2]. Researchers have constructed compliant robots that can mimic all kinds of
animal motions, like octopus locomotion [3], elephant trunk grasping [4],
insect flying [5], jellyfish and fish swimming [6, 7, 8], as well as snake and
insect crawling [9, 10, 11]. These robots share many similarities with
animals regarding their shape and motion kinematics; however, their underlying
sensing, actuation, and control architectures could be fundamentally
different. Our engineered soft robots typically rely on a centralized
controller (aka. an “electronic brain”) that takes up all computing work to
process sensor information, generate control commands, and make decisions.
This approach often struggles to achieve high actuation speed and control
effectiveness as soft robots exhibit virtually infinite degrees of freedom and
complicated dynamic characteristics. On the other hand, animals have highly
interconnected networks of nerves and muscles that can share the workload with
the brain [12, 13, 14]. The animal body’s morphology is an integral part of
its actuation, control, and ultimately its “brain’s” decision-making process,
leading to far superior efficiency than our engineered soft robots.
Motivated by this disparity, an increasing number of researchers have embraced
soft bodies’ nonlinear dynamics as a computational resource to create an
embodied intelligence and control [15, 16, 17, 18, 19, 20, 21]. As a result, a
new computational paradigm called morphological computation has emerged in
which the physical body of the robot itself takes part in performing low-level
control tasks, such as locomotion coordination and modulation, to simplify the
overall control architecture significantly [15, 16, 18, 17, 22]. The
contributions of body morphology to cognition and control involve three major
categories [20]: (1) Morphology facilitating control: wherein the physical
design enables certain behaviors such as motion sequencing (e.g., passive
dynamic walker [23]). (2) Morphology facilitating perception: wherein the
physical design enables sensing (e.g., the nonuniform distribution of cells in
the compound eyes of the fly [24]). (3) Morphological computation, such as the
_physical reservoir computing_ (PRC), wherein a physical body performs genuine
computations. Among these, physical reservoir computing shows promising
potential because it balances simplicity and versatility in performing
applicable computation with encoding and decoding [20].
Reservoir computing is a computational framework based on artificial recurrent
neural networks (RNNs), which have been used extensively for problems
involving time-series prediction like the stock market and weather
forecasting, robotic motion planning and control, text and speech recognition
[25, 26, 27, 28, 29, 30, 21, 31]. In RNNs, the output of the current time step
depends on the results from the previous time step in addition to the current
input. Since training RNNs involves propagating information both forward and
backward through the network, it is a challenging task. To address this difficulty, Jaeger
introduced the concept of a _fixed_ recurrent neural network as Echo State
Networks (ESNs) [25], and Maass introduced Liquid State Machines (LSMs) [26].
Later, these two concepts merged under the umbrella of reservoir computing
(RC). In RC, the neural network (aka. the “reservoir”) has fixed
interconnections and input weights, and only the linear output readout weights
are trained by simple techniques like linear or ridge regression. These
reservoirs’ dynamics transform the input data stream into a high-dimensional
state space, capturing its non-linearities and time-dependent information for
computation tasks.
More importantly, the reservoir’s fixed nature opens up the possibility of
using physical bodies — such as a random network of nonlinear spring and mass
oscillators [18, 32, 33], tensegrity structures [15, 16, 34, 17], and soft
robotic arms [19, 35, 36] — to conduct computation, hence the paradigm of
Physical Reservoir Computing. These physical systems have been shown to possess
sufficient computational power to achieve complex computing tasks like
emulating other non-linear dynamic systems, pattern generation [34, 17, 18,
32, 19, 21], speech recognition [37], and machine learning [36, 21, 31, 33].
Furthermore, robotic bodies with sufficient nonlinear dynamics can also
perform as physical reservoirs and directly generate locomotion gaits
without using traditional controllers [17, 21, 38, 39, 40].
In this study, we investigate the use of origami as a physical reservoir and
harness its computing power for robotic locomotion generation. Origami is an
ancient art of folding paper into sophisticated and three-dimensional shapes.
Over the past decades, it has evolved into an engineering framework for
constructing deployable structures [41, 42, 43], advanced materials [44, 45,
46, 47, 48, 49], and robotics [50, 51, 52, 53, 54, 55, 56]. Origami has many
appealing advantages for use in robotics. It is compact, easy to fabricate,
and scale-independent (aka. origami robots can be fabricated at different
scales but still follow similar folding principles [57, 50, 58, 59]).
Moreover, the nonlinear mechanics and dynamics induced by folding could
enhance robotic performance [60, 61].
We show that origami’s nonlinear folding dynamics also possess significant
computing power. A mechanical system must exhibit several essential properties
to perform as a reservoir [21]. The first one is high-dimensionality, which
allows the reservoir to gather as much information as possible from the input
data stream, separating its spatio-temporal dependencies and projecting them
onto a high-dimensional state-space. The second one is non-linearity so that
the reservoir acts as a nonlinear filter to map the information from the input
stream. All the computation complexity is associated with this nonlinear
mapping, thus training the linear static readout becomes a straightforward
task. The third one is fading memory (or short-term memory), ensuring that
only the recent input history influences the current output. The fourth one is
the separation property, which allows the reservoir to classify and segregate
different response signals correctly, even with small disturbances or
fluctuations: if two input time series differed in the past, the reservoir
should produce different states at subsequent time points [62]. Our physics-
informed numerical simulations show that origami inherently satisfies these
four requirements
and can complete computation tasks like emulation, pattern generation, and
output modulation.
Moreover, we conduct extensive numerical simulations to uncover the linkage
between origami design and its computing power, providing guidelines to
optimize computing performance. Finally, we demonstrate how to directly embed
reservoir computing in an origami robotic body to generate earthworm-like
peristaltic crawling without using any traditional controllers. This study’s
results could foster a new family of origami-based soft robots that operate
with simple mechatronics, interact with the environment through distributed
sensor and actuator networks, and respond to external disturbances by
modulating their activities.
In what follows, Section 2 details the construction of an origami reservoir,
including the lattice framework used to simulate its nonlinear dynamics.
Section 3 elucidates the origami reservoir’s computing power through various
numerical experiments. Section 4 discusses the parametric analysis that
uncovers the linkages between computing performance and physical design.
Section 5 applies the reservoir computing to an origami robot’s crawling
problem. Finally, Section 6 concludes this paper with a summary and
discussion.
## 2 Constructing The Origami Reservoir
In this study, we construct a physical reservoir using the classical Miura-ori
sheets. It is essentially a periodic tessellation of unit cells, each
consisting of four identical quadrilateral _facets_ with _crease_ lengths $a$
and $b$ and an internal sector angle $\gamma$ (Figure 1(a)) [63, 44]. The folded
geometry of Miura-ori can be fully defined with a dihedral _folding angle_
$\theta$ ($\in{[-\pi/2,\pi/2]}$) between the $x$-$y$ reference plane and its
facets. The reservoir size is defined as $n\times m$, where $n$ and $m$ are
the number of origami _nodes_ (aka. vertices where crease lines meet) in $x$
and $y$-directions, respectively. $N$ is the total number of creases in the
origami reservoir.
### 2.1 Dynamics Modeling of the Origami
To investigate this origami reservoir’s computing capacity, one must first
obtain its time responses under dynamic excitation. To this end, we adopt and
expand the lattice framework approach to simulate its nonlinear dynamics [63,
64, 65]. In this approach, origami creases are represented by pin-jointed
stretchable truss elements with prescribed spring coefficient $K_{s}$. Folding
(or bending) along the crease line is simulated by assigning torsional spring
coefficient $K_{b}$ (Figure 1 (b)). We further triangulate the quadrilateral
facets with additional truss elements to estimate the facet bending with
additional torsional stiffness (typically, $K_{b}$ across the facets is larger
than those along the creases). Therefore, this approach discretizes the
continuous origami sheet into a network of pin-jointed truss elements
connected at the nodes. A typical reservoir consists of an interconnected
network of units governed by nonlinear dynamics, and the origami reservoir, in
this case, consists of a network of nodes with their interconnections defined
by the underlying crease pattern. The corresponding governing equations of
motion, in terms of node #$p$’s displacement ($\mathbf{x}_{p}$) as an example,
are:
$m_{p}\ddot{\mathbf{x}}_{p}^{(j)}=\mathbf{F}_{p}^{(j)},$ (1)
where the superscript “$(j)$” represents the $j^{\text{th}}$ time step in
numerical simulation, and $m_{p}$ is the equivalent nodal mass. Unless noted
otherwise, the mass of the origami sheet is assumed to be equally distributed
to all its nodes. $\mathbf{F}_{p}^{(j)}$ is the summation of internal and
external forces acting on this node in that
$\mathbf{F}_{p}^{(j)}=\sum\mathbf{F}_{s,p}^{(j)}+\sum\mathbf{F}_{b,p}^{(j)}+\mathbf{F}_{d,p}^{(j)}+\mathbf{F}_{a,p}^{(j)}+m_{p}\mathbf{g},$
(2)
where the five terms on the right-hand side are the forces from truss
stretching, crease/facet bending, equivalent damping, external actuation, and
gravity, respectively. The formulations of these forces are detailed below.
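To make the time-marching of Equations (1) and (2) concrete, the following is a minimal sketch in Python/NumPy. The paper’s own simulations use MATLAB’s ode45 (described later in this section); the fixed-step semi-implicit Euler update and the `total_force` callable here are simplifying assumptions for illustration only:

```python
import numpy as np

# Minimal sketch of one explicit time step of Eq. (1), assuming a
# user-supplied `total_force` that assembles the five terms of Eq. (2).
def step(r, v, m, total_force, dt=1e-3):
    """r, v: (n_nodes, 3) positions and velocities at step j;
    m: (n_nodes,) nodal masses; returns the state at step j+1."""
    a = total_force(r, v) / m[:, None]   # Newton's second law per node
    v_next = v + dt * a                  # semi-implicit Euler update
    return r + dt * v_next, v_next

# Example with gravity only (the last term of Eq. (2)):
g = np.array([0.0, 0.0, -9.81])
r, v = np.zeros((4, 3)), np.zeros((4, 3))
m = np.full(4, 0.007)                    # 7 g nodal mass (Table 1)
r, v = step(r, v, m, lambda r, v: m[:, None] * g)
```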
Figure 1: The nonlinear truss-frame approach for simulating the origami
dynamics. (a) The crease pattern of the classical Miura-ori, with a unit cell
highlighted. (b) The rigid-folding kinematics of the Miura-ori. (c) The truss-
frame approach discretizes the Miura-ori unit cell, showing the distribution
of truss elements along the creases and across the facets, as well as the
nodal masses. (d) Detailed kinematics and mechanics set up to analyze the
bending and stretching along the truss #$pq$. Notice that $\mathbf{m}^{(j)}$
and $\mathbf{n}^{(j)}$ are the current surface normal vectors defined by
triangles #$pqr$ and #$pqv$, respectively. (e) The bending of the Miura-ori
sheet under its weight. This simulation serves to validate appropriate
material property assignments.
Truss stretching forces: The truss elements are essentially elastic springs
with axial stretching stiffness ($K_{s}^{(j)}=EA/l^{(j)}$). Here, $EA$ is the
material constant, and $l^{(j)}$ is the truss element’s length at the current
$j^{\text{th}}$ time step. Thus, the axial stiffness is updated at each time
step, accommodating the truss element’s increase in stiffness as it is
compressed and vice versa. The stretching force from a truss connecting
node #$p$ and one of its neighboring nodes #$q$ is
$\mathbf{F}_{s,p}^{(j)}=-K_{s}^{(j)}\left(l_{pq}^{(j)}-l_{pq}^{(0)}\right)\frac{\mathbf{r}_{p}^{(j)}-\mathbf{r}_{q}^{(j)}}{|\mathbf{r}_{p}^{(j)}-\mathbf{r}_{q}^{(j)}|}$
(3)
where $l_{pq}^{(0)}$ is the truss length at its initial resting state.
$\mathbf{r}_{p}^{(j)}$ and $\mathbf{r}_{q}^{(j)}$ are the current position
vectors of these two nodes, respectively. To calculate the total truss
stretching forces acting on node #$p$, similar equations apply to all of its
neighboring nodes through trusses (e.g., nodes $q$, $r$, $s$, $t$, $u$, and $v$
in Figure 1(c)).
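As an illustration, Equation (3) translates directly into code. The sketch below (Python/NumPy, not the paper’s MATLAB implementation) assumes a material constant `EA` chosen so that $K_{s}\approx 100$ N/m at the 16 mm resting length of Table 1:

```python
import numpy as np

def stretch_force(r_p, r_q, l0, EA):
    """Stretching force on node p from truss p-q, Eq. (3).
    The axial stiffness K_s = EA / l is re-evaluated from the
    current truss length l at every time step."""
    d = r_p - r_q
    l = np.linalg.norm(d)                 # current truss length l^(j)
    K_s = EA / l                          # current axial stiffness
    return -K_s * (l - l0) * d / l        # restoring force along the truss

# Example: a truss stretched 1 mm beyond a 16 mm resting length.
F = stretch_force(np.array([0.017, 0.0, 0.0]), np.zeros(3), l0=0.016, EA=1.6)
```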
Crease/facet bending forces: The crease folding and facet bending are
simulated with torsional spring coefficient ($K_{b}^{(j)}=k_{b}l^{(j)}$),
where $k_{b}$ is torsional stiffness _per unit length_. Here, we adopt the
formulation developed by Liu and Paulino [64]. For example, the force acting
on node #$p$ due to the crease folding along the truss between #$p$ and #$q$
is:
$\mathbf{F}_{b,p}^{(j)}=-K_{b}^{(j)}(\varphi_{pq}^{(j)}-\varphi_{pq}^{(0)})\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{p}^{(j)}}$
(4)
where $\varphi_{pq}^{(j)}$ is the current dihedral angle along truss $pq$
(aka. the dihedral angle between the triangles #$pqr$ and #$pqv$ in Figure 1(d)), and
$\varphi_{pq}^{(0)}$ is the corresponding initial value. $\varphi_{pq}^{(j)}$
can be calculated as
$\displaystyle\varphi_{pq}^{(j)}=\eta\arccos\left(\frac{\mathbf{m}^{(j)}\cdot\mathbf{n}^{(j)}}{|\mathbf{m}^{(j)}||\mathbf{n}^{(j)}|}\right)\ \text{modulo}\ 2\pi$ (5)
$\displaystyle\eta=\begin{cases}\text{sign}\left(\mathbf{m}^{(j)}\cdot\mathbf{r}_{pv}^{(j)}\right),&\mathbf{m}^{(j)}\cdot\mathbf{r}_{pv}^{(j)}\neq 0\\ 1,&\mathbf{m}^{(j)}\cdot\mathbf{r}_{pv}^{(j)}=0\end{cases}$ (6)
Here, $\mathbf{m}^{(j)}$ and $\mathbf{n}^{(j)}$ are current surface normal
vector of the triangles #$pqr$ and #$pqv$, respectively, in that
$\mathbf{m}^{(j)}=\mathbf{r}_{rq}^{(j)}\times\mathbf{r}_{pq}^{(j)}$ and
$\mathbf{n}^{(j)}=\mathbf{r}_{pq}^{(j)}\times\mathbf{r}_{pv}^{(j)}$. In
addition, $\mathbf{r}_{pq}^{(j)}=\mathbf{r}_{p}^{(j)}-\mathbf{r}_{q}^{(j)}$,
$\mathbf{r}_{rq}^{(j)}=\mathbf{r}_{r}^{(j)}-\mathbf{r}_{q}^{(j)}$, and
$\mathbf{r}_{pv}^{(j)}=\mathbf{r}_{p}^{(j)}-\mathbf{r}_{v}^{(j)}$. This
definition of $\varphi_{pq}^{(j)}$ ensures that the folding angle for valley
crease lies in $(0,\pi]$ and the folding angle for mountain crease lies in
$(\pi,2\pi]$. The derivative of the folding angle $\varphi_{pq}^{(j)}$ with
respect to node #$p$’s current position vector is
$\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{p}^{(j)}}=\left(\frac{\mathbf{r}_{pv}^{(j)}\cdot\mathbf{r}_{pq}^{(j)}}{|\mathbf{r}_{pq}^{(j)}|^{2}}-1\right)\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{v}^{(j)}}-\left(\frac{\mathbf{r}_{rq}^{(j)}\cdot\mathbf{r}_{pq}^{(j)}}{|\mathbf{r}_{pq}^{(j)}|^{2}}\right)\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{r}^{(j)}}$
(7)
where
$\displaystyle\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{r}^{(j)}}$
$\displaystyle=\frac{|\mathbf{r}_{pq}^{(j)}|}{|\mathbf{m}^{(j)}|^{2}}\mathbf{m}^{(j)},$
(8)
$\displaystyle\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{v}^{(j)}}$
$\displaystyle=-\frac{|\mathbf{r}_{pq}^{(j)}|}{|\mathbf{n}^{(j)}|^{2}}\mathbf{n}^{(j)}.$
(9)
Again, to calculate the total crease folding and facet bending forces acting
on node #$p$, similar equations apply to trusses connected to this node (e.g.,
trusses $pq$, $pr$, $ps$, $pt$, $pu$, and $pv$ in Figure 1(c)).
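The dihedral-angle formulas of Equations (5)-(9) can be assembled as follows. This is a Python/NumPy sketch of the bending force on node #$p$ that directly transcribes the equations above (variable names mirror the symbols; it is not the paper’s implementation):

```python
import numpy as np

def dihedral_angle(r_p, r_q, r_r, r_v):
    """Current dihedral angle along truss p-q, Eqs. (5)-(6)."""
    r_pq, r_rq, r_pv = r_p - r_q, r_r - r_q, r_p - r_v
    m = np.cross(r_rq, r_pq)                  # normal of triangle p-q-r
    n = np.cross(r_pq, r_pv)                  # normal of triangle p-q-v
    c = np.clip(m @ n / (np.linalg.norm(m) * np.linalg.norm(n)), -1.0, 1.0)
    eta = np.sign(m @ r_pv) if m @ r_pv != 0 else 1.0
    return (eta * np.arccos(c)) % (2 * np.pi)

def bend_force_p(r_p, r_q, r_r, r_v, phi0, K_b):
    """Bending force on node p, Eqs. (4) and (7)-(9)."""
    r_pq, r_rq, r_pv = r_p - r_q, r_r - r_q, r_p - r_v
    m = np.cross(r_rq, r_pq)
    n = np.cross(r_pq, r_pv)
    l_pq = np.linalg.norm(r_pq)
    dphi_dr = l_pq / (m @ m) * m              # Eq. (8)
    dphi_dv = -l_pq / (n @ n) * n             # Eq. (9)
    dphi_dp = ((r_pv @ r_pq) / l_pq**2 - 1) * dphi_dv \
              - ((r_rq @ r_pq) / l_pq**2) * dphi_dr    # Eq. (7)
    return -K_b * (dihedral_angle(r_p, r_q, r_r, r_v) - phi0) * dphi_dp
```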
Damping forces: Estimating damping ratio and damping force is essential to
achieve realistic dynamic responses and reduce numerical simulation error
accumulation. In this study, we follow the formulation developed in [66, 65].
This formulation first calculates the average velocity of a node with respect
to its neighboring nodes ($\mathbf{v}_{\text{avg}}^{(j)}$) to effectively
remove the rigid body motion components from the relative velocities and
ensure that these components are not damped. Then the damping force
$\mathbf{F}_{d,p}^{(j)}$ applied to node #$p$ is given by
$\displaystyle\mathbf{F}_{d,p}^{(j)}$
$\displaystyle=-c_{d}^{(j)}(\mathbf{v}_{p}^{(j)}-\mathbf{v}_{\text{avg}}^{(j)})$
(10) $\displaystyle c_{d}^{(j)}$ $\displaystyle=2\zeta\sqrt{K_{s}^{(j)}m_{p}}$
(11)
where $c_{d}^{(j)}$ is the equivalent damping coefficient, and $\zeta$ is the
damping ratio.
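A sketch of this damping model, again in Python/NumPy under the same caveats as the sketches above:

```python
import numpy as np

def damping_force(v_p, v_neighbors, K_s, m_p, zeta=0.2):
    """Damping force on node p, Eqs. (10)-(11). Subtracting the average
    neighbor velocity removes rigid-body motion components so that they
    are not damped."""
    v_avg = np.mean(v_neighbors, axis=0)
    c_d = 2 * zeta * np.sqrt(K_s * m_p)   # Eq. (11)
    return -c_d * (v_p - v_avg)           # Eq. (10)
```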
Actuation force: In the origami reservoir, two types of creases receive
actuation. The first type is “input creases,” and they receive input signal
$u(t)$ required for emulation and output modulation tasks. The second type is
“feedback creases,” and they receive reference or current output signal $z(t)$
required by all computing tasks in this study except for the emulation task
(more on the applications of input and feedback creases in Section 2.2). In
the case of multiple outputs, different groups of feedback creases are
present. Here, the selection of input and feedback creases is random. There
are many methods to implement actuation to deliver the input $u(t)$ and
reference/feedback signal $z(t)$ to the reservoir. For example, the actuation
can take the form of nodal forces on a mass-spring-damper network [18, 32],
motor-generated base rotation of an octopus-inspired soft arm [19], or spring
resting length changes in a tensegrity structure [34]. In origami, the
actuation can take the form of moments that can fold or unfold the selected
creases. We assume that the resting angle $\varphi^{(0)}$ of the input and
feedback creases will change — in response to the actuation at every time step
— to a new equilibrium $\varphi_{a,0}^{(j)}$ in that [67, 34]
$\displaystyle\varphi_{a,0}^{(j)}$
$\displaystyle=W_{\text{in}}\tanh(u^{(j)})+\varphi^{(0)}\quad\text{for input
creases;}$ (12) $\displaystyle\varphi_{a,0}^{(j)}$
$\displaystyle=W_{\text{fb}}\tanh(z^{(j)})+\varphi^{(0)}\quad\text{for
feedback creases.}$ (13)
where $W_{\text{in}}$ and $W_{\text{fb}}$ are the input and feedback weights
associated with these actuated creases. They are assigned before the training
and remain unchanged after that. $u^{(j)}$ and $z^{(j)}$ are the input and
feedback signals at the $j^{\text{th}}$ time step. The magnitudes of
$W_{\text{in}}$ and $W_{\text{fb}}$ are selected such that
$\varphi_{a,0}^{(j)}\in[0,2\pi]$, consistent with the folding angle
assignment. This approach of assigning new equilibrium folding angles is
similar to traditional neural network studies that use $\tanh$ as a nonlinear
activation function to transform the function $z(t)$ into a new one with
magnitude in $[-1,1]$. Additionally, it prevents actuator saturation due
to spurious extreme values of $z(t)$. Denoting the torsional stiffness of
actuated creases by $K_{b,a}^{(j)}$, we can update Equation (4) for the
actuated creases (using node #$p$ as an example):
$\displaystyle\mathbf{F}_{a,p}^{(j)}=-K_{b,a}^{(j)}\left(\varphi_{pq}^{(j)}-\varphi_{a,0,pq}^{(j)}\right)\frac{\partial\varphi_{pq}^{(j)}}{\partial\mathbf{r}_{p}^{(j)}},$
(14)
The calculation of the other terms in this equation is the same as those in
the forces from crease folding and facet bending. Once the governing
equations of motion are formulated, they are solved using MATLAB’s ode45
solver with $10^{-3}$-second time steps. Although the governing equations of
motion use the nodal displacements $\mathbf{x}^{(j)}$ as the independent
variables, we use the dihedral crease angles $\varphi^{(j)}$ as the
_reservoir state_ variables to characterize the origami’s time responses.
This is because measuring crease angles is easier to implement with embedded
sensors, and $\varphi^{(j)}$ can be directly calculated from
$\mathbf{x}^{(j)}$ via Equations (5) and (6).
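For reference, the actuation law of Equations (12)-(13) amounts to a one-line update of each actuated crease’s resting angle; a minimal Python sketch, assuming scalar weights and signals:

```python
import numpy as np

def actuated_rest_angle(phi0, W, signal):
    """New equilibrium folding angle of an actuated crease, Eqs. (12)-(13).
    `W` is W_in for input creases (signal = u) or W_fb for feedback
    creases (signal = z). tanh bounds the shift, preventing actuator
    saturation from spurious extreme values of the signal; W should be
    chosen so the result stays within [0, 2*pi]."""
    return W * np.tanh(signal) + phi0
```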
### 2.2 Setting Up Reservoir Computing
Similar to the actuated creases (aka. input creases and feedback creases), we
designate “sensor creases” for measuring the reservoir states. We denote
$N_{a}$ as the number of actuated creases and $N_{s}$ as the number of sensor
creases. It is worth noting that the actuated creases are typically a small
subset of all origami creases (i.e., $N_{a}<N$). The sensor creases, on the
other hand, can
be all of the origami creases ($N_{s}=N$) or a small subset as well
($N_{s}<N$).
Once the selections of input, feedback, and sensor creases are completed, one
can proceed to the computing. Physical reservoir computing for tasks that
require feedback (e.g., pattern generation in Section 3.2 and output
modulation in Section 3.3) consists of two phases: the “training phase” and
the “closed-loop phase.” The emulation task (Section 3.1) requires the
training phase only.
Training phase: In this phase, we use teacher forcing to obtain the
readout weights $W_{i}$ corresponding to every reservoir state (aka. the
dihedral angles of the sensor creases). Suppose one wants to train the
reservoir to generate a nonlinear time series $z(t)$ (aka. the reference
output). The feedback creases receive the reference output, which dynamically
excites the origami reservoir under an open-loop condition without feedback
(Figure 2(a)). The reservoir states $\varphi^{(j)}$ at every time step are
measured and then compiled into a matrix $\mathbf{\Phi}$.
Once the numerical simulation is over, we segregate the reservoir state matrix
$\mathbf{\Phi}$ into the washout step, training step, and testing step. The
washout step data is discarded to eliminate the initial transient responses.
We then calculate the output readout weights $W_{i}$ using the training step
data via simple linear regression:
$\displaystyle\mathbf{W}_{\text{out}}=[\mathbf{1}\;\mathbf{\Phi}]^{+}\mathbf{Z}=\mathbf{\bar{\Phi}}^{+}\mathbf{Z}$
(15)
where $[\cdot]^{+}$ denotes the Moore-Penrose pseudo-inverse, which
accommodates non-square matrices. $\mathbf{1}$ is a column of ones for
calculating the bias term $W_{\text{out},0}$ to shift the fitted function
when necessary. $\mathbf{Z}$ contains the reference signals at each time
step, and it is a matrix if more than one reference is present. Lastly, we
use the testing step data
to verify reservoir performance. It is worth noting that white noise of
amplitude $10^{-3}$ is superimposed on the reservoir state matrix during
training to ensure the robustness of the readout result against numerical
imperfections, external perturbations [32], and instrument noise in “real-
world” applications.
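As an illustration, the training step of Equation (15) reduces to a pseudo-inverse solve. The sketch below (Python/NumPy; the `train_readout` helper and its arguments are hypothetical names, not from the paper) also reproduces the washout discard and the $10^{-3}$-amplitude training noise described above:

```python
import numpy as np

def train_readout(Phi, Z, washout, noise_amp=1e-3, seed=0):
    """Linear readout training, Eq. (15).
    Phi: (T, N_s) reservoir states (sensor-crease dihedral angles);
    Z:   (T, n_out) reference output(s)."""
    rng = np.random.default_rng(seed)
    Phi = Phi[washout:] + noise_amp * rng.standard_normal(Phi[washout:].shape)
    Z = Z[washout:]
    Phi_bar = np.hstack([np.ones((Phi.shape[0], 1)), Phi])  # bias column of 1s
    return np.linalg.pinv(Phi_bar) @ Z    # Moore-Penrose pseudo-inverse
```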
Closed-loop phase: Once the training phase is over and readout weights are
obtained, we run the reservoir in the closed-loop condition. That is, instead
of using the reference output $z(t)$, the current output $z^{*}(t)$ is sent to
the feedback creases (Figure 2(b)), and
$\displaystyle z^{*}(t)=W_{\text{out},0}+\sum_{i=1}^{N_{s}}W_{\text{out},i}\varphi_{i}(t)=\mathbf{W}_{\text{out}}^{T}\bar{\mathbf{\Phi}}$ (16)
where $N_{s}$ is the number of sensor creases and
$\bar{\mathbf{\Phi}}=[\mathbf{1}\;\mathbf{\Phi}]$. Thus, the reservoir runs
autonomously in the closed-loop phase without any external interventions.
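In code, the closed-loop readout of Equation (16) is a single affine map per time step, e.g. (a sketch consistent with the `train_readout` helper above):

```python
import numpy as np

def readout(W_out, phi):
    """Closed-loop output z*(t), Eq. (16): bias term plus the weighted
    sum of the current sensor-crease angles phi (length N_s)."""
    return W_out[0] + phi @ W_out[1:]

# At every time step, this output replaces the reference z(t) as the
# actuation signal sent to the feedback creases.
```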
Figure 2: The setup of physical reservoir computing with origami. (a) The
training phase. The feedback creases receive the reference (or targeted)
output $z(t)$; while white noise is added to the reservoir state vector
$\mathbf{\Phi}(t)$ before calculating output weights
$\mathbf{W}_{\text{out}}$; (b) The closed-loop phase. The output weights
obtained in the training phase are used to calculate the current output, which
is fed back to the feedback creases.
We study the closed-loop performance of the reservoir by calculating the mean
squared error (MSE) over $M$ time steps as follows:
$\text{MSE}=\frac{1}{M}\sum_{j=1}^{M}\left(z(j)-z^{*}(j)\right)^{2}$ (17)
To estimate performance when multiple reference outputs are present, we
combine the MSEs by taking a norm over the individual MSEs.
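For completeness, Equation (17) and the multi-output combination are, in the same Python sketch style:

```python
import numpy as np

def mse(z_ref, z_out):
    """Mean squared error over M time steps, Eq. (17); with multi-column
    inputs this returns one MSE per reference output."""
    return np.mean((np.asarray(z_ref) - np.asarray(z_out)) ** 2, axis=0)

def combined_mse(z_ref, z_out):
    """Combine per-output MSEs by taking a norm, as described above."""
    return np.linalg.norm(mse(z_ref, z_out))
```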
## 3 Computation Tasks By the Origami Reservoir
In this section, we use the origami reservoir to emulate multiple non-linear
filters simultaneously, perform pattern generation, and modulate outputs. The
baseline variables for the origami geometric design, material properties, and
reservoir parameters are given in Table 1.
Figure 3: Emulation tasks with the origami reservoir. (a) The Miura-ori
reservoir used for this task with input creases highlighted. Appropriate
boundary conditions are also necessary. (b) Examples of trajectories generated
in the emulation task including (from top to bottom) input signal $u(t)$, 2nd
order, 10th order system, and Volterra series. Dashed curves are the targeted
trajectories, and solid curves are the result of the reservoir. (c) Error
analysis of the emulation tasks. Circles are the standard deviation of MSE,
and horizontal bars are the corresponding extreme values.
Table 1: Design of a baseline origami reservoir in this study
baseline origami reservoir in this study
Reservoir size and material properties
Parameter | Value
---|---
Size | $9\times 9$
Nodal mass | 7 g
$k_{s}$ | 100 N/m
$k_{c}^{a}$ | 1 N/(m-rad)
$k_{c}$ | 0.2525 N/(m-rad)
$K_{f}$ | 10 N/(m-rad)
Geometric design of Miura-ori
Parameter | Value
---|---
$a$ | 16 mm
$b$ | 10 mm
$\gamma$ | 48$\degree$
$\theta$ | 60$\degree$
Actuator and sensor creases
Parameter | Value
---|---
No. of sensor creases ($N_{s}$) | $N$
No. of actuated creases ($N_{a}$) | 0.45$N$
No. of feedback creases | 0.3$N$
No. of input creases | 0.15$N$
Table 2: Emulation task functions
Type | Functions in discretized form (at the $j^{\text{th}}$ time step)
---|---
Input | $u(j)=0.2\sin(2\pi f_{1}j\Delta t)\sin(2\pi f_{2}j\Delta t)\sin(2\pi f_{3}j\Delta t)$
 | $f_{1}=2.11$ Hz, $f_{2}=3.73$ Hz, $f_{3}=4.33$ Hz
$2^{\text{nd}}$-order system | $z_{1}(j+1)=0.4z_{1}(j)+0.4z_{1}(j)z_{1}(j-1)+0.6(u(j\Delta t))^{3}+0.1$
$10^{\text{th}}$-order system | $\displaystyle z_{2}(j+1)=0.3z_{2}(j-1)+0.05z_{2}(j-1)\sum_{i=1}^{10}z_{2}(j-i)$
 | $+1.5u((j-10)\Delta t)u((j-1)\Delta t)+0.1$
Discrete Volterra series | $\displaystyle z_{3}(j+1)=100\sum_{\tau_{1}=0}^{T}\sum_{\tau_{2}=0}^{T}h(\tau_{1},\tau_{2})u(j-\tau_{1})u(j-\tau_{2})$
 | $\displaystyle h(\tau_{1},\tau_{2})=\exp\left(-\frac{(\tau_{1}\Delta t-\mu_{1})^{2}}{2\sigma_{1}^{2}}-\frac{(\tau_{2}\Delta t-\mu_{2})^{2}}{2\sigma_{2}^{2}}\right)$
 | $\mu_{1}=\mu_{2}=0.1$, $\sigma_{1}=\sigma_{2}=0.05$, $\Delta t=10^{-3}$ s
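The input and the first two target filters of Table 2 can be generated directly by iterating their difference equations; a minimal Python sketch transcribing Table 2 as printed, with $\Delta t=10^{-3}$ s:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 100.0, dt)
f1, f2, f3 = 2.11, 3.73, 4.33
u = 0.2 * np.sin(2*np.pi*f1*t) * np.sin(2*np.pi*f2*t) * np.sin(2*np.pi*f3*t)

# Second-order nonlinear system (Table 2)
z1 = np.zeros_like(u)
for j in range(1, len(u) - 1):
    z1[j+1] = 0.4*z1[j] + 0.4*z1[j]*z1[j-1] + 0.6*u[j]**3 + 0.1

# Tenth-order nonlinear system (Table 2)
z2 = np.zeros_like(u)
for j in range(10, len(u) - 1):
    z2[j+1] = (0.3*z2[j-1]
               + 0.05*z2[j-1]*sum(z2[j-i] for i in range(1, 11))
               + 1.5*u[j-10]*u[j-1] + 0.1)
```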
### 3.1 Emulation Task
This sub-section shows that the origami reservoir can emulate multiple
nonlinear filters simultaneously using a single input. Such emulation is a
benchmark task for evaluating the performance of RNN training [68] and for
proving the multi-tasking capability of physical reservoirs [18, 19]. Note that the
emulation task involves only the training phase, so there are no feedback
creases in this case. Consequently, we excite the reservoir by sending the
input function $u(t)$ to the input creases and train it to find three sets of
readout weights in parallel via linear regression. Here, $u(t)$ is a product
of three sinusoidal functions with different frequencies, and the three target
non-linear filters are $2^{\text{nd}}$-order non-linear dynamic system
$z_{1}(t)$, a $10^{\text{th}}$-order non-linear dynamic system $z_{2}(t)$, and
a discrete Volterra series $z_{3}(t)$ (detailed in Table 2).
We use a $9\times 9$ Miura-ori reservoir in this task, exciting the reservoir
from complete rest and training it for 100 seconds. We discard the first 50
seconds of data as the washout step, use the data from the next 45 seconds to
calculate the optimum static readout weights, and then use the last 5 seconds
of data to calculate the MSE for performance assessments. Results in Figure 3
show that the origami reservoir can emulate these three nonlinear filters. As
the nonlinearity and complexity of the target filter increase, the MSE also
increases (Figure 3(c)).
Moreover, we compare the emulation performance when all $N$ creases are used
as sensor creases versus when only the actuated creases are used as sensors
($N_{s}=N_{a}=0.45N$; see Table 1). The increase in MSE is marginal in the latter case.
Therefore, the origami satisfies the previously mentioned nonlinearity and
fading memory requirements to be a physical reservoir, and one only needs to
use the actuated crease angles as the reservoir states, simplifying the
reservoir setup.
### 3.2 Pattern Generation Task
Pattern generation tasks are essential for achieving periodic activities such
as robotic locomotion gait generation and manipulator control where persistent
memory is required. That is, by embedding these patterns (or limit cycles) in
the origami reservoir, one can generate periodic trajectories in the closed-
loop. We again use a $9\times 9$ Miura-ori reservoir and randomly select
$30\%$ of its creases as the feedback creases (this task does not require
input creases). These feedback creases are divided into two groups for the two
components of 2D trajectories. We run the training phase for 100 seconds for
each pattern, discard the initial 15 seconds of data as the washout step, and
use the next 51 seconds of data to calculate the optimum output readout weights.
Generating nonlinear limit cycles: In the following results, the origami
reservoir demonstrates its computation capability by generating a quadratic
limit cycle (LC), the Van der Pol limit cycle, and a Lissajous curve in the
closed loop. The quadratic limit cycle is defined by two differential
equations:
$\dot{x}_{1}=x_{1}+x_{2}-\epsilon(t)x_{1}\left(x_{1}^{2}+x_{2}^{2}\right),$ (18)
$\dot{x}_{2}=-2x_{1}+x_{2}-x_{2}\left(x_{1}^{2}+x_{2}^{2}\right),$ (19)
where the parameter $\epsilon(t)$ determines the shape of the limit cycle
($\epsilon(t)=1$ in this case). The Van der Pol limit cycle is defined by:
$\dot{x}_{1}=x_{2},$ (20)
$\dot{x}_{2}=-x_{1}+\left(1-x_{1}^{2}\right)x_{2}.$ (21)
The Lissajous curve is a graph of two sinusoidal signals parameterized by
their frequency ratio ($f_{1}/f_{2}=0.5$) and phase difference
($\delta=\pi/2$):
$x_{1}=\sin\left(f_{1}t+\delta\right)$ (22)
$x_{2}=\sin\left(f_{2}t\right)$ (23)
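For training, the reference trajectories of Equations (18)-(23) can be integrated numerically; a sketch using Python/SciPy (the paper integrates its models in MATLAB; the initial condition and the specific Lissajous frequencies below are illustrative assumptions, since only the ratio and phase are stated in the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

def quadratic_lc(t, x, eps=1.0):      # Eqs. (18)-(19)
    x1, x2 = x
    r2 = x1**2 + x2**2
    return [x1 + x2 - eps*x1*r2, -2*x1 + x2 - x2*r2]

def van_der_pol(t, x):                # Eqs. (20)-(21)
    x1, x2 = x
    return [x2, -x1 + (1 - x1**2)*x2]

t_eval = np.arange(0.0, 100.0, 1e-3)
sol = solve_ivp(quadratic_lc, (0.0, 100.0), [0.1, 0.1], t_eval=t_eval)
z_ref = sol.y.T                       # 2D reference trajectory for teacher forcing

# The Lissajous targets of Eqs. (22)-(23) are analytic; frequencies chosen
# here only to satisfy the stated ratio f1/f2 = 0.5 and delta = pi/2.
f1, f2, delta = 0.5, 1.0, np.pi / 2
x1, x2 = np.sin(f1*t_eval + delta), np.sin(f2*t_eval)
```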
Figure 4: Stable pattern generation under closed-loop using the Miura-ori
reservoir. (a) This task’s origami reservoir includes two groups of feedback
creases required to generate 2D limit cycles. (b-d) The closed-loop
trajectories of quadratic limit cycle, Van der Pol oscillator, and the
Lissajous curve, respectively. In these plots, the first row of time responses
shows the closed-loop output after 100s of training. The third row of time
responses shows how the trained reservoir can recover the targeted limit
cycles from an initial resting condition. The corresponding phase portraits
are as shown in the second row. Here, the dashed curves are targeted
trajectories, and the solid curves are the reservoir’s outputs. (e) Van der
Pol limit cycle recovery after the temporary failure of sensor and actuator
creases. The two simulations are the same except for the number of sensor
creases ($N_{s}=N$ for the first test, $N_{s}=0.3N$ for the second). The
inset figures show the corresponding phase portraits.
As shown in Figure 4(b), the origami reservoir can generate all three periodic
trajectories just by changing the output readout weights. The MSEs for the
quadratic LC, the Van der Pol LC, and the Lissajous curve, calculated using
the first 10 seconds of the closed-loop run ($M=10000$), are
$3.28\times 10^{-7}$, $2.03\times 10^{-5}$, and $5.5\times 10^{-4}$,
respectively. As expected, the MSE
increases as the complexity of the curve increases.
Stability and robustness of the pattern generation: After finding the readout
weights, we test the stability of these three limit cycles by starting the
origami reservoir from total rest in the closed loop and running it for more
than 1000 seconds. The limit cycle is stable if and only if it can recover the
pattern from zero initial conditions and stay on target for at least 1000
seconds of simulation [32, 19]. The results in Figure 4(c) indicate that the
torsional moments generated from the feedback signals on the feedback creases
are sufficient to recover and maintain the three limit cycles from total rest.
Small phase differences occur between generated trajectories and the targets
because the reservoir takes a slightly different path than the target, and the
Lissajous curve takes more than 15 seconds to recover fully. Nonetheless, the
origami reservoir successfully passes this test.
To further analyze the robustness of reservoir-generated limit cycles, we
simulate actuator and sensor failures. As the origami reservoir generates the
Van der Pol limit cycle in these tests, all feedback and sensor creases stop
working (aka. their signals are set to zero) for 10 seconds. We conduct these
tests when all creases are used as sensor creases ($N_{s}=N$) and when only
feedback creases are sensor creases ($N_{s}=N_{a}=0.3N$). The simulation
results in Figure 4(e) show that, although the reservoir diverges to a
trajectory far away from the target during the actuator and sensor failure, it
can immediately recover the Van der Pol limit cycles after the end of these
failures.
Figure 5: Results of the modulation task under closed loop using the Miura-ori
reservoir. (a) This task’s origami reservoir includes two groups of feedback
creases and input creases. (b) Quadratic limit cycle trajectories under
closed-loop and the corresponding input signal $\epsilon(t)$. The results are
obtained after 500 seconds of training. (c) Closed-loop trajectory recovery
from the initial resting conditions. (d) The corresponding phase-portraits,
where the targeted trajectories are overlaid on top of the reservoir output.
### 3.3 Output Modulation Task
Output modulation capability allows the reservoir to adjust its output
according to a randomly varying input signal without changing the readout
weights. This ability is also essential for soft robotic control applications
because it allows the robot to switch behaviors according to external stimuli
or environmental changes. In this task, we randomly select input creases,
which account for $15\%$ of the total creases, in addition to the feedback
creases (Figure 5(a)). Moreover, all creases are used as sensor creases
($N_{s}=N$). The simulation results in Figure 5(b, c) show the generated
quadratic limit cycles with modulated input (Equations (18) and (19)). The origami
reservoir can react to the input and modulate the magnitude of the quadratic
limit cycles. The MSE is $3.8\times 10^{-4}$, which is remarkably small,
considering this task’s complexity.
## 4 Correlating Physical Design and Computing Performance
In this section, we use the mean squared error (MSE) as the metric to examine
the connections between the origami reservoir’s design and computing
performance. In particular, this analysis aims to investigate the sensitivity
of MSE to different parameter changes and identify the optimal origami
designs. To this end, in-depth parametric analyses are conducted to examine
the effect of (1) reservoir size and material properties, (2) crease pattern
geometry, and (3) feedback and sensor crease distribution. We use both Van der
Pol and quadratic limit cycle generation tasks to ensure the broad
applicability of parametric study results.
Table 3: Variables for reservoir size and material properties parametric study
Parameter | Base value | Distribution
---|---|---
Nodal mass (g) | 7 | $[1,50]$
Geometric imperfections | Standard Miura-ori | $\sigma=\chi\exp\left(-\frac{||N_{i}-N_{j}||}{l}\right)$, $\mu=0$, $\chi=0.4a$, $l=4a$
Truss torsional stiffness, N/(m-rad) | $K_{b}^{a}=1$, $K_{b}=0.2525$ | $K_{b}^{a}=1$, $K_{b}\in[0.005,0.5]$
### 4.1 Reservoir Size, Material Properties, and Vertices Perturbation
We observe that feedback crease distribution affects reservoir computing
performance quite significantly. In particular, poorly distributed feedback
creases might result in failed pattern generation tasks. Therefore, we first
conduct numerical simulations by randomly changing the feedback crease
distributions (72 unique designs in total) and identifying the best performing
one (with the least MSE). We refer to this best performing feedback crease
distribution as the _base design_ (Figure 6(a, c)) for the following
parametric studies. Then, we conduct another parametric study regarding the
nodal mass, crease stiffness, and vertices perturbation. We vary these three
parameters, one at a time, for 72 randomly selected designs (six batches of
jobs in parallel on a computer with 12 cores). The baseline values and ranges
of the parameters are listed in Table 3.
The origami reservoir performance turns out to be highly sensitive to the
nodal mass variation. As opposed to the uniform nodal mass in base design, a
randomly distributed nodal mass can significantly increase or decrease the MSE
for both pattern generation tasks. However, randomly distributing mass in an
origami sheet is quite challenging in practical applications, so a varying
mass distribution should be used judiciously based on the particular
application at hand. On the other hand, the origami performance is much less
sensitive to the crease torsional stiffness. By randomly changing the
stiffness, one can achieve performance at par with the base design.
Moreover, we investigate the effects of random geometric imperfection in the
base designs of origami reservoir. To this end, we adopt the formulation
introduced by Liu et al. [69], which introduces small perturbations to the
nodal positions in folded origami. Such imperfections are inevitable in
practice due to various manufacturing defects. It is found that these small
imperfections do not worsen the MSE significantly and in fact could reduce the
MSE by a moderate degree (Figure 6(a, b)).
It is also worth noting that the larger $9\times 9$ Miura origami reservoir
performs better than the smaller one because larger origami contains more
folding angles to constitute the reservoir state matrix. Therefore, the high
dimensionality of a reservoir is desirable for producing a smaller MSE.
Figure 6: Effect of reservoir size and material properties on the reservoir
computing performance. (a) The distribution of MSE from the Quadratic limit
cycle simulations using random feedback crease distributions and different
design parameter distributions. Here “FB” stands for feedback crease
distribution, “M” stands for nodal mass distribution, “V” stands for origami
vertices geometry perturbation, and “$K_{f}$” stands for crease torsional
stiffness distribution. It is worth emphasizing that the “FB” results come
from one parametric study of 72 unique designs, and the “M,” “V,” and
“$K_{f}$” are results of the subsequent simulations. The bar charts depict the
average value, standard deviation (circles), and extreme values (horizontal
bars) of MSE. (b) A similar result from the Van der Pol limit cycle generation
task. (c) The feedback crease distributions of the four different baseline
designs used in this parametric study.
### 4.2 Origami Design
Figure 7: Effect of Miura-ori geometric design on the reservoir performance.
(a-c) The Miura-ori geometry and the corresponding landscape of MSE
distribution when $\theta=50\degree$, $60\degree$, and $70\degree$,
respectively. The lighter and darker regions correspond to larger and smaller
errors, respectively, while the white regions depict origami designs that
failed the computing task. (d) The unit cell geometry of four representative
designs with the same crease length $a$ but different sector angles $\gamma$
and crease length ratios $a/b$.
A unique advantage of origami-based structures and materials is their
considerable freedom to tailor the geometric design. To this end, we start
from the Base Design I of $9\times 9$ Miura-ori reservoir, vary its crease
length ratio $(a/b)$ and internal sector angle $(\gamma)$, and then run the
quadratic limit cycle task with 100 crease length and sector angle
combinations at three folding angles $(\theta=50\degree,60\degree,70\degree)$.
The results of the parametric analysis are shown in Figure 7. We observe that,
at lower folding angles (flatter origami), the origami reservoir is more
likely to fail the pattern generation tasks. The computing performance
improves significantly with a reduced MSE as the origami folds more (or as
$\theta$ increases). This trend is probably because highly folded origami
offers an increased range of folding motion.
Moreover, there are two design sets with the lowest MSE: $a/b\approx 1.5$,
$\gamma\approx 45\degree$, and $a/b\approx 2.5$, $\gamma\approx 60\degree$.
Generally speaking, a moderate-to-high crease length ratio and a small sector
angle create “skewed” origami patterns that appear to give better computing
performance across all folding angle values. The best designs here have MSEs
on the order of $10^{-7}$, which is of the same magnitude as we found
previously by tailoring the nodal mass and crease stiffness.
Figure 8: Effect of varying the number of actuator and sensor creases.
### 4.3 Actuator and Sensors Distribution
Finally, it is important for practical applications to find the minimum
number of input/feedback and sensor creases required to achieve acceptable
computing performance. To this end, we start with the $9\times 9$ Miura-ori
reservoir and conduct two tests. In the first test, we vary the percentage of
feedback creases ($N_{a}=0.2N,0.3N,0.4N,0.5N$, each with 24 randomly generated
crease distributions) while using all crease dihedral angles to constitute the
reservoir state matrix (i.e., $N_{s}=N$). In the second test, we use the same
feedback crease design and only use these feedback creases’ dihedral angles to
formulate the reservoir state matrix (i.e., $N_{s}=N_{a}$).
We find that if only $20\%$ of the creases are used for feedback, the origami
reservoir might fail the quadratic limit cycle task. On the other hand, the
MSE reduces only marginally as we increase the percentage of feedback creases
beyond $30\%$ (Figure 8). Therefore, we can conclude that using only
$30\%$–$40\%$ of the total creases as feedback and sensor creases provides
adequate computing performance. This result is significant because it
shows that, even though a large size (high-dimensionality) of the reservoir is
essential for computing performance, one does not need to measure (readout)
every reservoir state. In this way, the practical implementation of the
origami reservoir can be significantly simplified.
In conclusion, the parametric analyses lay out the strategy to optimize the
origami reservoir performance by tailoring the underlying physical and
computational design. A larger origami with higher dimensionality can ensure
a low computational error, but one only needs to use $30\%$–$40\%$ of its
creases as the feedback and sensor creases to tap into the origami’s
computing capacity.
Meanwhile, the distribution of these feedback and sensor creases must be
carefully chosen with extensive simulations. To further improve computing
performance, one can tailor the origami’s mass distribution, crease stiffness,
and geometric design. Among these options, optimizing the folding geometry
should be the most effective because it is easy to implement in practical
applications.
## 5 Application to soft robotic crawling
This section demonstrates the application of origami reservoir computing to
generate an earthworm-inspired peristaltic crawling gait in a robotic system.
The earthworm uses peristalsis to navigate uneven terrain, burrow through
soil, and move in confined spaces. The lack of complex external appendages
(aka., legs or wings) makes earthworm-inspired robots ideal for field
exploration, disaster relief, or tunnel drilling [70, 71, 72]. The body of an
earthworm consists of segments (metamerism) grouped into multiple “driving
modules” [73, 60]. Each driving module includes contracting, anchoring, and
extending segments actuated by antagonistic muscles (Figure 9(a)). During
peristaltic locomotion, these segments alternately contract, anchor (to the
environment with the help of setae), and extend to create a propagating
peristalsis wave, thus moving the body forward.
We design an earthworm-inspired origami robot consisting of two $3\times 9$
Miura-ori reservoirs connected via a stiff central bridge (Figure 9(b)). The left and
right halves of the robot are symmetric in design, and the central bridge
design allows differential motion between the two halves to facilitate turning
in response to the external input. In each origami reservoir, we embed two
groups of feedback creases (Figure 9(b)) with feedback weights assigned such
that their values for the front and back-half are equal but opposite to each
other. This arrangement reduces the number of reference outputs needed to
generate a crawling gait. To create a peristalsis locomotion gait, we train
the origami reservoirs to generate multiple harmonic signals with a phase
difference of $\pi/2$ among them (aka. a pattern generation task, shown in Figure
9(b)). We train the robot for 100 seconds and discard the first 15 seconds of
data as the washout step.
Figure 9: Reservoir computing powered crawling origami robot. (a) The
kinematics of a peristaltic locomotion cycle in an earthworm. For clarity, the
earthworm body is simplified and consists of six identical segments organized
into two driving modules. The earthworm body moves forward while the
peristaltic wave of anchoring segments (or driving modules) propagates
backward. (b) The design of an earthworm inspired origami crawling robot that
features two stripes of Miura-ori connected by a zig-zag shaped “ridge.” This
robot has four groups of feedback creases. (c) The closed-loop trajectory
generated by the feedback creases after training. (d) Peristaltic locomotion
cycle in the origami robot as a result of the generated trajectory.
Also, we apply ideal anchors to the bottom origami creases that are in contact
with the surface below. These anchors are assumed to be kinematically attached
to the ground when the associated origami crease folds and relaxed as the
crease unfolds (or flattens). Such anchor design is feasible by leveraging the
origami facets’ folding motion, as shown in the authors’ previous study [60].
Figure 9(d) illustrates the robotic locomotion generated by reservoir
computing, while Figure 9(c) depicts the closed-loop response and the limit
cycle recovery from total rest (MSE is $3.9\times 10^{-4}$). As the origami
reservoir generates the multiple harmonic signals with a phase difference, its
folding motion naturally “synchronizes” to these signals, generating a
peristaltic wave of folding and unfolding. As a result, the robot crawls
forward like an earthworm, without using any traditional controllers.
## 6 Summary and Conclusion
We demonstrate the physical reservoir computing capability of origami via
extensive benchmark simulations and parametric studies. First, we develop a
simulation environment to study the nonlinear origami dynamics and detail the
origami reservoir setup. This reservoir successfully achieves many computing
tasks such as emulation, pattern generation, and modulation, all of which are
relevant to robotic applications. We also conduct comprehensive parametric
analyses to uncover the linkage between the origami reservoir design and its
computing performance. This new knowledge base offers us a guideline to
optimize computing performance. To the authors’ best knowledge, this is the
first study to rigorously examine the performance of a physical reservoir
computer through the lens of physical design. Finally, we demonstrate how to
embed reservoir computing into an origami robot for control without
traditional controllers through the example of peristaltic crawling.
We list four requirements for a mechanical system to be a reservoir in the
introduction, and origami satisfies all these requirements. The tessellated
origami structures are inherently high-dimensional. For example, a $7\times 7$
Miura-ori with 49 nodes contains $N=60$ crease dihedral angles, all of which,
or a small portion, can serve as the reservoir states. The nonlinearity of origami
partly originates from the nonlinear kinematic relationships between these
crease angles and external geometry. Also, since origami patterns are highly
structured (ordered), small perturbations in the material properties,
imperfections of crease geometry, and the introduction of local actuation are
sufficient to destroy the regularity and create disorder. These properties
make origami highly nonlinear dynamic reservoirs. The origami reservoir’s
performance in the emulation task proves that it can act as a nonlinear filter
and satisfies the fading memory property. Nonlinear patterns can be embedded
into the origami reservoir, and the resulting pattern generation is robust
against external disturbances and recoverable under different initial
conditions, proving the separation property. Finally, adding the feedback can create
persistent memory, which is conducive to learning new tasks.
For future robots to work autonomously in unstructured and dynamic
environments, the robot body and brain have to work together by continuously
exchanging information about the current condition, processing this
information, and taking appropriate actions. The physical reservoir computing
embodied robot shown in this study presents a step toward this vision. The
reservoir embedded in the robot body directly gathers information from the
distributed sensor-actuator network to perform low-level control tasks like
locomotion generation. The resulting soft robot can generate the global target
behavior autonomously without controlling every element individually.
Moreover, the generated trajectories could be robust against external
disturbances and modulated according to changing working conditions.
A challenge in implementing physical reservoir computing is the many sensors
and actuators required, even though these sensors and actuators can be simple
individually. Our results contribute in this regard by showing that only a
small portion of the origami creases needs to be equipped with sensors and
actuators to tap into the reservoir computing power.
In summary, origami reservoir computing provides an attractive pathway for
facilitating synergistic collaboration between the soft robot’s body and the
brain. Reservoir computing, coupled with the unique mechanical properties that
origami can offer — multi-stability [47, 49, 74], nonlinear stiffness [47, 44,
45, 49], and negative Poisson’s ratio [47, 44, 49] — opens up new avenues to
the next generation of soft robots with embedded mechanical intelligence.
## ACKNOWLEDGMENTS
The authors acknowledge the support from the National Science Foundation (CMMI
– 1933124), as well as Clemson University for the generous allotment of
computing time on the Palmetto cluster.
## References
* [1] Miriyev, A. and Kovač, M., “Skills for physical artificial intelligence,” Nature Machine Intelligence 2(11), 658–660 (2020).
* [2] Laschi, C., Mazzolai, B., and Cianchetti, M., “Soft robotics: Technologies and systems pushing the boundaries of robot abilities,” Science Robotics 1(1), eaah3690 (2016).
* [3] Cianchetti, M., Calisti, M., Margheri, L., Kuba, M., and Laschi, C., “Bioinspired locomotion and grasping in water: The soft eight-arm OCTOPUS robot,” Bioinspiration and Biomimetics 10(3), 035003 (2015).
* [4] Hannan, M. W. and Walker, I. D., “Kinematics and the Implementation of an Elephant’s Trunk Manipulator and Other Continuum Style Robots,” Journal of Robotic Systems 20(2), 45–63 (2003).
* [5] Ma, K. Y., Chirarattananon, P., Fuller, S. B., and Wood, R. J., “Controlled Flight of a Biologically Inspired, Insect-Scale Robot,” Science 340, 603–607 (may 2013).
* [6] Joshi, A., Kulkarni, A., and Tadesse, Y., “Fludojelly: experimental study on jellyfish-like soft robot enabled by soft pneumatic composite (spc),” Robotics 8(3), 56 (2019).
* [7] Ren, Z., Hu, W., Dong, X., and Sitti, M., “Multi-functional soft-bodied jellyfish-like swimming,” Nature Communications 10(1) (2019).
* [8] Katzschmann, R. K., Marchese, A. D., and Rus, D., “Hydraulic autonomous soft robotic fish for 3D swimming,” Springer Tracts in Advanced Robotics 109, 405–420 (2016).
* [9] Rafsanjani, A., Zhang, Y., Liu, B., Rubinstein, S. M., and Bertoldi, K., “Kirigami skins make a simple soft actuator crawl,” Science Robotics 3(15), 1–8 (2018).
* [10] Wu, Y., Yim, J. K., Liang, J., Shao, Z., Qi, M., Zhong, J., Luo, Z., Yan, X., Zhang, M., Wang, X., et al., “Insect-scale fast moving and ultrarobust soft robot,” Science Robotics 4(32), eaax1594 (2019).
* [11] Seok, S., Onal, C. D., Cho, K. J., Wood, R. J., Rus, D., and Kim, S., “Meshworm: A peristaltic soft robot with antagonistic nickel titanium coil actuators,” IEEE/ASME Transactions on Mechatronics 18(5), 1485–1497 (2013).
* [12] Hoffmann, M. and Müller, V. C., “Simple or complex bodies? Trade-offs in exploiting body morphology for control,” Studies in Applied Philosophy, Epistemology and Rational Ethics 28(January 2018), 335–345 (2017).
* [13] Laschi, C. and Mazzolai, B., “Lessons from animals and plants: The symbiosis of morphological computation and soft robotics,” IEEE Robotics and Automation Magazine 23(3), 107–114 (2016).
* [14] Trivedi, D., Rahn, C. D., Kier, W. M., and Walker, I. D., “Soft robotics: Biological inspiration, state of the art, and future research,” Applied Bionics and Biomechanics 5(3), 99–117 (2008).
* [15] Paul, C., Investigation of Morphology and Control in Biped Locomotion, PhD thesis, University of Zurich, Switzerland (2004).
* [16] Paul, C., “Morphological computation. A basis for the analysis of morphology and control requirements,” Robotics and Autonomous Systems 54(8), 619–630 (2006).
* [17] Caluwaerts, K., D’Haene, M., Verstraeten, D., and Schrauwen, B., “Locomotion without a brain: Physical reservoir computing in Tensegrity structures,” Artificial Life 19(1), 35–66 (2013).
* [18] Hauser, H., Ijspeert, A. J., Füchslin, R. M., Pfeifer, R., and Maass, W., “Towards a theoretical foundation for morphological computation with compliant bodies,” Biological Cybernetics 105(5-6), 355–370 (2011).
* [19] Nakajima, K., Hauser, H., Kang, R., Guglielmino, E., Caldwell, D. G., and Pfeifer, R., “A soft body as a reservoir: case studies in a dynamic model of octopus-inspired soft robotic arm,” Frontiers in Computational Neuroscience 7(July), 1–19 (2013).
* [20] Müller, V. C. and Hoffmann, M., “What Is Morphological Computation? On How the Body Contributes to Cognition and Control,” Artificial Life 23, 1–24 (feb 2017).
* [21] Tanaka, G., Yamane, T., Héroux, J. B., Nakane, R., Kanazawa, N., Takeda, S., Numata, H., Nakano, D., and Hirose, A., “Recent advances in physical reservoir computing: A review,” Neural Networks 115, 100–123 (2019).
* [22] Füchslin, R. M., Dzyakanchuk, A., Flumini, D., Hauser, H., Hunt, K. J., Luchsinger, R. H., Reller, B., Scheidegger, S., and Walker, R., “Morphological computation and morphological control: Steps toward a formal theory and applications,” Artificial Life 19(1), 9–34 (2013).
* [23] Collins, S. H., Wisse, M., and Ruina, A., “A Three-Dimensional Walking Robot with Two Legs and Knees,” The International Journal of Robotics Research 20(7), 607–615 (2001).
* [24] Floreano, D., Pericet-Camara, R., Viollet, S., Ruffier, F., Brückner, A., Leitel, R., Buss, W., Menouni, M., Expert, F., Juston, R., Dobrzynski, M. K., L’Eplattenier, G., Recktenwald, F., Mallot, H. A., and Franceschini, N., “Miniature curved artificial compound eyes,” Proceedings of the National Academy of Sciences of the United States of America 110(23), 9267–9272 (2013).
* [25] Jaeger, H., “The “echo state” approach to analysing and training recurrent neural networks-with an erratum note,” Bonn, Germany: German National Research Center for Information Technology GMD Technical Report 148(34), 13 (2001).
# Mechanistic determination of tear film thinning via fitting simplified models to tear breakup

Short title: Fitting simplified models to tear breakup

Rayanne A. Luke¹, Richard J. Braun¹, and Carolyn G. Begley²

¹Department of Mathematical Sciences, University of Delaware, 222 S. Chapel St, Newark, USA<EMAIL_ADDRESS>
²School of Optometry, Indiana University, Bloomington, USA

Keywords: dry eye; fluorescent imaging; optimization; tear breakup; tear film

Purpose: To determine whether evaporation, tangential flow, or a combination of the two causes tear film breakup in a variety of instances; to estimate related breakup parameters that cannot be measured in breakup during subject trials; and to validate our procedure against previous work. Methods: Five ordinary differential equation models for tear film thinning were designed that model evaporation, osmosis, and various types of flow. Eight tear film breakup instances of five healthy subjects that were identified in fluorescence images in previous work were fit with these five models. The fitting procedure used a nonlinear least squares optimization that minimized the difference between the computed theoretical fluorescent intensity from the models and the experimental fluorescent intensity from the images. The optimization was conducted over the evaporation rate and up to three flow rate parameters. The smallest norm of the difference was determined to correspond to the model that best explained the tear film dynamics. Results: All of the breakup instances were best fit by models with time-dependent flow. Our optimal parameter values and thinning rate and fluid flow profiles compare well with previous partial differential equation model results in most instances. Conclusions: Our fitting procedure suggests that the combination of the Marangoni effect and evaporation causes most of the breakup instances. Comparison with results from previous work suggests that the simplified models can capture the essential tear film dynamics in most cases, thereby validating this procedure as one that could be used on many other instances.
## 1 Introduction
Tear film breakup (TBU) occurs when a thinned region forms in the tear film (TF). Clinically, this is defined as the first dark area that is observed in the fluorescent (FL) TF following instillation of fluorescein dye [norn1969]. Various mechanisms are thought to cause different types of TBU: evaporation [lemp2007, willcox2017tf, king2018] causes relatively slow thinning [king2010], and rapid thinning may be explained by Marangoni-driven tangential flow [zhong2019] or, plausibly, dewetting in cases of circular TBU [yokoi2013, yokoi2019]. The Marangoni effect drives outward flow at the aqueous/lipid interface via shear stress induced by a lipid concentration gradient [berger74, craster2009]. Dewetting from a defective corneal surface region has been hypothesized to drive outward tangential flow via pressure gradients due to van der Waals-type forces [sharma85, sharma99, zhang03]. A related term is full-thickness tear film breakup (FT-TBU), which occurs when the aqueous layer has thinned to the point where the lipid layer and glycocalyx touch [begley13, king2018].
The effects of evaporation and the Marangoni effect on TBU have been extensively studied and modeled separately [king2013, peng2014, braun2018, zhong2018]; only recently have they been explored in combination to explain breakup occurring on an intermediate time scale [zhong2019]. Zhong et al. [zhong2019] developed a partial differential equation (PDE) model with one spatial variable that incorporated both mechanisms. Luke et al. [luke2021] fit FT-TBU data from fluorescent (FL) images of healthy subjects with a rescaled version of the Zhong et al. [zhong2019] model. The optimization was conducted via nonlinear least squares minimization of the difference between the theoretical and experimental FL intensity, and TBU parameters were estimated for the model. The PDE fitting process is time-consuming and limited to spots or streaks; fitting more complicated shapes would require two spatial dimensions.
Ordinary differential equation (ODE) models have been designed to capture TF thinning without spatial variation; exact solutions exist for some cases. Braun et al. [braun2019] extended an ODE model without flow from previous work [braun2014] to include time-independent extensional fluid flow; a time-dependent version has recently been developed and is presented here. The flow is divergent from the origin and can be combined with evaporative loss and osmotic supply. Winter et al. [winter2010] (PDE model) and Braun [braun2012] (ODE model) included van der Waals forces to stop thinning via a zero-permeability condition at the tear/cornea interface; such terms and forces are omitted from the models in this work. Luke et al. [luke2020] fit TBU data with ODE models with evaporation, with or without osmosis, but without tangential flow. Neither the PDE nor the ODE model gave the best fit for all of the TBU instances. The instances best fit by the ODE models had rates of FL intensity decrease that were most closely approximated by a constant.
Both TF instability and hyperosmolarity are important to study because they are proposed etiological causes of dry eye syndrome [gilbard1978, craig2017defn, willcox2017tf]. Osmolarity is the osmotically-active salt ion concentration in the aqueous layer [tomlinson09, stahl2012]. A concentration difference between the corneal epithelium and the aqueous layer induces osmotic flow from the cornea to the TF [peng2014, braun2015]. TF osmolarity may be measured clinically in the inferior meniscus [lemp2011]; the healthy range is 296-302 mOsm/L [lemp2011, tomlinson2006, versura2010]. Dry eye measurements in the same location can reach 316-360 mOsm/L [gilbard1978, tomlinson2006, sullivan2010, dartt2013], but estimates for the TF over the cornea reach 900 mOsm/L or higher [liu09, braun2015, peng2014, luke2020]. High levels of TF osmolarity are associated with pain, inflammation, and cellular changes [pflugfelder2011, belmonte2015, liu09]. In support of these potentially high levels of TF osmolarity over the cornea, mathematical models without spatial variation have estimated peak osmolarities up to ten times the isotonic concentration [braun2012, braun2015]. The modeling work of Peng et al. [peng2014] found that evaporation elevates osmolarity in breakup regions.
TF thinning rates have been measured experimentally or estimated in many studies. A few experimental methods include spectral interferometry [nichols2005, kimball2010, king2010], an open chamber [hamano1981], an evaporimeter [peng2014b], and averaging pure-water and functioning-lipid-layer rates over the cornea obtained by heat transfer analysis and thermal imaging [dursch2018]. In Braun et al. [braun2018], both peak and background evaporation rates in TBU instances, as well as the width of the evaporation distribution, were studied parametrically. Subsequently, parameter estimation schemes were developed in Luke et al. [luke2020, luke2021] for fitting PDE models to experimental FL intensity distributions. They found evaporation rates ranging from -36.9 to 4.91 $\mu$m/min (the upper bound indicating thickening) and overall TF thinning rates ranging from -23.5 to -1.85 $\mu$m/min. These thinning rates were comparable to, or a little faster than, previous experimental rates measured in studies that were not specifically seeking TBU instances [nichols2005].
In this paper, we fit a hierarchy of ODE models to the same dataset as in Luke et al. [luke2021], where the TBU instances were fit with PDE models that incorporated evaporation and the Marangoni effect. We use these PDE results as a guide when determining whether our results have captured what we believe to be the correct dynamics. In some cases, the ODE models follow the experimental data better than the PDEs, suggesting that different conclusions may be drawn for those particular instances.
## 2 Methods
### 2.1 FL images
The data were taken from twenty-five normal, healthy subjects in a study conducted at Indiana University [awisigyau2020], as discussed in several papers [luke2020, luke2021]. Approval was received from the Biomedical Institutional Review Board of Indiana University, Declaration of Helsinki principles were followed during data collection, and informed consent was obtained from all subjects. Subjects were seated behind a slit-lamp biomicroscope and 2% sodium fluorescein solution was instilled in the patient's eye. A light with a cobalt-blue excitation filter illuminated the eye so that the aqueous layer of the TF fluoresced green [carlson2004]. A trial is the sequence of images of the subject's eye following a few quick blinks; it records the fluorescence of the aqueous part of the TF. The trials typically start with an FL concentration close to 0.2%, the so-called critical concentration at which peak fluorescence occurs for thin TFs [webber86]. The critical FL concentration can be expressed in molar units as 0.0053 M; see Appendix A.2.
Figure 1: (a)-(g): the last image in each trial. The bright rectangle, called
the Purkinje image, is due to the reflection from the light source. The images
have been brightened for better visibility. (h): Surface plot of the FL
intensity over time for subject S27v2t2 5:00 shown in (g).
For our purposes, FT-TBU is thinning to what is evidently a very small aqueous thickness, as determined with the aid of a computer. We fit the central data of the same spot- or streak-shaped FT-TBU instances extracted in Luke et al. [luke2021]. All instances reported in this paper are shown in Figure 1. The time resolution of our dynamic data is restricted by the frame rate, which is 4 or 5 $s^{-1}$ depending on the trial.
### 2.2 Models
There is a hierarchy of ODE models that we explore; the most complicated is derived in Appendix A.3. Each variation of the model is determined by which mechanisms are assumed to affect the TF. The options are a combination of evaporation, osmosis, and flow; if flow is present, there are several choices. These simple models are designed to capture the key ingredients of thinning in order to distinguish the dominant mechanism causing it: evaporation, outward tangential flow, or a combination of the two. Evaporation-dominated thinning is characterized by inward tangential flow, if any [luke2020], while Marangoni flow is characterized by strong outward tangential flow that decreases in strength as the trial progresses [luke2021]. In all models that follow, $h(t)$ denotes the nondimensional TF thickness.
#### 2.2.1 Initial conditions
In all cases of the model, we have uniform nondimensional initial conditions
$c(0)=1,\ h(0)=1,\mbox{ and }f(0)=f_{0}.$ (1)
After eliminating $c$ and $f$, the only initial condition needed is the one
for $h$.
#### 2.2.2 Case E model
The simplest variation of the model assumes that constant evaporation is the only mechanism affecting the TF thickness. The nondimensional evaporation rate is $v$. The differential equation for $h$, which expresses conservation of water, is given by
$\dot{h}=-v.$ (2)
A dot indicates a time derivative.
#### 2.2.3 Case O model
We assume constant evaporation and osmosis affect the TF thickness. Figure 2
shows a sketch of the model (on left). Osmolarity is quantified in the
nondimensional variable $c$. Osmosis is written as a concentration difference
between the cornea and the TF; a greater TF osmolarity will drive osmotic flow
from the cornea into the TF. The differential equation is given by
$\dot{h}=-v+P_{c}(c-1),$ (3)
where $P_{c}$ is nondimensional permeability constant. Mass conservation of
solute (osmolarity), namely, $ch=1$, allows us to eliminate $c$braun2014 to
obtain a single ODE for $h$:
$\dot{h}=-v+P_{c}\left[\frac{1}{h}-1\right].$ (4)
Cases E and O model situations where evaporation is the dominant mechanism
affecting TF thinning and where flow is not important.
#### 2.2.4 Case F model
We assume that evaporation, osmosis, and flow affect the thickness of the TF.
Figure 2 shows a sketch of this model (on right). We introduce nondimensional
velocity along the TF, $u(x,t)$. In this first case,
$u(x,t)=u(x)=ax,$ (5)
where $\partial_{x}u=a$ is the strain rate. This flow may be thought of as
stretching taffy if $a>0$. A single curve of Figure 3 illustrates this flow
profile. Mass conservation becomes $ch=e^{-at}$, and the differential equation
for the TF thickness is
$\dot{h}=-v+P_{c}\left[\frac{\exp(-at)}{h}-1\right]-ah.$ (6)
For this and all following models, solute conservation relates the time derivative of the solute mass ($hc$) to the strain rate. For the case F model, this is given by
$\dot{(hc)}=-(\partial_{x}u)\,hc=-ahc.$ (7)
The sign of the strain rate $a$ suggests the kind of flow present. If $a<0$,
the flow is inward, mimicking healing capillary flow. This characterizes
evaporation-dominated thinning. If $a>0$, the flow is outward, mimicking
Marangoni flow, driven by interfacial shear stress. This characterizes
Marangoni effect-dominated thinning.
Figure 2: Schematic for the Case O and F models.
#### 2.2.5 Case D model
This model is designed to mimic the time-dependent flow seen in TBU instances
where flow is dominated by the Marangoni effect. We assume evaporation,
osmosis, and decaying extensional flow affect the TF thickness. Here,
$u(x,t)=b_{1}e^{-b_{2}t}x,$ (8)
where $b_{1}$ and $b_{2}$ are flow rate and decay rate parameters,
respectively. In this case, the strain rate is
$\partial_{x}u=b_{1}e^{-b_{2}t}$. The differential equation for TF thickness
$h$ is
$\dot{h}=-v+P_{c}\left\\{\frac{1}{h}\exp\left[\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right]-1\right\\}-b_{1}e^{-b_{2}t}h,$
(9)
where the first term inside the brackets is the result of using mass
conservation to eliminate $c$.
#### 2.2.6 Case M model
Our most complicated model is an extension of case D that allows the flow to decay to a constant, nonzero value. We assume that evaporation, osmosis, and a
combination of constant and decaying extensional flow affect the TF thickness.
This model allows for the flow to change direction: for example, it may start
outward and strong, but then decay to a weakly inward, constant value. The
nondimensional horizontal fluid profile is given by
$u(x,t)=(a+b_{1}e^{-b_{2}t})x.$ (10)
The exponential term will greatly diminish that part of the flow after
$1/b_{2}$ units of time. The strain rate is
$\partial_{x}u=a+b_{1}e^{-b_{2}t}$. An example of this fluid profile is shown
in Figure 3.
Figure 3: An example of the fluid flow profile $u$ from Equation 10. This
simulation models strong outward tangential flow that dies off and then inward
capillary flow persists. The nondimensional parameters used are
$a=-0.25,b_{1}=1,$ and $b_{2}=2$. A final nondimensional time of 2 was used.
Arrows indicate increasing time.
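For readers who want to reproduce a Figure 3-style picture, the following is a minimal MATLAB sketch (not the authors' plotting code) of the profile in Equation 10, using the nondimensional values quoted in the caption.

```matlab
% Plot u(x,t) = (a + b1*exp(-b2*t))*x at several times; with these
% values the flow starts outward (slope 0.75) and ends inward (-0.25).
a = -0.25; b1 = 1; b2 = 2;
x = linspace(-1, 1, 41);
figure; hold on
for t = linspace(0, 2, 6)
    plot(x, (a + b1*exp(-b2*t))*x)
end
xlabel('x'); ylabel('u(x,t)')
```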
The nondimensional film thickness, $h(t)$, is governed by
$\dot{h}=-v+P_{c}\left\\{\frac{1}{h}\exp\left[-at+\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right]-1\right\\}-(a+b_{1}e^{-b_{2}t})h.$
(11)
Appendix A.3 shows how $c$ can be eliminated in Equation 11 via solute conservation.
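As a concrete illustration, the following minimal MATLAB sketch (not the authors' fitting code) integrates Equation 11 with ode15s, using the nondimensional parameter values quoted later for Figure 4.

```matlab
% Integrate the case M film equation (11); c has been eliminated via
% solute conservation, so only h is evolved. h(0) = 1 (nondimensional).
a = 0.45; b1 = 0.9; b2 = 2.4; v = 0.5; Pc = 0.0653;
hdot = @(t,h) -v + Pc*( exp(-a*t + (b1/b2)*(exp(-b2*t)-1))./h - 1 ) ...
            - (a + b1*exp(-b2*t)).*h;
[t, h] = ode15s(hdot, [0 2], 1);
plot(t, h); xlabel('t'); ylabel('h')
```

Setting $b_{1}=0$, or $a=0$, or both recovers the case F, D, and O right-hand sides, respectively.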
### 2.3 FL intensity
Nondimensional FL intensity $I$ from the TF is given by
$I=I_{0}\frac{1-\exp(-\phi hf)}{(1+f^{2})}.$ (12)
As with the equation for $h$, $f$ can be eliminated, so that the FL intensity $I$ for the Case M model is given by
$I=I_{0}\frac{1-\exp\left\\{-\phi
f_{0}\exp\left[-at+\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right]\right\\}}{1+\left\\{f_{0}\exp\left[-at+\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right]/h\right\\}^{2}}.$
(13)
The expressions for $I$ for the other cases can be obtained by setting
$b_{1}=0$ if there is no time-dependence in the flow, by setting $a=0$ if
there is only time-dependent flow, and by setting $a=b_{1}=0$ if there is no
flow at all.
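For concreteness, the sketch below evaluates Equation 13 on a numerical solution for $h$; it is a minimal example, with the Figure 4 parameter values, $f_{0}=f_{0}^{\prime}/f_{\text{cr}}=1.5$, and $I_{0}=1$ as an arbitrary normalization rather than a fitted value.

```matlab
% Theoretical FL intensity for the case M model, Eq. (13).
a = 0.45; b1 = 0.9; b2 = 2.4; v = 0.5; Pc = 0.0653;
phi = 0.279; f0 = 1.5; I0 = 1;
hdot = @(t,h) -v + Pc*( exp(-a*t + (b1/b2)*(exp(-b2*t)-1))./h - 1 ) ...
            - (a + b1*exp(-b2*t)).*h;
[t, h] = ode15s(hdot, [0 2], 1);
g = exp(-a*t + (b1/b2)*(exp(-b2*t)-1));   % g = h*c, so f = f0*g/h
I = I0*(1 - exp(-phi*f0*g)) ./ (1 + (f0*g./h).^2);
plot(t, I); xlabel('t'); ylabel('I')
```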
### 2.4 Estimating initial physical quantities
We estimate the initial FL concentration following Wu et al. [wu2015]. This value is assumed to be uniform throughout the TF. By inverting the dimensional version of Equation 12 (Equation 32 in Appendix A.3) for $h^{\prime}$, we obtain an initial TF thickness estimate:
$h_{0}^{\prime}=-\frac{1}{\epsilon_{f}f_{0}^{\prime}}\log\left\\{1-\frac{I}{I_{0}}\left[1+\left(\frac{f_{0}^{\prime}}{f_{\text{cr}}}\right)^{2}\right]\right\\}.$
(14)
Model eye calculations [wu2015] determine $I_{0}$ through a least squares problem. The relative initial FL intensity $I$ at the center of the TBU instance, with the minimum FL intensity in the corneal region of interest subtracted off, is used. More details about the procedure can be found in Luke et al. [luke2020].
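The sketch below evaluates Equation 14 for an illustrative case; the intensity values Irel and I0 are hypothetical placeholders, not data from the trials, and the physical constants are taken from Table 5.

```matlab
% Initial-thickness estimate from Eq. (14), with f0' converted from a
% percentage to molar via Eq. (27).
eps_f = 1.75e7;                    % Napierian extinction coeff. (1/(m*M))
f_cr  = 0.0053;                    % critical FL concentration (M)
f0pct = 0.3;                       % initial FL concentration (%)
f0M   = (1000/376)*(f0pct/100);    % percent -> molar; rho/Mw = 1000/376
Irel  = 0.1; I0 = 1.0;             % hypothetical intensity values

h0 = -(1/(eps_f*f0M)) * log(1 - (Irel/I0)*(1 + (f0M/f_cr)^2));
fprintf('h0 = %.2f microns\n', 1e6*h0)   % ~2.8 um, inside the 2-5 um range
```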
### 2.5 Optimization
We follow the process described in Luke et al. [luke2020]; a summary is given below.
#### 2.5.1 Data preparation
We use custom MATLAB codes to convert the images in each trial from RGB color to grayscale, smooth the images with a Gaussian filter, and stabilize the images using the Purkinje image [awisigyau2020], a bright artefact reflecting the light source. We use the same roughly linear or circular FT-TBU instances that were chosen and fit with a PDE model in Luke et al. [luke2021] so that we can compare with the PDE results. We fit our theoretical FL intensity to the central data of a subset of about 6-10 time levels of experimental FL intensity data from the trial. The starting frame is the last frame before the FL intensity data start decreasing. The first few frames of a trial are taken at a lower light setting to obtain an initial FL concentration estimate, and in some trials there is evidence that thinning has begun during this interval. As a result, the first bright image already exhibits a significant decrease in FL intensity in the center of breakup. Luke et al. [luke2021] remedied this issue by introducing "ghost" time levels, allowing the model solution to start from a uniform time level that is not compared to the experimental FL data. This is a consequence of the low time resolution of our data. In this work, we also use ghost times as appropriate. The last frame is the final frame before the FL intensity data stop decreasing.
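A minimal sketch of this preprocessing is shown below; it is an assumed workflow written with standard Image Processing Toolbox calls, not the authors' custom codes, and the folder name and filter width are hypothetical.

```matlab
% Convert each frame to grayscale and smooth with a Gaussian filter.
% Stabilization against the Purkinje image is trial-specific and is
% only indicated here.
frames = dir('trial_images/*.png');          % hypothetical image folder
for k = 1:numel(frames)
    rgb  = imread(fullfile(frames(k).folder, frames(k).name));
    gray = im2double(rgb2gray(rgb));
    sm   = imgaussfilt(gray, 2);             % sigma = 2 px, illustrative
    % ... locate the Purkinje image and shift frames to align it ...
end
```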
#### 2.5.2 Optimization problem
We discuss the optimization for the case M model. Expressed in continuous
variables, we seek to minimize $||I_{\text{th}}(t)-I_{\text{ex}}(t)||_{2}^{2}$
over the parameters $v^{\prime}$, the evaporation rate, $a^{\prime}$, the
constant extensional flow rate, $b_{1}^{\prime}$, the decaying extensional
flow rate, and $b_{2}^{\prime}$, the decay rate. Here, $t$ corresponds to the
time after the light source brightness has been increased to the high setting.
This variable has been nondimensionalized with the scalings given in Appendix
A.2. The norm is over all $t\in[0,T]$ excluding any “ghost” time levels from
the theoretical FL intensity, where $T$ corresponds to the total length of the
trial. The optimization problem may be written
$\operatorname*{argmin}_{v^{\prime},a^{\prime},b_{1}^{\prime},b_{2}^{\prime}}||I_{\text{th}}(t;v^{\prime},a^{\prime},b_{1}^{\prime},b_{2}^{\prime})-I_{\text{ex}}(t)||_{2}^{2}.$
(15)
Theoretical intensity, $I_{\text{th}}$, is computed after solving the
differential equation for film thickness, $h$. Similar optimizations are
conducted for each of the other models.
#### 2.5.3 Numerical method and stopping criterion
The five ODEs for $h$ are solved using ode15s in MATLAB (MathWorks, Natick, MA, USA). For the optimization, we use a nonlinear least squares minimization implemented by lsqnonlin in MATLAB with the trust-region reflective algorithm [nocedal2006], and we supply a second-order finite difference approximation of the Jacobian [leveque2007] to improve performance. To generate initial guesses for the optimization, forward computations were conducted until the theoretical dynamics were close to the experimental ones. For each instance, the solver stopped because the change in the residual was less than the specified tolerance. Optimization tolerances of roughly the square root of the solver tolerances were used.
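The following sketch shows the shape of such a fit for the case M model; it is a minimal example under the Figure 4 scalings, not the authors' code, and the vectors t_ex and I_ex are hypothetical placeholders for the extracted central intensity data. (As a script it requires a MATLAB release that supports local functions in scripts.)

```matlab
% Fit [v a b1 b2] for the case M model by nonlinear least squares.
t_ex = linspace(0, 2, 10).';           % hypothetical data times (nondim.)
I_ex = 0.9*exp(-t_ex);                 % hypothetical intensity data
p0 = [0.5 0.4 0.9 2.4];                % initial guess [v a b1 b2]
lb = [0 -5 -5 1e-3];  ub = [60 5 5 20];    % illustrative bounds
opts = optimoptions('lsqnonlin', 'Algorithm', 'trust-region-reflective');
pfit = lsqnonlin(@(p) Ith(p, t_ex) - I_ex, p0, lb, ub, opts);

function I = Ith(p, t)
% Theoretical case M intensity, Eq. (13), at the data times t.
v = p(1); a = p(2); b1 = p(3); b2 = p(4);
Pc = 0.0653; phi = 0.279; f0 = 1.5; I0 = 1;
hdot = @(s,h) -v + Pc*( exp(-a*s + (b1/b2)*(exp(-b2*s)-1))./h - 1 ) ...
            - (a + b1*exp(-b2*s)).*h;
[~, h] = ode15s(hdot, t, 1);           % h(0) = 1 at t(1) = 0
g = exp(-a*t + (b1/b2)*(exp(-b2*t)-1));
I = I0*(1 - exp(-phi*f0*g)) ./ (1 + (f0*g./h).^2);
end
```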
## 3 Results
### 3.1 Exact solutions
Exact solutions exist for the case E model and for case F with $P_{c}=0$. For
the case E model, using our initial condition, the nondimensional exact
solution is
$h(t)=1-vt.$ (16)
This solution ignores the physical reality that evaporation ceases once the TF
reaches zero thickness; thus, the solution is only relevant for $t\in[0,1/v].$
If we assume that time-independent flow is the only mechanism affecting the TF thickness in case F, then setting $v=P_{c}=0$ in (6) gives
$\dot{h}=-ah.$ (17)
Using our initial condition, we find that
$h(t)=e^{-at}.$ (18)
To model TF thinning, we assume $a>0$ in this instance. Thus, as expected, the TF thins to zero as time increases [howell1996].
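These closed forms give a quick consistency check on the numerics; a minimal sketch with illustrative nondimensional rates:

```matlab
% Compare ode15s solutions with the exact solutions (16) and (18).
v = 0.5; a = 0.45;
[tE, hE] = ode15s(@(t,h) -v + 0*h, [0 1.5], 1);   % case E, valid t < 1/v
[tF, hF] = ode15s(@(t,h) -a*h,     [0 2.0], 1);   % case F with v = Pc = 0
errE = max(abs(hE - (1 - v*tE)))                  % ~ solver tolerance
errF = max(abs(hF - exp(-a*tF)))
```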
### 3.2 Numerical solutions
In Figure 4 we plot nondimensional theoretical solutions for all of the
models. For comparison purposes, we have used the same parameter values for
each of the five models. In particular, both flow parameters are positive,
indicating outward flow. The nondimensional parameters that result from our
scalings are $a=0.45,b_{1}=0.9,b_{2}=2.4,v=0.5,$ and $P_{c}=0.0653$. We see
that the case O solution thins slightly less than the case E solution due to
osmosis, which adds fluid to the TF. For the three models involving flow,
since we have selected both $a,b_{1}>0$, the case M model shows the most
thinning since the outward flow is strongest of all the models. As expected,
the case F and D models are similar early on, but increasingly diverge as the flow decays in the D model. Osmolarity and normalized FL
concentration solutions are identical in the absence of spatial variation.
Both quantities are inversely related to TF thickness and increase at the
origin in the presence of outward flow; the profiles reach the highest peak
value for the case M model, which exhibits the greatest decrease in TF
thickness and the strongest fluid flow.
Figure 4: Nondimensional theoretical solutions for the five cases of the model
with
$v^{\prime}=30\ \mu$m/min, $a^{\prime}=0.15$ /s, $b_{1}^{\prime}=0.3$ /s,
$b_{2}^{\prime}=0.8$ /s, $f_{0}^{\prime}=0.3$%, $d=3\ \mu$m, and $t_{s}=3$ s.
### 3.3 Fitting results
We begin by presenting our aggregate results of fitting the same instances that were fit with the mixed-mechanism PDE model of Luke et al. [luke2021]. The best fit results as determined by the smallest norm are shown in Table 1. Each FT-TBU instance is labeled by subject, visit, and trial number, the location of the breakup as a clock reading, and the type of breakup (streak or spot). Images showing the FT-TBU instances can be found in Section 2.1. A combination of the evaporation rate, constant flow rate, decaying flow rate, and decay rate is adjusted to accomplish the fit. The optimal parameters are given for the case of the model with the smallest norm. Section 3.4 shows examples of the experimental data, fits, and resulting theoretical solutions using the optimal parameters found by nonlinear least squares minimization. The S18v2t4 7:30 spot was originally fit with a single ghost time level in Luke et al. [luke2021] but alternatively fit with two in the supplementary material; we choose to fit with two here as well.
Five of the FT-TBU instances are best fit by the case M model and the other three are best fit by the case D model. Notably, the versions of the model without flow produce theoretical solutions with worse fits in all eight instances; we take this as strong evidence that flow plays a crucial role in causing the TF thinning. This is expected, as each FT-TBU was previously fit with a model that combined evaporation and strong Marangoni flow [luke2021]. A few other fits are worth mentioning: the S27v2t2 5:00 streak is also fit well by the case D model, with the case F model not far behind; the case M model fits the S18v2t4 7:30 spot data well; and the S13v2t10 6:30 streak data also matches reasonably well with the case F model.
Table 1 shows a wide range of evaporation rates. Notably, the S9v2t5 4:30 spot instance, which was categorized as evaporation-dominated in Luke et al. [luke2021], has the highest optimal evaporation rate. In contrast, the S10v1t6 12:30 spot and S27v2t2 5:00 streak are faster instances that were categorized as Marangoni effect-dominated in the aforementioned paper; these cases exhibit the two smallest evaporation rates seen in Table 1. The S18v2t4 7:30 and S10v1t6 12:30 spots have the strongest outward flow of all instances, with initial strain rates close to or over 2 $s^{-1}$. Further, all five Marangoni effect-dominated FT-TBUs from the previous paper exhibit outward flow, the characteristic direction of Marangoni flow. The S9v2t1 3:00 streak and the S9v2t5 4:00 and 4:30 spots are the three instances designated as evaporation-dominated or transitional thinning in the PDE paper; these all show some amount of inward flow, which is characteristic of evaporation-dominated thinning.
Trial | FT-TBU ID | $h_{0}^{\prime}$ ($\mu$m) | $f_{0}^{\prime}$ (%) | $v^{\prime}$ ($\frac{\mu\text{m}}{\text{min}}$) | $a^{\prime}$ ($s^{-1}$) | $b_{1}^{\prime}$ ($s^{-1}$) | $b_{2}^{\prime}$ ($s^{-1}$) | Norm | Model
---
S9v1t4+ | 4:00 — | 3.32 | .324 | 24.1 | .0316 | .418 | 5.75 | .203 | M
S9v2t1 | 3:00 — | 5.01 | .292 | 27.3 | .461 | -.490 | .0715 | .110 | M
S9v2t5 | 4:00 $\circ$ | 2.1 | .299 | 22.4 | .217 | -.417 | .882 | .118 | M
S9v2t5 | 4:30 $\circ$ | 2.33 | .299 | 50.9 | .360 | -.564 | .367 | .192 | M
S10v1t6++ | 12:30 $\circ$ | 3.08 | .293 | 1.27 | | 1.95 | .277 | .0280 | D
S13v2t10+ | 6:30 — | 3.59 | .259 | 26.4 | | .138 | .102 | .121 | D
S18v2t4++ | 7:30 $\circ$ | 2.48 | .363 | 25.2 | | 2.41 | 8.85 | .111 | D
S27v2t2+ | 5:00 — | 1.91 | .4 | 9.32 | .714 | -.368 | .540 | .0271 | M
Table 1: Results from fitting the various ODE models (up to four parameters). The subject (S) number, visit (v) number, and trial (t) number are listed. A $+$ (or $++$) denotes using one (or two) "ghost" first time level(s) in the PDE fit and the corresponding "ghost" time in the ODE fit. The FT-TBU location is a clock reading taken from the center of the pupil. FT-TBU type is denoted by — for a streak and $\circ$ for a spot. The initial TF thickness and FL concentration estimates are given. The optimal parameters are given for the case of the model with the smallest norm. The evaporative thinning rate is given by $v^{\prime}$, the constant extensional flow rate by $a^{\prime}$, and the decaying extensional flow and decay rates by $b_{1}^{\prime}$ and $b_{2}^{\prime}$.
### 3.4 Fitting examples
The S10v1t6 12:30 spot is shown as an example of our fitting procedure in Figure 5. Figure 5c shows the line of data extracted for the PDE fit recorded in Luke et al. [luke2021]; we fit the breakup data at the midpoint of the line with our ODE models. The results for each of the five ODE models are recorded in Table 2. To determine the model reported in Table 1, we compare the 2-norms of the difference between the theoretical and experimental FL intensities and select the case of the model corresponding to the smallest value.
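In code, this selection step amounts to a loop over the candidate cases; the sketch below assumes a hypothetical wrapper fitModel around the lsqnonlin call of Section 2.5.3 that returns the optimal parameters and the residual vector.

```matlab
% Fit each candidate model and keep the one with the smallest 2-norm.
models = {'E','O','F','D','M'};
nrm = zeros(1, numel(models));
for k = 1:numel(models)
    [pfit, r] = fitModel(models{k}, t_ex, I_ex);   % r: residual vector
    nrm(k) = norm(r, 2);
end
[bestNorm, kbest] = min(nrm);
fprintf('best model: case %s (norm %.3g)\n', models{kbest}, bestNorm)
```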
The first six seconds of the trial are obscured by eyelashes and the upper
eyelid. The spot has already started to form and darkens quickly after the
breakup region is revealed around six seconds into the trial (see Figure 5b).
In order to fit the data with our model, we use “ghost” time levels for 0.5
seconds. Figure 5d shows that the experimental FL intensity drops to less than
10% of its initial value.
Figure 5: Extracted data and best fit results for the S10v1t6 12:30 spot. In (c) the image has been brightened and contrast-enhanced. Case (c) evaporation (see Luke et al. [luke2021]) was used in the PDE fit.

Model | $v^{\prime}$ ($\frac{\mu\text{m}}{\text{min}}$) | $a^{\prime}$ ($s^{-1}$) | $b_{1}^{\prime}$ ($s^{-1}$) | $b_{2}^{\prime}$ ($s^{-1}$) | Residual | Norm
---
Evap only (E) | 120 | | | | 3.88 $\times 10^{-2}$ | 1.97 $\times 10^{-1}$
Evap + osm (O) | 122 | | | | 3.47 $\times 10^{-2}$ | 1.86 $\times 10^{-1}$
Evap, osm, flow (F) | 0.00 | 1.74 | | | 2.0 $\times 10^{-3}$ | 4.50 $\times 10^{-2}$
Evap, osm, dec. flow (D) | 1.27 | | 1.95 | 0.277 | 7.86 $\times 10^{-4}$ | 2.80 $\times 10^{-2}$
Evap, osm, mixed flow (M) | 4.91 | 0.656 | 1.19 | 0.423 | 6.10 $\times 10^{-4}$ | 4.04 $\times 10^{-2}$
Mixed-mech PDE center | 5.92 | | | | | 5.58 $\times 10^{-2}$
Table 2: The S10v1t6 12:30 center-of-spot data fit with the five cases of the model. The central data of the best PDE fit is shown for comparison.
In Table 2, the two ODE models without flow select unrealistic evaporation rates in an attempt to match the rapid thinning of the S10v1t6 12:30 spot. On average, the evaporation rates chosen by the ODE models with flow are among the smallest optimal values for all mixed-mechanism fit instances. This is likely due to the relatively large flow rate parameters: unlike any other trial, the initial flow value $b_{1}^{\prime}$ or $a^{\prime}$ is above 1 $s^{-1}$ for each of the three models that involve flow. We take this as strong evidence that the Marangoni effect is the dominant mechanism causing the thinning. Further evidence for this statement is the fact that the case F model selected zero evaporation. This instance was fit well with a Marangoni effect-only PDE model that ignored evaporation, which is consistent with our small or zero optimal evaporation rates. The case D model produces the smallest residual. This FT-TBU instance exhibits the largest drop in FL intensity of all eight; the decaying model gives the best fit because the TF has likely thinned to almost zero thickness, allowing little flow, if any.
The S27v2t2 5:00 streak data and fits are shown in Figure 6. As in Luke et al. [luke2021], we use a quarter second of "ghost" time at the start of the fit. This instance is of particular interest because the center of the mixed-mechanism PDE theoretical FL intensity does not capture the dynamics of the experimental data well. In Luke et al. [luke2021], the S27v2t2 5:00 streak was categorized as Marangoni effect-dominated due to the large Marangoni number and outward flow of the best fit. The best fit case M model selects outward flow; however, the close second-best (D) and third-best (F) cases select inward flow. These latter two models also select a significantly larger evaporation rate than the others. The S27v2t2 5:00 streak was also fit with an evaporation-only model [braun2018] in Luke et al. [luke2021]. That fit (E PDE) is shown along with the mixed-mechanism fit (MM PDE) and the ODE results in Figure 6b and outperforms the mixed-mechanism PDE fit. Further, the optimal peak evaporation rate for the evaporation-only PDE fit is 35.3 $\mu$m/min, which is large but plausible. This suggests that evaporation may play a larger role in this instance than previously thought.
Figure 6: Extracted data and best fit results for the S27v2t2 5:00 streak.
Uniform evaporation was used in the mixed-mechanism PDE fit.
In Figure 7a we show the S9v2t5 4:00 spot data and in Figure 7b the fits. We have plotted the central data from both the best-fit mixed-mechanism (MM PDE) and evaporation-only (E PDE) models for comparison because this instance is also fit well by the latter model and was categorized as evaporation-dominated in Luke et al. [luke2021]. All three ODE models with flow select some amount of inward flow, which aligns well with the PDE model, whose flow profile changes sign at the origin as time progresses (see Table 4). In all cases, the flow is of a significantly smaller magnitude than in the Marangoni effect-dominated or transitional thinning instances. The case M model gives the smallest residual. Notably, the evaporation-only fits for this instance give residuals closer to the best fit model than in other instances; this suggests that evaporation is the dominant mechanism causing the thinning.
Figure 7: Extracted data and best fit results for the S9v2t5 4:00 spot data. Case (c) evaporation (see Luke et al. [luke2021]) was used in the mixed-mechanism PDE fit.
## 4 Discussion
The quantities recorded in Table 1 show more variation than the PDE results in
some cases, but the qualitative similarities in the solutions are an important
takeaway. For each TBU instance, the best fit ODE model includes time-dependent flow. This is strong evidence that evaporation alone cannot explain the thinning and that the Marangoni effect played a role, since the latter is characterized by non-constant thinning.
In Figure 8 we show the various time derivatives $\dot{h}^{\prime}$ computed from the optimal values of the ODE models, as well as the optimal $\partial_{t^{\prime}}h^{\prime}$ measured at the origin, for the three examples of mixed-mechanism fitting shown in Section 3.4. The average starting two seconds into the trial, or the value at the final time point for shorter trials, is recorded in Table 3. This delay in averaging matches the approach in Luke et al. [luke2021] and mimics experimental procedures [nichols2005]. These values are shown along with the optimal evaporation rates for comparison. The S10v1t6 12:30 spot, which showed the most rapid thinning when fit with the mixed-mechanism PDE model, shows dynamic rates of thinning for a large portion of the trial for many of the ODE fits in Figure 8a. The case (D) and (M) ODE model $\dot{h}^{\prime}$ values are very close, which we expect since they gave similar residuals when fit to the data. The S9v2t5 4:00 spot shows non-constant dynamics near the end of the trial in Figure 8c, and we see further qualitative and quantitative agreement between the mixed-mechanism and evaporation-only PDE results. The case (D) and (M) models for this instance also show $\dot{h}^{\prime}>0$ in the first quarter second; the slope of the theoretical TF thickness solution is in fact slightly positive early on. This is likely an attempt by the optimization to fit the concave-down portion of the data in the first second or so. In general, the PDE models produce $\partial_{t^{\prime}}h^{\prime}$ values in the first quarter second that are much larger in magnitude than the corresponding ODE numbers.
Figure 8: $\dot{h}^{\prime}$ from the five ODE models are plotted alongside
$\partial_{t^{\prime}}h^{\prime}$ from the PDE model. The time point at which
averaging will begin is shown as a dashed vertical line.
In Table 3 we record the optimal evaporation rate and an average thinning rate (listed as $v^{\prime}$ / $\dot{h}^{\prime}$ in each cell) for the PDE fit and each of the five ODE fits. The average thinning rate is taken starting two seconds into the trial or, if the trial is two seconds or less, the value at the final time is recorded. The values corresponding to the best fit by an ODE model, as determined by the smallest residual, are shaded. In all instances, the overall thinning rate of the best ODE fit is larger than that of the PDE. This may reflect the tendency of the theoretical PDE FL intensity to lag the experimental FL intensity at later times. In most instances, the best fit overall thinning rate is larger than the evaporation rate, indicative of outward flow, which supports the notion that the Marangoni effect contributed to the thinning. Notably, for the S9v2t5 4:30 spot, which was categorized as evaporation-dominated in Luke et al. [luke2021], the evaporation rate is larger than the thinning rate, suggesting inward capillary flow combats the thinning. Some short trials exhibit rapid dynamics early on; the recorded thinning rate may not represent the entirety of the trial.
Trial | FT-TBU ID | PDE $v^{\prime}$ / $\partial_{t^{\prime}}h^{\prime}$ | $v^{\prime}_{E}$ / $\dot{h}^{\prime}$ | $v^{\prime}_{O}$ / $\dot{h}^{\prime}$ | $v^{\prime}_{F}$ / $\dot{h}^{\prime}$ | $v^{\prime}_{D}$ / $\dot{h}^{\prime}$ | $v^{\prime}_{M}$ / $\dot{h}^{\prime}$
---
S9v1t4 | 4:00 — | -6.26 / -15.8 | -28.7 / -28.7 | -30.2 / -26.4 | -13.5 / -26.3 | -16.4 / -26.0 | -24.1 / -24.0
S9v2t1 | 3:00 — | -30.3 / -21.2 | -30.4 / -30.4 | -31.9 / -28.5 | -29.1 / -29.2 | -38.6 / -20.6 | -27.3 / -38.1
S9v2t5 | 4:00 $\circ$ | -26.2 / -18.2 | -22.2 / -22.2 | -23.4 / -20.1 | -27.7 / -20.2 | -37.5 / -25.1 | -22.4 / -30.0
S9v2t5 | 4:30 $\circ$ | -36.9 / -23.4 | -42.8 / -42.8 | -44.6 / -35.7 | -50.0 / -36.5 | -48.7 / -38.2 | -50.9 / -44.9
S10v1t6 | 12:30 $\circ$ | -5.92 / -7.60 | -120 / -120 | -122 / -0.0131 | 0 / -9.92 | -1.27 / -10.4 | -4.91 / -11.5
S13v2t10 | 6:30 — | -13.6 / -21.8 | -37.1 / -37.1 | -38.9 / -33.6 | -21.5 / -31.7 | -26.4 / -31.1 | -20.4 / -30.8
S18v2t4 | 7:30 $\circ$ | -13.1 / -16.3 | -37.1 / -37.1 | -39.0 / -32.4 | -0.0011 / -18.9 | -25.2 / 21.0 | -20.3 / -20.5
S27v2t2 | 5:00 — | -6.11 / -16.0 | -31.1 / -31.1 | -32.0 / -28.6 | -43.6 / -23.9 | -47.8 / -38.8 | -9.32 / -30.9
Table 3: The optimal evaporation rates are recorded along with estimates of
average $\dot{h^{\prime}}$ for the mixed-mechanism model fits (starting two
seconds into the trial). All rates are measured in $\mu$m/min. PDE values are
given by $v^{\prime}$ and $\partial_{t^{\prime}}h^{\prime}$; the rest are from
the various ODE models. The value at the last time was used for trials less
than two seconds in length. The five cases of ODE models are in order and
denoted by subscripts on the evaporation value: $v^{\prime}_{E}$,
$v_{O}^{\prime}$, $v_{F}^{\prime}$, $v_{D}^{\prime}$ and $v_{M}^{\prime}$. The
shaded entries correspond to the model giving the best fit as determined by
the smallest norm.
We plot $\partial_{r^{\prime}}\bar{u}^{\prime}$ or
$\partial_{x^{\prime}}\bar{u}^{\prime}$ for the three examples from Section
3.4 in Figure 9 along with $\partial_{x^{\prime}}u^{\prime}$ from the relevant
ODE models. In Figures 9b,c we also plot the evaporation-only PDE profiles. At
least one ODE model does a decent job approximating the qualitative behavior
of the PDE flow profile after the first quarter second or so. A notable
exception is the constant extensional flow option for the S10v1t6 12:30 spot;
this suggests the flow profile is highly time-dependent. An average value for
each model and instance is taken over the whole trial and recorded in Table 4.
Figure 9: The mixed-mechanism $\partial_{r^{\prime}}\bar{u}^{\prime}$ or
$\partial_{x^{\prime}}\bar{u}^{\prime}$ are plotted against the strain rate
$\partial_{x^{\prime}}u^{\prime}$ for each ODE model with flow.
In Table 4 we record the flow profile data for each instance. The shaded entries correspond to the best fit. From left to right, the parameters correspond to the coefficients of the flow terms from the case F, D, and M models, respectively. For most cases, the optimal flow directions match the PDE results, an indicator that they were correctly classified in Luke et al. [luke2021]. The three instances classified as evaporation-dominated or transitional thinning in Luke et al. [luke2021] show some amount of inward flow in both the PDE and ODE best fits, which is consistent since evaporation-dominated thinning is characterized by inward flow. In the cases of the S9v1t4 4:00 streak, S10v1t6 12:30 spot, S13v2t10 6:30 streak, and S18v2t4 7:30 spot, the ODE strain rates are always positive, which matches the signs of the average strain rates of their corresponding PDE fits. This is evidence that outward tangential flow is important for explaining these instances, and so the thinning is likely influenced by the Marangoni effect. In the case of the S27v2t2 5:00 streak, the final strain rate signs of the best fit ODE and mixed-mechanism PDE models do not match. In this instance, the evaporation-only and mixed-mechanism PDEs show qualitative differences, and the evaporation-only PDE gives a better fit to the central data than the mixed-mechanism version. Perhaps the breakup dynamics of the streak in fact include inward flow; this possibility is supported by the discussion of the S27v2t2 5:00 streak in Section 3.4.
Trial | FT-TBU ID | $\dot{\gamma}^{\prime}$ | $a^{\prime}_{F}$ | $b_{1D}^{\prime}$ | $b_{2D}^{\prime}$ | $a_{M}^{\prime}$ | $b_{1M}^{\prime}$ | $b_{2M}^{\prime}$
---
S9v1t4 | 4:00 — | .150 | .189 | .164 | .0421 | .0316 | .418 | 5.75
S9v2t1 | 3:00 — | -.0218 | .0199 | -.0894 | .385 | .461 | -.490 | .0715
S9v2t5 | 4:00 $\circ$ | -.0427 | -.0631 | -.312 | .517 | .217 | -.417 | .882
S9v2t5 | 4:30 $\circ$ | -.0786 | -.0757 | -.963 | .708 | .360 | -.564 | .367
S10v1t6 | 12:30 $\circ$ | .572 | 1.74 | 1.95 | .277 | .656 | 1.19 | .423
S13v2t10 | 6:30 — | .147 | .172 | .138 | .102 | .173 | .0257 | .812
S18v2t4 | 7:30 $\circ$ | .172 | .674 | 2.41 | 8.85 | .0733 | 3.69 | 12.8
S27v2t2 | 5:00 — | .343 | -.201 | -.282 | .0761 | .714 | -.368 | .540
Table 4: Estimates of the extensional rate $\dot{\gamma}^{\prime}$, which is
either $\partial_{r^{\prime}}\bar{u}^{\prime}$ for spots or
$\partial_{x^{\prime}}\bar{u}^{\prime}$ for streaks, at the origin taken over
the entire trial length in ($s^{-1}$) for the mixed-mechanism model fits.
These are compared with the optimal values from the three ODE models with
flow. The shaded entries correspond to the model giving the best fit as
determined by the smallest norm.
In general, the ODE models do a good job of capturing the essence of the dynamics of the PDE model fits. Each instance that we expect to have some outward flow has at least one positive flow parameter, and vice versa for the inward flow instances (see Table 4). The ODE models can be somewhat more finely tuned, since the PDE fit is conducted over space as well; as such, we might expect the central PDE data shown for comparison not to match the data quite as closely. The PDE fits often struggle to keep up with the experimental data at later times; this is reflected in the (generally) slower average thinning rate $\partial_{t^{\prime}}h^{\prime}$ as compared to the ODE values $\dot{h}^{\prime}$ (see Table 3). However, the ODE data and fit are a simplification of the overall breakup dynamics, and viewing temporospatial data has value on its own.
Figure 10: Histograms of rates of change plotted against experimental point measurements from Nichols et al. [nichols2005]; note that the experiment cannot distinguish between $v^{\prime}$ and $\dot{h}^{\prime}$. The best fit ODE model data is shown as determined by the smallest norm.
Figure 10 compares the PDE and ODE best fit evaporation and thinning rate results to experimental point measurements reported in Nichols et al. [nichols2005]. For both histograms, a bin size of 5 $\mu$m/min was used. While the ODE rates show a wider range than the PDE results, there is significant overlap. The overall thinning rate is more comparable to the Nichols et al. [nichols2005] data, since that study could not separate evaporation from the other mechanisms affecting TF thickness. We expect our thinning rates to be larger than the point measurements since the Nichols et al. [nichols2005] study did not target breakup. While some of the ODE data lie outside the experimental distribution, many ODE thinning rate values are comparable, suggesting that these simplified models return physically relevant quantities that cannot otherwise be estimated.
Figure 11: Histograms of maximum osmolarity and minimum TF thickness (final
times of fit).
In Figure 11a we compare the maximum osmolarity values of the PDE and best fit ODE models. A bin size of 50 mOsmol/L was used. Both the PDE and ODE peak osmolarity estimates are reasonable compared to other experimental and theoretical values [liu09, peng2014]. The ODE results show greater variation and exhibit larger maximal values on average. Peng et al. [peng2014b] and Braun et al. [braun2018] showed that diffusion reduces the peak osmolarity in TBU, an effect that is only relevant for a model with spatial variation. As such, the lower PDE maximum osmolarity values are expected. Further, Braun et al. [braun2015] reported theoretical osmolarity values up to ten times the isotonic level, and so our largest osmolarity value, which is just over five times isotonic, is not unreasonable.
Figure 11b records the minimum thicknesses of the PDE and best ODE fits. A bin size of 0.2 $\mu$m was used. There is significant overlap between the PDE and ODE model results, suggesting that the simplified version can capture the end behavior of TF dynamics with a sufficient level of accuracy. The minimum TF thickness values are larger on average for the PDE models; this may be explained by the lag of the theoretical FL intensity behind the experimental data at later times in many of the PDE fits [luke2020, luke2021].
Overall, there is more variability in the ODE results than in the PDE results. We may overfit the subtleties of the dynamics with four parameters in the case (M) model, especially when the few data points of the central dynamics are essentially linear. Further, in some instances, the dynamics of the case (D) and (M) models are nearly indistinguishable, suggesting that the additional parameter in the (M) model that mimics steady outward flow may not be necessary. The PDE data, which combine spatial and temporal information, are fit with only three parameters, reducing the likelihood of overfitting. Slight differences in the experimental data that are likely due to noise can affect the optimal parameters. Better time resolution would reduce the influence of outlier time levels on the optimization.
In order to compare with the PDE results, we scale with a characteristic
length and horizontal velocity whose meanings are less clear in the context of
the ODE model. The time scale we use is more of a characteristic time to bend
the curve rather than the time to an overall decrease in FL intensity. We show
the PDE results only for comparison; this ODE fitting process can be used on
many other instances.
## 5 Conclusions and future perspectives
We fit the same data as in Luke et al. [luke2021] with simplified models to validate our ODE fitting procedure and find good qualitative agreement between the PDE and ODE results in most instances. The ODE fitting procedure is relatively quick and returns important information about the TBU instance, including parameters that cannot currently be measured directly in vivo.
We are working on a machine learning approach to automatically identify
breakup instances and fit the central data with our ODE models. This strategy
could be applied on a large scale to obtain statistical information about a
wide range of TBU shapes.
## Acknowledgements
This work was supported by National Science Foundation grant DMS 1909846. The
content is solely the responsibility of the authors and does not necessarily
represent the official views of the funding sources.
## Appendix A Appendix
### A.1 Governing dimensional equations
We derive the case (M) model. We use Cartesian coordinates
$(x^{\prime},z^{\prime})$ to denote the position and
$\vec{u}^{\prime}=(u^{\prime},v^{\prime})$ to denote the fluid velocity.
Primes denote dimensional quantities. The TF is modeled as an incompressible
Newtonian fluid on $0<x^{\prime}<X_{0}$ and
$0<z^{\prime}<h^{\prime}(x^{\prime},t^{\prime})$, where
$h^{\prime}(x^{\prime},t^{\prime})$ denotes the thickness of the film.
Conservation of mass of TF fluid is given by
$\nabla^{\prime}\cdot\vec{u}^{\prime}=0.$ (19)
At the film/cornea interface $z^{\prime}=0$, we require osmosis across a
perfect semipermeable membrane:
$u^{\prime}=0,\quad v^{\prime}=P_{o}V_{w}(c^{\prime}-c_{0}),$ (20)
where $c^{\prime}$ is the osmolarity. The membrane permeability is given by
$P_{o}$, the molar volume of water is $V_{w}$, and $c_{0}$ is the isotonic
osmolarity. The kinematic condition at the fluid/air interface is given by
$\partial_{t^{\prime}}h^{\prime}=v^{\prime}|_{z^{\prime}=h^{\prime}}-u^{\prime}|_{z^{\prime}=h^{\prime}}\partial_{x^{\prime}}h^{\prime}-J^{\prime},$
(21)
where $J^{\prime}$ is evaporation. Since we have assumed the film is spatially
uniform, we have $\partial_{x^{\prime}}h^{\prime}=0$, and thus
$\dot{h}^{\prime}=v^{\prime}|_{z^{\prime}=h^{\prime}}-J^{\prime}.$ (22)
The dot indicates an ordinary derivative in time. We assume a combination of
constant and decaying extensional flow:
$u^{\prime}=(a^{\prime}+b_{1}^{\prime}e^{-b_{2}^{\prime}t^{\prime}})x^{\prime},$
(23)
where $a^{\prime}$ is a constant flow rate, and $b_{1}^{\prime}$ and
$b_{2}^{\prime}$ are a flow and decay rate, respectively.
### A.2 Scalings
The governing equations can be nondimensionalized using the following
scalings:
$c^{\prime}=c_{0}c,\quad\quad h^{\prime}=dh,\quad\quad
t^{\prime}=(\ell/U)t,\quad\quad u^{\prime}=Uu,\quad\quad
f^{\prime}=f_{\text{cr}}f,\quad\quad J^{\prime}=\rho UJ.$ (24)
The following scalings are a result of the nondimensionalization process:
$v^{\prime}=\epsilon Uv,\quad\quad a^{\prime}=(U/\ell)a,\quad\quad
b_{1}^{\prime}=(U/\ell)b_{1},\quad\quad b_{2}^{\prime}=(U/\ell)b_{2}.$ (25)
Dimensional quantities are denoted by primes. We follow the same scalings used
in Luke et al.luke2021 to compare with their results. We take the length of
the trial as $t_{s}$ and then $U=\ell/t_{s}$, where
$\ell=\left(\frac{t_{s}\sigma_{0}d^{3}}{\mu}\right)^{1/4}.$ (26)
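As a worked example of these scalings (with the Table 5 values, an illustrative $t_{s}=3$ s, and $d=3$ $\mu$m):

```matlab
% Characteristic length and velocity from Eq. (26).
sigma0 = 0.045; mu = 1.3e-3; d = 3e-6; ts = 3;
ell    = (ts*sigma0*d^3/mu)^(1/4);  % ~2.3e-4 m, inside 0.138-0.412 mm
U      = ell/ts;                    % ~7.7e-5 m/s, inside 0.0560-0.0990 mm/s
aspect = d/ell;                     % ~0.0130, the aspect ratio of Table 6
```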
Dimensional parameters used in the model are summarized in Table 5.
Dimensional parameters
---
Parameter | Description | Value | Reference
$\rho$ | Density | $10^{3}$ kg $\cdot$ m${}^{-3}$ | Water
$d$ | Initial TF thickness | $2-5\times 10^{-6}$ m | Calculated [luke2021]
$f_{0}^{\prime}$ | Init. FL concentration | $0.259-0.4$ % | Calculated [luke2021]
$v^{\prime}$ | Evaporative thinning rate | $0.5-25$ $\mu$m/min | Nichols et al. [nichols2005]
$a^{\prime}$ | Constant extensional flow rate | $-0.201-1.74$ $s^{-1}$ | Calculated
$b_{1}^{\prime}$ | Decaying extensional flow rate | $-0.564-3.69$ $s^{-1}$ | Calculated
$b_{2}^{\prime}$ | Decay rate | $0.0421-12.8$ $s^{-1}$ | Calculated
$V_{w}$ | Molar volume of water | $1.8\times 10^{-5}$ m${}^{3}\cdot$ mol${}^{-1}$ | Water
$c_{0}$ | Isotonic osmolarity | $300$ mOsmol/L | Lemp et al. [lemp2011]
$P_{o}$ | Permeability of cornea | $12.1\times 10^{-6}$ m/s | Braun et al. [braun2015]
$\epsilon_{f}$ | Napierian extinction coefficient | $1.75\times 10^{7}$ m${}^{-1}$ M${}^{-1}$ | Mota et al. [mota1991]
$\mu$ | Viscosity | $1.3\times 10^{-3}$ Pa $\cdot$ s | Tiffany [tiffany1991]
$\sigma_{0}$ | Surface tension | $0.045$ N $\cdot$ m${}^{-1}$ | Nagyová & Tiffany [nagyova1999]
$\ell$ | Characteristic length | $0.138-0.412$ mm | Calculated
$U$ | Characteristic velocity | $0.0560-0.0990$ mm/s | Calculated
$t_{s}$ | Time scale | $1.75-6.6$ s | Fit interval [luke2021]
Table 5: The dimensional parameters used are shown. The range of estimates for the thinning rates is from point measurements; this range includes the values given by our optimization.

Non-dimensional parameters with typical values
---
Parameter | Description | Expression | Value
$\epsilon$ | Aspect ratio | $d/\ell$ | 0.0130
$P_{c}$ | Permeability of cornea | $\displaystyle P_{o}V_{w}c_{0}/(\epsilon U)$ | 0.0653
$\phi$ | Nondimensional Napierian extinction coefficient | $\displaystyle\epsilon_{f}f_{\text{cr}}d$ | 0.279
Table 6: Dimensionless parameters that arise from scaling the dimensional
fluid mechanics problem. The values given are based upon the values of Table 5
and those used to generate Figure 4.
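As a consistency check, the dimensionless groups of Table 6 can be recomputed from the dimensional entries of Table 5. The sketch below does so; the specific picks for $d$, $\ell$, and $U$ are assumptions within the quoted ranges.

```python
# Sketch: dimensionless parameters of Table 6 from the Table 5 entries.
# The picks for d, ell, and U are assumptions within the quoted ranges.
P_o   = 12.1e-6   # corneal permeability, m/s
V_w   = 1.8e-5    # molar volume of water, m^3/mol
c_0   = 300.0     # isotonic osmolarity, osmol/m^3 (= 300 mOsmol/L)
eps_f = 1.75e7    # Napierian extinction coefficient, 1/(m M)
f_cr  = 0.0053    # critical FL concentration, M
d     = 3.0e-6    # initial TF thickness, m
ell   = 2.3e-4    # characteristic length, m (Equation 26)
U     = 7.7e-5    # characteristic velocity, m/s

eps = d / ell                       # aspect ratio           -> ~0.0130
P_c = P_o * V_w * c_0 / (eps * U)   # permeability of cornea -> ~0.065
phi = eps_f * f_cr * d              # extinction coefficient -> ~0.28
print(eps, P_c, phi)
```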
FL concentration is typically reported as a percentage in the ocular
literature. For a particular FL concentration $f^{\prime}$ given as a
percentage, this quantity is converted to a molar value $f_{M}^{\prime}$ by
$f_{M}^{\prime}=\frac{\rho}{M_{w}}\frac{f^{\prime}}{100},$ (27)
where $\rho$ is the density of water (Table 5) and $M_{w}$ is the molecular
weight of sodium fluorescein (approximately 376 g/mol). The critical FL
concentration $f_{\text{cr}}$, 0.2%, makes a 0.0053 M solution when dissolved
in water. This conversion of $f_{\text{cr}}$ to molar is necessary to compute
the dimensionless Napierian extinction coefficient $\phi$ (Table 6).
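A minimal sketch of the conversion in Equation (27), reproducing the 0.0053 M value quoted above:

```python
# Sketch: percent-to-molar conversion of Equation (27).
rho = 1.0e3   # density of water in g/L (equivalent to 10^3 kg/m^3)
M_w = 376.0   # molecular weight of sodium fluorescein, g/mol

def percent_to_molar(f_percent):
    """Convert an FL concentration in percent to molar units."""
    return (rho / M_w) * (f_percent / 100.0)

print(percent_to_molar(0.2))  # -> ~0.0053 M for f'_cr = 0.2%
```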
### A.3 Derivation of TF equations
Using the scalings in Equations (24) and (25), we nondimensionalize the governing equations. From
the nondimensional version of Equation 23, we have that
$u_{x}=a+b_{1}e^{-b_{2}t}.$ In Cartesian coordinates, conservation of mass is
given by $u_{x}+v_{z}=0$. Integrating this equation over the vertical domain,
we have
$0=\int_{0}^{h}\left[\partial_{x}u+\partial_{z}v\right]dz=(a+b_{1}e^{-b_{2}t})h+\partial_{t}h+J-P_{c}(c-1),$
where we have used the independence of $u_{x}$ from $z$, the kinematic
condition (22) to evaluate $v|_{z=h}=\dot{h}+J$, and the nondimensional osmosis
condition $v|_{z=0}=P_{c}(c-1)$. Rewriting this result
as a differential equation for $h$, we have
$\dot{h}=-(a+b_{1}e^{-b_{2}t})h-J+P_{c}(c-1).$ (28)
Our transport equation for $c$ is (without any spatial terms):
$h\dot{c}=Jc-P_{c}(c-1)c.$ (29)
Multiplying Eq. (28) by $c$ and adding the result to Eq. (29), we have an ODE
for the product $hc$:
$\dot{(hc)}=-(a+b_{1}e^{-b_{2}t})hc.$
Separating and integrating gives
$hc=A\exp\left(-at+\frac{b_{1}}{b_{2}}e^{-b_{2}t}\right),$ (30)
where $A$ is an arbitrary constant of integration. Using $h(0)=c(0)=1$ and
$f(0)=f_{0}$ nondimensionally, we solve for $c$ and $f$:
$c=\frac{1}{h}\exp\left[-at+\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right],\quad f=\frac{f_{0}}{h}\exp\left[-at+\frac{b_{1}}{b_{2}}\left(e^{-b_{2}t}-1\right)\right].$ (31)
Substituting these expressions into our ODE for $h$ gives the equations shown in the text.
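For reference, the reduced system can be integrated numerically. The sketch below solves Equation (28) with $c$ eliminated via Equation (31); the nondimensional parameter values, including a constant evaporation rate $J$, are illustrative assumptions rather than the fitted values of the text.

```python
# Sketch: integrate Equation (28) using the closed-form c(t) of Equation (31).
# Parameter values are illustrative assumptions, not fitted values.
import numpy as np
from scipy.integrate import solve_ivp

a, b1, b2 = 0.5, 1.0, 2.0   # nondimensional flow and decay rates (assumed)
J, P_c    = 1.0, 0.0653     # constant evaporation (assumed) and permeability
f0        = 1.0             # initial FL concentration in units of f_cr

def c_of(t, h):
    """Osmolarity from Equation (31)."""
    return np.exp(-a*t + (b1/b2)*(np.exp(-b2*t) - 1.0)) / h

def dhdt(t, y):
    h = y[0]
    return [-(a + b1*np.exp(-b2*t))*h - J + P_c*(c_of(t, h) - 1.0)]

sol = solve_ivp(dhdt, (0.0, 1.0), [1.0], dense_output=True, rtol=1e-8)
t = np.linspace(0.0, 1.0, 101)
h = sol.sol(t)[0]
f = f0 * c_of(t, h)   # f and c share the same form in Equation (31)
```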
The equation for fluorescent intensity is
$I=I_{0}\frac{1-e^{-\epsilon_{f}h^{\prime}f^{\prime}}}{\left(f^{\prime}_{\text{cr}}\right)^{2}+(f^{\prime})^{2}},$ (32)
where $\epsilon_{f}$ is the molar extinction coefficient,
$f_{\text{cr}}^{\prime}$ is the critical fluorescein concentration, and
$I_{0}$ is a normalization factor. The nondimensional version with $f$
eliminated is given in the text.
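A sketch of the dimensional intensity of Equation (32), with $f^{\prime}$ supplied in molar units; the example values are illustrative assumptions.

```python
# Sketch: dimensional fluorescent intensity of Equation (32).
# f_prime must be in molar units; the example values are assumptions.
import numpy as np

def intensity(h_prime, f_prime, I0=1.0, eps_f=1.75e7, f_cr=0.0053):
    """I = I0 * (1 - exp(-eps_f h' f')) / (f_cr^2 + f'^2)."""
    return I0 * (1.0 - np.exp(-eps_f*h_prime*f_prime)) / (f_cr**2 + f_prime**2)

print(intensity(3.0e-6, 2.0 * 0.0053))  # a 3 micron film at twice f_cr
```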
# On The Impact Of 22Ne On The Pulsation Periods Of Carbon-Oxygen White Dwarfs
With Helium Dominated Atmospheres
Morgan T. Chidester School of Earth and Space Exploration, Arizona State
University, Tempe, AZ 85287, USA Joint Institute for Nuclear Astrophysics -
Center for the Evolution of the Elements, USA F.X. Timmes School of Earth and
Space Exploration, Arizona State University, Tempe, AZ 85287, USA Joint
Institute for Nuclear Astrophysics - Center for the Evolution of the Elements,
USA Josiah Schwab Department of Astronomy and Astrophysics, University of
California, Santa Cruz, CA 95064, USA Richard H. D. Townsend Department of
Astronomy, University of Wisconsin-Madison, Madison, WI 53706, USA Ebraheem
Farag School of Earth and Space Exploration, Arizona State University, Tempe,
AZ 85287, USA Joint Institute for Nuclear Astrophysics - Center for the
Evolution of the Elements, USA Anne Thoul Space sciences, Technologies and
Astrophysics Research (STAR) Institute, Université de Liège, Allée du 6
Août 19C, Bat. B5C, 4000 Liège, Belgium C. E. Fields Department of
Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA
Joint Institute for Nuclear Astrophysics - Center for the Evolution of the
Elements, USA Evan B. Bauer Center for Astrophysics $|$ Harvard &
Smithsonian, 60 Garden St Cambridge, MA 02138, USA Department of Physics,
University of California, Santa Barbara, CA 93106, USA Michael H. Montgomery
Department of Astronomy and McDonald Observatory, University of Texas, Austin,
TX 78712, USA Morgan T. Chidester<EMAIL_ADDRESS>
###### Abstract
We explore changes in the adiabatic low-order g-mode pulsation periods of
0.526, 0.560, and 0.729 $\mathrm{M}_{\odot}$ carbon-oxygen white dwarf models
with helium-dominated envelopes due to the presence, absence, and enhancement
of $\mathrm{{}^{22}Ne}$ in the interior. The observed g-mode pulsation periods
of such white dwarfs are typically given to 6$-$7 significant figures of
precision. Usually white dwarf models without $\mathrm{{}^{22}Ne}$ are fit to
the observed periods and other properties. The root-mean-square residuals to
the $\simeq$ 150$-$400 s low-order g-mode periods are typically in the range
of $\sigma_{\rm rms}$ $\lesssim$ 0.3 s, for a fit precision of $\sigma_{\rm
rms}/P$ $\lesssim$ 0.3%. We find average relative period shifts of $\Delta
P/P$ $\simeq$ $\pm$ 0.5% for the low-order dipole and quadrupole g-mode
pulsations within the observed effective temperature window, with the range of
$\Delta P/P$ depending on the specific g-mode, abundance of
$\mathrm{{}^{22}Ne}$, effective temperature, and mass of the white dwarf
model. This finding suggests a systematic offset may be present in the fitting
process of specific white dwarfs when $\mathrm{{}^{22}Ne}$ is absent. Since
part of the fitting process involves adjusting the composition profiles of a
white dwarf model, our study on the impact of $\mathrm{{}^{22}Ne}$ can provide
new inferences on the derived interior mass fraction profiles. We encourage
routinely including $\mathrm{{}^{22}Ne}$ mass fraction profiles, informed by
stellar evolution models, to future generations of white dwarf model fitting
processes.
Stellar physics (1621); Stellar evolution (1599); Stellar pulsations (1625);
White dwarf stars (1799); Non-radial pulsations (1117)
Software: MESA (Paxton et al., 2011, 2013, 2015, 2018, 2019,
http://mesa.sourceforge.net), MESASDK 20190830 (Townsend, 2019a, b),
wd_builder https://github.com/jschwab/wd_builder, GYRE (Townsend & Teitler,
2013; Townsend et al., 2018, https://github.com/rhdtownsend/gyre), Montreal
White Dwarf Database (Dufour et al., 2017,
http://www.montrealwhitedwarfdatabase.org), matplotlib (Hunter, 2007), and
NumPy (van der Walt et al., 2011).
## 1 Introduction
Photons emitted from stellar surfaces and neutrinos released from stellar
interiors may not directly reveal all that we want to know about the internal
constitution of the stars. For example, a direct view of the chemical
stratification from the core to the surface is hidden. These interior
abundance profiles matter: they impact a star’s opacity, thermodynamics,
nuclear energy generation, and pulsation properties. The stellar models, in
turn, are used to interpret the integrated light of stellar clusters and
galaxies (e.g., Alsing et al., 2020), decipher the origin of the elements
(e.g., Arcones et al., 2017; Placco et al., 2020), predict the frequency of
merging neutron stars and black holes (Giacobbo & Mapelli, 2018; Farmer et
al., 2020; Marchant & Moriya, 2020; Abbott et al., 2020), and decipher the
population(s) of exploding white dwarfs that underlay Type Ia supernova
cosmology (e.g., Miles et al., 2016; Rose et al., 2020).
Neutrino astronomy, in concert with stellar models, can probe the isotopic
composition profiles in energy producing regions of the Sun (Borexino
Collaboration et al., 2018, 2020) and nearby ($d$ $\lesssim$ 1 kpc)
presupernova massive stars up to tens of hours before core-collapse (e.g.,
Patton et al., 2017; Simpson et al., 2019; Mukhopadhyay et al., 2020). Stellar
seismology, also in concert with stellar models, can probe the elemental
composition profiles in pulsating stars from the upper main-sequence (e.g.,
Simón-Díaz et al., 2018; Pedersen et al., 2019; Balona & Ozuyar, 2020) through
the red-giant branch (e.g., Hekker & Christensen-Dalsgaard, 2017; Hon et al.,
2018) to white dwarfs (WDs, e.g., Hermes et al., 2017; Giammichele et al.,
2018; Córsico et al., 2019; Bell et al., 2019; Bischoff-Kim et al., 2019;
Althaus et al., 2020).
Most of a main-sequence star’s initial metallicity $Z$ comes from the carbon-
nitrogen-oxygen (CNO) and $\mathrm{{}^{56}Fe}$ nuclei inherited from its
ambient interstellar medium. All of the CNO piles up at $\mathrm{{}^{14}N}$
when H-burning on the main-sequence is completed because the
$\mathrm{{}^{14}N}$($p$,$\gamma$)$\mathrm{{}^{15}O}$ reaction rate is the
slowest step in the H-burning CNO cycle. During the ensuing He-burning phase,
all of the $\mathrm{{}^{14}N}$ is converted to $\mathrm{{}^{22}Ne}$ by the reaction sequence
$\mathrm{{}^{14}N}$($\alpha$,$\gamma$)$\mathrm{{}^{18}F}$(,$e^{+}\nu_{e}$)$\mathrm{{}^{18}O}$($\alpha$,$\gamma$)$\mathrm{{}^{22}Ne}$.
The abundance of $\mathrm{{}^{22}Ne}$ when He-burning is completed is thus
proportional to the initial CNO abundance of the progenitor main-sequence
star. The weak reaction in this sequence powers the neutrino luminosity during
He-burning (e.g., Serenelli & Fukugita, 2005; Farag et al., 2020) and marks
the first time in a star’s life that the core becomes neutron rich. For zero-
age main sequence (ZAMS) masses between $\simeq$ 0.5 $\mathrm{M}_{\odot}$
(Demarque & Mengel, 1971; Prada Moroni & Straniero, 2009; Gautschy, 2012) and
$\simeq$ 7 $\mathrm{M}_{\odot}$ (Becker & Iben, 1979, 1980; García-Berro et
al., 1997), depending on the treatment of convective boundary mixing
(Weidemann, 2000; Denissenkov et al., 2013; Jones et al., 2013; Farmer et al.,
2015; Lecoanet et al., 2016; Constantino et al., 2015, 2016, 2017), the
$\mathrm{{}^{14}N}$($\alpha$,$\gamma$)$\mathrm{{}^{18}F}$(,$e^{+}\nu_{e}$)$\mathrm{{}^{18}O}$($\alpha$,$\gamma$)$\mathrm{{}^{22}Ne}$
reaction sequence determines the $\mathrm{{}^{22}Ne}$ content of a resulting
carbon-oxygen white dwarf (CO WD). We follow the convention that
$\mathrm{{}^{22}Ne}$ is the “metallicity” of the CO WD.
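The proportionality between the progenitor's CNO content and the WD's $\mathrm{{}^{22}Ne}$ can be made explicit: since each CNO nucleus ends up in one $\mathrm{{}^{22}Ne}$ nucleus, $X(\mathrm{{}^{22}Ne})$ $\simeq$ 22 $\sum_{i}X_{i}/A_{i}$ summed over the CNO isotopes. The sketch below evaluates this; the mass fractions are assumed round numbers for a $Z$ $\simeq$ 0.02 mixture, not values from this article.

```python
# Sketch: X(22Ne) from complete conversion of the progenitor's CNO nuclei.
# The CNO mass fractions are assumed round numbers for a Z ~ 0.02 mixture.
A = {"c12": 12.0, "n14": 14.0, "o16": 16.0}
X = {"c12": 3.0e-3, "n14": 1.0e-3, "o16": 9.0e-3}

Y_cno = sum(X[i] / A[i] for i in X)  # total CNO number abundance, mol/g
X_ne22 = 22.0 * Y_cno                # each CNO nucleus becomes one 22Ne
print(f"X(22Ne) ~ {X_ne22:.3f}")     # ~0.02, i.e., the progenitor metallicity
```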
Camisassa et al. (2016) analyze the impact of $\mathrm{{}^{22}Ne}$ on the
sedimentation and pulsation properties of H-dominated atmosphere WDs (i.e.,
the DAV class of WD) with masses of 0.528, 0.576, 0.657, and 0.833
$\mathrm{M}_{\odot}$. These WD models result from $Z$ = 0.02 non-rotating
evolutionary models that start from the ZAMS and are evolved through the core-
hydrogen and core-helium burning, thermally pulsing asymptotic giant branch
(AGB), and post-AGB phases. At low luminosities, $\log(L/\mathrm{L}_{\odot})$
$\lesssim$ $-4.5$, they find that $\mathrm{{}^{22}Ne}$ sedimentation delays
the cooling of WDs by 0.7 to 1.2 Gyr, depending on the WD mass. They also find
that $\mathrm{{}^{22}Ne}$ sedimentation induces differences in the periods
that are larger than the present observational uncertainties.
Giammichele et al. (2018) analyze in their supplemental material the effect of
$\mathrm{{}^{22}Ne}$ on the pulsation periods of a 0.570 $\mathrm{M}_{\odot}$
template-based model for the DB WD KIC 08626021. They considered a model
consisting of pure oxygen core surrounded by a pure helium envelope with the
same mass and effective temperature equal to those inferred for KIC 08626021.
Next, they considered a model that replaces the pure oxygen core with an
oxygen-dominated core plus a trace amount of $\mathrm{{}^{22}Ne}$. They find
that the model with $\mathrm{{}^{22}Ne}$ has, on average, shorter pulsation
periods.
This article is novel in three ways. One, we explore the impact of
$\mathrm{{}^{22}Ne}$ on the adiabatic low-order g-mode pulsation periods of CO
WD models with a He-dominated atmosphere (i.e., the DBV class of WD) as the
models cool through the range of observed DBV effective temperatures. Two, we
derive an approximation formula for the Brunt-Väisälä frequency in WDs that
allows new physical insights into why the low-order g-mode pulsation periods
change due to the presence, and absence, of $\mathrm{{}^{22}Ne}$. Three, we
analyze how the $\mathrm{{}^{22}Ne}$ induced changes in the pulsation periods
depend on the mass and temporal resolutions of the WD model. Our explorations
can help inform inferences about the interior mass fraction profiles derived
from fitting the observed periods of specific DBV WDs (e.g., Metcalfe et al.,
2002; Fontaine & Brassard, 2002; Metcalfe, 2003; Metcalfe et al., 2003; Hermes
et al., 2017; Giammichele et al., 2017, 2018; Charpinet et al., 2019; De
Gerónimo et al., 2019; Bischoff-Kim et al., 2019).
In Section 2 we summarize the input physics, and discuss in detail the
chemical stratification, cooling properties, and g-mode pulsation periods of
one DBV WD model. In Section 3 we present our results on changes to the low-
order g-mode pulsation periods due to the presence, or absence, of
$\mathrm{{}^{22}Ne}$ from this model. In Section 4 we study changes in the
g-mode pulsation periods due to $\mathrm{{}^{22}Ne}$ from a less massive and a
more massive WD model. In Section 5 we summarize and discuss our results. In
Appendix A we study the robustness of our results with respect to mass and
temporal resolution, and in Appendix B we discuss in more depth some of the
input physics.
Figure 1: Mass fraction profiles of the 0.56 $\mathrm{M}_{\odot}$ DB WD
resulting from the evolution of the 2.1 $\mathrm{M}_{\odot}$, $Z$ = 0.02, ZAMS
model.
## 2 A Baseline WD Model
### 2.1 Input Physics
We use MESA version r12115 (Paxton et al., 2011, 2013, 2015, 2018, 2019) to
evolve a $2.1\,\mathrm{M}_{\odot}$, $Z$ = 0.02 metallicity model from the ZAMS
through core H-burning and core He-burning. After winds during the thermal
pulses on the AGB have reduced the H-rich envelope mass to
$0.01\,\mathrm{M}_{\odot}$, the remaining hydrogen is stripped from the
surface to form a young, 0.56 $\mathrm{M}_{\odot}$ DB WD. This model is tuned
to match the observed and inferred properties of KIC 08626021 (Bischoff-Kim et
al., 2014; Giammichele et al., 2018; Timmes et al., 2018; Charpinet et al.,
2019; De Gerónimo et al., 2019). Additional details of the input physics are
given in Appendix B, and the MESA r12115 files to reproduce our work are
available at https://doi.org/10.5281/zenodo.4338180.
### 2.2 Mass Fraction Profiles
Figure 1 shows the mass fraction $X(^{A}Z)$ profiles of the resulting 0.56
$\mathrm{M}_{\odot}$ DB WD model, where $A$ is the number of nucleons and $Z$
is the number of protons. Brown boxes divide the mass fraction profiles into
three regions according to their origins and uncertainties. The
$X(\mathrm{{}^{12}C})$ and $X(\mathrm{{}^{16}O})$ profiles in the innermost
$\simeq$ 90% by mass region are determined during core and shell He-burning.
The main uncertainties in this region are the
$\mathrm{{}^{12}C}$($\alpha$,$\gamma$)$\mathrm{{}^{16}O}$ reaction rate (e.g.,
deBoer et al., 2017), and the treatment of convective mixing boundaries during
core H-and He-burning (e.g., Constantino et al., 2015, 2016, 2017).
The CO and $X(\mathrm{{}^{4}He})$ profiles between $\simeq$ 1% and $\simeq$
10% of the exterior WD mass originate from shell He-burning during the thermal
pulse phase of evolution on the AGB. Most of the total He mass is held in this
region. The primary uncertainties in this region are the number of thermal
pulses and convective boundary layer mixing. The number of thermal pulses a
model undergoes is sensitive to the mass resolution, time resolution, mass
loss rate, and the treatment of convective boundaries (Iben & Renzini, 1983;
Herwig, 2005; Karakas & Lattanzio, 2014). The sharp change in all the mass
fractions at $\simeq$ 1% of the exterior WD mass marks the extent reached by
the convective boundary during the last thermal pulse.
CO profiles in this region may also be subject to other mixing processes. For
example, the magnitude of the $X(\mathrm{{}^{12}C})$ “bump” is subject to the
strength and duration of the thermohaline instability, which occurs when
material is stable to convection according to the Ledoux criterion, but has an
inverted molecular weight gradient (Baines & Gill, 1969; Brown et al., 2013;
Garaud, 2018; Bauer & Bildsten, 2018).
The $X(\mathrm{{}^{4}He})$ profile of the outer $\simeq$ 0.1% to 1% of the WD
mass is determined by shell H-burning. All of the initial CNO mass fractions
have been converted to $\mathrm{{}^{14}N}$. The main uncertainties in this
region are the number of thermal pulses during the AGB phase of evolution,
late or very late thermal pulses (Bloecker, 1995a, b; Blöcker, 2001), and
mechanisms to remove the residual high entropy, H-rich layer to produce a DB
WD from single and binary evolution (e.g., D’Antona & Mazzitelli, 1990;
Althaus & Benvenuto, 1997; Parsons et al., 2016).
The $X(\mathrm{{}^{22}Ne})$ profile is essentially flat and spans the inner
$\simeq$ 99% by mass. As discussed in Section 1, $X(\mathrm{{}^{22}Ne})$ is
created from $\mathrm{{}^{14}N}$ during He-burning.
Figure 2: Evolution of baseline model’s photon luminosity $L$ and neutrino
luminosity $L_{\nu}$ (left top), effective temperature $T_{\rm eff}$ and
radius $R$ (left middle), central temperature $T_{c}$ and central density
$\rho_{c}$ (left bottom). Time begins a few thermal timescales after the ab
initio WD is released. Gray bands show the luminosity and $T_{\rm eff}$ range
of currently observed DBV WDs (Montreal White Dwarf Database, Dufour et al.,
2017). Mass fraction profiles are shown at $T_{\rm eff}$ = 30,039 K (right
top), 15,012 K (right middle), and 12,092 K (right bottom) and at the end of
the evolution. Initial mass fraction profiles are shown as solid curves and
the diffusing mass fraction profiles are shown as dotted curves.
Figure 3: Propagation diagram for the dipole $\ell$ = 1 (top) and quadrupole
$\ell$ = 2 (bottom) g-modes at $L$ = 0.01 $\mathrm{L}_{\odot}$ for the
baseline WD model. The Lamb frequency ($S_{\ell}$, orange), Brunt-Väisälä
frequency ($N$, blue), radial order $n$ = 1,2,3,10 eigenfrequencies (dotted
black lines), nodes in the radial eigenfunction (filled circles), and g-mode
period of each radial order are labeled.
### 2.3 Constructing Ab Initio White Dwarf Models
Starting from a set of pre-main sequence (pre-MS) initial conditions, accurate
predictions for the properties of the resulting WD model, especially the mass
fraction profiles, do not exist due to the nonlinear system of equations being
approximated. In addition, evolving a stellar model from the pre-MS to a WD
can be resource intensive. It can thus be useful for systematic studies to
build ab initio WD models (e.g., WDEC, Bischoff-Kim & Montgomery, 2018). By ab
initio we mean calculations that begin with a WD model, as opposed to a WD
model that is the result of a stellar evolution calculation from the pre-MS. A
potential disadvantage (or advantage) of ab initio WD models is the imposed
initial mass fraction profiles may not be attainable by a stellar model
evolved from the pre-MS. Throughout the remainder of this article we use a new
capability, wd_builder, to construct ab initio WD models in MESA of a given
mass and chemical stratification.
The initial structure of an ab initio WD model is approximated as an
isothermal core and a radiative envelope in hydrostatic equilibrium. Here we
specify an initial WD mass of 0.56 $\mathrm{M}_{\odot}$, the same WD mass as
produced by the stellar evolution calculation. The imposed
$X(\mathrm{{}^{4}He})$, $X(\mathrm{{}^{12}C})$, $X(\mathrm{{}^{14}N})$,
$X(\mathrm{{}^{16}O})$, and $X(\mathrm{{}^{22}Ne})$ profiles are taken from
the stellar evolution derived mass fraction profiles of Figure 1 and
normalized to sum to unity in each cell. Henceforth we refer to this ab initio
WD model as the “baseline model”.
For ab initio WD models we use He-dominated, $\log$(H/He) = $-$5.0, model
atmosphere tables spanning 5,000 K $\leq$ $T_{\rm eff}$ $\leq$ 40,000 K that
were provided by Odette Toloza (2019, private communication) using the Koester
WD atmosphere software instrument (Koester, 2010). These tabulated atmospheres
for DB WDs are publicly available as a standard atmosphere option as of MESA
r12115. In addition, we use five element classes for diffusion –
$\mathrm{{}^{4}He}$, $\mathrm{{}^{12}C}$, $\mathrm{{}^{14}N}$,
$\mathrm{{}^{16}O}$, and $\mathrm{{}^{22}Ne}$. Otherwise, all of the physics
implementations and modeling choices are as described in Section 2.1.
The initial baseline model is then evolved with MESA. As the model is not in
thermal equilibrium, there is an initial transient phase lasting a few thermal
timescales that is disregarded. The thermal timescale is $\tau_{\rm th}$
$\simeq$ $E_{\rm th}/L_{\rm tot}$ $\simeq$ 0.67 Myr, where $E_{\rm th}$ is the
thermal energy of the WD and $L_{\rm tot}$ is the photon plus neutrino
luminosity. Specifically, we set the zero point to be 1.5 thermal timescales
($\simeq$ 1 Myr) after the transient reaches its peak luminosity. The
evolution terminates when $L_{\rm tot}$ falls below
$\log(L/\mathrm{L}_{\odot})$ = $-$2.5.
Figure 2 shows the cooling properties of the baseline model. Plasmon neutrino
emission dominates the energy loss budget at $T_{\rm eff}\gtrsim 25,000\,{\rm
K}$ (e.g., Vila, 1966; Kutter & Savedoff, 1969; Winget et al., 2004; Bischoff-
Kim & Montgomery, 2018). Photons leaving the WD surface begin to dominate the
cooling as the electrons transition to a strongly degenerate plasma (van Horn,
1971). The luminosity becomes proportional to the enclosed mass, $L_{r}$
$\propto$ $\,M_{r}$, in this model only when $T_{\rm eff}\lesssim 20,000\,{\rm
K}$ (Timmes et al., 2018). Energy transport in the interior is dominated by
conduction, driven primarily by electron-ion scattering. Energy transport in
the outer layers is dominated by radiation or convection associated with the
partial ionization of He at $T_{\rm eff}\simeq 30,000\,{\rm K}$.
Figure 2 also shows the diffusion of the initial mass fractions as the
baseline WD model cools to $T_{\rm eff}$ = 30,000 K, 15,000 K and 12,138 K
(corresponding to the termination at $\log(L/\mathrm{L}_{\odot})$ = $-$2.5).
Element diffusion of $\mathrm{{}^{22}Ne}$ is modest for the baseline 0.56
$\mathrm{M}_{\odot}$ DB WD model. Depletion of the $\mathrm{{}^{22}Ne}$ mass
fraction at $\log(1-M_{r}/M$) $\simeq$ $-1.9$ has occurred by the time the
model has cooled to $T_{\rm eff}$ $\simeq$ 30,000 K. As the model cools
further, the surface regions in the tail of the He-dominated layer further
deplete and a small $\mathrm{{}^{22}Ne}$ bump forms and propagates inwards
toward the center. The timescale for $\mathrm{{}^{22}Ne}$ to travel from near
the surface to the center of this WD model is $\tau_{\rm D}\simeq
2\bar{Z}\Gamma^{1/3}\rho_{6}^{-1/2}\ {\rm Gyr}\simeq 30\ {\rm Gyr}$ (Isern et
al., 1991; Bravo et al., 1992; Bildsten & Hall, 2001; Deloye & Bildsten, 2002;
Camisassa et al., 2016), where $\bar{Z}$ is the mean charge of the material,
$\Gamma$ is the electrostatic to thermal energy ratio, and $\rho_{6}$ is the
baryon mass density in units of $10^{6}$ g cm${}^{-3}$. Thus, the $X(\mathrm{{}^{22}Ne})$
profile does not significantly change as the 0.56 $\mathrm{M}_{\odot}$ baseline
model evolves to $\log(L/\mathrm{L}_{\odot})$ = $-$2.5 in $\simeq$ 350 Myr.
More massive WDs show larger amounts of $\mathrm{{}^{22}Ne}$ sedimentation
over the same time period (Camisassa et al., 2016). WD cooling data suggest a
significant cooling delay due to $\mathrm{{}^{22}Ne}$ diffusion (Cheng et
al., 2019; Bauer et al., 2020), but this does not affect the baseline model
until it cools to effective temperatures lower than considered here ($T_{\rm
eff}$ $\lesssim$ 10,000 K).
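A minimal sketch of the settling timescale estimate quoted above; $\bar{Z}$, $\Gamma$, and $\rho_{6}$ below are assumed representative interior values for a 0.56 $\mathrm{M}_{\odot}$ CO WD, not values extracted from the MESA model.

```python
# Sketch: 22Ne settling timescale, tau_D ~ 2 Zbar Gamma^(1/3) rho6^(-1/2) Gyr.
# Zbar, Gamma, and rho6 are assumed representative interior values.
Zbar  = 7.0    # mean charge of a C/O mixture
Gamma = 50.0   # electrostatic-to-thermal energy ratio
rho6  = 3.0    # baryon mass density in units of 10^6 g/cm^3

tau_D = 2.0 * Zbar * Gamma**(1.0/3.0) / rho6**0.5   # Gyr
print(f"tau_D ~ {tau_D:.0f} Gyr")  # ~30 Gyr, far exceeding the ~350 Myr evolved
```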
### 2.4 Pulsation Periods of the Baseline Model
Having established the structural and composition profiles of a cooling
baseline WD model, we now consider the g-mode pulsation periods. Some of the
material is classic (e.g., Unno et al., 1989; Fontaine & Brassard, 2008), but
we also derive and verify the accuracy of an approximation formula for the
Brunt-Väisälä frequency in WDs that allows physical insights into why the low-
order g-mode pulsation periods change due to variations in the mass fraction
of $\mathrm{{}^{22}Ne}$. This material is essential for establishing that the
baseline model, before introducing any modifications to the chemical profiles,
produces pulsation periods that are commensurate with the observations of DBV
WDs.
Figure 3 shows the propagation diagram (e.g., Unno et al., 1989) for the
baseline WD model after it has cooled to $T_{\rm eff}$ = 16,045 K and dimmed
to $L$ = 0.01 $\mathrm{L}_{\odot}$, within the DBV WD observation window.
Adiabatic pulsation frequencies are calculated using release 5.2 of the GYRE
software instrument (Townsend & Teitler, 2013; Townsend et al., 2018). For a
fixed radial overtone number, the $\ell$ = 1 periods are $\sim$ $\sqrt{3}$ times
longer than the $\ell$ = 2 periods, due to the local dispersion relation for
low-frequency g-modes $\sigma_{g}$ scaling as
$\sigma_{g}^{2}\simeq\ell(\ell+1)N^{2}/(k_{r}^{2}r^{2})\ ,$ (1)
where $k_{r}$ is the radial wave number: at fixed $k_{r}$ the period scales as
$[\ell(\ell+1)]^{-1/2}$, so $P_{\ell=1}/P_{\ell=2}=\sqrt{6/2}=\sqrt{3}$. The
Brunt-Väisälä frequency $N$ is
$N^{2}=\frac{g^{2}\rho}{P}\frac{\chi_{T}}{\chi_{\rho}}(\nabla_{{\rm
ad}}-\nabla_{T}+B)\ ,$ (2)
where $g$ is the gravitational acceleration, $\rho$ is the mass density, $P$
is the pressure, $T$ is the temperature, $\chi_{T}$ is the temperature
exponent $\partial({\rm ln}P)/\partial({\rm ln}T)|_{\rho,\mu_{I}}$,
$\chi_{\rho}$ is the density exponent $\partial({\rm ln}P)/\partial({\rm ln}\rho)|_{T,\mu_{I}}$,
$\nabla_{{\rm ad}}$ is the adiabatic temperature
gradient, $\nabla_{T}$ is the actual temperature gradient, and $B$ accounts
for composition gradients (e.g., Hansen & Kawaler, 1994; Fontaine & Brassard,
2008). Bumps in the $N$ profile of Figure 3 correspond to transitions in the
$X(\mathrm{{}^{16}O})$, $X(\mathrm{{}^{12}C})$, and $X(\mathrm{{}^{4}He})$
profiles. The implementation of Equation 2 in MESA is described in Section 3
of Paxton et al. (2013).
An approximation for $N^{2}$ in the interiors of WDs that yields physical
insights begins by assuming $\nabla_{{\rm ad}}$ is much larger than
$\nabla_{T}$ and $B$. Then
$N^{2}=\frac{g^{2}\rho}{P}\frac{\chi_{T}}{\chi_{\rho}}\nabla_{{\rm ad}}\ .$
(3)
In the interior of a WD the ions are ideal and dominate the temperature
derivatives of an electron degenerate plasma. Substituting the pressure scale
height $H$ = $P/(\rho g)$ and equation 3.110 of Hansen & Kawaler (1994)
$\chi_{T}=\frac{\rho}{P}\frac{k_{B}T}{\mu_{I}m_{p}}$ (4)
into Equation 3 gives
$N^{2}=\frac{1}{H^{2}\chi_{\rho}}\frac{k_{B}T}{\mu_{I}m_{p}}\nabla_{{\rm ad}}\
,$ (5)
where $k_{B}$ is the Boltzmann constant, $\mu_{I}$ = 1/($\sum_{i}X_{i}/A_{i})$
is the ion mean molecular weight, and $m_{p}$ is the mass of the proton.
Equation 3.90 of Hansen & Kawaler (1994) shows $\nabla_{{\rm ad}}$ =
$(\Gamma_{3}-1)/\Gamma_{1}$, where $\Gamma_{1}$ is the first adiabatic index
and $\Gamma_{3}$ is the third adiabatic index, with $\Gamma_{3}-1$
$\rightarrow$ $k_{B}/(\mu_{I}m_{p}c_{v})$ for an ideal ion gas whose specific
heat capacity is $c_{v}$ = $3k_{B}/(2\mu_{I}m_{p})$. The sentence beneath
equation 3.112 of Hansen & Kawaler (1994) thus notes that $\Gamma_{3}-1$ = 2/3
for the ions in the gas phase ($\Gamma_{3}-1$ = 1/3 in the liquid phase).
Combining these expressions yields the approximation
$N^{2}=\frac{2}{3\Gamma_{1}\chi_{\rho}H^{2}}\frac{k_{B}T}{\mu_{I}m_{p}}\ .$
(6)
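The scaling content of Equation (6) is easy to evaluate pointwise. The sketch below does so; the interior conditions ($T$, $H$, $\Gamma_{1}$, $\chi_{\rho}$, and the O/C/$\mathrm{{}^{22}Ne}$ mix) are assumed illustrative values, not numbers taken from the MESA model.

```python
# Sketch: the approximate Brunt-Vaisala frequency of Equation (6) at one
# interior point. All input values are assumed and illustrative.
k_B = 1.380649e-16    # Boltzmann constant, erg/K
m_p = 1.67262192e-24  # proton mass, g

def mu_ion(X, A):
    """Ion mean molecular weight, mu_I = 1 / sum_i(X_i / A_i)."""
    return 1.0 / sum(x / a for x, a in zip(X, A))

def N2_approx(T, mu_I, H, Gamma1, chi_rho):
    """Equation (6): N^2 = 2 k_B T / (3 Gamma1 chi_rho H^2 mu_I m_p)."""
    return 2.0 * k_B * T / (3.0 * Gamma1 * chi_rho * H**2 * mu_I * m_p)

mu_I = mu_ion([0.60, 0.38, 0.02], [16.0, 12.0, 22.0])  # O/C/22Ne mix
N2 = N2_approx(T=2.0e7, mu_I=mu_I, H=1.0e8, Gamma1=3.0, chi_rho=1.0)
print(mu_I, N2)  # removing 22Ne lowers mu_I, raising N^2 at fixed structure
```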
Figure 4: Comparison of the approximation for $N^{2}$ (blue curve) in Equation
(6) and the full calculation of $N^{2}$ from MESA (green curve). Figure 5:
Mass-radius relation of the baseline DB WD model at
$\log(L/\mathrm{L}_{\odot})$ = $-$2.5 with key features located: the
transition from $X(\mathrm{{}^{16}O})$ to $X(\mathrm{{}^{12}C})$ dominated,
the rise of $X(\mathrm{{}^{4}He})$, the $X(\mathrm{{}^{12}C})$ bump, where
$S_{\ell}$ $<$ $N$ occurs, the transition to $X(\mathrm{{}^{4}He})$ dominated,
and where $N^{2}$ $<$ 0. Figure 6: Period evolution of the $\ell$ = 1
(purple) and $\ell$ = 2 (green) g-modes at radial orders $n$=1,2,3,10 as the
baseline model cools. Each point represents a timestep in MESA where the
g-mode was calculated by GYRE. The gray band shows the $T_{\rm eff}$ range of
observed DBV WDs.
Figure 4 compares the approximation in Equation (6) with the full $N^{2}$
calculation from MESA. The difference at $r/R$ $\simeq$ 0.5 corresponds to the
$X$($\mathrm{{}^{16}O}$) $\rightarrow$ $X$($\mathrm{{}^{12}C}$) transition, at
$r/R$ $\simeq$ 0.8 to the $\mathrm{{}^{12}C}$ bump, and at $r/R$ $\simeq$ 0.9
to the transition to a He dominated atmosphere. Except for the outermost
layers and regions where the composition gradients are significant, the
agreement is sufficient to use Equation (6) as a scaling relation for building
physical insights. We always use, however, the full $N^{2}$ calculation from
MESA for any quantitative analysis.
It is useful to reference features of the baseline model with respect to mass
or radius. Figure 5 thus shows the mass-radius relation of the baseline model
at $\log(L/\mathrm{L}_{\odot})$ = $-$2.5 with key transitions labeled.
Figure 7: Top to Bottom: Relative period differences of the $g_{1,1}$,
$g_{1,2}$, $g_{2,1}$, $g_{2,2}$, $g_{10,1}$ and $g_{10,2}$ modes between the
baseline model, PB, and a model where the $\mathrm{{}^{22}Ne}$ has been
replaced with $\mathrm{{}^{14}N}$, P14N. We use the notation $g_{n,\ell}$ for
a g-mode of order $n$ and degree $\ell$. Gray bands show the $T_{\rm eff}$
range of currently observed DBV WDs. Figure 8: Top to Bottom: Relative
differences in the $H^{2}$, $\mu_{I}$, $\Gamma_{1}$, $\chi_{\rho}$, and $T$
contributions to $N^{2}$ in Equation 6. Subscript B represents the baseline
model, and subscript 14N represents a model where $\mathrm{{}^{22}Ne}$ has
been replaced with $\mathrm{{}^{14}N}$.
Figure 6 shows the low-order g-mode pulsation periods as the baseline WD model
cools. The periods increase due to $N^{2}$ decreasing as the cooling
progresses, per Equation 6. Higher radial orders have steeper slopes due to
the periods scaling with $k_{r}$ in Equation 1. The increase in number of MESA
models at $T_{\rm eff}$ $\simeq$ 30,000 K is due to the partial ionization of
He, which leads to envelope convection in relatively hot DBV WDs. The change
in slope at $T_{\rm eff}$ $\simeq$ 20,000 K is due to the luminosity becoming
proportional to the enclosed mass, $L_{r}\propto M_{r}$, as the plasmon
neutrino emission becomes insignificant.
In Appendix A we show that the low-order g-mode pulsation periods of the
baseline model calculated with GYRE are only weakly dependent on the mass and
temporal resolution of the MESA calculations.
Figure 9: Weight functions of the low-order g-modes for baseline model with
$\mathrm{{}^{22}Ne}$ (black curves) and a baseline model where
$\mathrm{{}^{22}Ne}$ has been replaced with $\mathrm{{}^{14}N}$ (green
curves). Subpanels show the relative percent differences between the two
curves. The profiles shown are when the two models have cooled to
$\log(L/\mathrm{L}_{\odot})$ = $-$2.5. Nodes in the radial eigenfunctions are
marked by filled circles. Figure 10: Top to Bottom: Relative period
differences of the $g_{1,1}$, $g_{1,2}$, $g_{2,1}$, $g_{2,2}$, $g_{10,1}$ and
$g_{10,2}$ modes between the baseline model, PB, a zero-metallicity WD model
(gray curves) where the $\mathrm{{}^{14}N}$ and $\mathrm{{}^{22}Ne}$ have been
put into $\mathrm{{}^{4}He}$ and $\mathrm{{}^{12}C}$ respectively, and a
super-solar metallicity model (green curves) where the $\mathrm{{}^{14}N}$ and
$\mathrm{{}^{22}Ne}$ of the baseline model are doubled. Figure 11: Top to
Bottom: Relative differences in the $H^{2}$, $\mu_{I}$, $\Gamma_{1}$,
$\chi_{\rho}$, and $T$ contributions to $N^{2}$ in Equation 6. Subscript B
represents the baseline model, and subscript Z represents the zero-metallicity
models (gray curves) and super-solar metallicity models (green curves).
## 3 The Impact Of $\mathrm{{}^{22}Ne}$
Having established the cooling properties and g-mode pulsation periods of a
baseline model whose mass fraction profiles are from a stellar evolution
model, we now explore changes in the g-mode pulsation periods due to changes
in the $\mathrm{{}^{22}Ne}$ mass fraction profile shown in Figure 1. We
consider three modifications: replacing $\mathrm{{}^{22}Ne}$ with
$\mathrm{{}^{14}N}$, a metal-free model, and a super-solar metallicity model.
### 3.1 Putting the $\mathrm{{}^{22}Ne}$ into $\mathrm{{}^{14}N}$
Replacing $X$($\mathrm{{}^{22}Ne}$) with $X$($\mathrm{{}^{14}N}$) is a model
for the reaction sequence
$\mathrm{{}^{14}N}$($\alpha$,$\gamma$)$\mathrm{{}^{18}F}$(,$e^{+}\nu_{e}$)$\mathrm{{}^{18}O}$($\alpha$,$\gamma$)$\mathrm{{}^{22}Ne}$
either physically not occurring or being ignored. Figure 7 shows the relative
differences in the low-order g-mode pulsation periods from this composition
change. All of the relative differences are negative, implying the pulsation
periods in models that exclude $\mathrm{{}^{22}Ne}$ are longer than the
corresponding pulsation periods in models that include $\mathrm{{}^{22}Ne}$.
The magnitude of the relative period differences span $\simeq$ 0.25%$-$1% over
the range of currently observed DBV WDs, with the $g_{1,1}$ and $g_{1,2}$
modes showing the largest differences at cooler $T_{\rm eff}$. The change in
the slopes at $T_{\rm eff}$ $\simeq$ 20,000 K is due to plasmon neutrino
emission becoming insignificant, and thus the luminosity becoming proportional
to the enclosed mass, $L_{r}\propto M_{r}$.
What drives these g-mode period changes? Replacing an isotope which has a
larger mass number with an isotope which has a smaller mass number decreases
$\mu_{I}$. This replacement also increases $H$ through the mechanical
structure and equation of state of the CO WD. Figure 8 shows the relative
differences in the $H^{2}$, $\mu_{I}$, $\Gamma_{1}$, $\chi_{\rho}$ and $T$
contributions to $N^{2}$ in Equation 6. These changes collectively determine
the magnitude and sign of the period change relative to the baseline model.
For this $X$($\mathrm{{}^{22}Ne}$) $\rightarrow$ $X$($\mathrm{{}^{14}N}$)
model, the overall positive changes in $\mu_{I}$ and $T$ are counteracted by
the negative changes from $H^{2}$, $\Gamma_{1}$, and $\chi_{\rho}$. The
magnitude of the relative difference in $H^{2}$ drives the net result of a
smaller $N^{2}$ and thus longer g-mode periods. The nearly uniform negative
change in $H^{2}$ implies a change in the radius of the WD model. We find
$(R_{B}-R_{14N})/R_{B}$ $\simeq$ $-$0.4%, meaning the
$X$($\mathrm{{}^{22}Ne}$) $\rightarrow$ $X$($\mathrm{{}^{14}N}$) model has a
larger radius than the model with $\mathrm{{}^{22}Ne}$. This is expected:
$\mathrm{{}^{14}N}$ has a larger electron fraction than $\mathrm{{}^{22}Ne}$
($Y_{e}$ = 7/14 = 0.5 versus 10/22 $\simeq$ 0.455), and a WD with more
electrons per baryon has a larger radius.
Figure 9 compares the weight functions of the baseline model with
$\mathrm{{}^{22}Ne}$ and the model where the $\mathrm{{}^{22}Ne}$ has been
replaced with $\mathrm{{}^{14}N}$. Following Kawaler et al. (1985), the weight
function is
$\frac{{\rm d}\zeta}{{\rm d}r}=\frac{[C({\bf y},r)+N({\bf y},r)+G({\bf
y},r)]\rho r^{2}}{\int_{r=0}^{r=R}T({\bf y},r)\rho r^{2}{\rm d}r}\ ,$ (7)
where $C({\bf y},r)$ varies with the Lamb frequency, $N({\bf y},r)$ contains
the Brunt-Väisälä frequency, $G({\bf y},r)$ involves the gravitational
eigenfunctions, $T({\bf y},r)$ is proportional to the kinetic energy density,
and ${\bf y}=(y_{1},y_{2},y_{3},y_{4})$ are the Dziembowski (1971) variables.
The frequency of an adiabatic mode is then
$\nu^{2}=\zeta=\int_{r=0}^{r=R}\frac{{\rm d}\zeta}{{\rm d}r}\cdot{\rm d}r\ .$
(8)
The weight function for the two models is dominated by the $N({\bf y},r)$ term
except for the surface layers. Figure 9 shows that the net effect of the
$\mathrm{{}^{22}Ne}$ $\rightarrow$ $\mathrm{{}^{14}N}$ composition change is a
shift in $\zeta$, the area under the weight function curves, towards smaller
frequencies of the low-order g-modes. The subpanels in Figure 9 illustrate the
relative percent differences between the weight function curves. Most of the
changes in $\zeta$ occur at the CO transition region ($r/R$ $\simeq$ 0.45, see
Figure 5), $\mathrm{{}^{12}C}$ bump ($r/R$ $\simeq$ 0.8 ), and at the
transition to a He-dominated atmosphere ($r/R$ $\simeq$ 0.9). The changes in
these regions get as large as $\sim$ 10%. We identify the dipole g-mode of
radial order $n$ = 2 as being more sensitive to the location and gradient of
$\mu_{I}$ at the CO transition ($r/R$ $\simeq$ 0.5) than other low-order
g-modes.
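A minimal sketch of how Equations (7) and (8) are used in practice: given a radial grid and the weight function, the mode frequency is a single quadrature. The arrays here are placeholders for profiles built from GYRE eigenfunctions.

```python
# Sketch: Equation (8) as a quadrature over a weight function profile.
# r and dzeta_dr are placeholders for arrays built from GYRE eigenfunctions.
import numpy as np

def mode_frequency(r, dzeta_dr):
    """Return nu from nu^2 = integral of dzeta/dr over r (Equation 8)."""
    nu2 = np.trapz(dzeta_dr, r)
    return np.sqrt(nu2)

# shifting area under dzeta/dr toward smaller values lowers nu and
# lengthens the period, as in the 22Ne -> 14N comparison of Figure 9
```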
### 3.2 Zero-Metallicity and Super-Solar Metallicity
Replacing $X$($\mathrm{{}^{14}N}$) with $X$($\mathrm{{}^{4}He}$) and
$X$($\mathrm{{}^{22}Ne}$) with $X$($\mathrm{{}^{12}C}$) is a model for
ignoring the birth metallicity of the ZAMS star, CNO burning on the main-
sequence, and the
$\mathrm{{}^{14}N}$($\alpha$,$\gamma$)$\mathrm{{}^{18}F}$(,$e^{+}\nu_{e}$)$\mathrm{{}^{18}O}$($\alpha$,$\gamma$)$\mathrm{{}^{22}Ne}$
reaction sequence during He-burning. Most studies of the pulsation periods of
observed WDs use zero-metallicity DBV WD models when deriving the interior mass
fraction profiles, although see Camisassa et al. (2016) for a counterexample.
Alternatively, doubling $X$($\mathrm{{}^{14}N}$) at the expense of
$X$($\mathrm{{}^{4}He}$) and doubling $X$($\mathrm{{}^{22}Ne}$) at the expense
of $X$($\mathrm{{}^{12}C}$) is a model for a super-solar metallicity DBV WD.
Figure 10 compares the relative change in the low-order g-mode pulsation
periods of the zero and super-solar metallicity models. The period differences
are negative for the zero-metallicity model and positive for the super-solar
metallicity model. Zero-metallicity DBV WD models have longer periods than the
baseline model, which in turn has longer periods than the super-solar
metallicity model. The relative period differences of the zero and super-solar
metallicity models are mostly symmetric about the baseline model’s $Z$ = 0.02
metallicity. The period differences of the zero-metallicity models, averaged
over the $T_{\rm eff}$ evolution, are $\Delta P(g_{1,1})$ $\simeq$ $-$0.57 s,
$\Delta P(g_{1,2})$ $\simeq$ $-$0.40 s, $\Delta P(g_{2,1})$ $\simeq$ $-$0.52
s, and $\Delta P(g_{2,2})$ $\simeq$ $-$0.40 s. For the super-solar metallicity
models the averaged absolute period differences are $\Delta P(g_{1,1})$
$\simeq$ 0.66 s, $\Delta P(g_{1,2})$ $\simeq$ 0.45 s, $\Delta P(g_{2,1})$
$\simeq$ 0.46 s, and $\Delta P(g_{2,2})$ $\simeq$ 0.35 s. Over the $T_{\rm
eff}$ range of currently observed DBV WDs, the mean relative period change of
the dipole modes is 0.57% and the maximum of relative period change is 0.88%.
The relative period change of the quadrupole modes is smaller, with a mean of
0.33% and a maximum of 0.63%.
Figure 11 shows the relative differences in the $H^{2}$, $\mu_{I}$,
$\Gamma_{1}$, $\chi_{\rho}$ and $T$ contributions to $N^{2}$ of Equation 6 for
the zero and super-solar metallicity models. These changes collectively
determine the magnitude and sign of the period change relative to the baseline
model. For the zero-metallicity models, the combined positive changes in
$\mu_{I}$ and $T$ are counteracted by the collective negative changes from
$H^{2}$, $\Gamma_{1}$, and $\chi_{\rho}$. The net change is negative,
resulting in smaller $N^{2}$ and longer g-mode periods. Similar reasoning for
the super-solar metallicity models leads to a net positive change, resulting
in larger $N^{2}$ and smaller g-mode periods. The magnitude of the difference
in $H^{2}$ drives the overall result for both metallicity cases. The nearly
uniform changes in $H^{2}$ imply changes in the radii, and we find
$(R_{B}-R_{Z})/R_{B}$ $\simeq$ $\pm$0.4% with zero-metallicity models having
smaller radii and super-solar metallicity models having larger radii.
To interrogate the composition dependence further, the top panels of Figure 12
compare the mass fraction profiles of the $X$($\mathrm{{}^{22}Ne}$) $\simeq$
0.02 baseline and zero-metallicity models at 30,000 K, 15,000 K and 12,100 K as a
function of mass coordinate. Element diffusion is operating in both models.
The middle panels show the relative differences in these mass fraction
profiles, with the $\mathrm{{}^{22}Ne}$ and $\mathrm{{}^{14}N}$ offsets zeroed
out. The C and O differences at $\log(1-M_{r}/M)$ $\simeq$ $-$0.25, from
Figure 5, correspond to the C/O transition at $r/R$ $\simeq$ 0.5. The He
difference at $\log(1-M_{r}/M)$ $\simeq$ $-$1.0 corresponds to the rise of He
at $r/R$ $\simeq$ 0.75. Similarly, the C, O and He differences at
$\log(1-M_{r}/M)$ $\simeq$ $-$2.0 map to He dominating the composition at
$r/R$ $\simeq$ 0.9. These relative differences are the largest at 30,000 K,
reaching $\simeq$ 7.5% for $\mathrm{{}^{16}O}$ and $\simeq$ $-$6% for
$\mathrm{{}^{4}He}$. The relative differences at 15,000 K and 12,100 K have
about the same magnitude, $\simeq$ 7.5% for $\mathrm{{}^{16}O}$ and $\simeq$
$-$1% for $\mathrm{{}^{4}He}$. The relative mass fraction differences span a
larger range of $\log(1-M_{r}/M)$ as the models cool due to element diffusion.
The bottom panels of Figure 12 show the corresponding relative difference in
the $\mu_{I}$ profiles. As $\mu_{I}$ weights each mass fraction by the inverse
of its atomic weight, $\mu_{I}$ = 1/($\sum_{i}X_{i}/A_{i}$), the relative differences in the
mass fraction profiles are reduced in the $\mu_{I}$ profiles. The $\mu_{I}$
profile for 12,100 K in terms of a mass coordinate is the same as the
$\mu_{I}$ profile in Figure 11 in terms of a radial coordinate.
We also computed the relative period differences between the
$X$($\mathrm{{}^{22}Ne}$) $\simeq$ 0.02 baseline and zero-metallicity model
with diffusion turned off to disentangle structural and diffusion effects. The
results are shown in Figure 13. While there is a slight variation from the
zero-metallicity gray curves shown in Figure 10, mostly in the higher order
$g_{10,1}$ and $g_{10,2}$ modes, the magnitude of the relative differences
remains the same. This further suggests that the period shifts are a direct
consequence of the presence or absence of $\mathrm{{}^{22}Ne}$.
Figure 12: Top Panels: Mass fraction profiles for 0.56
$\mathrm{M}_{\odot}$ baseline (colored curves) and zero-metallicity (black
dashed curves) models at $T_{\rm eff}$ $\simeq$ 30,000 K, 15,000 K, and 12,100
K. Middle Panels: Relative differences in mass fraction profiles, where we
have zeroed out the $\mathrm{{}^{22}Ne}$ and $\mathrm{{}^{14}N}$ offsets from
$\mathrm{{}^{12}C}$ and $\mathrm{{}^{4}He}$ respectively. Bottom Panel:
Relative differences in $\mu_{I}$. Figure 13: Top to Bottom: Relative period
differences of the $g_{1,1}$, $g_{1,2}$, $g_{2,1}$, $g_{2,2}$, $g_{10,1}$ and
$g_{10,2}$ modes between the baseline model, PB, and the zero-metallicity WD
model, PZ, with diffusion turned off.
Figure 14: Mass fraction profiles for 0.52 $\mathrm{M}_{\odot}$ (left column)
and 0.73 $\mathrm{M}_{\odot}$ (right column) ab initio DB WD models at $T_{\rm
eff}$ $\simeq$ 30,000 K (top), 15,000 K (middle), and at the end of the
evolution (bottom). Initial mass fraction profiles are shown as solid curves
and the diffusing mass fraction profiles are shown as dotted curves.
Figure 15: Relative period differences of the $g_{1,1}$, $g_{1,2}$, $g_{2,1}$,
$g_{2,2}$, $g_{10,1}$ and $g_{10,2}$ modes for 0.526 $\mathrm{M}_{\odot}$ (left
column) and 0.729 $\mathrm{M}_{\odot}$ (right column). Differences are between
the baseline model, PB, a zero-metallicity WD model (gray curves) where the
$\mathrm{{}^{14}N}$ and $\mathrm{{}^{22}Ne}$ have been put into
$\mathrm{{}^{4}He}$ and $\mathrm{{}^{12}C}$ respectively, and a super-solar
metallicity model (green curves) where the $\mathrm{{}^{14}N}$ and
$\mathrm{{}^{22}Ne}$ of the baseline model are doubled.
## 4 Trends in the period changes with the white dwarf mass
Using the same physics and numerical choices as for the 0.56
$\mathrm{M}_{\odot}$ baseline model, we evolved a $Z$ = 0.02, 1.1
$\mathrm{M}_{\odot}$ ZAMS stellar model from the pre-main sequence to a 0.526
$\mathrm{M}_{\odot}$ DB WD, and a $Z$ = 0.02, 3.6 $\mathrm{M}_{\odot}$ ZAMS
model to a 0.729 $\mathrm{M}_{\odot}$ DB WD. This initial to final mass
mapping is similar to Table 1 of Camisassa et al. (2016). Relative to the 0.56
$\mathrm{M}_{\odot}$ baseline model, the 0.526 $\mathrm{M}_{\odot}$ WD model
has a thicker He-layer and a less extended
$X$($\mathrm{{}^{22}Ne}$) profile. Conversely, the 0.729 $\mathrm{M}_{\odot}$ WD model
has a smaller $\mathrm{{}^{12}C}$ bump, a thinner He-layer, and a more
extended $X$($\mathrm{{}^{22}Ne}$) profile. These mass fraction profiles were
imposed on 0.526 $\mathrm{M}_{\odot}$ and 0.729 $\mathrm{M}_{\odot}$ ab initio
WD models, respectively.
Figure 14 shows the diffusion of these initial mass fraction profiles as the
ab initio WD models cool to $T_{\rm eff}$ $\simeq$ 30,000 K, then $\simeq$
15,000 K and finally $\simeq$ 12,000 K (corresponding to the termination at
$\log(L/\mathrm{L}_{\odot})$ = $-$2.5). Element diffusion is more pronounced
for the more massive 0.729 $\mathrm{M}_{\odot}$ DB WD model due to its larger
surface gravity. An enhancement forms in the $X$($\mathrm{{}^{22}Ne}$) profile
at $\log(1-M_{r}/M$) $\simeq$ $-2.0$ by the time the 0.729
$\mathrm{M}_{\odot}$ model has cooled to $T_{\rm eff}$ $\simeq$ 30,000 K. As
the model further cools, the $X$($\mathrm{{}^{22}Ne}$) bump grows in amplitude
as it propagates inwards toward the center through the He-dominated outer
layers. The $X$($\mathrm{{}^{22}Ne}$) bump generates an increase in the local
$N^{2}$ in the regions it propagates through from a locally larger $\mu_{I}$
and a smaller compensating $H^{2}$. The regions trailing the
$X$($\mathrm{{}^{22}Ne}$) bump are depleted of $X$($\mathrm{{}^{22}Ne}$),
causing a decrease in the local $N^{2}$ in these regions.
We find longer low-order g-mode periods for the more massive WD, consistent
with Camisassa et al. (2016). As was done for the 0.56 $\mathrm{M}_{\odot}$
baseline model, we replace $X$($\mathrm{{}^{14}N}$) with
$X$($\mathrm{{}^{4}He}$) and $X$($\mathrm{{}^{22}Ne}$) with
$X$($\mathrm{{}^{12}C}$) to generate a zero-metallicity ab initio DB WD model.
We also double $X$($\mathrm{{}^{14}N}$) at the expense of
$X$($\mathrm{{}^{4}He}$) and double $X$($\mathrm{{}^{22}Ne}$) at the expense
of $X$($\mathrm{{}^{12}C}$) to generate a super-solar metallicity DB WD.
Figure 15 compares the relative change in the low-order g-mode pulsation
periods of the zero and super-solar metallicity 0.526 $\mathrm{M}_{\odot}$ and
0.729 $\mathrm{M}_{\odot}$ DB WD models. As for the 0.56 $\mathrm{M}_{\odot}$
baseline model, the relative period differences are mostly symmetric about the
reference model’s $Z$ = 0.02 metallicity. For the 0.526 $\mathrm{M}_{\odot}$
models, over the $T_{\rm eff}$ range of currently observed DBV WDs, the mean
relative period change of the dipole modes is 0.99% and the maximum of
relative period change is 1.43%. The relative period change of the quadrupole
modes is smaller, with a mean of 0.25% and a maximum of 0.43%. For the 0.729
$\mathrm{M}_{\odot}$ models, the mean relative period change of the dipole
modes is 0.65% and the maximum of relative period change is 1.02%. The
relative period change of the quadrupole modes is again smaller, with a mean
of 0.40% and a maximum of 0.65%. These values are commensurate with the mean
and maximum relative period changes found for the 0.56 $\mathrm{M}_{\odot}$
baseline model.
There are a few trends in the relative period differences with respect to the
WD mass. For the zero-metallicity $n$ = 2 and $n$ = 10 g-modes, the average
relative differences in the observed $T_{\rm eff}$ range increase with
increasing mass. For example, as the WD mass is increased from 0.526
$\mathrm{M}_{\odot}$ to 0.560 $\mathrm{M}_{\odot}$, we find the average
relative period differences increase by factors of 1.74, 1.22, 2.43, and 1.46
for the $g_{2,1}$, $g_{2,2}$, $g_{10,1}$, and $g_{10,2}$ modes, respectively.
As the WD mass is further increased from 0.560 $\mathrm{M}_{\odot}$ to 0.729
$\mathrm{M}_{\odot}$, we find additional magnification factors of 1.21, 1.29,
1.21, and 1.26 for the $g_{2,1}$, $g_{2,2}$, $g_{10,1}$, and $g_{10,2}$ modes,
respectively. The absence of $\mathrm{{}^{22}Ne}$ causes a greater deviation
from the reference metallicity model as the WD mass increases.
The $g_{2,1}$ and $g_{2,2}$ g-modes show a trend in the local minimum as the
WD mass increases. For the 0.526 $\mathrm{M}_{\odot}$ model, the $g_{2,1}$
g-mode has a local minimum at $T_{\rm eff}$ $\lessapprox$ 20,000 K. For the
0.560 $\mathrm{M}_{\odot}$ baseline model, this local minimum crosses zero at
$T_{\rm eff}$ $\simeq$ 20,000 K. For the 0.729 $\mathrm{M}_{\odot}$ model, the
local minimum is deeper and crosses zero at $T_{\rm eff}$ $\simeq$ 25,000 K.
These trends with mass track the point at which the energy lost by the cooling
WD is no longer dominated by neutrino emission.
## 5 Discussion
We explored changes in the adiabatic low-order g-mode pulsation periods of
0.526, 0.560, and 0.729 $\mathrm{M}_{\odot}$ DB WD models due to the presence,
absence, and enhancement of $\mathrm{{}^{22}Ne}$ as the models cool through
the observed range of effective temperatures. We found mean relative period
shifts of $\Delta P/P$ $\simeq$ $\pm$ 0.5% for the low-order dipole and
quadrupole g-mode pulsations within the observed effective temperature window,
with a range of $\Delta P/P$ that depends on the specific g-mode, mass
fraction of $\mathrm{{}^{22}Ne}$, effective temperature, and mass of the WD
model. Shifts in the pulsation periods due to the presence, absence, or
enhancement of $X$($\mathrm{{}^{22}Ne}$) mostly arise from a competition
between the pressure scale height and ion mean molecular weight.
Low-mass DAV WDs, the ZZ Ceti class of stars, have pulsation periods in the
100$-$1500 s range (e.g., Vincent et al., 2020). Comparing low-mass DAV WDs
produced from stellar evolution models with and without diffusion of
$\mathrm{{}^{22}Ne}$, Camisassa et al. (2016) find that the
$\mathrm{{}^{22}Ne}$ sedimentation induces mean period differences of $\simeq$
3 s, reaching maximum period differences of $\simeq$ 11 s. For the more
massive DAV WD models, where sedimentation of $\mathrm{{}^{22}Ne}$ is
stronger, they find mean period differences of $\simeq$ 15 s between when
diffusion is on and off, and a maximum period difference of $\simeq$ 47 s.
Comparatively, our article focuses on DBV WD models, the evolution of the
pulsation periods as the DBV WD models cool, and the impact of
$\mathrm{{}^{22}Ne}$ being present, absent, or enhanced in the WD interior.
Nevertheless, we conducted an experiment of turning element diffusion off in
our 0.56 $\mathrm{M}_{\odot}$ baseline model. At
$\rm\log(L/\mathrm{L}_{\odot})$ = $-2.5$, we find an absolute mean difference
for $n$ = 1 to $n$ = 11 of $\simeq$ 16 s, with a maximum period difference at
$n$ = 9 of $\simeq$ 56 s. This maximum difference equates to a $\simeq$ 7%
relative difference between when diffusion is on and off. Our period changes
are slightly higher than those found in Camisassa et al. (2016)’s 0.833
$\mathrm{M}_{\odot}$ model, and much larger than the differences found in
their 0.576 $\mathrm{M}_{\odot}$ model. These differences could be a result of
DAV versus DBV models, as DAV models have different cooling times than DBV
models. In addition, Camisassa et al. (2016) compute their period differences
at $\rm\log(L/\mathrm{L}_{\odot})=-2.80$ and
$\rm\log(L/\mathrm{L}_{\odot})=-2.93$ for their 0.576 and 0.833
$\mathrm{M}_{\odot}$ models, respectively. These are dimmer than the
$\rm\log(L/\mathrm{L}_{\odot})$ = $-2.5$ used for our calculations. Our
calculations extend up to a maximum radial order of 11 at this luminosity,
while Camisassa et al. (2016) use a maximum radial order of 50.
Giammichele et al. (2018) compare the g-mode pulsation periods of a pure
oxygen core surrounded by a pure helium envelope with those from an oxygen-
dominated core with $X$($\mathrm{{}^{22}Ne}$) = 0.02 surrounded by a pure
helium envelope. They find including $\mathrm{{}^{22}Ne}$ yields shorter
periods, with mean period differences of $\simeq$ 0.1%. We find a mean period
shift that is about 5 times larger in our 0.56 $\mathrm{M}_{\odot}$ baseline
model. This difference may be caused by the contrast in the composition of the
models, which in turn causes variances in the local mean molecular weight and
pressure scale height scaling described by Equation 6.
Are 1% or less period differences important? The g-mode periods of observed
DBV WD are found from a Fourier analysis of the photometric light curves and
are typically given to 6$-$7 significant figures of precision. Usually zero-
metallicity WD models (i.e., without $\mathrm{{}^{22}Ne}$) are fit to the
observed g-mode periods and other properties (e.g., $T_{\rm eff}$, $\log g$).
The root-mean-square residuals to the $\simeq$ 150$-$400 s low-order g-mode
periods are typically in the range $\sigma_{\rm rms}$ $\lesssim$ 0.3 s (e.g.,
Bischoff-Kim et al., 2014), for a fit precision of $\sigma_{\rm rms}/P$
$\lesssim$ 0.3%. Our finding of a mean relative period shift of $\Delta P/P$
$\simeq$ $\pm$ 0.5% induced by including $\mathrm{{}^{22}Ne}$ in WD models
suggests a systematic offset may be present in the fitting process of specific
WDs when $\mathrm{{}^{22}Ne}$ is absent. Since part of the fitting process
involves adjusting the composition profiles of the model WD, this study on the
impact of $\mathrm{{}^{22}Ne}$ can inform inferences about the derived
interior mass fraction profiles. We encourage routinely including
$\mathrm{{}^{22}Ne}$ mass fraction profiles, informed by stellar evolution
models, to future generations of DBV WD model fitting processes.
The adiabatic low-order g-mode pulsation periods of our DB WD models depend
upon simplifying assumptions in the stellar evolution calculations (e.g.,
convective boundary layer mixing, shellular rotation), uncertainties (e.g.,
mass loss history, stripping of the residual thin H layer, thickness of the
He-dominated atmosphere), and unknown inherent systematics. We hypothesize
that these model dependencies and systematics could mostly cancel when
dividing one model result by another model result, such as when calculating
the relative period shifts $\Delta P/P$. We anticipate exploring a larger
range of models, similar in approach to Fields et al. (2016), to test this
conjecture in future studies.
The MESA project is supported by the National Science Foundation (NSF) under
the Software Infrastructure for Sustained Innovation program grants
(ACI-1663684, ACI-1663688, ACI-1663696). This research was also supported by
the NSF under grant PHY-1430152 for the Physics Frontier Center “Joint
Institute for Nuclear Astrophysics - Center for the Evolution of the Elements”
(JINA-CEE). A.T. is a Research Associate at the Belgian Scientific Research
Fund (F.R.S-FNRS). We acknowledge useful discussions at virtual Sky House
2020. This research made extensive use of the SAO/NASA Astrophysics Data
System (ADS).
## Appendix A Convergence Studies
In this appendix we demonstrate that the pulsation periods of the baseline
model are only weakly dependent on the details of the mass and temporal
resolution of the MESA \+ GYRE calculations.
A MESA parameter controlling the mass resolution is max_dq, the maximum
fraction of a model’s mass in one cell. That is, the minimum number of cells
in a model is $N_{\rm min\ cells}$ = 1/max_dq. We use $N_{\rm min\ cells}$ =
5,000 for all the results reported. MESA can also adaptively refine its mesh
based on a set of mesh functions. The maximum cell-to-cell variation of these
functions is maintained around the value of the control mesh_delta_coeff. We
use mesh_delta_coeff = 1 for all the results reported. Primarily as a result
of these two mesh parameters, the total number of cells in the baseline model
is $\simeq$ 8,000.
A MESA parameter controlling the time resolution is the largest change in the
central temperature allowed over a timestep, delta_lgT_cntr_limit. For all the
reported results, we use delta_lgT_cntr_limit = 0.001. MESA can also
adaptively adjust the timestep based on other criteria, but this setting
dominates the size of every timestep as the baseline WD model cools. The total
number of timesteps in the baseline model is $\simeq$ 1,000 and varies roughly
linearly with delta_lgT_cntr_limit.
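The bookkeeping above can be summarized in a few lines; max_dq and delta_lgT_cntr_limit are the MESA control names quoted in the text, while the total drop in $\log T_{c}$ over the cooling track is an assumed round number.

```python
# Sketch: resolution bookkeeping for the convergence study. The controls are
# the MESA names quoted in the text; dlogT_total is an assumed round number.
max_dq = 2.0e-4                       # maximum fraction of the mass per cell
N_min_cells = int(1.0 / max_dq)
print(N_min_cells)                    # -> 5000, as used for reported results

delta_lgT_cntr_limit = 1.0e-3         # max change in log10(T_c) per timestep
dlogT_total = 1.0                     # assumed total drop in log10(T_c)
print(dlogT_total / delta_lgT_cntr_limit)  # -> ~1000 timesteps, as in the text
```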
Figure 16 shows changes in the low-order g-mode periods for different $N_{\rm
min\ cells}$ as the models cool. The time resolution is held fixed at
delta_lgT_cntr_limit = 0.001. Our standard $N_{\rm min\ cells}$ = 5,000
baseline model is the basis of the comparison and shown as the horizontal
lines. A model with 10 times less mass resolution than our standard mass
resolution, $N_{\rm min\ cells}$ = 500, induces maximum relative period
changes of $\simeq$ 0.05% at $\simeq$ 30,000 K for $g_{1,1}$, $\simeq$ 0.07%
at $\simeq$ 35,000 K for $g_{1,2}$, $\simeq$ 0.07% at $\simeq$ 45,000 K for
$g_{2,1}$, and $\simeq$ 0.07% at $\simeq$ 45,000 K for $g_{2,2}$. A model with
5 times less mass resolution than our standard mass resolution, $N_{\rm min\
cells}$ = 1,000, reduces these maximum relative period changes by $\simeq$
20%. A model with 5 times more mass resolution than our standard mass
resolution, $N_{\rm min\ cells}$ = 25,000 causes maximum relative period
changes of 0.000022% at $g_{1,1}$ to 0.028% at $g_{10,1}$. These maximum
relative period changes are, respectively, a factor of $\simeq$ 20,000 to 20
smaller than the relative period change caused by including or excluding
$\mathrm{{}^{22}Ne}$.
Figure 17 shows changes in the low-order g-mode periods for different
delta_lgT_cntr_limit as the models cool. The mass resolution is held fixed at
$N_{\rm min\ cells}$ = 5,000. Our standard delta_lgT_cntr_limit = 0.001
baseline model is the basis of the comparison and shown as the horizontal
lines. A model with 10 times less time resolution, delta_lgT_cntr_limit =
0.01, causes maximum relative period changes of $\simeq$ $-$0.05% at $\simeq$
50,000 K for $g_{1,1}$, $\simeq$ 0.02% at $\simeq$ 50,000 K for $g_{1,2}$,
$\simeq$ $-$0.06% at $\simeq$ 40,000 K for $g_{2,1}$, $\simeq$ $-$0.05% at
$\simeq$ 45,000 K for $g_{2,2}$, $\simeq$ $-$0.25% at $\simeq$ 45,000 K for
$g_{10,1}$, and $\simeq$ $-$0.25% at $\simeq$ 50,000 K for $g_{10,2}$. A model
with 5 times less time resolution than our standard time resolution,
delta_lgT_cntr_limit = 0.005, reduces these maximum relative period changes by
$\simeq$ 10%. A model with 5 times more time resolution, delta_lgT_cntr_limit
= 0.0002, has average period changes of 0.00061 s for $g_{1,1}$, $-$0.00077 s
for $g_{1,2}$, 0.0034 s for $g_{2,1}$, 0.0010 s for $g_{2,2}$, 0.0021 s for
$g_{10,1}$, and 0.0014 s for $g_{10,2}$. The average period changes are a
factor of $\simeq$ 1000 smaller than the average period changes caused by
including or excluding $\mathrm{{}^{22}Ne}$.
Figure 16: Relative differences in the $g_{1,1}$, $g_{1,2}$, $g_{2,1}$,
$g_{2,2}$, $g_{10,1}$, $g_{10,2}$ pulsation periods for different minimum mass
resolutions as the baseline WD models cool. We use the notation $g_{n,\ell}$
for a g-mode of order $n$ and degree $\ell$. The minimum mass resolution of
5,000 cells, used for all the results reported, is shown by the black
horizontal lines. Figure 17: Relative differences in the $g_{1,1}$,
$g_{1,2}$, $g_{2,1}$, $g_{2,2}$, $g_{10,1}$, and $g_{10,2}$ pulsation periods
for different temporal resolutions as the baseline WD models cool. The largest
change in the central temperature allowed over a timestep,
delta_lgT_cntr_limit = 0.001, used for all the results reported, is shown by
the black horizontal lines.
## Appendix B Input Physics Details
In this appendix we briefly discuss the salient physics used in our MESA
models.
### B.1 Thermodynamics
The MESA r12115 equation of state (EOS) is a blend of the OPAL (Rogers &
Nayfonov, 2002), SCVH (Saumon et al., 1995), PTEH (Pols et al., 1995), HELM
(Timmes & Swesty, 2000), and PC (Potekhin & Chabrier, 2010) EOSes. The MESA
EOS also covers the late stages of WD cooling where the ions in the core
crystallize (e.g., Bauer et al., 2020). WD interiors lie in the PC region of
the MESA EOS, which provides a semi-analytic EOS treatment for arbitrary
composition. The default in MESA version 12115 is to account for each species
of ion with mass fraction greater than $10^{-3}$ when calling the PC EOS.
Therefore changing the interior composition in a WD model, such as including
or excluding $\mathrm{{}^{22}Ne}$, self-consistently changes the thermodynamics.
### B.2 Opacities
MESA r12115 divides the radiative opacity tables into two temperature regimes,
low ($T$ $\lesssim$ $10^{4}$ K) and high ($T$ $\gtrsim$ $10^{4}$ K). For the stellar
evolution calculations from the pre-MS to a WD we use the Ferguson et al.
(2005) low-temperature regions, and for the high-temperature regions we use
the OPAL Type I opacities (Iglesias & Rogers, 1996), smoothly transitioning to
the OPAL Type II opacities (Iglesias & Rogers, 1996) starting at the end of
core H-burning. In our WD models, the radiative opacities are provided by the
OPAL Type II tables, which are functions of the hydrogen mass fraction $X$, metal
mass fraction $Z$, and the C/O enhancements. Thus for the same temperature and
density, our $X$($\mathrm{{}^{22}Ne}$) $\rightarrow$ $X$($\mathrm{{}^{14}N}$)
replacement in Section 3.1 does not change the radiative opacities. Our
$X$($\mathrm{{}^{14}N}$) $\rightarrow$ $X$($\mathrm{{}^{4}He}$) and
$X$($\mathrm{{}^{22}Ne}$) $\rightarrow$ $X$($\mathrm{{}^{12}C}$)
replacements to generate a zero-metallicity ab initio DB WD in Section 3.2
decrease $Z$ in the He-dominated envelope and increase the C enhancement in
the interior. Conversely, our doubling $X$($\mathrm{{}^{14}N}$) at the expense
of $X$($\mathrm{{}^{4}He}$) and doubling $X$($\mathrm{{}^{22}Ne}$) at the
expense of $X$($\mathrm{{}^{12}C}$) to generate a super-solar metallicity ab
initio DB WD in Section 3.2 increase $Z$ in the He-dominated envelope and
decrease the C enhancement in the interior. Electron conduction opacities are
from Cassisi et al. (2007), which are the relevant opacity in the WD interior.
The conduction opacities are a function of the mean atomic number $\bar{Z}$,
which MESA evaluates using the full composition vector in each cell.
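The composition replacements described above are simple mass-fraction bookkeeping that preserves $\sum_{i}X_{i}=1$ in each cell. A minimal Python sketch of the two operations (hypothetical helpers for illustration only; MESA applies the analogous changes to the full composition vector of every cell):

```python
def swap_species(x, donor, receiver):
    """Move all of the donor's mass fraction into the receiver, e.g.
    X(22Ne) -> X(12C); the sum of the mass fractions is preserved."""
    x = dict(x)
    x[receiver] += x[donor]
    x[donor] = 0.0
    return x

def double_at_expense_of(x, species, reservoir):
    """Double X(species), taking the added mass from the reservoir, e.g.
    doubling X(22Ne) at the expense of X(12C)."""
    x = dict(x)
    dx = x[species]
    x[species] += dx
    x[reservoir] -= dx
    return x

# Illustrative interior composition of a CO WD cell (not our exact model):
cell = {"c12": 0.48, "o16": 0.50, "ne22": 0.02}
zero_Z = swap_species(cell, "ne22", "c12")            # ab initio low-Z interior
super_Z = double_at_expense_of(cell, "ne22", "c12")   # super-solar 22Ne interior
assert abs(sum(zero_Z.values()) - 1.0) < 1e-12
assert abs(sum(super_Z.values()) - 1.0) < 1e-12
```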
### B.3 Nuclear Reaction Networks
We use MESA’s mesa_49.net, a nuclear reaction network that follows 49 isotopes
from $\mathrm{{}^{1}H}$ to $\mathrm{{}^{34}S}$, including
$\mathrm{{}^{22}Ne}$. The impact of this reaction network on properties of CO
WDs from Monte Carlo stellar models is discussed by Fields et al. (2016). All
forward thermonuclear reaction rates are from the JINA reaclib version V2.2
2017-10-20 (Cyburt et al., 2010). Inverse rates are calculated directly from
the forward rates (those with positive $Q$-value) using detailed balance,
rather than using fitted rates. The nuclear partition functions used to
calculate the inverse rates are from Rauscher & Thielemann (2000). Electron
screening factors for both weak and strong thermonuclear reactions are from
Chugunov et al. (2007) with plasma parameters from Itoh et al. (1979). All the
weak rates are based (in order of precedence) on the tabulations of Langanke &
Martínez-Pinedo (2000), Oda et al. (1994), and Fuller et al. (1985). Thermal
neutrino energy losses are from Itoh et al. (1996).
### B.4 Mass Loss
The implementations of mass loss in MESA r12115 are based on a number of
observationally and theoretically motivated prescriptions, but uncertainties
remain on line-driven and dust-driven winds (Dupree, 1986; Willson, 2000;
Boulangier et al., 2019). We follow the mass loss settings used by the MIST
isochrones (Dotter, 2016; Choi et al., 2016), with a combination of the Reimers
mass loss prescription (Reimers, 1975) with $\eta$=0.1 on the Red Giant Branch
and a Blöcker mass loss prescription (Bloecker, 1995a) with $\eta$=0.5 on the
AGB.
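For orientation, the Reimers (1975) prescription scales as $\dot{M}\propto\eta\,LR/M$. A short Python sketch of the classic fitting formula in solar units (the stellar parameters below are illustrative values, not our models):

```python
def mdot_reimers(L, R, M, eta=0.1):
    """Reimers (1975) wind, Mdot = 4e-13 * eta * L * R / M with L, R, M in
    solar units; returns Msun/yr. eta = 0.1 matches the RGB setting above."""
    return 4.0e-13 * eta * L * R / M

# An illustrative bright RGB star: L = 1000 Lsun, R = 100 Rsun, M = 1 Msun.
print(f"{mdot_reimers(1.0e3, 1.0e2, 1.0):.1e} Msun/yr")  # ~ 4e-9 Msun/yr
```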
### B.5 Rotation and Magnetic Fields
MESA r12115 implements the inherently 3D process of rotation by making the 1D
shellular approximation (Zahn, 1992; Meynet & Maeder, 1997), where the angular
velocity is constant over isobars. The transport of angular momentum and
material due to rotationally induced instabilities is followed using a
diffusion approximation (e.g., Endal & Sofia, 1978; Pinsonneault et al., 1989;
Heger et al., 2000; Maeder & Meynet, 2003, 2004; Suijs et al., 2008) for the
dynamical shear instability, secular shear instability, Eddington-Sweet
circulation, Goldreich-Schubert-Fricke instability, and Spruit-Tayler dynamo.
See Heger et al. (2000) for a description of the different instabilities and
diffusion coefficients.
Magnetic fields are implemented in MESA using the formalism of Heger et al.
(2005), where a magnetic torque due to a dynamo (Spruit, 2002) allows angular
momentum to be transported inside the star. The azimuthal and radial
components of the magnetic field are modeled as $B_{\phi}$ $\sim$
$r\sqrt{(4\pi\rho)}\omega_{A}$ and $B_{r}$ $\sim$ $B_{\phi}/(rk)$
respectively, where $r$ is the radial coordinate, $\omega_{A}$ the Alfvén
frequency, and $k$ the wavenumber. These magnetic fields provide a torque $S$
= $B_{r}B_{\phi}/(4\pi)$ which slows down the rotation rate by decreasing the
amount of differential rotation (Heger et al., 2005).
We initialize rotation by imposing a solid body rotation law,
$\Omega/\Omega_{{\rm crit}}$ = 1.9 $\times$ $10^{-4}$, at the ZAMS. The ZAMS is defined
as the point where the nuclear burning luminosity is 99% of the total luminosity, and
the rotation rate is normalized by the surface critical rotation rate
$\Omega_{\rm crit}$ = $\sqrt{(1-L/L_{{\rm edd}})GM/R^{3}}$, where $G$ is the
gravitational constant, $M$ is the mass of the star, $R$ the stellar radius, $L$ the
luminosity and $L_{{\rm edd}}$ the Eddington luminosity. The initial magnetic
field is set to $B_{r}$ = $B_{\phi}$ = 0. Effects from rotationally induced
mass loss are not included.
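A back-of-the-envelope evaluation makes the scale of these settings concrete. The following Python sketch (with illustrative ZAMS values for an intermediate-mass progenitor, not our exact model) evaluates $\Omega_{\rm crit}$ and the resulting solid-body $\Omega$:

```python
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs
Msun, Rsun, Lsun = 1.989e33, 6.957e10, 3.828e33

def omega_crit(M, R, L, L_edd):
    """Surface critical rotation rate, sqrt((1 - L/L_edd) G M / R^3)."""
    return np.sqrt((1.0 - L / L_edd) * G * M / R**3)

# Illustrative ZAMS values for a ~3 Msun progenitor:
M, R, L = 3.0 * Msun, 2.0 * Rsun, 90.0 * Lsun
kappa_es = 0.34                              # electron-scattering opacity [cm^2/g]
L_edd = 4.0 * np.pi * G * M * c / kappa_es   # Eddington luminosity

Omega0 = 1.9e-4 * omega_crit(M, R, L, L_edd)  # the imposed solid-body rate
print(f"Omega_crit = {omega_crit(M, R, L, L_edd):.2e} rad/s, Omega_0 = {Omega0:.2e} rad/s")
```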
### B.6 Element Diffusion
Element diffusion is implemented in MESA r12115 following Thoul et al. (1994),
and described in Section 3 of Paxton et al. (2018). All isotopes in the
reaction network are categorized into classes according to their atomic
masses, each of which has a representative member whose properties are used to
calculate the diffusion velocities. Diffusion coefficients are calculated, by
default, according to Stanton & Murillo (2016), whose formalism is based on
binary collision integrals between each pair of species in the plasma. The
diffusion equation is then solved using the total mass fraction within each
class. From the ZAMS to the construction of the DB WD, we use the ten element
classes $\mathrm{{}^{1}H}$, $\mathrm{{}^{3}He}$, $\mathrm{{}^{4}He}$,
$\mathrm{{}^{12}C}$, $\mathrm{{}^{14}N}$, $\mathrm{{}^{16}O}$,
$\mathrm{{}^{20}Ne}$, $\mathrm{{}^{22}Ne}$, $\mathrm{{}^{24}Mg}$, and
$\mathrm{{}^{28}Si}$.
## References
* Abbott et al. (2020) Abbott, R., Abbott, T. D., Abraham, S., et al. 2020, ApJ, 900, L13, doi: 10.3847/2041-8213/aba493
* Alsing et al. (2020) Alsing, J., Peiris, H., Leja, J., et al. 2020, ApJS, 249, 5, doi: 10.3847/1538-4365/ab917f
* Althaus & Benvenuto (1997) Althaus, L. G., & Benvenuto, O. G. 1997, ApJ, 477, 313, doi: 10.1086/303686
* Althaus et al. (2020) Althaus, L. G., Gil Pons, P., Córsico, A. H., et al. 2020, arXiv e-prints, arXiv:2011.10439. https://arxiv.org/abs/2011.10439
* Arcones et al. (2017) Arcones, A., Bardayan, D. W., Beers, T. C., et al. 2017, Progress in Particle and Nuclear Physics, 94, 1, doi: 10.1016/j.ppnp.2016.12.003
* Baines & Gill (1969) Baines, P. G., & Gill, A. E. 1969, Journal of Fluid Mechanics, 37, 289, doi: 10.1017/S0022112069000553
* Balona & Ozuyar (2020) Balona, L. A., & Ozuyar, D. 2020, MNRAS, 493, 5871, doi: 10.1093/mnras/staa670
* Bauer & Bildsten (2018) Bauer, E. B., & Bildsten, L. 2018, ApJ, 859, L19, doi: 10.3847/2041-8213/aac492
* Bauer et al. (2020) Bauer, E. B., Schwab, J., Bildsten, L., & Cheng, S. 2020, arXiv e-prints, arXiv:2009.04025. https://arxiv.org/abs/2009.04025
* Becker & Iben (1979) Becker, S. A., & Iben, Jr., I. 1979, ApJ, 232, 831, doi: 10.1086/157345
* Becker & Iben (1980) —. 1980, ApJ, 237, 111, doi: 10.1086/157850
* Bell et al. (2019) Bell, K. J., Córsico, A. H., Bischoff-Kim, A., et al. 2019, A&A, 632, A42, doi: 10.1051/0004-6361/201936340
* Bildsten & Hall (2001) Bildsten, L., & Hall, D. M. 2001, ApJ, 549, L219
* Bischoff-Kim & Montgomery (2018) Bischoff-Kim, A., & Montgomery, M. H. 2018, AJ, 155, 187, doi: 10.3847/1538-3881/aab70e
* Bischoff-Kim et al. (2014) Bischoff-Kim, A., Østensen, R. H., Hermes, J. J., & Provencal, J. L. 2014, ApJ, 794, 39, doi: 10.1088/0004-637X/794/1/39
* Bischoff-Kim et al. (2019) Bischoff-Kim, A., Provencal, J. L., Bradley, P. A., et al. 2019, ApJ, 871, 13, doi: 10.3847/1538-4357/aae2b1
* Blöcker (2001) Blöcker, T. 2001, Ap&SS, 275, 1. https://arxiv.org/abs/astro-ph/0102135
* Bloecker (1995a) Bloecker, T. 1995a, A&A, 297, 727
* Bloecker (1995b) —. 1995b, A&A, 299, 755
* Borexino Collaboration et al. (2018) Borexino Collaboration, Agostini, M., Altenmüller, K., et al. 2018, Nature, 562, 505, doi: 10.1038/s41586-018-0624-y
* Borexino Collaboration et al. (2020) —. 2020, Nature, 587, 577
* Boulangier et al. (2019) Boulangier, J., Clementel, N., van Marle, A. J., Decin, L., & de Koter, A. 2019, MNRAS, 482, 5052, doi: 10.1093/mnras/sty2560
* Bravo et al. (1992) Bravo, E., Isern, J., Canal, R., & Labay, J. 1992, A&A, 257, 534
* Brown et al. (2013) Brown, J. M., Garaud, P., & Stellmach, S. 2013, ApJ, 768, 34, doi: 10.1088/0004-637X/768/1/34
* Buchler & Yueh (1976) Buchler, J. R., & Yueh, W. R. 1976, ApJ, 210, 440, doi: 10.1086/154847
* Camisassa et al. (2016) Camisassa, M. E., Althaus, L. G., Córsico, A. H., et al. 2016, ApJ, 823, 158, doi: 10.3847/0004-637X/823/2/158
* Cassisi et al. (2007) Cassisi, S., Potekhin, A. Y., Pietrinferni, A., Catelan, M., & Salaris, M. 2007, ApJ, 661, 1094
* Charpinet et al. (2019) Charpinet, S., Brassard, P., Giammichele, N., & Fontaine, G. 2019, A&A, 628, L2, doi: 10.1051/0004-6361/201935823
* Cheng et al. (2019) Cheng, S., Cummings, J. D., & Ménard, B. 2019, ApJ, 886, 100, doi: 10.3847/1538-4357/ab4989
* Choi et al. (2016) Choi, J., Dotter, A., Conroy, C., et al. 2016, ApJ, 823, 102, doi: 10.3847/0004-637X/823/2/102
* Chugunov et al. (2007) Chugunov, A. I., Dewitt, H. E., & Yakovlev, D. G. 2007, Phys. Rev. D, 76, 025028, doi: 10.1103/PhysRevD.76.025028
* Constantino et al. (2015) Constantino, T., Campbell, S. W., Christensen-Dalsgaard, J., Lattanzio, J. C., & Stello, D. 2015, MNRAS, 452, 123, doi: 10.1093/mnras/stv1264
* Constantino et al. (2017) Constantino, T., Campbell, S. W., & Lattanzio, J. C. 2017, MNRAS, 472, 4900, doi: 10.1093/mnras/stx2321
* Constantino et al. (2016) Constantino, T., Campbell, S. W., Lattanzio, J. C., & van Duijneveldt, A. 2016, MNRAS, 456, 3866, doi: 10.1093/mnras/stv2939
* Córsico et al. (2019) Córsico, A. H., Althaus, L. G., Miller Bertolami, M. M., & Kepler, S. O. 2019, A&A Rev., 27, 7, doi: 10.1007/s00159-019-0118-4
* Cyburt et al. (2010) Cyburt, R. H., Amthor, A. M., Ferguson, R., et al. 2010, ApJS, 189, 240, doi: 10.1088/0067-0049/189/1/240
* D’Antona & Mazzitelli (1990) D’Antona, F., & Mazzitelli, I. 1990, ARA&A, 28, 139, doi: 10.1146/annurev.aa.28.090190.001035
* De Gerónimo et al. (2019) De Gerónimo, F. C., Battich, T., Miller Bertolami, M. M., Althaus, L. G., & Córsico, A. H. 2019, A&A, 630, A100, doi: 10.1051/0004-6361/201834988
* deBoer et al. (2017) deBoer, R. J., Görres, J., Wiescher, M., et al. 2017, Reviews of Modern Physics, 89, 035007, doi: 10.1103/RevModPhys.89.035007
* Deloye & Bildsten (2002) Deloye, C. J., & Bildsten, L. 2002, ApJ, 580, 1077
* Demarque & Mengel (1971) Demarque, P., & Mengel, J. G. 1971, ApJ, 164, 317, doi: 10.1086/150841
* Denissenkov et al. (2013) Denissenkov, P. A., Herwig, F., Truran, J. W., & Paxton, B. 2013, ApJ, 772, 37, doi: 10.1088/0004-637X/772/1/37
* Dotter (2016) Dotter, A. 2016, ApJS, 222, 8, doi: 10.3847/0067-0049/222/1/8
* Dufour et al. (2017) Dufour, P., Blouin, S., Coutu, S., et al. 2017, in Astronomical Society of the Pacific Conference Series, Vol. 509, 20th European White Dwarf Workshop, ed. P. E. Tremblay, B. Gaensicke, & T. Marsh, 3. https://arxiv.org/abs/1610.00986
* Dupree (1986) Dupree, A. K. 1986, ARA&A, 24, 377, doi: 10.1146/annurev.aa.24.090186.002113
* Dziembowski (1971) Dziembowski, W. A. 1971, Acta Astron., 21, 289
* Endal & Sofia (1978) Endal, A. S., & Sofia, S. 1978, ApJ, 220, 279
* Farag et al. (2020) Farag, E., Timmes, F. X., Taylor, M., Patton, K. M., & Farmer, R. 2020, ApJ, 893, 133, doi: 10.3847/1538-4357/ab7f2c
* Farmer et al. (2015) Farmer, R., Fields, C. E., & Timmes, F. X. 2015, ApJ, 807, 184, doi: 10.1088/0004-637X/807/2/184
* Farmer et al. (2020) Farmer, R., Renzo, M., de Mink, S., Fishbach, M., & Justham, S. 2020, arXiv e-prints, arXiv:2006.06678. https://arxiv.org/abs/2006.06678
* Ferguson et al. (2005) Ferguson, J. W., Alexander, D. R., Allard, F., et al. 2005, ApJ, 623, 585, doi: 10.1086/428642
* Fields et al. (2016) Fields, C. E., Farmer, R., Petermann, I., Iliadis, C., & Timmes, F. X. 2016, ApJ, 823, 46, doi: 10.3847/0004-637X/823/1/46
* Fontaine & Brassard (2002) Fontaine, G., & Brassard, P. 2002, ApJ, 581, L33, doi: 10.1086/345787
* Fontaine & Brassard (2008) —. 2008, PASP, 120, 1043, doi: 10.1086/592788
* Freedman et al. (2008) Freedman, R. S., Marley, M. S., & Lodders, K. 2008, ApJS, 174, 504, doi: 10.1086/521793
* Frommhold et al. (2010) Frommhold, L., Abel, M., Wang, F., et al. 2010, Molecular Physics, 108, 2265, doi: 10.1080/00268976.2010.507556
* Fuller et al. (1985) Fuller, G. M., Fowler, W. A., & Newman, M. J. 1985, ApJ, 293, 1, doi: 10.1086/163208
* Garaud (2018) Garaud, P. 2018, Annual Review of Fluid Mechanics, 50, 275, doi: 10.1146/annurev-fluid-122316-045234
* García-Berro et al. (1997) García-Berro, E., Ritossa, C., & Iben, Jr., I. 1997, ApJ, 485, 765. http://stacks.iop.org/0004-637X/485/i=2/a=765
* Gautschy (2012) Gautschy, A. 2012, ArXiv e-prints. https://arxiv.org/abs/1208.3870
* Giacobbo & Mapelli (2018) Giacobbo, N., & Mapelli, M. 2018, MNRAS, 480, 2011, doi: 10.1093/mnras/sty1999
* Giammichele et al. (2017) Giammichele, N., Charpinet, S., Brassard, P., & Fontaine, G. 2017, A&A, 598, A109, doi: 10.1051/0004-6361/201629935
* Giammichele et al. (2018) Giammichele, N., Charpinet, S., Fontaine, G., et al. 2018, Nature, 554, 73, doi: 10.1038/nature25136
* Hansen & Kawaler (1994) Hansen, C. J., & Kawaler, S. D. 1994, Stellar Interiors. Physical Principles, Structure, and Evolution. (New York: Springer-Verlag), doi: 10.1007/978-1-4419-9110-2
* Heger et al. (2000) Heger, A., Langer, N., & Woosley, S. E. 2000, ApJ, 528, 368
* Heger et al. (2005) Heger, A., Woosley, S. E., & Spruit, H. C. 2005, ApJ, 626, 350
* Hekker & Christensen-Dalsgaard (2017) Hekker, S., & Christensen-Dalsgaard, J. 2017, A&A Rev., 25, 1, doi: 10.1007/s00159-017-0101-x
* Hermes et al. (2017) Hermes, J. J., Kawaler, S. D., Bischoff-Kim, A., et al. 2017, ApJ, 835, 277, doi: 10.3847/1538-4357/835/2/277
* Herwig (2005) Herwig, F. 2005, ARA&A, 43, 435
* Hon et al. (2018) Hon, M., Stello, D., & Zinn, J. C. 2018, ApJ, 859, 64, doi: 10.3847/1538-4357/aabfdb
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90
* Iben & Renzini (1983) Iben, Jr., I., & Renzini, A. 1983, ARA&A, 21, 271, doi: 10.1146/annurev.aa.21.090183.001415
* Iglesias & Rogers (1996) Iglesias, C. A., & Rogers, F. J. 1996, ApJ, 464, 943
* Isern et al. (1991) Isern, J., Hernanz, M., Mochkovitch, R., & Garcia-Berro, E. 1991, A&A, 241, L29
* Itoh et al. (1996) Itoh, N., Hayashi, H., Nishikawa, A., & Kohyama, Y. 1996, ApJS, 102, 411
* Itoh et al. (1979) Itoh, N., Totsuji, H., Ichimaru, S., & Dewitt, H. E. 1979, ApJ, 234, 1079
* Jones et al. (2013) Jones, S., Hirschi, R., Nomoto, K., et al. 2013, ApJ, 772, 150, doi: 10.1088/0004-637X/772/2/150
* Karakas & Lattanzio (2014) Karakas, A. I., & Lattanzio, J. C. 2014, ArXiv e-prints. https://arxiv.org/abs/1405.0062
* Kawaler et al. (1985) Kawaler, S. D., Winget, D. E., & Hansen, C. J. 1985, ApJ, 295, 547, doi: 10.1086/163398
* Koester (2010) Koester, D. 2010, Mem. Soc. Astron. Italiana, 81, 921
* Kutter & Savedoff (1969) Kutter, G. S., & Savedoff, M. P. 1969, ApJ, 156, 1021, doi: 10.1086/150033
* Langanke & Martínez-Pinedo (2000) Langanke, K., & Martínez-Pinedo, G. 2000, Nuclear Physics A, 673, 481, doi: 10.1016/S0375-9474(00)00131-7
* Lecoanet et al. (2016) Lecoanet, D., Schwab, J., Quataert, E., et al. 2016, ApJ, 832, 71, doi: 10.3847/0004-637X/832/1/71
* Lederer & Aringer (2009) Lederer, M. T., & Aringer, B. 2009, A&A, 494, 403, doi: 10.1051/0004-6361:200810576
* Maeder & Meynet (2003) Maeder, A., & Meynet, G. 2003, A&A, 411, 543
* Maeder & Meynet (2004) —. 2004, A&A, 422, 225
* Marchant & Moriya (2020) Marchant, P., & Moriya, T. J. 2020, A&A, 640, L18, doi: 10.1051/0004-6361/202038902
* Marigo & Aringer (2009) Marigo, P., & Aringer, B. 2009, A&A, 508, 1539, doi: 10.1051/0004-6361/200912598
* Metcalfe (2003) Metcalfe, T. S. 2003, ApJ, 587, L43, doi: 10.1086/375044
* Metcalfe et al. (2003) Metcalfe, T. S., Montgomery, M. H., & Kawaler, S. D. 2003, MNRAS, 344, L88, doi: 10.1046/j.1365-8711.2003.07128.x
* Metcalfe et al. (2002) Metcalfe, T. S., Salaris, M., & Winget, D. E. 2002, ApJ, 573, 803, doi: 10.1086/340796
* Meynet & Maeder (1997) Meynet, G., & Maeder, A. 1997, A&A, 321, 465
* Miles et al. (2016) Miles, B. J., van Rossum, D. R., Townsley, D. M., et al. 2016, ApJ, 824, 59, doi: 10.3847/0004-637X/824/1/59
* Mukhopadhyay et al. (2020) Mukhopadhyay, M., Lunardini, C., Timmes, F. X., & Zuber, K. 2020, ApJ, 899, 153, doi: 10.3847/1538-4357/ab99a6
* Oda et al. (1994) Oda, T., Hino, M., Muto, K., Takahara, M., & Sato, K. 1994, Atomic Data and Nuclear Data Tables, 56, 231, doi: 10.1006/adnd.1994.1007
* Parsons et al. (2016) Parsons, S. G., Rebassa-Mansergas, A., Schreiber, M. R., et al. 2016, MNRAS, 463, 2125, doi: 10.1093/mnras/stw2143
* Patton et al. (2017) Patton, K. M., Lunardini, C., Farmer, R. J., & Timmes, F. X. 2017, ApJ, 851, 6, doi: 10.3847/1538-4357/aa95c4
* Paxton et al. (2011) Paxton, B., Bildsten, L., Dotter, A., et al. 2011, ApJS, 192, 3, doi: 10.1088/0067-0049/192/1/3
* Paxton et al. (2013) Paxton, B., Cantiello, M., Arras, P., et al. 2013, ApJS, 208
* Paxton et al. (2015) Paxton, B., Marchant, P., Schwab, J., et al. 2015, ApJS, 220, 15, doi: 10.1088/0067-0049/220/1/15
* Paxton et al. (2018) Paxton, B., Schwab, J., Bauer, E. B., et al. 2018, ApJS, 234, 34, doi: 10.3847/1538-4365/aaa5a8
* Paxton et al. (2019) Paxton, B., Smolec, R., Schwab, J., et al. 2019, ApJS, 243, 10, doi: 10.3847/1538-4365/ab2241
* Pedersen et al. (2019) Pedersen, M. G., Chowdhury, S., Johnston, C., et al. 2019, ApJ, 872, L9, doi: 10.3847/2041-8213/ab01e1
* Pinsonneault et al. (1989) Pinsonneault, M. H., Kawaler, S. D., Sofia, S., & Demarque, P. 1989, ApJ, 338, 424
* Placco et al. (2020) Placco, V. M., Santucci, R. M., Yuan, Z., et al. 2020, arXiv e-prints, arXiv:2006.04538. https://arxiv.org/abs/2006.04538
* Pols et al. (1995) Pols, O. R., Tout, C. A., Eggleton, P. P., & Han, Z. 1995, MNRAS, 274, 964, doi: 10.1093/mnras/274.3.964
* Potekhin & Chabrier (2010) Potekhin, A. Y., & Chabrier, G. 2010, Contributions to Plasma Physics, 50, 82, doi: 10.1002/ctpp.201010017
* Prada Moroni & Straniero (2009) Prada Moroni, P. G., & Straniero, O. 2009, A&A, 507, 1575, doi: 10.1051/0004-6361/200912847
* Rauscher & Thielemann (2000) Rauscher, T., & Thielemann, F.-K. 2000, Atomic Data and Nuclear Data Tables, 75, 1, doi: 10.1006/adnd.2000.0834
* Reimers (1975) Reimers, D. 1975, Memoires of the Societe Royale des Sciences de Liege, 8, 369
* Rogers & Nayfonov (2002) Rogers, F. J., & Nayfonov, A. 2002, ApJ, 576, 1064
* Rose et al. (2020) Rose, B. M., Rubin, D., Cikota, A., et al. 2020, ApJ, 896, L4, doi: 10.3847/2041-8213/ab94ad
* Saumon et al. (1995) Saumon, D., Chabrier, G., & van Horn, H. M. 1995, ApJS, 99, 713, doi: 10.1086/192204
* Seaton (2005) Seaton, M. J. 2005, MNRAS, 362, L1, doi: 10.1111/j.1365-2966.2005.00019.x
* Serenelli & Fukugita (2005) Serenelli, A. M., & Fukugita, M. 2005, ApJ, 632, L33, doi: 10.1086/497535
* Simón-Díaz et al. (2018) Simón-Díaz, S., Aerts, C., Urbaneja, M. A., et al. 2018, A&A, 612, A40, doi: 10.1051/0004-6361/201732160
* Simpson et al. (2019) Simpson, C., Abe, K., Bronner, C., et al. 2019, ApJ, 885, 133, doi: 10.3847/1538-4357/ab4883
* Spruit (2002) Spruit, H. C. 2002, A&A, 381, 923
* Stanton & Murillo (2016) Stanton, L. G., & Murillo, M. S. 2016, Phys. Rev. E, 93, 043203, doi: 10.1103/PhysRevE.93.043203
* Suijs et al. (2008) Suijs, M. P. L., Langer, N., Poelarends, A.-J., et al. 2008, A&A, 481, L87. https://arxiv.org/abs/0802.3286
* Thoul et al. (1994) Thoul, A. A., Bahcall, J. N., & Loeb, A. 1994, ApJ, 421, 828, doi: 10.1086/173695
* Timmes & Swesty (2000) Timmes, F. X., & Swesty, F. D. 2000, ApJS, 126, 501, doi: 10.1086/313304
* Timmes et al. (2018) Timmes, F. X., Townsend, R. H. D., Bauer, E. B., et al. 2018, ApJ, 867, L30, doi: 10.3847/2041-8213/aae70f
* Townsend (2019a) Townsend, R. H. D. 2019a, MESA SDK for Linux, 20190503, Zenodo, doi: 10.5281/zenodo.2669541
* Townsend (2019b) —. 2019b, MESA SDK for Mac OS, 20190503, Zenodo, doi: 10.5281/zenodo.2669543
* Townsend et al. (2018) Townsend, R. H. D., Goldstein, J., & Zweibel, E. G. 2018, MNRAS, 475, 879, doi: 10.1093/mnras/stx3142
* Townsend & Teitler (2013) Townsend, R. H. D., & Teitler, S. A. 2013, MNRAS, 435, 3406, doi: 10.1093/mnras/stt1533
* Trampedach et al. (2014) Trampedach, R., Stein, R. F., Christensen-Dalsgaard, J., Nordlund, Å., & Asplund, M. 2014, MNRAS, 445, 4366, doi: 10.1093/mnras/stu2084
* Unno et al. (1989) Unno, W., Osaki, Y., Ando, H., Saio, H., & Shibahashi, H. 1989, Nonradial oscillations of stars (Tokyo: University of Tokyo Press)
* van der Walt et al. (2011) van der Walt, S., Colbert, S. C., & Varoquaux, G. 2011, Computing in Science Engineering, 13, 22, doi: 10.1109/MCSE.2011.37
* van Horn (1971) van Horn, H. M. 1971, in IAU Symposium, Vol. 42, White Dwarfs, ed. W. J. Luyten (Dordrecht: Springer), 97
* Vila (1966) Vila, S. C. 1966, ApJ, 146, 437, doi: 10.1086/148908
* Vincent et al. (2020) Vincent, O., Bergeron, P., & Lafrenière, D. 2020, AJ, 160, 252, doi: 10.3847/1538-3881/abbe20
* Weidemann (2000) Weidemann, V. 2000, A&A, 363, 647
* Willson (2000) Willson, L. A. 2000, ARA&A, 38, 573, doi: 10.1146/annurev.astro.38.1.573
* Winget et al. (2004) Winget, D. E., Sullivan, D. J., Metcalfe, T. S., Kawaler, S. D., & Montgomery, M. H. 2004, ApJ, 602, L109, doi: 10.1086/382591
* Yurchenko et al. (2011) Yurchenko, S. N., Barber, R. J., & Tennyson, J. 2011, MNRAS, 413, 1828, doi: 10.1111/j.1365-2966.2011.18261.x
* Zahn (1992) Zahn, J.-P. 1992, A&A, 265, 115
# Note on the Kato property of sectorial forms
Ralph Chill R. Chill, Institut für Analysis, Fakultät Mathematik, Technische
Universität Dresden, 01062 Dresden, Germany<EMAIL_ADDRESS>and
Sebastian Król S. Król, Faculty of Mathematics and Computer Science, Adam
Mickiewicz University Poznań, ul. Uniwersytetu Poznańskiego 4, 61-614 Poznań,
Poland<EMAIL_ADDRESS>
###### Abstract.
We characterise the Kato property of a sectorial form $\mathfrak{a}$, defined
on a Hilbert space ${V}$, with respect to a larger Hilbert space ${H}$ in
terms of two bounded, selfadjoint operators ${T}$ and ${Q}$ determined by the
imaginary part of $\mathfrak{a}$ and the embedding of ${V}$ into ${H}$,
respectively. As a consequence, we show that if a bounded selfadjoint operator
${T}$ on a Hilbert space ${V}$ is in the Schatten class $S_{p}({V})$ ($p\geq
1$), then the associated form
$\mathfrak{a}_{T}(\cdot,\cdot):=\langle(I+i{T})\cdot,\cdot\rangle_{V}$ has the
Kato property with respect to every Hilbert space ${H}$ into which ${V}$ is
densely and continuously embedded. This result is in a sense sharp. Another
result says that if ${T}$ and ${Q}$ commute then the form $\mathfrak{a}$ with
respect to ${H}$ possesses the Kato property.
###### 1991 Mathematics Subject Classification:
46D05
The second author was partially supported by the Alexander von Humboldt
Foundation and NCN grant UMO-2017/27/B/ST1/00078
## 1\. Introduction and preliminaries
Let $\mathfrak{a}:{V}\times{V}\to\mathbb{C}$ be a bounded, sectorial,
coercive, sesquilinear form on a complex Hilbert space ${V}$, which is densely
and continuously embedded into a second Hilbert space ${H}$. Then
$\mathfrak{a}$ induces a sectorial, invertible operator ${L}_{H}$ on ${H}$,
and Kato’s square root problem asks whether the domain of
${L}_{H}^{\frac{1}{2}}$ is equal to the form domain ${V}$. If this is the
case, then we say that the couple $(\mathfrak{a},{H})$ has the Kato property.
In this short note we characterise the Kato property of $(\mathfrak{a},{H})$
in terms of two bounded, selfadjoint operators ${T}$,
${Q}\in{\mathcal{L}}({V})$ determined by the imaginary part of $\mathfrak{a}$
and by the embedding of ${V}$ into ${H}$, respectively. We show that the Kato
property of $(\mathfrak{a},{H})$ is equivalent to the similarity of
${Q}(I+i{T})^{-1}$ to an accretive operator, or to the similarity of
$(I-{Q}+i{T})(I+{Q}+i{T})^{-1}$ to a contraction; see Theorem 2.1. The
established link to different characterisations known in the literature
provides an interesting connection between a variety of techniques and results
mainly from operator theory of bounded operators, harmonic analysis,
interpolation theory, or abstract evolution equations.
In particular, we show that if a bounded, selfadjoint operator ${T}$ on a
Hilbert space ${V}$ is in the Schatten class $S_{p}({V})$ for some $p\geq 1$,
then the associated form
$\mathfrak{a}_{T}(\cdot,\cdot):=\langle(I+i{T})\cdot,\cdot\rangle_{V}$ has the
Kato property with respect to every Hilbert space ${H}$ into which ${V}$ is
densely and continuously embedded; see Corollary 3.2. This result is in a
sense sharp; see Proposition 4.1.
On the other hand, if $\mathfrak{a}$ is an arbitrary bounded, sectorial,
coercive form on ${V}$, then for every nonnegative, injective operator
${Q}\in{\mathcal{L}}({V})$ which is of the form $I+P$ with $P\in S_{p}({V})$
($p\geq 1$), the pair $(\mathfrak{a},{H}_{Q})$ has the Kato property, where
${H}_{Q}$ is the completion of ${V}$ with respect to the inner product
$\langle{Q}\cdot,\cdot\rangle_{V}$; see Corollary 3.4. Another straightforward
consequence of Theorem 2.1 says that for every pair $({T},{Q})$ of
selfadjoint, commuting operators (with ${Q}$ being nonnegative and injective),
the form $\mathfrak{a}_{T}$ has the Kato property with respect to ${H}_{Q}$;
see Corollary 2.2.
We conclude this introduction with some preliminaries.
### 1.1. Forms
Let $\mathfrak{a}$ be a bounded, sesquilinear form on a complex Hilbert space
${V}$. Denote by $\mathfrak{a}^{*}$ the adjoint form of $\mathfrak{a}$, that
is, $\mathfrak{a}^{*}(u,v):=\overline{\mathfrak{a}(v,u)}$ for every
$u,v\in{V}$. Then we call
$\displaystyle\mathfrak{s}:=\operatorname{Re}\mathfrak{a}$
$\displaystyle:=(\mathfrak{a}+\mathfrak{a}^{*})/2\quad\text{ and}$
$\displaystyle\mathfrak{t}:=\operatorname{Im}\mathfrak{a}$
$\displaystyle:=(\mathfrak{a}-\mathfrak{a}^{*})/2i$
the real part and the imaginary part of $\mathfrak{a}$, respectively. Note
that $\mathfrak{s}=\operatorname{Re}\mathfrak{a}$ and
$\mathfrak{t}=\operatorname{Im}\mathfrak{a}$ are symmetric forms on ${V}$ and
$\mathfrak{a}=\mathfrak{s}+i\mathfrak{t}$. Throughout the following, we assume
that $\mathfrak{a}$ is coercive in the sense that
$\operatorname{Re}\mathfrak{a}(u,u)\geq\eta\,\|u\|_{V}^{2}$ for some $\eta>0$
and every $u\in{V}$. This means that
$\mathfrak{s}=\operatorname{Re}\mathfrak{a}$ is an equivalent inner product on
${V}$, and for simplicity we assume that $\mathfrak{s}$ is equal to the inner
product on ${V}$: $\mathfrak{s}(u,v)=\langle u,v\rangle_{V}$ ($u$, $v\in{V}$).
We shall also assume that $\mathfrak{a}$ is sectorial, that is, there exists
$\beta\geq 0$ such that
(1.1)
$|\operatorname{Im}\mathfrak{a}(u,u)|\leq\beta\,\operatorname{Re}\mathfrak{a}(u,u),\quad
u\in{V}.$
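For instance, if ${T}\in{\mathcal{L}}({V})$ is selfadjoint and $\mathfrak{a}_{T}(u,v)=\langle(I+i{T})u,v\rangle_{V}$ as in the abstract, then
$|\operatorname{Im}\mathfrak{a}_{T}(u,u)|=|\langle{T}u,u\rangle_{V}|\leq\|{T}\|\,\|u\|_{V}^{2}=\|{T}\|\,\operatorname{Re}\mathfrak{a}_{T}(u,u),$
so (1.1) holds with $\beta=\|{T}\|$.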
Let ${H}$ be a second Hilbert space such that ${V}$ is densely and
continuously embedded into ${H}$, that is, there exists a bounded, injective,
linear operator $j:{V}\to{H}$ with dense range. In the sequel we identify
${V}$ with $j({V})$. The embedding $j$ induces a bounded, linear embedding
${j}{{}^{\prime}}:{H}\to{V}^{\prime}$ (where ${V}^{\prime}$ is the space of
bounded, antilinear functionals on ${V}$) given by
${j}{{}^{\prime}}(u):=\langle u,\cdot\rangle_{H},\quad u\in{H}.$
Then we have the following picture:
${V}\xhookrightarrow{j\
}{H}\xhookrightarrow{{j}{{}^{\prime}}}{V}^{\prime}\text{ and
}{V}\xhookrightarrow{{j}{{}^{\prime}}j}{V}^{\prime}.$
We write also $J:={j}{{}^{\prime}}j$ for the linear embedding of ${V}$ into
the dual space ${V}^{\prime}$. As usual, ${V}^{\prime}$ is equipped with the
inner product $\langle u,v\rangle_{{V}^{\prime}}:=\langle
I_{V}u,I_{V}v\rangle_{V}$, where $I_{V}:{V}^{\prime}\rightarrow{V}$ is the
Riesz isomorphism.
### 1.2. Bounded operators associated with the pair $(\mathfrak{a},{H})$
Let $(\mathfrak{a},{H})$ be given as above. We define two associated bounded,
linear operators on ${V}$. In fact, by the Riesz-Fréchet representation
theorem, there exist two unique selfadjoint operators
${T}={T}_{\mathfrak{a}}$, ${Q}={Q}_{H}\in{\mathcal{L}}({V})$, such that
$\displaystyle\mathfrak{t}(u,v)$ $\displaystyle=\langle{T}u,v\rangle_{V}\text{
and}$ (1.2) $\displaystyle\langle u,v\rangle_{H}$
$\displaystyle=\langle{Q}u,v\rangle_{V}\text{ for every }u,\,v\in{V},$
and hence, by recalling our convention that
$\mathfrak{s}=\langle\cdot,\cdot\rangle_{V}$,
(1.3) $\mathfrak{a}(u,v)=\langle(I+i{T})u,v\rangle_{V}\text{ for every
}u,\,v\in{V}.$
Moreover, since $\langle\cdot,\cdot\rangle_{H}$ is an inner product, ${Q}$ is
nonnegative and injective. In fact, ${Q}=j^{*}j$, where $j^{*}:{H}\to{V}$ is
the Hilbert space adjoint of $j$.
Conversely, every selfadjoint operator ${T}\in{\mathcal{L}}({V})$ induces via
the equality (1.3) a bounded, sesquilinear, sectorial form $\mathfrak{a}$ on
${V}$ for which $\operatorname{Re}\mathfrak{a}$ coincides with the inner
product $\langle\cdot,\cdot\rangle_{V}$, and for which
$\operatorname{Im}\mathfrak{a}$ is represented by ${T}$. Similarly, every
nonnegative, injective operator ${Q}\in{\mathcal{L}}({V})$ induces via the
equality (1.2) an inner product
$\langle\cdot,\cdot\rangle_{H}:=\langle{Q}\cdot,\cdot\rangle_{V}$ on ${V}$,
and thus, by taking the completion, a Hilbert space ${H}_{Q}$ into which ${V}$
is densely and continuously embedded.
We say that the pair of operators $({T},{Q})$ is associated with the pair
$(\mathfrak{a},{H})$, or, conversely, the pair $(\mathfrak{a},{H})$ is
associated with the pair $({T},{Q})$.
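In finite dimensions the associated pair is easy to make concrete. The following NumPy sketch (purely illustrative; ${V}=\mathbb{C}^{n}$ with its standard inner product playing the role of $\mathfrak{s}$) builds a pair $({T},{Q})$ and checks the defining identities numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Selfadjoint T represents the imaginary part of the form; a positive
# definite Q represents the inner product of H restricted to V.
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = (B + B.conj().T) / 2
M = rng.standard_normal((n, n))
Q = M.T @ M + 1e-3 * np.eye(n)

# a(u, v) = <(I + iT)u, v>_V and <u, v>_H = <Qu, v>_V,
# with the convention <x, y> = sum_i x_i * conj(y_i).
a = lambda u, v: v.conj() @ ((np.eye(n) + 1j * T) @ u)
h = lambda u, v: v.conj() @ (Q @ u)

u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
assert abs(a(u, u).real - np.vdot(u, u).real) < 1e-9       # Re a(u,u) = ||u||_V^2
assert abs(a(u, u).imag - (u.conj() @ T @ u).real) < 1e-9  # Im a(u,u) = <Tu,u>_V
assert h(u, u).real > 0 and abs(h(u, u).imag) < 1e-9       # <.,.>_H is an inner product
```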
### 1.3. Unbounded operators associated with the pair $(\mathfrak{a},{H})$
Given a pair $(\mathfrak{a},{H})$ as above, we define also associated closed,
linear operators on ${H}$ and ${V}^{\prime}$.
First, we denote by ${L}_{H}:={L}_{\mathfrak{a},{H}}$ the, in general,
unbounded operator on ${H}$ given by
$\displaystyle{\mathcal{D}}({L}_{H})$ $\displaystyle:=\{u\in j({V}):\exists
f\in{H}\,\forall v\in{V}\,:\,\mathfrak{a}(j^{-1}u,v)=\langle
f,jv\rangle_{H}\},$ $\displaystyle{L}_{H}u$ $\displaystyle:=f.$
Second, we denote by ${L}_{{V}^{\prime}}:={L}_{\mathfrak{a},{V}^{\prime}}$ the
operator on ${V}^{\prime}$ which is given by
$\displaystyle{\mathcal{D}}({L}_{{V}^{\prime}})$
$\displaystyle:=({j}{{}^{\prime}}j)({V})=J({V}),$
$\displaystyle{L}_{{V}^{\prime}}u$
$\displaystyle:=\mathfrak{a}(J^{-1}u,\cdot).$
In a similar way we define the operators ${L}_{\mathfrak{s},{H}}$ and
${L}_{\mathfrak{s},{V}^{\prime}}$ associated with the real part
$\mathfrak{s}=\operatorname{Re}\mathfrak{a}$.
Recall that a closed, linear operator $(A,{\mathcal{D}}(A))$ on a Banach space
$X$ is called sectorial of angle $\theta\in(0,\pi)$ if
$\sigma(A)\subseteq\Sigma_{\theta}:=\{z\in\mathbb{C}:|\arg z|\leq\theta\},$
and if for every $\theta^{\prime}\in(\theta,\pi)$ one has
$\sup_{z\not\in\Sigma_{\theta^{\prime}}}\|zR(z,A)\|<\infty.$
We simply say that $A$ is sectorial if it is sectorial for some angle
$\theta\in(0,\pi)$. The numerical range of a closed, linear operator
$(A,{\mathcal{D}}(A))$ on a Hilbert space ${H}$ is the set
$W(A):=\{\langle Au,u\rangle_{H}:u\in{\mathcal{D}}(A),\,\|u\|_{H}=1\}.$
The operator $A$ is said to be $\theta$-accretive for $\theta\in(0,\pi)$, if
$W(A)\subseteq\Sigma_{\theta}$, that is, if
$|\arg\langle Au,u\rangle_{H}|\leq\theta\text{ for every
}u\in{\mathcal{D}}(A).$
If $\theta=\frac{\pi}{2}$, that is, $\operatorname{Re}\langle
Au,u\rangle_{H}\geq 0$ for every $u\in{\mathcal{D}}(A)$, we say that $A$ is
accretive.
Both operators ${L}_{H}$ and ${L}_{{V}^{\prime}}$ defined above are sectorial
for some angle $\theta\in(0,\frac{\pi}{2})$. Since $\mathfrak{a}$ is assumed
to be coercive, we have $0\in\rho({L}_{H})$ and
$0\in\rho({L}_{{V}^{\prime}})$, that is, both ${L}_{H}$ and
${L}_{{V}^{\prime}}$ are isomorphisms from their respective domains onto ${H}$
and ${V}^{\prime}$, respectively; see e.g. [14, Theorem 2.1, p. 58].
It is easy to check that the numerical range of ${L}_{H}$ is contained in the
sector $\Sigma_{\theta}$ with $\theta=\arctan\beta$ and in particular
${L}_{H}$ is $\theta$-accretive. As a consequence, by [8, Theorem 11.13],
${L}_{H}$ admits a bounded $H^{\infty}$ functional calculus. We refer the
reader to [8] or [4] for the background on fractional powers and $H^{\infty}$
functional calculus of sectorial operators.
## 2\. Characterisations of the Kato property
Let $(\mathfrak{a},{H})$ be as above, that is, $\mathfrak{a}$ is a bounded,
sectorial, coercive, sesquilinear form on a Hilbert space ${V}$ which embeds
densely and continuously into a second Hilbert space ${H}$. Let
${L}_{H}={L}_{\mathfrak{a},{H}}$ be defined as above. We say that the couple
$(\mathfrak{a},{H})$ has the Kato property if
${\mathcal{D}}({L}_{H}^{1/2})={V}$. By the Closed Graph Theorem, if
$(\mathfrak{a},{H})$ has the Kato property, then the norms
$\|{L}_{H}^{1/2}\cdot\|_{H}$ and $\|\cdot\|_{{V}}$ are equivalent on ${V}$.
According to Kato [6] and Lions [10], the coincidence of any two of the spaces
${\mathcal{D}}({L}_{H}^{1/2})$, ${\mathcal{D}}({L}_{H}^{*1/2})$ and ${V}$
implies the coincidence of all three.
Moreover, by Subsection 1.2, it is natural to say that a pair $({T},{Q})$ of
selfadjoint, bounded operators on a Hilbert space ${V}$, with ${Q}$ being
nonnegative and injective, has the Kato property, if the associated pair
$(\mathfrak{a}_{T},{H}_{Q})$ has the Kato property, where
$\mathfrak{a}_{T}(u,v)=\langle(I+i{T})u,v\rangle_{V}$ ($u$, $v\in{V}$) and
${H}_{Q}$ is the completion of $({V},\langle{Q}\cdot,\cdot\rangle_{V})$.
The main result of this section is the following characterisation of the Kato
property of $(\mathfrak{a},{H})$ in terms of the associated pair of bounded
operators $({T},{Q})$.
###### Theorem 2.1.
Let $({T},{Q})$ be the pair of operators associated with $(\mathfrak{a},{H})$
as above. Then the following assertions are equivalent:
* (i)
$(\mathfrak{a},{H})$ has the Kato property.
* (ii)
There exists a positive operator ${S}$ on ${V}$ such that
$\langle{Q}{S}(I+i{T})u,u\rangle_{V}\in\Sigma_{\theta}\text{ for every
}u\in{V}\text{ and some }\theta<\frac{\pi}{2}.$
* (ii’)
There exists a positive operator ${S}$ on ${V}$ such that
$\operatorname{Re}\,\langle{Q}{S}(I+i{T})u,u\rangle_{V}\geq 0\text{ for every
}u\in{V}.$
* (iii)
There exists a positive operator ${S}$ on ${V}$ such that
$\langle{S}(I-i{T})^{-1}{Q}u,u\rangle_{V}\in\Sigma_{\theta}\text{ for every
}u\in{V}\text{ and some }\theta<\frac{\pi}{2},$
that is, $(I-i{T})^{-1}{Q}$ is similar to a $\theta$-accretive operator, or,
equivalently, the operator $(I-i{T})^{-1}{Q}$ has a bounded
$H^{\infty}(\Sigma_{\theta})$ functional calculus.
* (iv)
There exists a positive operator ${S}$ on ${V}$ such that
$\langle{S}{Q}(I+i{T})^{-1}u,u\rangle_{V}\in\Sigma_{\theta}\text{ for every
}u\in{V}\text{ and some }\theta<\frac{\pi}{2},$
that is, ${Q}(I+i{T})^{-1}$ is similar to a $\theta$-accretive operator, or,
equivalently, the operator ${Q}(I+i{T})^{-1}$ has a bounded
$H^{\infty}(\Sigma_{\theta})$ functional calculus.
* (v)
The operator $(I-{Q}+i{T})(I+{Q}+i{T})^{-1}$ is polynomially bounded.
* (vi)
The operator $(I-{Q}+i{T})(I+{Q}+i{T})^{-1}$ is similar to a contraction.
Recall that, if ${T}$, ${Q}\in{\mathcal{L}}({V})$ are selfadjoint operators,
then ${Q}{T}$ is selfadjoint if and only if ${T}$ and ${Q}$ commute, or,
equivalently, if and only if $\langle{Q}{T}u,u\rangle_{V}\in\mathbb{R}$ for every $u\in{V}$.
Therefore, the above Theorem 2.1(ii’) gives the following sufficient condition
for $(\mathfrak{a},{H})$ to have the Kato property.
###### Corollary 2.2.
If ${T}$ and ${Q}$ commute, then $(\mathfrak{a},{H})$ has the Kato property.
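In finite dimensions the mechanism behind Corollary 2.2 is transparent: with ${S}=I$ one has $\langle{Q}(I+i{T})u,u\rangle_{V}=\langle{Q}u,u\rangle_{V}+i\langle{Q}{T}u,u\rangle_{V}$, whose real part is $\langle{Q}u,u\rangle_{V}\geq 0$ because $\langle{Q}{T}u,u\rangle_{V}$ is real, so condition (ii’) of Theorem 2.1 holds. A small NumPy illustration (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Commuting selfadjoint T and positive definite Q: realise both as
# functions of one symmetric matrix, diagonalised in a common basis.
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
w, U = np.linalg.eigh(A)
T = U @ np.diag(np.sin(w)) @ U.T
Q = U @ np.diag(np.exp(w)) @ U.T

for _ in range(100):
    u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    val = np.vdot(u, Q @ (np.eye(n) + 1j * T) @ u)  # <Q(I+iT)u, u>_V
    assert val.real >= -1e-10                       # accretivity with S = I
```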
We start with auxiliary results on the operators appearing in Theorem 2.1.
###### Lemma 2.3.
Let ${T}$ and ${Q}$ be selfadjoint, bounded operators on a Hilbert space
${V}$. Assume that ${Q}$ is nonnegative. Then, the operator
$A:={Q}(I+i{T})^{-1}\in{\mathcal{L}}({V})$ is sectorial of angle
$\theta<\frac{\pi}{2}$.
###### Proof.
By a standard argument based on the Neumann series expansion it is sufficient
to show that $\sup_{\operatorname{Re}z\leq 0}\|zR(z,A)\|<\infty$. Note that
for every $z\in\mathbb{C}$ with $\operatorname{Re}z\leq 0$ and every $u\in{V}$
we have
$\displaystyle\langle(z+iz{T}-{Q})u,u\rangle_{V}=\bigl(\operatorname{Re}z\,\|u\|_{V}^{2}-\operatorname{Im}z\,\langle{T}u,u\rangle_{V}-\langle{Q}u,u\rangle_{V}\bigr)+i\,\bigl(\operatorname{Im}z\,\|u\|_{V}^{2}+\operatorname{Re}z\,\langle{T}u,u\rangle_{V}\bigr),$
and therefore
$\displaystyle|\langle(z+iz{T}-{Q})u,u\rangle_{V}|^{2}=|z|^{2}\,\|u\|_{V}^{4}-2\operatorname{Re}z\,\|u\|_{V}^{2}\langle{Q}u,u\rangle_{V}+\bigl(\operatorname{Im}z\,\langle{T}u,u\rangle_{V}+\langle{Q}u,u\rangle_{V}\bigr)^{2}+(\operatorname{Re}z)^{2}\,\langle{T}u,u\rangle_{V}^{2}\geq|z|^{2}\,\|u\|_{V}^{4},$
where the last inequality uses $\operatorname{Re}z\leq 0$ and the nonnegativity of ${Q}$. By the Cauchy-Schwarz inequality, it follows that
$\|(z+iz{T}-{Q})u\|_{V}\geq|z|\,\|u\|_{V}.$
This inequality implies that $z+iz{T}-{Q}$ is injective and has closed range.
A duality argument, using similar estimates as above, shows that $z+iz{T}-{Q}$
has dense range, and therefore $z+iz{T}-{Q}$ is invertible for every
$z\in\mathbb{C}$ with $\operatorname{Re}z\leq 0$. Moreover, the above
inequality shows that
$\sup_{\operatorname{Re}z\leq 0}\|z(z+iz{T}-{Q})^{-1}\|\leq 1.$
As a consequence, $z-A=(z+iz{T}-{Q})(I+i{T})^{-1}$ is invertible for every
$z\in\mathbb{C}$ with $\operatorname{Re}z\leq 0$ and
$\sup_{\operatorname{Re}z\leq 0}\|zR(z,A)\|=\sup_{\operatorname{Re}z\leq
0}\|(I+i{T})\,z(z+iz{T}-{Q})^{-1}\|\leq\|I+i{T}\|.$
∎
Let $A\in{\mathcal{L}}({V})$ be a bounded, sectorial operator of angle
$\theta\in(0,\frac{\pi}{2})$, and let ${C}:=(I-A)(I+A)^{-1}$ be its Cayley
transform. Then the equality
$\displaystyle(z-1)(z-{C})^{-1}=\frac{z-1}{z+1}\,\left(\frac{z-1}{z+1}+A\right)^{-1}\,(I+A)$
shows that ${C}$ is a Ritt operator, that is,
$\sigma({C})\subseteq{\mathbb{D}}\cup\{1\}$ (where
$\mathbb{D}\subset\mathbb{C}$ is the open unit disk) and
$\sup_{|z|>1}\|(z-1)R(z,{C})\|<\infty.$
From this and the preceding lemma, we obtain the following statement.
###### Lemma 2.4.
The Cayley transform
${C}=(I-A)(I+A)^{-1}=(I-{Q}+i{T})(I+{Q}+i{T})^{-1}$
of the operator $A={Q}(I+i{T})^{-1}$ is a Ritt operator.
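Indeed, the second equality in Lemma 2.4 follows from $I\pm A=(I+i{T}\pm{Q})(I+i{T})^{-1}$, whence
$(I-A)(I+A)^{-1}=(I-{Q}+i{T})(I+i{T})^{-1}(I+i{T})(I+{Q}+i{T})^{-1}=(I-{Q}+i{T})(I+{Q}+i{T})^{-1}.$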
Recall that a bounded operator ${C}$ on a Hilbert space ${V}$ is a Ritt
operator if and only if it is power bounded and
$\sup_{n\in\mathbb{N}}n\|{C}^{n}-{C}^{n+1}\|<\infty$; see [12]. Furthermore, a
bounded operator ${C}$ on a Hilbert space is polynomially bounded if there
exists a constant $M\geq 0$ such that for every polynomial $p$ one has
$\|p({C})\|\leq M\,\sup_{|z|\leq 1}|p(z)|.$
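Both properties are easy to probe numerically in finite dimensions, where everything is automatic but the bounds can be observed. A NumPy sketch (illustrative only) builds the Cayley transform from a random pair $({T},{Q})$ and evaluates the power bound and the Ritt quantity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n)); T = (B + B.T) / 2              # selfadjoint
M = rng.standard_normal((n, n)); Q = M.T @ M + 0.1 * np.eye(n)  # positive definite

I = np.eye(n)
C = (I - Q + 1j * T) @ np.linalg.inv(I + Q + 1j * T)  # Cayley transform of Q(I+iT)^{-1}

powers = [np.linalg.matrix_power(C, k) for k in range(1, 201)]
power_bound = max(np.linalg.norm(P, 2) for P in powers)
ritt_bound = max(k * np.linalg.norm(powers[k - 1] - powers[k], 2)
                 for k in range(1, 200))
print(f"sup ||C^k|| ~ {power_bound:.3f},  sup k*||C^k - C^(k+1)|| ~ {ritt_bound:.3f}")
```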
The proof of Theorem 2.1 is a consequence of the characterisation of the Kato
property by means of the boundedness of the $H^{\infty}$ functional calculus
for the operator ${L}_{{V}^{\prime}}={L}_{\mathfrak{a},{V}^{\prime}}$ given by
Arendt in [1, Theorem 5.5.2, p.45].
###### Lemma 2.5.
Let ${L}_{{V}^{\prime}}={L}_{\mathfrak{a},{V}^{\prime}}$ be the operator
associated with $(\mathfrak{a},{H})$ as above. Then the following assertions
are equivalent:
* (i)
$(\mathfrak{a},{H})$ has the Kato property.
* (ii)
${L}_{{V}^{\prime}}$ has a bounded $H^{\infty}$ functional calculus.
Moreover, if (i) or (ii) holds, then ${L}_{{V}^{\prime}}$ has a bounded
$H^{\infty}(\Sigma_{\theta})$ functional calculus for every
$\theta>\arctan\beta$ with $\beta$ as in (1.1).
For the convenience of the reader we recall the proof of this result using our
notation and with slight modifications.
###### Proof.
First of all, note that the operator ${L}_{H}$ can be expressed as the
operator $j^{\prime-1}{L}_{{V}^{\prime}}j^{\prime}$ with domain
$\{u\in{H}:j^{\prime}u\in{\mathcal{D}}({L}_{{V}^{\prime}})\textrm{ and
}{L}_{{V}^{\prime}}j^{\prime}u\in j^{\prime}({H})\}$. Then
$({\lambda}-{L}_{H})^{-1}=j^{\prime-1}({\lambda}-{L}_{{V}^{\prime}})^{-1}j^{\prime}$
for every ${\lambda}\notin\Sigma_{\theta}$, and by the definition of the
square roots via contour integrals,
${L}_{H}^{-\frac{1}{2}}=j^{\prime-1}{L}_{{V}^{\prime}}^{-\frac{1}{2}}j^{\prime}.$
(i)$\Rightarrow$(ii) Therefore, if
${\mathcal{D}}({L}_{H}^{\frac{1}{2}})={\mathcal{R}}({L}_{H}^{-\frac{1}{2}})=j({V})$,
then
$\displaystyle{\mathcal{D}}({L}_{{V}^{\prime}}^{\frac{1}{2}})$
$\displaystyle:={L}_{{V}^{\prime}}^{-\frac{1}{2}}({V}^{\prime})={L}_{{V}^{\prime}}^{-\frac{1}{2}}{L}_{{V}^{\prime}}(j^{\prime}j({V}))$
$\displaystyle={L}_{{V}^{\prime}}^{-\frac{1}{2}}{L}_{{V}^{\prime}}(j^{\prime}{L}_{H}^{-\frac{1}{2}}({H}))={L}_{{V}^{\prime}}^{-\frac{1}{2}}{L}_{{V}^{\prime}}({L}_{{V}^{\prime}}^{-\frac{1}{2}}j^{\prime}({H}))$
$\displaystyle=j^{\prime}({H}),$
where the last equality follows from
${L}^{\frac{1}{2}}_{{V}^{\prime}}{L}_{{V}^{\prime}}^{\frac{1}{2}}={L}_{{V}^{\prime}}$;
see e.g. [8, Theorem 15.15, p.289]. By [14, Corollary 2.3, p.113],
$j^{\prime}({H})=[{V}^{\prime},{\mathcal{D}}({L}_{\mathfrak{s},{V}^{\prime}})]_{\frac{1}{2}}$,
where on ${\mathcal{D}}({L}_{\mathfrak{s},{V}^{\prime}})=J({V})$ we consider
the graph norm of ${L}_{\mathfrak{s},{V}^{\prime}}$, that is,
$\|{L}_{\mathfrak{s},{V}^{\prime}}\cdot\|_{{V}^{\prime}}+\|\cdot\|_{{V}^{\prime}}$.
Since for $v\in{V}$ we have
$I_{V}{L}_{{V}^{\prime}}Jv=(I+i{T})v\quad\textrm{ and }\quad
I_{V}{L}_{\mathfrak{s},{V}^{\prime}}Jv=v,$
it follows that
$\|{L}_{{V}^{\prime}}Jv\|_{{V}^{\prime}}=\|(I+i{T})v\|_{V}\quad\textrm{and
}\quad\|{L}_{\mathfrak{s},{V}^{\prime}}Jv\|_{{V}^{\prime}}=\|v\|_{V}.$
Consequently, the invertibility of $I+i{T}$ implies that the graph norm of
${L}_{\mathfrak{s},{V}^{\prime}}$ is equivalent to the graph norm of
${L}_{{V}^{\prime}}={L}_{\mathfrak{a},{V}^{\prime}}$ on
${\mathcal{D}}({L}_{{V}^{\prime}})=J({V})$. Therefore, we get that
$[{V}^{\prime},{\mathcal{D}}({L}_{{V}^{\prime}})]_{\frac{1}{2}}={\mathcal{D}}({L}_{{V}^{\prime}}^{\frac{1}{2}}).$
Hence, by [14, Theorem 16.3, p.532], ${L}_{{V}^{\prime}}$ has a bounded
$H^{\infty}$ functional calculus.
(ii)$\Rightarrow$(i) On the other hand, if ${L}_{{V}^{\prime}}$ has a bounded
$H^{\infty}$ functional calculus, then as above we get
${\mathcal{D}}({L}_{{V}^{\prime}}^{\frac{1}{2}})=[{V}^{\prime},{\mathcal{D}}({L}_{{V}^{\prime}})]_{\frac{1}{2}}=j^{\prime}({H})$.
Therefore,
$\displaystyle{\mathcal{D}}({L}_{{H}}^{\frac{1}{2}})$
$\displaystyle:={L}_{{H}}^{-\frac{1}{2}}j^{\prime-1}(j^{\prime}({H}))={L}_{{H}}^{-\frac{1}{2}}j^{\prime-1}{L}_{{V}^{\prime}}^{-\frac{1}{2}}({V}^{\prime})$
$\displaystyle=j^{\prime-1}j^{\prime}{L}_{{H}}^{-\frac{1}{2}}j^{\prime-1}{L}_{{V}^{\prime}}^{-\frac{1}{2}}({V}^{\prime})=j^{\prime-1}{L}_{{V}^{\prime}}^{-\frac{1}{2}}{L}_{{V}^{\prime}}^{-\frac{1}{2}}({V}^{\prime})=j^{\prime-1}j^{\prime}j({V})$
$\displaystyle=j({V}).$
For the last statement about the angle of the $H^{\infty}$ functional calculus,
first note that
${\mathcal{D}}({L}_{{V}^{\prime}}^{\frac{1}{2}})=[{V}^{\prime},{\mathcal{D}}({L}_{{V}^{\prime}})]_{\frac{1}{2}}=j^{\prime}({H})$
yields
$L_{H}=j^{\prime-1}L_{{V}^{\prime}}^{-\frac{1}{2}}L_{{V}^{\prime}}L_{{V}^{\prime}}^{\frac{1}{2}}j^{\prime}.$
Moreover, by the Closed Graph Theorem, the operator $j^{\prime}$ is an
isomorphism from $H$ onto
${\mathcal{D}}({L}_{{V}^{\prime}}^{\frac{1}{2}})=j^{\prime}(H)$ equipped with
the graph norm. Since the operator $L_{H}$ is $(\arctan\beta)$-accretive,
it is sectorial of angle $\arctan\beta$, and consequently so is the
operator $L_{{V}^{\prime}}$. Finally, for example, by [14, Theorem 16.3,
p.532] (cf. [14, Remark 16.2, p.536]), $L_{{V}^{\prime}}$ has a bounded
$H^{\infty}$ functional calculus in any sectorial domain $\Sigma_{\theta}$
with $\theta>\arctan\beta$. This completes the proof. ∎
###### Proof of Theorem 2.1.
Assume that $(\mathfrak{a},{H})$ has the Kato property. By Lemma 2.5,
${L}_{{V}^{\prime}}$ has a bounded $H^{\infty}(\Sigma_{\theta})$ functional
calculus for every $\theta>\arctan\beta$. Fix
$\theta\in(\arctan\beta,\frac{\pi}{2})$. By the characterisation of the
boundedness of the $H^{\infty}$ functional calculus, [8, Theorem 11.13,
p.229], ${L}_{{V}^{\prime}}$ is $\theta$-accretive with respect to an
equivalent inner product $\langle\cdot,\cdot\rangle_{\theta}$ on
${V}^{\prime}$. Let $\widetilde{S}\in{\mathcal{L}}({V}^{\prime})$ be the
positive operator such that
$\langle\cdot,\cdot\rangle_{\theta}=\langle\widetilde{{S}}\cdot,\cdot\rangle_{{V}^{\prime}}$.
Then ${S}:=I_{V}\widetilde{{S}}I_{V}^{-1}\in{\mathcal{L}}({V})$ is a positive
operator on ${V}$.
First, note that $I_{V}{L}_{{V}^{\prime}}Jv=(I+i{T})v$ and $I_{V}Jv={Q}v$ for
every $v\in{V}$. Then,
$\displaystyle\langle{L}_{{V}^{\prime}}Jv,Jv\rangle_{\theta}$
$\displaystyle=\langle\widetilde{S}{L}_{{V}^{\prime}}Jv,Jv\rangle_{{V}^{\prime}}$
$\displaystyle=\langle
I_{V}\widetilde{S}I^{-1}_{V}\,I_{V}{L}_{{V}^{\prime}}Jv,I_{V}Jv\rangle_{V}$
$\displaystyle=\langle{S}(I+i{T})v,{Q}v\rangle_{V}$
$\displaystyle=\langle{Q}{S}(I+i{T})v,v\rangle_{V}$
for every $v\in{V}$. Therefore, the operator ${Q}{S}(I+i{T})$ is
$\theta$-accretive with respect to $\langle\cdot,\cdot\rangle_{V}$. Therefore,
(i)$\Rightarrow$(ii)$\Rightarrow$(ii′). The implication (ii′)$\Rightarrow$(i)
follows from a similar argument.
The equivalences (ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv) follow from
the following chain of equivalences which holds for every positive operator
${S}\in{\mathcal{L}}({V})$ and $\theta\in(0,\frac{\pi}{2}]$:
$\displaystyle{Q}{S}(I+i{T})\text{ is }\theta\text{-accretive}$
$\displaystyle\Leftrightarrow\quad$ $\displaystyle\forall
u\in{V}:\,\langle{Q}{S}(I+i{T})u,u\rangle_{V}\in\Sigma_{\theta}$
$\displaystyle\Leftrightarrow\quad$ $\displaystyle\forall
u\in{V}:\,\langle{Q}{S}u,(I+i{T})^{-1}u\rangle_{V}\in\Sigma_{\theta}$
$\displaystyle\Leftrightarrow\quad$ $\displaystyle\forall u\in{V}:\,\langle
u,{S}^{\frac{1}{2}}{Q}(I+i{T})^{-1}{S}^{-\frac{1}{2}}u\rangle_{V}\in\Sigma_{\theta}$
$\displaystyle\Leftrightarrow\quad$
$\displaystyle{S}^{\frac{1}{2}}(I-i{T})^{-1}{Q}{S}^{-\frac{1}{2}}\text{ is
}\theta\text{-accretive}$ $\displaystyle\Leftrightarrow\quad$
$\displaystyle{S}^{\frac{1}{2}}{Q}(I+i{T})^{-1}{S}^{-\frac{1}{2}}\text{ is
}\theta\text{-accretive}.$
For (iv)$\Leftrightarrow$(v), set $A:={Q}(I+i{T})^{-1}$, and note that its
Cayley transform is given by
${C}:=\phi(A)=(I-{Q}+i{T})(I+{Q}+i{T})^{-1},$
where $\phi$ is the conformal map $\phi(z):=(1-z)(1+z)^{-1}$ from
$\Sigma_{\frac{\pi}{2}}$ onto $\{|z|<1\}$. Moreover, for every polynomial
$p$ we have
$p({C})=(p\circ\phi)(A)\quad\textrm{and
}\quad\sup_{z\in\Sigma_{\theta}}|(p\circ\phi)(z)|\leq\sup_{|z|<1}|p(z)|.$
Therefore, the boundedness of the $H^{\infty}(\Sigma_{\theta})$ functional
calculus of $A$ with $\theta\leq\frac{\pi}{2}$ yields the polynomial
boundedness of its Cayley transform ${C}$. For the converse, by Runge’s
theorem, it is easy to see that $A$ has a bounded
$\mathcal{R}(\Sigma_{\frac{\pi}{2}})$ functional calculus; here
$\mathcal{R}(\Sigma_{\frac{\pi}{2}})$ stands for the algebra of rational
functions with poles outside $\Sigma_{\frac{\pi}{2}}$. Then, the boundedness
of the $H^{\infty}(\Sigma_{\frac{\pi}{2}})$ functional calculus follows again
by an approximation argument and McIntosh’s convergence theorem [11, Section
5, Theorem]; see also [4, Proposition 3.13, p.66]. Since $A$ is
$\theta$-sectorial for some $\theta<\frac{\pi}{2}$, [8, Theorem 11.13] gives
(iv).
Finally, for (v)$\Rightarrow$(vi), since the Cayley transform ${C}=\phi(A)$ is
a Ritt operator, see Lemma 2.4, by [9, Theorem 5.1], it is similar to a
contraction. The converse is a consequence of the von Neumann inequality. ∎
###### Remark 2.6.
(a) By [8, Theorem 11.13 H7)] one can show that if $(\mathfrak{a},{H})$ has
the Kato property, then (iv) in Theorem 2.1 holds with
$S:=\int_{\Sigma_{\pi-\theta}}A^{*}e^{zA^{*}}Ae^{zA}\textrm{
d}z=\int_{\Sigma_{\pi-\theta}}|Ae^{zA}|^{2}\textrm{ d}z,$
where $A:={Q}(I-i{T})^{-1}$ and the integral exists in the weak operator
topology.
(b) Note that in the case when the operator ${Q}$ is invertible on ${V}$, or
equivalently, the inner products on ${H}$ and ${V}$ are equivalent, then
$(\mathfrak{a},{H})$ has the Kato property simply because
${L}_{H}\in{\mathcal{L}}({H})={\mathcal{L}}({V})$. It should be pointed out,
that in this case the similarity to a contraction of the operator
(2.1) ${C}=(I-{Q}+i{T})(I+{Q}+i{T})^{-1}=({T}-i(I-{Q}))({T}-i(I+{Q}))^{-1},$
which is stated in Theorem 2.1 (vi), can be proved in a straightforward way.
Indeed, in [3, Theorem 1] Fan proved that an operator
${C}\in{\mathcal{L}}({V})$ with $1\in\rho({C})$ is similar to a contraction if
and only if it can be expressed in the form
$(E-iF)(E-iG)^{-1},$
for some selfadjoint operators $E$, $F$, $G\in{\mathcal{L}}({V})$ such that
$G+F$ and $G-F$ are positive with $0\in\rho(G-F)$. Therefore, in the case of
${Q}$ being invertible, the above stated expression of the operator ${C}$,
that is, (2.1), satisfies these conditions.
## 3\. Kato property and triangular operators
Recall that a bounded operator $\Delta$ on a Hilbert space ${V}$ is triangular
if there exists a constant $M\geq 0$ such that
(3.1) $\left|\sum_{j=1}^{n}\sum_{k=1}^{j}\langle\Delta
u_{j},v_{k}\rangle_{V}\right|\leq
M\sup_{|a_{j}|=1}\left\|\sum_{j=1}^{n}a_{j}u_{j}\right\|\sup_{|a_{j}|=1}\left\|\sum_{j=1}^{n}a_{j}v_{j}\right\|$
for every $n\in\mathbb{N}$ and every $u_{1}$, $\dots$, $u_{n}$, $v_{1}$,
$\dots$, $v_{n}\in{V}$. By a theorem of Kalton [5, Theorem 5.5], an operator
$\Delta$ on ${V}$ is triangular if and only if
$\sum_{n=1}^{\infty}\frac{s_{n}(\Delta)}{n+1}<\infty$, where
$(s_{n}(\Delta))_{n\in\mathbb{N}}$ is the sequence of singular values of
$\Delta$. Therefore, the Schatten-von Neumann classes are included in the
class of triangular operators. We refer the reader to [5, Section 5] for basic
properties of triangular operators. One interest in the class of triangular
operators stems from the following perturbation theorem by Kalton [5, Theorem
7.7].
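Kalton's summability criterion is easy to evaluate for concrete singular value sequences. A Python sketch (finite truncations for illustration only; in finite dimensions every operator is trivially triangular, so only the growth of the partial sums is of interest):

```python
import numpy as np

def kalton_sum(singular_values):
    """Partial sum of Kalton's criterion, sum_n s_n / (n + 1); an operator
    is triangular if and only if this converges as n -> infinity."""
    s = np.sort(singular_values)[::-1]                 # s_1 >= s_2 >= ...
    return float(np.sum(s / (np.arange(1, len(s) + 1) + 1)))

N = 100_000
trace_class_like = 1.0 / np.arange(1, N + 1)         # s_n = 1/n
log_decay = 1.0 / np.log(np.arange(1, N + 1) + 2.0)  # s_n = 1/log(n+2)

print(kalton_sum(trace_class_like))  # sum 1/(n(n+1)) telescopes to 1: triangular
print(kalton_sum(log_decay))         # grows like log log N: not triangular in the limit
```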
###### Lemma 3.1.
Let $A$ and ${B}$ be two sectorial operators on a Hilbert space ${H}$. Assume
that ${B}$ has a bounded $H^{\infty}$ functional calculus, and that
$A=(I+\Delta){B}$ for some triangular operator $\Delta$. Then $A$ has a
bounded $H^{\infty}$ functional calculus, too.
Combining this result with Theorem 2.1, we show that the Kato property of
$(\mathfrak{a},{H})$ is preserved under certain triangular perturbations of
the imaginary part of $\mathfrak{a}$, and in particular, that for every
bounded, selfadjoint operators ${T}$ and ${Q}$ on a Hilbert space ${V}$ such
that ${T}$ is triangular and ${Q}$ is nonnegative and injective, the pair
$({T},{Q})$ has the Kato property, that is, ${Q}(I-i{T})^{-1}$ is similar to
an accretive operator on ${V}$.
###### Corollary 3.2.
Let $\mathfrak{a}$ and $\mathfrak{b}$ be two sectorial forms on ${V}$ with the
same real parts, that is,
$\operatorname{Re}\mathfrak{a}=\operatorname{Re}\mathfrak{b}$. Let the
imaginary parts $\mathfrak{t}_{\mathfrak{a}}$ and
$\mathfrak{t}_{\mathfrak{b}}$ of $\mathfrak{a}$ and $\mathfrak{b}$ be
determined by selfadjoint operators ${T}_{\mathfrak{a}}$,
${T}_{\mathfrak{b}}\in{\mathcal{L}}({V})$, respectively. Assume that
$(\mathfrak{b},{H})$ has the Kato property, and that
${T}_{\mathfrak{a}}-{T}_{\mathfrak{b}}$ is a triangular operator. Then
$(\mathfrak{a},{H})$ has the Kato property, too.
In particular, if ${T}_{\mathfrak{a}}$ is a triangular operator, then
$(\mathfrak{a},{H})$ has the Kato property for every Hilbert space ${H}$ into
which ${V}$ is densely and continuously embedded.
###### Proof of Corollary 3.2.
Note that by the second resolvent equation we get
$(I-i{T}_{\mathfrak{a}})^{-1}{Q}-(I-i{T}_{\mathfrak{b}})^{-1}{Q}=i(I-i{T}_{\mathfrak{a}})^{-1}({T}_{\mathfrak{a}}-{T}_{\mathfrak{b}})(I-i{T}_{\mathfrak{b}})^{-1}{Q}.$
Therefore, since the operator
$i(I-i{T}_{\mathfrak{a}})^{-1}({T}_{\mathfrak{a}}-{T}_{\mathfrak{b}})$ is
triangular, the claim follows from Lemma 2.3, Lemma 3.1, and Theorem 2.1
(iii).
Alternatively, note that Arendt’s result, Lemma 2.5, used in the proof of
Theorem 2.1, can be directly applied to get Corollary 3.2. Indeed, set
$\Delta:={L}_{\mathfrak{a},{V}^{\prime}}{L}_{\mathfrak{b},{V}^{\prime}}^{-1}-I,$
so that
${L}_{\mathfrak{a},{V}^{\prime}}=(I+\Delta){L}_{\mathfrak{b},{V}^{\prime}}$.
By our assumption, ${L}_{\mathfrak{b},{V}^{\prime}}$ admits a bounded
$H^{\infty}$ functional calculus. We also recall that both
${L}_{\mathfrak{a},{V}^{\prime}}$ and ${L}_{\mathfrak{b},{V}^{\prime}}$ are
sectorial operators. By Lemma 3.1, it is thus sufficient to show that the
operator $\Delta$ is triangular. Since
$\operatorname{Re}\mathfrak{a}=\operatorname{Re}\mathfrak{b}$, we get
$\Delta=i\,({L}_{\operatorname{Im}\mathfrak{a},{V}^{\prime}}-{L}_{\operatorname{Im}\mathfrak{b},{V}^{\prime}})\,{L}_{\mathfrak{b},{V}^{\prime}}^{-1}$.
Fix $u$ and $v$ in ${V}^{\prime}$. Then
$\displaystyle\langle\Delta u,v\rangle_{{V}^{\prime}}$ $\displaystyle=\langle
I_{V}\Delta u,I_{V}v\rangle_{V}$
$\displaystyle=i[({L}_{\operatorname{Im}\mathfrak{a},{V}^{\prime}}-{L}_{\operatorname{Im}\mathfrak{b},{V}^{\prime}}){L}_{\mathfrak{b},{V}^{\prime}}^{-1}u](I_{V}v)$
$\displaystyle=i\,\operatorname{Im}\mathfrak{a}\bigl{(}({L}_{\mathfrak{b},{V}^{\prime}}J)^{-1}u,\,I_{V}v\bigr{)}-i\,\operatorname{Im}\mathfrak{b}\bigl{(}({L}_{\mathfrak{b},{V}^{\prime}}J)^{-1}u,\,I_{V}v\bigr{)}$
$\displaystyle=i\langle({L}_{\mathfrak{b},{V}^{\prime}}J)^{-1}u,({T}_{\mathfrak{a}}-{T}_{\mathfrak{b}})I_{V}v\rangle_{V}.$
Since ${L}_{\mathfrak{b},{V}^{\prime}}J$ is an isomorphism from ${V}$ onto
${V}^{\prime}$, the triangularity of $\Delta$ is equivalent to the
triangularity of ${T}_{\mathfrak{a}}-{T}_{\mathfrak{b}}$.
For the proof of the second statement, it is sufficient to apply the part just proved with a symmetric form $\mathfrak{b}$, that is, with $\operatorname{Im}\mathfrak{b}=0$. ∎
In an analogous way, by combining Lemma 3.1 with Theorem 2.1 (iii), we get
the following perturbation result for the real parts of forms.
###### Corollary 3.3.
Let $\mathfrak{a}$ and $\mathfrak{b}$ be two sectorial forms on a space ${V}$
with the same imaginary parts, that is,
$\mathfrak{t}_{\mathfrak{a}}=\mathfrak{t}_{\mathfrak{b}}$, and equivalent real parts $\mathfrak{s}_{\mathfrak{a}}$ and $\mathfrak{s}_{\mathfrak{b}}$. Let ${S}\in{\mathcal{L}}({V})$ be such that $\mathfrak{s}_{\mathfrak{a}}({S}u,v)=\mathfrak{s}_{\mathfrak{b}}(u,v)$ for all $u,v\in{V}$.
If ${S}-I$ is triangular and $(\mathfrak{b},{H})$ has the Kato property, then
$(\mathfrak{a},{H})$ has the Kato property, too.
###### Proof.
According to Theorem 2.1 (iii), if $({H},\langle\cdot,\cdot\rangle)$ is a
Hilbert space such that $(\mathfrak{b},{H})$ has the Kato property, then the
operator $(I-i{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}$, where
${Q}_{\mathfrak{b}}$ is a nonnegative, injective operator on
$({V},\mathfrak{s}_{\mathfrak{b}})$ with $\langle
u,v\rangle=\mathfrak{s}_{\mathfrak{b}}({Q}_{\mathfrak{b}}u,v)$ ($u$,
$v\in{V}$), has a bounded $H^{\infty}$-functional calculus. The corresponding
operator ${Q}_{\mathfrak{a}}$ for the form $\mathfrak{a}$ is equal to
${S}{Q}_{\mathfrak{b}}$ and ${T}_{\mathfrak{a}}={S}{T}_{\mathfrak{b}}$. Hence,
$(I-i{T}_{\mathfrak{a}})^{-1}{Q}_{\mathfrak{a}}=(I-i{S}{T}_{\mathfrak{b}})^{-1}{S}{Q}_{\mathfrak{b}}.$
Then, since
$(i{T}_{\mathfrak{b}}-i{S}{T}_{\mathfrak{b}})(I-i{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}$
is triangular and
$(I-i{S}{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}-(I-i{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}=(I-i{S}{T}_{\mathfrak{b}})^{-1}(i{T}_{\mathfrak{b}}-i{S}{T}_{\mathfrak{b}})(I-i{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}$
the operator $(I-i{S}{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}$ has a bounded
$H^{\infty}$ functional calculus. Moreover, note that
$\displaystyle(I-i{S}{T}_{\mathfrak{b}})^{-1}{S}{Q}_{\mathfrak{b}}-(I-i{S}{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}}$
$\displaystyle=(I-i{S}{T}_{\mathfrak{b}})^{-1}({S}-I){Q}_{\mathfrak{b}}$
$\displaystyle=\Delta(I-i{S}{T}_{\mathfrak{b}})^{-1}{Q}_{\mathfrak{b}},$
where
$\Delta=(I-i{S}{T}_{\mathfrak{b}})^{-1}({S}-I)(I-i{S}{T}_{\mathfrak{b}})$
is triangular. Therefore, again by Lemma 3.1,
$(I-i{S}{T}_{\mathfrak{b}})^{-1}{S}{Q}_{\mathfrak{b}}$ has a bounded
$H^{\infty}$ functional calculus, which completes the proof. ∎
Finally, for the sake of completeness, we state a perturbation result for the
operator ${Q}$ generating the Hilbert space ${H}$ in $(\mathfrak{a},{H})$. Its
proof follows directly from Lemma 3.1 and Theorem 2.1(iv).
###### Corollary 3.4.
Assume that $(\mathfrak{a},{H})$ has the Kato property. Let ${Q}$ be the
nonnegative, injective operator on ${V}$ associated with ${H}$. Then,
$(\mathfrak{a},{H}_{\hat{{Q}}})$ has the Kato property for every Hilbert space
${H}_{\hat{{Q}}}$ with $\hat{{Q}}$ being a triangular perturbation of ${Q}$,
that is, $\hat{{Q}}=(I+\Delta){Q}$ for some triangular operator $\Delta$.
## 4\. Optimality of Corollary 3.2
Below, we show that the class of triangular operators is, in a sense, the
largest subclass of compact operators for which Corollary 3.2 holds. Recall that, by [5, Theorem 5.5], a compact operator ${T}$ is not triangular if $\sum_{n\in\mathbb{N}}\frac{s_{n}({T})}{n}=\infty$, where $s_{n}({T})$ (${n\in\mathbb{N}}$) stands for the $n$-th singular value of ${T}$.
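For intuition, this divergence criterion can be probed numerically on finite truncations. The following is a minimal NumPy sketch of ours, and only a heuristic, since triangularity concerns the full infinite sequence of singular values.

```python
import numpy as np

def weighted_singular_sum(T: np.ndarray) -> float:
    """Partial sum of sum_n s_n(T)/n over the singular values of a finite
    matrix T; unbounded growth along increasing truncations suggests the
    divergence in [5, Theorem 5.5], i.e., non-triangularity."""
    s = np.linalg.svd(T, compute_uv=False)   # singular values, descending
    n = np.arange(1, len(s) + 1)
    return float(np.sum(s / n))

# Singular values ~ 1/log(n+1) decay too slowly: sum 1/(n log n) diverges,
# so an operator with such singular values cannot be triangular.
for N in (100, 1_000, 10_000):
    T = np.diag(1.0 / np.log(np.arange(2, N + 2)))
    print(N, weighted_singular_sum(T))   # grows without bound in N
```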
###### Proposition 4.1.
Let $(a_{n})_{{n\in\mathbb{N}}}$ be a nonincreasing sequence of positive
numbers with $\sum_{{n\in\mathbb{N}}}\frac{a_{n}}{n}=\infty$. Then, there exists a sectorial form $\mathfrak{a}$ such that the singular values $(s_{n}({T}_{\mathfrak{a}}))_{{n\in\mathbb{N}}}$ of the operator ${T}_{\mathfrak{a}}$ determined by the imaginary part of $\mathfrak{a}$ satisfy $s_{n}({T}_{\mathfrak{a}})\preceq a_{n}$ $({n\in\mathbb{N}})$, but the couple $(\mathfrak{a},{H})$ fails to have the Kato property for some Hilbert space ${H}$ into which ${V}$ is densely and continuously embedded.
Equivalently, there exist a selfadjoint, compact operator ${T}$ on a Hilbert
space ${V}$ with $s_{n}({T})\preceq a_{n}$ $({n\in\mathbb{N}})$, and a
nonnegative, injective operator ${Q}$ on ${V}$ such that ${Q}(I+i{T})^{-1}$ is
not similar to an accretive operator.
In order to construct an example we adapt two related results from [5] and
[2]. Recall that, in [2], the sesquilinear form $\mathfrak{a}$ on a Hilbert
space ${H}$ is expressed as
(4.1) $\mathfrak{a}(u,v)=\langle A{S}u,{S}v\rangle_{H},\quad
u,v\in{V}:={\mathcal{D}}({S}),$
where ${S}$ is a positive selfadjoint (not necessarily bounded) operator on ${H}$, and $A$ is a bounded invertible $\theta$-accretive operator on ${H}$ for some $\theta<\pi/2$. (Here, we call the selfadjoint operator ${S}$ on ${H}$ _positive_ if $\langle{S}u,u\rangle_{H}>0$ for all $u\in{\mathcal{D}}({S})\setminus\\{0\\}$.) Then, following Kato’s terminology
[6], $\mathfrak{a}$ is a regular accretive form in ${H}$. The operator
${L}_{\mathfrak{a},{H}}$ associated with the form $\mathfrak{a}$ on ${H}$ is
given by ${S}A{S}$. Note that $\mathfrak{s}=\operatorname{Re}\mathfrak{a}$ is
an equivalent inner product to $\langle{S}\cdot,{S}\cdot\rangle_{H}$, and in
order to put it in our setting, we additionally assume that $0\in\rho({S})$.
Then $\mathfrak{s}$ is a _complete_ inner product on
${V}:={\mathcal{D}}({S})$. In fact, since ${S}$ is selfadjoint, to get the
completeness of this inner product, it is sufficient that ${S}$ is injective
and has closed range.
For the convenience of the reader we restate two auxiliary results from [5]
and [2].
###### Lemma 4.2 ([5], Theorem 8.3).
Let ${H}$ be a separable Hilbert space and let $(e_{n})_{{n\in\mathbb{N}}}$ be
an orthonormal basis. Let ${S}$ be the sectorial operator defined by
${S}e_{n}=2^{n}e_{n}$ $({n\in\mathbb{N}})$ with
${\mathcal{D}}({S}):=\\{x\in{H}:\sum_{n=1}^{\infty}2^{2n}|\langle
x,e_{n}\rangle_{H}|^{2}<\infty\\}$. Suppose $K\in{\mathcal{L}}({H})$ is a non-
triangular compact operator. Then, there exist bounded operators $U$ and $W$
on ${H}$ such that for every $m\in\mathbb{N}$, the operator $(I+2^{-m}WKU){S}$
fails to have a bounded $H^{\infty}$ functional calculus.
###### Lemma 4.3 ([2], Theorem 10.1).
Let $A$, ${S}$, $\mathfrak{a}$ have the properties specified above. Then
$(\mathfrak{a},{H})$ has the Kato property if and only if the operator $A{S}$
has a bounded $H^{\infty}$ functional calculus.
###### Lemma 4.4.
Let $A$, ${S}$, $\mathfrak{a}$ have the properties specified above. Let ${T}$
and ${Q}$ be the operators associated with $\mathfrak{a}$ and ${H}$.
* (i)
The operator ${T}\in{\mathcal{L}}({V})$ is compact if and only if the operator
$\operatorname{Im}A\in{\mathcal{L}}({H})$ is compact. Then, $s_{n}({T})\simeq
s_{n}(\operatorname{Im}A)$ ($n\in\mathbb{N}$), that is, there exists $c>0$
such that $c^{-1}s_{n}({T})\leq s_{n}(\operatorname{Im}A)\leq cs_{n}({T})$ for
every $n\in\mathbb{N}$.
* (ii)
The operator ${Q}\in{\mathcal{L}}({V})$ is compact if and only if the
embedding of ${V}$ into ${H}$ is compact, if and only if
${S}^{-1}\in{\mathcal{L}}({H})$ is compact. Then, $s_{n}({Q})\simeq
s_{n}({S}^{-1})\simeq s_{n}(j)$ ($n\in\mathbb{N}$) where $j$ denotes the
canonical embedding of ${V}$ into ${H}$.
###### Proof.
First, note that the operators ${T}$ and ${Q}$ are of the form:
$\displaystyle{T}$
$\displaystyle={S}^{-1}(\operatorname{Re}A)^{-1}\operatorname{Im}A\,{S}\quad\text{and}$
$\displaystyle{Q}$
$\displaystyle={S}^{-1}(\operatorname{Re}A)^{-1}{S}^{-1}_{|},$
where $\operatorname{Re}A$ and $\operatorname{Im}A$ denote the real and the
imaginary part of $A$, and ${S}^{-1}_{|}$ is the restriction of
${S}^{-1}\in{\mathcal{L}}({H})$ to ${V}$, considered as an operator in
${\mathcal{L}}({V},{H})$. These expressions give the first statements in (i) and (ii). The second assertion in (i) follows in a straightforward manner from, e.g., [13, Theorem 7.7, p. 171].
To prove the corresponding assertion in (ii), assume that ${S}^{-1}$ is compact with
spectrum $\sigma({S}^{-1})=:\\{\mu_{n}\\}_{n\in\mathbb{N}}$, where
$\mu_{n}\rightarrow 0^{+}$ as $n\rightarrow\infty$. Therefore, there exists an
orthonormal system $\\{e_{n}\\}_{n\in\mathbb{N}}$ in ${H}$ such that
${S}h=\sum_{n}\mu_{n}^{-1}\langle h,e_{n}\rangle_{{H}}e_{n}$ for $h\in{\mathcal{D}}({S})=\\{h\in{H}:\sum_{n}\mu_{n}^{-2}|\langle h,e_{n}\rangle_{H}|^{2}<\infty\\}$.
Let ${C}:{V}_{*}\rightarrow{H}$, ${C}u:={S}^{-1}u$, $u\in{\mathcal{D}}({S})$, where ${V}_{*}$ denotes the Hilbert space $({\mathcal{D}}({S}),\langle{S}\cdot,{S}\cdot\rangle_{H})$. Of course, ${C}\in{\mathcal{L}}({V}_{*},{H})$ and ${C}^{*}{C}\in{\mathcal{L}}({V}_{*})$ are compact. Moreover, note that
${C}^{*}{C}u=\sum_{n}\mu_{n}^{2}\langle u,g_{n}\rangle_{{V}_{*}}g_{n},\quad\quad u\in{V}_{*},$
where $g_{n}:=\mu_{n}^{-2}e_{n}$ $(n\in\mathbb{N})$ is an orthonormal basis
for ${V}_{*}$. Thus, the singular values of ${C}$ are given by
$s_{n}({C})=\mu_{n}$, $n\in\mathbb{N}$.
Now, let $I_{S}$ denote the identity map on ${\mathcal{D}}({S})$ considered as an operator from ${V}$ onto ${V}_{*}$. Then we have ${S}^{-1}_{|}={C}I_{S}$ and, by [13, Theorem 7.1, p. 171], we get
$\displaystyle s_{n}({Q})$ $\displaystyle\leq\|{S}^{-1}(\operatorname{Re}A)^{-1}\|_{{\mathcal{L}}({H})}s_{n}({C}I_{S})=\|{S}^{-1}(\operatorname{Re}A)^{-1}\|_{{\mathcal{L}}({H})}s_{n}(I_{S}^{*}{C}^{*})$
$\displaystyle\leq\|{S}^{-1}(\operatorname{Re}A)^{-1}\|_{{\mathcal{L}}({H})}\|I_{S}^{*}\|_{{\mathcal{L}}({V}_{*},{V})}s_{n}({C}^{*})$
$\displaystyle\leq\|{S}^{-1}(\operatorname{Re}A)^{-1}\|_{{\mathcal{L}}({H})}\|I_{S}^{*}\|_{{\mathcal{L}}({V}_{*},{V})}s_{n}({C})\quad\textrm{and}$
$\displaystyle s_{n}({C})$ $\displaystyle=s_{n}((\operatorname{Re}A){S}{Q}I_{S}^{-1})$
$\displaystyle\leq\|(\operatorname{Re}A){S}\|_{{\mathcal{L}}({V},{H})}s_{n}({Q}I_{S}^{-1})$
$\displaystyle\leq\|(\operatorname{Re}A){S}\|_{{\mathcal{L}}({V},{H})}\|I_{S}^{-1}\|_{{\mathcal{L}}({V}_{*},{V})}s_{n}({Q}).$
Finally, note that $s_{n}({S}^{-1})$ is equal to the $n$-th singular value of
the embedding of ${V}_{*}$ into ${H}$. This completes the proof. ∎
###### Proof of Proposition 4.1.
Suppose that ${H}$, ${S}$, $K$, $U$, $W$ have the properties specified above in Lemma 4.2. Fix $m\in\mathbb{N}$ such that the numerical range of the operator $A:=I+2^{-m}WKU$ is contained in $\\{|z-1|<1\\}$. Then, by Lemma 4.3, the couple $(\mathfrak{a},{H})$ does not have the Kato property, where the form $\mathfrak{a}$ corresponds to the operators $A$ and ${S}$ as above.
Moreover, by Lemma 4.4(i),
$s_{n}({T})\simeq s_{n}(\operatorname{Im}A)\simeq
s_{n}\bigl{(}2^{-m}(WKU-U^{*}K^{*}W^{*})\bigr{)}\preceq
s_{n}(K)\quad(n\in\mathbb{N}).$
This completes the proof. ∎
## References
* [1] W. Arendt, _Semigroups and evolution equations: functional calculus, regularity and kernel estimates_. In Handbook of Differential Equations (C. M. Dafermos, E. Feireisl eds.), pages 1-85, Elsevier, North Holland, 2004.
* [2] P. Auscher, A. McIntosh, and A. Nahmod, _Holomorphic functional calculi of operators, quadratic estimates and interpolation_ , Indiana Univ. Math. J. 46 (1997), 375-403.
* [3] K. Fan, _On similarity of operators_ , Adv. Math. 10 (1973), 395-400.
* [4] M. Haase, The functional calculus for sectorial operators, volume 169 of Operator Theory: Advances and Applications, Birkhäuser, Basel, 2006.
* [5] N. J. Kalton, _Perturbations of the $H^{\infty}$-calculus_, Collect. Math. 58 (2007), 291-325.
* [6] T. Kato, _Fractional powers of dissipative operators_ , J. Math. Soc. Japan 13 (1961), 246-274.
* [7] T. Kato, _Fractional powers of dissipative operators II_ , J. Math. Soc. Japan 14 (1962), 242-248.
* [8] P. C. Kunstmann and L. Weis, _Maximal $L^{p}$ regularity for parabolic equations, Fourier multiplier theorems and $H^{\infty}$ functional calculus_, In Levico Lectures, Proceedings of the Autumn School on Evolution Equations and Semigroups (M. Iannelli, R. Nagel, S. Piazzera eds.), volume 69, pages 65-320, Springer, Berlin, 2004.
* [9] C. Le Merdy, _The similarity problem for bounded analytic semigroups on Hilbert space_ , Semigroup Forum 56 (1998), 205-224.
* [10] J.-L. Lions, _Espaces d’interpolation et domaines de puissances fractionnaires d’opérateurs_ , J. Math. Soc. Japan 14 (1962), 233-241.
* [11] A. McIntosh, _Operators which have an $H_{\infty}$ functional calculus_, In Miniconference on operator theory and partial differential equations (North Ryde, 1986), volume 14 of Proc. Centre Math. Anal. Austral. Nat. Univ., pages 210-231. Austral. Nat. Univ., Canberra, 1986.
* [12] B. Nagy and J. Zemánek, _A resolvent condition implying power boundedness_ , Studia Math. 134 (1999), 143-151.
* [13] J. Weidmann, Linear operators in Hilbert spaces, volume 68 of Graduate Texts in Mathematics, Springer, New York, 1980.
* [14] A. Yagi, Abstract parabolic evolution equations and their applications, Springer Monographs in Mathematics, Berlin, 2010.
# BOOSTR: A Dataset for Accelerator Control Systems
Diana Kafkes and Jason St. John, Fermi National Accelerator Laboratory, Batavia, IL 60510, USA
###### Abstract
BOOSTR (Booster Operation Optimization Sequential Time-Series for
Reinforcement) was created to provide cycle-by-cycle time series of readings
and settings from instruments and controllable devices of the Booster,
Fermilab’s Rapid-Cycling Synchrotron (RCS) operating at 15 Hz. BOOSTR provides
a time series from 55 device readings and settings which pertain most directly
to the high-precision regulation of the Booster’s Gradient Magnet Power Supply
(GMPS). To our knowledge, this is the first known dataset of accelerator
device parameters made publicly available. We are releasing it in the hopes
that it can be used to demonstrate aspects of artificial intelligence for
advanced control systems, such as reinforcement learning and autonomous
anomaly detection.
## Background & Summary
Tuning and controlling particle accelerators is challenging and time
consuming. Even marginal improvements to accelerator operation can translate
very efficiently into improved scientific yield for an experimental particle
physics program. The data released here was collected in the hopes of
achieving improvement in precision for the Fermilab Booster Gradient Magnet
Power Supply (GMPS) regulatory system, which is detailed below.
The Fermilab Booster receives a 400 MeV proton beam from the Linear Accelerator and accelerates it to 8 GeV by synchronously raising the accelerator cavity radiofrequency and applying a controlled magnetic field to steer the beam with combined-function bending and focusing electromagnets, known as gradient magnets. These magnets are powered by the GMPS, which
operates on a 15 Hz cycle between a minimum current (at injection) and a
maximum current (at beam extraction). The GMPS is realized as four power
supplies, evenly distributed around the Booster, and each powers one fourth of
the gradient magnets. The role of the GMPS regulator is to calculate and apply
small compensating offsets in the GMPS driving signal in order to improve the
agreement of the resulting observed minimum and maximum currents with their
set points. Without regulation, the fitted minimum of the magnetic field may
vary from the set point by as much as a few percent.
At beam injection, a perturbation of only one percent is enough to significantly
decrease beam transfer efficiency and thereby reduce the beam flux ultimately
available to the high-energy particle physics experiments run at the lab.
Disturbances to the magnet current can occur due to ambient temperature
changes, other nearby high-power pulsed radio-frequency systems, and
electrical ground movement induced by power supplies of other particle
accelerators at the complex. The current GMPS regulation involves a PID
(Proportional-Integral-Derivative) control scheme (see Figure 1 for
schematic). The regulator calculates estimates for the minimum and maximum
currents of the offset-sinusoidal magnetic field from the previous 15 Hz
cycle. These values are then used to adjust the power supply program and
decrease systemic error in the next cycle’s current, such that it more closely
matches the set point. Presently, the PID system achieves regulation errors
corresponding to roughly 0.1% of the set value.
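To make the regulation loop concrete, the following is a purely illustrative discrete PID update of the kind described above; the gains, time step, and variable names are hypothetical stand-ins, not the actual GMPS regulator implementation.

```python
def pid_step(setpoint, measured, state, kp=1.0, ki=0.1, kd=0.01, dt=1 / 15.0):
    """One 15 Hz regulation cycle: return the compensating offset to add to
    the power supply program and the updated (integral, previous error) state.
    All gains here are made up for illustration."""
    integral, prev_error = state
    error = setpoint - measured             # discrepancy of fitted current vs. set point
    integral += error * dt                  # I term: accumulated error
    derivative = (error - prev_error) / dt  # D term: error trend
    offset = kp * error + ki * integral + kd * derivative
    return offset, (integral, error)

state = (0.0, 0.0)
offset, state = pid_step(setpoint=103.42, measured=103.50, state=state)
print(offset)
```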
Although some 200,000 entries populate the device database of Fermilab’s
accelerator control system [1], the 55 device value time series presented here
in BOOSTR [2] were collected in accordance with suggestion by Fermilab
accelerator subject matter experts (SMEs). These values exhibit correlations
with GMPS power supply perturbations. The full data were collected during two
separate periods: from June 3, 2019 to July 11, 2019 — when the accelerator
was shut down for regular maintenance — and from December 3, 2019 to April 13,
2020 — when the accelerator was shut down in response to the Covid-19
Pandemic.
Data from a single day of BOOSTR was previously described in a Datasheet [3].
A proof-of-concept paper [4] (submitted to Physical Review Accelerators and
Beams) used this subset of BOOSTR and demonstrated the viability of training a
reinforcement learning agent to control GMPS regulation better than the
existing PID system. Relative to the original Datasheet [3], this manuscript
is expanded with more SME input, describes more than 100 times more data, and
includes documentation of validation not presented in the original Datasheet.
## Methods
### Collection Process
A data collection node was created and set to request data at 15 Hz from the Data Pool Manager of the Accelerator Control Network (ACNET) [1]. In this scheme, front-end nodes, each managing its respective devices, replied with timestamped values at the stated rate, barring differences in clock speed, input-output (I/O) lag-time variations due to network traffic fluctuations, and higher-priority interruptions from competing processes on the front-end node. These inconsistencies were later addressed through the time-alignment process discussed in the Data Processing section. The collection node stored the data in a circular buffer approximately 10 days deep.
A Python script managed by a nightly cron job polled the data collection node
for the most recent midnight-to-midnight 24 hours of timestamped data for each
of the 55 time series identified by SMEs. A second cron-managed script did the
same for relevant accelerator control events issued in the same period. These
event data correspond to important cycles achieved through controlling the
devices at the accelerator. Event data were requested by a separate data
collection node.
Each day’s data harvest was originally stored in HDF5 (Hierarchical Data
Format Version 5) files. Any data instances missing from the parquet files
released here were not included in the original data buffers from which this
dataset was drawn.
### Data Processing
Each instance was created through a concatenation of each device’s timestamp
data table within every HDF5 file and then saved in parquet format. A similar
procedure was undertaken for one of the accelerator control event signals polled, Event0C, as its broadcast is synchronized with the GMPS magnetic field minimum. Event0C was collected to correct a problem with the observed sampling frequency: although each device was nominally sampled at 15 Hz, synchrony was demonstrably imperfect, and the time intervals between successive timestamps displayed varying lags.
Since Event0C serves as the baseline or heartbeat of the Booster at approximately 15 Hz and is synchronized with the smoothly varying electrical current GMPS regulates, we used Event0C to time-align our data. The alignment approximates the data available to the GMPS regulator operating in real time. We used the GMPS-synchronized Event0C timestamp as the moment to begin forward inference, taking the value for each device time series which had the most recent corresponding timestamp. In practice, this required timestamp-sorting the series for each device and finding the most recent device value, relative to the Event0C timestamp, in a lookback window equal to the maximum interval between device timestamps (necessarily excluding the five-month gap between our two data collection periods). This time-alignment step was run over the whole dataset in multiple parallel processes using Apache Spark. Notably, the data recorded for Event0C were missing the period from July 1 to 11, 2019. Therefore, aligning on this variable discarded some of the data collected during our first collection period.
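A minimal pandas sketch of this alignment step follows; the released processing used Apache Spark, so this single-machine equivalent, with hypothetical column names and an assumed 200 ms lookback bound, is for illustration only.

```python
import pandas as pd

# Toy stand-ins for one device series and the Event0C heartbeat; the real
# files carry their own schema, so these column names are hypothetical.
device = pd.DataFrame({
    "ts": pd.to_datetime(["2019-06-03 00:00:00.010",
                          "2019-06-03 00:00:00.080",
                          "2019-06-03 00:00:00.140"]),
    "value": [103.44, 103.45, 103.43],
}).sort_values("ts")

events = pd.DataFrame({
    "event0c_ts": pd.to_datetime(["2019-06-03 00:00:00.066",
                                  "2019-06-03 00:00:00.133"]),
}).sort_values("event0c_ts")

# For each Event0C heartbeat, keep the most recent device value within a
# lookback window (200 ms here stands in for the maximum inter-sample gap).
aligned = pd.merge_asof(events, device,
                        left_on="event0c_ts", right_on="ts",
                        direction="backward",
                        tolerance=pd.Timedelta("200ms"))
print(aligned)
```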
## Data Records
The data release is stored on Zenodo [2]. Each instance is a zip compressed
parquet of one of the 55 aligned time series with columns corresponding to the
aligned time stamp, original time stamp, difference between time stamps, and
the reading/setting value [2]. The original timestamp and time difference are included to demonstrate the mechanics of our alignment process and to enable a check of reproducibility. All timestamps are in Coordinated Universal Time (UTC).
Our data release contains device data from each of the four gradient magnet
power supplies, the GMPS PID regulator, and the Main Injector, where the beam
is directed after acceleration via the Booster. Minimum and maximum current
information readings and settings, the feedback and transductor gain settings,
and the feed-forward start trigger are collected as part of the current PID
regulation scheme. The “inhibit” value controls whether the GMPS regulator
accepts settings changes for parameters other than the minimum and maximum
current, such as the gain settings (any positive value will prevent changes).
Additionally, $\dot{\vec{B}}$, the rate of change of the magnetic field, is
recorded as a proxy for the magnetic field we are interested in regulating.
Timing information derived from $\dot{\vec{B}}=0$ synchronizes the current PID regulator system.
We acknowledge that the ACNET parameter names are by no means standardized
across different particle accelerators and that they will appear especially
abstruse for those well-versed in control systems who are new to working with
accelerators. In Table 1, we explain each of the parameters read from devices (devices whose first letter is followed by :) and indicate whether the device setting was included in the dataset (devices whose first letter is followed by _ , shown in the Setting column), since describing these corresponding pairs would be redundant. In Figures 2 and 3, we visualize metadata trends for each “nonconstant” parameter in each data collection period (see Table 2 for a list of values we considered to be virtually unchanging within the two periods) and also provide the mean and standard
deviation of device readings across the two collection periods in Table 1.
Furthermore, Table 1 includes dates missing in the data recorded for each reading. As a reminder, the data were collected during two separate periods: from June 3, 2019 to June 30, 2019 (July is missing due to time-alignment with Event0C) and from December 3, 2019 to April 13, 2020. Finally, in Figure 4 we
demonstrate the centrality of each recorded parameter with a heatmap of
histogram values.
Additionally, we provide the PID regulator status values B|GMPSSC (ACNET
status parameters include |), whose 16 bits contain various motherboard
states. Here we are concerned with bit 3, which indicates whether or not the
GMPS regulator was on (1), and bit 7, which indicates whether the Booster is
in its normal alternating current (AC) mode (1) or “coasting beam” direct
current (DC) mode at constant beam energy (0). Unlike the rest of the devices,
this status value is presently recorded at only 1 Hz because it was not included in our initial data node request and was relegated to an archived data node with a lower sampling frequency. While the same time-alignment described above was applied to B|GMPSSC, due to the slow sampling rate we caution the
user to refer closely to the original timestamp such that they might make
decisions about whether to use data when GMPS was off and to inform them of
potential problems when interpolating in a region immediately before or after
a status change. See Figure 5 for more details on B|GMPSSC values.
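The two bits of interest can be decoded with a simple mask, as in this short sketch (the helper name is ours; bit numbering starts at zero):

```python
def decode_gmpssc(status: int) -> dict:
    """Decode the two B|GMPSSC bits discussed above: bit 3 = GMPS regulator
    on/off, bit 7 = AC (1) versus DC (0) mode."""
    return {"gmps_on": bool(status & (1 << 3)),
            "ac_mode": bool(status & (1 << 7))}

for s in (159, 31, 407):
    print(s, decode_gmpssc(s))
# 159 -> GMPS on, AC mode; 31 -> GMPS on, DC mode; 407 -> GMPS off, AC mode
```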
## Technical Validation
In order to verify the quality of this dataset, we pored over the electronic
logbook (Elog) [5] that Fermilab Booster technicians and operators use to
record changes to device settings as well as observations while in operation.
We used these Elog entries to authenticate our data’s viability across
timescales. First, we used the Elog to corroborate expert acknowledgement of
the major spikes observed in Figures 2 and 3. These outlier changes, typically
seen in the value’s mean and standard deviation, represented major changes
made on that specific day, including when the Booster was switched from
alternating current (AC) to direct current (DC) mode (see Figure 5) as well as
when the GMPS regulator was turned off altogether. These reconciliations are
presented in Table 3.
Furthermore, we pinpoint changes in the AC vs. DC settings according to the
Elog [5] for June 24, 2019 and March 11, 2020 in Figure 6. Here applying a
bitmask reveals that a B|GMPSSC value of 159 indicates AC mode/GMPS on, while
31 indicates DC mode/GMPS on, and 407 indicates AC mode/GMPS off. In this
figure, the plotted timestamps were offset to Central Time (UTC-5) in order to
align with times given in the Elog, which were not recorded in UTC. On June
24, the trace of B|GMPSSC clearly shows GMPS regulation briefly switching off
before commencing DC studies from 8:00 AM - 6:00 PM with a value of 31, then
being turned back to 159. On March 11, B|GMPSSC is at 159 before 6:00 AM, off
at 407 from 6:00 - 9:50 AM, and then is set to AC mode from 9:50 AM - 12:45
PM, to DC mode from 12:45 - 3:49 PM, and back to AC mode for the rest of the
day, as per the Elog [5]. The close correspondence of these changes in our data to the recorded actions and observations of Booster personnel boosts our confidence in the quality and relevance of the collected dataset.
Additionally, we plot settings changes on March 10 and 11 documented in the
Elog [5] in Figure 7. The blip in B:ACMNPG from 6.5 to 13.5 is visible, as is the slight decrease in B_VIMIN around 4:00 PM CST, both of which were mentioned in Table 3.
## Usage Notes
BOOSTR could be used to train various control networks for accelerator
regulation, to construct “digital twins” of the Fermilab Booster regulator’s
control environment, or to develop anomaly detection/categorization
capabilities. Please note: there are no legal or ethical ramifications of
using this data as it was collected from a machine, and not collected from or
representative of people.
In the future, the dataset could feasibly be expanded to include more of the
200,000 available ACNET system parameters and therefore be used to control,
mimic, or monitor further aspects of the particle accelerator complex. One
could argue that this initial dataset might become the foundation upon which
substantial fine-tuning of particle accelerators could depend.
## Code availability
The preprocessing code is available here. When using BOOSTR data, the authors
recommend ordering by time immediately, as the parquet files do not store the
data entries sequentially [2].
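For example, assuming one file from the release has been unzipped, a minimal loading pattern would be the following; the file and column names are placeholders, so consult the release schema.

```python
import pandas as pd

# Placeholder file/column names; inspect the actual schema via df.columns.
df = pd.read_parquet("B_VIMIN.parquet")
df = df.sort_values("aligned_timestamp").reset_index(drop=True)  # order by time first
```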
## Acknowledgements
This dataset was created as part of the “Accelerator Control with Artificial
Intelligence” Project conducted under the auspices of the Fermilab Laboratory
Directed Research and Development Program (Project ID FNAL-LDRD-2019-027). The
manuscript has been authored by Fermi Research Alliance, LLC under Contract
No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science,
Office of High Energy Physics and is registered at Fermilab as Technical
Report Number FERMILAB-PUB-21-005-SCD.
We are extremely grateful to Brian Schupbach and Kent Triplett for lending their Booster technical expertise, without which we could not have validated
our dataset. Additionally, we would like to acknowledge Burt Holzman for
guidance on getting set up in the cloud, and the help of the Databricks
federal support staff, in particular Sheila Stewart. Furthermore, we would
like to recognize Jovan Mitrevski and Aleksandra Ciprijanovic for useful
discussions and a careful reading of this manuscript.
## Author contributions statement
J.S. created the data collection script, and set up and maintained the cron
jobs to record the data in HDF5 files. D.K. migrated the data from on-premise
storage to the cloud, wrote the preprocessing and time-alignment code, and
validated the data. Both authors reviewed this manuscript.
## Competing interests
The authors declare no competing interests.
## References
* [1] Cahill, K. et al. The Fermilab accelerator control system. _ICFA Beam Dyn. Newslett._ 47, 106–124 (2008).
* [2] Kafkes, D. & St. John, J. BOOSTR: A Dataset for Accelerator Control Systems (Full Release), 10.5281/zenodo.4382663 (2021).
* [3] Kafkes, D. & St. John, J. BOOSTR: A Dataset for Accelerator Control Systems (Partial Release), 10.5281/zenodo.4088982 (2020).
* [4] St. John, J. et al. Real-time artificial intelligence for accelerator control: A study at the Fermilab Booster (2020). arXiv:2011.07371.
* [5] Hazelwood, K. J. Fermilab electronic logbook. www-bd.fnal.gov/Elog.
## Figures & Tables
Figure 1: Overview of current GMPS control system [4]. Presently, a human
operator specifies a target program for B:VIMIN and B:VIMAX, the GMPS
compensated minimum and maximum currents respectively, via the Fermilab
Accelerator Control Network that is transmitted to the GMPS control board.
Table 1: Description of BOOSTR dataset parameters. Here “GMPS” denotes the Gradient Magnet Power Supplies (1-4), “MI” means Main Injector, and “MDAT” refers to Fermilab’s Machine Data communications protocol. Device parameters that begin with B relate to the accelerator Booster, whereas device parameters that begin with I relate to the Main Injector. Parameter means and standard deviations have been truncated to two decimal points.
Parameter | Details (Units) | Setting | Mean (Std) | Missing Dates
---|---|---|---|---
B:ACMNIG | Min AC integral feedback gain | B_ACMNIG | 0.75 (0) | 2019: 6/14, 12/12
B:ACMNPG | Min AC proportional feedback gain | B_ACMNPG | 9.1968 (0.75) | 2019: 6/14, 12/12
B:ACMXIG | Max AC integral feedback gain | B_ACMXIG | 0.30 (0) | 2019: 6/14, 12/12
B:ACMXPG | Max AC proportional feedback gain | B_ACMXPG | 3.00 (0) | 2019: 6/14, 12/12
B:DCIG | DC integral feedback gain | B_DCIG | 0 (0) | 2019: 6/14, 12/12
B:DCPG | DC proportional feedback gain | B_DCPG | 0.10 (1.35) | 2019: 6/14, 12/12
B:GMPS1V | GMPS1 output voltage (V) | | 81.82 (89.56) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:GMPS2V | GMPS2 output voltage (V) | | 85.29 (96.00) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:GMPS3V | GMPS3 output voltage (V) | | 63.67 (61.19) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:GMPS4V | GMPS4 output voltage (V) | | 61.08 (65.19) | 2019: 6/14, 12/12
B:GMPSBT | $\partial B/\partial t$ offset trigger (s) | B_GMPSBT | 0 (0) | 2019: 6/14, 12/12
B|GMPSSC | Binary status control of GMPS | | N/A (N/A) | 2019: 6/07, 6/12, 6/14-15
| | | | and 2020: 1/18, 3/08, 3/15
B:GMPSFF | Feedforward start trigger (s) | B_GMPSFF | 1.32 (1.52) | 2019: 6/14, 12/12
B:IMAXXG | Max transductor gain (A/V) | B_IMAXXG | -117.12 (1.80) | 2019: 6/14, 12/12
B:IMAXXO | Max transductor offset (A) | B_IMAXXO | 10.00 (0) | 2019: 6/14, 12/12
B:IMINXG | Min transductor gain (A/V) | B_IMINXG | -11.73 (0.23) | 2019: 6/14, 12/12
B:IMINXO | Min transductor offset (A) | B_IMINXO | 0 (0) | 2019: 6/14, 12/12
B:IMINER | Discrepancy from setting at min (0.1 A) | | 1.93 (3.86) | 2019: 6/14, 12/12
B:IMINST | $\partial B/\partial t$ sample off | B_IMINST | 0 (0) | 2019: 6/14, 12/12
B:IPHSTC | Phase regulator time constant | B_IPHSTC | 20.00 (0.01) | 2019: 6/14, 12/12
B:LINFRQ | 60 Hz line frequency offset (mHz) | | -0.44 (16.31) | 2019: 6/03-7/11, 12/12
| | | | 12/30-31 and 2020: 1/01-06
B:NGMPS | Number of GMPS suppliers | | 4.00 (0) | 2019: 6/14, 12/12
B:PS1VGM | GMPS1 V- to ground (V) | | -2.30 (23.56) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS2VGM | GMPS2 V- to ground (V) | | -21.29 (27.52) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS3VGM | GMPS3 V- to ground (V) | | -15.13 (14.11) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS4VGM | GMPS4 V- to ground (V) | | -26.27 (17.22) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS1VGP | GMPS1 V+ to ground (V) | | 52.00 (34.82) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS2VGP | GMPS2 V+ to ground (V) | | 26.53 (30.53) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS3VGP | GMPS3 V+ to ground (V) | | 20.17 (13.74) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:PS4VGP | GMPS4 V+ to ground (V) | | 9.14 (12.75) | 2019: 6/14, 12/12, 12/30-31
| | | | and 2020: 1/01-4/12
B:VIMAX | Compensated max GMPS current (A) | B_VIMAX | 772.43 (385.98) | 2019: 6/14, 12/12
B:VIMIN | Compensated min GMPS current (A) | B_VIMIN | 83.20 (40.68) | 2019: 6/14, 12/12
B:VINHBT | Inhibit value | B_VINHBT | 1.00 (0.04) | 2019: 6/14, 12/12
B:VIPHAS | GMPS ramp phase wrt line voltage (rad) | B_VIPHAS | 1.80 (0.14) | 2019: 6/14, 12/12
I:IB | MI lower bend current (A) | | 2275.71 (2571.03) | 2019: 6/03-19, 12/12
I:MDAT40 | MDAT measured MI current (A) | | 2400.57 (2576.37) | 2019: 6/03-7/11, 12/12
I:MXIB | Main dipole bend current (A) | | 2279.68 (2564.53) | 2019: 6/03-11, 6/13-19, 12/12
Table 2: Summary of nearly constant variables in both periods of data collection. Here “nearly constant” denotes variables having a standard deviation less than $10^{-5}$ across both periods.
Device | Setting | Constant Value
---|---|---
B:DCIG | B_DCIG | 0
B:GMPSBT | B_GMPSBT | 0.0003
B:IMINST | B_IMINST | 0
B:IMINXO | B_IMINXO | 0
Table 3: Summary of Booster-related electronic log (Elog) [5] entries corresponding to spikes in Figures 2 and 3. Original Central Time times are given, with values in parentheses designating UTC. Here “RF” denotes radiofrequency.
Date | Elog Entry Summary
---|---
6/8/2019 | GMPS in AC mode until 8:00 AM (13:00) on 6/08, then switched off until 6/17
6/10/2019 | GMPS was locked/tagged out for outage, West Booster gallery RF off from
8:30-10:00 AM (13:30-15:00) for work
6/17/2019 | GMPS turned back on and put in AC mode
6/22/2019 | High energy physics beam turned off at 8:00 PM (1:00 +1) (GMPS remained in AC mode)
6/24/2019 | DC studies from 8:00 AM - 6:00 PM (13:00 - 23:00), back to AC mode
6/26/2019 | Alternated between AC and DC mode, GMPS off for 30 min around 5:30 PM (22:30)
6/27/2019 | GMPS in AC mode all day, but removing certain study events caused bias to creep up,
eventually tripping the RF
6/28/2019 | Alternated between AC and DC mode
12/8/2019 | B_VIMIN adjusted, GMPS in AC mode
12/12/2019 | AC mode, operators reset virtual machine environment locally
12/28 - 12/30/2019 | No beam from injector, GMPS in AC mode
12/31/2019 | Booster injection back, GMPS in AC mode
1/1/2020 | GMPS off for 15 min around 9:30 AM (14:30), in AC mode for rest of day
2/4/2020 | RF sparking in gallery due to reverting of RF capture settings, GMPS in AC mode
2/5/2020 | GMPS off from 6:00 AM - 3:30 PM (11:00 - 20:30), then in AC mode for rest of day
2/6/2020 | Lowered beam intensity to users, but GMPS was in AC mode all day
3/5/2020 | Beam tails were large, so turned B_VIMIN down
3/10/2020 | GMPS in AC mode all day, B:ACMNPG changed from 6.5 to 13.5, B_VIMIN decreased
from 103.440 to 103.420
3/11/2020 | GMPS in AC mode from 12:00 AM - 6:00 AM (5:00 - 11:00), off from 6:00 - 10:00 AM
(11:00 - 15:00), then alternated between AC and DC mode, B_VIMIN adjusted from
| 103.418 to 103.386
3/13/2020 | GMPS off from 9:30 AM - 11:00 AM (14:30-16:00), back on and put in AC mode
3/20/2020 | Booster turned off on account of Covid-19 pandemic at 12:00 PM (17:00)
Figure 2: Metadata variable trends for Period 1: June 3, 2019 to June 30, 2019. The graphs show the mean for each variable on the given date and shade in the standard deviation of that variable on that date.
Figure 3: Metadata variable trends for Period 2: December 3, 2019 to April 13, 2020. The graphs show the mean for each variable on the given date and shade in the standard deviation of that variable on that date.
Figure 4: Heatmap of histogram distributions for each reading and setting variable with equal sampling. This is only meant to characterize the centrality of each recorded value. See Figures 2 and 3 for actual metadata value ranges.
Figure 5: Daily values of B|GMPSSC (interpreted as the mode for each day), whose bits encode relevant Booster statuses.
Figure 6: Values of status B|GMPSSC corresponding to Table 3 entries for June 24, 2019 and March 11, 2020 (timestamps were put in Central Time to align with the Elog). Recall: a value of 159 indicates AC mode/GMPS on, 31 indicates DC mode/GMPS on, and 407 indicates AC mode/GMPS off. These traces display this value at a much greater granularity than Figure 5.
Figure 7: Switches corresponding to Table 3 entries for March 10, 2020 and
March 11, 2020: B:ACMNPG changed from 6.5 to 13.5 and B_VIMIN decreased from
103.418 to 103.386 (timestamps were put in Central Time to align with Elog).
The sudden large increase in B_VIMIN from 12:45 - 3:49 PM CST to a value off
the plotted region corresponds to the DC mode observed in Figure 6.
# Bayesian Bandwidths in Semiparametric Modelling for Nonnegative Orthant Data
with Diagnostics
Célestin C. Kokonendji<EMAIL_ADDRESS>Sobom M. Somé
<EMAIL_ADDRESS>Université Bourgogne Franche-Comté, Laboratoire de
Mathématiques de Besançon, UMR 6623 CNRS-UBFC, Besançon, France Université
Joseph KI-ZERBO, LANIBIO, Ouagadougou, Burkina Faso Université Thomas
SANKARA, UFR ST, Ouagadougou, Burkina Faso
###### Abstract
Multivariate nonnegative orthant data are real vectors bounded to the left by
the null vector, and they can be continuous, discrete or mixed. We first
review the recent relative variability indexes for multivariate nonnegative
continuous and count distributions. As a prelude, the classification of two
comparable distributions having the same mean vector is done through under-,
equi- and over-variability with respect to the reference distribution.
Multivariate associated kernel estimators are then reviewed with new proposals
that can accommodate any nonnegative orthant dataset. We focus on bandwidth
matrix selections by adaptive and local Bayesian methods for semicontinuous
and counting supports, respectively. We finally introduce a flexible
semiparametric approach for estimating all these distributions on nonnegative
supports. The corresponding estimator is directed by a given parametric part,
and a nonparametric part which is a weight function to be estimated through
multivariate associated kernels. A diagnostic model is also discussed to make
an appropriate choice between the parametric, semiparametric and nonparametric
approaches. Retaining the pure nonparametric approach indicates that the parametric part used in the modelling is unsuitable. Multivariate real data examples in the semicontinuous setup, such as reliability data, are gradually considered to illustrate the proposed approach. Concluding remarks are made regarding extensions to other multiple functions.
###### keywords:
Associated kernel , Bayesian selector , dispersion index , model diagnostics ,
multivariate distribution , variation index , weighted distribution.
Mathematics Subject Classification 2020: 62G07; 62G20; 62G99; 62H10; 62H12.
## 1 Introduction
The $d$-variate nonnegative orthant data on
$\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$ are real $d$-vectors bounded to
the left by the null vector $\mathbf{0}_{d}$, and they can be continuous,
discrete (e.g., count, categorial) or mixed. For simplicity, we here assume
either $\mathbb{T}_{d}^{+}=[0,\infty)^{d}$ for semicontinuous or
$\mathbb{T}_{d}^{+}=\mathbb{N}^{d}:=\\{0,1,2,\ldots\\}^{d}$ for counting; and,
we then omit both the categorical and the mixed setups, the latter of which can be a mix of discrete and continuous data (e.g., [53]) or other time scales (see, e.g., [36]). Modelling such datasets on $\mathbb{T}_{d}^{+}$ requires nonnegative orthant distributions, which are generally not easy to handle in practical data analysis. The baseline parametric distribution (e.g., [20, 34]) for the analysis of nonnegative continuous data is the exponential distribution (e.g., in reliability), and that of count data is the Poisson one. However, their intrinsic assumptions on the first two moments are often not realistic for many applications. The nonparametric approach through associated kernels, which is adaptable to any support $\mathbb{T}_{d}^{+}$ of a probability density or mass function (pdmf), has been widely studied in recent years. We can refer to [7,
8, 12, 13, 29, 43, 51, 52, 61, 62] for general results and more specific
developments on associated kernel orthant distributions using classical cross-
validation and Bayesian methods to select bandwidth matrices. Thus, a natural
question of a flexible semiparametric modelling now arises for all these
multivariate orthant datasets.
Indeed, we first need a review of the recent relative variability indexes for
multivariate semicontinuous ([31]) and count ([25]) distributions. The
infinite number and complexity of multivariate parametric distributions
require the study of different indexes for comparisons and discriminations
between them. Simple classifications of two comparable distributions are done
through under-, equi- and over-variability with respect to the reference
distribution. We refer to [57] and references therein for univariate categorical data, which does not yet have a multivariate version. We then
survey multivariate associated kernels that can accommodate any nonnegative
orthant dataset. Most useful families shall be pointed out, mainly as a
product of univariate associated kernels and including properties and
constructions. Also, we shall focus on bandwidth matrix selections by Bayesian
methods. Finally, we introduce a flexible semiparametric approach for estimating multivariate nonnegative orthant distributions. Following [15] for classical kernels, the corresponding estimator shall be directed by a given
parametric part, and a nonparametric part which is a weight function to be
estimated through multivariate associated kernels. But what is the meaning of
a diagnostic model to make an appropriate choice between the parametric,
semiparametric and nonparametric approaches in this multivariate framework?
Such a discussion is to highlight practical improvements on standard
nonparametric methods for multivariate semicontinuous datasets, through the
use of a reasonable parametric-start description. See, for instance, [27, 33,
48] for univariate count datasets.
In this paper, the main goal is to introduce a family of semiparametric
estimators with multivariate associated kernels for both semicontinuous and
count data. They are meant to be flexible compromises between a demanding parametric approach and a fuzzy nonparametric one. The rest of the paper is organized as follows. Section 2 presents a brief review of the relative
variability indexes for multivariate nonnegative orthant distributions, by
distinguishing the dispersion for counting and the variation for
semicontinuous. Section 3 displays a short panoply of multivariate associated
kernels which are useful for semicontinuous and for count datasets. Properties
are reviewed with new proposals, including both appropriated Bayesian methods
of bandwidths selections. In Section 4, we introduce the semiparametric kernel
estimators with $d$-variate parametric start. We also investigate the
corresponding diagnostic model. Section 5 is devoted to numerical
illustrations, especially for uni- and multivariate semicontinuous datasets.
In Section 6, we make some final remarks with a view to extensions to other multiple functions, such as regression. Finally, appendices provide technical proofs and additional illustrations.
## 2 Relative Variability Indexes for Orthant Distributions
Let $\boldsymbol{X}=(X_{1},\ldots,X_{d})^{\top}$ be a nonnegative orthant
$d$-variate random vector on $\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$,
$d\geq 1$. We use the following notations:
$\sqrt{\mathrm{var}\boldsymbol{X}}=(\sqrt{\mathrm{var}X_{1}},\ldots,\sqrt{\mathrm{var}X_{d}})^{\top}$
is the elementwise square root of the variance vector of $\boldsymbol{X}$;
$\mathrm{diag}\sqrt{\mathrm{var}\boldsymbol{X}}=\mathrm{diag}_{d}(\sqrt{\mathrm{var}X_{j}})$
is the $d\times d$ diagonal matrix with diagonal entries
$\sqrt{\mathrm{var}X_{j}}$ and $0$ elsewhere; and,
$\mathrm{cov}\boldsymbol{X}=(\mathrm{cov}(X_{i},X_{j}))_{i,j\in\\{1,\ldots,d\\}}$
denotes the covariance matrix of $\boldsymbol{X}$ which is a $d\times d$
symmetric matrix with entries $\mathrm{cov}(X_{i},X_{j})$ such that
$\mathrm{cov}(X_{i},X_{i})=\mathrm{var}X_{i}$ is the variance of $X_{i}$.
Then, one has
$\mathrm{cov}\boldsymbol{X}=(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\,(\boldsymbol{\rho}_{\boldsymbol{X}})\,(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}}),$
(2.1)
where $\boldsymbol{\rho}_{\boldsymbol{X}}=\boldsymbol{\rho}(\boldsymbol{X})$
is the correlation matrix of $\boldsymbol{X}$; see, e.g., [21, Eq. 2-36]. It is noteworthy that there are a great many multivariate distributions with
exponential (resp. Poisson) margins. Therefore, we denote a generic
$d$-variate exponential distribution by
$\mathscr{E}_{d}(\boldsymbol{\mu},\boldsymbol{\rho})$, given specific positive
mean vector $\boldsymbol{\mu}^{-1}:=(\mu_{1}^{-1},\ldots,\mu_{d}^{-1})^{\top}$
and correlation matrix
$\boldsymbol{\rho}=(\rho_{ij})_{i,j\in\\{1,\ldots,d\\}}$. Similarly, a generic
$d$-variate Poisson distribution is given by
$\mathscr{P}_{d}(\boldsymbol{\mu},\boldsymbol{\rho})$, with positive mean
vector $\boldsymbol{\mu}:=(\mu_{1},\ldots,\mu_{d})^{\top}$ and correlation
matrix $\boldsymbol{\rho}$. See, e.g., Appendix 7.1 for more extensive
exponential and Poisson models with possible behaviours in the negative
correlation setup. The uncorrelated or independent $d$-variate exponential and
Poisson will be written as $\mathscr{E}_{d}(\boldsymbol{\mu})$ and
$\mathscr{P}_{d}(\boldsymbol{\mu})$, respectively, for
$\boldsymbol{\rho}=\boldsymbol{I}_{d}$ the $d\times d$ unit matrix. Their
respective $d$-variate probability density function (pdf) and probability mass
function (pmf) are the product of $d$ univariate ones.
According to [31] and following the recent univariate unification of the well-
known (Fisher) dispersion and the (Jørgensen) variation indexes by Touré et
al. [55], the relative variability index of $d$-variate nonnegative orthant
distributions can be written as follows. Let $\boldsymbol{X}$ and
$\boldsymbol{Y}$ be two random vectors on the same support
$\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$ and assume
$\boldsymbol{m}:=\mathbb{E}\boldsymbol{X}=\mathbb{E}\boldsymbol{Y}$,
$\boldsymbol{\Sigma}_{\boldsymbol{X}}:=\mathrm{cov}\boldsymbol{X}$ and
$\mathbf{V}_{F_{\boldsymbol{Y}}}(\boldsymbol{m}):=\mathrm{cov}(\boldsymbol{Y})$
fixed, then the relative variability index of $\boldsymbol{X}$ with respect to
$\boldsymbol{Y}$ is defined as the positive quantity
$\mathrm{RWI}_{\boldsymbol{Y}}(\boldsymbol{X}):=\mathrm{tr}[\boldsymbol{\Sigma}_{\boldsymbol{X}}\mathbf{W}^{+}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})]\gtreqqless
1,$ (2.2)
where “$\mathrm{tr}(\cdot)$” stands for the trace operator and
$\mathbf{W}^{+}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})$ is the unique Moore-
Penrose inverse of the associated matrix
$\mathbf{W}_{F_{\boldsymbol{Y}}}(\boldsymbol{m}):=[\mathbf{V}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})]^{1/2}[\mathbf{V}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})]^{\top/2}$
to $\mathbf{V}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})$. From (2.2),
$\mathrm{RWI}_{\boldsymbol{Y}}(\boldsymbol{X})\gtreqqless 1$ means the over-
(equi- and under-variability) of $\boldsymbol{X}$ compared to $\boldsymbol{Y}$
is realized if $\mathrm{RWI}_{\boldsymbol{Y}}(\boldsymbol{X})>1$
($\mathrm{RWI}_{\boldsymbol{Y}}(\boldsymbol{X})=1$ and
$\mathrm{RWI}_{\boldsymbol{Y}}(\boldsymbol{X})<1$, respectively).
The expression (2.2) of RWI does not appear to be very easy to handle in this general formulation on $\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$, nor are its empirical version and interpretations. We now detail both multivariate cases, counting and semicontinuous; their corresponding empirical versions are given in [25, 31].
### 2.1 Relative Dispersion Indexes for Count Distributions
For $\mathbb{T}_{d}^{+}=\mathbb{N}^{d}$, let
$\mathbf{W}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})=\sqrt{\boldsymbol{m}}\sqrt{\boldsymbol{m}}^{\top}$
be the $d\times d$ matrix of rank 1. Then,
$\boldsymbol{\Sigma}_{\boldsymbol{X}}\mathbf{W}^{+}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})$
of (2.2) is also of rank 1 and has only one positive eigenvalue, denoted by
$\mathrm{GDI}(\boldsymbol{X}):=\frac{\sqrt{\mathbb{E}\boldsymbol{X}}^{\top}\,(\mathrm{cov}\boldsymbol{X})\sqrt{\mathbb{E}\boldsymbol{X}}}{\mathbb{E}\boldsymbol{X}^{\top}\mathbb{E}\boldsymbol{X}}\gtreqqless
1$ (2.3)
and called generalized dispersion index of $\boldsymbol{X}$ compared to
$\mathbf{Y}\sim\mathscr{P}_{d}(\mathbb{E}\mathbf{X})$ with
$\mathbb{E}\mathbf{Y}=\mathbb{E}\mathbf{X}=\boldsymbol{m}$ ([25]). For $d=1$,
$\mathrm{GDI}(X_{1})=\mathrm{var}X_{1}/\mathbb{E}X_{1}=\mathrm{DI}(X_{1})$ is
the (Fisher) dispersion index with respect to the Poisson distribution. To
derive this interpretation of GDI, we successively decompose the denominator
of (2.3) as
$\mathbb{E}\boldsymbol{X}^{\top}\mathbb{E}\boldsymbol{X}=\sqrt{\mathbb{E}\boldsymbol{X}}^{\top}\,(\mathrm{diag}\mathbb{E}\boldsymbol{X})\sqrt{\mathbb{E}\boldsymbol{X}}=[(\mathrm{diag}\sqrt{\mathbb{E}\boldsymbol{X}})\\!\sqrt{\mathbb{E}\boldsymbol{X}}]^{\top}(\boldsymbol{I}_{d})[(\mathrm{diag}\sqrt{\mathbb{E}\boldsymbol{X}})\\!\sqrt{\mathbb{E}\boldsymbol{X}}]$
(2.4)
and the numerator of (2.3) by using also (2.1) as
$\sqrt{\mathbb{E}\boldsymbol{X}}^{\top}\,(\mathrm{cov}\boldsymbol{X})\sqrt{\mathbb{E}\boldsymbol{X}}=[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\sqrt{\mathbb{E}\boldsymbol{X}}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{X}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\sqrt{\mathbb{E}\boldsymbol{X}}].$
Thus, $\mathrm{GDI}(\boldsymbol{X})$ makes it possible to compare the full
variability of $\boldsymbol{X}$ (in the numerator) with respect to its
expected uncorrelated Poissonian variability (in the denominator) which
depends only on $\mathbb{E}\boldsymbol{X}$. In other words, the count random
vector $\mathbf{X}$ is over- (equi- and under-dispersed) with respect to
$\mathscr{P}_{d}(\mathbb{E}\mathbf{X})$ if $\mathrm{GDI}(\boldsymbol{X})>1$
($\mathrm{GDI}(\boldsymbol{X})=1$ and $\mathrm{GDI}(\boldsymbol{X})<1$,
respectively). This is the multivariate generalization of the well-known (univariate) Fisher dispersion index, introduced by [25]. See, e.g., [4, 25] for illustrative examples. Also, we can modify $\mathrm{GDI}(\boldsymbol{X})$ to
$\mathrm{MDI}(\boldsymbol{X})$, as marginal dispersion index, by replacing
$\mathrm{cov}\boldsymbol{X}$ in (2.3) with
$\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}}$ to obtain dispersion
information only coming from the margins of $\boldsymbol{X}$.
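As an illustration, the empirical version of (2.3) is straightforward to compute from an $n\times d$ data matrix; here is a minimal NumPy sketch of ours.

```python
import numpy as np

def gdi(X: np.ndarray) -> float:
    """Empirical generalized dispersion index (2.3) for an (n x d) count
    sample: sqrt(mean)' cov sqrt(mean) / (mean' mean)."""
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    r = np.sqrt(m)
    return float(r @ S @ r / (m @ m))

rng = np.random.default_rng(0)
X = rng.poisson(lam=[2.0, 5.0], size=(100_000, 2))  # independent Poisson margins
print(gdi(X))  # close to 1: equi-dispersion with respect to P_d(EX)
```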
More generally, for two count random vectors $\boldsymbol{X}$ and
$\boldsymbol{Y}$ on the same support
$\mathbb{T}_{d}^{+}\subseteq\mathbb{N}^{d}$ with
$\mathbb{E}\boldsymbol{X}=\mathbb{E}\boldsymbol{Y}$ and
$\mathrm{GDI}(\boldsymbol{Y})>0$, the relative dispersion index is defined by
$\mathrm{RDI}_{\boldsymbol{Y}}(\boldsymbol{X}):=\frac{\mathrm{GDI}(\boldsymbol{X})}{\mathrm{GDI}(\boldsymbol{Y})}=\frac{[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\sqrt{\mathbb{E}\boldsymbol{X}}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{X}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\sqrt{\mathbb{E}\boldsymbol{X}}]}{[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{Y}})\sqrt{\mathbb{E}\boldsymbol{Y}}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{Y}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{Y}})\sqrt{\mathbb{E}\boldsymbol{Y}}]}\gtreqqless
1;$ (2.5)
i.e., the over- (equi- and under-dispersion) of $\boldsymbol{X}$ compared to
$\boldsymbol{Y}$ is realized if
$\mathrm{GDI}(\boldsymbol{X})>\mathrm{GDI}(\boldsymbol{Y})$
($\mathrm{GDI}(\boldsymbol{X})=\mathrm{GDI}(\boldsymbol{Y})$ and
$\mathrm{GDI}(\boldsymbol{X})<\mathrm{GDI}(\boldsymbol{Y})$, respectively).
Obviously, GDI is a particular case of RDI, which allows a general reference other than $\mathscr{P}_{d}$. Consequently, many properties of GDI are easily extended to RDI.
### 2.2 Relative Variation Indexes for Semicontinuous Distributions
Assume here that $\mathbb{T}_{d}^{+}=[0,\infty)^{d}$ and $\mathbf{W}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})=\boldsymbol{m}\boldsymbol{m}^{\top}$, another $d\times d$ matrix of rank 1. Then, $\boldsymbol{\Sigma}_{\boldsymbol{X}}\mathbf{W}^{+}_{F_{\boldsymbol{Y}}}(\boldsymbol{m})$ of (2.2) is also of rank 1. Similar to (2.3), the generalized variation index of
$\boldsymbol{X}$ compared to $\mathscr{E}_{d}(\mathbb{E}\mathbf{X})$ is
defined by
$\mathrm{GVI}(\boldsymbol{X}):=\frac{\mathbb{E}\boldsymbol{X}^{\top}\,(\mathrm{cov}\boldsymbol{X})\;\mathbb{E}\boldsymbol{X}}{(\mathbb{E}\boldsymbol{X}^{\top}\mathbb{E}\boldsymbol{X})^{2}}\gtreqqless
1;$ (2.6)
i.e., $\mathbf{X}$ is over- (equi- and under-varied) with respect to
$\mathscr{E}_{d}(\mathbb{E}\mathbf{X})$ if $\mathrm{GVI}(\boldsymbol{X})>1$
($\mathrm{GVI}(\boldsymbol{X})=1$ and $\mathrm{GVI}(\boldsymbol{X})<1$,
respectively); see [31]. Remark that when $d=1$,
$\mathrm{GVI}(X_{1})=\mathrm{var}X_{1}/(\mathbb{E}X_{1})^{2}=\mathrm{VI}(X_{1})$
is the univariate (Jørgensen) variation index which is recently introduced by
Abid et al. [2]. From (2.4) and using again (2.1) for rewritting the numerator
of (2.6) as
$\mathbb{E}\boldsymbol{X}^{\top}\,(\mathrm{cov}\boldsymbol{X})\;\mathbb{E}\boldsymbol{X}=[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\mathbb{E}\boldsymbol{X}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{X}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\mathbb{E}\boldsymbol{X}],$
$\mathrm{GVI}(\boldsymbol{X})$ of (2.6) can be interpreted as the ratio of the
full variability of $\boldsymbol{X}$ with respect to its expected uncorrelated
exponential $\mathscr{E}_{d}(\mathbb{E}\mathbf{X})$ variability which depends
only on $\mathbb{E}\boldsymbol{X}$. Similar to $\mathrm{MDI}(\boldsymbol{X})$,
we can define $\mathrm{MVI}(\boldsymbol{X})$ from
$\mathrm{GVI}(\boldsymbol{X})$. See [31] for properties, numerous examples and
numerical illustrations.
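Analogously, the empirical version of (2.6) can be sketched as follows (again ours, for illustration; the comonotone construction in the example has covariance $\boldsymbol{\mu}\boldsymbol{\mu}^{\top}$, for which (2.6) equals 1 exactly).

```python
import numpy as np

def gvi(X: np.ndarray) -> float:
    """Empirical generalized variation index (2.6) for an (n x d) sample:
    mean' cov mean / (mean' mean)^2."""
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    return float(m @ S @ m / (m @ m) ** 2)

rng = np.random.default_rng(1)
mu = np.array([1.5, 0.5])
R = rng.exponential(scale=1.0, size=(100_000, 1))
X = R * mu  # comonotone exponential margins: cov(X) = mu mu'
print(gvi(X))  # close to 1: equi-variation under (2.6)
```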
The relative variation index is defined, for two semicontinuous random vectors
$\boldsymbol{X}$ and $\boldsymbol{Y}$ on the same support
$\mathbb{T}_{d}^{+}=[0,\infty)^{d}$ with
$\mathbb{E}\boldsymbol{X}=\mathbb{E}\boldsymbol{Y}$ and
$\mathrm{GVI}(\boldsymbol{Y})>0$, by
$\mathrm{RVI}_{\boldsymbol{Y}}(\boldsymbol{X}):=\frac{\mathrm{GVI}(\boldsymbol{X})}{\mathrm{GVI}(\boldsymbol{Y})}=\frac{[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\mathbb{E}\boldsymbol{X}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{X}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{X}})\mathbb{E}\boldsymbol{X}]}{[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{Y}})\mathbb{E}\boldsymbol{Y}]^{\top}(\boldsymbol{\rho}_{\boldsymbol{Y}})\,[(\mathrm{diag}\\!\sqrt{\mathrm{var}\boldsymbol{Y}})\mathbb{E}\boldsymbol{Y}]}\gtreqqless
1;$ (2.7)
i.e., the over- (equi- and under-variation) of $\boldsymbol{X}$ compared to
$\boldsymbol{Y}$ is carried out if
$\mathrm{GVI}(\boldsymbol{X})>\mathrm{GVI}(\boldsymbol{Y})$
($\mathrm{GVI}(\boldsymbol{X})=\mathrm{GVI}(\boldsymbol{Y})$ and
$\mathrm{GVI}(\boldsymbol{X})<\mathrm{GVI}(\boldsymbol{Y})$, respectively). Of
course, RVI generalizes GVI for multivariate semicontinuous distributions. For instance, one may refer to [31] for more details on its discriminating power in multivariate parametric models from the first two moments.
## 3 Multivariate Orthant Associated Kernels
Nonparametric techniques through associated kernels represent an alternative
approach for multivariate orthant data. Let
$\textbf{X}_{1},\ldots,\textbf{X}_{n}$ be independent and identically
distributed (iid) nonnegative orthant $d$-variate random vectors with an
unknown joint pdmf $f$ on $\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$, for
$d\geq 1$. Then the multivariate associated kernel estimator
$\widetilde{f}_{n}$ of $f$ is expressed as
$\widetilde{f}_{n}(\mathbf{x})=\frac{1}{n}\sum_{i=1}^{n}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i}),~{}~{}~{}\forall\mathbf{x}=(x_{1},\ldots,x_{d})^{\top}\in\mathbb{T}_{d}^{+},$
(3.1)
where $\mathbf{H}$ is a given $d\times d$ bandwidth matrix (i.e., symmetric
and positive definite) such that
$\mathbf{H}\equiv\mathbf{H}_{n}\rightarrow\mathbf{0}_{\mathbf{d}}$ (the
$d\times d$ null matrix) as $n\rightarrow\infty$, and
$K_{\mathbf{x},\mathbf{H}}(\cdot)$ is a multivariate (orthant) associated
kernel, parameterized by $\mathbf{x}$ and $\mathbf{H}$; see, e.g., [29]. More
precisely, we have the following refined definition.
###### Definition 3.1
Let $\mathbb{T}_{d}^{+}$ be the support of the pdmf to be estimated,
$\mathbf{x}\in\mathbb{T}_{d}^{+}$ a target vector and $\mathbf{H}$ a bandwidth matrix. A
parameterized pdmf $\mathbf{K}_{\mathbf{x},\mathbf{H}}(\cdot)$ on support
$\mathbb{S}_{\mathbf{x},\mathbf{H}}\subseteq\mathbb{T}_{d}^{+}$ is called a
``multivariate orthant associated kernel'' if the following conditions are
satisfied:
$\mathbf{x}\in\mathbb{S}_{\mathbf{x},\mathbf{H}},\quad\mathbb{E}\mathcal{Z}_{\mathbf{x},\mathbf{H}}=\mathbf{x}+\mathbf{A}(\mathbf{x},\mathbf{H})\to\mathbf{x}\quad\text{and}\quad\mathrm{cov}\mathcal{Z}_{\mathbf{x},\mathbf{H}}=\mathbf{B}(\mathbf{x},\mathbf{H})\to\mathbf{0}_{d}^{+},$
where $\mathcal{Z}_{\mathbf{x},\mathbf{H}}$ denotes the corresponding orthant
random vector with pdmf $\mathbf{K}_{\mathbf{x},\mathbf{H}}$, the vector
$\mathbf{A}(\mathbf{x},\mathbf{H})\to\mathbf{0}$ (the $d$-dimensional null
vector) and the positive definite matrix
$\mathbf{B}(\mathbf{x},\mathbf{H})\to\mathbf{0}_{d}^{+}$ as
$\mathbf{H}\to\mathbf{0}_{d}$ (the $d\times d$ null matrix), and
$\mathbf{0}_{d}^{+}$ stands for a symmetric matrix with entries
$u_{ij}\in[0,1)$ for $i,j=1,\dots,d$.
This definition covers the univariate count case of [26, 33] and
encompasses the multivariate one of [29]. Choosing an orthant associated
kernel satisfying
$\lim\limits_{\mathbf{H}\rightarrow\mathbf{0}_{\mathbf{d}}}\mathrm{Cov}(\mathcal{Z}_{\mathbf{x},\mathbf{H}})=\mathbf{0}_{\mathbf{d}}$
ensures the convergence of the corresponding estimator, which is then said to
be of second order. Otherwise, when some $u_{ij}\in(0,1)$ of Definition 3.1
stays in a right neighbourhood of $0$, this convergence is not guaranteed and
the estimator is called a consistent first-order smoother; see, e.g., [26] for
discrete kernels. In general, $d$-under-dispersed count associated kernels are
appropriate for both small and moderate sample sizes; see, e.g., [26] for
univariate cases. The selection of the bandwidth matrix $\mathbf{H}$ is
crucial because it controls both the degree of smoothing and the orientation
of the kernel. A useful simplification is obtained by considering a diagonal
matrix $\mathbf{H}=\mathbf{diag}_{d}(h_{j})$. Since it is challenging to
construct a full multivariate orthant distribution
$\mathbf{K}_{\mathbf{x},\mathbf{H}}(\cdot)$ for building a smoother, several
authors suggest the product of univariate orthant associated kernels,
$\mathbf{K}_{\mathbf{x},\mathbf{H}}(\cdot)=\prod_{j=1}^{d}K_{x_{j},h_{j}}(\cdot),$
(3.2)
where $K_{x_{j},h_{j}}$, $j=1,\ldots,d$, belong either to the same family or
to different families of univariate orthant associated kernels; a sketch of
the resulting estimator is given below. The two following subsections
summarize discrete and semicontinuous univariate associated kernels.
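To fix ideas, the following minimal sketch in base R implements (3.1) with the product kernel (3.2), using the gamma kernel of Section 5.1 coordinate-wise as an illustrative choice; the function name `f_tilde` is ours.

```r
# Minimal sketch of (3.1) with the product kernel (3.2): x is the target
# vector (length d), X the n x d data matrix, h the diagonal of H.
f_tilde <- function(x, X, h) {
  K_prod <- apply(X, 1, function(Xi) prod(dgamma(Xi, shape = 1 + x / h, scale = h)))
  mean(K_prod)
}
# Example on simulated bivariate semicontinuous data:
set.seed(1)
X <- cbind(rexp(100, rate = 1), rexp(100, rate = 2))
f_tilde(c(1, 0.5), X, h = c(0.2, 0.2))
```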
Before showing some main properties of the associated kernel estimator (3.1),
let us recall that the family of $d$-variate classical (symmetric) kernels
$\mathbf{K}$ on $\mathbb{S}_{d}\subseteq\mathbb{R}^{d}$ (e.g., [47, 49, 60])
can also be presented as (classical) associated kernels. Indeed, from (3.1)
and writing for instance
$\mathbf{K}_{\mathbf{x},\mathbf{H}}(\cdot)=(\det\mathbf{H})^{-1/2}\mathbf{K}\left[\mathbf{H}^{-1/2}(\mathbf{x}-\cdot)\right],$
where ``$\det$'' is the determinant operator, one has
$\mathbb{S}_{\mathbf{x},\mathbf{H}}=\mathbf{x}-\mathbf{H}^{1/2}\mathbb{S}_{d}$,
$\mathbf{A}(\mathbf{x},\mathbf{H})=\mathbf{0}$ and
$\mathbf{B}(\mathbf{x},\mathbf{H})=\mathbf{H}^{1/2}\boldsymbol{I}_{d}\mathbf{H}^{1/2}=\mathbf{H}$.
In general, one uses the classical (associated) kernels for smoothing
continuous data or a pdf with support $\mathbb{T}_{d}=\mathbb{R}^{d}$.
The purely nonparametric estimator (3.1) with a multivariate associated kernel,
$\widetilde{f}_{n}$ of $f$, is generally defined up to a normalizing
constant $C_{n}$. Several simulation studies (e.g., [29, Table 3.1]) have shown
that $C_{n}=C_{n}(\mathbf{K},\mathbf{H})$ (depending on samples, associated
kernels and bandwidths) is approximately $1$. Without loss of generality,
we assume here that $C_{n}=1$, as for all classical (associated) kernel estimators
of a pdf. The following proposition characterises the mean behaviour and the
variability of $C_{n}$ through the integrated bias and integrated variance of
$\widetilde{f}_{n}$, respectively. In what follows, we denote by
$\boldsymbol{\nu}$ the reference measure (Lebesgue or counting) on the
nonnegative orthant set $\mathbb{T}_{d}^{+}$ and also on any set
$\mathbb{T}_{d}\subseteq\mathbb{R}^{d}$.
###### Proposition 3.2
Let
$C_{n}:=\int_{\mathbb{T}_{d}}\widetilde{f}_{n}(\mathbf{x})\boldsymbol{\nu}(d\mathbf{x})=C_{n}(\mathbf{K},\mathbf{H})$.
Then, for all $n\geq 1$:
$\mathbb{E}(C_{n})=1+\int_{\mathbb{T}_{d}}\mathrm{Bias}\{\widetilde{f}_{n}(\mathbf{x})\}\boldsymbol{\nu}(d\mathbf{x})\quad\text{and}\quad\mathrm{var}(C_{n})=\int_{\mathbb{T}_{d}}\mathrm{var}\{\widetilde{f}_{n}(\mathbf{x})\}\boldsymbol{\nu}(d\mathbf{x}).$
###### Proof 1
Let $n\geq 1$. One successively has
$\mathbb{E}(C_{n})=\int_{\mathbb{T}_{d}}\left[f(\mathbf{x})+\mathbb{E}\\{\widetilde{f}_{n}(\mathbf{x})\\}-f(\mathbf{x})\right]\boldsymbol{\nu}(d\mathbf{x})=\int_{\mathbb{T}_{d}}f(\mathbf{x})\boldsymbol{\nu}(d\mathbf{x})+\int_{\mathbb{T}_{d}}\left[\mathbb{E}\\{\widetilde{f}_{n}(\mathbf{x})\\}-f(\mathbf{x})\right]\boldsymbol{\nu}(d\mathbf{x}),$
which leads to the first result because $f$ is a pdmf on $\mathbb{T}_{d}$. The
second result on $\mathrm{var}(C_{n})$ is trivial.
The following general result is easily deduced from Proposition 3.2. To our
knowledge, it appears to be new and interesting in the framework of pdmf
(associated) kernel estimators.
###### Corollary 3.3
If $C_{n}=1$, for all $n\geq 1$, then:
$\int_{\mathbb{T}_{d}}\mathrm{Bias}\\{\widetilde{f}_{n}(\mathbf{x})\\}\boldsymbol{\nu}(d\mathbf{x})=0$
and
$\int_{\mathbb{T}_{d}}\mathrm{var}\\{\widetilde{f}_{n}(\mathbf{x})\\}\boldsymbol{\nu}(d\mathbf{x})=0$.
In particular, Corollary 3.3 holds for all classical (associated) kernel
estimators. The two following properties on the corresponding orthant
multivariate associated kernels shall be needed subsequently.
(K1) The second moment of $\mathbf{K}_{\mathbf{x},\mathbf{H}}$ exists:
$\mu_{j}^{2}(\mathbf{K}_{\mathbf{x},\mathbf{H}}):=\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}}\bigcap\mathbb{T}_{d}^{+}}u_{j}^{2}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{u})\boldsymbol{\nu}(d\mathbf{u})<\infty,\;\;\;\forall
j=1,\ldots,d.$
(K2) There exist a largest real number
$r=r(\mathbf{K}_{\mathbf{x},\mathbf{H}})>0$ and a constant $0<c(\mathbf{x})<\infty$ such
that
$||\mathbf{K}_{\mathbf{x},\mathbf{H}}||_{2}^{2}:=\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}}\bigcap\mathbb{T}_{d}^{+}}\\{\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{u})\\}^{2}\boldsymbol{\nu}(d\mathbf{u})\leq
c(\mathbf{x})(\det\mathbf{H})^{-r}.$
In fact, (K1) is a necessary condition for smoothers to have a finite variance,
and (K2) can be deduced from the continuous univariate cases (e.g., [24]) as
well as from the discrete ones (e.g., [26]).
We now establish both general asymptotic behaviours of the pointwise bias and
variance of the nonparametric estimator (3.1) on the nonnegative orthant set
$\mathbb{T}_{d}^{+}$; its proof is given in Appendix 7.2. For that, we need
the following assumptions by endowing $\mathbb{T}_{d}^{+}$ with the Euclidean
norm $||\cdot||$ and the associated inner product $\langle\cdot,\cdot\rangle$
such that $\langle\mathbf{a},\mathbf{b}\rangle=\mathbf{a}^{\top}\mathbf{b}$.
(a1) The unknown pdmf $f$ is a bounded function which is twice differentiable
(or admits second-order finite differences) on $\mathbb{T}_{d}^{+}$;
$\nabla f(\mathbf{x})$ and $\mathcal{H}f(\mathbf{x})$ denote, respectively,
the gradient vector (in the continuous or discrete sense) and the
corresponding Hessian matrix of the function $f$ at $\mathbf{x}$.
(a2) There exists a positive real number $r>0$ such that
$||K_{\mathbf{x},\mathbf{H}_{n}}||_{2}^{2}(\det\mathbf{H}_{n})^{r}\to
c_{1}(\mathbf{x})>0$ as $n\to\infty$.
Note that (a2) is obviously a consequence of (K2).
###### Proposition 3.4
Under assumption (a1) on $f$, the estimator $\widetilde{f}_{n}$ in
(3.1) of $f$ satisfies
$\mathrm{Bias}\\{\widetilde{f}_{n}(\mathbf{x})\\}=\left\langle\nabla
f(\mathbf{x}),\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right\rangle+\frac{1}{2}\operatorname{tr}\left\\{{\cal
H}f\left(\mathbf{x}\right)\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})+\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)^{\mathsf{T}}\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right]\right\\}+o\left\\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\\},$
(3.3)
for any $\mathbf{x}\in\mathbb{T}_{d}^{+}$. Moreover, if (a2) holds then
$\mathrm{var}\\{\widetilde{f}_{n}(\mathbf{x})\\}=\frac{1}{n}f(\mathbf{x})||\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}||_{2}^{2}+o\left[\frac{1}{n(\det\mathbf{H}_{n})^{r}}\right].$
(3.4)
For $d=1$ and according to the proof of Proposition 3.4, one can easily write
$\mathbb{E}\widetilde{f}_{n}(x)$ as follows:
$\mathbb{E}\widetilde{f}_{n}(x)=\mathbb{E}f(\mathcal{Z}_{x,h})=\sum_{k\geq 0}\frac{1}{k!}\mathbb{E}\left(\mathcal{Z}_{x,h}-\mathbb{E}\mathcal{Z}_{x,h}\right)^{k}f^{(k)}(\mathbb{E}\mathcal{Z}_{x,h}),$
where $f^{(k)}$ is the $k$th derivative or finite difference of the pdmf $f$,
provided the centered moments of order $k\geq 2$ of $\mathcal{Z}_{x,h}$ exist.
Concerning bandwidth matrix selection in the multivariate associated kernel
estimator (3.1), one generally uses the cross-validation technique (e.g., [26,
27, 28, 29, 56]). However, it is tedious and less precise. Many papers have
recently proposed Bayesian approaches (e.g., [7, 8, 50, 52, 59, 61] and
references therein). In particular, they recommend local Bayesian selectors for
discrete smoothing of a pmf (e.g., [7, 8, 23]) and adaptive ones for continuous
smoothing of a pdf (e.g., [50, 52, 59]).
Denote by $\mathcal{M}$ the set of positive definite bandwidth matrices
(restricted to diagonal ones when using (3.2)) and let $\pi$ be a given
suitable prior distribution on $\mathcal{M}$. Under the squared error loss
function, the Bayes estimator of
$\mathbf{H}$ is the mean of the posterior distribution. Then, the local
Bayesian bandwidth at the target $\mathbf{x}\in\mathbb{T}_{d}^{+}$ takes the
form
$\widetilde{\mathbf{H}}(\mathbf{x}):=\int_{\mathcal{M}}\mathbf{H}\pi(\mathbf{H})\widetilde{f}_{n}(\mathbf{x})d\mathbf{H}\left[\int_{\mathcal{M}}\widetilde{f}_{n}(\mathbf{x})\pi(\mathbf{H})d\mathbf{H}\right]^{-1},\;\;\mathbf{x}\in\mathbb{T}_{d}^{+},$
(3.5)
and the adaptive Bayesian bandwidth for each observation
$\mathbf{X}_{i}\in\mathbb{T}_{d}^{+}$ of $\mathbf{X}$ is given by
$\widetilde{\mathbf{H}}_{i}:=\int_{\mathcal{M}}\mathbf{H}_{i}\pi(\mathbf{H}_{i})\widetilde{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i})d\mathbf{H}_{i}\left[\int_{\mathcal{M}}\widetilde{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i})\pi(\mathbf{H}_{i})d\mathbf{H}_{i}\right]^{-1},\;\;i=1,\ldots,n,$
(3.6)
where $\widetilde{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i})$ is the leave-one-
out associated kernel estimator of $f(\mathbf{X}_{i})$ deduced from (3.1) as
$\widetilde{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i}):=\frac{1}{n-1}\sum_{\ell=1,\ell\neq
i}^{n}\mathbf{K}_{\mathbf{X}_{i},\mathbf{H}_{i}}(\mathbf{X}_{\ell}).$ (3.7)
Note that the (global) cross-validation bandwidth matrix
$\widetilde{\mathbf{H}}_{CV}$ and the global Bayesian one
$\widetilde{\mathbf{H}}_{B}$ are obtained, respectively, from (3.7) as
$\widetilde{\mathbf{H}}_{CV}:=\mathrm{arg}\min_{\mathbf{H}\in\mathcal{M}}\left[\int_{\mathbb{T}_{d}^{+}}\\{\widetilde{f}_{n}(\mathbf{x})\\}^{2}\boldsymbol{\nu}(d\mathbf{x})-\frac{2}{n}\sum_{i=1}^{n}\widetilde{f}_{n,\mathbf{H},-i}(\mathbf{X}_{i})\right]$
and
$\widetilde{\mathbf{H}}_{B}:=\int_{\mathcal{M}}\mathbf{H}\,\pi(\mathbf{H})\prod_{i=1}^{n}\widetilde{f}_{n,\mathbf{H},-i}(\mathbf{X}_{i})\,d\mathbf{H}\left[\int_{\mathcal{M}}\pi(\mathbf{H})\prod_{i=1}^{n}\widetilde{f}_{n,\mathbf{H},-i}(\mathbf{X}_{i})\,d\mathbf{H}\right]^{-1}.$
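As an illustration, here is a minimal sketch in base R of the least-squares cross-validation criterion above in the univariate case ($d=1$), with the gamma kernel of Section 5.1; the names `cv_score` and `h_cv` are ours, and the integral over $\mathbb{T}_{1}^{+}$ is truncated numerically.

```r
# Minimal sketch (d = 1): cross-validation bandwidth for the gamma kernel.
gamma_kernel <- function(x, h, u) dgamma(u, shape = 1 + x / h, scale = h)
cv_score <- function(h, x) {
  n     <- length(x)
  f_til <- function(u) mean(gamma_kernel(u, h, x))          # estimator (3.1)
  int2  <- integrate(Vectorize(function(u) f_til(u)^2),
                     lower = 0, upper = 3 * max(x))$value   # integral term
  loo   <- sapply(seq_len(n),                               # leave-one-out term
                  function(i) mean(gamma_kernel(x[i], h, x[-i])))
  int2 - 2 * mean(loo)
}
set.seed(1)
x    <- rexp(50, rate = 0.02)
h_cv <- optimize(cv_score, interval = c(0.01, 50), x = x)$minimum
```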
### 3.1 Discrete Associated Kernels
We present only three main and useful families of univariate discrete
associated kernels for (3.2) that satisfy (K1) and (K2).
###### Example 3.5 (categorical)
For a fixed number of categories $c\in\{2,3,\ldots\}$ and
$\mathbb{T}_{1}^{+}=\{0,1,\ldots,c-1\}$, one defines the Dirac discrete
uniform (DirDU) kernel by
$K_{x,h}^{DirDU}(u)=(1-h)^{\mathds{1}_{u=x}}\left(\frac{h}{c-1}\right)^{1-\mathds{1}_{u=x}},$
for $x\in\\{0,1,\ldots,c-1\\}$, $h\in(0,1]$, with
$\mathbb{S}_{x}:=\\{0,1,\ldots,c-1\\}=\mathbb{T}_{1}^{+}$,
$A(x,h)=h\\{c/2-x-x/(c-1)\\}$ and
$B(x,h)=h\\{c(2c-1)/6+x^{2}-xc+x^{2}/(c-1)\\}-h^{2}\\{c/2-x-x/(c-1)\\}^{2}$.
It was introduced in the multivariate setting by Aitchison and Aitken [3] and
investigated as a discrete associated kernel, symmetric around the target
$x$, by [26] in the univariate case; see [8] for a Bayesian approach in the
multivariate setting. Note that its normalizing constant is always
$C_{n}=1$.
###### Example 3.6 (symmetric count)
For fixed $m\in\mathbb{N}$ and $\mathbb{T}_{1}^{+}\subseteq\mathbb{Z}$, the
symmetric count triangular kernel is expressed as
$K_{x,h}^{SCTriang}(u)=\frac{(m+1)^{h}-|u-x|^{h}}{P(m,h)}\mathds{1}_{\\{x,x\pm
1,\ldots,x\pm m\\}}(u),$
for $x\in\mathbb{T}_{1}^{+}$, $h>0$, with $\mathbb{S}_{x}:=\\{x,x\pm
1,\ldots,x\pm m\\}$, $P(m,h)=(2m+1)(m+1)^{h}-2\sum_{\ell=0}^{m}\ell^{h}$,
$A(x,h)=0$ and
$\displaystyle B(x,h)$ $\displaystyle=$
$\displaystyle\frac{1}{P(m,h)}\left\\{\frac{m(2m+1)(m+1)^{h+1}}{3}-2\sum_{\ell=0}^{m}\ell^{h+2}\right\\}$
$\displaystyle\simeq$ $\displaystyle
h\left\\{\frac{m(2m^{2}+3m+1)}{3}\log(m+1)-2\sum_{\ell=1}^{m}\ell^{2}\log\ell\right\\}+O(h^{2}),$
where $\simeq$ holds for $h$ sufficiently small.
It was first proposed by Kokonendji et al. [28] and then completed in
[32] with an asymmetric version that solves the boundary bias problem in
count kernel estimation.
###### Example 3.7 (standard count)
For $\mathbb{T}_{1}^{+}\subseteq\mathbb{N}$, the standard binomial kernel is
defined by
$K_{x,h}^{Binomial}(u)=\frac{(x+1)!}{u!(x+1-u)!}\left(\frac{x+h}{x+1}\right)^{u}\left(\frac{1-h}{x+1}\right)^{x+1-u}\mathds{1}_{\\{0,1,\ldots,x+1\\}}(u),$
for $x\in\mathbb{T}_{1}^{+}$, $h\in(0,1]$, with
$\mathbb{S}_{x}:=\\{0,1,\ldots,x+1\\}$, $A(x,h)=h$ and
$B(x,h)=(x+h)(1-h)/(x+1)$.
Here $B(x,h)$ tends to $x/(x+1)\in[0,1)$ as $h\to 0$, so the refined Definition
3.1 holds. This first-order, under-dispersed binomial kernel was introduced
in [26] and is very useful for smoothing count distributions with
small or moderate sample sizes; see, e.g., [7, 8, 23] for Bayesian approaches
and some references therein. In addition, we have the standard Poisson kernel,
where $K_{x,h}^{Poisson}$ follows the equi-dispersed Poisson distribution with
mean $x+h$, $\mathbb{S}_{x}:=\mathbb{N}=:\mathbb{T}_{1}^{+}$, $A(x,h)=h$ and
$B(x,h)=x+h\to x\in\mathbb{N}$ as $h\to 0$. Recently, Huang et al. [17]
introduced the Conway-Maxwell-Poisson kernel by exploiting its under-dispersed
part and its second-order consistency, which can be improved via the mode-
dispersion approach of [37]; see also [16, Section 2.4]. A sketch of the
binomial-kernel smoother follows.
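Here is a minimal sketch in base R of the binomial kernel of Example 3.7 and of the resulting count smoother (3.1); the function names are ours.

```r
# Binomial kernel K_{x,h}: the Binomial(x + 1, (x + h)/(x + 1)) pmf in u,
# and the count smoother (3.1) at a target x for a count sample obs.
binom_kernel  <- function(x, h, u) dbinom(u, size = x + 1, prob = (x + h) / (x + 1))
f_tilde_count <- function(x, obs, h) mean(binom_kernel(x, h, obs))
# Example: smoothing a small count sample on the targets x = 0, ..., 10.
set.seed(1)
obs <- rpois(30, lambda = 3)
sapply(0:10, f_tilde_count, obs = obs, h = 0.1)
```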
### 3.2 Semicontinuous Associated Kernels
We now point out eight main and useful families of univariate semicontinuous
associated kernels for (3.2) that satisfy (K1) and (K2). These are the gamma
(G) kernel of [9] (see also [14]), the inverse gamma (Ig) (see also [42]) and
log-normal 2 (LN2) kernels of [37], the inverse Gaussian (IG) and reciprocal
inverse Gaussian (RIG) kernels of [46] (see also [18]), the log-normal 1 (LN1)
and Birnbaum-Saunders (BS) kernels of [19] (see also [38, 41]), and the Weibull
(W) kernel of [45] (see also [41]). It is noteworthy that the link between LN2
of [37] and LN1 of [19] is obtained by changing $(x,h)$ to
$(x\exp(h^{2}),\,2\sqrt{\log(1+h)})$. Several other semicontinuous kernels could
be constructed via the mode-dispersion technique of [37] from any unimodal
semicontinuous distribution having a dispersion parameter. Recently, the scaled
inverse chi-squared kernel of [11] has been added to this list.
Table 4.1 summarizes these eight semicontinuous univariate associated kernels
with their ingredients of Definition 3.1 and an order of preference (O.)
obtained graphically. This heuristic classification (O.) is based on the
behaviour of the shape and scale of the associated kernel around the target
$x$, both at the edge and inside the support; see Figure 4.1 for the edge and
Figure 4.2 for the inside. Among these eight kernels, we thus recommend the
first five univariate associated kernels of Table 4.1 for smoothing
semicontinuous data. This choice could be refined for a given dataset; see,
e.g., [35] for cumulative functions.
## 4 Semiparametric Kernel Estimation with $d$-Variate Parametric Start
We investigate the semiparametric orthant kernel approach which is a
compromise between the pure parametric and the nonparametric methods. This
concept was proposed by Hjort and Glad [15] for continuous data, treated by
Kokonendji et al. [27] for discrete univariate data and, recently, studied by
Kokonendji et al. [33] with an application to radiation biodosimetry.
Table 4.1: Eight semicontinuous univariate associated kernels on
$\mathbb{S}_{x,h}\subseteq[0,\infty)$, classified by ``O.''
O. | Name | $K_{x,h}(u)$ | $A(x,h)$ | $B(x,h)$
---|---|---|---|---
1 | LN2 [37] | $(uh\sqrt{2\pi})^{-1}\exp\left(-\left[\log u-\log\{x\exp(h^{2})\}\right]^{2}/2h^{2}\right)$ | $x[\exp(3h^{2}\!/2)\!-\!1]$ | $x^{2}\exp(3h^{2})[\exp(h^{2})-1]$
2 | W [45] | $[\Gamma(h)/x][u\Gamma(1+h)/x]^{1/h-1}\exp\left\{-[u\Gamma(1+h)/x]^{1/h}\right\}$ | $0$ | $x^{2}\!\left[\Gamma(1\!+\!2h)/\Gamma^{2}(1\!+\!h)\!-\!1\right]$
3 | G [9] | $h^{-1-x/h}u^{x/h}\exp(-u/h)/\Gamma(1+x/h)$ | $h$ | $(x+h)h$
4 | BS [19] | $(uh\sqrt{2\pi})^{-1}\\!\left[(xu)^{-1/2}\\!+\\!(x/u^{3})^{-1/2}\right]\exp\left[(2\\!-\\!u/x\\!-\\!x/u)/2h\right]$ | $xh/2$ | $x^{2}h(2+5h/2)/2$
5 | Ig [37] | $h^{1-1/xh}u^{-1/xh}\exp(-1/uh)/\Gamma(1/xh-1)$ | $2x^{2}h/(1-2xh)$ | $x^{3}h/[(1-3xh)(1-2xh)^{2}]$
6 | RIG [46] | $(\sqrt{2\pi uh})^{-1}\exp\left\\{[x-h][2-(x-h)/u-u/(x-h)]/2h\right\\}$ | $0$ | $(x-h)h$
7 | IG [46] | $(\sqrt{2\pi hu^{3}})^{-1}\exp\left\{[2-u/x-x/u]/2hx\right\}$ | $0$ | $x^{3}h$
8 | LN1 [19] | $(u\sqrt{8\pi\log(1+h)})^{-1}\exp\left(-[\log u-\log x]^{2}/[8\log(1+h)]\right)$ | $xh(h+2)$ | $x^{2}(1+h)^{4}[(1+h)^{4}-1]$
$\Gamma(v):=\int_{0}^{\infty}s^{v-1}\exp(-s)\,ds$ denotes the classical gamma
function with $v>0$.
Figure 4.1: Comparative graphics of the eight univariate semicontinuous
associated kernels of Table 4.1 on the edge ($x=0.3$) with $h=0.1$ and
$h=0.4$.
Figure 4.2: Comparative graphics of the eight univariate semicontinuous
associated kernels of Table 4.1 inside ($x=2.3$) with $h=0.1$ and $h=0.4$.
Without loss of generality, we here assume that any $d$-variate pdmf $f$ can
be formulated (e.g., [30] for $d=1$) as
$f(\mathbf{x})=w(\mathbf{x};\boldsymbol{\theta})\,p_{d}(\mathbf{x};\boldsymbol{\theta}),\;\;\;\forall\mathbf{x}\in\mathbb{T}_{d}^{+},$
(4.1)
where $p_{d}(\cdot;\boldsymbol{\theta})$ is the non-singular parametric part
according to a reference $d$-variate distribution with corresponding unknown
parameters $\boldsymbol{\theta}=(\theta_{1},\ldots,\theta_{k})^{\top}$ and
$w(\cdot;\boldsymbol{\theta}):=f(\cdot)/p_{d}(\cdot;\boldsymbol{\theta})$ is
the unknown orthant weight function part, to be estimated with a multivariate
orthant associated kernel. The weight function at each point can be viewed
as a local multiplicative correction factor that accommodates any
pointwise departure from the reference $d$-variate distribution. However, one
should not take the best fit among parametric models as the start distribution
in this semiparametric approach, since the corresponding weight function would
then be close to zero and reduce to noise that is inappropriate for smoothing
by an associated kernel, especially in the continuous cases.
Let $\mathbf{X}_{1},\ldots,\mathbf{X}_{n}$ be iid nonnegative orthant
$d$-variate random vectors with unknown pdmf $f$ on
$\mathbb{T}_{d}^{+}\subseteq[0,\infty)^{d}$. The semiparametric estimator of
(4.1) with (3.2) is expressed as follows:
$\displaystyle\widehat{f}_{n}(\mathbf{x})$ $\displaystyle=$ $\displaystyle
p_{d}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i})$
(4.2) $\displaystyle=$
$\displaystyle\frac{1}{n}\sum_{i=1}^{n}\frac{p_{d}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})}{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i}),\;\;\;\mathbf{x}\in\mathbb{T}_{d}^{+},$
where $\widehat{\boldsymbol{\theta}}_{n}$ is the estimated parameter of
$\boldsymbol{\theta}$. From (4.2), we then deduce the nonparametric orthant
associated kernel estimate
$\widetilde{w}_{n}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i})$
(4.3)
of the weight function $\mathbf{x}\mapsto
w(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})$, which depends on
$\widehat{\boldsymbol{\theta}}_{n}$. One can observe that Proposition 3.2 also
holds for
$\widehat{f}_{n}(\cdot)=p_{d}(\cdot;\widehat{\boldsymbol{\theta}}_{n})\widetilde{w}_{n}(\cdot;\widehat{\boldsymbol{\theta}}_{n})$.
However, we still have to prove below the analogue of Proposition 3.4; a
sketch of the estimator (4.2) in base R is given first.
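The following minimal sketch shows (4.2) with the uncorrelated exponential start used in Section 5 (maximum likelihood rates $1/\bar{X}_{j}$) and multiple gamma kernels; the name `f_hat` is ours and strictly positive observations are assumed.

```r
# Minimal sketch of the semiparametric estimator (4.2): uncorrelated
# exponential start with MLE rates 1/colMeans(X), multiple gamma kernels
# with diagonal bandwidths h; strictly positive observations are assumed.
f_hat <- function(x, X, h) {
  mu  <- 1 / colMeans(X)                       # MLE of the exponential start
  p_d <- function(z) prod(dexp(z, rate = mu))  # start density p_d(.; theta_hat)
  Ki  <- apply(X, 1, function(Xi) prod(dgamma(Xi, shape = 1 + x / h, scale = h)))
  p_d(x) * mean(Ki / apply(X, 1, p_d))         # estimator (4.2)
}
```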
### 4.1 Known $d$-Variate Parametric Model
Let $p_{d}(\cdot;\boldsymbol{\theta}_{0})$ be a fixed orthant distribution in
(4.1) with $\boldsymbol{\theta}_{0}$ known. Writing
$f(\mathbf{x})=p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\,w(\mathbf{x})$, we
estimate the nonparametric weight function $w$ by
$\widetilde{w}_{n}(\mathbf{x})=n^{-1}\sum_{i=1}^{n}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i})/p_{d}(\mathbf{X}_{i};\boldsymbol{\theta}_{0})$
with an orthant associated kernel method, resulting in the estimator
$\widehat{f}_{n}(\mathbf{x})=p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\widetilde{w}_{n}(\mathbf{x})=\frac{1}{n}\sum\limits_{i=1}^{n}\frac{p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})}{p_{d}(\mathbf{X}_{i};\boldsymbol{\theta}_{0})}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i}),\;\;\;\;\;\mathbf{x}\in\mathbb{T}_{d}^{+}.$
(4.4)
The following proposition is proven in Appendix 7.2.
###### Proposition 4.1
Under assumption (a1) on
$f(\cdot)=p_{d}(\cdot;\boldsymbol{\theta}_{0})w(\cdot)$, the estimator
$\widehat{f}_{n}(\cdot)=p_{d}(\cdot;\boldsymbol{\theta}_{0})\widetilde{w}_{n}(\cdot)$
in (4.4) of $f$ satisfies
$\displaystyle\mathrm{Bias}\\{\widehat{f}_{n}(\mathbf{x})\\}$ $\displaystyle=$
$\displaystyle
p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\left[w(\mathbf{x})-f(\mathbf{x})\\{p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\\}^{-1}+\left\langle\nabla
w(\mathbf{x}),\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right\rangle\right]$
$\displaystyle+\frac{1}{2}\;p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\left(\operatorname{tr}\left\\{{\cal
H}w\left(\mathbf{x}\right)\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})+\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)^{\mathsf{T}}\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right]\right\\}\right)$
$\displaystyle+\,o\left\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\},$
for any $\mathbf{x}\in\mathbb{T}_{d}^{+}$. Furthermore, if (a2) holds then one
has
$\mathrm{var}\\{\widehat{f}_{n}(\mathbf{x})\\}=\mathrm{var}\\{\widetilde{f}_{n}(\mathbf{x})\\}$
of (3.4).
It is expected that the bias here is quite different from that of (3.3).
### 4.2 Unknown $d$-Variate Parametric Model
Let us now consider the more realistic and practical semiparametric estimator
$\widehat{f}_{n}(\cdot)=p_{d}(\cdot;\widehat{\boldsymbol{\theta}}_{n})\widetilde{w}_{n}(\cdot;\widehat{\boldsymbol{\theta}}_{n})$
presented in (4.2) of
$f(\cdot)=p_{d}(\cdot;\boldsymbol{\theta})w(\cdot;\boldsymbol{\theta})$ in
(4.1) such that the parametric estimator $\widehat{\boldsymbol{\theta}}_{n}$
of $\boldsymbol{\theta}$ can be obtained by the maximum likelihood method; see
[15] for quite a general estimator of $\boldsymbol{\theta}$. In fact, if the
$d$-variate parametric model $p_{d}(\cdot;\boldsymbol{\theta})$ is
misspecified, then $\widehat{\boldsymbol{\theta}}_{n}$ converges in
probability to the pseudo-true value $\boldsymbol{\theta}_{0}$ satisfying
$\boldsymbol{\theta}_{0}:=\arg\min\limits_{\boldsymbol{\theta}}\int_{\mathbf{x}\in\mathbb{T}_{d}^{+}}f(\mathbf{x})\log[f(\mathbf{x})/p_{d}(\mathbf{x};\boldsymbol{\theta})]\boldsymbol{\nu}(d\mathbf{x}),$
the minimizer of the Kullback-Leibler divergence (see, e.g., [58]).
We write $p_{0}(\cdot):=p_{d}(\cdot;\boldsymbol{\theta}_{0})$ for this best
$d$-variate parametric approximant; note that, unlike in (4.4), this
$p_{0}(\cdot)$ is not explicitly available. According to [15] (see also [27]),
we can represent the proposed estimator
$\widehat{f}_{n}(\cdot)=p_{d}(\cdot;\widehat{\boldsymbol{\theta}}_{n})\widetilde{w}_{n}(\cdot;\widehat{\boldsymbol{\theta}}_{n})$
in (4.2) as
$\widehat{f}_{n}(\mathbf{x})\doteq\frac{1}{n}\sum_{i=1}^{n}\frac{p_{0}(\mathbf{x})}{p_{0}(\mathbf{X}_{i})}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i}),\;\;\;\mathbf{x}\in\mathbb{T}_{d}^{+}.$
(4.5)
Thus, the following result provides approximate bias and variance. We omit its
proof since it is analogous to the one of Proposition 4.1.
###### Proposition 4.2
Let $p_{0}(\cdot):=p_{d}(\cdot;\boldsymbol{\theta}_{0})$ be the best
$d$-variate approximant of the unknown pdmf
$f(\cdot)=p_{d}(\cdot;\boldsymbol{\theta})w(\cdot;\boldsymbol{\theta})$ as
(4.1) under the Kullback–Leibler criterion, and let
$w(\cdot):=f(\cdot)/p_{0}(\cdot)$ be the corresponding $d$-variate weight
function. As $n\to\infty$ and under assumption (a1) on $f$, the estimator
$\widehat{f}_{n}(\cdot)=p_{d}(\cdot;\widehat{\boldsymbol{\theta}}_{n})\widetilde{w}_{n}(\cdot;\widehat{\boldsymbol{\theta}}_{n})$
in (4.2) of $f$, reformulated in (4.5), satisfies
$\displaystyle\mathrm{Bias}\\{\widehat{f}_{n}(\mathbf{x})\\}$ $\displaystyle=$
$\displaystyle
p_{0}(\mathbf{x})\left[w(\mathbf{x})-f(\mathbf{x})\\{p_{0}(\mathbf{x})\\}^{-1}+\left\langle\nabla
w(\mathbf{x}),\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right\rangle\right]$
$\displaystyle+\frac{1}{2}\;p_{0}(\mathbf{x})\left(\operatorname{tr}\left\\{{\cal
H}w\left(\mathbf{x}\right)\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})+\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)^{\mathsf{T}}\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right]\right\\}\right)$
$\displaystyle+\,o\left\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\}+O(n^{-2}),$
for any $\mathbf{x}\in\mathbb{T}_{d}^{+}$. Furthermore, if (a2) holds then we
have
$\mathrm{var}\\{\widehat{f}_{n}(\mathbf{x})\\}=\mathrm{var}\\{\widetilde{f}_{n}(\mathbf{x})\\}$
of (3.4).
Once again, the bias is different from that of (3.3). Thus, the proposed
semiparametric estimator $\widehat{f}_{n}$ in (4.2) of $f$ may or may not
outperform the traditional nonparametric one $\widetilde{f}_{n}$ in (3.1). The
following subsection provides a practical solution.
### 4.3 Model Diagnostics
The estimated weight function
$\widetilde{w}_{n}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})$ given in
(4.3) provides useful information for model diagnostics. The $d$-variate
weight function $w(\cdot)$ equals one if the $d$-variate parametric start
model $p_{d}(\cdot;\boldsymbol{\theta}_{0})$ is indeed the true pdmf. Hjort
and Glad [15] proposed to check this adequacy by examining a plot of the
weight function, for various potential models, with pointwise confidence bands,
to see whether or not $w(\mathbf{x})=1$ is reasonable. See also [27, 33] for
univariate count setups.
In practice, without technical details here, we use the model diagnostics to
verify the adequacy of the model by examining a plot of
$\mathbf{x}\mapsto\widetilde{w}_{n}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})$
or of
$\widetilde{W}_{n}(\mathbf{x}):=\log\widetilde{w}_{n}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})=\log[\widehat{f}_{n}(\mathbf{x})/p_{d}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})]$
(4.6)
for all $\mathbf{x}=\mathbf{X}_{i}$, $i=1,\ldots,n$, with a pointwise
confidence band of $\pm 1.96$ for large $n$; that is, one checks how far it is
from zero. More precisely, the resulting diagnostic percent suggests a purely
nonparametric approach when it is $<5\%$, a semiparametric one when it belongs
to $[5\%,95\%]$, and a fully parametric model when it is $>95\%$. It is
noteworthy that retaining the purely nonparametric approach signals the
inadequacy of the parametric part considered; the orthant dataset is then
left unconstrained. A sketch of this diagnostic is given below.
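The following minimal sketch in base R computes (4.6), reusing the `f_hat` sketch given after (4.2) together with the same exponential start.

```r
# Minimal sketch of the diagnostic (4.6): log-weights at the observations,
# reusing f_hat from the sketch after (4.2). Values near 0 support the
# parametric start; values far from 0 indicate pointwise departures.
W_tilde <- function(X, h) {
  mu  <- 1 / colMeans(X)
  p_d <- function(z) prod(dexp(z, rate = mu))
  sapply(seq_len(nrow(X)),
         function(i) log(f_hat(X[i, ], X, h) / p_d(X[i, ])))
}
```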
## 5 Semicontinuous Examples of Application with Discussions
For a practical implementation of our approach, we propose to use the popular
multiple gamma kernels as in (3.2), selecting the adaptive Bayesian procedure
of [52] to smooth
$\widetilde{w}_{n}(\mathbf{x};\widehat{\boldsymbol{\theta}}_{n})$. Hence, we
shall gradually consider $d$-variate semicontinuous cases with $d=1,2,3$ for
real datasets. All computations and graphics have been done with the R
software [44].
### 5.1 Adaptive Bayesian Bandwidth Selection for Multiple Gamma Kernels
From Table 4.1, the function $G_{x,h}(\cdot)$ is the gamma kernel [9] given on
the support $\mathbb{S}_{x,h}=[0,\infty)=\mathbb{T}_{1}^{+}$ with $x\geq 0$
and $h>0$:
$G_{x,h}(u)=\dfrac{u^{x/h}}{\Gamma\left(1+x/h\right)h^{1+x/h}}\exp{\left(-\dfrac{u}{h}\right)}\mathds{1}_{[0,\infty)}(u),$
where $\mathds{1}_{E}$ denotes the indicator function of any given event $E$.
This gamma kernel $G_{x,h}(\cdot)$ appears to be the pdf of the gamma
distribution, denoted by $\mathcal{G}(1+x/h,h)$ with shape parameter $1+x/h$
and scale parameter $h$. The multiple gamma kernel from (3.2) is written as
$\mathbf{K}_{\mathbf{x},\mathbf{H}}(\cdot)=\prod_{j=1}^{d}G_{x_{j},h_{j}}(\cdot)$
with $\mathbf{H}=\mathrm{diag}_{d}\left(h_{j}\right)$.
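In base R, $G_{x,h}$ is thus nothing but the $\mathcal{G}(1+x/h,h)$ density, as the following one-line sketch checks.

```r
# G_{x,h} is the Gamma(shape = 1 + x/h, scale = h) pdf in u:
G <- function(u, x, h) dgamma(u, shape = 1 + x / h, scale = h)
integrate(G, lower = 0, upper = Inf, x = 2.3, h = 0.4)$value  # = 1 (pdf in u)
```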
For applying (3.6) and (3.7) in the framework of the semiparametric estimator
$\widehat{f}_{n}$ in (4.2), we assume that each component
$h_{i\ell}=h_{i\ell}(n)$, $\ell=1,\ldots,d$, of $\mathbf{H}_{i}$ has a
univariate inverse gamma prior $\mathcal{I}g(\alpha,\beta_{\ell})$
with the same shape parameter $\alpha>0$ and possibly different
scale parameters $\beta_{\ell}>0$, where
$\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{d})^{\top}$. We recall here that
the pdf of $\mathcal{I}g(\alpha,\beta_{\ell})$ with $\alpha,\beta_{\ell}>0$ is
defined by
$Ig_{\alpha,\beta_{\ell}}(u)=\frac{\beta_{\ell}^{\alpha}}{\Gamma(\alpha)}u^{-\alpha-1}\exp(-\beta_{\ell}/u)\mathds{1}_{(0,\infty)}(u),\;\;\ell=1,\ldots,d.$
(5.1)
The mean and the variance of the prior distribution (5.1) for each component
$h_{i\ell}$ of the vector $\mathbf{H}_{i}$ are
$\beta_{\ell}/(\alpha-1)$ for $\alpha>1$ and
$\beta_{\ell}^{2}/\{(\alpha-1)^{2}(\alpha-2)\}$ for $\alpha>2$, respectively.
Note that, for fixed $\beta_{\ell}>0$, $\ell=1,\ldots,d$, if
$\alpha\rightarrow\infty$ then the distribution of the bandwidth vector
$\mathbf{H}_{i}$ concentrates around the null vector $\mathbf{0}$.
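Since base R has no built-in inverse gamma density, a one-line sketch of (5.1) is:

```r
# Inverse gamma pdf Ig_{alpha, beta}(u) of (5.1), written in base R:
dinvgamma <- function(u, alpha, beta)
  beta^alpha / gamma(alpha) * u^(-alpha - 1) * exp(-beta / u) * (u > 0)
```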
From these considerations, the closed form of the posterior density and the
Bayesian estimator of $\mathbf{H}_{i}$ are given in the following proposition,
which is proven in Appendix 7.2.
###### Proposition 5.1
For fixed $i\in\{1,2,\ldots,n\}$, consider each observation
$\mathbf{X}_{i}=(X_{i1},\ldots,X_{id})^{\top}$ with its corresponding diagonal
matrix $\mathbf{H}_{i}=\mathrm{diag}_{d}\left(h_{ij}\right)$ of univariate
bandwidths, and define the subset
$\mathbb{I}_{i}=\left\{k\in\{1,\ldots,d\}\,;\,X_{ik}=0\right\}$ and its
complement
$\mathbb{I}^{c}_{i}=\left\{\ell\in\{1,\ldots,d\}\,;\,X_{i\ell}\in(0,\infty)\right\}$.
Using the inverse gamma prior $Ig_{\alpha,\beta_{\ell}}$ of (5.1) for each
component $h_{i\ell}$ of $\mathbf{H}_{i}$ in the multiple gamma estimator,
with $\alpha>1/2$ and
$\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{d})^{\top}\in(0,\infty)^{d}$,
then:
(i) the posterior density is the following weighted sum of inverse gamma densities
$\displaystyle\pi(\mathbf{H}_{i}\mid\mathbf{X}_{i})$ $\displaystyle=$
$\displaystyle\frac{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}{D_{i}(\alpha,\boldsymbol{\beta})}\sum_{j=1,j\neq
i}^{n}\frac{1}{p_{d}(\mathbf{X}_{j};\widehat{\boldsymbol{\theta}}_{n})}\left(\prod_{k\in\mathbb{I}_{i}}C_{jk}(\alpha,\beta_{k})\,Ig_{\alpha+1,X_{jk}+\beta_{k}}(h_{ik})\right)$
$\displaystyle\times\left(\prod_{\ell\in\mathbb{I}^{c}_{i}}A_{ij\ell}(\alpha,\beta_{\ell})\,Ig_{\alpha+1/2,B_{ij\ell}(\beta_{\ell})}(h_{i\ell})\right),$
with
$A_{ij\ell}(\alpha,\beta_{\ell})=[\Gamma(\alpha+1/2)]/(\beta_{\ell}^{-\alpha}X_{i\ell}^{1/2}\sqrt{2\pi}[B_{ij\ell}(\beta_{\ell})]^{\alpha+1/2})$,
$B_{ij\ell}(\beta_{\ell})=X_{i\ell}\log(X_{i\ell}/X_{j\ell})+X_{j\ell}-X_{i\ell}+\beta_{\ell}$,
$C_{jk}(\alpha,\beta_{k})=[\Gamma(\alpha+1)]/[\beta_{k}^{-\alpha}(X_{jk}+\beta_{k})^{\alpha+1}]$,
and
$D_{i}(\alpha,\boldsymbol{\beta})=p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})\sum_{j=1,j\neq i}^{n}\left(\prod_{k\in\mathbb{I}_{i}}C_{jk}(\alpha,\beta_{k})\right)\left(\prod_{\ell\in\mathbb{I}^{c}_{i}}A_{ij\ell}(\alpha,\beta_{\ell})\right)/p_{d}(\mathbf{X}_{j};\widehat{\boldsymbol{\theta}}_{n})$,
the normalizing constant making the posterior integrate to one;
(ii) under the quadratic loss function, the Bayesian estimator
$\widehat{\mathbf{\mathbf{H}}}_{i}=\mathrm{diag}_{d}\left(~{}\widehat{h}_{im}\right)$
of $\mathbf{H}_{i}$ in (4.2) is
$\displaystyle\widehat{h}_{im}$ $\displaystyle=$
$\displaystyle\frac{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}{D_{i}(\alpha,\boldsymbol{\beta})}\sum_{j=1,j\neq
i}^{n}\frac{1}{p_{d}(\mathbf{X}_{j};\widehat{\boldsymbol{\theta}}_{n})}\left(\prod_{k\in\mathbb{I}_{i}}C_{jk}(\alpha,\beta_{k})\right)\left(\prod_{\ell\in\mathbb{I}^{c}_{i}}A_{ij\ell}(\alpha,\beta_{\ell})\right)$
$\displaystyle\times\left(\frac{X_{jm}+\beta_{m}}{\alpha}\mathds{1}_{\\{0\\}}(X_{im})+\frac{B_{ijm}(\beta_{m})}{\alpha-1/2}\mathds{1}_{(0,\infty)}(X_{im})\right),$
for $m=1,2,\ldots,d$, with the previous notations
$A_{ij\ell}(\alpha,\beta_{\ell})$, $B_{ijm}(\beta_{m})$,
$C_{jk}(\alpha,\beta_{k})$ and $D_{i}(\alpha,\boldsymbol{\beta})$.
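A minimal sketch in base R of Part (ii) follows, for the common case of strictly positive data (so that $\mathbb{I}_{i}=\emptyset$ and only the $A$ and $B$ terms occur); `bayes_bw` and its arguments are our names, and `p_d` stands for the estimated start density as in the sketch of (4.2).

```r
# Minimal sketch of the adaptive Bayesian bandwidths of Proposition 5.1(ii)
# for strictly positive data (I_i empty): p_d is the estimated start
# density, alpha the prior shape, beta the vector of prior scales.
bayes_bw <- function(X, p_d, alpha, beta) {
  n  <- nrow(X); d <- ncol(X)
  pX <- apply(X, 1, p_d)
  H  <- matrix(0, n, d)                         # row i = diagonal of H_i
  for (i in 1:n) {
    js <- setdiff(1:n, i)
    B  <- matrix(0, length(js), d)              # B_{ijl}(beta_l) for j != i
    for (r in seq_along(js)) {
      j <- js[r]
      B[r, ] <- X[i, ] * log(X[i, ] / X[j, ]) + X[j, ] - X[i, ] + beta
    }
    # A_{ijl} = Gamma(alpha + 1/2) beta_l^alpha /
    #           (sqrt(2 pi X_il) B_{ijl}^(alpha + 1/2))
    A <- sweep(B^(-(alpha + 1/2)), 2,
               gamma(alpha + 1/2) * beta^alpha / sqrt(2 * pi * X[i, ]), "*")
    w <- apply(A, 1, prod) / pX[js]             # mixture weights; the common
    H[i, ] <- colSums(w * B) / (sum(w) * (alpha - 1/2))  # factor p_d(X_i) cancels
  }
  H
}
# Example usage (with the exponential start of Section 5):
# mu <- 1 / colMeans(X); p_d <- function(z) prod(dexp(z, rate = mu))
# H_hat <- bayes_bw(X, p_d, alpha = nrow(X)^(2/5), beta = rep(1, ncol(X)))
```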
Following Somé and Kokonendji [52] for the nonparametric approach, we select
the prior parameters $\alpha$ and
$\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{d})^{\top}$ of the multiple
inverse gamma prior $\mathcal{I}g(\alpha,\beta_{\ell})$ in (5.1) as follows:
$\alpha=\alpha_{n}=n^{2/5}>2$ and $\beta_{\ell}>0$, $\ell=1,\ldots,d$, so that
the variable bandwidths converge to zero at a rate close to that of an optimal
bandwidth. For practical use, we set each $\beta_{\ell}=1$.
### 5.2 Semicontinuous Datasets
The numerical illustrations shall be done through the dataset of Table 5.1,
which was recently used in [52] for the nonparametric approach, and only in
the trivariate setup, as semicontinuous data. It concerns three measurements
(with $n=42$) of drinking water pumps installed in the Sahel. The first
variable $X_{1}$ represents the failure times (in months); it was also
recently used by Touré et al. [55]. The second variable $X_{2}$ refers to the
distance (in kilometers) between each water pump and the repair center in the
Sahel, while the third one, $X_{3}$, stands for the average volume (in $m^{3}$)
of water per day.
Table 5.1: Drinking water pumps trivariate data measured in the Sahel with $n=42$. $X_{1}:$ | 23 | 261 | 87 | 10 | 120 | 14 | 62 | 15 | 47 | 225 | 71 | 20 | 246 | 21
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
$X_{2}:$ | 97 | 93 | 94 | 100 | 98 | 84 | 96 | 110 | 121 | 73 | 90 | 93 | 103 | 116
$X_{3}:$ | 26 | 52 | 22 | 39 | 23 | 26 | 32 | 17 | 10 | 39 | 31 | 42 | 52 | 26
$X_{1}:$ | 19 | 42 | 20 | 5 | 12 | 120 | 17 | 11 | 3 | 14 | 71 | 11 | 5 | 14
$X_{2}:$ | 114 | 82 | 96 | 94 | 77 | 91 | 117 | 103 | 99 | 113 | 79 | 109 | 84 | 118
$X_{3}:$ | 26 | 36 | 43 | 36 | 6 | 27 | 15 | 36 | 9 | 52 | 11 | 20 | 25 | 37
$X_{1}:$ | 11 | 16 | 90 | 1 | 16 | 52 | 95 | 10 | 1 | 14 | 4 | 7 | 14 | 20
$X_{2}:$ | 98 | 93 | 94 | 103 | 109 | 110 | 89 | 108 | 101 | 93 | 102 | 138 | 103 | 96
$X_{3}:$ | 25 | 18 | 43 | 43 | 24 | 38 | 6 | 40 | 21 | 34 | 15 | 23 | 68 | 37
Table 5.2 displays all empirical univariate, bivariate and trivariate
variation (2.6) and dispersion (2.3) indexes from Table 5.1. Hence, each
$X_{j}$, $(X_{j},X_{k})$ and $(X_{1},X_{2},X_{3})$ is over-dispersed compared
to the corresponding uncorrelated Poisson distribution. However, only
$(X_{1},X_{3})$ (resp. $X_{1}$) can be considered as bivariate equi-varied
(resp. univariate over-varied) with respect to the corresponding uncorrelated
exponential distribution; the other $X_{j}$, $(X_{j},X_{k})$ and
$(X_{1},X_{2},X_{3})$ are under-varied. In fact, we computed dispersion
indexes only out of curiosity, since all values in Table 5.1 are positive
integers; we henceforth omit the counting point of view in the remainder of
the analysis.
Table 5.2: Empirical univariate (in diagonal), bivariate (off diagonal) and trivariate (at the corner) variation and dispersion indexes. $\widehat{\mathrm{GVI}}_{3}=0.0533$ | $X_{1}$ | $X_{2}$ | $X_{3}$ | $\widehat{\mathrm{GDI}}_{3}=15.1229$ | $X_{1}$ | $X_{2}$ | $X_{3}$
---|---|---|---|---|---|---|---
$X_{1}$ | $1.9425$ | $0.0557$ | $1.0549$ | $X_{1}$ | $89.5860$ | $14.3223$ | $70.7096$
$X_{2}$ | $0.0557$ | $0.0167$ | $0.0157$ | $X_{2}$ | $14.3223$ | $1.6623$ | $2.0884$
$X_{3}$ | $1.0549$ | $0.0157$ | $0.2122$ | $X_{3}$ | $70.7096$ | $2.0884$ | $6.3192$
Thus, we gradually investigate semiparametric approaches for the three
univariate, three bivariate and single trivariate datasets obtained from
$(X_{1},X_{2},X_{3})$ of Table 5.1.
### 5.3 Univariate Examples
For each univariate semicontinuous dataset $X_{j}$, $j=1,2,3$, we have already
computed the GVI in Table 5.2, which belongs to $(0.01,1.95)\ni 1$. This
allows us to consider our flexible semiparametric estimator $\widehat{f}_{n,j}$
with an exponential start $\mathscr{E}_{1}(\mu_{j})$ in (4.2), using the
adaptive Bayesian bandwidths for the gamma kernel from Proposition 5.1. Hence,
we deduce the corresponding diagnostic percent $\widetilde{W}_{n,j}$ from
(4.6) for deciding on the appropriate approach. In addition, we first present
the univariate nonparametric estimator $\widetilde{f}_{n,j}$ with adaptive
Bayesian bandwidths for the gamma kernel of [50], and then propose an
alternative parametric estimate of $X_{j}$ by the standard gamma model with
shape ($a_{j}$) and scale ($b_{j}$) parameters.
Table 5.3 reports the maximum likelihood parameter estimates of the
exponential and gamma models, with the diagnostic percents, from Table 5.1.
Figure 5.1 exhibits the histogram, $\widetilde{f}_{n,j}$, $\widehat{f}_{n,j}$,
exponential, gamma and diagnostic $\widetilde{W}_{n,j}$ graphs for each
univariate dataset $X_{j}$. One can observe differences with the naked eye
between $\widetilde{f}_{n,j}$ and $\widehat{f}_{n,j}$, although they are very
close and follow the same pattern. The diagnostic $\widetilde{W}_{n,j}$
graphics lead to a semiparametric approach for $X_{2}$ and to fully parametric
models for $X_{3}$, and marginally also for $X_{1}$. Thus, we suggested the
two-parameter gamma model to improve on the starting exponential model; see,
e.g., [30, Table 2] for alternative parametric models.
Table 5.3: Parameter estimates of models and diagnostic percents of univariate datasets. Estimate | $\widehat{\mu}_{j}\;\;\;$ | $\widetilde{W}_{n,j}$ ($\%$) | $\widehat{a}_{j}\;\;\;\;$ | $\widehat{b}_{j}\;\;\;\;$
---|---|---|---|---
$X_{1}$ | $0.0217$ | $95.2381$ | $0.7256$ | $63.5618$
$X_{2}$ | $0.0100$ | $76.1905$ | $56.9817$ | $1.7470$
$X_{3}$ | $0.0336$ | $100.0000$ | $3.7512$ | $7.9403$
Figure 5.1: Comparative graphs of estimates of $X_{1}$, $X_{2}$ and $X_{3}$
with their corresponding diagnostics.
### 5.4 Bivariate and Trivariate Examples
For the sake of flexibility and efficiency, we here analyse our proposed
semiparametric estimator $\widehat{f}_{n}$ with an uncorrelated exponential
start in (4.2), using the adaptive Bayesian bandwidths for the gamma kernel
from Proposition 5.1. This concerns all bivariate and trivariate datasets from
Table 5.1, whose GVI lie in $(0.01,1.06)\ni 1$ from Table 5.2. All the
computation times are almost instantaneous.
Table 5.4: Correlations, MVI, parameter estimates and diagnostic percents of bi- and trivariate cases. Dataset | $(X_{1},X_{2})$ | $(X_{1},X_{3})$ | $(X_{2},X_{3})$ | $(X_{1},X_{2},X_{3})$
---|---|---|---|---
$\widehat{\rho}(X_{j},X_{k})$ | $-0.3090$ | $0.2597$ | $0.0245$ | $\det\widehat{\boldsymbol{\rho}}=0.8325$
$\widehat{\mathrm{MVI}}$ | $0.0720$ | $0.9857$ | $0.0155$ | $0.0634$
$(\widehat{\mu}_{j})$ | $(0.0217,0.0100)$ | $(0.0217,0.0336)$ | $(0.0100,0.0336)$ | $(0.0217,0.0100,0.0336)$
$\widetilde{W}_{n}$ ($\%$) | $9.5238$ | $52.3809$ | $26.1005$ | $0.0000$
Figure 5.2: Univariate projections of diagnostic graphs for bivariate and
trivariate models.
Table 5.4 reports the main numerical results for the corresponding
correlations, MVI, parameter estimates and, finally, the diagnostic percents
$\widetilde{W}_{n}$ from (4.6); we intentionally omit graphics in three or
four dimensions. However, Figure 5.2 displays some representative projections
of $\widetilde{W}_{n}$. From Table 5.4, the empirical cross correlations are
close to 0 and all MVI are smaller than $1$, which allows us to consider
uncorrelated exponential start-models. The maximum likelihood method is again
used for estimating the parameters $\mu_{j}$, yielding the same results as in
Table 5.3. The obtained numerical values of $\widetilde{W}_{n}$ thus indicate
semiparametric approaches for all bivariate datasets and the purely
nonparametric method for the trivariate one; see [52] for more details on
this nonparametric analysis. This progressive semiparametric analysis of the
trivariate dataset of Table 5.1 shows the necessity of a suitable choice of
parametric start-models, which may take into account the correlation
structure. Hence, the retention of the purely nonparametric approach reflects
the inadequacy of the parametric part used in the modelling. Note that we
could consider the Marshall-Olkin exponential distributions with nonnegative
correlations as start-models, but they are singular. See Appendix 7.1 for a
brief review.
## 6 Concluding Remarks
In this paper, we have presented a flexible semiparametric approach for
multivariate nonnegative orthant distributions. We first recalled the
multivariate variability indexes GVI, MVI, RVI, GDI, MDI and RDI derived from
the RWI, as a prelude to the second-order discrimination between these
parametric distributions. We then reviewed, and added new proposals to, the
nonparametric estimators based on multivariate associated kernels; e.g.,
Proposition 3.2 and Corollary 3.3. Effective adaptive and local Bayesian
selectors of bandwidth matrices are suggested for semicontinuous and count
data, respectively.
All these ingredients were finally used to develop the semiparametric
modelling of multivariate nonnegative orthant distributions. Numerical
illustrations were given for univariate and multivariate semicontinuous
datasets with uncorrelated exponential start-models, after examining GVI and
MVI. The adaptive Bayesian bandwidth selection (3.6) in the multiple gamma
kernel (Proposition 5.1) was required here for the applications. Finally, the
model diagnostics played a very useful role in guiding the choice of the
appropriate approach, even if it may be improved later.
In the meantime, Kokonendji et al. [23] proposed an in-depth practical
analysis of multivariate count datasets, starting from multivariate
(un)correlated Poisson models after reviewing GDI and RDI. They also
established an equivalent of our Proposition 5.1 for the local Bayesian
bandwidth selection (3.5), using the multiple binomial kernel from Example
3.7. As one of the many perspectives, one could consider the categorical setup
with a local Bayesian version of the multivariate associated kernel of
Aitchison and Aitken [3], whose univariate case appears in Example 3.5.
At this stage of the analysis, all the main foundations are now available for
working in the multivariate setup: variability indexes, associated kernels,
Bayesian selectors and model diagnostics. We just have to adapt them to each
situation encountered. For instance, we have the semiparametric regression
modelling; see, e.g., Abdous et al. [1], devoted to count explanatory
variables, and [48]. Also, an opportunity opens up for hazard rate functions
(e.g., [45]). Other functional setups, such as categorical and mixed data,
can now be considered with objectivity and feasibility in the near future.
## 7 Appendix
### 7.1 On Broader $d$-Variate Parametric Models and the Marshall-Olkin
Exponential
According to Cuenin et al. [10], taking $p\in\\{1,2\\}$ in their multivariate
Tweedie models of flexible dependence structure, another way to define the
$d$-variate Poisson and exponential distributions is given by
$\mathscr{P}_{d}(\boldsymbol{\Lambda})$ and
$\mathscr{E}_{d}(\boldsymbol{\Lambda})$, respectively. The $d\times d$
symmetric variation matrix
$\boldsymbol{\Lambda}=(\lambda_{ij})_{i,j\in\\{1,\ldots,d\\}}$ is such that
$\lambda_{ij}=\lambda_{ji}\geq 0$, the mean of the corresponding marginal
distribution is $\lambda_{ii}>0$, and the non-negative correlation terms
satisfy
$\rho_{ij}=\frac{\lambda_{ij}}{\sqrt{\lambda_{ii}\lambda_{jj}}}\in[0,\min\\{R(i,j),R(j,i)\\}),$
(7.1)
with
$R(i,j)=\sqrt{\lambda_{ii}/\lambda_{jj}}\,(1-\lambda_{ii}^{-1}\sum_{\ell\neq
i,j}\lambda_{i\ell})\in(0,1)$. Their constructions are perfectly defined
having $d(d+1)/2$ parameters as in
$\mathscr{P}_{d}(\boldsymbol{\mu},\boldsymbol{\rho})$ and
$\mathscr{E}_{d}(\boldsymbol{\mu},\boldsymbol{\rho})$. Moreover, we attain the
exact bounds of the correlation terms in (7.1). Cuenin et al. [10] have
pointed out the construction and simulation of the negative correlation
structure from the positive one of (7.1) by considering the inversion method.
The negativity of a correlation component is crucial for the phenomenon of
under-variability in a bivariate/multivariate positive orthant model. Figure
7.1 (right) plots a limit shape of any bivariate positive orthant distribution
with very strong negative correlation (in red), which is not the diagonal line
of the upper bound ($+1$) of positive correlation (in blue); see, e.g., [10]
for details on both bivariate orthant (i.e., continuous and count) models.
Conversely, Figure 7.1 (left) represents the classic lower ($-1$) and upper
($+1$) bounds of correlations on $\mathbb{R}^{2}$ or finite support.
Figure 7.1: Support of bivariate distributions with maximum correlations
(positive in blue and negative in red): model on $\mathbb{R}^{2}$ (left) and
also finite support; model on $\mathbb{T}_{2}^{+}\subseteq[0,\infty)^{2}$
(right), without finite support.
The $d$-variate exponential
$\boldsymbol{X}=(X_{1},\ldots,X_{d})^{\top}\sim\mathscr{E}_{d}(\boldsymbol{\mu},\mu_{0})$
of Marshall and Olkin [39] is built as follows. Let $Y_{1},\ldots,Y_{d}$ and
$Z$ be independent univariate exponential random variables with parameters
$\mu_{1}>0,\ldots,\mu_{d}>0$ and $\mu_{0}\geq 0$, respectively. Then, setting
$X_{j}:=\min(Y_{j},Z)$ for $j=1,\ldots,d$, one has
$\mathbb{E}X_{j}=1/(\mu_{j}+\mu_{0})=\sqrt{\mathrm{var}X_{j}}$ and
$\mathrm{cov}(X_{j},X_{\ell})=\mu_{0}/\{(\mu_{j}+\mu_{0})(\mu_{\ell}+\mu_{0})(\mu_{j}+\mu_{\ell}+\mu_{0})\}$
for all $j\neq\ell$. Each correlation
$\rho(X_{j},X_{\ell})=\mu_{0}/(\mu_{j}+\mu_{\ell}+\mu_{0})$ is nonnegative
since $\mu_{0}\geq 0$. From its survival (or reliability) function
$S(\mathbf{x};\boldsymbol{\mu},\mu_{0})=\exp\left(-\mu_{0}\max(x_{1},\ldots,x_{d})-\sum_{j=1}^{d}\mu_{j}x_{j}\right),$
its pdf can be written as
$p_{d}(\mathbf{x};\boldsymbol{\mu},\mu_{0})=\left\\{\begin{array}[]{ll}S(\mathbf{x};\boldsymbol{\mu},\mu_{0})(\mu_{0}+\mu_{\ell})\prod\limits_{j=1,j\neq\ell}^{d}\mu_{j}&\mathrm{if}\;x_{\ell}:=\max(x_{1},\ldots,x_{d})\;\mathrm{and}\;x_{\ell}\neq
x_{j},\;j\neq\ell\\\
S(\mathbf{x};\boldsymbol{\mu},\mu_{0})\mu_{0}\mu_{j_{1}}\cdots\mu_{j_{k}}&\mathrm{if}\;x_{j_{1}},\ldots,x_{j_{k}}<x_{\ell_{k+1}}=\cdots=x_{\ell_{d}}\\\
S(\mathbf{x};\boldsymbol{\mu},\mu_{0})\mu_{0}&\mathrm{if}\;x_{1}=\cdots=x_{d}>0.\end{array}\right.$
It is not absolutely continuous with respect to the Lebesgue measure in
$\mathbb{T}_{d}^{+}$ and has singularities corresponding to the cases where
two or more of the $x_{j}$’s are equal. Karlis [22] has proposed a maximum
likelihood estimation of parameters via an EM algorithm. Finally, Kokonendji
et al. [31] have calculated
$\mathrm{GVI}(\boldsymbol{X})=1+\frac{\mu_{0}\sum_{j=1}^{d}(\mu_{j}+\mu_{0})^{-1}\\{\sum_{\ell\neq
j}(\mu_{j}+\mu_{\ell}+\mu_{0})^{-1}(\mu_{\ell}+\mu_{0})^{-1}\\}}{\\{(\mu_{1}+\mu_{0})^{-2}+\cdots+(\mu_{d}+\mu_{0})^{-2}\\}^{2}}\geq
1\;\;(\Leftrightarrow\mu_{0}\geq 0).$
and
$\mathrm{MVI}(\boldsymbol{X})=\frac{\sum_{j=1}^{d}(\mu_{j}+\mu_{0})^{-4}}{\sum_{j=1}^{d}(\mu_{j}+\mu_{0})^{-4}+2\sum_{1\leq j<\ell\leq d}(\mu_{j}+\mu_{0})^{-2}(\mu_{\ell}+\mu_{0})^{-2}}<1.$
Hence, the Marshall-Olkin exponential model
$\boldsymbol{X}\sim\mathscr{E}_{d}(\boldsymbol{\mu},\mu_{0})$ is always under-
varied with respect to the MVI and over- or equi-varied with respect to the
GVI. If $\mu_{0}=0$, then $\mathscr{E}_{d}(\boldsymbol{\mu},\mu_{0})$ reduces
to the uncorrelated $\mathscr{E}_{d}(\boldsymbol{\mu})$ with
$\mathrm{GVI}(\boldsymbol{X})=1$. However, the assumption of nonnegative
correlations between components is sometimes unrealistic for some analyses. A
short simulation sketch follows.
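A minimal sketch in base R, simulating $\mathscr{E}_{d}(\boldsymbol{\mu},\mu_{0})$ through the construction above and checking the correlation empirically:

```r
# Simulate the Marshall-Olkin d-variate exponential via X_j = min(Y_j, Z)
# and check rho(X_j, X_l) = mu_0 / (mu_j + mu_l + mu_0).
r_mo_exp <- function(n, mu, mu0) {
  d <- length(mu)
  Y <- matrix(rexp(n * d, rate = rep(mu, each = n)), n, d)  # independent Y_j
  Z <- rexp(n, rate = mu0)                                  # common shock
  pmin(Y, Z)                                                # componentwise minima
}
set.seed(1)
X <- r_mo_exp(1e5, mu = c(1, 2), mu0 = 0.5)
cor(X)[1, 2]   # close to 0.5 / (1 + 2 + 0.5) = 0.1429
```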
### 7.2 Proofs of Proposition 3.4, Proposition 4.1 and Proposition 5.1
###### Proof 2 (Proof of Proposition 3.4)
From Definition 3.1, we get (see also [29] for more details)
$\displaystyle\mathbb{E}\left[\widetilde{f}_{n}(\mathbf{x})\right]-f(\mathbf{x})$
$\displaystyle=$
$\displaystyle\mathbb{E}\left[\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{X}_{j})\right]-f(\mathbf{x})=\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}_{n}}\cap\mathbb{T}_{d}^{+}}\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{u})f(\mathbf{u})\boldsymbol{\nu}(d\mathbf{u})-f(\mathbf{x})$
(7.2) $\displaystyle=$ $\displaystyle\mathbb{E}\left[f\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right)\right]-f(\mathbf{x}).$
Next, using (7.2), by a Taylor expansion of the function $f(\cdot)$ over the
points ${\cal Z}_{\mathbf{x},\mathbf{H}_{n}}$ and $\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]$, we get
$\displaystyle f\left({\cal Z}_{\mathbf{x},\mathbf{H}_{n}}\right)$
$\displaystyle=$ $\displaystyle f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)+\left\langle\nabla
f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right),\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\right\rangle$ (7.3)
$\displaystyle+\frac{1}{2}\left\langle{\cal H}f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right),\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\right\rangle$
$\displaystyle+\left\|{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right\|^{2}o(1)$ $\displaystyle=$
$\displaystyle f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)+\left\langle\nabla
f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right),\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\right\rangle$
$\displaystyle+\frac{1}{2}\operatorname{tr}\left[{\cal
H}f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)^{\mathsf{T}}\right]$
$\displaystyle+\operatorname{tr}\left[\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}-\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)^{\mathsf{T}}\right]o(1),$
where $o(1)$ is uniform in a neighborhood of $\mathbf{x}$. Therefore, taking
the expectation in both sides of (7.3) and then substituting the result in
(7.2), we get
$\displaystyle\mathbb{E}\left[\widetilde{f}_{n}(\mathbf{x})\right]-f(\mathbf{x})$
$\displaystyle=$ $\displaystyle f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)-f(\mathbf{x})+\frac{1}{2}\operatorname{tr}\left[{\cal
H}f\left(\mathbb{E}\left[{\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right]\right)\operatorname{var}\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right)\right]$
$\displaystyle+o\left\\{\operatorname{tr}\left[\operatorname{var}\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right)\right]\right\\}$ $\displaystyle=$
$\displaystyle
f\left(\mathbf{x}+\mathbf{A}\right)-f(\mathbf{x})+\frac{1}{2}\operatorname{tr}\left[{\cal
H}f\left(\mathbf{x}+\mathbf{A}\right)\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]+o\left\\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\\},$
where
$o\left\\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\\}$
is uniform in a neighborhood of $\mathbf{x}$. A second Taylor expansion of the
function $f(\cdot)$ between the points $\mathbf{x}$ and
$\mathbf{x}+\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)$ allows us to
conclude the bias (3.3).
As for the variance term, since $f$ is bounded we have
$\mathbb{E}\left[\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{X}_{j})\right]=O(1)$.
It follows that
$\displaystyle\mathrm{var}\left[\widetilde{f}_{n}(\mathbf{x})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}\mathrm{var}\left[\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{X}_{j})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}\left[\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}_{n}}\cap\mathbb{T}_{d}^{+}}\mathbf{K}^{2}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{u})f(\mathbf{u})\boldsymbol{\nu}(d\mathbf{u})+O(1)\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}_{n}}\cap\mathbb{T}_{d}^{+}}\mathbf{K}^{2}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{u})\begin{pmatrix}f(\mathbf{x})+\left\langle\nabla
f(\mathbf{x}),\mathbf{x}-\mathbf{u}\right\rangle\\\
+\frac{1}{2}(\mathbf{x}-\mathbf{u})^{T}{\cal
H}f(\mathbf{x})(\mathbf{x}-\mathbf{u})\\\
+o\left[\left(||\mathbf{x}-\mathbf{u}||^{2}\right)\right]\end{pmatrix}\boldsymbol{\nu}(d\mathbf{u})$
$\displaystyle=$
$\displaystyle\frac{1}{n}f(\mathbf{x})||\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}||_{2}^{2}+o\left[\frac{1}{n(\det\mathbf{H}_{n})^{r}}\right].$
###### Proof 3 (Proof of Proposition 4.1)
Since one has
$\mathrm{Bias}[\widehat{f}_{n}(\mathbf{x})]=p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})\mathbb{E}[\widetilde{w}_{n}(\mathbf{x})]-f(\mathbf{x})$
and
$\mathrm{var}[\widehat{f}_{n}(\mathbf{x})]=[p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})]^{2}\mathrm{var}[\widetilde{w}_{n}(\mathbf{x})]$,
it is enough to calculate $\mathbb{E}[\widetilde{w}_{n}(\mathbf{x})]$ and
$\mathrm{var}[\widetilde{w}_{n}(\mathbf{x})]$ following Proposition 3.4
applied to
$\widetilde{w}_{n}(\mathbf{x})=n^{-1}\sum_{i=1}^{n}\mathbf{K}_{\mathbf{x},\mathbf{H}}(\mathbf{X}_{i})/p_{d}(\mathbf{X}_{i};\boldsymbol{\theta}_{0})$
for all $\mathbf{x}\in\mathbb{T}_{d}^{+}$.
Indeed, one successively has
$\displaystyle\mathbb{E}[\widetilde{w}_{n}(\mathbf{x})]$ $\displaystyle=$
$\displaystyle\mathbb{E}\left[\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{X}_{1})/p_{d}(\mathbf{X}_{1};\boldsymbol{\theta}_{0})\right]$
$\displaystyle=$
$\displaystyle\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}_{n}}\cap\mathbb{T}_{d}^{+}}\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{u})[p_{d}(\mathbf{u};\boldsymbol{\theta}_{0})]^{-1}f(\mathbf{u})\boldsymbol{\nu}(d\mathbf{u})=\mathbb{E}\left[w\left({\cal
Z}_{\mathbf{x},\mathbf{H}_{n}}\right)\right]$ $\displaystyle=$ $\displaystyle
w(\mathbf{x})+\left\langle\nabla
w(\mathbf{x}),\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right\rangle+\frac{1}{2}\left(\operatorname{tr}\left\\{{\cal
H}w\left(\mathbf{x}\right)\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})+\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)^{\mathsf{T}}\mathbf{A}\left(\mathbf{x},\mathbf{H}_{n}\right)\right]\right\\}\right)$
$\displaystyle+o\left\\{\operatorname{tr}\left[\mathbf{B}(\mathbf{x},\mathbf{H}_{n})\right]\right\\},$
which leads to the announced result of
$\mathrm{Bias}[\widehat{f}_{n}(\mathbf{x})]$. As for
$\mathrm{var}[\widetilde{w}_{n}(\mathbf{x})]$, one similarly writes
$\displaystyle\mathrm{var}\left[\widetilde{w}_{n}(\mathbf{x})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}\mathrm{var}\left[\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{X}_{1})/p_{d}(\mathbf{X}_{1};\boldsymbol{\theta}_{0})\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}\left[\int_{\mathbb{S}_{\mathbf{x},\mathbf{H}_{n}}\cap\mathbb{T}_{d}^{+}}\mathbf{K}^{2}_{\mathbf{x},\mathbf{H}_{n}}(\mathbf{u})[p_{d}(\mathbf{u};\boldsymbol{\theta}_{0})]^{-2}f(\mathbf{u})\boldsymbol{\nu}(d\mathbf{u})+O(1)\right]$
$\displaystyle=$
$\displaystyle\frac{1}{n}f(\mathbf{x})[p_{d}(\mathbf{x};\boldsymbol{\theta}_{0})]^{-2}||\mathbf{K}_{\mathbf{x},\mathbf{H}_{n}}||_{2}^{2}+o\left[\frac{1}{n(\det\mathbf{H}_{n})^{r}}\right]$
and the desired result of $\mathrm{var}[\widehat{f}_{n}(\mathbf{x})]$ is
therefore deduced.
###### Proof 4 (Proof of Proposition 5.1)
We adapt Theorem 2.1 of Somé and Kokonendji [52] to the
semiparametric estimator $\widehat{f}_{n}$ in (4.2). First, the leave-one-out
associated kernel estimator (3.7) becomes
$\widehat{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i}):=\frac{p_{d}(\mathbf{X}_{i};\widehat{\boldsymbol{\theta}}_{n})}{n-1}\sum_{\ell=1,\ell\neq
i}^{n}\frac{1}{p_{d}(\mathbf{X}_{\ell};\widehat{\boldsymbol{\theta}}_{n})}\mathbf{K}_{\mathbf{X}_{i},\mathbf{H}_{i}}(\mathbf{X}_{\ell}).$
Then, the posterior distribution deduced from (3.6) is expressed as
$\pi(\mathbf{H}_{i}\mid\mathbf{X}_{i}):=\pi(\mathbf{H}_{i})\widehat{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i})\left[\int_{\mathcal{M}}\widehat{f}_{n,\mathbf{H}_{i},-i}(\mathbf{X}_{i})\pi(\mathbf{H}_{i})d\mathbf{H}_{i}\right]^{-1}$
which leads to the result of Part (i); see [52, Theorem 2.1 (i)] for
details. Consequently, we similarly deduce the adaptive Bayesian estimator
$\widehat{\mathbf{H}}_{i}=\mathrm{diag}_{d}\left(\widehat{h}_{im}\right)$
of Part (ii).
## Acknowledgements
We sincerely thank Mohamed Elmi Assoweh for some interesting discussions.
## Abbreviations
The following abbreviations are used in this manuscript:
Abbreviation | Meaning
---|---
GDI | Generalized dispersion index
GVI | Generalized variation index
iid | Independent and identically distributed
MDI | Marginal dispersion index
MVI | Marginal variation index
pdf | Probability density function
pdmf | Probability density or mass function
pmf | Probability mass function
RDI | Relative dispersion index
RVI | Relative variation index
RWI | Relative variability index
## References
* Abdous et al. [2012] Abdous, B.; Kokonendji, C.C.; Senga Kiessé, T. On semiparametric regression for count explanatory variables. J. Statist. Plann. Infer. 2012, 142, 1537–1548.
* Abid et al. [2020] Abid, R.; Kokonendji, C.C.; Masmoudi, A. Geometric Tweedie regression models for continuous and semicontinuous data with variation phenomenon. AStA Adv. Statist. Anal. 104, 2020, 33–58.
* Aitchison and Aitken [1976] Aitchison, J.; Aitken, C.G.G. Multivariate binary discrimination by the kernel method. Biometrika 1976, 63, 413–420.
* Arnold and Manjunath [2020] Arnold, B.C.; Manjunath, B.G. Statistical inference for distributions with one Poisson conditional. Preprint arXiv 2020, arXiv:2009.01296.
* Balakrishnan and Basu [1995] Balakrishnan, N.; Basu, A.P. The Exponential Distribution: Theory, Models and Applications. Gordon and Breach: Amsterdam, Netherlands, 1995.
* Balakrishnan and Lai [2009] Balakrishnan, N.; Lai, C.-D. Continuous Bivariate Distributions, Second Edition. Springer: New York, USA, 2009.
* Belaid et al. [2016] Belaid, N.; Adjabi, S.; Kokonendji, C.C.; Zougab, N. Bayesian local bandwidth selector in multivariate associated kernel estimator for joint probability mass functions. J. Statist. Comput. Simul. 2016, 86, 3667–3681.
* Belaid et al. [2018] Belaid, N.; Adjabi, S.; Kokonendji, C.C.; Zougab, N. Bayesian adaptive bandwidth selector for multivariate binomial kernel estimator. Commun. Statist. Theory Meth. 2018, 47, 2988–3001.
* Chen [2000] Chen, S.X. Probability density function estimation using gamma kernels. Ann. Inst. Statist. Math. 2000, 52, 471–480.
* Cuenin et al. [2016] Cuenin, J.; Jørgensen, B.; Kokonendji, C.C. Simulations of full multivariate Tweedie with flexible dependence structure. Comput. Statist. 2016, 31, 1477–1492.
* Erçelik and Nadar [2020] Erçelik, E.; Nadar, M. A new kernel estimator based on scaled inverse chi-squared density function. American J. Math. Management Sciences 2020, forthcoming (DOI: 10.1080/01966324.2020.1854138).
* Funke and Kawka [2015] Funke, B.; Kawka, R. Nonparametric density estimation for multivariate bounded data using two non-negative multiplicative bias correction methods. Comput. Statist. Data Anal. 2015, 92, 148–162.
* Hirukawa [2018] Hirukawa, M. Asymmetric Kernel Smoothing - Theory and Applications in Economics and Finance. Springer Briefs in Statistics: Singapore, 2018.
* Hirukawa and Sakudo [2015] Hirukawa, M.; Sakudo, M. Family of the generalised gamma kernels: a generator of asymmetric kernels for nonnegative data. J. Nonparam. Statist. 2015, 27, 41–63.
* Hjort and Glad [1995] Hjort, N.L.; Glad, I.K. Nonparametric density estimation with a parametric start. Ann. Statist. 1995, 23, 882–904.
* Huang [2020] Huang, A. On arbitrarily underdispersed Conway-Maxwell-Poisson distributions. arXiv 2020, arXiv:2011.07503.
* Huang et al. [2020] Huang, A.; Sippel, L.; Fung, T. A consistent second-order discrete kernel smoother. arXiv 2020, arXiv:2010.03302.
* Igarashi and Kakizawa [2014] Igarashi, G.; Kakizawa, Y. Re-formulation of inverse Gaussian, reciprocal inverse Gaussian, and Birnbaum-Saunders kernel estimators. Statist. Probab. Lett. 2014, 84, 235–246.
* Jin and Kawczak [2003] Jin, X.; Kawczak, J. Birnbaum-Saunders and lognormal kernel estimators for modelling durations in high frequency financial data. Ann. Econ. Finance 2003, 4, 103–124.
* Johnson et al. [1997] Johnson, N.L.; Kotz, S.; Balakrishnan, N. Discrete Multivariate Distributions. Wiley: New York, USA, 1997.
* Johnson and Wichern [2007] Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 6th Edition. Pearson Prentice Hall: New Jersey, USA, 2007.
* Karlis [2003] Karlis, D. ML estimation for multivariate shock models via an EM algorithm. Ann. Inst. Statist. Math. 2003, 55, 817–830.
* Belaid et al. [2021a] Kokonendji, C.C.; Belaid, N.; Abid, R.; Adjabi, S. Flexible semiparametric kernel estimation of multivariate count distribution with Bayesian bandwidth selection. Statist. Meth. Appl. 2021a, forthcoming.
* Kokonendji and Libengué Dobélé-Kpoka [2018] Kokonendji, C.C.; Libengué Dobélé-Kpoka, F.G.B. Asymptotic results for continuous associated kernel estimators of density functions. African Diaspora J. Math. 2018, 21, 87–97.
* Kokonendji and Puig [2018] Kokonendji, C.C.; Puig, P. Fisher dispersion index for multivariate count distributions: a review and a new proposal. J. Multiv. Anal. 2018, 165, 180–193.
* Kokonendji and Senga Kiessé [2011] Kokonendji, C.C.; Senga Kiessé, T. Discrete associated kernels method and extensions. Statist. Methodol. 2011, 8, 497–516.
* Kokonendji et al. [2009] Kokonendji, C.C.; Senga Kiessé, T.; Balakrishnan, N. Semiparametric estimation for count data through weighted distributions. J. Statist. Plann. Infer. 2009, 139, 3625–3638.
* Kokonendji et al. [2007] Kokonendji, C.C.; Senga Kiessé, T.; Zocchi, S.S. Discrete triangular distributions and non-parametric estimation for probability mass function. J. Nonparam. Statist. 2007, 19, 241–254.
* Kokonendji and Somé [2018] Kokonendji, C.C.; Somé, S.M. On multivariate associated kernels to estimate general density functions. J. Korean Statist. Soc. 2018, 47, 112–126.
* Kokonendji et al. [2021b] Kokonendji, C.C.; Touré, A.Y.; Abid, R. On general exponential weight functions and variation phenomenon. Sankhy$\bar{a}$ A 2021b, forthcoming (DOI:10.1007/s13171-020-00226-z).
* Kokonendji et al. [2020] Kokonendji, C.C.; Touré, A.Y.; Sawadogo, A. Relative variation indexes for multivariate continuous distributions on $[0,\infty)^{k}$ and extensions. AStA Adv. Statist. Anal. 2020, 104, 285–307.
* Kokonendji and Zocchi [2010] Kokonendji, C.C.; Zocchi, S.S. Extensions of discrete triangular distribution and boundary bias in kernel estimation for discrete functions. Statist. Probab. Lett. 2010, 80, 1655–1662.
* Kokonendji et al. [2017] Kokonendji, C.C.; Zougab, N.; Senga Kiessé, T. Poisson-weighted estimation by discrete kernel with application to radiation biodosimetry. In Biomedical Big Data & Statistics for Low Dose Radiation Research - Extended Abstracts Fall 2015; Ainsbury, E.A., Calle, M.L., Cardis, E., Einbeck, J., Gómez, G., Puig, P., Eds; Springer Birkhäuser: Basel, Switzerland, 2017; vol. VII, Part II, Chap. 19, pp. 115–120.
* Kotz et al. [2000] Kotz, S.; Balakrishnan, N.; Johnson, N.L. Continuous Multivariate Distributions - Models and Applications, Second Edition. Wiley: New York, USA, 2000.
* Lafaye de Micheaux and Ouimet [2020] Lafaye de Micheaux, P.; Ouimet, F. A study of seven asymmetric kernels for the estimation of cumulative distribution functions. arXiv 2020, arXiv:2011.14893.
* Libengué Dobélé-Kpoka [2013] Libengué Dobélé-Kpoka, F.G.B. Méthode Non-Paramétrique par Noyaux Associés Mixtes et Applications. Ph.D. Thesis - LmB no. 14334, Université de Franche-Comté, Besançon, France, 2013.
* Libengué Dobélé-Kpoka and Kokonendji [2017] Libengué Dobélé-Kpoka, F.G.B.; Kokonendji, C.C. The mode-dispersion approach for constructing continuous associated kernels. African Statist. 2017, 12, 1417–1446.
* Marchant et al. [2013] Marchant, C.; Bertin, K.; Leiva, V.; Saulo, H. Generalized Birnbaum–Saunders kernel density estimators and an analysis of financial data. Comput. Statist. Data Anal. 2013, 63, 1–15.
* Marshall and Olkin [1967] Marshall, A.W.; Olkin, I. A multivariate exponential distribution. J. Amer. Statist. Assoc. 1967, 62, 30–44.
* Marshall and Olkin [2007] Marshall, A.W.; Olkin, I. Life Distributions: Structure of Nonparametric, Semiparametric, and Parametric Families. Springer: New York, USA, 2007.
* Mombeni et al. [2019] Mombeni, H.A.; Masouri, B.; Akhoond, M.R. Asymmetric kernels for boundary modification in distribution function estimation. REVSTAT 2019, forthcoming.
* Mousa et al. [2016] Mousa, A.M.; Hassan, M.K.; Fathi, A. A new non parametric estimator for Pdf based on inverse gamma distribution. Commun. Statist. Th. Meth. 2016, 45, 7002–7010.
* Ouimet [2020] Ouimet, F. Density estimation using Dirichlet kernels. arXiv 2020, arXiv:2002.06956v2.
* R Core Team [2020] R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing: Vienna, Austria, 2020. http://cran.r-project.org/
* Salha et al. [2014] Salha, R.B.; Ahmed, H.I.E.S.; Alhoubi, I.M. Hazard rate function estimation using Weibull kernel. Open J. Statist. 2014, 4, 650–661.
* Scaillet [2004] Scaillet, O. Density estimation using inverse and reciprocal inverse Gaussian kernels. J. Nonparam. Statist. 2004, 16, 217–226.
* Scott [1992] Scott, D.W. Multivariate Density Estimation - Theory, Practice, and Visualization. Wiley: New York, USA, 1992.
* Senga Kiessé et al. [2016] Senga Kiessé, T.; Zougab, N.; Kokonendji, C.C. Bayesian estimation of bandwidth in semiparametric kernel estimation of unknown probability mass and regression functions of count data. Comput. Statist. 2016, 31, 189–206.
* Silverman [1986] Silverman, B.W. Density Estimation for Statistics and Data Analysis. Chapman and Hall: London, UK, 1986.
* Somé [2021] Somé, S.M. Bayesian selector of adaptive bandwidth for gamma kernel density estimator on $[0,\infty)$: simulations and applications. Commun. Statist. Simul. Comput. 2021, forthcoming (DOI: 10.1080/03610918.2020.1828921).
* Somé and Kokonendji [2016] Somé, S.M.; Kokonendji, C.C. Effects of associated kernels in nonparametric multiple regressions. J. Statist. Theory Pract. 2016, 10, 456–471.
* Somé and Kokonendji [2021] Somé, S.M.; Kokonendji, C.C. Bayesian selector of adaptive bandwidth for multivariate gamma kernel estimator on $[0,\infty)^{d}$. J. Appl. Statist. 2021, forthcoming (accepted 2021.01.14).
* Somé et al. [2016] Somé, S.M.; Kokonendji, C.C.; Ibrahim, M. Associated kernel discrimination analysis for mixed data. Electr. J. Appl. Statist. Anal. 2016, 9, 385–399.
* Su [2015] Su, P. Generation of multivariate data with arbitrary marginals - The R package ’NORTARA’, R Foundation for Statistical Computing: Vienna, Austria, 2015. https://cran.r-project.org/web/packages/NORTARA/
* Touré et al. [2020] Touré, A.Y.; Dossou-Gbété, S.; Kokonendji, C.C. Asymptotic normality of the test statistics for the unified relative dispersion and relative variation indexes. J. Appl. Statist. 2020, 47, 2479–2491.
* Wansouwé et al. [2016] Wansouwé, W.E.; Somé, S.M.; Kokonendji, C.C. Ake: an R package for discrete and continuous associated kernel estimations. The R Journal 2016, 8, 259–276.
* Weiss [2019] Weiß, C.H. On some measures of ordinal variation. J. Appl. Statist. 2019, 46, 2905–2926.
* White [1982] White, H. Maximum likelihood estimation of misspecified models. Econometrica 1982, 50, 1–26.
* Ziane et al. [2015] Ziane, Y.; Zougab, N.; Adjabi, S. Adaptive Bayesian bandwidth selection in asymmetric kernel density estimation for nonnegative heavy-tailed data. J. Appl. Statist. 2015, 42, 1645–1658.
* Zougab et al. [2014] Zougab, N.; Adjabi, S.; Kokonendji, C.C. Bayesian estimation of adaptive bandwidth matrices in multivariate kernel density estimation. Comput. Statist. Data Anal. 2014, 75, 28–38.
* Zougab et al. [2016] Zougab, N.; Adjabi, S.; Kokonendji, C.C. Comparison study to bandwidth selection in binomial kernel estimation using Bayesian approaches. J. Statist. Theory Pract. 2016, 10, 133–153.
* Zougab et al. [2018] Zougab, N.; Harfouche, L.; Ziane, Y.; Adjabi, S. Multivariate generalized Birnbaum–Saunders kernel density estimators. Commun. Statist. Theory Meth. 2018, 47, 4534–4555.
###### Contents
1 Introduction
2 Relative Variability Indexes for Orthant Distributions
2.1 Relative Dispersion Indexes for Count Distributions
2.2 Relative Variation Indexes for Semicontinuous Distributions
3 Multivariate Orthant Associated Kernels
3.1 Discrete Associated Kernels
3.2 Semicontinuous Associated Kernels
4 Semiparametric Kernel Estimation with $d$-Variate Parametric Start
4.1 Known $d$-Variate Parametric Model
4.2 Unknown $d$-Variate Parametric Model
4.3 Model Diagnostics
5 Semicontinuous Examples of Application with Discussions
5.1 Adaptive Bayesian Bandwidth Selection for Multiple Gamma Kernels
5.2 Semicontinuous Datasets
5.3 Univariate Examples
5.4 Bivariate and Trivariate Examples
6 Concluding Remarks
7 Appendix
7.1 On Broader $d$-Variate Parametric Models and the Marshall-Olkin Exponential
7.2 Proofs of Proposition 3.4, Proposition 4.1 and Proposition 5.1
# Global Optimization of the Mean First Passage Time for Narrow Capture
Problems in Elliptic Domains
Jason Gilbert (electronic mail: <EMAIL_ADDRESS>), Department of Mathematics
and Statistics, University of Saskatchewan
Alexei Cheviakov (corresponding author; alternative English spelling: Alexey
Shevyakov; electronic mail: <EMAIL_ADDRESS>), Department of Mathematics and
Statistics, University of Saskatchewan
###### Abstract
Narrow escape and narrow capture problems which describe the average times
required to stop the motion of a randomly travelling particle within a domain
have applications in various areas of science. While for general domains, it
is known how the escape time decreases with the increase of the trap sizes,
for some specific 2D and 3D domains, higher-order asymptotic formulas have
been established, providing the dependence of the escape time on the sizes and
locations of the traps. Such results allow the use of global optimization to
seek trap arrangements that minimize average escape times. In a recent paper,
the escape time expansion for a 2D elliptic domain was derived, providing the
dependence of the average mean first passage time (MFPT) on the sizes and
locations of small internal traps.
The goal of this work is to systematically seek global minima of MFPT for
$1\leq N\leq 50$ traps, and compare the corresponding putative optimal trap
arrangements for different values of the domain eccentricity.
## 1 Introduction
The narrow capture problem, as described here, concerns the average time
required for a particle undergoing Brownian motion to encounter a region
within the domain, referred to as a trap, which causes its motion to cease. It
is mathematically defined as a Neumann-Dirichlet Poisson problem
$\begin{split}&\Delta u=-\dfrac{1}{D}\ ,\quad x\in\bar{\Omega}\
;\qquad\bar{\Omega}=\Omega\setminus\mathop{\cup}_{j=1}^{N}\Omega_{\varepsilon_{j}}\
;\\\\[5.0pt] &\partial_{n}u=0\ ,\quad x\in\partial\Omega\ ;\qquad u=0\ ,\quad
x\in\partial\Omega_{\varepsilon_{j}}\ ,\quad j=1,\ldots,N\ ;\end{split}$ (1.1)
where $\Omega\subset\mathbb{R}^{n}$, $n=2,3$, denotes the physical domain of
the problem; $\\{\Omega_{\varepsilon_{j}}\\}_{j=1}^{N}$ are small volume trap
domains within $\Omega$; $\bar{\Omega}$ is the domain with the traps removed, where
the motion of particles takes place, and $\partial_{n}$ denotes the normal
derivative on the boundary of the domain. The diffusivity $D$ of the medium
filling $\bar{\Omega}$ is assumed constant. Problem (1.1) describes the
distribution of the mean capture time, the time $u(x)$ needed for a particle
to be captured by any trap, averaged over a large number of launches from the
same point $x\in\bar{\Omega}$. An illustration of the problem is provided in
Figure 1.
Figure 1: (a) A two-dimensional narrow capture problem in the unit disk
containing internal traps with absorbing boundaries
$\\{\partial\Omega_{\epsilon_{j}}\\}$. (b) A three-dimensional narrow capture
problem, a sample Brownian particle trajectory, leading to a capture in a trap
(lowermost) denoted by purple color (color online).
The boundary conditions on $\partial\Omega$ are reflective: $\partial_{n}u=0$,
whereas the traps $\Omega_{\varepsilon_{j}}$ are characterized by immediate
absorption of a boundary particle, which is manifested by the Dirichlet boundary
condition $u=0$ on all $\partial\Omega_{\varepsilon_{j}}$.
The above generic formulation affords a variety of physical interpretations,
ranging from biological to electrostatic (see, e.g., Ref. [1, 2] for an
overview of applications). In this work, it will be strictly considered in
terms of a particle undergoing Brownian motion [3]. In this case, the problem
regards the stopping time [1, 4] of a Brownian particle, confined to a domain
with a reflective boundary which contains a number of absorbing “traps”. When
the path of the particle intersects the boundary of one of these traps, the
particle is captured, meaning that the process of Brownian motion is stopped.
This stopping time may be interpreted as the time required for the particle to
leave the confining domain, thus it is often referred to as the first passage
time [5, 6, 7]. As Brownian motion is an inherently random process, there is
no single first passage time which accurately characterizes the process;
instead, the mean first passage time (MFPT), representing the average time
taken for a particle to encounter a trap based on its first observed position,
is used. Interpreting the problem (1.1) accordingly, $u$ is the MFPT; $D$ is
the diffusion coefficient, representing the mobility of the Brownian particle;
$\Omega_{\varepsilon_{j}}$ is the portion of the domain occupied by the
$j$-th trap.
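To make this setup concrete, the following minimal sketch (Python/NumPy) estimates $u(x_{0})$ in the unit disk by direct simulation of the underlying Brownian motion, reflecting walkers at the unit circle and absorbing them on first contact with a trap. The time step, walker count, default trap radius, and the simple radial reflection are illustrative choices, not the numerical scheme of any cited work.

```python
import numpy as np

def mfpt_monte_carlo(x0, trap_centers, eps=0.05, D=1.0,
                     dt=1e-4, n_walkers=500, rng=None):
    """Crude Monte Carlo estimate of the MFPT u(x0) for problem (1.1)
    in the unit disk with reflecting boundary and absorbing traps."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma = np.sqrt(2.0 * D * dt)      # std. dev. of each Brownian increment
    times = np.empty(n_walkers)
    for k in range(n_walkers):
        x, t = np.array(x0, dtype=float), 0.0
        while True:
            x += sigma * rng.standard_normal(2)
            r = np.linalg.norm(x)
            if r > 1.0:                # reflect radially off the unit circle
                x *= (2.0 - r) / r
            t += dt
            if any(np.linalg.norm(x - c) <= eps for c in trap_centers):
                times[k] = t           # absorbed: record first passage time
                break
    return times.mean()

# Example: a single trap at the origin, walkers launched from (0.5, 0).
print(mfpt_monte_carlo((0.5, 0.0), [np.zeros(2)]))
```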
Given the physical domain and the number and sizes of the traps, it is of
interest to ask whether there is an _optimal_ arrangement of traps within the
domain which minimizes the MFPT, or in other words, maximizes the rate at
which Brownian particles encounter the absorbing traps. Related work dedicated
to similar optimization, in the case that the traps are asymptotically small
relative to the size of the domain, for various kinds of confining domains
with interior or boundary traps, can be found, for example, in Refs [8, 9, 10,
11, 12] and references therein. Both (putative) globally optimal and multiple
locally optimal arrangements of boundary and volume traps have been computed
in various settings.
In this contribution, we consider the narrow capture problem for the case of a
2D elliptical domain, elaborating on previous work on the subject for the case
of a circular domain [8], and the more general case of the elliptical domain
[9]. The paper is organized as follows. In Section 2 we briefly summarise the
previous results which this work is based on. This includes a review of the
approximations used in the case that the traps are small relative to the
domain size, and are well-separated; as well as an explanation of the choice
of merit function, and relevant constraints, for each case.
In Section 3 the optimization method chosen in our study is described. This
includes an explanation of our selection of algorithms, as well as the details of
their use, and some decisions made to facilitate comparison to previous
studies. Specifically, we seek the optimal configuration of traps for numbers
of traps $N\leq 50$, and elliptic domain eccentricities of $0$, $1/8$, $1/4$,
and $1/2$.
In Section 4 the results of the study are presented. These results include the
optimized values of the merit functions (related to the domain-average
Brownian particle capture time) for each number of traps, and each domain
eccentricity; in the case of the unit disk, our results are compared to those
of a previous study. Additionally, the distributions of traps for some
illustrative cases are shown, and bulk measures of trap distribution are
calculated for each of the optimized configurations.
In Section 5, the results presented in Section 4 are discussed, and some
remarks are provided regarding the distribution of traps, the
interpretation of the bulk quantities used, and some related problems.
## 2 Problem Statement
The goal of this contribution is to further explore the optimal configuration
of absorbing traps in terms of the asymptotic expansions for the circular and
elliptical domains for which asymptotic MFPT formulae depending on trap
positions are now available [8, 9]. To this end, numerical methods will be
used to approximate the optimum configurations of larger numbers of traps than
have previously been considered. In the case of the unit disk, the parameter
space used in this study is general and does not assume any simplifying
restrictions that have been imposed in previous works to reduce computational
complexity.
In essence, the problem at hand is to find the trap positions which minimize
the spatial average of the MFPT $u(x)$, denoted by $\overline{u}_{0}$. This
section will summarize the equations used to minimize this quantity, as
derived in [8, 9]. Here the unit disk will be considered a special case of the
elliptical domain with zero eccentricity. In either case, when the domain
contains $N$ identical circular traps, each of radius $\varepsilon$, the
average MFPT satisfies [9]
$\overline{u}_{0}\ =\ \dfrac{|\Omega|}{2\pi D\nu N}\ +\
\dfrac{2\pi}{N}\textbf{e}^{T}\mathcal{G}\mathcal{A}\ ,$ (2.1)
where the column vector $\mathcal{A}$ is the solution of the linear system
$\left[I+2\pi\nu\left(I-E\right)\mathcal{G}\right]\mathcal{A}\ =\
\dfrac{|\Omega|}{2\pi DN}\textbf{e}\ .$
Here $E\equiv\textbf{e}\textbf{e}^{T}/N$, $\textbf{e}=(1,\ldots,1)^{T}$,
$\nu\equiv-1/\log\varepsilon$, and the Green’s matrix $\mathcal{G}$ depends on
the trap locations $\\{\mathbf{x}_{1},\ldots,\mathbf{x}_{N}\\}$, such that
$\mathcal{G}_{ij}\ =\ \left\\{\begin{array}[]{ll}G(\mathbf{x}_{i};\mathbf{x}_{j})\quad\text{if}\ i\neq j\ ,\\\\[5.0pt] R(\mathbf{x}_{j})\quad\text{if}\ i=j\ ;\end{array}\right.\quad\text{where}\quad R(\mathbf{x}_{j})=\lim_{\mathbf{x}\to\mathbf{x}_{j}}\left(G(\mathbf{x};\mathbf{x}_{j})+\dfrac{1}{2\pi}\log|\mathbf{x}-\mathbf{x}_{j}|\right)$
(2.2)
where $G$ is the Green’s function of the domain and $R$ denotes its regular part.
Examining the equation (2.1), it can be seen that the first term depends only
on the combined trap size but does not depend on the trap locations. The
second term explicitly depends on the trap locations and
therefore can be optimized by adjusting trap positions. The merit function
subject to optimization can be chosen as
$q\left(\mathbf{x}\right)\ =\ \textbf{e}^{T}\mathcal{G}\mathcal{A}\ .$ (2.3)
For a value of $q$ to be a permissible solution, it is required that
$\overline{u}_{0}\geq 0$, as it is a measure of time; all traps must be within
the domain; no trap may be in contact with any other trap (or the two must
instead be represented as a single non-circular trap).
While the preceding statements are true for both the circular and the
elliptical domain, the elements of the matrix $\mathcal{G}$ are populated by
evaluating the Green’s function of the domain for each pair of traps, and the
form of this function differs for the two cases considered here.
In the case of the circular domain, the elements of the Green’s matrix
$\mathcal{G}$ are computed using the Neumann Green’s function [8]
$G(\mathbf{x};\mathbf{x}_{0})\ =\
\dfrac{1}{2\pi}\left(-\log|\mathbf{x}-\mathbf{x}_{0}|\ -\
\log\left|\mathbf{x}|\mathbf{x}_{0}|-\dfrac{\mathbf{x}_{0}}{|\mathbf{x}_{0}|}\right|\
+\ \dfrac{1}{2}\left(|\mathbf{x}|^{2}+|\mathbf{x}_{0}|^{2}\right)\ -\
\dfrac{3}{4}\right)\ ,$ (2.4a) and its regular part $R(\mathbf{x})\ =\
\dfrac{1}{2\pi}\left(-\log\left|\mathbf{x}|\mathbf{x}|-\dfrac{\mathbf{x}}{|\mathbf{x}|}\right|\
+\ |\mathbf{x}|^{2}\ -\ \dfrac{3}{4}\right)\ .$ (2.4b)
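For the unit disk, equations (2.1)–(2.4) assemble into a short numerical routine. The sketch below is a minimal implementation, assuming $D=1$ and $\varepsilon=0.05$ by default and traps away from the origin (where (2.4b) is singular): it builds the Green's matrix $\mathcal{G}$, solves the linear system for $\mathcal{A}$, and returns the merit value $q$ of (2.3) together with $\overline{u}_{0}$ of (2.1).

```python
import numpy as np

def disk_green(x, x0):
    """Neumann Green's function of the unit disk, equation (2.4a)."""
    x, x0 = np.asarray(x, float), np.asarray(x0, float)
    r0 = np.linalg.norm(x0)
    image = x * r0 - x0 / r0           # the term x|x0| - x0/|x0|
    return (-np.log(np.linalg.norm(x - x0)) - np.log(np.linalg.norm(image))
            + 0.5 * (x @ x + x0 @ x0) - 0.75) / (2.0 * np.pi)

def disk_regular(x):
    """Regular part R(x) of the Green's function, equation (2.4b)."""
    x = np.asarray(x, float)
    r = np.linalg.norm(x)
    image = x * r - x / r              # the term x|x| - x/|x|
    return (-np.log(np.linalg.norm(image)) + x @ x - 0.75) / (2.0 * np.pi)

def merit_and_mfpt(traps, eps=0.05, D=1.0):
    """Merit q = e^T G A of (2.3) and the average MFPT of (2.1)."""
    traps = np.asarray(traps, float)
    N, area = len(traps), np.pi        # |Omega| = pi for the unit disk
    nu = -1.0 / np.log(eps)
    G = np.array([[disk_regular(traps[i]) if i == j
                   else disk_green(traps[i], traps[j])
                   for j in range(N)] for i in range(N)])
    e = np.ones(N)
    E = np.outer(e, e) / N
    # linear system [I + 2*pi*nu*(I - E) G] A = |Omega| / (2*pi*D*N) e
    A = np.linalg.solve(np.eye(N) + 2*np.pi*nu*(np.eye(N) - E) @ G,
                        area / (2*np.pi*D*N) * e)
    q = e @ G @ A
    u_bar = area / (2*np.pi*D*nu*N) + (2*np.pi/N) * q
    return q, u_bar

# Example: two traps placed symmetrically on the x-axis; this particular
# placement happens to give q close to the N = 2, kappa = 0 entry of Table 2.
print(merit_and_mfpt([(-0.45, 0.0), (0.45, 0.0)]))
```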
The Green’s function for the elliptical domain, in the form of a quickly
convergent series, has been derived in [9] using elliptical coordinates
$(\xi,\eta)$. It has the form
$\begin{split}G(\mathbf{x},\mathbf{x}_{0})\ =\
\dfrac{1}{4|\Omega|}\left(|\mathbf{x}|^{2}+|\mathbf{x}_{0}|^{2}\right)\ -\
\dfrac{3}{16|\Omega|}(a^{2}+b^{2})\ -\ \dfrac{1}{4\pi}\log\beta\ -\
\dfrac{1}{2\pi}\xi_{>}\\\\[5.0pt] \ -\
\dfrac{1}{2\pi}\sum^{\infty}_{n=0}\log\left(\prod_{j=1}^{8}|1-\beta^{2n}z_{j}|\right)\
,\qquad\mathbf{x}\neq\mathbf{x}_{0}\ ,\end{split}$ (2.5)
where $a$ and $b$ are the semi-major and semi-minor axes of the domain, respectively;
the area of the domain is $|\Omega|=\pi ab$, the parameter
$\beta=(a-b)/(a+b)$, and the values $\xi_{>}=\mathop{\hbox{\rm
max}}(\xi,\xi_{0})$, $z_{1},\ldots,z_{8}$ are defined in terms of the
elliptical coordinates $\xi$ and $\eta$ as follows.
The Cartesian coordinates $(x,y)$ and the elliptical coordinates $(\xi,\eta)$
are related by the transformation
$x=f\cosh\xi\cos\eta\ ,\quad y=f\sinh\xi\sin\eta\ ,\quad f=\sqrt{a^{2}-b^{2}}\
.$ (2.6a) For the semi-major and semi-minor axes of the elliptical domain, one has
$a=f\cosh\xi_{b}\ ,\quad b=f\sinh\xi_{b}\
,\quad\xi_{b}=\tanh^{-1}\left(\dfrac{b}{a}\right)=-\dfrac{1}{2}\log\beta\ .$
(2.6b)
For the backward transformation, to determine $\xi(x,y)$, equation (2.6a) is
solved to give
$\xi=\dfrac{1}{2}\log\left(1-2s+2\sqrt{s^{2}-s}\right)\ ,\quad
s\equiv\dfrac{-\mu-\sqrt{\mu^{2}+4f^{2}y^{2}}}{2f^{2}}\ ,\quad\mu\equiv
x^{2}+y^{2}-f^{2}\ .$ (2.7a) In a similar fashion, $\eta(x,y)$ is found in
terms of $\eta_{\star}\equiv\sin^{-1}(\sqrt{p})$ to be $\eta\ =\
\left\\{\begin{array}[]{ll}\eta_{\star}\ ,&\text{if}\ x\geq 0,\ y\geq
0\\\\[5.0pt] \pi-\eta_{\star}\ ,&\text{if}\ x<0,\ y\geq 0\\\\[5.0pt]
\pi+\eta_{\star}\ ,&\text{if}\ x\leq 0,\ y<0\\\\[5.0pt] 2\pi-\eta_{\star}\
,&\text{if}\ x>0,\ y<0\\\\[5.0pt] \end{array}\right.\ ,\quad\text{where}\quad
p\equiv\dfrac{-\mu+\sqrt{\mu^{2}+4f^{2}y^{2}}}{2f^{2}}\ .$ (2.7b)
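The backward map (2.7a)–(2.7b) translates directly into code. In the round-trip check of the following sketch, the sample point and the choice $\kappa=0.5$ with $ab=1$ (so that $|\Omega|=\pi$) are illustrative.

```python
import numpy as np

def elliptic_coords(x, y, a, b):
    """Map Cartesian (x, y) to elliptical (xi, eta) via (2.7a)-(2.7b)."""
    f2 = a**2 - b**2                   # f^2, with f = sqrt(a^2 - b^2)
    mu = x**2 + y**2 - f2
    root = np.sqrt(mu**2 + 4.0 * f2 * y**2)
    s = (-mu - root) / (2.0 * f2)      # s <= 0, so s^2 - s >= 0 below
    xi = 0.5 * np.log(1.0 - 2.0*s + 2.0*np.sqrt(s**2 - s))
    p = (-mu + root) / (2.0 * f2)
    eta_star = np.arcsin(np.sqrt(p))
    if x >= 0 and y >= 0:              # quadrant selection of (2.7b)
        eta = eta_star
    elif x < 0 and y >= 0:
        eta = np.pi - eta_star
    elif x <= 0 and y < 0:
        eta = np.pi + eta_star
    else:                              # x > 0, y < 0
        eta = 2.0 * np.pi - eta_star
    return xi, eta

# Round trip for a point in an ellipse with kappa = 0.5 and a*b = 1:
a = (1.0 - 0.5**2) ** -0.25
b = 1.0 / a
xi, eta = elliptic_coords(0.3, 0.2, a, b)
f = np.sqrt(a**2 - b**2)
print(f*np.cosh(xi)*np.cos(eta), f*np.sinh(xi)*np.sin(eta))  # ~ (0.3, 0.2)
```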
Finally, the $z_{j}$-terms appearing in the infinite sum of equation (2.5) are
defined via $\xi$, $\eta$, and $\xi_{b}$ as
$\begin{array}[]{lll}z_{1}\equiv e^{-|\xi-\xi_{0}|+i(\eta-\eta_{0})}\
,&z_{2}\equiv e^{|\xi-\xi_{0}|-4\xi_{b}+i(\eta-\eta_{0})}\ ,&z_{3}\equiv
e^{-(\xi+\xi_{0})-2\xi_{b}+i(\eta-\eta_{0})}\ ,\\\\[4.0pt] z_{4}\equiv
e^{(\xi+\xi_{0})-2\xi_{b}+i(\eta-\eta_{0})}\ ,&z_{5}\equiv
e^{(\xi+\xi_{0})-4\xi_{b}+i(\eta+\eta_{0})}\ ,&z_{6}\equiv
e^{-(\xi+\xi_{0})+i(\eta+\eta_{0})}\ ,\\\\[4.0pt] z_{7}\equiv
e^{|\xi-\xi_{0}|-2\xi_{b}+i(\eta+\eta_{0})}\ ,&z_{8}\equiv
e^{-|\xi-\xi_{0}|-2\xi_{b}+i(\eta+\eta_{0})}\ .&\\\ \end{array}$ (2.8)
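Combining (2.5) with (2.8), $G$ can then be evaluated by truncating the rapidly convergent series (the terms decay like $\beta^{2n}$, with $\beta<1$). The sketch below reuses elliptic_coords from the previous snippet; it assumes a genuinely elliptic domain ($a>b$, so $\beta>0$; the disk limit requires (2.4a) instead), and the truncation length is an arbitrary illustrative choice.

```python
import numpy as np

def elliptic_green(x, x0, a, b, n_terms=50):
    """Neumann Green's function of the ellipse: series (2.5) with the
    z_j of (2.8); valid for x != x0 and a > b (so that beta > 0)."""
    x, x0 = np.asarray(x, float), np.asarray(x0, float)
    area = np.pi * a * b
    beta = (a - b) / (a + b)
    xi, eta = elliptic_coords(x[0], x[1], a, b)
    xi0, eta0 = elliptic_coords(x0[0], x0[1], a, b)
    xib = -0.5 * np.log(beta)          # xi_b of equation (2.6b)
    dxi, sxi = abs(xi - xi0), xi + xi0
    dm, dp = eta - eta0, eta + eta0
    z = [np.exp(-dxi + 1j*dm),          np.exp(dxi - 4*xib + 1j*dm),
         np.exp(-sxi - 2*xib + 1j*dm),  np.exp(sxi - 2*xib + 1j*dm),
         np.exp(sxi - 4*xib + 1j*dp),   np.exp(-sxi + 1j*dp),
         np.exp(dxi - 2*xib + 1j*dp),   np.exp(-dxi - 2*xib + 1j*dp)]
    series = sum(np.log(np.prod([abs(1.0 - beta**(2*n) * zj) for zj in z]))
                 for n in range(n_terms))
    return ((x @ x + x0 @ x0) / (4.0 * area)
            - 3.0 * (a**2 + b**2) / (16.0 * area)
            - np.log(beta) / (4.0 * np.pi)
            - max(xi, xi0) / (2.0 * np.pi)   # the xi_> term
            - series / (2.0 * np.pi))
```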
The regular part of the Neumann Green’s function, $R$, can be expressed in
similar terms as $G$ in equation (2.5) but requires a restatement of the
$z_{j}$-terms given in (2.8). It is given by
$\begin{split}R(\mathbf{x}_{0})\ =\ &\dfrac{|\mathbf{x}_{0}|^{2}}{2|\Omega|}\
-\ \dfrac{3}{16|\Omega|}(a^{2}+b^{2})\ +\ \dfrac{1}{2\pi}\log(a+b)\ -\
\dfrac{\xi_{0}}{2\pi}\ +\
\dfrac{1}{4\pi}\log\left(\cosh^{2}\xi_{0}-\cos^{2}\eta_{0}\right)\\\\[5.0pt]
&\ -\ \dfrac{1}{2\pi}\sum_{n=1}^{\infty}\log\left(1-\beta^{2n}\right)\ -\
\dfrac{1}{2\pi}\sum_{n=0}^{\infty}\log\left(\prod_{j=2}^{8}\left|1-\beta^{2n}z^{0}_{j}\right|\right).\end{split}\
$ (2.9a) Here $z^{0}_{j}$ denotes the limiting value of $z_{j}$, as defined in
equation (2.8), as $(\xi,\eta)\to(\xi_{0},\eta_{0})$, given by
$\begin{array}[]{lll}&z^{0}_{2}=\beta^{2}\ ,&z^{0}_{3}=\beta e^{-2\xi_{0}}\
,\\\\[5.0pt] z^{0}_{4}=\beta e^{2\xi_{0}}\
,&z^{0}_{5}=\beta^{2}e^{2(\xi_{0}+i\eta_{0})}\
,&z^{0}_{6}=e^{2(-\xi_{0}+i\eta_{0})}\ ,\\\\[5.0pt] z^{0}_{7}=\beta
e^{2i\eta_{0}}\ ,&z^{0}_{8}=\beta e^{2i\eta_{0}}\ .&\\\ \end{array}$ (2.9b)
## 3 Global optimization
In this section, the methods used to find the optimum trap configurations
minimizing the average MFPT (2.1) are discussed. We include a description of
the general strategy for optimization, the algorithms used, and their specific
implementations.
For both problems the same general approach was taken to finding the optimum.
This was to search iteratively, switching between global and local searches
after each iteration. The global search used the particle swarm algorithm
PSwarm [13], as implemented in the freely available software package OPTI
[14]. The default values for the social and cognitive parameters were chosen,
meaning the local optimum known by each particle tended to be as attractive as
the known global optimum. These values were chosen with the intent that the
parameter space would be explored as broadly as possible. For the local
search, the Nelder-Mead algorithm [15], as implemented in MATLAB R2020, was
used. The algorithms were chosen for their generality and the simplicity of
adapting them to the current study.
In addition, for the case of the elliptical domain, special care needed to be
taken to ensure that the traps were well-separated. If the traps were to come
into contact, or overlap with one another, the asymptotic equation (2.1) can
yield non-physical values $\overline{u}_{0}<0$, (which is a common feature of
asymptotic formulas that replace finite traps with “point traps” in various
narrow escape and narrow capture setups). In the MFPT optimization, the traps
are effectively repelled from one another, as well as from their “reflections”
in the domain boundary; this is mathematically manifested in the fact that the
Green’s function (2.4a) for the disk domain grows logarithmically as
$\mathbf{x}\to\mathbf{x}_{0}$, as well as when $|\mathbf{x}|\to 1$. In
particular, $q$ increases as the distance between traps decreases; however, once
traps begin to overlap, $q$ decreases extremely rapidly, appearing to the
optimization algorithm to be a favourable configuration. Though this problem can be
addressed by artificially assigning $q$ a very high value when an unacceptable
configuration is encountered, this approach was found to significantly reduce
the effectiveness of the global search, as many evaluations of $q$ would be
wasted on these configurations. Instead, an optimum was first found using the
iterative approach taking $\varepsilon=0$, following which a local search was
carried out using these coordinates as an initial guess, taking
$\varepsilon=0.05$ in order to facilitate comparison with previous
studies [9].
Defining the eccentricity of the elliptic domain according to the usual
formula
$\kappa\ =\ \sqrt{1-\left(\dfrac{b}{a}\right)^{2}}\ ,$ (3.1)
where $a$ is the semi-major axis and $b$ the semi-minor, optimum configurations were
computed for $N\leq 50$ for domain eccentricities of $\kappa=0$, $1/8$, $1/4$,
and $1/2$. For each eccentricity value, the axes of the ellipse were chosen so
that the area of the domain was fixed at $|\Omega|=\pi$, to allow for natural
comparisons.
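The overall procedure can be sketched as follows; SciPy's differential_evolution and Nelder-Mead routines stand in here for the PSwarm/OPTI and MATLAB implementations actually used, merit_q is assumed to be a callable such as the disk routine of Section 2, and the hard rejection penalty mirrors the crude approach whose drawback for the global search is discussed above.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def optimize_traps(merit_q, N, a, b, seed=0):
    """Global search followed by local polishing for N trap centres
    (2N unknowns), constrained to the interior of the ellipse."""
    def penalized(flat):
        pts = flat.reshape(N, 2)
        # reject configurations with a trap outside (x/a)^2 + (y/b)^2 < 1;
        # as noted in the text, such hard penalties can hamper the search
        if np.any((pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 >= 1.0):
            return 1e6
        return merit_q(pts)

    bounds = [(-a, a), (-b, b)] * N
    rough = differential_evolution(penalized, bounds, seed=seed,
                                   maxiter=1000, polish=False)
    local = minimize(penalized, rough.x, method="Nelder-Mead",
                     options={"maxiter": 50000, "xatol": 1e-10})
    return local.x.reshape(N, 2), local.fun
```

In the workflow described above, the two stages are alternated repeatedly and the $\varepsilon=0$ optimum then seeds a final local search at $\varepsilon=0.05$; the single global-then-local pass in this sketch is a simplification.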
## 4 Optimization Results
A comparison of $\overline{u}_{0}$ for the optimal configurations found for
the unit disk in previous work [8] to those presented here, found using the
more accurate approximation [9], shows that the newly computed optima for
the unit disk are consistently better than previous results. Though it is
convention to discuss optimal configurations in terms of their interaction
energy, the quantity used as the merit function, here the $\overline{u}_{0}$
is used for comparison. This serves the purpose of normalizing the results
obtained using the expression for $\overline{u}_{0}$ used in previous work
[8],
$\overline{u}_{0}\ =\ \dfrac{|\Omega|}{2\pi D\nu
N}\left(1+\dfrac{2\pi\nu}{N}\textbf{e}^{T}\mathcal{G}\textbf{e}\right)\ ,$
(4.1)
to equation (2.1), the newly derived expression [9],
$\overline{u}_{0}\ =\ \dfrac{|\Omega|}{2\pi D\nu
N}\left(1+\dfrac{4\pi^{2}D\nu}{|\Omega|}\textbf{e}^{T}\mathcal{G}\mathcal{A}\right)\
.$ (4.2)
Though it would be sufficient to compute only the second term for comparison,
$\overline{u}_{0}$ serves as a more consistent standard for comparison. Table
1 compares $\overline{u}_{0}$ for each $N$ reported in the previous study,
computed using the results found in Table 2 of Ref. [8], while Figure 2
depicts the relative difference between the two.
The computed optimal values of the merit function (2.3) that correspond to
putative globally optimal minima of the average MFPT (2.1), for the domain
eccentricities $\kappa=0$ (circular disk), $1/8$, $1/4$, and $1/2$, are
presented in Table 2 below, and are graphically shown in Figure 3. While the
first three plots are nearly identical, the plot (d) for the largest
eccentricity value differs significantly for small $N$, but becomes similar to
the other plots for larger $N$.
Plots comparing the optimal configurations of select $N$ for each of the
eccentricities considered in this study are shown in Figures 4–7. Each plot
shows the position of the trap within the domain, along with a visualization
of a Delaunay triangulation [16] calculated using the traps as vertices, to
illustrate the distribution of, and relative distance between, traps. In
addition, it was of interest to see how the ring-like distribution of traps
would change with the eccentricity of the domain. To visualize this change, a
scaling factor was calculated for each trap such that each trap would lie on a
scaled copy of the elliptic domain boundary. These scaling factors are shown
in the lower subplots in Figures 4–7. For the case of $N=5$, Figure 4 can be
compared to the optimal configurations presented in Ref [9], through which it
can be seen that the two are qualitatively similar and exhibit the same
relationship between trap distribution and domain eccentricity.
In order to examine the distribution of traps in terms of their mutual
distance, a Delaunay triangulation was computed to identify approximate
nearest neighbours to each trap (see upper plots in Figures 4–7). In general,
for a configuration of $N$ traps distributed, in some sense, “uniformly” over
the elliptic domain of area $|\Omega|=\pi$, the average “area per trap” is
given by $A(N)=|\Omega|/N=\pi/N$. Likening an optimal arrangement of $N$ traps
to a collection of circles packed into an enclosed space, the (average)
distance $\langle d\rangle$ between two neighbouring traps would be the
distance between the centers of two identical circles representing the area
occupied by each trap; it would be related to the area per trap as
$A(N)=\pi\langle d\rangle^{2}/4$. One consequently finds that the average
distance between neighbouring traps, equivalent to the diameter of one of the
circles, is given by
$\langle d\rangle\ =\ \sqrt{\dfrac{4|\Omega|}{\pi N}}\ =\ \dfrac{2}{\sqrt{N}}\
.$ (4.3)
Extending this comparison to the traps nearest the boundary, the smallest
distance between a trap and the boundary was taken to be the radius of a
circle surrounding the trap, and the diameter of this circle was compared to
the smallest distance between two traps. This essentially provides a measure
of the distance between a near-the-boundary trap and its “reflection” in the
Neumann boundary.
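These bulk measures are straightforward to reproduce. The sketch below extracts the Delaunay edges with scipy.spatial and compares the resulting neighbour distances against formula (4.3); the variable traps is assumed to hold an optimized $N\times 2$ configuration, e.g. one returned by the optimization sketch of Section 3.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbour_distances(traps):
    """Distances between Delaunay-neighbouring traps (the upper plots of
    Figures 4-7 are based on the same triangulation)."""
    traps = np.asarray(traps, float)
    edges = set()
    for simplex in Delaunay(traps).simplices:   # 3 edges per triangle
        for i in range(3):
            edges.add(tuple(sorted((simplex[i], simplex[(i + 1) % 3]))))
    return np.array([np.linalg.norm(traps[i] - traps[j]) for i, j in edges])

def area_per_trap_distance(N, area=np.pi):
    """Average neighbour distance <d> from the 'area per trap' argument (4.3)."""
    return np.sqrt(4.0 * area / (np.pi * N))

d = neighbour_distances(traps)     # 'traps': an optimized N x 2 array
print(d.mean(), d.min(), area_per_trap_distance(len(traps)))
```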
In Figure 8, for each of the four considered eccentricities of the elliptic
domain, the mean pairwise distance between neighbouring traps is plotted as a
function of $N$, along with minimum pairwise distance between traps, and
$2\times$ minimal distance to the boundary. These are compared with the
average distance formula (4.3) coming from the “area per trap” argument. It
can be observed that the simple formula (4.3) may be used as a reasonable
estimate of common pairwise distances between traps in an optimal
configuration. The case of $\kappa=0.5$ serves as somewhat of an exception.
This could either be due to a significant change in the distribution of the
traps as the eccentricity increases, or it could be an artefact introduced by
the calculation of the Delaunay triangulation.
As mentioned in Section 3, the optimal configuration for the case
$\varepsilon=0$ was used as an initial guess when searching for the
optimum in the case $\varepsilon=0.05$. Using this approach it was found that
the optimal configurations for the two cases were not identical, implying that
the optimal configuration significantly depends on the size of the traps.
$N$ | $\overline{u}_{0}$ | $\overline{u}_{0}^{\prime}$
---|---|---
6 | 0.11648 | 0.11648
7 | 0.09299 | 0.09297
8 | 0.07660 | 0.07660
9 | 0.06518 | 0.06512
10 | 0.05653 | 0.05624
11 | 0.04920 | 0.04900
12 | 0.04291 | 0.04278
13 | 0.03805 | 0.03796
14 | 0.03380 | 0.03375
15 | 0.03042 | 0.03038
16 | 0.02747 | 0.02745
17 | 0.02502 | 0.02499
18 | 0.02286 | 0.02280
19 | 0.02078 | 0.02076
20 | 0.01909 | 0.01907
21 | 0.01756 | 0.01755
22 | 0.01626 | 0.01624
23 | 0.01512 | 0.01510
24 | 0.01411 | 0.01403
25 | 0.01314 | 0.01307
Table 1: Average MFPT in the unit disk for previously computed optimal
configurations $\overline{u}_{0}$ ( Ref. [8], Table 2 ), compared to that of
the newly computed configurations for the unit disk
$\overline{u}_{0}^{\prime}$. Here it can be seen that the new values are
consistently smaller, at most differing in the third significant figure. A
plot of the difference between these two, relative to the previous results,
can be found in Figure 2.
Figure 2: Relative difference between average MFPT
in the unit disk for previously computed optimal configurations
$\overline{u}_{0}$, compared to that of the newly computed configurations
$\overline{u}_{0}^{\prime}$ according to
$(\overline{u}_{0}^{\prime}-\overline{u}_{0})/\overline{u}_{0}$. Here it can
be seen that the new values are consistently smaller, at most differing in the
third significant figure. A table of these values can be found in Table 1.
Figure 3: The putative optimal values of the merit function (2.3) for the
ellipse eccentricity values $\kappa=0$ (a), $\kappa=0.125$ (b), $\kappa=0.250$
(c), and $\kappa=0.500$ (d), as functions of the number of traps $N$. [The
corresponding numerical values are presented in Table 2.]
$N$ | $\kappa=0$ | $\kappa=0.125$ | $\kappa=0.25$ | $\kappa=0.5$
---|---|---|---|---
1 | -0.0597 | -0.0594 | -0.0540 | 0.1730
2 | -0.0754 | -0.0792 | -0.0854 | 0.0175
3 | -0.0969 | -0.0967 | -0.0959 | -0.0452
4 | -0.1112 | -0.1113 | -0.1115 | -0.0793
5 | -0.1207 | -0.1207 | -0.1200 | -0.1007
6 | -0.1272 | -0.1274 | -0.1289 | -0.1154
7 | -0.1348 | -0.1347 | -0.1342 | -0.1261
8 | -0.1409 | -0.1408 | -0.1393 | -0.1343
9 | -0.1451 | -0.1451 | -0.1447 | -0.1407
10 | -0.1489 | -0.1494 | -0.1492 | -0.1457
11 | -0.1526 | -0.1532 | -0.1533 | -0.1498
12 | -0.1567 | -0.1566 | -0.1569 | -0.1530
13 | -0.1599 | -0.1598 | -0.1603 | -0.1559
14 | -0.1632 | -0.1632 | -0.1632 | -0.1587
15 | -0.1659 | -0.1660 | -0.1657 | -0.1614
16 | -0.1685 | -0.1686 | -0.1683 | -0.1642
17 | -0.1708 | -0.1705 | -0.1708 | -0.1668
18 | -0.1731 | -0.1731 | -0.1733 | -0.1693
19 | -0.1756 | -0.1755 | -0.1756 | -0.1718
20 | -0.1777 | -0.1776 | -0.1775 | -0.1741
21 | -0.1798 | -0.1797 | -0.1796 | -0.1756
22 | -0.1815 | -0.1815 | -0.1816 | -0.1768
23 | -0.1831 | -0.1831 | -0.1833 | -0.1800
24 | -0.1848 | -0.1851 | -0.1848 | -0.1820
25 | -0.1864 | -0.1867 | -0.1864 | -0.1834
$N$ | $\kappa=0$ | $\kappa=0.125$ | $\kappa=0.25$ | $\kappa=0.5$
---|---|---|---|---
26 | -0.1880 | -0.1882 | -0.1880 | -0.1858
27 | -0.1897 | -0.1896 | -0.1896 | -0.1875
28 | -0.1911 | -0.1910 | -0.1911 | -0.1893
29 | -0.1925 | -0.1925 | -0.1925 | -0.1909
30 | -0.1940 | -0.1940 | -0.1938 | -0.1920
31 | -0.1953 | -0.1953 | -0.1952 | -0.1941
32 | -0.1964 | -0.1964 | -0.1965 | -0.1953
33 | -0.1977 | -0.1978 | -0.1978 | -0.1958
34 | -0.1989 | -0.1989 | -0.1990 | -0.1975
35 | -0.2000 | -0.2001 | -0.2000 | -0.1987
36 | -0.2012 | -0.2012 | -0.2012 | -0.2003
37 | -0.2025 | -0.2024 | -0.2023 | -0.2005
38 | -0.2035 | -0.2035 | -0.2033 | -0.2022
39 | -0.2045 | -0.2044 | -0.2043 | -0.2028
40 | -0.2056 | -0.2055 | -0.2053 | -0.2037
41 | -0.2065 | -0.2065 | -0.2064 | -0.2046
42 | -0.2074 | -0.2073 | -0.2073 | -0.2049
43 | -0.2083 | -0.2084 | -0.2083 | -0.2065
44 | -0.2093 | -0.2092 | -0.2092 | -0.2069
45 | -0.2102 | -0.2103 | -0.2102 | -0.2086
46 | -0.2110 | -0.2111 | -0.2111 | -0.2091
47 | -0.2119 | -0.2120 | -0.2119 | -0.2102
48 | -0.2127 | -0.2128 | -0.2128 | -0.2098
49 | -0.2136 | -0.2136 | -0.2136 | -0.2118
50 | -0.2144 | -0.2143 | -0.2143 | -0.2126
Table 2: Optimized values of the merit function (2.3), for each number of
traps $N$ and eccentricity $\kappa$ considered in this study. Plots of these
values are found in Figure 3.
Figure 4: Plots depicting optimal trap distribution for $N=5$, comparing
eccentricities of (a) $\kappa=0$, (b) $\kappa=0.125$, (c) $\kappa=0.250$, (d)
$\kappa=0.500$. Upper plots show positions of traps along with a crude
visualization of nearest-neighbour pairs calculated using Delaunay
triangulation. Lower plots show the scaling factor applied to the domain
boundary such that the scaled version would pass through each trap, numbered
from 1 to $N$.
Figure 5: Plots depicting optimal trap distribution for $N=10$, comparing
eccentricities of (a) $\kappa=0$, (b) $\kappa=0.125$, (c) $\kappa=0.250$, (d)
$\kappa=0.500$. Upper plots show positions of traps along with a crude
visualization of nearest-neighbour pairs calculated using Delaunay
triangulation. Lower plots show the scaling factor applied to the domain
boundary such that the scaled version would pass through a given trap,
numbered from 1 to $N$.
Figure 6: Plots depicting optimal trap distribution for $N=25$, comparing
eccentricities of (a) $\kappa=0$, (b) $\kappa=0.125$, (c) $\kappa=0.250$, (d)
$\kappa=0.500$. Upper plots show positions of traps along with a crude
visualization of nearest-neighbour pairs calculated using Delaunay
triangulation. Lower plots show the scaling factor applied to the domain
boundary such that the scaled version would pass through a given trap,
numbered from 1 to $N$.
Figure 7: Plots depicting optimal trap distribution for $N=40$, comparing
eccentricities of (a) $\kappa=0$, (b) $\kappa=0.125$, (c) $\kappa=0.250$, (d)
$\kappa=0.500$. Upper plots show positions of traps along with a crude
visualization of nearest-neighbour pairs calculated using Delaunay
triangulation. Lower plots show the scaling factor applied to the domain
boundary such that the scaled version would pass through a given trap,
numbered from 1 to $N$.
Figure 8: Plots depicting local properties of trap distribution, for domain
eccentricities $\kappa=0$ (a), $\kappa=0.125$ (b), $\kappa=0.250$ (c), and
$\kappa=0.500$ (d). The curve entitled “Measure of Area per Trap” shows the
distance $\langle d\rangle$ computed using the “area per trap” argument and
the resulting formula (4.3).
## 5 Discussion
At this point some interpretation of the previously stated results will be
presented. This discussion will concern the putative optimal values of the
average MFPT (2.1) in elliptic domains with internal traps, values of the
related merit function (2.3), the positions of traps within the domain, and
the bulk measures of trap distribution which were employed. To begin, the
method of study will be briefly reiterated. In order to study the dependence
of optimal trap configurations on both the number of traps, and the
eccentricity of the elliptic domain, merit functions for the average MFPT were
minimized for $N\leq 50$ and eccentricity (3.1) values of 0, 0.125, 0.25, and
0.5, while the area of the ellipse was kept constant, $|\Omega|=\pi$. In the
search for a global optimum, an iterative approach, which switched between
global and local searches, was used. The method used here was similar to that
used in Ref [9], though with a different algorithm for the local search, and
to that of Ref [11], which used different algorithms for both searches. A
somewhat different approach was employed in Ref [17], which made use of
numerical solutions to the Poisson problem.
In the case of the unit disk, comparing the results of this study to those of
a previous study [8] demonstrated that the optima reported here are
consistently an improvement on previous work. This is due to the use of a more
accurate asymptotic expression [9], as well as removing the constraint that
all traps be located on rings within the domain.
Plots of the putative globally optimal merit function values as functions of
$N$, for each eccentricity, can be found in Figure 3. From these plots it
seems that eccentricity is an important factor when there are few traps,
around $N<10$, but each function behaves similarly as $N$ increases. In
particular, as the number of traps $N$ increases, it is natural to expect that
the average MFPT $\overline{u}_{0}$ (2.1) approaches zero; setting
$\overline{u}_{0}=0$ in (2.1) shows that the merit function $q(\mathbf{x})$
must therefore, as $N\to\infty$, approach from above the value
$-|\Omega|/(4\pi^{2}D\nu)\simeq-0.238$, which agrees with the plots in Figure
3.
Examination of the positions of traps in the optimized configurations, both
visually and in terms of their radial coordinates, gives the impression that
the optimal configuration is one which consists of traps placed on the
vertices of nested polygons. These polygons, while irregular, seem to possess
some consistent structure, including being convex. (It is interesting to note
that the optimal configurations of confined interacting points often take
similar forms, both in two and three dimensions [18, 19, 20, 21].) Due to the
geometrical symmetries of the ellipse, the optimal configurations are defined
uniquely modulo the group $C_{2}\times C_{2}$ of reflections with respect to
both axes, which includes the rotation by $\pi$. The numerical algorithms,
however, choose a single specific representative of the equivalent putative
globally optimal configurations. For example, for a non-symmetric numerically
optimal configuration, several traps may be found along the midline of one
half (right or left) of the domain. Optimal trap configurations with the same
symmetry group as the ellipse were also observed (see, e.g., Figure 5(c)).
In addition to the examination of individual trap positions in each optimized
configuration, quantities were calculated using the distances between
neighbouring traps, defined according to a Delaunay triangulation of the trap
coordinates, which served as bulk measures of the distribution of traps in
each configuration. Plots of these measures, shown in Figure 8, illustrate
that the mean distance between neighbouring traps tends to be close to the
diameter of a circle which would occupy the average area of the domain per
trap, as in equation (4.3). Additionally, the minimum distance between any two
traps tends to be twice the minimum distance between a trap and the domain
boundary, which supports the intuitive reasoning that for the boundary value
problem (1.1) with interior traps, the Neumann boundary condition on
$\partial\Omega$ “reflects” each trap, so that under the average MFPT
optimization, every trap tends to “repel” from its reflection in the boundary
the same way as it is repelled from other traps.
In future work, it would be of interest to address two related problems. The
first is to carry out similar investigations for the near-disk domains
considered in Ref. [9]. Another interesting direction is the development of a
scaling law which would predict the behaviour of the MFPT as the number of
traps increases with their positions defined according to a specific
distribution, in particular, for distributions that globally or locally
minimize MFPT, or other distributions of practical significance. A similar
problem, along with the dilute trap fraction limit of homogenization theory,
was addressed in Refs. [22, 23] for the narrow escape problem involving
boundary traps located on the surface of a sphere in three dimensions.
### Acknowledgements
A.C. is thankful to NSERC of Canada for research support through the Discovery
grant RGPIN-2019-05570.
## References
* [1] Sidney Redner. A Guide to First-Passage Processes. Cambridge University Press, 2001.
* [2] David Holcman and Zeev Schuss. The narrow escape problem. SIAM Review, 56(2):213–257, 2014.
* [3] PG Saffman and M Delbrück. Brownian motion in biological membranes. Proceedings of the National Academy of Sciences, 72(8):3111–3113, 1975.
* [4] Ralf Metzler, Sidney Redner, and Gleb Oshanin. First-Passage Phenomena and their Applications, volume 35. World Scientific, 2014.
* [5] Paul C Bressloff, Berton A Earnshaw, and Michael J Ward. Diffusion of protein receptors on a cylindrical dendritic membrane with partially absorbing traps. SIAM Journal on Applied Mathematics, 68(5):1223–1246, 2008.
* [6] Paul C Bressloff and Jay M Newby. Stochastic models of intracellular transport. Reviews of Modern Physics, 85(1):135, 2013.
* [7] D Holcman and Z Schuss. Escape through a small opening: receptor trafficking in a synaptic membrane. Journal of Statistical Physics, 117(5-6):975–1014, 2004.
* [8] Theodore Kolokolnikov, Michele S Titcombe, and Michael J Ward. Optimizing the fundamental Neumann eigenvalue for the Laplacian in a domain with small traps. European Journal of Applied Mathematics, 16(2):161–200, 2005.
* [9] Sarafa A Iyaniwura, Tony Wong, Colin B Macdonald, and Michael J Ward. Optimization of the mean first passage time in near-disk and elliptical domains in 2-D with small absorbing traps (in press). arXiv preprint arXiv:2006.12722, 2020.
* [10] Alexei F Cheviakov and Michael J Ward. Optimizing the principal eigenvalue of the Laplacian in a sphere with interior traps. Mathematical and Computer Modelling, 53(7-8):1394–1409, 2011.
* [11] Jason Gilbert and Alexei Cheviakov. Globally optimal volume-trap arrangements for the narrow-capture problem inside a unit sphere. Physical Review E, 99(1):012109, 2019.
* [12] Wesley JM Ridgway and Alexei F Cheviakov. Locally and globally optimal configurations of N particles on the sphere with applications in the narrow escape and narrow capture problems. Physical Review E, 100(4):042413, 2019.
* [13] A Ismael F Vaz and Luís N Vicente. A particle swarm pattern search method for bound constrained global optimization. Journal of Global Optimization, 39(2):197–219, 2007.
* [14] Jonathan Currie, David I Wilson, Nick Sahinidis, and Jose Pinto. Opti: Lowering the barrier between open source optimizers and the industrial matlab user. Foundations of computer-aided process operations, 24:32, 2012.
* [15] Jeffrey C Lagarias, James A Reeds, Margaret H Wright, and Paul E Wright. Convergence properties of the Nelder–Mead simplex method in low dimensions. SIAM Journal on Optimization, 9(1):112–147, 1998.
* [16] Der-Tsai Lee and Bruce J Schachter. Two algorithms for constructing a Delaunay triangulation. International Journal of Computer & Information Sciences, 9(3):219–242, 1980.
* [17] Sarafa Iyaniwura, Tony Wong, Michael J Ward, and Colin B Macdonald. Simulation and optimization of mean first passage time problems in 2-D using numerical embedded methods and perturbation theory. arXiv preprint arXiv:1911.07842, 2019.
* [18] Neil JA Sloane, Ronald H Hardin, TDS Duff, and John H Conway. Minimal-energy clusters of hard spheres. Discrete & Computational Geometry, 14(3):237–259, 1995.
* [19] Vinothan N Manoharan, Mark T Elsesser, and David J Pine. Dense packing and symmetry in small clusters of microspheres. Science, 301(5632):483–487, 2003.
* [20] M Saint Jean, C Even, and C Guthmann. Macroscopic 2D Wigner islands. EPL (Europhysics Letters), 55(1):45, 2001.
* [21] MR Hoare and P Pal. Physical cluster mechanics: Statics and energy surfaces for monatomic systems. Advances in Physics, 20(84):161–196, 1971.
* [22] Alexei F Cheviakov, Michael J Ward, and Ronny Straube. An asymptotic analysis of the mean first passage time for narrow escape problems: Part II: The sphere. Multiscale Modeling & Simulation, 8(3):836–870, 2010.
* [23] Alexei F Cheviakov and Daniel Zawada. Narrow-escape problem for the unit sphere: Homogenization limit, optimal arrangements of large numbers of traps, and the $N^{2}$ conjecture. Physical Review E, 87(4):042118, 2013.
1 Data and Web Science Group, University of Mannheim, Mannheim, Germany
(email: <EMAIL_ADDRESS>)
2 Language Technology Lab, University of Cambridge, UK
(email: <EMAIL_ADDRESS>)
# Evaluating Multilingual Text Encoders for Unsupervised Cross-Lingual
Retrieval
Robert Litschko 1, Ivan Vulić 2, Simone Paolo Ponzetto 1, Goran Glavaš 1
###### Abstract
Pretrained multilingual text encoders based on neural Transformer
architectures, such as multilingual BERT (mBERT) and XLM, have achieved strong
performance on a myriad of supervised language understanding tasks.
Consequently, they have been adopted as a state-of-the-art paradigm for
multilingual and cross-lingual representation learning and transfer, rendering
cross-lingual word embeddings (CLWEs) effectively obsolete. However, questions
remain to which extent this finding generalizes 1) to unsupervised settings
and 2) for ad-hoc cross-lingual IR (CLIR) tasks. Therefore, in this work we
present a systematic empirical study focused on the suitability of the state-
of-the-art multilingual encoders for cross-lingual document and sentence
retrieval tasks across a large number of language pairs. In contrast to
supervised language understanding, our results indicate that for unsupervised
document-level CLIR – a setup in which there are no relevance judgments for
task-specific fine-tuning – the pretrained encoders fail to significantly
outperform models based on CLWEs. For sentence-level CLIR, we demonstrate that
state-of-the-art performance can be achieved. However, the peak performance is
not met using the general-purpose multilingual text encoders ‘off-the-shelf’,
but rather by relying on their variants that have been further specialized for
sentence understanding tasks.
###### Keywords:
Cross-lingual IR · Multilingual text encoders · Unsupervised IR.
## 1 Introduction
Cross-lingual information retrieval (CLIR) systems respond to queries in a
source language by retrieving relevant documents in another, target language.
Their success is typically hindered by data scarcity: they operate in
challenging low-resource settings without sufficient labeled training data,
i.e., human relevance judgments, to build supervised models (e.g., neural
matching models for pair-wise retrieval [53, 22]). This motivates the need for
robust, resource-lean and unsupervised CLIR approaches.
In previous work, Litschko et al. [27] have shown that language transfer
through cross-lingual embedding spaces (CLWEs) can be used to yield state-of-
the-art performance in a range of unsupervised ad-hoc CLIR setups. This
approach uses very weak supervision (i.e., only a bilingual dictionary
spanning 1K-5K word translation pairs), or even no supervision at all, in
order to learn a mapping that aligns two monolingual word embedding spaces
[19, 45]. Put simply, this enables casting CLIR tasks as ‘monolingual tasks in
the shared (CLWE) space’: at retrieval time both queries and documents are
represented as simple aggregates of their constituent CLWEs. However, owing to
the limitations of static CLWEs, this approach cannot capture polysemy in the
underlying text representations. Contextual text representation models
alleviate this issue [28]. They encode occurrences of the same word
differently depending on its surrounding context.
Such contextual representations are obtained via large models pretrained on
large text collections through general objectives such as (masked) language
modeling [16, 30]. Multilingual text encoders pretrained on 100+ languages,
such as mBERT [16] or XLM [14], have become a de facto standard for
multilingual representation learning and cross-lingual transfer in natural
language processing (NLP). These models demonstrate state-of-the-art
performance in a wide range of supervised language understanding and language
generation tasks [36, 26], especially in zero-shot settings: a typical modus
operandi is fine-tuning a pretrained multilingual encoder with task-specific
data of a source language (typically English) and then using it directly in a
target language.
However, there are still several crucial questions remaining. First, it is
unclear whether these general-purpose multilingual text encoders can be used
directly for ad-hoc CLIR without any additional supervision (i.e., relevance
judgments). Further, can they outperform the previous unsupervised CLIR
approaches based on static CLWEs [27]? How do they perform depending on the
(properties of the) language pair at hand? How can we encode useful semantic
information using these models, and do different “encoding variants” (see
later §3) yield different retrieval results? Are there performance differences
in unsupervised sentence-level versus document-level CLIR tasks? Finally, can
we boost performance by relying on sentence encoders that are specialized
towards dealing with sentence-level understanding in particular? In order to
address these questions, we present a systematic empirical study and profile
the suitability of state-of-the-art pretrained multilingual encoders for
different CLIR tasks and diverse language pairs. We evaluate two state-of-the-
art general-purpose pretrained multilingual encoders, mBERT [16] and XLM [14]
with a range of encoding variants, and also compare them to CLIR approaches
based on static CLWEs, and specialized multilingual sentence encoders. Our key
contributions can be summarized as follows:
(1) We empirically validate that, without any task-specific fine-tuning,
multilingual encoders such as mBERT and XLM fail to outperform CLIR approaches
based on static CLWEs. Their performance also crucially depends on how one
encodes semantic information with the models (e.g., treating them as
sentence/document encoders directly versus averaging over constituent words
and/or subwords). We also show that there is no “one-size-fits-all” approach,
and the results are task- and language-pair-dependent.
(2) We provide a first large-scale comparative evaluation of state-of-the-art
pretrained multilingual encoders on unsupervised document-level CLIR. We also
empirically show that encoder models specialized for sentence-level
understanding substantially outperform general-purpose models (mBERT and XLM)
on sentence-level CLIR tasks.
## 2 Related Work
Self-Supervised Pretraining and Transfer Learning. Recently, research on
universal sentence representations and transfer learning has gained much
traction. InferSent [13] transfers the encoder of a model trained on natural
language inference to other tasks, while USE [8] extends this idea to a multi-
task learning setting. More recent work explores self-supervised neural
Transformer-based [44] models based on (causal or masked) language modeling
(LM) objectives such as BERT [16], RoBERTa [30], GPT [37, 5], and XLM [14].
(Self-supervised learning can come in different flavors depending on the
training objective [10], but language modeling objectives still seem to be the
most popular choice.) Results on benchmarks such as GLUE
[47] and SentEval [12] indicate that these models can yield impressive
(sometimes human-level) performance in supervised Natural Language
Understanding (NLU) and Generation (NLG) tasks. These models have become _de
facto_ standard and omnipresent text representation models in NLP. In
supervised monolingual IR, self-supervised LMs have been employed as
contextualized word encoders [32], or fine-tuned as pointwise and pairwise
rankers [33].
Multilingual Text Encoders based on the (masked) LM objectives have also been
massively adopted in multilingual and cross-lingual NLP and IR applications. A
multilingual extension of BERT (mBERT) is trained with a shared subword
vocabulary on a single multilingual corpus obtained as concatenation of large
monolingual data in 104 languages. The XLM model [14] extends this idea and
proposes natively cross-lingual LM pretraining, combining causal language
modeling (CLM) and translation language modeling (TLM). (In CLM, the model is
trained to predict the probability of a word given the previous words in a
sentence. TLM is a cross-lingual variant of standard masked LM (MLM), with the
core difference that the model is given pairs of parallel sentences and is
allowed to attend to the aligned sentence when reconstructing a word in the
current sentence.) Strong performance of these models in supervised settings is
confirmed across a range of tasks on multilingual benchmarks such as XGLUE
[26] and XNLI [15]. However, recent work [39, 6] has indicated that these
general-purpose models do not yield strong results when used as out-of-the-box
text encoders in an unsupervised transfer learning setup. We further
investigate these preliminary observations, and confirm this finding also for
unsupervised ad-hoc CLIR tasks.
Multilingual text encoders have already found applications in document-level
CLIR. Jiang et al. [22] use mBERT as a matching model by feeding pairs of
English queries and foreign language documents. MacAvaney et al. [31] use
mBERT in a zero-shot setting, where they train a retrieval model on top of
mBERT on English relevance data and apply it on a different language. However,
prior work has not investigated unsupervised CLIR setups, and a systematic
comparative study focused on the suitability of the multilingual text encoders
for diverse ad-hoc CLIR tasks and language pairs is still lacking.
Specialized Multilingual Sentence Encoders. An extensive body of work focuses
on inducing multilingual encoders that capture sentence meaning. In [2], the
multilingual encoder of a sequence-to-sequence model is shared across
languages and optimized to be language-agnostic, whereas Guo et al. [20] rely
on a dual Transformer-based encoder architecture instead (with tied/shared
parameters) to represent parallel sentences. Rather than optimizing for
translation performance directly, their approach minimizes the cosine distance
between parallel sentences. A ranking softmax loss is used to classify the
correct (i.e., aligned) sentence in the other language from negative samples
(i.e., non-aligned sentences). In [50], this approach is extended by using a
bidirectional dual encoder and adding an additive margin softmax function,
which serves to push apart non-translation pairs in the shared embedding space.
The dual-encoder approach is now widely adopted [20, 51, 18, 39, 56], and
yields state-of-the-art multilingual sentence encoders which excel in
sentence-level NLU tasks.
Other recent approaches propose input space normalization, and re-aligning
mBERT and XLM with parallel data [56, 6], or using a teacher-student framework
where a student model is trained to imitate the output of the teacher network
while preserving high similarity of translation pairs [39]. In [51], authors
combine multi-task learning with a translation bridging task to train a
universal sentence encoder. We benchmark a series of representative sentence
encoders; their brief descriptions are provided in §3.3.
CLIR Evaluation and Application. The cross-lingual ability of mBERT and XLM
has been investigated by probing and analyzing their internals [23], as well
as in terms of downstream performance [34, 49]. In CLIR, these models as well
as dedicated multilingual sentence encoders have been evaluated on tasks such
as cross-lingual question-answer retrieval [51], bitext mining [58, 59], and
semantic textual similarity (STS) [21, 25]. Yet, the models have been
primarily evaluated on sentence-level retrieval, while classic ad-hoc
(unsupervised) document-level CLIR has not been in focus. Further, previous
work has not provided a large-scale comparative study across diverse language
pairs and with different model variants, nor has tried to understand and
analyze the differences between sentence-level and document-level tasks. In
this work, we aim to fill these gaps.
## 3 Multilingual Text Encoders
We now provide an overview of all the multilingual models in our comparison.
For completeness, we first briefly describe static CLWEs (§3.1). We then
discuss mBERT and XLM as representative general-purpose multilingual text
encoders trained with LM objectives (§3.2), as well as specialized
multilingual sentence encoders later in §3.3.
### 3.1 CLIR with (Static) Cross-lingual Word Embeddings
In a standard CLIR setup, we assume a query $Q_{L_{1}}$ issued in a source
language $L_{1}$, and a document collection comprising $N$ documents
$D_{i,L_{2}}$, $i=1,\ldots,N$, in a target language $L_{2}$. Let
$d=\{t_{1},t_{2},\dots,t_{N_{d}}\}$ be a document consisting of $N_{d}$
terms $t_{i}$. A typical approach to CLIR with static CLWEs is to represent
queries and documents as vectors
$\overrightarrow{Q},\overrightarrow{D}\in\mathbb{R}^{d}$ in a $d$-dimensional
shared embedding space [46, 27]. Each term is represented independently and
obtained by performing a lookup on a pre-computed static embedding table
$\overrightarrow{t_{i}}=emb\left(t_{i}\right)$. There exist a range of methods
for inducing shared embedding spaces with different levels of supervision,
such as parallel sentences, comparable documents, small bilingual
dictionaries, or even no supervision at all [41]. Given the shared
CLWE space, both query and document representations are obtained as
aggregations of their term embeddings. We follow Litschko et al. [27] and
represent documents as the weighted sum of their terms’ vectors, where each
term’s weight corresponds to its inverse document frequency (idf):
$\overrightarrow{d}=\sum_{i=1}^{N_{d}}{\mathit{idf}(t^{d}_{i})\cdot\overrightarrow{t^{d}_{i}}}$.
During retrieval, documents are ranked according to their cosine similarity to
the query.
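To make this retrieval procedure concrete, the following minimal Python sketch implements the idf-weighted aggregation and cosine ranking described above; the `emb` and `idf` lookup tables are hypothetical stand-ins for a pre-computed CLWE table and corpus statistics.

```python
import numpy as np

def embed_text(terms, emb, idf):
    """IDF-weighted aggregation: sum_i idf(t_i) * emb(t_i).
    `emb` maps terms to vectors in the shared CLWE space,
    `idf` maps terms to their inverse document frequency.
    Out-of-vocabulary terms are simply skipped in this sketch."""
    vecs = [idf[t] * emb[t] for t in terms if t in emb and t in idf]
    return np.sum(vecs, axis=0)

def rank_documents(query_terms, docs, emb, idf):
    """Rank documents (lists of terms) by cosine similarity to the query."""
    q = embed_text(query_terms, emb, idf)
    q = q / np.linalg.norm(q)
    scores = []
    for i, doc_terms in enumerate(docs):
        d = embed_text(doc_terms, emb, idf)
        scores.append((i, float(d @ q / np.linalg.norm(d))))
    return sorted(scores, key=lambda x: x[1], reverse=True)
```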
### 3.2 Multilingual (Transformer-Based) Language Models: mBERT and XLM
Massively multilingual pretrained neural language models such as mBERT and XLM
can be used as a dynamic embedding layer to produce contextualized word
representations, since they share a common input space on the subword level
(e.g., word-pieces, byte-pair encodings) across all languages. Let us assume
that a term (i.e., a word-level token) is tokenized into a sequence of $K$
subword tokens ($K\geq 1$; for simplicity, we assume that the subwords are
word-pieces (wp)): $t_{i}=\{\textit{wp}_{i,k}\}_{k=1}^{K}$. The
multilingual encoder then produces contextualized subword embeddings for the
term’s $K$ constituent subwords $\overrightarrow{wp_{i,k}}$, $k=1,\ldots,K$,
and we can aggregate these subword embeddings to obtain the representation of
the term $t_{i}$:
$\overrightarrow{t_{i}}=\psi\left(\{\overrightarrow{wp_{i,k}}\}_{k=1}^{K}\right)$,
where the function $\psi()$ is the aggregation function over the $K$
constituent subword embeddings. Once these term embeddings
$\overrightarrow{t_{i}}$ are obtained, we follow the same CLIR setup as with
CLWEs in §3.1.
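A minimal sketch of this subword aggregation, assuming the HuggingFace `transformers` interface; the model name, the layer choice, and the use of mean-pooling for $\psi$ are illustrative assumptions, not the paper's only configuration:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; the paper evaluates mBERT and XLM variants.
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

def term_embeddings(sentence, layer=-1):
    """Contextualized term vectors: mean-pool (psi) the subword vectors
    belonging to each word-level token of the input sentence.
    Returns a dict mapping the tokenizer's word index to a vector."""
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states[layer][0]
    word_ids = enc.word_ids()
    vectors = {}
    for wid in set(i for i in word_ids if i is not None):
        idx = [k for k, w in enumerate(word_ids) if w == wid]
        vectors[wid] = hidden[idx].mean(dim=0)  # psi = average over subwords
    return vectors
```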
Static Word Embeddings from Multilingual Transformers. We first use
multilingual transformers (mBERT and XLM) in two different ways to induce
static word embedding spaces for all languages. In a simpler variant, we feed
terms into the encoders in isolation (ISO), that is, without providing any
surrounding context for the terms. This effectively constructs a static word
embedding table similar to what is done in §3.1, and allows the CLIR model
(§3.1) to operate at a non-contextual word level. An empirical CLIR comparison
between ISO and CLIR operating on CLWEs [27] then effectively quantifies how
well multilingual encoders (mBERT and XLM) encode word-level representations.
In a more elaborate variant we do leverage the contexts in which the terms
appear, and construct average-over-contexts embeddings (AOC). For each term
$t$ we collect a number of sentences $s_{i}\in\mathcal{S}_{t}$ in which the
term occurs. We use the full set of Wikipedia sentences $\mathcal{S}$ to
sample sets of contexts $\mathcal{S}_{t}$ for vocabulary terms. For a given
sentence $s_{i}$ let $j$ denote the position of $t$’s first occurrence. We
then transform $s_{i}$ with mBERT or XLM as the encoder, $enc(s_{i})$, and
extract the contextualized embedding of $t$ via mean-pooling, i.e., by
averaging embeddings of its constituent subwords,
$\psi\left(\{\overrightarrow{wp_{j,k}}\}_{k=1}^{K}\right)=\frac{1}{K}\sum_{k=1}^{K}{\overrightarrow{wp_{j,k}}}$.
For each vocabulary term, we obtain $N_{t}=\min(|\mathcal{S}_{t}|,\tau)$
contextualized vectors, with $|\mathcal{S}_{t}|$ as the number of Wikipedia
sentences containing $t$ and $\tau$ as the maximal number of sentence samples
for a term. The final static embedding of $t$ is then simply the average over
the $N_{t}$ contextualized vectors.
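The AOC construction can be sketched as follows, reusing the `term_embeddings` helper from the previous snippet; whitespace tokenization is a simplifying assumption here and would need to be reconciled with the encoder's own word segmentation in practice:

```python
import random
import torch

def aoc_embedding(term, sentences, tau=60, layer=-1):
    """Average-over-contexts (AOC) vector for `term`: sample up to tau
    sentences containing the term, contextualize each, mean-pool the
    subwords of the term's first occurrence, then average over the
    N_t = min(|S_t|, tau) contextualized vectors."""
    contexts = [s for s in sentences if term in s.split()]
    contexts = random.sample(contexts, min(len(contexts), tau))
    vecs = []
    for s in contexts:
        # Assumes whitespace positions align with the tokenizer's word ids.
        j = s.split().index(term)              # term's first occurrence
        vecs.append(term_embeddings(s, layer)[j])
    return torch.stack(vecs).mean(dim=0) if vecs else None
```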
The obtained static AOC and ISO embeddings, despite being induced with
multilingual encoders, did not appear to be well-aligned across
languages [29, 6]. We evaluated the static ISO and AOC embeddings induced for
different languages with multilingual encoders (mBERT and XLM), on the
bilingual lexicon induction (BLI) task [19]. We observed poor BLI performance,
suggesting that further projection-based alignment of respective monolingual
ISO and AOC spaces is required. To this end, we use the standard
Procrustes method [43, 1] to align the embedding spaces of two languages, with
bilingual dictionaries from [19] as the supervision guiding the alignment.
Concretely, for each language pair in our experiments we project the AOC (ISO)
embeddings of the source language to the AOC (ISO) space of the target
language.
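The Procrustes step solves an orthogonal least-squares problem in closed form via SVD; a minimal sketch, where the rows of `X` and `Y` are the source- and target-language vectors of the dictionary's translation pairs:

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal Procrustes: find W minimizing ||XW - Y||_F subject to
    W^T W = I, where row i of X (source) and Y (target) are embeddings of
    a word-translation pair from the bilingual dictionary."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Usage (illustrative): map all source-language AOC (or ISO) vectors
# into the target-language space with the learned orthogonal map.
# X_src_mapped = X_src_all @ procrustes(X_dict_src, Y_dict_tgt)
```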
Direct Text Embedding with Multilingual Transformers. In both AOC and ISO, we
use the multilingual (contextual) encoders to obtain the static embeddings for
word types (i.e., terms): we can then leverage them in exactly the same ad-hoc
retrieval setup (§3.1) in which CLWEs had previously been evaluated [27]. In
an arguably more straightforward approach, we also use pretrained multilingual
Transformers (i.e., mBERT or XLM) to directly encode the whole input text
(SEMB). We encode the input text by averaging the contextualized
representations of all terms in the text (we again compute the weighted
average, where the terms’ IDF scores are used as weights, see §3.1). For SEMB,
we take the contextualized representation of each term $t_{i}$ to be the
contextualized representation of its first subword token, i.e.,
$\overrightarrow{t_{i}}=\psi\left(\{\overrightarrow{wp_{i,k}}\}_{k=1}^{K}\right)=\overrightarrow{wp_{i,1}}$.
(In our initial experiments, taking the vector of the term’s first subword
consistently outperformed averaging the vectors of all its subwords.)
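A sketch of the SEMB aggregation, assuming the contextualized first-subword vectors and idf weights have already been computed; the special-token scaling described later in §4 is omitted here:

```python
import numpy as np

def semb_embedding(first_subword_vecs, idfs):
    """SEMB text embedding: IDF-weighted average of the contextualized
    first-subword vectors of all terms in the (truncated) input text."""
    w = np.asarray(idfs, dtype=float)             # per-term idf weights
    V = np.asarray(first_subword_vecs)            # shape: (num_terms, dim)
    return (w[:, None] * V).sum(axis=0) / w.sum()
```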
### 3.3 Specialized Multilingual Sentence Encoders
Off-the-shelf pretrained multilingual Transformers such as mBERT and XLM have
been shown to produce poor sentence embeddings yielding sub-par performance in
unsupervised text similarity tasks; therefore, in order to be successful in
semantic text (sentence or paragraph) comparisons, they first need to be
fine-tuned on text matching (typically sentence matching) datasets [39, 6,
57]. Such encoders specialized for semantic similarity are supposed to encode
sentence meaning more accurately, supporting tasks that require unsupervised
(ad-hoc) semantic text matching. In contrast to mBERT and XLM, which
contextualize (sub)word representations, these models directly produce a
semantic embedding of the input text. We provide a brief overview of the
models included in our comparative evaluation.
Language Agnostic SEntence Representations (LASER) [2] adopts a standard
sequence-to-sequence architecture typical for neural machine translation (MT).
It is trained on 223M parallel sentences covering 93 languages. The encoder is
a multi-layered bidirectional LSTM and the decoder is a single-layer
unidirectional LSTM. The 1024-dimensional sentence embedding is produced by
max-pooling over the outputs of the encoder’s last layer. The decoder then takes
the sentence embedding as additional input at each decoding step. The decoder-
to-encoder attention and language identifiers on the encoder side are
deliberately omitted, so that all relevant information gets ‘crammed’ into the
fixed-sized sentence embedding produced by the encoder. In our experiments, we
directly use the output of the encoder to represent both queries and
documents.
Multilingual Universal Sentence Encoder (m-USE) is a general purpose sentence
embedding model for transfer learning and semantic text retrieval tasks [51].
It relies on a standard dual-encoder neural framework [9, 52] with shared
weights, trained in a multi-task setting with an additional translation
bridging task. For more details, we refer the reader to the original work.
There are two pretrained m-USE instances available – we opt for the 3-layer
Transformer encoder with average-pooling.
Language-agnostic BERT Sentence Embeddings (LaBSE) [18] is another neural
dual-encoder framework, also trained with parallel data. Unlike in LASER and
m-USE, where the encoders are trained from scratch on parallel data, LaBSE
training starts from a pretrained mBERT instance (i.e., a 12-layer Transformer
network pretrained on the concatenated corpora of 100+ languages). In addition
to the multi-task training objective of m-USE, LaBSE additionally uses
standard self-supervised objectives used in pretraining of mBERT and XLM:
masked and translation language modelling (MLM and TLM, see §2). For further
model details, we refer the reader to the original work.
DISTIL [39] is a teacher-student framework for injecting the knowledge
obtained through specialization for semantic similarity from a specialized
monolingual transformer (e.g., BERT) into a non-specialized multilingual
transformer (e.g., mBERT). It first specializes for semantic similarity a
monolingual (English) teacher encoder $M$ using the available semantic
sentence-matching datasets for supervision. In the second, knowledge-distillation
step, a pretrained multilingual student encoder $\widehat{M}$ is
trained to mimic the output of the teacher model. For a given batch of
sentence-translation pairs $\mathcal{B}=\\{(s_{j},t_{j})\\}$, the teacher-
student distillation training minimizes the following loss:
$\mathcal{J}(\mathcal{B})=\frac{1}{|\mathcal{B}|}\sum_{j\in\mathcal{B}}\left[\left(M(s_{j})-\widehat{M}(s_{j})\right)^{2}+\left(M(s_{j})-\widehat{M}(t_{j})\right)^{2}\right].$
The teacher model $M$ is Sentence-BERT [38], BERT specialized for embedding
sentence meaning on semantic text similarity [7] and natural language
inference [48] datasets. The teacher network only encodes English sentences
$s_{j}$. The student model $\widehat{M}$ is then trained to produce for both
$s_{j}$ and $t_{j}$ the same representation that $M$ produces for $s_{j}$. We
benchmark different DISTIL models in our CLIR experiments, with the student
$\widehat{M}$ initialized with different multilingual transformers.
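A minimal PyTorch sketch of this distillation objective; `teacher` and `student` are assumed to be callables mapping a list of sentences to a batch of sentence embeddings, and `mse_loss` averages over all tensor elements, which matches the batch-averaged loss above up to a constant factor:

```python
import torch
import torch.nn.functional as F

def distil_loss(teacher, student, src_sents, tgt_sents):
    """Teacher-student distillation over a batch B of translation pairs:
    J(B) = 1/|B| * sum_j [(M(s_j) - M_hat(s_j))^2 + (M(s_j) - M_hat(t_j))^2],
    implemented here via MSE (equal up to a constant scaling factor)."""
    with torch.no_grad():
        t = teacher(src_sents)   # teacher only encodes English sentences
    return F.mse_loss(student(src_sents), t) + F.mse_loss(student(tgt_sents), t)
```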
## 4 Experimental Setup
Evaluation Data. We follow the experimental setup of Litschko et al. [27], and
compare the models from §3 on language pairs comprising five languages:
English (EN), German (DE), Italian (IT), Finnish (FI) and Russian (RU). For
document-level retrieval we run experiments for the following nine language
pairs: EN-{FI, DE, IT, RU}, DE-{FI, IT, RU}, FI-{IT, RU}. We use the 2003
portion of the CLEF benchmark [4]
(http://catalog.elra.info/en-us/repository/browse/ELRA-E0008/), with 60 queries
per language pair. The
document collection sizes are 17K (RU), 55K (FI), 158K (IT), and 295K (DE).
For sentence-level retrieval, also following [27], for each language pair we
sample from Europarl [24] 1K source language sentences as queries and 100K
target language sentences as the “document collection”. (Russian is not
included in Europarl; we therefore exclude it from sentence-level
experiments.) Further, since some multilingual encoders have not seen Finnish
data in pretraining, we additionally report the results over a subset of
language pairs that do not involve Finnish.
Baseline Models. In order to establish whether multilingual encoders
outperform CLWEs in a fair comparison, we compare their performance against
the strongest CLWE-based CLIR model from the recent comparative study [27],
dubbed Proc-B. Proc-B induces a bilingual CLWE space from pretrained
monolingual fastText embeddings
(https://fasttext.cc/docs/en/pretrained-vectors.html) using the linear
projection computed as the solution of the
Procrustes problem given the dictionary of word-translation pairs. Compared to
simple Procrustes mapping, Proc-B iteratively (1) augments the word
translation dictionary by finding mutual nearest neighbours and (2) induces a
new projection matrix using the augmented dictionary. The final bilingual CLWE
space is then plugged into the CLIR model from §3.1.
Our document-level retrieval SEMB models do not get to see the whole document
but only the first $128$ word-piece tokens. For a more direct comparison, we
therefore additionally evaluate the Proc-B baseline (Proc-BLEN), which is
exposed to exactly the same amount of document text as the multilingual XLM
encoder (i.e., the leading document text corresponding to the first $128$
word-piece tokens). Finally, we compare CLIR models based on multilingual
Transformers to a baseline relying on machine translation (MT-IR). In
MT-IR, 1) we translate the query to the document language using Google
Translate and then 2) perform monolingual retrieval using a standard Query
Likelihood Model [35] with Dirichlet smoothing [55].
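For reference, the Query Likelihood Model with Dirichlet smoothing scores a document as $\log\prod_{w\in q}\frac{tf(w,d)+\mu\,p(w|C)}{|d|+\mu}$; a minimal sketch follows, where the value of $\mu$ and the background-probability floor are illustrative defaults, not the paper's reported settings:

```python
import math
from collections import Counter

def qlm_dirichlet(query_terms, doc_terms, coll_prob, mu=1000.0):
    """Log query-likelihood of a document with Dirichlet smoothing:
    p(w|d) = (tf(w,d) + mu * p(w|C)) / (|d| + mu)."""
    tf = Counter(doc_terms)
    dlen = len(doc_terms)
    score = 0.0
    for w in query_terms:
        p_wc = coll_prob.get(w, 1e-9)   # background probability floor (assumption)
        score += math.log((tf[w] + mu * p_wc) / (dlen + mu))
    return score
```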
Model Details. For all multilingual encoders we experiment with different
input sequence lengths: $64$, $128$, $256$ subword tokens. For AOC we collect
(at most) $\tau=60$ contexts for each vocabulary term: for a term not present
at all in Wikipedia, we fall back to the ISO embedding of that term. We also
investigate the impact of $\tau$ in §5.3. For purely self-supervised models
(SEMB, ISO, AOC) we independently evaluate representations from different
Transformer layers (cf. §5.3). For comparability, for ISO and AOC – methods
that effectively induce static word embeddings using multilingual contextual
encoders – we opt for exactly the same term vocabularies used by the Proc-B
baseline, namely the top 100K most frequent terms from respective monolingual
fastText vocabularies. We additionally experiment with three different
instances of the DISTIL model: (i) $\text{DISTIL}_{\text{XLM-R}}$ initializes
the student model with the pretrained XLM-R transformer [11];
(ii) $\text{DISTIL}_{\text{USE}}$ instantiates the student as the pretrained
m-USE instance [51]; whereas (iii) $\text{DISTIL}_{\text{DistilmBERT}}$ distils the
knowledge from the Sentence-BERT teacher into a multilingual version of
DistilBERT [42], a 6-layer transformer pre-distilled from mBERT. (Working
with mBERT directly instead of its distilled version led to similar scores,
while increasing running times.) For SEMB models we scale embeddings of special
tokens (sequence start and end tokens, e.g., [CLS] and [SEP] for mBERT) with
the mean IDF value of input terms.
## 5 Results and Discussion
### 5.1 Document-Level Cross-lingual Retrieval
Table 1: Document-level CLIR results (Mean Average Precision, MAP). Bold: best model for each language pair. *: difference in performance w.r.t. Proc-B significant at $p=0.05$, computed via a paired two-tailed t-test with Bonferroni correction.
Model | EN-FI | EN-IT | EN-RU | EN-DE | DE-FI | DE-IT | DE-RU | FI-IT | FI-RU | AVG | w/o FI
---|---|---|---|---|---|---|---|---|---|---|---
Baselines | | | | | | | | | | |
MT-IR | .276 | .428 | .383 | .263 | .332 | .431 | .238 | .406 | .261 | .335 | .349
Proc-B | .258 | .265 | .166 | .288 | .294 | .230 | .155 | .151 | .136 | .216 | .227
$\text{Proc-B}_{\text{LEN}}$ | .165 | .232 | .176 | .194 | .207 | .186 | .192 | .126 | .154 | .181 | .196
Models based on multilingual Transformers
$\text{SEMB}_{\text{XLM}}$ | .199* | .187* | .183 | .126* | .156* | .166* | .228 | .186* | .139 | .174 | .178
$\text{SEMB}_{\text{mBERT}}$ | .145* | .146* | .167 | .107* | .151* | .116* | .149* | .117 | .128* | .136 | .137
$\text{AOC}_{\text{XLM}}$ | .168 | .261 | .208 | .206* | .183 | .190 | .162 | .123 | .099 | .178 | .206
$\text{AOC}_{\text{mBERT}}$ | .172* | .209* | .167 | .193* | .131* | .143* | .143 | .104 | .132 | .155 | .171
$\text{ISO}_{\text{XLM}}$ | .058* | .159* | .050* | .096* | .026* | .077* | .035* | .050* | .055* | .067 | .083
$\text{ISO}_{\text{mBERT}}$ | .075* | .209 | .096* | .157* | .061* | .107* | .025* | .051* | .014* | .088 | .119
Similarity-specialized sentence encoders (with parallel data supervision) | | |
$\text{DISTIL}_{\text{XLM-R}}$ | .216 | .190* | .179 | .114* | .237 | .181 | .173 | .166 | .138 | .177 | .167
$\text{DISTIL}_{\text{USE}}$ | .141* | .346* | .182 | .258 | .139* | .324* | .179 | .104 | .111 | .198 | .258
$\text{DISTIL}_{\text{DistilmBERT}}$ | .294 | .290* | .313 | .247* | .300 | .267* | .284 | .221* | .302* | .280 | .280
LaBSE | .180* | .175* | .128 | .059* | .178* | .160* | .113* | .126 | .149 | .141 | .127
LASER | .142 | .134* | .076 | .046* | .163* | .140* | .065* | .144 | .107 | .113 | .094
m-USE | .109* | .328* | .214 | .230* | .107* | .294* | .204 | .073 | .090 | .183 | .254
We show the performance (MAP) of multilingual encoders on document-level CLIR
tasks in Table 1. The first main finding is that none of the self-supervised
models (mBERT and XLM in ISO, AOC, and SEMB variants) outperforms the CLWE
baseline Proc-B. However, the full Proc-B baseline has, unlike mBERT and XLM
variants, been exposed to the full content of the documents. A fairer
comparison, against Proc-BLEN, which has also been exposed only to the first
$128$ tokens, reveals that SEMB and AOC variants come reasonably close, albeit
still do not outperform Proc-BLEN. This suggests that the document-level
retrieval could benefit from encoders able to encode longer portions of text,
e.g., [3, 54]. For document-level CLIR, however, these models would first have
to be ported to multilingual setups. While SEMB and AOC variants exhibit
similar performance, ISO variants perform much worse. The direct comparison
between ISO and AOC demonstrates the importance of contextual information, and
the seemingly limited usability of multilingual encoders as word encoders when
no context is available.
Similarity-specialized multilingual encoders, which rely on pretraining with
parallel data, yield mixed results. Three models,
$\text{DISTIL}_{\text{DistilmBERT}}$, $\text{DISTIL}_{\text{USE}}$ and m-USE,
generally outperform the Proc-B baseline. (As expected, m-USE and
$\text{DISTIL}_{\text{USE}}$ perform poorly on language pairs involving
Finnish, as they have not been trained on any Finnish data.) LASER is the only
encoder trained on parallel data that does not beat the Proc-B baseline. We
believe this is because (a) LASER’s recurrent encoder provides text embeddings
of lower quality than Transformer-based encoders of m-USE and DISTIL variants
and (b) it has not been subjected to any self-supervised pretraining like DISTIL
models. Even the best-performing CLIR model based on a multilingual encoder
($\text{DISTIL}_{\text{DistilmBERT}}$) overall falls behind the MT-based
baseline (MT-IR). However, the performance of MT-IR crucially depends on the
quality of MT for the concrete language pair: for language pairs with weaker
MT (e.g., FI-RU, EN-FI, DE-RU), $\text{DISTIL}_{\text{DistilmBERT}}$
can substantially outperform MT-IR (e.g., 9 MAP points for FI-RU and DE-RU);
the gap in favor of MT-IR is, as expected, largest for the most similar language
pairs, for which the most reliable MT systems also exist (EN-IT, EN-DE). In
other words, the feasibility and robustness of a strong MT-IR CLIR model seems
to diminish with more distant language pairs and lower-resource language
pairs. We plan to investigate this conjecture in more detail in future work.
The variation in results with similarity-specialized sentence encoders
indicates that: (a) despite their seemingly similar high-level architectures
typically based on dual-encoder networks [8], it is important to carefully
choose a sentence encoder in document-level retrieval, and (b) there is an
inherent mismatch between the granularity of information encoded by the
current state-of-the-art text representation models and the document-level
CLIR task.
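For reference, the MAP scores and significance tests reported in Tables 1 and 2 can be reproduced along the following lines; a minimal sketch using SciPy, where the per-query relevance rankings are assumed to be given:

```python
import numpy as np
from scipy import stats

def average_precision(ranked_rel):
    """AP for one query; `ranked_rel` is a binary relevance list over the ranking."""
    rel = np.asarray(ranked_rel, dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((prec_at_k * rel).sum() / rel.sum())

def mean_average_precision(rankings):
    return float(np.mean([average_precision(r) for r in rankings]))

def significantly_different(ap_a, ap_b, n_comparisons, alpha=0.05):
    """Paired two-tailed t-test over per-query AP scores, with a
    Bonferroni correction across `n_comparisons` model comparisons."""
    _, p_value = stats.ttest_rel(ap_a, ap_b)
    return p_value < alpha / n_comparisons
```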
### 5.2 Sentence-Level Cross-Lingual Retrieval
Table 2: Sentence-level CLIR results (MAP). Bold: best model for each language pair. *: difference in performance with respect to Proc-B, significant at $p=0.05$, computed via a paired two-tailed t-test with Bonferroni correction.
Model | EN-FI | EN-IT | EN-DE | DE-FI | DE-IT | FI-IT | AVG | w/o FI
---|---|---|---|---|---|---|---|---
Baselines | | | | | | | |
MT-IR | .639 | .783 | .712 | .520 | .676 | .686 | .669 | .723
Proc-B | .143 | .523 | .415 | .162 | .342 | .137 | .287 | .427
Models based on multilingual Transformers
$\text{SEMB}_{\text{XLM}}$ | .309* | .677* | .465 | .391* | .495* | .346* | .447 | .545
$\text{SEMB}_{\text{mBERT}}$ | .199* | .570 | .355 | .231* | .481* | .353* | .365 | .469
$\text{AOC}_{\text{XLM}}$ | .099 | .527 | .274* | .102* | .282 | .070* | .226 | .361
$\text{AOC}_{\text{mBERT}}$ | .095* | .433* | .274* | .088* | .230* | .059* | .197 | .312
$\text{ISO}_{\text{XLM}}$ | .016* | .178* | .053* | .006* | .017* | .002* | .045 | .082
$\text{ISO}_{\text{mBERT}}$ | .010* | .141* | .087* | .005* | .017* | .000* | .043 | .082
Similarity-specialized sentence encoders (with parallel data supervision)
$\text{DISTIL}_{\text{XLM-R}}$ | .935* | .944* | .943* | .911* | .919* | .914* | .928 | .935
$\text{DISTIL}_{\text{USE}}$ | .084* | .960* | .952* | .137 | .920* | .072* | .521 | .944
$\text{DISTIL}_{\text{DistilmBERT}}$ | .817* | .902* | .902* | .810* | .842* | .793* | .844 | .882
LaBSE | .971* | .972* | .964* | .948* | .954* | .951* | .960 | .963
LASER | .974* | .976* | .969* | .967* | .965* | .961* | .969 | .970
m-USE | .079* | .951* | .929* | .086* | .886* | .039* | .495 | .922
We show the sentence-level CLIR performance in Table 2. Unlike in the
document-level CLIR task, self-supervised SEMB variants here manage to
outperform Proc-B. The stronger relative performance of SEMB compared to
document-level retrieval is somewhat expected: sentences are much shorter than documents
(i.e., typically shorter than the maximal sequence length of $128$ word
pieces). All purely self-supervised mBERT and XLM variants, however, perform
worse than the translation-based baseline.
Multilingual encoders specialized with parallel data excel in sentence-level
CLIR, all of them substantially outperforming the competitive MT-IR baseline.
This, however, does not come as much of a surprise, since these models (a) have
been trained using parallel data, and (b) have been optimized exactly on the
sentence similarity task. In other words, in the context of the cross-lingual
sentence-level task, these models are effectively supervised models. The
effect of supervision is most strongly pronounced for LASER, which was, by
being also trained on parallel data from Europarl, effectively subjected to
in-domain training. We note that at the same time LASER was the weakest model
from this group on average in the document-level CLIR task.
### 5.3 Further Analysis
We further investigate three aspects that may impact CLIR performance of
multilingual encoders: (1) layer(s) from which we take vector representations,
(2) number of contexts used in AOC variants, and (3) sequence length in
document-level CLIR.
Layer Selection. All multilingual encoders have multiple layers and one may
select (sub)word representations for CLIR at the output of any of them. Figure
1 shows the impact of taking subword representations after each layer for
self-supervised mBERT and XLM variants. We find that the optimal layer differs
across the encoding strategies (AOC, ISO, and SEMB) and tasks (document-level
vs. sentence-level CLIR). ISO, where we feed the terms into encoders without
any context, seems to do best if we take the representations from the lowest
layers. This makes intuitive sense, as the parameters of higher Transformer
layers encode compositional rather than lexical semantics [17, 40]. For AOC
and SEMB, where both models obtain representations by contextualizing
(sub)words in a sentence, we get the best performance for higher layers – the
optimal layers for document-level retrieval (L9/L12 for mBERT, and L15 for
XLM) seem to be higher than for sentence-level retrieval (L9 for mBERT and
L12/L11 for XLM).
Figure 1: CLIR performance of mBERT and XLM as a function of the Transformer
layer from which we obtain the representations. Results (averaged over all
language pairs) shown for all three encoding strategies (SEMB, AOC, ISO).
Number of Contexts in AOC. We construct AOC term embeddings by averaging
contextualized representations of the same term obtained from different
Wikipedia contexts. This raises an obvious question of a sufficient number of
contexts needed for a reliable (static) term embedding. Figure 2 shows the AOC
results depending on the number of contexts used to induce the term vectors
(cf. $\tau$ in §3). The AOC performance seems to plateau rather early – at
around 30 and 40 contexts for mBERT and XLM, respectively. Encoding more than
60 contexts (as we do in our main experiments) would therefore bring only
negligible performance gains.
Figure 2: CLIR performance of AOC variants (mBERT and XLM) w.r.t. the number
of contexts used to obtain the term embeddings.
Input Sequence Length. Multilingual encoders have a limited input length and
they, unlike CLIR models operating on static embeddings (Proc-B, as well as
our AOC and ISO variants), effectively truncate long documents. In our main
experiments we truncated the documents to the first $128$ word pieces. Now we
quantify (Table 3) whether and to what extent this has a detrimental effect on
document-level CLIR performance. Somewhat counterintuitively, encoding a
longer chunk of documents ($256$ word pieces) yields a minor performance
deterioration (compared to the length of $128$) for all multilingual encoders.
We suspect that this is a combination of two effects: (1) it is more difficult
to semantically accurately encode a longer portion of text, leading to
semantically less precise embeddings of $256$-token sequences; and (2) for
documents in which the query-relevant content is not within the first $128$
tokens, that content might often also appear beyond the first $256$ tokens,
rendering the increase in input length inconsequential to the recognition of
such documents as relevant.
Table 3: Document-level CLIR results w.r.t. the input text length. Scores averaged over all language pairs not involving Finnish.
Length | $\text{SEMB}_{\text{mBERT}}$ | $\text{SEMB}_{\text{XLM}}$ | $\text{DIST}_{\text{use}}$ | $\text{DIST}_{\text{XLM-R}}$ | $\text{DIST}_{\text{DmBERT}}$ | mUSE | LaBSE | LASER
---|---|---|---|---|---|---|---|---
64 | .104 | .128 | .235 | .167 | .237 | .254 | .127 | .089
128 | .137 | .178 | .258 | .162 | .280 | .247 | .125 | .068
256 | .117 | .158 | .230 | .146 | .250 | .197 | .096 | .027
## 6 Conclusion
Pretrained multilingual (mostly Transformer-based) encoders have been shown to
be widely useful in natural language understanding (NLU) tasks, when fine-
tuned in supervised settings with some task-specific data; their utility as
general-purpose text encoders in unsupervised multilingual settings, such as
ad-hoc cross-lingual IR, has been much less investigated. In this work, we
systematically validated the suitability of a wide spectrum of cutting-edge
multilingual encoders for document- and sentence-level CLIR across several
language pairs. Our study encompassed purely self-supervised multilingual
encoders, mBERT and XLM, as well as the multilingual encoders that have been
specialized for semantic text matching on semantic similarity datasets and
parallel data. In contrast to the main findings from supervised NLU tasks, we have
demonstrated that self-supervised multilingual encoders (mBERT and XLM),
without exposure to any further supervision, in most settings fail to
outperform CLIR models based on cross-lingual word embeddings (CLWEs).
Semantically-specialized multilingual sentence encoders, on the other hand, do
outperform CLWEs, but the gains are pronounced only in the sentence retrieval
task. While state-of-the-art multilingual text encoders excel in so many
seemingly more complex language understanding tasks, our work renders ad-hoc
CLIR in general and document-level CLIR in particular a serious challenge for
these models. We make our code and resources available at
https://github.com/rlitschk/EncoderCLIR.
## References
* [1] Artetxe, M., Labaka, G., Agirre, E.: A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In: Proceedings of ACL. pp. 789–798 (2018)
* [2] Artetxe, M., Schwenk, H.: Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the ACL pp. 597–610 (2019)
* [3] Beltagy, I., Peters, M.E., Cohan, A.: Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150 (2020)
* [4] Braschler, M.: CLEF 2003–Overview of results. In: Workshop of the Cross-Language Evaluation Forum for European Languages. pp. 44–63 (2003)
* [5] Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are few-shot learners. In: Proceedings of NeurIPS (2020)
* [6] Cao, S., Kitaev, N., Klein, D.: Multilingual alignment of contextual word representations. In: Proceedings of ICLR (2020)
* [7] Cer, D., Diab, M., Agirre, E., Lopez-Gazpio, I., Specia, L.: SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In: Proceedings of SemEval. pp. 1–14 (2017)
* [8] Cer, D., Yang, Y., Kong, S.y., Hua, N., Limtiaco, N., St. John, R., Constant, N., Guajardo-Cespedes, M., Yuan, S., Tar, C., Strope, B., Kurzweil, R.: Universal sentence encoder for English. In: Proceedings of EMNLP. pp. 169–174 (2018)
* [9] Chidambaram, M., Yang, Y., Cer, D., Yuan, S., Sung, Y., Strope, B., Kurzweil, R.: Learning cross-lingual sentence representations via a multi-task dual-encoder model. In: Proceedings of the ACL Workshop on Representation Learning for NLP. pp. 250–259 (2019)
* [10] Clark, K., Luong, M., Le, Q.V., Manning, C.D.: ELECTRA: Pre-training text encoders as discriminators rather than generators. In: Proceedings of ICLR (2020)
* [11] Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzmán, F., Grave, E., Ott, M., Zettlemoyer, L., Stoyanov, V.: Unsupervised cross-lingual representation learning at scale. In: Proceedings of ACL. pp. 8440–8451 (2020)
* [12] Conneau, A., Kiela, D.: SentEval: An evaluation toolkit for universal sentence representations. In: Proceedings of LREC. pp. 1699–1704 (2018)
* [13] Conneau, A., Kiela, D., Schwenk, H., Barrault, L., Bordes, A.: Supervised learning of universal sentence representations from natural language inference data. In: Proceedings of EMNLP. pp. 670–680 (2017)
* [14] Conneau, A., Lample, G.: Cross-lingual language model pretraining. In: Proceedings of NeurIPS, pp. 7059–7069 (2019)
* [15] Conneau, A., Rinott, R., Lample, G., Williams, A., Bowman, S., Schwenk, H., Stoyanov, V.: XNLI: Evaluating cross-lingual sentence representations. In: Proceedings of EMNLP. pp. 2475–2485 (2018)
* [16] Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. In: Proceedings of NAACL. pp. 4171–4186 (2019)
* [17] Ethayarajh, K.: How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In: Proceedings of EMNLP-IJCNLP. pp. 55–65 (2019)
* [18] Feng, F., Yang, Y., Cer, D., Arivazhagan, N., Wang, W.: Language-agnostic BERT sentence embedding. arXiv preprint arXiv:2007.01852 (2020)
* [19] Glavaš, G., Litschko, R., Ruder, S., Vulić, I.: How to (properly) evaluate cross-lingual word embeddings: On strong baselines, comparative analyses, and some misconceptions. In: Proceedings of ACL. pp. 710–721 (2019)
* [20] Guo, M., Shen, Q., Yang, Y., Ge, H., Cer, D., Hernandez Abrego, G., Stevens, K., Constant, N., Sung, Y.H., Strope, B., Kurzweil, R.: Effective parallel corpus mining using bilingual sentence embeddings. In: Proceedings of WMT. pp. 165–176 (2018)
* [21] Hoogeveen, D., Verspoor, K.M., Baldwin, T.: CQADupStack: A benchmark data set for community question-answering research. In: Proceedings of ADCS. pp. 3:1–3:8 (2015)
* [22] Jiang, Z., El-Jaroudi, A., Hartmann, W., Karakos, D., Zhao, L.: Cross-lingual information retrieval with BERT. In: Proceedings of LREC. p. 26 (2020)
* [23] Karthikeyan, K., Wang, Z., Mayhew, S., Roth, D.: Cross-lingual ability of multilingual BERT: An empirical study. In: Proceedings of ICLR (2020)
* [24] Koehn, P.: Europarl: A parallel corpus for statistical machine translation. In: Proceedings of the 10th Machine Translation Summit (MT SUMMIT). pp. 79–86 (2005)
* [25] Lei, T., Joshi, H., Barzilay, R., Jaakkola, T., Tymoshenko, K., Moschitti, A., Màrquez, L.: Semi-supervised question retrieval with gated convolutions. In: Proceedings of NAACL. pp. 1279–1289 (2016)
* [26] Liang, Y., Duan, N., Gong, Y., Wu, N., Guo, F., Qi, W., Gong, M., Shou, L., Jiang, D., Cao, G., et al.: XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In: Proceedings of EMNLP (2020)
* [27] Litschko, R., Glavaš, G., Vulić, I., Dietz, L.: Evaluating resource-lean cross-lingual embedding models in unsupervised retrieval. In: Proceedings of SIGIR. pp. 1109–1112 (2019)
* [28] Liu, Q., Kusner, M.J., Blunsom, P.: A survey on contextual embeddings. arXiv preprint arXiv:2003.07278 (2020)
* [29] Liu, Q., McCarthy, D., Vulić, I., Korhonen, A.: Investigating cross-lingual alignment methods for contextualized embeddings with token-level evaluation. In: Proceedings of CoNLL. pp. 33–43 (2019)
* [30] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., Stoyanov, V.: RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019)
* [31] MacAvaney, S., Soldaini, L., Goharian, N.: Teaching a new dog old tricks: Resurrecting multilingual retrieval using zero-shot learning. In: Proceedings of ECIR. pp. 246–254 (2020)
* [32] MacAvaney, S., Yates, A., Cohan, A., Goharian, N.: Cedr: Contextualized embeddings for document ranking. In: Proceedings of SIGIR. pp. 1101–1104 (2019)
* [33] Nogueira, R., Yang, W., Cho, K., Lin, J.: Multi-stage document ranking with BERT. arXiv preprint arXiv:1910.14424 (2019)
* [34] Pires, T., Schlinger, E., Garrette, D.: How multilingual is multilingual BERT? In: Proceedings of ACL. pp. 4996–5001 (2019)
* [35] Ponte, J.M., Croft, W.B.: A language modeling approach to information retrieval. In: Proceedings of SIGIR. pp. 275–281 (1998)
* [36] Ponti, E.M., Glavaš, G., Majewska, O., Liu, Q., Vulić, I., Korhonen, A.: XCOPA: A multilingual dataset for causal commonsense reasoning. In: Proceedings of EMNLP (2020)
* [37] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
* [38] Reimers, N., Gurevych, I.: Sentence-BERT: Sentence embeddings using siamese BERT-networks. In: Proceedings of EMNLP. pp. 3973–3983 (2019)
* [39] Reimers, N., Gurevych, I.: Making monolingual sentence embeddings multilingual using knowledge distillation. In: Proceedings of EMNLP (2020)
* [40] Rogers, A., Kovaleva, O., Rumshisky, A.: A primer in BERTology: What we know about how BERT works. Transactions of the ACL (2020)
* [41] Ruder, S., Vulić, I., Søgaard, A.: A survey of cross-lingual word embedding models. Journal of Artificial Intelligence Research 65, 569–631 (2019)
* [42] Sanh, V., Debut, L., Chaumond, J., Wolf, T.: DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108 (2019)
* [43] Smith, S.L., Turban, D.H., Hamblin, S., Hammerla, N.Y.: Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In: Proceedings of ICLR (2017)
* [44] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Proceedings of NeurIPS. pp. 5998–6008 (2017)
* [45] Vulić, I., Glavas, G., Reichart, R., Korhonen, A.: Do we really need fully unsupervised cross-lingual embeddings? In: Proceedings of EMNLP. pp. 4406–4417 (2019)
* [46] Vulić, I., Moens, M.F.: Monolingual and cross-lingual information retrieval models based on (bilingual) word embeddings. In: Proceedings of SIGIR. pp. 363–372 (2015)
* [47] Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., Bowman, S.R.: GLUE: A multi-task benchmark and analysis platform for natural language understanding. In: Proceedings of ICLR (2019)
* [48] Williams, A., Nangia, N., Bowman, S.: A broad-coverage challenge corpus for sentence understanding through inference. In: Proceedings of NAACL. pp. 1112–1122 (2018)
* [49] Wu, S., Dredze, M.: Beto, bentz, becas: The surprising cross-lingual effectiveness of bert. In: Proceedings of EMNLP. pp. 833–844 (2019)
* [50] Yang, Y., Abrego, G.H., Yuan, S., Guo, M., Shen, Q., Cer, D., Sung, Y.H., Strope, B., Kurzweil, R.: Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. In: Proceedings of AAAI. pp. 5370–5378 (2019)
* [51] Yang, Y., Cer, D., Ahmad, A., Guo, M., Law, J., Constant, N., Abrego, G.H., Yuan, S., Tar, C., Sung, Y.h., Strope, B., Kurzweil, R.: Multilingual universal sentence encoder for semantic retrieval. In: Proceedings of ACL: System Demonstrations. pp. 87–94 (2020)
* [52] Yang, Y., Hernandez Abrego, G., Yuan, S., Guo, M., Shen, Q., Cer, D., Sung, Y.h., Strope, B., Kurzweil, R.: Improving multilingual sentence embedding using bi-directional dual encoder with additive margin softmax. In: Proceedings of IJCAI. pp. 5370–5378 (2019)
* [53] Yu, P., Allan, J.: A study of neural matching models for cross-lingual IR. In: Proceedings of SIGIR. p. 1637–1640 (2020)
* [54] Zaheer, M., Guruganesh, G., Dubey, A., Ainslie, J., Alberti, C., Ontanon, S., Pham, P., Ravula, A., Wang, Q., Yang, L., et al.: Big Bird: transformers for longer sequences. arXiv preprint arXiv:2007.14062 (2020)
* [55] Zhai, C., Lafferty, J.: A study of smoothing methods for language models applied to information retrieval. ACM Transactions on Information Systems (TOIS) 22(2), 179–214 (2004)
* [56] Zhao, W., Eger, S., Bjerva, J., Augenstein, I.: Inducing language-agnostic multilingual representations. arXiv preprint arXiv:2008.09112 (2020)
* [57] Zhao, W., Glavaš, G., Peyrard, M., Gao, Y., West, R., Eger, S.: On the limitations of cross-lingual encoders as exposed by reference-free machine translation evaluation. In: Proceedings of ACL. pp. 1656–1671 (2020)
* [58] Ziemski, M., Junczys-Dowmunt, M., Pouliquen, B.: The United Nations parallel corpus v1.0. In: Proceedings of LREC. pp. 3530–3534 (2016)
* [59] Zweigenbaum, P., Sharoff, S., Rapp, R.: Overview of the third BUCC shared task: Spotting parallel sentences in comparable corpora. In: Proceedings of LREC (2018)
# Personalised Recommendations in Mental Health Apps: The Impact of Autonomy
and Data Sharing
Svenja Pieritz Telefonica Alpha, Spain<EMAIL_ADDRESS>, Mohammed
Khwaja Imperial College London, UK<EMAIL_ADDRESS>, A.
Aldo Faisal Imperial College London, UK<EMAIL_ADDRESS>and
Aleksandar Matic Koa Health, Spain<EMAIL_ADDRESS>
(2021)
###### Abstract.
The recent growth of digital interventions for mental well-being prompts a
call-to-arms to explore the delivery of personalised recommendations from a
user’s perspective. In a randomised placebo study with a two-way factorial
design, we analysed the difference between an autonomous user experience as
opposed to personalised guidance, with respect to both users’ preference and
their actual usage of a mental well-being app. Furthermore, we explored users’
preference in sharing their data for receiving personalised recommendations,
by juxtaposing questionnaires and mobile sensor data. Interestingly, self-
reported results indicate the preference for personalised guidance, whereas
behavioural data suggests that a blend of autonomous choice and recommended
activities results in higher engagement. Additionally, although users reported
a strong preference for filling out questionnaires instead of sharing their
mobile data, the data source did not have any impact on the actual app use. We
discuss the implications of our findings and provide takeaways for designers
of mental well-being applications.
User Perception; Personalisation; Recommender Systems; Personality Traits
## 1\. Introduction
Digital mental well-being interventions present the promise to mitigate the
global shortage of mental healthcare professionals in a cost-effective and
scalable manner (Tal and Torous, 2017). Their emergence has been accelerated
by the experiences of the COVID-19 pandemic (Organization et al., 2020) and
its impact on mental health (Pfefferbaum and North, 2020). The growth of the
digital mental health space has been paralleled by the rapid increase in
research and development of new interventions and content. Benefits of rich
content are indisputable, yet a vast amount of choices can also misfire—which
is known as a paradox of choice (Schwartz, 2004). For this reason, we are
witnessing the advent of recommender systems also in digital mental health
platforms (Khwaja et al., 2019a). Although personalised recommendations
represent an important aid with respect to choice overload, and can moreover
improve intervention effectiveness, delivering those recommendations
entails two main challenges. Firstly, how to balance autonomy and personalised
guidance has become an important topic in the design of personalised
technologies. Secondly, data sharing concerns are undetachable from the
automatic personalisation models. Both challenges have a very specific
relevance when it comes to digital health applications (Burr et al., 2020).
While users’ autonomy is one of the common principles in designing digital
experiences (Peters et al., 2018), patients in traditional doctor-patient
settings typically expect (and often prefer) to “be told what to do” rather
than to “do what they want”. This raises an ethical tension between ensuring
the safety of patients and respecting their right to autonomy (Burr et al.,
2020). In addition, data privacy, like autonomy, is another central theme in
personalised technologies—especially for digital services that rely on
behavioural signals and sensitive mental health data to personalise
interventions. There are a myriad of associated challenges including
unintended data leakages, lack of users’ technical literacy, the need of
finding an appropriate balance between using less privacy-invasive monitoring
and providing more tailored interventions to improve health outcomes, and so
on.
We empirically investigate the multifaceted challenges of autonomy and data
sharing in mental health applications from the point-of-view of users. The
importance of understanding the user’s perspective stems from the fact that
user disengagement represents one of the key challenges towards an improved
effectiveness of digital mental health interventions (Makin et al., 2017;
Chikersal et al., 2020; Karapanos, 2015; Eysenbach, 2005). Similar to
pharmacological therapies, no matter how personalised and efficient a digital
intervention is, a user’s adherence is a pre-requisite to receive the desired
benefits. As the content in mental health applications is growing, we are
likely at the dawn of expansion of personalised recommender systems (Khwaja et
al., 2019a). Therefore, the question on how to design the user experience of
delivering personalised recommendations deserves an important place in Human
Computer Interaction (HCI) research. Our objective is to inform digital user
experience designers on how to best promote users’ engagement when providing
diverse digital mental health content. To this end, we explore users’ declared
preference as well as their actual app usage with respect to: 1) a primarily
autonomous versus a primarily guided user experience, and 2) the data to be shared in
order to receive recommendations. Specifically, we address the following
research questions:
* •
Do users prefer an autonomous or guided experience in a mental health app?
* •
Does receiving an autonomous versus guided experience impact the actual app
use?
* •
To power a recommendation system, do users prefer to share smartphone data or
to self-report their personality traits?
* •
Does sharing smartphone data as opposed to self-reporting personality traits
influence the actual app use?
We used a commercially available mental health application that includes more
than 100 activities (i.e. interventions) and delivered it to $N=218$
participants. In a two-factor factorial design experiment, we randomly
assigned half of the participants to a guided user experience and the other
half to an autonomous selection of mental well-being app content.
Independently, we assigned half of the participants to a self-reported way of
capturing a user model and half to a consent form for sharing smartphone
data, which could be used to infer the same user model. We used the Big Five
personality traits (Donnellan et al., 2006; Goldberg et al., 2006) as a user
model, as personality traits have been widely used to personalise digital health
solutions (Halko and Kientz, 2010) and can be inferred passively with
smartphone sensing data (Chittaranjan et al., 2011; Wang et al., 2018; Khwaja
et al., 2019b). The participants were primed that they would receive
personalised recommendations based on the data that they agreed to
share. However, in reality, the recommendations were random. We opted for a
random placebo experimental design, based on priming, to reduce the dependency
on recommendation system accuracy, which may not be uniform across users and
would thus represent a confounding factor. Having four randomised groups
allowed us to delve into the relative differences in the actual app usage and
users’ declared preferences as a function of the two factors—autonomy and data
sharing.
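A minimal sketch of the 2x2 random assignment underlying this factorial design; all names are illustrative and the seed is arbitrary:

```python
import random

def assign_conditions(participant_ids, seed=7):
    """Randomly assign each participant to one cell of the 2x2 design:
    (guided vs. autonomous experience) x (questionnaire vs. smartphone data)."""
    cells = [("guided", "questionnaire"), ("guided", "smartphone"),
             ("autonomous", "questionnaire"), ("autonomous", "smartphone")]
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)
    # Round-robin over the shuffled list yields four balanced groups.
    return {pid: cells[i % len(cells)] for i, pid in enumerate(ids)}
```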
The choices put forth to mental health intervention designers are not trivial,
especially in light of ethical tensions related to paternalistic design
choices (Floridi, 2016) or the possible risks arising from increasingly
sensitive data streams. Yet, both design choices are important to tackle in
order to unlock the value of personalised technology (Floridi et al., 2018).
This paper deepens the understanding of users’ preferences and their actual app
usage as a consequence of the app design choices, and contributes to the
related debates in the HCI community and beyond.
## 2\. Background and Related Work
Blom and Monk defined personalisation as “a process that increases personal
relevance” (Blom, 2000). Personalisation has gained significant attention in
digital services, since providing targeted user experience has been shown to
increase acceptance (Jorstad et al., 2005). Particularly in health
applications, personalisation was shown to increase not only engagement but
also effectiveness and ultimately well-being. Noar et al. (Noar et al., 2007)
conducted a meta-analysis of 57 studies that used tailored messages to deliver
health behaviour change interventions (for smoking cessation, diet, cancer
screening, etc.), and concluded that personalised health interventions are
more effective than generic ones. Zanker et al. (Zanker et al., 2019) argued
that personalisation can impact a range of outcomes including user engagement,
app behaviours, and adoption rates. Recent studies have also found that personalisation of digital health apps can significantly improve health outcomes (Madeira et al., 2018; Chawla and Davis, 2013). However, the manner in which personalisation is delivered to users, and how they perceive it, can be even more important than the extent to which a service is actually personalised (Li, 2016). Our work builds on the previous literature by further
exploring the topic of delivering personalised recommendations in digital
mental health from the users’ perspective. We explored both users’ preferences and how their engagement with the app is impacted by a) different ways
of providing personalised recommendation—by giving users more or less autonomy
in choosing the app content, and b) different ways of sharing the data
required for delivering personalisation. Our study highlights the importance
of autonomy and data privacy in the design of digital mental health services
and provides key takeaways for user experience design.
### 2.1. Autonomy
Autonomy has been an important focus in HCI, and specifically in persuasive
technologies. Rughiniş et al. (Rughiniş et al., 2015) distinguished five dimensions of autonomy in the context of health and well-being apps, including:
(1) degree of control that the user has; (2) degree of functional
personalisation; (3) degree of truthfulness and reliability of the information
in the app; (4) users’ understanding of the goal-pursuit and (5) promotion of
moral values by what the app recommends. Embedding autonomy in the design of
digital services impacts not only motivation and user experience but also
psychological well-being. For this reason, Peters et al. (Peters et al., 2018)
included autonomy as one of the three key principles in “designing for well-
being” (in addition to competence and relatedness), using Self Determination
Theory (Ryan and Deci, 2000) as the basis for their approach. For instance,
game designers have long explored the concept of autonomy and showed that the
perceived autonomy in video games contributes to game enjoyment and also
short-term well-being (Ryan et al., 2006). While autonomy leads to improved
well-being and engagement (in addition to being ethically recommended
(Pullman, 1999)), providing a range of choices may act as a demotivating
factor (Schwartz, 2004). Besides, providing more guidance with tailored
interventions can lead to improved effectiveness of the intervention. Hence,
designers of personalised applications face conflicting requirements.
In this study, we set out to explore how the degree of autonomy impacts the users’
subjective preference, as well as their engagement with a mental health
application.
### 2.2. Data Privacy
Data privacy and related topics—including but not limited to transparency,
control and data governance—have been extensively discussed over the past
decade due to rapid technological expansion. These topics gained even more
prominence after the introduction of the EU’s General Data Protection
Regulation (GDPR) (Voigt and Von dem Bussche, 2017). The HCI community has promptly focused its efforts on understanding how these topics may impact interaction with digital services. Providing personalised recommendations typically relies on sensitive information streams, and past studies indicate that users’ attitudes towards sharing potentially sensitive data are very conservative (Jamal et al., 2013). For mobile health apps
specifically, Peng et al. (Peng et al., 2016) conducted six focus groups and
five individual interviews with 44 participants to compare what users value
the most in these kinds of apps. While participants valued the benefits of
personalisation, the authors found that they were strongly hesitant to share
personal information for receiving these benefits. In another study, HCI
researchers conducted a “Wizard of Oz” study to investigate whether the benefits of receiving highly personalised services—ads in particular—offset concerns related to sharing personal data (Matic et al., 2017). Interestingly, the study showed that participants’ concerns were less pronounced when an actual benefit of sharing the data was clearly visible. However, the users’ concerns about how the system inferred the user model (specifically, their personality) remained prominent in semi-structured interviews. On a
related topic, a recent study (Kim et al., 2020) explored how users perceived
automatic personality detection using a mixed-methods approach. They conducted
a survey (with 89 participants) to understand which data streams users were
willing to share, and afterwards developed a machine learning model (with the
preferred data from 32 participants) to predict personality traits.
Subsequently, they interviewed 9 participants to understand how users
perceived the personality prediction model after seeing the prediction
results. They observed that participants’ opinions on data sharing were mixed
and suggested that transparency can help in addressing users’ concerns such as
trust and privacy.
In our randomised placebo study, we primed participants to believe that the selection of recommended activities in a mental health app was personalised based on their personal data. The goal was to explore whether the benefits of having a personalised experience would outweigh their concerns about sharing the data. The success of the placebo manipulation was evaluated and confirmed by including a control group in the experiment. We additionally contributed to the existing literature by comparing the actual app engagement and the users’ preferences towards data sharing.
Data privacy and autonomy were emphasised as key topics in the ethics of
digital well-being (Floridi et al., 2018). To the best of our knowledge, our work is the first to thoroughly explore how these two elements impact
users’ actual app usage and self-declared preferences in a digital mental
health app.
## 3\. Methods
To understand users’ preferences and the usage of a mobile mental health app
in the context of delivering recommendations, we used
Foundations (https://foundations.koahealth.com/). Foundations was a suitable
platform for our study as it contains a large library with numerous
intervention activities. In this section, we detail the methodology applied in
this experiment.
### 3.1. Mental Health App
Figure 1. (a) Open library with all activities. Some activities are locked and
dependent on the completion of others. (b) Recommended activities at the
bottom of the home screen. All recommended activities shown to users are randomly selected.
Foundations is an evidence-based digital mental health platform designed to
improve users’ resilience and decrease their stress levels. At the time of
this study, the version of the app incorporated 10 modules with 102
intervention activities in total. Each activity has a specific format—such as
simple blog posts, relaxation audios, interactive journaling and games—to help
users relax, sleep better, boost their self-confidence, think positively, and
similar. The app provides an open library with some activities locked in the
beginning (Figure 1 (a)). Upon completion of each activity, users are asked to
rate their experience using a thumbs up or thumbs down icon. The home screen
contains a section called “Other activities for you” that shows a recommendation of two activities at a time (Figure 1 (b)). In our study, these recommendations were random, i.e. not personalised (although presented as such), which guaranteed that all users received the same experience. Actual automatic recommendations may have worked better for specific groups of users, which would have biased the results of our study.
### 3.2. Study Design
To determine how the way of data sharing and the autonomy of the user
experience impact both users’ preferences and the actual usage of a mental
well-being app, we designed a study consisting of three parts: (1) Onboarding
questionnaire, followed by (2) the app usage for seven days with daily
reminders, and finally (3) an exit questionnaire (Figure 2). As the goal was
to investigate the effect of the two variables, “autonomy” and “preferred way of data sharing”, we designed a two-factor factorial experiment. A two-factor
factorial design is an experimental design in which data is collected for all
possible combinations of the levels of the two factors of interest (Mukerjee
and Wu, 2007). In our case, each factor has two levels. For the preferred way
of data sharing, the two levels are (1) selecting mobile sensing data and (2)
completing a questionnaire, for building a personalisation model. For the
former level, half of the users (randomly selected) were asked to select
smartphone data streams that can be used to automatically infer their
personality. The other half of the users received the 20-item personality
questionnaire (Donnellan et al., 2006) to complete. We defined two different
user experiences that we refer to as “the degree of autonomy”, namely (1)
receiving a primarily guided user experience with the option to choose other
activities out of an open library, and (2) receiving a primarily autonomous
user experience with the option to use recommended activities on the home
screen.
Overall, this led to 4 experimental groups that can be combined according to
the variables they have in common. The combination of two groups along one
identical variable is referred to as a cluster. For example, the two groups
that receive an autonomous user experience—but differ in the way of data
sharing—are combined and referred to as the autonomous cluster. This design
allows for one group per combination of the two variables, which enables an analysis of all conditions separately, as well as combined. For an effect size (Cohen’s d) of 1, statistical power of 95% and a significance level of 0.05, the estimated sample size needed to detect a meaningful statistical difference with the Mann-Whitney test is 30. Thus, we set the criterion of having at least 30 samples in each group.
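This estimate can be reproduced approximately in code. The sketch below is only an approximation under a normal shift model (the study itself used the MW_POWER function in Excel, see Section 3.5): it solves the two-sample t-test power equation and inflates the result by the Mann-Whitney test’s asymptotic relative efficiency of about 0.955.

```python
# Approximate per-group sample size for a Mann-Whitney U test: solve the
# two-sample t-test power equation, then inflate by the test's asymptotic
# relative efficiency (ARE ~ 0.955, an assumption of the normal shift model).
import math
from statsmodels.stats.power import TTestIndPower

ARE = 0.955

n_t = TTestIndPower().solve_power(effect_size=1.0,  # Cohen's d
                                  alpha=0.05,
                                  power=0.95,
                                  alternative='two-sided')
n_mw = math.ceil(n_t / ARE)
print(f"t-test n per group: {n_t:.1f}; Mann-Whitney n per group: {n_mw}")
# With d = 1, alpha = .05 and power = .95 this lands at roughly 28-29 per
# group, in line with the threshold of 30 used here.
```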
Figure 2. Experimental Design
The groups differed in the onboarding questionnaire and in daily reminders
during the app usage. The primary purpose of the onboarding questionnaire was to give the user the impression that the collected data would be the basis for receiving personalised activity recommendations in the app. However, this questionnaire was solely used for priming, and no actual personalisation was occurring in the app. All the activities participants found in the recommendation section of the app were randomly selected, as described in Section 3.1. The onboarding questionnaire consisted of the data-sharing step (smartphone modalities or personality questionnaire) and directions on app usage (autonomous or guided).
Upon completion of the questionnaire, all participants received instructions
on how to install Foundations and were asked to complete at least one activity
a day for one week. Daily reminders were sent according to the degree of
autonomy. These reminders consisted of either a daily recommended activity for
participants in the guided cluster, or a general reminder to use the app for
those in the autonomous cluster. The daily recommended activities were
selected from the most popular activities in the app’s library. After seven
days, all users completed the exit questionnaire—which was identical for all
groups.
Since we did not use the users’ data to personalise recommendations in the
app, we included an additional control group to verify whether or not the
priming was successful. The control group filled out a control questionnaire
to match the workload of the other groups, but this group did not receive any priming about personalisation.
In summary, this design resulted in having five groups:
* •
Questionnaire-Guided (QG): Personality questionnaire + daily email with
activity recommendation + priming that the email recommendations are based on
the reported personality
* •
Data-Guided (DG): Data modality selection + daily email with activity
recommendation + priming that the email recommendations are based on the
automatically inferred personality
* •
Questionnaire-Autonomous (QA): Personality questionnaire + daily email as a
general reminder to complete one activity + priming that recommendations on
the home screen are based on the reported personality
* •
Data-Autonomous (DA): Data modality selection + daily email as a general
reminder to complete one activity + priming that recommendations on the home
screen are based on the automatically inferred personality
* •
Control (C): Control questionnaire + daily email as a general reminder to
complete one activity
Our study was approved by the internal ethics board. As the whole set of
intervention activities in Foundations had recently been evaluated in a Randomised Controlled Trial (Catuara-Solarz et al., 2021) and demonstrated an overall improvement in users’ well-being, no harm was expected to be introduced by a deception study that recommends the most popular activities to users.
### 3.3. Data Collection
The onboarding and exit questionnaires were created using the Typeform
(https://www.typeform.com/) survey collection tool. We designed five
variations of the onboarding questionnaire for each of the five groups defined
in Section 3.2. In each of these questionnaires, participants were presented
with a consent form explaining details on the data collection and purpose of
the study—in compliance with the EU General Data Protection Regulation (GDPR).
For users in the questionnaire cluster, a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree) was used for the personality questionnaire.
Users in the data cluster were provided with 10 different smartphone sensing
data categories and asked to select at least 4 that could be sampled from
their smartphones. The rationale for introducing the data choice was to
resemble the choice that users have in real-world applications. Android and
iOS give users the possibility to opt out of specific data streams.
Moreover, in Europe—where we conducted the experiments—this is a strict
regulatory requirement as per the GDPR. We selected the 10 most common sensing
modalities that have been used in the previous literature to predict
personality traits (Mønsted et al., 2018; de Montjoye et al., 2013;
Chittaranjan et al., 2011, 2013; Wang et al., 2018; Khwaja et al., 2019b). The
10 options included:
* •
Time spent with different applications (App time)
* •
Geographical location (Location)
* •
Number of steps walked (Steps)
* •
Noise in the environment (Noise)
* •
Bluetooth and WiFi data (Bluetooth/Wifi)
* •
Battery level (Battery)
* •
Ambient Light in the environment (Light)
* •
Call history (without phone numbers) (Calls)
* •
Frequency of social network usage (Social network)
* •
Phone lock/unlock data and screen usage (Un(Lock))
We asked users to select at least 4 options out of 10 and explained that
selecting more options leads to a higher accuracy in inferring personality.
After the onboarding, users were asked to use Foundations for a week. App
usage logs consisting of activities completed, time taken per activity, etc., were recorded for each user during the study.
Upon using the app for an entire week, the users were presented with an exit
questionnaire. This questionnaire had four sections, asking users (Ex1) about their overall experience of the mental health app and their perspectives on the personalisation of the app, (Ex2) whether they prefer to have autonomy in selecting activities or to have the app select the right activity for them, and (Ex3) whether they prefer to complete a personality questionnaire or provide smartphone sensing data, together with their privacy preferences regarding the same. Based on the Technology
Acceptance Model (Lee et al., 2003), the first set of questions (Ex1) was
defined to understand how users perceived the app in general. The second (Ex2)
and third (Ex3) set of questions were related to the users’ preference to be
guided vs to have autonomy, as well as sharing the data through a
questionnaire or by providing their mobile sensing data. Ex3 also included
questions related to privacy concerns (a recent study that explored
personality profiling by a chatbot indicated that participants generally
regarded personality as sensitive data that they would be reluctant to share
(Völkel et al., 2020)).
Ex1-Ex3 were delivered as 7-point Likert scales or multiple choice questions (select ’X’ or ’Y’). Additionally, we had two free-text questions where the users could give suggestions on how the app could be improved and made more personalised to them (Ex4). Subsequently, we presented the participants with demographic questions: gender, age (we asked for an age range rather than an exact number), education level and the continent of residence. The exit questionnaire
concluded with a text block that debriefed the participants.
### 3.4. Participants and Inclusion Criteria
Table 1. Demographics of participants
Demographic | Particular | Complete | QG | DG | QA | DA | C
---|---|---|---|---|---|---|---
Size of Population | | 218 | 45 | 52 | 40 | 40 | 41
Gender | Female | 113 | 24 | 25 | 21 | 17 | 26
| Male | 105 | 21 | 27 | 19 | 23 | 15
Age | 15-19# | 6 | - | 1 | 1 | 2 | 2
| 20-24 | 26 | 3 | 3 | 8 | 5 | 7
| 25-29 | 23 | 6 | 3 | 4 | 7 | 3
| 30-34 | 26 | 3 | 9 | 4 | 6 | 4
| 35-39 | 25 | 7 | 7 | 3 | 4 | 4
| 40-44 | 31 | 6 | 9 | 5 | 3 | 8
| 45-49 | 17 | 4 | 6 | 3 | 2 | 2
| 50-54 | 29 | 7 | 9 | 5 | 4 | 4
| 55-59 | 11 | 4 | 2 | - | 3 | 2
| 60+ | 24 | 5 | 3 | 7 | 4 | 5
Education | Secondary School | 69 | 14 | 19 | 13 | 14 | 9
| Bachelor’s Degree | 92 | 17 | 22 | 17 | 16 | 20
| Master’s Degree | 30 | 9 | 5 | 4 | 6 | 6
| Ph.D. or higher | 8 | 1 | 2 | 2 | - | 3
| Trade School | 15 | 1 | 4 | 4 | 4 | 2
| Prefer not to say | 4 | 3 | - | - | - | 1
*QG = Questionnaire-Guided, DG = Data-Guided, QA = Questionnaire-Autonomous, DA = Data-Autonomous, C = Control
#The minimum age of participants is 18. We provided this age option to maintain uniformity with the other age ranges.
The participants in our study were recruited through an external agency that
operates in Europe. The inclusion criteria included a high proficiency in
English and a minimum age of 18. We also required a minimum of 30
participants in each group and gender balance. In early July 2020, the
recruitment agency sent an invite for the study through their internal mailing
list and all the participants completed the study by the end of July 2020. All
participants were recruited from Europe. Through the recruitment agency, we
provided a monetary incentive to all participants who completed the study.
Users were instructed that successfully completing the study and receiving the incentive required completing the onboarding questionnaire, installing and using the mental health app for 1 week, and completing the exit questionnaire. Users
were reminded each day that skipping any of the steps would result in their
disqualification from the study.
A total of 700 participants registered for the study and were randomly assigned to
one of the five groups. Based on the group allocation, they were asked to
complete the corresponding onboarding questionnaire. All 700 users completed
the onboarding questionnaire and were then instructed to install the app on
their smartphones. Out of the 700 users, 353 participants installed the mental
health app. For one week after installing the app, users received daily
reminders to use the app and to engage with at least one activity per day.
Using the app for 4 or more days qualified the users for the last stage of the
study. We chose a threshold of 4 days, as any period shorter than this would be insufficient to explore the app well. In total, 241 participants fulfilled this criterion and were directed to the exit questionnaire. Finally, 218 users completed the exit questionnaire, and this population was used for our analyses. The demographics of the participants are provided in Table 1. Having more than 40 participants in each group exceeded the required minimum of 30 completes per group. The demographic distribution indicates that the sample involved
a diverse population.
### 3.5. Statistical Methods
To report statistics, we use the guidelines laid out in (Habibzadeh, 2017).
For normally distributed data, we report the mean ($M$) and standard deviation ($SD$), and for data that deviated from the normal distribution, we report the median value ($Mdn$) and interquartile range ($IQR$). The interquartile range is defined as the difference between the upper quartile (75th percentile) and the lower quartile (25th percentile). In order to compare the differences between two distributions, we use the Mann-Whitney U test (also known as the Wilcoxon rank sum test) (McKnight and Najab, 2010). The Mann-Whitney U test is non-parametric and works well for comparing distributions that are non-normal, as opposed to the parametric Student’s t-test. Additionally, when comparing
three or more distributions, we use the Kruskal–Wallis test (the non-
parametric equivalent of the one-way ANOVA) (McKight and Najab, 2010).
Although the experimental design would have allowed us to conduct ANOVAs (or
Kruskal-Wallis tests) to look at differences between all 5 conditions, we
decided not to use this statistical method because our research questions
focused on degree of autonomy and data sharing separately rather than
combined. The literature provided no basis to hypothesise that any of those
combinations could lead to significantly different preferences or behaviours
and we did not want to make many pairwise comparisons only for the sake of
obtaining more comparisons.
Data processing was performed with the Python programming language. All
statistical tests (except the power analysis) were conducted using the SciPy
library (Jones et al., 2001) while data visualisation plots were generated
using the Matplotlib library (Hunter, 2007). The power analysis was conducted
in Microsoft Excel, using the Mann-Whitney power function $MW\\_POWER$ from
the Real Statistics library (Zaiontz, 2020).
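As an illustration of this pipeline, the following sketch (with synthetic data, not the study’s logs) maps the descriptive statistics and both tests onto the corresponding SciPy calls.

```python
# Sketch of the reporting and testing pipeline described above; the data
# below are synthetic placeholders, not the study's measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
autonomous = rng.poisson(19, size=80)  # e.g. completed activities per user
guided = rng.poisson(7, size=80)
control = rng.poisson(10, size=40)

def mdn_iqr(x):
    """Median and interquartile range (75th minus 25th percentile)."""
    q25, q75 = np.percentile(x, [25, 75])
    return np.median(x), q75 - q25

print("autonomous: Mdn = %.1f, IQR = %.1f" % mdn_iqr(autonomous))
print("guided:     Mdn = %.1f, IQR = %.1f" % mdn_iqr(guided))

# Two distributions: Mann-Whitney U (Wilcoxon rank sum) test
u, p = stats.mannwhitneyu(autonomous, guided, alternative='two-sided')
print(f"Mann-Whitney: U = {u:.1f}, p = {p:.4f}")

# Three or more distributions: Kruskal-Wallis H test
h, p = stats.kruskal(autonomous, guided, control)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4f}")
```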
## 4\. Results
### 4.1. Experimental validity
We first tested whether the inclusion criteria and randomisation were executed
according to our design. Major demographic characteristics as well as the
total number of participants, were correctly balanced across the groups (Table
1). To probe the additional motivation to use the app beyond the monetary
incentive, we asked participants to rate the extent to which they wanted to reduce their stress levels on a 7-point Likert scale. The median score of 6 (IQR = 2) suggested a generally high interest in reducing stress levels. A Kruskal-Wallis test showed no significant difference among the five groups (H(4) = 2.34, p > .05), which indicates that the randomisation across the
groups was correctly applied and that the stress level was not expected to act
as a confounding factor when comparing results across the groups.
Table 2. Summary of Results
| | Autonomous vs. Guided | Questionnaire vs. Data
---|---|---|---
In-App Behaviours | Completed Activities | Significantly more completed activities in the autonomous cluster | No significant difference in number of completed activities
| Ratio of Recommended versus Chosen Activities | Autonomous cluster: 25%; Guided cluster: 60% | -
| Session Duration | No significant difference in session duration | No significant difference in session duration
| Activity Ratings | Significantly higher ratings of activities in the autonomous cluster | No significant difference in activity ratings
Declarative Data | Preference | All users preferred to have a more guided user experience | All users preferred to complete a personality questionnaire.
| Privacy Preference | - | All users agreed that providing mobile data had more privacy risks
Onboarding Behaviours | Completion time | - | No significant difference in completion time
Participants were informed that they were going to receive recommended activities personalised for them. However, in reality, the recommended selection of activities (both those sent daily and those included within the app) was random. Therefore, the success of our priming strategy was a
prerequisite for exploring the perception and effects of personalised
recommendations. Unlike other domains—such as shopping items, music, movies,
etc.—where people are typically well aware of what constitutes a personalised
recommendation, there is a low level of understanding of meaningful symptoms
and personal characteristics when it comes to the personalisation of
interventions. To this end, we compared the response to the statement “I
believe that activities were personalised for me” (provided at the end of the
study in the exit questionnaire), which was rated on a scale from 1 to 7. We compared
the ratings between the personalisation cluster (QG, DG, QA and DA) and the control group; the former rated the perceived personalisation significantly higher (U = 2725.5, p < .05). Despite the fact that the activity recommendations were not based on the Big Five personality traits, the participants believed so, indicating that the priming was successful.
The results from our experiment are summarised in Table 2 and explained in
detail in the following sections.
### 4.2. Guided vs autonomous user experience
We compare the app usage behaviours and self-reported preferences between the
guided (QG+DG) and the autonomous clusters (QA+DA).
#### 4.2.1. App usage behaviours
The number of completed activities considers only those activities that the
user both started and finished. Figure 3 (a) shows that the number of
activities completed by users in the autonomous cluster (Mdn = 19, IQR = 22.5)
was significantly higher than those in the guided cluster (Mdn = 7, IQR = 3),
U = 1427, p < .001. We also observed that the ratio of recommended activities from the home screen vs. voluntarily chosen activities from the library amounted to 25% for the autonomous cluster, while the ratio of recommended activities from the email reminders vs. activities from the library made up 60% in the guided cluster.
Figure 3. Differences between the guided and autonomous clusters for (a)
average number of completed activities, (b) median session duration per user
and (c) proportion of good (1) ratings
Subsequently, we investigated how the degree of autonomy impacted the session
duration, defined as the median number of seconds for which a user was actively
using the app before closing it. We observed that there was no statistical
difference between autonomous (Mdn = 184 seconds, IQR = 363.2 seconds) and
guided (Mdn = 158 seconds, IQR = 280.4 seconds) clusters, U = 3346, p > .05 (Figure 3 (b)).
The design of Foundations provides a simple format for rating each activity: users are asked to rate each activity upon its completion with either a thumbs up or a thumbs down. We binary-coded these ratings as 1 and 0, respectively, and calculated the proportion of good (1) ratings per user, i.e. number of good ratings / (number of good + bad ratings), which resulted in a value between 0 and 1. Figure 3 (c) shows that the proportion of good ratings of users in the autonomous cluster (Mdn = 1, IQR = 0.1) was significantly higher than in the guided cluster (Mdn = 0.85, IQR = 0.2), U = 3047, p < .01.
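For concreteness, the per-user metric can be sketched as follows (the ratings table and user IDs are illustrative, not the study’s data):

```python
# Sketch of the per-user rating metric: thumbs up -> 1, thumbs down -> 0,
# then the share of good ratings per user; data below are illustrative.
import pandas as pd

ratings = pd.DataFrame({
    "user":      ["u1", "u1", "u1", "u2", "u2", "u3"],
    "thumbs_up": [1,    1,    0,    1,    0,    1],  # binary-coded ratings
})

# number of good ratings / (number of good + bad ratings), a value in [0, 1]
good_ratio = ratings.groupby("user")["thumbs_up"].mean()
print(good_ratio)
# The cluster comparison is then a Mann-Whitney U test on these per-user
# ratios, as described in Section 3.5.
```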
#### 4.2.2. Self-reported preference on autonomy
After using the app for a week, we asked users to rate if: A1. They would like
the mental health app to choose an activity/intervention for them (guided) and
A2. They would like to choose an activity/intervention for themselves
(autonomous). In general, users agreed more strongly that the app should
provide an activity to them (Mdn = 5, IQR = 2), as opposed to them having
autonomy to select their own activities (Mdn = 4, IQR = 2). The Mann-Whitney U
test confirms a statistically significant difference in their preference between the two ($U=17051.0,p<.001$). When asked to directly compare the two
options, 77.9% of the users preferred to have an activity provided to them by
the mental health app.
Subsequently, we compared the preference for the guided and autonomous
clusters separately. The percentage of users that preferred to have an
activity suggested directly by the app was similar across the guided (78.4%)
and autonomous clusters (77.8%). Next, we assessed the difference in average
ratings between A1 and A2 within each cluster. For both the guided and
autonomous clusters, users rated A1 higher than A2 with statistical
significance ($U=2931.5,p<.001$ and $U=2623.5,p<.01$ respectively). This shows
that, irrespective of receiving a guided or autonomous experience, all users
preferred to have an app that suggests interventions for them instead of
selecting activities solely on their own.
### 4.3. Questionnaire vs data selection
We compare the app usage behaviours and self-reported preferences between the
questionnaire (QG+QA) and data selection (DG+DA) clusters.
#### 4.3.1. App usage behaviours
Similar to the comparison described in Section 4.2.1, we compared the number
of completed activities, median session duration per user and proportion of
good ratings between the questionnaire and data selection clusters. Using Mann-Whitney U tests, we found no significant difference for any of these metrics
(Supplementary Figure 1).
#### 4.3.2. Onboarding behaviours
We aimed to explore whether the way of data sharing (completing the
personality questionnaire vs selecting the data modalities) is related to the
time taken to complete the onboarding questionnaire. To do this, we compared
the completion time for the questionnaire cluster against the data selection
cluster. While the median time taken to complete the onboarding questionnaire
was greater for the questionnaire cluster (Mdn = 142 seconds, IQR = 82
seconds) than the data selection cluster (Mdn = 102 seconds, IQR = 68
seconds), the Mann-Whitney U test indicated that there was no significant
difference between the two distributions ($U=1799.5,p>.05$). The number of
screens and the priming text in the onboarding questionnaires were comparable
for the two clusters. The major difference in the two was the personality
questionnaire versus the smartphone sensing data selections. Hence, it can be
concluded that there is no significant difference between the time taken to
complete the 20-item personality questionnaire and the time needed to select a
subset of smartphone sensing data modalities from a list, in an onboarding
process.
Figure 4. Proportions of users from the data sharing cluster that preferred to
provide different data modalities. Column names correspond to the data
modalities described in Section 3.3
In addition, we explored the data categories that the users in the data
selection cluster were most willing to provide. Figure 4 shows the proportion
of users that provided a particular data modality. The error bars in the
figure represent the standard deviation of the proportions obtained
individually from DG and DA. The users were least willing to provide 1. call history (25.0%), 2. Bluetooth and WiFi data (26.1%) and 3. noise in the
environment sampled from the microphone (34.0%). As expected, these are data
modalities that have the largest privacy and security concerns across both
users and technologists (Elkhodr et al., 2012; Mayer et al., 2016; Sipior et
al., 2014). Additionally, the data modalities that users were most willing to provide were 1. battery level (72.8%), 2. number of steps walked (71.7%) and 3.
time spent on different applications (68.5%).
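For readers reproducing this kind of figure, a minimal Matplotlib sketch is given below; the per-group proportions are hypothetical placeholders, and the error bars show the standard deviation across the DG and DA proportions, as in Figure 4.

```python
# Sketch of a Figure-4-style bar chart; the DG/DA proportions below are
# hypothetical placeholders, not the study's data.
import numpy as np
import matplotlib.pyplot as plt

modalities = ["Calls", "Bluetooth/Wifi", "Noise", "Light", "Location",
              "Social network", "Un(Lock)", "App time", "Steps", "Battery"]
dg = np.array([0.24, 0.25, 0.33, 0.40, 0.45, 0.50, 0.55, 0.67, 0.70, 0.72])
da = np.array([0.26, 0.27, 0.35, 0.42, 0.47, 0.52, 0.57, 0.70, 0.73, 0.74])

mean = (dg + da) / 2
sd = np.vstack([dg, da]).std(axis=0)  # SD across the two groups

fig, ax = plt.subplots(figsize=(8, 3))
ax.bar(modalities, 100 * mean, yerr=100 * sd, capsize=3)
ax.set_ylabel("Users selecting modality (%)")
plt.xticks(rotation=45, ha="right")
plt.tight_layout()
plt.show()
```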
#### 4.3.3. Self-reported preference on data sharing
Users were asked to rate from 1 to 7: D1. If they were willing to complete a
5-10 min personality questionnaire (with up to 50 questions) to receive
personalised recommendations for activities and D2. If they were willing to
provide personal sensing data (e.g., GPS location) from their smartphone to
receive personalised recommendations for activities. A Mann-Whitney U test
confirmed with statistical significance ($U=11568,p<.001$) that users were
more willing to complete a personality questionnaire (Mdn = 6, IQR = 2), than
provide their smartphone sensing data for personalisation (Mdn = 4, IQR = 3).
The users were also asked to select (D3) whether they would rather complete a personality questionnaire or provide their smartphone data. 90.4% of the 218
users said they would prefer to complete a personality questionnaire to have a
personalised app experience.
Next, we compared the preferences for the questionnaire and data selection
clusters. For D3, the percentage of users that preferred to complete the
personality questionnaire instead of providing data is notably high across
both the clusters (questionnaire: 92.9% and data selection: 85.9%). We also
assessed the difference in ratings between D1 and D2 within each cluster.
Using Mann-Whitney U tests, we observed that users in both clusters rated D1
higher than D2, with statistical significance ($U=1915,p<.001$ for the
questionnaire cluster and $U=2112.5,p<.001$ for the data selection cluster).
This indicates that all users—irrespective of the way of data
sharing—preferred to complete the personality questionnaire over providing
their smartphone data.
#### 4.3.4. Self-reported preference on privacy risks
An additional objective was to investigate if there was a difference in how
users viewed privacy risks between completing a personality questionnaire and
providing their smartphone data. We asked users to rate: Pr1. If they believed
that filling out personality questionnaires for personalisation has potential
privacy and data protection risks and Pr2. If they believed that providing a
mental health app with their smartphone’s sensing data for personalisation has
potential privacy and data protection risks. All users believed that
completing a personality questionnaire had fewer privacy risks (Mdn = 4, SD =
2) compared to providing sensing data from their smartphones (Mdn = 5, SD =
3). The difference between the two questions was statistically significant,
$U=21118.5,p<.05$. Within the two clusters, we also found a similar trend.
Both clusters rated Pr2 higher than Pr1 with statistical significance
($U=1206.5,p<.01$ for the questionnaire cluster and $U=2106.5,p<.05$ for the
data selection cluster).
## 5\. Discussion
In this study, we explored how (1) the degree of autonomy in the user
experience, and (2) the data to be shared impact users’ preferences and app
behaviours in a mental health app. In the following, we discuss the results
and highlight the main takeaways.
### 5.1. Asymmetry between in-app behaviours and preference for the degree of
autonomy
The balance between autonomy and guidance is a critical topic in personalised
recommender systems, and it has particular importance in the area of digital mental health. In a traditional setting, for the selection of the
right intervention, autonomy is secondary to the expertise of the medical
professional. However, in digital experiences, autonomy was shown to be an
essential design criterion to create engagement (Peters et al., 2018). Our
results highlight the challenge of finding the right balance between the two
and shed light on the contrast between users’ preferences and their actual
behaviour in the app. Together, this provides a set of practical takeaways for
user experience designers that we discuss in the following.
Our findings demonstrated that the difference in the degree of autonomy could
influence subsequent behaviours in a mental health mobile application. We
showed significant between-group differences in user behaviours, although all
participants used the same application. Since there was no actual
personalisation in the app, our results are independent of the accuracy of a
recommendation system and solely ascribed to the perceived degree of autonomy
in the user experience.
Our results challenge the popular notion that the more personalised or guided,
the better an app is perceived by users. We witnessed that a primarily
autonomous experience led to the greatest engagement, i.e. the highest number
of completed activities and best ratings. Contrary to expectations, the most
guided and tailored experience appeared to discourage users’ exploration and
spontaneous app use. However, when asked about the subjective preference after
the study had been completed, a significantly higher number of users expressed
their preference for more guidance instead of autonomy. This finding shows a
discrepancy between behavioural and declarative data. Our results confirm that
the preferences communicated by the user do not necessarily result in
quantitatively improved engagement metrics. This emphasises the importance of
cautiously interpreting user research results and combining them with
quantitative data, when possible, throughout the process of designing
personalised user experiences.
Interestingly, several answers to the free-text question Do you have any
suggestions on how Foundations could be more personalised for you? referred to
reminders, for instance: “Have daily reminders to help with routine”, “Maybe a
reminder to be set daily” and “I like receiving the daily reminders. I have an
18-month-old, so maybe you could set the reminder to come back on later, like a snooze button?”. This may inspire a potential solution for an experience
design that sits between autonomy and guidance, e.g. a combination of autonomous navigation and more frequent notifications suggesting personalised
content. This can result in providing more guidance without negatively
impacting the users’ perceived or actual agency.
In reality, neither of the two clusters of users was exposed to an extreme choice between autonomy and guidance. The imposed mode of content consumption, primarily autonomous versus primarily guided, was clearly
reflected in the actual app use—the guided cluster completed a significantly higher proportion of recommended activities than the autonomous cluster. However,
the total number of completed activities was three times higher in the
autonomous cluster. As efficacy and engagement are key pillars of digital
intervention design (Murray et al., 2016), our results can be utilised by
designers to optimise for these metrics. In line with our findings, the
interaction in mental health apps could be designed in a similar way to
popular entertainment applications such as Spotify or Netflix. Specifically,
the interaction design may directly encourage autonomous navigation while
providing an easy access to recommended and personalised content, thus
mitigating choice overload. Moreover, different trade-offs can be made between
engagement and efficacy. If the success of a specific digital therapy does not
critically depend on the volume of app use but on targeted engagement with certain interventions, the user experience can be more guided. On the other hand, autonomous interaction designs would be more suitable for encouraging a higher frequency of app use when this is critical for therapy success (e.g. meditation techniques should be practised regularly for optimal
results).
Our results are aligned with the autonomy advocates Ryan and Deci (Deci and Ryan, 2012) and Peters et al. (Peters et al., 2018); however, our findings additionally
underline an important space for utilising the advantages of increasingly
sophisticated recommender systems that ultimately can optimise for both
efficacy and engagement.
### 5.2. Users prefer questionnaires but app engagement is unaffected
Personality traits have been used as a foundation for personalising digital
health applications (Halko and Kientz, 2010) and for providing personalised
activity recommendations that can improve mental well-being (Khwaja et al.,
2019a). Personality traits can be obtained using questionnaires (Donnellan et
al., 2006; Goldberg et al., 2006) or inferred using machine learning models.
The latter has given rise to the field of automatic personality detection.
Studies in this field have shown that personality can be detected from
Facebook, Twitter or Instagram usage (Ferwerda and Tkalcic, 2018a, b; Skowron
et al., 2016; Hall and Caton, 2017), gaming behaviour (Yee et al., 2011),
music preferences (Nave et al., 2018) and smartphone sensing data
(Chittaranjan et al., 2011, 2013; de Montjoye et al., 2013; Wang et al., 2018;
Khwaja et al., 2019b). All of these studies are based on the premise that
digital behaviour data—captured passively—can be used to infer a user’s
personality traits automatically with machine learning, without requiring them
to answer long questionnaires. However, none of these studies explored users’
preferences about providing such data to infer their personality passively,
especially to personalise features in a real-world application. Our work set
out to answer this important question, in the context of obtaining smartphone
sensing data to personalise user experience in a mental health app.
Our results indicate that an overwhelming majority of the users prefer to
complete a personality questionnaire over providing their mobile sensing data,
irrespective of whether they completed the personality questionnaire before
using the app or were asked to provide their smartphone data. These results
are consistent with related studies showing users’ improved comprehension of
algorithms by using “white-box” explanations (Cheng et al., 2019). Users predominantly perceived that providing their smartphone sensing data entails more
privacy risks than completing a personality questionnaire. This can be
attributed to trust and privacy concerns with the collection of any kind of
digital data (Gilbert et al., 2010; Dwyer et al., 2007).
Despite the fact that smartphone sensing was perceived as obtrusive, there was
no difference in app behaviour between users who completed a personality
questionnaire and those who opted to provide mobile sensing data.
Additionally, results from the onboarding process indicate that there is no
significant difference between the time taken to complete the data consent
process and the time taken to complete the 20-item personality questionnaire
(Donnellan et al., 2006). Expectedly, users were less willing to provide more
invasive data such as call history, Bluetooth data and noise from the
microphone. This can have a significant impact on the accuracy of personality
prediction models. Recent studies have indicated that call history data (de
Montjoye et al., 2013), Bluetooth data (Staiano et al., 2012) and noise data
from microphone (Khwaja et al., 2019b; Wang et al., 2018) are strong
predictors of personality traits.
If collecting mobile sensing data cannot be leveraged to provide benefits to users beyond personality modelling for personalising the user experience, app designers may consider avoiding the collection of smartphone data altogether. Users appear to have a strong preference for completing a questionnaire instead, and although automatic personality modelling is supposed to reduce the end-user effort, it does not bring added value in this context. This was further echoed by the users’ answers to
the free-text question Do you have any suggestions on how Foundations could be
more personalised for you?, including “An in depth questionnaire”, “Maybe a regular opt-in questionnaire so you let the app know whether your conditions or state of mind is changing” and “I think it could be more personalised by asking more about the persons life, work, family and friends.”. This suggests
that users may be willing to provide even more personal information than
personality as long as they consciously and directly provide it and the app
becomes more tailored to their needs as a result. As additionally suggested by
the users, momentary information represents an opportunity for personalising
the experience even further. In this regard, the Ecological Momentary
Assessment (EMA) (Shiffman et al., 2008) has been a widely used method that
prompts users (via smartphone notifications) at different times during the day
to report how they feel, what they are doing, where they are, and similar.
Recent studies have shown that behaviour and mood data collected via mobile
EMAs is related to mental health and health outcomes such as sleep (Wang et
al., 2014). Thus, data gathered from EMA surveys can point out the opportune
moments to provide personalised interventions. Ultimately, the decision on
gathering user models through passive sources or questionnaires requires
practitioners to make a trade-off between the required amount of information,
model accuracy, users’ privacy concerns and potential survey fatigue (Porter
et al., 2004).
### 5.3. Limitations
Our study required us to make several trade-offs in the experimental design,
which we discuss in the following.
Firstly, having identical app versions for all groups was an asset for our
experimental design, but it also represented a limitation. On the one hand, it enabled us to control the perceptual aspect. On
the other, having more advanced versions would have allowed us to explore the
interaction between perceived accuracy and perception of personalisation,
which could make the results more generalisable.
Secondly, we did not personalise the app according to each user’s actual
personality, which may prompt the question of whether the deception of personalisation would impact the users’ trust in the app and result in a lower
app usage. However, an alternative solution of providing actual
personalisation would have entailed a new set of challenges. In particular,
the quality of recommendations is rarely uniform and frequently biased towards
specific user profiles. This issue would have been difficult or even
impossible to control for. Instead, by providing random recommendations based
on the most popular activities, we reduced the impact of this issue. We
recognise that there is no ideal experimental design in this regard and that
it entails trade-offs. However, 25% of the completed activities in the
autonomous group were recommended, which indicates that the choice of the most
popular activities was appropriate. Furthermore, the recommendations were
perceived as personalised, as tested by comparing the personalisation cluster with the control group (Section 4.1).
Thirdly, we did not collect smartphone data from participants in the data
group. As detailed in Section 3.3, we asked users to provide us access to
their preferred data streams as a basis for personalisation. However, in order
not to increase the complexity of the study, we opted to use such data consent
forms only as priming. Collecting smartphone sensing data would have given us
an opportunity to do a more detailed behavioural analysis and further our
findings.
Lastly, all of our participants were recruited in Europe, which may have
introduced a cultural bias and reduced the generalisability of our findings.
## 6\. Conclusion
In this study, we investigated how the degree of autonomy in the user
experience and different ways of data sharing affect both users’ preference
and the actual usage of a mental well-being app. We conducted a randomised
placebo study with a two-factor factorial design consisting of an onboarding
questionnaire, app usage over seven days, and an exit questionnaire.
Our results revealed an asymmetry between what users declared as their
preference for autonomy (versus guidance) and how they used the app in
reality. The analysis of in-app behaviours showed that a primarily autonomous
design with the option to access content recommendations kept users more
engaged with the app than a primarily guided experience design. However, when
asked in the form of questionnaires, the majority of participants declared
their preference for a more guided experience. The analysis of qualitative
data suggested a potential compromise between different experience designs to
satisfy both engagement metrics and subjective user preferences.
Personalising the user experience typically requires personal data to be
shared, which may impact the manner in which the app will be used. However,
when analysing the actual app use, we found no impact of the data source on
how users interacted with the app. Interestingly, the time taken for
completing a personality questionnaire was comparable to the duration of
completing a form to obtain consent for the usage of smartphone data. Yet,
users indicated a strong preference for completing a personality questionnaire
over providing their mobile sensing data (to infer personality).
As mental health applications are becoming increasingly important and rich in
content, our study provides key design takeaways on delivering personalised
recommendations, to ultimately improve both engagement and efficacy of
interventions.
## Acknowledgements
We would like to thank Emily Stott and Jordan Drewitt for their feedback and
support. This work has been supported by funding awarded by the European
Union’s Horizon 2020 research and innovation programme, under the Marie
Sklodowska-Curie grant agreement no. 722561.
## References
* Blom (2000) Jan Blom. 2000. Personalization: a taxonomy. In _CHI’00 extended abstracts on Human factors in computing systems_. 313–314.
* Burr et al. (2020) Christopher Burr, Mariarosaria Taddeo, and Luciano Floridi. 2020. The ethics of digital well-being: A thematic review. _Science and engineering ethics_ (2020), 1–31.
* Catuara-Solarz et al. (2021) Silvina Catuara-Solarz, Bartlomiej Skorulski, Inaki Estella, Claudia Avella-Garcia, Sarah Shepherd, Emily Stott, and Sophie Dix. 2021\. Efficacy of Foundations, a Digital Mental Health App to Improve Mental Well-Being, during COVID-19: A Randomised Controlled Trial. _Manuscript submitted for publication_ (2021).
* Chawla and Davis (2013) Nitesh V Chawla and Darcy A Davis. 2013. Bringing big data to personalized healthcare: a patient-centered framework. _Journal of general internal medicine_ 28, 3 (2013), 660–665.
* Cheng et al. (2019) Hao-Fei Cheng, Ruotong Wang, Zheng Zhang, Fiona O’Connell, Terrance Gray, F Maxwell Harper, and Haiyi Zhu. 2019. Explaining decision-making algorithms through UI: Strategies to help non-expert stakeholders. In _Proceedings of the 2019 chi conference on human factors in computing systems_. 1–12.
* Chikersal et al. (2020) Prerna Chikersal, Danielle Belgrave, Gavin Doherty, Angel Enrique, Jorge E Palacios, Derek Richards, and Anja Thieme. 2020. Understanding client support strategies to improve clinical outcomes in an online mental health intervention. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_. 1–16.
# Motif Identification using CNN-based Pairwise Subsequence Alignment Score
Prediction
Ethan Moyer
School of Biomedical Engineering, Science and Health Systems
Drexel University, Philadelphia, PA
https://orcid.org/0000-0002-8023-3810

Anup Das
College of Engineering
Drexel University, Philadelphia, PA
https://orcid.org/0000-0002-5673-2636
###### Abstract
A common problem in bioinformatics is related to identifying gene regulatory
regions marked by relatively high frequencies of motifs, or deoxyribonucleic
acid sequences that often code for transcription and enhancer proteins.
Predicting alignment scores between subsequence k-mers and a given motif
enables the identification of candidate regulatory regions in a gene, which
correspond to the transcription of these proteins. We propose a one-
dimensional (1-D) Convolution Neural Network trained on k-mer formatted
sequences interspaced with the given motif pattern to predict pairwise
alignment scores between the consensus motif and subsequence k-mers. Our model
consists of fifteen layers with three rounds of a one-dimensional convolution
layer, a batch normalization layer, a dense layer, and a 1-D maximum pooling
layer. We train the model using mean squared error loss on four different data
sets each with a different motif pattern randomly inserted in DNA sequences:
the first three data sets have zero, one, and two mutations applied on each
inserted motif, and the fourth data set represents the inserted motif as a
position-specific probability matrix. We propose a novel metric, $S_{\alpha}$, based on the Jaccard Index, to evaluate the model’s performance, and we use 10-fold cross validation to evaluate our model. Using
$S_{\alpha}$, we measure the accuracy of the model by identifying the 15
highest-scoring 15-mer indices of the predicted scores that agree with that of
the actual scores within a selected $\alpha$ region. For the best performing
data set, our results indicate on average 99.3% of the top 15 motifs were
identified correctly within a one base pair stride ($\alpha=1$) in the out of
sample data. To the best of our knowledge, this is a novel approach that
illustrates how data formatted in an intelligent way can be extrapolated using
machine learning.
###### Index Terms:
Motif Finding, Convolution Neural Network, Pairwise Sequence Alignment
## I Introduction
Measuring the similarity of two sequences is a well known problem called
sequence alignment. This topic includes a vast category of methods for
identifying regions of high similarity in biological sequences, such as those
in deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein [7].
Specifically, DNA pairwise sequence alignment (PSA) methods are concerned with
finding the best arrangement of two DNA sequences. Some historically notable
dynamic programming PSA methods are the Needleman-Wunsch (NW) algorithm for
global alignment [1] and Smith-Waterman (SW) algorithm for local alignment
[2]. The main difference between global and local alignment is related to the
difference in length of the two sequences: global alignment attempts to find
the highest-scoring end-to-end alignment between two sequences of
approximately the same length, and local alignment searches for local regions
of high similarity between two sequences with different lengths [8]. Figure 1
shows this difference between local and global DNA alignment with two
sequences aligned in a 5’ (i.e. five prime) to 3’ direction. In molecular
biology, this orientation refers to the directionality of the carbon backbone
in DNA. The top subfigure displays global alignment where a query sequence is
aligned end-to-end with a reference. The bottom subfigure displays local
alignment where a short query sequence is most optimally aligned with a longer
reference sequence. This latter alignment displays how the query sequence is
approximately equal to a subsequence of the reference sequence.
Figure 1: Local vs. Global Alignment. In general, DNA is composed of a sequence of the four nucleotides [adenine (A), thymine (T), cytosine (C), guanine (G)] and an ambiguous base (N).
In this way, local alignment methods recognize approximate subsequence matches
of a query sequence with respect to a given reference sequence. One common
paradigm utilizing local alignment is to examine similarities between a query
sequence and specific k-long subsequences in a given gene, known as k-mers,
found within the reference sequence. Traditional local alignment algorithms
calculate these scores between the query sequence and each k-mer in the
reference sequence. The aim of this research is to identify where the most
likely subsequence matches of the query sequence occur in each reference
sequence using machine learning methods. One such type of query sequence that
is of high biological significance is a sequence motif, which are short
reoccurring subsequences of DNA [5]. Therefore, this research follows the
ability of machine learning methods to gauge the relative enrichment of
various representations of motifs (or motif patterns) in independent reference
sequences. More specifically, the efficacy of identifying motif enrichment in
sequences is explored using a one-dimensional (1-D) convolution neural network
(CNN).
Four different data sets are generated, each with a different motif pattern
randomly inserted in approximately 10,000 reference sequences: the first three
data sets have zero, one, and two mutations applied on each inserted motif,
and the fourth data set represents the inserted motif as a position-specific
probability matrix (PPM). In this data structure, each nucleotide position
corresponds to a frequency of nucleotides [22]. These distinct motif patterns
help display how the CNN model can recognize both subsequence matches with
exact, inexact, and probabilistic motifs. Each sample in a given data set
consists of artificial sequences enriched with a given motif pattern at a
frequency between five and fifteen occurrences per 1,000 base pairs (bp).
These samples are split into 986 overlapping 15-mers with a corresponding
calculated local alignment score from the BioPython Aligner [20]. These scores are then predicted using a CNN with 10-fold cross validation. In order to measure the performance of the model, the average out-of-sample mean squared error (MSE), $R^{2}$, and accuracy scores are reported.
While the MSE of the model trained on each data set is not representative of
the model’s effectiveness, the Jaccard Index and $S_{\alpha}$, a novel
modified version of the Jaccard Index, are better suited to capture accuracy
of the model. The standard MSE is not suitable for this problem because it
inherently only displays differences between predicted and actual values.
Since our aim is to locate those highest-scoring 15-mers, we need a metric
that determines at which positions they occur and with what accuracy (see
subsection V-A). This new metric, $S_{\alpha}$, measures the degree of
similarity between two sets where each pair of elements can be different by at
most $\alpha$. Because of the plateauing nature of this metric as seen in each
data set and the risks involved in increasing alpha, only $S_{0}$ to $S_{5}$
are reported.
In implementing this new metric, the accuracy of the model increases
dramatically across all four data sets compared to the Jaccard Index. This
indicates that while the model is not able to identify the highest-scoring k-mers exactly, it is able to accurately identify their local region. As expected, the model’s accuracy is far higher for the data sets with relatively simple inserted motif patterns (non-probabilistic consensus motifs) than for the data set with more complex inserted motif patterns, such as a consensus PPM.
## II Background
Clusters of motifs across a genome strongly correlate with gene regulatory regions [18]. These regions are especially important for motif enrichment
analysis, where known motifs are identified in the regulatory sequence of a
gene in order to determine which proteins (transcription factors and
enhancers) control its transcription [6] [19]. Motif enrichment analysis is
only relevant given that the regulatory region of a gene is known, otherwise
the sequence under study may be from a non-coding region of an organism’s
genome or an untranslated region of a gene [9]. Given that the regulatory
region of a gene is unknown, one frequently used approach to identifying it is
to first locate sequences enriched with highly conserved motifs. Fortunately,
many motifs that have been discovered are common amongst genes serving a
similar role across organisms, such as a negative regulatory region for
eukaryotes [10]. Finding these conserved motifs may facilitate the
identification of the regulatory regions in a gene. For that reason,
identifying the exact or relative positions of a given motif in a gene or
sequence is a relevant inquiry in the process for classifying candidate
regulatory regions of a gene.
A software toolkit known as MEME Suite includes three different methods for
motif-sequence searching [23]: FIMO (Find Individual Motif Occurrences) [21],
GLAM2SCAN (Gapped Local Alignment of Motifs SCAN) [24], and MAST (Motif
Alignment and Search Tool) [25].
FIMO focuses on scanning both DNA and protein sequences for a given motif
represented as a PPM. This software tool calculates the log-likelihood ratio
score, p-value, and q-value (false discovery rate) for each subsequence
position in a sequence database [21].
Typically, GLAM2SCAN performs a Waterman-Eggert local alignment between motifs
found by GLAM2, its companion motif-finding algorithm, and a sequence
database. These local alignment scores are generated from an aligner
programmed with position specific residue scores, deletion scores, and
insertion scores returned from the GLAM2 algorithm. The $n$ highest alignments
are returned to the user [24].
MAST locates the highest-scoring $n$ subsequences with respect to a motif
described as a position-specific score matrix. Using the QFAST algorithm, MAST
calculates the p-value of a group of motif matches. This is accomplished by
first finding the p-value of each match (the ’position p-value’) and normalizing it for the length of the motif (the ’sequence p-value’). Then these normalized p-values are multiplied together to find the statistical significance across all located motifs in the database (the ’combined p-value’)
[25].
## III Data Analysis & Curation
A single data set contains approximately 10,000 randomly generated DNA
sequences, each 1,000 bp long. The number of samples varies slightly from one data set to another due to some inconsistencies that are removed in preprocessing. A
15-mer motif is inserted into each sample anywhere from five to fifteen times.
Four separate data sets of this structure are created where a different motif
pattern is inserted randomly into each sequence. The first three data sets
have zero, one, and two mutations applied on each inserted motif. These
mutations are applied in order to determine whether the proposed model has the
potential to identify consensus motifs and non-exact consensus motifs across
many sequences. Since motifs mostly exist as profiles where each base pair
position corresponds to a frequency table of nucleotides, the fourth data set is created where the inserted motifs are based on a PPM [11].
Equation 1 is used to calculate the PPM, indicated by matrix $M$, given a set of candidate motifs, i.e., sequences that are thought to be drawn from the same motif PPM. This equation computes the frequency of each nucleotide $\gamma$ at each nucleotide position across all motifs, where $\gamma\in\{A,T,C,G\}$; $I$ is an indicator function, where $I(x=\gamma)$ is 1 if $x=\gamma$ and 0 otherwise; $i{\displaystyle\in}(1,\dots,L)$, where $L$ is the length of each motif; and $j{\displaystyle\in}(1,\dots,N)$, where $N$ is the number of motifs.
$M_{\gamma,i}=\frac{1}{N}\sum^{N}_{j=1}I(X_{j,i}=\gamma)$ (1)
In order to apply Equation 1 on candidate motifs, the DNA sequence data must
be formatted as nucleotide position counts shown in Figure 2. This figure
illustrates the conversion of a list of candidate motifs to matrix
$M_{counts}$ and then to $PPM$ using Equation 1. While Figure 2 displays this
process for five 10-mers, the fourth data sets in this work relies on profiles
built from ten 15-mers.
TACAGAGTTG
CCATAGGCGT
TGAACGCTAC
ACGGACGATA
CGAATTTACG
$\downarrow$
$M_{counts}$ =
A: 1 1 3 3 2 1 0 2 1 1
T: 2 0 0 1 1 1 1 2 2 1
C: 2 2 1 0 1 1 1 1 1 1
G: 0 2 1 1 1 2 3 0 1 2
$\downarrow$
$PPM$ =
A: 0.2 0.2 0.6 0.6 0.4 0.2 0.0 0.4 0.2 0.2
T: 0.4 0.0 0.0 0.2 0.2 0.2 0.2 0.4 0.4 0.2
C: 0.4 0.4 0.2 0.0 0.2 0.2 0.2 0.2 0.2 0.2
G: 0.0 0.4 0.2 0.2 0.2 0.4 0.6 0.0 0.2 0.4
Figure 2: The conversion of five candidate subsequence motifs to PPM using
Equation 1.
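As a concrete illustration of Equation 1, the following Python sketch (the function and variable names are ours, not from the authors' codebase) reproduces the count matrix and PPM of Figure 2:

import numpy as np

NUCLEOTIDES = "ATCG"

def ppm_from_motifs(motifs):
    # Compute a position-specific probability matrix (Equation 1).
    # Rows follow the order A, T, C, G; entry (gamma, i) holds the
    # frequency of nucleotide gamma at position i across all motifs.
    n, length = len(motifs), len(motifs[0])
    counts = np.zeros((4, length))
    for motif in motifs:                  # j ranges over the N motifs
        for i, base in enumerate(motif):  # i ranges over the L positions
            counts[NUCLEOTIDES.index(base), i] += 1
    return counts / n                     # M[gamma, i] = (1/N) sum_j I(X[j, i] = gamma)

motifs = ["TACAGAGTTG", "CCATAGGCGT", "TGAACGCTAC", "ACGGACGATA", "CGAATTTACG"]
print(ppm_from_motifs(motifs))  # first row: 0.2 0.2 0.6 0.6 0.4 0.2 0.0 0.4 0.2 0.2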
## IV Feature & Output Selection
In order to format the sequence data into a structure that is both
recognizable and meaningful to a CNN, we first split each sequence into a list
of overlapping 15-mers. Next, we generate a one-hot encoding for each
nucleotide in the 15-mers. The resulting feature set is composed of 60 values.
Figure 3 displays this process using a small subsequence example formatted as
4-mers.
Figure 3: DNA subsequence k-mer formatting by one-hot encoding nucleotides.
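A minimal sketch of this k-mer encoding, assuming the channel order A, T, C, G (the text does not fix one), is:

import numpy as np

NUCLEOTIDES = "ATCG"

def encode_kmers(sequence, k=15):
    # Split a sequence into overlapping k-mers and one-hot encode each.
    # Returns an array of shape (len(sequence) - k + 1, 4 * k); for
    # k = 15 each k-mer yields the 60-value feature vector described above.
    kmers = [sequence[i:i + k] for i in range(len(sequence) - k + 1)]
    features = np.zeros((len(kmers), 4 * k))
    for row, kmer in enumerate(kmers):
        for pos, base in enumerate(kmer):
            features[row, 4 * pos + NUCLEOTIDES.index(base)] = 1.0
    return features

# A 1,000 bp sequence yields the 986 overlapping 15-mers used per sample.
print(encode_kmers("A" * 1000).shape)  # (986, 60)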
To obtain the target values, each of these 15-mers are pairwise aligned with
the consensus motif for the given data set motif pattern using the SW
algorithm. Given two sequences, $a$ of length $n$ and $b$ of length $m$, this
algorithm begins by defining an $n+1$ by $m+1$ matrix $H$. The first column
and first row are assigned $0$, and the following recurrence relation is
applied to assign the rest of the values in $H$.
$H(i,j)=\max\begin{cases}H(i-1,j-1)+\sigma(a_{i},b_{j})\\ H(i,j-1)+W\\ H(i-1,j)+W\\ 0\end{cases}$
where $W$ is a gap score and $\sigma$ is a score matrix such that
$\sigma(a_{i},b_{j})=\begin{cases}+1&\quad\text{if }a_{i}=b_{j}\\ -2&\quad\text{if }a_{i}\neq b_{j}\end{cases}$
In the case when $a_{i}=b_{j}$, $\sigma$ returns a match score of $+1$, and in
the case when $a_{i}\neq b_{j}$, $\sigma$ returns a mismatch score of $-2$.
The gap score, $W$, is assigned $-2.5$. The match, mismatch, and gap scores can be configured for different alignments; these particular values are used because they are optimal for this type of local alignment [4].
assigned its values, the best alignment is obtained by finding the maximum
value in $H$ and tracing back the matrix elements that led up to this maximum.
In this way, the maximum value in $H$ defines the optimal path in $H$ for the
best alignment between sequences $a$ and $b$ [2]. The calculated alignment
scores are normalized based on the maximum alignment score in each sample.
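Since the target scores come from the BioPython Aligner [20], they can be generated with a configuration along the following lines; this is a sketch under the match, mismatch, and gap scores stated above, and the consensus motif shown is a hypothetical placeholder:

from Bio import Align

# Smith-Waterman-style local aligner with the parameters given above.
aligner = Align.PairwiseAligner()
aligner.mode = "local"
aligner.match_score = 1.0
aligner.mismatch_score = -2.0
aligner.open_gap_score = -2.5
aligner.extend_gap_score = -2.5

consensus_motif = "ACGTACGTACGTACG"  # hypothetical 15-mer consensus

def alignment_scores(kmers, motif=consensus_motif):
    # Score each 15-mer against the consensus motif, then normalize by
    # the maximum alignment score within the sample, as described above.
    scores = [aligner.score(kmer, motif) for kmer in kmers]
    top = max(scores)
    return [s / top for s in scores]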
## V Methods
### V-A CNN Model Evaluation
Although the MSE loss function is effective at penalizing large differences
between predicted and target values, such as outliers in the data, it does not
successfully represent the predictive power of the model given the scope of
the problem [14]. In the data, the target value from each sample ranges from
zero to one. This range already generates an inherently small MSE. Even when
the MSE for each sample is normalized, the metric is overshadowed by the
overwhelming majority of the predicted values that were approximately equal to
the global mean of each sample. In other words, the MSE as a metric does not
capture the correct information pertaining to the five to fifteen inserted
motif patterns in each sample due to a large unequal distribution of such
scores that deviate from the global mean. This problem is analogous to that of
an unequal class distribution in a classification problem.
The goal of the model is to score the CNN based on its ability to locate the
15 highest-scoring 15-mers, because we inserted a motif pattern at most 15
times into a single sample. Since this network deals with continuous values
instead of discrete classes, initially we cannot be certain of the 15-mer to
which a 15-mer score at any index $i$ corresponds. However, a higher scoring
15-mer has a greater probability of corresponding to that of a motif, whereas
the lower scoring 15-mers carry little information. This is due to the fact
that each score in the data is generated from a local alignment between 15-mer
and the given consensus motif. In this way, only the highest 15-scoring
15-mers are of interest. As previously mentioned, we indicate that there is an
unequal distribution between the number of scores corresponding to that of
each inserted motif and the global mean of each sample. Using these
observations, we rationalize that we only have to examine the 15 highest-scoring indices. This generalization that the 15 highest-scoring indices correspond to the inserted motif patterns is further supported by the notion that the probability of observing a random 15-mer exactly equal or similar to the inserted motifs is relatively low.
Thus, the indices of the predicted 15 highest-scoring 15-mers inherently hold information about the position of possible inserted motif patterns because it is at these indices at which the local alignment is conducted. Due to the low likelihood of observing a false positive (when a 15-mer is identified as a motif but in actuality is not one), we create a one-to-one correspondence between the actual motif indices and the predicted motif indices using high local alignment scores. The accuracy of this one-to-one correspondence can be measured using the Jaccard Index given in Equation 2.
$J(A,B)=\frac{|A\cap B|}{|A\cup B|}$ (2)
We propose a more generalized index, $S_{\alpha}$, in Equation 3 which
measures the similarity of two sets with an allowed margin of error of
$\alpha$. Because of the high locality of local alignment score predictions
and due to the fact that the highest-scoring 15-mers can still be found from
examining the immediate region of a prediction, this margin of error serves as
a heuristic for motif identification. In this metric, two items are considered
identical if they are no more than $\alpha$ away from each other. In the scope
of this work, sets $A$ and $B$ contain the indices of the 15 highest-scoring
15-mers of the actual data and predicted data, respectively. When $\alpha=0$, $S_{0}(A,B)$ in Equation 3 is identical to $J(A,B)$ in Equation 2. Conversely,
as $\alpha$ increases, the allowed distance between indices in sets $A$ and
$B$ increases. For example, when $\alpha=2$, a predicted 15-mer index $i$ and
actual 15-mer index $i+2$ are considered the same.
$J(A,B\mid\alpha)=S_{\alpha}(A,B)=\frac{|\bigcup\limits_{\mu=-\alpha}^{\alpha}A\cap\{x+\mu\mid x\in B\}|}{|A\cup B|}$ (3)
The following process is an algorithm to calculate a modified version of the
Jaccard Index. Using the $argsort$ function in NumPy, we examine the indices
that order both the actual outputs and the predicted outputs. In looping through each of the top $n$ indices of the predicted outputs, we count the number of them which are contained in the list of indices of the actual outputs. The process returns the score as this count over the maximum possible value, which in this case is $n$. This is implemented in Algorithm 1.
Algorithm 1 Measuring Jaccard Index with stride $\alpha$
1: procedure $S_{\alpha}$
2: n ← number of highest-scoring k-mers to analyze
3: score ← 0
4: act_outputs ← actual outputs
5: pred_outputs ← outputs from CNN
6: act_indxs ← indices that would sort act_outputs
7: pred_indxs ← indices that would sort pred_outputs
8: outer loop:
9: for $i$ := 1 to $n$ do
10: pred_indx ← pred_indxs(i)
11: for $j$ := 0 to $\alpha$ do
12: if pred_indx ∈ act_indxs − j then
13: score ← score + 1
14: goto outer loop
15: if pred_indx ∈ act_indxs + j then
16: score ← score + 1
17: goto outer loop
18: normalized_score ← score / n
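A runnable NumPy version of Algorithm 1 (a sketch following the description above; the helper name is ours) could look like:

import numpy as np

def s_alpha(act_outputs, pred_outputs, alpha, n=15):
    # Jaccard-style similarity with stride alpha (Algorithm 1): compare
    # the indices of the n highest-scoring k-mers in the actual and
    # predicted score arrays; two indices match if they differ by at
    # most alpha. Returns the matched count normalized by n.
    act_indxs = set(np.argsort(act_outputs)[-n:])    # top-n actual indices
    pred_indxs = np.argsort(pred_outputs)[-n:]       # top-n predicted indices
    score = 0
    for pred_indx in pred_indxs:
        if any(pred_indx + offset in act_indxs
               for offset in range(-alpha, alpha + 1)):
            score += 1
    return score / n

# Example: S_0 is the strict match rate; S_1 also accepts off-by-one hits.
actual = np.random.rand(986)
predicted = actual + 0.01 * np.random.randn(986)
print(s_alpha(actual, predicted, alpha=0), s_alpha(actual, predicted, alpha=1))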
## VI Results
Each of the four data sets is characterized by 10,000 samples where each
sample contains a sequence that is 1,000 bp in length. In each sample, a motif
pattern is inserted randomly anywhere from five to fifteen times. The first
three data sets include inserted motif patterns with zero, one, and two
mutations. The fourth data set includes an inserted motif pattern represented
based on a PPM. Each data set is evaluated using out of sample data generated
from 10-fold cross validation based on eight metrics: MSE, $R^{2}$, and $S_{0}$-$S_{5}$.
Table I: CNN Results. The average out of sample MSE, $R^{2}$, and $S_{0}$-$S_{5}$ for each data set.
A fifth analysis is conducted with another data set using a motif
representation similar to that of the fourth data set with the MafK
transcription factor from the BATCH1 regulatory gene [26]. This motif is a
15-mer with a less conserved consensus sequence compared to that of the former
four data sets. While this data set did not perform as well as the other four
data sets with a $S_{9}$ of 45.3%, this analysis brought to light the
consideration of the aligner scoring matrix as another hyperparameter to this
work.
As it turns out, the performance of the model varies greatly with the chosen
match score, mismatch score penalty, and gap score penalty for the currently
implemented alignment method. For instance, the $S_{9}$ varies from 33.7% to
52.6% with different scoring hyperparameters. The former result is derived
from an aligner with a match score of +2.0, mismatch score penalty of -3.0,
and gap score penalty of -3.5, whereas the latter result is derived from an
aligner with a match score of +2.0, mismatch score penalty of -4.0, and gap
score penalty of -4.5. It is currently unclear what aligner hyperparameters
are most optimal for this more complex data set and the original four data
sets explored in the work. Although there is evidence to suggest that aligner
scoring matrices vary with the type of inserted motif pattern, it is unclear
whether the most optimal hyperparameters change from motif to motif.
One possible interpretation of the dependence of the model’s chosen evaluation
metric, $S_{\alpha}$, on the aligner hyperparameters is related to the fact
that the CNN predicts alignment scores that are normalized within each sample.
Therefore, the farther the highest scores are from the global mean,
the more likely that the proposed metric will be able to recognize inserted
motifs. Conversely, when analyzing a data set with a less conserved motif
consensus sequence, such as that of the MafK transcription factor, the
alignment scores are closer to the global mean of each sample. This in turn
makes recognizing the indices of the highest-scoring segments more
challenging. It follows that the aligner hyperparameters which capitalize on
increasing this difference are most favorable for all motifs, regardless of
pattern.
### VI-A Convolution Neural Network (CNN) Architecture
A CNN is a class of deep learning models which can infer patterns from data formatted as a grid structure, such as a set of stock prices over time or a grid representation of pixels in an image. These artificial neural networks (ANNs) use a linear mathematical operation called convolution in at least one of their layers [3].
The convolution operation is commonly identified by the following two
equations:
$s(t)=\int x(a)w(t-a)da$ (4)
$s(t)=(x*w)(t)$ (5)
Equation 4 explicitly denotes the equation for convolution, whereas Equation 5 displays how an asterisk can be used to denote the operation. In both
equations, $x$ is referred to as the input. Typically, this is formatted as a
multidimensional array, or a tensor, that matches the size and dimensions of
the data. The second argument is $w$, representing a kernel, which stores
parameters for the model also formatted as a tensor. This argument is adapted
throughout the training process of the model. The output of both functions,
$s$, is called the feature map of the convolution layer. This is what is fed
into the next layer of the network [3]. Hidden layers are generated from
applying a kernel, or filter, of weights over the receptive field of the
inputs. More specifically, the hidden layer is computed based off of the
filter weights and the input layer as it strides across the feature space
[28]. This operation can either compress or expand input space depending on
the applied kernel [29]. This paradigm is followed by rounds of activations,
normalizations, and pooling [29]. The model typically ends with a fully
connected layer to compute its outputs [28]. The proposed model is represented in Figure 4.
Figure 4: CNN model architecture.
The model is marked by three rounds of a 1-D convolution layer, a batch
normalization layer, a dense layer, and a 1-D maximum pooling layer. After
these 12 layers, the model finishes with a 50% dropout layer, a flatten layer, and finally a fully connected layer corresponding to the 986 alignment scores for each sample [13] [12].
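A minimal Keras sketch of this architecture follows; the filter counts, kernel sizes, and dense widths are not stated above, so the values below are placeholders rather than the authors' exact configuration:

import tensorflow as tf
from tensorflow.keras import layers

def build_model(n_kmers=986, n_features=60):
    # Three rounds of Conv1D / BatchNorm / Dense / MaxPool1D, then
    # Dropout(0.5), Flatten, and a Dense output of 986 alignment scores.
    model = tf.keras.Sequential([layers.InputLayer(input_shape=(n_kmers, n_features))])
    for filters in (64, 32, 16):          # placeholder filter counts
        model.add(layers.Conv1D(filters, kernel_size=3, padding="same",
                                activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dense(filters, activation="relu"))
        model.add(layers.MaxPooling1D(pool_size=2))
    model.add(layers.Dropout(0.5))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_kmers))      # one normalized score per 15-mer
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                           beta_2=0.999, epsilon=1e-07),
        loss="mse")
    return model

model = build_model()
# model.fit(X_train, y_train, epochs=100, batch_size=80, validation_split=0.2)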
The model described above is run on all four data sets for 100 epochs with a
batch size of 80 and compiled with the Adam optimizer (learning rate=0.001,
beta 1=0.9, beta 2=0.999, epsilon=1e-07). Of the 10,000 samples in each
dataset, 80% is reserved for training the network and the remaining 20% is
used for validation after each epoch. For its loss function, the model relies
on Mean Squared Error (MSE), which is calculated between predicted values
($y_{pred}$) and target values ($y_{act}$) with the following formula in
Equation 6:
$MSE(y_{pred},y_{act})=\frac{1}{n}\sum_{i=1}^{n}(y_{pred,i}-y_{act,i})^{2}$ (6)
## VII Discussion
As displayed in this work, deep learning models, such as a CNN, have the
capacity to recognize and predict the positions of an inserted motif with
great accuracy. Furthermore, data structures can be devised to take advantage
of unequal class distributions in regression problems as highlighted by the
design of k-mer data representation in this work and the incorporation of
$S_{\alpha}$ as a novel evaluation metric.
In analyzing the results in Table I, there is a characteristic pattern between
the accuracy metrics across each data set. For instance, in comparing
$S_{0}$-$S_{5}$ for the first data set with zero mutations applied on each
inserted motif, the score monotonically increases with an increasing $\alpha$.
This is evident for the three other data sets as well. With respect to this
particular trend, it is expected that as $\alpha$ increases, the score will
also increase since $\alpha$ relates directly to the allowed margin of error,
making $S_{\alpha}$ less conservative.
Additionally, the model’s accuracy is far higher for the data sets with
relatively simple inserted motif patterns, such as nonmutated and mutated
consensus motifs, compared to that of the fourth data set with a PPM motif
pattern. This relationship can be explained by the process by which the scores
for each 15-mer are calculated. For a given 15-mer, a score is computed based
on its local alignment with a given consensus motif. For the first data set,
the local alignment scores generated are derived from each inserted motif, whereas in the latter three data sets, the scores are not necessarily derived from each data set’s consensus motif since the motif patterns support variable inserted motifs.
In all data sets, the largest increase in $S_{\alpha}$ appears to be between
the $S_{0}$ and $S_{1}$. Beyond this point, $S_{\alpha}$ plateaus with increasing $\alpha$. With the consideration that the likelihood of
observing a false positive is relatively low, this indicates that the addition
of stride $\alpha$ is well-advised. This is the case because the increase in
$\alpha$ only influences $S_{\alpha}$ up to a certain point. It is expected
that as $\alpha\xrightarrow{}\beta$, where $\beta$ is the maximum $\alpha$ on
either side of a given motif index, $S_{\alpha}\xrightarrow{}1$ because every
single $n$ indices will be covered by the stride ${\alpha}$. In the case that
$S_{\alpha}\xrightarrow{}1$, the certainty for each identified motif decreases
with increasing $S_{\alpha}$ regardless; however, the absence of this limit in
the data indicates that the certainty of the identified motifs does not decrease dramatically from $S_{0}$ to $S_{5}$. Furthermore, the presence of a
plateauing $S_{\alpha}$ supports the thought that a decrease in the certainty
of an identified motif is negligible. This analysis can be drawn further in
noticing that the point at which $S_{\alpha}$ plateaus increases as the
complexity of the motif pattern increases. In the case of a more complex motif
pattern, such as either of the PPMs, a greater $\alpha$ is required to fully
encapsulate accuracy of the model’s predictions. Even then, the certainty of
such motif identification with increasing $\alpha$ decreases.
In subsection V-A, we draw a one-to-one correspondence between the actual
motif indices and that of the predicted motifs by only examining the indices
of the 15 highest-scoring 15-mers in both the actual scores and predicted
scores. This is not a strong one-to-one correspondence because the number of
inserted motifs actually varies randomly from five to fifteen times from sample to sample. By design, this is a confounding variable. When $S_{\alpha}$ is applied
on a sample with five inserted motifs, the returned score is predicted to be
an underestimate of the model’s prediction. This is due to the fact that this
function only examines the highest 15-scoring indices for each sample. In the
case of five inserted motifs, there would be ten 15-mers identified as high-
scoring motifs, when in reality these are random 15-mers in the sequence.
Because those scores are more likely to be present throughout a sequence,
there will be less similarity between the indices of the predicted 15 highest-
scoring 15-mers and that of the actual 15 highest-scoring 15-mers. This will
most likely lead to a decrease in $S_{\alpha}$.
## References
* [1] Saul B. Needleman and Christian D. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443–453, 1970.
* [2] Temple F Smith, Michael S Waterman, et al. Identification of common molecular subsequences. Journal of molecular biology, 147(1):195–197, 1981.
* [3] Yoshua Bengio, Ian Goodfellow, and Aaron Courville. Deep learning, volume 1. MIT press Massachusetts, USA:, 2017.
* [4] Ahmad Al Kawam, Sunil Khatri, and Aniruddha Datta. A survey of software and hardware approaches to performing read alignment in next generation sequencing. IEEE/ACM transactions on computational biology and bioinformatics, 14(6):1202–1213, 2016.
* [5] Patrik D’haeseleer. What are dna sequence motifs? Nature biotechnology, 24(4):423–425, 2006.
* [6] Robert C McLeay and Timothy L Bailey. Motif enrichment analysis: a unified framework and an evaluation on chip data. BMC bioinformatics, 11(1):165, 2010.
* [7] Waqar Haque, Alex Aravind, and Bharath Reddy. Pairwise sequence alignment algorithms: A survey. In Proceedings of the 2009 Conference on Information Science, Technology and Applications, page 96–103, 2009.
* [8] EMBL-EBI. Pairwise Sequence Alignment, 2020.
* [9] Xiaole Liu, Douglas L Brutlag, and Jun S Liu. Bioprospector: discovering conserved dna motifs in upstream regulatory regions of co-expressed genes. In Biocomputing 2001, pages 127–138. World Scientific, 2000.
* [10] Jorge A Iñiguez-Lluhí and David Pearce. A common motif within the negative regulatory regions of multiple factors inhibits their transcriptional synergy. Molecular and Cellular Biology, 20(16):6040–6050, 2000.
* [11] Modan K Das and Ho-Kwok Dai. A survey of dna motif finding algorithms. In BMC bioinformatics, volume 8, page S21. Springer, 2007.
* [12] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
* [13] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
* [14] Yu Qi, Yueming Wang, Xiaoxiang Zheng, and Zhaohui Wu. Robust feature learning by stacked autoencoder with maximum correntropy criterion. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6716–6720. IEEE, 2014.
* [15] Luping Ji, Xiaorong Pu, Hong Qu, and Guisong Liu. One-dimensional pairwise cnn for the global alignment of two dna sequences. Neurocomputing, 149:505–514, 2015.
* [16] Q. Zhang, L. Zhu, W. Bao, and D. Huang. Weakly-supervised convolutional neural network architecture for predicting protein-dna binding. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 17(2):679–689, 2020.
* [17] Gary D Stormo and George W Hartzell. Identifying protein-binding sites from unaligned dna fragments. Proceedings of the National Academy of Sciences, 86(4):1183–1187, 1989.
* [18] Martin C. Frith, Michael C. Li, and Zhiping Weng. Cluster-Buster: finding dense clusters of motifs in DNA sequences. Nucleic Acids Research, 31(13):3666–3668, 07 2003.
* [19] Tom Lesluyes, James Johnson, Philip Machanick, and Timothy L. Bailey. Differential motif enrichment analysis of paired chip-seq experiments. BMC Genomics, 15(1):752, 2014.
* [20] Peter J. A. Cock, Tiago Antao, Jeffrey T. Chang, Brad A. Chapman, Cymon J. Cox, Andrew Dalke, Iddo Friedberg, Thomas Hamelryck, Frank Kauff, Bartek Wilczynski, and Michiel J. L. de Hoon. Biopython: freely available python tools for computational molecular biology and bioinformatics. Bioinformatics, 25(11):1422–1423, 2009.
* [21] Charles E. Grant, Timothy L. Bailey, and William Stafford Noble. Fimo: scanning for occurrences of a given motif. Bioinformatics, 27(7):1017–1018, 2011.
* [22] Mengchi Wang, David Wang, Kai Zhang, Vu Ngo, Shicai Fan, and Wei Wang. Motto: Representing motifs in consensus sequences with minimum information loss. Genetics, page genetics.303597.2020, 08 2020.
* [23] Timothy L. Bailey, Mikael Boden, Fabian A. Buske, Martin Frith, Charles E. Grant, Luca Clementi, Jingyuan Ren, Wilfred W. Li, and William S. Noble. Meme suite: tools for motif discovery and searching. Nucleic Acids Research, 37(suppl_2):W202–W208, 2009.
* [24] Martin C. Frith, Neil F. W. Saunders, Bostjan Kobe, and Timothy L. Bailey. Discovering sequence motifs with arbitrary insertions and deletions. PLOS Computational Biology, 4(5):e1000071–, 05 2008.
* [25] T L Bailey and M Gribskov. Combining evidence using p-values: application to sequence homology searches. Bioinformatics, 14(1):48–54, 1998.
* [26] Oriol Fornes, Jaime A Castro-Mondragon, Aziz Khan, Robin van der Lee, Xi Zhang, Phillip A Richmond, Bhavi P Modi, Solenne Correard, Marius Gheorghe, Damir Baranašić, Walter Santana-Garcia, Ge Tan, Jeanne Chèneby, Benoit Ballester, François Parcy, Albin Sandelin, Boris Lenhard, Wyeth W Wasserman, and Anthony Mathelier. JASPAR 2020: update of the open-access database of transcription factor binding profiles. Nucleic Acids Research, 48(D1):D87–D92, 11 2019.
* [27] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
* [28] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
* [29] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
* [30] Ethan J Moyer and Anup Das. Machine learning applications to dna subsequence and restriction site analysis. arXiv preprint arXiv:2011.03544, 2020.
# Current algebras on $S^{3}$ of complex Lie algebras
Tosiaki Kori
Department of Mathematics
Graduate School of Science and Engineering
Waseda University,
Tokyo 169-8555, Japan
email<EMAIL_ADDRESS>
###### Abstract
Let $\mathcal{L}$ be the space of spinors on $S^{3}$ that are the restrictions
to $S^{3}$ of the Laurent polynomial type harmonic spinors on
$\mathbf{C}^{2}$. $\mathcal{L}$ becomes an associative algebra. For a simple
Lie algebra $\mathfrak{g}$ the real Lie algebra $\mathcal{L}\mathfrak{g}$
generated by $\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}$ is called
$\mathfrak{g}$-current algebra. The real part $\mathcal{K}$ of $\mathcal{L}$
becomes a commutative subalgebra of $\mathcal{L}$. For the Cartan subalgebra
$\mathfrak{h}$ of $\mathfrak{g}\,$,
$\mathcal{K}\mathfrak{h}=\mathcal{K}\otimes_{\mathbf{R}}\mathfrak{h}$ becomes
a Cartan subalgebra of $\mathcal{L}\mathfrak{g}$. We investigate the adjoint
representation of $\mathcal{K}\mathfrak{h}$ and find that the set of non-zero
weights corresponds bijectively to the root space of $\mathfrak{g}$. Let
$\mathfrak{g}=\mathfrak{h}+\mathfrak{e}+\mathfrak{f}$ be the standard
triangular decomposition of $\mathfrak{g}$, and let
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{h}$,
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{e}$ and
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{f}$ generate respectively the Lie
subalgebras $\mathcal{L}\mathfrak{h}$, $\mathcal{L}\mathfrak{e}$ and
$\mathcal{L}\mathfrak{f}$ of $\mathcal{L}\mathfrak{g}$. Then we have the
triangular decomposition
$\mathcal{L}\mathfrak{g}=\mathcal{L}\mathfrak{h}+\mathcal{L}\mathfrak{e}+\mathcal{L}\mathfrak{f}\,$,
that is also associated with the weight space decomposition of
$\mathcal{L}\mathfrak{g}$. With the aid of the basic vector fields on $S^{3}$
that arise from the infinitesimal representation of $SO(3)$ we introduce a
triple of 2-cocycles $\{c_{k};\,k=0,1,2\}$ on the Lie algebra $\mathcal{L}\mathfrak{g}$. Then we have the central extension $\mathcal{L}\mathfrak{g}\oplus\oplus_{k=0}^{2}\mathbf{C}a_{k}$ associated to the 2-cocycles $\{c_{k}\}_{k=0,1,2}$. Adjoining a derivation coming from the
radial vector field $\mathbf{n}$ on $S^{3}$ we obtain the second central
extension
$\widehat{\mathfrak{g}}=\mathcal{L}\mathfrak{g}\oplus\oplus_{k=0}^{2}\mathbf{C}a_{k}\oplus\mathbf{C}n$.
The root space decomposition and the Chevalley generators of $\widehat{\mathfrak{g}}$ will be given.
Mathematics Subject Classification. 17B65, 17B67, 22E67, 81R10, 81R25, 15B33.
Key Words: Infinite-dimensional Lie algebras, Current algebras, Kac-Moody Lie
algebra, Spinor analysis.
## 1 Introduction
Any affine Kac-Moody algebra of untwisted type can be realized in terms of a
central extension of the loop algebra of a semisimple Lie algebra, [K]. Let
$L=\mathbf{C}[t,t^{-1}]$ be the algebra of Laurent polynomials in $t$. Given a
semisimple Lie algebra $\mathfrak{g}$ the loop algebra
$L\mathfrak{g}=L\otimes_{\mathbf{C}}\mathfrak{g}$ is an infinite dimensional
complex Lie algebra with the bracket $[\,,\,]$ defined by
$[P\otimes x,\,Q\otimes y]=PQ\otimes\,[x,y]\,,\quad P,Q\in
L,\,x,y\in\mathfrak{g}.$
We define a 2-cocycle on the algebra $L$ by the formula:
$c_{o}(P,Q)=\frac{1}{2\pi}\int_{S^{1}}\,\frac{dP}{dt}(t)\cdot Q(t)\,dt.$
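For orientation, evaluating this cocycle on the monomial basis of $L$, and reading $\int_{S^{1}}\cdot\,dt$ as the contour integral over the unit circle (our reading of the normalization), gives

$c_{o}(t^{m},t^{n})=\frac{1}{2\pi}\oint_{|t|=1}m\,t^{m+n-1}\,dt=i\,m\,\delta_{m+n,0}\,,$

so, up to the overall constant fixed by the chosen convention, $c_{o}(t^{m},t^{n})$ is proportional to $m\,\delta_{m+n,0}$, the familiar cocycle defining the affine Kac-Moody central extension.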
By virtue of the non-degenerate symmetric bilinear form $(\cdot|\cdot)$ on
$\mathfrak{g}$ we extend the 2-cocycle $c_{o}$ to a 2-cocycle $c$ on the Lie
algebra $L\mathfrak{g}\,$:
$c(P\otimes x,\,Q\otimes y)\,=\,(x|y)c_{0}(P,Q)\,.$
Let $L\mathfrak{g}\oplus\mathbf{C}a$ be the extension of $L\mathfrak{g}$ by a
1-dimensional center associated to the cocycle $c$. The Euler derivation
$t\frac{d}{dt}$ acts on $L\mathfrak{g}\oplus\mathbf{C}a$ as an outer
derivation and kills $c$. Then adjoining the derivation $d$ to
$L\mathfrak{g}\oplus\mathbf{C}a$ we have the Lie algebra:
$\widehat{\mathfrak{g}}=L\mathfrak{g}\oplus\mathbf{C}a\oplus\mathbf{C}d.$
We follow this procedure to have central extensions of current algebras on
$S^{3}$. We introduce the algebra of Laurent polynomial type harmonic spinors
on $S^{3}$. It is called the algebra of current on $S^{3}$ and is denoted by
$\mathcal{L}\,$. It plays the same role as the algebra $L$ of Laurent
polynomials does for the loop algebra. The current algebra of $\mathfrak{g}$
on $S^{3}$ is the real Lie algebra $\mathcal{L}\mathfrak{g}$ that is generated
by $\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}$ . Then we shall introduce a
triple of 2-cocycles on $\mathcal{L}\,$, and extend them to 2-cocycles on the
current algebra $\mathcal{L}\mathfrak{g}$. For this purpose we prepare in
section 2 a rather long introduction to our previous results on analysis of
harmonic spinors on $\mathbf{C}^{2}$, [F, G-M, Ko1, Ko2, Ko3] and [K-I], that
is, we develop some parallel results as in classical analysis; the separation
of variable method for Dirichlet problem, Fourier expansion by the
eigenfuctions of Laplacian, Cauchy integral formula for holomorphic functions
and Laurent expansion of meromorphic functions etc.. For example, the Dirac
operator on spinors corresponds to the Cauchy-Riemann operator on complex
functions. Let $\Delta=\mathbf{H}^{2}$ be the 4-dimensional spinor space, that
is, an irreducible representation of the complex Clifford algebra $\,{\rm
Clif}^{c}_{4}=End(\Delta)$. The algebraic basis of ${\rm Clif}^{c}_{4}$ is
given by the Dirac matrices:
$\gamma_{k}=\begin{pmatrix}0&-i\sigma_{k}\\ i\sigma_{k}&0\end{pmatrix},\,k=1,2,3$, and $\gamma_{4}=\begin{pmatrix}0&-I\\ -I&0\end{pmatrix}$, where the $\sigma_{k}$ are the Pauli matrices. Let $S=\mathbf{R}^{4}\times\Delta$ be
the spinor bundle. The Dirac operator is defined by the following formula:
$\mathcal{D}=\,-\,\frac{\partial}{\partial
x_{1}}\gamma_{4}\,-\,\frac{\partial}{\partial
x_{2}}\gamma_{3}\,-\,\frac{\partial}{\partial
x_{3}}\gamma_{2}\,-\,\frac{\partial}{\partial
x_{4}}\gamma_{1}\,:\,C^{\infty}(\mathbf{R}^{4},S)\longrightarrow\,C^{\infty}(\mathbf{R}^{4},S)\,.$
Let $S^{\pm}=\mathbf{R}^{4}\times\Delta^{\pm}$ be the ( even and odd ) half
spinor bundle corresponding to the decomposition
$\Delta=\Delta^{+}\oplus\Delta^{-}$: $\Delta^{\pm}\simeq\mathbf{H}$. The half
spinor Dirac operator $D=\mathcal{D}|S^{+}$ has the polar decomposition:
$D=\gamma_{+}\left(\frac{\partial}{\partial n}-\partial\!\!\!/\right)$ with the tangential (nonchiral) component $\partial\!\!\!/$ on $S^{3}\subset\mathbf{R}^{4}$. The tangential Dirac operator $\partial\!\!\!/$ on $S^{3}$ is a self-adjoint elliptic differential operator. The eigenvalues of $\partial\!\!\!/$ are $\{\frac{m}{2},\,-\frac{m+3}{2}\,;\,m=0,1,\cdots\}$ with multiplicity $(m+1)(m+2)$. We have an explicitly written polynomial formula of eigenspinors $\left\{\phi^{+(m,l,k)},\,\phi^{-(m,l,k)}\right\}_{0\leq l\leq m,\,0\leq k\leq m+1}$ corresponding to the eigenvalues $\frac{m}{2}$ and $-\frac{m+3}{2}$ respectively that give rise to a complete orthonormal system in $L^{2}(S^{3},S^{+})$, [Ko1, Ko2].
$G\subset\mathbf{C}^{2}$ is called a harmonic spinor on $G$ if $D\phi=0$. Each
$\phi^{+(m,l,k)}$ is extended to a harmonic spinor on $\mathbf{C}^{2}$, while
each $\phi^{-(m,l,k)}$ is extended to a harmonic spinor on
$\mathbf{C}^{2}\setminus\{0\}$ that is regular at infinity. Every harmonic spinor $\varphi$ on $\mathbf{C}^{2}\setminus\{0\}$ has a Laurent expansion
by the basis $\phi^{\pm(m,l,k)}$:
$\varphi(z)=\sum_{m,l,k}\,C_{+(m,l,k)}\phi^{+(m,l,k)}(z)+\sum_{m,l,k}\,C_{-(m,l,k)}\phi^{-(m,l,k)}(z).$
The set of spinors of Laurent polynomial type is denoted by
$\mathbf{C}[\phi^{\pm}]$.
Let $\mathbf{H}$ be the algebra of quaternion numbers. We regard an even spinor also as an $\mathbf{H}$-valued smooth function: $C^{\infty}(S^{3},S^{+})=C^{\infty}(S^{3},\mathbf{H})$, so that the space of spinors $C^{\infty}(S^{3},S^{+})$ is endowed with a multiplication rule:
$\phi_{1}\cdot\phi_{2}=\begin{pmatrix}u_{1}u_{2}-\bar{v}_{1}v_{2}\\ v_{1}u_{2}+\bar{u}_{1}v_{2}\end{pmatrix},\quad\mbox{for }\phi_{i}=\begin{pmatrix}u_{i}\\ v_{i}\end{pmatrix},\,i=1,2.$ (1.1)
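This rule is quaternion multiplication read through the standard embedding $\mathbf{H}\hookrightarrow M_{2}(\mathbf{C})$; a short check (our verification, using the usual identification) is

$\phi=\begin{pmatrix}u\\ v\end{pmatrix}\longleftrightarrow\begin{pmatrix}u&-\bar{v}\\ v&\bar{u}\end{pmatrix},\qquad\begin{pmatrix}u_{1}&-\bar{v}_{1}\\ v_{1}&\bar{u}_{1}\end{pmatrix}\begin{pmatrix}u_{2}&-\bar{v}_{2}\\ v_{2}&\bar{u}_{2}\end{pmatrix}=\begin{pmatrix}u_{1}u_{2}-\bar{v}_{1}v_{2}&\ast\\ v_{1}u_{2}+\bar{u}_{1}v_{2}&\ast\end{pmatrix},$

whose first column reproduces (1.1).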
Let $\mathcal{L}=\mathbf{C}[\phi^{\pm}]|S^{3}$ be the space of spinors on
$S^{3}$ that are obtained by restricting the spinors of Laurent polynomial
type. $\mathcal{L}$ becomes an associative subalgebra of
$C^{\infty}(S^{3},S^{+})$ that is called the algebra of current on $S^{3}$. In
section 3 we introduce the 2-cocycles on $C^{\infty}(S^{3},S^{+})$. Let
$\{\theta_{0},\,\theta_{1},\,\theta_{2}\}$ be the basis of vector fields
on $S^{3}$ coming from the infinitesimal representation of $SO(3)$. Our
2-cocycles on $C^{\infty}(S^{3},S^{+})$ are defined as follows. We put
$\Theta_{k}\phi=\frac{1}{2}\begin{pmatrix}\theta_{k}u\\ \theta_{k}v\end{pmatrix},\,k=0,1,2,\quad\mbox{for }\phi=\begin{pmatrix}u\\ v\end{pmatrix}.$
We introduce the following three non-trivial real valued 2-cocycles
$c_{k},\,k=0,1,2$, on $C^{\infty}(S^{3},S^{+})$ :
$c_{k}(\phi_{1},\phi_{2})\,=\,\,\frac{1}{2\pi^{2}}\int_{S^{3}}\,\,tr\,(\,\Theta_{k}\phi_{1}\cdot\phi_{2}\,)\,d\sigma,\qquad\forall\phi_{1}\,,\,\phi_{2}\in\,C^{\infty}(S^{3},S^{+})\,.$
Since each $\Theta_{k}\,,k=0,1,2$, preserves $\mathcal{L}$, the 2-cocycles
$c_{k},\,k=0,1,2$, restrict to $\mathcal{L}$.
Hitherto we prepared the spaces $C^{\infty}(S^{3},\,S^{+})$ and $\mathcal{L}$
that will play the role of coefficients of current algebras. These are complex
algebras. On the other hand $C^{\infty}(S^{3},S^{+})\simeq
C^{\infty}(S^{3},\mathbf{H})$ has a $\mathbf{H}$-module structure, while our
basic interest is on the real Lie algebra
$\mathcal{L}=\mathbf{C}[\phi^{\pm}]|S^{3}\,$. In such a way it is frequent
that we deal with the fields $\mathbf{H}$, $\mathbf{C}$ and $\mathbf{R}$ in
one formula. So to prove a steady point of view for our subjects we shall
introduce here the concept of quaternion Lie algebras, [Kq]. First we note
that a quaternion module $V=\mathbf{H}\otimes_{\mathbf{C}}V_{o}=V_{o}+JV_{o}$,
$V_{o}$ being a $\mathbf{C}$-module, has two involutions $\sigma$ and $\tau$:
$\sigma(u+Jv)=u-Jv\,,\quad\tau(u+Jv)=\overline{u}+J\overline{v}\,,\quad u,v\in
V_{o}\,.$
A quaternion Lie algebra $\,\mathfrak{q}$ is a real submodule of a quaternion
module $V$ that is endowed with a real Lie algebra structure compatible with
the involutions $\sigma$ and $\tau$:
$\displaystyle\sigma\mathfrak{q}\,\subset\mathfrak{q}\,,$
$\displaystyle\sigma[x\,,y]\,=[\sigma x\,,\sigma y]\,,\quad\tau[x\,,y]\,=[\tau
x\,,\tau y]\,\quad\mbox{ for }\,x,y\in\mathfrak{q}.$
For a complex Lie algebra $\mathfrak{g}$ the quaternification of
$\mathfrak{g}$ is a quaternion Lie algebra $\mathfrak{g}^{q}$ that is
generated ( as a real Lie algebra ) by
$\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{g}$. For example,
$\mathfrak{so}^{\ast}(2n)=\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{so}(n,\mathbf{C})$
is the quaternification of $\mathfrak{so}(n,\mathbf{C})$.
$\mathfrak{sl}(n,\mathbf{H})$ is the quaternification of
$\mathfrak{sl}(n,\mathbf{C})$ though
$\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{sl}(n,\mathbf{C})$ is not a Lie
algebra.
The algebra of current $\mathcal{L}$ is a quaternion Lie algebra. In fact
$\mathcal{L}$ is a real submodule of $C^{\infty}(S^{3},\,\mathbf{H})$ that is
invariant under the involutions $\sigma$ and $\tau$. The associative algebra
$\mathcal{L}$ has the following four commutative subalgebras:
$\{\phi\in\mathcal{L};\,\sigma{\phi}=\pm\phi,\,\tau\phi=\pm\phi\,\}.$
The real part
$\mathcal{K}\,=\\{\phi\in\mathcal{L};\,\sigma{\phi}=\phi,\,\tau\phi=\phi\,\\}$
plays an important role. $\mathcal{K}$ is a commutative normal subalgebra of
$\mathcal{L}$, and satisfies the condition $[\mathcal{K},\,\mathcal{L}]=0$.
Let $\mathfrak{g}$ be a simple Lie algebra that we suppose to be a subalgebra
of $\mathfrak{sl}(n,\mathbf{C})$. Let $\mathcal{L}\mathfrak{g}$ be the
quaternion Lie algebra generated by
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}$ with the Lie bracket defined by
$[\phi_{1}\otimes X_{1}\,,\,\phi_{2}\otimes X_{2}\,]\,=\,(\phi_{1}\cdot\phi_{2})\otimes X_{1}X_{2}\,-\,(\phi_{2}\cdot\phi_{1})\otimes X_{2}X_{1}$ (1.2)
for $\phi_{1},\,\phi_{2}\in\mathcal{L},\,X_{1},X_{2}\in\mathfrak{g}\,.$ Here
the right hand side is the bracket of the tensor product of the associative
algebra $\mathcal{L}$ and the matrix algebra $\mathfrak{g}$.
$\mathcal{L}\mathfrak{g}$ is called the $\,\mathfrak{g}$-current algebra. Let
$\mathfrak{h}$ be the Cartan subalgebra of $\mathfrak{g}$. $\mathfrak{g}$ has
the standard triangular decomposition
$\mathfrak{g}=\mathfrak{h}+\mathfrak{e}\,+\,\mathfrak{f}\,$. Let
$\mathcal{L}\mathfrak{h}$, $\mathcal{L}\mathfrak{e}$ and
$\mathcal{L}\mathfrak{f}$ be the Lie subalgebras of $\mathcal{L}\mathfrak{g}$
generated by $\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{h}$,
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{e}$ and
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{f}$ respectively. Let
$\mathcal{K}\mathfrak{h}=\mathcal{K}\otimes_{\mathbf{R}}\mathfrak{h}$. We find
that $\mathcal{K}\mathfrak{h}$ is a Cartan subalgebra of
$\mathcal{L}\mathfrak{g}$. The adjoint representation $ad_{\mathfrak{h}}:\mathfrak{h}\longrightarrow End_{\mathbf{C}}(\mathfrak{g})$ extends to the adjoint representation $ad_{\mathcal{K}\mathfrak{h}}:\mathcal{K}\mathfrak{h}\longrightarrow End_{\mathcal{L}}(\mathcal{L}\mathfrak{g})$. The associated weight space
decomposition of $\mathcal{L}\mathfrak{g}$ with respect to
$\mathcal{K}\mathfrak{h}$ will be given. We find that the space of non-zero
weights of $\mathcal{L}\mathfrak{g}$ corresponds bijectively to the root space
of $\mathfrak{g}$. Let $\mathfrak{g}_{\lambda}$ be the root space of root
$\lambda$ and let $\Phi^{\pm}$ be the set of positive (respectively negative )
roots of $\mathfrak{g}$. Then we have
$\mathcal{L}\mathfrak{e}\,=\sum_{\lambda\in\Phi^{+}}\mathcal{L}\otimes_{\mathbf{R}}\mathfrak{g}_{\lambda},\quad\mathcal{L}\mathfrak{f}\,=\sum_{\lambda\in\Phi^{-}}\mathcal{L}\otimes_{\mathbf{R}}\mathfrak{g}_{\lambda}.$
Hence $\mathcal{L}\mathfrak{e}=\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{e}$
and $\mathcal{L}\mathfrak{f}=\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{f}$.
$\,\mathcal{L}\mathfrak{h}$ has the weight $0$:
$[\mathcal{K}\mathfrak{h},\,\mathcal{L}\mathfrak{h}\,]=0\,$. Accordingly we
have the triangular decomposition of the $\mathfrak{g}$-current algebra:
$\mathcal{L}\mathfrak{g}=\mathcal{L}\mathfrak{h}\,+\,\mathcal{L}\mathfrak{e}\,+\,\mathcal{L}\mathfrak{f}\,,\quad\mbox{direct
sum}\,.$
In section 5 we discuss our central subject: the central extension of the $\mathfrak{g}$-current algebra. We extend each 2-cocycle $\,c_{k}\,,\,k=0,1,2\,$ on $\mathcal{L}$ to a 2-cocycle on $\mathcal{L}\mathfrak{g}\,$ by the formula
$c_{k}(\phi\otimes X,\,\psi\otimes
Y)\,=\,(X|Y)\,c_{k}(\phi,\psi),\quad\phi,\,\psi\in\mathcal{L}\,,\,X,Y\in\mathfrak{g},$
(1.3)
where $(X|Y)=Trace(XY)$ is the Killing form of $\mathfrak{g}$. Then we have
the associated central extension:
$\,\mathcal{L}\mathfrak{g}(a)=\mathcal{L}\mathfrak{g}\,\oplus(\oplus_{k=0}^{2}\mathbf{C}a_{k}),$
which is a quaternion Lie algebra. The radial vector field $\mathbf{n}$ on
$\mathbf{C}^{2}\setminus 0$ acts on $\mathcal{L}\mathfrak{g}(a)$ as an outer
derivation. Then, adjoining the derivation $\mathbf{n}$, we have the second
central extension:
$\widehat{\mathfrak{g}}=\mathcal{L}\mathfrak{g}(a)\oplus\mathbf{C}n.$
We shall investigate the root space decomposition of
$\,\widehat{\mathfrak{g}}\,$. For a root $\alpha\in\Phi$, let
$\mathfrak{g}_{\alpha}=\\{x\in\mathfrak{g};\,[\,h,\,x\,]=\alpha(h)x\,,\,\forall
h\in\mathfrak{h}\,\\}$ denote the root space of $\alpha$. Put
$\widehat{\mathfrak{h}}\,=\,\mathfrak{h}\,\oplus(\oplus_{k=0}^{2}\mathbf{C}a_{k})\oplus(\mathbf{C}\,n)\,.$
$\widehat{\mathfrak{h}}$ is a commutative subalgebra of
$\widehat{\mathfrak{g}}\,$ and $\,\widehat{\mathfrak{g}}$ is decomposed into a
direct sum of the simultaneous eigenspaces of $ad\,(\hat{h})$,
$\,\hat{h}\in\widehat{\mathfrak{h}}\,$, and $\Phi\subset\mathfrak{h}^{\ast}$
is regarded as a subset of $\,\widehat{\mathfrak{h}}^{\ast}$. We introduce
$\Lambda_{k}\in\widehat{\mathfrak{h}}^{\ast};\,k=0,1,2$ as the dual elements
of $a_{k};\,k=0,1,2$, and $\delta\in\widehat{\mathfrak{h}}^{\ast}$ as the dual
element of $n$. Then
$\alpha_{1},\,\cdots\,,\alpha_{l},\,\delta,\,\Lambda_{0},\Lambda_{1},\Lambda_{2}$
give a basis of $\widehat{\mathfrak{h}}^{\ast}$. The set of roots is
$\widehat{\Phi}=\left\{\frac{m}{2}\delta+\alpha\,;\quad\alpha\in\Phi\,,\,m\in\mathbf{Z}\,\right\}\bigcup\left\{\frac{m}{2}\delta;\quad m\in\mathbf{Z}\,\right\}\,.$
$\widehat{\mathfrak{g}}$ has the weight space decomposition:
$\widehat{\mathfrak{g}}\,=\,\oplus_{m\in\mathbf{Z}}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,\oplus\,\,\oplus_{\alpha\in\Phi,\,m\in\mathbf{Z}}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,.$
Each weight space is given as follows:
$\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,=\,\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{g}_{\alpha}\,,\quad\mbox{for $\alpha\neq 0$ and $m\in\mathbf{Z}$},$
$\widehat{\mathfrak{g}}_{0\delta}\,=\,(\,\mathcal{L}[0]\mathfrak{h}\,)\oplus(\oplus_{k=0}^{2}\mathbf{C}a_{k})\oplus(\mathbf{C}n)\,\supset\,\widehat{\mathfrak{h}}\,,$
$\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,=\,\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{h}\,,\quad\mbox{for $0\neq m\in\mathbf{Z}$}.$
Here $\mathcal{L}[m]$ is the subspace of $\mathcal{L}=\mathbf{C}[\phi^{\pm}]|S^{3}$ consisting of those elements $\phi\in\mathbf{C}[\phi^{\pm}]$ that are of homogeneous degree $m$: $\phi(z)=|z|^{m}\phi(\frac{z}{|z|})$, and $\mathcal{L}[0]\mathfrak{h}$ is the Lie subalgebra generated by $\mathcal{L}[0]\otimes_{\mathbf{C}}\mathfrak{h}$.
In our previous paper [K-I] we dealt with central extensions of $S^{3}\mathbf{H}\otimes_{\mathbf{C}}U(\mathfrak{g})$, where $U(\mathfrak{g})$ is the universal enveloping algebra of a simple Lie algebra $\mathfrak{g}$. In [K-I] we called $\mathbf{H}\otimes_{\mathbf{C}}U(\mathfrak{g})$ the quaternification of $\mathfrak{g}$. But it is too big to be an adequate object of study as a quaternification of a Lie algebra. So we present here new definitions of a quaternion Lie algebra and of quaternification. This article contains many arguments, proofs and calculations that are parallel to those in [K-I], but we prefer to repeat them so that the reader need not refer to our old descriptions and can understand the theory as a unified whole.
## 2 Preliminaries on spinor analysis on $S^{3}\subset\mathbf{C}^{2}$
Here we give a fairly long preliminary account of spinor analysis on $\mathbf{R}^{4}$, since various subjects belonging to quaternion analysis, as well as the detailed properties of harmonic spinors of the Dirac operator on $\mathbf{R}^{4}$, may not be so familiar to the reader. We refer to [F, Ko1] for the exposition on Dirac operators on $\mathbf{R}^{4}$ and to [D-S-Sc, G-M, Ko2] for the function theory of harmonic spinors. Subsections 2.1, 2.2 and 2.3 review the theory of harmonic spinors.
### 2.1 Spinors and the Dirac operator on $\mathbf{R}^{4}$
Let $\mathbf{K}$ be the field $\mathbf{R}$ or $\mathbf{C}$. Let $V$ be a
$\mathbf{K}$-vector space equipped with a quadratic form $q$ over the field
$\mathbf{K}$. The Clifford algebra $C_{\mathbf{K}}(V,q)$ is a
$\mathbf{K}$-algebra which contains $V$ as a sub-vector space and is generated
by the elements of $V$ subject to the relations
$v_{1}v_{2}+v_{2}v_{1}=2q(v_{1},v_{2})\,,$
for $v_{1},\,v_{2}\in V$. In the sequel we denote ${\rm
Clif}_{n}=C_{\mathbf{R}}(\mathbf{R}^{n},\,-x_{1}^{2}-\cdots-x_{n}^{2})$ and
${\rm
Clif}_{n}^{c}\,=C_{\mathbf{C}}(\mathbf{C}^{n},z_{1}^{2}+\cdots+z_{n}^{2})$. It
holds ${\rm Clif}_{n}^{c}={\rm Clif}_{n}\otimes_{\mathbf{R}}\mathbf{C}$. We
have an important isomorphism:
${\rm Clif}_{n+2}^{c}={\rm
Clif}_{n}^{c}\otimes_{\mathbf{C}}\mathbf{C}(2)\,\,.$ (2.1)
Here $\mathbf{K}(m)$ denotes the algebra of $m\times m$-matrices with entries
in the field $\,\mathbf{K}$. Left multiplication of $\mathbf{H}$ on itself yields an endomorphism of $\mathbf{H}$: $\mathbf{H}\simeq End_{\mathbf{H}}\mathbf{H}\simeq\mathbf{C}(2)$. The matrices corresponding to $i\,,j\,,k\in\mathbf{H}\,$ are then given by $i\sigma_{3},\,i\sigma_{2},\,i\sigma_{1}$, where the Pauli matrices are
$\sigma_{1}=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\,,\quad\sigma_{2}=\begin{pmatrix}0&-i\\ i&0\end{pmatrix}\,,\quad\sigma_{3}=\begin{pmatrix}1&0\\ 0&-1\end{pmatrix}\,.$
The relations $\sigma_{i}^{2}=-1$, $i=1,2,3$, and $\sigma_{1}\sigma_{3}+\sigma_{3}\sigma_{1}=0$ show that $\{\sigma_{1},\,\sigma_{3}\}$ generate ${\rm Clif}^{c}_{2}$, so that ${\rm Clif}^{c}_{2}=\mathbf{H}$. Let
$\Delta=\mathbf{C}^{2}\otimes_{\mathbf{C}}\mathbf{C}^{2}$ be the vector space
of complex 4-spinors that gives the spinor representation of Clifford algebra
${\rm Clif}^{c}_{4}\,$: ${\rm Clif}^{c}_{4}={\rm
End}_{\mathbf{C}}(\Delta)=\mathbf{C}(4)$. So $\,{\rm Clif}_{4}^{c}$ is
generated by the following Dirac matrices:
$\gamma_{k}\,=\,\begin{pmatrix}0&-i\sigma_{k}\\ i\sigma_{k}&0\end{pmatrix}\,,\quad k=1,2,3,\qquad\gamma_{4}\,=\,\begin{pmatrix}0&-I\\ -I&0\end{pmatrix}\,.$
The set
$\left\{\gamma_{p},\quad\gamma_{p}\gamma_{q},\quad\gamma_{p}\gamma_{q}\gamma_{r},\quad\gamma_{p}\gamma_{q}\gamma_{r}\gamma_{s}\,;\quad 1\leq p,q,r,s\leq 4\,\right\}$ (2.2)
gives a basis of the 16-dimensional algebra ${\rm Clif}^{c}_{4}\,\simeq\,{\rm End}_{\mathbf{C}}(\Delta)\,$, with the following relations:
$\gamma_{p}\gamma_{q}+\gamma_{q}\gamma_{p}=2\delta_{pq}\,.$
The representation $\Delta$ decomposes into irreducible representations
$\Delta^{\pm}=\mathbf{C}^{2}$ of $\,{\rm Spin}(4)$.
Let $S=\mathbf{C}^{2}\times\Delta$ be the trivial spinor bundle on
$\mathbf{C}^{2}$. The corresponding bundle
$S^{+}=\mathbf{C}^{2}\times\Delta^{+}$ (respectively $S^{-}=\mathbf{C}^{2}\times\Delta^{-}$) is called the even (respectively odd) half spinor bundle and the sections are called even (respectively odd) spinors. On the other hand, since ${\rm Clif}^{c}_{4}=\mathbf{H}(2)\otimes_{\mathbf{R}}\mathbf{C}$ and $\Delta=\mathbf{H}^{2}=\mathbf{H}\oplus\mathbf{H}$, we may regard an even spinor on $M\subset\mathbf{R}^{4}$ as an $\mathbf{H}$-valued smooth function: $C^{\infty}(M,\mathbf{H})\,=\,C^{\infty}(M,S^{+})$. We shall freely use the alternative notation for a spinor:
$C^{\infty}(M,\mathbf{H})\,\ni\,u+jv\,\longleftrightarrow\,\begin{pmatrix}u\\ v\end{pmatrix}\,\in\,C^{\infty}(M,S^{+})\,.$ (2.3)
The Dirac operator is defined by
$\mathcal{D}=c\circ d\,:\,C^{\infty}(M,S)\,\longrightarrow\,C^{\infty}(M,S)\,,$
where $d:S\rightarrow T^{*}\mathbf{C}^{2}\otimes S\simeq
T\mathbf{C}^{2}\otimes S$ is the covariant derivative which is the exterior
differential in this case, and $c:T\mathbf{C}^{2}\otimes S\rightarrow S$ is
the bundle homomorphism coming from the Clifford multiplication. With respect
to the Dirac matrices $\\{\gamma_{j}\\}_{j=1,2,3,4}\,$, (2.2), the Dirac
operator has the expression:
$\mathcal{D}=\,-\,\frac{\partial}{\partial
x_{1}}\gamma_{4}\,-\,\frac{\partial}{\partial
x_{2}}\gamma_{3}\,-\,\frac{\partial}{\partial
x_{3}}\gamma_{2}\,-\,\frac{\partial}{\partial x_{4}}\gamma_{1}\,.$
By means of the decomposition $S=S^{+}\oplus S^{-}$ the Dirac operator has the
chiral decomposition:
$\mathcal{D}=\begin{pmatrix}0&D^{\dagger}\\\
D&0\end{pmatrix}:C^{\infty}(\mathbf{C}^{2},S^{+}\oplus S^{-})\rightarrow
C^{\infty}(\mathbf{C}^{2},S^{+}\oplus S^{-}).$
If we adopt the notation
$\frac{\partial}{\partial z_{1}}=\frac{\partial}{\partial
x_{1}}-i\frac{\partial}{\partial x_{2}}\,,\quad\frac{\partial}{\partial
z_{2}}=\frac{\partial}{\partial x_{3}}-i\frac{\partial}{\partial x_{4}}\,,$
$D$ and $D^{\dagger}$ have the following coordinate expressions:
$D=\begin{pmatrix}\frac{\partial}{\partial z_{1}}&-\frac{\partial}{\partial\bar{z}_{2}}\\ \frac{\partial}{\partial z_{2}}&\frac{\partial}{\partial\bar{z}_{1}}\end{pmatrix},\qquad D^{\dagger}=\begin{pmatrix}\frac{\partial}{\partial\bar{z}_{1}}&\frac{\partial}{\partial\bar{z}_{2}}\\ -\frac{\partial}{\partial z_{2}}&\frac{\partial}{\partial z_{1}}\end{pmatrix}.$
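As a quick check of these expressions, the spinor $\phi=\begin{pmatrix}z_{2}\\ -\bar{z}_{1}\end{pmatrix}$ (the generator $\kappa$ of Theorem 3.2 below) satisfies
$D\phi\,=\,\begin{pmatrix}\frac{\partial}{\partial z_{1}}z_{2}+\frac{\partial}{\partial\bar{z}_{2}}\bar{z}_{1}\\ \frac{\partial}{\partial z_{2}}z_{2}-\frac{\partial}{\partial\bar{z}_{1}}\bar{z}_{1}\end{pmatrix}\,=\,\begin{pmatrix}0+0\\ 1-1\end{pmatrix}\,=\,0\,,$
so it is an even harmonic spinor.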
### 2.2 Harmonic spinors
#### 2.2.1 Harmonic polynomials on $S^{3}\subset\mathbf{C}^{2}$
The right action of $SU(2)$ on $\mathbf{C}^{2}$ is given by
$R_{g}z\,=\,\begin{pmatrix}\,az_{1}-b\overline{z}_{2}\\ az_{2}+b\overline{z}_{1}\end{pmatrix},\quad g=\begin{pmatrix}a&-\overline{b}\\ b&\overline{a}\end{pmatrix}\in SU(2),\quad z=\begin{pmatrix}z_{1}\\ z_{2}\end{pmatrix}\in\mathbf{C}^{2}.$
Then the infinitesimal action of $su(2)$ on $\mathbf{C}^{2}$ is
$((dR_{e})X)F=\frac{d}{dt}|_{t=0}R_{\exp\,tX}F\,,\quad X\in su(2)\,.$
It yields the following basis of vector fields
$(\theta_{0},\theta_{1},\theta_{2})$ on $\{|z|=1\}\simeq S^{3}$:
$\theta_{1}=\frac{1}{\sqrt{-1}}dR(\sigma_{2})\,,\,\,\theta_{2}=\frac{1}{\sqrt{-1}}dR(\sigma_{1})\,,\,\,\theta_{0}=-\frac{1}{\sqrt{-1}}dR(\sigma_{3})\,.$
(2.4)
We shall often prefer the following basis $(e_{+},e_{-},\theta)\,$, given by
$\theta_{0}=\sqrt{-1}\theta\,,\quad\theta_{1}=e_{+}+e_{-}\,,\quad\theta_{2}=\sqrt{-1}(e_{+}-e_{-})\,.$
(2.5)
The local coordinate expressions of these vector fields are:
$e_{+}\,=\,-z_{2}\frac{\partial}{\partial\bar{z}_{1}}+z_{1}\frac{\partial}{\partial\bar{z}_{2}}\,,\qquad e_{-}\,=\,-\bar{z}_{2}\frac{\partial}{\partial z_{1}}+\bar{z}_{1}\frac{\partial}{\partial z_{2}}\,,$ (2.6)
$\theta\,=\,z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}-\bar{z}_{1}\frac{\partial}{\partial\bar{z}_{1}}-\bar{z}_{2}\frac{\partial}{\partial\bar{z}_{2}}\,,$ (2.7)
and the following commutation relations hold:
$[\theta,e_{+}]=2e_{+},\quad[\theta,e_{-}]=-2e_{-},\quad[e_{+},e_{-}]=-\theta.$
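These relations can be checked directly from (2.6) and (2.7); for instance, comparing coefficients,
$[\theta,e_{+}]\,=\,\bigl{(}\theta(-z_{2})-e_{+}(-\bar{z}_{1})\bigr{)}\frac{\partial}{\partial\bar{z}_{1}}+\bigl{(}\theta(z_{1})-e_{+}(-\bar{z}_{2})\bigr{)}\frac{\partial}{\partial\bar{z}_{2}}\,=\,-2z_{2}\frac{\partial}{\partial\bar{z}_{1}}+2z_{1}\frac{\partial}{\partial\bar{z}_{2}}\,=\,2e_{+}\,.$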
The dual basis is given by the differential 1-forms:
$\theta_{0}^{\ast}\,=\,\frac{1}{2\sqrt{-1}\,|z|^{2}}(\overline{z}_{1}dz_{1}+\overline{z}_{2}dz_{2}-z_{1}d\overline{z}_{1}-z_{2}d\overline{z}_{2})\,,$
$\theta_{1}^{\ast}\,=\,\frac{1}{2|z|^{2}}(e_{+}^{\ast}+e_{-}^{\ast})\,,\qquad\theta_{2}^{\ast}\,=\,\frac{1}{2\sqrt{-1}\,|z|^{2}}(e_{+}^{\ast}-e_{-}^{\ast})\,,$
where
$e_{+}^{\ast}=-\overline{z}_{2}d\overline{z}_{1}+\overline{z}_{1}d\overline{z}_{2}\,,\qquad e_{-}^{\ast}=-z_{2}dz_{1}+z_{1}dz_{2}\,,$
and we have written the formulae as extended to $\mathbf{C}^{2}\setminus\{0\}$.
$\theta^{\ast}_{k}$, $k=0,1,2$, are real 1-forms:
$\,\overline{\theta}_{k}^{\ast}=\theta_{k}^{\ast}\,$. It holds that
$\theta_{j}^{\ast}(\theta_{k})=\delta_{jk}\,$ for $\,j,k=0,1,2\,$. The
integrability condition becomes
$\frac{\sqrt{-1}}{2}d\theta_{0}^{\ast}=\theta_{1}^{\ast}\wedge\theta_{2}^{\ast}\,,\quad\frac{\sqrt{-1}}{2}d\theta_{1}^{\ast}=\theta_{2}^{\ast}\wedge\theta_{0}^{\ast}\,,\quad\frac{\sqrt{-1}}{2}d\theta_{2}^{\ast}=\theta_{0}^{\ast}\wedge\theta_{1}^{\ast}\,,$
(2.8)
and
$\theta_{0}^{\ast}\wedge\theta_{1}^{\ast}\wedge\theta_{2}^{\ast}=d\sigma_{S^{3}}\,$
is the volume form on $S^{3}$.
In the following we denote a function $f(z,\bar{z})$ of variables
$z,\bar{z}\,$ simply by $f(z)$.
###### Definition 2.1.
For $m=0,1,2,\cdots$, and $l,k=0,1,\cdots,m$, we define the monomials:
$v^{k}_{(l,m-l)}\,=\,(e_{-})^{k}\,z^{l}_{1}z^{m-l}_{2}\,,$ (2.9)
$w^{k}_{(l,m-l)}\,=\,(-1)^{k}\frac{l!}{(m-k)!}\,v^{m-l}_{(k,m-k)}\,.$ (2.10)
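For example, for $m=1$ these give
$v^{0}_{(1,0)}=z_{1}\,,\quad v^{1}_{(1,0)}=e_{-}z_{1}=-\bar{z}_{2}\,,\quad v^{0}_{(0,1)}=z_{2}\,,\quad v^{1}_{(0,1)}=e_{-}z_{2}=\bar{z}_{1}\,,$
each of which is easily seen to be harmonic, in accordance with Proposition 2.2 below.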
We note that the monomials $v^{k}_{(l,m-l)}$ in (2.9) come naturally from the calculations of the right action of $SU(2)$ on $\mathbf{C}^{2}$, just as the monomials $w^{k}_{(l,m-l)}$ are obtained from the left action of $SU(2)$ on $\mathbf{C}^{2}\setminus\{0\}$, [Ko0, Ko1].
###### Proposition 2.2.
1.
$v^{k}_{(l,m-l)}$ are harmonic polynomials on $\mathbf{C}^{2}$; $\Delta
v^{k}_{(l,m-l)}=0\,$, where $\Delta=\frac{\partial^{2}}{\partial
z_{1}\partial\bar{z}_{1}}+\frac{\partial^{2}}{\partial
z_{2}\partial\bar{z}_{2}}$.
2.
$\left\{\,\frac{1}{\sqrt{2}\pi}v^{k}_{(l,m-l)}\,;\,m=0,1,\cdots,\,0\leq k,l\leq m\,\right\}$ forms an $L^{2}(S^{3})$-complete orthonormal system of the space of harmonic polynomials. Similar assertions hold for $\left\{\,\frac{1}{\sqrt{2}\pi}w^{k}_{(l,m-l)}\,;\,m=0,1,\cdots,\,0\leq k,l\leq m\,\right\}$.
3.
For each pair $(m,l)$, $0\leq l\leq m$, the subspace $H_{(m,l)}=\{v^{k}_{(l,m-l)}\,;0\leq k\leq m\}$ gives an $(m+1)$-dimensional right representation of $su(2)$ with the highest weight $\frac{m}{2}$.
4.
For each pair $(m,l)$, $0\leq l\leq m$, the subspace $H^{{\dagger}}_{(m,l)}=\{w^{k}_{(l,m-l)}\,;0\leq k\leq m\}$ gives an $(m+1)$-dimensional left representation of $su(2)$ with the highest weight $\frac{m}{2}$.
###### Proposition 2.3.
The set of harmonic polynomials on $S^{3}$ forms a graded algebra. The proof is contained in the proof of Theorem 3.2 in the next section.
#### 2.2.2 Harmonic spinors on $S^{3}\subset\mathbf{C}^{2}$
The radial vector field is defined by
$\frac{\partial}{\partial
n}=\frac{1}{2|z|}(\nu+\bar{\nu}),\qquad\nu=z_{1}\frac{\partial}{\partial
z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}.$
We shall denote by $\gamma$ the Clifford multiplication of the radial vector
$\frac{\partial}{\partial n}\,$. The multiplication $\gamma$ changes the
chirality: $\gamma=\gamma_{+}\oplus\gamma_{-}:S^{+}\oplus S^{-}\longrightarrow
S^{-}\oplus S^{+}$, and $\gamma^{2}=1$.
###### Proposition 2.4.
[Ko1] The Dirac operators $D$ and $D^{\dagger}$ have the following polar
decompositions:
$D\,=\,\gamma_{+}\left(\frac{\partial}{\partial n}-\partial\!\!\!/\right)\,:\,S^{+}\longrightarrow\,S^{-}\,,\qquad D^{\dagger}\,=\,\left(\frac{\partial}{\partial n}+\partial\!\!\!/+\frac{3}{2|z|}\right)\gamma_{-}\,:\,S^{-}\longrightarrow\,S^{+}\,,$
where the non-chiral Dirac operator $\partial\!\!\!/$ is given by
$\partial\!\!\!/\,=\,-\left[\sum_{k=0}^{2}\left(\frac{1}{|z|}\theta_{k}\right)\cdot\nabla_{\frac{1}{|z|}\theta_{k}}\right]\,=\,\frac{1}{|z|}\begin{pmatrix}-\frac{1}{2}\theta&\,e_{+}\\ -e_{-}&\,\frac{1}{2}\theta\end{pmatrix}.$
$\partial\!\!\!/$ restricted to $S^{3}=\{|z|=1\}$ is called the tangential Dirac operator:
$\partial\!\!\!/|S^{3}:C^{\infty}(S^{3},S^{+})\longrightarrow C^{\infty}(S^{3},S^{+})\,.$
The tangential Dirac operator on $S^{3}$ is a self-adjoint elliptic differential operator.
Now we introduce a basis of the space of even harmonic spinors by the
following formula.
###### Definition 2.5.
For $m=0,1,2,\cdots;\,l=0,1,\cdots,m$ and $k=0,1,\cdots,m+1$, we put
$\phi^{+(m,l,k)}(z)\,=\,\sqrt{\frac{(m+1-k)!}{k!l!(m-l)!}}\begin{pmatrix}kv^{k-1}_{(l,m-l)}\\ -v^{k}_{(l,m-l)}\end{pmatrix},$
$\phi^{-(m,l,k)}(z)\,=\,\sqrt{\frac{(m+1-k)!}{k!l!(m-l)!}}\left(\frac{1}{|z|^{2}}\right)^{m+2}\begin{pmatrix}w^{k}_{(m+1-l,l)}\\ w^{k}_{(m-l,l+1)}\end{pmatrix}.$ (2.11)
We have the following
###### Proposition 2.6.
[Ko1]
1.
$\phi^{+(m,l,k)}$ is a harmonic spinor on $\mathbf{C}^{2}$ and
$\phi^{-(m,l,k)}$ is a harmonic spinor on $\mathbf{C}^{2}\backslash\\{0\\}$
that is regular at infinity.
2.
On $S^{3}=\{|z|=1\}$ we have:
$\partial\!\!\!/\phi^{+(m,l,k)}=\frac{m}{2}\phi^{+(m,l,k)}\,,\qquad\partial\!\!\!/\phi^{-(m,l,k)}=-\frac{m+3}{2}\phi^{-(m,l,k)}\,.$
3.
The eigenvalues of $\,\partial\!\!\!/$ are
$\frac{m}{2}\,,\quad-\frac{m+3}{2}\,;\quad m=0,1,\cdots,$
and the multiplicity of each eigenvalue is equal to $(m+1)(m+2)$.
4.
The set of eigenspinors
$\left\\{\frac{1}{\sqrt{2}\pi}\phi^{+(m,l,k)},\quad\frac{1}{\sqrt{2}\pi}\phi^{-(m,l,k)}\,;\quad
m=0,1,\cdots,\,0\leq l\leq m,\,0\leq k\leq m+1\right\\}$
forms a complete orthonormal system of $L^{2}(S^{3},S^{+})$.
The Cauchy kernel (fundamental solution) of the half Dirac operator $D:C^{\infty}(\mathbf{C}^{2},\,S^{+})\longrightarrow C^{\infty}(\mathbf{C}^{2},\,S^{-})$ is given by
$K^{{\dagger}}(z,\zeta)\,=\,\frac{1}{|\zeta-z|^{3}}\gamma_{-}(\zeta-z)\,.$
We have the following integral representation of spinors:
###### Theorem 2.7.
[Ko1] Let $G$ be a domain of $\mathbf{C}^{2}$ and let $\varphi\in
C^{\infty}(\overline{G},\,S^{+})$. We have
$\varphi(z)=-\frac{1}{2\pi^{2}}\int_{G}\,K^{{\dagger}}(z,\zeta)D\varphi(\zeta)dv(\zeta)+\frac{1}{2\pi^{2}}\int_{\partial
G}\,K^{{\dagger}}(z,\zeta)(\gamma_{+}\varphi)(\zeta)d\sigma(\zeta)\,,\quad
z\in G.$
The Cauchy kernel has the following eigenfunction expansion by the basis $\phi^{\pm(m,l,k)}(z-c)$:
###### Theorem 2.8.
[Ko1, Ko2] For $|z-c|<|\zeta-c|\,$,
$K^{{\dagger}}(z,\zeta)\cdot\gamma_{+}(\zeta-c)=\sum_{\,m,l,k\,}\,|\zeta-c|^{-(2m+3)}\overline{\phi^{+(m,l,k)}(\zeta-c)}\otimes\phi^{+(m,l,k)}(z-c)\,.$
That is, the Cauchy kernel and the Bergman kernel on the 4-disc $|z|\leq 1$
coincide.
## 3 2-cocycles on the space of spinors over $S^{3}$
We shall introduce a triple of 2-cocycles on the space of smooth spinors on
$S^{3}$, then on the space of Laurent polynomial type harmonic spinors. We
shall further introduce a 2-cocycle coming from the radial derivation of
spinors.
### 3.1 Algebra of Laurent polynomial type harmonic spinors on $S^{3}$
#### 3.1.1
The space of even spinors $\Delta^{+}$ is isomorphic to the quaternion vector
space $\mathbf{H}$, and we have an identification
$C^{\infty}(S^{3},\,S^{+})\simeq C^{\infty}(S^{3},\mathbf{H})$, (2.3). Hence
the multiplication of two even spinors is defined by
$\phi_{1}\cdot\phi_{2}\,=\,\begin{pmatrix}u_{1}u_{2}-\overline{v}_{1}v_{2}\\ v_{1}u_{2}+\overline{u}_{1}v_{2}\end{pmatrix}\,,$ (3.1)
for $\phi_{i}=\begin{pmatrix}u_{i}\\ v_{i}\end{pmatrix}$, $i=1,2$.
It corresponds to the quaternion multiplication:
$(u_{1}+jv_{1})(u_{2}+jv_{2})=(u_{1}u_{2}-\overline{v}_{1}v_{2})+j(v_{1}u_{2}+\overline{u}_{1}v_{2}).$
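Indeed, expanding the quaternion product and using the rule $jc=\bar{c}\,j$ for $c\in\mathbf{C}$, together with $j^{2}=-1$:
$(u_{1}+jv_{1})(u_{2}+jv_{2})\,=\,u_{1}u_{2}+j\bar{u}_{1}v_{2}+jv_{1}u_{2}+j^{2}\bar{v}_{1}v_{2}\,=\,(u_{1}u_{2}-\bar{v}_{1}v_{2})+j(v_{1}u_{2}+\bar{u}_{1}v_{2})\,.$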
With this multiplication the $\mathbf{C}$-vector space
$C^{\infty}(S^{3},\,S^{+})$ becomes an associative $\mathbf{R}$-algebra.
We have the Laurent expansion of harmonic spinors, that is, a harmonic spinor
$\varphi$ on $\mathbf{C}^{2}\setminus\\{0\\}$ has an expansion by the basic
spinors $\\{\,\phi^{\pm(m,l,k)}\\}_{m,l,k}\,$:
$\varphi(z)=\sum_{m,l,k}\,C_{+(m,l,k)}\phi^{+(m,l,k)}(z)+\sum_{m,l,k}\,C_{-(m,l,k)}\phi^{-(m,l,k)}(z),$
(3.2)
which is uniformly convergent on any compact subset of
$\mathbf{C}^{2}\setminus\\{0\\}$. The coefficients $C_{\pm(m,l,k)}$ are given
by the formula:
$C_{\pm(m,l,k)}=\,\frac{1}{2\pi^{2}}\int_{S^{3}}\,\langle\varphi,\,\phi^{\pm(m,l,k)}\rangle\,d\sigma,$
where $\langle\,,\,\rangle$ is the inner product of $S^{+}$. We have
$\int_{S^{3}}\,tr\,\varphi\,d\sigma\,=\,4\pi^{2}\,\mathrm{Re}\,C_{+(0,0,1)}\,,$ (3.3)
where $\mathrm{Re}$ designates the real part.
###### Definition 3.1.
We call the series (3.2) a spinor of Laurent polynomial type if only finitely many coefficients $C_{\pm(m,l,k)}$ are non-zero. The space of spinors of Laurent polynomial type is denoted by $\mathbf{C}[\phi^{\pm}]$.
###### Theorem 3.2.
The restriction of $\,\mathbf{C}[\phi^{\pm}\,]$ to $S^{3}$ is an associative subalgebra of $C^{\infty}(S^{3},\,S^{+})=C^{\infty}(S^{3},\mathbf{H})$ generated by the spinors:
$I\,=\,\phi^{+(0,0,1)}=\begin{pmatrix}1\\ 0\end{pmatrix},\quad J\,=\,-\,\phi^{+(0,0,0)}=\begin{pmatrix}0\\ 1\end{pmatrix},$
$\kappa\,=\,\phi^{+(1,0,1)}=\begin{pmatrix}z_{2}\\ -\overline{z}_{1}\end{pmatrix},\quad\mu\,=\,\phi^{-(0,0,0)}=\begin{pmatrix}z_{2}\\ \overline{z}_{1}\end{pmatrix}.$
###### Proof.
In Lemma 4.1 of [Ko0] we proved the product formula for the harmonic polynomials $v^{k}_{(a,b)}\,$:
$v^{k_{1}}_{(a_{1},b_{1})}v^{k_{2}}_{(a_{2},b_{2})}=\sum_{j=0}^{a_{1}+a_{2}+b_{1}+b_{2}}C_{j}|z|^{2j}\,v^{k_{1}+k_{2}-j}_{(a_{1}+a_{2}-j,\,b_{1}+b_{2}-j)}\,,$ (3.6)
for some rational numbers $C_{j}=C_{j}(a_{1},a_{2},b_{1},b_{2},k_{1},k_{2})$.
Let $k=k_{1}+k_{2}$, $a=a_{1}+a_{2}$ and $b=b_{1}+b_{2}$. The above product
formula yields the fact that, restricted to $S^{3}$, the harmonic polynomial
$v^{k}_{(a,b)}$ is equal to a constant multiple of
$\,v^{k_{1}}_{(a_{1},b_{1})}\cdot v^{k_{2}}_{(a_{2},b_{2})}$ modulo a linear
combination of polynomials $v^{k-j}_{(a-j,b-j)}\,$, $1\leq j\leq\min(k,a,b)$. Hence the set of harmonic polynomials forms a graded algebra. On the other hand
we see that a spinor of the form $\left(\begin{array}[]{c}v^{k}_{(l,m-l)}\\\
0\end{array}\right)$ or $\left(\begin{array}[]{c}0\\\
v^{k+1}_{(l,m-l)}\end{array}\right)$ is written as a linear combination of
$\phi^{+(m,l,k+1)}$ and $\phi^{-(m-1,k,l)}$. Therefore we find that any
product of two spinors
$\phi^{\pm(m_{1},l_{1},k_{1})}\cdot\phi^{\pm(m_{2},l_{2},k_{2})}$ is written
as a linear combination of $\phi^{\pm(m_{1}+m_{2}-n,\cdot,\cdot)}$, $1\leq
n\leq m_{1}+m_{2}$. Therefore $\mathbf{C}[\phi^{\pm}]|_{S^{3}}$ becomes an
associative algebra. Moreover $\phi^{\pm(m,l,k)}$ is written as a linear combination of the products $\,\phi^{\pm(m_{1},l_{1},k_{1})}\cdot\phi^{\pm(m_{2},l_{2},k_{2})}$ for $0\leq m_{1}+m_{2}\leq m-1\,$, $0\leq l_{1}+l_{2}\leq l$ and $0\leq k_{1}+k_{2}\leq k$. Hence we find that the algebra $\mathbf{C}[\phi^{\pm}]|_{S^{3}}$ is graded and is generated by the four spinors
$I=\phi^{+(0,0,1)}\,,\,J=-\phi^{+(0,0,0)}\,,\,\kappa=\phi^{+(1,0,1)}\,,\,\mu=\phi^{-(0,0,0)}\,.$
∎
Examples
$\phi^{+(1,1,1)}\,=\,\kappa\,J\,=\,\begin{pmatrix}z_{1}\\ \overline{z}_{2}\end{pmatrix}\,,\qquad\phi^{-(0,0,1)}\,=\,\mu\,J\,=\,\begin{pmatrix}-z_{1}\\ \overline{z}_{2}\end{pmatrix}\,,$
$\phi^{+(1,0,0)}\,=\,\sqrt{2}\begin{pmatrix}0\\ -z_{2}\end{pmatrix}\,=\,-\frac{1}{\sqrt{2}}\,J(\kappa+\mu)\,,$
$\phi^{-(1,1,1)}\,=\,\frac{1}{2}\,\mu\cdot\bigl{(}-\kappa+\mu+\tau(\kappa+\mu)\bigr{)}\,=\,\begin{pmatrix}|z_{2}|^{2}-|z_{1}|^{2}\\ 2\overline{z}_{1}\overline{z}_{2}\end{pmatrix}\,,\qquad\mbox{ for }\,|z|=1\,.$
We must note that $\mathbf{C}[\,\phi^{\pm}\,]$ over $\mathbf{C}^{2}\setminus\{0\}$ is not an algebra, because in the formula (3.6) $|z|\neq 1$ off $S^{3}$.
###### Corollary 3.3.
Let $\tau$ and $\sigma$ be the involutions on $C^{\infty}(S^{3},S^{+})$ defined by
$\tau\phi=\begin{pmatrix}\overline{u}\\ \overline{v}\end{pmatrix}\,,\quad\sigma\phi=\begin{pmatrix}u\\ -v\end{pmatrix}\,,\quad\mbox{ for }\,\phi=\begin{pmatrix}u\\ v\end{pmatrix}\,.$
Then the involutions $\tau$ and $\sigma$ are homomorphisms of the $\mathbf{R}$-algebra $\,\mathbf{C}[\phi^{\pm}]|_{S^{3}}$.
In fact, since $\begin{pmatrix}v^{k}_{(l,m-l)}\\ 0\end{pmatrix}$ and $\begin{pmatrix}0\\ v^{k+1}_{(l,m-l)}\end{pmatrix}$ are written as linear combinations of $\phi^{+(m,l,k+1)}$ and $\phi^{-(m-1,k,l)}$, it follows that $\sigma\phi^{\pm(m-1,k,l)}\in\mathbf{C}[\phi^{\pm}]$. By virtue of the property $\,\overline{v}^{k}_{m,l}=(-1)^{m-l-k}\frac{k!}{(m-k)!}v^{m-k}_{m,m-l}\,$, $\,\tau$ is also a homomorphism:
$\,\sigma\phi_{1}\cdot\sigma\phi_{2}=\sigma(\phi_{1}\cdot\phi_{2})\,,\quad\tau\phi_{1}\cdot\tau\phi_{2}=\tau(\phi_{1}\cdot\phi_{2})\,.$
#### 3.1.2
Now we introduce the following $\mathbf{R}$-bilinear bracket on
$C^{\infty}(S^{3},S^{+})$:
$\left[\phi_{1},\,\phi_{2}\,\right]\,=\,\Bigl{[}\,\begin{pmatrix}u_{1}\\ v_{1}\end{pmatrix},\,\begin{pmatrix}u_{2}\\ v_{2}\end{pmatrix}\,\Bigr{]}\,=\,\begin{pmatrix}v_{1}\bar{v}_{2}-\bar{v}_{1}v_{2}\\ \,(u_{2}-\bar{u}_{2})v_{1}-(u_{1}-\bar{u}_{1})v_{2}\,\end{pmatrix},$ (3.10)
for even spinors $\phi_{1}=\begin{pmatrix}u_{1}\\ v_{1}\end{pmatrix}$ and $\phi_{2}=\begin{pmatrix}u_{2}\\ v_{2}\end{pmatrix}$.
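A direct computation from (3.1) shows that this bracket is precisely the commutator of the multiplication:
$\phi_{1}\cdot\phi_{2}-\phi_{2}\cdot\phi_{1}\,=\,\begin{pmatrix}(u_{1}u_{2}-\bar{v}_{1}v_{2})-(u_{2}u_{1}-\bar{v}_{2}v_{1})\\ (v_{1}u_{2}+\bar{u}_{1}v_{2})-(v_{2}u_{1}+\bar{u}_{2}v_{1})\end{pmatrix}\,=\,\begin{pmatrix}v_{1}\bar{v}_{2}-\bar{v}_{1}v_{2}\\ (u_{2}-\bar{u}_{2})v_{1}-(u_{1}-\bar{u}_{1})v_{2}\end{pmatrix},$
which agrees with the bracket (4.1) introduced later.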
From Theorem 3.2, Corollary 3.3 and (3.10) we have the following
###### Proposition 3.4.
$\left(\,C^{\infty}(S^{3},S^{+}),\,[\,,\,\,]\,\right)\,$ is a quaternion Lie
algebra. $\left(\,\mathbf{C}[\phi^{\pm}]|_{S^{3}}\,,\,[\,,\,]\right)\,$ is a
quaternion Lie subalgebra of $(C^{\infty}(S^{3},S^{+}),\,[\,,\,]\,)$.
### 3.2 2-cocycles on $C^{\infty}(S^{3},S^{+})$.
Let $\phi\,,\,\psi\,\in C^{\infty}(S^{3},S^{+})$. We define the trace of a
spinor $\phi=\begin{pmatrix}u\\\ v\end{pmatrix}$ by the formula:
$tr\,\phi\,=\,u+\overline{u}.$
It is invariant under the involutions $\sigma$ and $\tau$. Evidently we have $tr\,[\phi,\psi]=0$.
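Indeed, writing $\phi=\begin{pmatrix}u_{1}\\ v_{1}\end{pmatrix}$ and $\psi=\begin{pmatrix}u_{2}\\ v_{2}\end{pmatrix}$, the first component of $[\phi,\psi]$ in (3.10) is $v_{1}\bar{v}_{2}-\bar{v}_{1}v_{2}$, which is purely imaginary, and hence $tr\,[\phi,\psi]=0$.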
In the following we introduce three 2-cocycles on $C^{\infty}(S^{3},S^{+})$ that come from the basis vector fields $\theta_{k}\,;\,k=0,1,2$, on $S^{3}$, (2.5).
###### Definition 3.5.
For $\varphi=\begin{pmatrix}u\\ v\end{pmatrix}\in C^{\infty}(S^{3},S^{+})$, we put
$\Theta_{k}\,\varphi\,=\,\frac{1}{2}\begin{pmatrix}\theta_{k}\,u\\ \theta_{k}\,v\end{pmatrix},\qquad k=0,1,2.$
Note that $\theta_{k}$ is a real vector field:
$\,\theta_{k}=\overline{\theta}_{k}$, so is $\Theta_{k}$.
###### Lemma 3.6.
For any $k=0,1,2$, and $\phi,\,\psi\in C^{\infty}(S^{3},S^{+})$, we have
$\Theta_{k}\,(\phi\cdot\psi\,)\,=\,(\Theta_{k}\,\phi)\cdot\,\psi\,+\,\phi\cdot\,(\Theta_{k}\,\psi)\,,$ (3.11)
$\int_{S^{3}}\,\Theta_{k}\,\varphi\,d\sigma\,=\,0\,.$ (3.12)
###### Proof.
The first equation follows from the fact: $\overline{\theta}_{k}=\theta_{k}$.
The second assertion follows from the fact
$\int_{S^{3}}\,\theta_{k}f\,d\sigma\,=\,0\,,$ (3.13)
for any function $f$ on $S^{3}$. This is proved as follows. We consider the 2-form $\beta=f\theta_{1}^{\ast}\wedge\theta_{2}^{\ast}$. By virtue of the integrability condition (2.8) we have
$d\beta=(\theta_{0}f)\,\theta_{0}^{\ast}\wedge\theta_{1}^{\ast}\wedge\theta_{2}^{\ast}=\theta_{0}f\,d\sigma\,.$
Hence
$0=\int_{S^{3}}\,d\beta\,=\,\int_{S^{3}}\theta_{0}f\,d\sigma.$
Similarly for the integral of $\theta_{k}f$, $k=1,2$. ∎
###### Remark 3.7.
The formula (3.13) is evident once we recognize the invariance of each $\theta_{k}$ and of the volume form $d\sigma$ under the action of $SO(4)$. This was pointed out to me by Professor T. Iwai of Kyoto University.
###### Definition 3.8.
For $\phi_{1}$ and $\phi_{2}\in C^{\infty}(S^{3},S^{+})$, we put
$c_{k}(\phi_{1},\phi_{2})\,=\,\,\frac{1}{2\pi^{2}}\int_{S^{3}}\,\,tr\,(\,\Theta_{k}\phi_{1}\cdot\phi_{2}\,)\,d\sigma\,,\quad
k=0,1,2\,.$
###### Proposition 3.9.
1.
For each $k=0,1,2$, $c_{k}$ defines a 2-cocycle on the $\mathbf{R}$-algebra
$C^{\infty}(S^{3},S^{+})\,$. That is, $c_{k}$ satisfies the equations:
$\displaystyle
c_{k}(\phi_{1}\,,\,\phi_{2})\,=\,-\,c_{k}(\phi_{2}\,,\,\phi_{1})\,,$ (3.14)
$\displaystyle
c_{k}(\phi_{1}\cdot\phi_{2}\,,\,\phi_{3})+c_{k}(\phi_{2}\cdot\phi_{3}\,,\,\phi_{1}\,)+c_{k}(\phi_{3}\cdot\phi_{1}\,,\,\phi_{2}\,)=0,$
(3.15)
for any $\phi_{1},\,\phi_{2},\,\phi_{3}\in C^{\infty}(S^{3},S^{+})$.
2.
For each $k=0,1,2$, $c_{k}$ defines a 2-cocycle on the real Lie algebra
$C^{\infty}(S^{3},S^{+})\,$. That is, $c_{k}$ satisfies the equations:
$\displaystyle
c_{k}(\phi_{1}\,,\,\phi_{2})\,=\,-\,c_{k}(\phi_{2}\,,\,\phi_{1})\,,$ (3.16)
$\displaystyle
c_{k}(\,[\phi_{1}\,,\phi_{2}]\,,\,\phi_{3})+c_{k}(\,[\phi_{2}\,,\phi_{3}]\,,\,\phi_{1}\,)+c_{k}(\,[\phi_{3}\,,\phi_{1}]\,,\,\phi_{2}\,)=0,$
(3.17)
for any $\phi_{1},\,\phi_{2},\,\phi_{3}\in C^{\infty}(S^{3},S^{+})$.
3.
$c_{k}$ is a non-trivial 2-cocycle, that is, there is no 1-cochain $b$ such
that $c_{k}(\phi_{1}\,,\phi_{2})=b(\,[\phi_{1},\,\phi_{2}]\,)$.
4.
Each $c_{k}$ is invariant under the involutions $\sigma$ and $\tau$. Each
2-cocycle $c_{k}\,$, $\,k=0,1,2$, restricts to the space
$\mathbf{C}[\phi^{\pm}]|S^{3}$.
###### Proof.
Evidently each $c_{k}$ is $\mathbf{R}$-bilinear (it is not $\mathbf{C}$-bilinear). By the formula (3.12) and the Leibniz rule (3.11) we have
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\,\int_{S^{3}}\,\,tr\,(\,\Theta_{k}\,(\phi_{1}\cdot\phi_{2})\,)\,d\sigma\,=\,\int_{S^{3}}\,tr\,\left(\,\Theta_{k}\,\phi_{1}\,\cdot\phi_{2}\,\right)d\sigma\,+\,\int_{S^{3}}\,tr\,\left(\,\phi_{1}\cdot\,\Theta_{k}\,\phi_{2}\,\right)d\sigma$
Hence
$\,c_{k}(\phi_{1}\,,\,\phi_{2}\,)\,+\,c_{k}(\,\phi_{2}\,,\,\phi_{1}\,)\,=0\,$.
The following calculation proves (3.15).
$\displaystyle c_{k}(\phi_{1}\cdot\phi_{2}\,,\,\phi_{3})$ $\displaystyle=$
$\displaystyle\,\int_{S^{3}}\,\,tr\,(\,\Theta_{k}(\,\phi_{1}\cdot\phi_{2}\,)\cdot\,\phi_{3}\,)\,d\sigma$
$\displaystyle=$
$\displaystyle\,\int_{S^{3}}\,\,tr\,(\,\Theta_{k}\phi_{1}\cdot\phi_{2}\cdot\phi_{3}\,)d\sigma\,+\,\,\int_{S^{3}}\,\,tr\,(\,\Theta_{k}\phi_{2}\cdot\,\phi_{3}\,\cdot\phi_{1}\,)d\sigma$
$\displaystyle=$
$\displaystyle\,c_{k}(\phi_{1}\,,\,\phi_{2}\cdot\phi_{3}\,)+c_{k}(\phi_{2}\,,\,\phi_{3}\cdot\phi_{1}\,)=\,-c_{k}(\phi_{2}\cdot\phi_{3}\,,\phi_{1}\,)\,-\,c_{k}(\phi_{3}\cdot\phi_{1}\,,\,\phi_{2}).$
Suppose now that $c_{0}$ is the coboundary of a 1-cochain
$b:\,C^{\infty}(S^{3},S^{+})\longrightarrow\mathbf{C}$. Then
$c_{0}(\phi_{1},\,\phi_{2})=(\delta\,b)(\phi_{1},\,\phi_{2})\,=\,b(\,[\phi_{1},\,\phi_{2}]\,)$
for any $\phi_{1},\phi_{2}\in C^{\infty}(S^{3},S^{+})$. Take
$\phi_{1}=\,\frac{1}{\sqrt{2}}\phi^{+(1,1,2)}\,=\left(\begin{array}[]{c}-\overline{z}_{2}\\\
0\end{array}\right)$ and
$\,\phi_{2}=\frac{1}{2}(\phi^{+(1,0,1)}+\phi^{-(0,0,0)})=\left(\begin{array}[]{c}z_{2}\\\
0\end{array}\right)$. Then $[\,\phi_{1},\,\phi_{2}\,]=0$, so $(\delta b)(\phi_{1},\phi_{2})=0$. But $c_{0}(\phi_{1},\phi_{2})=\frac{1}{2}\,$. Therefore $c_{0}$ cannot be a coboundary. For $\phi_{1}$ and $\phi_{3}=\phi^{+(1,0,2)}=\sqrt{2}\left(\begin{array}[]{c}\overline{z}_{1}\\ 0\end{array}\right)$, we have $[\phi_{1},\phi_{3}]=0$ and $c_{1}(\phi_{1},\phi_{3})=-\frac{1}{\sqrt{2}}$. So $c_{1}$ cannot be a coboundary, for the same reason as above. Similarly for $c_{2}$. ∎
Examples
1.
$c_{0}(\phi^{\pm(m,l,k)},\,\phi^{\pm(p,q,r)})=0,\quad
c_{0}(\,\phi^{+(1,1,2)}\,,\,\sqrt{-1}(\phi^{+(1,0,1)}+\phi^{-(0,0,0)})\,)\,=\sqrt{2}\,.$
2.
Let
$\kappa=\phi^{+(1,0,1)}=\left(\begin{array}[]{c}z_{2}\\\
-\overline{z}_{1}\end{array}\right),\quad\kappa_{\ast}=\,\frac{-\sqrt{-1}}{\sqrt{2}}(\phi^{-(0,0,0)}-\phi^{+(1,1,2)}-\phi^{+(1,0,1)})=\sqrt{-1}\left(\begin{array}[]{c}\overline{z}_{2}\\\
\overline{z}_{1}\end{array}\right).$
Then
$(\Theta_{0}\,\kappa)\cdot\kappa_{\ast}\,=\,-\frac{1}{2}\left(\begin{array}[]{c}1\\\
0\end{array}\right),$
and
$c_{0}(\kappa,\,\kappa_{\ast})=\frac{1}{2\pi^{2}}\int_{S^{3}}\,tr\,[(\Theta_{0}\kappa)\cdot\kappa_{\ast}\,]\,d\sigma_{3}=-1.$
Similarly
$c_{1}(\kappa,\,\kappa_{\ast})=c_{2}(\kappa,\,\kappa_{\ast})=0\,.$
### 3.3 Radial derivative on $C^{\infty}(S^{3},S^{+})$
We define the following operator $\mathbf{n}$ on $C^{\infty}(S^{3})$:
$\mathbf{n}\,f(z)=|z|\frac{\partial}{\partial
n}f(z)\,=\,\frac{1}{2}(\nu+\bar{\nu})f(z)\,.$ (3.18)
Here we consider the radial derivative of a function on $\mathbf{C}^{2}$ and
then restrict it to $S^{3}=\\{|z|=1\\}$.
For an even spinor $\varphi=\begin{pmatrix}u\\ v\end{pmatrix}\in C^{\infty}(S^{3},S^{+})$, we put
$\mathbf{n}\,\varphi\,=\,\begin{pmatrix}\mathbf{n}\,u\\ \mathbf{n}\,v\end{pmatrix}.$
###### Proposition 3.10.
1.
$\mathbf{n}(\phi_{1}\cdot\phi_{2})=(\mathbf{n}\phi_{1})\cdot\phi_{2}+\phi_{1}\cdot(\mathbf{n}\phi_{2})\,.$
(3.19)
$\mathbf{n}\,[\,\phi_{1}\,,\,\phi_{2}\,]\,=\,[\,\mathbf{n}\phi_{1}\,,\,\phi_{2}\,]\,+\,[\,\phi_{1}\,,\,\mathbf{n}\phi_{2}\,]\,.$
(3.20)
2.
$\mathbf{n}\phi^{+(m,l,k)}=\frac{m}{2}\,\phi^{+(m,l,k)},\quad\mathbf{n}\phi^{-(m,l,k)}=\,-\frac{m+3}{2}\,\phi^{-(m,l,k)}.$
(3.21)
3.
If $\varphi$ is a spinor of Laurent polynomial type:
$\varphi(z)=\sum_{m,l,k}\,C_{+(m,l,k)}\phi^{+(m,l,k)}(z)+\sum_{m,l,k}\,C_{-(m,l,k)}\phi^{-(m,l,k)}(z),$
then $\mathbf{n}\varphi$ is also a spinor of Laurent polynomial type and we have
$\int_{S^{3}}\,tr\,(\,\mathbf{n}\,\varphi\,)\,d\sigma\,=\,0\,.$ (3.22)
###### Proof.
The formula (3.21) follows from Definition 2.5. The last assertion follows from (3.3) and the fact that the coefficient of $\phi^{+(0,0,1)}$ in the Laurent expansion of $\mathbf{n}\varphi$ vanishes. ∎
Therefore the derivations $\Theta_{k}$, $\,k=0,1,2$, and $\mathbf{n}$ act on
the space of Laurent polynomial type harmonic spinors
$\mathbf{C}[\phi^{\pm}]|S^{3}$.
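For example, Definition 2.5 gives $\mu=\phi^{-(0,0,0)}=|z|^{-4}\begin{pmatrix}z_{2}\\ \overline{z}_{1}\end{pmatrix}$ on $\mathbf{C}^{2}\setminus\{0\}$; it is homogeneous of degree $-3$, so $\mathbf{n}\mu=-\frac{3}{2}\mu$, in accordance with (3.21).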
###### Proposition 3.11.
$c_{k}(\,\mathbf{n}\phi_{1}\,,\phi_{2}\,)\,+\,c_{k}(\,\phi_{1}\,,\mathbf{n}\phi_{2}\,)\,=\,0\,\quad
k=0,1,2\,.$ (3.23)
###### Proof.
Since $\,\theta_{0}\,\mathbf{n}\,=\,\frac{\sqrt{-1}}{2}\,(\nu-\bar{\nu})(\nu+\bar{\nu})\,=\,\frac{\sqrt{-1}}{2}\,(\nu^{2}-\bar{\nu}^{2})\,=\,\mathbf{n}\,\theta_{0}\,$, we have
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\int_{S^{3}}\,tr\,(\,\mathbf{n}(\Theta_{0}\phi_{1}\cdot\phi_{2})\,)\,d\sigma=\int_{S^{3}}\,tr\,(\,(\mathbf{n}\Theta_{0}\phi_{1})\cdot\phi_{2}+\Theta_{0}\phi_{1}\cdot\mathbf{n}\phi_{2}\,)\,d\sigma$
$\displaystyle=$
$\displaystyle\int_{S^{3}}\,tr\,((\Theta_{0}\,\mathbf{n}\,\phi_{1})\cdot\phi_{2}\,)\,d\sigma+\int_{S^{3}}\,tr\,(\Theta_{0}\phi_{1}\cdot\mathbf{n}\phi_{2}\,)\,d\sigma$
$\displaystyle=$ $\displaystyle
c_{0}(\mathbf{n}\phi_{1},\phi_{2})+c_{0}(\phi_{1},\mathbf{n}\phi_{2})\,.$
The others are proved similarly. ∎
### 3.4 Homogeneous decomposition of $\mathbf{C}[\phi^{\pm}]$
Let $\mathbf{C}[\phi^{\pm};\,N]$ be the subspace of $\mathbf{C}[\phi^{\pm}]$
consisting of those elements that are of homogeneous degree $N$:
$\varphi(z)=|z|^{N}\varphi(\frac{z}{|z|})$. $\mathbf{C}[\phi^{\pm};\,N]$ is
spanned by the spinors $\varphi=\phi_{1}\cdots\phi_{n}$ such that each $\phi_{i}$ is equal to $\phi_{i}=\phi^{+(m_{i},l_{i},k_{i})}$ or $\,\phi_{i}=\phi^{-(m_{i},l_{i},k_{i})}$, where $m_{i}\geq 0$, $0\leq l_{i}\leq m_{i}$ and $0\leq k_{i}\leq m_{i}+1$, and such that
$N=\sum_{i:\,\phi_{i}=\phi^{+(m_{i},l_{i},k_{i})}}\,m_{i}\,\,-\,\sum_{i:\,\phi_{i}=\phi^{-(m_{i},l_{i},k_{i})}}\,(m_{i}+3).$
It holds that $\mathbf{n}\varphi=\frac{N}{2}\varphi$, so the eigenvalues of
$\mathbf{n}$ on $\mathbf{C}[\phi^{\pm}]$ are
$\left\\{\frac{N}{2};\,N\in\mathbf{Z}\,\right\\}$ and
$\mathbf{C}[\phi^{\pm};\,N]$ is the space of eigenspinors for the eigenvalue
$\frac{N}{2}$.
Example
$\phi=\phi^{+(2,0,0)}\cdot\phi^{-(0,0,0)}\in\mathbf{C}[\phi^{\pm};-1]\,,\quad\mbox{ since }\,N=2-(0+3)=-1\,,\quad\mbox{ and }\,\mathbf{n}\phi=-\frac{1}{2}\phi\,.$
We note that $-\frac{1}{2}$ is not an eigenvalue of $\partial\!\!\!/$.
We have the eigenspace decomposition of the radial derivative $\mathbf{n}$:
$\mathbf{C}[\phi^{\pm}]\,=\,\bigoplus_{N\in\mathbf{Z}}\,\mathbf{C}[\phi^{\pm};\,N]\,$
(3.24)
The radial derivation $\mathbf{n}$ acts on $\mathbf{C}[\phi^{\pm}]\,$ and
preserves the homogeneous degree.
## 4 $\mathfrak{g}$-current algebras on $S^{3}$
### 4.1 Algebra of currents $\mathcal{L}$ on $S^{3}$
###### Definition 4.1.
We denote $\,\mathcal{L}\,=\,\mathbf{C}[\phi^{\pm}]|S^{3}\,$ and call $\mathcal{L}$ the algebra of currents on $S^{3}\,$.
By virtue of Theorem 3.2, $\,\mathcal{L}$ is an associative algebra generated by the spinors
$I=\phi^{+(0,0,1)},\,J=-\phi^{+(0,0,0)},\,\kappa=\phi^{+(1,0,1)},\,\mu=\phi^{-(0,0,0)}\,.$
We have given the definition of a quaternion Lie algebra in the introduction: it is a real submodule of a quaternion module that is endowed with a real Lie algebra structure compatible with the involutions $\sigma$ and $\tau$ (section 1).
###### Proposition 4.2.
$\mathcal{L}$ is a quaternion Lie algebra with the induced bracket:
$[\phi_{1},\phi_{2}]=\phi_{1}\cdot\phi_{2}-\phi_{2}\cdot\phi_{1}\,,\quad\phi_{1},\,\phi_{2}\in\mathcal{L}\,.$
(4.1)
In particular $\mathcal{L}$ is invariant under the involutions $\sigma$ and
$\tau$. $\mathcal{L}$ is also invariant under the derivations $\Theta_{k}$,
$k=0,1,2$, and the radial derivation:
$\Theta_{k}\phi\in\mathcal{L}\,,\quad\mathbf{n}\phi\in\mathcal{L}\quad\mbox{
for }\,\,\forall\phi\in\mathcal{L}\,.$ (4.2)
###### Proof.
We have already seen these properties in section 3. ∎
The quaternion Lie algebra $\mathcal{L}$ has the following subalgebras:
$\mathcal{L}^{r}_{0}\,=\,\{\phi\in\mathcal{L}\,;\,\sigma\phi=\phi,\,\tau\phi=\phi\,\}\,,\qquad\mathcal{L}^{0}_{r}\,=\,\{\phi\in\mathcal{L}\,;\,\sigma\phi=-\phi,\,\tau\phi=\phi\,\}\,,$
$\mathcal{L}^{i}_{0}\,=\,\{\phi\in\mathcal{L}\,;\,\sigma\phi=\phi\,,\,\tau\phi=-\phi\,\}\,,\qquad\mathcal{L}^{0}_{i}\,=\,\{\phi\in\mathcal{L}\,;\,\sigma\phi=-\phi\,,\,\tau\phi=-\phi\,\}.$
$\phi=\left(\begin{array}[]{c}u\\\ v\end{array}\right)\in\mathcal{L}^{r}_{0}$
if $u$ is real and $v=0$. $\phi\in\mathcal{L}^{0}_{r}$ if $v$ is real and
$u=0$. $\phi\in\mathcal{L}^{i}_{0}$ if $u$ is pure imaginary and $v=0$, and
$\phi\in\mathcal{L}^{0}_{i}$ if $u=0$ and $v$ is pure imaginary. For
$\phi_{k}=\left(\begin{array}[]{c}u_{k}\\\
v_{k}\end{array}\right)\,\in\mathcal{L}^{r}_{0}+\mathcal{L}^{0}_{r}$, $k=1,2$,
we have
$\phi_{1}\cdot\phi_{2}\,=\,\left(\begin{array}[]{c}u_{1}u_{2}-v_{1}v_{2}\\\
v_{1}u_{2}+u_{1}v_{2}\end{array}\right)=\phi_{2}\phi_{1}\,.$ (4.3)
Hence $\mathcal{L}^{r}_{0}\,+\,\mathcal{L}^{0}_{r}\,,\,\mathcal{L}^{r}_{0}\,$
and $\mathcal{L}^{0}_{r}\,$ are commutative Lie subalgebras of
$\mathcal{L}\,$. Similarly $\mathcal{L}^{i}_{0}\,$ and $\mathcal{L}^{0}_{i}\,$ are commutative subalgebras, and the following relations hold:
$[\mathcal{L}^{0}_{r}\,,\mathcal{L}^{0}_{i}]\,=\,\mathcal{L}^{i}_{0}\,,\quad[\mathcal{L}^{0}_{i}\,,\mathcal{L}^{i}_{0}]\,=\,\mathcal{L}^{0}_{r}\,,\quad[\mathcal{L}^{i}_{0}\,,\mathcal{L}^{0}_{r}]\,=\,\mathcal{L}^{0}_{i}\,.$
These are proved by a calculation of the Lie bracket (4.1). For example, for
$\phi=\left(\begin{array}[]{c}0\\\ t\end{array}\right)\in\mathcal{L}^{0}_{r}$
and $\psi=\left(\begin{array}[]{c}\sqrt{-1}u\\\
0\end{array}\right)\in\mathcal{L}^{i}_{0}$, we have
$[\phi,\psi]=\left(\begin{array}[]{c}0\\\
2\sqrt{-1}tu\end{array}\right)\in\mathcal{L}^{0}_{i}\,$. The others follow
similarly.
###### Definition 4.3.
We put
$\mathcal{K}\,=\,\mathcal{L}^{r}_{0}\,,\qquad\mathcal{K}^{\bot}=\mathcal{L}^{0}_{r}\,+\,\mathcal{L}^{0}_{i}\,+\,\mathcal{L}^{i}_{0}\,.$
(4.4)
###### Proposition 4.4.
1.
$\mathcal{K}$ is a commutative subalgebra of the associative algebra
$\mathcal{L}$. We have
$N(\mathcal{K})=\mathcal{K}\,,$ (4.5)
where $N(\mathcal{K})$ is the normalizer of $\mathcal{K}$:
$\,N(\mathcal{K})=\,\\{\psi\in\mathcal{L}\,;\quad\phi\psi\in\mathcal{K}\,,\quad\forall\phi\in\mathcal{K}\,\\}$.
2.
$\,\mathcal{K}^{\bot}$ is an ideal of $\mathcal{L}$ complementary to
$\mathcal{K}$, and we have
$\,\mathcal{K}\cdot\mathcal{K}^{\bot}=\mathcal{K}^{\bot}\cdot\mathcal{K}\,=\,\mathcal{K}^{\bot}\,.$
(4.6)
3.
The quaternion Lie algebra $\mathcal{L}$ is decomposed into
$\,\mathcal{L}\,=\,\mathcal{K}\,+\,\mathcal{K}^{\bot}\,,\quad\mbox{ direct
sum.}$
It holds that
$[\mathcal{K}\,,\,\mathcal{L}\,]\,=\,0\,,\qquad[\,\mathcal{L}\,,\,\mathcal{L}\,]\,=\,\mathcal{K}^{\bot}\,.$
(4.7)
The proof follows by direct calculation, using the multiplication of spinors (3.1) and the Lie bracket (4.1).
Examples
$\frac{1}{2}(\phi^{+(1,1,1)}-\phi^{-(0,0,1)})+\frac{1}{\sqrt{2}}\phi^{+(1,0,2)}\,=\,\begin{pmatrix}z_{1}+\overline{z}_{1}\\ 0\end{pmatrix}\in\mathcal{K}\,,$
$(\phi^{-(1,0,0)}+\phi^{-(1,0,2)})-J\cdot(\phi^{-(1,1,0)}+\phi^{-(1,1,2)})\,=\,\sqrt{2}\begin{pmatrix}z_{1}^{2}+\overline{z}_{1}^{2}+z_{2}^{2}+\overline{z}_{2}^{2}\\ 0\end{pmatrix}\,\in\,\mathcal{K}\,,$
$\phi^{-(0,0,1)}+\frac{1}{\sqrt{2}}(\phi^{+(0,0,0)}\phi^{+(1,1,0)}-\phi^{+(1,0,0)})\,=\,\begin{pmatrix}0\\ z_{2}+\overline{z}_{2}\end{pmatrix}\in\mathcal{L}^{0}_{r}\,,$
$-\phi^{+(1,1,1)}-\phi^{-(0,0,1)}+\sqrt{2}\phi^{+(1,0,0)}\,=\,\begin{pmatrix}0\\ 2(z_{2}-\overline{z}_{2})\end{pmatrix}\in\mathcal{L}^{0}_{i}\,,$
$\phi^{+(1,1,1)}-\phi^{-(0,0,1)}-\sqrt{2}\phi^{+(1,0,2)}\,=\,\begin{pmatrix}2(z_{1}-\overline{z}_{1})\\ 0\end{pmatrix}\in\mathcal{L}^{i}_{0}\,.$
### 4.2 $\mathfrak{g}$-current algebras on $S^{3}$ and their subalgebras
$C^{\infty}(S^{3},\,\mathfrak{gl}(n,\mathbf{H})\,)=C^{\infty}(S^{3},\,\mathbf{H})\otimes_{\mathbf{C}}\mathfrak{gl}(n,\mathbf{C})$
becomes a quaternion Lie algebra with the Lie bracket defined by
$[\,\phi_{1}\otimes X_{1}\,,\,\phi_{2}\otimes
X_{2}\,]=(\phi_{1}\cdot\phi_{2})\otimes
X_{1}X_{2}\,-(\phi_{2}\cdot\phi_{1})\otimes X_{2}X_{1}\,,$ (4.13)
for $\phi_{1},\phi_{2}\in
C^{\infty}(S^{3},\,\mathbf{H}),\,X_{1},X_{2}\in\mathfrak{gl}(n,\mathbf{C})$.
In (4.13) the right hand side is in the tensor product of the associative
algebra $C^{\infty}(S^{3},\,\mathbf{H})\simeq C^{\infty}(S^{3},\,S^{+})$ and
the matrix algebra $\mathfrak{gl}(n,\mathbf{C})$.
Let $(\,\mathfrak{g}\,,\,[\,\,,\,\,]\,)$ be a complex Lie algebra, which we suppose to be a subalgebra of $\mathfrak{gl}(n,\mathbf{C})$ for some $n$. Then
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}\,$ becomes a
$\mathbf{C}$-submodule of the $\mathbf{H}$-module
$C^{\infty}(S^{3},\,\mathbf{H})\otimes_{\mathbf{C}}\mathfrak{gl}(n,\mathbf{C})=C^{\infty}(S^{3},\,\mathfrak{gl}(n,\mathbf{H})\,)$.
The involutions $\sigma$ and $\tau$ on $\,\mathcal{L}$ are extended to
$\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}\,$ by $\sigma(\phi\otimes
X)=\sigma(\phi)\otimes X\,$ and $\tau(\phi\otimes X)=\tau(\phi)\otimes X\,$
respectively for $\phi\in\mathcal{L}$ and $X\in\mathfrak{g}\,$. Thus
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}\,$ endowed with the bracket
(4.13) generates a quaternion Lie algebra.
###### Definition 4.5.
The quaternion Lie algebra generated by
$\left(\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}\,,\,[\,\,,\,\,]\,\right)$
is called the $\mathfrak{g}$-current algebra, and is denoted by
$\mathcal{L}\mathfrak{g}$.
As the following examples show, $\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}$ is not necessarily a Lie algebra, so the Lie algebra $\mathcal{L}\mathfrak{g}$ is defined as the one generated by $\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}$ in the Lie algebra $C^{\infty}(S^{3},\,\mathfrak{gl}(n,\mathbf{H})\,)$.
Examples: The following elements are in $\,\mathcal{L}\mathfrak{g}\,$ but not in $\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}\,$.
1.
$\sqrt{-1}\,I\otimes(X_{1}X_{2}+X_{2}X_{1})\,\in\mathcal{L}\mathfrak{g}\,,\quad\mbox{ for }\forall X_{1},\,X_{2}\in\mathfrak{g}\,.$
In fact we have
$\mathcal{L}\mathfrak{g}\,\ni\,[\,J\otimes X_{1}\,,\,-\sqrt{-1}J\otimes X_{2}\,]\,=\,\sqrt{-1}\,I\otimes(X_{1}X_{2}+X_{2}X_{1})\,.$
Here the right-hand side, calculated in $C^{\infty}(S^{3},\,\mathfrak{gl}(n,\mathbf{H})\,)$, gives the element on the left-hand side, which therefore lies in $\mathcal{L}\mathfrak{g}\,$.
2.
$\sqrt{-1}J\otimes(X_{1}X_{2}+X_{2}X_{1})\,\in\mathcal{L}\mathfrak{g}\,,\quad\mbox{
for }\forall X_{1},\,X_{2}\in\mathfrak{g}\,.$
In fact
$\mathcal{L}\mathfrak{g}\,\ni\,[\,J\otimes X_{1}\,,\,\sqrt{-1}I\otimes
X_{2}\,]\,=\,\sqrt{-1}J\otimes(X_{1}X_{2}+X_{2}X_{1})\,.$
3.
$(z_{1}-\overline{z}_{1})(z_{2}+\overline{z}_{2})J\otimes(X_{1}X_{2}+X_{2}X_{1})\,\in\mathcal{L}\mathfrak{g}\,,\quad\mbox{
for }\forall X_{1},\,X_{2}\in\mathfrak{g}\,.$
In fact, let $\phi_{1}=\begin{pmatrix}z_{1}+\overline{z}_{1}\\ z_{2}+\overline{z}_{2}\end{pmatrix}$ and $\phi_{2}=\begin{pmatrix}z_{1}-\overline{z}_{1}\\ 0\end{pmatrix}$. Then
$\displaystyle\mathcal{L}\mathfrak{g}\,\ni\,$ $\displaystyle[\,\phi_{1}\otimes
X_{1},\,\phi_{2}\otimes
X_{2}\,]-\,(z_{1}^{2}-\overline{z}_{1}^{2})I\otimes[X_{1},X_{2}]\,$
$\displaystyle=(z_{1}-\overline{z}_{1})(z_{2}+\overline{z}_{2})J\otimes(X_{1}X_{2}+X_{2}X_{1}).$
### 4.3 Quaternification and $\mathfrak{g}$-current algebras
Remember that the quaternification of a complex Lie algebra $\mathfrak{g}$ is the quaternion Lie algebra $\mathfrak{g}^{q}$ generated by $\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{g}=\mathfrak{g}+J\mathfrak{g}$. The
latter is not a Lie algebra in general. Since
$I=\phi^{+(0,0,1)}=\left(\begin{array}[]{c}1\\\ 0\end{array}\right)$ and
$J=-\phi^{+(0,0,0)}=\left(\begin{array}[]{c}0\\\ 1\end{array}\right)$ are in
$\,\mathcal{L}$, $\mathfrak{g}^{q}$ is a subspace of
$\mathcal{L}\mathfrak{g}$. We have the following relations:
$S^{3}\mathfrak{g}^{q}\supset
S^{3}\mathfrak{g}+J\,(S^{3}\mathfrak{g})\supset\mathcal{L}\mathfrak{g}\,\supset\mathfrak{g}^{q},$
where $S^{3}\mathfrak{g}+J\,(S^{3}\mathfrak{g})$ is not necessarily a Lie
algebra in general and
$S^{3}\mathfrak{g}^{q}=S^{3}\mathbf{H}\otimes\mathfrak{g}^{q}$ is the Lie
algebra with bracket (4.13).
The following examples show the case where both
$S^{3}\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{g}$ and
$\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{g}$ become Lie algebras.
Examples
1.
$\mathfrak{gl}(n,\mathbf{H})=\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{gl}(n,\mathbf{C})\subset\mathcal{L}\mathfrak{gl}(n,\mathbf{C})\subset
S^{3}\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{gl}(n,\mathbf{C})\,=S^{3}\mathfrak{gl}(n,\mathbf{H})$
2.
$\mathfrak{so}^{\ast}(2n)=\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{so}(n,\mathbf{C})\,\subset\,\mathcal{L}\mathfrak{so}(n,\mathbf{C})\,\subset S^{3}\mathbf{H}\otimes\mathfrak{so}(n,\mathbf{C})=S^{3}\mathfrak{so}^{\ast}(2n)$
3.
$\mathfrak{sp}(2n)=\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{u}(n)\,\subset\mathcal{L}\mathfrak{u}(n)\,\subset\,S^{3}\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{u}(n)\,=\,S^{3}\mathfrak{sp}(2n).$
In general $\mathbf{H}\otimes_{\mathbf{C}}\mathfrak{g}$ is not a Lie algebra.
We know that the quaternification of $\mathfrak{sl}(n,\mathbf{C})$ is
$\mathfrak{sl}(n,\mathbf{H})$, [Kq], and it is not contained in
$\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{sl}(n,\mathbf{C})$ as is seen by the
following example:
Let $\\{h_{i}=E_{ii}-E_{i+1\,i+1};\,1\leq i\leq n-1\,,\quad E_{ij},\,i\neq
j\,\\}$ be the basis of $\mathfrak{g}=\mathfrak{sl}(n,\mathbf{C})$. Then
$[\sqrt{-1}Jh_{1},Jh_{2}\,]=-2\sqrt{-1}E_{22}\,\in\mathfrak{g}^{q}\subset\mathcal{L}\mathfrak{g}$
but not in $\mathcal{L}\otimes\mathfrak{g}$.
### 4.4 Root space decomposition of $\mathfrak{g}$-current algebras
#### 4.4.1
Let $\mathfrak{g}$ be a simple Lie algebra with Cartan matrix
$A=\left(c_{ij}\right)$. Let $\mathfrak{h}$ be a Cartan subalgebra, $\Phi$ the
corresponding root system. Let
$\Pi=\\{\alpha_{i};\,i=1,\cdots,l=\dim\,\mathfrak{h}\\}\subset\mathfrak{h}^{\ast}$
be the set of simple roots and
$\\{h_{i}=\alpha_{i}^{\vee}\,;\,i=1,\cdots,l\,\\}\subset\mathfrak{h}$ be the
set of simple coroots. The Cartan matrix $A=(\,c_{ij}\,)_{i,j=1,\cdots,l}$ is given by $c_{ij}=\left\langle\alpha_{i}^{\vee},\,\alpha_{j}\right\rangle$.
$\alpha(h)$ is real if $h\in\mathfrak{h}$ is real. Let
$\,\mathfrak{g}_{\alpha}=\\{\xi\in\mathfrak{g}\,;\,ad(h)\xi\,=\,\alpha(h)\xi,\quad\forall
h\in\mathfrak{h}\\}$ be the root space of $\alpha\in\Phi$. Then
$\dim_{\mathbf{C}}\,\mathfrak{g}_{\alpha}=1$. Let $\Phi_{\pm}$ be the sets of positive (respectively negative) roots of $\mathfrak{g}$ and put
$\mathfrak{e}=\sum_{\alpha\in\Phi_{+}}\,\mathfrak{g}_{\alpha}\,,\quad\mathfrak{f}=\sum_{\alpha\in\Phi_{-}}\,\mathfrak{g}_{\alpha}\,.$
Fix a standard set of generators
$\,h_{i}\in\mathfrak{h}\,,\,e_{i}\in\mathfrak{g}_{\alpha_{i}}$,
$f_{i}\in\mathfrak{g}_{-\alpha_{i}}$. $\mathfrak{g}$ is generated by
$X\,=\,\\{e_{i},\,f_{i},\,h_{i}\,;\,i=1,\cdots,l\,\\}$, and these generators
satisfy the relations:
$[\,h_{i},\,h_{j}\,]\,=\,0\,,\quad[\,e_{i}\,,\,f_{j}\,]\,=\,\delta_{ij}h_{i}\,,\quad[\,h_{i}\,,\,e_{j}\,]\,=\,c_{ji}e_{j}\,,\quad[\,h_{i}\,,\,f_{j}\,]\,=\,-\,c_{ji}f_{j}\,.$
(4.14)
This is a presentation of $\mathfrak{g}$ by generators and relations which
depend only on the root system $\Phi$. The triangular decomposition of the
simple Lie algebra $\mathfrak{g}$ becomes
$\mathfrak{g}=\mathfrak{f}+\mathfrak{h}+\mathfrak{e}$, ( direct sum ) with the
space of positive root vectors $\mathfrak{e}$ and the space of negative root
vectors $\mathfrak{f}$.
$\mathfrak{g}$ is considered as a quaternion Lie subalgebra of the
$\mathfrak{g}$-current algebra $\,\mathcal{L}\mathfrak{g}\,$;
$\displaystyle
i\,:\,\mathfrak{g}\,\ni\,X\,\longrightarrow\,\phi^{+(0,0,1)}\otimes
X\,\in\,\mathcal{L}\mathfrak{g}\,,$ (4.15)
$\displaystyle\left[\phi^{+(0,0,1)}\otimes X,\,\phi^{+(0,0,1)}\otimes
Y\right]_{\mathcal{L}\mathfrak{g}}=\left[X,Y\right]_{\mathfrak{g}}\,.$
We adopt the following abbreviated notations: for $\phi_{i}\in\mathcal{L}$, $\,x_{i}\in\mathfrak{g}\,$, $i=1,\cdots,t$, we put
$\displaystyle x_{12\cdots t}$ $\displaystyle=$
$\displaystyle[\,x_{1},\,[\,x_{2},\,[\,\cdots\,\cdots\,x_{t}\,]\,]\cdots\,]\,,$
$\displaystyle\,\phi_{12\cdots t}\ast x_{12\cdots t}$ $\displaystyle=$
$\displaystyle[\phi_{1}\otimes x_{1},\,[\phi_{2}\otimes
x_{2},\,[\,\cdots\,\cdots\,,\,\phi_{t}\otimes x_{t}\,]\,]\cdots\,]\,.$ (4.16)
Every element of $\mathcal{L}\mathfrak{g}$ is expressed as a linear
combination of $\,\phi_{12\cdots t}\ast x_{12\cdots t}$’s. We have a
projection from $\mathcal{L}\mathfrak{g}$ to $\mathfrak{g}$ that extends the
correspondence:
$\pi:\,\mathcal{L}\mathfrak{g}\ni\,\phi_{12\cdots t}\ast x_{12\cdots
t}\,\longrightarrow\,x_{12\cdots t}\,\in\mathfrak{g}.$ (4.17)
It is obtained by letting all $\phi_{i}$’s in (4.16) equal to
$\phi^{+(0,0,1)}$.
#### 4.4.2 The adjoint representation
$ad_{\mathcal{K}\mathfrak{h}}:\,\mathcal{K}\mathfrak{h}\longrightarrow\,End(\mathcal{L}\mathfrak{g})$
We shall investigate the triangular decomposition of $\mathfrak{g}$-current
algebra $\mathcal{L}\mathfrak{g}$.
###### Definition 4.6.
Let $\mathcal{L}\,\mathfrak{h}$, $\,\mathcal{L}\,\mathfrak{e}$ and $\mathcal{L}\,\mathfrak{f}$ be the Lie subalgebras of the $\mathfrak{g}$-current algebra $\mathcal{L}\mathfrak{g}$ generated by $\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{h}\,$, $\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{e}\,$ and $\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{f}\,$, respectively.
Let $\mathcal{K}\,\mathfrak{h}$ and $\,\mathcal{K}^{\bot}\,\mathfrak{h}$ be
the Lie subalgebras of $\mathcal{L}\,\mathfrak{g}\,$ generated by
$\mathcal{K}\otimes_{\mathbf{R}}\mathfrak{h}\,$ and
$\mathcal{K}^{\bot}\otimes_{\mathbf{R}}\mathfrak{h}$ respectively.
$\mathcal{L}\mathfrak{e}$ consists of linear combinations of elements of the
form $\phi_{12\cdots t}\ast e_{12\cdots t}$ for $\phi_{j}\in\mathcal{L}$ and
$e_{j}\in\mathfrak{e}$, $j=1,2,\cdots,\,t$. Similarly
$\,\mathcal{L}\mathfrak{f}\,$ is generated by $\phi_{12\cdots t}\ast
f_{12\cdots t}$ with $\phi_{j}\in\mathcal{L}\,$ and $\,f_{j}\in\mathfrak{f}$,
$j=1,2,\cdots,\,t\,$. Later we shall see that
$\mathcal{L}\mathfrak{e}=\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{e}$, viewed
as a real Lie algebra. This is a crucial fact in our investigation.
###### Lemma 4.7.
1.
$\mathfrak{h}\,\subset\,\mathcal{K}\,\mathfrak{h}\,,\qquad\mathcal{L}\mathfrak{h}=\mathcal{K}\mathfrak{h}+\mathcal{K}^{\bot}\mathfrak{h}\,.$
2.
$\mathcal{K}\mathfrak{h}\,=\,\mathcal{K}\otimes_{\mathbf{R}}\mathfrak{h}\,.$
$\mathcal{K}\,\mathfrak{h}$ is a commutative subalgebra of
$\mathcal{L}\,\mathfrak{g}\,$, and
$N(\mathcal{K}\mathfrak{h})\,=\,\mathcal{K}\mathfrak{h}\,$. That is,
$\mathcal{K}\mathfrak{h}$ is a Cartan subalgebra of
$\mathcal{L}\mathfrak{g}\,$, where
$N(\mathcal{K}\mathfrak{h})=\\{\,\xi\in\mathcal{L}\mathfrak{g};\,[\kappa,\xi]\in\mathcal{K}\mathfrak{h},\,\forall\kappa\in\mathcal{K}\mathfrak{h}\,\\}$
is the normalizer of $\mathcal{K}\mathfrak{h}$.
3.
$[\,\mathcal{K}\,\mathfrak{h}\,,\,\mathcal{L}\mathfrak{h}\,]\,=\,0\,,\quad[\,\mathcal{K}\,\mathfrak{h}\,,\,\mathcal{L}\,\mathfrak{e}\,]\,=\,\mathcal{L}\mathfrak{e}\,,\quad[\,\mathcal{K}\,\mathfrak{h}\,,\,\mathcal{L}\,\mathfrak{f}\,]\,=\,\mathcal{L}\,\mathfrak{f}\,.$
4.
$[\,\mathcal{L}\mathfrak{h}\,,\,\mathcal{L}\mathfrak{h}\,]\,=\mathcal{K}^{\bot}\mathfrak{h}\,.$ (4.18)
###### Proof.
Let $\phi_{i}\in\mathcal{K}$ and $h_{i}\in\mathfrak{h}$, $i=1,2$. Since $\phi_{1}\phi_{2}\,=\,\phi_{2}\phi_{1}$ by (4.3), we have $[\phi_{1}\otimes h_{1}\,,\,\phi_{2}\otimes h_{2}\,]\,=\,(\phi_{1}\phi_{2})\otimes[h_{1},h_{2}]=0\,$. So $\mathcal{K}\mathfrak{h}=\mathcal{K}\otimes_{\mathbf{R}}\mathfrak{h}$, and $\mathcal{K}\mathfrak{h}$ is a commutative Lie algebra. Now the first assertions follow from the definitions:
$\phi^{+(0,0,1)}\otimes\mathfrak{h}\subset\mathcal{K}\mathfrak{h}$. We shall
prove $N(\mathcal{K}\mathfrak{h})\,=\,\mathcal{K}\mathfrak{h}\,$. Let
$\psi\otimes
x\in(\mathcal{L}\otimes\mathfrak{g})\cap\,N(\mathcal{K}\mathfrak{h})$. Then
$[\phi\otimes h,\,\psi\otimes
x]=(\phi\psi)\otimes[h,x]\in\mathcal{K}\otimes\mathfrak{h}$ for any
$\phi\in\mathcal{K}$ and $h\in\mathfrak{h}$. Then $\phi\psi\in\mathcal{K}$ for
all $\phi\in\mathcal{K}$, so (4.5) implies $\psi\in\mathcal{K}$. Moreover
$[h,x]\in\mathfrak{h}$ for all $h\in\mathfrak{h}$; since $\mathfrak{h}$ is a
Cartan subalgebra, it follows that $x\in\mathfrak{h}$. Hence $\psi\otimes
x\in\mathcal{K}\mathfrak{h}$. $N(\mathcal{K}\mathfrak{h})$ being generated by
$(\mathcal{L}\otimes\mathfrak{g})\cap\,N(\mathcal{K}\mathfrak{h})$, it follows
$N(\mathcal{K}\mathfrak{h})\,=\mathcal{K}\mathfrak{h}$. We proceed to the
proof of the 3rd assertion. Let $\phi\otimes
h\in\mathcal{K}\otimes\mathfrak{h}$ and $\psi\otimes
h^{\prime}\in\mathcal{L}\otimes\mathfrak{h}$ with
$\phi\in\mathcal{K}\,,\psi\in\mathcal{L}$ and
$h\,,\,h^{\prime}\in\mathfrak{h}$. By virtue of (4.7) we have $[\phi\otimes
h,\,\psi\otimes h^{\prime}]=(\phi\psi)\otimes\,[h,h^{\prime}]=0\,$. The Jacobi
identity yields $[\phi\otimes h,\,[\psi_{1}\otimes h_{1},\,\psi_{2}\otimes
h_{2}]\,]=0$ for $\psi_{i}\in\mathcal{L},\,h_{i}\in\mathfrak{h}$, $i=1,2$, and
$[\phi\otimes h,\,\psi_{12\cdots t}\ast h_{12\cdots t}\,]=0$. Hence
$[\,\mathcal{K}\,\mathfrak{h}\,,\mathcal{L}\mathfrak{h}\,]=0$. Let
$\psi\otimes e_{j}\in\mathcal{L}\otimes\mathfrak{e}$. We have $[\phi\otimes
h_{i}\,,\,\psi\otimes
e_{j}\,]\,=\,(\phi\psi)\otimes[h_{i},e_{j}]\,=(\phi\psi)\otimes
c_{ji}e_{j}\,\in\mathcal{L}\mathfrak{e}\,$. A similar argument with the Jacobi
identity yields
$[\phi\otimes h_{i},\,\psi_{j_{1}\cdots j_{t}}\ast e_{j_{1}\cdots
j_{t}}\,]\,=\,\left(c_{j_{1}i}+\cdots
+c_{j_{t}i}\right)(\phi\psi_{j_{1}}\psi_{j_{2}}\cdots\psi_{j_{t}})\otimes
e_{j_{1}\cdots j_{t}}\in\mathcal{L}\mathfrak{e}\,.$ (4.19)
So we have $[\phi\otimes
h_{i}\,,\,\mathcal{L}\mathfrak{e}\,]\subset\mathcal{L}\mathfrak{e}$, hence
$[\,\mathcal{K}\mathfrak{h},\,\mathcal{L}\mathfrak{e}\,]\subset\mathcal{L}\mathfrak{e}$.
Similarly
$\,[\,\mathcal{K}\,\mathfrak{h},\,\mathcal{L}\mathfrak{f}\,]\subset\mathcal{L}\mathfrak{f}$.
Conversely, any element $\psi_{j_{1}\cdots j_{t}}\ast e_{j_{1}\cdots
j_{t}}\,\in\,\mathcal{L}\mathfrak{e}$ satisfies the relation (4.19), and for
some $\phi\otimes h_{i}\in\mathcal{K}\mathfrak{h}$ the coefficient
$(c_{j_{1}i}+\cdots+c_{j_{t}i})$ is non-zero; hence
$[\,\mathcal{K}\mathfrak{h}\,,\,\mathcal{L}\mathfrak{e}\,]\,=\,\mathcal{L}\mathfrak{e}\,$.
Similarly
$[\,\mathcal{K}\mathfrak{h}\,,\,\mathcal{L}\mathfrak{f}\,]\,=\,\mathcal{L}\mathfrak{f}\,$.
Finally we shall prove the fourth assertion:
$[\,\mathcal{L}\mathfrak{h}\,,\,\mathcal{L}\mathfrak{h}\,]\,=\mathcal{K}^{\bot}\mathfrak{h}\,$.
Let $\psi_{1},\,\psi_{2}\in\mathcal{L}$ and $h_{1},\,h_{2}\in\mathfrak{h}$. We
have $[\psi_{1}\otimes h_{1}\,,\psi_{2}\otimes
h_{2}\,]\,=\,(\psi_{1}\psi_{2})\otimes h_{1}h_{2}-\,(\psi_{2}\psi_{1})\otimes
h_{2}h_{1}\,=\,[\psi_{1},\psi_{2}]\otimes h_{2}h_{1}\,$. Here the right hand
side involves the multiplication of matrices with coefficients in
$\mathcal{L}$, while the left hand side lies in $\mathcal{L}\mathfrak{h}$. The relation (4.7)
implies
$[\mathcal{L}\otimes\mathfrak{h},\,\mathcal{L}\otimes\mathfrak{h}]\,\subset\mathcal{K}^{\bot}\mathfrak{h}$
and
$[\,\mathcal{L}\mathfrak{h}\,,\,\mathcal{L}\mathfrak{h}\,]\,=\mathcal{K}^{\bot}\mathfrak{h}\,$.
∎
The examples at the end of subsection 4.2 illustrate assertion 4. We note
that $\mathcal{L}\mathfrak{g}$ is not a soluble Lie algebra.
We regard the associative algebra $\mathcal{L}$ as a $\mathcal{K}$-module, and
we regard $\mathcal{L}$ as the coefficient ring of $\mathcal{L}\mathfrak{g}$.
$\mathcal{K}\mathfrak{h}$ being a Cartan subalgebra of
$\mathcal{L}\mathfrak{g}$, we consider the adjoint representation
$ad_{\mathcal{K}\mathfrak{h}}\,\in
End_{\mathcal{K}}(\mathcal{L}\mathfrak{g})\,$ and its weight space
decomposition. The adjoint representation $ad_{\mathcal{K}\mathfrak{h}}$ is
written as follows:
$\displaystyle ad_{\phi\otimes h}\,(\psi\otimes x)$ $\displaystyle=$
$\displaystyle\,(\phi\,\psi)\,\otimes ad_{h}x\,,$ (4.20) $\displaystyle
ad_{\phi\otimes h}(\psi_{1\cdots m}\ast x_{1\cdots m})$ $\displaystyle=$
$\displaystyle\sum_{i=1}^{m}\phi\,[\psi_{1}\otimes x_{1},[\psi_{2}\otimes
x_{2},\,\cdots[\psi_{i}\otimes ad_{h}x_{i},[\psi_{i+1}\otimes
x_{i+1},\cdots,\psi_{m}\otimes x_{m}]\cdots]\,]\,,$
for $\phi\otimes h\in\mathcal{K}\mathfrak{h}\,$ and $\psi\otimes
x,\,\psi_{1\cdots m}\ast x_{1\cdots m}\in\mathcal{L}\mathfrak{g}\,$. Let
$Hom_{\mathbf{\mathcal{K}}}(\,\mathcal{K}\mathfrak{h}\,,\,\mathcal{L})\,=\,\\{\lambda\,:\mathcal{K}\mathfrak{h}\,\longrightarrow\mathcal{L}\,,\quad\lambda(\kappa)=\alpha(h)\,\phi\quad\mbox{for
$\forall\kappa=\phi\otimes h$}\\},$ (4.21)
with $\alpha\in\mathfrak{h}^{\ast}=Hom(\mathfrak{h},\,\mathbf{C})$ defined by
$\alpha(h)\circ\pi=\pi_{o}\circ\lambda(i\,h)\,,\,\forall h\in\mathfrak{h}$.
Here $i:\mathfrak{h}\hookrightarrow\mathcal{K}\mathfrak{h}$ is the embedding
(4.15), $\,\pi:\mathcal{L}\mathfrak{g}\longrightarrow\mathfrak{g}$ is the
projection (4.17) and $\pi_{o}:\mathcal{L}\longrightarrow\mathbf{C}$ the
projection to the constant term of a Laurent polynomial type spinor. In the
above we regard $\alpha(h)\in End_{\mathbf{C}}(\mathfrak{g})$ as acting by
multiplication. Similarly we regard $\lambda(\kappa)\in
End_{\mathcal{L}}(\mathcal{L}\mathfrak{g})$ as the multiplication of
$\lambda(\kappa)=\alpha(h)\phi\in\mathcal{L}$, which is not necessarily in
$\mathcal{K}$.
For each $\lambda\in
Hom_{\mathcal{K}}(\,\mathcal{K}\,\mathfrak{h}\,,\,\mathcal{L})$, we put
$(\mathcal{L}\mathfrak{g})_{\lambda}=\\{\xi\in\mathcal{L}\mathfrak{g}\,;\quad
ad_{\kappa}\,\xi\,=\,\lambda(\kappa)\,\xi\,,\quad\forall\kappa\in\mathcal{K}\mathfrak{h}\\}\,.$
(4.22)
$\lambda\in Hom_{\mathcal{K}}(\,\mathcal{K}\mathfrak{h}\,,\,\mathcal{L})$ is
called a weight whenever $(\mathcal{L}\mathfrak{g})_{\lambda}\neq 0$.
$(\mathcal{L}\mathfrak{g})_{\lambda}$ is called the weight space of weight
$\lambda$. The set of the non-zero weights is denoted by
${\Phi}_{\mathcal{L}}=\,\\{\lambda\in\,Hom_{\mathcal{K}}(\,\mathcal{K}\mathfrak{h}\,,\,\mathcal{L})\,;\,\lambda\,\mbox{ is a weight},\,\lambda\neq
0\,\\}\,.$
Then $\mathcal{L}\mathfrak{g}$ is the direct sum of the weight spaces:
$\mathcal{L}\mathfrak{g}\,=\,(\mathcal{L}\mathfrak{g})_{0}\,\oplus\,\left(\oplus_{\lambda\in\Phi_{\mathcal{L}}}\,(\mathcal{L}\mathfrak{g})_{\lambda}\right)\,.$
(4.23)
We have
$ad_{\kappa}\,[\,\xi_{1}\,,\,\xi_{2}\,]\,=\,[\,ad_{\kappa}\xi_{1}\,,\,\xi_{2}\,]\,+\,[\,\xi_{1}\,,\,ad_{\kappa}\xi_{2}\,]\,,$
(4.24)
for all $\kappa\in\mathcal{K}\mathfrak{h}\,$,
$\xi_{i}\in\mathcal{L}\mathfrak{g}$, $i=1,2$. This follows inductively from
the definition (4.20) of $ad_{\mathcal{K}\mathfrak{h}}$. Therefore it holds
that if $\xi,\,\eta\in\,\mathcal{L}\mathfrak{g}$ are weight vectors of weights
$\lambda,\,\mu$ then $[\xi\,,\,\eta]$ is a weight vector of weight
$\lambda+\mu\,$:
$[\,(\mathcal{L}\mathfrak{g})_{\lambda}\,,\,(\mathcal{L}\mathfrak{g})_{\mu}\,]\subset\,(\mathcal{L}\mathfrak{g})_{\lambda+\mu}\,.$
(4.25)
###### Proposition 4.8.
The adjoint representation $ad_{\mathfrak{h}}$ of $\mathfrak{g}$ extends to
the adjoint representation $ad_{\mathcal{K}\mathfrak{h}}$ of
$\mathcal{L}\mathfrak{g}$.
###### Proof.
$\phi^{+(0,0,1)}\in\mathcal{K}$ and the abbreviation
$\phi^{+(0,0,1)}\otimes\mathfrak{h}\,\simeq\mathfrak{h}$ imply the embedding
$i:\,\mathfrak{h}\longrightarrow\,\mathcal{K}\mathfrak{h}$. The adjoint
representation $ad_{\mathcal{K}\mathfrak{h}}$ restricts to the adjoint
representation of $\mathfrak{h}$ on $\mathfrak{g}$ if we take
$\phi=\psi=\phi^{+(0,0,1)}$ in (4.20). Then we have
$ad_{h}\circ\pi\,=\,\pi\circ ad_{ih}\,,\quad\forall h\in\mathfrak{h}.$ (4.26)
Conversely we see from (4.20) that the action of the representation
$ad_{\mathcal{K}\mathfrak{h}}$ on $\mathcal{L}\mathfrak{g}$ comes from
$ad_{\mathfrak{h}}\in End(\mathfrak{g})$. If $ad_{h}y=0$ for
$h\in\mathfrak{h}$ and $y\in\mathfrak{g}$ then $ad_{\phi\otimes h}(\psi\otimes
y)=0$ for all $\phi\in\mathcal{K}$ and $\psi\in\mathcal{L}$. In fact, since
$[\mathcal{K},\mathcal{L}]=0$ we have $[\phi\otimes h,\,\psi\otimes
y]=(\phi\cdot\psi)\otimes[h,y]=0$. ∎
###### Proposition 4.9.
1.
The roots of the adjoint representation $ad_{\mathcal{K}\mathfrak{h}}$
on $\mathcal{L}\mathfrak{g}$ and those of $ad_{\mathfrak{h}}$ on $\mathfrak{g}$
correspond bijectively: $\Phi_{\mathcal{L}}\simeq\Phi$.
2.
For $\lambda\in\Phi$ it holds that
$(\mathcal{L}\mathfrak{g})_{0}\,=\,(\mathcal{L}\mathfrak{h})\,,\quad(\mathcal{L}\mathfrak{g})_{\lambda}\,=\,\mathcal{L}\otimes\,\mathfrak{g}_{\lambda}.$
(4.27)
3.
$\mathcal{L}\mathfrak{g}$ is the direct sum of the weight spaces:
$\mathcal{L}\mathfrak{g}\,=\,\mathcal{K}\mathfrak{h}\,\oplus\,\mathcal{K}^{\bot}\mathfrak{h}\,\oplus\,\oplus_{\lambda\in\Phi}\,(\mathcal{L}\otimes\,\mathfrak{g}_{\lambda})\,.$
(4.28)
###### Proof.
Let $\lambda\in\Phi_{\mathcal{L}}\,$. There exists a weight vector
$\xi\in\mathcal{L}\mathfrak{g}$ with the weight $\lambda$: $\,[\phi\otimes
h\,,\,\xi\,]\,=\,\lambda(\phi\otimes h)\xi\,$ for any $\phi\otimes
h\in\mathcal{K}\mathfrak{h}\,$. We define $\check{\lambda}\in
Hom(\,\mathfrak{h}\,,\mathbf{C}\,)$ by the formula
$\check{\lambda}(h)=\lambda(\phi^{+(0,0,1)}\otimes h)$. Then $\check{\lambda}$
becomes a root of the representation $ad_{\mathfrak{h}}$ on $\mathfrak{g}$:
$\,[h,x]=[\,\phi^{+(0,0,1)}\otimes h\,,\,\phi^{+(0,0,1)}\otimes
x\,]\,=\,\check{\lambda}(h)x\,$. Conversely let $\xi=\psi_{1\cdots m}\ast
x_{1\cdots m}\in\mathcal{L}\mathfrak{g}$. We suppose that each
$x_{i}\in\mathfrak{g}$ is a weight vector with root $\beta_{i}\in\Phi$,
$i=1,\cdots,m$. General elements of $\mathcal{L}\mathfrak{g}$ are linear
combinations of such vectors. It follows from (4.20) that
$ad_{\phi\otimes
h}\xi=\,\left(\Sigma_{i=1}^{m}\beta_{i}(h)\phi\right)\,\xi\,,\quad\forall\phi\otimes
h\in\mathcal{K}\mathfrak{h}.$
Hence the map $\phi\otimes h\longrightarrow\Sigma_{i=1}^{m}\beta_{i}(h)\phi$
belongs to $\Phi_{\mathcal{L}}$, and $\xi$ is a
weight vector of $ad_{\mathcal{K}\mathfrak{h}}$. The relation extends linearly to
$\mathcal{L}\mathfrak{g}$. Thus we have proved the first assertion. From
(4.20) we have
$\mathcal{L}\otimes\mathfrak{g}_{\alpha}\,\subset\,(\mathcal{L}\mathfrak{g})_{\alpha}\,$
for any $\alpha\in\Phi$. Lemma 4.7 shows that
$\mathcal{L}\mathfrak{h}\,\subset\,(\mathcal{L}\mathfrak{g})_{0}$. Then (4.19)
yields that $\phi_{i_{1}i_{2}\cdots i_{t}}\ast e_{i_{1}i_{2}\cdots i_{t}}$
and $\phi_{i_{1}i_{2}\cdots i_{t}}\ast f_{i_{1}i_{2}\cdots i_{t}}$ are
weight vectors. Thus all Lie products of generators $\left\\{\,\phi\otimes
e_{i}\,,\,\phi\otimes f_{i}\,,\,\phi\otimes
h_{i}\,;\,\phi\in\mathcal{L}\,,\,i=1,\cdots,l\,\,\right\\}$ are weight
vectors. Since every element of $\mathcal{L}\mathfrak{g}$ is a linear
combination of products of these weight vectors we deduce from (4.23) and the
fact $\Phi\simeq\Phi_{\mathcal{L}}$ that
$\mathcal{L}\mathfrak{g}\,=\,(\mathcal{L}\mathfrak{g})_{0}\,\oplus\,\oplus_{\alpha\in\Phi}(\mathcal{L}\mathfrak{g})_{\alpha}\,.$
(4.29)
Now the simple roots $\alpha_{1},\cdots,\alpha_{l}\in\Phi$ are linearly
independent, so the only monomials which have weight $\alpha_{j}$ are the
weight vectors of $\mathcal{L}\otimes\mathfrak{g}_{\alpha_{j}}$. We conclude
$\,(\mathcal{L}\mathfrak{g})_{\alpha_{j}}\,=\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}_{\alpha_{j}}\,.$
(4.30)
Hence
$\,(\mathcal{L}\mathfrak{g})_{\alpha}\,=\,\mathcal{L}\otimes_{\mathbf{C}}\mathfrak{g}_{\alpha}\,$
for all $\alpha\in\Phi$. Therefore (4.29) becomes
$\mathcal{L}\mathfrak{g}\,=\,(\mathcal{L}\mathfrak{g})_{0}\,\oplus\,\oplus_{\alpha\in\Phi}(\mathcal{L}\otimes\mathfrak{g}_{\alpha})\,.$
(4.31)
Now we shall prove $(\mathcal{L}\mathfrak{g})_{0}=\mathcal{L}\mathfrak{h}\,$.
We regard $\mathcal{L}\mathfrak{g}$ as a $\mathcal{K}\mathfrak{h}$-module.
Hence $\mathcal{L}\mathfrak{h}$ is a $\mathcal{K}\mathfrak{h}$-submodule.
$\mathcal{L}\mathfrak{h}$ is contained in $(\mathcal{L}\mathfrak{g})_{0}$ by
Lemma 4.7. If $\,\mathcal{L}\mathfrak{h}\neq(\mathcal{L}\mathfrak{g})_{0}\,$,
then the $\mathcal{K}\mathfrak{h}$-module
$(\mathcal{L}\mathfrak{g})_{0}/\mathcal{L}\mathfrak{h}$ would have a
1-dimensional submodule $M/\mathcal{L}\mathfrak{h}$ on which
$\mathcal{K}\mathfrak{h}$ acts with weight $0$. That is,
$[\,\mathcal{K}\mathfrak{h},\,M/\mathcal{L}\mathfrak{h}\,]=0$. Then
$[\,\mathcal{K}\mathfrak{h},\,M\,]\subset\,\mathcal{L}\mathfrak{h}\,$ and $M$
would be a $\mathcal{K}\mathfrak{h}$-submodule of $\mathcal{L}\mathfrak{h}$,
which is a contradiction. ∎
We know that any weight $\lambda\in\Phi$ is of the form
$\lambda=\sum_{i=1}^{l}\,k_{i}\alpha_{i}$ with $k_{i}\in\mathbf{Z}$, and that a
non-zero weight has all $k_{i}\geq 0$ or all $k_{i}\leq 0$. Therefore
$\displaystyle\mathcal{L}\mathfrak{e}$ $\displaystyle=$
$\displaystyle\sum_{\lambda\in\Phi^{+}}\mathcal{L}\otimes_{\mathbf{R}}\mathfrak{g}_{\lambda}\,,$
(4.32) $\displaystyle\mathcal{L}\mathfrak{f}$ $\displaystyle=$
$\displaystyle\sum_{\lambda\in\Phi^{-}}\mathcal{L}\otimes_{\mathbf{R}}\mathfrak{g}_{\lambda}\,.$
(4.33)
From the above discussion we have the following
###### Theorem 4.10.
The $\mathfrak{g}$-current algebra $\mathcal{L}\mathfrak{g}$ has the following
triangular decomposition
$\displaystyle\mathcal{L}\mathfrak{g}$ $\displaystyle=$
$\displaystyle\,\mathcal{L}\mathfrak{e}\,\oplus\,\mathcal{L}\mathfrak{h}\,\oplus\,\mathcal{L}\mathfrak{f}\,.$
$\displaystyle\mathcal{L}\mathfrak{e}\,$ $\displaystyle=$
$\displaystyle\,\mathcal{L}\otimes\,\mathfrak{e}\,,\quad\mathcal{L}\mathfrak{f}\,=\,\mathcal{L}\otimes\,\mathfrak{f}\,,$
$\displaystyle\mathcal{L}\mathfrak{h}\,$ $\displaystyle=$
$\displaystyle\,\mathcal{K}\mathfrak{h}\,\oplus\,\mathcal{K}^{\bot}\mathfrak{h}\,.$
###### Corollary 4.11.
$\mathcal{L}\mathfrak{g}\ominus\,(\,\mathcal{L}\otimes\mathfrak{g}\,)\,=\,\mathcal{K}^{\bot}\mathfrak{h}\,.$
(4.34)
## 5 Central extensions of the $\mathfrak{g}$-current algebra
### 5.1 Central extensions of the $\mathfrak{g}$-current algebra
$\mathcal{L}\mathfrak{g}$
Let $(V,\,[\,\cdot\,,\,\cdot\,]_{V}\,)$ be a quaternion Lie algebra. A central
extension of $(V,\,[\,\cdot\,,\,\cdot\,]_{V}\,)$ is a quaternion Lie algebra
$(W,\,[\,\cdot\,,\,\cdot\,]_{W}\,)$ such that $W=V\oplus Z$ (direct sum) and
$Z$ is contained in the center of $W$;
$\,Z\,\subset\\{w\in W\,:\,[w,x]_{W}=0\,,\forall x\in W\\}\,,$
and such that $\,[\,\cdot\,,\,\cdot\,]_{W}$ restricts to
$\,[\,\cdot\,,\,\cdot\,]_{V}$.
Let $\mathfrak{g}$ be a simple Lie algebra and let $\mathcal{L}\mathfrak{g}$
be the $\mathfrak{g}$-current algebra. We denote the invariant bilinear form
(Killing form) on $\mathfrak{g}$ by
$(x|y)=\,Trace\,(xy).$
We have $(xy|z)=(yz|x)$. In Proposition 3.9 we introduced 2-cocycles
$\\{c_{k}\,;\,k=0,1,2\,\\}$ on the space of currents $\mathcal{L}$. We extend
them to 2-cocycles on the $\mathfrak{g}$-current algebra
$\mathcal{L}\mathfrak{g}$ by
$c_{k}(\,\phi_{1}\otimes x\,,\,\phi_{2}\otimes
y\,)=\,(x|y)\,c_{k}(\phi_{1},\phi_{2})\,,\quad k=0,1,2,$ (5.1)
for $\phi_{1},\,\phi_{2}\in\mathcal{L}$ and $x,\,y\in\mathfrak{g}$. Associated
to the 2-cocycles $c_{k}$, $k=0,1,2$, we have the central extensions of
$\mathcal{L}\mathfrak{g}\,$.
###### Theorem 5.1.
Let $a_{k}$, $k=0,1,2$, be three indefinite numbers. Put
$\mathcal{L}\mathfrak{g}(a)\,=\,\mathcal{L}\mathfrak{g}\oplus(\oplus_{k=0,1,2}\mathbf{C}a_{k})\,.$
(5.2)
We endow $\mathcal{L}\mathfrak{g}(a)\,$ with the following bracket:
$\displaystyle[\,\phi\otimes x\,,\,\psi\otimes y\,]^{a}$ $\displaystyle=$
$\displaystyle[\phi\otimes x\,,\,\psi\otimes
y\,]+(x|y)\,\sum_{k=0}^{2}\,c_{k}(\phi,\psi)\,a_{k}\,,$
$\displaystyle[\,a_{k}\,,\,\phi\otimes x\,]^{a}$ $\displaystyle=$
$\displaystyle 0\,,\,\,k=0,1,2,$ (5.3)
for $\phi\otimes x\,,\,\psi\otimes y\in\mathcal{L}\otimes\mathfrak{g}$. The
bracket is then extended to all of $\mathcal{L}\mathfrak{g}(a)$. The conjugation automorphism
$\sigma$ is extended to $\mathcal{L}\mathfrak{g}(a)$ by $\,\sigma
a_{k}=a_{k}$, $k=0,1,2$.
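As a consistency check (a worked equation, using only the symmetry of
$(\cdot|\cdot)$ and the antisymmetry $c_{k}(\psi,\phi)=-c_{k}(\phi,\psi)$ of
Lie-algebra 2-cocycles), the extended bracket is skew-symmetric:
$[\,\psi\otimes y\,,\,\phi\otimes x\,]^{a}\,=\,-\,[\,\phi\otimes x\,,\,\psi\otimes y\,]\,-\,(x|y)\,\sum_{k=0}^{2}\,c_{k}(\phi,\psi)\,a_{k}\,=\,-\,[\,\phi\otimes x\,,\,\psi\otimes y\,]^{a}\,.$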
We shall further complete the central extension of the current algebra
$\mathcal{L}\mathfrak{g}$ by adjoining a derivation coming from the radial
vector field $\mathbf{n}$ on $S^{3}$.
###### Lemma 5.2.
The derivation $\mathbf{n}$ on $S^{3}\mathbf{H}$ restricts to an outer
derivation of $\mathcal{L}$.
$\mathbf{n}$ is extended to an outer derivation of the Lie algebra
$\mathcal{L}\mathfrak{g}$ by
$\,\mathbf{n}\,(\phi\otimes x\,)\,=\,(\,\mathbf{n}\phi\,)\otimes
x,\qquad\,\phi\in\mathcal{L}\,,\,x\in\mathfrak{g}\,.$ (5.4)
Then $\mathbf{n}$ acts on $\mathcal{L}\mathfrak{g}(a)$ by annihilating the
$a_{k}$’s.
From Propositions 3.10 and 3.11 we have
$\displaystyle\,[\,\mathbf{n}(\phi_{1}\otimes x_{1})\,,\,\phi_{2}\otimes
x_{2}\,]^{a}\,+\,[\,\phi_{1}\otimes x_{1}\,,\,\mathbf{n}(\phi_{2}\otimes
x_{2})\,]^{a}$
$\displaystyle\,=\,(\mathbf{n}\phi_{1}\cdot\phi_{2})\otimes\,x_{1}x_{2}\,-\,(\phi_{2}\cdot\mathbf{n}\phi_{1})\otimes\,x_{2}x_{1}\,\,+\,(\phi_{1}\cdot\mathbf{n}\phi_{2})\otimes\,x_{1}x_{2}\,\,-\,(\mathbf{n}\phi_{2}\cdot\phi_{1})\otimes\,x_{2}x_{1}\,$
$\displaystyle+\,(x_{1}|x_{2})\sum_{k}\left(\,{c}_{k}(\mathbf{n}\phi_{1},\,\phi_{2})\,+\,c_{k}(\phi_{1},\,\mathbf{n}\phi_{2})\right)a_{k}\,$
$\displaystyle\,=\,\mathbf{n}(\phi_{1}\cdot\phi_{2})\otimes
x_{1}x_{2}\,-\,\mathbf{n}(\phi_{2}\cdot\phi_{1})\otimes
x_{2}x_{1}\,=\mathbf{n}\left(\,[\,\phi_{1}\otimes x_{1}\,,\,\phi_{2}\otimes
x_{2}\,]^{a}\,\,\right)\,.$ (5.5)
Hence $\mathbf{n}$ is a derivation that acts on the Lie algebra
$\mathcal{L}\mathfrak{g}(a)\,$.
###### Theorem 5.3.
Let $a_{k}$, $k=0,1,2$, be the above indefinite elements and let $\,{\rm n}$
be a further indefinite element. We consider the $\mathbf{C}$-vector space:
$\widehat{\mathfrak{g}\,}\,=\,\mathcal{L}\mathfrak{g}\oplus(\oplus_{k=0}^{2}\,\mathbf{C}\,a_{k})\oplus(\mathbf{C}\,{\rm
n})\,.$ (5.6)
We endow $\,\widehat{\mathfrak{g}}\,$ with the following bracket extended to
$\,\widehat{\mathfrak{g}}\,$:
$\displaystyle[\,\phi\otimes x\,,\,\psi\otimes
y\,]_{\widehat{\mathfrak{g}}}\,=\,[\,\phi\otimes x\,,\,\psi\otimes y\,]^{a}$
$\displaystyle\quad=\,[\,\phi\otimes x\,,\,\psi\otimes
y\,]\,+\,(x|y)\,\sum_{k=0}^{2}\,c_{k}(\phi,\psi)\,a_{k}\,,$ (5.7)
$\displaystyle\,[\,a_{k}\,,\,\phi\otimes
x\,]_{\widehat{\mathfrak{g}}}\,=0\,,\quad[\,{\rm n}\,,\,\phi\otimes
x\,]_{\widehat{\mathfrak{g}}}=\,\mathbf{n}\phi\otimes x\,,$ (5.8)
$\displaystyle[\,{\rm n}\,,\,a_{k}\,]_{\widehat{\mathfrak{g}}}\,=0,\quad
k=0,1,2\,,$
for $x,y\in\mathfrak{g}$ and $\phi,\,\psi\,\in\,\mathcal{L}\,$. The involution
$\sigma$ is extended to $\widehat{\mathfrak{g}}\,$ by
$\sigma(\,\phi\otimes x)=\sigma\phi\otimes x\,,\quad\sigma
a_{k}=a_{k}\,,\quad\sigma{\rm n}\,={\rm n}\,.$
Then we get a quaternion Lie algebra
$\left(\,\widehat{\mathfrak{g}}\,,\,[\,\cdot,\cdot\,]_{\widehat{\mathfrak{g}}}\,\right)$.
###### Proof.
We write simply $[\,,\,]$ instead of $[\,,\,]_{\widehat{\mathfrak{g}}}\,$. It
is enough to prove the following Jacobi identity:
$[\,[\,{\rm n}\,,\,\phi_{1}\otimes x_{1}\,]\,,\,\phi_{2}\otimes
x_{2}\,]+[\,[\phi_{1}\otimes x_{1},\phi_{2}\otimes x_{2}\,]\,,\,{\rm
n}\,]\,+\,[\,[\phi_{2}\otimes x_{2},\,{\rm n}\,],\,\phi_{1}\otimes
x_{1}\,]=0.$
From the defining equation (5.8) and the equation (5.5), the sum of the 1st
and 3rd terms is equal to
$[\,[\,{\rm n},\,\phi_{1}\otimes x_{1}]\,,\,\phi_{2}\otimes
x_{2}\,]\,+\,[\,\phi_{1}\otimes x_{1}\,,\,[\,{\rm n}\,,\,\phi_{2}\otimes
x_{2}]\,]\,=\,\mathbf{n}\left(\,[\,\phi_{1}\otimes x_{1}\,,\,\phi_{2}\otimes
x_{2}\,]\,\,\right)\,,$
which is equal to $\,-\,[\,[\phi_{1}\otimes x_{1},\phi_{2}\otimes
x_{2}\,]\,,\,{\rm n}\,]$. ∎
###### Proposition 5.4.
The centralizer of ${\rm n}\in\,\widehat{\mathfrak{g}}\,$ is given by
$(\,\mathcal{L}[0]\,\otimes_{\mathbf{C}}\mathfrak{g}\,)\,\oplus\,(\oplus_{k}\mathbf{C}a_{k}\,)\oplus\mathbf{C}{\rm
n}\,.$
Here $\,\mathcal{L}[0]$ is the subspace in $\mathcal{L}$ generated by
$\phi_{1}\cdots\phi_{n}$ with $\phi_{i}$ being
$\phi_{i}=\phi^{\pm(m_{i},l_{i},k_{i})}$ such that
$\sum_{i;\,\phi_{i}=\phi^{+(m_{i},l_{i},k_{i})}}\,m_{i}-\sum_{i;\,\phi_{i}=\phi^{-(m_{i},l_{i},k_{i})}}\,(m_{i}+3)=0\,.$
$\,\mathcal{L}[0]\mathfrak{g}\,$ is the subalgebra of
$\widehat{\mathfrak{g}}\,$ generated by
$\,\mathcal{L}[0]\,\otimes_{\mathbf{C}}\mathfrak{g}\,$.
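As an illustration of the degree condition (a worked example, consistent with
Lemma 5.9 below, where $\kappa=\phi^{+(1,0,1)}\in\mathcal{L}[1]$ and
$\lambda=\phi^{-(0,0,0)}\in\mathcal{L}[-3\,]$): a single factor
$\phi^{+(m,l,k)}$ contributes $m$ to the sum and a single factor
$\phi^{-(m,l,k)}$ contributes $-(m+3)$, so
$\kappa\,\lambda\,=\,\phi^{+(1,0,1)}\,\phi^{-(0,0,0)}\quad\mbox{gives}\quad 1-(0+3)=-2\neq 0\,,$
hence $\kappa\lambda\notin\mathcal{L}[0]$, whereas for example
$\phi^{+(3,l,k)}\,\phi^{-(0,l^{\prime},k^{\prime})}$ gives $3-(0+3)=0$ and lies
in $\mathcal{L}[0]$.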
###### Definition 5.5.
We call the quaternion Lie algebra $\,\widehat{\mathfrak{g}}\,$ the affine
current algebra over $\mathfrak{g}$ :
$\widehat{\mathfrak{g}\,}\,=\,\mathcal{L}\mathfrak{g}\oplus(\oplus_{k=0}^{2}\,\mathbf{C}\,a_{k})\oplus(\mathbf{C}\,{\rm
n})\,.$ (5.9)
### 5.2 Root space decomposition of the current algebra
$\,\widehat{\mathfrak{g}}\,$
Let
$\widehat{\mathfrak{g}\,}\,=\,\mathcal{L}\mathfrak{g}\oplus(\oplus_{k=0}^{2}\,\mathbf{C}\,a_{k})\oplus(\mathbf{C}\,{\rm
n})\,$ be the affine current algebra over $\mathfrak{g}$, Definition 5.5. Let
$\widehat{\mathfrak{h}}\,=\,\mathfrak{h}\oplus(\oplus_{k}\mathbf{C}a_{k})\oplus(\mathbf{C}{\rm
n}\,)\,.$ (5.10)
Here we applied the identification $\mathfrak{h}\ni
h\stackrel{{\scriptstyle\simeq}}{{\longrightarrow}}\phi^{+(0,0,1)}\otimes
h\in\mathcal{L}\mathfrak{g}$. $\widehat{\mathfrak{h}}\,$ is a commutative
subalgebra of $\widehat{\mathfrak{g}}$. From the discussion in previous
sections, in particular by virtue of Theorem 4.10, Corollary 4.11, (4.27) and
(5.6) , we know that any element $\xi\,\in\widehat{\mathfrak{g}}$ is written
in the form:
$\displaystyle\xi$ $\displaystyle=$ $\displaystyle\,x\,+\,\sum
p_{j}a_{j}+q{\rm n}\,,\quad x\in\mathcal{L}\mathfrak{g}\,,\quad
p_{j},\,q\in\mathbf{C}\,,\,\,j=0,1,2\,,\,$ (5.11) $\displaystyle x$
$\displaystyle=$
$\displaystyle\,y+\,\sum_{\alpha\in\Phi}\,\varphi_{\alpha}\otimes
x_{\alpha}\,,\quad\varphi_{\alpha}\in\mathcal{L}\,,\quad
x_{\alpha}\in\mathfrak{g}_{\alpha}\,,$ $\displaystyle y$ $\displaystyle=$
$\displaystyle\kappa+z\in\mathcal{L}\mathfrak{h}\,,\quad\kappa\in\mathcal{K}\mathfrak{h}\,,\quad
z\in\mathcal{K}^{\bot}\mathfrak{h}\,.$
Any element of $\,\widehat{\mathfrak{h}}\,$ is written in the form
$\,\hat{h}=\phi^{+(0,0,1)}\otimes h+\sum s_{k}a_{k}+\,t\,{\rm n}\,,\quad
h\in\mathfrak{h}\,,\,s_{k},\,t\in\mathbf{C}\,.$
From Lemma 4.7 we have $[\phi\otimes h\,,y\,]=0$ for any $\phi\in\mathcal{K}$,
$h\in\mathfrak{h}$ and $y\in\mathcal{L}\mathfrak{h}$, in particular
$[\phi^{+(0,0,1)}\otimes h\,,y\,]=0$. So we see that the adjoint action of
$\,\hat{h}=h+\sum s_{i}a_{i}+t{\rm n}\in\widehat{\mathfrak{h}}\,$ on
$\,\xi=y+\,\sum_{\alpha}\,\varphi_{\alpha}\otimes x_{\alpha}+\sum
p_{j}a_{j}+q{\rm n}\in\widehat{\mathfrak{g}}\,$ becomes
$ad(\hat{h})(\xi)\,=\,\sum_{\alpha}\,\alpha(h)\varphi_{\alpha}\otimes
x_{\alpha}\,+\,t\,\sum_{\alpha}\,(\mathbf{n}\,\varphi_{\alpha})\otimes
x_{\alpha}\,+\,t\,[{\rm n}\,,\,y]\,.$ (5.12)
Let $\widehat{\mathfrak{h}}^{\ast}$ be the dual space of
$\widehat{\mathfrak{h}}$:
$\widehat{\mathfrak{h}}^{\ast}\,=\,Hom_{\mathbf{C}}(\widehat{\mathfrak{h}}\,,\mathbf{C})\,.$
An element $\alpha$ of the dual space $\mathfrak{h}^{*}$ of $\mathfrak{h}$ is
regarded as an element of $\,\widehat{\mathfrak{h}}^{\,\ast}$ by putting
$\left\langle\,\alpha,a_{k}\,\right\rangle=\left\langle\,\alpha,{\rm
n}\,\right\rangle=0,\quad k=0,1,2\,.$
So $\Phi\subset\mathfrak{h}^{*}$ is seen to be a subset of
$\,\widehat{\mathfrak{h}}^{\,*}$. We define
$\delta\,,\,\Lambda_{k}\,\in\widehat{\mathfrak{h}}^{\,*}$, $k=0,1,2$, by
$\displaystyle\left\langle\delta,\alpha_{i}^{\vee}\,\right\rangle$
$\displaystyle=\left\langle\,\Lambda_{k},\alpha_{i}^{\vee}\,\right\rangle=0,$
$\displaystyle\left\langle\,\delta,a_{k}\,\right\rangle$
$\displaystyle=0\,,\qquad\left\langle\,\delta,{\rm n}\,\right\rangle=1\,,$
(5.13) $\displaystyle\left\langle\,\Lambda_{k},a_{k}\,\right\rangle$
$\displaystyle=1\,,\qquad\left\langle\,\Lambda_{k},{\rm
n}\,\right\rangle=0\,,\quad 1\leqq i\leqq l,\,\,k=0,1,2\,.$
Then
$\alpha_{1},\,\cdots\,,\alpha_{l},\,\delta,\,\Lambda_{0},\,\Lambda_{1},\,\Lambda_{2}\,$
give a basis of $\widehat{\mathfrak{h}}^{\ast}$.
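Concretely, pairing these against a general element
$\,\hat{h}=\phi^{+(0,0,1)}\otimes h+\sum_{k}s_{k}a_{k}+\,t\,{\rm n}\in\widehat{\mathfrak{h}}\,$
gives (a worked consequence of (5.13), assuming the intended normalization
$\left\langle\,\Lambda_{k},a_{j}\,\right\rangle=\delta_{kj}$):
$\left\langle\,\delta\,,\,\hat{h}\,\right\rangle\,=\,t\,,\qquad\left\langle\,\Lambda_{k}\,,\,\hat{h}\,\right\rangle\,=\,s_{k}\,,\qquad\left\langle\,\alpha_{i}\,,\,\hat{h}\,\right\rangle\,=\,\left\langle\,\alpha_{i}\,,\,h\,\right\rangle\,,\quad k=0,1,2\,,\ 1\leq i\leq l\,.$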
We shall investigate the decomposition of $\,\widehat{\mathfrak{g}}\,$ into a
direct sum of the simultaneous eigenspaces of $ad\,(\hat{h})\,$,
$\hat{h}\in\widehat{\mathfrak{h}}\,$. For a 1-dimensional representation
$\lambda\in\widehat{\mathfrak{h}}^{\ast}$ we put
$\widehat{\mathfrak{g}}_{\lambda}\,=\,\left\\{\xi\in\widehat{\mathfrak{g}}\,;\quad\,[\,\hat{h},\,\xi\,]_{\widehat{\mathfrak{g}}}\,=\,\langle\lambda,\hat{h}\rangle\,\xi\quad\mbox{
for }\,\forall\hat{h}\in\,\widehat{\mathfrak{h}}\,\right\\}.$ (5.14)
$\lambda$ is called a root of the representation
$\left(\,\widehat{\mathfrak{g}}\,,\,ad(\widehat{\mathfrak{h}}\,)\right)$ if
$\lambda\neq 0$ and $\,\widehat{\mathfrak{g}}_{\lambda}\neq 0$.
$\,\widehat{\mathfrak{g}}_{\lambda}$ is called the root space of $\lambda\,$.
Let $\widehat{\Phi}$ be the set of roots:
$\widehat{\Phi}=\left\\{\lambda=\alpha+\sum_{j=0}^{2}n_{j}\Lambda_{j}\,+\,k_{0}\delta\in\widehat{\mathfrak{h}}^{\ast}\,;\,\alpha=\sum_{i=1}^{l}\,k_{i}\alpha_{i}\in\Phi,\,k_{i},\,n_{j}\in\mathbf{Z},\,0\leq
i\leq l,\,j=0,1,2\,\right\\}.$
The set
$\widehat{\Pi}=\\{\,\alpha_{1},\cdots,\alpha_{l},\,\Lambda_{0},\Lambda_{1},\Lambda_{2},\,\delta\,\\}$
forms a fundamental basis of $\,\widehat{\Phi}\,$. Thus we have
$\widehat{\mathfrak{g}}\,=\,\widehat{\mathfrak{g}}_{0}\,\oplus\,\left(\oplus_{\lambda\in\widehat{\Phi}}\,\widehat{\mathfrak{g}}_{\lambda}\,\right)\,\,.$
(5.15)
We investigate the root spaces $\,\widehat{\mathfrak{g}}_{\lambda}\,$ for
$(i)\,\lambda=\alpha+k\delta,\,0\neq\alpha\in\Phi\,,\quad(ii)\,\lambda=k\delta,\quad
k\neq 0,\quad(iii)\,\lambda=0\delta\,\quad\mbox{and }\,(iv)\,\lambda=0\,.$
We may assume that the weight vector $\xi\in\widehat{\mathfrak{g}}$ of each
weight $\lambda$ takes the form
$\xi=y+\sum_{\alpha\in\Phi}\varphi_{\alpha}\otimes x_{\alpha}$, because the
other components do not contribute to the weight; see (5.12). Let $x\in\mathfrak{g}_{\alpha}$
for $\alpha\in\Phi$, $\alpha\neq 0$, and let $\varphi\in\mathcal{L}[m]$ for
$m\in\mathbf{Z}$, that is, $\varphi$ is $m$-homogeneous (3.24). From (5.12)
we have
$\displaystyle[\,\phi\otimes h,\,\varphi\otimes x\,]_{\widehat{\mathfrak{g}}}$
$\displaystyle=$
$\displaystyle(\phi\,\varphi)\otimes[\,h,\,x\,]\,=\left\langle\alpha,h\right\rangle\varphi\otimes
x,$ $\displaystyle[\,\mathbf{n},\,\varphi\otimes
x\,]_{\widehat{\mathfrak{g}}}$ $\displaystyle=$
$\displaystyle\frac{m}{2}\varphi\otimes x,$
for any $\phi\otimes h\in\mathcal{K}\mathfrak{h}$. That is,
$[\,\hat{h}\,,\varphi\otimes
x]_{\widehat{\mathfrak{g}}}=\left\langle\frac{m}{2}\delta+\alpha\,,\,\hat{h}\,\right\rangle(\varphi\otimes
x)\,,$
for every $\hat{h}\in\widehat{\mathfrak{h}}$. Then we see
$\mathcal{L}[m]\otimes\mathfrak{g}_{\alpha}\,\subset\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}$.
Now let $y\in\mathcal{L}\mathfrak{h}$. It is written as a linear combination
of terms of the form $y^{\prime}=\phi_{i_{1}i_{2}\cdots i_{t}}\otimes
h_{i_{1}i_{2}\,\cdots i_{t}}$ with $h_{j}\in\mathfrak{h}$ and
$\phi_{j}\in\mathcal{L}[m_{j}\,]$, $j=i_{1},\cdots,i_{t}$, so that
$\mathbf{n}y^{\prime}=(\frac{1}{2}\,\sum_{k=1}^{t}\,m_{k}\,\,)\phi_{i_{1}i_{2}\cdots
i_{t}}\otimes h_{i_{1}i_{2}\,\cdots i_{t}}\,,$
and we find that $y^{\prime}\in\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}$
with $m=\sum_{k=1}^{t}\,m_{k}\in\mathbf{Z}$. Hence
$\mathcal{L}\mathfrak{h}\,\subset\,\,\widehat{\mathfrak{g}}_{0\delta}\oplus\oplus_{m\neq
0}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,,$
with $\,\widehat{\mathfrak{g}}_{0\delta}=\mathcal{L}[0]\otimes\mathfrak{h}\,$,
and
$\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}=\mathcal{L}[m]\otimes\mathfrak{h}\,$.
###### Proposition 5.6.
We have the following relations:
1.
$\left[\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,,\,\widehat{\mathfrak{g}}_{\frac{n}{2}\delta+\beta}\,\right]_{\widehat{\mathfrak{g}}}\,\subset\,\widehat{\mathfrak{g}}_{\frac{m+n}{2}\delta+\alpha+\beta}\,\,,$
(5.16)
for $\alpha,\,\beta\in\Phi$ and for $m,n\in\mathbf{Z}$.
2.
$\left[\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,,\,\widehat{\mathfrak{g}}_{\frac{n}{2}\delta}\,\right]_{\widehat{\mathfrak{g}}}\,\subset\,\widehat{\mathfrak{g}}_{\frac{m+n}{2}\delta}\,,$
(5.17)
for $m,n\in\mathbf{Z}$.
The Proposition is proved by a standard argument using the properties of the
Lie bracket.
###### Theorem 5.7.
1.
The set
$\widehat{\Pi}\,=\,\left\\{\frac{m}{2}\,\delta+\alpha\,;\quad\alpha\in\Pi\,,\,m\in\mathbf{Z}\,\right\\}\,\bigcup\,\left\\{\frac{m}{2}\,\delta\,;\quad m\in\mathbf{Z}\,\right\\}$ (5.18)
is a base of $\,\widehat{\Phi}$.
2.
For $\alpha\in\Phi$, $\alpha\neq 0$ and $m\in\mathbf{Z}$, we have
$\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,=\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{g}_{\alpha}\,.$
(5.19)
3.
$\displaystyle\,\widehat{\mathfrak{g}}_{0\delta}$ $\displaystyle=$
$\displaystyle\mathcal{L}[0]\otimes_{\mathbf{C}}\mathfrak{h}\,\supset\widehat{\mathfrak{h}},$
(5.20) $\displaystyle\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}$
$\displaystyle=$
$\displaystyle\,\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{h}\,,\quad\mbox{for
$0\neq m\in\mathbf{Z}$ . }\,$ (5.21)
4.
$\widehat{\mathfrak{g}}$ has the following decomposition:
$\widehat{\mathfrak{g}}\,=\,\widehat{\mathfrak{g}}_{0\delta}\oplus\left(\oplus_{0\neq
m\in\mathbf{Z}}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,\right)\oplus\,\,\left(\oplus_{\alpha\in\Phi,\,m\in\mathbf{Z}}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,\right)$
(5.22)
###### Proof.
First we prove the second assertion. We have already proved
$\mathcal{L}[m]\otimes\mathfrak{g}_{\alpha}\,\subset\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}$.
Conversely, for $m\in\mathbf{Z}$ and
$\xi\in\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}$, we shall show that
$\xi$ has the form $\,\phi\otimes x\,$ with $\phi\in\mathcal{L}[m]$ and
$x\in\mathfrak{g}_{\alpha}\,$. Let $\xi=\psi\otimes x\,+\sum\,p_{k}a_{k}+q{\rm
n}$. Then
$\displaystyle[\hat{h},\xi]_{\widehat{\mathfrak{g}}}=[\,\phi^{+(0,0,1)}\otimes
h+\sum s_{k}a_{k}+t{\rm n}\,,\,\psi\otimes x\,+\sum\,p_{k}a_{k}+q{\rm
n}\,]_{\widehat{\mathfrak{g}}}=\,\psi\otimes[\,h\,,\,x\,]$
$\displaystyle\qquad+\,t(\,\sum_{n\in\mathbf{Z}}\,\frac{n}{2}\psi_{n}\,\otimes
x\,)$
for any $\hat{h}=\phi^{+(0,0,1)}\otimes h+\sum
s_{k}a_{k}+t\,{\rm n}\in\widehat{\mathfrak{h}}\,$, where $\psi=\sum_{n}\psi_{n}$ is
the homogeneous decomposition of $\psi$. From the assumption we have
$\displaystyle[\,\hat{h},\xi\,]_{\widehat{\mathfrak{g}}}\,$ $\displaystyle=$
$\displaystyle\,\langle\,\frac{m}{2}\delta+\alpha\,,\,\hat{h}\,\rangle\,\xi\,$
$\displaystyle=$ $\displaystyle\langle\alpha,\,h\rangle\,\psi\otimes
x\,+(\frac{m}{2}t+\langle\alpha,\,h\rangle)(\sum p_{k}a_{k}+q{\rm n})\,$
$\displaystyle\quad+\,\frac{m}{2}t\,(\sum_{k}\,\psi_{k})\otimes x.$
Comparing the above two equations we have $p_{k}=q=0$, and $\psi_{k}=0$ for
all $k$ except for $k=m$. Therefore $\psi\in\mathcal{L}[m]$. We also have
$[\hat{h},\xi]_{\widehat{\mathfrak{g}}}=\psi\otimes[h,x]=\langle\alpha,\,h\rangle\,\psi\otimes
x$ for any $\hat{h}=\phi^{+(0,0,1)}\otimes h+\sum
s_{k}a_{k}+t\,{\rm n}\in\widehat{\mathfrak{h}}$. Hence $x$ has weight $\alpha$ and
$\xi=\psi_{m}\otimes x\in\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,$.
We have proved
$\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}=\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{g}_{\alpha}\,.$
Now we shall show
$\mathcal{L}\mathfrak{h}\,\supset\,\,\widehat{\mathfrak{g}}_{0\delta}\oplus\oplus_{m\neq
0}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,,$
where
$\,\widehat{\mathfrak{g}}_{0\delta}=\mathcal{L}[0]\otimes\mathfrak{h}\,$, and
$\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}=\mathcal{L}[m]\otimes\mathfrak{h}\,$.
The converse inclusion has been proved before, so both sides coincide. Let
$\xi\in\widehat{\mathfrak{g}}_{0\delta}\oplus\oplus_{m\neq
0}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta}\,$, which we may assume to be
of the form $\xi=y+\sum\,p_{k}a_{k}+q{\rm n}$. It satisfies
$[\,\hat{h},\xi\,]_{\widehat{\mathfrak{g}}}\,=\,\langle\,\frac{m}{2}\delta\,,\,\hat{h}\,\rangle\,\xi\,,\quad\forall\widehat{h}\in\widehat{\mathfrak{h}}\,,$
for some $m\in\mathbf{Z}$. From (5.12) we find
$\xi=y\in\mathcal{L}[m]\mathfrak{h}$. The above discussion yields the first
and the fourth assertions. ∎
###### Corollary 5.8.
$\oplus_{\Phi\ni\alpha\neq
0}\,\widehat{\mathfrak{g}}_{\frac{m}{2}\delta+\alpha}\,=\,\mathcal{L}[m]\otimes_{\mathbf{C}}\mathfrak{g}.$
### 5.3 Chevalley generators of $\,\widehat{\mathfrak{g}}$
By the natural embedding of $\mathfrak{g}$ in $\widehat{\mathfrak{g}}$ we have
the vectors
$\displaystyle h_{i}$ $\displaystyle=$ $\displaystyle\phi^{+(0,0,1)}\otimes
h_{i}\,\in\widehat{\mathfrak{h}},\,$ $\displaystyle e_{i}$ $\displaystyle=$
$\displaystyle\phi^{+(0,0,1)}\otimes
e_{i}\,\in\widehat{\mathfrak{g}}_{0\delta+\alpha_{i}},\quad
f_{i}=\phi^{+(0,0,1)}\otimes
f_{i}\,\in\widehat{\mathfrak{g}}_{0\delta-\alpha_{i}},\qquad i=1,\cdots,l\,.$
Then
$\displaystyle\left[e_{i}\,,f_{j}\,\right]_{\widehat{\mathfrak{g}}}$
$\displaystyle=$ $\displaystyle\,\delta_{ij}\,h_{i}\,,$
$\displaystyle\left[h_{i}\,,e_{j}\,\right]_{\widehat{\mathfrak{g}}}$
$\displaystyle=$
$\displaystyle\,a_{ij}\,e_{j},\quad\left[h_{i}\,,f_{j}\,\right]_{\widehat{\mathfrak{g}}}=\,-a_{ij}\,f_{j}\,,\quad
1\leq i,j\leq l.$ (5.23)
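For instance, in the rank-one case $\mathfrak{g}=\mathfrak{sl}(2,\mathbf{C})$
($l=1$, Cartan matrix entry $a_{11}=2$), the relations (5.23) reduce to the
familiar $\mathfrak{sl}_{2}$-triple relations:
$\left[\,e_{1}\,,f_{1}\,\right]_{\widehat{\mathfrak{g}}}\,=\,h_{1}\,,\qquad\left[\,h_{1}\,,e_{1}\,\right]_{\widehat{\mathfrak{g}}}\,=\,2\,e_{1}\,,\qquad\left[\,h_{1}\,,f_{1}\,\right]_{\widehat{\mathfrak{g}}}\,=\,-2\,f_{1}\,.$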
We have obtained a part of the generators of $\widehat{\mathfrak{g}}$ that come
naturally from $\mathfrak{g}$. We want to augment these generators to the
Chevalley generators of $\widehat{\mathfrak{g}}$. We take the following set of
generators of the algebra $\mathcal{L}$:
$\displaystyle I$ $\displaystyle=\phi^{+(0,0,1)}=\left(\begin{array}[]{c}1\\\
0\end{array}\right),\,\qquad J$
$\displaystyle=\phi^{+(0,0,0)}=\left(\begin{array}[]{c}0\\\
-1\end{array}\right)\,,$ (5.28) $\displaystyle\kappa$
$\displaystyle=\phi^{+(1,0,1)}\,=\,\left(\begin{array}[]{c}z_{2}\\\
-\overline{z}_{1}\end{array}\right),\qquad\lambda$
$\displaystyle=\,\phi^{-(0,0,0)}=\left(\begin{array}[]{c}z_{2}\\\
\overline{z}_{1}\end{array}\right).$ (5.33)
We put
$\displaystyle\kappa_{\ast}$ $\displaystyle=$
$\displaystyle\,\frac{-\sqrt{-1}}{\sqrt{2}}\phi^{+(1,1,2)}+\frac{\sqrt{-1}}{2}(\phi^{-(0,0,0)}-\phi^{+(1,0,1)})\,=\,\sqrt{-1}\left(\begin{array}[]{c}\overline{z}_{2}\\\
\overline{z}_{1}\end{array}\right)\,$ $\displaystyle\lambda_{\ast}$
$\displaystyle=$
$\displaystyle\,\frac{-\sqrt{-1}}{\sqrt{2}}\phi^{+(1,1,2)}-\,\frac{\sqrt{-1}}{2}(\phi^{-(0,0,0)}-\phi^{+(1,0,1)})\,=\,\sqrt{-1}\left(\begin{array}[]{c}\overline{z}_{2}\\\
-\overline{z}_{1}\end{array}\right)\,$
###### Lemma 5.9.
1.
$\kappa\,\in\mathcal{L}[1]\,,\qquad\,\lambda\,\in\mathcal{L}[-3\,]\,.$ (5.36)
2.
$\displaystyle\,c_{0}(\kappa,\kappa_{\ast})\,$ $\displaystyle=-1\,,\quad
c_{1}(\kappa,\kappa_{\ast})=c_{2}(\kappa,\kappa_{\ast})=0,$ (5.37)
$\displaystyle c_{0}(\lambda,\lambda_{\ast})$
$\displaystyle=-1\,,\quad\,c_{1}(\lambda,\lambda_{\ast})=c_{2}(\lambda,\lambda_{\ast})=0\,.$
(5.38)
Let $\theta$ be the highest root of $\mathfrak{g}$ and suppose that
$e_{\theta}\in\mathfrak{g}_{\theta}$ and $f_{\theta}\in\mathfrak{g}_{-\theta}$
satisfy the relations $[e_{\theta}\,,\,f_{\theta}]\,=\,h_{\theta}$ and
$(e_{\theta}|f_{\theta})=1$. We introduce the following vectors of
$\,\widehat{\mathfrak{g}}\,$:
$\displaystyle f_{J}$ $\displaystyle=J\otimes
f_{\theta}\,\in\widehat{\mathfrak{g}}_{0\delta-\theta}\,,\quad$ $\displaystyle
e_{J}$ $\displaystyle=(-J)\otimes
e_{\theta}\,\in\widehat{\mathfrak{g}}_{0\delta+\theta}\,,$ (5.39)
$\displaystyle f_{\kappa}$ $\displaystyle=\kappa\otimes
f_{\theta}\,\in\widehat{\mathfrak{g}}_{\frac{1}{2}\delta-\theta}\,,\quad$
$\displaystyle e_{\kappa}$ $\displaystyle=\kappa_{\ast}\otimes
e_{\theta}\,\in\widehat{\mathfrak{g}}_{-\frac{3}{2}\delta+\theta}\oplus\widehat{\mathfrak{g}}_{\frac{1}{2}\delta+\theta}\,,$
(5.40) $\displaystyle f_{\lambda}$ $\displaystyle=\lambda\otimes
f_{\theta}\,\in\widehat{\mathfrak{g}}_{-\frac{3}{2}\delta-\theta}\,,\quad$
$\displaystyle e_{\lambda}$ $\displaystyle=\lambda_{\ast}\otimes
e_{\theta}\,\in\widehat{\mathfrak{g}}_{-\frac{3}{2}\delta+\theta}\oplus\widehat{\mathfrak{g}}_{\frac{1}{2}\delta+\theta}\,.$
(5.41)
Then we have the generators of
$\mathcal{L}\mathfrak{g}\oplus\,\oplus_{k=0}^{2}\mathbf{C}a_{k}$ that are
given by the following triples:
$\displaystyle\left(\,\widehat{e}_{i},\widehat{f}_{i},h_{i}\right)\quad
i=1,2,\cdots,l,$
$\displaystyle\left(\widehat{e}_{\lambda},\widehat{f}_{\lambda},h_{\theta}\right),\quad\left(\widehat{e}_{\kappa},\widehat{f}_{\kappa},h_{\theta}\,\right),\quad\,\left(\widehat{e}_{J},\widehat{f}_{J},h_{\theta}\right)\,\,.$
(5.42)
These triples satisfy the following relations.
###### Proposition 5.10.
1.
$\left[\,e_{\pi}\,,\,f_{i}\,\right]_{\widehat{\mathfrak{g}}}=\,\left[\,f_{\pi}\,,\,e_{i}\,\right]_{\widehat{\mathfrak{g}}}=0\,,\quad\mbox{for
}\,1\leq i\leq l,\,\mbox{ and }\,\pi=J,\,\kappa,\,\lambda\,.$ (5.43)
2.
$\left[\,e_{J}\,,\,f_{J}\,\right]_{\widehat{\mathfrak{g}}}=\,\widehat{h}_{\theta}\,.$
(5.44)
3.
$\quad\left[\,e_{\lambda}\,,\,f_{\lambda}\,\right]_{\widehat{\mathfrak{g}}}=\sqrt{-1}\,\widehat{h}_{\theta}-a_{0},\quad\left[\,e_{\kappa}\,,\,f_{\kappa}\,\right]_{\widehat{\mathfrak{g}}}=\sqrt{-1}\,\widehat{h}_{\theta}\,-a_{0}\,.$
(5.45)
Adding the element ${\rm n}$ to these generators of
$\mathcal{L}\mathfrak{g}\oplus\,\oplus_{k=0}^{2}\mathbf{C}a_{k}$ we obtain
the Chevalley generators of $\widehat{\mathfrak{g}}$.
# Distilling Interpretable Models into Human-Readable Code
Walker Ravina∗, Ethan Sterling, Olexiy Oryeshko, Nathan Bell, Honglei Zhuang,
Xuanhui Wang, Yonghui Wu, Alexander Grushetsky Google, Mountain View, CA, USA
walkerravina, esterling, olexiy, nathanbell, hlz, xuanhui, yonghui,
<EMAIL_ADDRESS>
###### Abstract.
The goal of model distillation is to faithfully transfer teacher model
knowledge to a model which is faster, more generalizable, more interpretable,
or possesses other desirable characteristics. Human-readability is an
important and desirable standard for machine-learned model interpretability.
Readable models are transparent and can be reviewed, manipulated, and deployed
like traditional source code. As a result, such models can be improved outside
the context of machine learning and manually edited if desired. Given that
directly training such models is difficult, we propose to train interpretable
models using conventional methods, and then distill them into concise, human-
readable code.
The proposed distillation methodology approximates a model’s univariate
numerical functions with piecewise-linear curves in a localized manner. The
resulting curve model representations are accurate, concise, human-readable,
and well-regularized by construction. We describe a piecewise-linear curve-
fitting algorithm that produces high-quality results efficiently and reliably
across a broad range of use cases. We demonstrate the effectiveness of the
overall distillation technique and our curve-fitting algorithm using four
datasets across the tasks of classification, regression, and ranking.
Model distillation; human readable; piecewise-linear curves
∗Corresponding author
CCS Concepts: Computing methodologies → Machine learning approaches
## 1\. Introduction
Interpretable models are critical for high-stakes decision-making scenarios
(Rudin, 2018) such as guiding bail or parole decisions, assessing loan
eligibility, and guiding medical treatment decisions. In these cases, the
explanation of a model’s output (_e.g_. individual feature contributions)
should be examinable and understandable, to ensure transparency,
accountability, and fairness of the outcomes.
To achieve intrinsic interpretability, univariate functions are widely used in
interpretable models. In the classic Generalized Additive Models (GAMs)
(Hastie and Tibshirani, 1986), the model is a sum of univariate shape
functions,
$M=f_{0}+f_{1}(x_{1})+f_{2}(x_{2})+f_{3}(x_{3})+\dots+f_{n}(x_{n}),$
where the $x_{i}$’s are the $n$ features and the $f_{i}$’s are the shape functions. Such a
model is simple but often less accurate than a model with feature
interactions. Recently, Lou _et al_. (Lou et al., 2013) showed that adding a
limited number of pairwise feature interactions allows GAM-style additive
models to capture a significant fraction of the accuracy of a fully-
interacting model. In many cases of interest, such feature interactions are
intuitively captured with products of univariate functions,
$g_{1}(c_{1})\cdot f_{1}(x_{1})+g_{2}(c_{2})\cdot f_{2}(x_{2})+\dots,$
or products of groups of features,
$(g_{1,1}(c_{1})+g_{1,2}(c_{2}))\cdot
f_{1}(x_{1})+(g_{2,1}(c_{1})+g_{2,2}(c_{2}))\cdot f_{2}(x_{2})+\dots,$
where the magnitude of one function (_i.e_. $f_{i}$) is modulated by a
function (_i.e_. $g_{i}$ or $g_{i,j}$) of another “context” feature (_i.e_.
$c_{i}$) (Zhuang et al., 2021). In other cases, the interaction amongst
features is adequately approximated by additive models of univariate functions
nested within univariate functions,
$\displaystyle f(x_{1},x_{2},x_{3})\approx$ $\displaystyle\
g_{1}(f_{1,1}(x_{1})+f_{1,2}(x_{2})+f_{1,3}(x_{3}))\ +$ $\displaystyle\
g_{2}(f_{2,1}(x_{1})+f_{2,2}(x_{2})+f_{2,3}(x_{3}))+\dots,$
where the outer function $g_{i}$ captures nonlinear behavior (Chen et al.,
2018). Indeed, the Kolmogorov–Arnold representation theorem (Kolmogorov, 1957;
Wikipedia, 2020c) guarantees that every continuous multivariate function of
$n$ inputs can be represented as a sum of $2n+1$ such terms,
$f(x_{1},\dots,x_{n})=\sum_{i=0}^{2n}g_{i}\left(\sum_{j=1}^{n}f_{i,j}(x_{j})\right).$
In practice a single outer function is often sufficient, yielding an
interpretable model.
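To make this additive structure concrete, the following minimal sketch (our
illustration, not code from the paper; the shape functions and outer function
are hypothetical placeholders) scores such a model:

```python
from typing import Callable, Sequence

def gam_score(x: Sequence[float],
              shape_fns: Sequence[Callable[[float], float]],
              bias: float = 0.0,
              outer: Callable[[float], float] = lambda s: s) -> float:
    """Score an additive model: outer(bias + sum_i f_i(x_i)).

    With outer = identity this is a classic GAM; a nonlinear outer
    function gives one term of the Kolmogorov-Arnold form above.
    """
    return outer(bias + sum(f(xi) for f, xi in zip(shape_fns, x)))

# Toy example with two hypothetical shape functions:
shape_fns = [lambda a: 0.5 * a, lambda b: b * b]
print(gam_score([2.0, 3.0], shape_fns, bias=1.0))  # 1.0 + 1.0 + 9.0 = 11.0
```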
Figure 1. Shape plots for numerical features from models learned on the COMPAS
dataset. The GAM forest model is shown in blue dots, while its distillation
into the two-segment curve model is shown in orange lines (both map to the
left Y axis). Cumulative distribution functions of the corresponding signals
are shown in grey (right Y axis).
In the classic GAM models, splines are used as shape functions (Hastie and
Tibshirani, 1986). Piecewise-linear functions (Wikipedia, 2020f) are another
commonly used class of shape functions. These representations contain a small number of
variables (_e.g_. knots) and thus are concise and human-readable. However,
directly optimizing such representations often yields less accurate models
than alternative model representations. For example, Lou _et al_. (Lou et al.,
2012) showed that learning spline GAMs is less accurate than learning bagged
boosted decision forest GAMs. Our experiments show similar results for
directly optimizing GAMs composed of piecewise-linear curves using Stochastic
Gradient Descent (SGD) methods. Broadly speaking, the model representations
using decision forest GAMs have the advantage during model optimization, but
the resultant models are not human-readable. This is the case even when there
exists a simpler model with a concise, human-readable form that provides
comparable accuracy.
Inspired by the model distillation work in which relatively small decision
forests or neural networks can be distilled from much larger ensembles, but
not trained directly from data, to match the accuracy of complex models
(Buciluundefined et al., 2006; Hinton et al., 2015), we propose to distill
interpretable models into readable representations in a separate process after
model optimization. This decouples the initial, learned model representation
from the final, published model representation. For example, the proposed
distillation methodology can be applied to additive models trained using
bagged boosted decision trees (Lou et al., 2012), as well as additive neural
nets (Agarwal et al., 2020; Zhuang et al., 2021).
In this paper, we describe a technique for distilling models composed of
univariate components into human readable representations, in particular, the
piecewise-linear curves described in Section 2.2. The output of our
distillation technique is illustrated in Listing 1 and Figure 1, which show
textual and graphical representations of piecewise-linear curves obtained by
applying our approach to a decision forest GAM trained on the COMPAS dataset
(described in Section 2.1). The distilled model is a concise representation of
the decision forest GAM model and is converted to human-readable source code.
Listing 1: Code for a distilled COMPAS model
score = sum([
    PWLCurve("age", [(18, 3.13), (21, 0.5914), (46, -0.7206)], fx="log"),
    PWLCurve("priors_count", [(0, -0.8415), (1, -0.4452), (38, 2.146)],
             fx="log1p"),
    PWLCurve("length_of_stay", [(0, -0.1855), (3, -0.04099), (4, 0.2443)],
             fx="log1p"),
    EnumCurve("c_charge_degree", {1: 0.0198, 2: -0.0384}),
    # ... other features ...
])
From here on, we will use “curves” to refer to piecewise-linear curves, “curve
models” to refer to models where each component is a curve, and “code” to
refer to the textual representations of curve models or curves.
The rest of this paper is structured as follows. After presenting the
preliminaries in Section 2, we elaborate on the benefits of using curve models
in Section 3. We then describe the localized distillation process in Section 4
and piecewise-linear approximation algorithm, sometimes referred to as
segmented regression (Wikipedia, 2020f), for creating curve models in Section
5. Lastly, we present experimental results on four datasets (COMPAS, FICO,
MSLR-WEB30K, and CWS) in Section 6 and conclude the paper in Section 7.
## 2\. Preliminaries
Throughout the paper, we will use the datasets studied in this work as
concrete examples to explain our methods, so we first describe them in this
section. We also give the formal definition of piecewise-linear curves here.
### 2.1. Data Sets
We used the following four datasets to represent different settings:
classification, regression, and ranking. The first three are publicly
available.
*
The COMPAS dataset111https://github.com/propublica/compas-analysis is the
result of a ProPublica investigation (Angwin et al., 2016) into possible
racial bias of the proprietary COMPAS model score for defendants in Broward
county, Florida. The dataset has been studied extensively in the context of
bias, fairness, and interpretability (Tan et al., 2018; Dressel and Farid,
2018; Kleinberg, 2018; Chouldechova, 2017). Labels are binary and indicate
whether recidivism occurred for an individual within a time period. We use
area under the receiver operating characteristic curve (AUC-ROC) to measure
classifier accuracy. COMPAS has 6 features and four of them are used as
examples in this paper: age, priors_count, length_of_stay, and
c_charge_degree.
*
The FICO dataset (FIC, 2018) is composed of real-world anonymized credit
applications along with risk scores. Labels are a risk score for an
individual. We use root mean square error (RMSE) to measure regressor
accuracy. FICO has 24 features and we use two features as examples in our
paper: MSinceMostRecentDelq (Months Since Most Recent Delinquency) and
PercentTradesWBalance (Percent Trades with Balance).
*
The MSLR-WEB30K dataset (Qin and Liu, 2013) is a widely used learning-to-rank
benchmark dataset. Labels are per document relevance judgements. We use
normalized discounted cumulative gain at $k=5$ (NDCG@5) to measure ranker
accuracy. MSLR-WEB30K is significantly larger both in number of features (136)
and number of training examples (~2,000,000 per cross validation fold). We use
it to compare our curve approximation algorithm to pwlf (Jekel and Venter,
2019), a publicly available alternative, on the basis of accuracy, robustness
and efficiency. We use two features as examples in our paper: feature_0011
(Body stream length) and feature_0128 (Inlink number).
*
The Chrome Web Store (CWS) dataset is a private and anonymized dataset
originating from Chrome Web Store logs. Each query corresponds to a visit to
the Chrome Web Store. The items within each query were the ones shown to the
user. Labels correspond to user actions such as clicking, installing, or no
action whatsoever. We again use NDCG@5 to measure ranker accuracy. A similar,
but distinct dataset from the Chrome Web Store was previously studied by Zhuang
_et al_. (Zhuang et al., 2021). Unlike in that previous work, in this instance
we do not utilize query-level “context” features, instead using only 14 item-
level features. The queries are also distinct.
In each case, we distill a decision forest GAM and evaluate the accuracy of
the distilled curve models. The COMPAS and FICO datasets represent high-stakes
domains (Rudin, 2018) in which the benefits of curve models, discussed below,
are particularly compelling. FICO, MSLR-WEB30K, and CWS have been previously
studied in the context of interpretability (Agarwal et al., 2020; Zhuang et
al., 2021; Lou et al., 2013; Chen et al., 2018). Furthermore, the results from
MSLR-WEB30K demonstrate that the accuracy of this approach is not limited to
small datasets.
### 2.2. Piecewise-Linear Curves
A piecewise linear curve (PWLCurve) is defined by a list of control points
$S=[(x_{k},y_{k})]_{k=1}^{K}$ through which the curve must pass. Between
control points, output $y$ values are determined by performing linear
interpolation between neighboring control points. Beyond the leftmost or
rightmost control points, output values are capped to the $y_{k}$-value of the
neighboring control point. More formally, assuming $x_{k}$’s are ordered,
_i.e_. $x_{k}<x_{k+1}$, the definition of a piecewise linear curve can be
described as:
$PWL(x;S)=\begin{cases}y_{1}&\text{if }x<x_{1},\\\
\frac{y_{k+1}-y_{k}}{x_{k+1}-x_{k}}(x-x_{k})+y_{k}&\text{if }x_{k}\leq x\leq
x_{k+1},\\\ y_{K}&\text{if }x>x_{K}.\end{cases}$
In most cases of interest, 5 or 6 control points, defining 4 or 5 interior
segments, are sufficient to capture the desired behavior.
We allow for an optional $x$-transformation, specified with the fx argument,
to fit curves to data with different scales. When an $x$-transformation is
present it is applied to the input value and $x$-values of all the control
points, and then linear interpolation is performed in the transformed space.
We support identity (default), log, log1p and symlog1p transformations. Here
symlog1p is defined as sgn(x) * log1p(abs(x)) and is suitable for highly-
variable features that take on both positive and negative values.
Univariate categorical functions are represented by EnumCurve, which directly
maps input values to outputs using a discrete mapping.
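To make the semantics above concrete, here is a minimal evaluation sketch (our
own illustration; the paper's actual implementation may differ). It applies
the optional fx transformation, clamps beyond the outermost control points,
and interpolates linearly in between:

```python
import bisect
import math

# Supported x-transformations; symlog1p is sgn(x) * log1p(abs(x)) as in the text.
_FX = {
    "identity": lambda x: x,
    "log": math.log,      # inputs and control points must be positive
    "log1p": math.log1p,  # inputs and control points must exceed -1
    "symlog1p": lambda x: math.copysign(math.log1p(abs(x)), x),
}

def pwl_curve(x, points, fx="identity"):
    """Evaluate PWL(x; S) for sorted control points S = [(x_1, y_1), ..., (x_K, y_K)]."""
    f = _FX[fx]
    xs = [f(px) for px, _ in points]  # interpolate in transformed x-space
    ys = [py for _, py in points]
    t = f(x)
    if t <= xs[0]:
        return ys[0]   # clamp left of the first control point
    if t >= xs[-1]:
        return ys[-1]  # clamp right of the last control point
    k = bisect.bisect_right(xs, t) - 1  # segment with xs[k] <= t < xs[k+1]
    frac = (t - xs[k]) / (xs[k + 1] - xs[k])
    return ys[k] + frac * (ys[k + 1] - ys[k])

def enum_curve(x, mapping):
    """Evaluate an EnumCurve: a direct categorical value-to-output lookup."""
    return mapping[x]

# Example using the "age" curve from Listing 1:
print(pwl_curve(30, [(18, 3.13), (21, 0.5914), (46, -0.7206)], fx="log"))
```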
## 3\. Background & Motivation
Interpretable models are critical for high-stakes decisions (Rudin, 2018) and
provide many advantages over more complex model structures (Caruana et al.,
2015; Du et al., 2019). In this section we explain how distilling
interpretable models into curve models reinforces these benefits and addresses
a variety of real-world engineering challenges. Here, one underlying theme is
that distilling models into human-readable source code _reduces a novel
machine learning problem to an established software engineering problem with
an abundance of existing solutions_.
### 3.1. Greater Transparency
A model is transparent if it provides a textual or graphical representation
that enables its behavior to be understood comprehensively (Ustun and Rudin,
2014). One way in which the proposed method provides greater transparency is
by simplifying graphical depictions of a model while retaining its essential
characteristics. It is often argued, implicitly or explicitly, that the shape
plots of an interpretable model are an _exact description_ of the model and
therefore provide a reliable way to understand the model. While this claim is
narrowly true, it is misleading in general. Unless given specific guidance,
humans will naturally discount certain fine-grained details of the plots when
developing an understanding of the model. By distilling interpretable models
to a concise representation, we discard extraneous characteristics and reduce
the mental effort necessary to understand the model. For example, it is not
immediately obvious what understanding an individual should derive from the
shape plots of the feature_0011 (body stream length), and feature_0128 (inlink
number) features in the initially-learned MSLR-WEB30K model, shown in Figure
2. Indeed, different individuals may derive qualitatively different
understandings from these graphical depictions. However, given the additional
knowledge that the distilled curve model represented by the overlaid curves in
Figure 2 has nearly identical accuracy, an observer can make much stronger
inferences about the model’s essential characteristics. Interpretability can
be increased even further by imposing monotonicity constraints. We discuss the
effect of such constraints in Section 6.4.
Figure 2. Shape plots for a GAM forest model (in blue dots) and 5 segment
curve distillation (in orange lines) for the MSLR-WEB30K dataset.
Clearly when distillation yields a simpler model with comparable accuracy we
would say the distillation process has succeeded. However, instances where
distillation yields a model with inferior accuracy warrant further
investigation because the apparent ”failure” can often be attributed to
essential characteristics of the teacher model that were not successfully
transferred to the student _because they violate a prescribed notion of human-
interpretability_. We examine one representative example of this phenomenon in
Section 6.2. While a complete discussion of this principle is beyond the scope
of this paper, we note that the idea can be viewed as an extension of the use
of structural constraints to define “interpretable” models, just now applied
to the structure of individual functions in the model. Under this policy, if
the accuracy of a candidate model cannot be reproduced using a predefined
class of expressive, “human-scale” functions (_e.g_. curves with a small
number of truncated control points) its transparency would be called into
question.
### 3.2. Constructive Regularization
The proposed method can also be viewed as a post-hoc regularization process
that is completely compatible with, and complementary to, optimization-based
regularization techniques (_e.g_. L1/L2 penalties or monotonicity
constraints). In the context of regularization, our emphasis on conciseness is
aligned with the minimum description length principle (Wikipedia, 2020d) for
model selection. Ustun and Rudin (Ustun and Rudin, 2014) applied similar
reasoning to motivate linear models with small, integer-valued weights. The
constrained description length of curves provides limited capacity for
capturing idiosyncratic behavior. As a result, curve distillation successfully
removes aberrations from teacher model functions. This regularization effect
can be seen in Figure 1 and Figure 2. The fewer the segments, the greater the
effect. To find the most concise curve model we can repeatedly apply the
proposed method with a decreasing number of control points. Naturally, the
optimality of this approach is subject to the limitations of our localized
distillation methodology (see Section 4) and curve approximation algorithm
(see Section 5). While it is difficult to directly compare models with
different functional representations, comparing the length and readability of
their corresponding code is instructive.
One practical advantage of curve-based regularization is that regularity is
enforced by construction and the complexity of individual curves is readily
apparent and quantifiable. Therefore, organizations that adopt curve models
can set objective guidelines about model complexity that developers can
anticipate when submitting model candidates for approval. Such guidelines can
specify the maximum number of curve segments, maximum number of significant
digits per curve control point, or monotonicity of the curve. Similar to the
use of nothing-up-my-sleeve numbers in cryptography (Wikipedia, 2020e), curve
models enable developers to preemptively address suspicions about potential
weaknesses and constructively prove the robustness of a given model candidate.
In general, standardizing development around curve models is a straightforward
way for organizations to systematically enforce best practices, defend against
common mistakes and pitfalls, and expedite model verification and approval.
The accessible, readable nature of curve models enables organization members
beyond engineers (_e.g_. executives, product managers, etc.) to participate in
this approval process.
### 3.3. Readable, Editable Code
Curve model code can be read, reviewed, merged, and versioned like
conventional source code. An example model for the COMPAS dataset is shown in
Listing 1. One can understand how a curve model would behave under novel or
extremal conditions by mentally “evaluating” the model under hypothetical
“what if?” scenarios without the need for additional tools. Subjecting models
to a traditional source code review process facilitates a more rigorous
examination of the model’s characteristics and greater accountability than is
possible with non-readable models. Indeed, conducting “model review” through
source code review ensures that the candidate model itself - not some
separate, potentially inconsistent description or artifact of the model or how
it was trained - is the subject of review. In the event that undesirable model
behavior is discovered, the model’s code may be directly edited to correct
such issues. For example, in the case of the COMPAS model a user may wish to
deliberately cap the contribution of features such as priors_count and
length_of_stay for legitimate policy reasons not captured by
classification metrics such as AUC-ROC. The contribution of other features can
be entirely removed. Agarwal _et al_. (Agarwal et al., 2020) discussed how
such an approach of training with biased features and then removing them can
potentially be better than simply training without biased features. This
approach can prevent the model from extracting bias through other features
which are correlated with biased ones.
Model transparency is essential in the context of high-stakes decisions
(Rudin, 2018) arising in criminal justice, finance, health care, and other
areas. Providing the complete source of the model in simple, portable, human-
readable code makes the models transparent. Compared to human-readable models
produced by CORELS (Angelino et al., 2017), which are expressed in
universally-understandable if-then language, curve models sacrifice
accessibility for greater expressiveness and general-purpose application.
### 3.4. Collaborative Model Development
Curve distillation is compatible with any algorithm or modeling technique that
results in univariate functions. In the experiments section we apply the
proposed technique to decision forest GAMs on several datasets. Previous work
(Zhuang et al., 2021) applied the proposed technique to GAMs learned via
neural networks, as well as similar neural networks with limited interactions
via multiplicative pairs. Organizing collaborative development around curve
models enables engineers to apply a plurality of different tools, techniques,
or platforms to optimize components of a (potentially large-scale) model.
Engineers are free to choose a modeling approach that maximizes their
productivity, similarly to how engineers use multiple IDEs, code formatters,
or linters to collaboratively develop software. Curve distillation can be
viewed as a “format conversion” tool that translates an arbitrary and
potentially exotic model representation into a fixed, agreed-upon vocabulary
of human-readable building blocks.
### 3.5. Straightforward Deployment
Curve models are fast-to-evaluate and straightforward to deploy. Since
evaluation requires minimal computation - just a handful of floating point
operations per curve - curve models are well-suited for performance-critical
applications. Curves are a portable, platform-agnostic representation that can
be natively supported in a variety of languages or systems with little effort.
For example, Listing 2 shows a C++ implementation of a COMPAS model with 2
segments. In general, curve models are straightforward to deploy because they
offer a multitude of integration options. Curves can be embedded in
configuration files, passed via CGI parameters, manually embedded into complex
applications in a piecemeal fashion, systematically translated to a target
representation, or evaluated by existing runtime systems with a few
incremental extensions.
Listing 2: A COMPAS model as a C++ function
double COMPAS(double age, double priors_count,
              double length_of_stay, int charge_degree,
              // ... other features ...
) {
  // Each PWLCurve is defined by its (x, y) control points plus an optional
  // input transform ("log", "log1p") applied to x before interpolation.
  static auto age_curve = PWLCurve(
      {{18, 3.13}, {21, 0.5914}, {46, -0.7206}}, "log");
  static auto priors_count_curve = PWLCurve(
      {{0, -0.8415}, {1, -0.4452}, {38, 2.146}}, "log1p");
  static auto length_of_stay_curve = PWLCurve(
      {{0, -0.1855}, {3, -0.04099}, {4, 0.2443}}, "log1p");
  static auto charge_degree_curve = EnumCurve({{1, 0.0198}, {2, -0.0384}});
  // ... other features ...
  // The prediction is the sum of the per-feature contributions (a GAM).
  return (age_curve.Eval(age) +
          priors_count_curve.Eval(priors_count) +
          length_of_stay_curve.Eval(length_of_stay) +
          charge_degree_curve.Eval(charge_degree) +
          // ... other features ...
  );
}
## 4\. Localized Distillation
Our distillation process takes two inputs: a teacher model containing one or
more univariate functions, and a representative dataset (generally the
training data). Our method differs from conventional distillation techniques
in that we (1) distill each univariate function in isolation and (2) optimize
for mean squared error (MSE) when approximating each univariate function.
Specifically, each univariate function in the teacher model is evaluated on
the dataset to produce representative $(x,y)$ example pairs. For discrete
categorical features we create a mapping where each unique $x$ is mapped to
the mean $y$. For numerical features, we produce a PWLCurve using the
approximation algorithm described in Section 5. If the teacher model contains
univariate functions nested within other univariate functions we replace the
source functions in a bottom-up fashion. Otherwise, all non-nested functions
can be approximated in parallel. The final model is constructed by replacing
each original univariate function with its PWLCurve approximation.
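To make the procedure concrete, here is a minimal Python sketch of the localized distillation loop. It assumes the teacher exposes its univariate shape functions as callables and takes the numerical curve-fitting step (Section 5) as a parameter; the nested-function, bottom-up case is omitted for brevity.
import numpy as np
def distill_model(shape_functions, features, is_categorical, fit_curve):
    """Replace each univariate teacher function with a concise student.
    shape_functions: dict of feature name -> callable f(x_array) -> y_array
    features: dict of feature name -> 1D numpy array of representative values
    fit_curve: numerical approximation step, e.g. the algorithm of Section 5
    """
    student = {}
    for name, f in shape_functions.items():
        x = features[name]
        y = f(x)  # evaluate the teacher's univariate function on the dataset
        if is_categorical[name]:
            # Discrete features: map each unique value to the mean teacher output.
            student[name] = {v: float(y[x == v].mean()) for v in np.unique(x)}
        else:
            # Numerical features: approximate with a piecewise-linear curve.
            student[name] = fit_curve(x, y)
    return student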
Conventionally, model distillation involves a global optimization using the
same (or at least similar) objective to the original teacher model training.
This objective may differ from a point-wise MSE objective. For example,
ranking objectives often have pair-wise definitions. Why then do we advocate a
localized optimization using an MSE objective in all circumstances? The primary
answer is that, in the context of interpretable models, there is substantial
value in maintaining a strong one-for-one correspondence between each source
function and target function. Notably, this allows us to visualize each shape
function in the teacher model against its corresponding curve replacement.
Additionally, we can attribute distillation failures - data instances where
the curve model is less accurate than the teacher model - to specific
univariate functions, and take remedial actions. For example, in Figure 5 we
can immediately tell that the shape function of $x_{1}$ was not well-
approximated by a curve. In the experiments section we show that the
meaningful behavior of nearly all shape functions can be accurately captured
by curves with three to five segments. Furthermore, when the meaningful
behavior is not captured, it is generally due to inherently non-interpretable
behavior being lost.
While a global optimization approach (_i.e_. optimizing the parameters of all
curves in the target model simultaneously) using a problem-specific metric
might produce a more accurate result, it is computationally more expensive and
would lack the same one-to-one correspondence with the teacher model, making
distillation failures more difficult to diagnose. Furthermore, if higher
accuracy is desired, the output of the proposed distillation process can be
used to initialize a global optimization of the curve model’s parameters.
## 5\. Piecewise-Linear Curve Approximation
Given a univariate numerical function $f(x)\rightarrow y$, our goal is to
produce a PWLCurve $c(x)\rightarrow y$ that faithfully approximates $f(x)$ by
minimizing the $MSE(c(x),f(x))$ over sample data. Clearly, the accuracy of the
overall distillation method depends critically on the accuracy of the
individual curve approximations - _i.e_. how much metric loss is incurred when
each $c(x)$ is substituted for the corresponding $f(x)$ in the trained model.
Additionally, the practical success of the methodology also depends on the
robustness and efficiency of the approximation algorithm. To enable systematic
use of curve distillation in model training pipelines, the approximation
algorithm must run with minimal configuration. Complex hyperparameters pose a
significant barrier to entry. We have designed pwlfit, our piecewise linear
approximation algorithm, so that in practice users only need to consider the
num_segments and mono (monotonicity) parameters. While num_segments=5 segments
and mono=False is sufficient to get high accuracy (as demonstrated by our
experiments), it is desirable to investigate whether the model can be further
simplified with fewer segments or with monotonicity restrictions. To
facilitate such investigations it is important that distillation runs quickly
(_e.g_. less than 1 second per function) which enables interactive analysis
via Jupyter notebooks (Kluyver et al., 2016) or other tools. These practical
considerations have informed various decisions in the design of pwlfit. In
particular, we prefer an algorithm which quickly and reliably yields high
accuracy results with minimal configuration to one which sacrifices either of
these practical considerations for marginal gains in accuracy.
In this section we will describe the salient characteristics and noteworthy
features of pwlfit. We invite interested readers to consult the publicly-
available source code of pwlfit (Sterling and Ravina, 2019) for additional
details.
### 5.1. Algorithm
Given a list of $(x,y,weight)$ points and a desired number of segments $k$, we
search for a PWLCurve to minimize mean squared error, MSE. A PWLCurve with $k$
segments is characterized by its $k+1$ control points – a set of $x$-knots and
their corresponding $y$-knots. Given only the $x$-knots, we can solve a linear
least squares expression for the optimal $y$-knots and the resulting error.
Since we don’t know the correct $x$-knots, we search through the space of
possible $x$-knots and solve a least squares expression at each step to
calculate the error.222pwlf (Jekel and Venter, 2019) implements a similar
approach. We will compare with it in our experiments.
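The key subroutine, solving for the optimal $y$-knots given fixed $x$-knots, is an ordinary linear least squares problem, because a PWLCurve is linear in its $y$-knots. The following numpy sketch is our own illustration of this step, not pwlfit's internal API:
import numpy as np
def hat_basis(x, xknots):
    """Design matrix whose column j is the piecewise-linear 'hat' function
    that equals 1 at xknots[j], 0 at the other knots, clamped outside."""
    xc = np.clip(x, xknots[0], xknots[-1])
    idx = np.clip(np.searchsorted(xknots, xc, side="right") - 1, 0, len(xknots) - 2)
    t = (xc - xknots[idx]) / (xknots[idx + 1] - xknots[idx])
    A = np.zeros((len(x), len(xknots)))
    A[np.arange(len(x)), idx] = 1 - t
    A[np.arange(len(x)), idx + 1] = t
    return A
def solve_yknots(x, y, w, xknots):
    """Optimal y-knots and weighted squared error for fixed x-knots."""
    A = hat_basis(x, xknots) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    yknots, *_ = np.linalg.lstsq(A, b, rcond=None)
    return yknots, float(np.sum((A @ yknots - b) ** 2))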
#### 5.1.1. Initial Downsampling
For performance, we randomly downsample large datasets to approximately one
million points before fitting. We downsample to reduce the cost of sorting,
which dominates the runtime for large data. This downsampling imposes a
negligible quality loss. To further reduce runtime, we discretize the search
space for $x$-knots. We choose num_samples $x$-values from the data, spaced
equally by cumulative weight, and search over the combinations of $x$-knots
from that sampled set of candidates. Using the default 100 samples, our
candidates are the $x$-values at $(0\%,1.01\%,\dots,98.9\%,100\%)$ of the
cumulative weight.
#### 5.1.2. Knot Discretization
For data with many repeated $x$-values, some of our candidates will be
duplicates. For example, $55\%$ of the values in the length_of_stay feature in
the COMPAS data set are 0 or 1. In such cases, we iteratively resample at
higher rates (such as $0\%,0.505\%,1.01\%$, etc.) until we collect a suitable
number of distinct candidates, never exceeding the specified num_samples
parameter.
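A simplified sketch of the candidate selection, spacing candidates equally by cumulative weight; it collapses duplicates with np.unique rather than implementing the iterative resampling described above:
import numpy as np
def candidate_knots(x, w, num_samples=100):
    """Candidate x-knots at equal steps of cumulative weight."""
    order = np.argsort(x)
    xs, cum = x[order], np.cumsum(w[order])
    targets = np.linspace(0.0, cum[-1], num_samples)
    idx = np.clip(np.searchsorted(cum, targets), 0, len(xs) - 1)
    return np.unique(xs[idx])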
#### 5.1.3. Condensation
To minimize the cost of each linear least squares step, we condense the data
using a novel technique described in Appendix B. Given num_samples candidate
knots, we condense the full data into two synthetic points per adjacent pair
of candidates, for a total of 2 * (num_samples - 1) synthetic points. For any
function that’s linear between each adjacent pair of candidate $x$-knots,
which is guaranteed by our choice of discrete candidate $x$-knots, these
condensed points perfectly recreate the loss of that function over the full
data set. We run our linear least squares solver on the condensed points
instead of the full data set, which reduces our cost per solve from
$\mathcal{O}(\text{num\_points})$ to $\mathcal{O}(\text{num\_samples})$. This
is purely a performance optimization, with no quality implications.
Figure 3. Candidate $x$-knots (red vertical lines) and derived condensed
points (pink large dots) on the age piece of a COMPAS GAM forest model (blue
dots). For visual clarity, this illustration considers only five $x$-knot
candidates.
#### 5.1.4. Global Optimization via Greedy Search
After discretization, the solution space consists of
$\binom{\text{num\_samples}}{\text{num\_segments}+1}$
$x$-knot combinations, which is still too large for an
exhaustive search. To make the search tractable we use a greedy search
heuristic that optimizes one $x$-knot at a time. Specifically, at each step of
the process we evaluate the error associated with each candidate $x$-knot, and
keep the candidate that yields the least error.
With this approach, we optimize in two stages. We begin with a single $x$-knot
as our solution, and greedily add the best remaining candidate $x$-knot until
our solution consists of (num_segments + 1) $x$-knots. Then we cycle through
our solution, removing one $x$-knot at a time and replacing that $x$-knot with
the best remaining candidate $x$-knot, which could be the same $x$-knot that
we just removed. We continue this cycle of iterative improvements until our
solution converges, or until we’ve exceeded the maximum number of iterations
(defaulting to 10 iterations).
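A sketch of this two-stage greedy search, reusing solve_yknots from the earlier sketch; details such as tie-breaking differ from the actual pwlfit implementation:
import numpy as np
def greedy_fit(x, y, w, candidates, num_segments, max_iter=10):
    """Two-stage greedy x-knot search over discrete candidates."""
    def error(knots):
        ks = sorted(knots)
        if len(ks) < 2:  # a single knot degenerates to a constant fit
            m = np.average(y, weights=w)
            return float(np.sum(w * (y - m) ** 2))
        return solve_yknots(x, y, w, np.array(ks, dtype=float))[1]
    knots = set()
    # Stage 1: greedily add the best remaining candidate knot.
    while len(knots) < num_segments + 1:
        knots.add(min((c for c in candidates if c not in knots),
                      key=lambda c: error(knots | {c})))
    # Stage 2: cycle through the knots, re-choosing each one in turn.
    for _ in range(max_iter):
        changed = False
        for k in list(knots):
            rest = knots - {k}
            best = min((c for c in candidates if c not in rest),
                       key=lambda c: error(rest | {c}))
            if best != k:
                knots, changed = rest | {best}, True
        if not changed:  # converged
            break
    return sorted(knots)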
#### 5.1.5. Slope Constraints & Monotonicity
pwlfit can impose a minimum and/or maximum slope on the solution via bounded
least squares. Instead of solving the least squares expression directly for
the $y$-knots, we solve it for the deltas between adjacent $y$-knots. Then we
impose a min/max slope by bounding the deltas. Slope restrictions can be used
to limit the spikiness of curves, but we primarily use them to impose
monotonicity. For example, specifying min_slope=0 restricts to monotonically
non-decreasing functions while max_slope=0 restricts to monotonically non-
increasing functions. Specifying a min_slope greater than 0 or a max_slope
less than 0 restricts to strictly increasing or decreasing functions,
respectively.
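A sketch of the delta re-parameterization using scipy's bounded least squares solver, with hat_basis from the earlier sketch. Writing $y=Td$, where $T$ is a lower-triangular matrix of ones and $d$ holds the first $y$-knot followed by the deltas, bounding each delta bounds the slope of its segment:
import numpy as np
from scipy.optimize import lsq_linear
def solve_yknots_sloped(x, y, w, xknots, min_slope=None, max_slope=None):
    """Solve for y-knots under slope bounds via the delta re-parameterization."""
    A = hat_basis(x, xknots) * np.sqrt(w)[:, None]  # hat_basis: earlier sketch
    b = y * np.sqrt(w)
    n = len(xknots)
    T = np.tril(np.ones((n, n)))  # y = T @ d; d = [y_0, delta_1, ..., delta_{n-1}]
    dx = np.diff(xknots)
    lb, ub = np.full(n, -np.inf), np.full(n, np.inf)
    if min_slope is not None:
        lb[1:] = min_slope * dx  # delta_i >= min_slope * (x_i - x_{i-1})
    if max_slope is not None:
        ub[1:] = max_slope * dx  # delta_i <= max_slope * (x_i - x_{i-1})
    res = lsq_linear(A @ T, b, bounds=(lb, ub))
    return T @ res.x  # convert deltas back to y-knots
For instance, calling this with min_slope=0 yields a monotonically non-decreasing curve, matching the behavior described above.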
pwlfit can deduce the direction of monotonicity by applying isotonic
regression (Wikipedia, 2020b) to the condensed points. We fit an increasing
and a decreasing isotonic regression, and use the direction that minimizes
mean squared error. The user can override this behavior by specifying the
direction explicitly or by disabling monotonicity entirely.
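This direction detection can be sketched with scikit-learn's isotonic regression (pwlfit ships its own implementation; this is only an illustration):
import numpy as np
from sklearn.isotonic import IsotonicRegression
def infer_mono_direction(x, y, w):
    """Return the monotone direction with the lower weighted MSE."""
    errs = {}
    for direction, increasing in (("increasing", True), ("decreasing", False)):
        iso = IsotonicRegression(increasing=increasing, out_of_bounds="clip")
        pred = iso.fit(x, y, sample_weight=w).predict(x)
        errs[direction] = np.average((pred - y) ** 2, weights=w)
    return min(errs, key=errs.get)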
#### 5.1.6. Input Transformations
pwlfit can also interpolate in a transformed $x$-coordinate space instead of
the original space, as a simple form of feature engineering. pwlfit transforms
the $x$-values before learning the curve. Specifically, pwlfit will choose a
candidate $x$-transformation, fx, among log, log1p, or symlog1p based on the
range of the $x$-values and then proceed with that transformation if it
increases the Pearson correlation between fx and $y$ by a noticeable amount
over the identity transformation. Alternatively, the user can specify any
strictly increasing 1D transform or specify the identity transform to disable
transformation.
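A sketch of the transform selection logic; the range guards, the symlog1p definition (sign-preserving log1p), and the "noticeable amount" threshold below are our assumptions for illustration:
import numpy as np
def choose_transform(x, y, gain=1.01):
    """Pick an x-transform if it noticeably improves |Pearson correlation|."""
    fns = {"identity": lambda v: v}
    if np.all(x > 0):
        fns["log"] = np.log
    if np.all(x > -1):
        fns["log1p"] = np.log1p
    fns["symlog1p"] = lambda v: np.sign(v) * np.log1p(np.abs(v))
    corr = {name: abs(np.corrcoef(f(x), y)[0, 1]) for name, f in fns.items()}
    best = max(corr, key=corr.get)
    # Keep the identity unless the winner is noticeably better.
    return best if corr[best] > gain * corr["identity"] else "identity"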
## 6\. Experiments
### 6.1. Distillation Accuracy
Table 1 and Figure 4(a) show the results obtained from experiments on the
different datasets. A complete set of results can be found in Table 2 in
Appendix A. The results of applying our distillation technique with our
piecewise-linear approximation algorithm are presented as pwlfit. We present
results from using various numbers of segments with and without a monotonicity
restriction and otherwise default parameters. In all cases we truncated the
control points to four significant digits. We also present several additional
reference points to provide context.
* •
SGD: We directly learn the curves with the Adadelta (Zeiler, 2012) optimizer.
We initialize the $y$ values of the control points as zeros. For the $x$
values of the control points we use the quantiles for numerical features
(_e.g_. 0%, 50%, 100% for a three point, two segment curve) or all unique
values for categorical features. We then apply Adadelta to optimize the $y$
values. Simultaneously optimizing $x$ and $y$ values was also attempted, but
the results were always worse than optimizing $y$ values alone.
* •
NAM: Neural Additive Models (NAMs) (Agarwal et al., 2020) is another method
for learning interpretable models proposed by Agarwal _et al_. We present
their result for reference where applicable.
* •
Interacting forest: We train a bagged, boosted decision forest allowing
feature interactions to demonstrate the accuracy of a non-interpretable, high-
complexity "black box" model.
* •
GAM forest: We train a bagged boosted decision forest GAM by restricting each
tree to use only one feature. This model is also the source model for our
distillation technique.
* •
pwlf: We apply our distillation technique using an alternative piecewise-
linear approximation algorithm, pwlf (Jekel and Venter, 2019).
On each dataset we used five fold cross validation and present the metric mean
and sample standard deviation across folds. We used three different metrics to
evaluate accuracy: AUC-ROC, RMSE, and NDCG@5 for the three different tasks of
classification, regression, and ranking. Further details on our
experimentation setup can be found in Appendix A, and further details on the
datasets, labels, and metrics can be found in the Preliminaries (Section 2.1).
Table 1. Metrics across datasets. For AUC-ROC and NDCG@5 higher is better, and for RMSE lower is better. MSLR-WEB30K and CWS were not used by the NAM paper and are omitted from that row. Metric values are the mean from five fold cross validation $\pm$ the sample standard deviation.
Model | COMPAS (AUC-ROC) | FICO (RMSE) | MSLR-WEB30K (NDCG@5) | CWS (NDCG@5)
---|---|---|---|---
Interacting forest | $0.742\pm 0.011$ | $3.128\pm 0.092$ | $0.485\pm 0.002$ | $0.461\pm 0.003$
GAM forest | $0.741\pm 0.013$ | $3.495\pm 0.109$ | $0.442\pm 0.002$ | $0.460\pm 0.004$
NAM | $0.741\pm 0.009$ | $3.490\pm 0.081$ | |
pwlfit num_segments=5, mono=False | $0.743\pm 0.012$ | $3.494\pm 0.096$ | $0.441\pm 0.002$ | $0.454\pm 0.002$
pwlfit num_segments=5, mono=True | $0.743\pm 0.013$ | $3.693\pm 0.101$ | $0.437\pm 0.003$ | $0.452\pm 0.003$
pwlf num_segments=5 | $0.743\pm 0.012$ | $3.503\pm 0.096$ | $0.433\pm 0.003$ | $0.454\pm 0.003$
SGD num_segments=5 | $0.741\pm 0.010$ | $3.643\pm 0.097$ | $0.405\pm 0.002$ | $0.448\pm 0.004$
SGD num_segments=20 | $0.738\pm 0.011$ | $3.499\pm 0.117$ | $0.419\pm 0.003$ | $0.455\pm 0.003$
(a) Top level metrics across datasets. For AUC-ROC and NDCG@5 higher is
better, for RMSE lower is better.
(b) Per fit metrics for MSLR-WEB30K Fold 1. Note that for 4 segments pwlf has
extreme outliers for RMSE vs the source submodel.
Figure 4. Comparisons of different methods across the 4 datasets.
Our results show that applying our distillation technique with 4-5 segments
with pwlfit produces models which are as accurate as both the source GAM
forest and NAM models for all datasets except CWS, where a small gap remains.
We investigate this accuracy gap in detail in Section 6.2 below. In the case
of the COMPAS dataset these models are as accurate as full complexity models.
Applying our technique with pwlf produces competitive results, albeit less
accurate on the MSLR-WEB30K dataset. By contrast, the results show that
learning curves directly via SGD is less general. On the FICO and CWS datasets
more segments are required to achieve accuracy comparable to the GAM forest
models. On the MSLR-WEB30K dataset the accuracy is inferior even with many
more segments.
The consistent accuracy of applying our distillation approach with pwlfit on
these four datasets and three separate tasks (classification, regression,
learning to rank) demonstrates that the process is not sensitive to either the
specific data or the top level objective being used.
### 6.2. Distillation Failures
In Section 3.1 we explained how distillation yielding a model with inferior
accuracy warrants further investigation because the purported "failure" can
often be attributed to essential yet non-interpretable characteristics of the
teacher model not transferring to the student model. The accuracy gap observed
on the CWS dataset is an example of this phenomenon. Figure 5 shows the worst
two fits from the CWS dataset. The plots have been redacted to maintain the
privacy of the dataset. For each plot it is clear that the original teacher
submodel had some non-interpretable behavior which was lost during
distillation. This is most evident for feature $x_{1}$, the worst offender,
where the output is highly erratic. If the original teacher submodel is not
distilled for these two features then the accuracy gap between the original
teacher model and 5 segment non-monotonic distillation drops from 0.0059 to
0.003 (_i.e_. ~50% of the gap is recovered).
To identify the above two failures we applied the following method.
* •
Begin with the original teacher model. For each submodel compute the metric
delta against the teacher model from distilling only that submodel and no
others.
* •
Perform the above on each cross validation fold using the validation set and
average the metric deltas across folds.
* •
Sort the features by their associated metric delta to determine the worst
distillations.
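A sketch of this attribution procedure; distill_one and metric are placeholders for "distill only this submodel" and the top-level evaluation metric:
def rank_distillation_failures(folds, submodels, distill_one, metric):
    """Average, across folds, the validation-metric delta from distilling
    each submodel in isolation; the worst deltas flag failed distillations.
    folds: list of (teacher_model, validation_set) pairs, one per CV fold.
    """
    deltas = {name: 0.0 for name in submodels}
    for teacher, valid in folds:
        base = metric(teacher, valid)
        for name in submodels:
            delta = metric(distill_one(teacher, name), valid) - base
            deltas[name] += delta / len(folds)
    # Ascending: most negative delta first (for higher-is-better metrics).
    return sorted(submodels, key=deltas.get)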
Figure 5. The worst curve distillations from the CWS dataset using 5 segments.
### 6.3. Efficiency & Robustness
The experiments of the previous section showed that pwlfit more accurately
distills the source model across datasets than pwlf. We also found on the MSLR-
WEB30K dataset that pwlfit is more efficient and robust than pwlf. Figure 4(b)
shows per fit metrics from the first fold of the MSLR-WEB30K dataset as the
number of segments varies without monotonicity. The top plot shows the time in
seconds, as measured on a ThinkStation P520, to fit each of the 136 submodels
of the source GAM forest. We find that pwlfit is faster in the average case as
the number of segments increases, and has a narrower distribution. The bottom
plot shows the RMSE of each fit against the 136 submodels of the source GAM
forest. We again find that pwlfit performs favorably in the average case with
a narrower error distribution.
It’s worth noting that pwlf by default does not perform any downsampling. For
the MSLR-WEB30K dataset running pwlf without any downsampling was
prohibitively expensive. For all of our experiments we ran pwlf with a pre-
processing downsample to a random 1000 examples. We found this to be a fair
point for balancing speed and quality when comparing to pwlfit. It is of
course possible with both algorithms to modify the number of samples used to
strike a different trade-off between run time and accuracy.
### 6.4. Monotonicity
As discussed in Section 5, pwlfit can fit monotonic curves with automatic
direction detection. Figure 4(a) compares curve models fit with and without
monotonicity constraints (automatically inferring the direction) across
datasets. For the COMPAS dataset monotonic and non-monotonic models are
comparably accurate, while for FICO, MSLR-WEB30K, and CWS, non-monotonic
models are more accurate.
Monotonicity with respect to appropriate features is desirable for
interpretable models. In these cases a monotonic model may be preferable to a
non-monotonic one, even if it is less accurate. For example, Figure 6 compares
monotonic and non-monotonic 5 segment curve models on the FICO dataset for the
MSinceMostRecentDelq and PercentTradesWBalance features. Given the semantic
meaning of these features, it is desirable from a transparency and incentives
standpoint for the model output to be monotonic with respect to each of them.
Figure 6. Curve distillations on the FICO dataset using 5 segments without
monotonicity (orange) and with monotonicity (green).
## 7\. Conclusion
We have introduced a novel method for distilling interpretable models into
human-readable code using piecewise-linear curves and demonstrated its
efficacy on four datasets. We have shown that curve models match or outperform
the accuracy achieved by other additive models. On smaller datasets, curve
models match the accuracy of more complex models, like interacting decision
forests. Our localized distillation methodology is applicable to any model
containing univariate numerical functions and is straightforward to implement
using the publicly-available pwlfit(Sterling and Ravina, 2019) library.
We have explained how curve model distillation reinforces interpretability and
addresses a variety of real-world engineering challenges. Curve models are 1)
transparent, 2) well-regularized, 3) easy to analyze for presence of biases or
other fairness issues, and 4) can be directly edited or improved outside the
context of machine learning to fix the aforementioned fairness issues.
Distilling models into human-readable code allows one to address novel machine
learning problems using well-established software engineering methods. Curve
models can be improved by multiple contributors in parallel, reviewed, and
made to systematically follow best practices. Curve models are well-suited for
production applications, since they can be natively supported in many
languages, are easy to deploy, and fast to evaluate.
###### Acknowledgements.
We thank Vytenis Sakenas, Jaime Fernandez del Rio, Benoit Zhong, and Petr
Mitrichev for their support and providing the algorithms and optimization
infrastructure used in our experiments. We also thank Paul Heymann, Diego
Federici, Mike Bendersky, Paul Haahr, and Petr Mitrichev for their helpful
feedback and detailed reviews. Lastly, we thank Xinyu Qian, Janelle Lee, Po
Hu, and Chary Chen for preparing the CWS data set for our experiments.
## References
* FIC (2018) 2018\. FICO Explainable Machine Learning Challenge. https://community.fico.com/s/explainable-machine-learning-challenge.
* Agarwal et al. (2020) Rishabh Agarwal, Nicholas Frosst, Xuezhou Zhang, Rich Caruana, and Geoffrey E. Hinton. 2020. Neural Additive Models: Interpretable Machine Learning with Neural Nets. arXiv:cs.LG/2004.13912
* Angelino et al. (2017) Elaine Angelino, Nicholas Larus-Stone, Daniel Alabi, Margo Seltzer, and Cynthia Rudin. 2017\. Learning Certifiably Optimal Rule Lists. In _Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’17)_.
* Angwin et al. (2016) Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine bias: There’s software used across the country to predict future criminals. _And it’s biased against blacks. ProPublica_ 23 (2016).
* Buciluǎ et al. (2006) Cristian Buciluǎ, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model Compression. In _Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’06)_. 535–541.
* Caruana et al. (2015) Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad. 2015\. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’15)_. 1721–1730.
* Chen et al. (2018) Chaofan Chen, Kangcheng Lin, Cynthia Rudin, Yaron Shaposhnik, Sijia Wang, and Tong Wang. 2018\. An Interpretable Model with Globally Consistent Explanations for Credit Risk. arXiv:cs.LG/1811.12615
* Chouldechova (2017) Alexandra Chouldechova. 2017\. Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments. _Big Data_ 5, 2 (Jun 2017), 153–163. https://doi.org/10.1089/big.2016.0047
* Dressel and Farid (2018) Julia Dressel and Hany Farid. 2018. The accuracy, fairness, and limits of predicting recidivism. _Science Advances_ 4, 1 (2018). https://doi.org/10.1126/sciadv.aao5580
* Du et al. (2019) Mengnan Du, Ninghao Liu, and Xia Hu. 2019. Techniques for interpretable machine learning. _Commun. ACM_ 63, 1 (Dec 2019), 68–77. https://doi.org/10.1145/3359786
* Hastie and Tibshirani (1986) Trevor Hastie and Robert Tibshirani. 1986. Generalized Additive Models. _Statist. Sci._ 1, 3 (08 1986), 297–310.
* Hinton et al. (2015) Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015\. Distilling the Knowledge in a Neural Network. arXiv:stat.ML/1503.02531
* Jekel and Venter (2019) Charles F. Jekel and Gerhard Venter. 2019. _pwlf: A Python Library for Fitting 1D Continuous Piecewise Linear Functions_. https://github.com/cjekel/piecewise_linear_fit_py
* Kleinberg (2018) Jon Kleinberg. 2018\. Inherent Trade-Offs in Algorithmic Fairness. In _Abstracts of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems_ _(SIGMETRICS ’18)_.
* Kluyver et al. (2016) Thomas Kluyver, Benjamin Ragan-Kelley, Fernando Pérez, Brian Granger, Matthias Bussonnier, Jonathan Frederic, Kyle Kelley, Jessica Hamrick, Jason Grout, Sylvain Corlay, Paul Ivanov, Damián Avila, Safia Abdalla, and Carol Willing. 2016\. Jupyter Notebooks – a publishing format for reproducible computational workflows. In _Positioning and Power in Academic Publishing: Players, Agents and Agendas_ , F. Loizides and B. Schmidt (Eds.). IOS Press, 87 – 90.
* Kolmogorov (1957) A. N. Kolmogorov. 1957\. On the Representation of Continuous Functions of Several Variables by Superposition of Continuous Functions of One Variable and Addition. _Doklady Akademii Nauk SSSR_ 114 (1957), 369–373.
* Lou et al. (2012) Yin Lou, Rich Caruana, and Johannes Gehrke. 2012. Intelligible Models for Classification and Regression. In _Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’12)_. 150–158.
* Lou et al. (2013) Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013\. Accurate Intelligible Models with Pairwise Interactions. In _Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ _(KDD ’13)_. 623–631.
* Qin and Liu (2013) Tao Qin and Tie-Yan Liu. 2013. Introducing LETOR 4.0 Datasets. _CoRR_ abs/1306.2597 (2013). http://arxiv.org/abs/1306.2597
* Qin et al. (2010) Tao Qin, Tie-Yan Liu, and Hang Li. 2010. A general approximation framework for direct optimization of information retrieval measures. _Information retrieval_ 13, 4 (2010), 375–397.
* Rudin (2018) Cynthia Rudin. 2018\. Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. arXiv:stat.ML/1811.10154
* Sterling and Ravina (2019) Ethan Sterling and Walker Ravina. 2019. _pwlfit: A Piecewise-Linear Curve Fitting Library_. https://github.com/google/pwlfit
* Tan et al. (2018) Sarah Tan, Rich Caruana, Giles Hooker, and Yin Lou. 2018\. Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation. In _Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society_ _(AIES ’18)_. 303–310.
* Ustun and Rudin (2014) Berk Ustun and Cynthia Rudin. 2014. Methods and Models for Interpretable Linear Classification. arXiv:stat.ME/1405.4047
* Wikipedia (2020a) Wikipedia. 2020a. Bhatia–Davis inequality. http://en.wikipedia.org/w/index.php?title=Bhatia%E2%80%93Davis%20inequality&oldid=875899600. [Online; accessed 03-September-2020].
* Wikipedia (2020b) Wikipedia. 2020b. Isotonic regression. http://en.wikipedia.org/w/index.php?title=Isotonic%20regression&oldid=989717822. [Online; accessed 30-November-2020].
* Wikipedia (2020c) Wikipedia. 2020c. Kolmogorov–Arnold representation theorem. http://en.wikipedia.org/w/index.php?title=Kolmogorov%E2%80%93Arnold%20representation%20theorem&oldid=964097101. [Online; accessed 10-August-2020].
* Wikipedia (2020d) Wikipedia. 2020d. Minimum description length. http://en.wikipedia.org/w/index.php?title=Minimum%20description%20length&oldid=965620302. [Online; accessed 12-August-2020].
* Wikipedia (2020e) Wikipedia. 2020e. Nothing-up-my-sleeve number. http://en.wikipedia.org/w/index.php?title=Nothing-up-my-sleeve%20number&oldid=972510276. [Online; accessed 12-August-2020].
* Wikipedia (2020f) Wikipedia. 2020f. Segmented regression. http://en.wikipedia.org/w/index.php?title=Segmented%20regression&oldid=910888930. [Online; accessed 10-August-2020].
* Zeiler (2012) Matthew D. Zeiler. 2012\. ADADELTA: An Adaptive Learning Rate Method. _CoRR_ abs/1212.5701 (2012). arXiv:1212.5701 http://arxiv.org/abs/1212.5701
* Zhuang et al. (2021) Honglei Zhuang, Xuanhui Wang, Michael Bendersky, Alexander Grushetsky, Yonghui Wu, Petr Mitrichev, Ethan Sterling, Nathan Bell, Walker Ravina, and Hai Qian. 2021\. Interpretable Ranking with Generalized Additive Models. In _Proceedings of the 14th ACM International Conference on Web Search and Data Mining_ _(WSDM ’21)_. to appear.
## Appendix A Experimental Details
Table 2. Metrics across datasets. For AUC-ROC and NDCG@5 higher is better, and for RMSE lower is better. MSLR-WEB30K and CWS were not used by the NAM paper and are omitted from that row. Metric values are the mean from five fold cross validation $\pm$ the sample standard deviation.
Model | COMPAS (AUC-ROC) | FICO (RMSE) | MSLR-WEB30K (NDCG@5) | CWS (NDCG@5)
---|---|---|---|---
Interacting forest | $0.742\pm 0.011$ | $3.128\pm 0.092$ | $0.485\pm 0.002$ | $0.461\pm 0.003$
GAM forest | $0.741\pm 0.013$ | $3.495\pm 0.109$ | $0.442\pm 0.002$ | $0.460\pm 0.004$
NAM | $0.741\pm 0.009$ | $3.490\pm 0.081$ | |
pwlfit num_segments=1, mono=False | $0.741\pm 0.008$ | $3.781\pm 0.105$ | $0.432\pm 0.003$ | $0.449\pm 0.003$
pwlfit num_segments=1, mono=True | $0.741\pm 0.008$ | $3.780\pm 0.105$ | $0.432\pm 0.003$ | $0.450\pm 0.003$
pwlfit num_segments=2, mono=False | $0.742\pm 0.011$ | $3.617\pm 0.101$ | $0.438\pm 0.002$ | $0.452\pm 0.003$
pwlfit num_segments=2, mono=True | $0.742\pm 0.011$ | $3.713\pm 0.103$ | $0.435\pm 0.003$ | $0.451\pm 0.003$
pwlfit num_segments=3, mono=False | $0.743\pm 0.010$ | $3.536\pm 0.099$ | $0.440\pm 0.002$ | $0.453\pm 0.002$
pwlfit num_segments=3, mono=True | $0.743\pm 0.011$ | $3.691\pm 0.100$ | $0.437\pm 0.002$ | $0.451\pm 0.003$
pwlfit num_segments=4, mono=False | $0.742\pm 0.012$ | $3.505\pm 0.094$ | $0.441\pm 0.002$ | $0.454\pm 0.003$
pwlfit num_segments=4, mono=True | $0.742\pm 0.012$ | $3.691\pm 0.101$ | $0.437\pm 0.003$ | $0.452\pm 0.004$
pwlfit num_segments=5, mono=False | $0.743\pm 0.012$ | $3.494\pm 0.096$ | $0.441\pm 0.002$ | $0.454\pm 0.002$
pwlfit num_segments=5, mono=True | $0.743\pm 0.013$ | $3.693\pm 0.101$ | $0.437\pm 0.003$ | $0.452\pm 0.003$
pwlf num_segments=2 | $0.742\pm 0.014$ | $3.728\pm 0.101$ | $0.428\pm 0.004$ | $0.451\pm 0.003$
pwlf num_segments=3 | $0.743\pm 0.012$ | $3.565\pm 0.108$ | $0.434\pm 0.004$ | $0.453\pm 0.003$
pwlf num_segments=4 | $0.744\pm 0.012$ | $3.498\pm 0.099$ | $0.436\pm 0.004$ | $0.453\pm 0.003$
pwlf num_segments=5 | $0.743\pm 0.012$ | $3.503\pm 0.096$ | $0.433\pm 0.003$ | $0.454\pm 0.003$
SGD num_segments=1 | $0.728\pm 0.008$ | $4.349\pm 0.059$ | $0.352\pm 0.002$ | $0.435\pm 0.003$
SGD num_segments=2 | $0.734\pm 0.007$ | $4.028\pm 0.095$ | $0.382\pm 0.001$ | $0.443\pm 0.004$
SGD num_segments=3 | $0.742\pm 0.008$ | $3.887\pm 0.080$ | $0.394\pm 0.002$ | $0.449\pm 0.003$
SGD num_segments=4 | $0.742\pm 0.010$ | $3.742\pm 0.110$ | $0.403\pm 0.003$ | $0.449\pm 0.004$
SGD num_segments=5 | $0.741\pm 0.010$ | $3.643\pm 0.097$ | $0.405\pm 0.002$ | $0.448\pm 0.004$
SGD num_segments=6 | $0.742\pm 0.010$ | $3.583\pm 0.105$ | $0.408\pm 0.002$ | $0.452\pm 0.002$
SGD num_segments=7 | $0.741\pm 0.010$ | $3.604\pm 0.098$ | $0.408\pm 0.002$ | $0.452\pm 0.003$
SGD num_segments=8 | $0.741\pm 0.010$ | $3.561\pm 0.111$ | $0.411\pm 0.003$ | $0.449\pm 0.004$
SGD num_segments=9 | $0.741\pm 0.011$ | $3.522\pm 0.103$ | $0.414\pm 0.002$ | $0.449\pm 0.003$
SGD num_segments=10 | $0.741\pm 0.010$ | $3.544\pm 0.117$ | $0.415\pm 0.002$ | $0.452\pm 0.005$
SGD num_segments=15 | $0.740\pm 0.011$ | $3.499\pm 0.101$ | $0.419\pm 0.003$ | $0.454\pm 0.003$
SGD num_segments=20 | $0.738\pm 0.011$ | $3.499\pm 0.117$ | $0.419\pm 0.003$ | $0.455\pm 0.003$
### A.1. Cross Validation
We performed 5-fold cross validation on all datasets.
* •
COMPAS & FICO: The datasets were split into 5 equal parts. Each part was used
once as a test set (20%) with the remaining parts as the training set (80%).
We used the same random folds as in the NAM paper (Agarwal et al., 2020). No
validation set was used given the small size of the data. Instead we used out
of bag evaluation wherever a validation set would be used (see below).
* •
MSLR-WEB30K: We used the predefined folds and partitions from the original
dataset. For each fold it allocates 60% for training, 20% for validation, and
20% for testing.
* •
CWS: We used a dataset of 60,000 queries and 2,690,439 items with an average
of ~44 items per query. The dataset was split into 5 equal parts. Each part
was used once as a test set. Of the remaining parts 80% was used as training
and 20% as validation. Overall this resulted in 64% for training, 16% for
validation and 20% for test for each fold.
### A.2. Ensemble Learning
For both SGD and tree models, we trained ensembles with 56 bags using a bag
fraction of $\frac{7}{8}$ to produce the random subsets. For MSLR-WEB30K and
CWS, queries were randomly divided into bags. For the other datasets,
individual examples were randomly divided into bags. When applying our
distillation technique, we distilled the ensembles into a single PWLCurve per
feature. When learning the curves directly via SGD, we averaged the learned
$y$-coordinate values across bags to obtain the final model.
### A.3. Loss Functions
We trained SGD and tree models using log-loss for the COMPAS dataset, mean
squared error (MSE) for the FICO dataset, and ranking loss (ApproxNDCG (Qin et
al., 2010)) for the MSLR-WEB30K and CWS datasets.
### A.4. Hyper-parameters
For the COMPAS and FICO datasets, hyper-parameters were tuned using out of bag
evaluation on the training set of the first fold. For MSLR-WEB30K and CWS, we
used the validation sets of the first fold.
* •
SGD: We tuned the batch size in {128, 256, 512, 1024, 4096}. We used the
Adadelta (Zeiler, 2012) optimizer and tuned a sufficient maximum number of
steps for convergence. No other parameters were tuned.
* •
Interacting forest: We trained depth 5 trees using an internal boosted forest
algorithm. We tuned a sufficient maximum number of steps for convergence. No
other parameters were tuned.
* •
GAM forest: We trained depth 3 trees restricted to using a single feature with
an internal boosted forest algorithm. We tuned a sufficient maximum number of
steps for convergence. No other parameters were tuned.
In all cases, we trained models for the tuned maximum number of steps and then
truncated models after training. Truncation used a confidence-based truncation
algorithm which attempts to select the earliest step for which no later step
provides a confident win. This algorithm was run on the validation set if
present or otherwise utilized out of bag evaluation.
### A.5. Code
The GitHub repository for pwlfit (Sterling and Ravina, 2019) contains several
Jupyter notebooks applying our distillation technique and performing the
analyses shown in this paper. Please reference the v0.2.0 release to get the
accompanying data files and appropriate version of the Jupyter notebooks.
## Appendix B Linear Condense
Linear condensing is a data optimization designed to reduce the runtime
complexity of our piecewise-linear curve fitting.
### B.1. Motivation/Overview
pwlfit picks a set of candidate $x$-knots and searches through combinations of
those $x$-knots. For each combination considered, it solves a linear least
squares expression for the ideal $y$-knots, calculates the resulting squared
error, and prefers the combination that yields the lowest error.
Each solve is linear in the size of input, which is slow for large data. We
could downsample to save compute at the cost of accuracy. Instead, we
introduce a technique to save compute at no cost in accuracy. We condense the
data into $\mathcal{O}(\\#candidates)$ synthetic points. These synthetic
points perfectly recreate the true squared error over the full data for every
PWLCurve that will be considered. We then optimize over the synthetic points
instead of the real points.
This is possible because we know the candidate $x$-knots ahead of time. A
PWLCurve defined on those $x$-knots will always be linear between any adjacent
$x$-knots in the set of candidates. As we show in the theorem, we can condense
arbitrarily many points down to two points such that linear fits are the same
on those two points as on the full set. In the corollary, we apply this
process separately between each pair of candidate $x$-knots, producing two
points between each pair. Together, the squared error of such a PWLCurve is
the same on those synthetic points as it is on the full data set. (Up to a
constant that we safely ignore because it’s the same for each PWLCurve.)
### B.2. Definitions
For convenience, we take standard definitions and specialize them for weighted
2D points.
###### Definition B.1.
Let a ‘point’ refer to a real-valued triple of the form $(x,y,weight)$ with
$weight>0$.
###### Definition B.2.
Let a ‘line’ refer to a function of the form $f(x)=mx+b$ for
$m,b,x\in\mathbb{R}$.
###### Definition B.3.
For any function $f:\mathbb{R}\to\mathbb{R}$ and finite point set $P$, define
the squared error $SE(f,P)$ as the sum of $(f(x)-y)^{2}\cdot weight$ for each
point in $P$. If $P$ is empty, we consider the squared error to be $0$.
###### Definition B.4.
For any finite point set $P$, define the ‘best fit line’ $bestfitline(P)$ as
the line $L$ that minimizes $SE(L,P)$. In the degenerate case where multiple
lines minimize $SE$, let the best fit line be the solution with zero slope,
and if multiple solutions have zero slope, let the best fit line be the
solution with a zero $y$-intercept.
There are two degenerate cases that require tie-breaking. If the point set is
empty, every line has the same squared error, so our definition chooses
$f(x)=0$ as the best fit line. If the point set is nonempty but all its points
have the same $x$, then any line with the correct value at $x$ will minimize
the squared error, so our definition chooses the horizontal line.
### B.3. Theorem
###### Theorem B.5.
Given a set of points $P$, we can construct a set $P^{\prime}$ of two or fewer
points such that
1\. $min_{x}(P)<=min_{x}(P^{\prime})<=max_{x}(P^{\prime})<=max_{x}(P)$, and
2\. For any line $L$, $SE(L,P)=SE(L,P^{\prime})+SE(bestfitline(P),P)$.
###### Remark.
These properties are desirable because (2) allows us to compute the squared
error of $M$ lines over a data set of $N$ points in $\mathcal{O}(N+M)$ instead
of the naive $\mathcal{O}(NM)$, and (1) allows us to extend this property from
lines to a useful class of piecewise-linear curves in the corollary.
Note that the points in $P^{\prime}$ are constructed, rather than chosen from
$P$. The construction of $P^{\prime}$ is implemented in pwlfit (Sterling and
Ravina, 2019) as linear_condense.linear_condense.
###### Proof.
Let $X$, $Y$, and $W$ represent the $x,y$, and $weight$ values of $P$,
respectively. We dismiss the trivial case where $P$ is empty; in that case, an
empty $P^{\prime}$ satisfies the requirements. Likewise, we dismiss the case
where $min(X)=max(X)$ since $P^{\prime}=\\{Centroid(P)\\}$ fulfills our
desired properties. With those cases resolved, we assume for the rest of this
proof that $min(X)<max(X)$.
#### B.3.1. Reframe the Coordinate System
To begin, we reframe the coordinate system such that the origin is the
centroid of $P$ and $y=0$ is the best fit line. (This simplifies the math.) We
ensure that the shift of coordinates is reversible and preserves the squared
error.
$Centroid(P)=(X\cdot W/sum(W),Y\cdot W/sum(W))$. We translate the coordinate
frame by this centroid so that, under the new coordinates,
$Centroid(P)=(0,0)$. After translation, $X\cdot W=0$ and $Y\cdot W=0$.
Additionally, we skew the coordinate system by the slope of the best fit line:
we replace $Y$ with $Y-X\cdot slope(bestfitline(P))$. With the centroid at the
origin, the slope of the best fit line is
$Covariance(X,Y,W)/Variance(X,W)=(XY\cdot W)/(XX\cdot W)$. After skewing this
slope to 0, $XY\cdot W=0$.
Under the new coordinate frame, $SE(bestfitline(P),P)=SE(y=0,P)=Y^{2}\cdot W$.
We will determine $P^{\prime}$ under this new coordinate system. Afterwards,
we can easily convert $P^{\prime}$ back to the original coordinate system by
reversing the skew and the translation.
#### B.3.2. Squared Error of an arbitrary line
We will express $SE(line,P)$ as $SE(bestfitline(P),P)$ plus leftover terms.
From that, we will derive a $P^{\prime}$ such that $SE(line,P^{\prime})$
equals those leftover terms.
For an arbitrary line $y=mx+b$,
$SE(y=mx+b,P)=(mX+b-Y)^{2}\cdot W=(m^{2}X^{2}+2bmX-2mXY+b^{2}-2bY+Y^{2})\cdot
W.$
In our coordinate frame, $X\cdot W=0$, $Y\cdot W=0$, and $XY\cdot W=0$. So
$SE(y=mx+b,P)=(m^{2}X^{2}+b^{2}+Y^{2})\cdot W.$
$Y^{2}\cdot W=SE(bestfitline(P),P)$. Therefore,
$SE(y=mx+b,P)=m^{2}X^{2}\cdot W+b^{2}\cdot W+SE(bestfitline(P),P).$
$m^{2}X^{2}\cdot W+b^{2}\cdot W=SE(y=mx+b,P)-SE(bestfitline(P),P).$
#### B.3.3. Squared error over $P^{\prime}$
$SE(y=mx+b,P^{\prime})=SE(y=mx+b,P)-SE(bestfitline(P),P)$
for all lines $y=mx+b$ $\iff$ $(mX^{\prime}+b-Y^{\prime})^{2}\cdot
W^{\prime}=m^{2}X^{2}\cdot W+b^{2}\cdot W$ for all lines $y=mx+b$.
The above equation can be viewed as a quadratic polynomial in the two
variables $m$ and $b$. To hold for all values of $m$ and $b$, the coefficients
of each $m^{c}b^{d}$ must be equal on both sides of the equation. Then the
equation holds iff:
1\. $X^{\prime 2}\cdot W^{\prime}=X^{2}\cdot W$, and
2\. $X^{\prime}\cdot W^{\prime}=0$, and
3\. $sum(W)=sum(W^{\prime})$, and
4\. $Y^{\prime}\cdot W^{\prime}=0$, and
5\. $Y^{\prime 2}\cdot W^{\prime}=0$, and
6\. $X^{\prime}Y^{\prime}\cdot W^{\prime}=0$.
(5) $\iff$ $Y^{\prime}=0$, which also guarantees (4) and (6). We will use 1-3
to derive a satisfactory $X^{\prime}$ and $W^{\prime}$.
#### B.3.4. Deriving $X^{\prime}$ and $W^{\prime}$
We’ve determined that $Y^{\prime}=0$.
Let $X^{\prime}:=(x_{1},x_{2})$ and $W^{\prime}:=(w_{1},w_{2})$. Without loss
of generality, let $x_{1}<=x_{2}$. Then, to satisfy our squared error
expression, it’s necessary and sufficient that:
1\. $x_{1}^{2}w_{1}+x_{2}^{2}w_{2}=X^{2}\cdot W$, and
2\. $x_{1}w_{1}+x_{2}w_{2}=0$, and
3\. $w_{1}+w_{2}=sum(W)$.
Because we have three equations in four unknowns, we cannot directly solve for
$x_{1},x_{2},w_{1},w_{2}.$ To produce a fourth equation, we choose the
constraint that $x_{1}/x_{2}=min(X)/max(X)$. This choice will simplify the
math, and will ensure that $min(X)<=x_{1}<=x_{2}<=max(X)$.
With this fourth equation, we solve the simultaneous equations to produce:
$x_{1}=-stddev(X,W)\sqrt{-min(X)/max(X)}$
$x_{2}=stddev(X,W)\sqrt{max(X)/-min(X)}$.
$w_{1}=sum(W)\cdot max(X)/(max(X)-min(X))$
$w_{2}=sum(W)\cdot-min(X)/(max(X)-min(X))$.
Note that, because the centroid is zero, $min(X)<0<max(X)$, so these
expressions are all defined. (The denominators are never 0 and values beneath
the square roots are never negative.)
$P^{\prime}={(x_{1},0,w_{1}),(x_{2},0,w_{2})}$ satisfies our requirements.
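The construction transcribes almost directly into code. The following numpy sketch implements the theorem (our own transcription of the formulas above, not pwlfit's exact source):
import numpy as np
def linear_condense(x, y, w):
    """Condense weighted points into <=2 synthetic points preserving
    SE(line, P) up to the constant SE(bestfitline(P), P) (Theorem B.5)."""
    if len(x) == 0:
        return np.empty((0, 3))
    sw = float(w.sum())
    cx, cy = np.average(x, weights=w), np.average(y, weights=w)
    xs, ys = x - cx, y - cy  # translate the centroid to the origin
    if np.all(xs == 0):  # degenerate: all x equal, condense to the centroid
        return np.array([[cx, cy, sw]])
    slope = np.sum(xs * ys * w) / np.sum(xs * xs * w)
    std = np.sqrt(np.sum(xs * xs * w) / sw)
    lo, hi = xs.min(), xs.max()  # lo < 0 < hi after centering
    x1, x2 = -std * np.sqrt(-lo / hi), std * np.sqrt(hi / -lo)
    w1, w2 = sw * hi / (hi - lo), sw * -lo / (hi - lo)
    pts = np.array([[x1, 0.0, w1], [x2, 0.0, w2]])
    pts[:, 1] = slope * pts[:, 0] + cy  # undo the skew, then the translation
    pts[:, 0] += cx
    return pts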
#### B.3.5. Verify that $min(X)<=x_{1}<=x_{2}<=max(X)$
We wanted $P^{\prime}$ to satisfy the squared error expression, which it
does, and also have its x-values bounded by the x-values of $P$, which we
prove now. Let $\mu:=E(X,W)$, the expected value of $X$ weighted by $W$, which
is equivalent to the x-value of $Centroid(P)$. By the Bhatia–Davis inequality
(Wikipedia, 2020a),
$stddev(X,W)^{2}<=(\mu-min(X))(max(X)-\mu)$. (This inequality is equivalent to
the observation that the standard deviation of a distribution is maximized
when all the xs are at the extremes – i.e. equal to min(X) or max(X).)
Since $\mu$ is zero for $P$, $stddev(X,W)^{2}<=-min(X)max(X)$.
$x_{1}^{2}=stddev(X,W)^{2}\cdot(-min(X)/max(X))<=-min(X)max(X)\cdot(-min(X)/max(X))=min(X)^{2}.$
$x_{1}<0$ and $min(X)<0$, so $x_{1}^{2}<=min(X)^{2}\implies min(X)<=x_{1}$.
The proof that $x_{2}<=max(X)$ is similar.
Therefore $min(X)<=x_{1}<=x_{2}<=max(X)$, as desired. ∎
### B.4. Corollary
###### Corollary B.6.
Given a set of points $P$ and a set of x-knots $K$, we can construct a set of
points $P^{\prime}$ with $|P^{\prime}|<=2(|K|-1)$ such that, for any PWLCurve
$C$ whose x-knots are elements of $K$, $SE(C,P)=SE(C,P^{\prime})+c$, where $c$
is a constant determined exclusively by $P$ and $K$ that’s the same for every
$C$.
Note that the points in $P^{\prime}$ are constructed, rather than chosen from
$P$. The construction of $P^{\prime}$ is implemented in pwlfit (Sterling and
Ravina, 2019) as linear_condense.condense_around_knots.
#### B.4.1. Preprocess $P$ by clamping
Let $k:=|K|$, and consider $K$ in sorted order. Piecewise-linear curves are
constant for input values that exceed the range of their x-knots, so $C$ is
constant for $x<=min(K)=K[0]$ and for $x>=max(K)=K[k-1]$.
Therefore we can clamp the x-values of $P$ to $[K[0],K[k-1]]$ without altering
$SE(C,P)$. We do so as a preprocess.
#### B.4.2. Partition $P$ by $K$
To construct $P^{\prime}$ from $P$, we first partition $P$ by $K$ into $k+1$
disjoint pieces labeled $P_{0}$, $P_{1}$, …, $P_{k}$.
- $P_{0}$ := $\\{x\in P|x<K[0]\\}$.
- $P_{i}$ := $\\{x\in P|K[i-1]<=x<K[i]\\}$ for $1<=i<=k-2$.
- $P_{k-1}$ := $\\{x\in P|K[k-2]<=x<=K[k-1]\\}$.
- $P_{k}$ := $\\{x\in P|K[k-1]<x\\}$.
Because we clamped $P$, $P_{0}$ and $P_{k}$ are empty. Therefore
$\bigcup_{i=1}^{k-1}P_{i}=P$.
A PWLCurve is linear between consecutive control points, so $C$ is linear over
each $P_{i}$.
#### B.4.3. Convert each partition into two points
From the theorem, for each $P_{i}$, we can produce a two-point set
$P_{i}^{\prime}$ with
$min_{x}(P_{i})<=min_{x}(P_{i}^{\prime})<=max_{x}(P_{i}^{\prime})<=max_{x}(P_{i})$,
such that for any line $L$,
$SE(L,P_{i})=SE(L,P_{i}^{\prime})+SE(bestfitline(P_{i}),P_{i})$. $C$ is linear
over each $P_{i}$, so
$SE(C,P_{i})=SE(C,P_{i}^{\prime})+SE(bestfitline(P_{i}),P_{i})$.
#### B.4.4. Recombine partitions
Let $P^{\prime}:=\bigcup_{i=1}^{k-1}P_{i}^{\prime}$. Each $P_{i}^{\prime}$
consists of two points, so $P^{\prime}$ consists of $2(|K|-1)$ points.
$\displaystyle SE(C,P)$ $\displaystyle=\sum_{i=1}^{k-1}SE(C,P_{i})$
$\displaystyle={\sum_{i=1}^{k-1}(SE(C,P_{i}^{\prime})+SE(bestfitline(P_{i}),P_{i}))}$
$\displaystyle={SE(C,P^{\prime})+\sum_{i=1}^{k-1}SE(bestfitline(P_{i}),P_{i})}.$
$\sum_{i=1}^{k-1}SE(bestfitline(P_{i}),P_{i})$ is determined by $P$ and $K$,
and is therefore the same for every $C$. Therefore we’ve proven the corollary.
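The corollary likewise translates into a short routine that clamps, partitions, and condenses piecewise. This sketch mirrors what pwlfit's linear_condense.condense_around_knots does, reusing linear_condense from the theorem's sketch; the actual library interface may differ:
import numpy as np
def condense_around_knots(x, y, w, knots):
    """Condense points into <=2 synthetic points per adjacent knot pair."""
    xc = np.clip(x, knots[0], knots[-1])  # P_0 and P_k become empty
    pieces = []
    for i in range(len(knots) - 1):
        last = (i == len(knots) - 2)  # last piece is closed on the right
        mask = (knots[i] <= xc) & ((xc <= knots[i + 1]) if last else (xc < knots[i + 1]))
        pieces.append(linear_condense(xc[mask], y[mask], w[mask]))
    return np.concatenate(pieces)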
|
# Templates of generic geographic information for answering where-questions
Ehsan Hamzei, Stephan Winter and Martin Tomko. Correspondence concerning this
article should be addressed to Ehsan Hamzei, Department of Infrastructure
Engineering, The University of Melbourne, Parkville, VIC 3010, Australia;
Email: <EMAIL_ADDRESS>. Department of Infrastructure
Engineering, The University of Melbourne, Parkville, Australia
###### Abstract
In everyday communication, where-questions are answered by place descriptions.
To answer where-questions automatically, computers should be able to generate
relevant place descriptions that satisfy inquirers’ information needs. Human-
generated answers to where-questions are constructed based on a few anchor
places that characterize the location of the inquired places. The challenge for
automatically generating such relevant responses stems from selecting relevant
anchor places. In this paper, we present templates that allow us to
characterize human-generated answers and to imitate their structure. These templates
are patterns of generic geographic information derived and encoded from the
largest available machine comprehension dataset, MS MARCO v2.1. In our
approach, the toponyms in the questions and answers of the dataset are encoded
into sequences of generic information. Next, sequence prediction methods are
used to model the relation between the generic information in the questions
and their answers. Finally, we evaluate the performance of predicting
templates for answers to where-questions.
###### keywords:
question answering; notion of place; scale; prominence
## 1 Introduction
Consider the following question and its corresponding answer, taken from the
Microsoft Machine Comprehension (MS MARCO) dataset v2.1 (Nguyen et al., 2016):
Question: Where is Putney Bridge?
Answer: Putney Bridge is a bridge crossing of the River Thames in west London.
This where-question is answered using a _place description_ – a description
that characterizes the location of interest (Putney Bridge) based on a few
anchor places (River Thames and London). Place descriptions, however, are not
the only way to answer where-questions. Where-questions can also be answered
via other representations such as maps or sketches (Church et al., 2010). Invariant
to the chosen representation, the answers localize the place in question based
on its spatial relationships with chosen anchor places (Couclelis et al., 1987).
Hence, answering where-questions poses the following challenges no matter what
representation is used:
* •
Generating informative answers – i.e., the answer should fill the inquirers’
gap of knowledge, avoiding obvious or already-known responses and including
useful and necessary information (Shanon, 1983). In the example, obvious,
inadequate or undetailed answers such as on Earth, in the UK, or over a river
are avoided by the responder.
* •
Answering the question in a cognitively efficient manner (D. Wilson & Sperber,
2002) – e.g., producing short and straightforward place descriptions (Hamzei,
Li, et al., 2019) and personalized map labelling strategies in map visualizations
(J. A. Wilson, 2018). In the example, the responder excludes unnecessary
information such as the nearby theaters and restaurants to keep the answer as
simple and relevant as possible.
* •
Determining the level of granularity of answers – e.g., a suitable zoom level for maps (Ballatore, 2019) and referring to places of suitable granularity in place descriptions (Hamzei, Winter, & Tomko, 2019). In our example, the names of the roads and streets that are connected to the bridge are neglected in the answer based on the responder’s judgement of the relevant scale level.
* •
Selecting places that can be assumed to be known by the inquirer – e.g., labelling the places known to inquirers in maps (Suomela et al., 2009) and referring to them in place descriptions as anchors. In the example, the locations of _River Thames_ and _London_ are assumed to be known to the inquirer.
When an answer meets these challenges, the communication succeeds. Addressing these challenges is a necessary step towards answering where-questions. To understand and imitate human selectivity in choosing anchor places, we investigate and characterize human-generated answers to where-questions. The results of our research are applied to generating answers to where-questions as natural language responses (place descriptions). Selecting relevant anchor places is an essential part of generating place descriptions that succeed in answering where-questions. Moreover, information about anchor places can be used to visualize static maps within a proper context frame (Ballatore, 2019).
Current geographic question answering systems often focus on coordinate retrieval as answers to where-questions (e.g., Luque et al., 2006; Stadler et al., 2012). While coordinates are useful for communication between location-based services to perform spatial analysis or visualization (Jiang & Yao, 2006), they are not necessarily relevant responses to inquirers without a proper map visualization. Yet, a characterization of relevant anchor places to localize a place in question is still missing.
In this paper, we study human-generated answers to where-questions to identify the properties of such answers and to devise and test a method to imitate their structure in machine-generated responses. To achieve these goals, the information in a where-question and its answer is modelled as an ordered set of places that are mentioned in their content. Then the properties of places in questions and corresponding answers are derived and further investigated. This model forms a template (i.e., an ordered set of place properties) that enables computers to learn and imitate human answering behaviour. In other words, place properties are utilized to understand why a set of places is chosen as anchors to localize the place in question and how this selectivity can be imitated by computers.
The properties used in the templates are generic geographic information that describes the shared meaning of places in the form of generic types from a finite set of categories. Referring to the example above, the
place in question is a bridge which is localized by referring to the river it
goes over and the city it belongs to. Here, the template captures the
structure of the answer as relationships between bridges and rivers, and
bridges and cities.
### 1.1 Background: Geographic Question Answering
Geographic Question Answering (GeoQA) is defined as methods and algorithms
that help inquirers to satisfy their information need by deriving answers to
their geographic questions. In GeoQA, answering geographic questions can be
based on diverse information sources such as textual information (Mishra et al., 2010; Ferrés & Rodríguez, 2006), geodatabases (Chen et al., 2013), and spatially-enabled knowledge bases (Ferrés & Rodríguez, 2010). GeoQA (and in general QA) architectures typically resolve three tasks: (a) question classification and intent analysis, (b) finding relevant sources, and (c) extracting answers from the sources (Ferrés & Rodríguez, 2006).
The classification of the questions (Hamzei, Li, et al., 2019; Mohasseb et al., 2018) enables GeoQA to coarsely identify the intent and purpose of asking questions (e.g., localization, or navigation). Next, the questions are translated into formal representations such as database queries or even just a vector representation of extracted keywords (Punjani et al., 2018). Using the representations, the information sources can be searched or queried to look up the possible answers (Zheng et al., 2019). Finally, the factoid answers are retrieved from the sources – e.g., a sentence in a Web document, a cell in a selected table, or a node in a graph knowledge base (Sun et al., 2018).
In recent years, several GeoQA studies were conducted for answering geographic questions (Stadler et al., 2012), creating knowledge bases from unstructured data (Mai et al., 2018), and relaxing unanswerable questions (Mai et al., 2020). Focusing on answering geographic questions, previous studies provide solutions to retrieve responses from knowledge bases (Stadler et al., 2012) and documents (Buscaldi et al., 2006; Luque et al., 2006). GeoQA studies are mostly focused on what/which questions about geographic places (e.g., Scheider et al., 2020; Vahedi et al., 2016). In answering where-questions, the task is either simplified to retrieving stored coordinates (Luque et al., 2006; Stadler et al., 2012), or to selecting a part of a text without explicit adaptation to the question (Buscaldi et al., 2006).
When answering where-questions, the answer extraction step is particularly challenging. Without a well-designed approach to imitate human answering behavior, the extracted answers can easily be over-specified and consequently uninterpretable for the inquirer, or under-specified and thus obvious and uninformative to the inquirer (Shanon, 1983). Hence, the challenge is to provide relevant answers by selecting a proper set of anchor places to localize the place in question.
### 1.2 Rationale and Research Gap
To enable computers to provide responses with similar qualities to human-generated answers, the responses need to be relevant. An answer is relevant if its positive cognitive effects on inquirers are large and the processing effort to achieve these effects is small (D. Wilson & Sperber, 2002). In other words, answers should be informative enough and as straightforward as possible. Assuming human-generated answers are relevant responses, machine-generated responses should imitate the selectivity in human-generated answers to provide useful pieces of information and avoid unnecessary ones. Generating such relevant responses is the major prerequisite of intelligent GeoQA as defined by Winter (2009).
Generic information captures the shared meaning of geographic places. While generic geographic information has not been used in QA, it has been used to investigate and characterize place descriptions (Richter et al., 2013; Edwardes & Purves, 2007), route descriptions (Raubal & Winter, 2002), and regions (Tomko & Purves, 2008).
This research hypothesizes that, at least in the English language, generic
geographic information can be used to characterize human answering behavior
and ultimately to generate templates for answering where-questions. We
approach this hypothesis by addressing three sub-hypotheses.
###### Sub-hypothesis 1 (Characteristics of the answers).
Human-generated answers to where-questions have special characteristics that
can be described and characterized in terms of generic geographic information
such as type, scale, and prominence;
###### Sub-hypothesis 2 (Relation between where-questions and their answers).
There is a strong relationship between generic information in the content of
where-questions and their answers which can be used to characterize human
answering behavior;
###### Sub-hypothesis 3 (Generating answers to where-questions).
If Sub-hypotheses 1 and 2 hold, the characteristics of human-generated answers and the relation between the questions and their answers can be used to generate templates for answering where-questions.
To investigate the hypotheses, the following research questions will be
addressed:
1. 1.
How can the characterizing patterns of the human-generated answers be derived?
2. 2.
How does generic geographic information in where-questions relate to the
generic information in their human-generated answers?
3. 3.
How can the templates be generated to imitate the structure of human-generated
answers?
By addressing the research questions, we contribute:
* •
A generalized approach to investigate human answering behavior to where-
questions using generic geographic information;
* •
An investigation of the human-generated answers to where-questions asked in Web search, using patterns of type, scale and prominence of places.
## 2 Methodology
To investigate the hypotheses, we propose a generalized approach of _specific-generic translation_. Next, a method using specific-generic translation is devised to investigate QA in the interaction of people with a general-purpose search engine. Other QA scenarios (e.g., human-human dialogue, human-social bot interaction) may require a different design of the specific-generic translation.
### 2.1 Specific-Generic Translation
Figure 1 shows the proposed approach to derive characterizing patterns in human-generated answers and to generate templates for answering where-questions. The approach includes two stages: (1) _learning generic patterns_, where the objective is to investigate and characterize human answering behavior in a machine learning model, and (2) _answering questions_, where the model is used to generate answers. The novelty of the approach lies in encoding questions and answers into their generic meaning and in modelling the relation between questions and answers in their generic forms. Later, the model is used to generate generic forms of answers to where-questions. Finally, the answer is constructed by decoding the generic form (e.g., type) of the answer into its specific representation (i.e., toponyms).
Figure 1: Specific-generic translation approach
The specific-generic translation approach involves the following steps:
1. 1.
Selecting a set of generic information classes (e.g., place type, scale,
prominence, functions and access) based on the context of QA and availability
of data;
2. 2.
Defining a schema for each selected generic information class;
3. 3.
Designing an information extraction approach to encode the questions and
answers into generic forms (Generic Encoder in Figure 1);
4. 4.
Evaluating how effective each generic class is in capturing the relation
between the questions and their human-generated answers (Generalized Patterns
in Figure 1). The results of evaluation also provide insights about human
answering behavior in the context of the QA problem.
5. 5.
Training a predictive model that can learn generalized patterns of human-
generated answers (Predictive Model in Figure 1);
6. 6.
Defining a decoding approach to map generic forms of answers into specific
(toponym) representation (Generic Decoder in Figure 1). This step can be
followed by additional steps such as natural language generation to be used in
real-world applications.
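As a rough illustration of how these steps fit together, the sketch below renders the three moving parts of Figure 1 as Java interfaces (Java being one of the languages the published workflow is implemented in; see Section 8). All type and method names are illustrative assumptions, not the actual API of the released code.

```java
// Minimal sketch of the specific-generic translation pipeline as Java
// interfaces; names are illustrative, not the authors' actual API.
import java.util.List;

interface GenericEncoder {
    // Encodes a sequence of toponyms into a generic sequence (e.g., place types).
    List<String> encode(List<String> toponyms);
}

interface PredictiveModel {
    // Learns the relation between generic question and answer sequences.
    void train(List<List<String>> questions, List<List<String>> answers);
    // Predicts the generic form of an answer from a generic question.
    List<String> predict(List<String> questionSequence);
}

interface GenericDecoder {
    // Maps a generic answer template back to specific toponyms,
    // e.g., via SPARQL queries against a knowledge base (see Section 5).
    List<String> decode(List<String> genericAnswer, List<String> questionToponyms);
}
```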
In this paper, we discuss the results of the first five steps for question answering in a Web search scenario in detail. The last step is only demonstrated using examples.
### 2.2 Type, Scale and Prominence (TSP) Encoding
Based on the specific-generic translation, TSP encoding is proposed to
investigate where-questions constructed only with toponyms. The generic forms
which are used to investigate these questions and their answers are _type_ ,
_scale_ and _prominence_ of the toponyms. We first introduce our terminology
before discussing the details of the proposed TSP encoding method.
#### 2.2.1 Definitions
The investigated types of where-questions are defined as:
* •
Toponym-Based Where-Question (TWQ): A toponym-based where-question is a
geographical where-question that is generated completely using toponyms. For
example, _Where is Beverly Hills?_ is a toponym-based where-question, while
_Where is Clueless filmed?_ (without toponym) and _Where can I buy furniture?_
(affordance, buying furniture) do not belong to this type.
* •
Simple Where-Question (SWQ): Simple where-questions are a sub-class of TWQs that contain only one toponym in their body (e.g., _Where is Beverly Hills?_).
* •
Detailed Where-Question (DWQ): Detailed where-questions are a sub-class of TWQs with more than one toponym in their content (e.g., _Where is Beverly Hills, California?_). In DWQs, contextual details are provided in the content of the questions that show what the inquirer already knows – e.g., that Beverly Hills is located in California.
We use _type_ , _scale_ and _prominence_ , defined as:
* •
Type: A type (e.g., restaurant, mountain) is a reference to a group of places
with similar characteristics (e.g., affordance, or physical properties). Type
defines similar places and differentiates dissimilar ones, sometimes in a
hierarchical or taxonomical manner. Here, the relation between a specific
reference to a place (unambiguous toponym) and its type is considered as a
one-to-one relation.
* •
Scale: Scale is defined as a finite hierarchically-organized ordinal set of
levels grounded in the human understanding and differentiation of places based
on their size and their relationships (i.e., containment). The relation
between scale and an unambiguous toponym is considered as one-to-one. Due to
the specific context of the QA scenario, very fine levels of scale of
geographic entities, such as room-level or object-level, can be neglected
here, while in everyday human-human communication these levels of scale may
have a more important role.
* •
Prominence: Prominence is a measure of how well-known a place is. In this research, prominence is characterized by a finite ordinal set of levels. While the prominence of places is subjective and differs from person to person based on their experience, here prominence is considered an absolute and objective measure to rank places, established through a proxy measure defined later. This approach enables us to avoid individual experiential biases and is supported by the evidence of success in day-to-day communication, in which the absolute prominence evaluation is adapted between hearers and speakers.
Type, scale and prominence have been used to characterize place descriptions (Richter et al., 2013; Edwardes & Purves, 2007). These geographic concepts can capture different kinds of relationships among places, which in turn can be used to understand the relation between where-questions and their answers. For example, relationships between rivers and seas (flows to), and cities and countries (part of), can be captured using place type. Considering the relation between where-questions and their answers, containment (different levels) and nearness (the same level) can be captured through differences among scale levels. Finally, prominence is a measure to check whether the answer is interpretable by the inquirers – i.e., more prominent places are expected to be better known by inquirers.
Finally, aspects of human-generated answers which are investigated in this
paper are defined below:
* •
Content: The content of an answer is a collection of distinct information units that are presented to satisfy the inquirer’s information need. Content can be generic (e.g., type) or specific (e.g., toponym). Content is the most important aspect of an answer, in that the difference between correct and incorrect responses is based entirely on their content.
* •
Style: The style of an answer is the way the content is presented. Style directly influences the perception of naturalness of the response. Referring to the introductory example, “Putney Bridge is a bridge crossing of the River Thames in west London” and “Putney Bridge is a bridge in west London which goes over the River Thames” are two different styles of answers (with the same content) to the question. Here, the former is preferred by the responder.
#### 2.2.2 TSP Sequences
In TSP encoding, we use a sequence representation to model generic/specific
information in the questions and answers. A sequence is defined as an ordered
set of items (here, references to generic/specific geographic information). We
first model questions and their corresponding answers as sequences of toponyms
(specific representation). Then, these toponym sequences are encoded into
type, scale and prominence sequences by translating each specific toponym into
its corresponding generic type, scale and prominence reference. Referring to
the introductory example, the specific representations (toponym sequences) and
the encoded type sequences (an example of a generic sequence) of question and
answer are presented below:
* •
Toponym sequences: [Putney Bridge] → [River Thames, London]
* •
Type sequences: [bridge] → [river, city]
Here, the _content_ refers to the information items in the sequences, and
their order defines the _style_ in which the information is presented.
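The following is a minimal sketch of this encoding for the introductory example. The gazetteer lookups are replaced by a hard-coded map; the type/scale/prominence values are those listed for this example in Section 5, and everything else (class and method names) is illustrative.

```java
// Minimal sketch of TSP encoding for the introductory example. In the
// actual workflow, the triples come from Geonames and OSM Nominatim.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TspEncodingExample {
    // Type/scale/prominence triple of one toponym.
    record Tsp(String type, int scale, int prominence) {}

    // Values taken from the worked example in Section 5.
    static final Map<String, Tsp> GAZETTEER = Map.of(
        "Putney Bridge", new Tsp("BDG", 4, 3),
        "River Thames",  new Tsp("STM", 6, 6),
        "London",        new Tsp("ADM2", 6, 7));

    static List<String> typeSequence(List<String> toponyms) {
        return toponyms.stream().map(t -> GAZETTEER.get(t).type()).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(typeSequence(List.of("Putney Bridge")));          // [BDG]
        System.out.println(typeSequence(List.of("River Thames", "London"))); // [STM, ADM2]
    }
}
```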
## 3 Implementation
Figure 2 shows the proposed workflow to investigate TWQs and their answers (additional details of the implementation are presented in the supplementary material, Section 1). Here, we detail the dataset, extraction, encoding, generic patterns and prediction. A complete implementation of the proposed TSP encoding approach also includes decoding from generic to specific information. Here, the decoding step is demonstrated through examples, and a fully automated implementation remains out of the scope of this paper.
Figure 2: The proposed implementation approach
### 3.1 Data
The questions in MS MARCO v2.1 (Nguyen et al., 2016) are categorized into five categories using tags: (1) _numeric_, (2) _entity_, (3) _location_, (4) _person_, and (5) _description_ (Nguyen et al., 2016). Geographic questions can thus be easily extracted using the predefined _location_ tag. The dataset contains over one million records divided into _training_, _development_ and _testing_ subsets.
Each record in the dataset includes a _question_, _human-generated answer(s)_ (except for records in the _test_ dataset, where the answers are deliberately excluded), a _tag_, and a _set of documents_ retrieved by the Microsoft Bing search engine (more information about the dataset can be found at https://github.com/dfcf93/MSMARCO).
The ‘location questions’ in MS MARCO (56,721 question-answer pairs) include 36,939 geographic questions; the remainder are questions about fictional, mystic and other non-geographic places (Hamzei, Li, et al., 2019). Among the geographic questions, 13,195 pairs of questions and answers are geographic where-questions (Hamzei, Li, et al., 2019).
There are several reasons to choose MS MARCO for this study over other available datasets such as SQuAD (Rajpurkar et al., 2016):
* •
MS MARCO is the largest available QA dataset;
* •
The questions are labelled and geographic questions can be easily extracted;
* •
All questions are asked in a specific real-world scenario (i.e., Web search);
* •
Inquirers pose questions to resolve their information needs, while in some datasets such as SQuAD, questions are made from documents. In other words, questions in SQuAD are more about what a document can answer rather than what actual inquirers want to know.
* •
The answers are provided using an open form strategy. The answerers can
utilize suggested documents (one or more) and their own knowledge to answer a
question. Hence, the answers are not directly extracted as a single span of a
document.
### 3.2 Extraction
We first extract the questions labelled as _location_ and starting with a _where_-token. Next, the toponyms inside the questions and answers are identified using Named Entity Recognition (NER) and gazetteer lookups in both the OSM Nominatim and Geonames gazetteers. Here, the Stanford NLP toolkit is used to extract named entities (Finkel et al., 2005). In this step, if a compound noun phrase is tagged as location, the compound noun is first checked by gazetteer lookup; if it is not identified, then its constituent simple nouns are checked. If a compound or simple noun phrase is found in both gazetteers, it is stored as a toponym. For the extracted toponyms, we retain only records for which (1) the OSM and Geonames records have the same name, and (2) the Geonames point-based coordinates are inside the region-based representation of their corresponding OSM records.
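A minimal sketch of this retention check is given below, with the record layouts simplified (the OSM region is reduced to a bounding box) and all field names assumed for illustration.

```java
// Sketch of the toponym retention check: keep a candidate only if the
// OSM and Geonames records share the same name and the Geonames point
// falls inside the OSM region (here simplified to a bounding box).
public class GazetteerCrossCheck {
    record GeonamesRecord(String name, double lat, double lon) {}
    record OsmRecord(String name, double south, double north, double west, double east) {}

    static boolean keepToponym(GeonamesRecord gn, OsmRecord osm) {
        boolean sameName = gn.name().equalsIgnoreCase(osm.name());
        boolean inside = gn.lat() >= osm.south() && gn.lat() <= osm.north()
                      && gn.lon() >= osm.west() && gn.lon() <= osm.east();
        return sameName && inside;
    }

    public static void main(String[] args) {
        var gn = new GeonamesRecord("London", 51.5074, -0.1278);
        var osm = new OsmRecord("London", 51.28, 51.70, -0.52, 0.33);
        System.out.println(keepToponym(gn, osm)); // true
    }
}
```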
Toponym disambiguation is then undertaken based on the _minimum spatial context_ heuristic (Leidner et al., 2003). We use bounding boxes to determine which combination of the geographic locations satisfies the minimum spatial extent condition. In cases of duplicate places in GeoNames which lead to the same bounding boxes, the combination with more specific place types is selected. For example, populated place (PPL) is a place type in GeoNames which could be a village, city, state or even a country; hence, administrative divisions (e.g., state) are chosen over populated places. Finally, if toponym ambiguity still remains, we use the importance value to select the more prominent combination. A sketch of the core heuristic is given below.
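The sketch enumerates the combinations of candidate interpretations (one per toponym) and keeps the combination with the smallest joint bounding box; the tie-breaking by place type and importance described above is omitted for brevity, and the candidate layout is an assumption.

```java
// Sketch of the minimum-spatial-context heuristic over candidate
// interpretations of the toponyms in one question-answer pair.
import java.util.ArrayList;
import java.util.List;

public class MinimumSpatialContext {
    record Candidate(String toponym, double south, double north, double west, double east) {}

    // Area of the bounding box enclosing all candidates of one combination.
    static double jointArea(List<Candidate> combo) {
        double s = Double.MAX_VALUE, n = -Double.MAX_VALUE, w = Double.MAX_VALUE, e = -Double.MAX_VALUE;
        for (Candidate c : combo) {
            s = Math.min(s, c.south()); n = Math.max(n, c.north());
            w = Math.min(w, c.west());  e = Math.max(e, c.east());
        }
        return (n - s) * (e - w);
    }

    // Picks the combination (one candidate per toponym) minimizing the joint area.
    static List<Candidate> disambiguate(List<List<Candidate>> candidatesPerToponym) {
        List<Candidate> best = null;
        double bestArea = Double.MAX_VALUE;
        for (List<Candidate> combo : cartesian(candidatesPerToponym)) {
            double area = jointArea(combo);
            if (area < bestArea) { bestArea = area; best = combo; }
        }
        return best;
    }

    // Cartesian product of the per-toponym candidate lists.
    static List<List<Candidate>> cartesian(List<List<Candidate>> lists) {
        List<List<Candidate>> result = new ArrayList<>();
        result.add(new ArrayList<>());
        for (List<Candidate> options : lists) {
            List<List<Candidate>> next = new ArrayList<>();
            for (List<Candidate> partial : result)
                for (Candidate option : options) {
                    List<Candidate> extended = new ArrayList<>(partial);
                    extended.add(option);
                    next.add(extended);
                }
            result = next;
        }
        return result;
    }

    public static void main(String[] args) {
        var thamesUK = new Candidate("River Thames", 51.3, 51.8, -1.0, 0.5);
        var thamesCA = new Candidate("River Thames", 42.5, 43.3, -82.0, -80.9); // Ontario, Canada
        var london   = new Candidate("London", 51.3, 51.7, -0.5, 0.3);
        // The UK interpretation yields the smaller joint extent.
        System.out.println(disambiguate(List.of(List.of(thamesUK, thamesCA), List.of(london))).get(0));
    }
}
```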
More sophisticated heuristics for toponym disambiguation (e.g., Wang et al., 2010; Lieberman & Samet, 2012) are not used due to their reliance on significant assumptions – e.g., the relation between place types in the toponyms, or city-country relations. These heuristics constrain the relationships between type, scale and prominence of the resolved places in the text. This may impact the results of this study and lead to stronger associations based on type, scale and prominence between toponyms in questions and answers. Here, to present unbiased results, we avoid using these disambiguation methods.
### 3.3 Encoding
The gazetteers’ attributes of the extracted toponyms are used as proxies to capture the type, scale and prominence of the toponyms in questions and answers. Using these proxies, the sequence representations of each question-answer pair are encoded into TSP sequences.
Type: The Geonames type schema (https://www.geonames.org/export/codes.html) has been used without modification to encode generic place types. This schema contains 667 unique types of geographic features, covering both natural and man-made geographic types.
Scale: To capture scale, we have extended the schema of Richter et al. (2013). This schema contains seven levels of granularity: (1) furniture, (2) room, (3) building, (4) street, (5) district, (6) city, and (7) country. We have extended the coarse levels of granularity by adding _county_, _state_, _country_, and _continent_, and removing the _furniture_ and _room_ levels from the schema. OSM records include an attribute related to the OSM definition of scale (i.e., place_rank, https://wiki.openstreetmap.org/wiki/Nominatim/Development_overview), a number between 0 and 30. We convert the extracted gazetteer records into the appropriate scale level based on a look-up table that maps OSM scale levels to the proposed scale schema (see the supplementary material, Section 1.1); an illustrative sketch follows.
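The sketch assumes the level numbering that can be inferred from the examples in this paper (6 = city, 7 = county, 8 = state, 9 = country, matching the Q-6/A-7/A-8/A-9 values in Tables 5); the place_rank breakpoints below are placeholders, not the actual table from the supplementary material.

```java
// Illustrative mapping from OSM place_rank (0-30) to scale levels.
// Level numbering (3 = building ... 10 = continent) is inferred from the
// paper's examples; the breakpoints are assumptions for demonstration.
public class ScaleEncoder {
    static int scaleLevel(int placeRank) {
        if (placeRank <= 4)  return 10; // continent
        if (placeRank <= 9)  return 9;  // country
        if (placeRank <= 11) return 8;  // state
        if (placeRank <= 13) return 7;  // county
        if (placeRank <= 17) return 6;  // city
        if (placeRank <= 21) return 5;  // district
        if (placeRank <= 27) return 4;  // street
        return 3;                       // building
    }

    public static void main(String[] args) {
        System.out.println(scaleLevel(16)); // 6 (city)
    }
}
```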
Prominence: To capture prominence, the _importance_ attribute of the extracted OSM Nominatim record is used. The OSM importance value is estimated using the Wikipedia importance score (Thalhammer & Rettinger, 2016) with some minor tweaks (https://lists.openstreetmap.org/pipermail/geocoding/2013-August/000916.html). The value is defined between 0 and 1, and it is designed to be used for ranking search results. We translate these values into seven finite levels of prominence, derived by _natural breaks_ classification (Jenks, 1967) of the frequency spectrum of the values; a sketch of this mapping is given below.
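Once the breaks are derived, the translation reduces to a threshold classification. The sketch below shows the mechanics with placeholder break values; the actual breaks come from the Jenks method applied to the data and are not reproduced here.

```java
// Illustrative translation of OSM importance values (0-1) into seven
// prominence levels. The break values below are placeholders, not the
// Jenks natural breaks derived in the study.
import java.util.Arrays;

public class ProminenceEncoder {
    // Ascending upper bounds of levels 1..6; values above the last bound map to level 7.
    static final double[] BREAKS = {0.15, 0.30, 0.45, 0.60, 0.75, 0.90};

    static int prominenceLevel(double importance) {
        int i = Arrays.binarySearch(BREAKS, importance);
        int idx = (i >= 0) ? i : -i - 1;   // index of the first break >= importance
        return idx + 1;                    // levels 1..7
    }

    public static void main(String[] args) {
        System.out.println(prominenceLevel(0.05)); // 1
        System.out.println(prominenceLevel(0.95)); // 7
    }
}
```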
### 3.4 Distribution Analysis and Rule Mining
Distribution analysis and rule mining techniques are used to extract and investigate patterns in the human-generated answers and the relation between the questions and their answers. Distributions of type, scale and prominence sequences are used to compare the questions and answers. To derive patterns in the questions and their answers, association rule mining with the a-priori algorithm (Agrawal & Srikant, 1994) is used.
The strength of the extracted rules is evaluated using the standard measures – i.e., _support_, _confidence_, and _lift_. Support defines how frequently an association rule is observed in the whole dataset, and confidence determines how often the rule is true. Lift is a measure to evaluate the importance of a rule – i.e., a lift greater than one shows a positive and strong dependency among the elements of the extracted rule. This part of the method is devised to test the first and second sub-hypotheses. A minimal sketch of the three measures follows.
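For a rule X => Y over the encoded question-answer pairs (each pair treated as one transaction of Q-/A- items), the three measures can be computed as below; the transaction values in the example are illustrative.

```java
// Minimal sketch of support, confidence and lift for a rule X => Y.
import java.util.List;
import java.util.Set;

public class RuleMeasures {
    // Number of transactions containing all the given items.
    static double frequency(List<Set<String>> txns, Set<String> items) {
        return txns.stream().filter(t -> t.containsAll(items)).count();
    }

    static double support(List<Set<String>> txns, Set<String> x, Set<String> y) {
        Set<String> both = new java.util.HashSet<>(x); both.addAll(y);
        return frequency(txns, both) / txns.size();
    }

    static double confidence(List<Set<String>> txns, Set<String> x, Set<String> y) {
        Set<String> both = new java.util.HashSet<>(x); both.addAll(y);
        return frequency(txns, both) / frequency(txns, x);
    }

    // lift > 1 indicates a positive dependency between X and Y.
    static double lift(List<Set<String>> txns, Set<String> x, Set<String> y) {
        return confidence(txns, x, y) / (frequency(txns, y) / txns.size());
    }

    public static void main(String[] args) {
        var txns = List.of(Set.of("Q-ADM1", "A-PCLI"), Set.of("Q-ADM1", "A-ADM2"),
                           Set.of("Q-PPL", "A-PCLI"));
        System.out.printf("support=%.2f confidence=%.2f lift=%.2f%n",
            support(txns, Set.of("Q-ADM1"), Set.of("A-PCLI")),
            confidence(txns, Set.of("Q-ADM1"), Set.of("A-PCLI")),
            lift(txns, Set.of("Q-ADM1"), Set.of("A-PCLI")));
    }
}
```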
### 3.5 Prediction
The input for the prediction is the encoded sequence of a TWQ, and the output is the generic sequence of its corresponding answer. The problem can then be formulated as sequence prediction over the concatenated generic sequences of the questions and their answers, where a part of the sequence is known and the rest is predicted. Table 1 shows the sequence prediction methods used in this study. We used and extended an open-source toolkit for sequence analysis (Fournier-Viger et al., 2016) to implement the prediction methods.
These classic methods are divided into probabilistic (Cleary & Witten, 1984; Pitkow & Pirolli, 1999; Padmanabhan & Mogul, 1996) and non-probabilistic categories (Ziv & Lempel, 1978; Laird & Saul, 1994; Gueniche et al., 2013, 2015). The probabilistic methods are based on a graph representation of conditional probabilities (Cleary & Witten, 1984) or on the transition probability matrix of a Markov chain (Pitkow & Pirolli, 1999; Padmanabhan & Mogul, 1996) over the sequence elements. The non-probabilistic methods compress the sequences in a lossy (Ziv & Lempel, 1978; Laird & Saul, 1994) or lossless manner (Gueniche et al., 2013, 2015) into tree-based (Gueniche et al., 2013, 2015) or graph-based (Laird & Saul, 1994) data structures (for a review of sequence prediction methods see Tax et al. (2020)).
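As an illustration of the probabilistic family, the following is a didactic first-order Markov (Mark1-style) predictor: it counts transitions between consecutive symbols at training time and predicts the most frequent successor. This is a sketch of the idea only, not the toolkit implementation used in the study.

```java
// Minimal first-order Markov predictor over generic sequences.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Mark1Predictor {
    // transitions[a][b] = number of times symbol b followed symbol a.
    private final Map<String, Map<String, Integer>> transitions = new HashMap<>();

    void train(List<List<String>> sequences) {
        for (List<String> seq : sequences)
            for (int i = 0; i + 1 < seq.size(); i++)
                transitions.computeIfAbsent(seq.get(i), k -> new HashMap<>())
                           .merge(seq.get(i + 1), 1, Integer::sum);
    }

    // Predicts the most frequent successor of the last symbol, or null if unseen.
    String predictNext(List<String> prefix) {
        Map<String, Integer> successors = transitions.get(prefix.get(prefix.size() - 1));
        if (successors == null) return null;
        return successors.entrySet().stream()
                .max(Map.Entry.comparingByValue()).get().getKey();
    }

    public static void main(String[] args) {
        var m = new Mark1Predictor();
        // Concatenated question+answer type sequences, as described above.
        m.train(List.of(List.of("BDG", "STM", "ADM2"), List.of("BDG", "ADM2", "PCLI")));
        System.out.println(m.predictNext(List.of("BDG"))); // STM or ADM2 (tied counts)
    }
}
```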
The structure of a sequence and the relation of its prior elements to their succeeding elements are trained into a model. The model is then tested on an unseen part of the data using K-fold cross validation (K=10). We consider two baseline methods to evaluate the performance of the sequence prediction methods: (1) random sequence generation and (2) the most frequent pattern.
The random generation baseline only utilizes the schema of type, scale and prominence, without any information about the distributions of values in the answers. The most frequent patterns baseline predicts templates of answers using the schema and the distribution of generic references in the answers. The difference between the prediction performances of random generation and the most frequent patterns shows the impact of using the distribution of generic values in generating templates of answers (see Sub-hypothesis 1). The sequence prediction methods additionally consider the relation between generic values in the questions and their answers. Consequently, the improvement in generating the templates compared to the most frequent patterns baseline is attributable to the association between generic values of questions and their answers (Sub-hypothesis 2).
Table 1: Sequence prediction methods
Method | Publication | Year
---|---|---
Lempel-Ziv 1978 (LZ78) | (Ziv & Lempel, 1978) | 1978
First order Markov Chains (Mark1) | (Cleary & Witten, 1984) | 1984
Transition Directed Acyclic Graph (TDAG) | (Laird & Saul, 1994) | 1994
Dependency Graph (DG) | (Padmanabhan & Mogul, 1996) | 1996
All-k-Order Markov Chains (AKOM) | (Pitkow & Pirolli, 1999) | 1999
Compact Prediction Tree (CPT) | (Gueniche et al., 2013) | 2013
Compact Prediction Tree Plus (CPT+) | (Gueniche et al., 2015) | 2015
In prediction, each generic form of the questions is used to predict the same generic form of their answers. In addition, we have devised an approach to predict one of the generic forms of an answer using all generic forms (i.e., type, scale and prominence) of its corresponding question. Algorithm 1 shows the process of using all three type/scale/prominence sequences to predict a generic form of the answers in each generic class. Here, each combination of type, scale and prominence values is mapped to a unique code. Using these codes, a new sequence is generated for each question/answer to capture type, scale and prominence together. Next, these sequences are used to predict the generic form of the answers. Finally, a reverse mapping is used to decode these sequences into type, scale and prominence sequences.
Algorithm 1 Training and prediction based on type-scale-prominence together
1:procedure $\mathbf{TSP\\_Prediction}$($type$, $scale$, $prominence$)
2: generate a code for each unique combination of $type$-$scale$-$prominence$
($TSP$)
3: create encoded sequences based on generated $TSP$ codes
4: train a model to predict $TSP$ in answers based on $TSP$ in the questions
5: for every $question$ do
6: given a $question$ ($TSP$); predict the $answer$ ($TSP$)
7: decode the predicted $answer$ ($TSP$) to $answer$
($type$/$scale$/$prominence$)
8: if multiple predictions are allowed then
9: avoid counting duplicate decoded values for $type$/$scale$/$prominence$
10: end if
11: end for
12:end procedure
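A runnable sketch of the coding step in Algorithm 1 is given below: each unique (type, scale, prominence) triple is assigned an integer code, and a reverse table decodes predicted codes back into the three generic classes. The class and method names are illustrative.

```java
// Sketch of the joint TSP coding used in Algorithm 1.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TspCoding {
    record Tsp(String type, int scale, int prominence) {}

    private final Map<Tsp, Integer> toCode = new HashMap<>();
    private final List<Tsp> fromCode = new ArrayList<>();

    // Assigns a fresh code to each unseen triple; reuses codes for known ones.
    int encode(Tsp tsp) {
        return toCode.computeIfAbsent(tsp, t -> { fromCode.add(t); return fromCode.size() - 1; });
    }

    // Reverse mapping from a (possibly predicted) code back to the triple.
    Tsp decode(int code) { return fromCode.get(code); }

    public static void main(String[] args) {
        var coder = new TspCoding();
        // The introductory example's answer: River Thames, London.
        List<Integer> encoded = List.of(
            coder.encode(new Tsp("STM", 6, 6)),
            coder.encode(new Tsp("ADM2", 6, 7)));
        System.out.println(encoded);                      // [0, 1]
        System.out.println(coder.decode(encoded.get(1))); // Tsp[type=ADM2, scale=6, prominence=7]
    }
}
```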
## 4 Results
### 4.1 Extraction and Encoding
The assessment of toponym extraction, finding TWQs, and categorizing the questions into SWQs and DWQs is presented in Table 2. Here, the average precision and recall of the extraction results are calculated using manually annotated data (5% of TWQs and their answers). For the task of finding TWQs in the dataset, the _false negatives_ (TWQs that have not been extracted) are not investigated, hence the recall is unknown.
As shown in Table 2, 6,274 TWQs and their answers are found in the dataset.
The TWQs are approximately 11.1% of the _location questions_ of the dataset.
For evaluation, 5% of extracted TWQs (314 questions) are investigated and the
precision of extraction is 91.7% – i.e., 288 of 314 extracted questions are
TWQs. Using the 288 TWQs, the precision and recall of extracting toponyms and
classifying the questions to SWQs and DWQs are presented in Table 2.
Table 2: Extraction evaluation
Extraction | #Extracted | #Investigated | Precision | Recall
---|---|---|---|---
TWQs | 6274 | 314 (5%) | 91.7% (288 out of 314) | –
SWQs | 3285 | 121 out of 288 | 89.4% | 90.2%
DWQs | 2989 | 167 out of 288 | 92.7% | 92.1%
Toponyms | 22307 | 1133 (unique toponyms extracted from the sampled questions and answers) | 88.6% | 90.8%
Table 3 shows the number of question-answer pairs that are completely encoded into type, scale and prominence sequences. Here, if the information for even one place (mentioned either in the question or its answer) is missing, the question and its answer are not used to extract patterns or to test the predictability of generating the generic form of the answer. As shown in the table, the encoding into scale and prominence is not always possible due to the incompleteness of attribute information (i.e., _place_rank_ and _importance_) in OSM Nominatim.
Table 3: Encoding results
Encoding | #TWQs | #SWQs | #DWQs
---|---|---|---
Type sequences | 6,274 | 3,285 | 2,989
Scale sequences | 3,936 | 1,985 | 1,951
Prominence sequences | 6,051 | 3,098 | 2,953
### 4.2 Distributions
The distributions of TWQs and their answers based on type, scale and prominence are shown in Figures 3, 4 and 5 (a detailed comparison of SWQs and DWQs is presented in the supplementary material, Section 2). Figure 3 shows that the diversity of types in the questions is higher than in the answers. While administrative divisions are more frequent than other generic types in both questions and answers, they are more dominant in the answers.
Figure 3: Distribution of place types in the questions and in the answers.
Figure 4 shows that the scale in the answers is systematically one level coarser than in the questions. In addition, the distribution shows that city-level and state-level scales are frequently observed in the questions, while the answers mostly contain references at the county and country levels of scale. The results further show that the coarsest level of scale (i.e., the continent level) is rarely observed in the answers. This observation shows that an answer at the continent level would be under-specified in most cases, and therefore uninformative.
Figure 4: Distribution of levels of scale in all toponym-based where questions
and answers.
The distributions of prominence levels in questions and answers are similar to the distributions by scale (Figure 5). We observe a bi-modal distribution of prominence levels in the content of the questions. The distribution of prominence in the answers, however, shows that higher levels are dominant. In contrast to the distributions by scale, the most prominent level is dominant in the answers. Hence, people tend to refer to well-known places in their answers. Unlike with scale, the highest levels of prominence do not necessarily lead to obvious or irrelevant answers (a detailed analysis of sequence distributions is available in the supplementary material, Section 3).
Figure 5: Distribution of prominence levels in the questions versus answers
### 4.3 Extracted Rules
To test Sub-hypotheses 1 and 2 (see Section 1.2), we extract strong rules from the encoded pairs of questions and answers through association rule mining. The association rules extracted from the answers can be used to describe in detail how answers are constructed (Sub-hypothesis 1). The relationship between the content of the questions and their answers can thus also be further investigated (Sub-hypothesis 2).
Tables 4-6 show the top five extracted rules (based on _frequency_ /_support_)
for type, scale and prominence, respectively. In the tables, the values
starting with _Q-_ relate to the contents of the questions and the values
starting with _A-_ to the content of the corresponding answers. As shown in
the tables, some rules describe the structure of answers (e.g., {A-ADM1,
A-ADM2} =>{A-PCLI}) while the others describe the relationships between
questions and answers (e.g., {Q-ADM2} =>{A-PCLI}).
Table 4: Extracted rules from type sequences
Rank | rule | support | confidence | lift | frequency
---|---|---|---|---|---
Simple where-questions
1 | {A-ADM2} =>{A-ADM1} | 0.15 | 0.52 | 1.28 | 478
2 | {Q-ADM1} =>{A-PCLI} | 0.08 | 0.74 | 1.69 | 259
3 | {A-ADM1,A-ADM2} =>{A-PCLI} | 0.08 | 0.54 | 1.24 | 259
4 | {Q-ADM2} =>{A-PCLI} | 0.06 | 0.52 | 1.21 | 188
5 | {Q-PPL,A-ADM2} =>{A-PCLI} | 0.04 | 0.54 | 1.23 | 112
Detailed where-questions
1 | {Q-ADM1} =>{A-ADM2} | 0.57 | 0.76 | 1.12 | 1701
2 | {Q-ADM1} =>{A-PCLI} | 0.38 | 0.50 | 1.13 | 1126
3 | {A-PCLI} =>{A-ADM2} | 0.35 | 0.79 | 1.17 | 1053
4 | {A-ADM2,Q-ADM1} =>{A-PCLI} | 0.31 | 0.54 | 1.21 | 916
5 | {Q-PPL} =>{A-ADM2} | 0.22 | 0.78 | 1.15 | 656
Table 5: Extracted rules from scale sequences
Rank | Rule | support | confidence | lift | frequency
---|---|---|---|---|---
Simple where-questions
1 | {Q-6} =>{A-9} | 0.21 | 0.55 | 1.01 | 417
2 | {Q-6} =>{A-8} | 0.20 | 0.54 | 1.24 | 404
3 | {A-7} =>{A-9} | 0.16 | 0.56 | 1.73 | 307
4 | {Q-6} =>{A-7} | 0.15 | 0.54 | 1.37 | 295
5 | {A-7} =>{A-8} | 0.15 | 0.54 | 1.25 | 293
Detailed where-questions
1 | {Q-8} =>{A-7} | 0.65 | 0.80 | 1.08 | 1277
2 | {Q-6} =>{Q-8} | 0.49 | 0.81 | 0.99 | 952
3 | {Q-6} =>{A-7} | 0.48 | 0.80 | 1.07 | 940
4 | {A-9} =>{Q-8} | 0.45 | 0.87 | 1.07 | 887
5 | {A-7,Q-6} =>{Q-8} | 0.42 | 0.88 | 1.07 | 823
Table 6: Extracted rules from prominence sequences
Rank | Rule | support | confidence | lift | frequency
---|---|---|---|---|---
Simple where-questions
1 | {A-4} =>{A-7} | 0.14 | 0.54 | 1.09 | 425
2 | {A-5} =>{A-7} | 0.13 | 0.50 | 1.02 | 417
3 | {Q-3} =>{A-7} | 0.12 | 0.52 | 1.05 | 382
4 | {Q-6} =>{A-7} | 0.08 | 0.58 | 1.18 | 260
5 | {Q-4} =>{A-7} | 0.08 | 0.53 | 1.07 | 250
Detailed where-questions
1 | {Q-6} =>{A-4} | 0.32 | 0.56 | 1.19 | 957
2 | {Q-6} =>{A-7} | 0.30 | 0.51 | 1.06 | 884
3 | {A-4} =>{A-7} | 0.25 | 0.54 | 1.11 | 742
4 | {Q-3} =>{Q-6} | 0.24 | 0.54 | 0.93 | 695
5 | {Q-3} =>{A-7} | 0.22 | 0.51 | 1.04 | 650
Table 4 shows the dominant role of administrative divisions in the human-generated answers. The association rules extracted based on scale (Table 5) show that answers refer to scale levels coarser than those in SWQs, and between the levels mentioned in DWQs. The top five patterns of answers are mostly constructed with references to the highest level of prominence (_A-7_). This shows the major impact of prominence on human answering behavior for where-questions – i.e., people refer to prominent places when answering where-questions.
Tables 4, 5 and 6 show that stronger association rules with higher support are extracted from DWQs than from SWQs. The rules show strong associations between the antecedent and consequent parts of the extracted rules, with lift values greater than one. The results show that stronger rules with higher confidence and support are extracted using scale than using type or prominence. The tables only present the extracted rules with the highest frequency and support; these show how a small set of generic rules describes a large proportion of the data in the MS MARCO dataset. Sorting the rules by confidence or lift would change their order. For example, the maximum lift (equal to 8.93) among the extracted rules belongs to {Q-6, Q-9} =>{A-8} for detailed where-questions using scale. The frequency of this rule is 43, and it describes the relevant scale level (between the minimum and maximum levels of the question) for detailed where-questions. The maximum confidence is 0.93, for detailed where-questions encoded by type. This association rule is {Q-PPLA2, Q-ADM1} =>{A-ADM2}, with a frequency of 109. This rule shows that populated places in detailed where-questions are mostly localized by referring to the counties they belong to.
### 4.4 Predicting the Generic Form of Answers
We test the predictability of the generic sequence of an answer given the
generic sequence of the corresponding question. We investigate different
prediction scenarios, including (1) the same generic class prediction (e.g.,
predicting type sequence of answers using type sequence of questions), and (2)
prediction of one generic class using all generic classes (e.g., predicting
type sequence of answers using type/scale/prominence sequences of questions,
see Algorithm 1).
We assess the prediction accuracy for the content and the content-and-style of the answers (defined in Section 2.2.1). Referring to the introductory example, if the type sequence of the answer is predicted as [river, city], it is counted as a correct prediction for both content and content-and-style. The other permutation of this sequence (i.e., [city, river]) is considered a correct prediction of content but an incorrect prediction of content-and-style. Evidently, any other type sequence is an incorrect prediction in both scenarios.
Each prediction scenario is applied over all questions, SWQs and DWQs to investigate the impact of question type on the prediction accuracy. Each scenario is tested using all seven sequence prediction methods and is compared with the two baseline approaches (i.e., random generation and most frequent patterns). Only the best prediction performance among the seven sequence prediction methods is presented, i.e., the maximum prediction accuracy achieved by any of the methods for a prediction scenario. We also test prediction accuracy when multiple predictions are allowed – i.e., top-k predictions for $k$ from one to five. In top-k prediction, $k$ unique sequences are predicted for each answer, and if one of the sequences matches the generic form of the answer, the prediction is successful. A sketch of the matching criteria follows.
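The sketch below renders the two matching criteria and the top-k rule: a content match compares the multisets of generic items, a content-and-style match compares the exact sequences, and a top-k prediction counts as correct if any of its k sequences matches.

```java
// Sketch of the content vs. content-and-style matching criteria.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AnswerMatching {
    // Multiset (item -> count) representation of a sequence.
    static Map<String, Integer> multiset(List<String> seq) {
        Map<String, Integer> m = new HashMap<>();
        seq.forEach(s -> m.merge(s, 1, Integer::sum));
        return m;
    }

    // Content match: same items regardless of order.
    static boolean contentMatch(List<String> predicted, List<String> truth) {
        return multiset(predicted).equals(multiset(truth));
    }

    // Content-and-style match: same items in the same order.
    static boolean contentAndStyleMatch(List<String> predicted, List<String> truth) {
        return predicted.equals(truth);
    }

    // Top-k: correct if any of the k predicted sequences matches.
    static boolean topK(List<List<String>> predictions, List<String> truth, boolean style) {
        return predictions.stream().anyMatch(p ->
            style ? contentAndStyleMatch(p, truth) : contentMatch(p, truth));
    }

    public static void main(String[] args) {
        List<String> truth = List.of("river", "city");
        System.out.println(contentMatch(List.of("city", "river"), truth));         // true
        System.out.println(contentAndStyleMatch(List.of("city", "river"), truth)); // false
    }
}
```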
Table 7 shows the best performances in predicting the type sequences of answers. The prediction accuracy based on TSP sequences is noticeably higher than that of predictions using only type sequences. This shows the complementary role of scale and prominence in predicting the type sequence of the answers. Contrasting DWQs and SWQs shows that the extra details in DWQs are useful for predicting the generic form of answers. In addition, we observe how subjectivity in the style of answers and the flexibility of language to convey information lead to noticeably lower accuracy in predicting the content-and-style of answers compared to predicting content alone. This observation is related to the flexibility of natural language, in which the same meaning can be presented in different ways. Finally, the number of predictions ($k$ in the table) shows that the accuracy increases dramatically when multiple predictions are allowed.
Table 7: Prediction accuracy for type sequences
#Predictions (k) | Content | Content and Style
---|---|---
| Type $\rightarrow$ Type | TSP $\rightarrow$ Type | Type $\rightarrow$ Type | TSP $\rightarrow$ Type
All questions
1 | 45.2 | 55.7 | 29.0 | 40.7
2 | 68.9 | 77.1 | 44.6 | 60.5
3 | 80.2 | 83.3 | 57.8 | 73.3
4 | 83.6 | 84.7 | 64.0 | 76.1
5 | 84.4 | 85.5 | 68.3 | 77.4
Simple where-questions
1 | 39.5 | 47.5 | 14.2 | 27.4
2 | 60.8 | 69.4 | 32.7 | 48.5
3 | 73.2 | 75.8 | 48.2 | 63.4
4 | 77.2 | 77.5 | 58.1 | 66.2
5 | 78.5 | 78.2 | 63.1 | 67.0
Detailed where-questions
1 | 59.1 | 67.3 | 47.1 | 59.6
2 | 80.4 | 88.7 | 61.3 | 76.3
3 | 84.0 | 91.2 | 65.9 | 84.4
4 | 88.0 | 91.3 | 73.6 | 86.4
5 | 88.5 | 92.1 | 75.6 | 87.1
Tables 8 and 9 show that, compared to type sequence prediction, the TSP sequences contribute less effectively to predicting the prominence and scale sequences – i.e., they only slightly improve the prediction accuracy. When considering multiple predictions, TSP sequences even lead to worse results than prominence sequences or scale sequences alone. This can be explained by overfitting to specific patterns in the training dataset: the schema of types is more than 20 times larger than the scale and prominence schemas, so using type in the prediction of scale or prominence leads to very detailed patterns that do not generalize and that decrease the prediction accuracy on unseen data. Finally, scale is the most predictable, and prominence the least predictable, generic class. Similar to the observations based on the type prediction performances, DWQs are more predictable than SWQs based on scale and prominence.
Table 8: Prediction accuracy for scale sequences
#Predictions (k) | Content | Content and Style
---|---|---
| Scale $\rightarrow$ Scale | TSP $\rightarrow$ Scale | Scale $\rightarrow$ Scale | TSP $\rightarrow$ Scale
All questions
1 | 55.0 | 56.7 | 38.2 | 42.2
2 | 79.4 | 79.2 | 61.0 | 62.8
3 | 91.6 | 86.1 | 79.0 | 76.0
4 | 96.3 | 88.7 | 92.0 | 81.9
5 | 98.0 | 89.3 | 96.0 | 83.5
Simple where-questions
1 | 48.5 | 49.5 | 20.4 | 28.6
2 | 79.6 | 71.8 | 49.1 | 49.8
3 | 89.9 | 78.3 | 71.9 | 67.0
4 | 95.6 | 81.8 | 90.3 | 74.0
5 | 97.5 | 82.6 | 94.9 | 75.5
Detailed where-questions
1 | 69.6 | 68.2 | 59.8 | 60.6
2 | 88.4 | 89.6 | 78.4 | 77.3
3 | 95.8 | 93.3 | 88.6 | 87.1
4 | 97.5 | 95.2 | 94.8 | 92.1
5 | 98.6 | 95.2 | 97.0 | 92.7
Table 9: Prediction accuracy of prominence sequences
#Predictions (k) | Content | Content and Style
---|---|---
| Prominence $\rightarrow$ Prominence | TSP $\rightarrow$ Prominence | Prominence $\rightarrow$ Prominence | TSP $\rightarrow$ Prominence
All questions
1 | 50.8 | 53.0 | 19.9 | 30.7
2 | 74.1 | 73.4 | 39.2 | 49.1
3 | 85.0 | 81.9 | 61.6 | 66.4
4 | 92.1 | 86.7 | 79.2 | 77.1
5 | 96.1 | 88.6 | 89.2 | 81.8
Simple where-questions
1 | 45.4 | 45.6 | 14.3 | 19.5
2 | 75.4 | 69.4 | 34.9 | 39.1
3 | 84.7 | 77.0 | 54.5 | 56.9
4 | 91.3 | 80.5 | 73.7 | 68.2
5 | 95.6 | 81.9 | 87.9 | 72.7
Detailed where-questions
1 | 53.3 | 58.2 | 26.9 | 43.8
2 | 75.1 | 80.0 | 50.9 | 60.4
3 | 86.0 | 88.9 | 70.9 | 78.4
4 | 93.1 | 93.0 | 82.5 | 87.0
5 | 96.8 | 95.4 | 91.9 | 92.6
Table 10 shows the improvement in accuracy of the best prediction performances compared to the two baselines – i.e., the random generator and the most frequent pattern(s). The minimum improvement is +18.3%, for the prediction of type sequences of answers using type sequences of questions in comparison to the most frequent pattern(s). This observation shows that strong patterns exist in the distributions of answers, and consequently this baseline method performs well in predicting type sequences of answers. The strongest improvement is +61.6%, when comparing the best predictive performance for type sequences using type/scale/prominence sequences together against the random baseline. This is because the large number of distinct types in the type schema leads to false predictions by the random baseline. The accuracy improvements illustrate the strong relationship between the generic content of questions and the generic content of their answers.
Table 10: Accuracy improvement using sequence prediction compared to the
baselines
Prediction Scenario | Random | Most Frequent Pattern(s)
---|---|---
Type $\rightarrow$ Type | +48.9% | +18.3%
Scale $\rightarrow$ Scale | +58.1% | +27.6%
Prominence $\rightarrow$ Prominence | +39.2% | +30.4%
TSP $\rightarrow$ Type | +61.6% | +31.0%
TSP $\rightarrow$ Scale | +54.1% | +23.6%
TSP $\rightarrow$ Prominence | +42.3% | +33.5%
Overall | +50.7% | +27.4%
To compare the sequence prediction methods, we use the difference between the prediction accuracy of each method and the best performance achieved by any method for each prediction scenario. Table 11 shows the root mean square error (RMSE) of these differences for each sequence prediction method. The RMSE shows how well a method performs in comparison to the other methods: if the RMSE of a method is lower than the others’, its prediction accuracy is higher. The prediction scenarios in Table 11 are simplified groups of the actual predictions. For example, the scale prediction scenario covers predicting the scale sequences of answers using (1) the scale sequences of questions or (2) the type/scale/prominence sequences of questions. A sketch of the RMSE computation follows.
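The comparison metric can be reproduced in a few lines: for each scenario, a method's deviation is its gap to the best accuracy any method achieved on that scenario, and the RMSE aggregates these gaps. The accuracy values in the example are illustrative, not taken from Table 11.

```java
// Sketch of the RMSE-to-best comparison used in Table 11.
public class RmseToBest {
    // accuracies[s][m]: accuracy of method m on scenario s (in %).
    static double rmseForMethod(double[][] accuracies, int method) {
        double sum = 0;
        for (double[] scenario : accuracies) {
            double best = 0;
            for (double a : scenario) best = Math.max(best, a);
            double gap = best - scenario[method];
            sum += gap * gap;
        }
        return Math.sqrt(sum / accuracies.length);
    }

    public static void main(String[] args) {
        // Two illustrative scenarios with three methods each.
        double[][] acc = { {45.2, 55.7, 50.0}, {55.0, 56.7, 40.0} };
        System.out.printf("RMSE of method 2: %.1f%%%n", rmseForMethod(acc, 2)); // ~12.5%
    }
}
```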
As shown in Table 11, the CPT method performs best in all scenarios and TDAG performs worst based on the RMSE values. The results suggest that CPT is the best method for constructing predictive models to predict the generic form of answers.
Table 11: RMSE of sequence prediction methods
Prediction Scenario | LZ78 | Mark1 | TDAG | DG | AKOM | CPT | CPT+
---|---|---|---|---|---|---|---
Type | 7.4% | 15.2% | 21.8% | 13.4% | 17.3% | 7.1% | 12.9%
Scale | 9.8% | 12.5% | 17.9% | 10.6% | 14.3% | 5.7% | 11.8%
Prominence | 8.7% | 13.3% | 19.2% | 9.9% | 15.2% | 4.9% | 11.5%
Content | 8.9% | 15.5% | 22.7% | 9.1% | 17.4% | 1.9% | 10.2%
Content and Style | 8.6% | 11.7% | 16.1% | 13.2% | 13.7% | 8.2% | 13.8%
## 5 Demonstration: From Generic to Specific
Translating generic encoding of answer to specific form (e.g., type sequence
to toponym sequence) is the last phase in the proposed approach. Our approach
to the generic-to-specific translation problem is grounded in the following
assumption: _places mentioned in the questions have relationships to places
referred to in their answers, and these relations can be found in a knowledge
base_. In addition, the specific form of questions and generic form of answers
are available through encoding and prediction, respectively. Based on this
assumption and the available information, the specific form of answer can be
derived using a SPARQL query template (Query 1). While the _structure_ of a
suitable knowledge base for this purpose has been studied before by Chen .
(2018), no such knowledge base is yet available with the definitions of type,
scale and prominence as used in this study. Hence, the translation is only
demonstrated here using the introductory example999More examples are provided
in the supplementary material (Section 4).
We have used DBPedia and Geonames as sources to demonstrate how SPARQL queries can be used to find the specific forms of answers. Considering the information stored in DBPedia and Geonames, this demonstration is limited to the type sequences of the answers, because prominence and scale are not available in the place ontologies of these knowledge bases. Moreover, the type schema used in DBPedia differs from the Geonames type schema; consequently, in the following example, the mapping to the DBPedia type schema is done manually.
PREFIX [KNOWLEDGE BASE]
SELECT distinct ?question ?answer
WHERE {
VALUES ?question [SPECIFIC] .
?answer a [GENERIC] .
{?question ?r ?answer} UNION {?answer ?r ?question} .
}
Query 1: SPARQL template
Referring to the introductory example, the where-question and its answer are modelled as follows:
* •
specific representation (question): [Putney Bridge];
* •
TSP encoding (question): type sequence [BDG], scale sequence [4], prominence
sequence [3];
* •
TSP encoding (answer): type sequence [STM, ADM2], scale sequence [6, 6],
prominence sequence [6, 7];
* •
specific representation (answer): [River Thames, London]
The SPARQL queries for finding the specific forms of answers are presented in Queries 2 and 3, using the DBPedia and Geonames ontologies. The results of these queries are shown in Table 12. Using DBPedia, the generic forms are correctly translated into River Thames and London. However, the generic-to-specific translation using Geonames is only partially successful: in Geonames, places are conceptualized as points, and only containment relations are supported. This example shows that a point-based conceptualization of places is not sufficient for generic-to-specific translation, and that more diverse support of spatial relationships can be useful to find the correct specific forms.
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT distinct ?q1 ?a1 ?a2 WHERE {
VALUES ?q1 {<http://dbpedia.org/resource/Putney_Bridge>}
?a1 a dbo:PopulatedPlace .
{?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} .
?a2 a dbo:River .
{?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} .
}
Query 2: SPARQL query of the example (DBPedia)
PREFIX gn: <http://www.geonames.org/ontology#>
SELECT distinct ?q1 ?a1 ?a2 WHERE {
VALUES ?q1 {<http://sws.geonames.org/6619925/>}
?a1 gn:featureCode gn:A.ADM2 .
{?a1 ?r1 ?q1} UNION {?q1 ?r1 ?a1} .
?a2 gn:featureCode gn:H.STM .
{?a2 ?r2 ?q1} UNION {?q1 ?r2 ?a2} .
}
Query 3: SPARQL query of the example (Geonames)
Table 12: SPARQL results to find the specific form of the answer
Knowledge Base | Q1 | A1 | A2
---|---|---|---
DBPedia | Putney Bridge | London | River Thames
Geonames | Putney Bridge | London | –
## 6 Discussion
The results of the proposed method show how generic information can be used to characterize and imitate human answering behavior, and to generate templates for answering where-questions. While the results are limited to human-search engine interaction, the proposed methodology (specific-generic translation) is defined flexibly enough to be applicable to other QA scenarios as well.
We have used type, scale and prominence as generic classes to investigate the MS MARCO dataset. We have compared their potential for describing human answering behavior and their performance in predicting the generic forms of the answers. As a result, two major observations are reported.
First, while strong patterns for each generic class have been observed, we find that scale is the most predictive class. This is because where-questions are a specific subset of spatial questions, and scale directly captures the inherent spatial aspect of places, whereas the notions of type and prominence also reflect other aspects of places – e.g., functional and physical aspects. In addition, scale is a generic class that captures hierarchical relationships between places, and previous studies show that these relationships are the basis for answering where-questions (Shanon, 1983). Moreover, we have observed that type performs better than prominence in both characterizing and predicting human-generated answers. This observation is strongly influenced by the proxies used to capture type and prominence.
Second, when comparing SWQs and DWQs, our investigation shows that generic templates for answering DWQs can be generated more accurately than for SWQs. We find stronger rules and patterns in the answers to DWQs than in answers to SWQs. This is because DWQs contain richer details, which helps narrow down the list of possible relevant answers. To illustrate this point, consider two examples: (1) _Where in California is Beverly Hills?_ and (2) _Where is Beverly Hills?_ In the first question, the list of possible relevant answers is narrowed down to _Los Angeles County_ because the inquirer already knows it is in _California_. For the latter, respondents are free to subjectively guess the state of the inquirer’s geographic knowledge and provide answers such as _Los Angeles County_, _California_, or _United States_.
Theoretical limitations: The specific-generic translation approach is devised in a flexible manner to be usable in different GeoQA scenarios. However, utilizing the approach needs a careful design (e.g., selecting an appropriate list of generic classes) to fit a particular scenario. The proposed TSP encoding is limited to the QA scenario of general Web search and may not be suitable for other QA scenarios, such as human interaction with autonomous cars. In short, the theoretical limitations of this study are:
1. 1.
The specific-generic translation approach is only focused on where-questions, and other types of geographic questions are neglected.
2. 2.
The proposed approach is focused only on the questions and their relationship
with the answers, when no other contextual information about inquirers is
available.
3. 3.
The approach is designed with an exclusive focus on toponyms, while qualitative spatial relationships also have an important role in answering where-questions.
4. 4.
The additional impacts of qualitative spatial relationships (e.g., _in
southern part of_) as modifiers of scale are neglected in the TSP encoding.
Results limitations: There are some limitations to the implementation
presented in this study:
1. 1.
The biases of the MS MARCO dataset directly influence our results. The data are extracted from the Microsoft Bing search engine, and hence the results are necessarily biased towards the questions asked by users of this search engine. In addition, the sampling approach used when extracting MS MARCO questions from the MS Bing query logs may have a direct and unquantifiable impact on the generality of the results.
2. 2.
The results are influenced by the geographic biases and the incompleteness of data in Geonames and OSM Nominatim. The bias and incompleteness of gazetteers are well documented by Acheson et al. (2017).
3. 3.
The biases in the proxies that have been used to capture the TSP encoding also have an impact on the results.
Despite these limitations, the identified patterns align well with everyday
experience and provide a grounding for answering where-questions.
## 7 Conclusions
Generating responses with a similar quality to human-generated answers is a
challenge to current search engines and QA systems. In particular, where-
questions are hard to answer because the responses can sometimes be either
vague or obvious to the inquirers. To avoid generating ambiguous or obvious
responses or retrieving unnecessary information as a part of the answers, a
proper set of anchor places must be identified to localize the place in
question. The assumption that answers to where-questions can be found
completely, without any further modification, inside a textual document or as
a node or its properties in a knowledge base may not hold in general.
Consequently, we introduced here an approach to generate templates to answer
where-questions based on relevant pieces of information.
The approach is based on the automatic extraction of patterns of generic geographic forms from human-generated QA pairs. These patterns are captured in predictive models and are used to generate templates of answers similar to human-generated responses. Three generic classes (i.e., type, scale and prominence) are used to investigate the properties of the anchor places in human-generated answers.
We have used questions and answers from MS MARCO v2.1, an extensive dataset constructed from questions submitted to a general-purpose search engine. Using distribution analysis and rule mining techniques, we have identified the characteristics of and recurrent patterns in the questions and their answers (Sub-hypotheses 1 and 2). We have then applied sequence prediction methods to generate the generic forms of answers based on the generic forms of the corresponding questions (Sub-hypothesis 3). We have also briefly sketched an approach for how such generic forms may help with the generation of appropriate answers, based on the information available in knowledge bases.
The results show that the prediction of answer structures based on scale is more precise than predictions relying on type and prominence. The rules extracted based on scale have higher support and confidence than the rules extracted from type or prominence. We also observe that the type of question (i.e., SWQs vs. DWQs) influences the strength of the extracted rules and leads to noticeable differences in prediction performance. Finally, we compared different sequence prediction methods and found that CPT (Gueniche et al., 2013) is the best performing approach in all scenarios. However, the results of this study are limited to human interaction with a general-purpose search engine. Consequently, an important future direction of this research is to investigate other corpora of QA related to different scenarios – e.g., human-human dialogue.
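For intuition only, the toy predictor below mimics the idea behind CPT (Gueniche et al., 2013): match the query against stored training sequences and let the symbols that follow the match vote for the next element. It omits the compact trie and inverted index of the real algorithm, and the sequences of generic form labels are invented:

```python
from collections import Counter

# Invented training sequences of generic form labels (not MS MARCO data).
training = [
    ["type", "scale", "scale"],
    ["type", "scale", "prominence"],
    ["type", "type", "scale", "scale"],
]

def predict_next(query):
    votes = Counter()
    for seq in training:
        i, end = 0, -1
        for j, s in enumerate(seq):          # match query as a subsequence
            if i < len(query) and s == query[i]:
                i, end = i + 1, j
        if i == len(query) and end + 1 < len(seq):
            votes[seq[end + 1]] += 1         # the following symbol votes
    return votes.most_common(1)[0][0] if votes else None

print(predict_next(["type", "scale"]))       # -> 'scale' (2 votes vs 1)
```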
We have also observed that the neglect of qualitative spatial relationships in our encoding and prediction mechanism may present a major theoretical shortcoming of the proposed specific-generic translation. Consequently, developing a more sophisticated encoding is necessary to extract a deeper understanding of how people answer where-questions.
Developing an automatic approach to decode generic forms of answers into specific representations (i.e., toponyms) is a necessary step to complete the specific-generic translation approach. Available information in documents or knowledge bases can be used to derive the specific representations. Another important future direction is to investigate how the proposed approach can be combined with current personalization methods, in order to adapt answers to specific inquirers and their context. Finally, the investigation of other types of where-questions (i.e., where-questions with generic references) and their human-generated answers using specific-generic translation remains future work.
## 8 Data and Codes Availability Statement
This study makes use of a third-party data source, MS MARCO v2.1 (Nguyen et al., 2016). The dataset is freely available under a proprietary agreement for non-commercial use (http://www.msmarco.org/dataset.aspx). The computational workflow of this publication is implemented in Java and R. The implementation is available under the MIT License (https://opensource.org/licenses/MIT) and accessible on GitHub: https://github.com/hamzeiehsan/Template-for-answering-where-questions.
## Acknowledgments
The support by the Australian Research Council grant DP170100109 is acknowledged. We also thank the anonymous reviewers for their helpful comments that improved the quality of this paper.
## References
* Acheson, E., De Sabbata, S., & Purves, R. S. (2017). A quantitative analysis of global gazetteers: Patterns of coverage for common feature types. Computers, Environment and Urban Systems, 64, 309–320. https://doi.org/10.1016/j.compenvurbsys.2017.03.007
* Agrawal, R., & Srikant, R. (1994). Fast algorithms for mining association rules in large databases. In Proceedings of the 20th International Conference on Very Large Data Bases (pp. 487–499). San Francisco, CA, USA: Morgan Kaufmann Publishers Inc.
* Ballatore, A. (2019). A context frame for interactive maps. In AGILE Conference on Geographical Information Science: Short Papers (pp. 1–5).
* Buscaldi, D., Benajiba, Y., Rosso, P., & Sanchis, E. (2006). The UPV at QA@CLEF 2006. In CLEF (Working Notes).
* Chen, H., Vasardani, M., Winter, S., & Tomko, M. (2018). A graph database model for knowledge extracted from place descriptions. ISPRS International Journal of Geo-Information, 7(6), 221. https://doi.org/10.3390/ijgi7060221
* Chen, W., Fosler-Lussier, E., Xiao, N., Raje, S., Ramnath, R., & Sui, D. (2013). A synergistic framework for geographic question answering. In Proceedings of the IEEE 7th International Conference on Semantic Computing (pp. 94–99).
* Church, K., Neumann, J., Cherubini, M., & Oliver, N. (2010). The “map trap”? An evaluation of map versus text-based interfaces for location-based mobile search services. In Proceedings of the 19th International Conference on World Wide Web (pp. 261–270). New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/1772690.1772718
* Cleary, J., & Witten, I. (1984). Data compression using adaptive coding and partial string matching. IEEE Transactions on Communications, 32(4), 396–402.
* Couclelis, H., Golledge, R., Gale, N., & Tobler, W. (1987). Exploring the anchor-point hypothesis of spatial cognition. Journal of Environmental Psychology, 7(2), 99–122. https://doi.org/10.1016/S0272-4944(87)80020-8
* Edwardes, A. J., & Purves, R. S. (2007). A theoretical grounding for semantic descriptions of place. In J. M. Ware & G. E. Taylor (Eds.), Web and Wireless Geographical Information Systems (pp. 106–120). Springer Berlin Heidelberg.
* Ferrés, D., & Rodríguez, H. (2006). Experiments adapting an open-domain question answering system to the geographical domain using scope-based resources. In Proceedings of the Workshop on Multilingual Question Answering (pp. 69–76). Stroudsburg, PA, USA: Association for Computational Linguistics.
* Ferrés, D., & Rodríguez, H. (2010). TALP at GikiCLEF 2009. In C. Peters et al. (Eds.), Multilingual Information Access Evaluation I: Text Retrieval Experiments (pp. 322–325). Berlin, Heidelberg: Springer Berlin Heidelberg.
* Finkel, J. R., Grenager, T., & Manning, C. D. (2005). Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05) (pp. 363–370).
* Fournier-Viger, P., Lin, J. C.-W., Gomariz, A., Gueniche, T., Soltani, A., Deng, Z., & Tresp, V. (2016). The SPMF open-source data mining library version 2. In Machine Learning and Knowledge Discovery in Databases (pp. 36–40). Cham: Springer International Publishing.
* Gueniche, T., Fournier-Viger, P., Raman, R., & Tseng, V. S. (2015). CPT+: Decreasing the time/space complexity of the compact prediction tree. In Pacific-Asia Conference on Knowledge Discovery and Data Mining (pp. 625–636).
* Gueniche, T., Fournier-Viger, P., & Tseng, V. S. (2013). Compact prediction tree: A lossless model for accurate sequence prediction. In International Conference on Advanced Data Mining and Applications (pp. 177–188).
* Hamzei, E., Li, H., Vasardani, M., Baldwin, T., Winter, S., & Tomko, M. (2019). Place questions and human-generated answers: A data analysis approach. In Geospatial Technologies for Local and Regional Development (pp. 1–16).
* Hamzei, E., Winter, S., & Tomko, M. (2019). Initial analysis of simple where-questions and human-generated answers. In Proceedings of Short Papers at the 14th International Conference on Spatial Information Theory. Dagstuhl, Germany: Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik.
* Jenks, G. F. (1967). The data model concept in statistical mapping. International Yearbook of Cartography, 7, 186–190.
* Jiang, B., & Yao, X. (2006). Location-based services and GIS in perspective. Computers, Environment and Urban Systems, 30(6), 712–725. https://doi.org/10.1016/j.compenvurbsys.2006.02.003
* Laird, P., & Saul, R. (1994). Discrete sequence prediction and its applications. Machine Learning, 15(1), 43–68.
* Leidner, J. L., Sinclair, G., & Webber, B. (2003). Grounding spatial named entities for information extraction and question answering. In Proceedings of the HLT-NAACL 2003 Workshop on Analysis of Geographic References, Volume 1 (pp. 31–38). Stroudsburg, PA, USA: Association for Computational Linguistics. https://doi.org/10.3115/1119394.1119399
* Lieberman, M. D., & Samet, H. (2012). Adaptive context features for toponym resolution in streaming news. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 731–740). New York, NY, USA: ACM. https://doi.org/10.1145/2348283.2348381
* Luque, J., Ferrés, D., Hernando, J., Mariño, J. B., & Rodríguez, H. (2006). GEOVAQA: A voice activated geographical question answering system. In IV Jornadas en Tecnologia del Habla (pp. 309–314).
* Mai, G., Janowicz, K., He, C., Liu, S., & Lao, N. (2018). POIReviewQA: A semantically enriched POI retrieval and question answering dataset. In Proceedings of the 12th Workshop on Geographic Information Retrieval. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3281354.3281359
* Mai, G., Yan, B., Janowicz, K., & Zhu, R. (2020). Relaxing unanswerable geographic questions using a spatially explicit knowledge graph embedding model. In P. Kyriakidis, D. Hadjimitsis, D. Skarlatos, & A. Mansourian (Eds.), Geospatial Technologies for Local and Regional Development (pp. 21–39). Cham: Springer International Publishing.
* Mishra, A., Mishra, N., & Agrawal, A. (2010). Context-aware restricted geographical domain question answering system. In Proceedings of the IEEE International Conference on Computational Intelligence and Communication Networks (pp. 548–553). https://doi.org/10.1109/CICN.2010.108
* Mohasseb, A., Bader-El-Den, M., & Cocea, M. (2018). Question categorization and classification using grammar based approach. Information Processing and Management, 54(6), 1228–1243. https://doi.org/10.1016/j.ipm.2018.05.001
* Nguyen, T., Rosenberg, M., Song, X., Gao, J., Tiwary, S., Majumder, R., & Deng, L. (2016). MS MARCO: A human generated machine reading comprehension dataset. In Proceedings of the Workshop on Cognitive Computation: Integrating Neural and Symbolic Approaches, co-located with the 30th Annual Conference on Neural Information Processing Systems (NIPS 2016). Barcelona, Spain.
* Padmanabhan, V. N., & Mogul, J. C. (1996). Using predictive prefetching to improve World Wide Web latency. ACM SIGCOMM Computer Communication Review, 26(3), 22–36.
* Pitkow, J., & Pirolli, P. (1999). Mining longest repeating subsequences to predict World Wide Web surfing. In Proceedings of the 2nd Conference on USENIX Symposium on Internet Technologies and Systems, Volume 2 (p. 13). Berkeley, CA, USA: USENIX Association.
* Punjani, D., Singh, K., Both, A., Koubarakis, M., Angelidis, I., Bereta, K., & Stamoulis, G. (2018). Template-based question answering over linked geospatial data. In Proceedings of the 12th Workshop on Geographic Information Retrieval (pp. 7:1–7:10). New York, NY, USA: ACM. https://doi.org/10.1145/3281354.3281362
* Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 2383–2392).
* Raubal, M., & Winter, S. (2002). Enriching wayfinding instructions with local landmarks. In M. J. Egenhofer & D. M. Mark (Eds.), Geographic Information Science (pp. 243–259). Berlin, Heidelberg.
* Richter, D., Winter, S., Richter, K.-F., & Stirling, L. (2013). Granularity of locations referred to by place descriptions. Computers, Environment and Urban Systems, 41, 88–99. https://doi.org/10.1016/j.compenvurbsys.2013.03.005
* Scheider, S., Nyamsuren, E., Kruiger, H., & Xu, H. (2020). Geo-analytical question-answering with GIS. International Journal of Digital Earth, 1–14. https://doi.org/10.1080/17538947.2020.1738568
* Shanon, B. (1983). Answers to where-questions. Discourse Processes, 6(4), 319–352.
* Stadler, C., Lehmann, J., Höffner, K., & Auer, S. (2012). LinkedGeoData: A core for a web of spatial open data. Semantic Web, 3(4), 333–354.
* Sun, H., Dhingra, B., Zaheer, M., Mazaitis, K., Salakhutdinov, R., & Cohen, W. (2018). Open domain question answering using early fusion of knowledge bases and text. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 4231–4242). Brussels, Belgium: Association for Computational Linguistics. https://doi.org/10.18653/v1/D18-1455
* Suomela, R., Lakkala, H., & Salminen, I. (2009). Displaying a map having a close known location. Google Patents. US Patent 7,480,567.
* Tax, N., Teinemaa, I., & van Zelst, S. J. (2020). An interdisciplinary comparison of sequence modeling methods for next-element prediction. Software and Systems Modeling. https://doi.org/10.1007/s10270-020-00789-3
* Thalhammer, A., & Rettinger, A. (2016). PageRank on Wikipedia: Towards general importance scores for entities. In European Semantic Web Conference (pp. 227–240).
* Tomko, M., & Purves, R. S. (2008). Categorical prominence and the characteristic description of regions. In Proceedings of the Semantic Web Meets Geospatial Applications Workshop, held in conjunction with AGILE 2008. Girona, Spain.
* Vahedi, B., Kuhn, W., & Ballatore, A. (2016). Question-based spatial computing—A case study. In T. Sarjakoski, M. Y. Santos, & L. T. Sarjakoski (Eds.), Geospatial Data in a Changing World (pp. 37–50). Cham: Springer International Publishing.
* Wang, X., Zhang, Y., Chen, M., Lin, X., Yu, H., & Liu, Y. (2010). An evidence-based approach for toponym disambiguation. In Proceedings of the 18th International Conference on Geoinformatics (pp. 1–7). https://doi.org/10.1109/GEOINFORMATICS.2010.5567805
* Wilson, D., & Sperber, D. (2002). Relevance theory. In Handbook of Pragmatics. Blackwell.
* Wilson, J. A. (2018). Systems and methods for presenting personalized map labels. Google Patents. US Patent App. 15/811,376.
* Winter, S. (2009). Spatial intelligence: Ready for a challenge? Spatial Cognition & Computation, 9(2), 138–151.
* Zheng, W., Cheng, H., Yu, J. X., Zou, L., & Zhao, K. (2019). Interactive natural language question answering over knowledge graphs. Information Sciences, 481, 141–159. https://doi.org/10.1016/j.ins.2018.12.032
* Ziv, J., & Lempel, A. (1978). Compression of individual sequences via variable-rate coding. IEEE Transactions on Information Theory, 24(5), 530–536.
# Patterns formed in a thin film with spatially homogeneous and non-
homogeneous Derjaguin disjoining pressure
A. S. ALSHAIKHI$\,{}^{1}$, M. GRINFELD$\,{}^{1,2}$ and S. K. WILSON$\,{}^{1}$
$\,{}^{1}$ Department of Mathematics and Statistics, University of Strathclyde, Livingstone Tower, 26 Richmond Street, Glasgow G1 1XH, UK
$\,{}^{2}$ email: <EMAIL_ADDRESS>
(Received 20 December 2020; resubmitted 11 June 2021)
###### Abstract
We consider patterns formed in a two-dimensional thin film on a planar
substrate with a Derjaguin disjoining pressure and periodic wettability
stripes. We rigorously clarify some of the results obtained numerically by
Honisch et al. [Langmuir 31: 10618–10631, 2015] and embed them in the general
theory of thin-film equations. For the case of constant wettability, we
elucidate the change in the global structure of branches of steady state
solutions as the average film thickness and the surface tension are varied.
Specifically we find, by using methods of local bifurcation theory and the
continuation software package AUTO, both nucleation and metastable regimes. We
discuss admissible forms of spatially non-homogeneous disjoining pressure,
arguing for a form that differs from the one used by Honisch et al., and study
the dependence of the steady state solutions on the wettability contrast in
that case.
###### keywords:
thin films, disjoining pressure, non-homogeneous substrates, pattern formation
###### 2020 Mathematics Subject Classification:
74K35 (Primary); 35B32 (Secondary)
## 1 Introduction
Thin liquid films on solid substrates occur in many natural situations. For example, they appear in tear films in the eye which protect the cornea [6] or in streams of lava from a volcanic eruption [19]. Moreover, thin liquid films occur in many technological applications, such as coatings [22], lubricants (e.g. oil films which lubricate the piston in a car engine [41]), drying paint layers [18], and in the manufacture of microelectronic devices [21]. For extensive reviews of thin-film flow see, for example, Oron et al. [28] and Craster and Matar [10].
As these liquid films are thin, the Navier–Stokes equation governing their
flow can be reduced to a single degenerate fourth-order quasi-linear parabolic
partial differential equation (PDE) usually known as a thin-film equation. In
many applications a choice of a disjoining pressure, which we denote by $\Pi$,
must be made. Such a term describes the action of surface forces on the film
[37]. In different situations, different forms of disjoining pressure are
appropriate; these may incorporate long-range van der Waals forces and/or
various types of short-range interaction terms such as Born repulsion;
inclusion of a particular type of interaction can have significant effects on
the wettability of the surface and the evolution of the film, sometimes
leading to dewetting phenomena, i.e. the rupture of the film and the
appearance of dry spots. (Here and subsequently by “wettability” of the
surface we mean the ability of a solid surface to reduce the surface tension
of a liquid on contact with it such that it spreads over the surface and wets
it.)
Witelski and Bernoff [43] were among the first authors to analyse
mathematically the rupture of three-dimensional thin films. In particular,
considering a disjoining pressure of the form $\Pi=-1/(3h^{3})$ (we use the
sign convention adopted in Honisch et al. [17]), they analysed planar and
axisymmetric equilibrium solutions on a finite domain. They showed that a
disjoining pressure of this form leads to finite-time rupture singularities,
that is, the film thickness approaches zero in finite time at a point (or a
line or a ring) in the domain. In a related more recent paper, Ji and Witelski
[20] considered a different choice of disjoining pressure and investigated the
finite-time rupture solutions in a model of thin film of liquid with
evaporative effects. They observed different types of finite-time
singularities due to the non-conservative terms in the model. In particular,
they showed that the inclusion of a non-conservative term can prevent the disjoining pressure from causing finite-time rupture.
A pioneering theoretical study of a thin-film equation with a disjoining
pressure term given by a combination of negative powers of the thin film
thickness is that by Bertozzi et al. [4], who studied the formation of
dewetting patterns and the rupture of thin liquid films due to long-range
attractive and short-range Born repulsive forces, and determined the structure
of the bifurcation diagram for steady state solutions, both with and without
the repulsive term.
Aiming to quantify the temporal coarsening in a thin film, Glasner and
Witelski [15] examined two coarsening mechanisms that arise in dewetting
films: mass exchange that influences the breakdown of individual droplets and
spatial motion that results in droplet rupture as well as merging events. They
provided a simple model with a disjoining pressure which combines the effects
of both short- and long-range forces acting on the film. Kitavtsev et al. [23]
analysed the long-time dynamics of dewetting in a thin-film equation by using
a disjoining pressure similar to that used by Bertozzi et al. [4]. They
applied centre manifold theory to derive and analyse an ordinary differential equation model for the dynamics of dewetting.
The recent article by Witelski [42] presents a review of the various stages of
dewetting for a film of liquid spreading on a hydrophobic substrate. Different
types of behaviour of the film are observed depending on the form of the
disjoining pressure: finite-time singularities, self-similar solutions and
coarsening. In particular, he divides the evolution of dewetting processes
into three phases: an initial linear instability that leads to finite-time
rupture (short time dynamics), which is followed by the propagation of
dewetting rims and instabilities of liquid ridges (intermediate time
dynamics), and the eventual formation of quasi-steady droplets (long time
dynamics).
Most of the previous studies of thin liquid films focussed on films on
homogeneous substrates. However, thin liquid films on non-homogeneous
chemically patterned substrates are also of interest. These have many
practical applications, such as in the construction of microfluidic devices
and creating soft materials with a particular pattern [30]. Chemically
patterned substrates are an efficient way to obtain microstructures of
different shapes by using different types of substrate patterning [34].
Chemical modification of substrates can also be used to avoid spontaneous
breakup of thin films, which is often highly undesirable, as, for example, in
printing technology [5, 1].
Due to their many applications briefly described above, films on non-
homogeneous substrates have been the object of a number of previous
theoretical studies which motivate the present work. For example, Konnur et al. [24] found that, in the case of an isolated circular patch with wetting properties different from those of the rest of the substrate, the chemical non-homogeneity of the substrate can greatly accelerate the growth of surface instabilities. Sharma et al. [35] studied instabilities of a liquid film on a
substrate containing a single heterogeneous patch and a substrate with stripes
of alternating less and more wettable regions. The main concern of that paper
was to investigate how substrate patterns are reproduced in the liquid film,
and to determine the best conditions for templating.
Thiele et al. [39] performed a bifurcation analysis using the continuation
software package AUTO [11] to study dewetting on a chemically patterned
substrate by solving a thin-film equation with a disjoining pressure, using
the wettability contrast as a control parameter. The wettability contrast
measures the degree of heterogeneity of the substrate; it is introduced and
defined rigorously in (45) in Section 5. Honisch et al. [17] modelled an
experiment in which organic molecules were deposited on chemically non-homogeneous silicon oxide substrates with gold stripes and discussed the redistribution of the liquid following deposition.
In a recent paper, Liu and Witelski [27] studied thin films on chemically
heterogeneous substrates. They claim that in some applications such as digital
microfluidics, substrates with alternate hydrophilic and hydrophobic
rectangular areas are better described by a piecewise constant function than
by a sinusoidal one. Therefore, in contrast to other studies, including the
present one, they study substrates with wettability characteristics described
by such a function. Based on the structure of the bifurcation diagram, they
divide the steady-state solutions into six distinct but connected branches and
show that the only unstable branch corresponds to confined droplets, while the
rest of the branches are stable.
In the present work, we build on the work of Thiele et al. [39] and Honisch et
al. [17]. Part of our motivation is to clarify and explain rigorously some of
the numerical results reported in these papers. In the sinusoidally striped
non-homogeneous substrate case, we offer a justification for using a form of
the disjoining pressure that differs from the one used in these two papers. A
detailed plan of the paper is given in the last paragraph of Section 2.
## 2 Problem Statement
Denoting the thickness of the thin liquid film by $z=h(x,y,t)$, where
$(x,y,z)$ are the usual Cartesian coordinates and $t$ is time, Honisch et al.
[17] consider the thin-film equation
$h_{t}=\nabla\cdot\left\\{Q(h)\nabla P(h,x,y)\right\\},\qquad
t>0,\qquad(x,y)\in\mathbb{R}^{2},$ (1)
where $Q(h)=h^{3}/(3\eta)$ is the mobility coefficient with $\eta$ being the
dynamic viscosity. The generalized pressure $P(h,x,y)$ is given by
$P(h,x,y)=-\gamma\Delta h-\Pi(h,x,y),$ (2)
where $\gamma$ is the coefficient of surface tension and we follow [17] in
taking the Derjaguin disjoining pressure $\Pi(h,x,y)$ in the spatially
homogeneous case to be of the form
$\Pi(h,x,y)=-\frac{A}{h^{3}}+\frac{B}{h^{6}}$ (3)
suggested, for example, by Pismen [29]. Here $A$ and $B$ are positive
parameters that measure the relative contributions of the long-range forces
(the $1/h^{3}$ term) and the short-range ones (the $1/h^{6}$ term). However,
we will see that both of these constants can be scaled out of the mathematical
problem. Equation (3) uses the exponent $-3$ for the long-range forces and
$-6$ for the short-range forces as in Honisch et al. [17]. Other choices
include the pairs of long- and short-range exponents $(-2,-3)$, $(-3,-4)$ and
$(-3,-9)$ discussed by [4, 33, 36]. In terms of the classification of Bertozzi
et al. [4, Definition 1], the choice $(-3,-6)$ is admissible (as are all the
other pairs above), and falls in their region II; we expect that choosing
other admissible exponent pairs will give qualitatively the same results as
those obtained here.
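As a small numerical illustration (the values of $A$ and $B$ below are arbitrary, not taken from the paper), the disjoining pressure (3) vanishes at the thickness $h=(B/A)^{1/3}$, which is precisely the scale $H$ used in the non-dimensionalisation (8) below:

```python
# Arbitrary illustrative values of A (long-range) and B (short-range).
A, B = 2.0, 5.0

def Pi(h):
    """Derjaguin disjoining pressure (3): -A/h^3 + B/h^6."""
    return -A / h**3 + B / h**6

H = (B / A) ** (1.0 / 3.0)   # equilibrium thickness, the scale H in (8)
print(H, Pi(H))              # Pi(H) = 0 up to rounding
```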
In what follows, we study thin films on both homogeneous and non-homogeneous
substrates. In the non-homogeneous case, we will modify (3) by assuming that
the Derjaguin pressure term $\Pi$ changes periodically in the $x$-direction
with period $L$. The appropriate forms of $\Pi$ in the non-homogeneous case
are discussed in Section 5.
Hence, in order better to understand solutions of (1), we study its one-
dimensional version,
$h_{t}=(Q(h)P(h,x)_{x})_{x},\quad\quad 0<x<L.$ (4)
We start by characterising steady state solutions of (4) subject to periodic
boundary conditions at $x=0$ and $x=L$. In other words, we seek steady state
solutions $h(x)$ of (4), satisfying the non-local boundary value problem
$\gamma
h_{xx}+\frac{B}{h^{6}}-\frac{A}{h^{3}}-\frac{1}{L}\int_{0}^{L}\left[\frac{B}{h^{6}}-\frac{A}{h^{3}}\right]\,\hbox{d}{x}=0,\;\;0<x<L,$
(5)
subject to the constraint
$\frac{1}{L}\int_{0}^{L}h(x)\,\hbox{d}{x}=h^{*},$ (6)
where the constant $h^{*}\,(>0)$ denotes the (scaled) average film thickness,
and the periodic boundary conditions
$h(0)=h(L),\quad h_{x}(0)=h_{x}(L).$ (7)
Now we non-dimensionalise. Setting
$H=\left(\frac{B}{A}\right)^{1/3},\;\;h=H\tilde{h},\quad\hbox{and}\quad
x=L\tilde{x},$ (8)
in (5) and removing the tildes, we obtain
$\epsilon^{2}h_{xx}+f(h)-\int_{0}^{1}f(h)\,\hbox{d}{x}=0,\;\;0<x<1,$ (9)
where
$f(h)=\frac{1}{h^{6}}-\frac{1}{h^{3}},$ (10)
and
$\epsilon^{2}=\frac{\gamma B^{4/3}}{L^{2}A^{7/3}},$ (11)
subject to the periodic boundary conditions
$h(0)=h(1),\quad h_{x}(0)=h_{x}(1),$ (12)
and the volume constraint
$\int_{0}^{1}h(x)\,\hbox{d}{x}=\bar{h},$ (13)
where
$\bar{h}=\frac{h^{*}A^{1/3}}{B^{1/3}}.$ (14)
Note that the problem (9)–(14) is very similar to the corresponding steady
state problem for the Cahn–Hilliard equation considered as a bifurcation
problem in the parameters $\bar{h}$ and $\epsilon$ by Eilbeck et al. [12]. The
boundary conditions considered in that work were the physically natural double
Neumann conditions. The periodic boundary conditions (12) in the present
problem slightly change the analysis, but our general approach in
characterising different bifurcation regimes still follows that of Eilbeck et
al. [12], though the correct interpretation of the limit as $\epsilon\to
0^{+}$ is that now we let the surface tension $\gamma$ go to zero. In
particular, we perform a Liapunov–Schmidt reduction to determine the local
behaviour close to bifurcation points and then use AUTO (in the present work
we use the AUTO-07p version [11]) to explore the global structure of branches
of steady state solutions both for the spatially homogeneous case and for the
spatially non-homogeneous case in the case of an $x$-periodically patterned
substrate.
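Although the global continuation is performed with AUTO, a non-constant steady state can also be computed directly. The sketch below (not the authors' computation; NumPy/SciPy are assumed, and the grid size, $\bar{h}=1.4$ and $\epsilon=0.06$ are illustrative choices lying in what turns out to be Regime III) solves (9)–(13) by Newton iteration on a periodic grid, replacing two redundant equations by the volume constraint and a phase condition:

```python
import numpy as np
from scipy.optimize import fsolve

N = 128
x = np.arange(N) / N                     # periodic grid on [0, 1)
dx = 1.0 / N
h_bar, eps = 1.4, 0.06                   # illustrative values (Regime III)

def f(h):                                # equation (10)
    return h**-6 - h**-3

def lap(h):                              # periodic second difference
    return (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2

def residual(h):
    r = eps**2 * lap(h) + f(h) - f(h).mean()      # equation (9)
    # The residual has zero mean and (approximate) translation invariance,
    # so two equations are redundant; replace them by the volume
    # constraint (13) and a phase condition pinning the sin-mode to zero.
    r[0] = h.mean() - h_bar
    r[1] = np.dot(h, np.sin(2.0 * np.pi * x))
    return r

h0 = h_bar + 0.3 * np.cos(2.0 * np.pi * x)        # guess along the kernel mode
h = fsolve(residual, h0)
print(h.max() - h.min())                 # nonzero: a non-constant steady state
```

Pseudo-arclength continuation, as performed by AUTO, would then track such a solution as $\epsilon$ or $\bar{h}$ varies.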
We first investigate the homogeneous case and, having elucidated the structure
of the bifurcations of non-trivial solutions from the constant solution
$h=\bar{h}$ in that case in Sections 3 and 4, we study forced rotational
($O(2)$) symmetry breaking in the non-homogeneous case in Section 5. In
Appendix A, we present a general result about $O(2)$ symmetry breaking in the
spatially non-homogeneous case. It shows that in the spatially non-homogeneous
case, only two steady state solutions remain from the orbit of solutions of
(9)–(14) induced by its $O(2)$ invariance. We concentrate on the simplest
steady state solutions of (9)–(14), as by a result of Laugesen and Pugh [25,
Theorem 1] only such solutions, that is, constant solutions and those having
only one extremum point, are linearly stable in the homogeneous case. For
information about dynamics of one-dimensional thin-film equations the reader
should also consult Zhang [46].
In what follows, we use $\|\cdot\|_{2}$ to denote $L^{2}([0,1])$ norms.
## 3 Liapunov–Schmidt Reduction in the Spatially Homogeneous Case
We start by performing an analysis of the dependence of the global structure
of branches of steady state solutions of the problem in the spatially
homogeneous case given by (9)–(14) on the parameters $\bar{h}$ and $\epsilon$.
In what follows, we do not indicate explicitly the dependence of the operators
on the parameters $\bar{h}$ and $\epsilon$, and all of the calculations are
performed for a fixed value of $\bar{h}$ and close to a bifurcation point
$\epsilon=\epsilon_{k}$ for $k=1,2,3,\ldots$ defined below.
We set $v=h-\bar{h}$, so that $v=v(x)$ has zero mean, and rewrite (9) as
$G(v)=0,$ (15)
where
$G(v)=\epsilon^{2}v_{xx}+f(v+\bar{h})-\int_{0}^{1}f(v(x)+\bar{h})\,\hbox{d}{x}.$
(16)
If we set
$H=\left\\{w\in C(0,1)\,:\,\int_{0}^{1}w(x)\,\hbox{d}{x}=0\right\\},$ (17)
where we regard $G$ as an operator from $D(G)\subset H$ to $H$, then $D(G)$ is
given by
$D(G)=\left\\{v\in
C^{2}(0,1)\,:\,v(0)=v(1),\,v_{x}(0)=v_{x}(1),\,\int_{0}^{1}v(x)\,\hbox{d}{x}=0\right\\}.$
(18)
The linearisation of $G$ at $v$ applied to $w$ is defined by
$dG(v)w=\lim_{\tau\to 0}\frac{G(v+\tau w)-G(v)}{\tau}.$ (19)
We denote $dG(0)$ by $S$, so that $S$ applied to $w$ is given by
$Sw=\epsilon^{2}w_{xx}+f^{\prime}(\bar{h})w.$ (20)
To locate the bifurcation points, we have to find the nontrivial solutions of
the equation $Sw=0$ subject to
$w(0)=w(1),\quad\quad w_{x}(0)=w_{x}(1).$ (21)
The kernel of $S$ is non-trivial and two-dimensional when
$\epsilon=\epsilon_{k}=\frac{\sqrt{f^{\prime}(\bar{h})}}{2k\pi}\quad\textrm{for}\quad
k=1,2,3,\ldots,$ (22)
and is spanned by $\cos(2k\pi x)$ and $\sin(2k\pi x)$. That these values of
$\epsilon$ correspond to bifurcation points follows from two theorems of
Vanderbauwhede [40, Theorems 2 and 3].
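For instance (the value $\bar{h}=1.4$ is an arbitrary illustration, not taken from the paper), the bifurcation points (22) are easily evaluated:

```python
import numpy as np

h_bar = 1.4                                   # arbitrary illustrative value
fp = -6.0 / h_bar**7 + 3.0 / h_bar**4         # f'(h_bar) for f in (10)
eps_k = np.sqrt(fp) / (2.0 * np.pi * np.arange(1, 4))
print(eps_k)   # eps_1 ~ 0.073, then eps_1/2 and eps_1/3
```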
In a neighbourhood of a bifurcation point $(\epsilon_{k},0)$ in $(\epsilon,v)$
space, solutions of $G(v)=0$ on $H$ are in one-to-one correspondence with
solutions of the reduced system of equations on $\mathbb{R}^{2}$,
$g_{1}(x,y,\epsilon)=0,\quad g_{2}(x,y,\epsilon)=0,$ (23)
for some functions $g_{1}$ and $g_{2}$ to be obtained through the
Liapunov–Schmidt reduction [16].
To set up the Liapunov–Schmidt reduction, we decompose $D(G)$ and $H$ as
follows:
$D(G)=\hbox{ker}\,S\oplus M$ (24)
and
$H=N\oplus\hbox{range}\,S.$ (25)
Since $S$ is self-adjoint with respect to the $L^{2}$-inner product denoted by
$\langle\cdot,\cdot\rangle$, we can choose
$M=N=\hbox{span}\,\left\\{\cos(2k\pi x),\sin(2k\pi x)\right\\},$ (26)
and denote the above basis for $M$ by $\left\\{w_{1},w_{2}\right\\}$ and for
$N$ by $\left\\{w_{1}^{*},w_{2}^{*}\right\\}$. We also denote the projection
of $H$ onto $\hbox{range}\,S$ by $E$.
Since the present problem is invariant with respect to the group $O(2)$, the
functions $g_{1}$ and $g_{2}$ must have the form
$g_{1}(x,y,\epsilon)=xp(x^{2}+y^{2},\epsilon),\quad
g_{2}(x,y,\epsilon)=yp(x^{2}+y^{2},\epsilon),$ (27)
for some function $p(\cdot,\cdot)$ [9], which means that in order to determine
the bifurcation structure, the only terms that need to be computed are
$g_{1,x\epsilon}$ and $g_{1,xxx}$, as these immediately give $g_{2,y\epsilon}$
and $g_{2,yyy}$ and all of the other second and third partial derivatives of
$g_{1}$ and $g_{2}$ are identically zero.
Following Golubitsky and Schaeffer [16], we have
$\displaystyle g_{1,x\epsilon}$ $\displaystyle=$ $\displaystyle\langle
w_{1}^{*},dG_{\epsilon}(w_{1})-d^{2}G(w_{1},S^{-1}EG_{\epsilon}(0))\rangle,$
(28) $\displaystyle g_{1,xxx}$ $\displaystyle=$ $\displaystyle\langle
w_{1}^{*},d^{3}G(w_{1},w_{1},w_{1})-3d^{2}G(w_{1},S^{-1}E[d^{2}G(w_{1},w_{1})])\rangle,$
(29)
where
$d^{r}G(z_{1},\ldots,z_{r})=\left.\frac{\partial^{r}}{\partial
t_{1}\ldots\partial
t_{r}}G(t_{1}z_{1}+\ldots+t_{r}z_{r})\right|_{t_{1}=\ldots=t_{r}=0}\quad\textrm{for}\quad
r=1,2,3,\ldots,$ (30)
and we choose
$w_{1}=w_{1}^{*}=\cos(2k\pi x),$ (31)
where $w_{1}\in\hbox{ker}\ S$ and $w_{1}^{*}\in(\hbox{range}\ S)^{\perp}$. In
particular, from (30) we have
$\displaystyle d^{2}G(z_{1},z_{2})$ $\displaystyle=$
$\displaystyle\left.\frac{\partial^{2}}{\partial t_{1}\partial
t_{2}}G(t_{1}z_{1}+t_{2}z_{2})\right|_{t_{1}=t_{2}=0}$ (32) $\displaystyle=$
$\displaystyle\frac{\partial^{2}}{\partial t_{1}\partial
t_{2}}\Big{[}\epsilon_{k}^{2}(t_{1}z_{1,xx}+t_{2}z_{2,xx})+f(t_{1}z_{1}+t_{2}z_{2}+\bar{h})$
$\displaystyle-$
$\displaystyle\left.\int_{0}^{1}f(t_{1}z_{1}+t_{2}z_{2}+\bar{h})\,\hbox{d}{x}\Big{]}\right|_{t_{1}=t_{2}=0}$
$\displaystyle=$ $\displaystyle
f^{\prime\prime}(\bar{h})z_{1}z_{2}-\int_{0}^{1}f^{\prime\prime}(\bar{h})z_{1}z_{2}\,\hbox{d}{x},$
and so
$\displaystyle d^{2}G(\cos(2k\pi x),\cos(2k\pi x))$ $\displaystyle=$
$\displaystyle f^{\prime\prime}(\bar{h})\cos^{2}(2k\pi
x)-\int_{0}^{1}f^{\prime\prime}(\bar{h})\cos^{2}(2k\pi x)\,\hbox{d}{x}$ (33)
$\displaystyle=$ $\displaystyle f^{\prime\prime}(\bar{h})\cos^{2}(2k\pi
x)-\frac{1}{2}f^{\prime\prime}(\bar{h}).$
To obtain $S^{-1}E[d^{2}G(w_{1},w_{1})]$, which we denote by $R(x)$, so that
$SR=E[d^{2}G(w_{1},w_{1})]$, we use the definition of $\epsilon_{k}$ given in
(22) and solve the second order ordinary differential equation satisfied by
$R(x)$,
$R_{xx}+4k^{2}\pi^{2}R=4k^{2}\pi^{2}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})}\cos^{2}(2k\pi
x)-2k^{2}\pi^{2}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})},$ (34)
subject to
$R(0)=R(1)\quad\textrm{and}\quad R_{x}(0)=R_{x}(1)$ (35)
in $\hbox{ker}\,S$. We obtain
$R(x)=S^{-1}E[d^{2}G(w_{1},w_{1})]=-\frac{1}{6}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})}\cos(4k\pi
x),$ (36)
and hence
$\displaystyle d^{2}G(w_{1},S^{-1}E[d^{2}G(w_{1},w_{1})])$ $\displaystyle=$
$\displaystyle d^{2}G\left(\cos(2k\pi
x),-\frac{1}{6}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})}\cos(4k\pi
x)\right)$ (37) $\displaystyle=$ $\displaystyle
f^{\prime\prime}(\bar{h})\cos(2k\pi
x)\left(-\frac{1}{6}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})}\cos(4k\pi
x)\right)$ $\displaystyle-\int_{0}^{1}f^{\prime\prime}(\bar{h})\cos(2k\pi
x)\left(-\frac{1}{6}\frac{f^{\prime\prime}(\bar{h})}{f^{\prime}(\bar{h})}\cos(4k\pi
x)\right)\hbox{d}{x}$ $\displaystyle=$
$\displaystyle-\frac{1}{6}\frac{[f^{\prime\prime}(\bar{h})]^{2}}{f^{\prime}(\bar{h})}\cos(2k\pi
x)\cos(4k\pi x).$
In addition, from (30) we have
$\displaystyle d^{3}G(z_{1},z_{2},z_{3})$ $\displaystyle=$
$\displaystyle\left.\frac{\partial^{3}}{\partial t_{1}\partial t_{2}\partial
t_{3}}G(t_{1}z_{1}+t_{2}z_{2}+t_{3}z_{3})\right|_{t_{1}=t_{2}=t_{3}=0}$ (38)
$\displaystyle=$ $\displaystyle
f^{\prime\prime\prime}(\bar{h})z_{1}z_{2}z_{3}-\int_{0}^{1}f^{\prime\prime\prime}(\bar{h})z_{1}z_{2}z_{3}\,\hbox{d}{x},$
and therefore
$\displaystyle d^{3}G(\cos(2k\pi x),\cos(2k\pi x),\cos(2k\pi x))$
$\displaystyle=$ $\displaystyle f^{\prime\prime\prime}(\bar{h})\cos^{3}(2k\pi
x)-\int_{0}^{1}f^{\prime\prime\prime}(\bar{h})\cos^{3}(2k\pi x)\,\hbox{d}{x}$
(39) $\displaystyle=$ $\displaystyle
f^{\prime\prime\prime}(\bar{h})\cos^{3}(2k\pi x).$
Putting all of the information in (37) and (39) into (29) we obtain
$\displaystyle g_{1,xxx}$ $\displaystyle=$ $\displaystyle\langle
w_{1}^{*},d^{3}G(w_{1},w_{1},w_{1})-3d^{2}G(w_{1},S^{-1}E[d^{2}G(w_{1},w_{1})])\rangle$
(40) $\displaystyle=$ $\displaystyle\int_{0}^{1}\cos(2k\pi
x)\left[f^{\prime\prime\prime}(\bar{h})\cos^{3}(2k\pi
x)-3\left(-\frac{1}{6}\frac{[f^{\prime\prime}(\bar{h})]^{2}}{f^{\prime}(\bar{h})}\cos(2k\pi
x)\cos(4k\pi x)\right)\right]\,\hbox{d}{x}$ $\displaystyle=$
$\displaystyle\frac{3}{8}f^{\prime\prime\prime}(\bar{h})+\frac{1}{8}\frac{[f^{\prime\prime}(\bar{h})]^{2}}{f^{\prime}(\bar{h})}.$
In addition, $G_{\epsilon}(v)=v_{xx}$, so that $G_{\epsilon}(0)=0$ at $v=0$,
and hence we have
$d^{2}G(w_{k},S^{-1}EG_{\epsilon}(0))=0.$ (41)
Furthermore, since $dG_{\epsilon}(w)=w_{xx}$, from (28) we obtain
$\displaystyle g_{1,x\epsilon}$ $\displaystyle=$ $\displaystyle\langle
w_{1}^{*},dG_{\epsilon}(w_{1})-d^{2}G(w_{1},S^{-1}EG_{\epsilon}(0))\rangle$
(42) $\displaystyle=$ $\displaystyle\int_{0}^{1}\cos(2k\pi
x)\left(-4\pi^{2}k^{2}\cos(2k\pi x)\right)\,\hbox{d}{x}$ $\displaystyle=$
$\displaystyle-2k^{2}\pi^{2}.$
Referring to (27) and the argument following that equation, the above analysis
shows that as long as $f^{\prime}(\bar{h})>0$ at $\epsilon=\epsilon_{k}$ a
circle of equilibria bifurcates from the constant solution $h\equiv\bar{h}$.
The direction of bifurcation is locally determined by the sign of $g_{1,xxx}$
given by (40). Hence, using $1/\epsilon$ as the bifurcation parameter, the
bifurcation of nontrivial equilibria is supercritical if $g_{1,xxx}$ is
negative and subcritical if it is positive.
By finding the values of $\bar{h}$ where $g_{1,xxx}$ given by (40) with $f(h)$
given by (10) is zero, we finally obtain the following proposition:
###### Proposition 1
Bifurcations of nontrivial solutions from the constant solution $h=\bar{h}$ of the problem (9)–(14) are supercritical if $1.289<\bar{h}<1.747$ and subcritical if $1.259<\bar{h}<1.289$ or if $\bar{h}>1.747$.
###### Proof 3.1
The constant solution $h\equiv\bar{h}$ will lose stability as $\epsilon$ is decreased only if $f^{\prime}(\bar{h})>0$, i.e. if $-6/{\bar{h}}^{7}+3/{\bar{h}}^{4}>0$, which holds for $\bar{h}>2^{1/3}\approx 1.259$. From (40) we have that
(40) we have that
$g_{1,xxx}=\frac{57\bar{h}^{6}-426\bar{h}^{3}+651}{2\bar{h}^{9}(\bar{h}^{3}-2)},$
(43)
so that $g_{1,xxx}<0$ if $\bar{h}\in(1.289,1.747)$, giving the result.
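The algebra behind (43) and the thresholds in Proposition 1 can be checked symbolically; the following sketch (assuming sympy is available; not part of the original paper) confirms both the rational expression and the roots $\bar{h}\approx 1.289$ and $\bar{h}\approx 1.747$:

```python
import sympy as sp

h = sp.symbols('h', positive=True)
f = h**-6 - h**-3
g = sp.Rational(3, 8) * sp.diff(f, h, 3) \
    + sp.Rational(1, 8) * sp.diff(f, h, 2)**2 / sp.diff(f, h)   # equation (40)
target = (57*h**6 - 426*h**3 + 651) / (2 * h**9 * (h**3 - 2))   # equation (43)
print(sp.simplify(g - target))           # 0: (40) and (43) agree

u = sp.symbols('u', positive=True)       # u = h_bar^3
for r in sorted(sp.solve(57*u**2 - 426*u + 651, u)):
    print(float(r) ** (1.0 / 3.0))       # 1.2893..., 1.7472...
```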
For $\bar{h}\leqslant 2^{1/3}$ there are no bifurcations from the constant
solution $h=\bar{h}$. Furthermore, we have the following proposition:
###### Proposition 2
The problem (9)–(14) has no nontrivial solutions when $\bar{h}\leq 1$.
###### Proof 3.2
Assume that such a nontrivial solution exists. Then, since $\bar{h}\leq 1$, its global minimum, achieved at some point $x_{0}\in(0,1)$, must be less than 1. (We may take the point $x_{0}$ to be an interior point by translation invariance.) Since $f$ is strictly decreasing on $(0,2^{1/3})$ and negative for $h>2^{1/3}$, while $f(h(x_{0}))>0$, we have $f(h(x_{0}))>\int_{0}^{1}f(h)\,\hbox{d}{x}$, and hence
$\epsilon^{2}h_{xx}(x_{0})=\int_{0}^{1}f(h)\,\hbox{d}{x}-f(h(x_{0}))<0,$ (44)
so the point $x_{0}$ cannot be a minimum.
## 4 Two-Parameter Continuation of Solutions in the Spatially Homogeneous
Case
To describe the change in the global structure of branches of steady state
solutions of the problem (9)–(14) as $\bar{h}$ and $\epsilon$ are varied, we
use AUTO [11] and our results are summarised in Figure 1.
As Figure 1 shows, a curve of saddle-node (SN) bifurcations which originates
from $\bar{h}\approx 1.289$ at $1/\epsilon\approx 23.432$ satisfies
$\bar{h}\rightarrow 1^{+}$ as $1/\epsilon\rightarrow\infty$, while a curve of
SN bifurcations which originates from $\bar{h}\approx 1.747$,
$1/\epsilon\approx 13.998$ satisfies $\bar{h}\rightarrow\infty$ as
$1/\epsilon\rightarrow\infty$.
Figure 1: The global structure of branches of steady state solutions with a
unique maximum, including both saddle-node (SN) (shown with dash-dotted
curves) and pitchfork (PF) bifurcation branches (shown with solid curves). The
nucleation regime $1<\bar{h}<2^{1/3}\approx 1.259$ (Regime I), the metastable
regime $2^{1/3}<\bar{h}<1.289$ and $\bar{h}>1.747$ (Regime II), and the
unstable regime $1.289<\bar{h}<1.747$ (Regime III) are also indicated.
Figure 1 identifies three different bifurcation regimes, denoted by I, II and
III, with differing bifurcation behaviour occurring in the different regimes,
namely (using the terminology of [12] in the context of the Cahn-Hilliard
equation):
* •
a “nucleation” regime for $1<\bar{h}<2^{1/3}\approx 1.259$ (Regime I),
* •
a “metastable” regime for $2^{1/3}<\bar{h}<1.289$ and $\bar{h}>1.747$ (Regime
II), and
* •
an “unstable” regime for $1.289<\bar{h}<1.747$ (Regime III).
In Regime I, the constant solution $h(x)\equiv\bar{h}$ is linearly stable, which follows from analysing the spectrum of the operator $S$ in (20) and (21) for $f^{\prime}(\bar{h})<0$, but under sufficiently large perturbations the system will evolve to a non-constant steady state solution.
See [25] for an extensive discussion of the stability analysis of steady state
solutions to thin-film equations.
In Regime II, as $\epsilon$ is decreased the constant solution
$h(x)\equiv\bar{h}$ loses stability through a subcritical bifurcation.
In Regime III, as $\epsilon$ is decreased, the constant solution
$h(x)\equiv\bar{h}$ loses stability through a supercritical bifurcation.
## 5 The Spatially Non-Homogeneous Case
Honisch et al. [17] chose the Derjaguin disjoining pressure $\Pi(h,x,y)$ to be
of the form
$\Pi(h,x,y)=\left(\frac{1}{h^{6}}-\frac{1}{h^{3}}\right)(1+\rho\,G(x,y)),$
(45)
where the function $G(x,y)$ models the non-homogeneity of the substrate and
the parameter $\rho$, which can be either positive or negative, is called the
“wettability contrast”. Following Honisch et al. [17], in the remainder of the
present work we consider the specific case
$G(x,y)=\sin\left(2\pi x\right):=G(x),$ (46)
corresponding to an $x$-periodically patterned (i.e. striped) substrate.
There are, however, some difficulties in accepting (45) as a physically
realistic form of the disjoining pressure for a non-homogeneous substrate. The
problems arise because the two terms in (45) represent rather different
physical effects. Specifically, since the $1/h^{6}$ term models the short-
range interaction amongst the molecules of the liquid and the $1/h^{3}$ term
models the long-range interaction, assuming that both terms reflect the
patterning in the substrate in exactly the same way through their dependence
on the same function $G(x,y)$ does not seem very plausible. Moreover, there
are other studies which assume that the wettability of the substrate is
incorporated in either the short-range interaction term (see, for example,
Thiele and Knobloch [38] and Bertozzi et al. [4]) or the long-range
interaction term (see, for example, Ajaev et al. [2] and Sharma et al. [35]),
but not both simultaneously. Hence in what follows we will consider the two cases $\Pi(h,x)=\Pi_{\rm LR}(h,x)$ and $\Pi(h,x)=\Pi_{\rm SR}(h,x)$, where LR stands for “long range” and SR stands for “short range”, with
$\Pi_{\rm LR}(h,x)=\frac{1}{h^{6}}-\frac{1}{h^{3}}(1+\rho G(x))$ (47)
and
$\Pi_{\rm SR}(h,x)=\frac{1}{h^{6}}(1+\rho G(x))-\frac{1}{h^{3}},$ (48)
in both of which $G(x)$ is given by (46) and $\rho$ is the wettability
contrast.
For small wettability contrast, $|\rho|\ll 1$, we do not expect there to be
significant differences between the influence of $\Pi_{\rm LR}$ or $\Pi_{\rm
SR}$ on the bifurcation diagrams, as these results depend only on the nature
of the bifurcation in the homogeneous case $\rho=0$ and on the symmetry groups
under which the equations are invariant. To see this, consider the spatially
non-homogeneous version of (9), that is, the boundary value problem
$\epsilon^{2}h_{xx}+f(h,x)-\int_{0}^{1}f(h,x)\,\hbox{d}{x}=0,\;\;0<x<1,$ (49)
where now
$f(h,x)=\Pi_{\rm LR}(h,x)\quad\hbox{or}\quad f(h,x)=\Pi_{\rm SR}(h,x),$ (50)
subject to the periodic boundary conditions and the volume constraint,
$h(0)=h(1),\quad
h_{x}(0)=h_{x}(1),\quad\hbox{and}\quad\int_{0}^{1}h(x)\,\hbox{d}{x}=\bar{h}.$
(51)
Seeking an asymptotic solution to (49)–(51) in the form $h(x)=\bar{h}+\rho h_{1}(x)+O(\rho^{2})$ in the limit $\rho\to 0$, by substituting this ansatz into (49) we find that in the case of $\Pi_{\rm LR}(h,x)$ we have
$h_{1}(x)=-\frac{\bar{h}^{4}\sin\left(2\pi x\right)}{4\pi^{2}\bar{h}^{7}\epsilon^{2}-3\bar{h}^{3}+6},$ (52)
while in the case of $\Pi_{\rm SR}(h,x)$ the corresponding result is
$h_{1}(x)=\frac{\bar{h}\sin\left(2\pi
x\right)}{4\pi^{2}\bar{h}^{7}\epsilon^{2}-3\bar{h}^{3}+6}.$ (53)
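As an independent cross-check (not part of the paper), the amplitudes in (52) and (53) can be recovered symbolically by substituting $h=\bar{h}+\rho c\sin(2\pi x)$ into (49) and solving the $O(\rho)$ balance for $c$; sympy is assumed to be available:

```python
import sympy as sp

x, rho, eps, hb = sp.symbols('x rho epsilon hbar', positive=True)
c = sp.symbols('c')
G = sp.sin(2 * sp.pi * x)
h = hb + rho * c * G

for name, f in [("LR", 1/h**6 - (1 + rho*G)/h**3),     # equation (47)
                ("SR", (1 + rho*G)/h**6 - 1/h**3)]:    # equation (48)
    expr = eps**2 * sp.diff(h, x, 2) + f               # local part of (49)
    F = sp.expand(sp.series(expr, rho, 0, 2).removeO().coeff(rho))
    F -= sp.integrate(F, (x, 0, 1))                    # subtract the mean
    print(name, sp.simplify(sp.solve(sp.Eq(F.coeff(G), 0), c)[0]))
# LR: -hbar**4/(4*pi**2*epsilon**2*hbar**7 - 3*hbar**3 + 6), i.e. (52)
# SR:  hbar/(4*pi**2*epsilon**2*hbar**7 - 3*hbar**3 + 6),    i.e. (53)
```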
For non-zero values of $\rho$, in both the $\Pi_{\rm LR}$ and $\Pi_{\rm SR}$
cases, the changes in the bifurcation diagrams obtained in the homogeneous
case ($\rho=0$) are an example of forced symmetry breaking (see, for example,
Chillingworth [8]), which we discuss further in Appendix A. More precisely, we
show there that when $\rho\neq 0$, out of the entire $O(2)$ orbit only two
equilibria are left after symmetry breaking.
Figure 2: Bifurcation diagrams of solutions with a unique maximum, showing
$\|h-\bar{h}\|_{2}$ plotted as a function of $1/\epsilon$ when the disjoining
pressure is $\Pi_{\rm LR}$ for $\rho=0$ (dashed curves), $\rho=0.005$ (dotted
curves) and $\rho=0.05$ (solid curves) for (a) $\bar{h}=1.24$, (b)
$\bar{h}=1.3$, and (c) $\bar{h}=2$, corresponding to Regimes I, III, and II,
respectively.
Figure 2 shows how for small wettability contrast $|\rho|\ll 1$, the resulting
spatial non-homogeneity introduces imperfections [16] in the bifurcation
diagrams of the homogeneous case $\rho=0$ discussed in Section 4. It presents
bifurcation diagrams in Regimes I, II and III when the disjoining pressure
$\Pi_{\rm LR}$ is given by (47) for a range of small values of $\rho$ together
with the corresponding diagrams in the case $\rho=0$. The corresponding
bifurcation diagrams when the disjoining pressure $\Pi_{\rm SR}$ is given by
(48) are very similar and hence are not shown here.
For large wettability contrast, specifically for $|\rho|\geq 1$, significant
differences between the two forms of the disjoining pressure are to be
expected. When using $\Pi_{\rm LR}$, one expects global existence of positive
solutions for all values of $|\rho|$; see, for example, Wu and Zheng [44]. On
the other hand, when using $\Pi_{\rm SR}$, there is the possibility of rupture
of the liquid film for $|\rho|\geq 1$; see, for example, [4, 44], which means
in this case we do not expect positive solutions for sufficiently large values
of $|\rho|$.
Figure 3: Bifurcation diagram for steady state solutions with a unique maximum
showing $\|h-\bar{h}\|_{2}$ plotted as a function of $\rho$ when the
disjoining pressure is $\Pi_{\rm LR}$, $\bar{h}=3$ and $1/\epsilon=50$. The
leading-order dependence of $\|h-\bar{h}\|_{2}$ on $\rho$ as $\rho\to 0$,
given by (52), is shown with dashed lines.
In Figure 3 we plot the branches of the positive solutions of (49)–(51) with a
unique maximum when the disjoining pressure is $\Pi_{\rm LR}$ for $\bar{h}=3$
and $1/\epsilon=50$, so that when $\rho=0$, we are in Regime II above the
curve of pitchfork (PF) bifurcations from the constant solution shown in
Figure 1. The work of Bertozzi et al. [4] and of Wu and Zheng [44], shows that
strictly positive solutions exist for all values of $|\rho|$, i.e. beyond the
range $\rho\in[-2,2]$ for which we have performed the numerical calculations.
Figure 4 shows that the situation is different when the disjoining pressure is
$\Pi_{\rm SR}$ (with the same values of $\bar{h}$ and $\epsilon$). At
$|\rho|=1$, the upper branches of solutions disappear due to rupture of the
film, so that at some point $x_{0}\in[0,1]$ we have $h(x_{0})=0$ and a
strictly positive solution no longer exists, while the other two branches
coalesce at a saddle-node bifurcation at $|\rho|\approx 5.8$.
Note that in Figures 3 and 4, the non-trivial “solution” on the axis $\rho=0$
is, in fact, a whole $O(2)$-symmetric orbit of solutions predicted by the
analysis leading to Figure 1.
Figure 4: Bifurcation diagram showing $\|h-\bar{h}\|_{2}$ plotted as a
function of $\rho$ when the disjoining pressure is $\Pi_{\rm SR}$, for
$\bar{h}=3$ and $1/\epsilon=50$. The leading order dependence of
$\|h-\bar{h}\|_{2}$ on $\rho$ as $\rho\to 0$, given by (53), is shown with
dashed lines. Note that the upper branches of solutions cannot be extended
beyond $|\rho|=1$ (indicated by filled circles).
Note that when the disjoining pressure is $\Pi_{\rm SR}$, given by (48), we
were unable to use AUTO to continue branches of solutions beyond the rupture
of the film at $|\rho|=1$.
Figure 5: Solutions $h(x)$ when the disjoining pressure is $\Pi_{\rm SR}$ for
$\bar{h}=2$ and $1/\epsilon=30$ for $\rho=0$, 0.97, 0.98, 0.99 and 1, denoted
by “1”, “2”, “3”, “4” and “5”, respectively, showing convergence of strictly
positive solutions to a non-strictly positive one as $\rho\to 1^{-}$.
Figure 5 shows the film approaching rupture as $\rho\to 1^{-}$ at the point where the coefficient of the short-range term vanishes when $\rho=1$, i.e. where $1+\sin(2\pi x)=0$, namely at $x=3/4$. These numerical results are consistent with the theoretical arguments of Bertozzi et al. [4].
Investigation of the possible leading-order balance in (9) when the disjoining
pressure is $\Pi_{\rm SR}$ and $\rho=1$ in the limit $x\to 3/4$ reveals that
$h(x)=O\left((x-3/4)^{2/3}\right)$; continuing the analysis to higher order
yields
$h=(2\pi^{2})^{\frac{1}{3}}\left(x-\frac{3}{4}\right)^{\frac{2}{3}}-\frac{4\left(4\pi^{10}\right)^{\frac{1}{3}}\epsilon^{2}}{27}\left(x-\frac{3}{4}\right)^{\frac{4}{3}}+O\left(\left(x-\frac{3}{4}\right)^{2}\right).$
(54)
Figure 6 shows the detail of the excellent agreement between the solution
$h(x)$ when $\rho=1$ and the two-term asymptotic solution given by (54) close
to $x=3/4$.
Figure 6: Detail near $x=3/4$ of the solution $h(x)$ shown with solid line
when the disjoining pressure is $\Pi_{\rm SR}$ and $\rho=1$, and the two-term
asymptotic solution given by (54) shown with dashed lines for $\bar{h}=2$ and
$\epsilon=1/30$.
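The leading-order coefficient in (54) can also be verified directly (a sketch assuming sympy; not from the paper): with $a=(2\pi^{2})^{1/3}$ and the local approximation $1+\sin(2\pi x)\approx 2\pi^{2}s^{2}$, where $s=x-3/4$, the $O(s^{-2})$ terms in (9) cancel exactly:

```python
import sympy as sp

s, eps = sp.symbols('s epsilon', positive=True)
a = (2 * sp.pi**2) ** sp.Rational(1, 3)        # leading coefficient in (54)
h = a * s ** sp.Rational(2, 3)
# Local form of (9) near x = 3/4 with rho = 1: 1 + sin(2*pi*x) ~ 2*pi^2*s^2.
res = eps**2 * sp.diff(h, s, 2) + (2*sp.pi**2*s**2)/h**6 - 1/h**3
print(sp.limit(sp.simplify(res * s**2), s, 0))  # 0: the O(s^-2) terms cancel
```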
Figures 7 and 8 show the multiplicity of solutions with a unique maximum for
the disjoining pressures $\Pi_{\rm LR}$ and $\Pi_{\rm SR}$, respectively, in
the $(1/\epsilon,\rho)$-plane in the three regimes shown in Figure 1.
Figure 7: Multiplicity of strictly positive solutions with a unique maximum in
the $(1/\epsilon,\rho)$-plane when the disjoining pressure is $\Pi_{\rm LR}$
for (a) $\bar{h}=1.24$ (Regime I), (b) $\bar{h}=1.3$ (Regime III), and (c)
$\bar{h}=2$ (Regime II). Figure 8: Multiplicity of strictly positive
solutions with a unique maximum in the $(1/\epsilon,\rho)$-plane when the
disjoining pressure is $\Pi_{\rm SR}$ for (a) $\bar{h}=1.24$ (Regime I), (b)
$\bar{h}=1.3$ (Regime III), and (c) $\bar{h}=2$ (Regime II).
In Figure 8 the horizontal dashed line at $\rho=1$ indicates rupture of the
film and loss of a smooth strictly positive solution, showing that there are
regions in parameter space where no such solutions exist.
Let us explain in detail the interpretation of Figure 7(c); all of the other
parts of Figure 7 and Figure 8 are interpreted in a similar way.
Figure 9: Bifurcation diagram of steady state solutions with $\bar{h}=2$
(Regime II) for $\rho=0$ (dashed curves) and $\rho=0.005$ (solid curves)
indicating the different branches of steady state solutions.
Each curve in Figure 7(c) is a curve of saddle-node bifurcations. Similar to
Figure 2(c), Figure 9 shows the bifurcation diagram of steady state solutions
with $\bar{h}=2$ (Regime II) for $\rho=0$ and $\rho=0.005$. As $1/\epsilon$ is
increased, we pass successively through saddle-node bifurcations with the
multiplicity of the steady state solutions changing from 1 to 3 to 5 and then
back to 3 again. In Figure 10, we plot the five steady state solutions with a
unique maximum indicated in Figure 9 by (i)–(v).
Figure 10: Steady state solutions on the five branches of solutions indicated
in Figure 9 by (i)–(v).
## 6 Conclusions
In the present work we have investigated the steady state solutions of the
thin-film evolution equation (1) both in the spatially homogeneous case
(9)–(12) and in the spatially non-homogeneous case for two choices of
spatially non-homogeneous Derjaguin disjoining pressure given by (47) and
(48). We have given a physical motivation for our choices of the disjoining
pressure. For reasons explained in the last paragraph of Section 2, we
concentrated on branches of solutions with a unique maximum.
In the spatially homogeneous case (9)–(14), we used the Liapunov-Schmidt reduction of an equation invariant under the action of the $O(2)$ symmetry group to obtain local bifurcation results and to determine the dependence of the direction and nature of the bifurcation in the bifurcation parameter $1/\epsilon$ on the average film thickness $\bar{h}$; our results on the existence of three different bifurcation regimes (namely nucleation, metastable, and unstable) are summarised in Propositions 1 and 2 and in Figure 1 obtained using AUTO.
In the spatially non-homogeneous case (49)–(51), we clarified the $O(2)$
symmetry breaking phenomenon (see Appendix A) and presented imperfect
bifurcation diagrams in Figure 2 and global bifurcation diagrams using the
wettability contrast $\rho$ as a bifurcation parameter for fixed $\epsilon$
and $\bar{h}$ in Figures 3 and 4.
Let us discuss Figure 3 in more detail; for convenience, it is reproduced in
Figure 11 with labelling added to the different branches of strictly positive
steady state solutions with a unique maximum. Below we explain what these
different labels mean.
Figure 11: A sketch of the bifurcation diagram plotted in Figure 3 with the
different solution branches labelled.
Our results can be described using the language of global compact attractors. For more information on attractors in dissipative infinite-dimensional dynamical systems, see the monograph of Hale [14]; for applications in the thin-film context, see Wu and Zheng [44]. In systems such as (4), the global compact attractor of the PDE is the union of the equilibria and their unstable manifolds. In Figures 12 and 13 we sketch the global attractor of (4) for $\epsilon=1/50$ and $\bar{h}=3$, the values used to plot Figure 3. For these values of the parameters the attractor is two-dimensional and we sketch its projection onto a plane.
Figure 12: Sketch of the global attractor for $\rho=0$. The circle represents
the $O(2)$ orbit of steady state solutions and $O$ represents the constant
solution $h(x)=\bar{h}$. Figure 13: Sketch of the global attractor for small
non-zero values of $|\rho|$. The points $A$, $B$, $C$ correspond to the steady
state solutions labelled in Figure 11.
When $\rho=0$, for $1/\epsilon=50$, the attractor is two-dimensional; the
constant solution $h\equiv\bar{h}$ denoted by $O$ has a two-dimensional
unstable manifold and $X$ corresponds to a whole $O(2)$ orbit of steady state
solutions. A sketch of the attractor in this case is shown in Figure 12.
When $|\rho|$ takes small positive values, only two steady state solutions, denoted by $A$ and $C$, remain from the entire $O(2)$ orbit, as discussed in Appendix A, while the constant solution continues to $B$ without change of stability. The resulting attractor is sketched in Figure 13.
Increasing $|\rho|$ causes the steady state solutions $B$ and $C$ to coalesce in a saddle-node bifurcation, so that the attractor degenerates to a single asymptotically stable steady state solution. It would be interesting to understand why this collision of steady state solution branches occurs.
We have also explored the geometry of film rupture which occurs as $\rho\to
1^{-}$ when the disjoining pressure is given by $\Pi_{\rm SR}$; this
phenomenon is shown in detail in Figures 5 and 6.
Finally, in Figures 7 and 8, we showed the results of a two-parameter continuation study in the $(1/\epsilon,\rho)$ plane, showing how the multiplicity of positive steady state solutions changes as the parameters are varied and, in particular, indicating, in the case of the disjoining pressure $\Pi_{\rm SR}$ shown in Figure 8, regions in parameter space where no such solutions exist. We conjecture that in these regions the solution of the unsteady problem with any positive initial condition converges to a weak solution of the thin-film equation with regions in which $h(x)=0$, i.e. solutions with film rupture. For a discussion of such (weak) solutions of thin-film equations in the homogeneous case the reader is referred to the work of Laugesen and Pugh [26].
In the case of disjoining pressure $\Pi_{\rm SR}$, we could not use the
AUTO-07p version of AUTO to continue branches of solutions beyond rupture. It
would be an interesting project to develop such a capability for this powerful
and versatile piece of software.
Figures 8(b) and 8(c) provide numerical evidence for the existence of a curve
of saddle-node bifurcations converging to the point $(0,1)$ in the
$(1/\epsilon,\rho)$ plane; an explanation for this feature of the global
bifurcation diagrams requires further study.
To summarise: our study was primarily motivated by the work of Honisch et al. [17]. While we have clarified the mathematical properties of (9)–(12) and (49)–(51), so that the structure of the bifurcations in Figure 3(a) of that paper for non-zero values of $\rho$ is now understood, many of their other numerical findings are still to be explored mathematically. One example is the stability of the ridge solutions shown in their Figure 5 in the context of the full two-dimensional problem of a substrate with periodic wettability stripes. There is clearly much work still to be done on heterogeneous substrates with more complex wettability geometry.
A final remark that might be of interest to the reader is that the solutions of equations (49)–(51), i.e. the steady state solutions of (4), a degenerate quasi-linear fourth-order PDE, can also be thought of as the steady state solutions of a much simpler (Rubinstein-Sternberg type [31]) second-order semi-linear non-local equation,
$h_{t}=\gamma
h_{xx}+\Pi(h,x)-\frac{1}{L}\int_{0}^{L}\Pi(h,x)\,\hbox{d}{x},\quad 0<x<L.$
(55)
It would be interesting to compare the dynamics of (4) and (55), for example
using the spectral comparison principles of Bates and Fife [3]. For other work
on non-local reaction-diffusion equations such as (55), please see Budd et al.
[7] and the review of Freitas [13].
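To illustrate how such a comparison might be set up numerically, the following is a minimal method-of-lines sketch (in Python) for time-stepping the non-local equation (55) on a periodic domain with $L=1$. The disjoining pressure is taken in the $\Pi_{\rm SR}$ form $(1+\rho\sin(2\pi x))/h^{6}-1/h^{3}$ of (48); the values of $\gamma$, $\rho$ and the mean thickness are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal method-of-lines sketch for the non-local equation (55) on a
# periodic domain of length L = 1.  gamma, rho and the mean film thickness
# are illustrative values, not taken from the paper.
L, n = 1.0, 256
x = np.linspace(0.0, L, n, endpoint=False)
dx = L / n
gamma, rho, h_mean = 0.01, 0.5, 3.0

def Pi(h):
    # Pi_SR(h, x) = (1 + rho*sin(2*pi*x))/h^6 - 1/h^3, cf. (48) and (56)-(58)
    return (1.0 + rho * np.sin(2.0 * np.pi * x)) / h**6 - 1.0 / h**3

def rhs(t, h):
    # gamma*h_xx + Pi - <Pi>; subtracting the mean conserves the film mass
    hxx = (np.roll(h, -1) - 2.0 * h + np.roll(h, 1)) / dx**2
    p = Pi(h)
    return gamma * hxx + p - p.mean()

h0 = h_mean + 0.01 * np.cos(2.0 * np.pi * x)      # perturbed flat film
sol = solve_ivp(rhs, (0.0, 50.0), h0, method="BDF", rtol=1e-8, atol=1e-10)
h_final = sol.y[:, -1]    # long-time state approximates a steady state
```

Steady states of this flow satisfy exactly the non-local steady problem (49) (with $\gamma$ playing the role of $\epsilon^{2}$), so the long-time dynamics of (55) offers an inexpensive way of probing the attractor structure sketched in Figures 12 and 13.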
We are grateful to Prof. U. Thiele (University of Münster) for clarifications
concerning the work of Honisch et al. [17] and for sharing with us the AUTO
codes used in that work which formed the basis of our continuation analysis.
We are also grateful to the two anonymous referees whose remarks helped us to
improve significantly the readability of the present work.
## Appendix A $O(2)$ Symmetry Breaking by Spatial Non-homogeneity
In this Appendix, we present an argument showing that when the wettability contrast is present, i.e. when $\rho\neq 0$, the breaking of the $O(2)$ symmetry which equation (49) with the periodic boundary conditions (51) has for $\rho=0$ leaves only two steady state solutions.
This is, in principle, a known result (see, for example, Chillingworth [8]),
but, since we are not aware of an easily accessible reference, we give the
details here. As before, we set $G(x)=\sin(2\pi x)$. We provide the proof for $\Pi_{\rm SR}$ given by (48); the proof for $\Pi_{\rm LR}$ given by (47) is similar.
For the case of $\Pi_{\rm SR}$, let us rewrite the boundary value problem (49)
in the form
$\epsilon^{2}h_{xx}+f_{1}(h)+\rho f_{2}(h)G(x)-\int_{0}^{1}[f_{1}(h)+\rho
f_{2}(h)G(x)]\,\hbox{d}{x}=0,\;\;0<x<1,$ (56)
where
$f_{1}(h)=\frac{1}{h^{6}}-\frac{1}{h^{3}},$ (57)
and
$f_{2}(h)=\frac{1}{h^{6}},$ (58)
i.e. we separate the spatially homogeneous and spatially non-homogeneous
components of the disjoining pressure. Equation (56) is subject to the
periodic boundary conditions (51).
Suppose that when $\rho=0$ there is an orbit of steady state solutions, i.e. a
continuous closed curve of solutions $h_{0,s}(x)$, parameterised by
$s\in\mathbb{R}/[0,1]$, such that $h_{0,s}(x):=h_{0}(x+s)$, for some function
$h_{0}(x)$, i.e. all these solutions are related by translation. We aim to
understand what remains of this orbit for small non-zero $\rho$.
Fix $s\in\mathbb{R}/[0,1]$. We write
$h(x)=h_{0,s}(x)+\rho h_{1}(x)+O(\rho^{2}).$ (59)
Substituting this expansion into (56) and collecting the $O(\rho)$ terms, we have
$\epsilon^{2}h_{1,xx}+f_{1}^{\prime}(h_{0,s})h_{1}-\int_{0}^{1}f_{1}^{\prime}(h_{0,s})h_{1}\,\hbox{d}{x}=-f_{2}(h_{0,s})G+\int_{0}^{1}f_{2}(h_{0,s})G\,\hbox{d}{x},$ (60)
where, just like $h_{0,s}(x)$, $h_{1}(x)$ also satisfies the periodic boundary
conditions (51).
Now set
$Ku:=\epsilon^{2}u_{xx}+f_{1}^{\prime}(h_{0,s})u-\int_{0}^{1}f_{1}^{\prime}(h_{0,s})u\,\hbox{d}{x},$ (61)
and let $D(K)$, the domain of $K$, be
$D(K)=\left\\{f\in
C^{2}\left(\left[0,1\right]\right)\;|\,f(0)=f(1),f^{\prime}(0)=f^{\prime}(1)\right\\}.$
(62)
The operator $K$ is self-adjoint with respect to the $L^{2}([0,1])$ inner
product. Invoking the Fredholm Alternative [32, Theorem 7.26], we conclude
that (60) has $1$-periodic solutions if and only if the right-hand side of
(60) is orthogonal in $L^{2}([0,1])$ to the solutions of $Ku=0$.
Next, we show that $u:=h_{0,s}^{\prime}$ solves $Ku=0$. Indeed, by differentiating (56) with $\rho=0$ with respect to $x$, we find that $u$ solves the equation
$\epsilon^{2}u_{xx}+f_{1}^{\prime}(h_{0,s})u=0.$ (63)
Integrating this equation over the interval $[0,1]$ and using the periodicity of $u_{x}$, we have that
$\int_{0}^{1}f_{1}^{\prime}(h_{0,s})u\,\hbox{d}{x}=0.$ (64)
Hence, using (64),
$\begin{split}0&=\epsilon^{2}u_{xx}+f_{1}^{\prime}(h_{0,s})u\\ &=\epsilon^{2}u_{xx}+f_{1}^{\prime}(h_{0,s})u-\int_{0}^{1}f_{1}^{\prime}(h_{0,s})u\,\hbox{d}{x}\\ &=Ku.\end{split}$ (65)
Also note that as $h_{0,s}(x)$ satisfies periodic boundary conditions,
$\int_{0}^{1}h_{0,s}^{\prime}(x)\hbox{d}{x}=0.$ (66)
Hence the solvability condition for (60) is
$\int_{0}^{1}h_{0,s}^{\prime}(x)\left[-f_{2}(h_{0,s}(x))G(x)+\int_{0}^{1}f_{2}(h_{0,s}(y))G(y)\,\hbox{d}{y}\right]\hbox{d}{x}=0.$ (67)
By (66), this condition reduces to
$\int_{0}^{1}f_{2}(h_{0,s})h_{0,s}^{\prime}G\,\hbox{d}{x}=0.$ (68)
Now recall that $h_{0,s}(x)=h_{0}(x+s)$, so if we write $F(x+s)=f_{2}(h_{0}(x+s))h_{0}^{\prime}(x+s)$, the function $F(\cdot)$ is $1$-periodic with zero mean. Hence
$F(z)=\sum_{k=1}^{\infty}\left[\alpha_{k}\sin(2k\pi z)+\beta_{k}\cos(2k\pi z)\right].$ (69)
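The projection integral can be evaluated explicitly: substituting (69) into (68), with $F$ evaluated at $x+s$, orthogonality of the trigonometric functions over one period leaves only the $k=1$ terms,
$\int_{0}^{1}F(x+s)\sin(2\pi x)\,\hbox{d}{x}=\frac{1}{2}\left[\alpha_{1}\cos(2\pi s)-\beta_{1}\sin(2\pi s)\right].$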
Therefore, for $G(x)=\sin(2\pi x)$, the solvability condition for (60) becomes
$\alpha_{1}\cos(2\pi s)-\beta_{1}\sin(2\pi s)=0,$ (70)
which has two solutions $s\in\mathbb{R}/[0,1]$. We conclude that there is a solution $h_{1}(x)$ only for two choices of $s\in\mathbb{R}/[0,1]$, that is, that only two solutions of (56) remain from the entire $O(2)$ orbit that exists for $\rho=0$.
## References
* [1] Aimé, J. P. & Ondarçuhu, T. (Eds.) (2013) Nanoscale Liquid Interfaces: Wetting, Patterning and Force Microscopy at the Molecular Scale (1st ed.). Stanford. Jenny Stanford Publishing.
* [2] Ajaev, V. S., Gatapova, E. Y. & Kabov, O. A. (2011) Rupture of thin liquid films on structured surfaces. Physical Review E 84, 041606.
* [3] Bates, P. W. & Fife, P. C. (1990) Spectral comparison principles for the Cahn-Hilliard and phase-field equations, and time scales for coarsening. Physica D: Nonlinear Phenomena 43, 335–48.
* [4] Bertozzi, A. L., Grün, G. & Witelski, T. P. (2001) Dewetting films: bifurcations and concentrations. Nonlinearity 14, 1569–92.
* [5] Brasjen, B. J. & Darhuber, A. A. (2011) Dry-spot nucleation in thin liquid films on chemically patterned surfaces. Microfluidics and Nanofluidics 11, 703–16.
* [6] Braun, R. J., King-Smith, P., Begley, C., Li, L. & Gewecke, N. (2015) Dynamics and function of the tear film in relation to the blink cycle. Progress in Retinal and Eye Research 45, 132–64.
* [7] Budd, C., Dold, B., & Stuart, A. (1993) Blowup in a partial differential equation with conserved first integral. SIAM Journal on Applied Mathematics 53, 718–42.
* [8] Chillingworth, D. (1985) Bifurcation from an orbit of symmetry. In: Pnevmatikos, S. N. (ed.) Singularities and Dynamical Systems. Amsterdam. Elsevier, pp. 285–94.
* [9] Chossat, P. & Lauterbach, R. (2000) Methods in Equivariant Bifurcations and Dynamical Systems. Singapore. World Scientific.
* [10] Craster, R. V. & Matar, O. K. (2009) Dynamics and stability of thin liquid films. Reviews of Modern Physics 81, 1131–98.
* [11] Doedel, E. J. & Oldeman, B. E. (2009) AUTO-07p: Continuation and Bifurcation for Ordinary Differential Equations. Montreal. Concordia University.
* [12] Eilbeck, J. C., Furter, J. E. & Grinfeld, M. (1989) On a stationary state characterization of transition from spinodal decomposition to nucleation behaviour in the Cahn–Hilliard model of phase separation. Physics Letters A 135, 272–75.
* [13] Freitas, P. (1999) Nonlocal reaction-diffusion equations. Fields Institute Communications 21, 187–204.
* [14] Hale, J. K. (2010) Asymptotic Behavior of Dissipative Systems. Providence, RI. American Mathematical Society.
* [15] Glasner, K. B. & Witelski, T. P. (2005) Collision versus collapse of droplets in coarsening of dewetting thin films. Physica D: Nonlinear Phenomena 209, 80–104.
* [16] Golubitsky, M. & Schaeffer, D. G. (1985) Singularities and Groups in Bifurcation Theory, Vol. I. New York. Springer.
* [17] Honisch, C., Lin, T.-S., Heuer, A., Thiele, U. & Gurevich, S. V. (2015) Instabilities of layers of deposited molecules on chemically stripe patterned substrates: ridges versus drops. Langmuir 31, 10618–31.
* [18] Howison, S. D., Moriarty, J. A., Ockendon, J. R., Terrill, E. L. & Wilson, S. K. (1997) A mathematical model for drying paint layers. Journal of Engineering Mathematics 32, 377–94.
* [19] Huppert, H. E. (1982) The propagation of two-dimensional and axisymmetric viscous gravity currents over a rigid horizontal surface. Journal of Fluid Mechanics 121, 43–58.
* [20] Ji, H. & Witelski, T. P. (2017) Finite-time thin film rupture driven by modified evaporative loss. Physica D: Nonlinear Phenomena 342, 1–15.
* [21] Karnaushenko, D., Kang, T., Bandari, V. K., Zhu, F. & Schmidt, O. G. (2020) $3D$ Self-assembled microelectronic devices: concepts, materials, applications. Advances in Materials 32, 1902994.
* [22] Kistler, S. F. & Schweizer, P. M. (Eds.) (1997) Liquid Film Coating: Scientific Principles and Their Technological Implications. New York. Chapman & Hall.
* [23] Kitavtsev, G., Lutz, R. & Wagner, B. (2011) Centre manifold reduction approach for the lubrication equation. Nonlinearity 24, 2347–69.
* [24] Konnur, R., Kargupta, K. & Sharma, A. (2000) Instability and morphology of thin liquid films on chemically heterogeneous substrates. Physical Review Letters 84, 931–34.
* [25] Laugesen, R. S. & Pugh, M. C. (2000) Linear stability of steady states for thin film and Cahn-Hilliard type equations. Archive for Rational Mechanics and Analysis 154, 3–51.
* [26] Laugesen, R. S. & Pugh, M. C. (2000) Properties of steady states for thin film equations. European Journal of Applied Mathematics 11, 293–351.
* [27] Liu, W. & Witelski, T. P. (2020) Steady states of thin film droplets on chemically heterogeneous substrates. IMA Journal of Applied Mathematics 85, 980–1020.
* [28] Oron, A., Davis, S. H. & Bankoff, S. G. (1997) Long-scale evolution of thin liquid films. Reviews of Modern Physics 69, 931–80.
* [29] Pismen, L. M. (2002) Mesoscopic hydrodynamics of contact line motion. Colloids and Surfaces A 206, 11–30.
* [30] Quake, S. R. & Scherer, A. (2000) From micro-to nanofabrication with soft materials. Science 290, 1536–40.
* [31] Rubinstein, J. & Sternberg, P. (1992) Nonlocal reaction–diffusion equations and nucleation. IMA Journal of Applied Mathematics 48, 249–64.
* [32] Rynne, B. P. & Youngson, M. A. (2008) Linear Functional Analysis, 2nd edition. Berlin. Springer.
* [33] Schwartz, L. W. & Eley, R. R. (1998) Simulation of droplet motion on low-energy and heterogeneous surfaces. Journal of Colloid and Interface Science 202, 173–88.
* [34] Sehgal, A., Ferreiro, V., Douglas, J. F., Amis, E. J. & Karim, A. (2002) Pattern-directed dewetting of ultrathin polymer films. Langmuir 18, 7041–48.
* [35] Sharma, A., Konnur, R. & Kargupta, K. (2003) Thin liquid films on chemically heterogeneous substrates: self-organization, dynamics and patterns in systems displaying a secondary minimum. Physica A: Statistical Mechanics and its Applications 318, 262–78.
* [36] Sibley, D. N., Nold, A., Savva, N. & Kalliadasis, S. (2015) A comparison of slip, disjoining pressure, and interface formation models for contact line motion through asymptotic analysis of thin two-dimensional droplet spreading. Journal of Engineering Mathematics, 94, 19–41.
* [37] Starov, V. M. (2013) Disjoining pressure. In: Li, D. (Ed.) Encyclopedia of Microfluidics and Nanofluidics. Berlin. Springer.
* [38] Thiele, U. & Knobloch, E. (2006) On the depinning of a driven drop on a heterogeneous substrate. New Journal of Physics 8, 313.
* [39] Thiele, U., Brusch, L., Bestehorn, M. & Bär, M. (2003) Modelling thin-film dewetting on structured substrates and templates: bifurcation analysis and numerical simulations. European Physical Journal E 11, 255–71.
* [40] Vanderbauwhede, A. (1981) Symmetry and bifurcation from multiple eigenvalues. In: Everitt, W. N. & Sleeman, B. D. (eds). Ordinary and Partial Differential Equations, Berlin. Springer, pp. 356–65.
* [41] Wang, D., Song, Y., Tian, J., Shigu, E. & Haidak, G. (2018) Research on the fluid film lubrication between the piston-cylinder interface. AIP Advances 8, 105330.
* [42] Witelski, T. P. (2020) Nonlinear dynamics of dewetting thin films. AIMS Mathematics 5, 4229–59.
* [43] Witelski, T. P. & Bernoff, A. J. (2000) Dynamics of three-dimensional thin film rupture. Physica D: Nonlinear Phenomena 147, 155–76.
* [44] Wu, H. & Zheng, S. (2007) Global attractor for the 1-D thin film equation. Asymptotic Analysis 51, 101–11.
* [45] Xue, L., Zhang, J. & Han, Y. (2012) Phase separation induced ordered patterns in thin polymer blend films. Progress in Polymer Science 37, 564–94.
* [46] Zhang, Y. (2009) Counting the stationary states and the convergence to equilibrium for the 1-D thin film equation. Nonlinear Analysis: Theory, Methods & Applications 71, 1425–37.
High Energy Physics and Astrophysics Laboratory, Department of Physics, Faculty of Sciences SEMLALIA, Cadi Ayyad University, P.O.B. 2390, Marrakesh, Morocco.
# Giant dipole resonance in Sm isotopes within TDHF method
A. Ait Ben Mennana (e-mail:<EMAIL_ADDRESS>) and M. Oulne (e-mail: <EMAIL_ADDRESS>)
###### Abstract
In this work, we have studied the isovector giant dipole resonance (IVGDR) in even-even Sm isotopes within the time-dependent Hartree-Fock (TDHF) method with four Skyrme forces: SLy6, SVbas, SLy5 and UNEDF1. The approach we follow is similar to the one used in our previous work on the neodymium region (Nd, Z=60) [Physica Scripta (2020)]. We have calculated the dipole strength of ${}^{128-164}\text{Sm}$ and compared it with the available experimental data; an overall agreement between them is obtained. The dipole strength in the neutron-deficient ${}^{128-142}\text{Sm}$ and in the neutron-rich ${}^{156-164}\text{Sm}$ isotopes is predicted. The shape phase transition as well as the shape coexistence in Sm isotopes are also investigated in the light of the IVGDR. In addition, the correlation between the quadrupole deformation parameter $\beta_{2}$ and the relative splitting $\Delta E/\bar{E}_{m}$ of the giant dipole resonance (GDR) spectra is studied. The results confirm that $\Delta E/\bar{E}_{m}$ is proportional to the quadrupole deformation $\beta_{2}$.
## 1 Introduction
Giant resonances (GRs) represent an excellent example of collective modes of many, if not all, particles in the nucleus harakeh2001 . GRs are of particular importance because they currently provide the most reliable information about the bulk behavior of the nuclear many-body system. The so-called isovector giant dipole resonance (IVGDR) is the oldest and best known of the giant resonances, owing to the high selectivity for isovector $E_{1}$ transitions in photo-absorption experiments. Several attempts at a theoretical description of the GDR have been made using the liquid drop model. Among them, Goldhaber and Teller (GT) interpreted it as a collective vibration of the protons moving against the neutrons in the nucleus, with a centroid energy of the form $E_{c}\propto A^{-1/6}$ goldhaber1948 . Somewhat later, Steinwedel and Jensen (SJ) interpreted it as a vibration of the proton fluid against the neutron fluid with a fixed surface, where the centroid energy has the form $E_{c}\propto A^{-1/3}$ speth1981 . The experimental data are fitted by a combination of these two laws berman1975 : in light nuclei, the data follow the law $A^{-1/6}$, while the dependence $A^{-1/3}$ becomes more and more dominant for increasing values of A. Since its first observation bothe1937 , the GDR has been much studied both experimentally (see for example Refs. carlos1971 ; carlos1974 ; berman1975 ; donaldson2018 ) and theoretically (see for example Refs. goeke1982 ; maruhn2005 ; reinhard2008 ; yoshida2011 ; benmenana2020 ).
The GDR spectrum of a nucleus can be used to predict its shape (spherical, prolate, oblate, triaxial). It has a single peak for heavier spherical nuclei, while in light nuclei it is split into several fragments harakeh2001 . In deformed nuclei, the GDR strength is split into two components corresponding to oscillations of neutrons versus protons along and perpendicular to the symmetry axis speth1991 ; harakeh2001 . Several microscopic approaches have been employed to study GDRs in deformed nuclei, such as the separable random-phase approximation (SRPA) reinhard2008 ; reinhard2007c , the time-dependent Skyrme-Hartree-Fock method maruhn2005 ; fracasso2012 , the relativistic quasi-particle random phase approximation (RQRPA) ring2009 and extended quantum molecular dynamics (EQMD) wang2017 . Experimentally, the GDR is induced in various ways, such as photoabsorption carlos1971 ; carlos1974 ; Masur2006 , inelastic scattering donaldson2018 ; ramakrishnan1996 , and $\gamma$-decay gundlach1990 .
The time-dependent Hartree-Fock (TDHF) method dirac1930 has been employed in many works to investigate GRs in nuclei, for which it provides a good approximation. Early TDHF calculations concentrated on the giant monopole resonance (GMR) blocki1979 ; chomaz1987 because it requires only a spherical one-dimensional code. In recent years, with the increase in computer power, large-scale TDHF calculations have become possible with no assumptions on the spatial symmetry of the system maruhn2005 ; maruhn2006 ; stevenson2004 . Such calculations are performed by codes using a fully three-dimensional (3D) Cartesian grid in coordinate space sky3d .
In our previous work benmenana2020 , the TDHF method provided an accurate description of the GDR in the ${}^{124-160}\text{Nd}$ isotopes using four Skyrme forces. We obtained an overall agreement with experiment, with a slight advantage for SLy6 CHABANAT1998 . In this paper, we aim to study another even-even isotopic chain, namely ${}^{128-164}\text{Sm}$, with the four Skyrme forces SLy6 CHABANAT1998 , SLy5 CHABANAT1998 , SVbas reinhard2009 and UNEDF1 kortelainen2012 . The first three forces were used in our previous work benmenana2020 and gave acceptable results for the GDR in Nd isotopes; the new Skyrme force UNEDF1 also provided satisfactory results. Many previous experimental and theoretical works have studied the isotopic chain of samarium (Z=62); from the experimental point of view see for example Ref. carlos1974 , and from the theoretical one Refs. yoshida2011 ; wang2017 .
Besides the study of the GDR, many works (Refs. yoshida2011 ; ring2009 ; tao2013 ) have studied the so-called pygmy dipole resonance (PDR), which corresponds to low-energy E1 strength in nuclei with a pronounced neutron excess. The pygmy mode is regarded as a vibration of the weakly bound neutron skin of a neutron-rich nucleus against the isospin-symmetric core composed of neutrons and protons paar2007 . In Ref. yoshida2011 , the authors studied the PDR in spherical nuclei such as ${}^{144}\text{Sm}$ and in deformed ones such as ${}^{152-154}\text{Sm}$. For spherical nuclei, they found a concentration of the E1 strength at low energy, between 8 and 10 MeV, whereas for deformed nuclei the dipole strength is fragmented into low-energy states. They also showed that nuclear deformation increases the low-lying E1 strength at E $<$ 10 MeV. The PDR mode is outside the scope of the current work, in which we aim at a description of the GDR, which lies in the higher excitation energy range of $\sim$10–20 MeV.
In this paper, the TDHF approximation negele1982 has been applied to study the GDR and the shape evolution in even-even Sm (Z=62) isotopes from mass number A=128 to A=164. This study is carried out with the SKY3D code sky3d , which uses a fully three-dimensional (3D) Cartesian grid in coordinate space with no spatial symmetry restrictions and includes all time-odd terms. Consequently, it is possible to study both spherical and deformed systems within the limitations of mean-field theory. Due to the open-shell nature of these nuclei, pairing and deformation properties must be taken into account. First, a static calculation gives some properties of the ground state of the nucleus, such as the root-mean-square (r.m.s.) radius and the deformation parameters $\beta_{2}$ and $\gamma$. In the dynamic calculation, the ground state of the nucleus is boosted by imposing a dipole excitation to obtain the GDR spectrum and some of its properties (resonance energies, width).
The paper is organized as follows: in Sec. 2, we give a brief description of the TDHF method and of the GDR in deformed nuclei. In Sec. 3, we present the details of the numerical calculations. Our results and discussion are presented in Sec. 4. Finally, Sec. 5 gives the summary.
## 2 Time-Dependent Hartree-Fock (TDHF) method applied to giant resonances
### 2.1 TDHF method
The time-dependent Hartree-Fock (TDHF) approximation has been extensively discussed in several references engel1975 ; kerman1976 ; koonin1977 ; a brief introduction to the method is presented below.
TDHF is a self-consistent mean-field (SCMF) theory which was proposed by Dirac in 1930 dirac1930 . It generalizes static Hartree-Fock (HF) theory and has been very successful in describing dynamic properties of nuclei such as, for example, giant resonances maruhn2005 ; stevenson2004 ; reinhard2007 ; blocki1979 and heavy-ion collisions simenel2018 ; maruhn2006 .
The TDHF equations are determined from the variation of the Dirac action
${}S\equiv
S_{t_{0},t_{1}}[\psi]=\int_{t_{0}}^{t_{1}}dt\bra{\psi(t)}\bigg{(}i\hbar\frac{d}{dt}-\hat{H}\bigg{)}\ket{\psi(t)},$
(1)
where $\ket{\psi}$ is a Slater determinant, $t_{0}$ and $t_{1}$ are the fixed endpoints of the time interval on which the action $S$ is stationary, and $\hat{H}$ is the Hamiltonian of the system. The energy of the system is defined as $E=\bra{\psi}\hat{H}\ket{\psi}$, and we have
${}\bra{\psi}\frac{d}{dt}\ket{\psi}=\sum_{i=1}^{N}\bra{\varphi_{i}}\frac{d}{dt}\ket{\varphi_{i}},$
(2)
where $\ket{\varphi_{i}}$ are the occupied single-particle states. The action $S$ can be expressed as
$S=\int_{t_{0}}^{t_{1}}dt\bigg{(}i\hbar\sum_{i=1}^{N}\bra{\varphi_{i}}\frac{d}{dt}\ket{\varphi_{i}}-E[\varphi_{i}]\bigg{)}=\int_{t_{0}}^{t_{1}}dt\bigg{(}i\hbar\sum_{i=1}^{N}\int dx\,\varphi_{i}^{*}(x,t)\frac{d}{dt}\varphi_{i}(x,t)-E[\varphi_{i}]\bigg{)}.$ (3)
The variation of the action $S$ with respect to the wave functions $\varphi_{i}^{*}$ reads
${}\frac{\delta S}{\delta\varphi_{i}^{*}(x,t)}=0,$ (4)
for each $i=1,\dots,N$, $t_{0}\leq t\leq t_{1}$ and for all $x$. More details can be found, for example, in Refs. kerman1976 ; simenel2012 . We finally obtain the TDHF equations
${}i\hbar\frac{\partial}{\partial t}\varphi_{i}(t)=\hat{h}[\rho(t)]\varphi_{i}(t)\quad\text{for}\quad 1\leq i\leq N,$ (5)
where $\hat{h}$ is the single-particle Hartree-Fock Hamiltonian.
The TDHF equations (5) are solved iteratively with a small time step $\Delta t$, during which the Hamiltonian is assumed to remain constant. To conserve the total energy $E$, it is necessary to use a time-reversal symmetric algorithm, and therefore to estimate the Hamiltonian at time $t+\frac{\Delta t}{2}$ in order to evolve the system between times $t$ and $t+\Delta t$ flocard1978 ; bonche1976
${}\ket{\varphi(t+\Delta t)}\simeq e^{-i\frac{\Delta t}{\hbar}\hat{h}(t+\frac{\Delta t}{2})}\ket{\varphi(t)}.$ (6)
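The following Python fragment is a schematic sketch of this midpoint propagation for a toy one-dimensional lattice model with a density-dependent mean field. It illustrates the algorithm of Eq. (6) only; the Hamiltonian, the coupling g and the grid parameters are illustrative assumptions, not the Skyrme functional or the SKY3D implementation.

```python
import numpy as np
from scipy.linalg import expm

HBARC = 197.327   # MeV fm (with times measured in fm/c)
MASS = 939.0      # nucleon mass in MeV

def hamiltonian(rho, dx=1.0, g=-50.0):
    """Toy single-particle Hamiltonian h[rho] on a 1D grid (in MeV)."""
    n = rho.size
    t = HBARC**2 / (2.0 * MASS * dx**2)        # kinetic energy scale
    h = np.zeros((n, n), dtype=complex)
    np.fill_diagonal(h, 2.0 * t + g * rho)     # density-dependent mean field
    np.fill_diagonal(h[1:], -t)                # nearest-neighbour hopping
    np.fill_diagonal(h[:, 1:], -t)
    return h

def density(phis):
    return sum(np.abs(p)**2 for p in phis)

def tdhf_step(phis, dt):
    """One step of Eq. (6): estimate h at t + dt/2, then propagate."""
    u_half = expm(-1j * (dt / 2.0) * hamiltonian(density(phis)) / HBARC)
    predictor = [u_half @ p for p in phis]     # rough orbitals at t + dt/2
    h_mid = hamiltonian(density(predictor))    # Hamiltonian at t + dt/2
    u_full = expm(-1j * dt * h_mid / HBARC)
    return [u_full @ p for p in phis]          # orbitals at t + dt
```

Production codes typically replace the full matrix exponential by a truncated Taylor expansion, which is cheaper on large 3D grids.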
### 2.2 Giant dipole resonance in deformed nuclei
In deformed axially symmetric nuclei, one of the most spectacular properties of the GDR is its splitting into two components, associated with vibrations of neutrons against protons along (K=0) and perpendicular to (K=1) the symmetry axis. The GDR strength therefore represents a superposition of two resonances with energies $E_{i}\sim R_{i}^{-1}\sim A^{-1/3}$ speth1981 , where $R_{i}$ is the nuclear radius along axis $i$, and even of three resonances in the case of triaxial nuclei. This splitting has been observed experimentally carlos1974 ; berman1975 ; Masur2006 ; donaldson2018 and treated theoretically by different models maruhn2005 ; reinhard2008 ; yoshida2011 . For axially symmetric prolate nuclei, the GDR spectrum presents two peaks: the low-energy peak $E_{z}$ corresponds to the oscillations along the major symmetry axis, and the high-energy peak $E_{x}=E_{y}$ corresponds to the oscillations along the transverse minor axes of the nuclear ellipsoid, due to $E\sim R^{-1}$. For an oblate nucleus, the situation is the opposite of the prolate case. For triaxial nuclei, the oscillations along the three axes are all different, i.e., $E_{x}\neq E_{y}\neq E_{z}$. For spherical nuclei, the vibrations along the three axes are degenerate and their energies coincide, $E_{x}=E_{y}=E_{z}$.
## 3 Details of Calculations
In this work, the GDR in the even-even ${}^{128-164}\text{Sm}$ isotopes has been studied using the code Sky3D (v1.1) sky3d , which solves the HF as well as the TDHF equations for Skyrme interactions SKYRME1958 . Calculations were performed with four Skyrme functionals: SLy6 CHABANAT1998 , SLy5 CHABANAT1998 , SVbas reinhard2009 and UNEDF1 kortelainen2012 . These Skyrme forces are widely used for the ground-state properties (binding energies, radii, …) and the dynamics (such as giant resonances) of nuclei, including deformed ones. In particular, they provide a reasonable description of the GDR: SLy6 maruhn2005 ; reinhard2008 , SVbas reinhard2009 , SLy5 fracasso2012 and UNEDF1 kortelainen2012 . The parameter sets of these functionals used in this study are shown in Table 1.
Table 1: Parameters ($t_{i}$, $x_{i}$, $\sigma$, $W_{0}$) of the Skyrme forces used in this work.
Parameters | UNEDF1 | SVbas | SLy6 | SLy5
---|---|---|---|---
$t_{0}$ (MeV fm$^{3}$) | -2078.328 | -1879.640 | -2479.500 | -2484.880
$t_{1}$ (MeV fm$^{5}$) | 239.401 | 313.749 | -1762.880 | 483.130
$t_{2}$ (MeV fm$^{5}$) | 1574.243 | 112.676 | -448.610 | -549.400
$t_{3}$ (MeV fm$^{3+3\sigma}$) | 14263.646 | 12527.389 | 13673.000 | 13763.000
$x_{0}$ | 0.054 | 0.258 | 0.825 | 0.778
$x_{1}$ | -5.078 | -0.381 | -0.465 | -0.328
$x_{2}$ | -1.366 | -2.823 | -1.000 | -1.000
$x_{3}$ | -0.161 | 0.123 | 1.355 | 1.267
$\sigma$ | 0.270 | 0.300 | 0.166 | 0.166
$W_{0}$ (MeV fm$^{5}$) | 76.736 | 124.633 | 122.000 | 126.000
The first step is a static calculation, which determines the ground state of a given nucleus. This state is obtained by solving the static HF + BCS equations (7) on a three-dimensional (3D) Cartesian mesh, using a damped gradient iteration method on an equidistant grid and without symmetry restrictions sky3d ,
${}\hat{h}\psi_{i}(x)=\epsilon_{i}\psi_{i}(x)\quad\text{for}\quad i=1,\dots,A,$ (7)
where $\hat{h}$ is the single-particle Hamiltonian, and $\epsilon_{i}$ is the single-particle energy of the state $\psi_{i}(x)$ with $x=(\vec{r},\sigma,\tau)$.
We used a cubic box with side $a=24$ fm and a grid spacing of $\Delta x=1.00$ fm in each direction. In the SKY3D code sky3d , the static HF + BCS equations (7) are solved iteratively until convergence is reached, i.e., when, for example, the sum of the single-particle energy fluctuations becomes smaller than a certain value fixed at the beginning of the static calculation. In this study we take $10^{-5}$ as the convergence value, which is sufficient for heavy nuclei (for more details, see Ref. sky3d ). Pairing is treated in the static calculation, which allows one to calculate the pairing energy
$\displaystyle{}E_{pair}=\frac{1}{4}\sum_{q\in\\{p,n\\}}V_{pair,q}\int d^{3}r\,|\xi_{q}|^{2}F(r),$ (8)
where the pairing density $\xi_{q}$ reads sky3d
$\displaystyle{}\xi_{q}(\vec{r})=\sum_{\alpha\in q}\sum_{s}u_{\alpha}v_{\alpha}|\psi_{\alpha}(\vec{r},s)|^{2},$ (9)
where $v_{\alpha}$ and $u_{\alpha}=\sqrt{1-v_{\alpha}^{2}}$ are the occupation and non-occupation amplitudes of the single-particle state $\psi_{\alpha}$, respectively. The function $F=1$ or $F=1-\rho/\rho_{0}$ gives a pure $\delta$-interaction (DI), also called volume pairing (VDI), corresponding to $\rho_{0}\rightarrow\infty$, or a density-dependent $\delta$-interaction (DDDI), respectively, where $\rho_{0}=0.16$ fm$^{-3}$ is the saturation density. $V_{pair,q}$ represents the pairing strength, which is obtained from the force definition in the SKY3D code sky3d .
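As an illustration, a minimal Python sketch of Eqs. (8)–(9) for one nucleon species might look as follows; the array layout is an assumption of this sketch (spin is absorbed into $|\psi_{\alpha}|^{2}$ for brevity), not that of SKY3D.

```python
import numpy as np

def pairing_energy(psi, v, V_pair, dV, rho=None, rho0=0.16):
    """Schematic pairing energy of Eqs. (8)-(9) for one species q.

    psi    : array (n_states, n_points), wave functions sampled on the grid
    v      : BCS occupation amplitudes v_alpha, shape (n_states,)
    V_pair : pairing strength of this species; dV : grid volume element
    rho    : local density; None selects VDI (F = 1), otherwise DDDI
    """
    u = np.sqrt(1.0 - v**2)                                  # u_alpha
    xi = np.sum((u * v)[:, None] * np.abs(psi)**2, axis=0)   # Eq. (9)
    F = 1.0 if rho is None else 1.0 - rho / rho0             # Eq. (8)
    return 0.25 * V_pair * np.sum(np.abs(xi)**2 * F) * dV
```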
In the dynamic calculations, the ground-state wave function obtained from the static calculation is excited by an instantaneous initial dipole boost in order to put the nucleus into the dipole mode maruhn2005 ; simenel2009 ; stevenson2008 ,
${}\varphi_{\alpha}^{(g.s)}(r)\longrightarrow\varphi_{\alpha}(r,t=0)=\exp(ib\hat{D})\varphi_{\alpha}^{(g.s)}(r),$ (10)
where $\varphi_{\alpha}^{(g.s)}(r)$ represents the ground state of the nucleus before the boost, $b$ is the boost amplitude of the studied mode, and $\hat{D}$ is the associated operator. In our case, $\hat{D}$ is the isovector dipole operator defined as
$\hat{D}=\frac{NZ}{A}\bigg{(}\frac{1}{Z}\sum_{p=1}^{Z}\vec{z}_{p}-\frac{1}{N}\sum_{n=1}^{N}\vec{z}_{n}\bigg{)}=\frac{NZ}{A}\bigg{(}\vec{R}_{Z}-\vec{R}_{N}\bigg{)},$ (11)
where $\vec{R}_{Z}$ (resp. $\vec{R}_{N}$) measures the average proton (resp. neutron) position on the z axis.
The spectral distribution of the isovector dipole strength is obtained by applying the boost (10) with a small amplitude $b$, so as to stay well within the linear regime of the excitation. The dipole moment $D(t)=\bra{\psi(t)}\hat{D}\ket{\psi(t)}$ is recorded along the dynamical evolution for a sufficiently long time. Finally, the dipole strength $S_{D}(\omega)$, defined as ring1980
$\displaystyle{}S_{D}(\omega)=\sum_{\nu}\delta(E-E_{\nu})\big{|}\bra{\nu}\hat{D}\ket{0}\big{|}^{2},$ (12)
can be obtained by performing the Fourier transform $D(\omega)$ of the signal $D(t)$.
Some filtering is necessary to avoid the artifacts in the spectra produced by cutting the signal off at a finite final time; the signal should vanish at the end of the simulation time. In practice, we use windowing in the time domain, damping the signal $D(t)$ towards the final time with $\cos^{n}\big{(}\frac{\pi t}{2T_{fin}}\big{)}$ sky3d :
${}D(t)\longrightarrow D_{fil}(t)=D(t)\cos^{n}\bigg{(}\frac{\pi t}{2T_{fin}}\bigg{)},$ (13)
where $n$ represents the strength of the filtering and $T_{fin}$ is the final time of the simulation. More details can be found in Refs. sky3d ; reinhard2006 .
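In Python, this record-window-transform chain might be sketched as follows; it is a minimal illustration of Eqs. (12)–(13) up to an overall normalization, and the energy grid and default filter exponent are arbitrary choices.

```python
import numpy as np

def dipole_strength(d, dt, n_filter=3, e_max=40.0, n_e=400):
    """Strength function from the recorded dipole signal D(t), Eqs. (12)-(13).

    d : samples of D(t) at spacing dt (fm/c); n_filter is n in Eq. (13).
    """
    hbar_c = 197.327                                       # MeV fm
    t = np.arange(d.size) * dt
    window = np.cos(0.5 * np.pi * t / t[-1]) ** n_filter   # Eq. (13)
    d_fil = (d - d[0]) * window           # remove static offset, then damp
    energies = np.linspace(0.0, e_max, n_e)                # MeV
    phases = np.exp(1j * np.outer(energies, t) / hbar_c)
    # up to normalization, S(E) is the imaginary part of the transform
    return energies, -np.imag(phases @ d_fil) * dt
```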
In this work, all the dynamic calculations were performed in a cubic box of 24 × 24 × 24 fm$^{3}$ along the three directions (x, y, z), with a grid spacing of 1 fm. We chose $n_{t}=4000$ time steps with a step size $\Delta t=0.2$ fm/c, so that the final simulation time is $T_{fin}=800$ fm/c. Pairing is frozen in the dynamic calculation, i.e., the BCS occupation numbers are kept at their initial values during the time evolution.
## 4 Results and Discussion
In this section, we present our numerical results: static calculations of some ground-state properties, and dynamic calculations of some properties of the GDR, for the ${}^{128-164}\text{Sm}$ nuclei.
### 4.1 Ground-state properties
The isotopic chain of Sm (Z=62) studied in this work displays a transition from spherical, when the neutron number N is close to the magic number N=82, to axially deformed shapes as N increases or decreases carlos1974 ; meng2005 ; wang2017 ; naz2018 . Among the ground-state properties of nuclei are the deformation parameters $\beta_{2}$ and $\gamma$, which give an idea of the shape of the nucleus ring1980 ; takigawa2017 . These deformation parameters are defined as follows sky3d :
${}\beta=\sqrt{a_{0}^{2}+2a_{2}^{2}},\qquad\gamma=\arctan\bigg{(}\frac{\sqrt{2}a_{2}}{a_{0}}\bigg{)},$ (14)
${}a_{m}=\frac{4\pi}{5}\frac{Q_{2m}}{AR^{2}},\qquad R=1.2A^{1/3}\ \text{(fm)},$ (15)
where $Q_{2m}$ is the quadrupole moment defined as
${}Q_{2m}=\int\rho(\vec{r})r^{2}Y_{2m}(\theta,\varphi)\,d\vec{r}.$ (16)
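A schematic Python implementation of Eqs. (14)–(16) on a Cartesian grid is sketched below, using the real combinations of $Y_{20}$ and $Y_{2\pm 2}$; the normalization convention adopted for the $m=\pm 2$ component is an assumption of this sketch and may differ from the SKY3D routine.

```python
import numpy as np

def deformation(rho, X, Y, Z, A, dV):
    """Bohr-Mottelson (beta, gamma) from a density on a 3D mesh, Eqs. (14)-(16).

    rho is the density on the meshgrid arrays X, Y, Z; A is the mass number
    and dV the volume element of the grid.
    """
    # r^2 Y_20 and Re(r^2 Y_22) written as Cartesian polynomials
    q20 = np.sqrt(5.0 / (16.0 * np.pi)) * np.sum(rho * (2*Z**2 - X**2 - Y**2)) * dV
    q22 = np.sqrt(15.0 / (32.0 * np.pi)) * np.sum(rho * (X**2 - Y**2)) * dV
    R = 1.2 * A ** (1.0 / 3.0)                     # Eq. (15)
    a0 = (4.0 * np.pi / 5.0) * q20 / (A * R**2)
    a2 = (4.0 * np.pi / 5.0) * q22 / (A * R**2)
    beta = np.sqrt(a0**2 + 2.0 * a2**2)            # Eq. (14)
    gamma = np.degrees(np.arctan2(np.sqrt(2.0) * a2, a0))
    return beta, gamma
```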
The deformation parameters ($\beta$, $\gamma$), often called the Bohr-Mottelson parameters, are used as a probe to select the ground state of all the nuclei in this article. Table 2 displays the numerical results obtained for the deformation parameters ($\beta_{2}$, $\gamma$) based on Eq. (14) for the ${}^{128-164}\text{Sm}$ isotopes with the four Skyrme forces, together with the available experimental data from Ref. raman2001 and the HFB calculations based on the D1S Gogny force HFB for comparison. Fig. 1 shows the variation of $\beta_{2}$ as a function of the neutron number N.
Table 2: The deformation parameters ($\beta_{2}$, $\gamma$) calculated with UNEDF1, SVbas, SLy6 and SLy5, compared with the experimental data from Ref. raman2001 and the HFB (Gogny D1S) calculations from Ref. HFB .
Nuclei | UNEDF1 | SVbas | SLy6 | SLy5 | HFB (Gogny) HFB | Exp. raman2001
---|---|---|---|---|---|---
${}^{128}\text{Sm}$ | (0.406; $0.0^{\circ}$) | (0.398; $4.8^{\circ}$) | (0.402; $8.6^{\circ}$) | (0.401; $7.6^{\circ}$) | (0.398; $8.0^{\circ}$) | —–
${}^{130}\text{Sm}$ | (0.393; $0.1^{\circ}$) | (0.377; $0.0^{\circ}$) | (0.381; $0.0^{\circ}$) | (0.381; $0.0^{\circ}$) | (0.377; $0.0^{\circ}$) | —–
${}^{132}\text{Sm}$ | (0.388; $0.0^{\circ}$) | (0.374; $0.0^{\circ}$) | (0.371; $0.0^{\circ}$) | (0.382; $0.0^{\circ}$) | (0.380; $0.0^{\circ}$) | —–
${}^{134}\text{Sm}$ | (0.377; $0.0^{\circ}$) | (0.399; $0.0^{\circ}$) | (0.308; $14.8^{\circ}$) | (0.314; $12.2^{\circ}$) | (0.436; $0.0^{\circ}$) | 0.366
${}^{136}\text{Sm}$ | (0.260; $21.3^{\circ}$) | (0.252; $22.5^{\circ}$) | (0.261; $22.4^{\circ}$) | (0.263; $21.9^{\circ}$) | (0.252; $22.0^{\circ}$) | 0.293
${}^{138}\text{Sm}$ | (0.205; $27.5^{\circ}$) | (0.207; $26.1^{\circ}$) | (0.228; $25.6^{\circ}$) | (0.227; $25.2^{\circ}$) | (0.183; $25.0^{\circ}$) | 0.208
${}^{140}\text{Sm}$ | (0.026; $14.8^{\circ}$) | (0.113; $0.0^{\circ}$) | (0.181; $27.7^{\circ}$) | (0.181; $27.6^{\circ}$) | (0.147; $35.0^{\circ}$) | —–
${}^{142}\text{Sm}$ | (0.000; $20.6^{\circ}$) | (0.000; $14.4^{\circ}$) | (0.001; $0.0^{\circ}$) | (0.003; $17.0^{\circ}$) | (0.000; $0.0^{\circ}$) | —–
${}^{144}\text{Sm}$ | (0.001; $4.0^{\circ}$) | (0.000; $8.5^{\circ}$) | (0.000; $12.7^{\circ}$) | (0.000; $1.5^{\circ}$) | (0.000; $0.0^{\circ}$) | 0.087
${}^{146}\text{Sm}$ | (0.014; $58.0^{\circ}$) | (0.052; $1.2^{\circ}$) | (0.063; $0.0^{\circ}$) | (0.064; $0.7^{\circ}$) | (0.045; $2.0^{\circ}$) | —–
${}^{148}\text{Sm}$ | (0.128; $0.0^{\circ}$) | (0.151; $0.2^{\circ}$) | (0.167; $3.6^{\circ}$) | (0.162; $0.0^{\circ}$) | (0.167; $0.0^{\circ}$) | 0.142
${}^{150}\text{Sm}$ | (0.211; $0.0^{\circ}$) | (0.220; $0.0^{\circ}$) | (0.225; $0.0^{\circ}$) | (0.223; $0.0^{\circ}$) | (0.204; $0.0^{\circ}$) | 0.193
${}^{152}\text{Sm}$ | (0.302; $0.0^{\circ}$) | (0.306; $0.0^{\circ}$) | (0.305; $0.0^{\circ}$) | (0.302; $0.0^{\circ}$) | (0.273; $0.0^{\circ}$) | 0.306
${}^{154}\text{Sm}$ | (0.335; $0.0^{\circ}$) | (0.337; $0.0^{\circ}$) | (0.341; $0.0^{\circ}$) | (0.338; $0.0^{\circ}$) | (0.347; $0.0^{\circ}$) | 0.341
${}^{156}\text{Sm}$ | (0.349; $0.0^{\circ}$) | (0.348; $0.0^{\circ}$) | (0.350; $0.0^{\circ}$) | (0.349; $0.0^{\circ}$) | (0.336; $0.0^{\circ}$) | —–
${}^{158}\text{Sm}$ | (0.357; $0.0^{\circ}$) | (0.356; $0.0^{\circ}$) | (0.362; $0.0^{\circ}$) | (0.363; $0.0^{\circ}$) | (0.351; $0.0^{\circ}$) | —–
${}^{160}\text{Sm}$ | (0.361; $0.0^{\circ}$) | (0.360; $0.0^{\circ}$) | (0.368; $0.0^{\circ}$) | (0.366; $0.0^{\circ}$) | (0.361; $0.0^{\circ}$) | —–
${}^{162}\text{Sm}$ | (0.365; $0.0^{\circ}$) | (0.362; $0.0^{\circ}$) | (0.369; $0.0^{\circ}$) | (0.367; $0.0^{\circ}$) | (0.360; $0.0^{\circ}$) | —–
${}^{164}\text{Sm}$ | (0.367; $0.0^{\circ}$) | (0.363; $0.0^{\circ}$) | (0.373; $0.0^{\circ}$) | (0.369; $0.0^{\circ}$) | (0.360; $0.0^{\circ}$) | —–
Figure 1: The quadrupole deformation parameter $\beta_{2}$ of the ${}^{128-164}\text{Sm}$ isotopes as a function of their neutron number N. The experimental data are from Ref. raman2001 .
From Fig. 1, we can see that the $\beta_{2}$ values of our calculations are generally close to the experimental ones raman2001 . On the other hand, there is agreement between our calculations and the HFB theory based on the D1S Gogny force HFB . In the vicinity of the region where N=82, the $\beta_{2}$ values show minima ($\beta_{2}\simeq 0$), as expected, because all nuclei with the magic number N=82 are spherical. For the ${}^{140}\text{Sm}$ nucleus, we find different results for the four Skyrme forces in this study: for the Skyrme forces SLy6 and SLy5, ${}^{140}\text{Sm}$ has a triaxial shape ($\gamma\simeq 28.0^{\circ}$); it has a prolate shape for SVbas ($\gamma=0.0^{\circ}$), and an approximately spherical form for the UNEDF1 force ($\beta_{2}\simeq 0.026$). For comparison, calculations by Möller et al. moller2008 , based on the finite-range droplet model, predicted the ground state of the ${}^{140}\text{Sm}$ nucleus to be triaxial ($\gamma=30.0^{\circ}$). In Table 2, the ($\beta_{2}$, $\gamma$) values obtained in this work, as well as those of the HFB theory based on the D1S Gogny force HFB and the available experimental data raman2001 , show a shape transition from the spherical ${}^{144}\text{Sm}$ (N=82) to deformed shapes below and above the magic neutron number N=82. For the ${}^{128-144}\text{Sm}$ isotopes below N=82, the isotopic chain exhibits a transition from prolate ($\gamma=0.0^{\circ}$) to spherical shape ($\beta_{2}\simeq 0.000$), passing through triaxial forms ($22.0^{\circ}\leq\gamma\leq 28.0^{\circ}$) for the ${}^{136-140}\text{Sm}$ isotopes; for neutron numbers higher than N=82, both the experimental and the theoretical results show that the prolate deformation increases gradually and then saturates at a value close to $\beta_{2}\simeq 0.368$.
### 4.2 Giant dipole resonance in ${}^{128-164}\text{Sm}$ nuclei
Based on the ground states of the ${}^{128-164}\text{Sm}$ isotopes obtained in the static calculations, we now perform the dynamic calculations of the GDR to obtain some of its properties, as described below.
#### 4.2.1 The time evolution of the dipole moment $D_{m}(t)$
The dipole moment $D_{m}(t)$ defined by Eq. (11) allows one to follow the collective motion of the nucleons along the three directions x, y and z. The time evolution of $D^{i}_{m}(t)$, where i denotes x, y or z, is plotted in Fig. 2 for ${}^{138}\text{Sm}$, ${}^{144}\text{Sm}$ and ${}^{154}\text{Sm}$. We note that the collective motion of the nucleons in the GDR generally takes place along two distinct axes. The oscillation frequency $\omega_{i}$ is related to the nuclear radius $R_{i}$ by $\omega_{i}\propto R_{i}^{-1}$, where i $\in$ {x,y,z}. Fig. 2(a) shows the time evolution of the dipole moment for ${}^{144}\text{Sm}$ and ${}^{154}\text{Sm}$.
Figure 2: The dipole moment $D_{m}(t)$ as a function of the simulation time t (fm/c), calculated with the Skyrme force SLy6, for ${}^{138}\text{Sm}$, ${}^{144}\text{Sm}$ and ${}^{154}\text{Sm}$.
For the ${}^{144}\text{Sm}$ nucleus, the three components $D^{x}_{m}(t)$, $D^{y}_{m}(t)$ and $D^{z}_{m}(t)$ are identical, i.e., the oscillation frequencies along the three axes are equal ($\omega_{x}=\omega_{y}=\omega_{z}$), which confirms that this nucleus has a spherical shape, as we predicted in the static calculations ($\beta_{2}\simeq 0.000$). For the ${}^{154}\text{Sm}$ nucleus, $D^{x}_{m}(t)$ and $D^{y}_{m}(t)$ are identical and differ from $D^{z}_{m}(t)$, i.e., the oscillation frequency along the symmetry z-axis, $\omega_{z}$, is lower than those along the two other axes x and y, which are equal ($\omega_{x}=\omega_{y}$). This confirms that ${}^{154}\text{Sm}$ has a prolate shape, because $\omega_{z}<\omega_{x}=\omega_{y}$ Masur2006 , which is consistent with our static calculations ($\gamma=0.0^{\circ}$). We point out that we found almost the same situation for the other prolate nuclei, namely ${}^{130-134}\text{Sm}$ and ${}^{148-164}\text{Sm}$. In Fig. 2(b), the values of the three components $D^{i}_{m}(t)$ differ from each other in the case of the ${}^{138}\text{Sm}$ nucleus: the oscillation frequencies $\omega_{i}$ along the three axes are all different ($\omega_{x}\neq\omega_{y}\neq\omega_{z}$), which confirms that this nucleus has a triaxial shape, as we predicted in the static calculations ($\gamma\simeq 25.0^{\circ}$). The same situation occurs for ${}^{136}\text{Sm}$. We note also that the time evolution of the dipole moment $D_{m}(t)$ is almost the same for the other Skyrme forces (SLy5, UNEDF1, SVbas), with an exception for some nuclei such as ${}^{140}\text{Sm}$. The periodicity of the three components $D^{i}_{m}(t)$ allows the excitation energies $E_{i}$ to be estimated for the oscillations along each of the three axes. For ${}^{144}\text{Sm}$, we obtain for $D^{x}_{m}(t)$, $D^{y}_{m}(t)$ and $D^{z}_{m}(t)$ the same period, T $\simeq$ 84.3 fm/c, giving an excitation energy $E_{x}=E_{y}=E_{z}\simeq$ 14.70 MeV. This value is slightly lower than the experimental one, $E_{GDR}^{exp.}$=15.3 $\pm$ 0.1 MeV carlos1974 . Table 3 shows the excitation energies for the ${}^{138}\text{Sm}$ and ${}^{154}\text{Sm}$ nuclei with the Skyrme force SLy6.
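The conversion from period to energy uses $E=\hbar\omega=2\pi\hbar/T$; for example, for ${}^{144}\text{Sm}$,
$E=\frac{2\pi\hbar c}{cT}=\frac{2\pi\times 197.33\ \text{MeV fm}}{84.3\ \text{fm}}\approx 14.7\ \text{MeV}.$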
Table 3: The excitation energies along the three axes for ${}^{138}\text{Sm}$ and ${}^{154}\text{Sm}$ with SLy6, obtained from the time evolution of $D^{i}_{m}(t)$.
Nuclei | $E_{x}$ (MeV) | $E_{y}$ (MeV) | $E_{z}$ (MeV)
---|---|---|---
${}^{138}\text{Sm}$ | 14.75 | 16.52 | 13.40
${}^{154}\text{Sm}$ | 15.62 | 15.62 | 11.90
#### 4.2.2 GDR Spectrum
The Fourier transform of the isovector signal $D(t)$ yields the GDR energy spectrum: the spectral strength $S(E)$ (12) is simply the imaginary part of the Fourier transform of $D(t)$.
Figs. 3–6 display the GDR spectra of the ${}^{128-164}\text{Sm}$ isotopes calculated with the four Skyrme forces, compared with the available experimental data carlos1974 . It should be pointed out that experimental data for the Sm isotopes from A=128 to A=142, from A=156 to A=164, and for ${}^{146}\text{Sm}$ are not yet available. The calculated GDR spectra of the ${}^{144-154}\text{Sm}$ isotopes, together with the available experimental data carlos1974 , are shown in Fig. 3. It can be seen that all four Skyrme forces give generally acceptable agreement with experiment, with a slight down-shift of the order of 0.5 MeV for SLy5 and SLy6 in the case of the spherical nucleus ${}^{144}\text{Sm}$ and of the weakly deformed ${}^{148-150}\text{Sm}$ nuclei, and a slight up-shift ($\sim$ 0.5 MeV) for the SVbas force. The agreement is better for the deformed ${}^{152-154}\text{Sm}$ nuclei, where all the Skyrme forces reproduce the deformation splitting; rare-earth nuclei such as samarium with neutron number N$\approx$90 are a well-known example of shape transitions carlos1974 ; maruhn2005 ; benmenana2020 . For ${}^{144}\text{Sm}$ (N=82), the GDR strength has a single-humped shape: the vibrations along the three axes are degenerate, i.e., they have the same resonance energy $E_{i}$ ($E_{x}=E_{y}=E_{z}$), which confirms that this nucleus is spherical, due to the relation $E_{i}\propto R_{i}^{-1}$ where i $\in$ {x,y,z} speth1981 . For the ${}^{148}\text{Sm}$ and ${}^{150}\text{Sm}$ nuclei, the two resonance peaks move slightly apart but the total GDR still presents a single peak, so these are weakly deformed nuclei with a prolate shape. For the ${}^{152}\text{Sm}$ and ${}^{154}\text{Sm}$ nuclei, the total GDR splits into two distinct peaks, which confirms that these nuclei are strongly deformed with a prolate shape, since the oscillations along the major axis (K=0 mode) are characterized by lower frequencies than the oscillations perpendicular to this axis (K=1 mode) speth1991 .
For the isotope ${}^{146}\text{Sm}$, for which we do not have experimental data, SLy6, SLy5 and SVbas give a weakly deformed nucleus ($\beta_{2}\simeq$ 0.06) in which the resonance peaks along the major and the minor axes are very close together, whereas UNEDF1 gives an approximately spherical nucleus ($\beta_{2}\simeq$ 0.01). Calculations in Ref. naz2018 , based on the self-consistent relativistic Hartree-Bogoliubov (RHB) formalism, predicted shape coexistence for ${}^{146}\text{Sm}$. In order to check for shape coexistence wood1992 ; heyde2011 in the ${}^{146}\text{Sm}$ nucleus, we repeated the static calculations several times, starting from different initial deformations, with the SLy6 force. In all cases, we obtained two minima (prolate and oblate), whose properties are displayed in Table 4. We can see that the difference in energy between these two minima is around $\Delta$(B.E)$\simeq$ 0.07 MeV. This is a clear indication of shape coexistence in the ${}^{146}\text{Sm}$ nucleus. According to the value of the deformation parameter $\gamma$, this competition is between oblate ($\gamma=60^{\circ}$) and prolate ($\gamma=0^{\circ}$) shapes, but the deformation is very weak ($\beta_{2}\simeq 0.05$) in both cases. Fig. 4 shows the calculated GDR spectra corresponding to the two minima (prolate, oblate) and confirms this suggestion: the upper panel (Fig. 4(a)) shows an oblate shape for ${}^{146}\text{Sm}$, since the oscillations along the shorter axis (K=0 mode) are characterized by higher energies than the oscillations perpendicular to it ($|K|$=1 mode), while the lower panel (Fig. 4(b)) shows a prolate shape for this nucleus. In both cases, the deformation splitting $\Delta$E between the two peaks is very small, which confirms that this nucleus is very weakly deformed.
Figure 3: (Color online) GDR spectra in the chain ${}^{144-154}\text{Sm}$ calculated with SLy6, SLy5, SVbas and UNEDF1. The solid (red), dashed (green) and dotted-dashed (blue) lines denote the dipole strengths: total, along the long axis, and along the short axis (multiplied by 2), respectively. The total calculated strength is compared with the experimental data carlos1974 , depicted by black solid squares.
Table 4: The ground-state properties of the two minima for the ${}^{146}\text{Sm}$ nucleus.
Properties | Prolate minimum | Oblate minimum
---|---|---
Binding energy (B.E) | -1999.73 MeV | -1999.66 MeV
Root mean square (r.m.s.) radius | 4.970 fm | 4.969 fm
Quadrupole deformation $\beta_{2}$ | 0.063 | 0.048
Deformation parameter $\gamma$ | $0^{\circ}$ | $60^{\circ}$
Figure 4: (Color online) The calculated GDR spectra for ${}^{146}\text{Sm}$ with the Skyrme force SLy6.
Fig. 5 shows the GDR strength in the neutron-deficient ${}^{128-142}\text{Sm}$ isotopes. We can see that the deformation decreases gradually from the well-deformed nucleus ${}^{128}\text{Sm}$ ($\beta_{2}\simeq$ 0.4) to the approximately spherical one ${}^{142}\text{Sm}$ ($\beta_{2}\simeq$ 0.0), i.e., as the neutron number N increases and approaches the magic number N=82. We note that all the Skyrme forces in this work give almost the same GDR spectra, except for ${}^{140}\text{Sm}$. According to the GDR strength along the three axes, the ${}^{128}\text{Sm}$ nucleus is weakly triaxial with SLy6, SLy5 and SVbas, whereas it has a prolate shape with UNEDF1. For the ${}^{130-132}\text{Sm}$ isotopes, all four Skyrme forces predict a prolate shape. For ${}^{134}\text{Sm}$, SVbas and UNEDF1 predict a prolate shape, while SLy5 and SLy6 give a weakly triaxial shape. For the ${}^{136-138}\text{Sm}$ isotopes, we can see that the oscillations along the three axes correspond to different resonance energies $E_{i}$ ($E_{x}\neq E_{y}\neq E_{z}$), which shows that these nuclei are deformed with a triaxial shape. The four Skyrme forces give different results for ${}^{140}\text{Sm}$, as displayed in Fig. 5: the SLy family (SLy5 and SLy6) predicts a triaxial shape, SVbas predicts a prolate shape, while UNEDF1 gives an approximately spherical shape. For ${}^{142}\text{Sm}$, all the Skyrme forces predict a spherical shape, for which the GDR strengths along the three axes are identical, i.e., $E_{x}=E_{y}=E_{z}$.
Figure 5: (Color online) GDR spectra in the isotopic chain ${}^{128-142}\text{Sm}$ calculated with SLy6, SLy5, SVbas and UNEDF1. The solid (red), dashed (green) and dotted-dashed (blue) lines denote the dipole strengths: total, along the long axis, and along the short axis (multiplied by 2, except for ${}^{136-140}\text{Sm}$), respectively. The dotted (magenta) line denotes the strength along the third, intermediate axis in the case of the triaxial nuclei ${}^{136-140}\text{Sm}$.
Fig. 6 shows the GDR strength in the neutron-rich ${}^{156-164}\text{Sm}$ isotopes. We can see that all the Skyrme forces provide quite similar results. From ${}^{156}\text{Sm}$ (N=94) to ${}^{164}\text{Sm}$ (N=102), the deformation gradually grows, and the GDRs acquire a pronounced double-humped shape. Therefore, these nuclei are strongly deformed with a prolate shape, since the oscillation energies along the longer axis (z-axis) are lower than those along the short axes (x and y axes), i.e., $E_{z}<E_{x}=E_{y}$.
Figure 6: (Color online) The GDR spectra in the isotopic chain ${}^{156-164}\text{Sm}$ calculated with SLy6, SLy5, SVbas and UNEDF1. The solid (red), dashed (green) and dotted-dashed (magenta) lines denote the dipole strengths: total, along the long axis, and along the short axis (multiplied by 2), respectively.
In order to compare the results of the different Skyrme forces under
consideration, we plot their GDR spectra in one figure, together with
experimental data. Fig. 7 shows the GDR strength in ${}^{144}\text{Sm}$ and
${}^{154}\text{Sm}$ calculated with the four Skyrme forces, as well as the
experimental data from Ref. carlos1974 . A clear dependence of the GDR spectra
on the Skyrme force can be seen. We note a small shift of the
average peak position of $\sim$ 1 MeV between these forces. The peak energy
obtained with the Skyrme force SVbas is the highest among the
four Skyrme forces. For the spherical nucleus ${}^{144}\text{Sm}$, the Skyrme
force UNEDF1 reproduces the shape and the peak best among the four Skyrme
forces; the agreement is less good with the other forces. The SLy5 and SLy6
forces give very similar results, with the strength exhibiting a slight downshift,
while the SVbas functional gives a slight upshift. For the deformed nucleus
${}^{154}\text{Sm}$, there is an excellent agreement between the different
functionals and the experiment, with a slight upshift of the K=1 mode for the
SVbas force. This dependence can be explained by the fact that it is linked to
certain basic characteristics and nuclear properties of the Skyrme forces, as
shown in Table 5. The isovector effective mass $m_{1}^{*}/m$ is related to the
sum rule enhancement factor $\kappa$ by $m_{1}^{*}/m=1/(1+\kappa)$ berman1975
, i.e., a larger isovector effective mass corresponds to a smaller value
of the enhancement factor. We can easily see that an increase of the factor
$\kappa$ (i.e., a low isovector effective mass $m_{1}^{*}/m$) causes the GDR
strength to shift towards the higher-energy region, as indicated in
Ref. nesterenko2006 for the GDR in ${}^{154}\text{Sm}$, ${}^{238}\text{U}$ and
${}^{254}\text{No}$, and in Ref. oishi2016 for ${}^{174}\text{Yb}$. For
example, the large collective shift in SVbas can be related to its very high
enhancement factor $\kappa$=0.4 compared to the other Skyrme forces. In addition
to the dependence on the enhancement factor $\kappa$, Fig. 7 also shows a
connection between the GDR energy and the symmetry energy $a_{sym}$. The peak energy
of the GDR moves towards the higher-energy region when $a_{sym}$ decreases, as
pointed out in Ref. stone2007 for the GDR in doubly magic ${}^{208}\text{Pb}$, and
in our previous work on Nd isotopes benmenana2020 .
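As a quick numerical cross-check of the relation $m_{1}^{*}/m=1/(1+\kappa)$ against the values listed in Table 5, consider the minimal Python sketch below; the force names and $\kappa$ values are taken from the table, and the snippet is purely illustrative, not part of the published analysis.

```python
# Cross-check the relation m1*/m = 1/(1 + kappa) against the Table 5 values.
kappa = {"SLy6": 0.25, "SLy5": 0.25, "UNEDF1": 0.001, "SVbas": 0.4}

for force, k in kappa.items():
    m_eff = 1.0 / (1.0 + k)  # isovector effective mass m1*/m
    print(f"{force}: kappa = {k:.3f} -> m1*/m = {m_eff:.3f}")

# A larger kappa (SVbas) gives a smaller m1*/m, which is why the SVbas
# GDR strength is shifted towards higher energies, as discussed above.
```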
Figure 7: (Color online) The calculated GDR spectra of ${}^{144}\text{Sm}$ and ${}^{154}\text{Sm}$ with the Skyrme forces UNEDF1, SLy6, SLy5 and SVbas. The experimental data carlos1974 are depicted by triangles.
Table 5: The sum rule enhancement factor $\kappa$, isovector effective mass $m_{1}^{*}/m=1/(1+\kappa)$, and symmetry energy $a_{sym}$ for the Skyrme forces under consideration.
Forces | $m_{1}^{*}/m$ | $\kappa$ | $a_{sym}$ (MeV)
---|---|---|---
SLy6 CHABANAT1998 | 0.80 | 0.25 | 31.96
SLy5 CHABANAT1998 | 0.80 | 0.25 | 32.03
UNEDF1 kortelainen2012 | $\simeq$1.00 | 0.001 | 28.98
SVbas reinhard2009 | 0.715 | 0.4 | 30.00
#### 4.2.3 Relation between deformation splitting $\Delta E$ and quadrupole
deformation $\beta_{2}$
As mentioned above, the GDR strength splits into two peaks for deformed
nuclei. Each peak corresponds to a resonance energy $E_{i}$ of the GDR. We denote
by $E_{1}$ and $E_{2}$ the energies corresponding to the K=0 and K=1 modes,
respectively. The total resonance energy of the giant resonance is defined by the
formula garg2018
${}E_{m}=\frac{\int_{0}^{+\infty}S(E)EdE}{\int_{0}^{+\infty}S(E)dE},$ (17)
where $S(E)$, Eq. (12), is the strength function of the giant resonance. In Table 6, the
resonance energies $E_{1}$ and $E_{2}$ of the ${}^{128-164}\text{Sm}$ nuclei are presented,
including the available experimental data from Ref. carlos1974 . From this
table, we can see an overall agreement between our results and the
experimental data, with a slight advantage for the SLy6 functional. For
instance, for the spherical ${}^{144}\text{Sm}$ we obtain
$E_{GDR}^{SLy6}$=15.05 MeV, which is very close to $E_{GDR}^{Exp.}$=(15.30
$\pm$ 0.10) MeV. Also for deformed nuclei such as ${}^{152}\text{Sm}$ and
${}^{154}\text{Sm}$, the results ($E_{1},E_{2}$) with SLy6 are very close to
those of experiment.
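To make Eq. (17) concrete, the following minimal Python sketch evaluates the centroid energy from a discretized strength function; the two-peak Lorentzian toy $S(E)$ and the energy grid are illustrative assumptions, not the actual TDHF output.

```python
import numpy as np

# Energy grid (MeV) and a toy two-peak Lorentzian strength function,
# standing in for the TDHF strength S(E) of a deformed nucleus.
E = np.linspace(0.0, 40.0, 4001)

def lorentzian(E, E0, Gamma):
    return Gamma / ((E - E0) ** 2 + Gamma ** 2 / 4.0)

S = lorentzian(E, 12.4, 2.0) + 2.0 * lorentzian(E, 15.9, 3.0)

# Eq. (17): E_m = int S(E) E dE / int S(E) dE, via the trapezoid rule.
E_m = np.trapz(S * E, E) / np.trapz(S, E)
print(f"centroid energy E_m = {E_m:.2f} MeV")
```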
Table 6: The resonance energy centroids $E_{1}$ and $E_{2}$ of ${}^{128-164}\text{Sm}$ corresponding to oscillation along the major axis (K=0) and the minor axis (K=1), respectively. The experimental data are from Ref. carlos1974 .
 | UNEDF1 | SVbas | SLy5 | SLy6 | Exp. carlos1974
---|---|---|---|---|---
Nuclei | E1 | E2 | E1 | E2 | E1 | E2 | E1 | E2 | E1 | E2
${}^{\textbf{128}}\textbf{Sm}$ | 13.36 | 17.79 | 13.36 | 17.75 | 12.54 | 16.61 | 12.76 | 16.90 | — | —
${}^{\textbf{130}}\textbf{Sm}$ | 13.32 | 17.68 | 13.46 | 17.64 | 12.58 | 16.49 | 12.82 | 16.76 | — | —
${}^{\textbf{132}}\textbf{Sm}$ | 13.15 | 17.59 | 13.47 | 17.55 | 12.63 | 16.46 | 12.89 | 16.69 | — | —
${}^{\textbf{134}}\textbf{Sm}$ | 13.22 | 17.43 | 13.22 | 17.60 | 12.96 | 16.13 | 13.22 | 16.35 | — | —
${}^{\textbf{136}}\textbf{Sm}$ | 14.00 | 16.84 | 14.20 | 16.91 | 13.27 | 15.87 | 13.50 | 16.11 | — | —
${}^{\textbf{138}}\textbf{Sm}$ | 14.34 | 16.51 | 14.47 | 16.66 | 13.48 | 15.70 | 13.70 | 15.95 | — | —
${}^{\textbf{140}}\textbf{Sm}$ | 15.42 | 15.73 | 14.93 | 16.25 | 13.73 | 15.50 | 13.95 | 15.72 | — | —
${}^{\textbf{142}}\textbf{Sm}$ | 15.59 | 15.59 | 15.78 | 15.78 | 14.80 | 14.83 | 15.03 | 15.04 | — | —
${}^{\textbf{144}}\textbf{Sm}$ | 15.57 | 15.57 | 15.79 | 15.79 | 14.84 | 14.84 | 15.05 | 15.05 | 15.30$\pm$ 0.1 | —
${}^{\textbf{146}}\textbf{Sm}$ | 15.27 | 15.45 | 15.45 | 15.85 | 14.15 | 14.95 | 14.34 | 15.16 | — | —
${}^{\textbf{148}}\textbf{Sm}$ | 14.07 | 15.69 | 14.25 | 16.15 | 13.34 | 15.24 | 14.29 | 15.47 | 14.80$\pm$ 0.1 | —
${}^{\textbf{150}}\textbf{Sm}$ | 13.40 | 15.86 | 13.62 | 16.31 | 12.91 | 15.38 | 13.06 | 15.59 | 14.60$\pm$ 0.1 | —
${}^{\textbf{152}}\textbf{Sm}$ | 12.80 | 16.07 | 13.14 | 16.65 | 12.46 | 15.62 | 12.60 | 15.82 | 12.45$\pm$ 0.1 | 15.85$\pm$ 0.1
${}^{\textbf{154}}\textbf{Sm}$ | 12.53 | 16.06 | 12.93 | 16.98 | 12.23 | 15.70 | 12.37 | 15.91 | 12.35$\pm$ 0.1 | 16.10$\pm$ 0.1
${}^{\textbf{156}}\textbf{Sm}$ | 12.36 | 15.97 | 12.80 | 16.55 | 12.12 | 15.64 | 12.26 | 15.82 | — | —
${}^{\textbf{158}}\textbf{Sm}$ | 12.22 | 15.84 | 12.69 | 16.47 | 12.01 | 15.60 | 12.15 | 15.77 | — | —
${}^{\textbf{160}}\textbf{Sm}$ | 12.08 | 15.69 | 12.35 | 16.37 | 11.87 | 15.51 | 12.07 | 15.69 | — | —
${}^{\textbf{162}}\textbf{Sm}$ | 11.96 | 15.53 | 12.52 | 16.26 | 11.87 | 15.40 | 11.99 | 15.57 | — | —
${}^{\textbf{164}}\textbf{Sm}$ | 11.84 | 15.37 | 12.44 | 16.14 | 11.82 | 15.33 | 11.95 | 15.50 | — | —
Fig. 8 displays the evolution of the resonance energies ($E_{1}$, $E_{2}$) as a
function of the neutron number N from ${}^{128}\text{Sm}$ (N=66) to
${}^{164}\text{Sm}$ (N=102). We can see, for all four Skyrme forces, that
the resonance energy $E_{1}$ along the major axis (K=0 mode) increases with
the neutron number N (i.e., with the mass number A) up to the region around N=82 (magic
number) and then tends to decrease. The opposite happens for the resonance
energy $E_{2}$: it decreases with increasing N until N=82, and
then gradually increases. We can clearly see that SLy6 reproduces the
experimental data best among the four Skyrme forces. It was shown to provide a
satisfying description of the GDR for spherical and deformed nuclei
nesterenko2008 ; reinhard2008 . The SVbas functional gives somewhat higher
values of $E_{1}$ and $E_{2}$ than the other forces due to its large
enhancement factor $\kappa$ ($\kappa$=0.4), as discussed above.
Figure 8: (Color online) The peak positions $E_{1}$ and $E_{2}$ of the GDR in
${}^{128-164}\text{Sm}$ along the major axis (square symbols) and the minor axis
(circle symbols), respectively. The experimental data are depicted by black
squares ($E_{1}$) and circles ($E_{2}$).
In Fig. 9, we plot the evolution of the GDR-splitting value $\Delta
E=E_{2}-E_{1}$ as a function of the neutron number N. It can easily be seen,
for all four Skyrme forces, that the GDR splitting $\Delta$E decreases
gradually with increasing N and then increases again. It takes the minimum
value $\Delta E$=0 at N=82 (magic number), which corresponds to the spherical
nucleus ${}^{144}\text{Sm}$, and reaches a maximum for strongly deformed
nuclei such as ${}^{164}\text{Sm}$. This result confirms that the splitting of the
GDR is related to the deformation structure of nuclei.
Figure 9: (Color online) The GDR-splitting $\Delta E$ as a function of the
neutron number N for ${}^{128-164}\text{Sm}$ nuclei calculated with SLy6,
SVbas, SLy5 and UNEDF1.
Since the GDR splitting is caused by the deformation, it is possible to relate
the nuclear deformation parameter $\beta_{2}$ to the ratio $\Delta
E/\bar{E}$, where $\bar{E}$ is the mean resonance energy. Fig. 10 displays the
correlation between the quadrupole deformation $\beta_{2}$ and $\Delta
E/\bar{E}$ for the ${}^{128-164}\text{Sm}$ nuclei calculated with the Skyrme
forces under consideration. We can see, for all four Skyrme forces, that
there is an almost linear relationship between $\Delta E/\bar{E}$ and
$\beta_{2}$, i.e.,
${}\Delta E/\bar{E}\simeq a\,\beta_{2},$ (18)
where $a$ is a parameter that depends slightly on the Skyrme force. This
confirms that the size of the GDR splitting is proportional to the quadrupole
deformation parameter $\beta_{2}$. The relation (18) was already studied in
Refs. okamoto1958 ; ring2009 ; benmenana2020 .
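A minimal sketch of how the slope $a$ in relation (18) can be extracted by a least-squares fit is given below; the $(\beta_{2},\ \Delta E/\bar{E})$ pairs are illustrative placeholders, not the computed Sm values.

```python
import numpy as np

# Illustrative (beta_2, Delta E / E_bar) pairs; in the actual analysis these
# come from the TDHF deformations and the fitted GDR peak energies.
beta2 = np.array([0.00, 0.10, 0.20, 0.28, 0.34])
ratio = np.array([0.00, 0.09, 0.17, 0.25, 0.30])

# Least-squares fit of ratio = a * beta2 (no intercept), cf. relation (18).
a = np.sum(beta2 * ratio) / np.sum(beta2 ** 2)
print(f"fitted slope a = {a:.3f}")
```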
Figure 10: (Color online) The correlation between the deformation parameter
$\beta_{2}$ and the ratio $\Delta E/\bar{E}$. Circles denote the data for the
Sm isotopes and lines are the fitting results.
## 5 Conclusion
The isovector giant dipole resonance (IVGDR) has been investigated in the
isotopic chain of samarium (Sm). The study covers the even-even Sm isotopes from
${}^{128}\text{Sm}$ to ${}^{164}\text{Sm}$. The investigations have been carried out
within the framework of the time-dependent Hartree-Fock (TDHF) method based on the
Skyrme functional. The calculations were performed with four Skyrme forces:
SLy6, SLy5, SVbas and UNEDF1. In static calculations, ground-state properties
such as the deformation parameters ($\beta_{2},\gamma$) have been
calculated using the SKY3D code sky3d . In dynamic calculations, the dipole
moment $D_m(t)$ and the GDR strength are calculated and compared with the
available experimental data carlos1974 . The results obtained show that the TDHF
method can reproduce the shape and the peak of the GDR spectra. All four
Skyrme forces generally reproduce the average position of the GDR strength,
with a small shift depending on the Skyrme force used. Among these forces, the
agreement is best with SLy6. The GDR strengths in the
${}^{128-142}\text{Sm}$, ${}^{146}\text{Sm}$ and ${}^{156-164}\text{Sm}$
nuclei are also predicted in this work.
Finally, some properties of the GDR ($\bar{E}$, $E_{1}$, $E_{2}$, $\Delta E$)
have been calculated with the four Skyrme forces. The results with SLy6 were
closest to the experimental data among the four forces. A
correlation between the ratio $\Delta E/\bar{E}$ and the quadrupole
deformation parameter $\beta_{2}$ was found. For all Skyrme forces, we have
found the relation $\Delta E/\bar{E}=a\,\beta_{2}+b$ with a negligible value
of $b$.
In the light of the successful description of the GDR in deformed nuclei with
the TDHF method, it is expected that the latter can also be applied to
treat shape coexistence, as we predicted for ${}^{146}\text{Sm}$ with
the SLy6 force.
## References
* (1) M. N. Harakeh and A. van der Woude, Giant Resonances: fundamental high-frequency modes of nuclear excitation, vol. 24. Oxford University Press on Demand, (2001).
* (2) M. Goldhaber and E. Teller Phys. Rev., vol. 74, p. 1046, (1948).
* (3) J. Speth and A. van der Woude Reports on Progress in Physics, vol. 44, p. 719, (1981).
* (4) B. L. Berman and S. Fultz Reviews of Modern Physics, vol. 47, p. 713, (1975).
* (5) W. Bothe and W. Gentner Z. Phys., vol. 106, p. 236, (1937).
* (6) P. Carlos, H. Beil, R. Bergere, A. Lepretre, and A. Veyssiere Nuclear Physics A, vol. 172, p. 437, (1971).
* (7) P. Carlos, H. Beil, R. Bergere, A. Lepretre, A. De Miniac, and A. Veyssiere Nuclear Physics A, vol. 225, p. 171, (1974).
* (8) L. Donaldson, C. Bertulani, J. Carter, et al. Physics Letters B, vol. 776, p. 133, (2018).
* (9) K. Goeke and J. Speth Annual Review of Nuclear and Particle Science, vol. 32, p. 65, (1982).
* (10) J. A. Maruhn, P. G. Reinhard, P. D. Stevenson, J. R. Stone, and M. R. Strayer Phys. Rev. C, vol. 71, p. 064328, (2005).
* (11) W. Kleinig, V. O. Nesterenko, J. Kvasil, P.-G. Reinhard, and P. Vesely Phys. Rev. C, vol. 78, p. 044313, (2008).
* (12) K. Yoshida and T. Nakatsukasa Phys. Rev. C, vol. 83, p. 021304, (2011).
* (13) A. A. B. Mennana, Y. E. Bassem, and M. Oulne Physica Scripta, vol. 95, p. 065301, (2020).
* (14) J. Speth (ed.), Electric and Magnetic Giant Resonances in Nuclei, vol. 7. World Scientific, (1991).
* (15) V. Nesterenko, W. Kleinig, J. Kvasil, P. Vesely, and P.-G. Reinhard International Journal of Modern Physics E, vol. 16, p. 624, (2007).
* (16) S. Fracasso, E. B. Suckling, and P. Stevenson Physical Review C, vol. 86, p. 044303, (2012).
* (17) D. P. n. Arteaga, E. Khan, and P. Ring Phys. Rev. C, vol. 79, p. 034311, (2009).
* (18) S. S. Wang, Y. G. Ma, X. G. Cao, W. B. He, H. Y. Kong, and C. W. Ma Phys. Rev. C, vol. 95, p. 054615, (2017).
* (19) V. M. Masur and L. M. Mel’nikova Physics of Particles and Nuclei, vol. 37, p. 923, (2006).
* (20) E. Ramakrishnan, T. Baumann, et al. Physical Review Letters, vol. 76, p. 2025, (1996).
* (21) J. Gundlach, K. Snover, J. Behr, et al. Physical Review Letters, vol. 65, p. 2523, (1990).
* (22) P. A. M. Dirac Mathematical Proceedings of the Cambridge Philosophical Society, vol. 26, p. 376, (1930).
* (23) J. Błocki and H. Flocard Physics Letters B, vol. 85, p. 163, (1979).
* (24) P. Chomaz, N. Van Giai, and S. Stringari Physics Letters B, vol. 189, p. 375, (1987).
* (25) J. A. Maruhn, P.-G. Reinhard, P. D. Stevenson, and M. R. Strayer Phys. Rev. C, vol. 74, p. 027601, (2006).
* (26) P. Stevenson, M. Strayer, J. Rikovska Stone, and W. Newton International Journal of Modern Physics E, vol. 13, p. 181, (2004).
* (27) B. Schuetrumpf, P.-G. Reinhard, P. Stevenson, A. Umar, and J. Maruhn Computer Physics Communications, vol. 229, p. 211, (2018).
* (28) E. Chabanat, P. Bonche, P. Haensel, J. Meyer, and R. Schaeffer Nuclear Physics A, vol. 635, p. 231, (1998).
* (29) P. Klüpfel, P.-G. Reinhard, T. J. Bürvenich, and J. A. Maruhn Phys. Rev. C, vol. 79, p. 034310, (2009).
* (30) M. Kortelainen, J. McDonnell, W. Nazarewicz, P.-G. Reinhard, J. Sarich, N. Schunck, M. V. Stoitsov, and S. M. Wild Phys. Rev. C, vol. 85, p. 024304, (2012).
* (31) C. Tao, Y. Ma, G. Zhang, X. Cao, D. Fang, H. Wang, et al. Physical Review C, vol. 87, no. 1, p. 014621, (2013).
* (32) N. Paar, D. Vretenar, E. Khan, and G. Colo Reports on Progress in Physics, vol. 70, no. 5, p. 691, (2007).
* (33) J. W. Negele Reviews of Modern Physics, vol. 54, p. 913, (1982).
* (34) Y. Engel, D. Brink, K. Goeke, S. Krieger, and D. Vautherin Nuclear Physics A, vol. 249, p. 215, (1975).
* (35) A. Kerman and S. Koonin Annals of Physics, vol. 100, p. 332, (1976).
* (36) S. E. Koonin, K. T. R. Davies, V. Maruhn-Rezwani, H. Feldmeier, S. J. Krieger, and J. W. Negele Phys. Rev. C, vol. 15, p. 1359, (1977).
* (37) P.-G. Reinhard, L. Guo, and J. Maruhn The European Physical Journal A, vol. 32, p. 19, (2007).
* (38) C. Simenel and A. Umar Progress in Particle and Nuclear Physics, vol. 103, p. 19, (2018).
* (39) C. Simenel The European Physical Journal A, vol. 48, p. 152, (2012).
* (40) H. Flocard, S. E. Koonin, and M. S. Weiss Phys. Rev. C, vol. 17, p. 1682, (1978).
* (41) P. Bonche, S. Koonin, and J. W. Negele Phys. Rev. C, vol. 13, p. 1226, (1976).
* (42) T. Skyrme Nuclear Physics, vol. 9, p. 615, (1958).
* (43) C. Simenel and P. Chomaz Phys. Rev. C, vol. 80, p. 064309, (2009).
* (44) J. M. Broomfield and P. D. Stevenson Journal of Physics G: Nuclear and Particle Physics, vol. 35, p. 095102, (2008).
* (45) P. Ring and P. Schuck, The nuclear many-body problem. Springer-Verlag, (1980).
* (46) P.-G. Reinhard, P. D. Stevenson, D. Almehed, J. A. Maruhn, and M. R. Strayer Phys. Rev. E, vol. 73, p. 036709, (2006).
* (47) J. Meng, W. Zhang, S. Zhou, H. Toki, and L. Geng The European Physical Journal A-Hadrons and Nuclei, vol. 25, p. 23, (2005).
* (48) T. Naz, G. Bhat, S. Jehangir, S. Ahmad, and J. Sheikh Nuclear Physics A, vol. 979, p. 1, (2018).
* (49) N. Takigawa and K. Washiyama, Fundamentals of Nuclear Physics. Springer Japan, (2017).
* (50) S. Raman, C. Nestor, and P. Tikkanen Atomic Data and Nuclear Data Tables, vol. 78, p. 1, (2001).
* (51) J.-P. Delaroche, M. Girod, J. Libert, H. Goutte, S. Hilaire, S. Péru, N. Pillet, and G. Bertsch Physical Review C, vol. 81, p. 014303, (2010).
* (52) P. Möller, R. Bengtsson, B. Carlsson, P. Olivius, T. Ichikawa, H. Sagawa, and A. Iwamoto Atomic Data and Nuclear Data Tables, vol. 94, p. 758, (2008).
* (53) J. Wood, K. Heyde, W. Nazarewicz, M. Huyse, and P. Van Duppen Physics reports, vol. 215, p. 101, (1992).
* (54) K. Heyde and J. L. Wood Reviews of Modern Physics, vol. 83, p. 1467, (2011).
* (55) V. Nesterenko, W. Kleinig, J. Kvasil, P. Vesely, P.-G. Reinhard, and D. Dolci Physical Review C, vol. 74, p. 064306, (2006).
* (56) T. Oishi, M. Kortelainen, and N. Hinohara Phys. Rev. C, vol. 93, p. 034329, (2016).
* (57) J. R. Stone and P.-G. Reinhard Progress in Particle and Nuclear Physics, vol. 58, p. 587, (2007).
* (58) U. Garg and G. Colò Progress in Particle and Nuclear Physics, vol. 101, p. 55, (2018).
* (59) V. Nesterenko, W. Kleinig, J. Kvasil, P. Vesely, and P.-G. Reinhard International Journal of Modern Physics E, vol. 17, p. 89, (2008).
* (60) K. Okamoto Phys. Rev., vol. 110, p. 143, (1958).
PASA 2024
# Resolving VLBI correlator ambiguity in the time delay model improves
precision of geodetic measurements
O. Titov1, A. Melnikov2 and Y. Lopez3 1Geoscience Australia, Canberra,
Australia 2Institute of Applied Astronomy of Russian Academy of Science,
Saint-Petersburg, Russia 3University of Tasmania, Hobart, Australia
###### Abstract
The modern Very Long Baseline Interferometry (VLBI) relativistic delay model,
as documented in the IERS Conventions, refers to the time epoch when the signal
passes one of the two stations of an interferometer baseline (selected arbitrarily
from the pair of stations and called the “reference station”, or "station 1").
This model is consistent with the correlation procedure used before the year
2002. However, since 2002 a new correlation procedure has been used that produces
VLBI group delays referred to the time epoch of signal passage at the geocenter.
A corresponding correction to the conventional VLBI model delay
has to be introduced. However, this correction has not been thoroughly
presented in peer-reviewed journals, and different approaches are used at the
correlators to calculate the final group delays officially published in the
IVS database. This may cause an inconsistency of up to 6 ps for ground-based VLBI
experiments between the group delay obtained by the correlator and the
geometrical model delay from the IERS Conventions used in data analysis
software. Moreover, a miscalculation of the signal arrival moment at the
"reference station" could result in a larger modelling error (up to 50 ps). This
paper presents the justification of the correction due to the transition between
the two epochs, derived from the Lorentz transformation, and an approach to
model the uncertainty in the calculation of the signal arrival moment.
Both changes are particularly essential for upcoming broadband
geodetic VLBI observations.
###### doi:
10.1017/pas.2024.xxx
###### keywords:
IVS – broadband Very Long Baseline Interferometry (VLBI) – relativity –
Geodesy – Lorentz transformation – reference radio sources
## 1 INTRODUCTION
The Very Long Baseline Interferometry (VLBI) technique measures the difference
between the arrival times of a signal from a distant radio source at two radio
telescopes (Schuh & Behrend (2012)). The signal is recorded at each radio
telescope together with time marks from independent hydrogen masers. Because
the radio telescopes are separated by a few hundred or thousand kilometres,
the plane wave front passes the first telescope earlier than the second one. This
difference in the arrival time of the signal at the two radio telescopes is known
as the time delay, and the frequency shift due to the relative motion of the
telescopes around the geocentre is known as the delay rate.
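As a minimal illustration of the plane-wave geometry behind the time delay, the Python sketch below evaluates the first-order delay $\tau=-(\boldsymbol{b}\cdot\boldsymbol{s})/c$; the station coordinates and source direction are placeholder values, and the relativistic refinements discussed later are deliberately omitted.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Placeholder geocentric station positions (m) and a unit source direction.
r1 = np.array([-5_543_838.0, -2_054_566.0, 2_387_809.0])  # station 1
r2 = np.array([ 1_130_730.0, -4_831_245.0, 3_994_228.0])  # station 2
s  = np.array([0.3, -0.5, 0.8])                           # source direction
s /= np.linalg.norm(s)

# First-order plane-wave delay tau = -(b . s)/c; the correlator and the
# relativistic model refine this quantity to the picosecond level.
b = r2 - r1
tau = -np.dot(b, s) / C
print(f"geometric delay: {tau * 1e3:.3f} ms")
```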
The time delay and delay rate are found during cross-correlation of the two
independent records. There are two types of correlators (XF and FX) based on
the order of the mathematical operations – cross-correlation (X) and Fourier
transformation (F). Baseline-based correlators are designed as XF type
correlators, and station-based correlators are FX type correlators. For the
baseline-based XF-type MarkIII correlator used before 2002, the observables
referred to the position of one of the two stations (station 1). For the
station-based FX-type MarkIV correlator, all observables for all baselines in
a single multi-baseline scan are referred to the geocentre as a common
reference point. As 1 ps precision is required for the time delay calculation,
all first-order and second-order effects of special relativity should be taken
into account.
One of the goals of the International VLBI Service activities is to achieve
1-mm accuracy from the analysis of routine geodetic VLBI sessions. The
accuracy of the daily scale factor improved dramatically in 2002 when the
MarkIII correlator was replaced by the MarkIV correlator (Titov & Krásná (2018)).
However, this value still varies by about 3-4 mm despite technological
developments since 2002.
One possible reason for the lack of improvement in accuracy is the
inconsistency between the VLBI observable group delays and the relativistic
delay model developed in the 1980s-90s and published in the IERS Conventions 2010
(Petit & Luzum (2010)). The transition from the MarkIII to the MarkIV correlator
was not followed by any changes in the IERS Conventions model, which still
refers to the epoch of the wavefront passage at station 1. Thus, it remains
consistent with the XF-type correlators. To make the output delay of the FX-
correlator consistent with the IERS Conventions 2010 model (XF-type), an
additional correction needs to be applied. Unfortunately, this correction has
called “subtle” (Whitney (2000)); however, it reaches 20 ns, which is quite
significant. Corey (2000) developed a simple geometric approach under
assumption of the finiteness of the speed of light to obtain this correction,
but his final equation comprised a major term only, while several minor terms
were not included.
In this paper, it is emphasised that the relativistic correction due to the
change of the reference epoch definition should be derived from the Lorentz
transformation to secure the 1-ps accuracy. Therefore, the final equation for
the recommended group delay should include some minor relativistic terms, due
to coupling of the barycentric and geocentric velocities of the
radio telescopes, to be added to the version by Corey (2000). A detailed
development of the correction based on the Lorentz transformation is given in
the Appendix. This correction is essential from the theoretical point of view;
however, its impact on the geodetic results is negligible for ground-based
baselines (less than 1 mm).
A more serious problem is caused by the uncertainty in the signal arrival time
as calculated by the correlators, even if the problem of the epoch calculation
is fixed. Within the adopted procedure, for a single multi-station scan this
time is common for all stations and is usually rounded to an integer number of
seconds. Meanwhile, the actual signal arrival time in a multi-station scan
is individual for each station; the output group delay is converted to the
time common to all stations within one scan using a reasonable polynomial
approximation. Therefore, the final output delay for each baseline is referred
to the common time of the scan. Theoretically, this output delay should be
perfectly consistent with the delay at the time of the signal arrival at the
"reference station" of each baseline. However, this is not guaranteed, due to
the uncertainty of the reference epoch definition (discussed in the Appendix)
and hidden numerical issues during the polynomial approximation.
To estimate an additional correction, the standard parametric model should
be extended. For each scan we have a time of signal arrival (common to
$N$ stations) and a set of $N(N-1)/2$ time delays for all baselines. Instead
of seeking $N(N-1)/2$ errors in the delays themselves, it is easier
to treat the signal arrival time as the parameter to be updated, assuming the
delays are errorless. A possible approach to model this type of inconsistency is
presented analytically in Finkelstein, Kreinovich, & Pandey (1983). A second
order term in Equation (18) may be generalised at 1-ps accuracy in the form
$\displaystyle\delta{\tau_{12}}=\frac{(\boldsymbol{b}\cdot\boldsymbol{s})}{c^{2}}\,\frac{((\epsilon\boldsymbol{w_{1}}+\boldsymbol{w_{2}})\cdot\boldsymbol{s})}{1+\epsilon}$
(1)
The case $\epsilon=0$ corresponds to the selection of the reference clock at
station 1, and the case $\epsilon=\infty$ corresponds to the selection of the
reference clock at station 2. The relativistic group delay model from the IERS
Conventions has an intrinsic assumption that $\epsilon=0$. A violation of this
assumption results in a small deviation of $\epsilon$ from zero. For a
small value of $\epsilon$ it can be parametrized with the partial derivative
$\displaystyle\frac{\partial\delta{\tau_{12}}}{\partial\epsilon}=\frac{(\boldsymbol{b}\cdot\boldsymbol{s})((\boldsymbol{w_{1}}-\boldsymbol{w_{2}})\cdot\boldsymbol{s})}{c^{2}}$
(2)
By its analytical representation, this new parameter $\epsilon$ should be
assigned to the group of parameters that model the clock instability (offset,
rate, 2nd derivative, etc.). In total, $(N-1)$ parameters should be added to
the traditional procedure of VLBI delay modelling.
The parameters $\epsilon$ can be estimated from Equation (2) by least squares,
individually for each VLBI station clock except for the clock at the
network "reference station", which is assumed to be errorless. Then, for two
arbitrary stations ($i$ and $j$), the corresponding delay is calculated as follows:
$\displaystyle\delta{\tau_{ij}}=(\epsilon_{i}-\epsilon_{j})\frac{(\boldsymbol{b_{ij}}\cdot\boldsymbol{s})((\boldsymbol{w_{i}}-\boldsymbol{w_{j}})\cdot\boldsymbol{s})}{c^{2}}$
(3)
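A minimal sketch of evaluating Eq. (3) for a single baseline is given below; the baseline vector, source direction, station velocities and $\epsilon$ estimates are placeholder values of realistic magnitude (the $\epsilon$ pair mimics the large GGAO12M/KOKEE12M estimates discussed in Section 2).

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Placeholder inputs for one baseline (i, j) and one scan.
b_ij = np.array([-6.0e6, 2.5e6, 1.5e6])   # baseline vector (m)
s    = np.array([0.5, 0.5, 0.70710678])   # source unit vector
s   /= np.linalg.norm(s)
w_i  = np.array([ 250.0, -120.0, 0.0])    # geocentric velocity of i (m/s)
w_j  = np.array([-310.0,   80.0, 0.0])    # geocentric velocity of j (m/s)
eps_i, eps_j = -0.885e-3, 0.892e-3        # epsilon estimates (cf. Table 1)

# Eq. (3): delta tau_ij = (eps_i - eps_j) (b.s) ((w_i - w_j).s) / c^2
dtau = (eps_i - eps_j) * np.dot(b_ij, s) * np.dot(w_i - w_j, s) / C**2
print(f"delta tau_ij = {dtau * 1e12:.2f} ps")
```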
## 2 DATA ANALYSIS
Figure 1: Contribution of the three third-order terms from Eq (18) for
baselines KOKEE12M - GGAO12M (7405 km) (top) and WETTZ13S - KOKEE12M (10358
km) (bottom).
The second term in Equation (18) (of the Appendix) is the diurnal variation of
the Earth's scale factor, which replaces the diurnal aberration applied in
traditional astronomical observations. This is the only term due to the
Earth's rotation implemented by the FX-correlator software developers (in
accordance with Corey (2000)). However, a more accurate approach based on the
Lorentz transformation (12) reveals additional minor terms in Equation (18) due
to coupling of the two velocities $V$ and $w_{2}$. The first term in Equation
(18) is the coordinate term due to the transformation from the barycentric to
the geocentric reference frame, and it can be ignored for the scope of this
paper.
Figure 2: Systematic group delay for baseline KOKEE12M - GGAO12M (7405 km) in
accordance with (3).
We used one of the recent VGOS experiments (VT9290, 17-Oct-2019) for a more
detailed analysis. This 24-hour experiment included five radio telescopes
(WETTZ13S, ONSA13NE, ONSA13SW, GGAO12M and KOKEE12M) equipped with broadband
receivers. Observations were performed in four bands with dual linear
polarisation (3000-3480 MHz, 5240-5740 MHz, 6360-6840 MHz and 10200-10680 MHz)
(Alef et al. (2019)).
Fig 1 shows the contribution of the three “missed” terms in Equation (18) to
the total delay for two baselines: KOKEE12M - GGAO12M (7405.4 km) and KOKEE12M
- WETTZ13S (10357.6 km). As expected, the correction in Fig 1 is essential for
long baselines (up to 6 ps).
Standard geodetic VLBI observations, carried out at two frequencies, 2.3 GHz
(S-band) and 8.4 GHz (X-band), are not sensitive to the effect of the time of
signal arrival. Therefore, we used the new broadband VLBI data (VGOS project)
to estimate the parameter $\epsilon$. Due to the higher sample rate and the
broader bandwidth of the recorded data, the formal accuracy of the VGOS
geodetic results is better than that of standard S/X observations by an order
of magnitude.
The VGOS data files were processed using the OCCAM software (Titov, Tesmer, &
Boehm (2004)) (version 6.3) in two modes. The first solution estimates a standard
set of parameters: (i) corrections to the positions of the radio
telescopes in the ITRF2014 frame (Altamimi et al. (2016)), (ii) Earth
orientation parameters, (iii) wet troposphere delay and two gradients, (iv)
three parameters to model the clock instability for each station except the
reference one (clock offset, clock rate and second derivative), and (v)
corrections to the ICRF3 positions of several radio sources that exposed a high
level of astrometric instability in the past. The second solution additionally
estimates the parameter $\epsilon$ for all stations except the reference
one.
Estimates of the parameter $\epsilon$ for six VGOS stations operating during
2019 are shown in Table 1. About half of the estimates are statistically
significant. This means that, typically, the time of the radio wave arrival at
the reference station is not calculated by the correlator with sufficient
accuracy. The resulting group delay calculated by Equation (3) for the baseline
GGAO12M - KOKEE12M in the same session (17-Oct-2019, MJD = 58774) is shown in
Fig 2. We selected this baseline because, for both stations in this experiment,
the estimates of $\epsilon$ are larger than usual ($-0.885\cdot 10^{-3}$ for
GGAO12M and $0.892\cdot 10^{-3}$ for KOKEE12M). The range of the peak-to-peak
variations is about 80 ps. This results in an additional, hidden source of
systematic error for all other parameters.
### 2.1 ANALYSIS OF ASTROMETRIC RESULTS
A comprehensive analysis of the geodetic parameters is beyond the scope of this
paper. Here we discuss only the effect of the additional parameter on the
astrometric positions of two well-known reference radio sources, namely
0552+398 and 1156+295. Both sources were observed in twenty 24-h broadband
VLBI experiments during 2019, with a large number of observations. As a
result, their formal positional errors in both components are less than 50
$\mu$as for almost all experiments. Therefore, a statistical investigation of
the astrometric results demonstrates the advantage of the new VLBI
technology and the effect of the inclusion of the additional
modelling parameter.
#### 2.1.1 Radio source 0552+398
Radio source 0552+398 is one of the radio sources most actively observed by
geodetic VLBI since 1979, due to its strong flux density and good astrometric
stability. It was included in the list of reference radio sources of ICRF1
(Ma et al. (1998)), ICRF2 (Fey et al. (2015)) and ICRF3 (Charlot et al.
(2020)). It was also classified as 'stable' after independent verification
by Feissel-Vernier (2003). The source 0552+398 has no apparent structure in S-
and X-band images. However, imaging at higher frequencies (24 and 43 GHz)
reveals a sub-milliarcsecond jet extending east of the core (Charlot
et al. (2010)). Recently, a second component was revealed by Bolotin et al.
(2019) from the analysis of the broadband observations during the CONT17
campaign.
Figure 3: Daily corrections to the ICRF3 coordinates of the radio source
0552+398 (top: right ascension; bottom: declination). Black circles - standard
solution; white circles - solution including the parameter $\epsilon$.
While the daily estimates of the corrections to the declination component in
Fig 3 vary around the original ICRF3 catalogue position within 0.6 mas, the
estimates of the correction to right ascension (RA = 05h 55m 30s.80561207)
(Charlot et al. (2020)) show a non-zero offset of approximately 0.2 mas. The
available post-ICRF3 catalogues (e.g. the celestial reference frame solution
aus2020a published by the International VLBI Service (IVS)) including S/X
observations during 2019-2020 do not detect any essential offset of the
0552+398 position with respect to the ICRF3 catalogue coordinates. This
potentially indicates that the jet observed at high frequencies (24 and 43
GHz) is also present at frequencies between 2 and 11 GHz, even though it is
not detected in the S/X images. We conclude that the broadband VLBI
observations are more sensitive to sub-milliarcsecond structure than the
traditional S/X VLBI observations, as also hinted by Bolotin et al. (2019).
Therefore, the positions of the reference radio sources observed with the new
broadband technology do not necessarily coincide with the S/X
positions.
Figure 4: Difference between the corrections of the two solutions for the radio
source 0552+398 (top: right ascension; bottom: declination).
#### 2.1.2 Radio source 1156+295
Radio source 1156+295 has been actively monitored for the last 30 years over a
wide range of frequencies. Despite its extended structure in the S- and X-bands,
with an elongated jet in the north direction (e.g. Kellermann et al. (1998)), the
radio source 1156+295 demonstrates only a moderate range of astrometric
instability. At the same time, no structure was reported at 24 GHz and 43 GHz
(Charlot et al. (2010)). Therefore, it was selected as one of the defining
reference sources in the second ICRF realization (ICRF2) (Fey et al. (2015)),
although it was not included in the list of the ICRF3 reference sources. Our
analysis of the broadband VLBI results highlights a higher range of
astrometric instability in the declination than in the right ascension time
series (Fig 5) during 2019, presumably induced by the jet oriented to the north.
The average declination component is shifted approximately 0.2 mas
south with respect to the ICRF3 catalogue position.
The difference between the two sets of daily estimates in Figs 4 and 6 does not
reveal any noticeable astrometric signature due to the inclusion of $\epsilon$ in
the list of estimated parameters. For both sources, the peak-to-peak variations
do not exceed 0.25 mas in either component. Therefore, for the radio sources
0552+398 and 1156+295, the inclusion of the new parameter does not essentially
change the source position estimates. However, for rarely observed radio
sources this difference may cause a substantial change in the final catalogue
positions.
Figure 5: Daily corrections to the ICRF3 coordinates of the radio source
1156+295 (top: right ascension; bottom: declination). Black circles - standard
solution; white circles - solution including the parameter $\epsilon$.
Figure 6: Difference between the corrections of the two solutions for the radio
source 1156+295 (top: right ascension; bottom: declination).
## 3 DISCUSSION AND CONCLUSION
The transition from XF-type to FX-type correlators for processing geodetic
VLBI data requires a corresponding revision of the relativistic group delay in
the IERS Conventions to secure a match between the correlator output and the
theoretical model. Alternatively, a special correction needs to be applied at the
final step of the post-correlation data processing. In Equation (18), the
last four terms give the relativistic correction due to the time
transformation from the epoch of the geocenter to the epoch of station 1. This
correction is derived from the modified version of the Lorentz transformation
in Equation (12). Omission of the three minor terms in Equation (18) can lead
to a discrepancy in the group delay model at a level of 6 ps for long
baselines. This is, in particular, pertinent for the intensive experiments for
rapid estimation of Universal Time, because a typical observational network
consists of 2 or 3 radio telescopes separated by a long baseline (> 7000 km).
We recommend that this equation be applied in the post-processing
analysis of VLBI data at the modern FX-correlators.
Another effect, though it may not be directly linked to the first one, is the
uncertainty in the time of signal registration at each telescope, as measured
by the local clock (hydrogen maser) at the reference station and extrapolated
during the process of correlation. This effect also involves the difference
between the geocentric velocities of the two radio telescopes, but it can be
introduced as an extension of the clock instability model. The additional
parameter describes how far the actual time of the signal arrival deviates
from the time presented in the VLBI data file. Our analysis of broadband VLBI
data over 2019 reveals that the parameter is statistically significant in many
cases (Table 1). The corresponding systematic effect is up to 100 ps in the time
delays, and up to 0.25 mas in the estimates of daily radio source positions.
It is not yet clear whether the source structure effect is directly linked to
the problem of precisely determining the time of the signal arrival at the radio
telescopes. The algorithm for the numerical calculation of the signal arrival
time always relies on the assumption that the phase reference point of the
target source is the same for all frequency bands. However, with a broadband
receiver, we may have four different phase reference points in the four
frequency bands. Therefore, the four signals may arrive at the
receiver at four different times, even from a point-like radio source. A
standard calibration may not compensate for this inconsistency perfectly, mostly
due to the non-linear behaviour of the phase during the fringe-fitting
process. In addition, an extended radio source may have four different phase
reference points at the four frequencies with respect to the celestial reference
frame. Thus, the actual differences between the signal arrival times in the four
frequency bands could change unpredictably. As a result, the signal arrival
time presented in the broadband VLBI data file as a single value has some
level of uncertainty, making the additional parameter $\epsilon$ feasible for
routine application using Equation (3). While it was not essential for
traditional S/X VLBI observations, broadband VLBI observations are more
accurate, and a more advanced parametric model should be used to match these
observations.
###### Acknowledgements.
We are thankful to Sergei Kopeikin (University of Missouri-Columbia), Slava
Turyshev (JPL), James Anderson (GFZ), and Igor Surkis (IAA RAS) for fruitful
discussions on the theoretical aspects and the technical details of the
correlation process. Also we thank the PASA Editor-in-Chief, and the anonymous
referee for their constructive comments and suggestions which have
significantly improved the clarity of the paper. This paper is published with
the permission of the CEO, Geoscience Australia. We used the International
VLBI Service (IVS) products available electronically at
http://ivscc.bkg.bund.de/products-data/products.html.
## References
* Alef et al. (2019) Alef W., Anderson J. M., Bernhart S., de Vicente P., González García J., Haas R., La Porta L., et al., 2019, evga.conf, 24, 107
* Altamimi et al. (2016) Altamimi Z., Rebischung P., Métivier L., Collilieux X., 2016, JGRB, 121, 6109. doi:10.1002/2016JB013098
* Bolotin et al. (2019) Bolotin S., Baver K., Bolotina O., Gipson J., Gordon D., Le Bail K., MacMillan D., 2019, evga.conf, 24, 224
* Charlot et al. (2010) Charlot P., Boboltz D. A., Fey A. L., Fomalont E. B., Geldzahler B. J., Gordon D., Jacobs C. S., et al., 2010, AJ, 139, 1713. doi:10.1088/0004-6256/139/5/1713
* Charlot et al. (2020) Charlot P., Jacobs C. S., Gordon D., Lambert S., de Witt A., Böhm J., Fey A. L., et al., 2020, Astron.Astroph. (in press), arXiv, arXiv:2010.13625
* Corey (2000) Corey B., 2000, Memo of Massachusets Institute of Technology, Haystack Observatory
* Feissel-Vernier (2003) Feissel-Vernier M., 2003, Astron.Astroph., 403, 105. doi:10.1051/0004-6361:20030348
* Fey et al. (2015) Fey A. L., Gordon D., Jacobs C. S., Ma C., Gaume R. A., Arias E. F., Bianco G., et al., 2015, AJ, 150, 58. doi:10.1088/0004-6256/150/2/58
* Finkelstein, Kreinovich, & Pandey (1983) Finkelstein A. M., Kreinovich V. I., Pandey S. N., 1983, ApSS, 94, 233. doi:10.1007/BF00653714
* Hellings (1986) Hellings R. W., 1986, AJ, 91, 650. doi:10.1086/114048
* Kellermann et al. (1998) Kellermann K. I., Vermeulen R. C., Zensus J. A., Cohen M. H., 1998, AJ, 115, 1295. doi:10.1086/300308
* Klioner (1991) Klioner S. A., 1991, gvmg.conf, 188
* Kopeikin (1990) Kopeikin S. M., 1990, SvA, 34, 5
* Ma et al. (1998) Ma C., Arias E. F., Eubanks T. M., Fey A. L., Gontier A.-M., Jacobs C. S., Sovers O. J., et al., 1998, AJ, 116, 516. doi:10.1086/300408
* Mansouri & Sexl (1977) Mansouri R., Sexl R. U., 1977, GReGr, 8, 497. doi:10.1007/BF00762634
* Petit & Luzum (2010) Petit G., Luzum B. (eds.), 2010, IERS Technical Notes, 36, 1
* Schuh & Behrend (2012) Schuh H., Behrend D., 2012, JGeo, 61, 68. doi:10.1016/j.jog.2012.07.007
* Soffel et al. (1991) Soffel M. H., Wu X., Xu C., Mueller J., 1991, AJ, 101, 2306. doi:10.1086/115851
* Soffel, Kopeikin, & Han (2017) Soffel M., Kopeikin S., Han W.-B., 2017, JGeod, 91, 783. doi:10.1007/s00190-016-0956-z
* Titov, Tesmer, & Boehm (2004) Titov O., Tesmer V., Boehm J., 2004, ivsg.conf, 267
* Titov & Girdiuk (2015) Titov O., Girdiuk A., 2015, Astron.Astroph., 574, A128. doi:10.1051/0004-6361/201424690
* Titov & Krásná (2019) Titov O., Krásná H., 2019, In: FreyMueller J, Sánchez L. (eds), International Symposium on Advancing Geodesy in a Changing World, International Association of Geodesy Symposia, vol 149. Springer, Cham, 19, arXiv:1808.06769
* Titov & Krásná (2018) Titov O., Krásná H., 2018, Astron.Astroph., 610, A36. doi:10.1051/0004-6361/201731901
* Whitney (2000) Whitney, A. R., 2000, International VLBI Service for Geodesy and Astrometry 2000 General Meeting Proceedings, 187.
* Will (1971) Will C. M., 1971, ApJ, 163, 611. doi:10.1086/150804
* Will (1992) Will C. M., 1992, PhRvD, 45, 403. doi:10.1103/PhysRevD.45.403
## Appendix A DEVELOPMENT OF THE RELATIVISTIC GROUP DELAY MODELS FOR THE
EPOCH OF GEOCENTER AND FOR THE EPOCH OF THE REFERENCE STATION
### A.1 THE CONVENTIONAL GEOCENTRIC DELAY MODEL
The equation for the relativistic group delay model was developed in the
1980s-90s (e.g. Hellings (1986), Kopeikin (1990), Klioner (1991), Soffel et
al. (1991)) to approximate the observed VLBI data at the 1-ps level of
accuracy. The conventional group delay model was finally adopted in the IERS
Conventions (Petit & Luzum (2010))
$\tau_{g}=\frac{-\frac{(\boldsymbol{b}\cdot\boldsymbol{s})}{\textrm{c}}\Big{(}1-\frac{2GM}{c^{2}R}-\frac{|\boldsymbol{V}|^{2}}{2\textrm{c}^{2}}-\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{\textrm{c}^{2}}\Big{)}-\frac{(\boldsymbol{b}\cdot\boldsymbol{V})}{\textrm{c}^{2}}\Big{(}1+\frac{(\boldsymbol{s}\cdot\boldsymbol{V})}{2\textrm{c}}\Big{)}}{1+\frac{(\boldsymbol{s}\cdot(\boldsymbol{V}+\boldsymbol{w_{2}}))}{\textrm{c}}}$
(4)
where $\boldsymbol{b}$ is the vector of baseline
$\boldsymbol{b}=\boldsymbol{r}_{2}-\boldsymbol{r}_{1}$, $\boldsymbol{s}$ is
the barycentric unit vector of the radio source, $\boldsymbol{V}$ is the
barycentric velocity of the geocenter, $\boldsymbol{w_{2}}$ is the geocentric
velocity of station 2, $c$ is the speed of light, $G$ is the gravitational
constant, $M$ is the mass of the Sun, $R$ is the geocentric distance to the
Sun, and ($\cdot$) is the dot-product operator of two vectors. The reference
epoch is the UTC epoch of the wavefront passage at the reference station. In
accordance with the assumption that station 1 is treated as the reference station,
the geocentric velocity of station 2 appears in (4) explicitly. A modern
revision (e.g. Soffel, Kopeikin, & Han (2017)) adds some smaller terms
(less than 1 ps), but the analytical model (4) is still valid for the analysis
of VLBI data.
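For concreteness, Eq. (4) transcribes directly into the short Python function sketched below; the example inputs are placeholder values, and a production implementation would take $\boldsymbol{V}$, $\boldsymbol{w_{2}}$ and $R$ from ephemerides and station motion models following the IERS Conventions procedures.

```python
import numpy as np

C  = 299_792_458.0     # speed of light, m/s
GM = 1.32712440018e20  # GM of the Sun, m^3/s^2

def conventional_delay(b, s, V, w2, R):
    """Geometric part of the conventional group delay, Eq. (4).

    b  : baseline vector r2 - r1 (m)
    s  : barycentric unit vector to the source
    V  : barycentric velocity of the geocentre (m/s)
    w2 : geocentric velocity of station 2 (m/s)
    R  : geocentric distance to the Sun (m)
    """
    U = GM / R
    num = (-np.dot(b, s) / C * (1.0 - 2.0 * U / C**2
                                - np.dot(V, V) / (2.0 * C**2)
                                - np.dot(V, w2) / C**2)
           - np.dot(b, V) / C**2 * (1.0 + np.dot(s, V) / (2.0 * C)))
    den = 1.0 + np.dot(s, V + w2) / C
    return num / den

# Placeholder example values:
b  = np.array([-6.0e6, 2.5e6, 1.5e6])
s  = np.array([0.5, 0.5, 0.70710678]); s /= np.linalg.norm(s)
V  = np.array([0.0, 29_780.0, 0.0])   # ~Earth's orbital speed
w2 = np.array([250.0, -120.0, 0.0])
print(f"tau_g = {conventional_delay(b, s, V, w2, 1.496e11) * 1e3:.9f} ms")
```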
### A.2 LORENTZ TRANSFORMATION
The radio signal is received by two radio telescopes on the surface of the
rotating Earth, and their coordinates are presented in the Geocentric
Celestial Reference System (GCRS) comoving with the Earth. Positions of
reference radio sources emitting the signals are in the Barycentric Celestial
Reference System (BCRS). So, a detailed transformation of the coordinates from
BCRS to GCRS is traditionally based on the metric tensor of the Solar System
at the first and second post-Newtonian level (e.g. Hellings (1986), Kopeikin
(1990), Klioner (1991), Soffel, Kopeikin, & Han (2017)). However, many lower
order effects are not observable, therefore, a simplified approach could be
developed for the relativistic model delay.
The conventional Lorentz transformation is given by
$\displaystyle\boldsymbol{x^{\prime}}=$
$\displaystyle\boldsymbol{x}+(\gamma-1)\frac{(\boldsymbol{V}\cdot\boldsymbol{x})\boldsymbol{V}}{|\boldsymbol{V}|^{2}}-\gamma\boldsymbol{V}t$
(5) $\displaystyle t^{\prime}=$
$\displaystyle\gamma\Bigg{(}t-\frac{(\boldsymbol{V}\cdot\boldsymbol{x})}{\textrm{c}^{2}}\Bigg{)}.$
where
$\gamma=\bigg{(}\sqrt{1-\frac{|\boldsymbol{V}|^{2}}{\textrm{c}^{2}}}\bigg{)}^{-1}$
is the Lorentz "gamma-factor" (Mansouri & Sexl (1977), Will (1992)). It should
be noted that this factor is not the parameter $\gamma$ of the parametrized
post-Newtonian (PPN) formalism used in general relativity (Will (1971)).
Transformation (5) links the geocentric reference system
$S^{\prime}(x^{\prime},t^{\prime})$ that is moving with velocity
$\boldsymbol{V}$ around the Solar System Barycentre (SSB) and the barycentric
reference system $S(x,t)$ located at the SSB. It could be shown (Titov &
Krásná (2019)) that the time delay derived from (5) may be presented in the
form
$\tau_{g_{0}}=\frac{-\frac{(\boldsymbol{b}\cdot\boldsymbol{s})}{\textrm{c}}\Big{(}1-\frac{|\boldsymbol{V}|^{2}}{2\textrm{c}^{2}}\Big{)}-\frac{(\boldsymbol{b}\cdot\boldsymbol{V})}{\textrm{c}^{2}}\Big{(}1+\frac{(\boldsymbol{s}\cdot\boldsymbol{V})}{2\textrm{c}}\Big{)}}{1+\frac{(\boldsymbol{s}\cdot\boldsymbol{V})}{\textrm{c}}}$
(6)
If an astronomical instrument with a reference clock were placed at the
Earth's geocentre and the Solar gravitation were ignored, equation (6)
would apply to the reduction of geodetic VLBI data. However, further
complications are discussed in the next two subsections.
#### A.2.1 SPACE AND TIME TRANSFORMATION INCLUDING GRAVITATIONAL POTENTIAL
The relativistic model (6) does not include the term proportional to the Solar
gravitational potential $\frac{2U}{c^{2}}$, where $U=\frac{GM}{R}$, and a few
terms with the geocentric velocity $\boldsymbol{w_{2}}$ present in (4).
Hellings (1986) showed that the former term appears due to the Solar
gravitational field (in the Schwarzschild metric) at the Earth's geocentre.
Therefore, Hellings (1986) developed new equations for the relationships
between intervals of physical distance and time, measured in a moving
geocentric reference frame, and the intervals given in the barycentric
coordinate system including the gravitational field of the Sun:
$\displaystyle\boldsymbol{x^{\prime}}=$
$\displaystyle(1+\frac{2U}{c^{2}})\boldsymbol{x}-(1+\frac{2U}{c^{2}})(\gamma-1)\frac{(\boldsymbol{V}\cdot\boldsymbol{x})\boldsymbol{V}}{|\boldsymbol{V}|^{2}}-$
(7) $\displaystyle-(1-\frac{2U}{c^{2}})\gamma\boldsymbol{V}t$ $\displaystyle
t^{\prime}=$
$\displaystyle\gamma\Bigg{(}(1-\frac{2U}{c^{2}})t-(1+\frac{2U}{c^{2}})\frac{(\boldsymbol{V}\cdot\boldsymbol{x})}{\textrm{c}^{2}}\Bigg{)}.$
Transformation (7) reduces to the Lorentz transformation (5) if the Solar
potential $U=0$.
The corresponding equation for the relativistic group delay includes the Solar
gravitational potential at the geocenter of the Earth.
$\tau_{g_{U}}=\frac{-\frac{(\boldsymbol{b}\cdot\boldsymbol{s})}{\textrm{c}}\Big{(}1-\frac{2U}{c^{2}}-\frac{|\boldsymbol{V}|^{2}}{2\textrm{c}^{2}}\Big{)}-\frac{(\boldsymbol{b}\cdot\boldsymbol{V})}{\textrm{c}^{2}}\Big{(}1+\frac{(\boldsymbol{s}\cdot\boldsymbol{V})}{2\textrm{c}}\Big{)}}{1+\frac{(\boldsymbol{s}\cdot\boldsymbol{V})}{\textrm{c}}}$
(8)
Titov & Girdiuk (2015) showed that the term proportional to $\frac{2U}{c^{2}}$
in (8) could be unified with the general relativity effect of the
gravitational delay. Therefore, we will not include it into further analysis;
however, we discuss it here as it is a part of the conventional geometric part
of the relativistic delay model (Petit & Luzum (2010)).
#### A.2.2 LORENTZ TRANSFORMATION REFERRING TO THE EPOCH OF THE FIRST STATION
Physical clocks (hydrogen masers) used for VLBI observations are located on
the Earth's surface rather than at the geocenter. As two clocks separated by a
long baseline are involved in a routine observational experiment, one of them
should be selected as the "reference" clock. This choice is completely arbitrary;
however, once it is made, the geocentric velocity of the second ("non-reference")
clock appears explicitly in the analytical equations. The standard
approach is to consider the difference between the barycentric coordinates of two
radio telescopes, $\boldsymbol{r_{1}}(t_{1})$ and $\boldsymbol{r_{2}}(t_{2})$,
measured at the two epochs ${t_{1}}$ and ${t_{2}}$, expanding the vector
$\boldsymbol{r_{2}}(t_{2})$ as follows
$\displaystyle\boldsymbol{r_{2}}(t_{2})=\boldsymbol{r_{2}}(t_{1})+\boldsymbol{w_{2}(t_{1})}(t_{2}-t_{1}),$
(9)
where $\boldsymbol{w_{2}}=\boldsymbol{w_{2}(t_{1})}$ is the geocentric
velocity of the second station at epoch $t_{1}$. Denoting
$\boldsymbol{B(t_{1})}$ a difference between two barycentric vectors at the
same epoch
$\boldsymbol{B}=\boldsymbol{B(t_{1})}=\boldsymbol{r_{2}(t_{1})}-\boldsymbol{r_{1}(t_{1})}$
one could get for the time difference $(t_{2}-t_{1})$
$\displaystyle
c(t_{2}-t_{1})=-(\boldsymbol{B}\cdot\boldsymbol{s})-(\boldsymbol{w_{2}\cdot\boldsymbol{s}})(t_{2}-t_{1})$
(10)
It should be noted here that $\boldsymbol{B}$ is a formal three-component
vector rather than a meaningful physical quantity, though it is linked to the
physical distance between the two terrestrial positions of the radio telescopes
on the Earth at ${t_{1}}$.
Eq (10) can be obtained in an alternative way. Let us introduce a new
geocentric reference frame
$S^{\prime\prime}=S^{\prime\prime}(x^{\prime\prime},t^{\prime\prime})$ with
the reference epoch referred to station 1 in such a way that the two geocentric
reference frames $S^{\prime\prime}$ and $S^{\prime}$ are linked by the new
transformation
$\displaystyle\boldsymbol{x"}=$ $\displaystyle\boldsymbol{x^{\prime}}$ (11)
$\displaystyle t"=$ $\displaystyle
t^{\prime}-\frac{(\boldsymbol{w_{2}}\cdot\boldsymbol{x^{\prime}})}{\textrm{c}^{2}}$
Transformation (11) can easily be combined with the Lorentz transformation
(5)
$\displaystyle\boldsymbol{x"}=$
$\displaystyle\boldsymbol{x}+(\gamma-1)\frac{(\boldsymbol{V}\cdot\boldsymbol{x})\boldsymbol{V}}{|\boldsymbol{V}|^{2}}-\gamma\boldsymbol{V}t$
(12) $\displaystyle t"=$
$\displaystyle\gamma\Bigg{(}t-\frac{(\boldsymbol{V}\cdot\boldsymbol{x})}{\textrm{c}^{2}}\Bigg{)}-\frac{(\boldsymbol{w_{2}}\cdot\boldsymbol{x})}{\textrm{c}^{2}}-$
$\displaystyle-(\gamma-1)\frac{(\boldsymbol{V}\cdot\boldsymbol{x})(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{{c^{2}}\cdot|\boldsymbol{V}|^{2}}+\gamma\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})t}{\boldsymbol{c}^{2}}.$
The transformations (11) and (12) are pertinent only for an
individual pair of radio telescopes equipped with their own high-precision
clocks, one of which is the reference clock while the second clock is moving with
instantaneous velocity $\boldsymbol{w_{2}}$. The transformation (12) is fully
consistent with the postulates of special relativity and reflects the situation
in which the position of the reference clock is not at the reference frame origin
(geocentre). For a classical astronomical instrument, the reference frame origin
and the position of the reference clock refer to the same topocentric
position of the instrument on the Earth's surface. In this scenario, the
geocentric velocity of the instrument is simply added to the barycentric
velocity in the formulae of the Lorentz transformation, i.e. the velocity
$\boldsymbol{V}$ is replaced by the sum $\boldsymbol{V}+\boldsymbol{w_{2}}$ in
(5), followed by a substantial change in (6). From the observational point of
view, this results in the appearance of the classical diurnal aberration
effect. In geodetic VLBI, there is no diurnal aberration effect at all.
Instead, as will be shown later, the geocentric velocity
$\boldsymbol{w_{2}}$ contributes to the diurnal variation of the scale factor,
with a magnitude of up to 20 ns (or 6 metres on the linear scale) for a standard
baseline of 6000 km in length and a geocentric velocity of 300 m/s.
To calculate the group delay from (12), one needs to develop the
corresponding velocity transformation. As both reference frames
$S^{\prime\prime}$ and $S^{\prime}$ are geocentric, only the time component
changes in the transition from (5) to (12). Traditionally, authors proceed
directly to the equation of the relativistic time delay (6) consistent with the
XF-type correlator (e.g. Hellings 1986, Kopeikin 1990, Soffel et al. 2017).
Therefore, these two transformations (5) and (11) merge together and the
difference between the delays (4) and (6) is lost. However, for the FX-type
correlators, this procedure must be separated into two steps to provide a
proper relativistic conversion between observables produced by the XF and FX
correlators.
To elaborate equation (4) (without the $\frac{2U}{c^{2}}$ term) from
transformation (12), let’s consider the velocity transformation
${\boldsymbol{v_{x}"}}=\frac{\boldsymbol{dx"}}{dt"}$
$\displaystyle{v_{x}"}=\frac{\boldsymbol{dx}+(\gamma-1)\frac{(\boldsymbol{V}\cdot\boldsymbol{dx})\boldsymbol{V}}{|\boldsymbol{V}|^{2}}-\gamma\boldsymbol{V}dt}{\gamma\Bigg{(}(1+\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{\boldsymbol{c}^{2}})dt-\frac{(\boldsymbol{V}\cdot\boldsymbol{dx})}{\textrm{c}^{2}}\Bigg{)}-\frac{(\boldsymbol{w_{2}}\cdot\boldsymbol{dx})}{\textrm{c}^{2}}-\frac{(\boldsymbol{V}\cdot\boldsymbol{dx})(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{2{c}^{4}}}$
(13)
or, denoting ${\boldsymbol{v_{x}}}=\frac{\boldsymbol{dx}}{dt}$ within the 1 ps
level of accuracy
$\displaystyle{v_{x}"}=\frac{\boldsymbol{v_{x}}+\frac{(\boldsymbol{V}\cdot\boldsymbol{v_{x}})\boldsymbol{V}}{2c^{2}}-\boldsymbol{V}}{\Bigg{(}1+\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{\boldsymbol{c}^{2}}-\frac{(\boldsymbol{V}\cdot\boldsymbol{v_{x}})}{\textrm{c}^{2}}\Bigg{)}\Bigg{(}1+\frac{|\boldsymbol{V}|^{2}}{2c^{2}}\Bigg{)}-\frac{(\boldsymbol{w_{2}}\cdot{\boldsymbol{v_{x}}})}{\textrm{c}^{2}}}$
(14)
Now apply the standard transition to the radio source vector
$c\boldsymbol{s}=-\boldsymbol{v_{x}}$
$\displaystyle{s"}=\frac{{s}+\frac{(\boldsymbol{V}\cdot\boldsymbol{s})\boldsymbol{V}}{2c^{2}}+\frac{\boldsymbol{V}}{c}}{\Bigg{(}1+\frac{(\boldsymbol{V}\cdot\boldsymbol{s})}{c}+\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{c^{2}}\Bigg{)}\Bigg{(}1+\frac{|\boldsymbol{V}|^{2}}{2c^{2}}\Bigg{)}+\frac{(\boldsymbol{w_{2}}\cdot{s})}{\textrm{c}}}$
(15)
and, after reduction of negligible terms,
$\displaystyle{s"}=\frac{{s}+\frac{(\boldsymbol{V}\cdot\boldsymbol{s})\boldsymbol{V}}{2c^{2}}+\frac{\boldsymbol{V}}{c}}{1+\frac{(\boldsymbol{V}\cdot\boldsymbol{s})}{c}+\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{c^{2}}+\frac{|\boldsymbol{V}|^{2}}{2c^{2}}+\frac{(\boldsymbol{w_{2}}\cdot{s})}{\textrm{c}}}$
(16)
This equation can be converted to a form consistent with the conventional
group delay model at the 1-ps level after inclusion of the Solar gravitational
term from (8)
$\displaystyle{s"}=\frac{{s}\Bigg{(}1-\frac{2U}{c^{2}}-\frac{|\boldsymbol{V}|^{2}}{2c^{2}}-\frac{(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}{c^{2}}\Bigg{)}+\frac{\boldsymbol{V}}{c}\Bigg{(}1+\frac{(\boldsymbol{V}\cdot\boldsymbol{s})}{2c}\Bigg{)}}{1+\frac{((\boldsymbol{V}+\boldsymbol{w_{2}})\cdot\boldsymbol{s})}{c}}$
(17)
Developing the time delay from (17) as
$\tau=-\frac{(\boldsymbol{b}\cdot\boldsymbol{s"})}{c}$ provides the
conventional group delay model (4). It is now obvious that this model is based
on the modification of the Lorentz transformation (12), in which the
transformation of time is presented in a non-standard way, because our
reference clocks are physically located on the Earth's surface rather than at
the geocenter.
Eq (6) lacks the terms of (4) that involve the velocity of the second radio
telescope. At the 1 ps level of accuracy, this difference
$\delta\tau=\tau_{g}-\tau_{g_{0}}$ comprises five terms
$\displaystyle\delta\tau=\frac{2(\boldsymbol{b}\cdot\boldsymbol{s}){U}}{c^{3}}+\frac{(\boldsymbol{b}\cdot\boldsymbol{s}){(\boldsymbol{w_{2}}\cdot\boldsymbol{s})}}{c^{2}}+$
(18)
$\displaystyle+\frac{(\boldsymbol{b}\cdot\boldsymbol{s}){(\boldsymbol{V}\cdot\boldsymbol{w_{2}})}}{c^{3}}+\frac{(\boldsymbol{b}\cdot\boldsymbol{V}){(\boldsymbol{w_{2}}\cdot\boldsymbol{s})}}{c^{3}}-$
$\displaystyle-\frac{2(\boldsymbol{b}\cdot\boldsymbol{s}){(\boldsymbol{V}\cdot\boldsymbol{s})}{(\boldsymbol{w_{2}}\cdot\boldsymbol{s})}}{c^{3}}$
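A minimal sketch estimating the sizes of the five terms of Eq. (18) for a long baseline is given below; all vectors are placeholder values of realistic magnitude, so the printed numbers only indicate orders of magnitude.

```python
import numpy as np

C = 299_792_458.0                # speed of light, m/s
U = 1.32712440018e20 / 1.496e11  # Solar potential GM/R at 1 au, m^2/s^2

b  = np.array([-7.0e6, 3.0e6, 2.0e6])   # long baseline (m)
s  = np.array([0.5, 0.5, 0.70710678]); s /= np.linalg.norm(s)
V  = np.array([0.0, 29_780.0, 0.0])     # barycentric velocity of geocentre
w2 = np.array([300.0, -150.0, 0.0])     # geocentric velocity of station 2

bs, bV = np.dot(b, s), np.dot(b, V)
ws, Vs, Vw = np.dot(w2, s), np.dot(V, s), np.dot(V, w2)

terms = [2.0 * bs * U / C**3,         # Solar-potential term
         bs * ws / C**2,              # diurnal scale-factor term
         bs * Vw / C**3,              # V-w2 coupling ("missed" term)
         bV * ws / C**3,              # V-w2 coupling ("missed" term)
         -2.0 * bs * Vs * ws / C**3]  # aberration coupling ("missed" term)

for i, t in enumerate(terms, start=1):
    print(f"term {i}: {t * 1e12:9.3f} ps")
```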
Table 1: Estimates of the parameter $\epsilon$ (units $10^{-3}$) for six VGOS
stations in 2019.
MJD | GGAO12M | KOKEE12M | ONSA13NE | ONSA13SW | RAEGYEB | WESTFORD
---|---|---|---|---|---|---
58534.2499 | $0.015\pm 0.049$ | $0.055\pm 0.045$ | $0.164\pm 0.053$ | ————– | $0.127\pm 0.051$ | $-0.135\pm 0.047$
58547.2499 | $-0.186\pm 0.049$ | $0.265\pm 0.043$ | $0.258\pm 0.053$ | ————– | $0.173\pm 0.053$ | $-0.337\pm 0.049$
58561.2499 | $-0.425\pm 0.081$ | $0.051\pm 0.060$ | $0.243\pm 0.077$ | ————– | $0.106\pm 0.068$ | $-0.150\pm 0.060$
58575.2500 | $0.016\pm 0.054$ | $0.078\pm 0.047$ | $0.117\pm 0.057$ | ————– | $0.312\pm 0.054$ | $-0.042\pm 0.064$
58589.2497 | $-0.252\pm 0.061$ | $0.262\pm 0.050$ | $0.233\pm 0.055$ | ————– | ————– | $-0.278\pm 0.057$
58603.2497 | $-0.330\pm 0.062$ | $0.290\pm 0.051$ | $0.221\pm 0.058$ | ————– | ————– | $-0.333\pm 0.061$
58617.2497 | $-0.109\pm 0.058$ | $0.117\pm 0.051$ | $0.038\pm 0.065$ | ————– | ————– | $-0.097\pm 0.062$
58632.2494 | $-0.563\pm 0.122$ | ————– | $-0.202\pm 0.074$ | ————– | ————– | ————–
58659.2497 | $0.049\pm 0.076$ | $0.058\pm 0.061$ | $0.166\pm 0.067$ | ————– | ————– | $-0.178\pm 0.069$
58673.2499 | $-0.085\pm 0.180$ | ————– | $-0.068\pm 0.107$ | ————– | ————– | ————–
58687.2496 | $0.015\pm 0.080$ | $-0.068\pm 0.091$ | $0.008\pm 0.077$ | $0.001\pm 0.076$ | ————– | $0.024\pm 0.066$
58701.2499 | $-0.068\pm 0.068$ | $0.069\pm 0.059$ | $0.255\pm 0.074$ | $0.223\pm 0.074$ | $0.155\pm 0.072$ | $-0.237\pm 0.060$
58715.2498 | $-0.282\pm 0.050$ | ————– | $0.066\pm 0.055$ | $0.068\pm 0.054$ | $0.145\pm 0.059$ | $0.025\pm 0.049$
58732.2499 | $-0.160\pm 0.047$ | $0.099\pm 0.041$ | $-0.002\pm 0.048$ | $-0.037\pm 0.048$ | $0.056\pm 0.050$ | $-0.025\pm 0.039$
58743.2499 | $-0.206\pm 0.045$ | $0.128\pm 0.038$ | $0.204\pm 0.046$ | $0.153\pm 0.045$ | $0.311\pm 0.046$ | $-0.124\pm 0.036$
58757.2497 | $-0.171\pm 0.053$ | $0.147\pm 0.042$ | $0.173\pm 0.048$ | $0.226\pm 0.050$ | ————– | $-0.145\pm 0.044$
58774.2493 | $-0.885\pm 0.071$ | $0.892\pm 0.079$ | $-0.220\pm 0.072$ | $-0.128\pm 0.073$ | ————– | ————–
58785.2496 | $-0.214\pm 0.051$ | $0.194\pm 0.031$ | $0.153\pm 0.038$ | $0.143\pm 0.038$ | ————– | ————–
58802.2498 | $0.106\pm 0.062$ | $-0.034\pm 0.035$ | $-0.007\pm 0.047$ | $-0.004\pm 0.048$ | ————– | ————–
58813.2500 | $-0.135\pm 0.046$ | $0.150\pm 0.036$ | $0.206\pm 0.048$ | $0.195\pm 0.048$ | ————– | $-0.189\pm 0.044$
58827.2497 | $-0.211\pm 0.056$ | ————– | $0.182\pm 0.042$ | $0.183\pm 0.042$ | ————– | $-0.135\pm 0.041$
58844.2499 | ————– | $0.472\pm 0.073$ | $-0.018\pm 0.074$ | $-0.148\pm 0.074$ | ————– | $-0.223\pm 0.049$
58858.2498 | $-0.072\pm 0.066$ | ————– | $-0.123\pm 0.051$ | $-0.058\pm 0.052$ | ————– | $0.048\pm 0.047$
|
# Blocked and Hierarchical Disentangled Representation From Information Theory
Perspective
Ziwen Liu
University of Chinese Academy of Sciences
<EMAIL_ADDRESS>Mingqiang Li
Information Science Academy of China Electronics Technology Group Corporation
<EMAIL_ADDRESS>Congying Han
University of Chinese Academy of Sciences
<EMAIL_ADDRESS>
###### Abstract
We propose a novel theoretical model, the blocked and hierarchical variational
autoencoder (BHiVAE), to obtain better-disentangled representations.
Information theory is well known to offer strong explanatory power for neural
networks, so we approach the disentanglement problem from an
information-theoretic perspective. BHiVAE derives mainly from information
bottleneck theory and the information maximization principle. Our main ideas
are: (1) we use blocks of neurons, rather than a single neuron node, to
represent each attribute, so that each block can carry sufficient information;
(2) we build a hierarchical structure that places different attributes on
different layers, segmenting the information within each layer to ensure that
the final representation is disentangled. Furthermore, we present supervised
and unsupervised versions of BHiVAE, which differ mainly in how information is
separated between blocks. In supervised BHiVAE, we utilize the label
information as the criterion for separating blocks. In unsupervised BHiVAE,
without extra information, we use the Total Correlation (TC) measure to
achieve independence, and we design a new prior distribution of the latent
space to guide representation learning. BHiVAE exhibits excellent
disentanglement results in experiments and superior classification accuracy in
representation learning.
## 1 Introduction
##### Disentanglement Representation
Learning an interpretable and disentangled representation of data that
reflects its semantic meaning has long been a goal of machine learning [5, 6,
8, 27]. A disentangled representation is defined in [5] as: _a representation
where a change in one dimension corresponds to a change in one factor of
variation, while being relatively invariant to changes in other factors._ In
our understanding, the requirement that different dimensions do not affect
each other amounts to probabilistic independence.
As popular generative models, the Variational Autoencoder (VAE) [15] and
Generative Adversarial Networks (GAN) [11] have both been applied to
disentanglement. For example, InfoGAN [8], built on the GAN model, maximizes
the mutual information between a small subset of the latent variables and the
observations, which makes the latent variables carry more information about
the real data and hence increases the interpretability of the latent
representation. Building on InfoGAN, FineGAN [18, 30] creates a hierarchical
architecture that assigns background, object shape, and object appearance to
different hierarchy levels to generate images of fine-grained object
categories. The VAE model, derived from the autoencoder [1], is also widely
applied to representation learning; VAEs have demonstrated a unique power to
constrain representations toward disentanglement. For example, $\beta$-VAE
[12], $\beta$-TCVAE [7], FactorVAE [14], and others [10] are able to obtain
more disentangled representations.
##### Information Theory
Information theory was proposed by Shannon in 1948 [28], originating in
communication research. Mutual information is the fundamental metric for
measuring the information shared between random variables. It has been
applied widely in representation learning [3, 8, 13, 25] and with graph
networks [26, 34], and it provides explanatory insight into machine learning
[29]. These applications follow two main ideas. The first is the Information
Maximization principle (InfoMax) [4, 19], which enforces the representation to
preserve information about the input data through the transformation (CNN,
GNN); some works [8, 13, 35] regularize their original model with an InfoMax
term to obtain a more informative and interpretable model. The other is
Information Bottleneck (IB) theory [29, 32, 33], which analyzes the
transmission and loss of information through networks. IB theory treats the
network as a Markov chain and uses the Data Processing Inequality (DPI) [9] to
explain the variation of information in deep networks. The Variational
Information Bottleneck (VIB) method [2] offers a variational form of
supervised IB theory. IB theory has also revealed a unique ability [36] to
explain how and why VAE models adopt their architecture. With this knowledge
of disentanglement and information, we build our model, the blocked and
hierarchical variational autoencoder (BHiVAE), entirely from an
information-theoretic perspective to obtain better interpretability and
controllability. In BHiVAE, because a neural network's ability to extract
features varies with depth, we assign data factors to different layers.
Furthermore, the weak expressiveness of a single neuron pushes us to use
neuron blocks to represent features. We also discuss supervised and
unsupervised versions of the model. In the supervised model, we utilize the
labels to separate the representation from the feature information. In the
unsupervised model, we introduce a unique prior distribution suited to our
model and use additional discriminators to split information. We provide
extensive experiments on the MNIST [17], CelebA [20], and dSprites [23]
datasets to show strong disentanglement performance. In summary, our work
makes the following contributions:
* •
We approach the disentanglement problem for the first time entirely from an
information-theoretic perspective. Most previous works on disentanglement
build on existing models, modifying them to fit the disentanglement problem.
* •
We present the Blocked and Hierarchical Variational Autoencoder (BHiVAE) in
both supervised and unsupervised settings. In the supervised case, we utilize
the known feature information to guide representation learning in each
hierarchy; in the unsupervised case, we propose a novel distribution-based
method to match our neuron-block setting.
* •
We perform thorough experiments on several public datasets (MNIST, dSprites,
and CelebA), comparing with VAE, $\beta$-VAE, FactorVAE, $\beta$-TCVAE, and
Guided-VAE on several classic metrics. Our method BHiVAE shows excellent
performance when all the indicators are considered together.
## 2 Related Work
Some previous works have made significant contributions to obtaining
disentangled representations. Based on the VAE, $\beta$-VAE [12] adds a
coefficient weight to the KL-divergence term of the VAE loss and obtains a
more disentangled representation. Its major advantage is that it trains more
stably than InfoGAN; however, $\beta$-VAE sacrifices reconstruction quality at
the same time. $\beta$-TCVAE [7] and FactorVAE [14] explored this issue in
more detail and found that the TC term is the immediate cause promoting
disentanglement.
Guided-VAE [10] also proposes a model that uses different strategies in the
supervised and unsupervised settings to obtain disentangled representations.
It uses an additional discriminator to guide representation learning and to
learn latent geometric transformations and principal components. This idea of
using different methods under different levels of supervision inspires us.
FineGAN [30], based on InfoGAN, generates background, object shape, and object
appearance images separately in different hierarchies, then combines these
three images into the final image. In FineGAN, what drives the disentanglement
is the mutual information between the latent codes and each factor. MixNMatch
[18], developed from FineGAN, is a conditional generative model that learns
disentangled representations, encodes different features from real images, and
then uses additional discriminators to match the representations to the prior
distributions given by the FineGAN model.
Previous works have made simple modifications to the $\beta$-VAE or GAN
models, adding terms useful for disentanglement. In our work, we consider the
disentanglement problem fully from information theory and then establish the
BHiVAE model. Information theory and optimal coding theory [9, 36] have shown
that longer codes can express more information. So in our model, instead of
using only one dimension to represent a ground-truth factor as in previous
work, we choose multiple neurons to do so.
Meanwhile, different ground-truth factors of data carry different levels of
information, and the depth of a neural network affects the depth of
information extracted, so our model uses a hierarchical architecture that
extracts different factor features at different layers. Therefore, to satisfy
the requirement of disentangled representation, i.e., independence between
representation blocks, we only need to minimize the mutual information between
blocks of the same layer, thanks to the hierarchical architecture.
Figure 1: Architecture of the hierarchical VAE model: (a) encoder part on the
left side and (b) decoder part on the right side.
## 3 Proposed Method
We propose our model motivated by IB theory and by VAEs such as $\beta$-VAE,
FactorVAE, $\beta$-TCVAE, Guided-VAE, and FineGAN. Therefore, in this section,
we first introduce IB theory and VAE models, and then present our detailed
model architecture and discuss the supervised and unsupervised BHiVAE methods.
### 3.1 Information Theory and VAEs
IB theory aims to learn a representation $Z$ that maximizes the compression of
information in the real data $X$ while maximizing the expression of a target
$Y$. It can be described as:
$\displaystyle\min I(X;Z)-\beta I(Z;Y)$ (1)
Under supervision, the target $Y$ is the attribute information; in the
unsupervised case it equals $X$ [36].
In the case of supervised IB theory [2], we can obtain the upper bound:
$\displaystyle I_{\phi}(X;Z)-\beta I_{\theta}(Z;Y)\leq\mathbb{E}_{p_{D}(x)}[D_{KL}(q_{\phi}(z|x)\|p(z))]-\beta\mathbb{E}_{p(x,y)}[\mathbb{E}_{q_{\phi}(z|x)}[\log p_{\theta}(y|z)]]$
(2)
The first term is the KL divergence between the posterior $q_{\phi}(z|x)$ and
the prior distribution $p(z)$; the second term equals the cross-entropy loss
of label prediction.
In the case of unsupervised IB theory, we can rewrite the objective Eq. (1)
as:
$\displaystyle\min I_{\phi}(X;Z)-\beta I_{\theta}(Z;X)$ (3)
Unsupervised IB theory can be seen as a generalization of the VAE model, with
an encoder that learns the representation and a decoder that reconstructs.
$\beta$-VAE [12] actually optimizes an upper bound of it:
$\displaystyle\mathcal{L}_{\beta\text{-}VAE}=\mathbb{E}_{p(x)}[D_{KL}(q_{\phi}(z|x)\|p(z))-\beta\mathbb{E}_{q_{\phi}(z|x)}[\log(p_{\theta}(x|z))]]$
(4)
FactorVAE [14] and $\beta$-TCVAE [7] simply put more weight on the TC term
$\mathbb{E}_{q(z)}[\log\frac{q(z)}{\tilde{q}(z)}]$, which expresses the
dependence across the dimensions of a variable in information theory, where
$\tilde{q}(z)=\prod_{i=1}^{n}q(z_{i})$. A minimal numeric sketch of the
$\beta$-VAE objective (4) follows.
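The sketch below assumes a diagonal-Gaussian posterior, so the KL term of (4) has a closed form; the reconstruction term `recon_log_lik` is a stand-in placeholder for $\mathbb{E}_{q}[\log p_{\theta}(x|z)]$, not a real decoder.

```python
import numpy as np

# Minimal numeric sketch of the beta-VAE objective in Eq. (4) for a
# posterior q(z|x) = N(mu, diag(exp(log_var))) and prior N(0, I).
def beta_vae_loss(mu, log_var, recon_log_lik, beta=6.0):
    # Closed-form KL(q(z|x) || N(0, I)) per sample, summed over dims.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return np.mean(kl - beta * recon_log_lik)

# Toy usage with random "encoder outputs": batch of 8, d(z) = 12.
rng = np.random.default_rng(0)
mu, log_var = rng.normal(size=(8, 12)), 0.1 * rng.normal(size=(8, 12))
print(beta_vae_loss(mu, log_var, recon_log_lik=rng.normal(size=8)))
```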
We build our BHiVAE model upon the above works and models. We focus on
information transmission and loss throughout the network, realized through
different methods in the supervised and unsupervised settings.
### 3.2 BHiVAE
Now let us present our detailed model architecture. As shown in Fig. 1, we
feed the data $X$ into the encoder (parameterized by $\phi$); in the first
layer we obtain the latent representation $z^{1}$, which is divided into two
parts $s^{1}$ and $h^{1}$. The part $s^{1}$ is a final representation block,
corresponding to feature $y^{1}$, while $h^{1}$ is the input of the next
layer's encoder, which produces the latent representation $z^{2}$. Then,
through three similar network stages, we obtain three representation blocks
$s^{1},s^{2},s^{3}$, which are disentangled, as well as the block $c^{3}$ in
the last layer, which contains the information of the data other than the
above attributes. Together they make up the whole representation
$z=(s^{1};s^{2};s^{3};c^{3})$. Each representation block is then mapped to a
common space by a separate decoder (all parameterized by $\theta$) and finally
concatenated to reconstruct the raw data, as shown in Fig. 1(b). For the
problem at hand, we need the final representation $z$ to be disentangled,
i.e., we need independence between the representation blocks
$s^{1},s^{2},s^{3}$, and $c^{3}$. A minimal sketch of this blocked,
hierarchical encoder follows.
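Below is one such sketch in PyTorch; the layer widths are illustrative assumptions chosen to match the $d(s^{i})=2$, $d(c^{3})=8$, $d(z)=14$ configuration used later in Sec. 4.2, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a blocked, hierarchical encoder: each layer splits
# its output z^i into an attribute block s^i and a carry-over h^i.
class BHiVAEEncoder(nn.Module):
    def __init__(self, x_dim=64 * 64, h_dims=(256, 128, 64), s_dim=2, c_dim=8):
        super().__init__()
        dims = (x_dim,) + h_dims
        # One sub-encoder per layer: h^{i-1} -> z^i = (s^i ; h^i).
        self.layers = nn.ModuleList(
            nn.Linear(dims[i], s_dim + dims[i + 1]) for i in range(3)
        )
        self.s_dim = s_dim
        self.c_head = nn.Linear(h_dims[-1], c_dim)  # residual block c^3

    def forward(self, x):
        h, s_blocks = x, []
        for layer in self.layers:
            z = torch.relu(layer(h))
            s, h = z[:, : self.s_dim], z[:, self.s_dim:]  # split z^i
            s_blocks.append(s)
        c = self.c_head(h)
        return torch.cat(s_blocks + [c], dim=1)  # z = (s^1; s^2; s^3; c^3)

z = BHiVAEEncoder()(torch.randn(4, 64 * 64))
print(z.shape)  # torch.Size([4, 14])
```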
Figure 2: Different methods for constraining the information segmentation
between $s^{i}$ and $h^{i}$: (a) unsupervised and (b) supervised.
We can then separate the whole problem into two subproblems at the $i$-th
layer, whose input is $h^{i-1}$ (where $h^{0}=x$):
* (1)
Information flow $h^{i-1}\rightarrow s^{i}\rightarrow y^{i}$: encode the upper
layer's output $h^{i-1}$ into the representation $z^{i}$, with one part
$s^{i}$ containing sufficient information about one feature factor $y^{i}$;
* (2)
Information separation of $s^{i}$ and $h^{i}$: eliminate the information about
$s^{i}$ from $h^{i}$ while requiring $s^{i}$ to contain only the label $y^{i}$
information.
The first subproblem can be regarded as an IB problem: the goal is to learn a
representation $s^{i}$ that is maximally expressive about the feature $y^{i}$
while minimally informative about the input $h^{i-1}$. It can be described as:
$\displaystyle\min I(h^{i-1};s^{i})-\beta I(s^{i};y^{i})$ (5)
Satisfying the second subproblem is more complex and requires different
methods under different supervision conditions, which we introduce in detail
below. In summary, our representation is designed to enhance the internal
correlation within each block while reducing the relationships between blocks,
achieving the desired disentanglement.
#### 3.2.1 Supervised BHiVAE
In the supervised case, we denote the input of the $i$-th layer as $h^{i-1}$
($h^{0}=x$). Given the $i$-th layer label $y^{i}$, we require the
representation block $s^{i}$ to predict the feature correctly while being as
compressed as possible. The objective in the $i$-th ($i=1,2,3$) layer can thus
be written in information measures as:
$\displaystyle\mathcal{L}_{sup}^{class}(i)=I(h^{i-1};s^{i})-\beta I(s^{i};y^{i})$ (6)
We can obtain an upper bound of it:
$\displaystyle\mathcal{L}_{sup}^{class}(i)=I(h^{i-1};s^{i})-\beta I(s^{i};y^{i})\leq\mathbb{E}_{p(h^{i-1})}[D_{KL}(q_{\phi}(s^{i}|h^{i-1})\|p(s))]-\beta\mathbb{E}_{p(h^{i-1},y^{i})}[\mathbb{E}_{q_{\phi}(s^{i}|h^{i-1})}[\log p_{\theta}(y^{i}|s^{i})]]\triangleq\mathcal{L}_{sup}^{class_{up}}(i)$ (7)
So we need one additional classifier $\mathcal{C}_{i}$ (Fig. 2(b)) to predict
$y^{i}$ from $s^{i}$.
For the second requirement, since the first subproblem constrains $s^{i}$ to
be fully informative about $y^{i}$, we must eliminate the information about
$y^{i}$ from $h^{i}$:
$\displaystyle\mathcal{L}_{info}^{sup}(i)=I(h^{i};y^{i})=H(y^{i})-H(y^{i}|h^{i})$ (8)
$H(y^{i})$ is a constant, so minimizing $\mathcal{L}_{info}^{sup}(i)$ is
equivalent to minimizing:
$\displaystyle\mathcal{L}_{info}^{sup_{e}}(i)=-H(y^{i}|h^{i})$ (9)
This resembles a maximum-entropy principle: it simply requires that $h^{i}$
cannot predict the feature factor $y^{i}$ at all, i.e., the probability
predicted from $h^{i}$ for each category is $\frac{1}{n_{i}}$ ($n_{i}$ denotes
the number of categories of the $i$-th feature). And $h^{i}$ shares the
classifier $\mathcal{C}_{i}$ with $s^{i}$, as Fig. 2(b) shows.
So in our supervised model, we obtain the total objective:
$\displaystyle\min\\{\mathcal{L}^{sup}=\sum_{i=1}^{n}[\mathcal{L}_{sup}^{class}(i)+\gamma\mathcal{L}_{info}^{sup_{e}}(i)]\\}$
(10)
where $\beta$ and $\gamma$ are hyper-parameters. The objective (10) satisfies
the two requirements we need and deals with the second subproblem with a novel
approach. A sketch of this per-layer loss follows.
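The following is a minimal sketch of the per-layer supervised loss (6)-(10), written in terms of classifier logits; `gamma` plays the role of the hyper-parameter $\gamma$, and the uniform-prediction penalty implements the maximum-entropy requirement of Eq. (9).

```python
import torch
import torch.nn.functional as F

# Per-layer supervised loss, Eqs. (6)-(10): the classifier C_i must
# predict y^i from s^i (cross-entropy, the bound of Eq. (7)), while the
# prediction from h^i is pushed toward the uniform distribution 1/n_i,
# which minimizes -H(y^i | h^i) of Eq. (9).
def supervised_layer_loss(logits_s, logits_h, y, gamma=3.0):
    ce = F.cross_entropy(logits_s, y)                 # classify from s^i
    probs_h = F.softmax(logits_h, dim=1)
    neg_entropy = (probs_h * torch.log(probs_h + 1e-8)).sum(dim=1).mean()
    return ce + gamma * neg_entropy                   # one layer of Eq. (10)

# Toy usage: batch of 4, n_i = 10 categories.
y = torch.randint(0, 10, (4,))
print(supervised_layer_loss(torch.randn(4, 10), torch.randn(4, 10), y))
```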
#### 3.2.2 Unsupervised BHiVAE
In the unsupervised case, we know nothing about the data source, so we can
only use reconstruction to constrain the representation. However,
reconstruction alone is not enough for the disentanglement problem [21], so we
use a unique prior distribution of the representation to guide representation
learning. All disentanglement models of the VAE series match the posterior
distribution $q_{\phi}(z|x)$ to the standard normal prior $\mathcal{N}(0,I)$,
and they obtain dimension-wise disentangled representations because the
dimensions of $\mathcal{N}(0,I)$ are independent. To match our neuron-block
representation setting, we set the prior distribution $p(z)$ to
$\mathcal{N}(0,\Sigma)$, where $\Sigma$ is a block-diagonal symmetric matrix.
The dimension of each block corresponds to the segmentation of the
corresponding hidden layer. In the unsupervised model, the target is
reconstruction, so we can decompose Eq. (5) as:
$\displaystyle\min$ $\displaystyle I(h^{i-1};s^{i})-\beta I(s^{i};x)$
$\displaystyle\leq\mathbb{E}_{p(h^{i-1})}[D_{KL}(q(z^{i}|h^{i-1})\|p(z))]$ (11)
$\displaystyle-D_{KL}(q_{\phi}(z^{i})\|p(z))$ (12)
$\displaystyle-\beta[\mathbb{E}_{p(h^{i-1})}[\mathbb{E}_{q_{\phi}(s^{i}|h^{i-1})}[\log p_{\theta}(x|s^{i})]]$ (13)
$\displaystyle-D_{KL}(q_{\phi}(z^{i-1})\|p_{D}(x))]$ (14)
The first two terms constrain the capacity of the representation $z^{i}$, and
the last two reinforce the reconstruction. VAE models use (11) and (13) to
achieve this, while the adversarial autoencoder [22] uses the KL divergence
(12) between the aggregated posterior $q_{\phi}(z^{i})$ and the prior $p(z)$
to constrain the capacity of the representation and obtain a better
representation.
In our model, we also minimize the KL divergence between the posterior
distribution $q_{\phi}(z^{i})$ and the prior $\mathcal{N}(0,\Sigma)$, i.e., we
drive $D_{KL}(q_{\phi}(z^{i})\|\mathcal{N}(0,\Sigma))\rightarrow 0$. Choosing
a deterministic encoder, we obtain the objective:
$\displaystyle\mathcal{L}_{recon}^{uns}=D_{KL}(q_{\phi}(z^{i})\|\mathcal{N}(0,\Sigma))-\beta\mathbb{E}_{p(h^{i-1})}[\mathbb{E}_{q_{\phi}(s^{i}|h^{i-1})}[\log p_{\theta}(x|s^{i})]]$
(15)
We use a discriminator, shown at the top of Fig. 2(a), to estimate and
optimize $D_{KL}(q_{\phi}(z^{i})\|\mathcal{N}(0,\Sigma))$.
Unlike the supervised case, we adopt a different method to satisfy the
information-separation requirement. When $s^{i}$ and $h^{i}$ are
probabilistically independent, the mutual information between them is zero,
i.e., no information is shared between $s^{i}$ and $h^{i}$. Here we apply a
related quantity, the Total Correlation (TC) penalty [14, 37], a popular
measure of dependence among multiple random variables. Its typical form is
$KL(q(z)\|\tilde{q}(z))$ with $\tilde{q}(z)=\prod^{d}_{j=1}q(z_{j})$; in our
case, we use the form $KL(p(z^{i})\|p(h^{i})p(s^{i}))=I(h^{i};s^{i})$. So the
information-separation objective is:
$\displaystyle\mathcal{L}_{info}^{uns}(i)=I(h^{i};s^{i})$ (16)
$\displaystyle=KL(p(z^{i})\|p(h^{i})p(s^{i}))$ (17)
In practice, the $KL$ term is intractable to compute. The product of the
marginal distributions $p(h^{i})p(s^{i})$ is not analytically computable, so
we take a sampling approach to simulate it. After obtaining a batch of
representations $\\{z^{i}_{j}=(s^{i}_{j};h^{i}_{j})\\}_{j=1}^{N}$ at the
$i$-th layer, we randomly permute $\\{s^{i}_{j}\\}_{j=1}^{N}$ and
$\\{h^{i}_{j}\\}_{j=1}^{N}$ across the batch to generate a sample batch from
the distribution $p(h^{i})p(s^{i})$. Directly estimating the density ratio
$\frac{p(z^{i})}{p(h^{i})p(s^{i})}$ is still infeasible, so, with these random
samples, we apply the density-ratio method [24, 31]: we use an additional
classifier $D(\cdot)$ that distinguishes between samples from the two
distributions, shown at the bottom of Fig. 2(a):
$\displaystyle\mathcal{L}_{info}^{uns}(i)=KL(p(z^{i})\|p(h^{i})p(s^{i}))=TC(z^{i})=\mathbb{E}_{q(z)}[\log\frac{p(z^{i})}{p(h^{i})p(s^{i})}]\approx\mathbb{E}_{q(z)}[\log\frac{D(z^{i})}{1-D(z^{i})}]$ (18)
A sketch of this estimator follows.
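The following is a minimal sketch of this density-ratio estimator, assuming a discriminator `disc` that outputs a single logit per sample (so the logit directly estimates $\log\frac{D}{1-D}$); the binary cross-entropy term is the standard training loss for such a discriminator.

```python
import torch
import torch.nn.functional as F

# Density-ratio sketch for Eq. (18): permuting s^i and h^i across the
# batch breaks their pairing, giving samples from p(h^i)p(s^i). A
# discriminator trained to separate joint from permuted samples yields
# TC(z^i) ~= E[log D/(1-D)], i.e. the mean logit on joint samples.
def tc_and_disc_loss(disc, s_i, h_i):
    z_joint = torch.cat([s_i, h_i], dim=1)                      # ~ p(z^i)
    z_perm = torch.cat([s_i[torch.randperm(len(s_i))],
                        h_i[torch.randperm(len(h_i))]], dim=1)  # ~ p(h^i)p(s^i)
    logit_joint, logit_perm = disc(z_joint), disc(z_perm)
    tc = logit_joint.mean()                 # estimate of I(h^i; s^i)
    disc_loss = (F.binary_cross_entropy_with_logits(
                     logit_joint, torch.ones_like(logit_joint))
                 + F.binary_cross_entropy_with_logits(
                     logit_perm, torch.zeros_like(logit_perm)))
    return tc, disc_loss

disc = torch.nn.Linear(6, 1)                # toy discriminator, d(z^i) = 6
tc, d_loss = tc_and_disc_loss(disc, torch.randn(8, 2), torch.randn(8, 4))
```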
In summary, the total objective in the unsupervised case is:
$\displaystyle\min\\{\mathcal{L}^{uns}=\sum_{i=1}^{n}(\mathcal{L}_{recon}^{uns}+\gamma\mathcal{L}_{info}^{uns}(i))\\}$
(19)
## 4 Experiments
In this section, we present our results in quantitative and qualitative
experiments, comparing with $\beta$-VAE, FactorVAE, and $\beta$-TCVAE on
several classic metrics. The datasets used in our experiments are:
MNIST [17]: handwritten digit $(28\times 28\times 1)$ images with 60000
training samples and 10000 test samples;
dSprites [23]: 737280 2D shape $(64\times 64\times 1)$ images procedurally
generated from 6 ground-truth independent latent factors, including shape
(heart, oval, and square), x-position (32 values), y-position (32 values),
scale (6 values), and rotation (40 values);
CelebA (cropped version) [20]: 202599 celebrity face $(64\times 64\times 3)$
images with 5 landmark locations and 40 binary attribute annotations.
In the following, we perform several qualitative and quantitative experiments
on these datasets and compare results in both the unsupervised and supervised
cases. We demonstrate the ability of our model to disentangle in the
unsupervised case, and we also show the representation learned in the
supervised case.
Figure 3: Scatter distribution vs. prior distribution: scatter plots of the
three layers' representations $\\{s^{i}\\}_{i=1}^{3}$, with (a) layer 1
(KL = 0.61), (b) layer 2 (KL = 0.49), and (c) layer 3 (KL = 0.11); (c)
visualizes the known category information with different colors.
Figure 4: Traversal images on MNIST for (a) $\beta$-VAE, (b) FactorVAE, (c)
Guided-VAE, and (d) BHiVAE. In (a), (b), and (c), the images in the $i$-th row
are generated by changing $z^{i}$ from -3 to 3; in (d), we change
$\\{s^{1},s^{2},s^{3},c^{3}\\}$ from (-3,-3) to (3,3) to generate the images
in each row.
### 4.1 Training Details
When training the BHiVAE model, we need the encoder and decoder (Fig. 1) in
both the supervised and unsupervised cases. On the CelebA dataset, we build
our network with both convolutional and fully connected layers. On the MNIST
and dSprites datasets, the images are both treated as $64\times 64$ binary
images, so we design the network to consist entirely of fully connected
layers.
In evaluating the experimental results, we use the Z-diff [12], SAP [16], and
MIG [7] metrics to measure the quality of the disentangled representations,
and we inspect the images generated by traversing the representation.
Moreover, we use pre-trained classifiers on attribute features to analyze the
model in terms of classification accuracy.
### 4.2 Unsupervised BHiVAE
In the unsupervised case, as introduced in the previous section, the most
significant novelty is that we use a different prior $\mathcal{N}(0,\Sigma)$
to guide representation learning. Additionally, we need another discriminator
to estimate the KL divergence (18). Therefore, two extra discriminators are
needed for BHiVAE, as in Fig. 2(a). In fact, because we aim for
$D_{KL}(q_{\phi}(z^{i})\|p(z))=0$, the latent representations
$\\{z_{j}^{i}\\}_{j=1}^{N}$ can be considered as samples from the true
distribution, while the prior samples and the permuted representations
$\\{z^{i-perm}_{j}\\}_{j=1}^{N}$ can both be considered fake. Therefore, we
can simplify the network to a single discriminator that scores these three
distributions.
We want to reinforce the relationship within $s^{i}$ to retain the information
and decrease the dependency between $s^{i}$ and $h^{i}$ to separate
information, so in our unsupervised experiments we use the prior
$\mathcal{N}(0,\Sigma)$, where
$\displaystyle\Sigma=\begin{bmatrix}1&0.5&0&\cdots&0\\\ 0.5&1&0&\cdots&0\\\ 0&0&1&\cdots&0\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\ 0&0&0&\cdots&1\end{bmatrix}$
A sketch of constructing and sampling this prior follows.
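A small sketch of this prior, assuming (as the contours in Fig. 3 suggest) that every 2-dimensional $s$-block receives the same correlated $2\times 2$ block and the $c^{3}$ dimensions keep an identity block; the displayed matrix abbreviates the full $\Sigma$.

```python
import numpy as np
from scipy.linalg import block_diag

# Block-diagonal prior covariance: one correlated 2x2 block per s^i
# and an identity block for c^3 (d(z)=14, d(s^i)=2, d(c^3)=8).
s_block = np.array([[1.0, 0.5], [0.5, 1.0]])
Sigma = block_diag(s_block, s_block, s_block, np.eye(8))

rng = np.random.default_rng(0)
z_prior = rng.multivariate_normal(np.zeros(14), Sigma, size=1000)
print(Sigma.shape, z_prior.shape)  # (14, 14) (1000, 14)
```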
First, we run experiments to demonstrate the feasibility and validity of this
prior setting. We train the model on the dSprites dataset, setting the
dimension of the representation $z$ to 14 ($d(z)=14$), with $d(s^{i})=2$,
$i=1,2,3$, and $d(c^{3})=8$. We then obtain the representation at each layer
for 1000 test images; the three subfigures in Fig. 3 show scatter plots of
each layer's representation, and the curves in these figures are contours of
the block target distribution
$p(s)\sim\mathcal{N}(0,\begin{bmatrix}1&0.5\\\ 0.5&1\end{bmatrix})$. Fig. 3
shows that in the first and second layers the distributions of $s^{1}$ and
$s^{2}$ do not match the prior $p(s)$ well, but as the layers progress, the KL
divergence between $q_{\phi}(s^{i})$ and $p(s)$ keeps decreasing, and the
scatter plot of $s^{i}$ fits the prior distribution more closely. Since we
train the encoder globally, the representation learning of the earlier layers
is influenced by changes in the deeper representations and thus yields a
larger KL divergence than the next layer.
Even more interestingly, in Fig. 3(c) we find that in the third layer, when
visualizing the 'Shape' attribute of the dSprites dataset, there is an
apparent clustering effect (different colors denote different categories).
This result supports our hypothesis about deep networks: the deeper the
network, the more detailed the information it extracts. The representation
also matches the prior almost perfectly. Fig. 3(c) further suggests a better
traversal scheme. In previous works, because one dimension represents one
attribute, one can simply change the representation from $a$ to $b$ ($a$ and
$b$ constants). This does not fit our model, so the direction of the category
transformation in Fig. 3(c) inspires us to traverse the data along the
diagonal line ($y=x$). Our block prior $p(s)$ also supports this, because the
prior distribution's major axis is the diagonal line too. A small sketch of
this traversal follows.
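The sketch below illustrates the diagonal traversal; the block boundaries and range are the ones used in our experiments ($d(s^{i})=2$, traversal from $(-3,-3)$ to $(3,3)$).

```python
import numpy as np

# Diagonal traversal: each 2-d block s^i is swept along y = x, the
# major axis of the correlated prior N(0, [[1, 0.5], [0.5, 1]]),
# instead of sweeping a single scalar dimension as in prior work.
steps = np.linspace(-3.0, 3.0, 10)

def traverse_block(z, block, t):
    out = z.copy()
    out[block] = t            # sets both coordinates of s^i to (t, t)
    return out

z0 = np.zeros(14)             # a base representation, d(z) = 14
traversal = [traverse_block(z0, slice(0, 2), t) for t in steps]
```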
We perform several experiments under the above architecture setting and
traversal scheme to show the disentanglement quality on the MNIST dataset. The
qualitative results, compared with $\beta$-VAE [12], FactorVAE [14], and
Guided-VAE [10], are presented in Fig. 4. Here, considering the dimension of
the representation and the number of parameters, the bottleneck size of the
other works is set to 12, i.e., $d(z)=12$. This setting helps reduce the
impact of differences in complexity between model frameworks. For a cleaner
comparison, we only select the seven dimensions that change most regularly. In
our model, we change the three block representations $\\{s^{i}\\}_{i=1}^{3}$,
and the remaining representation $c^{3}$ is changed two dimensions at a time
as a whole, i.e., $c^{3}=(c^{3}_{1:2},c^{3}_{3:4},c^{3}_{5:6},c^{3}_{7:8})$.
Fig. 4 shows that $\beta$-VAE hardly obtains a worthwhile disentangled
representation, while FactorVAE shows attribute changes as the representation
varies. Moreover, Figs. 4(c) and 4(d) both show well-disentangled images: as
$h_{1}$ changes in Guided-VAE and $s^{1}$ in BHiVAE, the handwriting thickens,
while $h_{3}$ and $s^{2}$ control the angle of inclination. These results
demonstrate the capabilities of our model.
Model | Z-diff $\uparrow$ | SAP $\uparrow$ | MIG $\uparrow$
---|---|---|---
VAE [15] | 67.1 | 0.14 | 0.23
$\beta$-VAE [12] ($\beta$=6) | 97.3 | 0.17 | 0.41
FactorVAE [14] ($\gamma$=7) | 98.4 | 0.19 | 0.44
$\beta$-TCVAE [7] ($\alpha$=1, $\beta$=8, $\gamma$=2) | 96.5 | 0.41 | 0.49
Guided-VAE [10] | 99.2 | 0.4320 | 0.57
BHiVAE (Ours) ($\beta$=10, $\gamma$=3) | 99.0 | 0.4312 | 0.61
Table 1: Disentanglement scores: Z-diff, SAP, and MIG on the dSprites dataset
in the unsupervised case. Bold marks the best result and blue the second best.
We then proceed to the traversal experiments on the dSprites dataset. This
dataset has clear attribute distinctions, which allows us to better observe
the disentangled representation. In these experiments, BHiVAE learns a
10-dimensional representation $z=(s^{1},s^{2},s^{3},c^{3}_{1:2},c^{3}_{3:4})$,
while the other works learn an 8-dimensional $z=(z_{1},z_{2},\dots,z_{8})$.
The reconstruction and traversal results are presented in Fig. 5. The first
and second rows of each subfigure show the original and reconstructed images,
respectively. Fig. 5(d) shows that our first three blocks
$s^{1},s^{2},s^{3}$ have learned the attribute characteristics (Scale,
Orientation, and Position) of the data.
Moreover, we perform two quantitative experiments comparing with previous
works and present the results in Tables 1 and 2. The experiments are all based
on the same experimental setting as in Fig. 4.
Figure 5: Traversal images on dSprites for (a) $\beta$-VAE, (b) FactorVAE, (c)
Guided-VAE, and (d) BHiVAE (ours). The images in the first and second rows of
each subfigure are the original and reconstructed images, respectively; the
other rows correspond to the traversal images.
First, we compare BHiVAE with previous models on the Z-diff [12], SAP [16],
and MIG [7] scores and present the results in Table 1. Our model ranks near
the top overall and outperforms the other popular models on the MIG metric. A
high Z-diff score indicates that the learned disentangled representation has
less variance on the attributes of the generated data as the corresponding
dimension changes, while SAP measures the degree of coupling between data
factors and representations. The MIG metric uses mutual information to measure
the correlation between data factors and the learned disentangled
representation; since our work is modeled precisely from the perspective of
mutual information, we perform best on the MIG score.
In addition, we perform transferability experiments by conducting
classification tasks on the learned representations. Here we set the
representation dimension to be the same in all models. First, we pre-train a
model to obtain the representation $z$ and a classifier to predict MNIST image
labels from the representation. We compare the classification accuracy under
different dimension settings in Table 2.
Model | $d_{z}=10\uparrow$ | $d_{z}=16\uparrow$ | $d_{z}=32\uparrow$
---|---|---|---
VAE [15] | 97.21%$\pm$0.42 | 96.62%$\pm$0.51 | 96.41%$\pm$0.22
$\beta$-VAE [12] ($\beta$=6) | 94.32%$\pm$0.48 | 95.22%$\pm$0.36 | 94.78%$\pm$0.53
FactorVAE [14] ($\gamma$=7) | 93.7%$\pm$0.07 | 94.62%$\pm$0.12 | 93.69%$\pm$0.26
$\beta$-TCVAE [7] ($\alpha$=1, $\beta$=8, $\gamma$=2) | 98.4%$\pm$0.04 | 98.6%$\pm$0.05 | 98.9%$\pm$0.11
Guided-VAE [10] | 98.2%$\pm$0.08 | 98.2%$\pm$0.07 | 98.40%$\pm$0.08
BHiVAE (Ours) ($\beta$=10, $\gamma$=3) | 98.2%$\pm$0.09 | 98.7%$\pm$0.10 | 99.0%$\pm$0.05
Table 2: Accuracy of representations in the unsupervised case: bold marks the
best result and blue the second best.
Our model does not achieve the highest accuracy in the case of $d_{z}=10$: the
block representation setting limits the number of factors it can learn at
that size. However, as $d(z)$ increases, our representation can learn more
attribute factors of the data, and the classification accuracy improves
accordingly.
### 4.3 Supervised BHiVAE
In the supervised case, we again perform qualitative and quantitative
experiments to evaluate our model. As in the unsupervised case, the overall
autoencoder is required, plus a classifier to enforce the segmentation of
information at each level, as shown in Fig. 2(b). We set the dimension of the
representation $z$ to 12 ($d(z)=12$, $d(c^{3})=6$, and $d(s^{i})=2$,
$i=1,2,3$).
Figure 6: Traversal results comparison on CelebA: the first column shows the
traversal change of Gender and the second column the change of Black Hair; the
first row is from Guided-VAE [10] and the second row is ours, following the
procedure of Guided-VAE.
We first perform several experiments comparing with Guided-VAE [10] on two
attributes (Gender and Black Hair) and present the results in Fig. 6. When
changing each attribute $s^{i}\in\\{s^{1},s^{2},s^{3}\\}$, we keep the other
attribute representations and the content representation $c^{3}$ unchanged. We
use the third-layer representation $s^{3}$ to control the Gender attribute,
while the first two layers correspond to Black Hair and Bale, respectively. In
the supervised case, compared to Guided-VAE, we use multiple dimensions to
control an attribute, whereas Guided-VAE uses only one dimension, which may
provide insufficient information to control the traversal results. Fig. 6
shows that our model has a broader range of control over attributes,
especially reflected in the range of hair color from pure white to pure black.
Besides, in our quantitative experiment we first pre-train the BHiVAE model
and three attribute classifiers on the representation, then obtain the
representations of the training set, traversing the three representation
blocks $s^{1},s^{2},s^{3}$ from $(-3,-3)$ to $(3,3)$ along the diagonal
($y=x$). Fig. 7 shows that all three attributes have a transformation
threshold in the corresponding representation blocks.
Figure 7: Classifier outputs used to determine whether the attribute is
present. We traverse the Black Hair ($s^{1}$), Bale ($s^{2}$), and Gender
($s^{3}$) attributes.
Figure 8: Comparison of the accuracy of the Block and Single settings for the
Black Hair attribute.
### 4.4 Block Nodes vs. Single Node
In the previous experiments we only judged how well the representation is
disentangled and did not show that the block setting itself is beneficial, so
we set up the following comparison experiment.
For this comparison, we set the dimension of the model representation $z$ to
10, 16, and 32. In the comparison model, we change only the dimension of the
representation $s^{1}$ (Black Hair) in the first layer to 1, so the dimension
of $c^{3}$ changes to 5, 11, and 27 accordingly. We first pre-train the two
models under the same conditions and learn a binary classifier that predicts
the Black Hair attribute from the representation $z$. Fig. 8 shows that Block
is better than Single in every dimension setting, and the accuracy of both
increases with the representation dimension. A likely explanation is that some
information about black hair remains in the other parts of the representation,
and an increasing dimension allows more of this information to be preserved,
giving better prediction accuracy.
## 5 Conclusion and Future Work
We propose a new model, the blocked and hierarchical variational autoencoder,
that formulates and solves the disentanglement problem entirely from the
perspective of information theory. We innovatively propose a blocked
disentangled representation and a hierarchical architecture. Then, following
the idea of information segmentation, we use different methods to guide
information transfer in the unsupervised and supervised cases. Outstanding
performance in both image traversal and representation learning gives BHiVAE a
wide field of application.
## References
* [1] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for boltzmann machines. Cognitive science, 9(1):147–169, 1985.
* [2] Alex Alemi, Ian Fischer, Josh Dillon, and Kevin Murphy. Deep variational information bottleneck. In ICLR, 2017.
* [3] Mohamed Ishmael Belghazi, Aristide Baratin, Sai Rajeshwar, Sherjil Ozair, Yoshua Bengio, Aaron Courville, and Devon Hjelm. Mutual information neural estimation. volume 80 of Proceedings of Machine Learning Research, pages 531–540, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
* [4] Anthony J Bell and Terrence J Sejnowski. An information-maximization approach to blind separation and blind deconvolution. Neural computation, 7(6):1129–1159, 1995.
* [5] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013.
* [6] Yoshua Bengio and Yann Lecun. Scaling Learning Algorithms towards AI. MIT Press, 2007.
* [7] Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pages 2610–2620, 2018.
* [8] Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. Infogan: Interpretable representation learning by information maximizing generative adversarial nets. In Advances in neural information processing systems, pages 2172–2180, 2016.
* [9] Thomas M. Cover and Joy Thomas. Elements of Information Theory. Wiley, 1991.
* [10] Zheng Ding, Yifan Xu, Weijian Xu, Gaurav Parmar, Yang Yang, Max Welling, and Zhuowen Tu. Guided variational autoencoder for disentanglement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7920–7929, 2020.
* [11] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in neural information processing systems, pages 2672–2680, 2014.
* [12] I. Higgins, Loïc Matthey, A. Pal, C. Burgess, Xavier Glorot, M. Botvinick, S. Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. In ICLR, 2017.
* [13] Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Philip Bachman, Adam Trischler, and Yoshua Bengio. Learning deep representations by mutual information estimation and maximization. In ICLR 2019. ICLR, April 2019.
* [14] Hyunjik Kim and Andriy Mnih. Disentangling by factorising. volume 80 of Proceedings of Machine Learning Research, pages 2649–2658, Stockholmsmässan, Stockholm Sweden, 10–15 Jul 2018. PMLR.
* [15] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013.
* [16] Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In ICLR, 2018.
* [17] Yann LeCun. The mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
* [18] Yuheng Li, Krishna Kumar Singh, Utkarsh Ojha, and Yong Jae Lee. Mixnmatch: Multifactor disentanglement and encoding for conditional image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8039–8048, 2020.
* [19] Ralph Linsker. Self-organization in a perceptual network. Computer, 21(3):105–117, 1988.
* [20] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision, pages 3730–3738, 2015.
* [21] Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In international conference on machine learning, pages 4114–4124, 2019.
* [22] Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. Adversarial autoencoders. In International Conference on Learning Representations, 2016.
* [23] Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017.
* [24] XuanLong Nguyen, Martin J Wainwright, and Michael I Jordan. Estimating divergence functionals and the likelihood ratio by convex risk minimization. IEEE Transactions on Information Theory, 56(11):5847–5861, 2010.
* [25] A. Oord, Y. Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. ArXiv, abs/1807.03748, 2018.
* [26] Yuxiang Ren, Bo Liu, Chao Huang, Peng Dai, Liefeng Bo, and Jiawei Zhang. Heterogeneous deep graph infomax. AAAI, 2020.
* [27] Jürgen Schmidhuber. Learning factorial codes by predictability minimization. Neural Comput., 4(6):863–879, 1992.
* [28] Claude E Shannon. A mathematical theory of communication. The Bell system technical journal, 27(3):379–423, 1948.
* [29] Ravid Shwartz-Ziv and Naftali Tishby. Opening the black box of deep neural networks via information. arXiv preprint arXiv:1703.00810, 2017.
* [30] Krishna Kumar Singh, Utkarsh Ojha, and Yong Jae Lee. Finegan: Unsupervised hierarchical disentanglement for fine-grained object generation and discovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6490–6499, 2019.
* [31] Masashi Sugiyama, Taiji Suzuki, and Takafumi Kanamori. Density-ratio matching under the bregman divergence: a unified framework of density-ratio estimation. Annals of the Institute of Statistical Mathematics, 64(5):1009–1044, 2012.
* [32] Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. In Proc. of the 37-th Annual Allerton Conference on Communication, Control and Computing, pages 368–377, 1999.
* [33] Naftali Tishby and Noga Zaslavsky. Deep learning and the information bottleneck principle. In 2015 IEEE Information Theory Workshop (ITW), pages 1–5. IEEE, 2015.
* [34] Petar Velickovic, William Fedus, William L Hamilton, Pietro Lio, Yoshua Bengio, and R Devon Hjelm. Deep graph infomax. In ICLR (Poster), 2019.
* [35] Petar Velickovic, William Fedus, William L. Hamilton, Pietro Liò, Yoshua Bengio, and R. Devon Hjelm. Deep graph infomax. In ICLR. OpenReview.net, 2019.
* [36] Slava Voloshynovskiy, Mouad Kondah, Shideh Rezaeifar, Olga Taran, Taras Holotyak, and Danilo Jimenez Rezende. Information bottleneck through variational glasses. arXiv preprint arXiv:1912.00830, 2019.
* [37] Satosi Watanabe. Information theoretical analysis of multivariate correlation. IBM Journal of research and development, 4(1):66–82, 1960.
|
# Weak deflection angle by electrically and magnetically charged black holes
from nonlinear electrodynamics
Qi-Ming<EMAIL_ADDRESS>, Li<EMAIL_ADDRESS>, and Yu-Xiao<EMAIL_ADDRESS>
(corresponding author)
1Institute of Physics, Shaanxi University of Technology, Hanzhong 723000,
China
2Institute of Theoretical Physics $\&$ Research Center of Gravitation, Lanzhou
University, Lanzhou 730000, China
3Lanzhou Center for Theoretical Physics, Key Laboratory of Theoretical Physics
of Gansu Province, School of Physical Science and Technology, Lanzhou
University, Lanzhou 730000, China
###### Abstract
Nonlinear electrodynamic (NLED) theories are well motivated as extensions of
classical electrodynamics in the strong-field regime and have been extensively
investigated in the search for regular black hole solutions. In this paper, we
focus on two spherically symmetric and static black hole solutions based on
two NLED models, the Euler-Heisenberg NLED model and the Bronnikov NLED model,
and calculate the weak deflection angle of light by these two black holes with
the help of the Gauss-Bonnet theorem. We investigate the effects of the
one-loop corrections to quantum electrodynamics on the deflection angle and
analyse the behavior of the deflection angle for a regular magnetically
charged black hole. It is found that the weak deflection angle of the
electrically charged Einstein-Euler-Heisenberg black hole increases with the
one-loop corrections, and that the regular magnetically charged black hole
based on the Bronnikov NLED model has a smaller deflection angle than the
singular one. Besides, we also calculate the deflection angle of light by the
geodesic method for verification. In addition, we discuss the effects of a
cold non-magnetized plasma on the deflection angle and find that the
deflection angle increases with the plasma parameter.
###### pacs:
95.30.Sf, 98.62.Sb, 97.60.Lf
## I Introduction
To resolve the divergence of the self-energy of a point-like charge, Born and
Infeld generalized Maxwell's theory and proposed Born-Infeld electrodynamics
Born1934 . However, this theory did not attract much attention until its
reemergence at the low-energy scale of some string theories. Afterwards,
Heisenberg and Euler introduced a new extension of the standard
electromagnetic theory (known as Euler-Heisenberg (EH) electrodynamics)
Heisenberg1936 , which takes into account the one-loop corrections to quantum
electrodynamics (QED) and can explain vacuum polarization in QED. As
extensions of Born-Infeld and EH electrodynamics, nonlinear electrodynamic
(NLED) models have since been studied in many contexts. For instance, NLED
models can be used to explain the inflation of the early universe Salcedo2000
; Camara2004 . Some types of NLED models can describe the accelerated
expansion of the universe without invoking dark energy and can remove the Big
Bang singularity Elizalde2003 ; Novello2004 ; Vollick2008 ; Kruglov2015 .
In recent years, NLED models have attracted much more attention for their
ability to yield regular black hole solutions. The first regular black hole
model was proposed by Bardeen Bardeen1968 . However, this regular black hole
was obtained without a specified source associated with its line element.
Remarkably, in 1998, Ay$\acute{\text{o}}$n-Beato and
Garc$\acute{\text{{\i}}}$a found that an NLED model minimally coupled to
general relativity (GR) can be a possible source generating such a regular
black hole solution Ayon-Beato1998 . In Ref. Bronnikov2001 , Bronnikov found a
class of magnetically charged regular black holes in the framework of GR
coupled with a specific NLED model (known as the Bronnikov NLED model).
Subsequently, Hayward proposed a concrete model that can describe both the
collapse and evaporation of black holes Hayward2006 . One can see Refs.
Dymnikova1992 ; Ayon-Beato2000 ; Elizalde2002 ; Nicolini2006 ; Ansoldi2007 ;
Hossenfelder2010 ; Johannsen2013 ; Dymnikova2015 ; Rodrigues2016 ; Fan2016 ;
Chinaglia2017 ; Nojiri2017 ; Yu2020 ; Pedro2020 for more regular black holes
based on NLED models. In this paper, we focus on two black hole models based
on the two particular NLED models mentioned above, i.e., the EH NLED model and
the Bronnikov NLED model, and investigate the weak deflection angle of light
by these two black holes.
On the other hand, it is well known that light rays are bent when passing by a
massive object, the gravitational lensing effect, which is one of the key
predictions of GR. At present, gravitational lensing is one of the most
powerful tools in astronomy and cosmology, used, for example, to measure the
masses of galaxies and clusters Hoekstra2013 ; Brouwer2018 ; Bellagamba2019
and to detect dark energy and dark matter Vanderveld2012 ; He2017 ; Cao2012 ;
Huterer2018 ; Jung2019 ; Andrade2019 ; Turimov2019 . Since the first
measurement of the gravitational bending of light by the Sun, gravitational
lensing effects have been extensively investigated for black holes, wormholes,
cosmic strings, and other objects via the lens equation Keeton1998 ;
Bhadra2003 ; Perlick2004 ; Whisker2005 ; Chen2009 ; Nandi2006 ; Eiroa2002 ;
Mao1991 ; Bozza2002 ; Hoekstra2004 ; Virbhadra2002 ; Virbhadra2000 ; Gallo2011
; Sharif2015 ; Gibbons1993 . In 2008, Gibbons and Werner introduced an
alternative method to calculate the weak deflection angle of light in static
asymptotically flat spacetimes by using the Gauss-Bonnet theorem and the
optical geometry of the spacetime, with the light source and receiver located
at infinity Gibbons2008 . Later, this method was extended to stationary
spacetimes by Werner Werner2012 . In Ref. Ishihara2016 , the authors
investigated the weak deflection of light for a light source and receiver
located at finite distance. The weak deflection of massive particles by this
method was investigated in Refs. Crisnejo2018 ; Jusufi2018 ; Zonghai2020 .
Besides, the weak deflection of light by a black hole immersed in a plasma
medium was discussed in Ref. Crisnejo2018 . One can see Refs. Jusufi2016 ;
Jusufi2017a ; Jusufi2017b ; Ono2017 ; Sakalli2017 ; Jusufi2018a ; Jusufi2018b
; Jusufi2018c ; Arakida2018 ; Ono2018 ; Gyulchev2019 ; Javed2019 ; Sakalli2019
; Crisnejo2019 for more recent works.
Although black holes based on Einstein-Euler-Heisenberg (EEH) theory have been
extensively studied in the literature Yajima2001 ; Ruffini2013 ; Guerrero2020
; Allahyari2020 ; Magos2020 , the weak deflection of light by these black
holes has not been investigated yet. As the deflection angle is a powerful
tool for studying the characteristics of black holes, it is interesting to
investigate the weak deflection angle of the electrically charged EEH black
hole and to determine the effects of the one-loop QED corrections on it.
Besides, although there are many investigations of NLED-based regular black
holes, the weak deflection angle of light by such regular black holes is
rarely investigated. In this paper, we take the Bronnikov NLED black hole with
magnetic charge as an example and investigate the characteristics of this
regular black hole by calculating its deflection angle. What is more, most
astrophysical objects, including black holes, are surrounded by a plasma
medium. Thus, it is interesting to investigate the effects of the plasma
medium on the deflection angle of light by these black holes.
This paper is organized as follows. In Sec. II, we first give a brief review
of the EEH black hole and then calculate the weak deflection angle of light by
this black hole via two different methods, i.e., the Gauss-Bonnet theorem and
the traditional geodesic method. The effects of the plasma on the weak
deflection angle are then studied. In Sec. III, we perform the same procedure
for the Bronnikov NLED black hole and analyse the characteristics of the weak
deflection angle of light by this regular magnetically charged black hole.
Section IV presents the conclusion.
## II Weak deflection angle of light by the Einstein-Euler-Heisenberg black
holes
In this section, we first give a brief review of the Einstein-Euler-Heisenberg
theory and present the spherically symmetric and static solution to this
theory. Then, we will use these results to calculate the weak deflection angle
of light for this black hole by using the Gauss-Bonnet theorem. Besides, the
weak deflection angle of light is also calculated with the null geodesic
method as a verification of the former results. Finally, we will investigate
the deflection angle of light for this black hole immersed in a cold non-
magnetized plasma medium.
### II.1 Einstein-Euler-Heisenberg theory
The action for the Einstein-Euler-Heisenberg theory is given by Allahyari2020
; Magos2020
$\displaystyle S=\frac{1}{4\pi}\int
d^{4}x\sqrt{-g}\left[\frac{1}{4}R-\mathcal{L}(F,G)\right],$ (1)
where $\mathcal{L}(F,G)$ is the functional of the electromagnetic invariants,
$F=\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$ and
$G=\frac{1}{4}F^{\mu\nu}F^{*}_{\mu\nu}$ with $F_{\mu\nu}$ the electromagnetic
field strength and
$F^{*}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\sigma\rho}F^{\sigma\rho}$ its
dual. The Levi-Civita tensor satisfies
$\epsilon_{\mu\nu\sigma\rho}\epsilon^{\mu\nu\sigma\rho}=-4!$. Incorporating
the one-loop corrections to quantum electrodynamics (QED), the
Euler-Heisenberg Lagrangian reads
$\displaystyle\mathcal{L}(F,G)=-F+\frac{a}{2}F^{2}+\frac{7a}{8}G^{2},$ (2)
where $a$ is the Euler-Heisenberg parameter. For $a=0$, the standard Maxwell
electrodynamics is recovered. There are two frameworks in nonlinear
electrodynamics. One is the $F$ framework constructed by the electromagnetic
field tensor $F_{\mu\nu}$ and the other is the $P$ framework constructed by
the tensor $P_{\mu\nu}$, defined by
$\displaystyle
P_{\mu\nu}=-(\mathcal{L}_{F}F_{\mu\nu}+F^{*}_{\mu\nu}\mathcal{L}_{G}),$ (3)
where $\mathcal{L}_{X}=\frac{\partial\mathcal{L}}{\partial X}$. Then, the
$P_{\mu\nu}$ in the Euler-Heisenberg theory can be calculated as
$\displaystyle P_{\mu\nu}=(1-aF)F_{\mu\nu}-\frac{7a}{4}F^{*}_{\mu\nu}G.$ (4)
In the $P$ framework, one can define two independent invariants $P$ and $O$,
$\displaystyle P=-\frac{1}{4}P_{\mu\nu}P^{\mu\nu},\quad\quad
O=-\frac{1}{4}P^{\mu\nu}P^{*}_{\mu\nu},$ (5)
where $P^{*}_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\sigma\rho}P^{\sigma\rho}$.
The equations of motion can be derived as
$\displaystyle R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=8\pi T_{\mu\nu},$ (6)
$\displaystyle\nabla_{\mu}P^{\mu\nu}=0,$ (7)
where the energy-momentum tensor in the $P$ framework is given by
$\displaystyle T_{\mu\nu}=\frac{1}{4\pi}\left[(1-aP)P_{\mu}^{\ \sigma}P_{\nu\sigma}+g_{\mu\nu}\left(P-\frac{3}{2}aP^{2}-\frac{7a}{8}O^{2}\right)\right].$ (8)
### II.2 Spherically symmetric solution in the Einstein-Euler-Heisenberg
theory
The line element for a spherically symmetric and static black hole can be
assumed as
$\displaystyle ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\Omega^{2},$
(9)
where $\mu$ and $\nu$ run from $0$ to $3$, and
$d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}$. According to the symmetry
of the spacetime and restricting to the electric charge $Q$, the $P_{\mu\nu}$
can be calculated as
$\displaystyle P_{\mu\nu}=\frac{Q}{r^{2}}\delta^{0}_{[\mu}\delta^{1}_{\nu]},$
(10)
and the independent electromagnetic invariants are
$\displaystyle P=\frac{Q^{2}}{2r^{4}},\quad\quad O=0.$ (11)
Then the metric function can be solved as Allahyari2020 ; Magos2020
$\displaystyle f(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{aQ^{4}}{20r^{6}},$ (12)
where $M$ is the mass of the black hole.
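As a quick numerical illustration (not part of the derivation), one can locate the outer horizon $f(r)=0$ of the metric function (12) and compare it with the Reissner-Nordström value recovered at $a=0$; the parameter values below are illustrative, in units $G=c=1$.

```python
import numpy as np
from scipy.optimize import brentq

# Outer horizon of Eq. (12) versus the a = 0 Reissner-Nordstrom value.
M, Q, a = 1.0, 0.5, 0.1       # illustrative parameters

def f(r):
    return 1 - 2*M/r + Q**2/r**2 - a*Q**4/(20*r**6)

r_h = brentq(f, 1.0, 4.0)                # EEH outer horizon
r_rn = M + np.sqrt(M**2 - Q**2)          # RN outer horizon (a = 0)
print(r_h, r_rn)  # the one-loop term shifts the horizon only slightly
```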
### II.3 Calculation of deflection angle with the Gauss-Bonnet theorem
The null geodesics satisfy $ds^{2}=0$, which can be rearranged as
$\displaystyle dt^{2}=\gamma_{ij}dx^{i}dx^{j}=\frac{1}{f^{2}}dr^{2}+\frac{r^{2}}{f}d\Omega^{2},$
(13)
where $i$ and $j$ run from $1$ to $3$, and $\gamma_{ij}$ is the so-called
optical metric. After a coordinate transformation $dr^{*}=\frac{1}{f}dr$, the
above expression can be rewritten as
$\displaystyle dt^{2}=dr^{*2}+\tilde{f}^{2}(r^{*})d\phi^{2},$ (14)
where $\tilde{f}(r^{*})\equiv\sqrt{\frac{r^{2}}{f}}$ and we restrict to the
equatorial plane $\theta=\frac{\pi}{2}$. The Gaussian curvature of the optical
spacetime can be
calculated as
$\displaystyle\mathcal{K}=\frac{R_{r\phi r\phi}}{\gamma}=\frac{1}{\sqrt{\gamma}}\left[\frac{\partial}{\partial\phi}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}\Gamma^{\phi}_{rr}\right)-\frac{\partial}{\partial r}\left(\frac{\sqrt{\gamma}}{\gamma_{rr}}\Gamma^{\phi}_{r\phi}\right)\right]$
$\displaystyle=-\frac{2M}{r^{3}}\left(1-\frac{3M}{2r}\right)+\frac{Q^{2}}{r^{4}}\left(3-\frac{6M}{r}\right)+\frac{Q^{4}}{r^{6}}\left(2-\frac{21a}{20r^{2}}+\frac{19aM}{10r^{3}}\right)-\frac{9aQ^{6}}{10r^{10}}+\frac{3a^{2}Q^{8}}{100r^{14}},$
(15)
where $\gamma\equiv\det(\gamma_{ij})$.
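As a consistency check on Eq. (15), the Gaussian curvature of the optical metric (14) can be computed symbolically from $\mathcal{K}=-\tilde{f}''(r^{*})/\tilde{f}$ with $d/dr^{*}=f\,d/dr$; the following sympy sketch verifies the leading Schwarzschild terms ($Q=a=0$).

```python
import sympy as sp

# Symbolic check of Eq. (15): for dt^2 = dr*^2 + ftil^2 dphi^2,
# K = -(1/ftil) d^2 ftil / dr*^2, and d/dr* = f d/dr.
r, M, Q, a = sp.symbols('r M Q a', positive=True)
f = 1 - 2*M/r + Q**2/r**2 - a*Q**4/(20*r**6)
ftil = sp.sqrt(r**2 / f)

K = -(f * sp.diff(f * sp.diff(ftil, r), r)) / ftil
K0 = sp.simplify(K.subs({Q: 0, a: 0}))     # Schwarzschild limit

print(sp.limit(K0 * r**3, r, sp.oo))               # -> -2*M
print(sp.limit((K0 + 2*M/r**3) * r**4, r, sp.oo))  # -> 3*M**2
```

These are exactly the coefficients of the first line of Eq. (15).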
Let the domain $D$ be a compact oriented nonsingular two-dimensional
Riemannian surface with Euler characteristic $\chi(D)$ and Gaussian curvature
$\mathcal{K}$, and bounded by a piecewise smooth curve with geodesic curvature
$\kappa$. Then the Gauss-Bonnet theorem gives the relation between the
deflection angle of light and the Gaussian curvature via
$\displaystyle\int\int_{D}\mathcal{K}dS+\oint_{\partial D}\kappa\,dt+\sum_{i}\beta_{i}=2\pi\chi(D),$ (16)
where $dS$ is the surface element, $\kappa$ stands for the geodesic curvature
of the boundary, defined as $\kappa=|\nabla_{\dot{C}}\dot{C}|$, and
$\beta_{i}$ denotes the $i^{\text{th}}$ exterior angle. For a specific
$\tilde{D}$ bounded by a geodesic $C_{1}$ from the source $S$ to the observer
$O$ and a circular curve $C_{R}$ intersecting $C_{1}$ in $S$ and $O$ at right
angles, Eq. (16) reduces to
$\displaystyle\int\int_{\tilde{D}}\mathcal{K}dS+\int_{C_{R}}\kappa(C_{R})dt=\pi,~{}$
(17)
where we have used $\kappa(C_{1})=0$ and the Euler characteristic
$\chi(\tilde{D})=1$. For the circular curve $C_{R}:=r(\phi)=R=\text{const}$,
the non-zero part of the geodesic curvature can be calculated as
$\displaystyle\kappa(C_{R})=\left(\nabla_{\dot{C}_{R}}\dot{C}_{R}\right)^{r}=\dot{C}^{\phi}_{R}(\partial_{\phi}\dot{C}^{r}_{R})+\Gamma^{r}_{\phi\phi}(\dot{C}^{\phi}_{R})^{2},$
(18)
where $\dot{C}_{R}$ denotes the tangent vector of the circular curve $C_{R}$
and $\Gamma^{r}_{\phi\phi}$ is the Christoffel symbol related to the optical
metric (13). In the last equation it is obvious that the first term vanishes,
and $\Gamma^{r}_{\phi\phi}=-\tilde{f}(r^{*})\tilde{f}^{\prime}(r^{*})$,
$(\dot{C}^{\phi}_{R})^{2}=\frac{1}{\tilde{f}^{2}(r^{*})}$ in the second term.
In the limit $R\rightarrow\infty$, one can obtain
$\displaystyle\lim_{R\rightarrow\infty}\left[\kappa(C_{R})dt\right]$ (19)
$\displaystyle=$
$\displaystyle\lim_{R\rightarrow\infty}[-\tilde{f}^{\prime}(r^{*})]d\phi$
$\displaystyle=$
$\displaystyle\lim_{R\rightarrow\infty}\left(\frac{10R^{4}\left(R(R-3M)+2Q^{2}\right)-2aQ^{4}}{R^{3}\sqrt{100R^{4}\left(R(R-2M)+Q^{2}\right)-5aQ^{4}}}\right)d\phi$
$\displaystyle=$ $\displaystyle d\phi.~{}$
Inserting Eq. (19) into Eq. (17), one has
$\displaystyle\int\int_{\tilde{D}_{R\rightarrow\infty}}\mathcal{K}dS+\int_{0}^{\pi+\alpha}d\phi=\pi.$
(20)
Then the weak deflection angle of light can be calculated as
$\displaystyle\alpha$ $\displaystyle=$
$\displaystyle-\int\int_{\tilde{D}}\mathcal{K}dS=-\int^{\pi}_{0}\int^{\infty}_{\frac{b}{\sin\phi}}\mathcal{K}dS$
(21) $\displaystyle\simeq$ $\displaystyle\frac{4M}{b}-\frac{3\pi
Q^{2}}{4b^{2}}+\frac{7\pi
aQ^{4}}{128b^{6}}+\mathcal{O}(M^{2},a^{2},Q^{4}),~{}$
where we have used the zero-order particle trajectory $r=b/\sin\phi$,
$0\leq\phi\leq\pi$, in the weak deflection limit. It is obvious that the first
two terms are the deflection angle of light by an electrically charged black
hole based on the standard electrodynamics Jusufi2016 . The third term comes
from the influences of the one-loop corrections to QED on the spacetime of the
black hole. It is obvious that the deflection angle increases with the one-
loop corrections while their effects are suppressed by the impact parameter.
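The leading terms of Eq. (21) can be reproduced with a few lines of computer algebra. A minimal sympy sketch (our own check, keeping only the term linear in each of $M$, $Q^{2}$ and $aQ^{4}$ in $\mathcal{K}$ together with the flat-space surface element $dS\approx r\,dr\,d\phi$, which suffices at the quoted order):

```python
import sympy as sp

r, phi, b, M, Q, a = sp.symbols('r phi b M Q a', positive=True)

# leading term of the Gaussian curvature (15) in each small parameter
K_lead = -2*M/r**3 + 3*Q**2/r**4 - sp.Rational(21, 20)*a*Q**4/r**8

# inner radial integral with a generic positive lower limit, then L = b/sin(phi)
L = sp.symbols('L', positive=True)
inner = sp.integrate(K_lead*r, (r, L, sp.oo)).subs(L, b/sp.sin(phi))

alpha = -sp.integrate(inner, (phi, 0, sp.pi))
print(sp.simplify(alpha))
# -> 4*M/b - 3*pi*Q**2/(4*b**2) + 7*pi*a*Q**4/(128*b**6), matching Eq. (21)
```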
### II.4 Calculation of deflection angle by the geodesic method
The Lagrangian of the null geodesics of the Einstein-Euler-Heisenberg black
hole is given by
$\displaystyle
2\mathcal{L}_{*}=-f(r)\dot{t}^{2}+f(r)^{-1}\dot{r}^{2}+r^{2}\big{(}\dot{\theta}^{2}+\sin^{2}\theta\dot{\phi}^{2}\big{)},~{}$
(22)
where $\dot{x}=\frac{dx}{d\tau}$, and $\tau$ is the affine parameter along the
geodesics. Since the Lagrangian is independent on $t$ and $\phi$, one can
obtain two conserved constants:
$\displaystyle p_{t}$ $\displaystyle=$
$\displaystyle\frac{\partial\mathcal{L}_{*}}{\partial\dot{t}}=-f(r)\dot{t}=-E,$
(23) $\displaystyle p_{\phi}$ $\displaystyle=$
$\displaystyle\frac{\partial\mathcal{L}_{*}}{\partial\dot{\phi}}=r^{2}\dot{\phi}\sin^{2}\theta=L.$
(24)
Then the null geodesic equation at the equatorial plane can be obtained as
$\displaystyle\left(\frac{d\phi}{dr}\right)^{2}=\left(\frac{r^{4}}{b^{2}}-r^{2}f(r)\right)^{-1},~{}$
(25)
where the impact parameter is defined as $b=r_{0}/\sqrt{f(r_{0})}$ with
$r_{0}$ the radius of the circular orbit.
The weak bending angle of the light coming from infinity and deflected by a
black hole before arriving at infinity is given by
$\displaystyle\alpha(r_{0})=\Delta\phi(r_{0})-\pi,$ (26)
where $\Delta\phi(r_{0})$ can be solved from Eq. (25) as
$\displaystyle\Delta\phi(r_{0})=2\int_{r_{0}}^{\infty}\left(\frac{r^{4}}{b^{2}}-r^{2}f(r)\right)^{-\frac{1}{2}}dr.~{}$
(27)
It is convenient to define the dimensionless line element as
$\displaystyle dS^{2}$ $\displaystyle=$
$\displaystyle(2M)^{-2}ds^{2}=-f(x)dT^{2}+f(x)^{-1}dx^{2}$ (28)
$\displaystyle+$ $\displaystyle x^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}),$
where we have defined
$\displaystyle x=\frac{r}{2M},\quad T=\frac{t}{2M},\quad
q=\frac{Q}{2M},\quad\hat{\alpha}=\frac{a}{(2M)^{2}},$ (29)
and the function $f(r)$ in the metric (9) can be reexpressed as
$\displaystyle
f(x)=1-\frac{1}{x}+\frac{q^{2}}{x^{2}}-\frac{\hat{\alpha}q^{4}}{20x^{6}}.$
(30)
Then Eq. (27) can be rewritten as
$\displaystyle\Delta\phi(x_{0})\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!2\int_{x_{0}}^{\infty}\\!\\!\sqrt{20}x^{2}x_{0}^{4}\Big{(}\hat{\alpha}q^{4}\left(x_{0}^{8}-x^{8}\right)$
(31) $\displaystyle+$
$\displaystyle\\!\\!20q^{2}x^{4}x_{0}^{4}\left(x^{4}-x_{0}^{4}\right)$
$\displaystyle+$
$\displaystyle\\!\\!20x^{5}x_{0}^{5}\left(x^{3}(x_{0}\\!-\\!1)\\!-\\!xx_{0}^{3}\\!+\\!x_{0}^{3}\right)\Big{)}^{-\frac{1}{2}}\\!\\!dx,$
and the impact parameter can be expressed as
$\displaystyle\frac{b}{2M}=\frac{x_{0}}{\sqrt{f(x_{0})}}.~{}$ (32)
After defining a new variable $z=\frac{x_{0}}{x}$, the above integral can be
rewritten as
$\displaystyle\Delta\phi(x_{0})$ $\displaystyle=$ $\displaystyle
2\int_{0}^{1}\sqrt{20}x_{0}^{3}\Big{(}\hat{\alpha}q^{4}\left(z^{8}-1\right)-20q^{2}x_{0}^{4}\left(z^{4}-1\right)$
(33) $\displaystyle-$ $\displaystyle
20x_{0}^{5}\left(x_{0}\left(z^{2}-1\right)-z^{3}+1\right)\Big{)}^{-\frac{1}{2}}dz.$
Considering the weak gravitational lensing limit $x_{0}\gg 1$ and expanding
the integrand in powers of $\frac{1}{x_{0}}$, the integral can be
evaluated term by term as follows:
$\displaystyle\alpha(x_{0})$ $\displaystyle=$
$\displaystyle\Delta\phi(x_{0})-\pi=\frac{2}{x_{0}}+\left(\frac{\pi}{4}\left(\frac{15}{4}-3q^{2}\right)-1\right)\frac{1}{x_{0}^{2}}+\left(\frac{3}{16}\pi\left(4q^{2}-5\right)-7q^{2}+\frac{61}{12}\right)\frac{1}{x_{0}^{3}}+\bigg{(}\frac{5}{8}\left(20q^{2}-13\right)$
(34) $\displaystyle+$
$\displaystyle\frac{3\pi\left(304q^{4}-2200q^{2}+1155\right)}{1024}\bigg{)}\frac{1}{x_{0}^{4}}+\left(\frac{1}{32}(632-105\pi)q^{4}+\frac{7}{64}(135\pi-536)q^{2}+\frac{7783}{320}-\frac{3465\pi}{512}\right)\frac{1}{x_{0}^{5}}$
$\displaystyle+$ $\displaystyle\bigg{(}\frac{7}{128}\pi\hat{\alpha}
q^{4}-\frac{1}{384}\left(28560q^{4}-59832q^{2}+21397\right)-\frac{105\pi}{16384}\left(192q^{6}-4816q^{4}+8676q^{2}-2959\right)\bigg{)}\frac{1}{x_{0}^{6}}$
$\displaystyle+$
$\displaystyle\mathcal{O}\left(\frac{1}{x_{0}^{7}}\right).~{}$
To obtain the deflection angle in terms of the impact parameter $b$, one needs
the relation between $b$ and $x_{0}$, which can be solved from Eq. (32) in the
weak deflection limit as
$\displaystyle\frac{1}{x_{0}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!\frac{2M}{b}+\frac{1}{2}\left(\frac{2M}{b}\right)^{2}-\frac{1}{8}(4q^{2}-5)\left(\frac{2M}{b}\right)^{3}-\bigg{(}\frac{3q^{2}}{2}$
(35) $\displaystyle-$
$\displaystyle\\!\\!1\bigg{)}\left(\frac{2M}{b}\right)^{4}+\frac{7}{128}\Big{(}16q^{4}-72q^{2}+33\Big{)}\left(\frac{2M}{b}\right)^{5}$
$\displaystyle+$
$\displaystyle\\!\\!\left(5q^{4}\\!-\\!10q^{2}\\!+\\!\frac{7}{2}\right)\left(\frac{2M}{b}\right)^{6}\\!+\\!\mathcal{O}\left(\left(\frac{2M}{b}\right)^{7}\right).~{}$
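Eq. (35) is the series reversion of Eq. (32). A minimal sympy sketch of this inversion (our own reconstruction; writing $u=1/x_{0}$ and $\varepsilon=2M/b$, Eq. (32) reads $\varepsilon=u\sqrt{f(u)}$, which we revert by fixed-point iteration):

```python
import sympy as sp

eps, u, q, ah = sp.symbols('varepsilon u q alphahat', positive=True)
f_u = 1 - u + q**2*u**2 - ah*q**4*u**6/20      # f(x0) with u = 1/x0, Eq. (30)

# iterate u -> eps/sqrt(f(u)); each step gains one order in eps
series_u = eps
for _ in range(6):
    series_u = sp.expand(sp.series(eps/sp.sqrt(f_u.subs(u, series_u)),
                                   eps, 0, 7).removeO())

print(sp.collect(series_u, eps))   # coefficients agree with Eq. (35)
```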
Inserting Eq. (35) into Eq. (34), the weak deflection angle is found to be
$\displaystyle\hat{\alpha}\simeq\frac{4M}{b}-\frac{3\pi Q^{2}}{4b^{2}}+\frac{7\pi
aQ^{4}}{128b^{6}}+\mathcal{O}(M^{2},a^{2},Q^{4}).$ (36)
It is obvious that the above result is in agreement with the result calculated
by using the Gauss-Bonnet theorem. However, it should be noted that this
agreement only holds for the first-order terms and breaks down for the higher-
order corrections.
### II.5 Weak deflection angle in the presence of plasma
In this subsection, we investigate the effects of a cold non-magnetized plasma
on the deflection angle for the Einstein-Euler-Heisenberg black hole. The
refractive index for this black hole is given by Perlick2015 ,
$\displaystyle n(r)=\sqrt{1-\frac{\omega_{e}^{2}}{\omega_{\infty}^{2}}f(r)},$
(37)
where $\omega_{e}$ and $\omega_{\infty}$ denote the electron plasma frequency
and the photon frequency measured by a static observer at infinity,
respectively. The corresponding optical line element can be defined as
$\displaystyle d\sigma^{2}$ $\displaystyle=$
$\displaystyle\gamma_{ij}dx^{i}dx^{j}=-\frac{n^{2}}{g_{00}}g_{ij}dx^{i}dx^{j}$
(38) $\displaystyle=$ $\displaystyle
n^{2}\left(\frac{1}{f^{2}}dr^{2}+\frac{r^{2}}{f}d\phi^{2}\right),~{}$
which is conformally related to the induced metric on the spatial section with
$\theta=\frac{\pi}{2}$. Then the Gaussian curvature can be calculated as
$\displaystyle\tilde{\mathcal{K}}\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!{40\Xi^{-3}r^{4}\left(3aQ^{4}+20r^{4}\left(Mr-Q^{2}\right)\right)^{2}}$
(39) $\displaystyle+$
$\displaystyle\\!\\!\Xi^{-1}\left(\frac{3aQ^{4}}{20r^{8}}+r^{-4}\left(Mr-Q^{2}\right)\right)\Big{(}aQ^{4}$
$\displaystyle-$
$\displaystyle\\!\\!20r^{4}\left(r(r-2M)+Q^{2}\right)\Big{)}-\Xi^{-2}\bigg{(}\frac{9a^{2}Q^{8}}{r^{2}}$
$\displaystyle+$
$\displaystyle\\!\\!20aQ^{4}r^{2}\left(r(18r-19M)+2Q^{2}\right)$
$\displaystyle+$
$\displaystyle\\!\\!400r^{6}\\!\left(-Q^{2}r(M\\!+\\!2r)\\!+\\!Mr^{2}(M\\!+\\!r)\\!+\\!Q^{4}\right)\\!\\!\bigg{)},$
where $\Xi=\left(a\delta
Q^{4}+20r^{4}\left(r^{2}-\delta\left(r(r-2M)+Q^{2}\right)\right)\right)$ and
the plasma parameter is defined by
$\delta\equiv\frac{\omega_{e}^{2}}{\omega_{\infty}^{2}}$. For a photon to
propagate in the plasma, one requires $\omega_{\infty}\geq\omega_{e}$, and
thus $0\leq\delta\leq 1$. For more details about the plasma, one can refer to
Ref. Bisnovatyi-Kogan2010 . Besides, it follows from Eq. (38) that
$\displaystyle\frac{d\sigma}{d\phi}\bigg{|}_{\gamma_{R}}=n\sqrt{\frac{r^{2}}{f}},$
(40)
which results in
$\displaystyle\lim_{R\rightarrow\infty}\tilde{\kappa}(C_{R})\frac{d\sigma}{d\phi}\bigg{|}_{\gamma_{R}}\approx
1.$ (41)
By taking the zero-order particle trajectory $r=\frac{b}{\sin\phi}$ and for
the limit $R\rightarrow\infty$, the Gauss-Bonnet theorem can be written as
$\displaystyle\int^{\pi+\alpha}_{0}d\phi=\pi-\int^{\pi}_{0}\int^{\infty}_{\frac{b}{\sin\phi}}\tilde{\mathcal{K}}dS.$
(42)
Then the deflection angle can be calculated as
$\displaystyle\alpha$ $\displaystyle=$
$\displaystyle-\int^{\pi}_{0}\int^{\infty}_{\frac{b}{\sin\phi}}\tilde{\mathcal{K}}dS$
(43) $\displaystyle\simeq$
$\displaystyle\frac{2M}{b}\left(1+\frac{1}{1-\delta}\right)-\frac{\pi
Q^{2}}{4b^{2}}\left(1+\frac{2}{1-\delta}\right)$ $\displaystyle+$
$\displaystyle\frac{a\pi
Q^{4}}{128b^{6}}\left(1+\frac{6}{1-\delta}\right)+\mathcal{O}(M^{2},a^{2},Q^{4}).~{}$
It can be easily shown that Eq. (43) reduces to Eq. (21) when
$\delta\rightarrow 0$. Moreover, the deflection angle increases with the
plasma parameter $\delta$, which suggests that, for a fixed electron plasma
frequency, the lower the photon frequency measured by a static observer at
infinity, the larger the deflection angle.
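Both claims can be checked directly from Eq. (43). A short sympy sketch (our own check; the positivity of $d\alpha/d\delta$ holds in the weak-field regime $b\gg M,Q$, where the $M/b$ term dominates):

```python
import sympy as sp

M, Q, a, b, delta = sp.symbols('M Q a b delta', positive=True)

alpha_pl = (2*M/b*(1 + 1/(1 - delta))
            - sp.pi*Q**2/(4*b**2)*(1 + 2/(1 - delta))
            + a*sp.pi*Q**4/(128*b**6)*(1 + 6/(1 - delta)))   # Eq. (43)

print(sp.simplify(alpha_pl.subs(delta, 0)))   # recovers Eq. (21)
# each 1/(1-delta) factor grows with delta; the dominant M/b term makes
# d(alpha)/d(delta) > 0 for b >> M, Q
print(sp.simplify(sp.diff(alpha_pl, delta)))
```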
## III Weak deflection angle of light by Einstein-Bronnikov black holes
In this section, we perform the same procedure as in the previous section for
the Einstein-Bronnikov theory, a particular NLED theory that involves only the
relativistic invariant $F$ and admits regular black hole solutions.
### III.1 The Einstein-Bronnikov theory
The action for the Einstein-Bronnikov theory is given by Bronnikov2001
$\displaystyle S=\frac{1}{16\pi}\int
d^{4}x\sqrt{-g}\left[R-\mathcal{L}(F)\right],$ (44)
where
$\displaystyle\mathcal{L}(F)=F\cosh^{-2}\left[\hat{a}\left(F/2\right)^{1/4}\right],$
(45)
and the parameter $\hat{a}$ is related to the black hole mass $M$ and magnetic
charge $Q_{m}$ via $\hat{a}=Q_{m}^{3/2}/(2M)$. The standard Einstein-Maxwell
Lagrangian can be recovered with $\hat{a}\rightarrow 0$.
The equations of motion can be derived as
$\displaystyle R_{\mu\nu}\\!-\\!\frac{1}{2}g_{\mu\nu}R\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!8\pi
T_{\mu\nu}\\!\\!=\\!\\!8\pi\\!\left(2\mathcal{L}_{F}F_{\rho\mu}F_{\nu}^{\rho}\\!-\\!\frac{1}{2}g_{\mu\nu}\mathcal{L}\right)\\!,$
(46) $\displaystyle\nabla_{\mu}(\mathcal{L}_{F}F^{\mu\nu})$ $\displaystyle=$
$\displaystyle 0.$ (47)
Considering the spherically symmetric and static spacetime and restricting to
the magnetic charge $Q_{m}$, the relevant function in the metric analogous to
Eq. (9) can be obtained as Bronnikov2001
$\displaystyle~{}g(r)=1-\frac{2M}{r}\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right),$
(48)
and the gauge field is given by $A_{\mu}=-Q_{m}\cos\theta\delta^{\phi}_{\mu}$.
It can be straightforwardly shown that the metric function (48) reduces to the
Schwarzschild black hole solution with $Q_{m}\rightarrow 0$ and is regular as
$r\rightarrow 0$, which suggests a regular black hole.
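Both limits can be confirmed symbolically, e.g. with sympy (a quick check of our own):

```python
import sympy as sp

r, M, Qm = sp.symbols('r M Q_m', positive=True)
g = 1 - 2*M/r*(1 - sp.tanh(Qm**2/(2*M*r)))   # metric function, Eq. (48)

print(sp.limit(g, Qm, 0))   # 1 - 2*M/r: the Schwarzschild limit
print(sp.limit(g, r, 0))    # 1: the metric function is regular at the center
```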
### III.2 Calculation of deflection angle by the Gauss-Bonnet theorem
The null geodesic condition $ds^{2}=0$ can be rearranged as
$\displaystyle
dt^{2}=\gamma_{ij}dx^{i}dx^{j}=\frac{1}{g^{2}}dr^{2}+\frac{r^{2}}{g}d\Omega^{2}.$
(49)
After a coordinate transformation $dr^{*}=\frac{1}{g}dr$, the above line
element can be rewritten as
$\displaystyle dt^{2}=dr^{*2}+\tilde{g}^{2}(r^{*})d\phi^{2},$ (50)
where $\tilde{g}(r^{*})=\sqrt{\frac{r^{2}}{g}}$ and we have again set
$\theta=\frac{\pi}{2}$. The Gaussian curvature of this optical spacetime can
be calculated as
$\displaystyle\mathcal{K}$ $\displaystyle=$
$\displaystyle-\frac{2M}{r^{3}}\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)+\frac{1}{r^{4}}\bigg{[}3M^{2}\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)^{2}+2Q_{m}^{2}\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)\bigg{]}$
(51) $\displaystyle-$
$\displaystyle\frac{Q_{m}^{2}}{2Mr^{5}}\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)\bigg{[}6M^{2}+(Q_{m}^{2}-6M^{2})\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\bigg{]}$
$\displaystyle-$
$\displaystyle\frac{Q_{m}^{4}}{4r^{6}}\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)\left(1-3\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right).~{}$
Following the same procedures as the previous section, the weak deflection
angle of light by this black hole can be obtained as
$\displaystyle\alpha$ $\displaystyle=$
$\displaystyle-\int\int_{\tilde{D}}\mathcal{K}dS=-\int^{\pi}_{0}\int^{\infty}_{\frac{1}{u(\phi)}}\mathcal{K}dS$
(52) $\displaystyle\simeq$ $\displaystyle\frac{4M}{b}-\frac{3\pi
Q_{m}^{2}}{4b^{2}}-\frac{16MQ_{m}^{2}}{b^{3}}+\mathcal{O}(M^{2},Q_{m}^{3}),~{}$
where $u(\phi)$ is given in Eq. (56). The first two terms are the same as the
weak deflection angle of light by the Reissner-Nordstr$\ddot{\text{o}}$m black
hole Jusufi2016 , except that the electric charge is replaced by the magnetic
charge, and the minus sign in front of the third term indicates that the weak
deflection angle of this regular magnetically charged black hole is smaller
than that of the singular one.
### III.3 Calculation of deflection angle by the geodesic method
The Lagrangian of the null geodesics of the Einstein-Bronnikov black hole is
given by
$\displaystyle
2\mathcal{L}_{*}=-g(r)\dot{t}^{2}+g(r)^{-1}\dot{r}^{2}+r^{2}\big{(}\dot{\theta}^{2}+\sin^{2}\theta\dot{\phi}^{2}\big{)},~{}$
(53)
where $\dot{x}=\frac{dx}{d\tau}$, and $\tau$ is the affine parameter along the
geodesic. Then the null geodesic equation at the equatorial plane can be
obtained as
$\displaystyle\left(\frac{d\phi}{dr}\right)^{2}=\left(\frac{r^{4}}{b^{2}}-r^{2}g(r)\right)^{-1},~{}$
(54)
where the impact parameter is defined as $b=\sqrt{\frac{r_{0}^{2}}{g(r_{0})}}$
with $r_{0}$ the radius of the circular orbit.
After introducing a new variable $u(\phi)=\frac{1}{r}$, the above geodesic
equation can be rewritten as
$\displaystyle\left(\frac{du}{d\phi}\right)^{2}=\frac{1}{b^{2}}-u^{2}+2Mu^{3}\left[1-\tanh\left(\frac{Q_{m}^{2}u}{2M}\right)\right],~{}$
(55)
which can be solved by an iterative method as follows:
$\displaystyle u(\phi)$ $\displaystyle=$
$\displaystyle\frac{\sin\phi}{b}+\frac{M\left(\cos^{2}\phi+1\right)}{b^{2}}-\frac{M^{2}\cos\phi}{8b^{3}}\Big{(}30\phi$
(56) $\displaystyle+$ $\displaystyle
3\sin(2\phi)-20\tan\phi\Big{)}-\frac{Q_{m}^{2}\cos\phi}{2b^{3}}\bigg{(}-\frac{3\phi}{2}$
$\displaystyle+$
$\displaystyle\frac{1}{4}\sin(2\phi)+\tan\phi\bigg{)}+\mathcal{O}(M^{3},Q_{m}^{3})~{}.$
Besides, the bending angle of light can be expressed as
$\displaystyle\hat{\alpha}(r_{0})=\Delta\phi(r_{0})-\pi,$ (57)
where $\Delta\phi(r_{0})$ can be obtained from Eq. (54) as
$\displaystyle\Delta\phi(r_{0})$ $\displaystyle=$ $\displaystyle
2\int_{r_{0}}^{\infty}\bigg{(}\frac{r^{4}}{b^{2}}-r^{2}$ (58) $\displaystyle+$
$\displaystyle
2Mr\left[1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right]\bigg{)}^{-\frac{1}{2}}dr.~{}$
Defining the new dimensionless spacetime coordinates $x=\frac{r}{2M}$ and
$T=\frac{t}{2M}$ and the dimensionless magnetic charge
$q_{m}=\frac{Q_{m}}{2M}$, Eq. (58) can be reexpressed as
$\displaystyle\Delta\phi(x_{0})\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!2\int_{x_{0}}^{\infty}\\!\\!x_{0}^{\frac{3}{2}}\Big{(}x^{4}(-1+x_{0})+xx_{0}^{3}(1-x)$
(59) $\displaystyle-$
$\displaystyle\\!\\!xx_{0}^{3}\tanh\left(q_{m}^{2}/x\right)\\!+\\!x^{4}\tanh\left(q_{m}^{2}/x_{0}\right)\Big{)}^{-\frac{1}{2}}\\!\\!dx,$
and the impact parameter is given by
$\displaystyle\frac{b}{2M}=\frac{x_{0}}{\sqrt{g(x_{0})}}.~{}$ (60)
After defining a new variable $z=\frac{x_{0}}{x}$, the above integral becomes
$\displaystyle\Delta\phi(x_{0})\\!\\!$ $\displaystyle=$
$\displaystyle\\!\\!2\int_{0}^{1}\sqrt{x_{0}}\Big{(}-1+(1-z^{2})x_{0}+z^{3}$
(61) $\displaystyle+$
$\displaystyle\\!\\!\tanh\left(q_{m}^{2}/x_{0}\right)\\!-\\!z^{3}\tanh\left(q_{m}^{2}z/x_{0}\right)\\!\Big{)}^{-\frac{1}{2}}\\!dz.$
Then the weak deflection angle can be evaluated term by term as follows:
$\displaystyle\alpha(x_{0})$ $\displaystyle=$
$\displaystyle\Delta\phi(x_{0})-\pi=\frac{2}{x_{0}}+\left(\frac{\pi}{4}\left(\frac{15}{4}-3q_{m}^{2}\right)-1\right)\frac{1}{x_{0}^{2}}+\left(\frac{3}{16}\pi\left(4q_{m}^{2}-5\right)-7q_{m}^{2}+\frac{61}{12}\right)\frac{1}{x_{0}^{3}}+\bigg{(}\frac{5}{8}\left(20q_{m}^{2}-13\right)$
(62) $\displaystyle+$
$\displaystyle\frac{\pi\left(320q_{m}^{6}+912q_{m}^{4}-6600q_{m}^{2}+3465\right)}{1024}\bigg{)}\frac{1}{x_{0}^{4}}+\bigg{(}\frac{1}{120}(472-75\pi)q_{m}^{6}+\frac{1}{32}(632-105\pi)q_{m}^{4}$
$\displaystyle+$
$\displaystyle\frac{7}{64}(135\pi-536)q_{m}^{2}+\frac{7783}{320}-\frac{3465\pi}{512}\bigg{)}\frac{1}{x_{0}^{5}}+\bigg{(}\frac{7\pi
q_{m}^{10}}{48}+\frac{49\pi
q_{m}^{8}}{64}-\frac{5}{96}(208+63\pi)q_{m}^{6}-\frac{35}{1024}(2176+903\pi)q_{m}^{4}$
$\displaystyle+$
$\displaystyle\frac{9}{4096}(70912+25305\pi)q_{m}^{2}-\frac{21397}{384}-\frac{310695\pi}{16384}\bigg{)}\frac{1}{x_{0}^{6}}+\mathcal{O}\left(\frac{1}{x_{0}^{7}}\right).~{}$
Besides, the relation between $b$ and $x_{0}$ can be obtained from Eq. (60) as
$\displaystyle\frac{1}{x_{0}}$ $\displaystyle=$
$\displaystyle\frac{2M}{b}+\frac{1}{2}\left(\frac{2M}{b}\right)^{2}-\frac{1}{8}(4q_{m}^{2}-5)\left(\frac{2M}{b}\right)^{3}$
(63) $\displaystyle-$
$\displaystyle\left(\frac{3q_{m}^{2}}{2}-1\right)\left(\frac{2M}{b}\right)^{4}+\bigg{(}\frac{q_{m}^{6}}{6}+\frac{7q_{m}^{4}}{8}-\frac{63q_{m}^{2}}{16}$
$\displaystyle+$
$\displaystyle\frac{231}{128}\bigg{)}\left(\frac{2M}{b}\right)^{5}+\bigg{(}\frac{2q_{m}^{6}}{3}+5q_{m}^{4}-10q_{m}^{2}$
$\displaystyle+$
$\displaystyle\frac{7}{2}\bigg{)}\left(\frac{2M}{b}\right)^{6}+\mathcal{O}\left(\left(\frac{2M}{b}\right)^{7}\right).~{}$
Inserting Eq. (63) into Eq. (62), the weak deflection angle is found to be
$\displaystyle\hat{\alpha}\simeq\frac{4M}{b}-\frac{3\pi
Q_{m}^{2}}{4b^{2}}-\frac{16MQ_{m}^{2}}{b^{3}}+\mathcal{O}(M^{2},Q_{m}^{3}),$
(64)
which is in agreement with the result calculated by using the Gauss-Bonnet
theorem.
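As an independent numerical check (our own, not part of the original analysis), the exact bending angle from Eq. (54) can be compared against the weak-field formula (64) by quadrature, using the substitution $z=r_{0}/r$ so that $\Delta\phi=2\int_{0}^{1}dz/\sqrt{g(r_{0})-z^{2}g(r_{0}/z)}$:

```python
import numpy as np
from scipy.integrate import quad

M, Qm = 1.0, 0.3

def g(r):
    # Bronnikov metric function, Eq. (48)
    return 1 - 2*M/r*(1 - np.tanh(Qm**2/(2*M*r)))

def alpha_exact(r0):
    b = r0/np.sqrt(g(r0))                       # impact parameter, Eq. (60)
    integrand = lambda z: 1/np.sqrt(g(r0) - z**2*g(r0/z))
    dphi, _ = quad(integrand, 0, 1)             # integrable endpoint singularity
    return 2*dphi - np.pi, b

for r0 in (50.0, 200.0, 1000.0):
    a_ex, b = alpha_exact(r0)
    a_weak = 4*M/b - 3*np.pi*Qm**2/(4*b**2) - 16*M*Qm**2/b**3   # Eq. (64)
    print(f"b = {b:8.1f}   exact: {a_ex:.6e}   weak-field: {a_weak:.6e}")
```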
### III.4 Weak deflection angle in the presence of plasma
In this subsection, we investigate the effects of a cold non-magnetized plasma
on the deflection angle of the black hole in Einstein-Bronnikov theory. The
refractive index for this black hole is given by Perlick2015 ,
$\displaystyle n(r)=\sqrt{1-\delta^{2}g(r)}.$ (65)
The corresponding optical metric is
$\displaystyle d\sigma^{2}$ $\displaystyle=$
$\displaystyle\gamma_{ij}dx^{i}dx^{j}=-\frac{n^{2}}{g_{00}}g_{ij}dx^{i}dx^{j}$
(66) $\displaystyle=$ $\displaystyle
n^{2}\left(\frac{1}{g^{2}}dr^{2}+\frac{r^{2}}{g}d\phi^{2}\right).~{}$
Then the Gaussian curvature is calculated as
$\displaystyle\tilde{\mathcal{K}}$ $\displaystyle=$
$\displaystyle\frac{1}{4r^{4}\delta^{2}}\bigg{[}-3r^{2}+4M\delta
r-2\delta\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)\left(Q_{m}^{2}+Mr\sinh\left(\frac{Q_{m}^{2}}{Mr}\right)\right)\bigg{]}$
(67) $\displaystyle+$
$\displaystyle\frac{6(\delta-1)Mr^{3}-2Q_{m}^{2}\delta\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)\left(Mr+Q_{m}^{2}\tanh\left(\frac{Q_{m}^{2}}{Mr}\right)\right)}{4Mr^{4}\delta^{2}\left(r(1-\delta)+2M\delta\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)\right)}$
$\displaystyle+$
$\displaystyle\frac{1}{4Mr^{4}\delta^{2}\left(r(1-\delta)+2M\delta\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)\right)^{2}}\bigg{[}Mr^{4}(-5+(8-3\delta)\delta)$
$\displaystyle+$ $\displaystyle
2MQ_{m}^{2}r^{2}\delta(3\delta-2)\text{sech}^{2}\left(\frac{Q_{m}^{2}}{2Mr}\right)-Q_{m}^{4}\delta\text{sech}^{4}\left(\frac{Q_{m}^{2}}{2Mr}\right)\left(3M\delta+r\sinh\left(\frac{Q_{m}^{2}}{Mr}\right)\right)\bigg{]}$
$\displaystyle+$
$\displaystyle\frac{2r\left(r^{2}(\delta-1)-Q_{m}^{2}\delta\text{sech}^{4}\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)^{2}}{4Mr^{4}\delta^{2}\left(r(1-\delta)+2M\delta\left(1-\tanh\left(\frac{Q_{m}^{2}}{2Mr}\right)\right)\right)^{3}},$
and the deflection angle can be obtained as
$\displaystyle\alpha$ $\displaystyle=$
$\displaystyle-\int^{\pi}_{0}\int^{\infty}_{\frac{b}{\sin\phi}}\tilde{\mathcal{K}}dS$
(68) $\displaystyle\simeq$
$\displaystyle\frac{2M}{b}\left(1+\frac{1}{1-\delta}\right)-\frac{\pi
Q_{m}^{2}}{4b^{2}}\left(1+\frac{2}{1-\delta}\right)$ $\displaystyle+$
$\displaystyle\frac{2MQ_{m}^{2}}{b^{3}}\left(\frac{3\delta}{1-\delta}-\frac{8-10\delta}{(1-\delta)^{2}}\right)$
$\displaystyle+$ $\displaystyle\mathcal{O}(M^{2},Q_{m}^{3}).~{}$
It is obvious that Eq. (68) reduces to Eq. (52) when $\delta\rightarrow 0$,
and the deflection angle increases with the plasma parameter $\delta$, which
suggests that, for a fixed electron plasma frequency, the lower the photon
frequency measured by a static observer at infinity, the larger the
deflection angle.
## IV Conclusion
As two well-known nonlinear electrodynamic (NLED) theories, the Euler-
Heisenberg and Bronnikov NLED models are extensively studied in the
literature. In this paper, we considered the spherically symmetric and static
black hole solutions based on these NLED models and calculated the weak
deflection angle of light by these two black holes with the help of the Gauss-
Bonnet theorem. Specifically, for the Einstein-Euler-Heisenberg black hole, we
investigated the effects of the one-loop corrections to quantum
electrodynamics on the deflection angle of light and found that the weak
deflection angle increases with the one-loop corrections. For the Einstein-
Bronnikov black hole, we calculated the weak deflection angle by this regular
magnetically charged black hole and found that it is smaller than that of the
singular one. Besides, the weak deflection angles of both black holes were
also calculated via the geodesic method, and the results were confirmed to
agree with those obtained from the Gauss-Bonnet theorem, at least at low
order. Moreover, the effects of a cold non-magnetized plasma on the weak
deflection angle were discussed, and it was found that the deflection angle
increases with the plasma parameter for both black holes, indicating that,
for a fixed electron plasma frequency, the lower the photon frequency
measured by a static observer at infinity, the larger the deflection angle.
###### Acknowledgements.
This work was supported by Scientific Research Program Funded by Shaanxi
Provincial Education Department (No. 20JK0553), and the National Natural
Science Foundation of China (Grants No. 11875151 and No. 11522541).
## References
* (1) M. Born and L. Infeld, Proc. Roy. Soc. Lond. A 144, (1934) 425.
* (2) W. Heisenberg and H. Euler, Z. Phys. 98, 714 (1936).
* (3) R. Garc$\acute{\text{\i}}$a-Salcedo and N. Breton, Int. J. Mod. Phys. A 15, 4341 (2000).
* (4) C. S. Camara, M. R. de Garcia Maia, J. C. Carvalho, and J. A. S. Lima, Phys. Rev. D 69, 123504 (2004).
* (5) E. Elizalde, J. E. Lidsey, S. Nojiri, and S. D. Odintsov, Phys. Lett. B 574, 1 (2003).
* (6) M. Novello, S. E. Perez Bergliaffa, and J. Salim, Phys. Rev. D 69, 127301 (2004).
* (7) D. N. Vollick, Phys. Rev. D 78, 063524 (2008).
* (8) S. I. Kruglov, Phys. Rev. D 92, 123523 (2015).
* (9) J. M. Bardeen, in Conference Proceedings of GR5 (Tbilisi, URSS, 1968), p. 174.
* (10) E. Ay$\acute{\text{o}}$n-Beato and A. Garc$\acute{\text{{\i}}}$a, Phys. Rev. Lett. 80, 5056 (1998).
* (11) K. A. Bronnikov, Phys. Rev. D 63, 044005 (2001).
* (12) S. A. Hayward, Phys. Rev. Lett. 96, 031103 (2006).
* (13) I. G. Dymnikova, Gen. Relativ. Gravit. 24, 235 (1992).
* (14) E. Ayon-Beato and A. Garcia, Phys. Lett. B 493, 149 (2000).
* (15) E. Elizalde and S. R. Hildebrandt, Phys. Rev. D 65, 124024 (2002).
* (16) P. Nicolini, A. Smailagic, and E. Spallucci, Phys. Lett. B 632, 547 (2006).
* (17) S. Ansoldi, P. Nicolini, A. Smailagic, and E. Spallucci, Phys. Lett. B 645, 261 (2007).
* (18) S. Hossenfelder, L. Modesto, and I. Premont-Schwarz, Phys. Rev. D 81, 044036 (2010).
* (19) T. Johannsen, Phys. Rev. D 88, 044002 (2013).
* (20) I. Dymnikova and E. Galaktionov, Class. Quant. Grav. 32, 165015 (2015).
* (21) M. E. Rodrigues, J. C. Fabris, E. L. B. Junior, and G. T. Marques, Eur. Phys. J. C 76, 250 (2016).
* (22) Z. Y. Fan and X. Wang, Phys. Rev. D 94, 124027 (2016).
* (23) S. Chinaglia and S. Zerbini, Gen. Rel. Grav. 49, 75 (2017).
* (24) S. Nojiri and S. D. Odintsov, Phys. Rev. D 96, 104008 (2017).
* (25) S. Yu and C. J. Gao, Int. J. Mod. Phys. D 29, 2050032 (2020).
* (26) P. Ca$\tilde{\text{n}}$ate, D. Magos, and N. Breton, Phys. Rev. D 101, 064010 (2020).
* (27) H. Hoekstra, M. Bartelmann, H. Dahle, H. Israel, M. Limousin, and M. Meneghetti, Space Sci. Rev. 177, 75 (2013).
* (28) M. M. Brouwer et al., Mon. Not. R. Astron. Soc. 481, 5189 (2018).
* (29) F. Bellagamba et al., Mon. Not. R. Astron. Soc. 484, 1598 (2019).
* (30) R. A. Vanderveld, M. J. Mortonson, W. Hu, and T. Eifler, Phys. Rev. D 85, 103518 (2012).
* (31) H. J. He and Z. Zhang, J. Cosmol. Astropart. Phys. 1708, 036 (2017).
* (32) S. Cao, G. Covone, and Z. H. Zhu, Astrophys. J. 755, 31 (2012).
* (33) D. Huterer and D. L. Shafer, Rep. Prog. Phys. 81, 016901 (2018).
* (34) S. Jung and C. S. Shin, Phys. Rev. Lett. 122, 041103 (2019).
* (35) K. E. Andrade, Q. Minor, A. Nierenberg, and M. Kaplinghat, Mon. Not. R. Astron. Soc. 487, 1905 (2019).
* (36) B. Turimov, B. Ahmedov, A. Abdujabbarov, and C. Bambi, Int. J. Mod. Phys. D 28, 2040013 (2019).
* (37) C. R. Keeton, C. S. Kochanek, and E. E. Falco, Astrophys. J. 509, 561 (1998).
* (38) A. Bhadra, Phys. Rev. D 67, 103009 (2003).
* (39) R. Whisker, Phys. Rev. D 71, 064004 (2005).
* (40) S. B. Chen and J. L. Jing, Phys. Rev. D 80, 024036 (2009).
* (41) K. K. Nandi, Y. Z. Zhang, and A. V. Zakharov, Phys. Rev. D 74, 024020 (2006).
* (42) E. F. Eiroa, G. E. Romero, and D. F. Torres, Phys. Rev. D 66, 024010 (2002).
* (43) S. Mao and B. Paczynski, Astrophys. J. Lett. 374, L37 (1991).
* (44) V. Perlick, Phys. Rev. D 69, 064017 (2004).
* (45) V. Bozza, Phys. Rev. D 66, 103001 (2002).
* (46) H. Hoekstra, H. K. C. Yee, and M. D. Gladders, Astrophys. J. 606, 67 (2004).
* (47) K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 65, 103004 (2002).
* (48) K. S. Virbhadra and G. F. R. Ellis, Phys. Rev. D 62, 084003 (2000).
* (49) E. Gallo and O. M. Moreschi, Phys. Rev. D 83, 083007 (2011).
* (50) M. Sharif and S. Iftikhar, Astrophys. Space Sci. 357, 85 (2015).
* (51) G. W. Gibbons, Phys. Lett. B 308, 237 (1993).
* (52) G. W. Gibbons and M. C. Werner, Class. Quant. Grav. 25, 235009 (2008).
* (53) M. C. Werner, Gen. Relativ. Gravit. 44, 3047 (2012).
* (54) A. Ishihara, Y. Suzuki, T. Ono, T. Kitamura, and H. Asada, Phys. Rev. D 94, 084015 (2016).
* (55) G. Crisnejo and E. Gallo, Phys. Rev. D 97, 124016 (2018).
* (56) K. Jusufi, Phys. Rev. D 98, 064017 (2018).
* (57) Z. H. Li and J. J. Jia, Eur. Phys. J. C 80, 157 (2020).
* (58) K. Jusufi, Astrophys. Space Sci. 361, 24 (2016).
* (59) K. Jusufi, M. C. Werner, A. Banerjee, and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Phys. Rev. D 95, 104012 (2017).
* (60) K. Jusufi, I. Sakalli, and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Phys. Rev. D 96, 024040 (2017).
* (61) T. Ono, A. Ishihara, and H. Asada, Phys. Rev. D 96, 104037 (2017).
* (62) I. Sakalli and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Europhys. Lett. 118, 60006 (2017).
* (63) K. Jusufi and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Phys. Rev. D 97, 024042 (2018).
* (64) K. Jusufi and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Phys. Rev. D 97, 064030 (2018).
* (65) K. Jusufi, A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, J. Saavedra, Y. Vasquez, and P. A. Gonzalez, Phys. Rev. D 97, 124024 (2018).
* (66) H. Arakida, Gen. Relativ. Gravit. 50, 48 (2018).
* (67) T. Ono, A. Ishihara, and H. Asada, Phys. Rev. D 98, 044047 (2018).
* (68) A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, G. Gyulchev, and K. Jusufi, Ann. Phys. 406, 152 (2019).
* (69) A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, K. Jusufi, and I. Sakalli, Phys. Rev. D 99, 024042 (2019).
* (70) W. Javed, R. Babar, and A. $\ddot{\text{O}}$vg$\ddot{\text{u}}$n, Phys. Rev. D 99, 084012 (2019).
* (71) G. Crisnejo, E. Gallo, and A. Rogers, Phys. Rev. D 99, 124001 (2019).
* (72) H. Yajima and T. Tamaki, Phys. Rev. D 63, 064007 (2001).
* (73) R. Ruffini, Y. B. Wu, and S. S. Xue, Phys. Rev. D 88, 085004 (2013).
* (74) M. Guerrero and D. Rubiera-Garcia, Phys. Rev. D 102, 024005 (2020).
* (75) A. Allahyari, M. Khodadi, S. Vagnozzi, and D. F. Mota, J. Cosmol. Astropart. Phys. 2002, 003 (2020).
* (76) D. Magos and N. Bret$\acute{\text{o}}$n, Phys. Rev. D 102, 084011 (2020).
* (77) V. Perlick, O. Y. Tsupko, and G. S. Bisnovatyi-Kogan, Phys. Rev. D 92, 104031 (2015).
* (78) G. S. Bisnovatyi-Kogan and O. Y. Tsupko, Mon. Not. Roy. Astron. Soc. 404, 1790 (2010).
# Zariski density of points with maximal arithmetic degree for surfaces
Kaoru Sano, Takahiro Shibata Faculty of Science and Engineering, Doshisha
University, Kyoto, 610-0394, Japan<EMAIL_ADDRESS>National
University of Singapore, Singapore 119076, Republic of Singapore
<EMAIL_ADDRESS>
###### Abstract.
We prove that any surjective self-morphism $f$ with $\delta_{f}>1$ on a
potentially dense smooth projective surface defined over a number field $K$
has densely many $L$-rational points with maximal arithmetic degree for some
finite extension $L/K$.
###### Key words and phrases:
dynamical degree, arithmetic degree
###### 2010 Mathematics Subject Classification:
Primary 37P55, Secondary 14G05
###### Contents
1. 1 Introduction
2. 2 Definitions and Notation
3. 3 Lemmata
4. 4 Proof of Theorem 1.5
## 1\. Introduction
Let $X$ be a projective variety defined over a number field $K$ and $f\colon
X\longrightarrow X$ a surjective self-morphism over $K$. Then we can define
two dynamical quantities, the dynamical degree $\delta_{f}$ and the arithmetic
degree $\alpha_{f}(x)$ at a point $x\in X(\overline{K})$, see Section 2 for
the definitions.
Relationships between these two quantities have been studied in several
papers. The following result is fundamental.
###### Theorem 1.1 ([KS16b, endomorphism case]).
Let $X$ be a projective variety over a number field $K$, and $f\colon
X\longrightarrow X$ a surjective self-morphism over $K$. Then the limit
defining $\alpha_{f}(x)$ converges and the inequality
$\alpha_{f}(x)\leq\delta_{f}$ holds for all $x\in X(\overline{K})$.
See [KS16a, Theorem 26] and [Mat16, Theorem 1.4] for the case that $f$ is a
dominant rational map.
Matsuzawa, Meng, Zhang, and the second author gave the following conjecture in
[MMSZ20].
###### Conjecture 1.2.
Let $X$ be a projective variety over a number field $K$, $f\colon
X\longrightarrow X$ a surjective morphism, and $d>0$ a positive integer. Then
the set
$Z_{f}(d):=\\{x\in X(\overline{K})\ |\ [K(x):K]\leq
d,\alpha_{f}(x)<\delta_{f}\\}$
is not Zariski dense.
Roughly speaking, Conjecture 1.2 says that the set of points with non-maximal
arithmetic degree is small.
On the other hand, the set of points with maximal arithmetic degree should be
large. To give a precise statement, we prepare the notion of densely many
rational points with maximal arithmetic degree.
###### Definition 1.3.
Let $X$ be a projective variety over a number field $K$ and $f\colon
X\longrightarrow X$ a surjective self-morphism over $K$. Fix an algebraic
closure $\overline{K}$ of $K$ and let $L$ be an intermediate field extension
$\overline{K}/L/K$. We say that $(X,f)$ has densely many $L$-rational points
with maximal arithmetic degree if there is a subset $S\subset X(L)$ satisfying
the following conditions:
* (1)
$S$ is Zariski dense,
* (2)
the equality $\alpha_{f}(x)=\delta_{f}$ holds for all $x\in S$, and
* (3)
$O_{f}(x)\cap O_{f}(y)=\emptyset$ for any two distinct points $x,y\in S$,
where $O_{f}(x)$ is the forward $f$-orbit $\\{f^{n}(x)\ |\ n\geq 0\\}$ of $x$.
If $(X,f)$ has densely many $L$-rational points with maximal arithmetic
degree, we also say that $(X,f)$ satisfies $(DR)_{L}$, for abbreviation. If
there is a finite extension $L/K$ such that $(X,f)$ satisfies $(DR)_{L}$, we
say that $(X,f)$ satisfies $(DR)$.
In [SS20], the authors proved that for a surjective self-morphism $f\colon
X\longrightarrow X$ on a projective variety $X$ both defined over a number
field $K$, $(X,f)$ has densely many $\overline{K}$-rational points with
maximal arithmetic degree. The authors also gave the following question:
###### Question 1.4.
Let $X$ be a projective variety and $f\colon X\longrightarrow X$ a surjective
self-morphism. If $X$ has potentially dense rational points, i.e., there is a
finite extension $L/K$ such that $X(L)$ is Zariski dense, does $(X,f)$
satisfy $(DR)$?
In fact, if $X$ is either a unirational variety, an abelian variety, or a
$\mathbb{P}^{1}$-bundle over an elliptic curve, the authors gave affirmative
answers to Question 1.4 in [SS20]. The main result of this paper is that
Question 1.4 is affirmative if $X$ is a smooth projective surface and $f\colon
X\longrightarrow X$ is a surjective self-morphism with $\delta_{f}>1$.
###### Theorem 1.5.
Let $K$ be a number field, $X$ a smooth projective surface over $K$ having
potentially dense rational points, and $f\colon X\longrightarrow X$ a
surjective self-morphism with $\delta_{f}>1$. Then $(X,f)$ satisfies $(DR)$.
The idea of the proof is as follows. Replacing the self-morphism by an
iterate, we obtain an induced self-morphism on the minimal model. So we may
assume that the given surface is minimal. Since the case of abelian varieties
and unirational varieties is proved in [SS20], the remaining cases are
automorphisms of K$3$ surfaces and non-isomorphic surjective self-morphisms of
elliptic surfaces. These cases are treated by a case-by-case analysis.
The outline of this paper is as follows. In Section 2, we prepare some
notation and definitions which we use in this paper. In Section 3, lemmata to
be used in the proof of Theorem 1.5 are prepared. We prove Theorem 1.5 for
automorphisms of K$3$ surfaces and non-isomorphic surjective self-morphisms of
elliptic surfaces in Section 4.1 and Section 4.2, respectively.
###### Acknowledgments.
The authors thank Professor Shu Kawaguchi for giving valuable comments and
suggesting them writing this paper, and Professor Joseph Silverman for reading
a draft and giving valuable comments. The first author is supported by JSPS
KAKENHI Grant Number JP20K14300. The second author is supported by a Research
Fellowship of NUS. This work was supported by the Research Institute for
Mathematical Sciences, an International Joint Usage/Research Center located in
Kyoto University.
## 2\. Definitions and Notation
###### Notation and Conventions.
* •
Throughout this article, we work over a fixed number field $K$. We fix an
algebraic closure $\overline{K}$ of $K$.
* •
A variety means a geometrically integral separated scheme of finite type over
$K$.
* •
The symbols $\sim$ (resp. $\sim_{\mathbb{Q}}$, $\sim_{\mathbb{R}}$) and
$\equiv$ mean the linear equivalence (resp. $\mathbb{Q}$-linear equivalence,
$\mathbb{R}$-linear equivalence) and the numerical equivalence on divisors.
* •
Let $\mathbb{K}=\mathbb{Q},\mathbb{R}$ or $\mathbb{C}$. For a
$\mathbb{K}$-linear endomorphism $\phi:V\to V$ on a $\mathbb{K}$-vector space
$V$, $\rho(\phi)$ denotes the spectral radius of $\phi$, that is, the maximum
of absolute values of eigenvalues (in $\mathbb{C}$) of $\phi$.
* •
Though the dynamical degree can be defined for any dominant rational self-map,
we need it only for surjective self-morphisms in this paper. Let
$f\colon X\dashrightarrow X$ be a dominant rational map on a projective
variety. We define the (first) dynamical degree $\delta_{f}$ of $f$ as
$\delta_{f}=\lim_{n\to\infty}((f^{n})^{*}H\cdot H^{\dim X-1})^{1/n},$
where $H$ is an ample divisor on $X$; the limit exists and is independent of
the choice of $H$. Let
$f^{*}:\operatorname{NS}(X)_{\mathbb{R}}\to\operatorname{NS}(X)_{\mathbb{R}}$
be the pull-back action on the space of numerical classes of
$\mathbb{R}$-Cartier divisors on $X$. If $f$ is a morphism, then
$\delta_{f}=\rho(f^{*})$, so $\delta_{f}$ is an algebraic integer (see the
sketch after Remark 2.1).
* •
Let $X$ be a projective variety. For an $\mathbb{R}$-Cartier divisor $D$ on
$X$, a function $h_{D}:X(\overline{K})\to\mathbb{R}$ is determined up to a
bounded function; $h_{D}$ is called the height function associated to $D$.
For the definition and properties of height functions, see e.g.
[HS00, Part B] or [Lan83, Chapter 3].
* •
Let $X$ be a projective variety and $f:X\longrightarrow X$ a surjective self-
morphism. Fix an ample height function $h_{H}\geq 1$ on $X$.
* •
For $x\in X(\overline{K})$, we define
$\alpha_{f}(x)=\limsup_{n\to\infty}h_{H}(f^{n}(x))^{1/n},$
which we call the arithmetic degree of $f$ at $x$. The limit defining the
arithmetic degree is known to exist (cf. [KS16b]). Moreover, the arithmetic
degree is independent of the choice of $H$ and $h_{H}$.
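As a small numerical illustration (ours, not from the papers cited here): for the squaring map $f([x:y])=[x^{2}:y^{2}]$ on $\mathbb{P}^{1}_{\mathbb{Q}}$, with the naive height $h(p/q)=\log\max(|p|,|q|)$, the quantities $h(f^{n}(x))^{1/n}$ visibly converge to $\delta_{f}=2$:

```python
from fractions import Fraction
import math

def h(x: Fraction) -> float:
    # naive Weil height on Q: h(p/q) = log max(|p|, |q|) in lowest terms
    return math.log(max(abs(x.numerator), abs(x.denominator)))

x = Fraction(3, 2)              # a non-preperiodic starting point
y = x
for n in range(1, 13):
    y = y*y                     # the squaring map
    if n % 4 == 0:
        print(n, h(y)**(1/n))   # tends to the arithmetic degree 2
```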
###### Remark 2.1.
Dynamical degrees have the following invariance: if $\pi\colon
X\dashrightarrow X^{\prime}$ is a dominant rational map between projective
varieties of the same dimension, and $f\colon X\dashrightarrow X$ and
$f^{\prime}\colon X^{\prime}\dashrightarrow X^{\prime}$ are dominant rational
maps satisfying $\pi\circ f=f^{\prime}\circ\pi$, then the equality
$\delta_{f}=\delta_{f^{\prime}}$ holds. For details on dynamical degrees, see
[Tru20].
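Concretely, when $f$ is a morphism, computing $\delta_{f}$ amounts to a spectral radius computation once the integer matrix of $f^{*}$ on $\operatorname{NS}(X)_{\mathbb{R}}$ is known in some basis. A small numpy sketch (the matrix below is a hypothetical example, not attached to any particular surface):

```python
import numpy as np

# hypothetical integer matrix of f^* acting on NS(X)_R in a fixed basis
A = np.array([[2, 1],
              [1, 1]])

delta_f = max(abs(np.linalg.eigvals(A)))
print(delta_f)   # spectral radius rho(f^*) = (3 + sqrt(5))/2 ~ 2.618
```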
## 3\. Lemmata
In this section, we list lemmata used in the next section.
###### Lemma 3.1.
Consider the following commutative diagram
$\textstyle{X\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{f_{X}}$$\scriptstyle{\pi}$$\textstyle{X\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\pi}$$\textstyle{Y\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{f_{Y}}$$\textstyle{Y,}$
where $X,Y$ are smooth projective varieties and $f_{X}$, $f_{Y}$ are
surjective self-morphisms. Suppose that there exists a non-empty open subset
$U\subset Y$ such that $\pi\colon V:=\pi^{-1}(U)\longrightarrow U$ is finite.
Let $x\in X(\overline{K})$ and $y:=\pi(x)\in Y(\overline{K})$ be such that
$O_{f_{X}}(x)\subset V$ and $O_{f_{Y}}(y)\subset U$. Then the equality
$\alpha_{f_{X}}(x)=\alpha_{f_{Y}}(y)$ holds.
###### Proof.
Since $f_{X}$ and $f_{Y}$ are morphisms, the existence of the limits defining
the arithmetic degrees is known. The equality
$\alpha_{f_{X}}(x)=\alpha_{f_{Y}}(y)$ is a part of [MS20, Lemma 2.8]. ∎
###### Lemma 3.2.
Let $X,Y$ be projective varieties, $f_{X}\colon X\longrightarrow X$ and
$f_{Y}\colon Y\longrightarrow Y$ surjective self-morphisms on $X$ and $Y$,
respectively, and $\pi\colon X\longrightarrow Y$ a surjective morphism such
that $\pi\circ f_{X}=f_{Y}\circ\pi$.
* (a)
If $\pi$ is birational and $(Y,f_{Y})$ satisfies $(DR)$, then $(X,f_{X})$
satisfies $(DR)$.
* (b)
If $\pi$ is finite and $(X,f_{X})$ satisfies $(DR)$, then $(Y,f_{Y})$
satisfies $(DR)$.
###### Proof.
* (a)
Let $L$ be a finite extension of $K$ such that $(Y,f_{Y})$ satisfies
$(DR)_{L}$. Let $S_{Y}=\\{y_{i}\\}_{i=1}^{\infty}\subset Y(L)$ be a sequence
of $L$-rational points such that
* –
$S_{Y}$ is Zariski dense,
* –
$\alpha_{f_{Y}}(y_{i})=\delta_{f_{Y}}$ for $i\geq 1$, and
* –
$O_{f_{Y}}(y_{i})\cap O_{f_{Y}}(y_{j})=\emptyset$ for $i\neq j$.
Let $\\{X_{j}\\}_{j=1}^{\infty}$ be an enumeration of all the proper closed
subsets of $X$ defined over $\overline{K}$ (there are only countably many
such subsets). Let $U\subset X$ and $V\subset Y$ be open subsets such that
$\pi|_{U}\colon U\longrightarrow V$ is an isomorphism. Then we can inductively
take a subsequence $\\{y_{i_{j}}\\}_{j=1}^{\infty}\subset V$ of $S_{Y}$ such
that $x_{j}:=(\pi|_{U})^{-1}(y_{i_{j}})\not\in X_{j}$ for $j\geq 1$. Since
$\alpha_{f_{X}}(x_{j})=\alpha_{f_{Y}}(y_{i_{j}})$ and
$\delta_{f_{X}}=\delta_{f_{Y}}$ by Lemma 3.1, the sequence
$S_{X}:=\\{x_{j}\\}_{j=1}^{\infty}$ satisfies
* –
$x_{j}\not\in X_{j}$ for $j\geq 1$,
* –
$\alpha_{f_{X}}(x_{j})=\delta_{f_{X}}$ for $j\geq 1$, and
* –
$O_{f_{X}}(x_{j})\cap O_{f_{X}}(x_{k})=\emptyset$ for $j\neq k$.
* (b)
Let $L$ be a finite extension of $K$ such that $(X,f_{X})$ satisfies
$(DR)_{L}$. Let $S_{X}=\\{x_{i}\\}_{i=1}^{\infty}\subset X(L)$ be a sequence
of $L$-rational points such that
* –
$S_{X}$ is Zariski dense,
* –
$\alpha_{f_{X}}(x_{i})=\delta_{f_{X}}$ for $i\geq 1$, and
* –
$O_{f_{X}}(x_{i})\cap O_{f_{X}}(x_{j})=\emptyset$ for $i\neq j$.
Let $\\{Y_{j}\\}_{j=1}^{\infty}$ be an enumeration of all the proper closed
subsets of $Y$ defined over $\overline{K}$. Then since $\pi$ is finite, the number of
$k\in\mathbb{Z}_{\geq 1}$ such that $O_{f_{Y}}(\pi(x_{k}))\cap
O_{f_{Y}}(\pi(x_{i}))\neq\emptyset$ is finite for each $i\geq 1$. So we can
inductively take a subsequence $\\{x_{i_{j}}\\}_{j=1}^{\infty}$ such that
* –
$y_{j}:=\pi(x_{i_{j}})\not\in Y_{j}$ for $j\geq 1$,
* –
$\alpha_{f_{Y}}(y_{j})=\alpha_{f_{X}}(x_{i_{j}})=\delta_{f_{X}}=\delta_{f_{Y}}$,
and
* –
$O_{f_{Y}}(y_{j})\cap O_{f_{Y}}(y_{k})=\emptyset$ for $j\neq k$,
where the second assertion follows by Lemma 3.1.
∎
###### Lemma 3.3.
Let $X$ be a projective variety and $f\colon X\longrightarrow X$ a surjective
self-morphism. Then for any integer $t\geq 1$ and a finite extension $L/K$,
$(X,f)$ satisfies $(DR)_{L}$ if and only if $(X,f^{t})$ satisfies $(DR)_{L}$.
###### Proof.
Obviously $(X,f^{t})$ satisfies $(DR)_{L}$ if so does $(X,f)$. Conversely,
assume that $(X,f^{t})$ satisfies $(DR)_{L}$. Let
$S=\\{x_{i}\\}_{i=1}^{\infty}\subset X(L)$ be a subset such that
* •
$S$ is Zariski dense,
* •
$\alpha_{f^{t}}(x_{i})=\delta_{f^{t}}$ for $i=1,2,\ldots$,
* •
$O_{f^{t}}(x_{i})\cap O_{f^{t}}(x_{j})=\emptyset$ for $i\neq j$.
Note that we have
$\alpha_{f}(x)=\alpha_{f^{t}}(x)^{1/t}=\delta_{f^{t}}^{1/t}=\delta_{f}$ for
$x\in S$. Thus, it is enough to prove that for any proper closed subset
$Y\subset X$ and any points $x_{1}^{\prime},x_{2}^{\prime},\ldots
x_{r}^{\prime}\in S$, there is a point $x\in S$ such that $x\not\in Y$ and
$O_{f}(x)\cap O_{f}(x_{i}^{\prime})=\emptyset$ for $1\leq i\leq r$. Since $S$
is Zariski dense, there are infinitely many $i\geq 1$ such that $x_{i}\not\in
Y$. Moreover, the number of $i\geq 1$ such that $O_{f}(x_{i})\cap
O_{f}(x_{j}^{\prime})\neq\emptyset$ is at most $t$ for each $j$. Hence we can
find a point $x_{i}\in S$ with the desired properties. ∎
###### Theorem 3.4.
Let $K$ be a number field. Then there is a constant $N$ such that, for any
elliptic curve defined over $K$, the number of $K$-rational torsion points is
at most $N$.
###### Proof.
See [Mer96]. ∎
###### Lemma 3.5.
Let $\pi\colon X\longrightarrow B$ be an elliptic surface. Then the following
are equivalent.
* (a)
$X(K)$ is Zariski dense.
* (b)
There are infinitely many points $b\in B(K)$ such that $X_{b}=\pi^{-1}(b)$ has
infinitely many $K$-rational points.
###### Proof.
Clearly (b) implies (a). Assume that (b) does not hold. Then we can take an
open subset $B^{\prime}\subset B$ such that
$X^{\prime}=\pi^{-1}(B^{\prime})\overset{\pi}{\longrightarrow}B^{\prime}$
admits the structure of an abelian scheme and $X_{b}(K)$ is finite for any
$b\in B^{\prime}(K)$. Let $N$ be an upper bound of the number of torsion
points of elliptic curves defined over $K$, which we can take by Theorem 3.4.
Let $[N!]\colon X^{\prime}\longrightarrow X^{\prime}$ be the morphism defined
by the multiplication map by $N!$ on each fiber. Then we have
$X(K)\subset\ker([N!])\ \cup\ (\pi^{-1}(B\setminus B^{\prime}))(K).$
Hence $X(K)$ is not Zariski dense. ∎
###### Lemma 3.6.
Let $f\colon X\longrightarrow X$ be a surjective self-morphism with
$\delta_{f}>1$ on a normal projective variety. Assume that there is an ample
$\mathbb{R}$-Cartier divisor $H$ such that
$f^{\ast}H\equiv_{\mathbb{R}}\delta_{f}H$.
* (a)
The set of preperiodic points in $X(L)$ is finite for any finite extension
$L/K$.
* (b)
$x\in X(\overline{K})$ is not preperiodic if and only if
$\alpha_{f}(x)=\delta_{f}$.
###### Proof.
By [MMSZZ20, Theorem 6.4 (1)], there is an ample $\mathbb{R}$-Cartier divisor
$D^{\prime}$ such that
$f^{\ast}D^{\prime}\sim_{\mathbb{R}}\delta_{f}D^{\prime}$. So the assertions
follow from [CS93, Corollary 1.1.1]. ∎
## 4\. Proof of Theorem 1.5
We divide the proof of Theorem 1.5 into two cases: the automorphism case and
the non-isomorphic self-morphism case.
### 4.1. The automorphism case
We start with listing known results on automorphisms of surfaces.
###### Lemma 4.1.
Let $X$ be a smooth projective surface over $\mathbb{C}$, and $f\colon
X\longrightarrow X$ be an automorphism with $\delta_{f}>1$.
* (a)
The set of eigenvalues of $f^{\ast}|_{H^{2}(X,\mathbb{R})}$ with counted
multiplicities is
$\\{\delta_{f},\delta_{f}^{-1},\lambda_{1},\lambda_{2},\ldots,\lambda_{\dim
H^{2}(X,\mathbb{R})-2}\\},$
where $|\lambda_{i}|=1$ for all $i=1,2,\ldots,\dim H^{2}(X,\mathbb{R})-2$.
* (b)
The eigenvalues $\delta_{f}$ and $\delta_{f}^{-1}$ are irrational real
numbers. Moreover, $\delta_{f}^{-1}$ is a Galois conjugate of $\delta_{f}$
over $\mathbb{Q}$.
* (c)
There are numerically non-zero nef $\mathbb{R}$-divisors $D^{+}$ and $D^{-}$
satisfying $f^{\ast}D^{+}\sim_{\mathbb{R}}\delta_{f}D^{+}$ and
$f^{\ast}D^{-}\sim_{\mathbb{R}}\delta_{f}^{-1}D^{-}$, respectively.
* (d)
For a curve $C$ in $X$, $(C\cdot D^{+})=0$ holds if and only if $(C\cdot
D^{-})=0$.
* (e)
Let $D:=D^{+}+D^{-}$. Then the set $\mathcal{C}_{f}$ of irreducible curves $C$
satisfying $(C\cdot D)=0$ is a finite set.
* (f)
The set $\mathcal{C}_{f}$ coincides with the set of $f$-periodic irreducible
curves in $X$.
* (g)
For $\bullet\in\\{+,-\\}$, the set $\mathcal{C}_{f}$ coincides with the set of
irreducible curves $C$ such that $(C\cdot D^{\bullet})=0$ holds.
###### Proof.
* (a)
See [McM02, Theorem 3.2] and [Kaw08, Theorem 2.1].
* (b)
Since $f^{\ast}|_{H^{2}(X,\mathbb{R})}$ is induced by the action of $f^{\ast}$
on $H^{2}(X,\mathbb{Z})$, an integer matrix represents
$f^{\ast}|_{H^{2}(X,\mathbb{R})}$. So $\delta_{f}$ and $\delta_{f}^{-1}$ are
algebraic integers. If $\delta_{f}$ were a rational number, then
$\delta_{f}^{-1}$ would also be rational, so $\delta_{f}$ and
$\delta_{f}^{-1}$ would be integers, forcing $\delta_{f}=\delta_{f}^{-1}=1$,
which contradicts $\delta_{f}>1$. Since $f$ is an isomorphism, the constant
term of the characteristic polynomial of $f^{\ast}|_{H^{2}(X,\mathbb{R})}$ is
$\pm 1$. Since the minimal polynomial of $\delta_{f}$ has integer coefficients
and divides the characteristic polynomial of
$f^{\ast}|_{H^{2}(X,\mathbb{R})}$, the constant term of the minimal polynomial
of $\delta_{f}$ is also $\pm 1$. Since $|\lambda_{i}|=1$ for
$1\leq i\leq\dim H^{2}(X,\mathbb{R})-2$, the number $\delta_{f}^{-1}$ must be
a Galois conjugate of $\delta_{f}$ over $\mathbb{Q}$.
* (c)
See [Kaw08, Proposition 2.5].
* (d)
Let $F$ be the Galois closure of $\mathbb{Q}(\delta_{f})/\mathbb{Q}$. Let
$\sigma\in\operatorname{Gal}(F/\mathbb{Q})$ be an automorphism sending
$\delta_{f}$ to $\delta_{f}^{-1}$. Then since $D^{+}$ and $D^{-}$ lie in nef
classes in
$\operatorname{N}^{1}(X)\otimes_{\mathbb{Z}}\mathbb{Q}(\delta_{f})$, we have
$(C\cdot D^{\bullet})\in\mathbb{Q}(\delta_{f})(\subset F)$ for any curve $C$
in $X$ and for $\bullet\in\\{+,-\\}$. Since $\delta_{f}$ and $\delta_{f}^{-1}$
are Galois conjugate over $\mathbb{Q}$, $\sigma D^{+}$ is proportional to
$D^{-}$. As $\sigma$ is a field automorphism, it follows that $(C\cdot
D^{+})=0$ if and only if $(C\cdot D^{-})=0$.
* (e), (f)
See [Kaw08, Proposition 3.1].
* (g)
If $(C\cdot D)=0$ holds, we have $(C\cdot D^{\bullet})=0$ for
$\bullet\in\\{+,-\\}$ since $D^{+}$ and $D^{-}$ are nef. If $(C\cdot D^{+})=0$
holds, we get $(C\cdot D^{-})=0$ by (d), so $(C\cdot D)=(C\cdot D^{+})+(C\cdot
D^{-})=0$. If $(C\cdot D^{-})=0$ holds, the same argument applies.
∎
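As a numerical illustration of (a) and (b) (our own, not part of the proof): the eigenvalue pattern above says that $\delta_{f}$ is a Salem number, i.e. a root of an integer polynomial with one root outside the unit circle, its inverse inside, and all other roots on it. This is easy to observe for Lehmer's polynomial, the minimal polynomial of the smallest known Salem number (cf. [McM02]):

```python
import numpy as np

# Lehmer's polynomial x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
coeffs = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
mods = np.sort(np.abs(np.roots(coeffs)))

print(mods[-1])      # ~ 1.17628, the Salem number delta
print(mods[0])       # ~ 1/delta
print(mods[1:-1])    # the remaining eight roots all have modulus ~ 1
```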
###### Theorem 4.2.
If $X$ is a smooth projective surface admitting an automorphism $f$ with
$\delta_{f}>1$, then $X$ is either a non-minimal rational surface, or a
surface birational to a K$3$ surface, an Enriques surface, or an abelian
surface.
###### Proof.
See [Can99, Proposition 1]. ∎
###### Proposition 4.3.
Let $X$ be a K$3$ surface over a number field $K$ with an infinite group of
automorphisms. Then there is a rational curve $C\subset X$ such that
$\\#\\{g(C)\ |\ g\in\operatorname{Aut}(X)\\}=\infty$.
###### Proof.
See the proof of [BT00, Theorem 4.10]. ∎
###### Proposition 4.4.
Let $X$ be a K$3$ surface defined over a number field $K$, and $f\colon
X\longrightarrow X$ an automorphism with $\delta_{f}>1$. Then there exists a
rational curve $C\subset X$ such that $\\#\\{f^{n}(C)\ |\ n\geq 0\\}=\infty$.
###### Proof.
Since $X$ contains only finitely many $f$-periodic curves by Lemma 4.1(e),(f),
and there are infinitely many rational curves in $X$ by Proposition 4.3, there
is a rational curve $C\subset X$ which is not $f$-periodic. ∎
###### Proposition 4.5.
Let $X$ be a projective variety and $f:X\longrightarrow X$ a surjective
morphism with $\delta_{f}>1$. Assume the following condition:
(†): There is a nef $\mathbb{R}$-Cartier divisor $D$ on $X$ such that
$f^{*}D\sim_{\mathbb{R}}\delta_{f}D$ and for any proper closed subset
$Y\subset X_{\overline{K}}$, there exists a non-constant morphism
$g:\mathbb{P}^{1}_{K}\longrightarrow X$ such that
$g(\mathbb{P}^{1}_{K})\not\subset Y$ and $g^{*}D$ is ample.
Then $(X,f)$ satisfies the condition $(DR)_{K}$.
###### Proof.
See [SS20, Theorem 4.1]. ∎
###### Proof of Theorem 1.5 when $f$ is an automorphism.
If $X$ is rational, our assertion is true by [SS20]. Assume that $X$ is
irrational. Take a birational morphism $\mu\colon X\longrightarrow Y$ to the
minimal model $Y$ of $X$ and let $g:Y\dashrightarrow Y$ be the birational
automorphism on $Y$ defined as $g=\mu\circ f\circ\mu^{-1}$. Then $g$ is in
fact an automorphism since, if $g$ has indeterminacy, $Y$ must have a
$K_{Y}$-negative curve. By Theorem 4.2 and Lemma 3.2, we may assume that $X$
is either a K$3$ surface, an Enriques surface, or an abelian variety. If $X$
is an abelian variety, our assertion is true by [SS20]. If $X$ is an Enriques
surface, take the universal covering $\pi\colon\tilde{X}\longrightarrow X$.
Then an automorphism $\tilde{f}\colon\tilde{X}\longrightarrow\tilde{X}$ such
that $\pi\circ\tilde{f}=f\circ\pi$ is induced and $\tilde{X}$ is a K$3$
surface. Hence by Lemma 3.2, we may assume that $X$ is a K$3$ surface.
Now it is enough to verify condition (†) in Proposition 4.5. Let $Y$ be a
proper closed subset of $X$. Take a rational curve $\iota\colon
C\hookrightarrow X$ such that $\\#\\{f^{n}\circ\iota(C)\ |\ n\geq 0\\}=\infty$
by Proposition 4.4.
Then there is an integer $n_{Y}\geq 0$ such that
$C_{Y}:=f^{n_{Y}}\circ\iota(C)\not\subset Y$. Let
$g_{Y}:=f^{n_{Y}}\circ\iota$. Since $C_{Y}$ is not $f$-periodic, we have
$(C_{Y}\cdot D^{+})>0$ by Lemma 4.1 (e), (f), so $g_{Y}^{\ast}D^{+}$ is ample.
Hence (†) in Proposition 4.5 is proved. ∎
### 4.2. The non-isomorphic surjective self-morphism case
We prepare the following lemmata to reduce to a minimal surface.
###### Lemma 4.6.
Let $f\colon X\longrightarrow X$ be a non-isomorphic surjective self-morphism
on a smooth projective irrational surface $X$ with $\kappa(X)=-\infty$. Then
there is a positive integer $t$, a birational morphism $\mu\colon
X\longrightarrow X^{\prime}$ to a $\mathbb{P}^{1}$-bundle over a curve $B$
with $g(B)\geq 1$, and a surjective self-morphism $f^{\prime}\colon
X^{\prime}\longrightarrow X^{\prime}$ such that the equality $\mu\circ
f^{t}=f^{\prime}\circ\mu$ holds.
###### Proof.
By [Nak02, Proposition 10], any $(-1)$-curve on $X$ is $f$-periodic. So the
assertion follows. ∎
###### Lemma 4.7.
Let $f\colon X\longrightarrow X$ be a non-isomorphic surjective self-morphism
on a smooth projective surface $X$ with $\kappa(X)\geq 0$. Then $X$ is
minimal.
###### Proof.
See [Fuj02, Lemma 2.3 and Proposition 3.1]. ∎
###### Proof of Theorem 1.5 when $f$ is not an automorphism.
We prove it by using the Enriques–Kodaira Classification and case-by-case
analysis.
* (I)
$\kappa(X)=-\infty$. By Lemma 4.6, Lemma 3.2, and Lemma 3.3, we may assume
that $X$ is either a rational surface or a $\mathbb{P}^{1}$-bundle over a
curve $B$ with genus $g(B)\geq 1$. If $X$ is rational, the assertion follows
by [SS20, Theorem 1.11]. If $g(B)\geq 2$, then $X$ does not have potentially
dense rational points, contradicting our assumption. If $g(B)=1$, $X$ is a
$\mathbb{P}^{1}$-bundle over an elliptic curve; this case is proved in [SS20,
Theorem 6.1].
From now on, assume that $\kappa(X)\geq 0$. Then $X$ is minimal by Lemma 4.7.
* (II)
$\kappa(X)=0$. By [Fuj02, Theorem 3.2], $X$ is either a hyperelliptic surface
or an abelian surface.
* (II-i)
The hyperelliptic surface case. In this case, there is an equivariant finite
covering from an abelian variety. See e.g. the proof of [MSS18, Theorem 7.1]
for details. By taking such an equivariant finite covering and applying Lemma
3.2, we can reduce to the abelian surface case.
* (II-ii)
The abelian surface case. More generally the abelian variety case is proved in
[SS20, Theorem 1.12].
* (III)
$\kappa(X)=1$. We treat this case below.
* (IV)
$\kappa(X)=2$. This case does not occur since any surjective self-morphism of
$X$ is an automorphism of finite order.
Thus the remaining case is the $\kappa(X)=1$ case. Then $X$ admits an elliptic
fibration $\pi\colon X\longrightarrow B$ and $f$ descends to an automorphism
of finite order on $B$ (cf. [MZ19, Theorem A] or [MSS18, Theorem 8.1]).
Replacing $f$ by some iterate $f^{t}$, we may assume that $f$ is a morphism
over the base curve $B$. Let $L$ be a finite extension field of $K$ such that
$X(L)$ is Zariski dense, and let $\\{Y_{i}\\}_{i=1}^{\infty}$ be an
enumeration of all the proper closed subsets of $X$ defined over
$\overline{K}$. It is enough to find
$\\{x_{i}\\}_{i=1}^{\infty}\subset X(L)$ such that
* •
$x_{i}\not\in Y_{i}$,
* •
$\alpha_{f}(x_{i})=\delta_{f}$ for $i=1,2,\ldots$, and
* •
$O_{f}(x_{i})\cap O_{f}(x_{j})=\emptyset$ for $i\neq j$.
By Lemma 3.5, we can find an infinite subset
$\\{b_{j}\\}_{j=1}^{\infty}\subset B(L)$ such that $X_{b_{j}}=\pi^{-1}(b_{j})$
contains infinitely many $L$-rational points for each $j$. Removing finitely
many special fibers, we may assume that each $X_{b_{j}}$ is an elliptic curve
and $f|_{X_{b_{j}}}$ has dynamical degree $\delta_{f}$. By Lemma 3.6, there are
infinitely many $L$-rational points $x\in X_{b_{j}}(L)$ with
$\alpha_{f}(x)=\delta_{f}$ for each $j$. Hence, letting
$\\{b_{j_{i}}\\}_{i=1}^{\infty}$ be a subsequence of
$\\{b_{j}\\}_{j=1}^{\infty}$ such that $X_{b_{j_{i}}}$ is not contained in
$Y_{i}$, we can find $x_{i}\in X_{b_{j_{i}}}(L)$ with
$\alpha_{f}(x_{i})=\delta_{f}$ such that $x_{i}\not\in
Y_{i}$. Since $f$ is a morphism over $B$, we have $O_{f}(x_{k})\cap
O_{f}(x_{l})=\emptyset$ for $k\neq l$. The assertion is proved. ∎
## References
* [BT00] F. A. Bogomolov, Y. Tschinkel, Density of rational points on elliptic K3 surfaces, Asian J. Math. 4 (2000), no. 2, 351–368.
* [CS93] G. S. Call, J. H. Silverman, Canonical heights on varieties with morphisms, Compositio Math. 89 (1993), no. 2, 163–205.
* [Can99] S. Cantat, Dynamique des automorphismes des surfaces projectives complexes, C. R. Acad. Sci. Paris S’er. I Math. 328 (1999), 901–906.
* [Fuj02] Y. Fujimoto, Endomorphisms of smooth projective 3-folds with nonnegative Kodaira dimension, Publ. RIMS, Kyoto Univ. 38 (2002), 33–92.
* [HS00] M. Hindry, J. H. Silverman, Diophantine Geometry: An Introduction, Springer-Verlag, New York, 2000.
* [KS16a] S. Kawaguchi, J. H. Silverman, On the dynamical and arithmetic degrees of rational self-maps of algebraic varieties, J. Reine Angew. Math. 713 (2016), 21–48.
* [KS16b] S. Kawaguchi, J. H. Silverman, Dynamical canonical heights for Jordan blocks, arithmetic degrees of orbits, and nef canonical heights on abelian varieties, Trans. Amer. Math. Soc. 368 (2016), no. 7, 5009–5035.
* [Kaw08] S. Kawaguchi, Projective surface automorphisms of positive topological entropy from an arithmetic viewpoint, Amer. J. Math. 130 (2008), no. 1, 159–186.
* [Lan83] S. Lang, Fundamentals of Diophantine Geometry, Springer-Verlag, New York, 1983.
* [Mat16] Y. Matsuzawa, On upper bounds of arithmetic degrees, arXiv:1606.00598
* [McM02] C. T. McMullen, Dynamics on K3 surfaces: Salem numbers and Siegel disks, J. Reine Angew. Math. 545 (2002), 201–233.
* [MS20] Y. Matsuzawa, K. Sano, Arithmetic and dynamical degrees of self-morphisms of semi-abelian varieties, Ergodic Theory Dynam. Systems 40 (2020), no. 6, 1655–1672.
* [MSS18] Y. Matsuzawa, K. Sano, T. Shibata, Arithmetic degrees and dynamical degrees of endomorphisms on surfaces, Algebra Number Theory 12 (2018), no. 7, 1635–1657.
* [MMSZ20] Y. Matsuzawa, S. Meng, T. Shibata, D.-Q. Zhang, Non-density of points of small arithmetic degrees, arXiv:2002.10976
* [MMSZZ20] Y. Matsuzawa, S. Meng, T. Shibata, D.-Q. Zhang, G. Zhong, Invariant subvarieties with small dynamical degree, arXiv:2005.13368
* [MZ19] S. Meng, D.-Q. Zhang, Kawaguchi–Silverman Conjecture for surjective endomorphisms, arXiv:1908.01605
* [Mer96] L. Merel, Bornes pour la torsion des courbes elliptiques sur les corps de nombres, Invent. Math. 124 (1996), no. 1-3, 437–449.
* [Nak02] N. Nakayama, Ruled surfaces with non-trivial surjective endomorphisms, Kyushu J. Math. 56 (2002), 433–446.
* [SS20] K. Sano, T. Shibata, Zariski density of points with maximal arithmetic degree, arXiv:2007.15180
* [Tru20] T.-T. Truong, Relative dynamical degrees of correspondences over a field of arbitrary characteristic, J. Reine Angew. Math. 758 (2020), 139–182.
To cite: Prerna Juneja and Tanushree Mitra. 2021. Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). Association for Computing Machinery. DOI: https://doi.org/10.1145/3411764.3445250
# Auditing E-Commerce Platforms for Algorithmically Curated Vaccine
Misinformation
Prerna Juneja, The Information School, University of Washington, Seattle, WA, USA<EMAIL_ADDRESS>and Tanushree Mitra, The Information School, University of Washington, Seattle, WA, USA<EMAIL_ADDRESS>
(2021)
###### Abstract.
There is a growing concern that e-commerce platforms are amplifying vaccine misinformation. To investigate, we conduct two sets of algorithmic audits for vaccine misinformation on the search and recommendation algorithms of Amazon, the world’s leading e-retailer. First, we systematically audit search results for vaccine-related search queries without logging into the platform (unpersonalized audits). We find that 10.47% of search results promote misinformative health products. We also observe a ranking bias, with Amazon ranking misinformative search results higher than debunking search results. Next, we analyze the effects of personalization due to account history, where history is built progressively by performing various real-world user actions, such as clicking a product. We find evidence of a filter-bubble effect in Amazon’s recommendations; accounts performing actions on misinformative products are presented with more misinformation compared to accounts performing actions on neutral and debunking products. Interestingly, once a user clicks on a misinformative product, homepage recommendations become more contaminated compared to when the user shows an intention to buy that product.
search engines, health misinformation, vaccine misinformation, algorithmic
bias, personalization, algorithmic audits, search results, recommendations,
e-commerce platforms
## 1\. Introduction
The recent onset of the coronavirus pandemic has unleashed a barrage of online health misinformation (Ball and Maxmen, 2020; Financial, 2020) and renewed focus on the anti-vaccine movement, with anti-vax social media accounts witnessing a 19% increase in their follower base (Owen, 2020). As scientists work towards creating a vaccine for the disease, health experts worry that vaccine hesitancy could make it difficult to achieve herd immunity against the new virus (Ball, 2020). Battling health misinformation, especially anti-vaccine misinformation, has never been more important.
Statistics show that people increasingly rely on the internet (Rainie and Fox,
2000), and specifically online search engines (Center, 2006), for health
information including information about medical treatments, immunizations,
vaccinations and vaccine-related side effects (Fox, 2006; Bragazzi et al.,
2017). Yet, the algorithms powering search engines are not traditionally
designed to take into account the credibility and trustworthiness of such
information. Since search platforms are the primary gateway and reportedly the most trusted source (Edelman and Luca, 2014), persistent vaccine misinformation on them can cause serious health ramifications (Kata, 2010).
Thus, there has been a growing interest in empirically investigating search
engine results for health misinformation. While multiple studies have
performed audits on commercial search engines to investigate problematic
behaviour (Hu et al., 2019; Robertson et al., 2018; Hussein et al., 2020),
e-commerce platforms have received little to no attention ((Chen et al., 2016;
Shin and Valente, 2020) are two exceptions), despite critics calling
e-commerce platforms, like Amazon, a “dystopian” store for hosting anti-vaccine books (Diresta, 2019). Amazon specifically has faced criticism from
several technology critics for not regulating health-related products on its
platform (Reynolds, 2019; Belluz, 2016). Consider the most recent instance.
Several medically unverified products for coronavirus treatment, like prayer healing, herbal treatments and antiviral vitamin supplements, proliferated on Amazon (Goldhill, 2020; Dreisbach, 2020), so much so that the company had to remove 1 million fake products after several instances of such treatments were reported by the media (Financial, 2020). The scale of the problematic content
suggests that Amazon could be a great enabler of misinformation, especially
health misinformation. It not only hosts problematic health-related content
but its recommendation algorithms drive engagement by pushing potentially
dubious health products to users of the system (Glaser, 2017; Shin and
Valente, 2020). Thus, in this paper we investigate Amazon, the world’s leading e-retailer, for the most critical form of health misinformation: vaccine misinformation.
What is the amount of misinformation present in Amazon’s search results and
recommendations? How does personalization due to user history built
progressively by performing real-world user actions, such as clicking or
browsing certain products, impact the amount of misinformation returned in
subsequent search results and recommendations? In this paper, we delve into these questions. We conduct two sets of systematic audit experiments:
_Unpersonalized audit_ and _Personalized audit_. In the _Unpersonalized audit_
, we adopt Information Retrieval metrics from prior work (Kulshrestha et al.,
2017) to determine the amount of health misinformation users are exposed to
when searching for vaccine-related queries. In particular, we examine search-
results of 48 search queries belonging to 10 popular vaccine-related topics
like ‘hpv vaccine’, ‘immunization’, ‘MMR vaccine and autism’, etc. We collect
search results without logging in to Amazon to eliminate the influence of
personalization. To gain in-depth insights about the platform’s searching and
sorting algorithm, our _Unpersonalized audits_ ran for 15 consecutive days,
sorting the search results across 5 different Amazon filters each day:
“featured”, “price low to high”, “price high to low”, “average customer
review” and “newest arrivals”. The first audit resulted in 36,000 search
results and 16,815 product page recommendations which we later annotated for
their stance on health misinformation—promoting, neutral or debunking.
In our second set of audits, the _Personalized audit_, we determine the impact of personalization due to user history on the amount of health misinformation returned in search results, recommendations and auto-complete suggestions. User history is built progressively over 7 days by performing several real-world actions, such as “search”, “search + click”, “search + click + add to cart”, “search + click + mark top-rated all positive review as helpful”, “follow contributor” and “search on third party website” (Google.com in our case). We collect several Amazon components in our Personalized audit,
like homepages, product pages, pre-purchase pages, search results, etc. Our
audits reveal that Amazon hosts a plethora of health misinformative products
belonging to several categories, including Books, Kindle eBooks, Amazon
Fashion (e.g. apparel, t-shirt, etc.) and Health & Personal care items (e.g.
dietary supplements). We also establish the presence of a filter-bubble effect
in Amazon’s recommendations, where recommendations of misinformative health
products contain more health misinformation.
Below we present our formal research questions, key findings, contributions
and implications of this study, along with the ethical considerations taken while conducting platform audits.
### 1.1. Research Questions and Findings
In our first set of audits, we ask,
RQ1 [_Unpersonalized audit_]: What is the amount of health misinformation
returned in various Amazon components, given that the components are not affected by
user personalization?
* RQ1a: How much are Amazon’s search results contaminated with misinformation?
* RQ1b: How much are recommendations contaminated with misinformation? Is there a filter-bubble effect in recommendations?
We find a higher percentage of products promoting health misinformation
(10.47%) compared to products that debunk misinformation (8.99%) in the
unpersonalized search results. We discover that Amazon returns a high number of misinformative search results when users sort their searches by the “featured” filter and a high number of debunking results when they sort results by the “newest arrivals” filter. We also find Amazon ranking misinformative results higher than debunking results, especially when results are sorted by the filters
“average customer reviews” and “price low to high”. Overall, search results of
topics “vaccination”, “andrew wakefield” and “hpv vaccine” contain the highest
misinformation bias when sorted by the default filter “featured”. Our analysis of
product page recommendations suggests that recommendations of products
promoting health misinformation contain more health misinformation when
compared to recommendations of neutral and debunking products.
RQ2 [_Personalized audit_]: What is the effect of personalization due to user
history on the amount of health misinformation returned in various Amazon
components, where user history is built progressively by performing certain
actions?
* RQ2a: How are _search results_ affected by various user actions?
* RQ2b: How are _recommendations_ affected by various user actions? Is there a filter-bubble effect in the recommendations?
* RQ2c: How are the _auto-complete suggestions_ affected by various user actions?
Our _Personalized audit_ reveals that search results sorted by filters
“average customer review”, “price low to high” and “newest arrivals”, along with auto-complete suggestions, are not personalized. Additionally, we find
that user actions involving clicking a search product lead to personalized homepages.
homepages. We find evidence of filter-bubble effect in various recommendations
found in homepages, product and pre-purchase pages. Surprisingly, the amount
of misinformation present in homepages of accounts building their history by
performing actions “search + click” and “mark top-rated all positive review as
helpful” on misinformative products was more than the amount of misinformation
present in homepages of accounts that added the same misinformative products
to cart. The finding suggests that Amazon nudges users more towards
misinformation once a user shows interest in a misinformative product by
clicking on it but has not shown any intention of purchasing it. Overall, our
audits suggest that Amazon has a severe vaccine/health misinformation problem
exacerbated by its search and recommendation algorithms. Yet, the platform has
not taken any steps to address this issue.
### 1.2. Contributions and Implications
In the absence of an online regulatory body monitoring the quality of content
created, sold and shared, vaccine misinformation is rampant on online
platforms. Through our work, we specifically bring the focus on e-commerce
platforms since they have the power to influence browsing as well as buying
habits of millions of people. We believe our study is the first large-scale
systematic audit of an e-commerce platform that investigates the role of its
algorithms in surfacing and amplifying vaccine misinformation. Our work
provides an elaborate understanding of how Amazon’s algorithm introduces misinformation bias in the product selection stage and the ranking of search results across 5 Amazon filters for 10 impactful vaccine-related topics. We find that even the use of different search filters on Amazon can dictate what kind of content a user is exposed to. For example, use of the default filter “featured” leads users to more health misinformation, while sorting search results by the filter “newest arrivals” leads users to products debunking health-related misinformation. Ours is also the first study to empirically establish
how certain real-world actions on health misinformative products on Amazon
could drive users into problematic echo chambers of health misinformation.
Both our audit experiments resulted in a dataset of 4,997 unique Amazon
products distributed across 48 search queries, 5 search filters, 15
recommendation types, and 6 user actions, conducted over 22 (15+7) days (data available at https://social-comp.github.io/AmazonAudit-data/). Our findings suggest that
traditional recommendation algorithms should not be blindly applied to all
topics equally. There is an urgent need for Amazon to treat vaccine-related
searches as searches of higher importance and ensure higher quality content
for them. Finally, our findings also have several design implications that we
discuss in detail in Section 7.4.
### 1.3. Ethical Considerations
We took several steps to minimize the potential harm of our experiments to
retailers. For example, buying and later returning an Amazon product for the
purpose of our project can be deemed unethical, and thus we avoided performing
this activity. Similarly, writing a fake positive review about an Amazon
product containing misinformation could negatively influence the audience.
Therefore, in our _Personalized audit_ we explored alternatives that could mimic a similar, if not the same, influence as the aforementioned activities. For example, instead of buying a product, we performed the “add to cart” action, which shows a user’s intent to purchase a product. Instead of writing positive reviews for products, we marked the top-rated positive review as helpful. Since accounts did not have any purchase history, marking a review helpful did not increase the “Helpful” count for that review. Through this activity, the account shows a positive reaction towards the product while avoiding manipulation, and thus does not impact potential buyers or users. Lastly, we refrained from performing the experiments on real-world
users. Performing actions on misinformative products could contaminate users’
searches and recommendations. It could potentially have long-term consequences
in terms of what types of products are pushed at participants. Thus, in our
audit experiments, accounts were managed by bots that emulated the actions of
actual users.
## 2\. Related work
### 2.1. Health misinformation in online systems
The current research on online health misinformation, including vaccine misinformation, spans three broad themes: 1) quantifying the characteristics of
anti-vaccine discourse (Mitra et al., 2016; Mønsted and Lehmann, 2019; Cossard
et al., 2020), 2) building machine learning models to identify users engaging
with health misinformation or instances of health misinformation itself
(Ghenai and Mejova, 2018; Dai et al., 2020; Ghenai and Mejova, 2017) and 3)
designing and evaluating effective interventions to ensure that users
critically think when presented with health (mis)information (Kim et al.,
2020; van der Meer and Jin, 2020). Most of these studies are post-hoc
investigations of health misinformation, i.e., the misinformation has already
propagated. Moreover, existing scholarship rarely takes into account how the
user encountered health misinformation or what role is played by the source of
the misinformation. With the increasing reliance on online sources for health
information, search engines have become the primary avenue of such
information, with 55% of American adults relying on the web to get medical
information (Rainie and Fox, 2000). A Pew survey reports that for 5.9M people,
web search results influenced their decision to visit a doctor and 14.7M
claimed that online information affected their decision on how to treat a
disease (Rainie and Fox, 2000). Given how medical information can directly
influence one’s health and well-being, it is essential that search engines
return quality results in response to health-related search queries. However, online health information is currently contaminated by several outlets.
These sources could be conspiracy groups or websites spreading misinformation
due to vested interests or companies having commercial interests in selling
herbal cures or fictitious medical treatments (Schwitzer, 2017). Moreover,
online curation algorithms themselves are not built to take into account the
credibility of information. Thus, it is of paramount importance that the role of search engines in surfacing health misinformation is investigated. How
can we empirically and systematically probe search engines to investigate
problematic behaviour like the prevalence of health misinformation? In the next section, we briefly describe the emerging research field of “algorithmic auditing”, which focuses on investigating search engines to reveal problematic biases, and discuss our contribution to this growing research space.
### 2.2. Search engine audits
Search engines are modern day gatekeepers and curators of information. Their
black-box algorithm can shape user behaviour, alter beliefs and even affect
voting behaviour either by impeding or facilitating the flow of certain kinds
of information (Epstein and Robertson, 2015; Diakopoulos et al., 2018;
Knobloch-Westerwick et al., 2015). Despite their importance and the power they exert, to date, search engine results and recommendations have mostly been unregulated. The information quality of a search engine’s output is still measured in terms of relevance, and it is up to the user to determine the credibility of information. Thus, researchers have advocated for making algorithms more accountable. One primary method to achieve this is to perform systematic audits to empirically establish the conditions under which problematic behavior surfaces. Raji et al. provide the following definition of algorithmic audits: an algorithmic audit involves the collection and analysis of outcomes
from a fixed algorithm or defined model within a system. Through the simulation of a mock user population, these audits can uncover problematic
patterns in models of interest (Raji and Buolamwini, 2019).
Figure 1. (a) Amazon homepage recommendations. (b) Pre-purchase recommendations displayed to users after adding a product to cart. (c) Product page recommendations. The recommendation types shown on each page are specified in Table 1.
Previous audit studies have investigated the search engines for partisan bias
(Robertson et al., 2018; Mustafaraj et al., 2020), gender bias (Chen et al.,
2018; Kay et al., 2015), content diversity (Trielli and Diakopoulos, 2019;
Steiner et al., 2020; Puschmann, 2019), and price discrimination (Hannak et
al., 2014). However, only a few have systematically investigated search
engines’ role in surfacing misinformation ((Hussein et al., 2020) is the only
exception). Moreover, there is a dearth of systematic audits focusing
specifically on health misinformation. The past literature mostly consists of
small-scale experiments that probe search engines with a handful of search
queries. For example, an analysis of the first 30 pages of search results for
query “vaccines autism” revealed that Google.com has 10% less anti-vaccine
search results compared to the other search engines, like Qwant, Swisscows and
Bing (Ghezzi et al., 2020). In contrast, search results present in the first 102 pages for the query “autism vaccine” on Google’s Turkey version included 20% websites with incorrect information (Erden et al., 2019). One recently
published work, closely related to this study, examined Amazon’s first 10
pages of search results in response to the query “vaccine”. They only
collected and annotated books appearing in the searches for misinformation
(Shin and Valente, 2020). The aforementioned studies probed the search engine with a single query and analyzed multiple pages of search results.
We, on the other hand, perform our _Unpersonalized audit_ on a curated list of
48 search queries belonging to 10 most searched vaccine-related topics,
spanning various combinations of search filters and recommendation types, over
multiple days, an aspect missing in prior work. Additionally, we are the first to experimentally quantify the prevalence of misinformation in various
search queries, topics, and filters on an e-commerce platform. Furthermore,
instead of just focusing on books, we analyze the platform for products
belonging to different categories, resulting in an extensive all-category
inclusive coding scheme for health misinformation.
Another recent study audited YouTube for various
misinformative topics including vaccine controversies (Hussein et al., 2020).
The work established the effect of personalization due to watching videos on
the amount of misinformation present in search results and recommendations on
YouTube. However, there are no studies investigating the impact of
personalization on misinformation present in the product search engines of
e-commerce platforms. Our work fills this gap by conducting a second set of
audits, the _Personalized audit_, where we shortlist several real-world user actions
and investigate their role in amplifying misinformation in Amazon’s searches
and recommendations.
Recommendation page | Recommendation types
---|---
Homepage | Related to items you’ve viewed; Inspired by your shopping trends; Recommended items other customers often buy again
Pre-purchase page | Customers also bought these highly rated items; Customers also shopped these items; Related to items you’ve viewed; Frequently bought together; Related to items; Sponsored products related; Top picks for
Product page | Frequently bought together; Customers who bought this item also bought; Customers who viewed this item also viewed; Sponsored products related to this item; What other items customers buy after viewing this item
Table 1. The 15 recommendation types spread across 3 recommendation pages.
Figure 2. (a) Google Trends’ Related Topics list for the topic vaccine. People who searched for the vaccine topic also searched for these topics. (b) Google Trends’ Related queries list for the topic vaccine. These are the top search queries searched by people related to the vaccine topic. (c) Amazon’s auto-complete suggestions displaying popular and trending search queries.
## 3\. Amazon components and terminology
For the audits, we collected 3 major Amazon components and numerous sub-components. We list them below.
(1) Search results: These are products present on Amazon’s Search Engine Results
Page (SERP) returned in response to a search query. SERP results can be sorted
using five filters: “featured”, “price low to high,” “price high to low,”
“average customer review” and “newest arrivals.”
(2) Auto-complete suggestions: These are the popular and trending search queries
suggested by Amazon when a query is typed into the search box (see Figure
2(c)).
(3) Recommendations: Amazon presents several recommendations as users navigate
through the platform. For the purpose of this project, we collect
recommendations present on three different Amazon pages: homepage, pre-
purchase page and product pages. Each page hosts several types of
recommendations. Table 1 shows the 15 recommendation types collected across 3
recommendation pages. We describe all three recommendations below.
(a) Homepage recommendations: These recommendations are present on the homepage of
a user’s Amazon account. They could be of three types namely, “Related to
items you’ve viewed”, “Inspired by your shopping trends” and “Recommended
items other customers often buy again” (see Figure 1(a)). Any of the three
types together or separately could be present on the homepage depending on the
actions performed by the user. For example, “Inspired by your shopping trends”
recommendation type appears when a user performs one of two actions: either
makes a purchase or adds a product to cart.
(b) Pre-purchase recommendations: These recommendations consist of product
suggestions that are presented to users after they add product(s) to cart.
These recommendations could be considered as a nudge to purchase other similar
products. Figure 1(b) displays pre-purchase page. The page has several
recommendations like “Frequently bought together”, “Customers also bought
these highly rated items”, etc. We collectively call these recommendations pre-purchase recommendations.
(c) Product recommendations: These are the recommendations present on the product
page, also known as the details page (https://sellercentral.amazon.com/gp/help/external/51). The page contains
details of an Amazon product, like product title, category (e.g., Amazon
Fashion, Books, Health & Personal care, etc.), description, price, star
rating, number of reviews, and other metadata. The details page is home to
several different types of recommendations. We extracted five: “Frequently
bought together”, “What other items customers buy after viewing this item”,
“Customers who viewed this item also viewed”, “Sponsored products related to
this item” and “Customers who bought this item also bought”. Figure 1(c)
presents an example of product page recommendations.
Figure 3. Figure illustrating the breadth-wise topic discovery approach used to collect vaccine-related topics from Google Trends starting from two seed topics: vaccine and vaccine controversies. Each node in the tree denotes a vaccine-related topic. An edge A$\rightarrow$B indicates that topic B was discovered from the Trends’ Related Topic list of topic A. For example, topics “vaccination” and “andrew wakefield” were obtained from the Trends’ Related Topic list of the “vaccine controversies” topic. Then, topic “mmr vaccine and autism” was obtained from topic “andrew wakefield”, and so on. A marked node indicates the topic was discarded during filtering. Similar colored square brackets indicate similar topics that were merged together.
# | Search topic | Seed query | Sample search queries
---|---|---|---
1 | vaccine controversies | vaccine controversy/ anti vaccine | anti vaccination; anti vaccine shirt
2 | vaccination | vaccine/ vaccination | vaccine; vaccine friendly me
3 | andrew wakefield | andrew wakefield | andrew wakefield; wakefield autism
4 | hpv vaccine | hpv vaccine | vaccine hpv; hpv vaccine on trial
5 | immunization | immunization | immunization; immunization book
6 | mmr vaccine and autism | mmr autism/ vaccine autism | autism; autism vaccine
7 | influenza vaccine | influenza vaccine | flu shot; influenza vaccine
8 | hepatitis vaccine | hepatitis vaccine | hepatitis b vaccine; hepatitis a vaccine
9 | varicella vaccine | varicella vaccine | chicken pox; varicella vaccine
10 | mmr vaccine | mmr vaccine | mmr vaccine; measles vaccination
Table 2. Sample search queries for each of the ten vaccine-related search topics.
## 4\. Methodology
Here we present our audit methodology in detail. This section is organized as
follows. We start by describing our approach to compile high impact vaccine
related topics and associated search queries (section 4.1). Then, we present
overview of each audit experiment followed by the details of numerous
methodological decisions we took while designing our audits (section 4.2 and
section 4.3). Next, we describe our qualitative coding scheme for annotating
Amazon products for health misinformation (section 4.4). Finally, we discuss
our approach to calculate misinformation bias in search results (section 4.5).
### 4.1. Compiling high impact vaccine-related topics and search queries
Here, we present our methodology to curate high impact vaccine-related topics
and search queries.
#### 4.1.1. Selecting high impact search topics:
The first step of any audit is to determine the input: a viable set of topics and
associated search queries that will be used to query the platform under
investigation. We leveraged Google Trends (_Trends_ henceforth) to select and
expand vaccine-related search topics. _Trends_ is an optimal choice since it
shares past search trends and popular queries searched by people across the
world. Since it is not practical to audit all topics present on _Trends_ , we
designed a method to curate a reasonable number of high impact topics and
associated search queries, i.e., topics that were searched by a large number
of people for the longest period of time. We started with 2 seed topics and
employed a breadth-wise search to expand our topic list.
_Trends_ allows searching for any subject matter either as a topic or a term. Intuitively, a topic can be considered as a collection of terms that share a common concept. Searching as a term returns results that include the terms present in the search query, while searching as a topic returns all search terms having the same meaning as the topic (https://support.google.com/trends/answer/4359550?hl=en). We began our
search with two seed words, namely “vaccine” and “vaccine controversies”, and
decided to search them as topics. Starting our topic search by the
aforementioned seed words ensured that the related topics would cover general
vaccine-related topics and topics related to controversies surrounding the
vaccines, offering us a holistic view of search interests. We set location to
United States, date range to 2004-Present (this step was performed in Feb,
2020), categories to “All” and search service to “Web search”. The date range
ensured that the topics are perennial, and have been popular for a long time
(note that _Trends_ data is available from 1/1/2004 onwards). We selected the
category setting as “All” so as to get a holistic view of the search trends
encompassing all the categories together. Search service filter has options
like ‘web search’, ‘YouTube search’, ‘Google Shopping’, etc. Although Google Shopping is an e-commerce platform like Amazon, its selection returned few to no results. Thus, we opted for the ‘web search’ service filter.
We employed _Trends’_ Related Topics feature for breadth-wise expansion of
search topics (see Figure 2(a)). We viewed the Related Topics using the “Top” filter, which presents popular search topics in the selected time range that
are related to the topic searched. We manually went through the top 15 Related
Topics and retained relevant topics using the following guidelines. All
generic topics like Infant, Travel, Side-Effects, Pregnancy CVS, etc. were
discarded. Our focus was to only pick topics representing vaccine information.
Thus, we discarded topics that were names of diseases but kept their
corresponding vaccines. For example, we discarded topic Influenza but kept the
topic Influenza vaccine. We kept track of duplicates and discarded them from
the search. To further expand the topics list, we again went through the
Related Topics list of the shortlisted topics and used the aforementioned
filtering strategy to shortlist relevant topics. This step allowed us to
expand our topic list to a reasonable number. After two levels of breadth-wise
search, we obtained a list of 16 vaccine-related search topics (see Figure 3).
Figure 4. Eight steps performed in the Unpersonalized audit. The steps are described in detail in Section 4.2.4.
Next, we combined multiple similar topics into a single topic. The idea is to
collect search queries for both topics separately and then combine them under
one single topic. For example, topics zoster vaccine and varicella vaccine
were combined since both the vaccines are used to prevent chickenpox. Thus,
later search queries of both topics were combined under topic varicella
vaccine. All topics enclosed with similar colored boxes in Figure 3 were
merged together. 11 topics remained after merging.
#### 4.1.2. Selecting high impact search queries:
After shortlisting a reasonable number of topics, next we determined the
associated search queries per topic, to be later used for querying Amazon’s
search engine. To compile search queries, we relied on both _Trends_ and
Amazon’s auto-complete suggestions; _Trends_ , because it gives a list of
popular queries that people searched on Google—the most popular search
service, and Amazon, because it is the platform under investigation and it
will provide popular trending queries specific to the platform.
Searching for a topic on _Trends_ displays popular search queries related to
the topic (see Figure 2(b)). We obtained the top 3 queries per topic. Next, we collected the top 3 auto-complete suggestions obtained by typing the seed query of each topic into Amazon’s search box (see Figure 2(c)). We removed all animal or pet related search queries (e.g., “rabies vaccine for dogs”) and overly specific queries (e.g., “callous disregard by andrew wakefield”), and replaced redundant and similar queries with a single search query selected at random. For example, search queries “flu shots” and “flu shot” were replaced with the single search query “flu shot”. After these filtering steps, only one query remained in the
query list of topic vaccination schedule, and thus, it was removed from the
topic list. Finally, we had 48 search queries corresponding to 10 vaccine-
related search topics. Table 2 presents sample search queries for all 10
search topics.
### 4.2. RQ1: Unpersonalized Audit
#### 4.2.1. Overview
The aim of the Unpersonalized audit is to determine the amount of
misinformation present in Amazon’s search results and recommendations without
the influence of personalization. We measure the amount of misinformation by
determining the misinformation bias of the returned results. We explain the
misinformation bias calculation in detail in Section 4.5. Intuitively, the more misinformative results appear at higher ranks, the higher the overall bias.
We ran the Unpersonalized audit for 15 days, from 2 May, 2020 to 16 May, 2020.
We took two important methodological decisions regarding which components to
audit and what sources of noise to control for. We present these decisions as
well as implementation details of the audit experiment below.
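The exact input and output bias metrics follow Kulshrestha et al. (2017) and are defined in Section 4.5. Purely to illustrate the intuition (this is not the paper’s exact formula), the sketch below scores a ranked list of annotated results, discounting each annotation by its rank so that misinformation near the top of the list contributes more to the overall bias.

```python
import math

def rank_discounted_bias(labels):
    """Illustrative rank-weighted bias, NOT the exact metric of Section 4.5.

    `labels` are annotations in {-1 (debunking), 0 (neutral), 1 (promoting)},
    ordered by rank. Earlier ranks receive larger weights, so misinformative
    results near the top push the score towards +1.
    """
    weights = [1.0 / math.log2(rank + 2) for rank in range(len(labels))]
    return sum(w * l for w, l in zip(weights, labels)) / sum(weights)

# Misinformation at the top yields a higher bias than at the bottom:
print(rank_discounted_bias([1, 1, 0, -1, 0]))   # ~= 0.41
print(rank_discounted_bias([-1, 0, 0, 1, 1]))   # ~= -0.06
```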
#### 4.2.2. What components should we collect for our Unpersonalized audits?
We collected SERPs sorted by all 5 Amazon filters: “featured”, “price low to
high”, “price high to low”, “average customer review” and “newest arrivals”.
For analysis, we extracted the top 10 search results from each SERP. Since 70%
of Amazon users never click on search results beyond the first page (Baker,
2018), a count of 10 is a reasonable approximation of the number of search results
users are likely to engage with. Recent statistics have also shown that the
first three search results receive 75% of all clicks (Dean, 2019). Thus, we
extracted the recommendations present on the product pages of the first three
search results. We collected the following 5 types of product page
recommendations: “Frequently bought together”, “What other items customers buy
after viewing this item”, “Customers who viewed this item also viewed”,
“Sponsored products related to this item” and “Customers who bought this item
also bought”. Refer to Figure 1(c) for an example. We extracted the first product
present in each recommendation type for analysis. Next, we annotated all
collected components as promoting, neutral or debunking health misinformation.
We describe our annotation scheme shortly in Section 4.4.
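Because SERPs and product pages were saved as raw HTML, extracting the first product of each recommendation type is a parsing step. A sketch with BeautifulSoup follows; the CSS selectors are hypothetical placeholders, since Amazon’s actual markup varies across pages and changes over time.

```python
# Sketch of extracting the first product from each recommendation carousel on
# a saved product page. The selectors below are hypothetical placeholders;
# Amazon's real markup differs across pages and changes over time.
from bs4 import BeautifulSoup

REC_TYPES = [
    "Frequently bought together",
    "What other items customers buy after viewing this item",
    "Customers who viewed this item also viewed",
    "Sponsored products related to this item",
    "Customers who bought this item also bought",
]

def first_product_per_rec_type(html: str) -> dict:
    soup = BeautifulSoup(html, "html.parser")
    found = {}
    for carousel in soup.select("div.a-carousel-container"):  # assumed selector
        heading = carousel.select_one("h2")
        if heading is None:
            continue
        title = heading.get_text(strip=True)
        for rec in REC_TYPES:
            if title.startswith(rec) and rec not in found:
                link = carousel.select_one("li a")            # first item's link
                if link is not None and link.get("href"):
                    found[rec] = link["href"]
    return found

with open("saved_product_page.html") as fh:  # a page saved by the audit bot
    print(first_product_per_rec_type(fh.read()))
```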
#### 4.2.3. How can we control for noise?
We controlled for potential confounding factors that may add noise to our
audit measurements. To eliminate the effect of personalization, we ran the
experiment on newly created virtual machines (VMs) and a freshly installed browser with empty browsing history, cookies and cache. Additionally, we ran search queries from the same version of Google Chrome in incognito mode to ensure that no history is built during our audit runs. To avoid cookie tracking, we erased cookies and cache before and after opening the incognito window and destroyed the window after each search. In sum, we performed searches in newly created incognito windows every day. All VMs operated from the same geolocation so that any effects due to location would affect all machines equally. To prevent machine speeds from affecting the experiment, all VMs had the same architecture and configuration. To control for temporal effects, we searched every single query at one particular time every day for 15 consecutive days. Prior studies have established the presence of a carry-over effect in search engines, where previously executed queries affect the results of the current query when both queries are issued subsequently within a small time interval (Hannak et al., 2013). Since we destroyed browser windows and cleared session cookies and cache after every single search, the carry-over effect did not influence our experiment.
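The sketch below shows how such a throwaway browsing session can be created and destroyed around every single search with Selenium. It mirrors the hygiene steps above; the exact wait times and VM provisioning are omitted.

```python
# Minimal sketch of the per-search hygiene described above: a fresh incognito
# Chrome window whose cookies are cleared before and after each use, then
# destroyed entirely. Wait times and VM provisioning are omitted.
from selenium import webdriver

def fresh_incognito_driver() -> webdriver.Chrome:
    options = webdriver.ChromeOptions()
    options.add_argument("--incognito")
    driver = webdriver.Chrome(options=options)
    driver.delete_all_cookies()        # start from an empty cookie jar
    return driver

def run_isolated_search(url: str) -> str:
    driver = fresh_incognito_driver()
    try:
        driver.get(url)
        return driver.page_source      # SERP snapshot, saved for later parsing
    finally:
        driver.delete_all_cookies()    # erase state again after the search
        driver.quit()                  # destroy the window entirely
```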
# | User action | Type of history
---|---|---
1 | Search product | Product search history
2 | Search + click product | Product search and click history
3 | Search + click + add to cart | Intent to purchase history
4 | Search + click + mark “Top rated, All positive review” helpful | Searching, clicking and marking reviews helpful history
5 | Following contributor by clicking follow button on contributor’s page | Following history
6 | Search product on Google (third party application) | Third party search history

Tested values (the same for every action): product debunks vaccine or other health related misinformation (annotation value -1), neutral health information (annotation value 0), or product promotes vaccine or other health related misinformation (annotation value 1).
Table 3. List of user actions employed to build account history. Every action and product type (misinformative, neutral or debunking) combination was performed on two accounts. One account sorted search results by filters “featured” and “average customer review”. The other account built history in the same way but sorted the search results by filters “price low to high” and “newest arrivals”. Overall, we created 40 Amazon accounts (6 actions X 3 tested values X 2 replicates for filters + 2 control accounts + 2 twin accounts).
#### 4.2.4. Implementation details
Figure 4 illustrates the eight steps for the _Unpersonalized audit_. We used
Amazon Web Services (AWS) infrastructure to create all the VMs. We created
selenium bots to automate web browser actions. As a first step, each day at a
particular time, the bot opened amazon.com in incognito window. Next, the bot
searched for a single query, sorted the results by an Amazon filter and saved
the SERPs. The bot then extracted the top 10 URLs of the products present in
the results. The sixth step is an iterative step where the bot iteratively
opened the product URLs and saved the product pages. In the last two steps,
the bot cleared the browser cache and killed the browser window. We repeated
steps 1 to 8 to collect search results sorted by all 5 Amazon filters. We
added appropriate wait times after each step to prevent Amazon from detecting
the account as a bot and blocking our experiment. We repeated these steps for
15 consecutive days for each of the 48 search queries. After completion of the
experiment, we parsed the saved product pages to extract product metadata,
like product category, contributors’ names (author, editor, etc.), star rating
and number of ratings. We extracted product page recommendations for the top 3
search results only.
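Putting the eight steps together, a simplified version of one bot’s daily routine might look as follows. It reuses the fresh_incognito_driver helper sketched in Section 4.2.3; the search URL template, sort-parameter values, the SERP selector and wait times are illustrative assumptions rather than Amazon’s documented interface.

```python
# Simplified sketch of the daily eight-step routine of one bot. URL template,
# sort-parameter values, the SERP selector and wait times are illustrative
# assumptions. `fresh_incognito_driver` is the helper sketched in Section 4.2.3.
import time
import urllib.parse
from bs4 import BeautifulSoup

FILTERS = ["featured", "price-asc-rank", "price-desc-rank",
           "review-rank", "date-desc-rank"]          # assumed parameter values

def top_k_product_urls(html: str, k: int = 10) -> list:
    soup = BeautifulSoup(html, "html.parser")
    links = soup.select("div.s-result-item h2 a")    # assumed SERP selector
    return ["https://www.amazon.com" + a["href"] for a in links[:k] if a.get("href")]

def save(path: str, html: str) -> None:
    with open(path, "w") as fh:
        fh.write(html)

def audit_query(query: str, sort: str, day: int) -> None:
    driver = fresh_incognito_driver()                # steps 1-2: fresh window
    try:
        q = urllib.parse.quote_plus(query)
        driver.get(f"https://www.amazon.com/s?k={q}&s={sort}")    # step 3: search + sort
        save(f"serp_d{day}_{sort}_{q}.html", driver.page_source)  # step 4: save SERP
        for i, url in enumerate(top_k_product_urls(driver.page_source)):  # step 5
            driver.get(url)                                       # step 6: product pages
            save(f"product_d{day}_{sort}_{q}_{i}.html", driver.page_source)
            time.sleep(5)                            # wait to avoid bot detection
    finally:
        driver.delete_all_cookies()                  # step 7: clear state
        driver.quit()                                # step 8: kill the window
```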
### 4.3. RQ2: Personalized Audit
#### 4.3.1. Overview
The goal of our Personalization Experiments is twofold. First, we assess
whether user actions, such as clicking on a product or adding it to cart, would
trigger personalization on Amazon. Second, and more importantly, we determine
the impact of a user’s account history on the amount of misinformation
presented to them in the search results page, recommendations, and auto-
complete suggestions; account history is built progressively by performing a
particular action for seven consecutive days. We ran our _Personalized audit_
from 12th to 18th August, 2020. We took several methodological decisions while
designing this experimental setup. We discuss each of these decisions below.
#### 4.3.2. What real-world user actions should we select to build account
history?
Users’ click history and purchase history trigger personalization and
influence the price of commodities on e-commerce websites (Hannak et al.,
2014). Account history also affects the amount of misinformation present in
the personalized results (Hussein et al., 2020). Informed by the results of
these studies, we selected six real-world user actions that could trigger
personalization and thus, could potentially impact the amount of
misinformation in search results and recommendations. The actions are (1) “search”, (2) “search + click”, (3) “search + click + add to cart”, (4) “search + click + mark top-rated all positive review as helpful”, (5) “follow contributor” and (6) “search on third party website” (Google.com in our case). Table 3 provides an overview. The first two actions involve searching for a product and/or clicking on it. Through the third and fourth actions, a user shows a positive reaction towards a product by adding it to cart and marking its top-rated positive review as helpful, respectively. The fifth action investigates the impact of following a contributor. For example, for a product in the Books category, the associated list of contributors includes the author and editor of the book. The contributors have dedicated profile pages that a user can follow. The sixth action investigates the effect of searching for an Amazon product on Google.com. The user logs into Google using the email id used to register the Amazon account. The hypothesis is that Amazon search results could be affected by third-party browsing history. After selecting the actions, we determined the products on which the actions needed to be performed.
#### 4.3.3. What products and contributors should we select for building
account history?
To build user history, all user actions except “follow contributor” need to be
performed on products. First, we annotated all products collected in the
Unpersonalized audit run as debunking (-1), neutral (0) or promoting (1)
health misinformation. We present the annotation details in Section 4.4. For
each annotation value (-1, 0, 1), we selected top-rated products that had received maximum engagement and belonged to the most frequently occurring category, ‘Books’. We started by filtering Books belonging to each annotation value and eliminated the ones that did not have an “Add to cart” button on their product page at the time of product selection. Since users make navigation and engagement decisions based on information cues on the web (Pirolli, 2005), we considered cues present on Amazon, such as customer ratings, as a criterion to further shortlist Books. First, we sorted Books based on the accumulated engagement, i.e., the number of customer ratings received. Next, we sorted the top 10 Books obtained from the previous sorting based on the star ratings received by the Books, ending up with highly rated, high-impact and high-engagement products. We selected the top 7 books from the second sorting for the experiment (see Appendix, Table 9 for the shortlisted books).
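The two-stage sort is straightforward to express over the annotated product table. A sketch assuming a pandas DataFrame with hypothetical column names follows:

```python
# Sketch of the two-stage shortlisting described above, for one annotation
# value: keep Books with an "Add to cart" button, sort by engagement (number
# of ratings), keep the top 10, then sort those by star rating and keep the
# top 7. The column names are assumptions for illustration.
import pandas as pd

def shortlist_books(products: pd.DataFrame, annotation: int) -> pd.DataFrame:
    books = products[(products["category"] == "Books")
                     & (products["annotation"] == annotation)
                     & products["has_add_to_cart"]]
    top_engaged = books.sort_values("num_ratings", ascending=False).head(10)
    return top_engaged.sort_values("star_rating", ascending=False).head(7)

# e.g., the 7 misinformative (annotation value 1) books used to build history:
# misinfo_books = shortlist_books(annotated_products, annotation=1)
```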
The action “follow contributor” is the only action performed on contributors’ Amazon profile pages (the contributors could be authors, editors, people writing the foreword of a book, the publisher, etc.). We selected contributors who contributed to the largest number of debunking (-1), neutral (0) and promoting (1) books. We retained only those who had a profile page on Amazon. Table 4 lists the selected contributors.
# | Debunking: name | url code | Neutral: name | url code | Misinformative: name | url code
---|---|---|---|---|---|---
1 | Paul-A-Offit | B001ILIGP6 | Jason-Soft | B078HP6TBD | Andrew-J-Wakefield | B003JS8YQC
2 | Seth-Mnookin | B001H6NG7A | Joy-For-All-Art | B07LDMJ1P4 | Mary-Holland | B004MZW7HS
3 | Michael-Fitzpatrick | B001H6L348 | Peter-Pauper-Press | B00P7QR4RO | Kent-Heckenlively | B00J08DNE8
4 | Ziegler-Prize | B00J8VZKBQ | Geraldine-Dawson | B00QIZY0MA | Jenny-McCarthy | B001IGJOUC
5 | Ben-Goldacre | B002C1VRBQ | Tina-Payne-Bryson | B005O0PL3W | Forrest-Maready | B0741C9TKH
6 | Jennifer-A-Reich | B001KDUUHY | Vassil-St-Georgiev | B001K8I8XC | Wendy-Lydall | B001K8LNVQ
7 | Peter-J-Hotez | B001HPIC48 | Bryan-Anderson | B087RL79G8 | Neil-Z-Miller | B001JP7UW6
Table 4. List of contributors who have contributed to the most number of books that debunk, are neutral towards, or promote health misinformation, selected for building account history for the action “follow contributor”. For example, Andrew J Wakefield and Mary Holland (both prominent vaccine deniers) have contributed to the most number of books that promote health misinformation. A contributor’s Amazon web page can be accessed by forming the url “www.amazon.com/ + name + /e/ + url_code”.
Figure 5. Steps performed by treatment and control accounts in the Personalized audit, corresponding to the 6 different user actions. Treatment accounts built histories by performing various actions and later collected homepage recommendations, SERPs for all 48 search queries, and auto-complete suggestions. The control accounts did not build account history but collected SERPs and auto-complete suggestions for the 48 search queries at the same time as the treatment accounts.
#### 4.3.4. How do we design the experimental setup?
We performed all six actions explained in Section 4.3.2 and Table 3 on Books
(or contributors of the books in case of action “follow contributor”) that are
either all debunking, neutral or promoting health misinformation. Each action and product type combination was acted upon by two treatment accounts. One account built its search history by first performing searches on Amazon and then viewing search results sorted by the filters “featured” and “average customer review”, while the other did the same but sorted results by “price low to high” and “newest arrivals”. (Every account created for this experiment was run by a bot. It was not possible for a bot to complete the full sequence of tasks in 24 hours because of the wait times added after every action: building history using a particular action, searching for 48 search queries sorted by 4 filters, collecting auto-complete suggestions for those queries, etc. Thus, every action-product type combination was performed on two accounts, the first sorting the search results by two filters and the second sorting results using the remaining two filters. We call these two accounts replicates since they built their history in the same way.) We did not use the filter “price high to low” since intuitively it is less likely to be used during searches.
We also created 2 control accounts corresponding to 2 treatments that emulated
the same actions as the treatments except that they did not build account
histories by performing one of the 6 user actions. Like 2 treatment accounts,
the first control account searched for 48 queries curated in Section 4.1.2 and
sorted them by filters “Featured” and “Average customer Review” while the
other control sorted them by the remaining two filters. Figure 5 outlines the
experimental steps performed by treatment and control accounts. We also
created twins for each of the control accounts. The twins performed the exact
same tasks as the corresponding control. Any inconsistencies between a control
account and its twin can be attributed to noise, and not personalization.
Remember, Amazon’s algorithms are a black box. Even after controlling for all
known possible sources of noise, there could be some sources that we are not
aware of or the algorithm itself could be injecting some noise in the results.
If the difference between search results of control and treatment is greater
than the baseline noise, only then can it be attributed to personalization. Prior audit work has also adopted the strategy of creating a control and its
twin to differentiate between the effect due to noise versus personalization
(Hannak et al., 2014). Overall, we created 40 Amazon accounts (6 actions X 3
tested values X 2 replicates for filters + 2 control accounts + 2 twin
accounts). Next, we discuss the components collected from each account.
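The account grid is small enough to enumerate directly; the following sketch (with illustrative names) reproduces the 40-account layout described above:

```python
# Enumerating the 40-account design: 6 actions x 3 product types x 2 filter
# replicates = 36 treatments, plus 2 control accounts and their 2 twins.
from itertools import product

ACTIONS = ["search", "search+click", "search+click+cart",
           "search+click+review-helpful", "follow-contributor", "google-search"]
PRODUCT_TYPES = ["debunking", "neutral", "promoting"]   # annotations -1, 0, 1
FILTER_SETS = [("featured", "average customer review"),
               ("price low to high", "newest arrivals")]

treatments = [{"action": a, "product_type": t, "filters": f}
              for a, t, f in product(ACTIONS, PRODUCT_TYPES, FILTER_SETS)]
controls = [{"action": None, "filters": f} for f in FILTER_SETS]
twins = [dict(c) for c in controls]   # twins replicate controls to estimate noise

assert len(treatments) + len(controls) + len(twins) == 40
```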
#### 4.3.5. What components should we collect for the personalized audit?
We collected search results and auto-complete suggestions for treatment and
control accounts to measure the extent of personalization. We collected
recommendations only for the treatment accounts since they built history by
clicking on product pages, pre-purchase pages, etc. Search results were sorted
by filters ‘featured”, “average customer review”, “price low to high” and
“newest arrivals”. Once users start building their account history, Amazon
displays several recommendations to drive engagement on the platform. We
collected various types of recommendations spread across three recommendation
pages: homepage, product page and pre-purchase page. Pre-purchase pages were
only collected for the accounts that performed “add to cart” action.
Additionally, product pages were collected for accounts that clicked on search
results while creating their respective account history. Each of the
aforementioned pages consist of several recommendation types, such as
“Customers who bought this item also bought”, etc. We collected the first
product present in each of these recommendation types from both product pages
and pre-purchase pages and two products from each type from the homepages for
further analysis. Refer to Table 1 and Figures 1(a), 1(b) and 1(c) for
examples of these recommendation types.
#### 4.3.6. How do we control for noise?
Just like our _Unpersonalized audit_ , we first controlled for VM
configuration and geolocation. Next, we controlled for demographics by setting
the same gender and age for the newly created Google accounts. Recall that these Google accounts were used to sign up for the Amazon accounts. Since the VMs
were newly created, the browser had no search history that could otherwise
hint towards users’ demographics. All accounts created their histories at the
same time. They also performed the searches at the same time each day, thus,
controlling for temporal effects. Lastly, we did not account for carry-over effects since they affected all the treatment and control accounts equally.
#### 4.3.7. Implementation details
Figure 5 illustrates the experimental steps. We ran 40 selenium bots on 40
VMs. Each selenium bot operated on a single Amazon account. On day 0, we
manually logged in to each of the accounts by entering login credentials and
performing account verification. Next day, experiment began at time t. All
bots controlling treatment accounts started performing various actions to
build history. Note, everyday bots built history by performing actions on a
single Book/contributor. We gave bots sufficient time to build history (90
min) after which they collected and saved Amazon homepage. Later, all 40
accounts (control + treatment) searched for 48 queries with different search
filters and saved the SERPs. Next, the bots collected and saved auto-complete
suggestions for all 48 queries. We included appropriate wait times between
every step to prevent accounts from being recognized as bots and getting
banned in the process. We repeated these steps for a week. At the end of the
week, for each treatment account we had collected personalized search results,
recommendations and auto-complete suggestions. Next, we annotated the
collected search results and recommendations to determine their stance on
misinformation so that later we could analyze them to study the effect of user
actions on the amount of misinformation presented to users in each component.
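A condensed, dry-run sketch of one treatment bot’s daily schedule follows; the four helpers are placeholders standing in for the Selenium routines described above and merely log each step.

```python
# Condensed dry-run of one treatment bot's day in the Personalized audit. The
# helper functions are placeholders for the Selenium routines described above.
import time

def build_history(account, item):     print(f"{account}: history action on {item}")
def collect_homepage(account):        print(f"{account}: save homepage recommendations")
def collect_serp(account, q, f):      print(f"{account}: save SERP for {q!r} sorted by {f!r}")
def collect_autocomplete(account, q): print(f"{account}: save auto-complete for {q!r}")

def treatment_day(account, day_item, queries, filters, settle_seconds=90 * 60):
    build_history(account, day_item)       # act on one book/contributor per day
    time.sleep(settle_seconds)             # 90 min for personalization to settle
    collect_homepage(account)
    for q in queries:                      # the 48 curated queries
        for f in filters:                  # this account's two sort filters
            collect_serp(account, q, f)
    for q in queries:
        collect_autocomplete(account, q)
```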
Scale Value | Annotation Description | Annotation Heuristics | Sample Amazon Products
---|---|---|---
-1 | Debunks vaccine misinformation | Product debunks, derides or provides evidence against the myths/controversies surrounding vaccines OR helps understand anti-vaccination attitudes OR promotes the use of vaccination OR describes the history of a disease and details how its vaccine was developed OR describes scientific facts about vaccines that help users understand how they work OR debunks other health-related misinformation | -
0 | Neutral health-related information | All medicines and antibodies OR medical equipment (thermometers, syringes, record-books, etc.) OR dietary supplements that do not violate Amazon’s policy OR products about animal vaccination and diseases OR health-related products not promoting any conspiratorial views about health and vaccines | -
1 | Promotes vaccine and other health-related misinformation | Product promotes disuse of vaccines OR promotes anti-vaccine myths, controversies or conspiracy theories surrounding vaccines OR advocates alternatives to vaccines and/or western medicine (diets, pseudoscience methods like homeopathy, hypnosis, etc.) OR product is a misleading dietary supplement that violates Amazon’s policy on dietary supplements, i.e., the supplement states that it can cure, mitigate, treat, or prevent a disease in humans, but the claim is not approved by the FDA OR it promotes other health-related misinformation | -
2 | Unknown | Product’s description and metadata are not sufficient to annotate it as promoting, debunking or neutral information | -
3 | Removed | Product’s URL is not accessible at the time of annotation | -
4 | Other language | Product’s title and description are in a language other than English | -
5 | Unrelated | Non-health-related products | -
Table 5. Description of the annotation scale and heuristics, along with sample
products corresponding to each annotation value.
### 4.4. Annotating Amazon data for health misinformation
Unlike partisan bias, where bias can be determined using features such as
news-source bias (Robertson et al., 2018), labelling a product for
misinformation is hard and time-consuming. There are no pre-determined sources
of misinformation, such as lists of sellers or authors of misinformative
products on Amazon. Additionally, we found that the annotation process for
some categories of products, like Books, Kindle eBooks, etc., required us to
consider the product image, read the book’s preview, if available, and even
perform external searches about the authors. Therefore, we opted to manually
annotate our data collection. We developed a qualitative coding scheme to
label our Amazon data collection through an iterative process that required
several rounds of discussions to reach an agreement on the annotation scale.
In the first round, the first author randomly sampled 200 Amazon products
across different topics and categories. After multiple iterations of analyzing
and interpreting each product, the author came up with an initial 7-point
annotation scale. Then, six researchers with extensive work experience on
online misinformation independently annotated 32 products, randomly selected
from the 200 products. We discussed every product’s annotation value and the
researchers’ annotation process. We refined the scale as well as the scheme
based on the feedback. This process was repeated thrice, after which all six
annotators reached a consensus on the annotation scheme and process. In the
fourth round, we gathered additional feedback from an external researcher from
the Credibility Coalition group (https://credibilitycoalition.org/), an
international organization of interdisciplinary researchers and practitioners
dedicated to developing standards for news credibility and tackling the
problem of online misinformation. The final result of the multi-stage
iterative process (see Appendix, Figure 14) is a 7-point annotation scale
comprising values ranging from -1 to 5 (see Table 5). The scale measures the
scientific quality of the products that users are exposed to when they make
vaccine-related searches on Amazon.
#### 4.4.1. Annotation Guidelines
In order to annotate an Amazon product, the annotators were required to go
through several fields present on the product’s detail page in the following
order: title, description, top critical and top positive reviews about the
product, and other metadata present on the detail page, such as editorial
reviews, legal disclaimers, etc. If the product was a book, the annotators
were also recommended to do the following three steps: (1) go through the
first few pages in the book preview (Amazon’s Look Inside feature allows users
to preview a few pages from a book), (2) look at other books published by the
authors, (3) perform a Google search on the book and go through the first few
links to discover more information about it. Annotators were asked to consult
contextual information about the product from multiple sources to gain more
context and perspective. This technique is grounded in lateral reading, which
has proven to be a good approach for credibility assessment (Spector, 2017).
#### 4.4.2. Annotation scale and heuristics
Below we describe each value in our annotation scale. Table 5 presents
examples.
Debunking (-1): Annotation value ‘-1’ indicates that the product debunks
vaccine misinformation, derides a vaccine-related myth or conspiracy theory,
or promotes the use of vaccination. As an example, consider the poster titled
Immunization Poster 1979 Vintage Star Wars C-3PO R2-D2 Original (B00TFTS194),
which encourages parents to vaccinate their children. (Every Amazon product
title is followed by a URL id; the id can be converted into a URL using the
format http://www.amazon.com/dp/url_id.) Products helping users understand
anti-vaccination attitudes, or those that describe the history of vaccine
development or the science behind how vaccines work, were also included in
this category.
Promoting (1): This category includes all products that support or
substantiate any vaccine-related myth or controversy or encourage parents to
raise a vaccine-free child. For example, consider the following books that
promote an anti-vaccination agenda. In A Summary of the Proofs that
Vaccination Does Not Prevent Small-pox but Really Increases It (B01G5QWIFM),
the author talks about the dangers of large-scale vaccination, and in Vaccine
Epidemic: How Corporate Greed, Biased Science, and Coercive Government
Threaten Our Human Rights, Our Health, and Our Children (B00CWSONCE), the
authors question vaccine safety and present several narratives of vaccine
injuries. We also included several Amazon Fashion (B07R6PB2KP) and Amazon Home
(B01HXAB7TM) merchandise items in this category, since they contained
anti-vaccine slogans like “Educate before you Vaccinate” and “Jesus wasn’t
vaccinated”.
We also included in this category all products advocating alternatives to
vaccines, products that promote other health-related misinformation, and
dietary supplements that claim to cure diseases in their description but are
not approved by the Food and Drug Administration (FDA). (Note that for the
dietary supplements category, Amazon asks sellers not to state that the
products cure, mitigate, treat, or prevent a disease in humans in their detail
page, unless that statement is approved by the FDA (Central, 2020).)
Neutral (0): We annotated all medical equipment and medicines as neutral
(annotation value ‘0’). Note that it is beyond the scope of this project to
determine the safety and veracity of the claims of each medicine sold on the
Amazon platform. This means that the number of products that we have
determined to be promoting (1) serves as a lower bound on the amount of
misinformation present on the platform. This category also includes dietary
supplements that do not violate Amazon’s policy, pet-related products and
health-related products not advocating a conspiratorial view.
Other annotations: We annotated a product as ‘2’ if the product’s description
and metadata were not sufficient to determine its stance. We assigned values
‘3’ and ‘4’ to all products whose URL was not accessible at the time of
annotation and whose title and description were in a language other than
English, respectively. We annotated all non-health-related products (e.g.,
diaries, carpets, electronic products) with value ‘5’.
Both our audits resulted in a dataset of 4,997 Amazon products that were
annotated by the first author and Amazon Mechanical Turk workers (MTurks). The
first author, being the expert, annotated the majority of products (3,367) to
determine what would be a good task representation to obtain high-quality
annotations for the remaining 1,630 products from novice MTurks. We obtained
three Turker ratings for each remaining product and used the majority response
to assign the annotation value. Our task design worked: for 97.9% of the
products, annotation values converged. Only 34 products had diverging
responses. The first author then annotated these 34 products to obtain the
final set of annotation values. We describe the AMT job in detail in Appendix
A.1.
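The majority-vote aggregation itself is straightforward; a minimal sketch is shown below, assuming each product’s three Turker ratings arrive as a list of integers on our scale. Products without a strict majority are flagged for expert review, mirroring how the 34 diverging products were resolved. The function name is ours, for illustration only.

```python
from collections import Counter

def aggregate_ratings(ratings):
    """Return the majority annotation among three Turker ratings,
    or None if no value wins outright (flag for expert annotation)."""
    value, count = Counter(ratings).most_common(1)[0]
    return value if count >= 2 else None

# Example: two Turkers say 'promoting' (1), one says 'neutral' (0).
assert aggregate_ratings([1, 1, 0]) == 1
assert aggregate_ratings([1, 0, -1]) is None  # diverging -> expert decides
```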
Rank r | Product | Bias of each product | Bias till rank r | Bias value
---|---|---|---|---
1 | $p_{1}$ | $s_{1}$ | B(1) | $s_{1}$
2 | $p_{2}$ | $s_{2}$ | B(2) | $\frac{1}{2}(s_{1}+s_{2})$
3 | $p_{3}$ | $s_{3}$ | B(3) | $\frac{1}{3}(s_{1}+s_{2}+s_{3})$
Input bias (ib) | | | | $\frac{1}{3}(s_{1}+s_{2}+s_{3})$
Output bias (ob) | | | | $\frac{1}{3}\left[s_{1}\left(1+\frac{1}{2}+\frac{1}{3}\right)+s_{2}\left(\frac{1}{2}+\frac{1}{3}\right)+s_{3}\cdot\frac{1}{3}\right]$
Rank bias (rb) | | | | ob - ib
Table 6. Example illustrating the bias calculations. For a given query,
Amazon’s search engine presents users with the following products in the
search results: $p_{1}$, $p_{2}$ and $p_{3}$. The misinformation bias scores of
the products are $s_{1}$, $s_{2}$ and $s_{3}$ respectively. The table has been
adapted from previous work (Kulshrestha et al., 2017). A bias score larger
than 0 indicates a lean towards misinformation.
(a) Search results
[Bar graph showing the percentage of search results per annotation value:
debunking (8.99%), neutral (40.81%), promoting (10.47%), unable to annotate
(5.44%), URL not accessible (3.23%), other language (0.97%) and unrelated
(30.06%).]
(b) Recommendations
[Bar graph showing the percentage of recommendations per annotation value:
debunking (1.99%), neutral (37.56%), promoting (12.95%), unable to annotate
(0.48%), URL not accessible (2.80%), other language (0.21%) and unrelated
(43.98%).]
Figure 6. RQ1a: (a) Number (percentage) of search results belonging to each
annotation value. While the majority of products have a neutral stance
(40.81%), products promoting health misinformation (10.47%) outnumber products
debunking health misinformation (8.99%). (b) Number (percentage) of
recommendations belonging to each annotation value. A high percentage of
product recommendations promote misinformation (12.95%) while the percentage
of recommendations debunking health misinformation is very low (1.99%).
Figure 7. RQ1a: Categories of promoting, neutral and debunking Amazon products
(search results). All categories occurring in less than 5% of products were
combined into the “other” category. Note that misinformation exists in various
forms on Amazon. Products promoting health misinformation include books
(Books, Kindle eBooks, Audible Audiobooks), apparel (Amazon Fashion) and
dietary supplements (Health & Personal Care). Additionally, the proportion of
books promoting health misinformation is much greater than the proportion of
books debunking misinformation.
[Categories of debunking, neutral and promoting Amazon products: debunking
products mostly belong to the categories Kindle eBooks, Books, Amazon Fashion
and Amazon Home; neutral products mostly belong to Books, Kindle eBooks,
Health & Personal Care and Amazon Home; promoting products belong to Books,
Kindle eBooks, Health & Personal Care and Amazon Fashion.]
### 4.5. Quantifying misinformation bias in SERPs
In this section, we describe our method to determine the amount of
misinformation present in search results. How do we estimate the
misinformation bias present in Amazon’s SERPs? First, we used our annotation
scheme to assign misinformation bias scores ($s_{i}$) to individual products
present in SERPs. We converted our 7-point scale (-1 to 5) to misinformation
bias scores with values -1, 0 and 1, mapping annotation values 2, 3, 4 and 5
to bias score 0. Merging the “unknown” annotations into neutral results in a
conservative estimate of the misinformation bias present in the search
results. A product can thus be assigned one of three bias scores: -1 suggests
that the product debunks misinformation, 0 indicates a neutral stance and 1
implies that the product promotes misinformation. Next, to quantify
misinformation bias in Amazon’s SERPs, we adopt the framework and metrics
proposed in prior work to quantify partisan bias in Twitter search results
(Kulshrestha et al., 2017). Below we discuss the three kinds of bias proposed
by the framework and delineate how we estimate each bias with respect to
misinformation. Table 6 illustrates how we calculated the bias values.
1. (i)
The input bias (ib) of a list of Amazon products is the mean of the
misinformation bias scores of the constituting products (Kulshrestha et al.,
2017). Therefore, ib = $\frac{1}{n}\sum_{i=1}^{n}{s_{i}}$, where n is the
length of the list and ${s_{i}}$ is the misinformation bias score of the ith
product in the list. Input bias is an unweighted bias, i.e., it is not
affected by the rank/ordering of the items.
2. (ii)
The output bias (ob) of a ranked list is the overall bias present in the SERP
and captures the bias introduced both by the input and by the ranking of the
input. We first calculate the weighted bias score B(r) of every rank r, which
is the average misinformation bias of the products ranked from 1 to r. Thus,
B(r) = $\frac{\sum_{i=1}^{r}{s_{i}}}{r}$, where ${s_{i}}$ is the
misinformation bias score of the ith product. Output bias (ob) is the average
of the weighted bias scores B(r) over all ranks. Thus, by definition,
ob = $\frac{1}{n}\sum_{r=1}^{n}{B(r)}$.
3. (iii)
The ranking bias (rb) is the bias introduced by the ranking algorithm of the
search engine (Kulshrestha et al., 2017). It is calculated by subtracting the
input bias from the output bias. Thus, rb = ob - ib. In our case, a high
ranking bias indicates that the search algorithm ranks misinformative products
higher than neutral or debunking products.
Why do we need three bias scores? Amazon’s search algorithm not only selects
the products to be shown in the search results but also ranks them according
to an internal algorithm. Therefore, the overall bias (ob) could be introduced
either at the product selection stage (ib), at the ranking stage (rb), or at
both. Studying all three biases gives us a fuller understanding of how biases
are introduced by the search algorithm. All three bias values (ib, ob and rb)
lie between -1 and 1. A bias score larger than 0 indicates a lean towards
misinformation. Conversely, a bias score less than 0 indicates a propensity
towards debunking information. We only consider the top 10 search results in
each SERP. Thus, in the bias calculations, the rank always varies from 1 to
10.
## 5\. RQ1 Results [Unpersonalized audit]: Quantify misinformation bias
The aim of the Unpersonalized audit is to determine the amount of
misinformation bias in search results. Below we present the input, rank and
output bias detected by our audit in the search results of all 10
vaccine-related topics with respect to the 5 search filters.
Figure 8. RQ1a: Input, rank and output bias for all 10 vaccine-related topics
across five search filters. The bias scores are averages of the scores
obtained on each of the 15 days. Input and rank bias are positive (>0) in the
search results of the majority of topics for the filters “featured” and
“average customer review”. A bias value greater than 0 indicates a lean
towards misinformation. Topics “andrew wakefield” and “mmr vaccine & autism”
have a positive input bias across all five filters, indicating that the search
results of these topics contain a large number of products promoting health
misinformation irrespective of the filter used to sort the search results.
Topic “vaccination” has the highest overall bias (output bias) of 0.63,
followed by topic “andrew wakefield” with an output bias of 0.53 for filter
“featured”.
[Input, rank and output bias values: all topics except hepatitis have positive
input and output bias for filter “average customer review”. All topics except
mmr and influenza vaccine have positive input and output bias values for
filter “featured”. All topics except andrew wakefield, mmr vaccine & autism,
and vaccine controversies have negative input and output bias for filter
“newest arrivals”. Ranking bias is positive for all topics except vaccination
and hepatitis for filter “average customer review”, and except immunization
and varicella vaccine for filter “price low to high”.]
### 5.1. RQ1a: Search results
We collected 36,000 search results from our Unpersonalized audit run, out of
which 3,180 were unique. Recall that we collected these products by searching
for 48 search queries belonging to vaccine-related topics and sorting the
results by each of the 5 Amazon filters. We later extracted and annotated the
top 10 search results from all the collected SERPs, resulting in 3,180
annotations. Figure 6(a) shows the number (and percentage) of products
corresponding to each annotation value. Through our audits, we find a high
percentage (10.47%) of misinformative products in the search results.
Moreover, misinformative products outnumbered debunking products. Figure 7
illustrates the distribution of categories of Amazon products annotated as
debunking (-1), neutral (0) and promoting (1). Note that the products
promoting health misinformation primarily belong to the categories Books
(35.43%), Kindle eBooks (28.52%), Amazon Fashion (12.61%), a category that
includes t-shirts, apparel, etc., and Health & Personal Care (10.21%), a
category consisting of dietary supplements. Below we discuss the
misinformation bias observed across all the vaccine-related topics, the Amazon
search filters and the search queries.
#### 5.1.1. Misinformation bias in vaccine-related topics
We calculate the input, rank and output bias for each of the 10 search topics.
All the bias scores presented are averages of the scores obtained across the
15 days of the audit. The bias score for a topic is also averaged across its
constituent search queries. Figure 8 shows the bias scores for all topic,
search filter and bias combinations.
Input bias: We observe a high input bias (>0) for all topics except
“hepatitis” for the “average customer review” filter, indicating the presence
of a large number of misinformative products in the SERPs when search results
are sorted by this filter. Similarly, input biases for most topics are also
positive for the “featured” filter. Note that “featured” is the default Amazon
filter. Thus, by default, Amazon presents more misinformative search results
to users searching for vaccine-related queries. Topics “andrew wakefield”,
“vaccination” and “vaccine controversies” have the highest input biases for
both the “featured” and “average customer review” filters. Another noteworthy
trend is the negative input bias for 7 out of 10 topics with respect to filter
“newest arrivals”, indicating that there are more debunking products present
in the SERP when users look for newly appearing products on Amazon. “Andrew
wakefield” and “mmr vaccine & autism” are the only two topics that have
positive input bias (>0) across all five filters. Interestingly, there is no
topic that has negative input bias across all filters. Recall that a negative
(<0) bias indicates a debunking lean. Topics “mmr”, “influenza vaccine” and
“hepatitis” have negative bias scores for four out of five filters.
Figure 9. Input, rank and output bias for all filter types.
[Bias values for all filter types. Input bias: featured (0.21), avg. customer
reviews (0.3), price low to high (0.032), price high to low (0.056), newest
arrivals (-0.015). Rank bias: featured (0.018), avg. customer reviews (0.034),
price low to high (0.013), price high to low (0.011), newest arrivals
(-0.051). Output bias: featured (0.22), avg. customer reviews (0.34), price
low to high (0.045), price high to low (0.066), newest arrivals (-0.067).]
Figure 10. Top 20 search query-filter combinations when sorted by output bias
(ob). In other words, these query-filter combinations are the most problematic
ones, containing the highest amount of misinformation (highest ob).
[Search query-Amazon filter combinations containing the highest amount of
misinformation. Top five combinations: vaccination is not immunization -
custReview (ob = 1), vaccination is not immunization - featured (ob = 1),
vaccination is not immunization - priceHtoL (ob = 1), autism vaccine -
custReview (ob = 0.99), vaccine - custReview (ob = 0.99).]
Rank bias: 8 out of 10 topics have positive (>0) rank bias for filters “price
low to high” and “average customer reviews”, and 6 out of 10 topics have
positive rank bias for filter “featured”. These results suggest that Amazon’s
ranking algorithm favors misinformative products and ranks them higher when
customers sort their search results by the aforementioned filters. Some topics
have negative input bias but positive rank bias. Consider topic “mmr” with
respect to filter “price low to high”, whose input bias is -0.1 but whose rank
bias is 0.065. This observation suggests that although the SERPs obtained had
more debunking products, a few misinformative products were still ranked
higher. Rank bias for 8 out of 10 topics with respect to filter “newest
arrivals” was negative, similar to what we observed for input bias.
Output bias: Output bias is positive (>0) for most topics with respect to
filters “featured” and “average customer reviews”. Recall that a bias value
greater than 0 indicates a lean towards misinformation. Topic “vaccination”
has the highest output bias (0.63) for filter “featured”. On the other hand,
topic “influenza vaccine” has the lowest output bias (-0.24) for filter “price
high to low”.
#### 5.1.2. Misinformation bias in search filters
Figure 9 shows the results for all 5 filters. Bias scores are averaged across
all search queries. All filters except “newest arrivals” have positive input,
rank and output misinformation bias. Filter “average customer review” has the
highest positive output bias, indicating that misinformative products
belonging to vaccine-related topics receive higher ratings. We present the
implications of these results in our discussion (Section 7).
#### 5.1.3. Misinformation bias in search queries
Figure 10 shows the top 20 search query and filter combinations with the
highest output bias. Predictably, filter “newest arrivals” does not appear in
any instance. Surprisingly, 9 search query-filter combinations have very high
output biases (ob > 0.9). Search query “vaccination is not immunization” has
an output bias of 1 for three filter types. Most of the search queries in
Figure 10 have a negative connotation, i.e., the queries themselves are biased
(e.g., the queries “anti vaccine books” and “vaccination is not immunization”
indicate an intent to search for misinformation). This observation reveals
that searching for anti-vaccine content yields a large amount of vaccine and
health misinformation. It also reflects how Information Retrieval systems
currently work: they curate by relevance, with no notion of veracity. The most
troublesome observation is the presence of high output bias for the generic
and neutral search queries “vaccine” (ob = 0.99) and “varicella vaccine” (ob =
0.79). These results indicate that, unlike companies like Pinterest, which
have altered their search engines in response to vaccine-related queries
(Caron, 2019), Amazon has not made any modification to its search algorithm to
push fewer anti-vaccine products to users.
(a) Customers who bought this item also bought (CBB)
[CBB graph: several instances of red nodes connected to each other and green
nodes connected to each other; a few green nodes are attached to red nodes
too.]
(b) Customers who viewed this item also viewed (CVV)
[CVV graph: several instances of red nodes connected to each other and green
nodes connected to each other; a few green nodes are attached to red nodes
too.]
(c) Frequently bought together (FBT)
[FBT graph: large red nodes attached to other red nodes, and several green
nodes attached together.]
(d) Sponsored products related to this item
[Sponsored graph: many large green nodes attached to other green nodes; a few
large red nodes are also present, attached to other red and green nodes.]
(e) What other items customers buy after viewing this item (CBV). Note that
the recommendation graph for the CBV recommendation type is indeed one figure.
It consists of two disconnected components, indicating a strong filter-bubble
effect.
[CBV graph: two disconnected components, one consisting mostly of red nodes
attached to one another, the other comprising green nodes and a few red
nodes.]
Figure 11. Recommendation graphs for 5 different types of recommendations
collected from the product pages of the top three search results obtained in
response to 48 search queries, sorted by 5 filters, over a duration of 15 days
during the Unpersonalized audit run. Node colors denote the annotation of each
product: red for misinformative, green for neutral, and a third color for
debunking. Node size is proportional to the number of times the product was
recommended in that recommendation type. Large red nodes coupled with several
interconnections between red nodes indicate a strong filter-bubble effect,
where recommendations of misinformative products returned more misinformation.
### 5.2. RQ1b: Product page recommendations
We extracted the product page recommendations of the top 3 search results
present in the SERPs. The product page comprises various types of
recommendations. For analysis, we considered the first product present in each
of 5 types of recommendations: “Customers who bought this item also bought”
(CBB), “Customers who viewed this item also viewed” (CVV), “Frequently bought
together” (FBT), “Sponsored products related to this item” and “What other
items customers buy after viewing this item” (CBV). The process resulted in
16,815 recommendations, out of which 1,853 were unique. Figure 6(b) shows the
number and percentage of recommendations belonging to different annotation
values. The percentage of misinformative recommendations (12.95%) is much
higher than that of debunking recommendations (1.99%). The total input bias in
all 16,815 recommendations is 0.417, while in the 1,853 unique recommendations
it is 0.109, indicating a lean towards misinformation.
Does a filter-bubble effect occur in product page recommendations? To answer
this, we compared the misinformation bias scores of all types of
recommendations considered together (see Table 7). A Kruskal-Wallis ANOVA test
revealed the difference to be significant (KW H(2, N=16815) = 6,927.6, p=0.0).
A post hoc Tukey HSD test showed that the product page recommendations of
misinformative products contain more misinformation when compared to the
recommendations of neutral and debunking products. Even more concerning is
that the recommendations of debunking products have more misinformation than
those of neutral products. To investigate further, we qualitatively studied
the recommendation graphs of each of the five recommendation types (Figure
11). Each node in a graph represents an Amazon product. An edge
A$\rightarrow$B indicates that B was recommended on the product page of A.
Node size is proportional to the number of times the product was recommended.
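Constructing such a graph from the crawled recommendation pairs is mechanical; the sketch below shows one way to do it with networkx, under the assumption that the crawl yields (source, recommended) ASIN pairs and a dict of annotations. The data layout and attribute names are ours, for illustration only.

```python
import networkx as nx

def build_recommendation_graph(recs, annotation):
    """recs: iterable of (source_asin, recommended_asin) pairs for one
    recommendation type; annotation: dict mapping ASIN -> -1/0/1
    (debunking/neutral/promoting). Hypothetical input format."""
    g = nx.DiGraph()
    for src, dst in recs:
        # An edge src -> dst means dst was recommended on src's product page.
        if g.has_edge(src, dst):
            g[src][dst]["weight"] += 1
        else:
            g.add_edge(src, dst, weight=1)
    for node in g.nodes:
        g.nodes[node]["stance"] = annotation.get(node, 0)
        # Node size proportional to how often the product was recommended.
        g.nodes[node]["size"] = g.in_degree(node, weight="weight")
    return g
```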
Type of product page recommendations | Kruskal-Wallis ANOVA test | Post hoc Tukey HSD | d | n | m
---|---|---|---|---|---
All | KW H(2, N=16815) = 6,927.6, p=0.0 | M>D & M>N & D>N | 37 | 1576 | 240
Cust. who bought this item also bought (CBB) | KW H(2, N=3133) = 2136.03, p=0.0 | M>D & M>N & N>D | 11 | 225 | 66
Cust. who viewed this item also viewed (CVV) | KW H(2, N=4485) = 2673.95, p=0.0 | M>D & M>N & D>N | 18 | 331 | 100
Frequently bought together (FBT) | KW H(2, N=388) = 277.08, p=6.8e-61 | M>D & M>N & D>N | 1 | 111 | 16
Sponsored products related to this item | KW H(2, N=6575) = 628.52, p=3.2e-137 | M>D & M>N & D>N | 7 | 953 | 98
What other items cust. buy after viewing this item (CBV) | KW H(2, N=2234) = 1611.34, p=0.0 | M>D & M>N & D>N | 9 | 230 | 57
Table 7. RQ1b: Analyzing the echo chamber effect in product page
recommendations. M, N and D are the means of the misinformation bias scores of
products recommended on the product pages of misinformative, neutral and
debunking Amazon products, respectively. Higher means indicate that the
recommendations contain more misinformative products. For example, M>D
indicates that recommendations of misinformative products have more
misinformation than recommendations of debunking products. d, n and m are the
numbers of unique products annotated as debunking, neutral and promoting for
each recommendation type.
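The per-type significance tests in Table 7 follow the same recipe: a Kruskal-Wallis test across the three groups, followed by pairwise post hoc comparisons to order the group means. A minimal sketch using scipy and statsmodels is below; the grouping of bias scores by source-product stance is a hypothetical data layout, not the paper’s exact pipeline.

```python
from scipy.stats import kruskal
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def echo_chamber_test(bias_by_stance):
    """bias_by_stance: dict mapping 'M', 'N', 'D' to lists of bias scores of
    recommendations shown on misinformative/neutral/debunking product pages."""
    h, p = kruskal(bias_by_stance["M"], bias_by_stance["N"], bias_by_stance["D"])
    # Post hoc pairwise comparison (Tukey HSD) to order the group means.
    scores = [s for g in ("M", "N", "D") for s in bias_by_stance[g]]
    labels = [g for g in ("M", "N", "D") for _ in bias_by_stance[g]]
    tukey = pairwise_tukeyhsd(scores, labels)
    return h, p, tukey
```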
#### 5.2.1. Recommendation type: Customers who bought this item also bought
(CBB)
Misinformation bias scores of CBB recommendations are significantly different
for debunking, neutral and promoting products (KW H(2, N=3133) = 2136.03,
p=0.0). Post hoc tests reveal that CBB recommendations of misinformative
products have more misinformation when compared to CBB recommendations of
neutral and debunking products. Additionally, CBB recommendations of neutral
products have more misinformation than CBB recommendations of debunking
products. The findings are evident from Figure 11(a) too. For example, there
are several instances of red nodes connected to each other. In other words, if
you click on a misinformative search result, you will get misinformative
products in the CBB recommendations. A few of the green nodes are attached to
red ones, indicating that the CBB recommendations of a neutral product
sometimes contain a misinformative product. The most recommended product in
CBB recommendations is a misinformative Kindle book titled Miller’s Review of
Critical Vaccine Studies: 400 Important Scientific Papers Summarized for
Parents and Researchers (B07NQW27VD).
#### 5.2.2. Recommendation type: Customers who viewed this item also viewed
(CVV)
Misinformation bias scores of CVV recommendations are significantly different
for debunking, neutral and promoting products (KW H(2, N=4485) = 2673.95,
p=0.0). Post hoc tests indicate that CVV recommendations of misinformative
products have more misinformation than CVV recommendations of debunking and
neutral products. Notably, CVV recommendations of debunking products contain
more misinformation than CVV recommendations of neutral products. This is
troubling since users who click on products that present scientific
information are pushed more misinformation in this recommendation type. In the
recommendation graph (Figure 11(b)), we see edges connecting multiple red
nodes, supporting our finding that CVV recommendations of misinformative
products mostly contain other misinformative products. The most recommended
product occurring in this recommendation type is a misinformative Kindle book
titled Dissolving Illusions (B00E7FOA0U).
#### 5.2.3. Recommendation type: Frequently bought together (FBT)
Misinformation bias scores of FBT recommendations are significantly different
for debunking, neutral and promoting products (KW H(2, N=388) = 277.08,
p=6.8e-61). Post hoc tests reveal that the amount of misinformation in FBT
recommendations of misinformative products is significantly more than in the
FBT recommendations of neutral and debunking products. The finding is also
evident from the graph (Figure 11(c)). There are large red nodes attached to
other red nodes and several green nodes attached together, indicating the
presence of a strong filter-bubble effect. “Frequently bought together” can be
considered an indicator of buying patterns on the platform. The post hoc tests
indicate that people buy multiple misinformative products together. The most
recommended product in this recommendation type is a misinformative paperback
book titled Dissolving Illusions: Disease, Vaccines, and The Forgotten History
(1480216895).
#### 5.2.4. Recommendation type: Sponsored products related to this item
Most of the sponsored recommendations are either neutral or promoting (Figure
11(d) and Table 7). Statistical tests reveal that the misinformation bias
scores of sponsored recommendations are significantly different among
debunking, neutral and promoting products (KW H(2, N=6575) = 628.52,
p=3.2e-137). Post hoc tests reveal the same results as for CVV
recommendations. Two sponsored books are recommended most often. The first is
a misinformative paperback book titled Vaccine Epidemic: How Corporate Greed,
Biased Science, and Coercive Government Threaten Our Human Rights, Our Health,
and Our Children (1620872129). The second is a neutral Kindle book titled
SPANISH FLU 1918: Data and Reflections on the Consequences of the Deadliest
Plague, What History Teaches, How Not to Repeat the Same Mistakes
(B08774MCVP).
#### 5.2.5. Recommendation type: What other items customers buy after viewing
this item (CBV)
Misinformation bias scores of CBV recommendations are significantly different
for debunking, neutral and promoting products (KW H(2, N=2234) = 1611.34,
p=0.0). The results of the post hoc tests are the same as those for CVV
recommendations. The presence of an echo chamber is quite evident in the
recommendation graph (see Figure 11(e)). The graph has two disconnected
components, one comprising a mesh of misinformative products, indicating a
cluster of misinformative products that keep getting recommended. CBV is also
indicative of the buying patterns of Amazon users. The algorithm has learnt
that people viewing misinformative products end up purchasing them. Thus, it
pushes more misinformative items to users who click on them, creating a
problematic feedback loop. The most recommended product in this recommendation
type is a misinformative Kindle book titled Miller’s Review of Critical
Vaccine Studies: 400 Important Scientific Papers Summarized for Parents and
Researchers (B07NQW27VD).
Actions performed to build account history | Search results: Featured (RQ2a) | Search results: Avg. customer reviews (RQ2a) | Search results: Price low to high (RQ2a) | Search results: Newest arrivals (RQ2a) | Recommendations: Homepage (RQ2b) | Recommendations: Pre-purchase (RQ2b) | Recommendations: Product page (RQ2b) | Auto-complete suggestions (RQ2c)
---|---|---|---|---|---|---|---|---
Search product | IR | NP | NP | NP | - | X | X | NP
Search & click product | IR | NP | NP | NP | KW H(2, N=42) = 32.07, p = 1.08e-07; M>N>D | X | KW H(2, N=42) = 24.89, p = 3.94e-06; M>D & M>N | NP
Search + click & add to cart product | IR | NP | NP | NP | KW H(2, N=42) = 33.48, p = 5.38e-08; M>N>D | KW H(2, N=42) = 32.63, p = 8.19e-08; M>N>D | KW H(2, N=42) = 24.05, p = 5.98e-06; M>D & M>N | NP
Search + click & mark “Top rated, All positive review” as helpful | IR | NP | NP | NP | KW H(2, N=42) = 32.33, p = 9.52e-08; M>N>D | X | KW H(2, N=42) = 23.36, p = 8.44e-06; M>N & M>D | NP
Following contributor | IR | NP | NP | NP | - | X | X | NP
Search product on Google | IR | NP | NP | NP | - | X | X | NP
Table 8. RQ2: Summary of RQ2 results. IR indicates noise and inconclusive
results, i.e., the search results of the control and its twin seldom matched;
differences between treatment and control could therefore be attributed either
to noise or to personalization, making it impossible to study the impact of
personalization on misinformation. NP denotes little to no personalization.
- indicates that the given activity had no impact on the component. X
indicates that the component was not collected for the activity. M, N and D
indicate the average per-day bias in the component collected by accounts that
built their history by performing actions on misinformative, neutral or
debunking products, respectively. A higher mean value indicates more
misinformation. For example, consider the cell corresponding to action “search
+ click & add to cart product” and the “Homepage” recommendations: M>N>D
indicates that accounts adding misinformative products to the cart end up with
more misinformation in their homepage recommendations than accounts that add
neutral or debunking products to the cart.
(a)
[Average Jaccard index of treatment and control accounts: bar chart
illustrating the Jaccard index between the search results of the control and
its twin, as well as between the control and the treatment accounts that
performed the “following contributors” action on misinformative, neutral and
debunking products and later searched and sorted results using four Amazon
filters. The Jaccard index for the control and its twin for filter “featured”
is low (<0.8). For the other three search filters, “average customer review”,
“price low to high” and “newest arrivals”, we see a high (>0.8) Jaccard index
between the control and its twin, and the metric values for the
treatment-control comparison are similar to those of the control-twin
comparison.]
(b)
[Average Kendall’s tau of treatment and control accounts: bar chart
illustrating the Kendall’s tau coefficient between the search results of the
control and its twin, as well as between the control and the treatment
accounts that performed the “following contributors” action on
misinformative, neutral and debunking products and later searched and sorted
results using four Amazon filters. The Kendall’s tau coefficient for the
control and its twin for filter “featured” is low (<0.2). For the other three
search filters, “average customer review”, “price low to high” and “newest
arrivals”, we see a high (>0.8) Kendall’s tau coefficient between the control
and its twin, and the coefficient values for the treatment-control comparison
are similar to those of the control-twin comparison.]
Figure 12. Investigating the presence and amount of personalization due to the
“following contributors” action by calculating (a) the Jaccard index and (b)
the Kendall’s tau metric between the search results of treatment and control.
M, N and D indicate results for accounts that follow contributors of
misinformative, neutral and debunking products respectively.
(a)
[Input bias in homepages: line graph showing input bias on the y-axis and the
dates of the experiment run (2020-08-12 to 2020-08-18) on the x-axis. Input
bias for accounts performing actions on neutral products is 0 for all seven
days. Input bias for accounts performing the “search + click” and “mark review
helpful” actions on misinformative products is greater than 0 for all seven
days and becomes 1 (the maximum value) from the fourth day onwards. Input bias
in the homepages of accounts adding misinformative products to their cart is
also greater than 0 for all seven days, but from the third day onwards the
value is less than the bias in the homepages of accounts performing the
“search + click” and “mark review helpful” actions. Input bias in the
homepages of accounts performing actions on debunking products becomes
positive on the third day and after that drops below 0.]
(b)
[Input bias in pre-purchase recommendations: line graph showing input bias on
the y-axis and the dates of the experiment run (2020-08-12 to 2020-08-18) on
the x-axis. Input bias for accounts that added neutral products to their cart
remains 0 for all days except the fifth and seventh, when it is 0.25. For
accounts that added misinformative products to their cart, the bias value is
greater than 0 for all seven days, and for accounts adding debunking products
to their cart, the bias value is less than 0 for all days except the fifth
day, when it is 0, and the sixth day, when it is 0.125.]
(c)
[Input bias in product pages: line graph showing input bias on the y-axis and
the dates of the experiment run (2020-08-12 to 2020-08-18) on the x-axis.
Input bias for accounts performing actions on neutral products is 0, while it
is greater than 0 for accounts performing actions on misinformative products
for all seven days. Input bias in the product pages of accounts performing
actions on debunking products is less than zero on the first, third, fourth
and seventh day and unusually high (>0) on the sixth day.]
Figure 13. (a) Input bias in the homepages of accounts performing the actions
“add to cart”, “search + click” and “mark top rated all positive review” for
the seven days of the experiment run. (b) Input bias in the pre-purchase
recommendations of accounts for the 7-day experiment run. These
recommendations are only collected for accounts adding products to their
carts. (c) Input bias in the product pages of accounts performing the actions
“add to cart”, “search + click” and “mark top rated all positive review” for
the 7 days of the experiment run. M, N and D indicate that the accounts
performed actions on misinformative, neutral and debunking products
respectively.
## 6\. RQ2 Results [Personalized audit]: Effect of personalization
The aim of our Personalized audit was to determine the effect of
personalization due to account history on the amount of misinformation
returned in search results and various recommendations. Table 8 provides a
summary. Below, we explain the effect of personalization on each component.
### 6.1. RQ2a: Search Results
We measure personalization in search results for each Amazon filter using two
metrics: the Jaccard index and the Kendall $\tau$ coefficient. The Jaccard
index determines the similarity between two lists. A Jaccard index of 1
indicates that the two lists have the same elements, while zero indicates that
the lists are completely different. The Kendall $\tau$ coefficient, also known
as the Kendall rank correlation coefficient, determines the ordinal
correlation between two lists. It takes values in [-1,1], with -1 indicating
that the lists have inverse orderings, 0 signifying no correlation, and 1
indicating that the items in the lists have the same ranks.
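Both metrics can be computed directly from two ranked lists of product ids. The sketch below is one plausible implementation; restricting Kendall’s tau to the items common to both SERPs is our choice for handling non-overlapping lists, not necessarily the paper’s.

```python
from scipy.stats import kendalltau

def jaccard(a, b):
    # Set overlap of the two SERPs, ignoring order.
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def kendall(a, b):
    # Rank correlation over the products present in both SERPs.
    common = [p for p in a if p in set(b)]
    if len(common) < 2:
        return float("nan")  # too little overlap to compare orderings
    ranks_a = [a.index(p) for p in common]
    ranks_b = [b.index(p) for p in common]
    tau, _ = kendalltau(ranks_a, ranks_b)
    return tau

print(jaccard(["p1", "p2", "p3"], ["p1", "p2", "p3"]))  # identical lists: 1.0
print(kendall(["p1", "p2", "p3"], ["p3", "p2", "p1"]))  # inverse order: -1.0
```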
First, we compare the search results of the control account and its twin.
Recall that we created twins for our 2 control accounts in the _Personalized
audit_ to establish the baseline noise. Ideally, both should have Jaccard and
Kendall rank correlation coefficients close to 1, since the accounts do not
build any history, are set up in a similar manner, perform searches at the
same time and are in the same geolocation. Next, we compare the search results
of the control account with those of treatment accounts that built account
histories by performing different actions. If personalization is occurring,
the difference between the search results of treatment and control should be
larger than the baseline noise (i.e., the Jaccard index and Kendall $\tau$
should be lower). Whereas, if the baseline noise itself is large, it indicates
inconsistencies and randomness in the search results. Interestingly, we found
significant noise in the search results of the control and its twin for the
“featured” filter, with Jaccard index <0.8 and Kendall rank correlation
coefficient <0.2; that is, the control and its twin seldom matched. The
presence of noise suggests that Amazon injects some randomness into the
“featured” search results. Unfortunately, this means that we are not able to
study the effect of personalization on the accounts for the “featured” search
filter setting.
For the other three search filters, “average customer review”, “price low to
high” and “newest arrivals”, we see high (>0.8) Jaccard index and Kendall
$\tau$ values between the control and its twin. Additionally, we do not see
any personalization for these filters, since the metric values for the
treatment-control comparison are similar to those of the control-twin
comparison. Figure 12 shows the metric calculations for the control account
and the treatments that built their search histories by following contributors
of misinformative, neutral and debunking products. We see two minor
inconsistencies for filter “average customer review” in accounts building
their history on debunking products, where the treatment received results more
similar to the control than its twin did. In any case, the treatment does not
show more inconsistency than the control and its twin, indicating no
personalization. Other user actions show similar results; hence, we omit their
results for brevity.
### 6.2. RQ2b: Recommendations
We investigated the occurrence of personalization and its impact on the amount
of misinformation in three different recommendation pages. We discuss them
below.
Homepage recommendations: We find that homepages are personalized only when a
user performs click actions on the search results. Thus, the actions “add to
cart”, “search + click” and “mark top rated most positive review helpful” led
to homepage personalization. On the other hand, homepages were not
personalized for the actions “follow contributor”, “search product” and
“Google search”. After identifying the actions leading to personalized
homepages, we investigated the impact of personalization on the amount of
misinformation. In other words, we investigated how the misinformation bias in
homepages differs for accounts building their history by performing actions on
misinformative, neutral and debunking products. For each action, we had 6
accounts, two replicates for each action and product type (misinformative,
neutral and debunking). For example, for action “add to cart”, two accounts
built their history by adding misinformative products to the cart for 7 days,
two added neutral products and two added debunking products. We calculate the
per-day input bias (ib) in homepages by averaging the misinformation bias
scores of the recommended products present in the homepage. Therefore, for
every account we have seven bias values. We consider only the top two products
in each recommendation type. Recall that homepages can contain three different
types of recommendations: “Inspired by your shopping trends”, “Recommended
items other customers often buy again” and “Related to items you’ve viewed”.
All the different types are considered together for analysis.
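Concretely, the per-day bias series plotted in Figure 13(a) can be derived with a couple of group-bys. The sketch below assumes a hypothetical long-format table with one row per recommended homepage product; the column names are illustrative, not our actual schema.

```python
import pandas as pd

def per_day_input_bias(df: pd.DataFrame) -> pd.DataFrame:
    """df columns (assumed): account, history_type ('M'/'N'/'D'), day,
    bias_score. Returns one averaged input-bias value per history type and day."""
    # Per-account, per-day input bias = mean bias of the homepage products.
    per_account = df.groupby(["history_type", "account", "day"])["bias_score"].mean()
    # Average over the two replicate accounts of each history type for plotting.
    return per_account.groupby(level=["history_type", "day"]).mean().reset_index()
```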
Statistical tests reveal significant differences in the amount of
misinformation present in the homepages of accounts that built their histories
by performing actions on misinformative, neutral and debunking products (see
Table 8). This observation holds true for all three activities: “add to cart”,
“search + click” and “mark top rated most positive review helpful”. Post hoc
tests reveal an echo chamber effect: the amount of misinformation in the
homepages of accounts performing actions on misinformative products is greater
than the amount in the homepages of accounts performing actions on neutral
products, which in turn is greater than the misinformation present in the
homepages of accounts performing actions on debunking products. Figure 13(a)
shows the per-day input bias of the homepages of accounts performing different
actions. We take the average over the replicates for plotting the graph.
Surprisingly, performing the actions “mark top rated most positive review
helpful” and “search + click” on a misinformative product leads to the highest
amount of misinformation in the homepages, even more than in the homepages of
accounts adding misinformative products to the cart. This means that the
amount of misinformation present in the homepage is comparatively lower once a
user shows an intention to purchase a misinformative product, but high if a
user shows interest in the misinformative product without an indication to buy
it. Figure 13(a) also shows that the amount of misinformation present in the
homepages of accounts performing the actions “mark top rated most positive
review helpful” and “search + click” on misinformative products gradually
increases and becomes 1 on day 4 (2020-08-15). A bias value of 1 indicates
that all analysed products in the homepages were misinformative. Homepage
recommendations of accounts performing actions on neutral products show a
constant bias of 0, indicating that all recommendations on all days were
neutral. On the other hand, the average bias in the homepages of accounts
building history on debunking products rose slightly above 0 in the first
three days but eventually fell below 0, indicating a debunking lean.
Pre-purchase recommendations: These recommendations are only presented to
users who add product(s) to their Amazon cart. Therefore, they were collected
for 6 accounts, 2 of which added misinformative products to the cart, 2 added
neutral products and the other 2 added debunking products. These
recommendations can be of several types; see Figure 1(b) for an example of a
pre-purchase page. For our analysis, we consider the first product present in
each recommendation type. Statistical tests reveal a significant difference in
the amount of misinformation present in the pre-purchase recommendations of
accounts that added misinformative, neutral and debunking products to the cart
(KW H(2, N=42) = 32.63, p = 8.19e-08). Pre-purchase recommendations of
accounts adding misinformative products to the cart contain more
misinformation than those of accounts adding neutral or debunking products.
Figure 13(b) shows the input bias in the pre-purchase recommendations for all
the accounts. There is no coherent temporal trend, indicating that the input
bias in this recommendation type depends on the particular product being added
to the cart. However, an echo chamber effect is evident. For example, the bias
in the pre-purchase recommendations of accounts adding misinformative products
to the cart is above 0 for all 7 days.
Product page recommendations: We collect product page recommendations for
accounts performing the “add to cart”, “search + click” and “mark top rated
most positive review helpful” actions. We find a significant difference in the
amount of misinformation present in product page recommendations when accounts
performed these actions on misinformative, neutral and debunking products (see
Table 8). Post hoc analysis reveals that the product page recommendations of
misinformative products contain more misinformation than those of neutral and
debunking products. Figure 13(c) shows the input bias present in the product
pages across accounts. The bias for neutral products is constantly 0 across
the 7 days, but for misinformative products it is constantly greater than 0
for all actions. We see an unusually high bias value on the 6th day
(2020-08-17) of our experiment for accounts performing actions on the
debunking product titled Reasons to Vaccinate: Proof That Vaccines Save Lives
(B086B8MM71). We checked the product page recommendations of this particular
debunking book and found several misinformative recommendations on its product
page.
### 6.3. RQ2c: Auto-complete suggestions
We audited auto-complete suggestions to investigate how personalization
affects search query suggestions. Our initial hypothesis was that performing
actions on misinformative products could increase the number of anti-vaccine
search queries among the auto-complete suggestions. However, we found little
to no personalization in the auto-complete suggestions, indicating that
account history built by performing actions on vaccine-related
misinformative, neutral or debunking products has little to no effect on how
the auto-complete suggestions of accounts change. In the interest of brevity,
we do not include the results and graphs for this component.
## 7\. Discussion
There is a growing concern that e-commerce platforms are becoming hubs of
dangerous medical misinformation. Unlike search engines, where the motivation
of the platform is to show relevant search results in order to sell
advertisements, the goal of e-commerce platforms is to sell products. The
motivation to increase sales means that relevance in recommendations and
search suggestions is driven by what people purchase after conducting a search
or viewing an item, irrespective of whether the product serves credible
information or not. As a result, due to a lack of regulatory policies,
websites like Amazon are providing a platform to people who are making money
by selling misinformation: dangerous anti-vaccine ideas, pseudoscience
treatments, or unproven dietary alternatives, some of which could have
dangerous effects on people’s health and well-being. With a US market share of
49%, Amazon is the leading product search engine in the United States (Dayton,
2020). Therefore, any misinformation present in its search results and
recommendations can have a far-reaching influence, negatively shaping users’
viewing and purchasing patterns. Thus, in this paper we audited Amazon for the
most dangerous form of health misinformation: vaccine misinformation. Our work
resulted in several critical findings with far-reaching implications. We
discuss them below.
### 7.1. Amazon: a marketplace of multifaceted health misinformation
Our analysis shows that Amazon hosts a variety of health misinformative
products. The maximum number of such products belongs to the categories Books
and Kindle eBooks (Figure 7). Despite the enormous amount of information
available online, people still turn to books to gain information. A Pew
Research survey revealed that 73% of Americans read at least one book in a
year (Perrin, 2016). Books are considered “intellectual heft”, have more
presence than scientific journals and thus leave “a wider long lasting wake”
(Herr, 2017). Anti-vaccine books could therefore have a wide reach and can
easily influence the audience negatively. Moreover, it does not help that a
large number of anti-vaccine books are written by authors with medical degrees
(Shin and Valente, 2020). Beyond anti-vaccine books, there are abundant
pseudoscience books on the platform, all suggesting unproven methods to cure
diseases. We found diet books suggesting recipes with colloidal silver, an
unsafe product, as an ingredient. Some of the books proposing cures for
incurable diseases, like autism and autoimmune diseases, can have a huge
appeal for people suffering from such diseases (Reynolds, 2019). Thus, there
is an urgent need to check the quality of the health books presented to users.
The next most prominent category of health misinformative products is Amazon
Fashion. Numerous apparel items are sold on the platform with inventive
anti-vaccine slogans, giving anti-vaccine propagandists tools to advocate
their agenda and gain visibility, not just in the online world but in the
offline world as well. During our annotation process, we also found many
dietary supplements claiming to treat and cure diseases, a direct violation of
Amazon’s policy on dietary supplements. Overall, we find that health
misinformation exists on the platform in various forms: books, t-shirts and
other merchandise. Additionally, it is very easy to sell problematic content
because of the lack of appropriate quality-control policies and their
enforcement.
### 7.2. Amazon search results: a stockpile of health misinformation
Analysis of our _Unpersonalized audit_ revealed that 10.47% of search results
promote vaccine and other health-related misinformation. Notably, the higher
percentage of products promoting misinformation compared to debunking it
suggests that anti-vaccine and problematic health-related content is churned
out more, while attempts to debunk the existing misinformation are fewer. We
also found that Amazon’s search algorithm places more health misinformative
products than debunking products in search results, leading to high input bias
for topics like “vaccination”, “vaccine controversies”, “hpv vaccine”, etc.
This is especially true for the search filters “featured” and “average
customer reviews”. Note that “featured” is the default search filter,
indicating that by default users will see more misinformation when they search
for the aforementioned topics. On the other hand, if users want to make a
purchase decision based on product ratings, they will again be presented with
more misinformation, since our analysis indicates that sorting by filter
“average customer reviews” leads to the highest misinformation bias in the
search results. We also found a ranking bias in Amazon’s search algorithm,
with misinformative products getting ranked higher. Past research has shown
that people trust higher-ranked search results (Guan and Cutrell, 2007). Thus,
a larger number of highly ranked misinformative products can make the
problematic ideas in these products appear mainstream. The only positive
finding of our analysis was the presence of more debunking products in search
results sorted by filter “newest arrivals”. This might indicate that more
higher-quality products are being sold on the platform in recent times.
However, since there are no studies/surveys indicating which search filters
are mostly used by people while making purchase decisions, it is difficult to
conclude how beneficial this finding is.
### 7.3. Amazon recommendations: problematic echo chambers
Many search engines and social media platforms employ personalization to
enhance users’ experience by recommending items that the algorithm thinks they
will like based on their past browsing or purchasing history. On the downside,
if left unchecked, personalization can also lead users into a rabbit hole of
problematic content. Our analysis of the _Personalized audit_ revealed that an
echo chamber exists on Amazon, where users performing real-world actions on
misinformative books are presented with more misinformation in various
recommendations. A single click on an anti-vaccine book can fill a user’s
homepage with several similar anti-vaccine books. And if the user proceeds to
add that book to the cart, Amazon again presents more anti-vaccine books,
nudging them to purchase even more problematic content. The worst discovery is
that homepages get filled with more misinformation if a user merely shows an
interest in a misinformative product (by clicking on it) than when the user
shows an intention to buy it by adding the product to the cart. Additionally,
on the product page itself, users are presented with 5 different kinds of
recommendations, each of which contains equally problematic content. In a
nutshell, once users start engaging with misinformative products on the
platform, they are presented with more misinformative content at every point
of their Amazon navigation route and at multiple places. These findings would
not have been concerning if buying a milk chocolate merely led to
recommendations of other chocolate brands. The problem is that Amazon is
blindly applying its algorithms to all products, including problematic
content. Its algorithms do not differentiate or give special significance to
vaccine-related topics. Amazon has learnt from users’ past viewing and
purchasing behaviour and has categorized all the anti-vaccine and other
problematic health-cure products together. It presents the problematic content
to users performing actions on any of these products, creating a dangerous
recommendation loop in the process. There is an urgent need for the platform
to treat vaccine and other health-related topics differently and ensure
high-quality search results and recommendations. In the next section, we
present a few ways, based on our findings, that could assist the platform in
combating health misinformation.
### 7.4. Combating health misinformation
Tackling online health misinformation is a complex problem and there is no
easy silver-bullet solution to curb its spread. However, the first step
towards addressing it is accepting that there is a problem. Many tech giants
have acknowledged their social responsibility in ensuring high quality in
health-related content and are actively taking steps to ensure it. For
example, Google’s policy “Your Money Or Your Life” classifies medical and
health-related search pages as pages of particular importance, whose content
should come from reputable websites (McGee, 2013). Pinterest completely
hobbled the search results of certain queries such as ‘anti-vax’ (Caron, 2019)
and limited the search results for other vaccine-related queries to content
from officially recognized health institutions (Hutchinsona, 2019). Even
Facebook—a platform known to have questionable content moderation
policies—banned anti-vaccine advertisements and demoted anti-vaccine content
in its search results to make access to it difficult (Matsakis, 2019).
Therefore, given the massive reach and user base of Amazon—206 million website
visits every month (10under100, 2020)—it is disconcerting to see that Amazon
has not yet followed suit. To date, it has not taken any concrete steps
towards addressing the problem of anti-vaccine content on its platform. Based
on our findings, we recommend several short-term and long-term strategies
that the platform can adopt.
#### 7.4.1. Short term strategies: design interventions.
The simplest short-term solution would be to introduce design interventions.
Our _Unpersonalized audit_ revealed high misinformation bias in search results.
The platform can use interventions as an opportunity to communicate to users
the quality of the data presented to them by signalling misinformation bias.
For example, the platform could introduce a bias meter or scale that signals
the amount of misinformation present in search results every time it detects a
vaccine-related query in its search bar. The bias indicators could be coupled
with informational interventions, such as showing Wikipedia and encyclopedia
links, which have already been proven effective in reducing traffic to anti-
vaccine content (Kim et al., 2020). The second intervention strategy could be
to recognise and signal source bias. During our massive annotation process, we
realized that several health misinformative books have been written by known
anti-vaxxers like Andrew Wakefield, Jenny McCarthy, Robert S. Mendelsohn, etc.
We also present a list of authors who have contributed to the most
misinformative books in Table 6. Imagine a design where users are presented
with a message such as “The author is a known anti-vaxxer and is known to
write books that might contain health misinformation” every time they click a
book written by one of these authors. Another, more extreme, short-term
solution could be to either enforce a platform-wide ban prohibiting the sale
of any anti-vaccine product or hobble search results for anti-vaccine search
queries.
#### 7.4.2. Long term strategies: algorithmic modifications and policy
changes.
Long-term interventions would include modification of the search, ranking and
recommendation algorithms. Our investigations revealed that Amazon’s algorithm
has learnt problematic patterns from consumers’ past viewing and buying
behaviour. It has categorized all products of similar stance together (see the
many edges connecting red nodes, i.e., products promoting misinformation, in
Figure 11). In some cases, it has also associated misinformative products with
neutral and debunking products (see Figure 11). Amazon needs to “unlearn” this
categorization. Additionally, the platform should incorporate misinformation
bias in its search and recommendation algorithms to reduce exposure to
misinformative content. There is also an urgent need to introduce some policy
changes. First and foremost, Amazon should stop promoting health
misinformative books by sponsoring them. We found 98 misinformative products
in the sponsored recommendations, indicating that today anti-vaccine outlets
can easily promote their products by spending some money. Amazon should also
introduce minimum quality requirements that must be met before a product is
allowed to be sponsored or sold on its platform. It can employ search quality
raters to rate the quality of search results for various health-related search
queries. Google has already set an example with its extensive Search Quality
Rating process and guidelines (Google, 2020, 2019). In recent times Amazon has
introduced several policy and algorithmic changes, including the roll-out of a
new feature, “verified purchase”, to curb the fake-review problem on its
platform (Roddy, 2019). Similar efforts are required to ensure product quality
as well. Amazon could introduce a similar “verified quality” or “verified
claims” tag for health-related products once they are evaluated by experts.
Having a catalogue of millions of products can make any kind of review process
tedious and challenging; Amazon can start by targeting the specific health and
vaccine-related topics that are most likely to be searched. Our work itself
presents a list of the most popular vaccine-related topics that can be used as
a starting point. Can we expect Amazon to make any changes to its current
policies and algorithms without sustained pressure? We believe audit studies
like ours are the way to reveal biases in the algorithms used by commercial
platforms, so that there is more awareness about the issues, which in turn
creates pressure on the organization to act. In the past, such audit studies
have led platforms to make positive changes to their algorithms (Raji and
Buolamwini, 2019). We hope our work acts as a call to action for Amazon and
also inspires vaccine and health audits on other platforms.
## 8\. Limitations
Our study is not without limitations. First, we only considered the top
products in each recommendation type present on a page while determining the
bias of the entire page. Annotating and determining the bias of all the
recommendations occurring on a page would give a much more accurate picture of
the recommendation algorithms. However, past studies have shown that the top
results receive the highest number of clicks and are thus more likely to
receive attention from users (Dean, 2019). Second, search queries themselves
have inherent bias. For example, the query ‘anti vaccine t-shirt’ suggests
that the user is looking for anti-vax products. Higher bias in the search
results of neutral queries is much worse than in those of biased queries. We
did not segregate our analysis based on search-query bias, although we did
notice two neutral search queries, namely ‘vaccine’ and ‘varicella vaccine’,
appearing in the list of most problematic search-query and filter
combinations. Third, while we audited various recommendations present on the
platform, we did not analyse email recommendations—product recommendations
delivered outside the platform. A journalistic report pointed out that email
recommendations could be contaminated too if a user shows an interest in a
misinformative product but leaves the platform without buying it (Diresta,
2019). We leave the investigation of these recommendations to future work.
Fourth, in our _Personalized audit_, accounts only built history for a week.
Moreover, experiments were only run on Amazon.com. We plan to continue running
our experiments and to explore features such as geolocation in future audits.
Fifth, our audit study only targeted results returned in response to
vaccine-related queries. Since Amazon is a vast platform that hosts a variety
of products and sellers, we cannot claim that our results generalize to other
misinformative topics or conspiracy theories. However, our methodology is
generic enough to be applied to other misinformative topics. Lastly, another
major limitation of the study is that in the _Personalized audit_ account
histories were built in a very conservative setting. Accounts performed
actions on only one product each day. Additionally, the actions were only
performed on products with the same stance. In the real world it would be hard
to find users who add only misinformative products to their carts for seven
days continuously. In spite of this limitation, our study still provides a
peek into the workings of Amazon’s algorithm and paves the way for future
audits that could use our audit methodology and extensive qualitative coding
scheme to perform experiments considering complex real-world settings.
## 9\. Conclusion
In this study, we conducted two sets of audit experiments on a popular
e-commerce platform, Amazon, to empirically determine the amount of
misinformation returned by its search and recommendation algorithms. We also
investigated whether personalization due to user history plays any role in
amplifying misinformation. Our audits resulted in a dataset of 4,997 Amazon
products annotated for health misinformation. We found that search results
returned for many vaccine-related queries contain a large number of
misinformative products, leading to high misinformation bias. Moreover,
misinformative products are also ranked higher than debunking products. Our
study also suggests the presence of a filter-bubble effect in recommendations,
where users performing actions on misinformative products are presented with
more misinformation in their homepages, product page recommendations and
pre-purchase recommendations. We believe our proposed methodology to audit
vaccine misinformation can be applied to other platforms to investigate health
misinformation bias. Overall, our study brings attention to the need for
search engines to ensure high standards and quality of results for
health-related queries.
## References
* (1)
* 10under100 (2020) 10under100. 2020\. 20 Eye Opening Amazon Statistics & Facts For 2020. https://10under100.com/amazon-statistics-facts/
* Baker (2018) Loren Baker. 2018\. Amazon’s Search Engine Ranking Algorithm: What Marketers Need to Know. https://www.searchenginejournal.com/amazon-search-engine-ranking-algorithm-explained/265173/
* Ball (2020) P Ball. 2020. Anti-vaccine movement could undermine efforts to end coronavirus pandemic, researchers warn.
* Ball and Maxmen (2020) Philip Ball and Amy Maxmen. 2020. The epic battle against coronavirus misinformation and conspiracy theories. _Nature (London)_ 581, 7809 (2020), 371–374.
* Belluz (2016) Julia Belluz. 2016\. Amazon is a giant purveyor of medical quackery. https://www.vox.com/2016/9/6/12815250/amazon-health-products-bogus
* Bragazzi et al. (2017) Nicola Luigi Bragazzi, Ilaria Barberis, Roberto Rosselli, Vincenza Gianfredi, Daniele Nucci, Massimo Moretti, Tania Salvatori, Gianfranco Martucci, and Mariano Martini. 2017. How often people google for vaccination: Qualitative and quantitative insights from a systematic search of the web-based activities using Google Trends. _Human Vaccines & Immunotherapeutics_ 13, 2 (2017), 464–469. https://doi.org/10.1080/21645515.2017.1264742 arXiv:https://doi.org/10.1080/21645515.2017.1264742 PMID: 27983896.
* Caron (2019) Christina Caron. 2019\. Pinterest Restricts Vaccine Search Results to Curb Spread of Misinformation. https://www.nytimes.com/2019/02/23/health/pinterest-vaccination-searches.html
* Center (2006) Pew Research Center. 2006\. Most internet users start at a search engine when looking for health information online. https://www.pewresearch.org/internet/2006/10/29/most-internet-users-start-at-a-search-engine-when-looking-for-health-information-online/
* Central (2020) Amazon Seller Central. accessed in 2020. Dietary Supplements. https://sellercentral.amazon.com/gp/help/external/G201829010?language=en_US
* Chen et al. (2018) Le Chen, Ruijun Ma, Anikó Hannák, and Christo Wilson. 2018. Investigating the Impact of Gender on Rank in Resume Search Engines. In _Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems_ (Montreal QC, Canada) _(CHI ’18)_. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3173574.3174225
* Chen et al. (2016) Le Chen, Alan Mislove, and Christo Wilson. 2016. An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace. In _Proceedings of the 25th International Conference on World Wide Web_ (Montréal, Québec, Canada) _(WWW ’16)_. International World Wide Web Conferences Steering Committee, Republic and Canton of Geneva, CHE, 1339–1349. https://doi.org/10.1145/2872427.2883089
* Cossard et al. (2020) Alessandro Cossard, Gianmarco De Francisci Morales, Kyriaki Kalimeri, Yelena Mejova, Daniela Paolotti, and Michele Starnini. 2020\. Falling into the Echo Chamber: The Italian Vaccination Debate on Twitter. _Proceedings of the International AAAI Conference on Web and Social Media_ 14, 1 (May 2020), 130–140. https://ojs.aaai.org/index.php/ICWSM/article/view/7285
* Dai et al. (2020) Enyan Dai, Yiwei Sun, and Suhang Wang. 2020. Ginger Cannot Cure Cancer: Battling Fake Health News with a Comprehensive Data Repository. _Proceedings of the International AAAI Conference on Web and Social Media_ 14, 1 (May 2020), 853–862. https://www.aaai.org/ojs/index.php/ICWSM/article/view/7350
* Dayton (2020) Emily Dayton. 2020\. Amazon Statistics You Should Know: Opportunities to Make the Most of America’s Top Online Marketplace. https://www.bigcommerce.com/blog/amazon-statistics/#10-fascinating-amazon-statistics-sellers-need-to-know-in-2020
* Dean (2019) Brian Dean. 2019\. Here’s What We Learned About Organic Click Through Rate. https://backlinko.com/google-ctr-stats
* Diakopoulos et al. (2018) Nicholas Diakopoulos, Daniel Trielli, Jennifer Stark, and Sean Mussenden. 2018. I vote for—how search informs our choice of candidate.
* Diresta (2019) Renee Diresta. 2019\. How Amazon’s Algorithms Curated a Dystopian Bookstore. https://www.wired.com/story/amazon-and-the-spread-of-health-misinformation/
* Dreisbach (2020) Tom Dreisbach. 2020\. On Amazon, Dubious ’Antiviral’ Supplements Proliferate Amid Pandemic. https://www.npr.org/2020/07/27/894825441/on-amazon-dubious-antiviral-supplements-proliferate-amid-pandemic
* Edelman and Luca (2014) Benjamin Edelman and Michael Luca. 2014. Digital Discrimination: The Case of Airbnb.com. https://doi.org/10.2139/ssrn.2377353
* Epstein and Robertson (2015) Robert Epstein and Ronald E Robertson. 2015. The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections. _Proceedings of the National Academy of Sciences_ 112, 33 (2015), E4512–E4521.
* Erden et al. (2019) Semih Erden, Kevser Nalbant, and Hurşit Ferahkaya. 2019\. Autism and Vaccinations: Does Google side with Science? _Journal of Contemporary Medicine_ 9, 3 (2019), 295–299.
* Financial (2020) The Financial. 2020\. Amazon removed 1 million fake coronavirus cures and overpriced products. https://www.finchannel.com/world/77738-amazon-removed-1-million-fake-and-overpriced-coronavirus-products
* Fox (2006) Susannah Fox. 2006\. Online Health Search 2006. https://www.pewresearch.org/internet/2006/10/29/online-health-search-2006/
* Ghenai and Mejova (2017) Amira Ghenai and Yelena Mejova. 2017. Catching Zika Fever: Application of Crowdsourcing and Machine Learning for Tracking Health Misinformation on Twitter. arXiv:1707.03778 http://arxiv.org/abs/1707.03778
* Ghenai and Mejova (2018) Amira Ghenai and Yelena Mejova. 2018. Fake cures: user-centric modeling of health misinformation in social media. _Proceedings of the ACM on human-computer interaction_ 2, CSCW (2018), 1–20.
* Ghezzi et al. (2020) Pietro Ghezzi, Peter Bannister, Gonzalo Casino, Alessia Catalani, Michel Goldman, Jessica Morley, Marie Neunez, Andreu Prados-Bo, Pierre Smeesters, Mariarosaria Taddeo, Tania Vanzolini, and Luciano Floridi. 2020\. Online Information of Vaccines: Information Quality, Not Only Privacy, Is an Ethical Responsibility of Search Engines. _Frontiers in Medicine_ 7 (08 2020). https://doi.org/10.3389/fmed.2020.00400
* Glaser (2017) April Glaser. 2017\. Amazon Is Suggesting “Frequently Bought Together” Items That Can Make a Bomb. https://slate.com/technology/2017/09/amazons-algorithm-is-suggesting-items-frequently-bought-together-that-can-make-a-bomb.html
* Goldhill (2020) Olivia Goldhill. 2020\. Amazon is selling coronavirus misinformation. https://qz.com/1816973/amazon-is-selling-coronavirus-misinformation/
* Google (2019) Google. 2019. Google’s Search Quality Rating Guidelines. https://static.googleusercontent.com/media/guidelines.raterhub.com/en//searchqualityevaluatorguidelines.pdf
* Google (2020) Google. 2020. Google Search Help. https://support.google.com/websearch/answer/9281931?hl=en
* Guan and Cutrell (2007) Zhiwei Guan and Edward Cutrell. 2007. An Eye Tracking Study of the Effect of Target Rank on Web Search. In _Proceedings of the SIGCHI Conference on Human Factors in Computing Systems_ (San Jose, California, USA) _(CHI ’07)_. Association for Computing Machinery, New York, NY, USA, 417–420. https://doi.org/10.1145/1240624.1240691
* Hannak et al. (2013) Aniko Hannak, Piotr Sapiezynski, Arash Molavi Kakhki, Balachander Krishnamurthy, David Lazer, Alan Mislove, and Christo Wilson. 2013\. Measuring Personalization of Web Search. In _Proceedings of the 22nd International Conference on World Wide Web_ (Rio de Janeiro, Brazil) _(WWW ’13)_. Association for Computing Machinery, New York, NY, USA, 527–538. https://doi.org/10.1145/2488388.2488435
* Hannak et al. (2014) Aniko Hannak, Gary Soeller, David Lazer, Alan Mislove, and Christo Wilson. 2014. Measuring Price Discrimination and Steering on E-Commerce Web Sites. In _Proceedings of the 2014 Conference on Internet Measurement Conference_ (Vancouver, BC, Canada) _(IMC ’14)_. Association for Computing Machinery, New York, NY, USA, 305–318. https://doi.org/10.1145/2663716.2663744
* Herr (2017) M. Herr. 2017. _Writing and Publishing Your Book: A Guide for Experts in Every Field_. Greenwood, USA. https://books.google.com/books?id=r2fuswEACAAJ
* Hu et al. (2019) Desheng Hu, Shan Jiang, Ronald E. Robertson, and Christo Wilson. 2019. Auditing the Partisanship of Google Search Snippets. In _The World Wide Web Conference_ (San Francisco, CA, USA) _(WWW ’19)_. Association for Computing Machinery, New York, NY, USA, 693–704. https://doi.org/10.1145/3308558.3313654
* Hussein et al. (2020) Eslam Hussein, Prerna Juneja, and Tanushree Mitra. 2020\. Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. _Proceedings of the ACM on Human-Computer Interaction_ 4, CSCW1 (2020), 1–27.
* Hutchinsona (2019) Andrew Hutchinsona. 2019\. Pinterest Will Limit Search Results for Vaccine-Related Queries to Content from Official Health Outlets. https://www.socialmediatoday.com/news/pinterest-will-limit-search-results-for-vaccine-related-queries-to-content/561885/
* Kata (2010) Anna Kata. 2010\. A postmodern Pandora’s box: anti-vaccination misinformation on the Internet. _Vaccine_ 28, 7 (2010), 1709–1716.
* Kay et al. (2015) Matthew Kay, Cynthia Matuszek, and Sean A. Munson. 2015\. Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 3819–3828. https://doi.org/10.1145/2702123.2702520
* Kim et al. (2020) Sangyeon Kim, Omer F. Yalcin, Samuel E. Bestvater, Kevin Munger, Burt L. Monroe, and Bruce A. Desmarais. 2020. The Effects of an Informational Intervention on Attention to Anti-Vaccination Content on YouTube. _Proceedings of the International AAAI Conference on Web and Social Media_ 14, 1 (May 2020), 949–953. https://ojs.aaai.org/index.php/ICWSM/article/view/7364
* Knobloch-Westerwick et al. (2015) Silvia Knobloch-Westerwick, Benjamin K Johnson, Nathaniel A Silver, and Axel Westerwick. 2015. Science exemplars in the eye of the beholder: How exposure to online science information affects attitudes. _Science Communication_ 37, 5 (2015), 575–601.
* Kulshrestha et al. (2017) Juhi Kulshrestha, Motahhare Eslami, Johnnatan Messias, Muhammad Bilal Zafar, Saptarshi Ghosh, Krishna P. Gummadi, and Karrie Karahalios. 2017. Quantifying Search Bias: Investigating Sources of Bias for Political Searches in Social Media. In _Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing_ (Portland, Oregon, USA) _(CSCW ’17)_. Association for Computing Machinery, New York, NY, USA, 417–432. https://doi.org/10.1145/2998181.2998321
* Matsakis (2019) Louise Matsakis. 2019\. Facebook Will Crack Down on Anti-Vaccine Content. https://www.wired.com/story/facebook-anti-vaccine-crack-down/
* McGee (2013) Matt McGee. 2013\. In Quality Raters’ Handbook, Google Adds Higher Standards For “Your Money Or Your Life” Websites. https://searchengineland.com/quality-raters-handbook-your-money-or-your-life-177663
* Mitra et al. (2016) Tanushree Mitra, Scott Counts, and James W. Pennebaker. 2016\. Understanding Anti-Vaccination Attitudes in Social Media. In _Proceedings of the Tenth International Conference on Web and Social Media, Cologne, Germany, May 17-20, 2016_. AAAI Press, Cologne, Germany, 269–278. http://www.aaai.org/ocs/index.php/ICWSM/ICWSM16/paper/view/13073
* Mitra et al. (2015) Tanushree Mitra, C.J. Hutto, and Eric Gilbert. 2015\. Comparing Person- and Process-Centric Strategies for Obtaining Quality Data on Amazon Mechanical Turk. In _Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems_ (Seoul, Republic of Korea) _(CHI ’15)_. Association for Computing Machinery, New York, NY, USA, 1345–1354. https://doi.org/10.1145/2702123.2702553
* Mønsted and Lehmann (2019) Bjarke Mønsted and Sune Lehmann. 2019. Algorithmic Detection and Analysis of Vaccine-Denialist Sentiment Clusters in Social Networks. arXiv:1905.12908 http://arxiv.org/abs/1905.12908
* Mustafaraj et al. (2020) Eni Mustafaraj, Emma Lurie, and Claire Devine. 2020\. The Case for Voter-Centered Audits of Search Engines during Political Elections. In _Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency_ (Barcelona, Spain) _(FAT* ’20)_. Association for Computing Machinery, New York, NY, USA, 559–569. https://doi.org/10.1145/3351095.3372835
* Owen (2020) Laura Hazard Owen. 2020\. One group that’s really benefited from Covid-19: Anti-vaxxers. https://www.niemanlab.org/2020/07/one-group-thats-really-benefitted-from-covid-19-anti-vaxxers/
* Perrin (2016) Andrew Perrin. 2016\. Book Reading 2016. https://www.pewresearch.org/internet/2016/09/01/book-reading-2016/
* Pirolli (2005) Peter Pirolli. 2005\. Rational Analyses of Information Foraging on the Web. _Cognitive Science_ 29, 3 (2005), 343–373. https://doi.org/10.1207/s15516709cog0000_20 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog0000_20
* Puschmann (2019) Cornelius Puschmann. 2019\. Beyond the Bubble: Assessing the Diversity of Political Search Results. _Digital Journalism_ 7, 6 (2019), 824–843. https://doi.org/10.1080/21670811.2018.1539626 arXiv:https://doi.org/10.1080/21670811.2018.1539626
* Rainie and Fox (2000) Lee Rainie and Susannah Fox. 2000. The Online Health Care Revolution. https://www.pewresearch.org/internet/2000/11/26/the-online-health-care-revolution/
* Raji and Buolamwini (2019) Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable Auditing: Investigating the Impact of Publicly Naming Biased Performance Results of Commercial AI Products. In _Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society_ (Honolulu, HI, USA) _(AIES ’19)_. Association for Computing Machinery, New York, NY, USA, 429–435. https://doi.org/10.1145/3306618.3314244
* Reynolds (2019) Matt Reynolds. 2019\. Amazon sells ’autism cure’ books that suggest children drink toxic, bleach-like substances. https://www.wired.co.uk/article/amazon-autism-fake-cure-books
* Robertson et al. (2018) Ronald E Robertson, Shan Jiang, Kenneth Joseph, Lisa Friedland, David Lazer, and Christo Wilson. 2018\. Auditing partisan audience bias within google search. _Proceedings of the ACM on Human-Computer Interaction_ 2, CSCW (2018), 1–22.
* Roddy (2019) Shannon Roddy. 2019\. Recent Updates to Amazon Verified Purchase Reviews. https://www.marketplacesellercourses.com/recent-updates-to-amazon-verified-purchase-reviews/
* Schwitzer (2017) Gary Schwitzer. 2017\. Pollution of health news.
* Shin and Valente (2020) Jieun Shin and Thomas Valente. 2020. Algorithms and Health Misinformation: A Case Study of Vaccine Books on Amazon. _Journal of Health Communication_ 25, 5 (2020), 394–401. https://doi.org/10.1080/10810730.2020.1776423 arXiv:https://doi.org/10.1080/10810730.2020.1776423 PMID: 32536257.
* Spector (2017) Carrie Spector. 2017\. Stanford scholars observe ’experts’ to see how they evaluate the credibility of information online. https://news.stanford.edu/press-releases/2017/10/24/fact-checkers-ouline-information/
* Steiner et al. (2020) Miriam Steiner, Melanie Magin, Birgit Stark, and Stefan Geiß. 2020. Seek and you shall find? A content analysis on the diversity of five search engines’ results on political queries. _Information, Communication & Society_ 0, 0 (2020), 1–25. https://doi.org/10.1080/1369118X.2020.1776367 arXiv:https://doi.org/10.1080/1369118X.2020.1776367
* Trielli and Diakopoulos (2019) Daniel Trielli and Nicholas Diakopoulos. 2019. Search as News Curator: The Role of Google in Shaping Attention to News Information. In _Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems_ (Glasgow, Scotland Uk) _(CHI ’19)_. Association for Computing Machinery, New York, NY, USA, 1–15. https://doi.org/10.1145/3290605.3300683
* van der Meer and Jin (2020) Toni GLA van der Meer and Yan Jin. 2020. Seeking formula for misinformation treatment in public health crises: The effects of corrective information type and source. _Health Communication_ 35, 5 (2020), 560–575.
## Appendix A appendix
The appendix contains a table (Table 9) of books annotated as promoting,
neutral and debunking that were selected to build the history of accounts in
the _Personalized audit_, as well as an illustration of our multi-stage
iterative coding process (Figure 14). Additionally, we give details about our
Amazon Mechanical Turk (AMT) task in Section A.1.
# | Debunking products: title (url code) | S | R | Neutral products: title (url code) | S | R | Misinformative products: title (url code) | S | R
---|---|---|---|---|---|---|---|---|---
1 | Vaccinated: One Man’s Quest to Defeat the World’s Deadliest Diseases (006122796X) | 4.7 | 134 | Baby’s Book: The First Five Years (Woodland Friends) (144131976X) | 4.9 | 614 | Dissolving Illusions: Disease, Vaccines, and The Forgotten History (1480216895) | 4.9 | 953
2 | Epidemiology and Prevention of Vaccine-Preventable Diseases, 13th Edition (990449114) | 4.5 | 11 | My Child’s Health Record Keeper (Log Book) (1441313842) | 4.8 | 983 | The Vaccine Book: Making the Right Decision for Your Child (Sears Parenting Library) (0316180521) | 4.8 | 1013
3 | The Panic Virus: The True Story Behind the Vaccine-Autism Controversy (1439158657) | 4.4 | 175 | Ten Things Every Child with Autism Wishes You Knew, 3rd Edition: Revised and Updated paperback (1941765882) | 4.8 | 792 | The Vaccine-Friendly Plan: Dr. Paul’s Safe and Effective Approach to Immunity and Health-from Pregnancy Through Your Child’s Teen Years (1101884231) | 4.8 | 877
4 | Vaccines: Expert Consult - Online and Print (Vaccines (Plotkin)) (1455700908) | 4.4 | 18 | Baby 411: Your Baby, Birth to Age 1! Everything you wanted to know but were afraid to ask about your newborn: breastfeeding, weaning, calming a fussy baby, milestones and more! Your baby bible! (1889392618) | 4.8 | 580 | How to End the Autism Epidemic (1603588248) | 4.8 | 717
5 | Bad Science (865479186) | 4.3 | 967 | Uniquely Human: A Different Way of Seeing Autism (1476776245) | 4.8 | 504 | How to Raise a Healthy Child in Spite of Your Doctor: One of America’s Leading Pediatricians Puts Parents Back in Control of Their Children’s Health (0345342763) | 4.8 | 598
6 | Reasons to Vaccinate: Proof That Vaccines Save Lives (B086B8MM71) | 4.3 | 232 | The Whole-Brain Child: 12 Revolutionary Strategies to Nurture Your Child’s Developing Mind (0553386697) | 4.7 | 2347 | Miller’s Review of Critical Vaccine Studies: 400 Important Scientific Papers Summarized for Parents and Researchers (188121740X) | 4.8 | 473
7 | Deadly Choices: How the Anti-Vaccine Movement Threatens Us All (465057969) | 4.2 | 223 | We’re Pregnant! The First Time Dad’s Pregnancy Handbook (1939754682) | 4.7 | 862 | Herbal Antibiotics, 2nd Edition: Natural Alternatives for Treating Drug-resistant Bacteria (1603429875) | 4.7 | 644
Table 9. Books corresponding to each annotation value shortlisted to build
account histories in our _Personalized audit_. S represents the star rating of
the product and R denotes the number of ratings received by the book.
Figure 14. Our multi-stage iterative qualitative coding process to obtain a
coding scheme for annotating Amazon products for health misinformation.
[Qualitative coding process]Three stages: (1) multiple iterations of
qualitative codification by the first author, (2) multiple iterations of
refining the codification scheme based on feedback obtained from 6
researchers, followed by (3) feedback from an external researcher.
### A.1. Amazon Mechanical Turk Job
#### A.1.1. Turk job description
In this section, we describe how we obtained annotations for our study from
Amazon Mechanical Turk workers (MTurks). Past research has shown that it is
possible to get good data from crowd-sourcing platforms like Amazon Mechanical
Turk (AMT) if the workers are screened and trained for the crowd-sourced task
(Mitra et al., 2015). Below we describe the screening process and our
annotation task briefly.
#### A.1.2. Screening
To get high quality annotations, we screened MTurks by adding 3 qualification
requirements. First, we required MTurks to be Masters. Second, we required
them to have atleast 90% approval rating. And lastly, we required them to get
a full score of 100 in a Qualification Test. We introduced a test to ensure
that MTurks attempting our annotation job had a good understanding of the
annotation scheme. The test had one eligibility question asking them to
confirm whether they are affiliated to authors’ University. Other three
questions involved Mturks to annotate three Amazon products (see Figure 18 for
a sample question). First author annotated these products and thus, their
annotation values were known. To ensure MTurks understood the task and
annotation scheme, we gave detailed instructions and described each annotation
value in detail with various examples of Amazon products in the qualifying
test (Figures 15, 16 and 17). Examples were added as visuals. In each example,
we marked the meta data used used for the annotation and explained why a
particular annotation value was assigned to the product (see Figure 17).
We took two steps to ensure that the instructions and test questions were easy
to understand and attempt. First, we posted the test on the subreddit r/mturk
(https://www.reddit.com/r/mturk/), a community of MTurks, to obtain feedback.
Second, we did a pilot run by posting ten tasks along with the aforementioned
screening requirements. After obtaining positive feedback from the community
and a successful pilot run, we released our AMT job titled “Amazon product
categorization task”. We paid the MTurks according to the United States
federal minimum wage ($7.25/hr). Additionally, we did not reject any worker’s
responses.
#### A.1.3. Amazon product categorization task
We posted 1630 annotations (tasks) in batches of 50 at a time. The job was set
up to get three responses for each task, and the majority response was
selected as the label of the Amazon product. To avoid any MTurk bias, we did
not explicitly reveal that the goal of the task was to obtain misinformation
annotations; we used the term “Amazon product categorization” to describe our
project and task throughout. For 34 products, all three MTurk responses
differed. The first author then annotated these products to obtain their
annotation values. Figure 19 shows the interface of our AMT job.
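The aggregation step just described is a simple majority vote with an expert
fallback; a minimal sketch (the function name and arguments are our own
illustration, not the study’s released code):

```python
from collections import Counter

def aggregate_label(responses, expert_label=None):
    """Majority vote over the three MTurk responses for one product;
    falls back to the expert (first-author) label when all three differ."""
    counts = Counter(responses)
    label, votes = counts.most_common(1)[0]
    return label if votes >= 2 else expert_label

# Two of three workers marked the product as debunking:
print(aggregate_label(["debunking", "debunking", "neutral"]))  # "debunking"
```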
Figure 15. Figure illustrating the Qualification Test instructions. The test
included 4 questions, including one eligibility question required by the
authors’ university. A full score of 100 was required to pass the test.
[Qualification test instructions:]The instructions were worded as “You will be
graded on 4 questions in total including the eligibility question. You qualify
if you fulfill our eligibility criteria and answer all three questions
mentioned below correctly. Please read the instructions carefully before
attempting the questions. In case you do not qualify, you can retake this test
after 10 minutes.”
Figure 16. Task description in the Qualification Test. The same instructions
were provided in the actual task.
[Instructions and annotation description]The figure is a snapshot of the
annotation instructions provided to MTurks, including a detailed description
of each annotation value.
Figure 17. Example explaining to MTurks how to determine the annotation value.
[Figure illustrates how a user can assign an annotation value to a product by
looking at its metadata]In the figure we highlight the description of the
Amazon book The Vaccine-Friendly Plan, whose author refers to vaccines as
aluminium shots and proposes a vaccine schedule that is not approved by
medical authorities. We also show the top critical review, which suggests that
the book is anti-vaccine.
Figure 18. Example of a Qualification Test question.
[Qualification Test question]The figure illustrates a qualification test
question, which has the URL of the Amazon product along with radio buttons
listing all the annotation values. MTurks had to select the annotation value
that best suited the Amazon product.
Figure 19. Interface of our Amazon product categorization task.
[Interface of our AMT product categorization task]Each task had a URL of the
Amazon product along with radio buttons listing all the annotation values.
# Some punctured codes of several families of binary linear codes
Xiaoqiang Wang, Dabin Zheng, Cunsheng Ding (corresponding author)
Xiaoqiang Wang and Dabin Zheng are with the Hubei Key Laboratory of Applied
Mathematics, Faculty of Mathematics and Statistics, Hubei University, Wuhan
430062, China (E-mail: <EMAIL_ADDRESS>, [email protected]).
Cunsheng Ding is with the Department of Computer Science and Engineering, The
Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong
Kong, China (E-mail: [email protected]).
Abstract. Two general constructions of linear codes with functions over finite
fields have been extensively studied in the literature. The first one is given
by $\mathcal{C}(f)=\left\\{{\rm
Tr}(af(x)+bx)_{x\in\mathbb{F}_{q^{m}}^{*}}:a,b\in\mathbb{F}_{q^{m}}\right\\}$,
where $q$ is a prime power,
${\mathbb{F}}_{q^{m}}^{*}={\mathbb{F}}_{q^{m}}\setminus\\{0\\}$,
${\mathrm{Tr}}$ is the trace function from ${\mathbb{F}}_{q^{m}}$ to
${\mathbb{F}}_{q}$, and $f(x)$ is a function from $\mathbb{F}_{q^{m}}$ to
$\mathbb{F}_{q^{m}}$ with $f(0)=0$. Almost bent functions, quadratic functions
and some monomials on ${\mathbb{F}}_{2^{m}}$ were used in the first
construction, and many families of binary linear codes with few weights were
obtained in the literature. This paper studies some punctured codes of these
binary codes. Several families of binary linear codes with few weights and new
parameters are obtained in this paper. Several families of distance-optimal
binary linear codes with new parameters are also produced in this paper.
Keywords. Boolean function, linear code, punctured code, distance-optimal
code, weight distribution
2010 Mathematics Subject Classification. 94B05, 94B15
## 1 Introduction of motivations, objectives, and methodology
Let $q$ be a prime power and $n$ be a positive integer. An $[n,k,d]$ code
$\mathcal{C}$ over the finite field $\mathbb{F}_{q}$ is a $k$-dimensional
linear subspace of $\mathbb{F}_{q}^{n}$ with minimum Hamming distance $d$. The
dual code, denoted by ${\mathcal{C}}^{\perp}$, of ${\mathcal{C}}$ is defined
by
${\mathcal{C}}^{\perp}=\left\\{{\mathbf{x}}=(x_{0},\ldots,x_{n-1})\in{\mathbb{F}}_{q}^{n}:\sum_{i=0}^{n-1}x_{i}c_{i}=0\
\forall\ {\mathbf{c}}=(c_{0},\ldots,c_{n-1})\in{\mathcal{C}}\right\\}.$
The minimum distance of ${\mathcal{C}}^{\perp}$, denoted by $d^{\perp}$, is
called the dual distance of ${\mathcal{C}}$. ${\mathcal{C}}$ is called a
projective code if its dual distance is at least $3$. An $[n,k,d]$ code over
${\mathbb{F}}_{q}$ is said to be distance-optimal (respectively, dimension-
optimal and length-optimal) if there is no $[n,k,d^{\prime}\geq d+1]$
(respectively, $[n,k^{\prime}\geq k+1,d]$ and $[n^{\prime}\leq n-1,k,d]$)
linear code over ${\mathbb{F}}_{q}$. An optimal code is a code that is length-
optimal, or dimension-optimal, or distance-optimal, or meets a bound for
linear codes. A binary linear code $\mathcal{C}$ is called self-complementary
if it contains the all-one vector. Let $A_{i}$ denote the number of codewords
with Hamming weight $i$ in $\mathcal{C}$. The weight enumerator of
$\mathcal{C}$ is defined by $1+A_{1}x+A_{2}x^{2}+\cdots+A_{n}x^{n}$. The
weight distribution of $\mathcal{C}$ is defined by the sequence
$(1,A_{1},\cdots,A_{n})$. If the number of nonzero $A_{i}$ in the sequence
$(A_{1},\cdots,A_{n})$ is $t$, then the code $\mathcal{C}$ is said to be a
$t$-weight code. By the parameters of a code, we mean its length, dimension
and minimum distance.
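In practice, the weight distribution of a small code can be computed by
exhaustive enumeration from a generator matrix. A minimal Python sketch (our
own illustration; the $[7,3]$ Simplex generator matrix below is a standard
example, not taken from this paper):

```python
from itertools import product

def weight_distribution(G):
    """Weight distribution (A_0, ..., A_n) of the binary code generated by
    the rows of G, by enumerating all 2^k codewords."""
    k, n = len(G), len(G[0])
    A = [0] * (n + 1)
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        A[sum(cw)] += 1
    return A

# The [7,3] Simplex code: the columns of G are the 7 nonzero vectors of F_2^3.
G = [[1,0,0,1,1,0,1],
     [0,1,0,1,0,1,1],
     [0,0,1,0,1,1,1]]
print(weight_distribution(G))  # [1, 0, 0, 0, 7, 0, 0, 0]: a one-weight code
```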
Coding theory has important applications in communications systems, data
storage systems, consumer electronics, and cryptography. In addition, coding
theory is closely related to many areas of mathematics, such as algebra,
algebraic geometry, algebraic function fields, algebraic number theory,
association schemes, combinatorics, finite fields, finite geometry, graph
theory, and group theory. These are the major motivations of studying coding
theory. Constructing linear codes with desired parameters and weight
distributions has been an important task in the history of coding theory.
Linear codes may be constructed directly with algebraic approaches,
combinatorial approaches and other approaches. Alternatively, almost all
linear codes over finite fields can be constructed from some known codes by
the puncturing or shortening techniques.
Let ${\mathcal{C}}$ be an $[n,k,d]$ code over ${\mathbb{F}}_{q}$, and let $T$
be a set of $t$ coordinate positions in ${\mathcal{C}}$. We puncture
${\mathcal{C}}$ by deleting all the coordinates in $T$ in each codeword of
${\mathcal{C}}$. The resulting code is still linear and has length $n-t$,
where $t=|T|$. We denote the punctured code by ${\mathcal{C}}^{T}$. Let
${\mathcal{C}}(T)$ be the set of codewords which are $0$ on $T$. Then
${\mathcal{C}}(T)$ is a subcode of ${\mathcal{C}}$. We now puncture
${\mathcal{C}}(T)$ on $T$, and obtain a linear code over ${\mathbb{F}}_{q}$
with length $n-t$, which is called a _shortened code_ of ${\mathcal{C}}$, and
is denoted by ${\mathcal{C}}_{T}$. The puncturing and shortening techniques
are two very important tools for constructing new codes from old ones. It was
shown that every projective linear code over ${\mathbb{F}}_{q}$ (i.e., the
minimum distance of the dual code is at least 3) is a punctured code of a
Simplex code over ${\mathbb{F}}_{q}$ and a shortened code of a Hamming code
over ${\mathbb{F}}_{q}$ [37]. These facts justify the importance of the
Simplex codes and the Hamming codes as well as the puncturing and shortening
techniques. Note that the Simplex codes are optimal with respect to the
Griesmer bound. Since every projective code is a punctured Simplex code, a
punctured code of an optimal linear code may have good or bad parameters. To
obtain a very good punctured code ${\mathcal{C}}^{T}$ from a good or optimal
linear code ${\mathcal{C}}$, one has to choose a proper set $T$ of coordinate
positions in ${\mathcal{C}}$. This is the difficulty of using the puncturing
technique to construct new linear codes with good parameters from old ones
[37, 56]. In this paper, we will use the puncturing technique to construct new
codes with interesting and new parameters from some old linear codes.
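In code, puncturing and shortening are simple coordinate operations; the
following minimal sketch (our own illustration; coordinates are indexed from 0
and the $[3,2]$ even-weight code is a toy example) implements both on explicit
codeword sets:

```python
def puncture(codewords, T):
    """Puncture: delete the coordinates in T from every codeword."""
    T = set(T)
    return {tuple(c for i, c in enumerate(cw) if i not in T)
            for cw in codewords}

def shorten(codewords, T):
    """Shorten: keep the codewords that are 0 on T, then puncture on T."""
    T = set(T)
    kept = (cw for cw in codewords if all(cw[i] == 0 for i in T))
    return {tuple(c for i, c in enumerate(cw) if i not in T) for cw in kept}

# The [3,2] even-weight code, punctured/shortened on its last coordinate:
C = {(0,0,0), (1,1,0), (1,0,1), (0,1,1)}
print(sorted(puncture(C, [2])))  # all of F_2^2: a [2,2,1] code
print(sorted(shorten(C, [2])))   # {(0,0), (1,1)}: a [2,1,2] code
```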
Linear codes with few weights have applications in secret sharing [1],
strongly regular graphs [5], association schemes [4] and authentication codes
[17]. In finite geometry, hyperovals in the projective geometry
${\mathrm{PG}}(2,2^{m})$ are the same as $[2^{m}+2,3,2^{m}]$ MDS codes with
two weights [13, Chapter 12], maximal arcs in ${\mathrm{PG}}(2,2^{m})$ are the
same as a special type of two-weight codes [13, Chapter 12], and ovoids in
${\mathrm{PG}}(3,q)$ are the same as a special type of two-weight codes [13,
Chapter 13]. Many families of linear codes have been used to construct
combinatorial $t$-designs [13, Chapters 5–13]. These are some of the
motivations of studying linear codes with few weights in the literature. In
the past two decades, a lot of progress on the construction of linear codes
with few weights has been made. The reader is referred to [16, 11, 12, 18, 24,
34, 38, 43, 46, 50, 51, 52, 47, 54, 44, 60] and the references therein for
information. One of the objectives of this paper is to construct binary linear
codes with few weights.
Functions and linear codes are closely connected. In the literature two
general constructions of linear codes with functions over finite fields have
been intensively investigated [12]. The first construction is given by
$\displaystyle\mathcal{C}(f)=\left\\{{\rm
Tr}(af(x)+bx)_{x\in\mathbb{F}_{q^{m}}^{*}}\,:\,a,b\in\mathbb{F}_{q^{m}}\right\\},$
(1)
where $q$ is a prime power,
${\mathbb{F}}_{q^{m}}^{*}={\mathbb{F}}_{q^{m}}\setminus\\{0\\}$,
${\mathrm{Tr}}$ is the trace function from ${\mathbb{F}}_{q^{m}}$ to
${\mathbb{F}}_{q}$, and $f(x)$ is a function from $\mathbb{F}_{q^{m}}$ to
$\mathbb{F}_{q^{m}}$ with $f(0)=0$. It is clear that $\mathcal{C}(f)$ is a
linear code with length $q^{m}-1$ and dimension at most $2m$. If $f(x)$ is a
monomial, then ${\mathcal{C}}(f)$ is permutation-equivalent to a cyclic code
[7]. This general construction has a long history and its importance is
supported by Delsarte’s Theorem [10]. The weight distribution of
$\mathcal{C}(f)$ is closely related to the value distributions of certain
exponential sums, and is difficult to settle in general. In order to determine
the weight distribution of ${\mathcal{C}}(f)$, one usually chooses $f(x)$ to
be a special function such as a quadratic, PN or APN
function. Many good and optimal linear codes have been obtained with this
construction. This is also a main method for constructing linear codes with
few weights. The reader is referred to, for example, [7, 26, 20, 39, 33, 43,
51, 57] for information.
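As a concrete instance of construction (1), the following sketch (our own
illustration, not code from the papers cited) enumerates ${\mathcal{C}}(f)$
for $q=2$, $m=3$ and the AB monomial $f(x)=x^{3}$, realising GF(8) as integers
in a polynomial basis modulo $x^{3}+x+1$ (our choice of irreducible
polynomial). The printed nonzero weights {2, 4, 6} are consistent with the
values $2^{m-1}$ and $2^{m-1}\pm 2^{(m-1)/2}$ that the Walsh spectrum of an AB
function yields:

```python
# GF(8) as integers 0..7 in a polynomial basis modulo x^3 + x + 1
# (our choice of irreducible polynomial; any other would do).
M, MOD = 3, 0b1011

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^m), reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= MOD
    return r

def tr(x):
    """Absolute trace Tr(x) = x + x^2 + ... + x^(2^(m-1)), in GF(2)."""
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

def f(x):
    """The Gold function f(x) = x^(2^1 + 1) = x^3 (AB since m = 3 is odd)."""
    return gf_mul(gf_mul(x, x), x)

pts = range(1, 1 << M)  # the coordinate index set GF(8)^*
C_f = {tuple(tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in pts)
       for a in range(1 << M) for b in range(1 << M)}
weights = sorted({sum(cw) for cw in C_f if any(cw)})
print(len(C_f), weights)  # 64 codewords of length 7; nonzero weights [2, 4, 6]
```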
The second general construction of linear codes is described as follows [16,
53]. Let $D=\\{d_{1},d_{2},\cdots,d_{n}\\}\subset{\mathbb{F}}_{q^{m}}^{*}$ be
a multiset. Define a linear code
${\mathcal{C}}_{D}=\left\\{\left({\rm Tr}(xd_{1}),{\rm Tr}(xd_{2}),\cdots,{\rm
Tr}(xd_{n})\right):x\in{\mathbb{F}}_{q^{m}}\right\\},$
where $q$ is a prime power, ${\mathrm{Tr}}$ is the trace function from
${\mathbb{F}}_{q^{m}}$ to ${\mathbb{F}}_{q}$. The code ${\mathcal{C}}_{D}$
over ${\mathbb{F}}_{q}$ has length $n$ and dimension at most $m$, where $D$ is
called the defining set of ${\mathcal{C}}_{D}$. This construction is
fundamental in the sense that every linear code over ${\mathbb{F}}_{q}$ can be
expressed as ${\mathcal{C}}_{D}$ for some positive integer $m$ and some subset
$D$ of ${\mathbb{F}}_{q^{m}}$ [23, 55]. It is known that this construction is
equivalent to the generator matrix construction of linear codes. The code
$\mathcal{C}_{D}$ may have good parameters if the defining set is properly
chosen. With the second general construction, many good linear codes with few
weights have been constructed [11, 15, 19, 24, 36, 34, 25, 38, 43, 52, 50].
With some variants of the second construction, interesting linear codes were
obtained in [48, 34, 32].
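As a toy instance of this defining-set construction (our own illustration, not
an example from the papers cited), the following sketch takes $q=2$, $m=2$ and
$D={\mathbb{F}}_{4}^{*}$, with GF(4) realised modulo $x^{2}+x+1$, and recovers
a one-weight $[3,2,2]$ code, matching the remark in the next paragraph:

```python
# q = 2, m = 2: GF(4) realised modulo x^2 + x + 1 (our choice of modulus).
M, MOD = 2, 0b111

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= MOD
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

D = [1, 2, 3]  # the defining set GF(4)^*
C_D = sorted({tuple(tr(gf_mul(x, d)) for d in D) for x in range(1 << M)})
print(C_D)  # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]: a one-weight [3,2,2] code
```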
By the definition of the second construction above,
${\mathcal{C}}_{{\mathbb{F}}_{q^{m}}^{*}}$ has parameters
$[q^{m}-1,m,(q-1)q^{m-1}]$ and weight enumerator
$1+(q^{m}-1)z^{(q-1)q^{m-1}}$. If $D\subset{\mathbb{F}}_{q^{m}}^{*}$ does not
contain repeated elements, let $\bar{D}={\mathbb{F}}_{q^{m}}^{*}\setminus D$.
In this case, we have
${\mathcal{C}}_{D}=({\mathcal{C}}_{{\mathbb{F}}_{q^{m}}^{*}})^{\bar{D}}$,
where the coordinate positions in ${\mathcal{C}}_{{\mathbb{F}}_{q^{m}}^{*}}$
are indexed by the elements in ${\mathbb{F}}_{q^{m}}^{*}$. This means that
${\mathcal{C}}_{D}$ is in fact a punctured code of the one-weight code
${\mathcal{C}}_{{\mathbb{F}}_{q^{m}}^{*}}$, which is a concatenation of
$(q-1)$ Simplex codes over ${\mathbb{F}}_{q}$ with the same parameters. Hence,
the second construction above is in fact a puncturing construction, and every
projective linear code over ${\mathbb{F}}_{q}$ is a punctured code of the one-
weight code ${\mathcal{C}}_{{\mathbb{F}}_{q^{m}}^{*}}$.
Motivated by the power of the puncturing technique and the first construction,
in this paper we study some punctured codes of several families of binary
linear codes ${\mathcal{C}}(f)$ from special functions on
${\mathbb{F}}_{2^{m}}$. Specifically, we will study the following punctured
codes.
Let $f$ be a function on ${\mathbb{F}}_{2^{m}}$ with $f(0)=0$, and let
$D=\\{d_{1},d_{2},\cdots,d_{n}\\}\subset{\mathbb{F}}_{2^{m}}^{*}$ be a set
that does not contain any repeated elements. Define
$\bar{D}={\mathbb{F}}_{2^{m}}^{*}\setminus D$. In this paper, we will study
the punctured code
${\mathcal{C}}(f)^{\bar{D}}=\left\\{{\mathbf{c}}(a,b)=\left({\rm
Tr}(af(d_{1})+bd_{1}),\cdots,{\rm
Tr}(af(d_{n})+bd_{n})\right):a,b\in{\mathbb{F}}_{2^{m}}\right\\},$ (2)
where ${\mathrm{Tr}}$ is the trace function from ${\mathbb{F}}_{2^{m}}$ to
${\mathbb{F}}_{2}$ and the binary code ${\mathcal{C}}(f)$ was defined in (1).
We call the set $D$ the position set of the code ${\mathcal{C}}(f)^{\bar{D}}$,
as we index the coordinate positions of the code ${\mathcal{C}}(f)$ with the
elements in ${\mathbb{F}}_{2^{m}}^{*}$. The dimension of
${\mathcal{C}}(f)^{\bar{D}}$ is at most $2m$. The two objectives of this paper
are to obtain binary linear codes ${\mathcal{C}}(f)^{\bar{D}}$ with new
parameters and few weights and $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ with new
and good parameters. To this end, we have to select $f$ and the position set
$D$ carefully.
Concretely, we first choose the position set to be
$\begin{split}D=\left\\{x\in\mathbb{F}_{2^{m}}^{*}\,:\,{\rm Tr}(\lambda
f(x))=\nu\right\\}\end{split}$ (3)
and determine the weight distributions of ${\mathcal{C}}(f)^{\bar{D}}$, where
$\nu\in\left\\{0,1\right\\}$, $\lambda\in\mathbb{F}_{2^{m}}^{*}$ and $f(x)$ is
an almost bent function from ${\mathbb{F}}_{2^{m}}$ to itself. We show that
${\mathcal{C}}(f)^{\bar{D}}$ is a five-weight code if $\nu=0$ and a self-
complementary six-weight code if $\nu=1$. Some of the codes
${\mathcal{C}}(f)^{\bar{D}}$ are optimal according to the tables of best codes
known in [22]. The dual of ${\mathcal{C}}(f)^{\bar{D}}$ is distance-optimal
with respect to the sphere packing bound if $\nu=1$. We then present several
classes of four-weight or six-weight linear codes by choosing $f(x)$ to be
some special quadratic functions, and the position set to be the support of
${\rm Tr}(x)$, i.e.,
$D=\left\\{x\in\mathbb{F}_{2^{m}}^{*}\,:\,{\rm Tr}(x)=1\right\\}.$ (4)
Several families of self-complementary binary linear codes are obtained. The
parameters of the duals of ${\mathcal{C}}(f)^{\bar{D}}$ are also determined
and almost all of them are distance-optimal with respect to the sphere packing
bound. Finally, we present several classes of binary linear codes with three
weights, or five weights or six weights by selecting the position sets to be
some cyclotomic classes. Some of the codes and their duals are distance-
optimal. The parameters of most of the codes presented in this paper are new.
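These objects are small enough to experiment with directly. As a hedged
illustration (our own sketch, not code from this paper), the following Python
snippet enumerates ${\mathcal{C}}(f)^{\bar{D}}$ for $m=5$, the AB function
$f(x)=x^{3}$, $\lambda=1$ and $\nu=0$ in (3), realising GF(32) modulo
$x^{5}+x^{2}+1$ (our choice of modulus), and prints the length, size and
weight distribution, against which the five-weight claim can be checked for
this instance:

```python
from collections import Counter

M, MOD = 5, 0b100101  # GF(32) modulo x^5 + x^2 + 1; m = 5 is odd, so x^3 is AB

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^m), reducing modulo MOD."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= MOD
    return r

def tr(x):
    """Absolute trace from GF(2^m) to GF(2)."""
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

f = lambda x: gf_mul(gf_mul(x, x), x)  # f(x) = x^3

# Position set (3) with lambda = 1 and nu = 0; expect |D| = 2^(m-1) - 1 = 15,
# since x -> x^3 permutes GF(32)^* (gcd(3, 31) = 1).
D = [x for x in range(1, 1 << M) if tr(f(x)) == 0]

# The punctured code (2): keep only the coordinates indexed by D.
code = {tuple(tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in D)
        for a in range(1 << M) for b in range(1 << M)}
dist = Counter(sum(cw) for cw in code if any(cw))
print(len(D), len(code), sorted(dist.items()))
```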
The rest of this paper is organized as follows. Section 2 introduces some
preliminaries. Section 3 investigates the weight distribution of the linear
code ${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual, where $f(x)$
is an almost bent function, $D=\left\\{x\in\mathbb{F}_{2^{m}}^{*}:{\rm
Tr}(\lambda f(x))=\nu\right\\}$, $\nu\in\left\\{0,1\right\\}$ and
$\lambda\in\mathbb{F}_{2^{m}}^{*}$. Section 4 determines the weight
distribution of the linear code ${\mathcal{C}}(f)^{\bar{D}}$ and the
parameters of its dual, where $f(x)$ is some special quadratic function and
$D=\left\\{x\in\mathbb{F}_{2^{m}}^{*}:{\rm Tr}(x)=1\right\\}$. Section 5
settles the weight distribution of the linear code
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual, where $D$ is a
cyclotomic class and $f$ is a monomial. Section 6 concludes this paper.
## 2 Preliminaries
In this section, we introduce some special functions on
${\mathbb{F}}_{2^{m}}$, some exponential sums and some basic results in coding
theory, which will be used later in this paper.
### 2.1 Notation used starting from now on
Starting from now on, we assume $m\geq 4$ and adopt the following notation
unless otherwise stated:
$\bullet$ ${\mathbb{F}}_{2^{m}}$ is the finite field with $2^{m}$ elements and
$\gamma$ is a primitive element of $\mathbb{F}_{2^{m}}$.
$\bullet$ ${\mathbb{F}}_{2^{m}}^{*}={\mathbb{F}}_{2^{m}}\setminus\\{0\\}$.
$\bullet$ ${\rm Tr}(\cdot)$ is the absolute trace function from
${\mathbb{F}}_{2^{m}}$ to ${\mathbb{F}}_{2}$.
$\bullet$ ${\rm Tr}_{u}^{v}(\cdot)$ is the trace function from
${\mathbb{F}}_{2^{v}}$ to ${\mathbb{F}}_{2^{u}}$, where $u,v$ are positive
integers such that $u\,|\,v$.
$\bullet$ $v_{2}(\cdot)$ is the 2-adic order function with $v_{2}(0)=\infty$.
$\bullet$ $\operatorname{wt_{H}}(\bf c)$ denotes the Hamming weight of a
vector ${\mathbf{c}}$.
$\bullet$ $d_{H}({\mathcal{C}})$ denotes the minimum distance of a linear code
${\mathcal{C}}$.
### 2.2 AB and APN functions
Let $f(x)$ be a function from $\mathbb{F}_{2^{m}}$ to $\mathbb{F}_{2^{m}}$.
The Walsh transform of $f(x)$ at $(a,b)\in\mathbb{F}_{2^{m}}^{2}$ is defined
as
$W_{f}(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(af(x)+bx)}.$ (5)
If $W_{f}(a,b)\in\\{0,\pm 2^{\frac{m+1}{2}}\\}$ for every pair
$(a,b)\in{\mathbb{F}}_{2^{m}}^{2}$ with $a\neq 0$, then $f(x)$ is called an
almost bent (AB) function. Almost bent functions exist only for odd $m$.
Define
$\delta_{f}={\rm
max}_{a\in\mathbb{F}_{2^{m}}^{*},b\in\mathbb{F}_{2^{m}}}|\\{x\in\mathbb{F}_{2^{m}}\,:\,f(x+a)+f(x)=b\\}|.$
Then $f(x)$ is called an almost perfect nonlinear (APN) function if
$\delta_{f}=2$.
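Definition (5) is easy to evaluate by brute force for small $m$. A minimal
sketch (our own illustration) computes the Walsh spectrum of the Gold function
$f(x)=x^{3}$ on GF(8) and confirms the AB value set
$\\{0,\pm 2^{(m+1)/2}\\}=\\{0,\pm 4\\}$ for $m=3$:

```python
# GF(8) modulo x^3 + x + 1 (our choice of irreducible polynomial).
M, MOD = 3, 0b1011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= MOD
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

def walsh(a, b):
    """W_f(a, b) from (5) for f(x) = x^3."""
    return sum((-1) ** tr(gf_mul(a, gf_mul(gf_mul(x, x), x)) ^ gf_mul(b, x))
               for x in range(1 << M))

spectrum = {walsh(a, b) for a in range(1, 1 << M) for b in range(1 << M)}
print(spectrum)  # expected: a subset of {0, 4, -4}
```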
APN and AB functions have applications in coding theory, combinatorics,
cryptography, finite geometry and sequence design. Many good linear codes over
finite fields have been constructed with APN and AB functions [6, 11, 12, 33,
43]. AB functions and APN functions have the following relationship.
###### Lemma 2.1
[3] Let $\mathbb{F}_{2^{m}}$ be a finite field with $2^{m}$ elements. If
$f(x)$ is an almost bent function over $\mathbb{F}_{2^{m}}$, then $f(x)$ is an
almost perfect nonlinear function over $\mathbb{F}_{2^{m}}$.
The converse of Lemma 2.1 is not true: almost bent functions exist only for
odd $m$, while almost perfect nonlinear functions exist also for even $m$.
### 2.3 Quadratic functions
By identifying the finite field ${\mathbb{F}}_{2^{m}}$ with the
$m$-dimensional vector space ${\mathbb{F}}_{2}^{m}$ over ${\mathbb{F}}_{2}$, a
function $f$ from ${\mathbb{F}}_{2^{m}}$ to ${\mathbb{F}}_{2}$ can be viewed
as an $m$-variable polynomial over ${\mathbb{F}}_{2}$. In the sequel, we fix a
basis of ${\mathbb{F}}_{2^{m}}$ over ${\mathbb{F}}_{2}$ and identify
$x\in{\mathbb{F}}_{2^{m}}$ with a vector
$(x_{1},x_{2},\cdots,x_{m})\in{\mathbb{F}}_{2}^{m}$. A quadratic function over
${\mathbb{F}}_{2}$ is then of the form
$Q(x_{1},x_{2},\cdots,x_{m})=(x_{1},x_{2},\cdots,x_{m})A(x_{1},x_{2},\cdots,x_{m})^{T},$
where $A=(a_{ij})_{m\times m},\,a_{ij}\in{\mathbb{F}}_{2}$, is an upper
triangular matrix. The matrix $A+A^{T}$ is called an alternate matrix and its
rank must be even [49]. By the theory of linear equations, the rank $r$ of the
matrix $A+A^{T}$ is equal to the codimension of the ${\mathbb{F}}_{2}$-linear
subspace
$V=\\{x\in{\mathbb{F}}_{2^{m}}:Q(x+z)+Q(x)+Q(z)=0\mbox{ for all
}z\in{\mathbb{F}}_{2^{m}}\\},$ (6)
i.e. $r=m-\dim_{{\mathbb{F}}_{2}}V$. Let $G(x)$ be a linearized polynomial
over $\mathbb{F}_{2^{m}}$. Then
$\begin{split}\left(\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(Q(x)+G(x))}\right)^{2}&=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(Q(x)+G(x))}\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(Q(y)+G(y))}\\\
&=\sum_{x,y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(Q(x+y)+G(x+y)+Q(x)+G(x))}\\\
&=\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(Q(y)+G(y))}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(Q(x+y)+Q(x)+Q(y))}\\\ &=2^{m}\cdot\sum_{y\in V}(-1)^{{\rm
Tr}(Q(y)+G(y))},\end{split}$
where $V$ was defined in $(\ref{eq:quadraticform1})$. It is easy to check that
${{\rm Tr}\left(Q(x+y)+G(x+y)\right)}={{\rm Tr}\left(Q(x)+G(x)\right)}+{{\rm
Tr}\left(Q(y)+G(y)\right)}$
for any $x,y\in V$. Then
$\begin{split}\left(\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(Q(x)+G(x))}\right)^{2}=\begin{cases}2^{2m-r},&\text{if ${{\rm
Tr}\left(Q(y)+G(y)\right)}=0$ for all $y\in V$,}\\\
0,&\text{otherwise},\end{cases}\end{split}$ (7)
where $r$ is the rank of $Q(x)$ and $r=m-\dim_{{\mathbb{F}}_{2}}V$, so that
$|V|=2^{m-r}$. The following are some well-known results about quadratic
forms, which will be needed in this paper.
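Before stating them, identity (7) can be checked by brute force on a toy
instance; a minimal sketch (our own illustration: $m=3$, $Q(x)={\rm
Tr}(x^{3})$, $G(x)={\rm Tr}(bx)$, GF(8) realised modulo $x^{3}+x+1$):

```python
# m = 3, Q(x) = Tr(x^3), G(x) = Tr(bx); GF(8) modulo x^3 + x + 1 (our choice).
M, MOD = 3, 0b1011

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> M) & 1:
            a ^= MOD
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

Q = lambda x: tr(gf_mul(gf_mul(x, x), x))
F = range(1 << M)

# The subspace V of (6), found by brute force, and the rank r = m - dim V.
V = [x for x in F if all(Q(x ^ z) ^ Q(x) ^ Q(z) == 0 for z in F)]
r = M - (len(V).bit_length() - 1)

for b in F:  # check (7) for every linear part G(x) = Tr(bx)
    S = sum((-1) ** (Q(x) ^ tr(gf_mul(b, x))) for x in F)
    assert S * S in (0, 2 ** (2 * M - r))
print(len(V), r)  # |V| = 2 (dim V = 1), rank r = 2
```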
###### Lemma 2.2
[8, 9] Let $m$ and $k$ be non-negative integers with $v_{2}(m)\leq v_{2}(k)$
and $a,b\in\mathbb{F}_{2^{m}}$ with $a\neq 0$. Let
$\begin{split}S(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}\left(ax^{2^{k}+1}+bx\right)},\end{split}$ (8)
then the possible values of $S(a,b)$ are in the set $\\{0,\pm
2^{\frac{m+\ell}{2}}\\}$, where $\ell=\gcd(m,k)$.
###### Lemma 2.3
[8, 9] Let $m$ and $k$ be non-negative integers with $v_{2}(m)>v_{2}(k)$ and
$a,b\in\mathbb{F}_{2^{m}}$ with $a\neq 0$. Let $S(a,b)$ be defined in (8).
Then $S(a,b)=0$ unless the equation $a^{2^{k}}x^{2^{2k}}+ax+b^{2^{k}}=0$ is
solvable. Let $\gamma$ be a primitive element of $\mathbb{F}_{2^{m}}$. Let
$\ell=\gcd(m,k)$. Assume $a^{2^{k}}x^{2^{2k}}+ax+b^{2^{k}}=0$ is solvable.
Then there are two possibilities as follows.
(i) If $a\neq\gamma^{s\left(2^{\ell}+1\right)}$ for any integer $s$, then the
equation has a unique solution $x_{b}$ for any $b\in\mathbb{F}_{2^{m}}$, and
$S(a,b)=(-1)^{\frac{m}{2\ell}-{\rm
Tr}\left(ax_{b}^{2^{k}+1}\right)}2^{\frac{m}{2}}.$
(ii) If $a=\gamma^{s\left(2^{\ell}+1\right)}$ for some integer $s$, then the
equation is solvable if and only if ${\rm
Tr}_{2\ell}^{m}\left(b\beta^{-s}\right)=0$, where
$\beta\in\mathbb{F}_{2^{m}}^{*}$ is the unique element satisfying
$\beta^{\frac{2^{k}+1}{2^{\ell}+1}}=\gamma$. In this case,
$S(a,b)=-(-1)^{\frac{m}{2\ell}-{\rm
Tr}\left(ax_{b}^{2^{k}+1}\right)}2^{\frac{m}{2}+\ell},$
where $x_{b}$ is a solution to $a^{2^{k}}x^{2^{2k}}+ax+b^{2^{k}}=0$.
###### Lemma 2.4
[42] Let $\gamma$ be a primitive element of $\mathbb{F}_{2^{m}}$. Assume that
$m=2sh$ and $\ell\,|\,(2^{h}+1)$. Then
$\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(\gamma^{i}x^{\ell})}=\left\\{\begin{array}[]{lcl}(-1)^{s}2^{\frac{m}{2}},&{\rm
if}\,\,\,i\not\equiv 0\pmod{\ell},\\\ (-1)^{s-1}(\ell-1)2^{\frac{m}{2}},&{\rm
if}\,\,\,i\equiv 0\pmod{\ell}.\end{array}\right.$
###### Lemma 2.5
[40] Let $\ell=\gcd(\frac{m}{2},k)$ and $\ell^{\prime}=\gcd(\frac{m}{2}+k,2k)$. Let
$S_{1}(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}\left(ax^{2^{k}+1}+bx^{2^{\frac{m}{2}}+1}\right)}.$
If $\ell^{\prime}=2\ell$ and $(a,b)$ runs over $\mathbb{F}_{2^{m}}\times\mathbb{F}_{2^{\frac{m}{2}}}$, then
$\begin{split}S_{1}(a,b)=\begin{cases}2^{m},&{\rm occurring}\,\,\,1\,\,{\rm time},\\\ -2^{\frac{m}{2}},&{\rm occurring}\,\,\,\frac{2^{3k}(2^{\frac{m}{2}}-1)(2^{m}-2^{m-2k}-2^{m-3k}+2^{\frac{m}{2}}-2^{\frac{m}{2}-k}+1)}{(2^{k}+1)(2^{2k}-1)}\,\,{\rm times},\\\ 2^{\frac{m}{2}+k},&{\rm occurring}\,\,\,\frac{2^{k}(2^{m}-1)(2^{m}-2^{m-\ell}+2^{m-2\ell}+1)}{(2^{k}+1)^{2}}\,\,{\rm times},\\\ -2^{\frac{m}{2}+2k},&{\rm occurring}\,\,\,\frac{(2^{\frac{m}{2}-\ell}-1)(2^{m}-1)}{(2^{k}+1)(2^{2k}-1)}\,\,{\rm times}.\end{cases}\end{split}$
### 2.4 Pless power moments and the sphere packing bound
To study the parameters of the duals of the punctured binary codes
${\mathcal{C}}(f)^{\bar{D}}$, we need the Pless power moments of linear codes.
Let $\mathcal{C}$ be a binary $[n,k]$ code, and denote its dual by
$\mathcal{C}^{\perp}$. Let $A_{i}$ and $A^{\perp}_{i}$ be the number of
codewords of weight $i$ in $\mathcal{C}$ and $\mathcal{C}^{\perp}$,
respectively. The first five Pless power moments are the following [41, p.
131]:
$\begin{split}&\sum_{i=0}^{n}A_{i}=2^{k};\\\
&\sum_{i=0}^{n}iA_{i}=2^{k-1}(n-A_{1}^{\perp});\\\
&\sum_{i=0}^{n}i^{2}A_{i}=2^{k-2}[n(n+1)-2nA_{1}^{\perp}+2A_{2}^{\perp}];\\\
&\sum_{i=0}^{n}i^{3}A_{i}=2^{k-3}[n^{2}(n+3)-(3n^{2}+3n-2)A_{1}^{\perp}+6nA_{2}^{\perp}-6A_{3}^{\perp}];\\\
&\sum_{i=0}^{n}i^{4}A_{i}=2^{k-4}[n(n+1)(n^{2}+5n-2)-4n(n^{2}+3n-2)A_{1}^{\perp}+4(3n^{2}+3n-4)A_{2}^{\perp}-24nA_{3}^{\perp}+24A_{4}^{\perp}].\\\
\end{split}$
If $A_{1}^{\perp}=A_{2}^{\perp}=A_{3}^{\perp}=A_{4}^{\perp}=0$, then the sixth
Pless power moment becomes the following:
$\sum_{i=0}^{n}i^{5}A_{i}=2^{k-5}\cdot n^{5}+5\cdot 2^{k-4}\cdot n^{4}+15\cdot
2^{k-5}\cdot n^{3}-5\cdot 2^{k-4}\cdot n^{2}-A_{5}^{\perp}\cdot 2^{k-5}\cdot
120.$
We will need the following bound for binary linear codes later.
###### Lemma 2.6 (The sphere packing bound)
Let $\mathcal{C}$ be an $[n,k,d]$ binary code. Then
$2^{n}\geq 2^{k}\sum_{i=0}^{\lfloor\frac{d-1}{2}\rfloor}\binom{n}{i}.$
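Since the bound is invoked repeatedly below only through small instances, a one-line helper (ours, not part of the paper's development) suffices to evaluate it:

```python
from math import comb

def sphere_packing_ok(n, k, d):
    """True iff an [n, k, d] binary code is compatible with the sphere packing bound."""
    return 2 ** n >= 2 ** k * sum(comb(n, i) for i in range((d - 1) // 2 + 1))

# e.g. ruling out d = 7 for the dual [64, 50] code in Case 2 of Theorem 3.3 (m = 7):
print(sphere_packing_ok(64, 50, 6), sphere_packing_ok(64, 50, 7))   # True False
```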
## 3 Some punctured codes of the binary codes from almost bent functions
Recall the code ${\mathcal{C}}(f)$ defined in (1). When $q=2$ and
$f(x)=x^{2^{h}+1}$ with $\gcd(h,m)=1$ and $m$ being odd, the parameters and
weight distribution of the binary code ${\mathcal{C}}(f)$ were settled in [29,
30]. When $q=2$, $m$ is odd and $f(x)$ is an almost bent function on
${\mathbb{F}}_{2^{m}}$, the parameters and weight distribution of the binary
code ${\mathcal{C}}(f)$ were settled in [6]. The binary code
${\mathcal{C}}(f)$ has parameters $[2^{m}-1,2m,2^{m-1}-2^{(m-1)/2}]$ and three
nonzero weights [6]. Let ${\mathcal{C}}(f)^{\bar{D}}$ be the binary punctured
code defined in (2) with position set $D$ in (3), where $f(x)$ is an almost
bent function from ${\mathbb{F}}_{2^{m}}$ to itself. In this section, we
investigate the weight distribution of the punctured code
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual. We first give the
length of the linear code ${\mathcal{C}}(f)^{\bar{D}}$ in the following lemma.
###### Lemma 3.1
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (3), where $f(x)$ is an almost bent function from
$\mathbb{F}_{2^{m}}$ to itself. Then the length $n$ of
${\mathcal{C}}(f)^{\bar{D}}$ is
$\begin{split}n=|D|=\begin{cases}2^{m-1}-(-1)^{\nu}2^{\frac{m-1}{2}}-1+\nu,&{\rm
if}\,\,\,W_{f}(\lambda,0)=-2^{\frac{m+1}{2}},\\\
2^{m-1}+(-1)^{\nu}2^{\frac{m-1}{2}}-1+\nu,&{\rm
if}\,\,\,W_{f}(\lambda,0)=2^{\frac{m+1}{2}},\\\ 2^{m-1}-1+\nu,&{\rm
if}\,\,\,W_{f}(\lambda,0)=0,\end{cases}\end{split}$
where $W_{f}(\lambda,0)$ was defined in (5) and $\nu\in\\{0,1\\}$.
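As a quick sanity check of Lemma 3.1, the following sketch (ours) computes $n=|D|$ directly for the Gold function $f(x)=x^{3}$ on $\mathbb{F}_{2^{5}}$, which is almost bent since $m=5$ is odd and $\gcd(1,m)=1$, taking $\lambda=1$ as in Example 3.4; the field model $x^{5}+x^{2}+1$ is our choice. Since $x\mapsto x^{3}$ permutes $\mathbb{F}_{2^{5}}$, one expects $W_{f}(1,0)=0$, so the third case of the lemma should give $n=2^{m-1}-1+\nu$.

```python
M, POLY = 5, 0b100101        # GF(2^5) modelled via x^5 + x^2 + 1 (our choice)

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(x):                    # absolute trace, an element of {0, 1}
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

f = lambda x: gf_mul(gf_mul(x, x), x)                     # f(x) = x^3, almost bent
W = sum((-1) ** tr(f(x)) for x in range(1 << M))          # W_f(1, 0) as in (5)
for nu in (0, 1):
    n = len([x for x in range(1, 1 << M) if tr(f(x)) == nu])   # n = |D|, D as in (3)
    print(nu, W, n)           # expect W = 0 and n = 2^{m-1} - 1 + nu (third case)
```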
In order to apply the Pless power moments to determine the multiplicity of
each Hamming weight of ${\mathcal{C}}(f)^{\bar{D}}$, we need to investigate
the minimum Hamming distance of its dual.
###### Lemma 3.2
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (3), where $f(x)$ is an almost bent function from
$\mathbb{F}_{2^{m}}$ to itself. Then the dual distance is lower bounded by
$\begin{split}d_{H}\left(\left({\mathcal{C}}(f)^{\bar{D}}\right)^{\perp}\right)\geq\begin{cases}5,&{\rm
if}\,\,\,\nu=0,\\\ 6,&{\rm if}\,\,\,\nu=1.\\\ \end{cases}\end{split}$
Proof. It is easy to see
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\geq 3$ from the
definition of ${\mathcal{C}}(f)^{\bar{D}}$. Next, we show that
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\neq 4$. The case of
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\neq 3$ can be shown
similarly, and we omit the details of the proof.
If $d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)=4$, then there are
four pairwise-distinct elements $x_{1}$, $x_{2}$, $x_{3}$ and $x_{4}$ in
${\mathbb{F}}_{2^{m}}^{*}$ such that
$\begin{split}\begin{cases}{\rm Tr}(\lambda f(x_{1}))={\rm Tr}(\lambda
f(x_{2}))={\rm Tr}(\lambda f(x_{3}))={\rm Tr}(\lambda f(x_{4}))=\nu,\\\
a(x_{1}+x_{2}+x_{3}+x_{4})+b(f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4}))=0\end{cases}\end{split}$
for any $a,b\in\mathbb{F}_{2^{m}}$. Then,
$\begin{split}\begin{cases}{\rm Tr}(\lambda f(x_{1}))={\rm Tr}(\lambda
f(x_{2}))={\rm Tr}(\lambda f(x_{3}))={\rm Tr}(\lambda f(x_{4}))=\nu,\\\
x_{1}+x_{2}+x_{3}+x_{4}=0,\\\
f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4})=0.\end{cases}\end{split}$ (9)
The second and third equations in (9) can be rewritten as
$\begin{split}\begin{cases}x_{1}+x_{2}=\alpha\,\,\text{and}\,\,x_{3}+x_{4}=\alpha,\\\
f(x_{1})+f(x_{2})=\beta\,\,\text{and}\,\,f(x_{3})+f(x_{4})=\beta,\end{cases}\end{split}$
where $\alpha,\beta\in\mathbb{F}_{2^{m}}$ with $\alpha\neq 0$. Hence, there
are four different elements $x_{1}$, $x_{1}+\alpha$, $x_{3}$ and
$x_{3}+\alpha$ satisfying the equation $f(x)+f(x+\alpha)=\beta$. This
contradicts Lemma 2.1, as $f(x)$ is an almost perfect nonlinear function.
Therefore, $d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\geq 5$.
If $\nu=1$ and $d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)=5$,
there are five pairwise-distinct elements $x_{1},x_{2},x_{3},x_{4},x_{5}$ in
${\mathbb{F}}_{2^{m}}^{*}$ such that
$f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4})+f(x_{5})=0$ by the definition of
${\mathcal{C}}(f)^{\bar{D}}$, then ${\rm
Tr}(\lambda(f(x_{1})+f(x_{2})+f(x_{3})+f(x_{4})+f(x_{5})))=0$, which contradicts ${\rm Tr}(\lambda f(x_{1}))={\rm Tr}(\lambda f(x_{2}))={\rm Tr}(\lambda f(x_{3}))={\rm Tr}(\lambda f(x_{4}))={\rm Tr}(\lambda f(x_{5}))=1.$ Hence,
$\begin{split}d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\geq\begin{cases}5,&\text{if
$\nu=0$,}\\\ 6,&\text{if $\nu=1$.}\end{cases}\end{split}$
This completes the proof of this lemma. $\square$
We now give the weight distribution of the binary code
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual as follows.
###### Theorem 3.3
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (3), where $f(x)$ is an almost bent function from
$\mathbb{F}_{2^{m}}$ to itself. Then the following statements hold.
(1) If $\nu=0$, then ${\mathcal{C}}(f)^{\bar{D}}$ is an $[n,2m-1,\frac{n+1}{2}-2^{\frac{m-1}{2}}]$ code with the weight distribution in Table 1, where $n$ was given in Lemma 3.1. Its dual has parameters $[n,n-2m+1,5]$.
Table 1: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $\nu=0$ in Theorem 3.3

Weight | Multiplicity
---|---
$0$ | $1$
$\frac{n+1}{2}$ | $2^{2m-1}-(n+1)^{4}2^{-2m}+5(n+1)^{2}2^{-m-1}-5(n+1)2^{m-2}+\frac{3}{2}n^{2}+2n-\frac{1}{2}$
$\frac{n+1}{2}\pm 2^{\frac{m-1}{2}}$ | $\begin{array}[]{c}\pm\frac{1}{6}\big{(}(n+1)^{3}2^{\frac{1-3m}{2}}-(3n+1)2^{\frac{m-1}{2}}-(n+1)2^{-\frac{m+1}{2}}+2^{\frac{3m-3}{2}}\big{)}-\\\ \frac{1}{6}(n+1)^{4}2^{-2m}+\frac{1}{6}(n+1)^{2}2^{-m-1}-\frac{1}{6}(n+1)2^{m-2}+\frac{1}{4}n^{2}+\frac{1}{3}n+\frac{1}{12}\end{array}$
$\frac{n+1}{2}\pm 2^{\frac{m-3}{2}}$ | $\begin{array}[]{c}\pm\frac{1}{6}\big{(}-(n+1)^{3}2^{\frac{3-3m}{2}}+(n+1)2^{\frac{5-m}{2}}+2^{\frac{m+1}{2}}-2^{\frac{3+3m}{2}}+6n\cdot 2^{\frac{m-1}{2}}\big{)}+2^{2-2m}\cdot n^{2}+\\\ \frac{1}{3}(n^{4}+4n^{3}+4n+1)2^{1-2m}-\frac{1}{3}(n+1)^{2}2^{2-m}+\frac{1}{3}(n+1)2^{1+m}-n^{2}-\frac{4}{3}n-\frac{1}{3}\end{array}$
(2) If $\nu=1$, then ${\mathcal{C}}(f)^{\bar{D}}$ is an $[n,2m,\frac{n}{2}-2^{\frac{m-1}{2}}]$ code with the weight distribution in Table 2, where $n$ was given in Lemma 3.1. Its dual has parameters $[n,n-2m,6]$, and is distance-optimal with respect to the sphere packing bound.
Table 2: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $\nu=1$ in Theorem 3.3

Weight | Multiplicity
---|---
$0$ | $1$
$\frac{n}{2}$ | $2^{2m}-5n\cdot 2^{m-1}+5n^{2}\cdot 2^{-m}-2^{1-2m}n^{4}+3n^{2}-2n-2$
$\frac{n}{2}\pm 2^{\frac{m-1}{2}}$ | $-\frac{1}{3}n^{4}2^{-2m}+\frac{1}{6}(2^{-m}n^{2}+3n^{2}-2^{m-1}n-2n)$
$\frac{n}{2}\pm 2^{\frac{m-3}{2}}$ | $\frac{4n}{3}(2^{-2m}n^{3}-2^{1-m}n-\frac{3n}{2}+2^{m}+1)$
$n$ | $1$
Proof. It follows from (2) that the Hamming weight of the codeword
${\mathbf{c}}(a,b)$ in ${\mathcal{C}}(f)^{\bar{D}}$ is given by
$\begin{split}{\rm wt_{H}}(\mathbf{c}(a,b))&=|D|-\left|\left\\{x\in D:\,\,{\rm
Tr}\left(af(x)+bx\right)=0\right\\}\right|\\\
&=\frac{|D|}{2}-\frac{1}{2}\sum_{x\in D}(-1)^{{\rm Tr}(af(x)+bx)}\\\
&=\frac{|D|}{2}-\frac{1}{2}\sum_{x\in\mathbb{F}_{2^{m}}\setminus\\{0\\}}\left(\frac{1}{2}\sum_{y\in\mathbb{F}_{2}}(-1)^{y{({\rm
Tr}(\lambda f(x))-\nu)}}\right)(-1)^{{\rm Tr}\left(af(x)+bx\right)}\\\
&=\frac{|D|}{2}-\frac{1}{4}\sum_{x\in\mathbb{F}_{2^{m}}}\left(\sum_{y\in\mathbb{F}_{2}}(-1)^{y{({\rm
Tr}(\lambda f(x))-\nu)}}\right)(-1)^{{\rm
Tr}(af(x)+bx)}+\frac{1}{4}\sum_{y\in\mathbb{F}_{2}}(-1)^{y\nu}\\\
&=\frac{|D|}{2}-\frac{1}{4}\sum_{x\in\mathbb{F}_{2^{m}}}\left(1+(-1)^{({\rm
Tr}(\lambda f(x))-\nu)}\right)(-1)^{{\rm
Tr}(af(x)+bx)}+\frac{1}{4}\sum_{y\in\mathbb{F}_{2}}(-1)^{y\nu}\\\
&=\frac{|D|}{2}-\frac{1}{4}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(af(x)+bx)}-\frac{(-1)^{\nu}}{4}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}((\lambda+a)f(x)+bx)}+\frac{1}{4}\sum_{y\in\mathbb{F}_{2}}(-1)^{y\nu}\\\
&=\frac{|D|}{2}-\frac{1}{4}W_{f}(a,b)-\frac{(-1)^{\nu}}{4}W_{f}(a+\lambda,b)+\frac{1}{4}\sum_{y\in\mathbb{F}_{2}}(-1)^{y\nu},\end{split}$
(10)
where $W_{f}(a,b)$ was defined in (5). By the definition of almost bent
functions, for any $(a,b)\in\mathbb{F}_{2^{m}}^{2}\setminus\\{(0,0)\\}$, we
know that $W_{f}(a,b)\in\\{0,\pm 2^{\frac{m+1}{2}}\\}$. So,
$\frac{1}{4}\left(W_{f}(a,b)\pm W_{f}(a+\lambda,b)\right)\in\left\\{0,\pm
2^{\frac{m-1}{2}},\pm 2^{\frac{m-3}{2}}\right\\}$ (11)
for any $(a,b)\in\mathbb{F}_{2^{m}}^{2}\setminus\\{(0,0),(\lambda,0)\\}$. In
the following, we prove this theorem case by case.
Case 1: $\nu=0$, i.e., $D=\\{x\in\mathbb{F}_{2^{m}}^{*}:{\rm Tr}(\lambda
f(x))=0\\}$. By (10) and (11), when $(a,b)$ runs over
$\mathbb{F}_{2^{m}}^{2}\setminus\\{(0,0),(\lambda,0)\\}$, the possible values
of $\operatorname{wt_{H}}(\mathbf{c}(a,b))$ are
$\frac{n+1}{2},\,\,\frac{n+1}{2}\pm
2^{\frac{m-1}{2}},\,\,\text{and}\,\,\frac{n+1}{2}\pm 2^{\frac{m-3}{2}},$
where $n$ was given in Lemma 3.1. It is easy to see that
$\operatorname{wt_{H}}(\mathbf{c}(a,b))=0$ if and only if $(a,b)=(0,0)$ or
$(a,b)=(\lambda,0)$. So, the dimension of ${\mathcal{C}}(f)^{\bar{D}}$ is
$2m-1$.
Denote $w_{1}=\frac{n+1}{2}$, $w_{2}=\frac{n+1}{2}+2^{\frac{m-1}{2}}$,
$w_{3}=\frac{n+1}{2}-2^{\frac{m-1}{2}}$,
$w_{4}=\frac{n+1}{2}+2^{\frac{m-3}{2}}$ and
$w_{5}=\frac{n+1}{2}-2^{\frac{m-3}{2}}$. Let $A_{w_{i}}$ be the number of the
codewords with weight $w_{i}$ in ${\mathcal{C}}(f)^{\bar{D}}$. By Lemma 3.2,
we know that $A_{1}^{\perp}=A_{2}^{\perp}=A_{3}^{\perp}=A_{4}^{\perp}=0$. From
the first five Pless power moments, we have the following system of equations:
$\begin{split}\begin{cases}\sum_{i=1}^{5}A_{w_{i}}=2^{2m-1}-1;\\\
\sum_{i=1}^{5}w_{i}A_{w_{i}}=2^{2m-2}n;\\\
\sum_{i=1}^{5}w_{i}^{2}A_{w_{i}}=2^{2m-3}n(n+1);\\\
\sum_{i=1}^{5}w_{i}^{3}A_{w_{i}}=2^{2m-4}n^{2}(n+3);\\\
\sum_{i=1}^{5}w_{i}^{4}A_{w_{i}}=2^{2m-5}n(n+1)(n^{2}+5n-2).\\\
\end{cases}\end{split}$
Solving this system of equations, we obtain the desired values of $A_{w_{1}}$,
$A_{w_{2}}$, $A_{w_{3}}$, $A_{w_{4}}$ and $A_{w_{5}}$ in Table 1.
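For readers who want to reproduce such computations, the following sketch (ours) solves this $5\times 5$ moment system with exact rational arithmetic; the demo instantiates the present case with $m=5$, where $n=2^{m-1}-1=15$, $k=2m-1=9$ and the five weights are $8,12,4,10,6$.

```python
from fractions import Fraction

def pless_solve(n, k, weights, known=(0,)):
    """Multiplicities of the five `weights` from the first five Pless power
    moments, given length n, dimension k, A_1..A_4 of the dual all zero, and
    `known` listing the weights of the remaining codewords (the zero word by
    default; add the weight-n word for Case 2 of Theorem 3.3)."""
    moments = [2**k, 2**(k-1)*n, 2**(k-2)*n*(n+1), 2**(k-3)*n**2*(n+3),
               2**(k-4)*n*(n+1)*(n**2+5*n-2)]
    A = [[Fraction(w)**p for w in weights] for p in range(5)]
    b = [Fraction(moments[p]) - sum(Fraction(w)**p for w in known) for p in range(5)]
    for i in range(5):                       # exact Gauss-Jordan elimination
        p = next(r for r in range(i, 5) if A[r][i] != 0)
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(5):
            if r != i and A[r][i] != 0:
                c = A[r][i] / A[i][i]
                A[r] = [u - c * v for u, v in zip(A[r], A[i])]
                b[r] -= c * b[i]
    return [b[i] / A[i][i] for i in range(5)]

print([int(a) for a in pless_solve(15, 9, [8, 12, 4, 10, 6])])
# -> [195, 15, 45, 96, 160]
```

These values agree with the multiplicities obtained by substituting $n=2^{m-1}-1$ into Table 1 (see the next paragraph).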
We now determine the parameters of the dual of ${\mathcal{C}}(f)^{\bar{D}}$.
We consider only the case $n=2^{m-1}-1$, i.e., the value of $W_{f}(\lambda,0)$
is zero. The other two cases can be shown similarly. Substituting the value of
$n=2^{m-1}-1$ in Table 1, we obtain that $A_{w_{1}}=3\cdot
2^{2m-4}+2^{m-3}-1$,
$A_{w_{2}}=2^{2m-5}-2^{\frac{3m-7}{2}}+2^{\frac{m-5}{2}}-2^{m-4}$,
$A_{w_{3}}=2^{2m-5}+2^{\frac{3m-7}{2}}-2^{\frac{m-5}{2}}-2^{m-4}$,
$A_{w_{4}}=2^{2m-3}-2^{\frac{3m-5}{2}}$ and
$A_{w_{5}}=2^{2m-3}+2^{\frac{3m-5}{2}}$. By Lemma 3.2,
$A_{1}^{\perp}=A_{2}^{\perp}=A_{3}^{\perp}=A_{4}^{\perp}=0$. Then from the
sixth Pless power moment, we have
$\begin{split}\sum_{i=1}^{5}w_{i}^{5}A_{w_{i}}&=2^{2m-6}\cdot(2^{m-1}-1)^{5}+5\cdot
2^{2m-5}\cdot(2^{m-1}-1)^{4}\\\ &+15\cdot 2^{2m-6}\cdot(2^{m-1}-1)^{3}-5\cdot
2^{2m-5}\cdot(2^{m-1}-1)^{2}-A_{5}^{\perp}\cdot 2^{2m-6}\cdot 120.\\\
\end{split}$
Solving this equation, we obtain $A_{5}^{\perp}=(11\cdot
2^{m}+2^{3m-4}-13\cdot 2^{2m-3}-2^{4})/120\neq 0$. Hence,
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ has parameters
$[2^{m-1}-1,2^{m-1}-2m,5]$.
Case 2: $\nu=1$, i.e., $D=\\{x\in\mathbb{F}_{2^{m}}^{*}:{\rm Tr}(\lambda
f(x))=1\\}$. By (10) and (11), when $(a,b)$ runs over
$\mathbb{F}_{2^{m}}^{2}\setminus\\{(0,0),(\lambda,0)\\}$, the possible values
of $\operatorname{wt_{H}}(\mathbf{c}(a,b))$ are
$\frac{n}{2},\,\,\frac{n}{2}\pm
2^{\frac{m-1}{2}}\,\,\text{and}\,\,\frac{n}{2}\pm 2^{\frac{m-3}{2}},$
where $n$ was given in Lemma 3.1. Moreover,
$\operatorname{wt_{H}}(\mathbf{c}(a,b))=0$ if and only if $(a,b)=(0,0)$ and
$\operatorname{wt_{H}}(\mathbf{c}(a,b))=n$ if $(a,b)=(\lambda,0)$. So, the
dimension of ${\mathcal{C}}(f)^{\bar{D}}$ is $2m$.
Denote $w_{1}=\frac{n}{2}$, $w_{2}=\frac{n}{2}+2^{\frac{m-1}{2}}$, $w_{3}=\frac{n}{2}-2^{\frac{m-1}{2}}$, $w_{4}=\frac{n}{2}+2^{\frac{m-3}{2}}$ and $w_{5}=\frac{n}{2}-2^{\frac{m-3}{2}}$. Let $A_{w_{i}}$ be the number of the
codewords with weight $w_{i}$ in ${\mathcal{C}}(f)^{\bar{D}}$. From Lemma 3.2
we know that $A_{1}^{\perp}=A_{2}^{\perp}=A_{3}^{\perp}=A_{4}^{\perp}=0$. Then
the first five Pless power moments lead to the following system of equations:
$\begin{split}\begin{cases}\sum_{i=1}^{5}A_{w_{i}}=2^{2m}-2;\\\
\sum_{i=1}^{5}w_{i}A_{w_{i}}=2^{2m-1}n-n;\\\
\sum_{i=1}^{5}w_{i}^{2}A_{w_{i}}=2^{2m-2}n(n+1)-n^{2};\\\
\sum_{i=1}^{5}w_{i}^{3}A_{w_{i}}=2^{2m-3}n^{2}(n+3)-n^{3};\\\
\sum_{i=1}^{5}w_{i}^{4}A_{w_{i}}=2^{2m-4}n(n+1)(n^{2}+5n-2)-n^{4}.\end{cases}\end{split}$
Solving this system of equations, we obtain the desired values of $A_{w_{1}}$,
$A_{w_{2}}$, $A_{w_{3}}$, $A_{w_{4}}$ and $A_{w_{5}}$ in Table 2.
We now determine the parameters of the dual of ${\mathcal{C}}(f)^{\bar{D}}$.
We treat only the case $n=2^{m-1}$ and the other two cases can be treated
similarly. Substituting the value of $n=2^{m-1}$ in Table 2, we obtain that
$A_{w_{1}}=3\cdot 2^{2m-3}+2^{m-2}-2$, $A_{w_{2}}=A_{w_{3}}=2^{2m-4}-2^{m-3}$
and $A_{w_{4}}=A_{w_{5}}=2^{2m-2}$. If
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)>6$, then
$\begin{split}\sum_{i=0}^{3}\binom{2^{m-1}}{i}=1+2^{m-1}+2^{m-2}\cdot(2^{m-1}-1)+\frac{2^{m-2}\cdot(2^{m-1}-1)\cdot(2^{m-1}-2)}{3}>2^{2m},\end{split}$
which contradicts the sphere packing bound. From Lemma 3.2, we then deduce
that $d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)=6$, and
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is distance-optimal with respect to the
sphere packing bound. $\square$
###### Example 3.4
Let $m=7$ and $f(x)$ be an almost bent function from $\mathbb{F}_{2^{7}}$ to
$\mathbb{F}_{2^{7}}$ with $W_{f}(1,0)=2^{\frac{7+1}{2}}$. Let
${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 3.3.
(1) If $\nu=0$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[71,13,28]$
and its dual has parameters $[71,58,5]$.
(2) If $\nu=1$ then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[56,14,20]$
and its dual has parameters $[56,42,6]$.
The four codes are optimal according to the tables of best codes known in
[22].
###### Remark 3.5
In [36], the authors proposed the following open problem (Problem 4.4): Let
$\lambda\in\mathbb{F}_{2^{s}}^{*}$, $F$ be a function from
$\mathbb{F}_{2^{m}}$ to $\mathbb{F}_{2^{s}}$ and $D$ be the support of ${\rm
Tr}_{1}^{s}(\lambda F(x))$. Define a linear code
${\mathcal{C}}^{\prime}(F)^{\bar{D}}$ over $\mathbb{F}_{2}$ by
${\mathcal{C}}^{\prime}(F)^{\bar{D}}=\\{({\rm Tr}_{1}^{m}(xh)+{\rm
Tr}_{1}^{s}(yF(h)))_{h\in
D}\,:\,x\in\mathbb{F}_{2^{m}},y\in\mathbb{F}_{2^{s}}\\}.$
The problem asks for the weight distributions of these linear codes when $F$ is a vectorial bent function with $m\neq 2s$ or an almost bent function not of Gold type. Clearly, if $F$ is an almost bent function, then $s=m$. Table 2 in Theorem 3.3 thus gives the weight distribution of ${\mathcal{C}}^{\prime}(F)^{\bar{D}}$ when $F$ is an almost bent function.
The following is a list of known almost bent monomials $f(x)=x^{d}$ on
$\mathbb{F}_{2^{m}}$ for an odd $m$:
* •
$d=2^{h}+1$, where $\gcd(m,h)=1$ [21];
* •
$d=2^{2h}-2^{h}+1$, where $h\geq 2$ and $\gcd(m,h)=1$ [31];
* •
$d=2^{\frac{m-1}{2}}+3$, where $m$ is odd [31];
* •
$d=2^{\frac{m-1}{2}}+2^{\frac{m-1}{4}}-1$, where $m\equiv 1\pmod{4}$ [26, 27];
* •
$d=2^{\frac{m-1}{2}}+2^{\frac{3m-1}{4}}-1$, where $m\equiv 3\pmod{4}$ [26, 27].
All almost bent monomials $f(x)=x^{d}$ for $d$ in the list above are
permutation polynomials on $\mathbb{F}_{2^{m}}$. Hence, the length of
${\mathcal{C}}(f)^{\bar{D}}$ is $n=2^{m-1}-1$ if $\nu=0$ and $n=2^{m-1}$ if
$\nu=1$, respectively. Substituting the value of $n$ into Theorem 3.3, we
obtain the following results.
###### Corollary 3.6
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (3). If $f(x)=x^{d}$ for some integer $d$ in the list
above, then the following statements hold.
(1) If $\nu=0$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1}-1,2m-1,2^{m-2}-2^{\frac{m-1}{2}}]$ code with the weight distribution in Table 3. Its dual has parameters $[2^{m-1}-1,2^{m-1}-2m,5]$.
Table 3: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $\nu=0$ in Corollary 3.6

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $3\cdot 2^{2m-4}+2^{m-3}-1$
$2^{m-2}\pm 2^{\frac{m-1}{2}}$ | $2^{2m-5}\mp 2^{\frac{3m-7}{2}}\pm 2^{\frac{m-5}{2}}-2^{m-4}$
$2^{m-2}\pm 2^{\frac{m-3}{2}}$ | $2^{2m-3}\mp 2^{\frac{3m-5}{2}}$
(2) If $\nu=1$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1},2m,2^{m-2}-2^{\frac{m-1}{2}}]$ code with the weight distribution in Table 4. Its dual has parameters $[2^{m-1},2^{m-1}-2m,6]$, and is distance-optimal with respect to the sphere packing bound.
Table 4: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $\nu=1$ in Corollary 3.6

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $3\cdot 2^{2m-3}+2^{m-2}-2$
$2^{m-2}\pm 2^{\frac{m-1}{2}}$ | $2^{2m-4}-2^{m-3}$
$2^{m-2}\pm 2^{\frac{m-3}{2}}$ | $2^{2m-2}$
$2^{m-1}$ | $1$
###### Example 3.7
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Corollary 3.6.
(1) If $m=7$, $\nu=0$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters
$[63,13,24]$ and its dual has parameters $[63,50,5]$.
(2) If $m=7$, $\nu=1$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters
$[64,14,24]$ and its dual has parameters $[64,50,6]$.
The four codes are optimal according to the tables of best codes known in
[22].
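The distributions in Tables 3 and 4 are small enough to recompute by exhaustive search. The sketch below (our illustration; field model $x^{5}+x^{2}+1$, $f(x)=x^{3}$, $\lambda=1$, $\nu=0$) enumerates all codewords of ${\mathcal{C}}(f)^{\bar{D}}$ for $m=5$ and should reproduce the $[15,9,4]$ code of part (1) together with the multiplicities of Table 3.

```python
from collections import Counter

M, POLY = 5, 0b100101            # GF(2^5) via x^5 + x^2 + 1 (our choice)

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

f = lambda x: gf_mul(gf_mul(x, x), x)                     # f(x) = x^3 (Gold, h = 1)
D = [x for x in range(1, 1 << M) if tr(f(x)) == 0]        # lambda = 1, nu = 0

wt = Counter()
for a in range(1 << M):
    for b in range(1 << M):
        wt[sum(tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in D)] += 1
# c(a, b) = c(a + lambda, b) on D when nu = 0, so every codeword is counted
# twice and the code has dimension 2m - 1 = 9.
print(len(D), {w: c // 2 for w, c in sorted(wt.items())})
# expect 15 and {0: 1, 4: 45, 6: 160, 8: 195, 10: 96, 12: 15}, matching Table 3
```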
## 4 Some punctured codes of binary linear codes from quadratic functions
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the binary punctured code defined in (2)
with the position set $D$ in (4). It is clear that the length of
${\mathcal{C}}(f)^{\bar{D}}$ is equal to $2^{m-1}$, as
$|D|=|\\{x\in{\mathbb{F}}_{2^{m}}^{*}:{\rm Tr}_{1}^{m}(x)=1\\}|=2^{m-1}$. As
shown in (10), the Hamming weight of each codeword in this case can be
expressed as
$\begin{split}{\rm
wt_{H}}(\mathbf{c}(a,b))&=2^{m-2}-\frac{1}{4}\left(W_{f}(a,b)-W_{f}(a,b+1)\right),\end{split}$
(12)
where $W_{f}(a,b)$ was given in (5). In this section, we investigate the weight distribution of the punctured code ${\mathcal{C}}(f)^{\bar{D}}$ with the position set $D$ in (4) and the parameters of its dual, where $f$ is a quadratic function in the list below.
* •
$f(x)=x^{2^{k}+1}$, where $k$ is an integer with $1\leq k\leq m-1$;
* •
$f(x)=x^{t_{1}}+x^{t_{2}}$, where $3\,|\,m$, $m\geq 9$ and
$t_{1},t_{2}\in\\{2^{\frac{m}{3}}+1,2^{\frac{2m}{3}}+1,2^{\frac{2m}{3}}+2^{\frac{m}{3}}\\}$
with $t_{1}\neq t_{2}$;
* •
$f(x)={\rm Tr}_{k}^{m}(x^{2^{k}+1})$, where $m,k$ are positive integers such
that $k\,|\,m$.
When $f(x)=x^{2^{k}+1}$, the parameters and weight distribution of the binary
code ${\mathcal{C}}(f)$ were settled in [29, 30]. In this section we will
investigate the punctured code ${\mathcal{C}}(f)^{\bar{D}}$ with a different
position set $D=\\{x\in{\mathbb{F}}_{2^{m}}^{*}:{\rm Tr}_{1}^{m}(x)=1\\}$. It is open whether the binary code ${\mathcal{C}}(f)$ has been studied in the literature when $f$ is one of the other two quadratic functions in the list above.
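Identity (12) is the workhorse of this section, so a direct brute-force confirmation may be useful. The sketch below (ours; field model $x^{5}+x^{2}+1$, $f(x)=x^{3}$, i.e. $k=1$) checks the equivalent form $4\,{\rm wt_{H}}(\mathbf{c}(a,b))=2^{m}-\left(W_{f}(a,b)-W_{f}(a,b+1)\right)$ for every pair $(a,b)$.

```python
M, POLY = 5, 0b100101            # GF(2^5) via x^5 + x^2 + 1 (our choice)

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

f = lambda x: gf_mul(gf_mul(x, x), x)                     # f(x) = x^{2^k+1}, k = 1
D = [x for x in range(1, 1 << M) if tr(x) == 1]           # position set (4)

def W(a, b):                                              # W_f(a, b) as in (5)
    return sum((-1) ** tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in range(1 << M))

for a in range(1 << M):
    for b in range(1 << M):
        w = sum(tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in D)
        assert 4 * w == 2 ** M - (W(a, b) - W(a, b ^ 1))  # identity (12); b + 1 = b ^ 1
print("identity (12) holds for all", (1 << M) ** 2, "pairs")
```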
### 4.1 The case that $f(x)=x^{2^{k}+1}$
In this subsection, we study the punctured code ${\mathcal{C}}(f)^{\bar{D}}$
in (2) and determine its weight distribution, where $f(x)=x^{2^{k}+1}$ and
$D=\\{x\in{\mathbb{F}}_{2^{m}}^{*}:{\rm Tr}_{1}^{m}(x)=1\\}$. When $k=0$,
$f(x)=x^{2}$. In this case, it can be proved that the punctured code
${\mathcal{C}}(f)^{\bar{D}}$ is permutation-equivalent to the first-order
Reed-Muller code. In the following, we investigate the linear code
${\mathcal{C}}(f)^{\bar{D}}$ for $f(x)=x^{2^{k}+1}$ with $1\leq k<m$. We start
with the following two lemmas.
###### Lemma 4.1
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (4). Let $A_{i}^{\perp}$ denote the number of codewords
with weight $i$ in $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$. If
$f(x)=x^{2^{k}+1}$ with $1\leq k<m$, then
$A_{1}^{\perp}=A_{2}^{\perp}=A_{3}^{\perp}=A_{5}^{\perp}=0\,\,\text{and}\,\,A_{4}^{\perp}=\frac{2^{m-1}\cdot(2^{m-2}-1)\cdot(2^{\ell}-2)}{4!},$
where $\ell=\gcd(k,m)$.
Proof. From the definition of the linear code ${\mathcal{C}}(f)^{\bar{D}}$, we
know that $A_{i}^{\perp}$ is equal to the number of sets
$\\{x_{1},x_{2},\cdots,x_{i}\\}$ with $i$ pairwise-distinct nonzero elements
in ${\mathbb{F}}_{2^{m}}$ such that
$\begin{split}\begin{cases}{\rm Tr}(x_{1})={\rm Tr}(x_{2})=\cdots={\rm
Tr}(x_{i})=1,\\\ x_{1}+x_{2}+\cdots+x_{i}=0,\\\
x_{1}^{2^{k}+1}+x_{2}^{2^{k}+1}+\cdots+x_{i}^{2^{k}+1}=0.\\\
\end{cases}\end{split}$
It is clear that $A_{1}^{\perp}=A_{2}^{\perp}=0$. From the first and second
equations, we see that $A_{i}^{\perp}=0$ if $i$ is odd. Hence,
$A_{3}^{\perp}=A_{5}^{\perp}=0$. In the following, we determine the value of
$A_{4}^{\perp}$, which is equal to the number of sets
$\\{x_{1},x_{2},x_{3},x_{4}\\}$ with $4$ pairwise-distinct nonzero elements in
${\mathbb{F}}_{2^{m}}$ such that
$\begin{split}\begin{cases}{\rm Tr}(x_{1})={\rm Tr}(x_{2})={\rm
Tr}(x_{3})={\rm Tr}(x_{4})=1,\\\ x_{1}+x_{2}+x_{3}+x_{4}=0,\\\
x_{1}^{2^{k}+1}+x_{2}^{2^{k}+1}+x_{3}^{2^{k}+1}+x_{4}^{2^{k}+1}=0.\\\
\end{cases}\end{split}$ (13)
Assume that $x_{1}=\mu$, $x_{2}=\mu+\beta$, $x_{3}=\gamma$ and
$x_{4}=\gamma+\beta$, where $\mu\neq 0,\beta,\gamma,\gamma+\beta$, and
$\gamma\neq 0,\beta$, and $\beta\neq 0$. From (13) we know that
$A_{4}^{\perp}$ is equal to the number of the sets of the form
$\\{\mu,\mu+\beta,\gamma,\gamma+\beta\\}$ such that
$\mu^{2^{k}+1}+(\mu+\beta)^{2^{k}+1}=\gamma^{2^{k}+1}+(\gamma+\beta)^{2^{k}+1},\,\,{\rm
Tr}(\mu)={\rm Tr}(\gamma)=1\,\,\text{and}\,\,{\rm Tr}(\beta)=0,$
i.e.,
$(\mu+\gamma)^{2^{k}-1}=\beta^{2^{k}-1},\,\,{\rm Tr}(\mu)={\rm
Tr}(\gamma)=1\,\,\text{and}\,\,{\rm Tr}(\beta)=0.$
It is clear that $(\mu+\gamma)^{2^{k}-1}=\beta^{2^{k}-1}$ if and only if there
is a $\delta\in\mathbb{F}_{2^{\ell}}$ such that $\mu+\gamma=\delta\beta$ as
$\gcd(2^{m}-1,2^{k}-1)=2^{\ell}-1$. Then $A_{4}^{\perp}$ is equal to the
number of the sets of the form
$\\{\mu,\mu+\beta,\mu+\delta\beta,\mu+\beta(\delta+1)\\}$ such that ${\rm
Tr}(\mu)=1$ and ${\rm Tr}(\beta)={\rm Tr}(\delta\beta)=0$, where
$\delta\in\mathbb{F}_{2^{\ell}}\backslash\\{0,1\\}$, $\mu\neq
0,\beta,\delta\beta,\beta(\delta+1)$ and $\beta\neq 0$. Hence,
$\begin{split}A_{4}^{\perp}&=\frac{1}{8\cdot
4!}\sum_{z_{0}\in\mathbb{F}_{2}}\sum_{\mu\in\mathbb{F}_{2^{m}}^{*}\backslash\\{\beta,\delta\beta,\beta(\delta+1)\\}}(-1)^{z_{0}({\rm
Tr}(\mu)-1)}\sum_{z_{1}\in\mathbb{F}_{2}}\sum_{\beta\in\mathbb{F}_{2^{m}}^{*}}(-1)^{z_{1}{\rm
Tr}(\beta)}\sum_{z_{2}\in\mathbb{F}_{2}}\sum_{\delta\in\mathbb{F}_{2^{\ell}}^{*}\backslash\\{1\\}}(-1)^{z_{2}{\rm
Tr}(\delta\beta)}\\\
&=\frac{2^{m-3}}{4!}\sum_{z_{1}\in\mathbb{F}_{2}}\sum_{\beta\in\mathbb{F}_{2^{m}}^{*}}(-1)^{z_{1}{\rm
Tr}(\beta)}\sum_{z_{2}\in\mathbb{F}_{2}}\sum_{\delta\in\mathbb{F}_{2^{\ell}}^{*}\backslash\\{1\\}}(-1)^{z_{2}{\rm
Tr}(\delta\beta)}\\\
&=\frac{2^{m-3}}{4!}\sum_{z_{1}\in\mathbb{F}_{2}}\sum_{z_{2}\in\mathbb{F}_{2}}\sum_{\beta\in\mathbb{F}_{2^{m}}^{*}}\sum_{\delta\in\mathbb{F}_{2^{\ell}}^{*}\backslash\\{1\\}}(-1)^{{\rm Tr}((z_{1}+z_{2}\delta)\beta)}\\\
&=\frac{2^{m-3}}{4!}\left(\sum_{z_{1}\in\mathbb{F}_{2}}\sum_{z_{2}\in\mathbb{F}_{2}}\sum_{\beta\in\mathbb{F}_{2^{m}}}\sum_{\delta\in\mathbb{F}_{2^{\ell}}^{*}\backslash\\{1\\}}(-1)^{{\rm Tr}((z_{1}+z_{2}\delta)\beta)}-2^{2}\cdot(2^{\ell}-2)\right)\\\
&=\frac{2^{m-3}}{4!}\left(2^{m}\cdot(2^{\ell}-2)-2^{2}\cdot(2^{\ell}-2)\right)=\frac{2^{m-1}\cdot(2^{m-2}-1)\cdot(2^{\ell}-2)}{4!}.\end{split}$
The desired conclusion then follows. $\square$
###### Theorem 4.2
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the position set $D$ in (4). Let $k$ be a positive integer with $k<m$ and $\ell=\gcd(k,m)$. If $f(x)=x^{2^{k}+1}$, then the following statements hold.
(1) If $v_{2}(m)\leq v_{2}(k)$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1},2m,2^{m-2}-2^{\frac{m+\ell-2}{2}}]$ code with the weight distribution in Table 5. If $\ell\geq 2$, then its dual has parameters $[2^{m-1},2^{m-1}-2m,4]$. If $\ell=1$, then its dual has parameters $[2^{m-1},2^{m-1}-2m,6]$, and is distance-optimal with respect to the sphere packing bound.
Table 5: The weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.2

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $2^{2m}-2^{2m-\ell+1}+3\cdot 2^{2m-2\ell-1}+2^{m-\ell-1}-2$
$2^{m-2}\pm 2^{\frac{m+\ell-2}{2}}$ | $2^{2m-2\ell-2}-2^{m-\ell-2}$
$2^{m-2}\pm 2^{\frac{m+\ell-4}{2}}$ | $2^{2m-\ell}-2^{2m-2\ell}$
$2^{m-1}$ | $1$
(2) If $v_{2}(m)>v_{2}(k)$ and $\gcd(m,k)=1$, then
${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1},2m,2^{m-2}-2^{\frac{m}{2}}]$ code
with the weight distribution in Table 6. Its dual has parameters
$[2^{m-1},2^{m-1}-2m,6]$, and is distance-optimal with respect to the sphere
packing bound.
Table 6: The weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.2

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $17\cdot 2^{2m-5}+3\cdot 2^{m-3}-2$
$2^{m-2}\pm 2^{\frac{m}{2}}$ | $\frac{1}{3}\left(2^{2m-6}-2^{m-4}\right)$
$2^{m-2}\pm 2^{\frac{m-2}{2}}$ | $\frac{1}{6}\left(11\cdot 2^{2m-3}-2^{m}\right)$
$2^{m-1}$ | $1$
(3) If $k=\frac{m}{2}$ and $m\geq 4$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a
$[2^{m-1},\frac{3m}{2},2^{m-2}-2^{\frac{m-2}{2}}]$ code with the weight
distribution in Table 7. Its dual has parameters
$[2^{m-1},2^{m-1}-\frac{3m}{2},4]$, and is distance-optimal with respect to
the sphere packing bound.
Table 7: The weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.2

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $2^{\frac{3m}{2}-1}+2^{m-1}-2$
$2^{m-2}\pm 2^{\frac{m-2}{2}}$ | $2^{\frac{3m}{2}-2}-2^{m-2}$
$2^{m-1}$ | $1$
Proof. We prove the desired conclusions for Cases (1) and (3) only. The
conclusions in Case (2) can be proved in a similar way. If $a=0$, it is easy
to see that
${\rm wt_{H}}\left({\bf
c}(a,b)\right)=2^{m-2}-\frac{1}{4}\left(W_{f}(0,b)-W_{f}(0,b+1)\right)=\begin{cases}0,&{\rm
if}\,\,\,b=0,\\\ 2^{m-1},&{\rm if}\,\,\,b=1,\\\ 2^{m-2},&{\rm if}\,\,\,b\neq
0,\,\,1.\end{cases}$
If $a\neq 0$ and $v_{2}(m)\leq v_{2}(k)$, then Lemma 2.2 shows that
$W_{f}(a,b)\in\\{0,\pm 2^{\frac{m+\ell}{2}}\\}$. Consequently, in this case we
have $W_{f}(a,b)-W_{f}(a,b+1)\in\\{0,\pm 2^{\frac{m+\ell}{2}},\pm
2^{\frac{m+\ell+2}{2}}\\}$. From (12) we see that the set of possible nonzero
weights of ${\mathcal{C}}(f)^{\bar{D}}$ is $\\{2^{m-1},2^{m-2},2^{m-2}\pm
2^{\frac{m+\ell-2}{2}},2^{m-2}\pm 2^{\frac{m+\ell-4}{2}}\\}$ and
${\mathcal{C}}(f)^{\bar{D}}$ has dimension $2m$. Set
$w_{1}=2^{m-1},w_{2}=2^{m-2},w_{3}=2^{m-2}+2^{\frac{m+\ell-2}{2}}$,
$w_{4}=2^{m-2}-2^{\frac{m+\ell-2}{2}}$, $w_{5}=2^{m-2}+2^{\frac{m+\ell-4}{2}}$
and $w_{6}=2^{m-2}-2^{\frac{m+\ell-4}{2}}$. It is known that $A_{w_{1}}=1$.
From Lemma 4.1 and the first five Pless power moments we have
$\left\\{\begin{array}[]{lll}\sum_{i=2}^{6}A_{w_{i}}=2^{2m}-2;\\\ \sum_{i=2}^{6}w_{i}A_{w_{i}}=2^{m-1}(2^{2m-1}-1);\\\ \sum_{i=2}^{6}w_{i}^{2}A_{w_{i}}=2^{2m-2}(2^{2m-2}+2^{m-1}-1);\\\ \sum_{i=2}^{6}w_{i}^{3}A_{w_{i}}=2^{3m-3}(2^{2m-3}+3\cdot 2^{m-2}-1);\\\ \sum_{i=2}^{6}w_{i}^{4}A_{w_{i}}=2^{3m-5}\left((2^{m-1}+1)(2^{2m-2}+5\cdot 2^{m-1}-2)+(2^{m-2}-1)(2^{\ell}-2)\right)-2^{4(m-1)}.\end{array}\right.$ (14)
Solving the linear equations in (14), we get the desired values of $A_{w_{i}}$
in Table 5. If $\ell>1$, by Lemma 4.1, $A_{4}^{\perp}>0$. Consequently, the
dual distance of the code equals $4$. If $\ell=1$, by Lemma 4.1,
$A_{4}^{\perp}=0$. Since all weights in $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$
are even, the minimum distance of $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is at
least $6$. By the sphere packing bound, the minimum distance of
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ cannot be $8$ or more. Consequently,
the minimum distance of $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is equal to
$6$. This completes the proof of the conclusions in Case (1).
Next, we prove the conclusions for Case (3). Assume that $k=\frac{m}{2}$ and $f(x)=x^{2^{m/2}+1}$. Then
$\begin{split}W_{f}^{2}(a,b)&=\sum_{x_{0}\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(ax_{0}^{2^{m/2}+1}+bx_{0})}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(ax^{2^{m/2}+1}+bx)}\\\ &=\sum_{x,y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a(x+y)^{2^{m/2}+1}+b(x+y)+ax^{2^{m/2}+1}+bx)}\\\
&=\sum_{x,y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a(y^{2^{m/2}+1}+xy^{2^{m/2}}+x^{2^{m/2}}y)+by)}\\\
&=\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(ay^{2^{m/2}+1}+by)}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a(xy^{2^{m/2}}+x^{2^{m/2}}y))}\\\
&=2^{m}\sum_{\small\begin{array}[]{c}y\in\mathbb{F}_{2^{m}}\\\
(a+a^{2^{m/2}})y=0\end{array}}(-1)^{{\rm Tr}(ay^{2^{m/2}+1}+by)}\\\
&=\begin{cases}2^{m}W_{f}(a,b),&\text{if
$a\in\mathbb{F}_{2^{\frac{m}{2}}}$,}\\\
2^{m},&\text{otherwise}.\end{cases}\end{split}$ (15)
If $a\in\mathbb{F}_{2^{\frac{m}{2}}}$, then $ay^{2^{m/2}+1}\in\mathbb{F}_{2^{\frac{m}{2}}}$ and hence ${\rm Tr}(ay^{2^{m/2}+1})=0$, so the possible values of $W_{f}(a,b)$ are as follows:
$W_{f}(a,b)=\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(ay^{2^{m/2}+1}+by)}=\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(by)}=\begin{cases}2^{m},&\text{if $b=0$,}\\\
0,&\text{otherwise}.\end{cases}$
Hence,
$\begin{split}W_{f}(a,b)=\begin{cases}2^{m},&\text{if
$a\in\mathbb{F}_{2^{\frac{m}{2}}}$ \text{and} $b=0$,}\\\ 0,&\text{if
$a\in\mathbb{F}_{2^{\frac{m}{2}}}$ \text{and} $b\neq 0$,}\\\ \pm
2^{\frac{m}{2}},&\text{otherwise.}\end{cases}\end{split}$
When $(a,b)$ runs through $\mathbb{F}_{2^{m}}^{2}$, we know that
$W_{f}(a,b)-W_{f}(a,b+1)\in\\{0,\pm 2^{\frac{m+2}{2}},\pm 2^{m}\\}$ and the
value $2^{m}$ occurs $2^{\frac{m}{2}}$ times. Then
$\operatorname{wt_{H}}({\mathbf{c}}(a,b))\in\\{0,2^{m-2},2^{m-2}\pm 2^{\frac{m-2}{2}},2^{m-1}\\}$ and $\operatorname{wt_{H}}({\mathbf{c}}(a,b))=0$ occurs $2^{\frac{m}{2}}$ times by (15). So, ${\mathcal{C}}(f)^{\bar{D}}$ has
dimension $\frac{3m}{2}$ and we obtain the weight distribution in Table 7 from
the first three Pless power moments. In this case, $\ell=m/2>1$, so Lemma 4.1 gives $A_{4}^{\perp}>0$; consequently, the dual distance equals $4$, and the distance-optimality of the dual follows from the sphere packing bound. $\square$
###### Example 4.3
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 4.2.
(1) If $m=5$ and $k=1$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[16,10,4]$ and its dual has parameters $[16,6,6]$.
(2) If $m=8$ and $k=4$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[128,12,56]$ and its dual has parameters $[128,116,4]$.
All four codes are optimal according to the tables of best codes known in
[22].
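Case (3) is small enough to enumerate completely. The sketch below (ours; field model $x^{4}+x+1$) rebuilds the weight distribution for $m=4$, $k=2$, i.e. $f(x)=x^{5}$, and should reproduce Table 7: an $[8,6,2]$ code with weights $0,2,4,6,8$ and multiplicities $1,12,38,12,1$.

```python
from collections import Counter

M, POLY = 4, 0b10011             # GF(2^4) via x^4 + x + 1 (our choice)

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

def f(x):                         # f(x) = x^5 = x^{2^{m/2}+1} for m = 4
    x2 = gf_mul(x, x)
    return gf_mul(gf_mul(x2, x2), x)

D = [x for x in range(1, 1 << M) if tr(x) == 1]

wt = Counter(sum(tr(gf_mul(a, f(x)) ^ gf_mul(b, x)) for x in D)
             for a in range(1 << M) for b in range(1 << M))
# The code is degenerate of dimension 3m/2 = 6: each codeword arises from
# 2^{m/2} = 4 pairs (a, b), so divide all counts by 4.
print({w: c // 4 for w, c in sorted(wt.items())})
# expect {0: 1, 2: 12, 4: 38, 6: 12, 8: 1}, matching Table 7 at m = 4
```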
### 4.2 The case that $f(x)=x^{t_{1}}+x^{t_{2}}$
In this subsection, we investigate the weight distribution of the punctured
code ${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual for
$f(x)=x^{t_{1}}+x^{t_{2}}$, where $3\,|\,m$, $m\geq 9$ and
$t_{1},t_{2}\in\\{2^{\frac{m}{3}}+1,2^{\frac{2m}{3}}+1,2^{\frac{2m}{3}}+2^{\frac{m}{3}}\\}$
with $t_{1}\neq t_{2}$. We first determine all possible Hamming weights in
${\mathcal{C}}(f)^{\bar{D}}$.
###### Lemma 4.4
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (2) with the
position set $D$ in (4). Let $3\,|\,m$, $m\geq 9$ and
$f(x)=x^{t_{1}}+x^{t_{2}}$, where
$t_{1},t_{2}\in\\{2^{\frac{m}{3}}+1,2^{\frac{2m}{3}}+1,2^{\frac{2m}{3}}+2^{\frac{m}{3}}\\}$
with $t_{1}\neq t_{2}$. Then ${\mathcal{C}}(f)^{\bar{D}}$ is a
$[2^{m-1},\frac{5m}{3}]$ code with nonzero weights in the set
$\\{2^{m-2},\,2^{m-1},\,2^{m-2}\pm 2^{\frac{2m}{3}-1}\\}$.
Proof. We prove the conclusions only for the case $t_{1}=2^{\frac{2m}{3}}+1$
and $t_{2}=2^{\frac{2m}{3}}+2^{\frac{m}{3}}$. The conclusions in the other two
cases can be similarly proved. In this case, we have
$W_{f}(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a(x^{2^{2m/3}+1}+x^{2^{2m/3}+2^{m/3}})+bx)}.$
If $a\in\mathbb{F}_{2^{\frac{m}{3}}}$, then $a+a^{2^{{m/3}}}=0$ and
$a+a^{2^{{2m/3}}}=0$. In this case,
$W_{f}(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(bx)}=\left\\{\begin{array}[]{lll}2^{m},&\text{if $b=0$},\\\ 0,&\text{if
$b\neq 0$.}\end{array}\right.$
Hence, when $(a,b)$ runs over
$\mathbb{F}_{2^{\frac{m}{3}}}\times\mathbb{F}_{2^{m}}$, we obtain
$W_{f}(a,b)-W_{f}(a,b+1)=\left\\{\begin{array}[]{lll}0,&\text{occurring $2^{\frac{4m}{3}}-2^{\frac{m}{3}+1}$ times},\\\ 2^{m},&\text{occurring $2^{\frac{m}{3}}$ times},\\\ -2^{m},&\text{occurring $2^{\frac{m}{3}}$ times}.\\\ \end{array}\right.$ (16)
If $a\in\mathbb{F}_{2^{m}}\setminus\mathbb{F}_{2^{\frac{m}{3}}}$, similar to
the calculations in (15), we have
$\begin{split}W_{f}^{2}(a,b)&=\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a(y^{2^{2m/3}+1}+y^{2^{2m/3}+2^{m/3}})+by)}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}((ay^{2^{m/3}}+a^{2^{m/3}}y+a^{2^{2m/3}}y^{2^{m/3}}+ay)x^{2^{2m/3}})}\\\
&=2^{m}\sum_{\small\begin{array}[]{c}y\in\mathbb{F}_{2^{m}}\\\
(a+a^{2^{{2m/3}}})y^{2^{m/3}}+(a^{2^{m/3}}+a)y=0\end{array}}(-1)^{{\rm
Tr}(a(y^{2^{2m/3}+1}+y^{2^{2m/3}+2^{m/3}})+by)}.\end{split}$
Let $L_{a}(y)=(a+a^{2^{2m/3}})y^{2^{m/3}}+(a^{2^{m/3}}+a)y$. Then
$\textrm{Ker}(L_{a}(y))=\\{\,y\in\mathbb{F}_{2^{m}}\,\,|\,\,L_{a}(y)=0\,\\}=\left\\{(a^{2^{2m/3}}+a)z:\,\,z\in\mathbb{F}_{2^{\frac{m}{3}}}\right\\}.$
From (7) we get
$W_{f}^{2}(a,b)=\begin{cases}2^{\frac{4m}{3}},&\text{if ${{\rm
Tr}\left(a(y^{2^{2m/3}+1}+y^{2^{2m/3}+2^{m/3}})+by\right)}=0$ for all
$y\in$Ker$(L_{a}(y))$,}\\\ 0,&\text{otherwise}.\end{cases}$
If ${{\rm Tr}\left(a(y^{2^{2m/3}+1}+y^{2^{2m/3}+2^{m/3}})+by\right)}=0$ for
all $y\in$Ker$(L_{a}(y))$, then
${{\rm Tr}\left(a(y^{2^{2m/3}+1}+y^{2^{2m/3}+2^{m/3}})+(b+1)y\right)}={\rm Tr}(y)={\rm Tr}_{1}^{\frac{m}{3}}\left({\rm Tr}_{\frac{m}{3}}^{m}((a^{2^{2m/3}}+a)z)\right)=0$
because $z\in\mathbb{F}_{2^{\frac{m}{3}}}$ and ${\rm Tr}_{\frac{m}{3}}^{m}(a^{2^{2m/3}}+a)=0$. Hence,
$W_{f}(a,b)-W_{f}(a,b+1)\in\left\\{0,\pm 2^{\frac{2m}{3}+1}\right\\}$ for
$a\in\mathbb{F}_{2^{m}}\backslash\mathbb{F}_{2^{\frac{m}{3}}}$. Combining this
with (16), when $(a,b)$ runs through
$\mathbb{F}_{2^{m}}\times\mathbb{F}_{2^{m}}$, we have
$W_{f}(a,b)-W_{f}(a,b+1)\in\left\\{0,\pm 2^{m},\pm
2^{\frac{2m}{3}+1}\right\\}$
and each of the values $\pm 2^{m}$ occurs $2^{\frac{m}{3}}$ times. Then from
(12) we know that ${\rm wt_{H}}({\mathbf{c}}(a,b))=0$ and ${\rm
wt_{H}}({\mathbf{c}}(a,b))=2^{m-1}$ both occur $2^{\frac{m}{3}}$ times and the
nonzero weights in ${\mathcal{C}}(f)^{\bar{D}}$ belong to the set
$\\{2^{m-2},\,2^{m-1},\,2^{m-2}\pm 2^{\frac{2m}{3}-1}\\}$. It then follows
that ${\mathcal{C}}(f)^{\bar{D}}$ is degenerate and has dimension
$\frac{5m}{3}$. This completes the proof. $\square$
###### Theorem 4.5
Adopt the notation and conditions introduced in Lemma 4.4. Then
${\mathcal{C}}(f)^{\bar{D}}$ is a
$[2^{m-1},\frac{5m}{3},2^{m-2}-2^{\frac{2m}{3}-1}]$ code with the weight
distribution in Table 8. Its dual has parameters
$[2^{m-1},2^{m-1}-\frac{5m}{3},4]$, and is distance-optimal with respect to
the sphere packing bound.
Table 8: The weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.5

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $2^{\frac{5m}{3}}-2^{\frac{4m}{3}-1}+2^{\frac{2m}{3}-1}-2$
$2^{m-2}\pm 2^{\frac{2m}{3}-1}$ | $2^{\frac{4m}{3}-2}-2^{\frac{2m}{3}-2}$
$2^{m-1}$ | $1$
Proof. From Lemma 4.4, we conclude that the dimension of ${\mathcal{C}}(f)^{\bar{D}}$ is $\frac{5m}{3}$, the possible weights in ${\mathcal{C}}(f)^{\bar{D}}$ lie in the set $\\{0,2^{m-1},2^{m-2},$ $2^{m-2}\pm 2^{\frac{2m}{3}-1}\\}$ and the weight $2^{m-1}$ occurs exactly once.
Denote $w_{1}=2^{m-2}$, $w_{2}=2^{m-2}-2^{\frac{2m}{3}-1}$ and
$w_{3}=2^{m-2}+2^{\frac{2m}{3}-1}$. Let $A_{w_{i}}$ be the number of the
codewords with weight $w_{i}$ in ${\mathcal{C}}(f)^{\bar{D}}$. Note that the
all-one vector is a codeword of ${\mathcal{C}}(f)^{\bar{D}}$. It then follows
that all codewords in $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ have even
weights. It is easily seen that the minimum weight in
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ cannot be $2$. Consequently, the
minimum weight in $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is at least $4$. From
the first three Pless power moments, we have
$\begin{split}\begin{cases}\sum_{i=1}^{3}A_{w_{i}}=2^{\frac{5m}{3}}-2;\\\ \sum_{i=1}^{3}w_{i}A_{w_{i}}=2^{\frac{8m}{3}-2}-2^{m-1};\\\ \sum_{i=1}^{3}w_{i}^{2}A_{w_{i}}=2^{\frac{8m}{3}-3}(2^{m-1}+1)-2^{2m-2}.\end{cases}\end{split}$
Solving this system of equations, we obtain
$A_{w_{1}}=2^{\frac{5m}{3}}-2^{\frac{4m}{3}-1}+2^{\frac{2m}{3}-1}-2$,
$A_{w_{2}}=A_{w_{3}}=2^{\frac{4m}{3}-2}-2^{\frac{2m}{3}-2}$.
We now consider the minimum distance of
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$. We have already proved that
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)\geq 4.$
If there exists a $[2^{m-1},2^{m-1}-\frac{5m}{3}]$ binary code with Hamming
distance at least $5$, then
$\begin{split}\sum_{i=0}^{2}\binom{2^{m-1}}{i}=1+2^{m-1}+2^{m-2}\cdot(2^{m-1}-1)>2^{\frac{5m}{3}},\end{split}$
which contradicts the sphere packing bound. Hence,
$d_{H}\left(({\mathcal{C}}(f)^{\bar{D}})^{\perp}\right)=4$ and $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is distance-optimal with respect to the sphere packing bound. $\square$
###### Example 4.6
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 4.5. If $m=9$, then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[256,15,96]$ and its dual has parameters $[256,241,4]$.
We settled the weight distribution of the punctured code ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.5, but we do not know whether the corresponding code ${\mathcal{C}}(f)$ has been studied in the literature.
### 4.3 The case that $f(x)={\rm Tr}_{k}^{m}(x^{2^{k}+1})$
In this subsection, we study the weight distribution of the punctured code
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual for $f(x)={\rm
Tr}_{k}^{m}(x^{2^{k}+1})$, where $k$ divides $m$. It is easy to see that
$f(x)=0$ if $k=\frac{m}{2}$. In the following, we just consider the case that
$k\not\in\\{m,\frac{m}{2}\\}$. We begin with the following lemma.
###### Lemma 4.7
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the punctured code defined in (2) with the
position set $D$ in (4). Let $f(x)={\rm Tr}_{k}^{m}(x^{2^{k}+1})$, where $k$
divides $m$ and $k\not\in\\{m,\frac{m}{2}\\}$. Let $t=2^{\frac{m+2k-2}{2}}$ if
$v_{2}(m)>v_{2}(k)+1$, and $t=2^{\frac{m+2k-4}{2}}$ if $v_{2}(m)=v_{2}(k)+1$,
and $t=2^{\frac{m+k-4}{2}}$ if $v_{2}(m)=v_{2}(k)$. Then
${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1},k+m]$ code whose nonzero weights
are in the set $\left\\{2^{m-2},\,\,2^{m-1},\,\,2^{m-2}\pm t\right\\}$.
Proof. We prove the conclusions only for the case $v_{2}(m)>v_{2}(k)+1$. The
conclusions for the other two cases can be similarly proved. We first
determine the possible values of $W_{f}(a,b)$ for
$(a,b)\in\mathbb{F}_{2^{m}}^{2}$, where $W_{f}(a,b)$ was defined in (5). Note
that ${\rm Tr}(a{\rm Tr}_{k}^{m}(x^{2^{k}+1}))={\rm Tr}({\rm
Tr}_{k}^{m}(a)x^{2^{k}+1})$. If ${\rm Tr}_{k}^{m}(a)=0$, then
$W_{f}(a,b)=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(bx)}=\left\\{\begin{array}[]{lll}2^{m},&\text{if $b=0$},\\\ 0,&\text{if
$b\neq 0$.}\end{array}\right.$
Let $L=\\{a\in\mathbb{F}_{2^{m}}:{\rm Tr}_{k}^{m}(a)=0\\}$, then
$|L|=2^{m-k}$. Hence, when $(a,b)$ runs over $L\times\mathbb{F}_{2^{m}}$, we
have
$W_{f}(a,b)-W_{f}(a,b+1)=\left\\{\begin{array}[]{lll}0,&\text{occurring $2^{2m-k}-2^{m-k+1}$ times},\\\ 2^{m},&\text{occurring $2^{m-k}$ times},\\\ -2^{m},&\text{occurring $2^{m-k}$ times}.\\\ \end{array}\right.$ (17)
If ${\rm Tr}_{k}^{m}(a)\neq 0$, similar to the discussions in (15), we have
$\begin{split}W_{f}^{2}(a,b)&=\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a{\rm Tr}_{k}^{m}(x^{2^{k}+1})+bx)}\sum_{y\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a{\rm Tr}_{k}^{m}(xy^{2^{k}}+x^{2^{k}}y))}\\\
&=2^{m}\sum_{\small\begin{array}[]{c}x\in\mathbb{F}_{2^{m}}\\\ x+x^{2^{2k}}=0\end{array}}(-1)^{{\rm Tr}(a{\rm Tr}_{k}^{m}(x^{2^{k}+1})+bx)}\\\
&=2^{m}\cdot\sum_{x\in\mathbb{F}_{2^{2k}}}(-1)^{{\rm Tr}(a{\rm
Tr}_{k}^{m}(x^{2^{k}+1})+bx)},\end{split}$
as $v_{2}(m)>v_{2}(k)+1$. Then by (7) we obtain
$W_{f}^{2}(a,b)=\begin{cases}2^{m+2k},&\text{if ${{\rm Tr}\left(a{\rm
Tr}_{k}^{m}(x^{2^{k}+1})+bx\right)}=0$ for all $x\in\mathbb{F}_{2^{2k}}$,}\\\
0,&\text{otherwise}.\end{cases}$
Clearly, if ${{\rm Tr}(a{\rm Tr}_{k}^{m}(x^{2^{k}+1})+bx)}=0$ for all $x\in\mathbb{F}_{2^{2k}}$, then ${{\rm Tr}(a{\rm Tr}_{k}^{m}(x^{2^{k}+1})+(b+1)x)}={{\rm Tr}\left(x\right)}=\frac{m}{2k}{{\rm Tr}_{1}^{2k}\left(x\right)}=0$, since $\frac{m}{2k}$ is even when $v_{2}(m)>v_{2}(k)+1$. Hence, $W_{f}(a,b)-W_{f}(a,b+1)\in\\{0,\pm 2^{\frac{m+2k+2}{2}}\\}$ for ${\rm Tr}_{k}^{m}(a)\neq 0$. Combining this with
(17), when $(a,b)$ runs through $\mathbb{F}_{2^{m}}^{2}$, we have
$W_{f}(a,b)-W_{f}(a,b+1)\in\left\\{0,2^{m},-2^{m},\pm
2^{\frac{m+2k+2}{2}}\right\\}$
and each of the values $\pm 2^{m}$ occurs $2^{m-k}$ times. Then ${\rm
wt_{H}}({\mathbf{c}}(a,b))=0$ and ${\rm wt_{H}}({\mathbf{c}}(a,b))=2^{m-1}$
both occur $2^{m-k}$ times and every nonzero weight in
${\mathcal{C}}(f)^{\bar{D}}$ belongs to the set
$\\{2^{m-2},\,2^{m-1},\,2^{m-2}\pm 2^{\frac{m+2k-2}{2}}\\}$ by (12). Hence,
${\mathcal{C}}(f)^{\bar{D}}$ is degenerate and has dimension $m+k$. This
completes the proof. $\square$
Using Lemma 4.7 and arguments similar to those in the proof of Theorem 4.2, one can prove the following theorem.
###### Theorem 4.8
Adopt the notation and conditions introduced in Lemma 4.7. Then
${\mathcal{C}}(f)^{\bar{D}}$ is a $[2^{m-1},m+k,2^{m-2}-t]$ code with the
weight distribution in Table 9. Its dual has parameters
$\left[2^{m-1},2^{m-1}-m-k,4\right]$, and is distance-optimal with respect to
the sphere packing bound.
Table 9: The weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.8

Weight | Multiplicity
---|---
$0$ | $1$
$2^{m-2}$ | $2^{m+k}-2+2^{2m-3}\cdot(1-2^{k})/t^{2}$
$2^{m-2}\pm t$ | $2^{2m-4}\cdot(2^{k}-1)/t^{2}$
$2^{m-1}$ | $1$
###### Example 4.9
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 4.8. Let $m=5$
and $k=1$. Then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[16,6,6]$ and its
dual has parameters $[16,10,4]$. Both codes are optimal according to the
tables of best codes known in [22].
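As with the previous families, Table 9 can be confirmed by exhaustive search for small parameters. The sketch below (ours; field model $x^{6}+x+1$) treats $m=6$ and $k=1$, so $f(x)={\rm Tr}_{1}^{6}(x^{3})$ is $\\{0,1\\}$-valued, $v_{2}(m)=v_{2}(k)+1$ and $t=2^{\frac{m+2k-4}{2}}=4$; it should reproduce a $[32,7,12]$ code with weights $0,12,16,20,32$ and multiplicities $1,16,94,16,1$.

```python
from collections import Counter

M, POLY = 6, 0b1000011           # GF(2^6) via x^6 + x + 1 (our choice)

def gf_mul(a, b):
    r = 0
    for _ in range(M):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << M):
            a ^= POLY
    return r

def tr(x):
    t, y = 0, x
    for _ in range(M):
        t ^= y
        y = gf_mul(y, y)
    return t

field = range(1 << M)
trt = [tr(x) for x in field]                          # table of absolute traces
fv = [tr(gf_mul(gf_mul(x, x), x)) for x in field]     # f(x) = Tr_1^6(x^3) in {0, 1}
D = [x for x in field if trt[x] == 1]                 # position set (4), |D| = 32

wt = Counter()
for a in field:
    ta = trt[a]                  # Tr(a f(x)) = f(x) Tr(a), because f(x) is 0 or 1
    for b in field:
        wt[sum((fv[x] & ta) ^ trt[gf_mul(b, x)] for x in D)] += 1
# The code is degenerate of dimension m + k = 7, so each codeword is counted
# 2^{m-k} = 32 times among the 2^{2m} pairs (a, b).
print({w: c // 32 for w, c in sorted(wt.items())})    # expect Table 9 with m = 6, k = 1
```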
We settled the parameters and weight distribution of the punctured code ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 4.8, but we do not know whether the corresponding code ${\mathcal{C}}(f)$ has been studied in the literature.
## 5 Some punctured codes of binary linear codes from cyclotomic classes
In this section, we settle the weight distribution of the punctured code
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual, where the
position set $D$ is a cyclotomic class and $f(x)=x^{d}$ for some integer $d$.
Let $\gamma$ be a primitive element of $\mathbb{F}_{2^{m}}$ and let $t$ be a
positive integer dividing $2^{m}-1$. Let $D=\langle\gamma^{t}\rangle$, which
is the subgroup of $\mathbb{F}_{2^{m}}^{*}$ generated by $\gamma^{t}$. The
multiplicative cosets of $D$ are called the cyclotomic classes of order $t$ in
$\mathbb{F}_{2^{m}}^{*}$. Recall that the binary punctured code is
${\mathcal{C}}(f)^{\bar{D}}=\\{\mathbf{c}(a,b)=({\rm Tr}(ax^{d}+bx))_{x\in
D}:a,b\in{\mathbb{F}}_{2^{m}}\\}$ (18)
if $f(x)=x^{d}$. Since $|D|=\frac{2^{m}-1}{t}$, the length $n$ of
${\mathcal{C}}(f)^{\bar{D}}$ is $\frac{2^{m}-1}{t}$. It is easily seen that
the Hamming weight of the codeword ${\mathbf{c}}(a,b)$ is given by
$\begin{split}{\rm wt_{H}}({\bf c}(a,b))&=n-\left|\left\\{x\in D:\,\,{\rm
Tr}\left(ax^{d}+bx\right)=0\right\\}\right|=\frac{1}{2}\left(n-\sum_{x\in
D}(-1)^{{\rm Tr}(ax^{d}+bx)}\right).\end{split}$ (19)
To determine the weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$, we need
to determine the value distribution of
$T(a,b)=\sum_{x\in D}(-1)^{{\rm Tr}(ax^{d}+bx)}$ (20)
for $(a,b)$ running through $\mathbb{F}_{2^{m}}^{2}$. In the following, we
propose several classes of linear codes with few weights by choosing proper
$d$ and $t$.
### 5.1 The case that $d=\frac{2^{m}-1}{3}$ and
lcm$(3,t)\,|\,(2^{\frac{m}{2}}+1)$
In this subsection, we always assume that $v_{2}(m)=1$, $d=\frac{2^{m}-1}{3}$
and $t$ is a positive integer satisfying lcm$(3,t)\,|\,(2^{\frac{m}{2}}+1)$.
If $3\,|\,t$, then $x^{\frac{2^{m}-1}{3}}=1$ for any $x\in D$. From (20) we
have
$\begin{split}T(a,b)=\sum_{x\in D}(-1)^{{\rm Tr}(a+bx)}.\end{split}$ (21)
If $3\,\nmid\,t$, then
$\begin{split}T(a,b)&=\sum_{x\in\langle\gamma^{3t}\rangle}(-1)^{{\rm
Tr}(a+bx)}+\sum_{x\in\langle\gamma^{3t}\rangle}(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}}+b\gamma^{t}x)}+\sum_{x\in\langle\gamma^{3t}\rangle}(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}}+b\gamma^{2t}x)}\\\
&=\frac{1}{3t}\left(\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{{\rm
Tr}(a+bx^{3t})}+\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}}+b\gamma^{t}x^{3t})}+\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}}+b\gamma^{2t}x^{3t})}\right)\\\
&=\frac{1}{3t}\left(\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a+bx^{3t})}+\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}}+b\gamma^{t}x^{3t})}+\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}}+b\gamma^{2t}x^{3t})}\right)\\\
&-\frac{1}{3t}\left((-1)^{{\rm Tr}(a)}+(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})}+(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})}\right).\end{split}$ (22)
In order to obtain the possible values of $T(a,b)$ for $3\,\nmid\,t$, we need
the following lemma.
###### Lemma 5.1
Let $N$ be the number of zeros in the sequence $\left({\rm Tr}(a),\,\,{\rm Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}}),\,\,{\rm Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})\right)$. When $a$ runs over $\mathbb{F}_{2^{m}}$, we have
$N=\left\\{\begin{array}[]{lll}3,&{\rm occurring}\,\,\,2^{m-2}\,\,\,{\rm times},\\\ 1,&{\rm occurring}\,\,\,3\cdot 2^{m-2}\,\,\,{\rm times}.\end{array}\right.$
Proof. Obviously, the possible values of $N$ are 0, 1, 2 or 3. Let $N_{i}$
denote the number of times that $N=i$ when $a$ runs over $\mathbb{F}_{2^{m}}$,
where $i\in\\{0,1,2,3\\}$. Then
$\begin{split}N_{3}&=\frac{1}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{0}\in\mathbb{F}_{2}}(-1)^{{\rm Tr}(y_{0}a)}\sum_{y_{1}\in\mathbb{F}_{2}}(-1)^{{\rm Tr}(y_{1}a\gamma^{\frac{2^{m}-1}{3}})}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm Tr}(y_{2}a\gamma^{\frac{2(2^{m}-1)}{3}})}\\\ &=\frac{1}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{0}\in\mathbb{F}_{2}}\sum_{y_{1}\in\mathbb{F}_{2}}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm Tr}\left(a(y_{0}+y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}})\right)}.\end{split}$
Note that
$y_{0}+y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}}=0$
if and only if $y_{0}=y_{1}=y_{2}=0$ or $y_{0}=y_{1}=y_{2}=1$. Then
$N_{3}=\frac{1}{2^{3}}\left(2^{m}+2^{m}\right)=2^{m-2}.$
Due to symmetry, we have
$\displaystyle N_{2}$ $\displaystyle=$
$\displaystyle\frac{3}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{0}\in\mathbb{F}_{2}}(-1)^{y_{0}({\rm
Tr}(a)-1)}\sum_{y_{1}\in\mathbb{F}_{2}}(-1)^{{\rm
Tr}(y_{1}a\gamma^{\frac{2^{m}-1}{3}})}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm
Tr}(y_{2}a\gamma^{\frac{2(2^{m}-1)}{3}})}$ $\displaystyle=$
$\displaystyle\frac{3}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{1}\in\mathbb{F}_{2}}\sum_{y_{0}\in\mathbb{F}_{2}}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm
Tr}(a(y_{0}+y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}})-y_{0})}$
$\displaystyle=$
$\displaystyle\frac{3}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{1}\in\mathbb{F}_{2}}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm
Tr}\left(a(y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}})\right)}-$
$\displaystyle\frac{3}{2^{3}}\sum_{a\in\mathbb{F}_{2^{m}}}\sum_{y_{1}\in\mathbb{F}_{2}}\sum_{y_{2}\in\mathbb{F}_{2}}(-1)^{{\rm
Tr}\left(a(1+y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}})\right)}.$
Note that
$y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}}=0$ if and
only if $y_{1}=y_{2}=0$, and
$1+y_{1}\gamma^{\frac{2^{m}-1}{3}}+y_{2}\gamma^{\frac{2(2^{m}-1)}{3}}=0$ if
and only if $y_{1}=y_{2}=1$. Then
$N_{2}=\frac{3}{2^{3}}\left(2^{m}-2^{m}\right)=0.$
Similarly, we can prove that $N_{1}=3\cdot 2^{m-2}$ and $N_{0}=0$. $\square$
###### Theorem 5.2
Let $v_{2}(m)=1$, $d=\frac{2^{m}-1}{3}$ and $t$ be a positive integer
satisfying $\textrm{lcm}(3,t)\,|\,(2^{\frac{m}{2}}+1)$. Let
${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (18) and
$D=\langle\gamma^{t}\rangle$. If $t\neq 2^{\frac{m}{2}}+1$, then the following
statements hold.
(1) If $3\,|\,t$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a
$[\frac{2^{m}-1}{t},m+1]$ code with the weight distribution in Table 10. Its
dual has parameters $[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-m-1,4]$, and is
distance-optimal with respect to the sphere packing bound.
Table 10: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $3\,|\,t$ in Theorem 5.2

Weight | Multiplicity
---|---
$0$ | $1$
$\frac{1}{2t}\left(2^{m}-2-2^{\frac{m}{2}}\right)$ | $\frac{(t-1)(2^{m}-1)}{t}$
$\frac{1}{2t}\left(2^{m}+2^{\frac{m}{2}}\right)$ | $\frac{(t-1)(2^{m}-1)}{t}$
$\frac{1}{2t}\left(2^{m}-2+(t-1)2^{\frac{m}{2}}\right)$ | $\frac{(2^{m}-1)}{t}$
$\frac{1}{2t}\left(2^{m}-(t-1)2^{\frac{m}{2}}\right)$ | $\frac{(2^{m}-1)}{t}$
$\frac{2^{m}-1}{t}$ | $1$
(2) If $3\,\nmid\,t$, then ${\mathcal{C}}(f)^{\bar{D}}$ is a
$[\frac{2^{m}-1}{t},m+2]$ code with the weight distribution in Table 11. Its
dual has parameters $[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-m-2,3]$.
Table 11: Weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ for $3\,\nmid\,t$ in Theorem 5.2

Weight | Multiplicity
---|---
$0$ | $1$
$\frac{2^{m}-1}{2t}-\frac{1}{2t}\left((t-1)2^{\frac{m}{2}}-1\right)$ | $\frac{2^{m}-1}{t}$
$\frac{2^{m}-1}{2t}+\frac{1}{6t}\left((3t-1)2^{\frac{m}{2}}-1\right)$ | $\frac{2^{m+1}-2}{t}$
$\frac{1}{2t}\left(2^{m}+2^{\frac{m}{2}}\right)$ | $\frac{(t-1)(2^{m}-1)}{t}$
$\frac{2^{m}-1}{2t}-\frac{1}{6t}\left(2^{\frac{m}{2}}+1\right)$ | $\frac{3(t-1)(2^{m}-1)}{t}$
$\frac{2^{m}-1}{2t}-\frac{1}{6t}\left((3t+1)2^{\frac{m}{2}}+1\right)$ | $\frac{2^{m}-1}{t}$
$\frac{2(2^{m}-1)}{3t}$ | $3$
Proof. We prove this theorem case by case as follows.
Case 1: $3\,|\,t$. From (21) we have
$\begin{split}T(a,b)&=(-1)^{{\rm Tr}(a)}\sum_{x\in D}(-1)^{{\rm Tr}(bx)}=\frac{1}{t}(-1)^{{\rm Tr}(a)}\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{{\rm Tr}(bx^{t})}\\\ &=-\frac{1}{t}(-1)^{{\rm Tr}(a)}+\frac{1}{t}(-1)^{{\rm Tr}(a)}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(bx^{t})}.\end{split}$
If $b=0$, it is clear that
$\begin{split}T(a,0)&=\left\\{\begin{array}[]{lcl}\frac{(2^{m}-1)}{t},&{\rm if}\,\,\,{\rm Tr}(a)=0,\\\ \frac{(1-2^{m})}{t},&{\rm if}\,\,\,{\rm Tr}(a)=1.\end{array}\right.\end{split}$ (23)
If $b\neq 0$, then $b$ can be written as $b=\gamma^{i}$, where $\gamma$ is a
primitive element of $\mathbb{F}_{2^{m}}$ and $1\leq i\leq 2^{m}-1$. From
Lemma 2.4 we have
$\begin{split}T(a,\gamma^{i})&=\left\\{\begin{array}[]{lcl}-\frac{1}{t}(-1)^{{\rm Tr}(a)}+\frac{1}{t}(-1)^{{\rm Tr}(a)}(-1)^{s}2^{\frac{m}{2}},&{\rm if}\,\,\,i\not\equiv 0\pmod{t},\\\ -\frac{1}{t}(-1)^{{\rm Tr}(a)}+\frac{1}{t}(-1)^{{\rm Tr}(a)}(-1)^{s-1}(t-1)2^{\frac{m}{2}},&{\rm if}\,\,\,i\equiv 0\pmod{t},\end{array}\right.\end{split}$ (24)
as $t$ is a positive integer such that lcm$(3,t)\,|\,(2^{m/2}+1)$; here Lemma 2.4 applies with $h=\frac{m}{2}$ and $s=1$. Hence, when $(a,b)$ runs over $\mathbb{F}_{2^{m}}^{2}$, by (23) and (24), the value distribution of $T(a,b)$ is given as follows:
$\begin{split}T(a,b)&=\left\\{\begin{array}[]{lll}\frac{(1-2^{m})}{t},&{\rm occurring}\,\,2^{m-1}\,\,{\rm times},\\\ \frac{(2^{m}-1)}{t},&{\rm occurring}\,\,2^{m-1}\,\,{\rm times},\\\ \frac{1}{t}\left(2^{\frac{m}{2}}+1\right),&{\rm occurring}\,\,\frac{2^{m-1}(t-1)(2^{m}-1)}{t}\,\,{\rm times},\\\ -\frac{1}{t}\left(2^{\frac{m}{2}}+1\right),&{\rm occurring}\,\,\frac{2^{m-1}(t-1)(2^{m}-1)}{t}\,\,{\rm times},\\\ \frac{1}{t}-\frac{1}{t}(t-1)2^{\frac{m}{2}},&{\rm occurring}\,\,\frac{2^{m-1}(2^{m}-1)}{t}\,\,{\rm times},\\\ -\frac{1}{t}+\frac{1}{t}(t-1)2^{\frac{m}{2}},&{\rm occurring}\,\,\frac{2^{m-1}(2^{m}-1)}{t}\,\,{\rm times}.\end{array}\right.\end{split}$ (25)
From (19) and (25), we know that the Hamming weight $0$ occurs $2^{m-1}$ times
when $(a,b)$ runs through $\mathbb{F}_{2^{m}}^{2}$. Hence, in this case,
${\mathcal{C}}(f)^{\bar{D}}$ is degenerate and has dimension $m+1$. Dividing
each frequency by $2^{m-1}$ in (25), we get the weight distribution in Table
10 from (19). From the first five Pless power moments and the weight
distribution of ${\mathcal{C}}(f)^{\bar{D}}$, we deduce that the dual of
${\mathcal{C}}(f)^{\bar{D}}$ is a
$[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-m-1,4]$ code. If there exists a
$[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-m-1]$ binary code with Hamming distance
at least $5$, then we have
$\begin{split}\sum_{i=0}^{2}\binom{\frac{2^{m}-1}{t}}{i}=1+\frac{2^{m}-1}{t}+\frac{2^{m}-1}{2t}\cdot\left(\frac{2^{m}-1}{t}-1\right)>2^{m+1}\end{split}$
as $t\neq 2^{\frac{m}{2}}+1$, which contradicts the sphere packing bound.
Hence, the dual code $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is distance-
optimal with respect to the sphere packing bound.
Case 2: $3\,\nmid\,t$. From (22) we have
$\begin{split}T(a,b)&=\frac{1}{3t}\Bigg{(}(-1)^{{\rm
Tr}(a)}\big{(}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(bx^{3t})}-1\big{)}+(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})}\big{(}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(b\gamma^{t}x^{3t})}-1\big{)}\\\ &+(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})}\big{(}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm Tr}(b\gamma^{2t}x^{3t})}-1\big{)}\Bigg{)}.\end{split}$
If $b=0$, it is clear that
$\begin{split}T(a,0)&=\frac{2^{m}-1}{3t}\left((-1)^{{\rm Tr}(a)}+(-1)^{{\rm
Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})}+(-1)^{{\rm
Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})}\right).\end{split}$
From Lemma 5.1 we have
$\begin{split}T(a,0)&=\left\\{\begin{array}[]{lll}\frac{(2^{m}-1)}{t},&{\rm occurring}\,\,\,2^{m-2}\,\,\,{\rm times},\\\ -\frac{(2^{m}-1)}{3t},&{\rm occurring}\,\,\,3\cdot 2^{m-2}\,\,\,{\rm times}.\end{array}\right.\end{split}$
(26)
If $b\neq 0$, then $b$ can be written as $b=\gamma^{i}$, where $\gamma$ is a
primitive element of $\mathbb{F}_{2^{m}}$ and $1\leq i\leq 2^{m}-1$. From
Lemma 2.4 we have
$T(a,\gamma^{i})=\begin{cases}\frac{1}{3t}+\frac{1}{3t}\sum_{x\in\mathbb{F}_{2^{m}}}\left((-1)^{{\rm Tr}(\gamma^{i}x^{3t})}-(-1)^{{\rm Tr}(\gamma^{i+t}x^{3t})}-(-1)^{{\rm Tr}(\gamma^{i+2t}x^{3t})}\right),&{\rm if}\ {\rm Tr}(a)=0,\ {\rm Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})=1\ {\rm and}\ {\rm Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})=1,\\ \frac{1}{3t}+\frac{1}{3t}\sum_{x\in\mathbb{F}_{2^{m}}}\left(-(-1)^{{\rm Tr}(\gamma^{i}x^{3t})}+(-1)^{{\rm Tr}(\gamma^{i+t}x^{3t})}-(-1)^{{\rm Tr}(\gamma^{i+2t}x^{3t})}\right),&{\rm if}\ {\rm Tr}(a)=1,\ {\rm Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})=0\ {\rm and}\ {\rm Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})=1,\\ \frac{1}{3t}+\frac{1}{3t}\sum_{x\in\mathbb{F}_{2^{m}}}\left(-(-1)^{{\rm Tr}(\gamma^{i}x^{3t})}-(-1)^{{\rm Tr}(\gamma^{i+t}x^{3t})}+(-1)^{{\rm Tr}(\gamma^{i+2t}x^{3t})}\right),&{\rm if}\ {\rm Tr}(a)=1,\ {\rm Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})=1\ {\rm and}\ {\rm Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})=0,\\ -\frac{1}{t}+\frac{1}{3t}\sum_{x\in\mathbb{F}_{2^{m}}}\left((-1)^{{\rm Tr}(\gamma^{i}x^{3t})}+(-1)^{{\rm Tr}(\gamma^{i+t}x^{3t})}+(-1)^{{\rm Tr}(\gamma^{i+2t}x^{3t})}\right),&{\rm if}\ {\rm Tr}(a)={\rm Tr}(a\gamma^{\frac{t(2^{m}-1)}{3}})={\rm Tr}(a\gamma^{\frac{2t(2^{m}-1)}{3}})=0.\end{cases}$ (27)
For any positive integer $t$ and $1\leq i\leq 2^{m}-1$, one of $3t\,|\,i$, $3t\,|\,(i+t)$ and $3t\,|\,(i+2t)$ holds if and only if $t\,|\,i$; otherwise, $3t\,\nmid\,i$, $3t\,\nmid\,(i+t)$ and $3t\,\nmid\,(i+2t)$. Then
combining Lemma 2.4, (26) and (27), it is not hard to see that when $(a,b)$
runs over $\mathbb{F}_{2^{m}}^{2}$, the value distribution of $T(a,b)$ is
given as follows:
$T(a,b)=\begin{cases}\frac{2^{m}-1}{t},&{\rm occurring}\ 2^{m-2}\ {\rm times},\\ -\frac{2^{m}-1}{3t},&{\rm occurring}\ 3\cdot 2^{m-2}\ {\rm times},\\ -\frac{1}{t}+\frac{1}{t}(t-1)2^{\frac{m}{2}},&{\rm occurring}\ \frac{(2^{m}-1)2^{m-2}}{t}\ {\rm times},\\ \frac{1}{3t}\left(1-3\cdot 2^{\frac{m}{2}}\right),&{\rm occurring}\ \frac{(t-1)(2^{m}-1)2^{m-2}}{t}\ {\rm times},\\ \frac{1}{3t}\left(1-(3t-1)2^{\frac{m}{2}}\right),&{\rm occurring}\ \frac{(2^{m}-1)2^{m-1}}{t}\ {\rm times},\\ \frac{1}{3t}\left(2^{\frac{m}{2}}+1\right),&{\rm occurring}\ \frac{3(t-1)(2^{m}-1)2^{m-2}}{t}\ {\rm times},\\ \frac{1}{3t}\left((3t+1)2^{\frac{m}{2}}+1\right),&{\rm occurring}\ \frac{(2^{m}-1)2^{m-2}}{t}\ {\rm times}.\end{cases}$ (28)
By a similar analysis to Case 1, we obtain the weight distribution of
${\mathcal{C}}(f)^{\bar{D}}$ and the parameters of its dual. This completes
the proof. $\square$
###### Example 5.3
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 5.2 with $m=6$ and $t=3$. Then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[21,7,8]$ and its dual has parameters $[21,14,4]$. Both codes are optimal according to the tables of best codes known in [22].
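For concreteness, the sphere packing argument behind the optimality of the dual $[21,14,4]$ code in this example can be verified directly: a putative $[21,14,5]$ binary code would have to satisfy
$\sum_{i=0}^{2}\binom{21}{i}\leq 2^{21-14},$
but $1+21+210=232>128$, so no such code exists and minimum distance $4$ is the best possible.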
###### Remark 5.4
If $t=2^{\frac{m}{2}}+1$, it is easy to check that
${\mathcal{C}}(f)^{\bar{D}}$ is a
$[2^{\frac{m}{2}}-1,\frac{m}{2}+1,2^{\frac{m}{2}-1}-1]$ code with the weight
enumerator
$1+(2^{\frac{m}{2}}-1)(x^{2^{\frac{m}{2}-1}-1}+x^{2^{\frac{m}{2}-1}})+x^{2^{\frac{m}{2}}-1},$
which is optimal with respect to the Griesmer bound. Its dual has parameters
$[2^{\frac{m}{2}}-1,\frac{m}{2}+1,4]$, which is distance-optimal with respect
to the sphere packing bound. By the Assmus-Mattson theorem [2], the code
${\mathcal{C}}(f)^{\bar{D}}$ and its dual support $2$-designs [13, Chapter 4].
We note that in the special case $t=2^{\frac{m}{2}}+1$, the code ${\mathcal{C}}(f)^{\bar{D}}$ is permutation-equivalent to a singly punctured code of the first-order Reed-Muller code [37].
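As a quick sanity check of the Griesmer optimality claim, take the smallest admissible even $m$, namely $m=6$ (so $t=9$ and the code is a $[7,4,3]$ code); the Griesmer bound gives
$n\geq\sum_{i=0}^{3}\left\lceil\frac{3}{2^{i}}\right\rceil=3+2+1+1=7,$
which is attained with equality, so the code is indeed optimal.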
### 5.2 The case that $d(2^{k}+1)\equiv 2^{\frac{m}{2}}+1\pmod{2^{m}-1}$ and
$t=2^{k}+1$
In this subsection, we always assume that $m$ is even, $d(2^{k}+1)\equiv
2^{\frac{m}{2}}+1\pmod{2^{m}-1}$ and $t=2^{k}+1$. From (20) it follows that
$\begin{split}T(a,b)&=\sum_{x\in D}(-1)^{{\rm
Tr}(ax^{d}+bx)}=\frac{1}{2^{k}+1}\sum_{x\in\mathbb{F}_{2^{m}}^{*}}(-1)^{{\rm
Tr}(ax^{2^{m/2}+1}+bx^{2^{k}+1})}\\\
&=\frac{1}{2^{k}+1}\sum_{x\in\mathbb{F}_{2^{m}}}(-1)^{{\rm
Tr}(ax^{2^{m/2}+1}+bx^{2^{k}+1})}-\frac{1}{2^{k}+1}.\end{split}$ (29)
If $k=\frac{m}{2}$, by Lemma 2.4, (19) and (29), ${\mathcal{C}}(f)^{\bar{D}}$
is a one-weight code with parameters
$[2^{\frac{m}{2}}-1,\frac{m}{2},2^{\frac{m}{2}-1}]$, and is permutation-
equivalent to a Simplex code. In the following, we determine the parameters
and the weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ and the parameters
of the dual code $({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ for
$k\,\neq\,\frac{m}{2}$.
###### Theorem 5.5
Let $d$ satisfy the condition $d(2^{k}+1)\equiv
2^{\frac{m}{2}}+1\pmod{2^{m}-1}$. Let $t=2^{k}+1$ and $k\,\neq\frac{m}{2}$.
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code defined in (18). Then
${\mathcal{C}}(f)^{\bar{D}}$ is a
$[\frac{2^{m}-1}{t},\frac{3m}{2},\frac{2^{m-1}-2^{\frac{m}{2}+k-1}}{t}]$ code
with the weight distribution in Table 12. If $k>1$, its dual has parameters
$[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-\frac{3m}{2},3]$. If $k=1$ and $m\neq
6$, its dual has parameters
$[\frac{2^{m}-1}{t},\frac{2^{m}-1}{t}-\frac{3m}{2},4]$, and is distance-
optimal with respect to the sphere packing bound. If $k=1$ and $m=6$, its dual
has parameters $[21,12,5]$, and is optimal according to the tables of best
codes known in [22].
Table 12: Weight distribution of ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 5.5
Weight | Multiplicity
---|---
$0$ | $1$
$\frac{2^{m-1}+2^{\frac{m}{2}-1}}{t}$ | $\frac{2^{3k}(2^{\frac{m}{2}}-1)(2^{m}-2^{m-2k}-2^{m-3k}+2^{\frac{m}{2}}-2^{\frac{m}{2}-k}+1)}{t^{2}(2^{k}-1)}$
$\frac{2^{m-1}-2^{\frac{m}{2}+k-1}}{t}$ | $\frac{2^{k}(2^{m}-1)(2^{\frac{m}{2}}+2^{\frac{m}{2}-k}+2^{\frac{m}{2}-2k}+1)}{t^{2}}$
$\frac{2^{m-1}+2^{\frac{m}{2}+2k-1}}{t}$ | $\frac{(2^{\frac{m}{2}-k}-1)(2^{m}-1)}{t^{2}(2^{k}-1)}$
Proof. It is clear that ${\rm Tr}\left(ax^{2^{m/2}+1}\right)={\rm
Tr}_{1}^{\frac{m}{2}}\left((a+a^{2^{m/2}})x^{2^{m/2}+1}\right)$ and
$a+a^{2^{m/2}}\in\mathbb{F}_{2^{\frac{m}{2}}}$ for any
$a\in\mathbb{F}_{2^{m}}$. Obviously, $a+a^{2^{m/2}}$ runs through
$\mathbb{F}_{2^{\frac{m}{2}}}$ with multiplicity $2^{\frac{m}{2}}$ when $a$
runs through $\mathbb{F}_{2^{m}}$. Let
$K=\left\{x\in\mathbb{F}_{2^{m}}:x+x^{2^{m/2}}=0\right\}.$
Then $\mathbf{c}(a,b)=\mathbf{c}(a,b+\delta)$ for any $\delta\in K$. Since
$t\,|\,(2^{m}-1)$ and $t=2^{k}+1$, it is easy to prove that there exists a
positive integer $\ell$ such that $\ell(2^{k}+1)\equiv
2^{\frac{m}{2}}+1\pmod{2^{m}-1}$ if and only if $v_{2}(k)=v_{2}(\frac{m}{2})$
and $k\,|\,\frac{m}{2}$. From Lemma 2.5, (19) and (29), the desired weight
distribution of ${\mathcal{C}}(f)^{\bar{D}}$ follows.
Let $A^{\perp}_{i}$ be the number of the codewords with weight $i$ in
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$. By the first four Pless power moments,
we get that $A_{1}^{\perp}=A_{2}^{\perp}=0$ and
$\begin{split}A_{3}^{\perp}&=\frac{1}{48(2^{k}+1)^{5}(2^{2k}-1)}\big{(}(2^{7k}+2^{6k+1}+6\cdot
2^{2k}+4\cdot 2^{3k}+7\cdot 2^{k}+2-3\cdot 2^{5k}-10\cdot 2^{4k}-9\cdot
2^{3k})\cdot 2^{\frac{3m}{2}}\\\ &+(3\cdot 2^{5k}+10\cdot 2^{4k}+5\cdot
2^{3k}-6\cdot 2^{2k}-7\cdot 2^{k}-2)\cdot 2^{\frac{5m}{2}}\big{)}.\end{split}$
We can check that $A_{3}^{\perp}=0$ if $k=1$ and $A_{3}^{\perp}\neq 0$ if $k>1$. If $k=1$, by
the fifth Pless power moment, we obtain that
$\begin{split}A_{4}^{\perp}=\frac{1}{6^{4}}(2^{4m}+70\cdot
2^{\frac{5m}{2}}-6\cdot 2^{\frac{7m}{2}}-25\cdot
2^{3m})+\frac{2^{2m}}{54}-\frac{2^{\frac{3m}{2}+2}}{81}.\end{split}$
It is easy to check that $A_{4}^{\perp}=0$ if and only if $m=6$. Similarly to
the proof of Theorem 5.2, we can show that
$({\mathcal{C}}(f)^{\bar{D}})^{\perp}$ is distance-optimal with respect to the
sphere packing bound if $k=1$ and $m\neq 6$. By the sixth Pless power moment,
we obtain $A_{5}^{\perp}\neq 0$. This completes the proof. $\square$
###### Example 5.6
Let ${\mathcal{C}}(f)^{\bar{D}}$ be the linear code in Theorem 5.5. Let $m=10$
and $k=1$. Then ${\mathcal{C}}(f)^{\bar{D}}$ has parameters $[341,15,160]$.
Its dual has parameters $[341,326,4]$ and is distance-optimal with respect to
the sphere packing bound.
We have settled the parameters and weight distribution of the code ${\mathcal{C}}(f)^{\bar{D}}$ in Theorem 5.5, but it remains open whether the corresponding code ${\mathcal{C}}(f)$ has been studied in the literature.
## 6 Summary and concluding remarks
The main contributions of this paper are the following:
1. 1.
We obtained several classes of binary punctured codes with three, four, five, or six weights, and determined their weight distributions (see Theorem 3.3, Corollary 3.6, Theorem 4.2, Theorem 4.5, Theorem 4.8, Theorem 5.2 and Theorem 5.5).
2. 2.
We presented several classes of self-complementary linear codes. Almost all of
their duals are distance-optimal with respect to the sphere packing bound (see
Theorem 3.3, Corollary 3.6, Theorem 4.2, Theorem 4.5, Theorem 4.8, Theorem 5.2
and Theorem 5.5).
3. 3.
We obtained some distance-optimal codes with specific parameters (see Example 3.4, Example 3.7, Example 4.3, Example 4.9 and Example 5.3).
A constructed binary linear code ${\mathcal{C}}$ is new if one of the
following happens:
* •
No binary linear code with the same parameters was documented in the
literature.
* •
Some binary linear codes with the same parameters as ${\mathcal{C}}$ were
documented in the literature, but their weight distributions are different
from the weight distribution of ${\mathcal{C}}$.
* •
Some binary linear codes with the same parameters and weight distribution as
those of ${\mathcal{C}}$ were documented in the literature, but they are not
permutation-equivalent to ${\mathcal{C}}$.
Except for the class of codes in Remark 5.4, every other class of binary codes presented in this paper appears to be new, as we have not found in the literature a class of binary codes with the same parameters and weight distributions as those documented in this paper.
Starting from Section 2, we restricted our discussion to finite fields of characteristic 2. The position sets were constructed from some trace functions and cyclotomic classes. It would be interesting to extend some of the results in this paper to the case $q\geq 3$.
Finally, we make some comments on the puncturing and shortening techniques. As
mentioned in the introductory section, every projective linear code over
${\mathbb{F}}_{q}$ is a punctured Simplex code over ${\mathbb{F}}_{q}$ and a
shortened code of a Hamming code over ${\mathbb{F}}_{q}$. However, it is in
general very hard to determine the parameters of punctured codes of Simplex
codes and shortened codes of Hamming codes [37, 56]. Hence, we still need to
study punctured and shortened codes of other families of linear codes. For a
given linear code ${\mathcal{C}}$ and a set $T$ of coordinate positions in
${\mathcal{C}}$, it may be possible to determine the parameters of the
punctured code ${\mathcal{C}}^{T}$ when $|T|$ is small, but it is very hard to
do so in general if $|T|$ is large [45].
## References
* [1] R. Anderson, C. Ding, T. Helleseth, T. Kløve, How to build robust shared control systems, Des. Codes Cryptogr. 15 (1998) 111-124.
* [2] E. F. Assmus Jr., H. F. Mattson Jr., New 5-designs, J. Combin. Theory 6(2) (1969) 122–151.
* [3] L. Budaghyan, C. Carlet, A. Pott, New classes of almost Bent and almost perfect nonlinear polynomials, IEEE Trans. Inf. Theory 52(3) (2006) 1141–1152.
* [4] A. R. Calderbank, J. M. Goethals, Three-weight codes and association schemes, Philips J. Res. 39 (1984) 143–152.
* [5] A. R. Calderbank, W. M. Kantor, The geometry of two-weight codes, Bull. London Math. Soc. 18 (1986) 97–122.
* [6] C. Carlet, P. Charpin, V. Zinoviev, Codes, bent functions and permutations suitable for DES-like cryptosystems, Des. Codes Cryptogr. 15 (1998) 125–156.
* [7] C. Carlet, C. Ding, J. Yuan, Linear codes from highly nonlinear functions and their secret sharing schemes, IEEE Trans. Inf. Theory (51)(6) (2005) 2089–2102.
* [8] R. S. Coulter, Further evaluations of Weil sums, Acta Arith. 86 (1998) 217–226.
* [9] R. S. Coulter, The number of rational points of a class of Artin-Schreier curves, Finite Fields Appl. 8 (2002) 397–413.
* [10] P. Delsarte, On subfield subcodes of modified Reed-Solomon codes, IEEE Trans. Inf. Theory 21(5) (1975) 575–576.
* [11] C. Ding, Linear codes from some 2-designs, IEEE Trans. Inf. Theory 61 (2015) 3265–3275.
* [12] C. Ding, A construction of binary linear codes from Boolean functions, Discrete Math. 339 (2016) 2288–2303.
* [13] C. Ding, Designs from Linear Codes, World Scientific, Singapore, 2018.
* [14] C. Ding, Z. Heng, The subfield codes of ovoid codes, IEEE Trans. Inf. Theory 65(8) (2019) 4715–4729.
* [15] C. Ding, C. Li, N. Li, Z. Zhou, Three-weight cyclic codes and their weight distributions, Discrete Math. 339 (2016) 415–427.
* [16] C. Ding, H. Niederreiter, Cyclotomic linear codes of order 3, IEEE Trans. Inf. Theory 53(6) (2007) 2274–2277.
* [17] C. Ding, X. Wang, A coding theory construction of new systematic authentication codes, Theor. Comput. Sci. 330(1) (2005) 81–99.
* [18] K. Ding, C. Ding, Binary linear codes with three weights, IEEE Commun. Lett. 18(11) (2014) 1879–1882.
* [19] K. Ding, C. Ding, A class of two-weight and three-weight codes and their applications in secret sharing, IEEE Trans. Inf. Theory 61(11) (2015) 5835–5842.
* [20] K. Feng, J. Luo, Value distribution of exponential sums from perfect nonlinear functions and their applications, IEEE Trans. Inf. Theory 53(9) (2007) 3035–3041.
* [21] R. Gold, Maximal recursive sequences with 3-valued recursive cross-correlation function, IEEE Trans. Inf. Theory 14(1) (1968) 154–156.
* [22] M. Grassl, Bounds on the minimum distance of linear codes and quantum codes, Online available at http://www.codetables.de.
* [23] Z. Heng, W. Wang, Y. Wang, Projective binary linear codes from special Boolean functions, Appl. Algebra Eng. Commun. Comput., https://doi.org/10.1007/s00200-019-00412-z.
* [24] Z. Heng, Q. Yue, A class of binary linear codes with at most three weights, IEEE Commun. Lett. 19 (9) (2015) 1488–1491.
* [25] Z. Heng, Q. Yue, C. Li, Three classes of linear codes with two or three weights, Discrete Math. 339 (2016) 2832–2847.
* [26] H. D. L. Hollmann, Q. Xiang, A proof of the Welch and Niho conjectures on cross-correlations of binary m-sequences, Finite Fields Appl. 7 (2001) 253–286.
* [27] X. D. Hou, A note on the proof of Niho’s conjecture, SIAM J. Discrete Math. 18(2) (2004) 313–319.
* [28] W. Huffman, V. Pless, Fundamentals of Error-Correcting Codes. Cambridge University Press, Cambridge, 2010.
* [29] T. Kasami, Weight distribution formula for some class of cyclic codes, University of Illinois, Urbana, Rept. R-265, April 1966.
* [30] T. Kasami, Weight distributions of Bose-Chaudhuri-Hocquenghem codes, University of Illinois, Urbana, Rept. R-317, August 1966.
* [31] T. Kasami, The weight enumerators for several classes of subcodes of the 2nd order binary RM codes, Information and Control 18 (1971) 369–394.
* [32] C. Li, S. Bae, S. Yang, Some two-weight and three-weight linear codes, Math. of Comm. 13(1) (2019) 195–211.
* [33] C. Li, N. Li, T. Helleseth, C. Ding, The weight distribution of several classes of cyclic codes from APN monomials, IEEE Trans. Inf. Theory 60(8) (2014) 4710–4721.
* [34] C. Li, Q. Yue, F. Fu, A construction of several classes of two-weight and three-weight linear codes, Appl. Algebra Eng. Commun. Comput. 28 (2017) 11–30.
* [35] F. Li, Q. Wang, D. Lin, A class of three-weight and five-weight linear codes, Discrete Appl. Math. 241 (2018) 25–38.
* [36] N. Li, S. Mesnager, Recent results and problems on constructions of linear codes from cryptographic functions, Cryptogr. Commun. 12 (2020) 965–986.
* [37] Y. Liu, C. Ding, C. Tang, Shortened linear codes over finite fields, arXiv:2007.05901 [cs.IT].
* [38] G. Luo, X. Cao, S. Xu, J. Mi, Binary linear codes with two or three weights from niho exponents, Cryptogr. Commun. 10 (2018) 301–318.
* [39] J. Luo, K. Feng, On the weight distributions of two classes of cyclic codes, IEEE Trans. Inf. Theory 54(12) (2008) 5332–5344.
* [40] J. Luo, Y. Tang, H. Wang, Cyclic codes and sequences: the generalized Kasami case, IEEE Trans. Inf. Theory 56 (5) (2010) 2130–2142.
* [41] F. J. MacWilliams, N. J. A. Sloane, The Theory of Error-Correcting Codes, North-Holland Publishing Company, Amsterdam, 1997.
* [42] J. M. Marko, A note on evaluations of some exponential sums, Acta Arith. 93 (2000) 117–119.
* [43] S. Mesnager, Linear codes from functions, in _Concise Encyclopedia of Coding Theory_ , W. C. Huffman, J.-L. Kim, P. Solé (Eds.), pp. 463–526, CRC Press, New York, 2021.
* [44] P. Tan, Z. Zhou, D. Tang, T. Helleseth, The weight distribution of a class of two-weight linear codes derived from Kloosterman sums, Cryptogr. Commun. 10 (2018) 291–299.
* [45] C. Tang, C. Ding, M. Xiong, Codes, differentially $\delta$-uniform functions and $t$-designs, IEEE Trans. Inf. Theory 66(6) (2020) 3691–3703.
* [46] C. Tang, N. Li, Y. Qi, Z. Zhou, T. Helleseth, Linear codes with two or three weights from weakly regular bent functions, IEEE Trans. Inf. Theory 62(3) (2016) 1166–1176.
* [47] C. Tang, Y. Qi, D. Huang, Two-weight and three-weight linear codes from square functions, IEEE Commun. Lett. 20(1) (2015) 29–32.
* [48] D. Tang, C. Carlet, Z. Zhou, Binary linear codes from vectorial boolean functions and their weight distribution, Discrete Math. 340(12) (2017) 3055–3072.
* [49] Z. Wan, Lectures on Finite Fields and Galois Rings, World Scientific, Singapore, 2003.
* [50] Q. Wang, K. Ding, R. Xue, Binary linear codes with two weights, IEEE Commun. Lett. 19(7) (2015) 1097–1100.
* [51] X. Wang, D. Zheng, L. Hu, X. Zeng, The weight distributions of two classes of binary codes, Finite Fields Appl. 34 (2015) 192–207.
* [52] X. Wang, D. Zheng, H. Liu, Several classes of linear codes and their weight distributions, Appl. Algebra Eng. Commun. Comput. 30 (2019) 75–92.
* [53] J. Wolfmann, Codes projectifs à deux ou trois poids associés aux hyperquadriques d’une géométrie finie, Discrete Math. 13(2) (1975) 185–211.
* [54] Y. Xia, C. Li, Three-weight ternary linear codes from a family of power functions, Finite Fields Appl. 46 (2017) 17–37.
* [55] C. Xiang, It is indeed a fundamental construction of all linear codes, arXiv:1610.06355.
* [56] C. Xiang, C. Tang, C. Ding, Shortened linear codes from APN and PN functions, arXiv:2007.05923 [cs.IT].
* [57] J. Yuan, C. Carlet, C. Ding, The weight distribution of a class of linear codes from perfect nonlinear functions, IEEE Trans. Inf. Theory 52(2) (2006) 712–717.
* [58] Z. Zhou, C. Ding, Seven classes of three-weight cyclic codes, IEEE Trans. Inf. Theory 61(10) (2013) 4120–4126.
* [59] Z. Zhou, C. Ding, A class of three-weight cyclic codes, Finite Fields Appl. 25 (2014) 79–93.
* [60] Z. Zhou, N. Li, C. Fan, T. Helleseth, Linear codes with two or three weights from quadratic bent functions, Des. Codes Cryptogr. 81 (2015) 1–13.
# Malware Detection and Analysis: Challenges and Research Opportunities
Zahid Akhtar Department of Network and Computer Security,
State University of New York Polytechnic Institute, USA.
Email<EMAIL_ADDRESS>
###### Abstract
Malwares are continuously growing in sophistication and numbers. Over the last
decade, remarkable progress has been achieved in anti-malware mechanisms.
However, several pressing issues (e.g., unknown malware samples detection)
still need to be addressed adequately. This article first presents a concise overview of malware and anti-malware mechanisms, and then summarizes various research challenges. It is a theoretical and perspective article that we hope will complement earlier articles and works.
## I Introduction
The use of personal computers and mobile devices coupled with the internet has now become an integral part of everyday life. This ubiquity of high interconnectivity has prompted many serious privacy and security menaces as well as various other malicious activities. For instance, the emails and passwords of 117 million LinkedIn users were made publicly available by hackers in 2016. In 2017, Uber revealed that its server was attacked and the data of 57 million drivers and riders were stolen, while in 2018 almost 50 million Facebook accounts were compromised due to a security breach. Similarly, a cyberattack on Norway’s ‘Health South East RHF’ healthcare organization in 2018 exposed the health records of more than half of the country’s population. Moreover, it is estimated that, on average, a new malicious code specimen is released every 10 seconds to attack mobile devices [1]. A surge of cyberattacks, increasing in number and sophistication with each passing year, is impacting governments, enterprises and individuals alike and causing severe reputational, financial and social damages. For example, malicious cyber activities in 2016 cost the U.S. economy alone up to 109 billion USD [2].
Different types of cyberattacks are presently being performed by cybercriminals, e.g., man-in-the-middle, malware or birthday attacks. In particular, malware attacks have emerged as one of the most formidable issues in the cybersecurity domain as well as the primary tool utilized by cybercriminals [3]. Malware is a short form of _mal_ icious _soft_ ware; in French, ‘mal’ means ‘bad’. Malware is a catch-all term widely employed to denote various kinds of unwanted harmful software programs with malicious motives [4]. When malware is executed on a system or computing device, it attempts to breach the system/device’s security policies regarding the integrity, confidentiality and availability of data. Other names for malware are badware, malicious code, malicious executable and malicious program. Malwares are developed or used by hobbyists, who try to show their ability by causing havoc, and by cyber-offenders, who steal information potentially for monetary gains. They are popularly known as hackers, black hats and crackers, and could be external/internal menaces, industrial spies or foreign governments. Malwares can be used to change or erase data from victim computers, to collect confidential information, or to hijack systems in order to attack other devices, send spam, host and share illicit contents, bring down servers, penetrate networks, and cripple critical infrastructures. Consequently, a broad range of tools and schemes have been devised to detect and mitigate malware attacks [1]. Anti-malware systems thwart malwares by determining whether a given program has malign intent or not [4]. Despite the great advancement of malware defense techniques and their incessant evolution, badwares can still bypass anti-malware solutions, owing mainly to sophisticated packers and the weakest link, i.e., humans. Namely, most anti-malware methods do not exhibit low enough error rates, and their performance particularly drops when they face unknown malwares, while about 360,000 novel malware samples hit the scene daily [4]. As anti-malware becomes more avant-garde, so do malwares in the wild, thereby escalating the arms race between malware guardians and writers. The quest for scalable and robust automated malware detection frameworks still has a long way to go. This article presents an overview of malwares and their defenses formulated in recent years, and highlights challenges, open issues and research opportunities for researchers and engineers. It is a perspective and academic article, which aims to complement existing studies and prompt interdisciplinary research.
## II Malware Categories
Malwares, as depicted in Fig. 1, can be divided into various types depending on how they infect, propagate or exploit the target system, as follows [3]. Please note that some of the malware types/tools/names fall in a gray area, with features intended for benign purposes as well, e.g., cookies, Wireshark, etc.
### II-A Virus
A virus is a piece of code that duplicates, reproduces or propagates itself across files, programs and machines if they are network-connected. Viruses cannot execute independently; therefore they are mainly appended to ‘host’ programs/files (e.g., executable files, the master boot record). When executed by the ‘host’, they can corrupt or destroy files, programs, the computer’s functioning and shared networks, which may result in denial of service (DoS) and system performance degradation (SPD). Examples of viruses are the Melissa virus and the Creeper virus.
Figure 1: General taxonomy of malware.
### II-B Worm
Unlike a virus, a worm does not need a ‘host’ but can run independently. Worms are self-replicating and self-propagating via a network or storage devices. Worms exploit operating system vulnerabilities, but do not corrupt user or system files. They consume computing and network resources by residing in main memory while replicating and spreading themselves, causing DoS and SPD. Examples are MyDoom and SQL Slammer.
### II-C Trojan
A trojan surfaces as a benign program but performs malevolent activities in the backend without the user’s knowledge. Trojans usually do not infect files or replicate themselves; rather, they create backdoors for unauthorized system access to delete files, install programs or extricate private data (e.g., passwords). Examples are Zeus and Zitmo.
### II-D Spyware
Spyware spies on users without their knowledge or consent. It is generally used to surveil user activities, gather keystrokes or harvest sensitive information (e.g., login credentials). Examples of spyware are LogKext and GPSSpy. The following are popular spyware sub-categories:
#### II-D1 Adware
Adware dispenses either spiteful code/content or ads to infected users via a web browser, mobile app or PC desktop to generate revenue for the attacker. Another name for this malware is malvertising, as it may use reputed companies/banners to distribute malicious codes. It can be considered a subcategory of spyware, but is unlikely to cause big harm unless coupled with other spywares. Examples are AllSearchApp and Plankton.
Pornware is also seen as a subclass of adware when installed maliciously without the user’s knowledge to display pornographic materials.
#### II-D2 Keylogger
This malware is also called keystroke logger, password grabber, sniffer or information-stealing malware; it is employed by attackers to record each keystroke to steal sensitive data (e.g., passwords, credit card numbers). A keylogger is generally transferred to a system when malicious software is installed or a malicious site is visited. Examples are SpyEye and Formgrabber.
#### II-D3 Trackware
Trackware is unwanted software that tracks and collects user activities and habits and then shares the data with a third party. Though trackwares harm the user’s privacy, they do not harvest confidential or personally identifiable information. Examples are Trackware.Rewardnet and Win32/WebHancer.A.
#### II-D4 Cookie
Cookies are plain text files containing information about a user’s web browsing sessions. They are stored on the user’s computer/device for future use. Although cookies seemingly are not detrimental, they can be a menace if exploited by some spyware. Likewise, tracking cookies can be utilized by hackers to gain the user’s personal details.
#### II-D5 Riskware
Riskware (aka grayware) is a genuine program that, when utilized by an attacker, can cause damage to system security or data by deletion, duplication, or modification. Authentic programs that can act as riskware include IRC clients, file downloaders, etc.
#### II-D6 Sniffer
It is a malicious program that observes and captures network traffic. It analyzes various fields of packets and gathers data in preparation for malware attacks. Sniffers can be ‘Ethereal’ (i.e., legitimate, used for troubleshooting) or ‘BUTTSniffer’ (i.e., illegitimate, for malign purposes). Examples of sniffers are Wireshark and Aircrack-ng.
### II-E Ransomware
Ransomwares are covertly installed on a victim computer to execute a cryptovirology attack. This malware type encrypts the data or locks down the system, thereby restricting user access until a ransom is paid. Specifically, ransomwares can be classified into two main groups, viz. locker ransomwares, which deny access to the system/device functionalities, and crypto ransomwares, which avert access to files/data. Examples of ransomware are FakeDefender and TorrentLocker.
### II-F Scareware
Scareware deludes people into buying/downloading inessential and potentially perilous security software, opening attachments or visiting a malevolent website. It mostly attempts to frighten users (e.g., by displaying false warning messages), and when installed it collects stored information from the victim system, which may be sold to cybercriminals. Examples are Mac Defender and Registry Cleaner XP.
### II-G Diallerware
Diallerware sends premium-rate SMS/multimedia messages without the mobile user’s knowledge, thereby causing financial loss to the user. Premium-rate SMS/multimedia messages/calls provide value-added services, which can be abused by attackers. Attackers lure mobile owners into signing up for premium services managed by the attackers themselves, e.g., HippoSMS. Diallerware blocks messages from service providers to users so that users remain unaware of the unwanted additional charges.
### II-H Bot
A bot (abbreviated from robot) is a malicious program that enables an attacker (aka botmaster or bot herder) to remotely control an infected machine without the user’s knowledge via a command and control (C&C) channel from a system called the C&C server. A cluster of bots controlled by a sole server is known as a botnet. Botnets can be employed to organize DDoS attacks, phishing fraud, spam campaigns, etc. Well-known examples are Sdbot and Agobot.
#### II-H1 Spamware
Spamware (aka spam-sending malware or spambot) is malicious software designed to search for and compile lists of email addresses as well as to send large numbers of spam emails. It is an element of a botnet functioning as a distributed spam-sending network. Spamware can use the infected user’s email ID or IP address to send emails, which may consume a great amount of bandwidth and slow down the system. Examples are Trik Spam and the Necurs botnet.
#### II-H2 Reverse Shell
A reverse shell is an unauthorized program (malware) that gives the attacker access to a compromised computer. A reverse shell enables the attacker to run and type commands on the host as if the attacker were local. Examples are Netcat and JSP web shell.
### II-I Rootkit
A rootkit is stealthy software devised to conceal specific programs/processes and to enable privileged access to a computer/data. A rootkit allows the attacker to access and control the system remotely without being detected, as it normally runs with root privileges and subverts system logs and security software. Examples are NTRootkit and Stuxnet.
#### II-I1 Bootkit
A bootkit is an advanced form of rootkit that infects the master boot record or volume boot record. Since it resides in the boot sector, it is difficult for security software to detect, and it stays active after a system reboot. Well-known examples are BOOTRASH and FinFisher.
### II-J Backdoor
A backdoor is malware that installs itself and creates a secret entrance for attackers to bypass the system’s authentication procedures and to access it and perform illegitimate activities. Backdoors are never utilized alone but as precursors to malware attacks of other kinds, as they do not cause harm themselves but furnish wider attack surfaces. A notable backdoor tool is the Remote Access Terminal/Trojan (RAT). Other examples are Basebridge and Olyx.
### II-K Browser Hijackers
It is undesired software that alters the settings of a web browser without the user’s consent, either to inject ads into the browser or to replace the home/error page and search engine. Some browser hijackers may also access sensitive data via spyware. Examples are CoolWebSearch and RocketTab.
### II-L Downloader
It is a malicious program that downloads and installs/runs new versions of malwares from the internet on compromised computers. A downloader is usually embedded in websites and software. Examples are Trojan-Downloader:W32/JQCN and Trojan-Downloader:OSX/Jahlev.A.
Figure 2: A generic malware detection and analysis system. First, input sample
is provided to feature extraction module that yields feature representation
vector. A feature reduction/selection process is carried out on feature
representation vector to obtain fixed dimensionality regardless of length of
input sample for enhanced performance. A classification/clustering technique
is trained on available set of malware and benign samples. During
detection/analysis, unseen sample is reported by the classification/clustering
techniques as malware or not. Further analysis is also sometimes performed,
e.g., describing suspicious (or benign) characteristics present in the sample.
## III Malware Concealment Techniques
To evade anti-malwares, malware writers have applied the following malware camouflage approaches [3]:
### III-A Encryption
Malware encrypted by this method consists of encryption and decryption algorithms, keys and malicious codes. Each time, the attacker employs a new encryption algorithm and key to generate a novel malware version. Since the decryption algorithm remains the same, there is a higher probability of detection. The main target of this procedure is to avoid static analysis and to delay the investigation process. CASCADE, reported in 1987, was the first encrypted malware.
### III-B Packing
The packing mechanism is utilized to compress/encrypt a malware executable file. To detect malwares that use the packing technique, reverse engineering methods or the correct unpacking algorithm is needed, which is sometimes hard as it requires knowledge of the true packing/compression algorithm. UPX and Upack are examples of packers.
### III-C Obfuscation
This technique obscures a program’s principal logic to stop others from gaining associated knowledge of the code. Obfuscated malwares and their deleterious functionality stay unintelligible until activated. Quintessential obfuscation strategies are inserting inessential jumps and including garbage commands.
### III-D Polymorphism
Polymorphic malware is designed to alter its appearance every time it is executed while retaining its original code entirely. Compared to the encryption technique, a boundless number of encryption algorithms can be utilized by a polymorphic malware such that in each execution a portion of the decryption code is mutated. The transformation engine is generally kept in the encrypted malware. When any mutation occurs, a random encryption algorithm is produced to re-encrypt the engine and malware with a new decryption key. Different inimical actions can be embedded under the encryption operations. Since the original code remains intact, polymorphic malwares are comparatively easy to detect. The first known polymorphic virus, 1260, was developed in 1990.
### III-E Metamorphism
Metamorphic malware (aka body-polymorphic malware) mutates its malevolent codes in each execution to create a novel instance that has no similitude with the native codes, while the functionality yet remains the same. There are two categories of metamorphic malwares. _Open-world malware_ mutates by communicating with other sites over the net, e.g., the Conficker worm. _Closed-world malware_ reprograms itself without external communication by mutating binary code (i.e., a binary transformer) or employing a pseudocode representation, e.g., the Win32/Apparition virus.
## IV Malware Detection and Analysis System
As Fig. 2 shows, a generic malware detection system consists of four main modules: feature extraction, feature selection, classification/clustering, and decision. The raw data sample is input to the feature extraction module, which extricates salient attributes as a feature set. Next, feature selection is performed to tackle the curse of dimensionality, to reduce the computational complexity, and to increase the performance of the system by quantifying feature correlations. The resultant feature vector is given to a classification/clustering scheme. Finally, the decision module is employed either to acquire the final binary decision, malware or benign (cleanware), and/or for additional malware analysis such as malware variant detection (i.e., recognizing variants and families), malware category detection (i.e., categorizing based on malwares’ prominent behaviors and objectives), malware similarity and novelty detection (i.e., acquiring knowledge about an unknown sample via specific similarities and differences against known ones), malware development detection (i.e., finding out if the malware writer has previously submitted it to online defense tools), and malware attribution (i.e., identifying its programming language, from where it was launched and the actor/group involved).
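As a concrete illustration, the following minimal Python sketch (using scikit-learn; the feature matrix, labels and module choices are hypothetical placeholders, not the system of any cited work) wires the four modules together:

```python
# Minimal sketch of the generic pipeline in Fig. 2 (synthetic, toy data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Feature extraction is assumed done upstream: each row is one sample's
# feature representation vector (e.g., API-call or opcode frequencies).
rng = np.random.default_rng(0)
X = rng.random((500, 200))       # 500 samples, 200 raw features
y = rng.integers(0, 2, 500)      # 1 = malware, 0 = benign (toy labels)

pipeline = Pipeline([
    # Feature selection module: keep the 50 most informative features.
    ("select", SelectKBest(mutual_info_classif, k=50)),
    # Classification module.
    ("classify", RandomForestClassifier(n_estimators=100, random_state=0)),
])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pipeline.fit(X_tr, y_tr)
verdicts = pipeline.predict(X_te)   # decision module: binary verdicts
```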
### IV-A Malware Analysis
In general, malware analysis is deployed both for detection/classification and for other investigations (e.g., understanding the inner working to devise novel identification schemes) of malware. Different features such as strings (i.e., frequency of code fragments, names, etc.), byte sequences (i.e., characterization of byte-level contents), opcodes (i.e., identification of machine/assembly-level operations), system/API calls (i.e., analyses of execution traces/disassembled code or characterization of APIs’ executed actions), call graphs and data dependencies (i.e., analyses of data being exchanged between process calls), control flow graphs (i.e., behavior relationships of data flow between system resources), multilayer dependency chains (i.e., characterization of sub-behaviors to capture interactions among samples and system levels), causal dependency graphs (i.e., tracking persistent state changes on the target system), influence graphs (i.e., encoding of downloads by malware), memory accesses (i.e., analyses of memory during malware executions), file system (i.e., frequency of created/deleted/modified files), system registry (i.e., count of queried/deleted/modified registry keys), CPU registers (i.e., frequency of register usages/changes), function length (i.e., number of bytes in a function), exceptions (i.e., exceptions prompted during malware execution) and network traffic (i.e., analyses of incoming and outgoing packets, visited addresses, etc.) are being used for malware analysis. Malware analysis can be conducted in the following three ways:
#### IV-A1 Static Analysis
It is also called signature-based, code analysis, white-box or misuse
detection approach. Methods in this category generally review statically the
code-structure for traits of infections using a pre-defined list of known
assails’ signatures without executing the sample [4]. However, advanced static
analysis techniques may run the sample by deploying reverse engineering, i.e.,
obtaining binary and assembly codes using decompiler, disassembler and
debugger.
Hellal _et al._ [5] presented a call code graph mining based static analysis
scheme, called minimal contrast frequent subgraph miner algorithm, to
distinguish variants of malware in Windows environment. Schultz _et al._ [6]
used features like list of DLLs functions, system calls and hex-dump to detect
novel and unseen malicious executables. Martin _et al._ [7] designed a malware
detection method that uses third-party API calls in Java files and multi-
objective optimization classification. Narayanan _et al._ [8] developed a multi-view (MKLDROID) framework utilizing a graph kernel with multiple kernel learning to determine sets of semantic structures and contextual information from Android apps for malware/malicious code localization. Yerima and Sezer [9] proposed an Android malware detector that analyzes permissions and intents from the apps via a multilevel classifier rank fusion architecture. Recently, Cakir _et al._ [10] designed a shallow deep learning based method that employed word2vec features via opcodes and a gradient boosting classifier.
Though static analysis techniques are capable of quickly recognizing malwares in versatile applications and pose no risk of infection while analyzing malwares, they need huge pre-defined signature datasets. Moreover, they suffer from runtime overhead and cannot discriminate variations of known or obscure malwares and zero-day intrusions.
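As a toy illustration (hypothetical code, not any cited system), the sketch below derives opcode bigram counts, one of the static features listed above, from a disassembled instruction sequence:

```python
# Toy static-analysis featurizer: opcode n-gram counts.
from collections import Counter

def opcode_ngrams(opcodes, n=2):
    """Count n-grams over a sample's opcode sequence (from a disassembler)."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

# Hypothetical opcode trace of one executable.
trace = ["push", "mov", "call", "pop", "mov", "call", "ret"]
features = opcode_ngrams(trace)
print(features[("mov", "call")])   # 2
```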
#### IV-A2 Dynamic Analysis
It is also called behavior-based, behavioral analysis, anomaly-based,
specification-based or black-box approach. Methods in this category assess
samples via their activities by executing them in a confined/simulated
environment, e.g., sandboxed, simulator, debugger, virtual machine or
emulator.
Miao _et al._ [11] proposed a bi-layer behavior abstraction technique via semantic examination of dynamic API sequences in the Windows environment. Lower- and higher-layer behaviors were captured using the data dependence of APIs and interpretable compositions of the lower abstractions, respectively. In [12],
authors developed a graph-based model harnessing relations (i.e., dependency
graphs) among system-calls’ groups for smartphone malicious software
detection, but the model requires high time consumption. Authors in [13]
presented a compression-based feature mining on system/API calls’ quantitative
information flow graphs to detect Windows malware. Mao _et al._ [14] designed a security dependence network from access behaviors to evaluate the importance of system resources (e.g., files, registry, and processes) for malware detection.
While, Egele _et al._ [15] presented a dynamic blanket execution function that
employs high-level API-relevant semantic features. Enck _et al._ [16]
presented _TaintDroid_ for dynamic taint examination to trace leakage of
sensitive data (e.g., microphone, GPS and camera) in third-party apps. Ye _et
al._ [17] presented a deep learning strategy comprised of AutoEncoder,
multilayer restricted Boltzmann machines and associative memory. The framework
detects malware in embedded systems via Windows API calls extricated from
portable executable files.
Though dynamic analysis techniques are independent of the malware source code and can detect unknown and zero-day malware instances, they require more resources (e.g., memory, CPU time and disk space) and have high computational costs and false positive rates.
#### IV-A3 Hybrid Analysis
It is also called the gray-box approach. Neither static nor dynamic analysis methods alone are able to provide perfect anti-malware solutions. Thus, hybrid analysis approaches, which combine the benefits of both static and dynamic analyses, are more desirable. For instance, Santos _et al._ [18] designed a
hybrid method that integrates static (i.e., opcodes frequency) and dynamic
(i.e., executable’s execution trace data) features with multitude of
classifiers. The authors in [19] proposed a hybrid technique that collects system call runtime data and then utilizes a static scheme for mobile malware detection. Meanwhile, Dali _et al._ [20] developed a method that uses the FlowDroid static analysis tool and sensitive source data flows with a deep learning-based classifier.
### IV-B Feature selection
The performance of malware detection depends on the choice of feature representation and length. Feature selection/dimensionality reduction is conducted to attain a set of more discriminative features for enhanced performance. Various anti-malwares have been presented using filter, wrapper and embedding based feature selection algorithms such as distributed-, hierarchical-, correlation-, low-rank matrix approximation-, forward-, backward-, locality sensitive hashing-, max relevance-, adaptive feature scaling-, spectral graph theory-, F1-score-, F2-score-, mean decrease impurity-, document frequency-, information gain-, information gain ratio-, principal component analysis- and latent dirichlet allocation-based methods [3].
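For illustration, a minimal sketch of filter-style selection via information gain (approximated here by mutual information in scikit-learn; the data is synthetic) is:

```python
# Sketch: filter-style feature selection via mutual information.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(1)
X = rng.random((300, 40))        # 300 samples, 40 candidate features
y = rng.integers(0, 2, 300)      # toy malware/benign labels

scores = mutual_info_classif(X, y, random_state=1)
top10 = np.argsort(scores)[::-1][:10]   # 10 most informative feature indices
X_reduced = X[:, top10]
```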
### IV-C Classification/Clustering
To identify whether a given sample is malicious and/or to determine its malware family, various binary and multiclass classification techniques such as Multilayer Perceptron, Support Vector Machines, Naïve Bayes, Decision Tree, Rule-based, Random Forests, Multiple Kernel Learning, $K$-Nearest Neighbors, Logistic Regression, Ensemble, Multi-Objective Evolutionary by Genetic Algorithm, and Deep Belief Networks have been employed [4].
Hierarchical-, $K$-means-, meanShift-, $K$-medoid partitional-, density-based spatial-, prototype-, self-organizing maps-, single-linkage- and locality sensitive hashing-based clustering techniques have been utilized to categorize malware samples exhibiting identical behaviors into groups or to generate signatures for detection [3].
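As a minimal sketch (synthetic behavior vectors; $K$-means is just one of the techniques listed above), grouping samples with similar behavior profiles could look like:

```python
# Sketch: grouping samples with similar behavior vectors via K-means.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
behavior_vectors = rng.random((200, 16))   # e.g., API-call frequency vectors

kmeans = KMeans(n_clusters=5, n_init=10, random_state=2)
family_ids = kmeans.fit_predict(behavior_vectors)
# Samples sharing a family_id exhibit similar behavior profiles.
```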
### IV-D Evaluation Metrics
The performance of malware detection methods is generally evaluated by False Positive Rate = FP/(FP + TN), True Positive Rate = TP/(TP + FN), specificity = TN/(TN + FP), precision = TP/(TP + FP) and accuracy = (TP + TN)/(TP + TN + FP + FN), where TP, FP, TN and FN are true positives, false positives, true negatives and false negatives, respectively. Malware samples are commonly considered as positive instances. Moreover, the Matthews correlation coefficient, F-score, Kappa statistic, confusion matrix, receiver operating characteristic and area under the curve measures have been used. For clustering-based algorithms, Macro-F1 and Micro-F1 metrics are used to accentuate the performance on rare and common categories, respectively [3, 4].
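A minimal sketch computing the confusion-matrix metrics defined above (with toy counts) is:

```python
# Sketch: confusion-matrix metrics from Section IV-D (toy counts).
def detection_metrics(tp, fp, tn, fn):
    return {
        "FPR": fp / (fp + tn),
        "TPR": tp / (tp + fn),               # a.k.a. recall / sensitivity
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

print(detection_metrics(tp=90, fp=5, tn=95, fn=10))
```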
## V Research challenges and opportunities
The ever-growing demand for minimized failure rates of anti-malware solutions has opened up exigent research opportunities and challenges that are yet to be resolved.
### V-A Issues in existing anti-malware methods
Malwares are still evolving exponentially in sophistication, and more difficult plights lie ahead. Most prior static methods do not work for novel/unknown/zero-day signatures, while dynamic and hybrid methods require virtual environments and are time consuming. Nonetheless, virtual environments are becoming less effective as malware writers are usually one step ahead, implementing high-level new techniques to conceal malicious features. Though efforts are afoot to design multi-level and parallel processing systems, existing anti-malware methods/tools are, all in all, not adequate or potent for higher levels of concealment. Current anti-malware systems also face challenges like scalability, lack of truly representative real-world datasets, irreproducibility of published results, low generalization and detection disagreement among them for the same samples. There is a need for improved and comprehensive malware countermeasures, which could be developed by utilizing recent advances in machine/deep learning, data mining and adaptive schemes. Also, approaches embodying anomaly analysis with behavioral data should be designed to investigate what the malware is doing rather than how it is doing it. This may result in minimized error and false alarm rates.
### V-B Advanced machine learning (AML) techniques for anti-malware
Quintessential anti-malwares often depend on explicit non-linear adversary models and expert domain knowledge, making them prone to overfitting and lowering overall reliability. Conversely, AML techniques attempt to imitate attackers with various contents, contexts, etc. rather than explicit models/systems/attacks. A few preliminary studies on shallow AML usage for anti-malware have been conducted, but a lot of effort regarding AML anti-malware remains. For improved accuracy, flexibility and scalability on wide-ranging and unknown samples, AML paradigms like open set recognition, more complex and residual deep learning, dictionary learning and data mining should be explored for feature segmentation/representation learning/selection/classification and for determining temporal relationships within and between malware sections.
### V-C Mobile device malwares
The number of smart devices connected to the internet is growing exponentially, and so are malwares (especially via third-party apps) against them. Few substantial studies have been conducted on mobile device malwares. Moreover, most existing anti-malware techniques are not real-time and are unsuited for mobile devices because of the high computational cost and/or complexity of the features used for analysis. Thus, real-time lightweight mobile anti-malware via Bayesian classification is an interesting research direction to be explored. Multiple information sources from in-built sensors (e.g., accelerometer) may enhance mobile anti-malware performance. Mobile hardware malware detection and removal is another issue that needs serious exploration. Sooner or later, mobile anti-malware-inspired techniques will substantially impact smart-device design. In any case, smart-device malwares should be tackled with both preventive and effective countermeasures. App developers should ensure that their apps abide by security and privacy policies. App store administrators should vet and remove dubious apps. Users should use superior anti-malwares and install trusted apps. On the whole, wearable and mobile device malware and anti-malware are a new research field in cybersecurity with pressing problems worth researching, like malware affecting a device’s master boot record or stealthily exploiting a device to mine cryptocurrency, and how an anti-malware performing well on benchmark data will fare under real-world environments.
### V-D Large-scale benchmark databases
Advancement in malware research deeply depends on the public availability of comprehensive benchmark datasets incorporating accurate labels and contexts. Most existing databases suffer from limitations like small size, missing information/features, imbalanced classes, and not being publicly available. The lack of adequate large-scale public datasets has stymied research on malware. Benchmark public datasets will help to compare independent anti-malware schemes, determine inter- and intra-relationships between security infringement phenomena and unify malware findings to draw firm conclusions with reference to statistical significance. Nevertheless, collecting large-scale heterogeneous annotated databases is challenging and time- and resource-consuming due to the diversity of malware attributes, forms and behaviors. Crowdsourcing may help in accumulating different annotated large-scale datasets.
### V-E Graph-based malware analysis
Malwares with concealments are dominant nowadays and effectual in evading conventional anti-malwares, which largely disregard learning and identifying the underlying relationships between samples and variants, and contextual information. Graph-based relationship representations and features (e.g., data- and control-flow graphs, call graphs, data-, program-, and control-dependency graphs) offer interesting possibilities even when the malware code is altered, as they help in tracking malware genealogy in different settings. Devising graph-based anti-malwares still has issues with data heterogeneity, noisy and incomplete labels, and computational cost during real-time detection. To some extent, such challenges may be addressed in a decentralized fashion. Furthermore, the use of multiple directed and undirected graphs, multi-view spectral clustering, heterogeneous networks, multiple graph kernel learning, dynamic graph mining and deep graph convolution kernels to capture contextual and structural information could be a fruitful area of research.
### V-F Bio-inspired anti-malware
Several limitations of traditional anti-malwares could be overcome by bio-inspired (e.g., biological immune system, biological evolution, genetic algorithms and swarm intelligence) techniques. Comparatively, these techniques are lightweight, highly scalable and less resource-constrained. Adaptive bio-inspired techniques used both for intelligent concealment-invariant feature extraction and for classification can dramatically enhance accuracy in the wild. Bio-inspired methods that define particular objective functions to discriminate a system under attack from one that is malfunctioning or failing may also help strengthen security. The combination of bio-inspired algorithms with deep neural networks is one of the most promising directions; however, it has been explored less in anti-malwares.
### V-G Defense-in-depth anti-malware
An anti-malware strategy that is composed of multiple defense levels/lines rather than a single one is called defense-in-depth. Such a strong defensive mechanism is expected to be more robust, as it does not depend on one defense technique, and if one is breached the others are not. Each machine/cyber-system architecture can be divided into various levels of depth; e.g., in a power grid system, the meters, communication frameworks, and smaller components, respectively, could be envisaged as the lowest, intermediate and highest levels. Another solution is active or adaptive malware defense. Active defense has received little attention due to its inherent complexity; here the developer anticipates attack scenarios at different levels and accordingly devises malware countermeasures. In adaptive defense, the system is persistently updated by retraining/appending novel features or is dynamically adjusted to reshaping environments. Adaptive defenses would need to be fast, automated and computationally effective, and could use unsupervised learning and concept drift detection.
### V-H Internet of things (IoT) attacks
IoT devices are progressively being used in different domains ranging from smart cities to smart and military grids. Despite the finest security endeavors, IoT devices/systems can also be compromised by innovative cyber-attacks. The security of IoT technology is all the more crucial as it is always connected to a network. IoT cyber-security is a relatively new research realm and quite challenging owing to heterogeneous networks with multisource data and several categories of nodes. To this end, different routes (e.g., predictive and blockchain) could be effective. Predictive security means attaining cyber resiliency by devising models that predict future attacks and prevent them in advance. As there is a strong correlation between security infractions and human blunders, predictive models should consider computer networking, social sciences, descriptive theory, uncertain behavior theory and psychology from the attackers’, users’ and administrators’ perspectives at different granularity levels. Blockchain can be utilized for self-healing of compromised devices/systems. Models could be devised that exploit, e.g., redundancy to heal corrupted codes/software by replacing them with good codes, since in a blockchain one can trace and roll back the firmware versions. However, such models should also be capable of handling resource, energy and communication constraints, which may be achieved by lightweight machine/transfer/reinforcement learning based access control protocols.
### V-I Deception and moving target anti-malware techniques
Deception techniques (e.g., honeypots) are being used to detect and prevent malwares; they lure adversaries into striking in order to mislead them with false information. There are two kinds of honeypots, i.e., client and server. Honeypots help to reduce false positives and prevent DDoS attacks. Complex attacks/tools (e.g., polymorphic malware) are increasingly able to identify honeypots or to alter their behaviors to deceive the honeypots themselves. Also, honeypots can be exploited by attackers to undermine other sensitive parts of frameworks. More sophisticated honeypot and honeynet (i.e., a bunch of honeypots) schemes (e.g., shadow honeypots) should be devised, as a compromised honeypot will put the security of the whole organization in danger.
Moving target techniques (aka dynamic platform methods, DPMs) dynamically randomize system components to suppress the likelihood of successful attacks and shorten attack lifetimes. Though an adversary must undermine all platforms, not just one, to evade DPMs, DPMs require complicated application state synchronization among varying platforms and expand the system’s attack surface. Much less effort has been dedicated to developing well-articulated attack models and to upgrading deception elements and strategies to confront dynamic changes in attack behaviors. Future research should concentrate on devising unorthodox methodologies, performing real-world analyses to compute and compare the effectiveness of deception and DPM techniques, and studying whether DPMs conflict or can co-exist with other anti-malwares.
### V-J Decentralized anti-malware
Data sharing and trust management hinder the advancement of current anti-malwares; this can be resolved by decentralized malware detectors using blockchain technology, but it has received little attention so far. At the intersection of anti-malware and blockchain technology, future directions include handling overhead traffic, dealing with quality and sparsity of malware signatures, building accurate dynamic models of normal traffic, reducing massive false alerts, energy and cost, blockchain latency, case-by-case scenario investigation, and more proof-of-concept implementations.
### V-K Botnet countermeasures
Thwarting botnets has become a key area, and several botnet detection and defense architectures have been proposed. Various issues surround botnet countermeasure research, e.g., the difficulty of testing devised botnet defenses on real scenarios/data. Besides, there is a lack of widely acknowledged benchmarks or standard methodologies for quantitatively evaluating or comparing bot defenses, presumably due to privacy and data-sharing concerns. Botnets, including IoT bots and socialbots, will continue to rise until effective means, both technical and non-technical, are taken. Technical factors include passive internet service providers and unassertive software; non-technical factors include establishing a distributed global environment, local and multinational legal issues, and poor user awareness.
### V-L Privacy preservation
Malware that steals sensitive information has received much attention. However, preserving user privacy in malware analysis (especially at the cloud or a third-party server) and in malware data sharing is still an open and seldom-touched concern. Once a user's privacy or data is compromised, establishing privacy and regaining trust in commercial anti-malware becomes difficult. The majority of prior anti-malware overlooks the privacy and security of the user, data and network; thus, comparatively little work exists on privacy-protection frameworks that respect public and legal opinion. Privacy-preservation mechanisms that do not degrade detection performance are practically worthy of contemplation. Formulating lightweight detection and privacy-protection systems usable on mobile devices, balancing security, efficacy, privacy and power consumption, demands special consideration. More innovative privacy-preservation approaches in malware analysis (e.g., allowing the user to tune the privacy, convenience and security levels) have been highlighted by many experts as essential future research.
### V-M Big data malware analysis
The demand for big-data malware-analysis frameworks is steadily expanding. Practitioners are working to resolve big-data malware challenges such as volume (e.g., collecting, cleaning and compressing data/labels), velocity (e.g., real-time online training, learning, processing or streaming of big data), variety (e.g., heterogeneous multi-view data learning/embedding), veracity (e.g., learning with contradicting and unreliable data), and value (e.g., explainable ML-based malware analysis). Another promising future research direction is devising large-scale feature-selection techniques that are less dependent on feature engineering, via distributed feature selection, low-rank matrix approximation, adaptive feature scaling, spectral graph theory, and fuzzy and neuro-fuzzy clustering. Rigorous efforts are needed to investigate the use of synchronous parallel processing (e.g., Spark) and to develop a body of knowledge on the pros and cons of big-data anti-malware tools to assist practitioners.
### V-N Malware analysis visualization systems
Existing methods for analyzing malware are time-consuming for malware analysts. Highly interactive visual analysis would help researchers and practitioners forensically investigate, summarize, classify and compare malware more easily. Most prior techniques are very limited with regard to interactivity, the mapping of temporal dimensions, scalability and representation space (e.g., they are superficially 2D rather than 3D). Developing malware visualization systems that cover a consequential range of malware types and environments is a vital and emerging field, and encyclopedic visualization systems will lead analysts and researchers to new research domains in the years to come.
### V-O Multimodal anti-malwares
Multimodal anti-malware, which consolidates evidence from different kinds of features/sources (e.g., strings, permissions, elements converted to image matrices), can overcome numerous constraints of frameworks that consider only one or a few features. Multimodal frameworks are more flexible and can significantly enhance the accuracy of unimodal ones in the wild. Multimodality may involve multiple sensors, algorithms and instances, and information can be fused at the feature, score or decision level. There is ample room to develop novel fusion architectures. Moreover, multimodal frameworks are expected to be intrinsically more robust to concealment, but no study has yet investigated how robust they actually are.
### V-P Clustering for malware analysis
Previous works have shown that clustering can be a useful tool: to effectively classify unknown malware for improved generalization, to highlight unseen families' behaviors for thorough analysis that may enable more robust anti-malware schemes, and to label huge volumes of malware in a fast and automatic fashion, which has become a major challenge. A future goal should be to further improve the accuracy of clustering-based malware analysis using cluster-quality measurements, contextual/metadata information, boosted genetic algorithms, etc. Attention should also be given to rectifying security issues, e.g., poisoning and obfuscation attacks against targeted clusters.
### V-Q Hardware-based Solutions
Hardware-based detectors have recently been gaining momentum against the proliferation of malware. Such detection mechanisms utilize low-level architectural features obtained by redesigning the micro-architecture of computer processors, e.g., CPUs with special registers that expose hardware and software anomaly events. Nevertheless, research in this domain and on trustworthy systems (i.e., systems inherently secure and reliable against human errors and hostile parties) is still in its initial genesis and has a long way to go. Furthermore, there is a dearth of studies on the efficacy of anti-malware that combines hardware- and software-based techniques, which has exceptional potential to uncover extra-elaborate malware. Likewise, data from smart devices' sensors (e.g., GPS and ambient-light sensors) could be used as an additional feature vector to profile malware.
### V-R Malware adversarial learning
Machine learning (ML) has recently been used to achieve effective malware defenses; however, these defenses are not designed for situations where an adversary actively tries to influence the outcome. In particular, deep-learning-based countermeasures lack robustness against adversarial examples (i.e., samples crafted from genuine samples with careful minor perturbations). Attackers can also inject poisoning samples into an (online/adaptive) training database with the aim of remarkably decreasing an ML countermeasure's accuracy at the testing phase. A comprehensive analysis, for each malware, of the attacker's capability and of which features must be modified, and to what extent, to avoid detection has not yet been carried out. It remains a difficult task to design ML anti-malware that is robust in adversarial settings. Researchers should explore malware adversarial ML by identifying probable vulnerabilities of countermeasures at both the training and testing stages, devising the corresponding attacks and assessing their impact, and developing techniques to enhance the robustness of ML-based anti-malware.
### V-S Performance evaluation framework
Malware-analysis accuracy/performance, which is used to evaluate, compare, or configure anti-malware, generally lacks standardization. A unified and comprehensive evaluation framework should be developed to rank present and future methods, incorporating static and dynamic techniques; the adversary's goal, knowledge and capability; attack strategies at the training and testing phases; evaluation metrics (i.e., security- and privacy-relevant error rates, as most current methods do not cover all aspects); and a common parlance for elucidating anti-malware performance. Any such framework, with common criteria and an open online platform for evaluating resilience, malware sophistication, decision making, policies, experimental setups, big databases, and open-source code, will surely help both in reporting baseline performance without giving a false sense of progress and in encouraging reproducible research on scalability and on challenges in real-world scenarios.
### V-T Malware education
Most malware succeeds by exploiting humans as the weakest link. Additionally, there is a growing demand for a cybersecurity workforce; it is therefore imperative to educate people about malware safety. In academic institutions, malware analysis and related courses should be taught at both the undergraduate and graduate levels. Nonetheless, relatively few colleges/universities offer malware courses, perhaps because of the shortage of agreement on fundamental topics among institutions, book and training providers, and the ethical sensitivity of educating/creating white-hats. Moreover, most academic courses currently offered are practitioner-oriented rather than science-/research-oriented and rely heavily on textbooks that are not current. Some training camps/workshops are also held by companies/organizations for the general public, but they are exceptionally expensive. More online free-to-access training courses would surely diminish malware damage.
### V-U Interdisciplinary research
To advance the state of the art in malware analysis, the research and industrial communities need to support and promote interdisciplinary fundamental scientific research and development (including contributions from machine learning, human psychology, computer engineering, etc.) to accomplish dependable, natural, and generalized anti-malware techniques.
## VI Conclusion
Malware, including on mobile and smart devices, has become more sophisticated and more frequent in recent years. Although many defense tools and mechanisms exist, malware detection and analysis remain challenging tasks, since malware developers continuously conceal information in their attacks or evolve their cyber-attacks to circumvent newer security techniques, and since some prior methods suffer from poor generalization to unknown malware and from scalability issues. It is hoped that this academic and perspective article will stimulate focused interdisciplinary research and development in anti-malware towards realizing its full potential in different cyberspace applications.
# Learning Based Signal Detection for MIMO Systems with Unknown Noise
Statistics
Ke He, Le He, Lisheng Fan, Yansha Deng, George K. Karagiannidis, _Fellow, IEEE_, and Arumugam Nallanathan, _Fellow, IEEE_

K. He, L. He and L. Fan are with the School of Computer Science and Cyber Engineering, Guangzhou University, China (e-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>; [email protected]). Y. Deng is with the Department of Informatics, King's College London, London WC2R 2LS, UK (e-mail: [email protected]). G. K. Karagiannidis is with the Wireless Communications Systems Group (WCSG), Aristotle University of Thessaloniki, Thessaloniki 54 124, Greece (e-mail: [email protected]). A. Nallanathan is with the School of Electronic Engineering and Computer Science, Queen Mary University of London, London, U.K. (e-mail: [email protected]).
###### Abstract
This paper aims to devise a generalized maximum likelihood (ML) estimator to
robustly detect signals with unknown noise statistics in multiple-input
multiple-output (MIMO) systems. In practice, there is little or even no
statistical knowledge on the system noise, which in many cases is non-
Gaussian, impulsive and not analyzable. Existing detection methods have mainly
focused on specific noise models, which are not robust enough with unknown
noise statistics. To tackle this issue, we propose a novel ML detection
framework to effectively recover the desired signal. Our framework is a fully
probabilistic one that can efficiently approximate the unknown noise
distribution through a normalizing flow. Importantly, this framework is driven
by an unsupervised learning approach, where only the noise samples are
required. To reduce the computational complexity, we further present a low-
complexity version of the framework, by utilizing an initial estimation to
reduce the search space. Simulation results show that our framework
outperforms other existing algorithms in terms of bit error rate (BER) in non-
analytical noise environments, while it can reach the ML performance bound in
analytical noise environments. The code of this paper is available at
https://github.com/skypitcher/manfe.
###### Index Terms:
Signal detection, MIMO, impulsive noise, unknown noise statistics,
unsupervised learning, generative models.
## I Introduction
Consider the linear inverse problem encountered in signal processing, where
the aim is to recover a signal vector $\bm{x}\in\mathbb{C}^{N\times 1}$ given
the noisy observation $\bm{y}\in\mathbb{C}^{M\times 1}$, and the channel
response matrix $\bm{H}\in\mathbb{C}^{M\times N}$. Formally, the observation
vector can be expressed as
$\displaystyle\bm{y}=\bm{H}\bm{x}+\bm{w},$ (1)
where $\bm{w}\in\mathbb{C}^{M\times 1}$ is an additive measurement noise, that
is independent and identically distributed (i.i.d) with an unknown
distribution $p_{\bm{w}}(\bm{w})$. From a Bayesian perspective, the optimal
solution to the above problem is the maximum a posteriori (MAP) estimation
$\displaystyle\hat{\bm{x}}_{MAP}=$
$\displaystyle\arg\max_{\bm{x}\in\mathcal{X}}p(\bm{x}|\bm{y}),$ (2)
$\displaystyle=$
$\displaystyle\arg\max_{\bm{x}\in\mathcal{X}}p(\bm{y}|\bm{x})p(\bm{x}),$ (3)
where $\mathcal{X}$ denotes the set of all possible signal vectors. When there
is no prior knowledge on the transmitted symbols, the MAP estimate is
equivalent to the maximum likelihood estimation (MLE), which can be expressed
as
$\displaystyle\hat{\bm{x}}_{MAP}=\hat{\bm{x}}_{MLE}=$
$\displaystyle\arg\max_{\bm{x}\in\mathcal{X}}p(\bm{y}|\bm{x}),$ (4)
$\displaystyle=$
$\displaystyle\arg\max_{\bm{x}\in\mathcal{X}}p_{\bm{w}}(\bm{y}-\bm{H}\bm{x}).$
(5)
In most of the existing works in the literature, the noise $\bm{w}$ is assumed
to be additive white Gaussian noise (AWGN), whose probability density function
(PDF) is analytical and the associated likelihood of each possible signal
vector is tractable. In this case, the MLE in (5) becomes
$\displaystyle\hat{\bm{x}}_{\text{E-MLE}}=\arg\min_{\bm{x}\in\mathcal{X}}\|\bm{y}-\bm{H}\bm{x}\|^{2},$
(6)
which aims to minimize the Euclidean distance and is referred to as E-MLE. However, in practical communication scenarios, we may have little or even no statistical knowledge of the noise. In particular, the noise may present impulsive characteristics and may not be analytically tractable. For example, the noise distribution becomes unknown and mainly impulsive in scenarios such as long-wave and underwater communications and multiple-access systems [1, 2, 3, 4]. In these cases, the performance of E-MLE deteriorates severely [5]. In contrast to the Gaussian case, the exact PDF of impulsive noise is usually unknown and not analytical [3, 4, 6, 7], which means that the exact likelihood $p(\bm{y}|\bm{x})$ is computationally intractable.
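For concreteness, a brute-force implementation of the E-MLE search in (6) can be sketched as follows (our own toy code, not from the paper's repository; all helper names are ours):

```python
import itertools

import numpy as np

def e_mle(y, H, symbols):
    """Exhaustive Euclidean-distance MLE, cf. (6): scan all P^N candidates."""
    N = H.shape[1]
    best_x, best_dist = None, np.inf
    for cand in itertools.product(symbols, repeat=N):
        x = np.array(cand)
        dist = np.linalg.norm(y - H @ x) ** 2  # Euclidean metric
        if dist < best_dist:
            best_x, best_dist = x, dist
    return best_x

# toy usage: 2x2 MIMO with QPSK symbols
rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
x_true = rng.choice(qpsk, size=2)
y = H @ x_true + 0.05 * (rng.standard_normal(2) + 1j * rng.standard_normal(2))
print(e_mle(y, H, qpsk), x_true)
```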
### I-A Related Research
In general, there are two major approaches to solve the problem of signal
detection in MIMO systems: model-driven and data-driven. Next, we briefly
present both of them.
#### I-A1 Model-Driven Methods
Model-driven approaches have been extensively studied in the literature for
MIMO signal detection, by assuming that the noise is Gaussian. Among them, the
approximate message passing (AMP) algorithm is an attractive method, which
assumes Gaussian noise and well-posed channel matrix [8]. The AMP can detect
the desired signal by iteratively predicting and minimizing the mean squared
error (MSE) with a state evolution process [8, 9]. Combined with deep learning
methods, AMP is unfolded into a number of neural layers to improve the
performance with ill-posed channel matrix [10, 11]. Moreover, an efficient
iterative MIMO detector has been proposed in [12] to leverage the channel-
aware local search (CA-LS) technology to significantly improve the signal
detection under Gaussian noise environment. When the noise has an arbitrary
density function, a generalized AMP (GAMP) algorithm can be designed for the
generalized linear mixing model [9], where the sum-product version of the GAMP
algorithm can be treated as a hybrid of the iterative soft-threshold algorithm
(ISTA) and alternating direction method of multipliers (ADMM) [13]. The
Gaussian GAMP (G-GAMP) is equivalent to the AMP, and the GAMP with MMSE
denoiser can be rigorously characterized with a scalar state evolution whose
fixed points, when unique, are Bayes-optimal [14, 15]. Since the GAMP
algorithm extends the AMP algorithm to adapt to arbitrary noise whose PDF is
analytical, it still requires numerical or approximate methods to compute the
marginal posterior, when the noise statistics is unknown [9]. In addition, for
some specific non-Gaussian noises, A. Mathur et al. have investigated the system performance and derived the ML detectors in [16, 17, 18], which is critical for the development of model-driven methods.
When there is no prior knowledge of the noise statistics, researchers have proposed other model-driven approaches that approximate the unknown noise distribution $p_{\bm{w}}(\bm{w})$ and compute the approximate likelihood $p(\bm{y}|\bm{x})$ accordingly. However, this requires a huge amount of computation or sampling loops, so these methods cannot be applied efficiently in practical scenarios. For example, the expectation-maximization (EM) algorithm becomes extremely slow in this case, since the dimension of the data can be very high and the data set can be very large [19]. Moreover, in order to select an appropriate approximate model, the EM algorithm requires some knowledge of the noise statistics; otherwise it may perform poorly [20].
Besides, for the variational inference based approximations in [21, 22, 23],
the noise is assumed to depend on a hidden variable $\bm{\theta}$, so that we
can approximate the associated a posteriori probability
$p(\bm{\theta}|\bm{w})$ with a simpler distribution $q(\bm{w})$, by maximizing
a lower bound of the reverse KL-divergence
$D\left(q(\bm{w})\|p(\bm{w}|\bm{\theta})\right)$. This indicates that the
exact marginal probability of the noise $p(\bm{w})=\int
p(\bm{w},\bm{\theta})\mathrm{d}\bm{\theta}$ as well as the likelihood of
signal vectors remains computationally intractable.
#### I-A2 Data-Driven Methods
In recent years, thanks to the tremendous success of deep learning,
researchers have developed some data-driven methods to solve the problems
encountered in various communication areas [24, 25, 26, 27]. For example,
Y.-S. Jeon et al. have proposed a supervised-learning based novel
communication framework to construct a robust nonlinear MIMO system, which
consisted of the concatenation of a wireless channel and a quantization
function used at the ADCs for data detection [28]. For widely connected
internet of things (IoT) devices, a novel deep learning-constructed joint
transmission-recognition scheme was introduced in [29] to tackle the crucial
and challenging transmission and recognition problems. It effectively improves
the data transmission and recognition by jointly considering the transmission
bandwidth, transmission reliability, complexity, and recognition accuracy. For
the aspect of signal detection, the authors in [30] proposed a projection
gradient descent (PGD) based signal detection neural network (DetNet), by
unfolding the iterative PGD process into a series of neural layers. However,
its performance is not guaranteed when the noise statistics is unknown, since in DetNet the gradient is computed based on the maximum-likelihood criterion for Gaussian noise. Moreover, for noise that is dynamically correlated in the time or frequency domain, deep-learning-based detection frameworks have been proposed to improve the performance of MIMO detectors [31, 32].
In addition, some generative models based on deep learning have been proposed
to learn the unknown distribution of random variables. In particular, the
generative models are probabilistic, driven by unsupervised learning
approaches. Currently, there are three major types of generative models [33,
34], which are variational auto encoders (VAEs) [35, 36], generative
adversarial networks (GANs) [37, 38] and normalizing flows (NFlows) [39, 40,
41]. Recently, deep generative models have been adopted in the literature to solve linear inverse problems [42]. For example, images can be restored with high quality from noisy observations by approximating the natural distribution of images with a generative model [43], which inspires us to solve the linear inverse problem with the aid of data-driven generative models rather than noise statistics.
### I-B Contributions
In this paper, we propose an effective MLE method to detect the signal, when
the noise statistics is unknown. Specifically, we propose a novel signal
detection framework, named _maximum a normalizing flow estimate_ (MANFE). This
is a fully probabilistic model, which can efficiently perform MLE by
approximating the unknown noise distribution through a normalizing flow. To
reduce the computational complexity of MLE, we further devise a low-complexity
version of the MANFE, namely G-GAMP-MANFE, by jointly integrating the G-GAMP
algorithm and MANFE. The main contributions of this work can be summarized as follows:
* •
We propose a novel and effective MLE method, when only noise samples are
available rather than statistical knowledge. Experiments show that this method
achieves much better performance than other relevant algorithms under
impulsive noise environments. Also, it can still reach the performance bound
of MLE in Gaussian noise environments.
* •
The proposed detection framework is very flexible, since it does not require
any statistical knowledge on the noise. In addition, it is driven by an
unsupervised learning approach, which does not need any labels for training.
* •
The proposed detection framework is robust to impulsive environments, since it performs a more effective MLE than E-MLE when the noise statistics is unknown.
* •
We extend the MANFE by presenting a low-complexity version in order to reduce
the computational complexity of MLE. The complexity of this version is very
low, so that it can be easily implemented in practical applications. Further
experiments show that its performance can even outperform the E-MLE under
highly impulsive noise environments.
### I-C Organization
In Section II, we first review normalizing flows and discuss why we choose them to solve the problem under investigation. The proposed detection framework and the implementation details are presented in Section III. We present various simulation results and discussions in Section IV to show the effectiveness of the proposed methods. Finally, we conclude the paper in Section V.
## II Unknown Distribution Approximation
In this section, we firstly present the maximum likelihood approach for the
distribution approximation, and then provide the concept of normalizing flows.
Furthermore, we compare the normalizing flows with other generative models,
and explain the reason why we choose the former method to approximate an
unknown noise distribution.
### II-A Maximum Likelihood Approximation
Let $\bm{w}$ be a random vector with an unknown distribution
$p_{\bm{w}}(\bm{w})$ and
$\mathcal{D}_{\bm{w}}=\{\bm{w}^{(1)},\bm{w}^{(2)},\cdots,\bm{w}^{(L)}\}$ is
a collected data set consisting of $L$ i.i.d data samples. Using
$\mathcal{D}_{\bm{w}}$, the distribution $p_{\bm{w}}(\bm{w})$ can be
approximated by maximizing the total likelihood of the data set on the
selected model $q(\bm{w};\bm{\theta})$ parameterized by $\bm{\theta}$, such as
the mixture Gaussian models. In this case, the loss function is the sum of the
negative log-likelihoods of the collected data set, which can be expressed as
$\displaystyle\mathcal{L}(\bm{\theta})=-\frac{1}{L}\sum_{l=1}^{L}\log
q\left(\bm{w}^{(l)};\bm{\theta}\right).$ (7)
Clearly, (7) measures how well the model $q(\bm{w};\bm{\theta})$ fits the data
set drawn from the distribution $p_{\bm{w}}(\bm{w})$. Since
$q(\bm{w};\bm{\theta})$ is a valid PDF, it is always nonnegative. In
particular, it reaches its minimum if the selected model
$q(\bm{w};\bm{\theta})$ perfectly fits the data, i.e.
$q(\bm{w};\bm{\theta})\equiv p_{\bm{w}}(\bm{w})$. Otherwise, it enlarges if
$q(\bm{w};\bm{\theta})$ deviates from $p_{\bm{w}}(\bm{w})$. Hence, the
training objective is to minimize the loss and find the optimal parameters as
$\displaystyle\bm{\theta}^{*}=\arg\min_{\bm{\theta}}\mathcal{L}(\bm{\theta}),$
(8)
where $\bm{\theta}$ can be optimized by some methods, such as the stochastic
gradient descent (SGD) with mini-batches of data [44]. This is an unsupervised
learning approach, since the objective does not require any labeled data.
However, it is not flexible enough if the optimization is performed directly
on the selected model $q(\bm{w};\bm{\theta})$, since the knowledge of the true
distribution is needed in order to choose an appropriate model for
approximation.
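To make (7)-(8) concrete, the following toy sketch (ours, not from the paper) fits a univariate Gaussian model $q(w;\bm{\theta})=\mathcal{N}(\mu,\sigma^{2})$ to noise samples by stochastic gradient descent on the negative log-likelihood; the flow models of the next subsection replace this hand-picked parametric family:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_t(df=3, size=10_000)   # stand-in for "unknown" noise samples

mu, log_sigma = 0.0, 0.0                   # theta = (mu, log sigma); log keeps sigma > 0
lr, batch = 1e-2, 256
for step in range(2_000):
    w = rng.choice(data, size=batch)
    sigma = np.exp(log_sigma)
    # closed-form gradients of the per-batch Gaussian NLL, cf. (7)
    g_mu = np.mean((mu - w) / sigma**2)
    g_ls = np.mean(1.0 - (w - mu) ** 2 / sigma**2)
    mu -= lr * g_mu                        # SGD update, cf. (8)
    log_sigma -= lr * g_ls

print(f"fitted mu = {mu:.3f}, sigma = {np.exp(log_sigma):.3f}")
```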
### II-B Normalizing Flow
As a generative model, a _normalizing flow_ allows one to perform efficient inference of the latent variables [36]. More importantly, the log-likelihood of the data set is computed by using the change-of-variables formula rather than on the model directly. Each observation $\bm{w}\in\mathcal{D}_{\bm{w}}$ is assumed to depend on a latent variable $\bm{z}$ whose density function $p(\bm{z};\bm{\theta})$ is simple and computationally tractable (e.g., a spherical multivariate Gaussian distribution), and we can describe the generative process as
$\displaystyle\bm{z}$ $\displaystyle\sim p(\bm{z};\bm{\theta}),$ (9)
$\displaystyle\bm{w}$ $\displaystyle=g(\bm{z}),$ (10)
where $g(\cdot)$ is an invertible function (aka bijection), so that we can
infer the latent variables efficiently by applying the inversion
$\bm{z}=f(\bm{w})=g^{-1}(\bm{w})$. By using (10), we can model the approximate
distribution as
$\displaystyle\log q(\bm{w};\bm{\theta})=\log
p(\bm{z};\bm{\theta})+\log\bigg{|}\det\bigg{(}\frac{\mathrm{d}\bm{z}}{\mathrm{d}{\bm{w}}}\bigg{)}\bigg{|},$
(11)
where the so called _log-determinant_ term
$\log\bigg{|}\det\bigg{(}\frac{\mathrm{d}\bm{z}}{\mathrm{d}{\bm{w}}}\bigg{)}\bigg{|}$
denotes the logarithm of the absolute value of the determinant on the Jacobian
matrix $\left(\frac{\mathrm{d}\bm{z}}{\mathrm{d}\bm{w}}\right)$. To improve
the model flexibility, the invertible function $f(\cdot)$ is commonly composed of $K$ invertible subfunctions

$f(\cdot)=f_{1}(\cdot)\otimes f_{2}(\cdot)\otimes\cdots\otimes f_{k}(\cdot)\otimes\cdots\otimes f_{K}(\cdot).$ (12)
From the above equation, we can infer the latent variables $\bm{z}$ by
$\displaystyle\bm{w}\stackrel{{\scriptstyle
f_{1}}}{{\longrightarrow}}\bm{h_{1}}\stackrel{{\scriptstyle
f_{2}}}{{\longrightarrow}}\bm{h_{2}}\cdots\stackrel{{\scriptstyle
f_{k}}}{{\longrightarrow}}\bm{h}_{k}\cdots\stackrel{{\scriptstyle
f_{K}}}{{\longrightarrow}}\bm{z}.$ (13)
By using the definitions of $\bm{h_{0}}\triangleq\bm{w}$ and
$\bm{h_{K}}\triangleq\bm{z}$, we can rewrite the loss function in (7) as
$\displaystyle\mathcal{L}(\bm{\theta})$
$\displaystyle=-\frac{1}{L}\sum_{l=1}^{L}\log q(\bm{w}^{(l)};\bm{\theta})$
(14) $\displaystyle=-\frac{1}{L}\sum_{l=1}^{L}\left(\log
p(\bm{z}^{(l)};\bm{\theta})+\sum_{k=1}^{K}\log\bigg{|}\det\bigg{(}\frac{\mathrm{d}\bm{h}^{(l)}_{k}}{\mathrm{d}\bm{h}^{(l)}_{k-1}}\bigg{)}\bigg{|}\right).$
(15)
Using the above, we can treat each subfunction as a step of the flow,
parameterized by trainable parameters. By putting all the $K$ flow steps
together, a normalizing flow is constructed to enable us to perform
approximate inference and efficient computation of the log-probability.
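Assuming each flow step exposes a forward map that returns its output together with its log-determinant contribution (an interface of our own choosing, not the released code), the evaluation of (13)-(15) can be sketched as:

```python
import numpy as np

def standard_normal_logpdf(z):
    """log p(z) for a standard multivariate Gaussian prior."""
    return -0.5 * np.sum(z**2) - 0.5 * z.size * np.log(2.0 * np.pi)

def flow_log_likelihood(w, steps):
    """Evaluate log q(w) via (11): push w through f = f_K o ... o f_1 while
    accumulating the log-determinant of each step, cf. (13)-(15)."""
    h, total_logdet = w, 0.0
    for step in steps:
        h, logdet = step(h)            # each step returns (f_k(h), log|det J_k|)
        total_logdet += logdet
    return standard_normal_logpdf(h) + total_logdet

# toy step: an element-wise affine map h -> s*h + b with log-det sum(log|s|)
def make_affine_step(s, b):
    return lambda h: (s * h + b, float(np.sum(np.log(np.abs(s)))))

steps = [make_affine_step(np.array([2.0, 0.5]), np.zeros(2)),
         make_affine_step(np.array([1.5, 1.0]), np.ones(2))]
print(flow_log_likelihood(np.array([0.3, -0.7]), steps))
```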
In general, the normalizing flow is inspired by the change-of-variables technique: it assumes that the observed random variable arises from an invertible transformation of a latent variable that follows a simple, known distribution, so the normalizing flow is an approximate representation of that invertible transformation. A simple example is the general Gaussian distribution. Treat the normalizing flow as a black box and let $f(\cdot)$ denote the invertible function it parameterizes. Since any Gaussian variable $X\sim\mathcal{N}(\mu,\sigma)$ can be derived by transforming the standard Gaussian variable $Y\sim\mathcal{N}(0,1)$, the normalizing flow approximates the exact inversion, namely $Y=\frac{X-\mu}{\sigma}\approx f(X)$. When the observed variable follows a different unknown distribution, the only difference is that the network parameters are fine-tuned to different values, which makes the principle of computing the associated latent variables rather simple. To summarize, the name has the following interpretations:
* •
“Normalizing” indicates that the density is normalized by the reversible
function and the change of variables, and
* •
“Flow” means that the reversible functions can be more complex by
incorporating other invertible functions.
### II-C Why to use the Normalizing Flow?
Similar to normalizing flows, other generative models like VAEs and GANs map the observation into a latent space. However, the exact computation of the log-likelihood differs fundamentally among these models. Specifically, in the VAE model the latent variable is inferred by approximating the posterior distribution $p(\bm{z}|\bm{w})$; the exact log-likelihood must then be computed through the marginal probability $p(\bm{w})=\int p(\bm{w},\bm{z})\mathrm{d}\bm{z}$ with the help of numerical methods like Monte Carlo, so its computational complexity is very high. On the other hand, the GAN model does not maximize the log-likelihood. Instead of training via maximum likelihood, it trains a generator and a discriminator: the generator maps the observation into a latent space and draws samples from it, while the discriminator decides whether the samples drawn from the generator fit the collected data set, so generator and discriminator play a min-max game. In this case, drawing samples from GANs is easy, while the exact computation of the log-likelihood is computationally intractable.
For reversible generative models like normalizing flows, the inference of latent variables can be computed exactly, and the benefit is that the corresponding log-likelihood can be computed efficiently. Consequently, the normalizing flow is a good choice among these three generative models for performing exact Bayesian inference, especially when the noise statistics is unknown.
## III Proposed Signal Detection Framework
In this section, we propose an unsupervised learning driven and normalizing
flow based signal detection framework, which enables the effective and fast
evaluation of MLE without knowledge of the noise statistics.
As shown in Fig. 1, the proposed detection framework is built upon a
normalizing flow, which includes three kinds of components: squeeze layer, $K$
flow steps, and an unsqueeze layer. To perform the MLE, given $\bm{y}$ and
$\bm{H}$, we firstly compute the associated noise vector
$\bm{w}_{i}=\bm{y}-\bm{H}\bm{x}_{i}$ for each possible signal vector
$\bm{x}_{i}\in\mathcal{X}$. Then, by using the normalizing flow, we can map
$\bm{w}_{i}$ into the latent space, and infer the associated latent variable
$\bm{z}_{i}$ as well as the log-determinant. Therefore we can compute the
corresponding log-likelihood $p(\bm{y}|\bm{x}_{i})$ from (11) and determine
the final estimation of the maximum log-likelihood.
Since most neural networks focus on real-valued input, we can use a well-known
real-valued representation to express the complex-valued input equivalently as
$\displaystyle\underbrace{\begin{bmatrix}\mathop{R}(\bm{y})\\\
\mathop{I}(\bm{y})\\\
\end{bmatrix}}_{\bar{\bm{y}}}=\underbrace{\begin{bmatrix}\mathop{R}(\bm{H})&-\mathop{I}(\bm{H})\\\
\mathop{I}(\bm{H})&\mathop{R}(\bm{H})\\\
\end{bmatrix}}_{\bar{\bm{H}}}\underbrace{\begin{bmatrix}\mathop{R}(\bm{x})\\\
\mathop{I}(\bm{x})\\\
\end{bmatrix}}_{\bar{\bm{x}}}+\underbrace{\begin{bmatrix}\mathop{R}(\bm{w})\\\
\mathop{I}(\bm{w})\end{bmatrix}}_{\bar{\bm{w}}},$ (16)
where $\mathop{R}(\cdot)$ and $\mathop{I}(\cdot)$ denote the real and
imaginary part of the input, respectively. Then, we can rewrite (16) into a
form like (1) as
$\displaystyle\bar{\bm{y}}=\bar{\bm{H}}\bar{\bm{x}}+\bar{\bm{w}}.$ (17)
Based on the above real-valued representation, the squeeze layer processes the complex-valued input by separating its real and imaginary parts, while the unsqueeze layer performs the inverse operation. In general, an
$M$-dimensional complex-valued input
$\bm{h}=\begin{bmatrix}h_{1},h_{2},\cdots,h_{M}\end{bmatrix}$ for a flow step
can be represented by a 2-D tensor with a shape of $M\times 2$ and the
following structure
$\displaystyle\begin{bmatrix}\mathop{R}(h_{1})&\mathop{I}(h_{1})\\\
\mathop{R}(h_{2})&\mathop{I}(h_{2})\\\ \vdots&\vdots\\\
\mathop{R}(h_{M})&\mathop{I}(h_{M})\\\ \end{bmatrix},$ (18)
where the first column/channel includes the real and the second column/channel
the imaginary part, respectively.
Figure 1: Architecture of the detection framework with normalizing flow.
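A minimal NumPy sketch (helper names ours) of the real-valued system representation in (16) and of the $M\times 2$ squeeze/unsqueeze layout in (18):

```python
import numpy as np

def realify_system(y, H):
    """Stack real/imaginary parts as in (16): ybar = Hbar @ xbar + wbar."""
    ybar = np.concatenate([y.real, y.imag])
    Hbar = np.block([[H.real, -H.imag],
                     [H.imag,  H.real]])
    return ybar, Hbar

def squeeze(h):
    """Represent an M-dimensional complex vector as an (M, 2) real tensor, cf. (18)."""
    return np.stack([h.real, h.imag], axis=-1)

def unsqueeze(t):
    """Inverse of squeeze: recover the complex vector from the two channels."""
    return t[..., 0] + 1j * t[..., 1]

h = np.array([1 + 2j, 3 - 1j])
assert np.allclose(unsqueeze(squeeze(h)), h)
```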
From Section II we can conclude that a critical point when implementing a
normalizing flow is to carefully design the architecture, in order to ensure
that the subfunctions, represented by neural blocks, are invertible and
flexible, and the corresponding log-determinants are computationally
tractable. Accordingly, as shown in Fig. 2, each flow step consists of three
hidden layers: an activation normalization, an invertible $1\times 1$
convolution layer, and an alternating affine coupling layer. In particular, all hidden layers except the squeeze and unsqueeze layers do not change the shape of the input tensor; that is, the output tensor has the same structure as the input one. In the rest of this section, we introduce these
hidden layers one by one and then we present how the detection framework
works.
Figure 2: Structure of one step of the normalizing flow.
### III-A Activation Normalization
To accelerate the training of the deep neural network, we adopt a batch
normalization technique [45] and employ an activation normalization layer [41]
to perform inversely affine transformation of activations. The activation
normalization at the $k$-th layer utilizes trainable scale and bias parameters
for each channel, and the corresponding initial value depends on the initial
mini-batch of input data $\bm{h}_{k-1}$ from the prior layer
$\displaystyle\bm{s}_{k}$
$\displaystyle=\frac{1}{\sqrt{\mathbb{V}\left[\bm{h}_{k-1}\right]}},$ (19)
$\displaystyle\bm{b}_{k}$
$\displaystyle=-\mathbb{E}\left[\bm{h}_{k-1}\right],$ (20)
where $\mathbb{E}[\cdot]$ and $\mathbb{V}[\cdot]$ denote the expectation and
variance, respectively. Once the initialization is performed, the scale and
bias are treated as trainable parameters and then the activation normalization
layer output is
$\displaystyle\bm{h}_{k}=\bm{h}_{k-1}\odot\bm{s}_{k}+\bm{b}_{k},$ (21)
with $\odot$ being the element-wise multiplication on the channel axis. As explained in [40] and [41], the Jacobian of such a transformation is diagonal. Since the determinant of a triangular matrix equals the product of its diagonal elements, the corresponding log-determinant can be easily computed. Specifically, the log-determinant of (21) is given by
$\displaystyle\log\left|\det\left(\frac{\mathrm{d}\bm{h}_{k}}{\mathrm{d}\bm{h}_{k-1}}\right)\right|=M\text{sum}\left(\log\lvert\sqrt{\bm{s}_{k}}\rvert\right),$
(22)
where the operator $\text{sum}(\cdot)$ means sum over all the elements of a
tensor.
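The following sketch follows (19)-(21) for a single activation-normalization layer; it is our own rendition, and we write the log-determinant as the standard $M\,\text{sum}(\log\lvert\bm{s}_{k}\rvert)$ of a channel-wise scaling (cf. (22)):

```python
import numpy as np

class ActNorm:
    """Per-channel affine layer with data-dependent initialization."""
    def __init__(self):
        self.s, self.b, self.initialized = None, None, False

    def forward(self, h):
        # h: (batch, M, 2) -- M positions, 2 channels (real/imaginary)
        if not self.initialized:            # initial mini-batch sets the parameters
            self.s = 1.0 / np.sqrt(h.var(axis=(0, 1)) + 1e-6)   # (19)
            self.b = -h.mean(axis=(0, 1))                        # (20)
            self.initialized = True
        out = h * self.s + self.b                                # (21)
        M = h.shape[1]
        # the channel-wise scale acts on all M positions, hence the factor M
        logdet = M * np.sum(np.log(np.abs(self.s)))
        return out, logdet
```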
### III-B Invertible $1\times 1$ Convolution
In order to improve the model flexibility, we employ an invertible $1\times 1$
convolutional layer. This can incorporate the permutation into the deep neural
network, while it does not change the channel size and it can be treated as a
generalized permutation operation [41]. For an invertible $1\times 1$
convolution with a $2\times 2$ learnable weight matrix $\bm{W}_{k}$, the log-
determinant is computed directly as [41]
$\displaystyle\log\left|\det\left(\frac{\mathrm{d}\bm{h}_{k}}{\mathrm{d}\bm{h}_{k-1}}\right)\right|=M\log\lvert\det(\bm{W}_{k})\rvert.$
(23)
In order to construct an identity transformation at the beginning, we set the
weight $\bm{W}_{k}$ to be a random rotation matrix with a zero log-determinant
at initialization.
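A sketch (ours) of the invertible $1\times 1$ convolution over the two real/imaginary channels, initialized with a random orthogonal matrix so that the log-determinant (23) is zero at the start:

```python
import numpy as np

class Invertible1x1Conv:
    """Learnable generalized permutation of the two channels, cf. (23)."""
    def __init__(self, rng):
        # random orthogonal init: |det W| = 1, i.e. zero log-determinant
        self.W, _ = np.linalg.qr(rng.standard_normal((2, 2)))

    def forward(self, h):
        # h: (batch, M, 2); mix the real/imaginary channels at every position
        out = h @ self.W.T
        M = h.shape[1]
        logdet = M * np.log(np.abs(np.linalg.det(self.W)))   # (23)
        return out, logdet
```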
### III-C Alternating Affine Coupling Layer
Affine coupling layer is a powerful and learnable reversible function, since
its log-determinant is computationally tractable[39, 40]. For an
$M$-dimensional input $\bm{h}_{k-1}$, the affine coupling layer will change
some part of the input based on another part of the input. To do this, we
first separate the input into two parts, which is given by
$\displaystyle\bm{q}_{k}$ $\displaystyle=\bm{h}_{k-1}(1\colon m),$ (24a)
$\displaystyle\bm{u}_{k}$ $\displaystyle=\bm{h}_{k-1}(m+1\colon M),$ (24b)
where $\bm{h}_{k-1}(1\colon m)$ represents the first part of the input, and
$\bm{h}_{k-1}(m+1\colon M)$ denotes the rest. After that, we couple the two parts together in some order:

$\displaystyle\bm{s}_{k}=\mathcal{G}(\bm{q}_{k})\quad\mbox{or}\quad\mathcal{G}(\bm{u}_{k}),$ (24c)

$\displaystyle\bm{b}_{k}=\mathcal{H}(\bm{q}_{k})\quad\mbox{or}\quad\mathcal{H}(\bm{u}_{k}),$ (24d)

$\displaystyle\bm{h}_{k}=\bm{h}_{k-1},$ (24e)

$\displaystyle\bm{h}_{k}(m+1\colon M)=\bm{u}_{k}\odot\bm{s}_{k}+\bm{b}_{k},$ (24f)

where $\mathcal{G}(\cdot)$ and $\mathcal{H}(\cdot)$ represent two learnable neural
networks, which dynamically and nonlinearly compute the corresponding scale
and bias. Similarly, the determinant of a triangular matrix is again equal to
the product of the diagonal elements [40], which indicates that we can readily
compute the corresponding log-determinant as
$\displaystyle\log\left|\det\left(\frac{\mathrm{d}\bm{h}_{k}}{\mathrm{d}\bm{h}_{k-1}}\right)\right|=\text{sum}\left(\log\lvert\bm{s}_{k}\rvert\right).$
(25)
Basically, the architecture of our flow steps is developed based on the implementations suggested in [39, 40, 41]. However, there are two key differences between our architecture and theirs. First, our implementation can handle complex-valued input by using the squeeze layer and the unsqueeze layer to separate and recover the real and imaginary parts. Second, we only need to ensure the existence of the inversions rather than derive their exact analytical forms, since there is no need to draw samples from the latent space in signal detection. This leads to greater generality and flexibility in our implementation. Specifically, as shown in Fig. 2, we alternately combine two affine coupling layers in a flow step, and they can change any part of the input without considering whether it has been changed at the prior coupling layer. By contrast, the implementations suggested in [39, 40, 41] must change the part that has not yet been changed by the prior layer. We therefore incorporate a powerful alternating pattern into the network to enhance generality and flexibility, and eventually improve the network's ability to approximate unknown distributions.
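A compact sketch (our own simplification of (24)-(25)) of one alternating affine coupling transform, where $\mathcal{G}(\cdot)$ and $\mathcal{H}(\cdot)$ are stand-in two-layer networks and invertibility is kept by conditioning the changed part only on the unchanged part:

```python
import numpy as np

def mlp(x, params):
    """Tiny fully connected net with ReLU; stands in for G(.) or H(.)."""
    W1, b1, W2, b2 = params
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def affine_coupling(h, m, g_params, h_params, flip=False):
    """One coupling transform, cf. (24): one part is affinely changed,
    conditioned on the other part, which passes through unchanged."""
    if flip:                          # alternate which half is transformed
        h = h[::-1]
    q, u = h[:m], h[m:]
    s = np.exp(mlp(q, g_params))      # exp keeps scales positive => invertible
    b = mlp(q, h_params)
    out = np.concatenate([q, u * s + b])        # (24e)-(24f)
    if flip:
        out = out[::-1]
    logdet = float(np.sum(np.log(np.abs(s))))   # (25)
    return out, logdet

# toy usage with M = 4, m = M/2
rng = np.random.default_rng(2)
M, m, hidden = 4, 2, 8
def init(nin, nout):
    return (rng.standard_normal((nin, hidden)) * 0.1, np.zeros(hidden),
            rng.standard_normal((hidden, nout)) * 0.1, np.zeros(nout))
out, logdet = affine_coupling(rng.standard_normal(M), m, init(m, M - m), init(m, M - m))
```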
### III-D Signal Detection
Input: Received signal $\bm{y}$, channel matrix $\bm{H}$
Output: Recovered signal $\bm{x}^{*}$
1 foreach _$\bm{x}_{i}\in\mathcal{X}$_ do
2 $\bm{w}_{i}\leftarrow\bm{y}-\bm{H}\bm{x}_{i}$
3 $\bm{z}_{i}\leftarrow f(\bm{w}_{i})$
4 $\mathcal{L}_{\bm{x}_{i}}\leftarrow\log
p(\bm{z}_{i};\bm{\theta})+\log\left|\det\left(\frac{\mathrm{d}\bm{z}_{i}}{\mathrm{d}\bm{w}_{i}}\right)\right|$
5 end foreach
6$\bm{x}^{*}\leftarrow\arg\underset{\bm{x}_{i}}{\max}(\mathcal{L}_{\bm{x}_{i}})$
return $\bm{x}^{*}$
Algorithm 1 MANFE
As discussed above, by jointly using the change of latent variables and the
nonlinearity of neural networks, the approximate distribution parameterized by
a normalizing flow can be highly flexible to approach the unknown true
distribution. In this case, we are able to perform MLE in (5) by evaluating
the log-likelihood through the normalizing flow. Accordingly, we devise a
signal detection algorithm for the proposed detection framework. In contrast
to E-MLE, the proposed detection framework estimates the signal by finding the
maximum likelihood computed from a normalizing flow, so that we call it
_maximum a normalizing flow estimate_ (MANFE).
Specifically, in the MANFE algorithm, we first evaluate the corresponding
noise vector $\bm{w}_{i}$ given the received signal $\bm{y}$ and channel matrix $\bm{H}$ for each possible signal vector $\bm{x}_{i}\in\mathcal{X}$. Then, the
algorithm maps the noise vector $\bm{w}_{i}$ into the latent space to infer
the corresponding latent variable $\bm{z}_{i}$. After that, we compute the
corresponding log-likelihood by evaluating
$\displaystyle\mathcal{L}_{\bm{x}_{i}}$ $\displaystyle=\log
p(\bm{y}|\bm{x}_{i})$ (26) $\displaystyle\approx\log
q(\bm{w}_{i};\bm{\theta})$ (27) $\displaystyle=\log
p(\bm{z}_{i};\bm{\theta})+\log\left|\det\left(\frac{\mathrm{d}\bm{z}_{i}}{\mathrm{d}\bm{w}_{i}}\right)\right|.$
(28)
Finally, by finding the most probable signal vector, i.e., the one with the maximum log-likelihood, we obtain the MLE of the desired signal as
$\displaystyle\bm{x}^{*}=\arg\underset{\bm{x}_{i}\in\mathcal{X}}{\max}(\mathcal{L}_{\bm{x}_{i}}).$
(29)
The whole procedure of the MANFE algorithm is summarized in Algorithm 1. Intuitively, the major difference between MANFE and E-MLE is that MANFE is a generalized ML estimator, which approximates the unknown noise distribution and thereby computes the log-likelihoods accordingly in different noise environments. By contrast, E-MLE is designed only for Gaussian noise, so it can be treated as a perfectly trained MANFE for that specific noise environment, with the result that it is less flexible than MANFE.
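Algorithm 1 translates directly into a few lines of Python; the sketch below is ours and assumes a callable `flow_log_likelihood` wrapping a trained normalizing flow (e.g., the evaluation routine sketched in Section II):

```python
import itertools

import numpy as np

def manfe_detect(y, H, symbols, flow_log_likelihood):
    """Algorithm 1 in plain Python: score every candidate by the flow-based
    log-likelihood of its residual noise, cf. (26)-(29)."""
    N = H.shape[1]
    best_x, best_ll = None, -np.inf
    for cand in itertools.product(symbols, repeat=N):
        x = np.array(cand)
        w = y - H @ x                    # residual noise under hypothesis x
        ll = flow_log_likelihood(w)      # log p(z) + log|det dz/dw|, cf. (28)
        if ll > best_ll:
            best_x, best_ll = x, ll
    return best_x
```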
### III-E Low-Complexity Version of MANFE
As the MLE is an NP-hard problem, we have to exhaust all possible candidates to find the one with the maximum probability. In particular, the computational complexity of MLE is $\mathcal{O}(P^{N})$, which indicates that the
the complexity increases exponentially with the constellation size $P$ and the
antenna number $N$. Hence, it is difficult to implement a perfect MLE in
practice.
To solve this problem, some empirical approaches have been proposed to reduce the complexity of MLE by utilizing an initial guess to reduce the search space [46]. The initial guess can be estimated by some low-complexity detector, such as the ZF, MMSE, or GAMP detector. Although the range of available low-complexity detectors is broad, we should choose a detector with low complexity and fine bit error rate (BER) performance. From this viewpoint, and in order to reduce the complexity of MLE, we propose to jointly use the low-complexity G-GAMP detection algorithm and MANFE, which we name G-GAMP-MANFE.
Specifically, in the G-GAMP-MANFE algorithm, we first obtain an initial estimate from the G-GAMP algorithm, whose details can be found in existing works such as [8, 9, 13]. Since the initial guess is approximate, we can assume that there exist at most $E$ ($0\leq E\leq N$) error symbols in it. Accordingly, we only need to compare $\sum_{i=0}^{E}C_{N}^{i}(P-1)^{i}$ signal candidates instead of $P^{N}$ ones. The number of error symbols can be sufficiently small that the total complexity is reduced significantly, especially when the channel is in good condition. For example, we only need to compare $1+N(P-1)$ candidates when $E=1$. Hence, the search space is reduced, as is the total complexity of MANFE. The whole procedure of the G-GAMP-MANFE algorithm is summarized in Algorithm 2, shown at the top of the next page.
Basically, the low-complexity version of MANFE is a generalized framework that helps improve the detection performance of other low-complexity detectors under unknown noise environments. In this paper, we use the G-GAMP detector for the initial estimation, as its performance and complexity are both acceptable in most common scenarios [11]. Obviously, the BER performance of the low-complexity MANFE depends on two factors: the choice of the initial detector, which significantly affects the BER performance, and the choice of $E$. If the problem scale is not too large and we can tolerate a high complexity, we can increase $E$ to improve the BER performance. On the other hand, if the system is sensitive to computational complexity, it is better to keep $E$ at a lower level. In other words, the choice of $E$ is quite flexible, and users can set its value based on their specific needs. In particular, as an iterative detection algorithm, G-GAMP-MANFE's convergence mainly depends on the G-GAMP, whose convergence is guaranteed when the noise is Gaussian and the channel matrix is a large i.i.d. sub-Gaussian matrix; details can be found in the literature, e.g., [14, 13, 15].
Input: Received signal $\bm{y}$, channel matrix $\bm{H}$
Output: Recovered signal $\bm{x}^{*}$
1 Get an initial estimate $\bm{x}_{0}$ from G-GAMP algorithm
2 Get a subset $\mathcal{X}_{E}$ from $\mathcal{X}$ where there exist at most
$E$ different symbols between a possible signal candidate and the initial
estimate $\bm{x}_{0}$
3 foreach _$\bm{x}_{i}\in\mathcal{X}_{E}$_ do
4 $\bm{w}_{i}\leftarrow\bm{y}-\bm{H}\bm{x}_{i}$
5 $\bm{z}_{i}\leftarrow f(\bm{w}_{i})$
6 $\mathcal{L}_{\bm{x}_{i}}\leftarrow\log
p(\bm{z}_{i};\bm{\theta})+\log\left|\det\left(\frac{\mathrm{d}\bm{z}_{i}}{\mathrm{d}\bm{w}_{i}}\right)\right|$
7 end foreach
8$\bm{x}^{*}\leftarrow\arg\underset{\bm{x}_{i}}{\max}(\mathcal{L}_{\bm{x}_{i}})$
return $\bm{x}^{*}$
Algorithm 2 Combine G-GAMP With MANFE (G-GAMP-MANFE)
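Generating the reduced search set $\mathcal{X}_{E}$ of line 2 in Algorithm 2 can be sketched as follows (our own enumeration; for $N=4$, $P=4$, $E=2$ it yields $1+12+54=67$ candidates):

```python
import itertools

import numpy as np

def candidate_subset(x0, symbols, E):
    """Enumerate X_E: all vectors differing from the initial guess x0 in at
    most E positions -- sum_{i=0}^{E} C(N,i)(P-1)^i candidates in total."""
    N = len(x0)
    yield np.array(x0)
    for i in range(1, E + 1):
        for pos in itertools.combinations(range(N), i):
            alts = [[s for s in symbols if s != x0[p]] for p in pos]
            for repl in itertools.product(*alts):
                x = np.array(x0, dtype=complex)
                x[list(pos)] = repl
                yield x
```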
### III-F Complexity Analysis
In this part, we provide some analysis on the computational complexity for the
MANFE and G-GAMP-MANFE algorithms. There are three kinds of subfunction in the
MANFE detection framework and all these subfunctions operate in an element-
wise manner. Accordingly, the computational complexity of a flow step depends
on the element-wise summation of log-determinant, which is about
$\mathcal{O}(M)$ for a single flow step. Therefore, the computational cost to
compute the log-likelihood of a possible signal vector $\bm{x}_{i}$ depends on
the matrix multiplication $\bm{H}\bm{x}_{i}$ and the cost of $K$ flow steps,
which is about $\mathcal{O}(KM+MN)$. Hence, the total computational complexity
for MANFE can be expressed as $\mathcal{O}((KM+MN)P^{N})$, for which we have
to exhaust all $P^{N}$ possible signal candidates.
As to the G-GAMP-MANFE algorithm, since the G-GAMP has a computational
complexity of $\mathcal{O}(T(M+NP))$ for $T$ iterations and we have to compare
$\sum_{i=0}^{E}C_{N}^{i}(P-1)^{i}$ candidates, the computational complexity
for the G-GAMP-MANFE is
$\mathcal{O}\big{(}T(M+NP)+(KM+MN)\sum_{i=0}^{E}C_{N}^{i}(P-1)^{i}\big{)}$.
More specifically, when $E=1$, the complexity of the G-GAMP-MANFE is only of
$\mathcal{O}\big{(}T(M+NP)+(KM+MN)(1+N(P-1))\big{)}$, which indicates that it
can be easily implemented in practical MIMO systems.
## IV Simulations and Discussions
In this section, we perform simulations to verify the effectiveness of the
proposed detection framework. In particular, we first introduce the
environment setup of these simulations as well as the implementation details
of the deep neural network, and then, we present some simulation results and
give the related discussions.
### IV-A Environment Setup
The simulations are performed in a MIMO communication system in the presence
of several typical additive non-Gaussian noises, such as Gaussian mixture
noise, Nakagami-$m$ noise, and impulsive noise. The numbers of antennas are
$N$ and $M$ at the transmitter and the receiver, respectively. The modulation
scheme is quadrature phase shift keying (QPSK) with $P=4$, and the channel
experiences Rayleigh flat fading. The receiver has perfect knowledge on the
channel state information (CSI). To model the impulsive noise, we employ a
typical impulsive model, named symmetric $\alpha$-stable (S$\alpha$S) noise[7,
6, 3, 4]. In particular, an S$\alpha$S random variable $w$ has the following
characteristic function
$\displaystyle\psi_{w}(\theta)=\mathbb{E}[e^{jw\theta}]=e^{-\sigma^{\alpha}\lvert\theta\rvert^{\alpha}},$
(30)
where $\mathbb{E}[\cdot]$ represents the statistical expectation, $\sigma>0$
is the scale exponent, and $\alpha\in(0,2]$ is the characteristic exponent.
As $\alpha$ decreases, the noise becomes more heavy-tailed and impulsive. In practical scenarios, $\alpha$ usually falls into $[1,2]$. In particular, the
S$\alpha$S distribution turns into a Cauchy distribution when $\alpha=1$,
while it is a Gaussian distribution when $\alpha=2$. The density function
$f_{w}(w)$ can be expressed by [7]
$\displaystyle
f_{w}(w)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{-\lvert\theta\rvert\sigma^{\alpha}-j\theta
w}\mathrm{d}\theta.$ (31)
Unfortunately, the density can only be computed approximately through numerical methods, since $f_{w}(w)$ does not have a closed-form expression when $\alpha\in(1,2)$. In other words, the exact MLE under S$\alpha$S noise is computationally intractable, and the performance of E-MLE will deviate severely from the Gaussian-noise case. For the Gaussian mixture noise, we mix two Gaussian noises distributed as $\mathcal{CN}(-\mathbf{I},2\mathbf{I})$ and $\mathcal{CN}(\mathbf{I},\mathbf{I})$ in equal proportion. Notice that no statistical knowledge of these noises, such as the value of $\alpha$ indicating the impulse level, is utilized during the training and testing processes, which simulates the situation where the noise statistics is unknown.
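For reference, complex S$\alpha$S samples with the characteristic function (30) can be drawn with SciPy's `levy_stable` distribution; the sketch below is our own, and one common construction draws the real and imaginary parts independently ($\beta=0$ selects the symmetric case, $\alpha=2$ recovers the Gaussian):

```python
import numpy as np
from scipy.stats import levy_stable

def sas_noise(alpha, sigma, size, rng=None):
    """Complex SaS samples: real and imaginary parts drawn independently."""
    re = levy_stable.rvs(alpha, 0.0, scale=sigma, size=size, random_state=rng)
    im = levy_stable.rvs(alpha, 0.0, scale=sigma, size=size, random_state=rng)
    return re + 1j * im

w = sas_noise(alpha=1.5, sigma=1.0, size=10_000)
print(np.mean(np.abs(w) > 10.0))   # heavy tails: rare but very large spikes
```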
### IV-B Training Details
In the proposed framework, the hyper-parameters that need to be chosen
carefully are concluded as follows:
* •
The total number of flow steps $K$,
* •
The specified partition parameter $m$ in alternative affine coupling layers,
* •
The specified structure of the two neural networks $\mathcal{G}(\cdot)$ and
$\mathcal{H}(\cdot)$ in alternative affine coupling layers.
Obviously, $K$ significantly affects the complexity and the effectiveness of our methods, and we find that $K=4$ is a good choice based on our experience. As $\bm{q}_{k}$ is usually the first half of the input $\bm{h}_{k-1}$, we set the partition parameter $m$ to $\frac{M}{2}$. Additionally, the scale and bias
functions $\mathcal{G}(\cdot)$ and $\mathcal{H}(\cdot)$ are both implemented
by a neural network with three fully-connected layers, where the activation
functions are rectified linear unit (ReLU) functions and the hidden layer
sizes are all constantly set to $8$ for both $4\times 4$ and $8\times 8$ MIMO
systems.
In practice, we consider that the latent variables follow a multivariate
Gaussian distribution with trainable mean $\bm{\mu}$ and variance
$\bm{\Sigma}$. Therefore, the trainable parameters of the proposed framework
can be summarized below:
* •
The scale vectors $\bm{s}_{k}$ and bias vectors $\bm{b}_{k}$ for each
activation normalization layers introduced in (19),
* •
The learnable weight matrix $\bm{W}_{k}$ for each $1\times 1$ convolution
layer introduced in (23),
* •
The network parameters in $\mathcal{G}(\cdot)$ and $\mathcal{H}(\cdot)$ for
each affine coupling layer introduced in (24),
* •
The mean $\bm{\mu}$ and variance $\bm{\Sigma}$ of the latent variable’s
multivariate Gaussian distribution.
Hence, we can find that there are only a handful of trainable parameters
inside a single flow step and the total number of flow steps is small too.
This indicates that the training overhead of our framework is affordable.
Specifically, we use the TensorFlow framework [47] to train and test the
model, with $10$ million training noise samples and $2$ million test noise samples generated from the aforementioned noise models in the training phase.
sets only consist of noise samples. To refine the trainable parameters, (7) is
adopted as the loss function, and the Adam optimizer [48] with learning rate
set to $0.001$ is adopted to update the trainable parameters by using the
gradient descent method. Furthermore, engineering details and reproducible
implementations can be found in our open-source codes located at
https://github.com/skypitcher/manfe for interested readers.
TABLE I: Computational Complexity of Competing Algorithms

Abbreviation | Complexity
---|---
G-GAMP($T$) | $\mathcal{O}\left(T(N+M)\right)$
DetNet($T$) | $\mathcal{O}\left(T(N^{2}+(3N+2L_{z})L_{h})\right)$
E-MLE | $\mathcal{O}\left(MNP^{N}\right)$
MANFE | $\mathcal{O}\left((KM+MN)P^{N}\right)$
G-GAMP($T$)-MANFE($E$) | $\mathcal{O}\left(T(M+NP)+(KM+MN)\sum_{i=0}^{E}C_{N}^{i}(P-1)^{i}\right)$

Note: $L_{z}$ and $L_{h}$ denote the size of a latent parameter and the size of the hidden layers, respectively; both are suggested to be $2N$ in the DetNet paper [30].
TABLE II: Computational Complexity of Competing Algorithms in $4\times 4$ QPSK Modulated MIMO Systems

Algorithm | Complexity
---|---
G-GAMP($30$) | $\mathcal{O}\left(480\right)$
DetNet($30$) | $\mathcal{O}(28800)$
E-MLE | $\mathcal{O}(16384)$
MANFE | $\mathcal{O}(24576)$
G-GAMP($30$)-MANFE($2$) | $\mathcal{O}(7152)$
### IV-C Comparison with Relevant Algorithms
In order to verify the effectiveness of the proposed detection framework, we compare the proposed methods with various competitive algorithms. For convenience, we use the following abbreviations:
* •
G-GAMP($T$): Gaussian GAMP algorithm with $T$ iterations. See [8, 9, 13] for more details.
* •
DetNet($T$): Deep-learning-driven and projected gradient descent (PGD) based detection network introduced in [30] with $T$ layers (iterations).
* •
E-MLE: Euclidean distance based maximum likelihood estimate introduced in (6).
* •
MANFE: Maximum a normalizing flow estimate introduced in Section III-D.
* •
G-GAMP($T$)-MANFE($E$): A low-complexity version of MANFE combining the G-GAMP algorithm with $T$ iterations and the assumption that there are at most $E$ erroneous symbols in the initial guess produced by the G-GAMP algorithm. See Sections III-D and III-E for more information.
* •
G-GAMP($T$)-E-MLE($E$): Similar to G-GAMP-MANFE, a low-complexity version of E-MLE combining the G-GAMP algorithm with $T$ iterations and the assumption that there are at most $E$ erroneous symbols in the initial guess produced by the G-GAMP algorithm.
For complexity comparison, we provide two tables. Specifically, Table I lists the theoretical computational complexity of the competing algorithms, while Table II presents the corresponding complexities in $4\times 4$ QPSK modulated MIMO systems. From these two tables, we can see that although the computational complexity of the proposed MANFE is slightly higher than that of the conventional E-MLE, the complexity of the proposed G-GAMP-MANFE is affordable while retaining good detection performance.
### IV-D Simulation Results and Discussions
Fig. 3 demonstrates the BER performance of the aforementioned detection methods, where the QPSK modulated $4\times 4$ MIMO system is used with SNR$=25$ dB and $\alpha$ varies from $1$ to $2$. In particular, $\alpha=1$ and $\alpha=2$ correspond to the typical Cauchy and Gaussian distributed noise, respectively. We can see from Fig. 3 that the proposed MANFE outperforms the other detectors in impulsive noise environments in terms of BER performance. Specifically, when $\alpha=1.9$, where the noise is slightly impulsive and deviates a little from the Gaussian distribution, the MANFE can reduce the detection error of E-MLE to about $1.1$%. This indicates that the MANFE computes an effective log-likelihood in contrast to E-MLE, especially when the noise is non-Gaussian with unknown statistics. More importantly, when $\alpha$ changes from $2.0$ to $1.9$, introducing a little impulsiveness, the E-MLE suffers a severe performance degradation, whereas the MANFE has only a slight performance degradation. This indicates that the MANFE is robust to impulsive noise compared with E-MLE. Furthermore, when the impulsive noise approaches the Gaussian distribution with $\alpha=2$, the MANFE achieves the performance bound of MLE, indicating that the MANFE is flexible enough to approximate an unknown noise distribution even when the noise statistics are unknown.
Figure 3: BER performance comparison versus $\alpha$ with SNR=25 dB for
$4\times 4$ MIMO systems.
In addition, we can observe from Fig. 3 that the G-GAMP-MANFE, as a low-complexity version of MANFE, still outperforms the other low-complexity detectors in impulsive noise environments. In particular, the G-GAMP(30)-MANFE(1) can even achieve the same performance as the E-MLE when $\alpha\leq 1.7$. In these cases, the G-GAMP(30)-MANFE(1) has a much lower computational complexity than the E-MLE, only about $5.08\%$ of it (when $E=1$, the G-GAMP(30)-MANFE(1) reduces the computational complexity of the E-MLE to about $\frac{1+N(P-1)}{P^{N}}$, which equals $\frac{13}{256}\approx 5.08\%$). When $\alpha\leq 1.9$, if we can tolerate a moderate computational complexity by increasing $E$ from $1$ to $2$, the G-GAMP(30)-MANFE(2) achieves a better BER performance than the E-MLE, reducing the detection error of the E-MLE to about $47.5\%$ while decreasing the computational complexity to about $26.17\%$ (similarly, when $E=2$, the G-GAMP(30)-MANFE(2) reduces the computational complexity of the E-MLE to about $\frac{\sum_{i=0}^{2}C_{N}^{i}(P-1)^{i}}{P^{N}}$, which equals $\frac{67}{256}\approx 26.17\%$). Compared with the other low-complexity detectors, the G-GAMP(30)-MANFE(1) reduces the detection error of the DetNet(30) and the G-GAMP(30) to about $39.61\%$ and $57.55\%$, respectively, when $\alpha=1.9$. These results further verify the effectiveness of the proposed detection framework.
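As a quick numerical check of the complexity-reduction ratios quoted above (our own sketch; for the $4\times 4$ QPSK system, $N=4$ transmitted symbols and $P=4$ constellation points):

```python
from math import comb

def reduction_ratio(N: int, P: int, E: int) -> float:
    """Fraction of the exhaustive P**N search space that remains when
    at most E symbols of the G-GAMP initial guess are assumed wrong."""
    return sum(comb(N, i) * (P - 1) ** i for i in range(E + 1)) / P**N

print(f"{reduction_ratio(4, 4, 1):.2%}")  # 13/256 ~  5.08%
print(f"{reduction_ratio(4, 4, 2):.2%}")  # 67/256 ~ 26.17%
```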
Figure 4: BER performance comparison versus SNR with $\alpha=1.9$ for $4\times
4$ MIMO systems.
To verify the effectiveness of the proposed detection framework under different channel conditions and different levels of impulsive noise, Figs. 4-6 illustrate the BER performance comparison among several detectors for $4\times 4$ MIMO systems in impulsive noise environments, where SNR varies from $10$ dB to $30$ dB. Specifically, Fig. 4, Fig. 5 and Fig. 6 are associated with $\alpha=1.9$, $\alpha=1.5$ and $\alpha=1.1$, respectively. From Figs. 4-6, we can see that the BER performance gap between the MANFE and the E-MLE widens as the SNR increases. For example, when $\alpha=1.9$ and the SNR is set to $20$ dB, $25$ dB and $30$ dB, the MANFE can reduce the detection error of E-MLE to about $16.18$%, $1.06$% and $0.1$%, respectively. In particular, for $\alpha=1.9$, where the noise is slightly impulsive, the SNR gain of the MANFE over the E-MLE is $10$ dB at the BER level of $10^{-4}$. Similarly, when the noise has moderate and strong impulsiveness, with $\alpha$ equal to $1.5$ and $1.1$, the SNR gains of the MANFE over the E-MLE are about $10.5$ dB and $12$ dB at the BER levels of $10^{-3}$ and $10^{-2}$, respectively. This further verifies that the MANFE can perform efficient ML estimation even in highly impulsive situations.
Figure 5: BER performance comparison versus SNR with $\alpha=1.5$ for $4\times
4$ MIMO systems. Figure 6: BER performance comparison versus SNR with
$\alpha=1.1$ for $4\times 4$ MIMO systems.
Moreover, for the low-complexity detectors in Figs. 4-6, we can observe that the BER performance gap of the G-GAMP(30)-MANFE over the G-GAMP(30) and DetNet(30) widens with increasing SNR. Specifically, when $\alpha=1.9$, where the noise is nearly Gaussian, the G-GAMP(30)-MANFE(1) can reduce the detection error of G-GAMP(30) to about $89.7$%, $63.9$% and $51.9$% at the SNR levels of $10$ dB, $15$ dB and $20$ dB, respectively. For the same situation with regard to the DetNet(30), the corresponding results are about $97$%, $78$% and $61$%, respectively. In general, the SNR gains of the G-GAMP(30)-MANFE(1) over the G-GAMP(30) are $6$ dB, $4$ dB and $3$ dB at the BER level of $10^{-2}$ for $\alpha=1.9$, $\alpha=1.5$ and $\alpha=1.1$, respectively. In addition, the SNR gains of the G-GAMP(30)-MANFE(1) over the DetNet(30) are $9$ dB, $4$ dB and $3$ dB at the BER level of $10^{-2}$ for $\alpha=1.9$, $\alpha=1.5$ and $\alpha=1.1$, respectively. More importantly, when $\alpha=1.1$ and $\alpha=1.5$, the G-GAMP(30)-MANFE(1) has almost the same BER performance as the E-MLE over a wide range of SNR. This indicates that the G-GAMP(30)-MANFE(1) can obtain the same BER performance as the E-MLE under highly impulsive environments. Furthermore, we can see from Figs. 4-6 that the G-GAMP(30)-MANFE(2), with a moderate computational complexity, achieves BER performance at least as good as that of the E-MLE for MIMO systems in impulsive noise environments. For example, the SNR gains of the G-GAMP(30)-MANFE(2) over the E-MLE are about $5$ dB and $6.5$ dB at the BER level of $10^{-2}$ for $\alpha=1.5$ and $\alpha=1.1$, respectively. In particular, when the MIMO system is significantly affected by the impulsive noise, the BER performance of the G-GAMP(30)-MANFE(2) is much better than that of the E-MLE.
Figure 7: BER performance comparison versus $\alpha$ with SNR=25 dB for
$8\times 8$ MIMO systems.
To further verify the effectiveness of the low-complexity version of the MANFE, Figs. 7-9 demonstrate the BER performance of $8\times 8$ MIMO systems in impulsive noise environments. Specifically, Fig. 7 shows the BER performance versus $\alpha$ with SNR $=25$ dB, while Fig. 8 and Fig. 9 correspond to the BER performance versus SNR with $\alpha=1.9$ and $\alpha=1.7$, respectively. In these figures, we do not plot the BER performance of E-MLE and MANFE, because the computational complexities of these two detectors are too high for practical implementation in $8\times 8$ MIMO systems. Instead, we plot the BER performance of the G-GAMP-E-MLE for comparison. From Fig. 7, we can observe that the BER performance of G-GAMP(30)-MANFE(2) is much better than that of the other detectors when $\alpha$ varies from $1$ to $2$. Specifically, when $\alpha=1.9$ and SNR$=25$ dB, the G-GAMP(30)-MANFE(2) can reduce the detection error of the G-GAMP(30)-E-MLE(2), G-GAMP(30) and DetNet(30) to about 70%, 30% and 21.2%, respectively. Moreover, the performance gain of G-GAMP(30)-E-MLE(2) over the G-GAMP(30) algorithm vanishes as the impulse strength increases, while the G-GAMP(30)-MANFE(2) still outperforms the G-GAMP(30) algorithm under significantly impulsive noise environments. Furthermore, the G-GAMP(30)-MANFE(2) achieves the performance bound of the G-GAMP(30)-E-MLE(2) when the noise reduces to the Gaussian distribution. This further indicates that the proposed detection framework has the ability to effectively approximate the unknown distribution.
Figure 8: BER performance comparison versus SNR with $\alpha=1.9$ for $8\times
8$ MIMO systems. Figure 9: BER performance comparison versus SNR with
$\alpha=1.7$ for $8\times 8$ MIMO systems.
In addition, from Figs. 8-9, we can see that the G-GAMP(30)-MANFE(2) still outperforms the other low-complexity detectors over a wide range of SNR. More specifically, when $\alpha=1.9$, where the noise is nearly Gaussian distributed and slightly impulsive, the SNR gains of the G-GAMP(30)-MANFE(2) over the G-GAMP(30)-E-MLE(2), G-GAMP(30) and DetNet(30) are $1.8$ dB, $4.5$ dB and $7$ dB at the BER level of $10^{-4}$, respectively. In the case that $\alpha=1.7$ and the noise becomes more impulsive, the SNR gains of G-GAMP(30)-MANFE(2) with respect to the G-GAMP(30)-E-MLE(2), G-GAMP(30) and DetNet(30) are $2$ dB, $4.5$ dB and $6$ dB at the BER level of $10^{-3}$, respectively. These results further verify the effectiveness of the proposed framework.
Figure 10: BER performance comparison versus SNR for $4\times 4$ MIMO systems
with Nakagami-$m$ noises. Figure 11: BER performance comparison versus SNR
for $4\times 4$ MIMO systems with Gaussian mixture noises.
To further examine the effectiveness of the proposed detection framework under different non-Gaussian noise environments, Figs. 10-11 illustrate the BER performance comparisons among several detectors for the $4\times 4$ QPSK modulated MIMO system, where the noises are subject to the Nakagami-$m$ ($m=2$) distribution and the Gaussian mixture distribution in Fig. 10 and Fig. 11, respectively. In particular, the ML estimates are available and provided in the two figures, since both distributions have analytical expressions. In Fig. 10, we can observe that MANFE still achieves the optimal ML performance, while E-MLE fails to work in Nakagami-$m$ noise environments. As for the low-complexity detectors, G-GAMP-MANFE still outperforms the conventional detectors in the Nakagami-$m$ noise environment. Similar to the results in Fig. 10, the simulation results in Fig. 11 show that MANFE still achieves almost the optimal ML performance under the Gaussian mixture noise environment, and G-GAMP-MANFE still outperforms the conventional sub-optimal detectors. From the simulation results for Gaussian, impulsive and Nakagami-$m$ distributed noises in Figs. 3-11, we can conclude that the proposed method almost achieves the optimal ML performance under various analytical noises, and it also outperforms the conventional detectors under non-analytical noises, which further verifies the generality, flexibility, and effectiveness of the proposed method.
## V Conclusions
In this paper, we have investigated the signal detection problem in the presence of noise whose statistics are unknown. We have devised a novel detection framework that recovers the signal by approximating the unknown noise distribution with a flexible normalizing flow. The proposed detection framework does not require any statistical knowledge about the noise, since it is a fully probabilistic model driven by an unsupervised learning approach. Moreover, we have developed a low-complexity version of the proposed detection framework with the purpose of reducing the computational complexity of MLE. Since practical MIMO systems may suffer from various additive noises with impulsive nature and unknown statistics, we believe that the proposed detection framework can effectively improve the robustness of MIMO systems in practical scenarios. Indeed, to the best of our knowledge, our methods are the first attempt in the literature to address maximum likelihood based signal detection without any statistical knowledge of the noise in MIMO systems.
Nevertheless, there are still some interesting issues for future research. One is that, although the performance of the low-complexity version is much better than that of the other sub-optimal detectors, there is still a gap between it and the ML estimate. To further improve the performance of the low-complexity method, one promising approach is to leverage the automatic distribution approximation method to improve the convergence of AMP algorithms under unknown noise environments.
## References
* [1] J. Ilow and D. Hatzinakos, “Analytic alpha-stable noise modeling in a Poisson field of interferers or scatterers,” _IEEE Trans. Sig. Proc._ , vol. 46, no. 6, pp. 1601–1611, 1998.
* [2] N. Beaulieu and S. Niranjayan, “UWB receiver designs based on a Gaussian-Laplacian noise-plus-MAI model,” _IEEE Trans. Commun._ , vol. 58, no. 3, pp. 997–1006, 2010.
* [3] Y. Chen, “Suboptimum detectors for AF relaying with Gaussian noise and S$\alpha$S interference,” _IEEE Trans. Vech. Tech._ , vol. 64, no. 10, pp. 4833–4839, 2015.
* [4] Y. Chen and J. Chen, “Novel S$\alpha$S PDF approximations and their applications in wireless signal detection,” _IEEE Trans. Commun._ , vol. 14, no. 2, pp. 1080–1091, 2015.
* [5] K. N. Pappi, V. M. Kapinas, and G. K. Karagiannidis, “A theoretical limit for the ML performance of MIMO systems based on lattices,” in _Proceedings of IEEE International Conference on Communications, ICC 2013, Budapest, Hungary, June 9-13, 2013_ , 2013, pp. 3198–3203.
* [6] L. Fan, X. Li, X. Lei, W. Li, and F. Gao, “On distribution of S$\alpha$S noise and its application in performance analysis for linear Rake receivers,” _IEEE Commun. Lett._ , vol. 16, no. 2, pp. 186–189, 2012.
* [7] G. Samorodnitsky and M. S. Taqqu, “Stable non-Gaussian random processes: Stochastic models with infinite variance,” _Bulletin of the London Mathematical Society_ , vol. 28, no. 5, pp. 554–556, 1996.
* [8] D. L. Donoho, A. Maleki, and A. Montanari, “Message-passing algorithms for compressed sensing,” _Proc. Nat. Acad. Sci._ , vol. 106, no. 45, pp. 18 914–18 919, 2009.
* [9] S. Rangan, “Generalized approximate message passing for estimation with random linear mixing,” in _Proc. IEEE Int. Symp. Inform. Theory_ , 2011, pp. 2168–2172.
* [10] H. He, C. Wen, S. Jin, and G. Y. Li, “Model-driven deep learning for MIMO detection,” _IEEE Trans. Signal Process._ , vol. 68, pp. 1702–1715, 2020.
* [11] L. Liu, Y. Li, C. Huang, C. Yuen, and Y. L. Guan, “A new insight into GAMP and AMP,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 8, pp. 8264–8269, 2019.
* [12] I. Lai, C. Lee, G. Ascheid, H. Meyr, and T. Chiueh, “Channel-aware local search (CA-LS) for iterative MIMO detection,” in _26th IEEE Annual International Symposium on Personal, Indoor, and Mobile Radio Communications, PIMRC 2015_. IEEE, 2015, pp. 731–736.
* [13] S. Rangan, P. Schniter, E. Riegler, A. K. Fletcher, and V. Cevher, “Fixed points of generalized approximate message passing with arbitrary matrices,” _IEEE Trans. Inf. Theory_ , vol. 62, no. 12, pp. 7464–7474, 2016.
* [14] A. Javanmard and A. Montanari, “State evolution for general approximate message passing algorithms, with applications to spatial coupling,” _Information and Inference: A Journal of the IMA_ , vol. 2, no. 2, pp. 115–144, 2013.
* [15] L. Liu, Y. Chi, C. Yuen, Y. L. Guan, and Y. Li, “Capacity-achieving MIMO-NOMA: Iterative LMMSE detection,” _IEEE Trans. Sig. Proc._ , vol. 67, no. 7, pp. 1758–1773, 2019.
* [16] A. Mathur and M. R. Bhatnagar, “PLC performance analysis assuming BPSK modulation over Nakagami-$m$ additive noise,” _IEEE Commun. Lett._ , vol. 18, no. 6, pp. 909–912, 2014.
* [17] P. Saxena, A. Mathur, and M. R. Bhatnagar, “BER performance of an optically pre-amplified FSO system under turbulence and pointing errors with ASE noise,” _Journal of Optical Communications and Networking_ , vol. 9, no. 6, pp. 498–510, 2017.
* [18] A. Mathur, M. R. Bhatnagar, and B. K. Panigrahi, “Performance evaluation of PLC under the combined effect of background and impulsive noises,” _IEEE Commun. Lett._ , vol. 19, no. 7, pp. 1117–1120, 2015.
* [19] N. Sammaknejad, Y. Zhao, and B. Huang, “A review of the expectation maximization algorithm in data-driven process identification,” _Journal of Process Control_ , vol. 73, pp. 123–136, 2019.
* [20] J. P. Vila and P. Schniter, “Expectation-maximization Gaussian-mixture approximate message passing,” _IEEE Trans. Sig. Proc._ , vol. 61, no. 19, pp. 4658–4672, 2013.
* [21] D. G. Tzikas, A. C. Likas, and N. P. Galatsanos, “The variational approximation for Bayesian inference,” _IEEE Sig. Proc. Mag._ , vol. 25, no. 6, pp. 131–146, 2008.
* [22] L. Yu, O. Simeone, A. M. Haimovich, and S. Wei, “Modulation classification for MIMO-OFDM signals via approximate Bayesian inference,” _IEEE Trans. Vech. Tech._ , vol. 66, no. 1, pp. 268–281, 2017.
* [23] Z. Zhang, X. Cai, C. Li, C. Zhong, and H. Dai, “One-bit quantized massive MIMO detection based on variational approximate message passing,” _IEEE Trans. Sig. Proc._ , vol. 66, no. 9, pp. 2358–2373, 2018.
* [24] N. Jiang, Y. Deng, A. Nallanathan, and J. A. Chambers, “Reinforcement learning for real-time optimization in NB-IoT networks,” _IEEE J. Sel. Areas Commun._ , vol. 37, no. 6, pp. 1424–1440, 2019.
* [25] C. Li, J. Xia, F. Liu, L. Fan, G. K. Karagiannidis, and A. Nallanathan, “Dynamic offloading for multiuser muti-CAP MEC networks: A deep reinforcement learning approach,” _IEEE Trans. Veh. Technol._ , vol. 70, no. 1, pp. 1–5, 2021.
* [26] J. Cui, Y. Liu, and A. Nallanathan, “Multi-agent reinforcement learning-based resource allocation for UAV networks,” _IEEE Trans. Wireless Communications_ , vol. 19, no. 2, pp. 729–743, 2020.
* [27] Y. Xu, F. Yin, W. Xu, C. Lee, J. Lin, and S. Cui, “Scalable learning paradigms for data-driven wireless communication,” _IEEE Commun. Mag._ , vol. 58, no. 10, pp. 81–87, 2020.
* [28] Y. Jeon, S. Hong, and N. Lee, “Supervised-learning-aided communication framework for MIMO systems with low-resolution ADCs,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 8, pp. 7299–7313, 2018.
* [29] C. Lee, J. Lin, P. Chen, and Y. Chang, “Deep learning-constructed joint transmission-recognition for internet of things,” _IEEE Access_ , vol. 7, pp. 76 547–76 561, 2019.
* [30] N. Samuel, T. Diskin, and A. Wiesel, “Learning to detect,” _IEEE Trans. Sig. Proc._ , vol. 67, no. 10, pp. 2554–2564, 2019.
* [31] J. Xia, K. He, W. Xu, S. Zhang, L. Fan, and G. K. Karagiannidis, “A MIMO detector with deep learning in the presence of correlated interference,” _IEEE Trans. Veh. Technol._ , vol. 69, no. 4, pp. 4492–4497, 2020.
* [32] Z. Wang, W. Zhou, L. Chen, F. Zhou, F. Zhu, and L. Fan, “An adaptive deep learning-based UAV receiver design for coded MIMO with correlated noise,” _Physical Commun._ , vol. 8, pp. 10–18, 2021.
* [33] A. Oussidi and A. Elhassouny, “Deep generative models: Survey,” in _International Conference on Intelligent Systems and Computer Vision (ISCV)_ , 2018, pp. 1–8.
* [34] O. S. Salih, C. Wang, B. Ai, and R. Mesleh, “Adaptive generative models for digital wireless channels,” _IEEE Trans. Wireless Commun._ , vol. 13, no. 9, pp. 5173–5182, 2014.
* [35] D. P. Kingma and M. Welling, “Auto-encoding variational Bayes,” in _International Conference on Learning Representations (ICLR)_ , 2014. [Online]. Available: http://arxiv.org/abs/1312.6114
* [36] D. Rezende and S. Mohamed, “Variational inference with normalizing flows,” in _Proceedings of The 32nd International Conference on Machine Learning (ICML)_ , 2015, pp. 1530–1538.
* [37] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” in _Advances in Neural Information Processing Systems (NIPS)_ , vol. 3, 2014, pp. 2672–2680.
* [38] K. Kurach, M. Lucic, X. Zhai, M. Michalski, and S. Gelly, “A large-scale study on regularization and normalization in GANs,” in _Proceedings of the 36th International Conference on Machine Learning (ICML)_ , 2019, pp. 3581–3590. [Online]. Available: http://proceedings.mlr.press/v97/kurach19a.html
* [39] L. Dinh, D. Krueger, and Y. Bengio, “NICE: Non-linear independent components estimation,” in _3rd International Conference on Learning Representations (ICLR)_ , 2015. [Online]. Available: http://arxiv.org/abs/1410.8516
* [40] L. Dinh, J. Sohl-Dickstein, and S. Bengio, “Density estimation using real NVP,” in _5th International Conference on Learning Representations (ICLR)_ , 2017.
* [41] D. P. Kingma and P. Dhariwal, “Glow: Generative flow with invertible 1x1 convolutions,” in _Advances in Neural Information Processing Systems (NeurIPS)_ , 2018, pp. 10 236–10 245.
* [42] S. R. Arridge, P. Maass, O. Öktem, and C. Schönlieb, “Solving inverse problems using data-driven models,” _Acta Numer._ , vol. 28, pp. 1–174, 2019.
* [43] S. Jalali and X. Yuan, “Solving linear inverse problems using generative models,” in _IEEE Int. Symp. Inf. Theory (ISIT)_ , 2019, pp. 512–516.
* [44] M. Li, T. Zhang, Y. Chen, and A. J. Smola, “Efficient mini-batch training for stochastic optimization,” in _The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD)_ , 2014, pp. 661–670.
* [45] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in _Proceedings of the 32nd International Conference on International Conference on Machine Learning (ICML)_ , vol. 37, no. 9, 2015, pp. 448–456.
* [46] V. Pammer, Y. Delignon, W. Sawaya, and D. Boulinguez, “A low complexity suboptimal MIMO receiver: The combined ZF-MLD algorithm,” in _Proceedings of the IEEE 14th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC)_ , 2003, pp. 2271–2275.
* [47] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. A. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: A system for large-scale machine learning,” in _12th USENIX Symposium on Operating Systems Design and Implementation (OSDI)_ , 2016, pp. 265–283.
* [48] D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _3rd International Conference on Learning Representations (ICLR)_ , 2015. [Online]. Available: http://arxiv.org/abs/1412.6980
# Kubo formulae for first-order spin hydrodynamics
Jin Hu Department of Physics, Tsinghua University, Beijing 100084, China
###### Abstract
We derive Kubo formulae for first-order spin hydrodynamics based on the non-equilibrium statistical operator method. In first-order spin hydrodynamics, there are two new transport coefficients besides the ordinary ones appearing in first-order viscous hydrodynamics. They emerge due to the incorporation of the spin degree of freedom into fluids and the spin-orbital coupling. Zubarev’s non-equilibrium statistical operator method is well suited to investigating these quantum effects in fluids. The Kubo formulae, based on the method of non-equilibrium statistical operators, are related to equilibrium (imaginary-time) infrared Green’s functions, and all the transport coefficients can be determined once the microscopic theory is specified.
## I Introduction
Recent developments in relativistic heavy-ion collisions have seen great progress in studying observables with spin dependence. The measurements of the spin polarization of $\Lambda$ hyperons show that a fraction of the spin of quarks within the hyperons takes one particular direction Adamczyk et al. (2017); Alpatov (2020), which implies that the medium, the quark-gluon plasma (QGP), should carry a large amount of angular momentum. Such a significant magnitude of vorticity leads to the phenomenon of spin alignment as a result of the well-known spin-orbital coupling. Theoretical research on the global polarization of $\Lambda$ hyperons can be found in Wei et al. (2019); Karpenko and Becattini (2017); Csernai et al. (2019); Li et al. (2017); Bzdak (2017); Shi et al. (2019); Sun and Ko (2017); Ivanov et al. (2020); Xie et al. (2017). The results of theoretical calculations fit the data well. Later, the STAR Collaboration published measurements of differential spin polarization, namely, the dependence of $\Lambda$ polarization on the azimuthal angle and transverse momentum Adam et al. (2019, 2018). However, theoretical calculations cannot provide a satisfying explanation of these experimental data, a discrepancy usually called the “spin sign problem” Becattini and Karpenko (2018); Xia et al. (2018); see also Liu and Huang (2020) and Gao et al. (2020) for a review. To resolve this problem, new theoretical frameworks are necessary. One promising framework is hydrodynamics with the spin degree of freedom included. In other words, these direct experimental measurements of quantum effects in relativistic heavy-ion collisions motivate the incorporation of the quantum spin degree of freedom into the evolution of fluids.
To describe the macroscopic dynamics of spin, it is natural to generalize ordinary hydrodynamics to a spinful one, and there are many efforts in this direction. “Ideal” relativistic hydrodynamics with spin degrees of freedom was proposed in the context of the QGP Florkowski et al. (2018). Relevant discussions can also be found in Becattini and Tinti (2010); Montenegro et al. (2017); Florkowski et al. (2019). Recently, viscous spin hydrodynamics has also been considered Hattori et al. (2019); Fukushima and Pu (2020a). In these works, two new transport coefficients arise, reflecting new physical effects of the spin degree of freedom, which will be the main focus of our interest herein.
Transport coefficients are important quantities characterizing the transport properties of the medium in the field of heavy-ion collisions. In the case of spin hydrodynamics, the new transport coefficients capture the slow dynamics of the spinful fluid system Hattori et al. (2019). Ordinary dissipative hydrodynamics has been widely used to describe the collective behavior of the QGP in the last decade. Once the spin degree of freedom is considered, as mentioned in Liu and Huang (2020), a numerical implementation of causal spin hydrodynamics is promising; in particular, it may offer insight into the “spin sign problem”. For the purpose of quantitatively describing the evolution of the fluid system, transport coefficients are indispensable inputs. There are many methods for calculating transport coefficients. Kinetic theory based on transport equations offers an effective tool for investigating transport properties Xu et al. (2008); Xu and Greiner (2008). Note, however, that transport methods such as the Boltzmann equation rely on the quasi-particle picture, so they must be applied with caution. An alternative approach is based on the Kubo method, in which correlation functions represent the response of an equilibrated system to a perturbation. In this method, transport coefficients are directly linked to real-time retarded Green’s functions, which can be evaluated by analytic continuation of equilibrium Green’s functions formulated in imaginary time.
In this paper, we utilize the non-equilibrium statistical operator method developed by Zubarev Zubarev (1974); Hosoya et al. (1984); Horsley and Schoenmaker (1987) to derive Kubo formulae for the transport coefficients of relativistic spinful fluids. The non-equilibrium statistical operator is a generalization of the equilibrium Gibbs statistical operator to non-equilibrium states. Using this approach, we can naturally separate the equilibrium part from the non-equilibrium part, the latter taking the form of gradients of the thermodynamic parameters. From the viewpoint of linear response, transport coefficients can be obtained from linear perturbations of the non-equilibrium statistical operator around its equilibrium expectation value.
This paper is organized as follows. In Sec. II we present a brief review of relativistic spin hydrodynamics in the dissipative case, based on Hattori et al. (2019); Fukushima and Pu (2020a). In Sec. III we adopt the non-equilibrium statistical operator method to derive Kubo formulae in first-order spin hydrodynamics, relating transport coefficients to retarded correlation functions defined in terms of the underlying elementary fields. Discussion and outlook are given in Sec. IV. Natural units $\hbar=k_{B}=c=1$ are used. The metric tensor is given by $g^{\mu\nu}=\operatorname{diag}(1,-1,-1,-1)$, while $\Delta^{\mu\nu}\equiv g^{\mu\nu}-u^{\mu}u^{\nu}$ is the projection tensor orthogonal to the fluid four-velocity $u^{\mu}$. In addition, we employ the symmetric/antisymmetric shorthand notations:
$\displaystyle X^{(\mu\nu)}$ $\displaystyle\equiv$
$\displaystyle(X^{\mu\nu}+X^{\nu\mu})/2,$ (1) $\displaystyle X^{[\mu\nu]}$
$\displaystyle\equiv$ $\displaystyle(X^{\mu\nu}-X^{\nu\mu})/2,$ (2)
$\displaystyle X^{\langle\mu\nu\rangle}$ $\displaystyle\equiv$
$\displaystyle\bigg{(}\frac{\Delta^{\mu}_{\alpha}\Delta^{\nu}_{\beta}+\Delta^{\nu}_{\alpha}\Delta^{\mu}_{\beta}}{2}-\frac{\Delta^{\mu\nu}\Delta_{\alpha\beta}}{3}\bigg{)}X^{\alpha\beta}.$
(3)
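For later use, two standard identities follow directly from $u^{\mu}u_{\mu}=1$:

$\displaystyle\Delta^{\mu\nu}u_{\nu}=u^{\mu}-u^{\mu}(u^{\nu}u_{\nu})=0,\qquad\Delta^{\mu}_{\ \mu}=g^{\mu}_{\ \mu}-u^{\mu}u_{\mu}=4-1=3,$

so $\Delta^{\mu\nu}$ indeed projects onto the three-dimensional subspace orthogonal to the fluid velocity.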
## II Review of first order spin hydrodynamics
Hydrodynamics is based on basic conservation laws Landau and Lifshitz (1987), which, in the spinless case, are the conservation of the energy-momentum tensor $T^{\mu\nu}$ and of the conserved current $N^{\mu}$,
$\displaystyle\partial_{\mu}T^{\mu\nu}=0\,,$ (4)
$\displaystyle\partial_{\mu}N^{\mu}=0\,.$ (5)
A problem arises when we need to take into account the spin degree of freedom. Spin angular momentum plays a big role in the evolution of spinful fluids, and one needs another conservation law: the conservation of total angular momentum, which is expressed as:
$\displaystyle\partial_{\lambda}\Sigma^{\lambda\mu\nu}=0\,.$ (6)
Microscopically, the rank-three tensor $\Sigma^{\lambda\mu\nu}$ can be decomposed into two distinct components by calculating the Noether current for Lorentz symmetry, which reads $\Sigma^{\mu\alpha\beta}=(x^{\alpha}T^{\mu\beta}-x^{\beta}T^{\mu\alpha})+S^{\mu\alpha\beta}$, where $S^{\mu\alpha\beta}=-S^{\mu\beta\alpha}$ Fukushima and Pu (2020b). $S^{\mu\alpha\beta}$ arises from the invariance with respect to the representation of the Lorentz group acting on the field under consideration and is naturally identified with the spin angular momentum. On the other hand, the orbital angular momentum part comes from the coordinate transformation of the argument of the field. Here $T^{\mu\nu}$ is the canonical energy-momentum tensor, which has both symmetric and antisymmetric components: $T^{\mu\nu}\equiv T^{\mu\nu}_{(s)}+T^{\mu\nu}_{(a)}$. In this work, we keep using the canonical form of the energy-momentum tensor. Discussions about pseudo-gauge transformed forms, for instance the Belinfante form, can be found in Fukushima and Pu (2020a). Explicitly, (6) can be rewritten as:
$\displaystyle\partial_{\lambda}S^{\lambda\mu\nu}=T^{\nu\mu}-T^{\mu\nu}\,.$
(7)
First, recall the thermodynamic relations in equilibrium as well as the first law of thermodynamics, which read:
$\displaystyle Ts+\mu n=e+p-\omega_{\mu\nu}S^{\mu\nu},$ (8) $\displaystyle
Tds+\mu dn=de-\omega_{\mu\nu}dS^{\mu\nu},$ (9) $\displaystyle
sdT+nd\mu=dp-S^{\mu\nu}d\omega_{\mu\nu},$ (10)
where $T$, $s$, $\mu$, $n$, $e$, and $p$ denote the local temperature, entropy
density, chemical potential, conserved charge density, energy density, and
pressure, respectively. In this paper, we consider only one conserved charge.
Here, analogous to the relation between chemical potential and charge density, a “spin potential” $\omega_{\mu\nu}$ is introduced as the conjugate of the spin density $S^{\mu\nu}$. Note that $\omega_{\mu\nu}=O(\partial^{1})$ in the derivative expansion; we will justify this counting later.
On the basis of a derivative expansion, we obtain the constitutive relations:
$\displaystyle T^{\mu\nu}=eu^{\mu}u^{\nu}-p\Delta^{\mu\nu}+T^{\mu\nu}_{(1)}\,,$ (11)
$\displaystyle N^{\mu}=nu^{\mu}+j^{\mu}_{(1)}\,,$ (12)
$\displaystyle\Sigma^{\mu\alpha\beta}=u^{\mu}S^{\alpha\beta}+\Sigma^{\mu\alpha\beta}_{(1)}\,.$
(13)
The normalization of the fluid velocity reads $u^{\mu}u_{\mu}=1$, and we use $\nabla^{\mu}\equiv\Delta^{\mu\nu}\partial_{\nu}$ and $D\equiv u^{\mu}\partial_{\mu}$ as the spatial and temporal components of the derivative, respectively. Note that the spin density $S^{\mu\nu}$ satisfies the antisymmetry property $S^{\mu\nu}=-S^{\nu\mu}$; accordingly, $\omega_{\mu\nu}=-\omega_{\nu\mu}$. The second law of thermodynamics constrains the entropy production. Following the prescription of Israel and Stewart (1979), we make the following ansatz for $s^{\mu}$ in the presence of spin degrees of freedom:
$\displaystyle s^{\mu}$
$\displaystyle=\frac{u_{\nu}}{T}T^{\mu\nu}+\frac{p}{T}u^{\mu}s-\frac{\mu}{T}j^{\mu}-\frac{1}{T}\omega_{\alpha\beta}S^{\alpha\beta}u^{\mu}+O(\partial^{2})$
$\displaystyle=su^{\mu}+\frac{u_{\nu}}{T}T^{\mu\nu}_{(1)}-\frac{\mu}{T}j^{\mu}_{(1)}+O(\partial^{2}).$
(14)
Combined with the thermodynamic relation (8) and the projection of energy-momentum conservation along the flow, $u_{\nu}\partial_{\mu}T^{\mu\nu}=0$, the entropy production simplifies to:
$\partial_{\mu}s^{\mu}=-j^{\mu}_{(1)}\partial_{\mu}\frac{\mu}{T}+T^{\mu\nu}_{(1)}\partial_{\mu}\frac{u_{\nu}}{T}+\frac{1}{T}\omega_{\alpha\beta}\partial_{\mu}(S^{\alpha\beta}u^{\mu})\,.$
(15)
Taking into account that $T^{\mu\nu}_{(1)}$ has both symmetric and antisymmetric parts, we write:
$\displaystyle
T^{\mu\nu}_{(1s)}=2h^{(\mu}u^{\nu)}+\pi^{\mu\nu}+\Pi\Delta^{\mu\nu},$ (16)
$\displaystyle T^{\mu\nu}_{(1a)}=2q^{[\mu}u^{\nu]}+\tau^{\mu\nu},$ (17)
where $\pi^{\mu\nu}$ and $\Pi$ represent the shear stress tensor and bulk viscous pressure, and $h^{\mu}$ is the heat flow. Meanwhile, $\tau^{\mu\nu}$ and $q^{\mu}$ are the antisymmetric counterparts of $\pi^{\mu\nu}$ and $h^{\mu}$, respectively. These five quantities are all of first order in the gradient expansion. One can further check that $\pi^{\mu\nu}=\pi^{\nu\mu}$, $\tau^{\mu\nu}=-\tau^{\nu\mu}$, and $h^{\mu}u_{\mu}=q^{\mu}u_{\mu}=\tau^{\mu\nu}u_{\nu}=\pi^{\mu\nu}u_{\nu}=0$. Inserting $T^{\mu\nu}_{(1)}$ into $\partial_{\mu}s^{\mu}$ and neglecting terms of $O(\partial^{3})$, we obtain the full form of the entropy production in first-order spin hydrodynamics:
$\displaystyle\partial_{\mu}s^{\mu}$
$\displaystyle=\Big{(}h^{\mu}-\frac{e+p}{n}j^{\mu}_{(1)}\Big{)}\frac{n}{e+p}\nabla_{\mu}\frac{\mu}{T}+\frac{\pi^{\mu\nu}}{T}\partial_{\langle\mu}u_{\nu\rangle}$
$\displaystyle\
-\frac{\Pi}{T}\theta+q^{\mu}\Big{(}-\frac{u\cdot\partial}{T}u_{\mu}+\partial_{\mu}\frac{1}{T}+\frac{4\omega_{\mu\nu}u^{\nu}}{T}\Big{)}$
$\displaystyle\
+\tau^{\mu\nu}\Big{[}\Delta_{\mu\rho}\Delta_{\nu\sigma}\Big{(}\partial^{\rho}\frac{u^{\sigma}}{T}-\partial^{\sigma}\frac{u^{\rho}}{T}\Big{)}+2\frac{\omega_{\mu\nu}}{T}\Big{]},$
(18)
where the notation $\theta=\partial_{\mu}u^{\mu}$ is used. Note that, when the system is in equilibrium, the entropy production must cease, and we obtain $\omega_{\mu\nu}=-\frac{T}{2}\omega^{th}_{\mu\nu}$ with the thermal vorticity $\omega^{th}_{\mu\nu}=\Delta_{\mu\rho}\Delta_{\nu\sigma}(\partial^{\rho}\frac{u^{\sigma}}{T}-\partial^{\sigma}\frac{u^{\rho}}{T})$ Becattini (2012); Becattini et al. (2019). According to this argument, the counting of $\omega_{\mu\nu}$ can be estimated as $O(\partial^{1})$, assuming the system is not far away from global equilibrium. Following the routine of first-order hydrodynamics, we impose the sufficient conditions for semipositive entropy production $\partial_{\mu}s^{\mu}\geq 0$; that is, we cast every term into a positive semidefinite quadratic form so that the entropy production becomes a sum of squares. Therefore we have:
$\displaystyle\pi^{\mu\nu}=2\eta\nabla^{\langle\mu}u^{\nu\rangle},$ (19)
$\displaystyle\Pi=-\zeta\theta,$ (20) $\displaystyle
h^{\mu}-\frac{e+p}{n}j^{\mu}_{(1)}=-\kappa\frac{nT}{e+p}\nabla^{\mu}\frac{\mu}{T},$
(21)
$\displaystyle\tau^{\mu\nu}=2\gamma\big{(}\omega_{th}^{\mu\nu}+2\Delta^{\mu\rho}\Delta^{\nu\sigma}\omega_{\rho\sigma}\big{)},$
(22) $\displaystyle
q^{\mu}=\lambda\big{(}Du^{\mu}+\frac{\nabla^{\mu}T}{T}-4\omega^{\mu\nu}u_{\nu}\big{)},$
(23)
Here $\eta$, $\zeta$ and $\kappa$ represent the shear viscosity, bulk viscosity and heat conductivity, respectively, while $\gamma$ and $\lambda$ are the new transport coefficients of spin hydrodynamics, identified as the “rotational viscosity” in de Groot and Mazur (2011) and the “boost heat conductivity” in Hattori et al. (2019). In the next section, we will derive Kubo formulae for these transport coefficients.
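As a quick positivity check (our own verification, not spelled out in the references above), insert (23) into the $q^{\mu}$ term of (18) and define $V^{\mu}\equiv Du^{\mu}+\frac{\nabla^{\mu}T}{T}-4\omega^{\mu\nu}u_{\nu}$, so that $q^{\mu}=\lambda V^{\mu}$. Using $q^{\mu}u_{\mu}=0$, that term becomes

$\displaystyle q^{\mu}\Big{(}-\frac{u\cdot\partial}{T}u_{\mu}+\partial_{\mu}\frac{1}{T}+\frac{4\omega_{\mu\nu}u^{\nu}}{T}\Big{)}=-\frac{\lambda}{T}V^{\mu}V_{\mu}\geq 0,$

since $V^{\mu}$ is orthogonal to $u^{\mu}$ and hence spacelike, $V^{\mu}V_{\mu}\leq 0$ in our mostly-minus metric; the remaining terms of (18) turn into squares in the same way.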
## III Non-equilibrium Statistical Operators and Kubo formulae
The method of non-equilibrium statistical operators (NESO) developed by Zubarev starts from constructing a statistical ensemble encoding the thermodynamic information of the macroscopic state of a system in a non-equilibrium state. In the present case, we consider a system in the hydrodynamic regime, near local equilibrium, so that thermodynamic parameters such as temperature and chemical potentials can be well defined locally. See Hosoya et al. (1984) and Huang et al. (2011) for references on NESO. Following Zubarev’s practice, the textbook form of the NESO is Zubarev (1974); Hosoya et al. (1984); Huang et al. (2011)
$\displaystyle\hat{\rho}(t)=Q^{-1}\exp\left[-\int
d^{3}\bm{x}\,\hat{Z}(\bm{x},t)\right],$ (24) $\displaystyle
Q=\mathop{\mathrm{Tr}}\exp\left[-\int d^{3}\bm{x}\,\hat{Z}(\bm{x},t)\right],$
(25)
where the operator $\hat{Z}$ is defined as
$\displaystyle\hat{Z}(\bm{x},t)=\epsilon\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{[}\beta^{\nu}(\bm{x},t^{\prime})\hat{T}_{0\nu}(\bm{x},t^{\prime})$
$\displaystyle\ \
\qquad-\alpha(\bm{x},t^{\prime})\hat{N}^{0}(\bm{x},t^{\prime})-\frac{1}{2}\omega_{\rho\sigma}(\bm{x},t^{\prime})\hat{S}^{0\rho\sigma}(\bm{x},t^{\prime})\Big{]},$
(26)
with $\epsilon\rightarrow+0$ after taking the thermodynamic limit. Here we have introduced the new Lagrange multiplier $\omega^{\rho\sigma}(\bm{x},t)$ and the operator $\hat{S}^{0\rho\sigma}$ coupled to it, which can be understood as the incorporation of total angular momentum. From the form of the total angular momentum $\Sigma^{\mu\alpha\beta}$, we can deduce that the conservation of total angular momentum brings new information only through the spin part (the information of the orbital part can be reproduced by the energy-momentum tensor). More details can be found in Becattini et al. (2019). The other Lagrange multipliers are written explicitly as:
$\displaystyle\beta^{\nu}(\bm{x},t)=\beta(\bm{x},t)u^{\nu}(\bm{x},t),$ (27)
$\displaystyle\alpha(\bm{x},t)=\beta(\bm{x},t)\mu(\bm{x},t).$ (28)
The parameter $\beta$ stands for the inverse local equilibrium temperature. We still need to identify these parameters in the language of statistical operators, which is deferred to below. The three local conservation laws expressed in terms of the operators read:
$\displaystyle\partial_{\mu}\hat{T}^{\mu\nu}=0\;,\;\;\partial_{\mu}\hat{N}^{\mu}=0\;,\;\;\partial_{\mu}\hat{S}^{\mu\rho\sigma}=\hat{T}^{\sigma\rho}-\hat{T}^{\rho\sigma}.$
(29)
Integrating Eq.(26) by parts and utilizing Eq.(29), we get:
$\displaystyle\int d^{3}\bm{x}\,\hat{Z}(\bm{x},t)=\hat{A}-\hat{B},$ (30)
$\displaystyle\hat{A}=\int
d^{3}\bm{x}\Big{[}\beta^{\nu}(\bm{x},t)\hat{T}_{0\nu}(\bm{x},t)-\alpha(\bm{x},t)\hat{N}^{0}(\bm{x},t)$
$\displaystyle\quad-\frac{1}{2}\omega_{\rho\sigma}(\bm{x},t)\hat{S}^{0\rho\sigma}(\bm{x},t)\Big{]},$
(31) $\displaystyle\hat{B}=\int
d^{3}\bm{x}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{[}\hat{T}_{\mu\nu}(\bm{x},t^{\prime})\partial^{\mu}\beta^{\nu}(\bm{x},t^{\prime})$
$\displaystyle\quad-\hat{N}^{\mu}(\bm{x},t^{\prime})\partial_{\mu}\alpha(\bm{x},t^{\prime})+\omega_{\mu\nu}(\bm{x},t^{\prime})\hat{T}^{[\mu\nu]}(\bm{x},t^{\prime})\Big{]}.$
(32)
In deriving Eq.(30), we note that integration by parts produces three-dimensional surface integrals that are discarded as usual; the temporal dimension is different because of the definite upper limit of integration. If we take $t$ to infinity, the derivative terms all vanish, showing that the system approaches equilibrium given a long enough evolution time. Due to the counting of $\omega_{\mu\nu}$, the term $\partial_{\mu}\omega^{\mu\nu}$ is of order $O(\partial^{2})$, so we neglect it in Eq.(30). Following the spirit of non-equilibrium statistical mechanics, we treat the derivative terms as thermodynamic forces that lead to dissipation. In this way, we can decompose the statistical operator into a local equilibrium part and a non-equilibrium part. We define the local equilibrium statistical operator as:
$\displaystyle\hat{\rho}_{\rm{leq}}\equiv
Q_{\rm{leq}}^{-1}\exp\Big{(}-\hat{A}\,\Big{)},$ (33) $\displaystyle
Q_{\rm{leq}}=\mathop{\mathrm{Tr}}\exp\Big{(}-\hat{A}\,\Big{)},$ (34)
and the complete statistical operator as:
$\displaystyle\hat{\rho}\equiv Q^{-1}\exp\Big{(}-\hat{A}+\hat{B}\,\Big{)},$
(35) $\displaystyle
Q=\mathop{\mathrm{Tr}}\exp\Big{(}-\hat{A}+\hat{B}\,\Big{)}.$ (36)
Now comes the question of how to handle the complete statistical operator, which takes the form of the exponential of a sum of two operators. Since $[\hat{A},\hat{B}]\neq 0$, $[\hat{A},[\hat{A},\hat{B}]]\neq 0$ and $[\hat{B},[\hat{A},\hat{B}]]\neq 0$, the exponential expansion is complicated. Here we adopt the operator expansion proposed in Zubarev (1974). We focus on small perturbations around the equilibrium system, that is, the thermodynamic forces are treated as perturbations. In this case, the relation between these forces and the irreversible dissipative currents is linear, so we can expand Eq.(35), keep only the linear term, and approximate the complete statistical operator as:
$\displaystyle\hat{\rho}=\left[1+\int_{0}^{1}d\tau\left(e^{-\hat{A}\tau}\hat{B}e^{\hat{A}\tau}-\langle\hat{B}\rangle_{\rm{leq}}\right)\right]\hat{\rho}_{\rm{leq}},$
(37)
where
$\langle\hat{B}\rangle_{\rm{leq}}=\mathop{\mathrm{Tr}}[\hat{\rho}_{\rm{leq}}\hat{B}]$
is the expectation value over the local equilibrium operator. We now consider the energy-momentum tensor in a non-equilibrium state. First, we evaluate the energy-momentum tensor averaged over the non-equilibrium distribution:
$\displaystyle\langle{\hat{T}^{\mu\nu}(\bm{x},t)}\rangle=\langle{\hat{T}^{\mu\nu}(\bm{x},t)}\rangle_{\rm{leq}}+\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}$
$\displaystyle\qquad\quad\quad\ \
\Big{[}\Big{(}\hat{T}^{\mu\nu}(\bm{x},t)\,,\,\hat{T}^{\rho\sigma}(\bm{x}^{\prime},t^{\prime})\Big{)}\partial_{\rho}\beta_{\sigma}(\bm{x}^{\prime},t^{\prime})$
$\displaystyle\qquad\quad\quad\ \
-\Big{(}\hat{T}^{\mu\nu}(\bm{x},t)\,,\,\hat{N}^{\rho}(\bm{x}^{\prime},t^{\prime})\Big{)}\partial_{\rho}\alpha(\bm{x}^{\prime},t^{\prime})$
$\displaystyle\qquad\quad\quad\ \
+\Big{(}\hat{T}^{\mu\nu}(\bm{x},t)\,,\,\hat{T}^{[\rho\sigma]}(\bm{x}^{\prime},t^{\prime})\Big{)}\omega_{\rho\sigma}(\bm{x}^{\prime},t^{\prime})\Big{]},$
(38)
where $\langle\hat{B}\rangle=\mathop{\mathrm{Tr}}[\hat{\rho}\hat{B}]$ is the
expectation over the complete operator, with the binary operator
$\displaystyle\left(\hat{X}(\bm{x},t),\hat{Y}(\bm{x}^{\prime},t^{\prime})\right)$
$\displaystyle\equiv\int_{0}^{1}d\tau\Big{\langle}\hat{X}(\bm{x},t)\big{(}e^{-\hat{A}\tau}\hat{Y}(\bm{x}^{\prime},t^{\prime})e^{\hat{A}\tau}-\langle\hat{Y}(\bm{x}^{\prime},t^{\prime})\rangle_{\rm{leq}}\big{)}\Big{\rangle}_{\rm{leq}}$
(39)
being the Kubo correlation function.
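Note that if $\hat{Y}$ commuted with $\hat{A}$, the $\tau$-integrand above would be constant in $\tau$, and (39) would reduce to the ordinary covariance

$\displaystyle\left(\hat{X},\hat{Y}\right)\rightarrow\langle\hat{X}\hat{Y}\rangle_{\rm{leq}}-\langle\hat{X}\rangle_{\rm{leq}}\langle\hat{Y}\rangle_{\rm{leq}};$

the $\tau$-integral is thus the quantum generalization of the classical correlation function.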
We proceed to match the relevant tensors $T^{\mu\nu}$ and $N^{\mu}$ in hydrodynamics with the corresponding operators. A straightforward and natural tensor decomposition reads:
$\displaystyle\hat{T}^{\mu\nu}=\hat{e}u^{\mu}u^{\nu}-\hat{p}\Delta^{\mu\nu}+\hat{T}^{\mu\nu}_{(1s)}+\hat{T}^{\mu\nu}_{(1a)}\,,$
(40)
$\displaystyle\hat{T}^{\mu\nu}_{(1s)}=2\hat{h}^{(\mu}u^{\nu)}+\hat{\pi}^{\mu\nu}+\hat{\Pi}\Delta^{\mu\nu}\,,$
(41)
$\displaystyle\hat{T}^{\mu\nu}_{(1a)}=2\hat{q}^{[\mu}u^{\nu]}+\hat{\tau}^{\mu\nu},\,$
(42) $\displaystyle\hat{N}^{\mu}=\hat{n}u^{\mu}+\hat{j}^{\mu}_{(1)},$ (43)
which consistently matches the form of ${T}^{\mu\nu}$ and $N^{\mu}$ in hydrodynamics. We also need to specify the parameters coupled to $\hat{T}^{\mu\nu}$, $\hat{N}^{\mu}$ and $\hat{S}^{\lambda\mu\nu}$. By imposing the Landau matching conditions $u^{\mu}\delta\langle\hat{T}_{\mu\nu}\rangle u^{\nu}=0$ and $u^{\mu}\delta\langle\hat{N}_{\mu}\rangle=0$ Landau and Lifshitz (1987), with the notation $\delta\langle\hat{X}\rangle=\langle\hat{X}\rangle-\langle\hat{X}\rangle_{\rm{leq}}$, the parameters $\beta$ and $\alpha$ are identified as the inverse temperature and the ratio of chemical potential to temperature. As for the identification of $\omega_{\mu\nu}$ with the spin potential, details can be found in Becattini et al. (2019), and we will keep using this identification hereafter. The expectation value of $\hat{T}_{\mu\nu}$ is then evaluated over the non-equilibrium operator and compared with the result of first-order spin hydrodynamics. To that end, we first rewrite the terms $\hat{T}^{\rho\sigma}\partial_{\rho}\beta_{\sigma}$, $\hat{N}^{\mu}\partial_{\mu}\alpha$ and $\hat{T}^{[\rho\sigma]}\omega_{\rho\sigma}$ using the identification of these Lagrange multipliers:
$\displaystyle\hat{T}^{\mu\nu}\partial_{\mu}\beta_{\nu}=\beta\hat{\pi}^{\mu\nu}\partial_{\mu}u_{\nu}+\beta\hat{h}^{\mu}\big{(}\beta^{-1}\partial_{\mu}\beta+Du_{\mu}\big{)}$
$\displaystyle\qquad\qquad+\beta\hat{\tau}^{\mu\nu}\partial_{\mu}u_{\nu}+\beta\hat{q}^{\mu}\big{(}-\beta^{-1}\partial_{\mu}\beta+Du_{\mu}\big{)}$
$\displaystyle\qquad\qquad+\hat{e}D\beta-\beta\big{(}\hat{p}-\hat{\Pi}\big{)}\theta,$
(44)
$\displaystyle\hat{N}^{\mu}\partial_{\mu}\alpha=\hat{n}D\alpha+\hat{j}^{\mu}_{(1)}\nabla_{\mu}\alpha,$
(45)
$\displaystyle\hat{T}^{[\mu\nu]}\omega_{\mu\nu}=2\hat{q}^{[\mu}u^{\nu]}\omega_{\mu\nu}+\hat{\tau}^{\mu\nu}\omega_{\mu\nu}.$
(46)
In order to match the thermodynamic forces in first-order hydrodynamics, we substitute $D\beta$ and $D\alpha$ with $\theta$ by using the equations of zero-order hydrodynamics,
$\displaystyle\partial_{\mu}\langle\hat{T}^{\mu\nu}\rangle_{\rm{leq}}=0,$ (47)
$\displaystyle\partial_{\mu}\langle\hat{N}^{\mu}\rangle_{\rm{leq}}=0.$ (48)
Taking the scalar product of these equations with the four velocity $u^{\nu}$,
we get:
$\displaystyle
D\langle\hat{e}\rangle_{\rm{leq}}=-(\langle\hat{e}\rangle_{\rm{leq}}+\langle\hat{p}\rangle_{\rm{leq}})\theta,$
(49) $\displaystyle
D\langle\hat{n}\rangle_{\rm{leq}}=-\langle\hat{n}\rangle_{\rm{leq}}\theta.$
(50)
Note that the matching conditions ensure $e=\langle\hat{e}\rangle_{\rm{leq}}$ and $n=\langle\hat{n}\rangle_{\rm{leq}}$; we shall use these notations in the following paragraphs. A straightforward calculation then leads to:
$\displaystyle D\beta=\frac{\theta}{J}\Big{(}-\big{(}e+p\big{)}\frac{\partial
n}{\partial\alpha}+n\frac{\partial e}{\partial\alpha}\,\Big{)},$ (51)
$\displaystyle
D\alpha=\frac{\theta}{J^{\prime}}\Big{(}-\big{(}e+p\big{)}\frac{\partial
n}{\partial\beta}+n\frac{\partial e}{\partial\beta}\,\Big{)},$ (52)
$\displaystyle J=$ $\displaystyle\frac{\partial
e}{\partial\beta}\frac{\partial n}{\partial\alpha}-\frac{\partial
n}{\partial\beta}\frac{\partial e}{\partial\alpha},\quad
J^{\prime}=\frac{\partial e}{\partial\alpha}\frac{\partial
n}{\partial\beta}-\frac{\partial n}{\partial\alpha}\frac{\partial
e}{\partial\beta},$ (53)
with the derivative of the thermodynamic functions with respect to $\alpha$
calculated holding $\beta$ fixed and vice versa.
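For clarity, (51)-(53) are nothing but the solution, by Cramer's rule, of the linear system obtained from (49)-(50) by regarding $e=e(\beta,\alpha)$ and $n=n(\beta,\alpha)$:

$\displaystyle\frac{\partial e}{\partial\beta}D\beta+\frac{\partial e}{\partial\alpha}D\alpha=-(e+p)\theta,\qquad\frac{\partial n}{\partial\beta}D\beta+\frac{\partial n}{\partial\alpha}D\alpha=-n\theta.$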
Now Eq.(44) and (45) can be cast into:
$\displaystyle\hat{T}^{\mu\nu}\partial_{\mu}\beta_{\nu}=\beta\hat{\pi}^{\mu\nu}\partial_{\mu}u_{\nu}+\beta\hat{h}^{\mu}\left(\beta^{-1}\partial_{\mu}\beta+Du_{\mu}\right)$
$\displaystyle\qquad\qquad+\beta\hat{\tau}^{\mu\nu}\partial_{\mu}u_{\nu}+\beta\hat{q}^{\mu}\left(-\beta^{-1}\partial_{\mu}\beta+Du_{\mu}\right)$
$\displaystyle\qquad\qquad-\beta\hat{p}^{\prime}\theta,$ (54)
$\displaystyle\hat{N}^{\mu}\partial_{\mu}\alpha=\hat{j}^{\mu}_{(1)}\nabla_{\mu}\alpha+\frac{\hat{n}}{J^{\prime}}\Big{[}-\big{(}e+p\big{)}\frac{\partial
n}{\partial\beta}+n\frac{\partial e}{\partial\beta}\,\Big{]}\theta,$ (55)
$\displaystyle\hat{p}^{\prime}=\hat{p}-\hat{\Pi}-\frac{1}{J\beta}\Big{[}-\big{(}e+p\big{)}\frac{\partial
n}{\partial\alpha}+\,n\frac{\partial e}{\partial\alpha}\,\Big{]}\,\hat{e}.$
(56)
From now on, we handle the Kubo correlations to obtain the final results. In an isotropic medium, one can appeal to Curie’s principle, which has been used to simplify Eq.(38). Curie’s principle states that the correlation function between operators of different rank and spatial parity vanishes. The remaining Kubo correlations can be expressed in the comoving frame as:
$\displaystyle\left(\hat{h}_{k}^{\prime},\hat{h}_{l}^{\prime}\right)=L_{h^{\prime}}\delta_{kl},$
$\displaystyle\left(\hat{\pi}^{kl},\hat{\pi}^{mn}\right)=L_{\pi}\frac{1}{2}\left(\delta^{km}\delta^{ln}+\delta^{kn}\delta^{lm}-\frac{2}{3}\delta^{kl}\delta^{mn}\right),$
$\displaystyle\left(\hat{q}^{k},\hat{q}^{l}\right)=L_{q}\delta^{kl},$
$\displaystyle\left(\hat{\tau}^{kl},\hat{\tau}^{mn}\right)=L_{\tau}\frac{1}{2}\left(\delta^{km}\delta^{ln}-\delta^{kn}\delta^{lm}\right),$
(57)
with $\hat{h}^{\prime}_{\mu}=\hat{h}_{\mu}-\frac{e+p}{n}\hat{j}_{\mu}$, $\delta^{ij}$ being the Kronecker symbol, and $L_{i}$ scalar functions that can be determined by taking the trace on both sides of Eq.(57). We also conclude that the correlation between a symmetric tensor and an antisymmetric tensor is zero, so we need not consider the contribution of these cross terms. A simple explanation follows. First, assume that:
$\displaystyle\left(\hat{\pi}^{kl},\hat{\tau}^{mn}\right)\partial_{m}u_{n}=L_{1}\partial^{k}u^{l}+L_{2}\partial^{l}u^{k}+L_{3}\delta^{kl}\theta,$
(58)
with $L_{i}$ ($i=1,2,3$) being scalar functions. We can make such an ansatz because we have constrained the form of the current-force relation as in Eq.(19); it is this simple linear current-force relation that leads to the ansatz. Moreover, by the symmetry under exchange of $k$ and $l$, $L_{1}$ must equal $L_{2}$. Equivalently, we have:
$\displaystyle\left(\hat{\pi}^{kl},\hat{\tau}^{mn}\right)=L_{1}\left(\delta^{km}\delta^{ln}+\delta^{kn}\delta^{lm}\right)+L_{3}\delta^{kl}\delta^{mn}.$
(59)
Because the right-hand side of Eq.(59) is not antisymmetric under the exchange of $m$ and $n$, in conflict with the left-hand side, the correlation between the symmetric tensor and the antisymmetric tensor is exactly zero. We then boost to a general reference frame, substitute the Kronecker symbol $\delta^{\mu\nu}$ with $-\Delta^{\mu\nu}$, and take traces to obtain all the $L$ functions Zubarev (1974); Hosoya et al. (1984).
In order to extract the transport coefficients, we suppose that the changes of the thermodynamic forces within a correlation length are sufficiently small that we can factor them out of Eq.(38). Combining Eqs.(40), (41), (42), (54), (55), (56), and (57), we thus obtain the linear thermodynamic current-force relations:
$\displaystyle\langle\hat{\pi}^{\mu\nu}\rangle=2\eta\nabla^{\langle\mu}u^{\nu\rangle},$
(60) $\displaystyle\langle\hat{\Pi}\rangle=-\zeta\theta,$ (61)
$\displaystyle\langle\hat{h}^{\mu}\rangle-\frac{e+p}{n}\langle\hat{j}_{(1)}^{\mu}\rangle=-\kappa\frac{nT}{e+p}\nabla^{\mu}\frac{\mu}{T},$
(62)
$\displaystyle\langle\hat{\tau}^{\mu\nu}\rangle=2\gamma\Big{(}\omega_{th}^{\mu\nu}+2\Delta^{\mu\rho}\Delta^{\nu\sigma}\omega_{\rho\sigma}\Big{)},$
(63)
$\displaystyle\langle\hat{q}^{\mu}\rangle=\lambda\bigg{(}Du^{\mu}+\frac{\nabla^{\mu}T}{T}-4\omega^{\mu\nu}u_{\nu}\bigg{)},$
(64)
where the Gibbs–Duhem relation has been employed. By comparing these equations with Eqs.(19), (20), (21), (22) and (23), we conclude that we have reproduced the linear laws of first-order spin hydrodynamics with the method of statistical operators. Throughout the derivation, the fluid four-velocity is not specified, which means the results we obtain are frame independent up to first order in gradients. After the above manipulations, all the transport coefficients can be given in terms of Kubo correlation functions:
$\displaystyle\eta=$ $\displaystyle\frac{\beta}{10}\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{(}\hat{\pi}^{\mu\nu}(\bm{x},t),\hat{\pi}_{\mu\nu}(\bm{x}^{\prime},t^{\prime})\Big{)},$
(65) $\displaystyle\kappa=$ $\displaystyle\frac{-\beta}{3}\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{(}\hat{h}^{\prime\mu}(\bm{x},t),\hat{h}_{\mu}^{\prime}(\bm{x}^{\prime},t^{\prime})\Big{)},$
(66) $\displaystyle\zeta=$ $\displaystyle\beta\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{(}\hat{p}^{\star}(\bm{x},t),\hat{p}^{\star}(\bm{x}^{\prime},t^{\prime})\Big{)},$
(67) $\displaystyle\gamma=$ $\displaystyle\frac{-\beta}{6}\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{(}\hat{\tau}^{\mu\nu}(\bm{x},t),\hat{\tau}_{\mu\nu}(\bm{x}^{\prime},t^{\prime})\Big{)},$
(68) $\displaystyle\lambda=$ $\displaystyle\frac{-\beta}{3}\int
d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\Big{(}\hat{q}^{\mu}(\bm{x},t),\hat{q}_{\mu}(\bm{x}^{\prime},t^{\prime})\Big{)},$
(69)
with $\hat{p}^{*}=\hat{p}^{\prime}+\frac{\hat{n}}{\beta\theta}D\alpha$.
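As a bookkeeping check on the numerical prefactors (our own remark), contract the tensor structures in (57) with themselves: in three spatial dimensions the symmetric traceless structure has trace $5$, the antisymmetric one trace $3$, and the vector one trace $3$. For the shear channel, comparing $\langle\hat{\pi}^{\mu\nu}\rangle$ obtained from (38) with (60) gives $2\eta=\beta\int d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}L_{\pi}$, and dividing out the trace relation $(\hat{\pi}^{\mu\nu},\hat{\pi}_{\mu\nu})=5L_{\pi}$ reproduces the factor $\beta/10$ in (65); the prefactors in (66)-(69) arise in the same way, with the signs coming from the substitution $\delta^{\mu\nu}\rightarrow-\Delta^{\mu\nu}$.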
In the following paragraphs, we build up the connection between Kubo correlation functions and retarded Green’s functions; the discussion is similar to that in Huang et al. (2011). Direct evaluation of Eq.(39) leads to:
$\displaystyle\left(\hat{X}(\bm{x},t),\hat{Y}(\bm{x}^{\prime},t^{\prime})\right)$
$\displaystyle\equiv\int_{0}^{1}d\tau\Big{\langle}\hat{X}(\bm{x},t)\left[e^{-\hat{A}\tau}\hat{Y}(\bm{x}^{\prime},t^{\prime})e^{\hat{A}\tau}-\langle{\hat{Y}}(\bm{x}^{\prime},t^{\prime})\rangle_{\rm{leq}}\right]\Big{\rangle}_{\rm{leq}}$
$\displaystyle=\frac{i}{\beta}\int_{-\infty}^{t^{\prime}}ds\Big{\langle}\Big{[}{\hat{X}}(\bm{x},t),{\hat{Y}}(\bm{x}^{\prime},s)\Big{]}\Big{\rangle}_{\rm{leq}},$
(70)
where we have assumed in the last step that the correlation of the two operators vanishes in the distant past. In deriving Eq.(70), we have used the fact that $\hat{A}$ can be treated as a Hamiltonian operator. To see this, note three points. First, when we choose the local rest frame (comoving frame), the first term within the integrand of Eq.(31) is exactly $\beta\hat{H}$. Second, including the second term means we are working with the grand canonical ensemble; in finite-temperature field theory this term can always be added to the Hamiltonian to construct the grand canonical Hamiltonian operator. Third, the third term, related to the coupling of spin and vorticity, which is the covariant form of the scalar product of angular velocity and angular momentum $\bm{\omega}\cdot\bm{J}$ Becattini (2012), can also be included in the Hamiltonian of a rotating system. Combining these three considerations, we conclude that $e^{\hat{A}\tau}$ is a quantum mechanical evolution operator. Continuing this procedure:
$\displaystyle{\rm I}=\int d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\frac{i}{\beta}\int_{-\infty}^{t^{\prime}}ds\Big{\langle}\Big{[}{\hat{X}}(\bm{x},t),{\hat{Y}}(\bm{x}^{\prime},s)\Big{]}\Big{\rangle}_{\rm{leq}}$
$\displaystyle=-\int d^{3}\bm{x}^{\prime}\int_{-\infty}^{t}dt^{\prime}e^{\epsilon(t^{\prime}-t)}\frac{1}{\beta}\int_{-\infty}^{t^{\prime}}ds\,G_{R}(\bm{x}-\bm{x}^{\prime},t-s)$
$\displaystyle=\frac{i}{\beta}\lim_{\omega\rightarrow 0}\lim_{\bm{k}\rightarrow 0}\frac{\partial}{\partial\omega}G_{R}(\bm{k},\omega).$ (71)
In obtaining this equation, the definition of the retarded Green's function is
required:
$\displaystyle G_{\hat{A}\hat{B}}^{R}(\bm{x},t)\equiv-i\theta(t)\Big{\langle}\left[\hat{A}(\bm{x},t),\hat{B}({\bf 0},0)\right]\Big{\rangle}.$ (72)
So far we have shown that the Kubo correlation is exactly related to the
retarded Green's function. Because the formulae for the transport coefficients
all involve self-correlations, we keep our focus on this case. When $\hat{A}$
and $\hat{B}$ represent the same operator, the imaginary (real) part of the
retarded Green's function is an odd (even) function of $\omega$ according to
Onsager's reciprocal principle Zubarev (1974), so that:
$\displaystyle{\rm I}=-\frac{1}{\beta}\lim_{\omega\rightarrow 0}\lim_{\bm{k}\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{A}\hat{A}}(\bm{k},\omega).$ (73)
Collecting all the results we have obtained:
$\displaystyle\eta=-\frac{1}{10}\lim_{\omega\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{\pi}\hat{\pi}}(\bm{0},\omega),$ (74)
$\displaystyle\zeta=-\lim_{\omega\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{p}^{\star}\hat{p}^{\star}}(\bm{0},\omega),$ (75)
$\displaystyle\kappa=\frac{1}{3}\lim_{\omega\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{h}^{\prime}\hat{h}^{\prime}}(\bm{0},\omega),$ (76)
$\displaystyle\gamma=\frac{1}{6}\lim_{\omega\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{\tau}\hat{\tau}}(\bm{0},\omega),$ (77)
$\displaystyle\lambda=\frac{1}{3}\lim_{\omega\rightarrow 0}\frac{\partial}{\partial\omega}{\rm Im}\,G^{R}_{\hat{q}\hat{q}}(\bm{0},\omega).$ (78)
The operators appearing in the subscripts are all defined in the previous
paragraphs. The first three transport coefficients are consistent with the
results of Huang et al. (2011) and Hosoya et al. (1984). We note that there is
a factor of $2$ difference compared to the result for $\eta$ in Hosoya et al.
(1984), which is due to a different definition of the shear viscosity [see
Eq.(19)]. The last two transport coefficients are the new results we sought,
providing a description of the new transport properties of spinful fluids.
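As a minimal numerical illustration of how Eqs.(74)-(78) would be used in practice (a sketch, not part of the derivation above), a transport coefficient follows from the zero-frequency slope of the sampled imaginary part of the retarded Green's function. The Lorentzian spectral shape and all parameter values below are purely hypothetical:

```python
import numpy as np

def im_GR(omega, a=-2.0, gamma=0.5):
    """Hypothetical Lorentzian-like Im G^R(k=0, omega): odd in omega,
    with zero-frequency slope a / gamma^2."""
    return a * omega / (omega**2 + gamma**2)

omega = np.linspace(1e-4, 1e-2, 50)            # frequencies near zero
slope = np.polyfit(omega, im_GR(omega), 1)[0]  # d(Im G^R)/d(omega) at 0

eta = -slope / 10.0                            # shear viscosity, Eq. (74)
print(f"slope = {slope:.3f}, eta = {eta:.3f}")  # slope ~ -8, eta ~ 0.8
```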
## IV Summary and Outlook
We have evaluated Kubo formulae for the transport coefficients arising in
first-order spin hydrodynamics based on the approach of the non-equilibrium
statistical operator. We apply Zubarev's statistical operator method to
linearize the non-equilibrium corrections and study how a thermal system
responds to such linear perturbations. The Kubo formulae we obtained are
related to equilibrium (imaginary-time) infrared Green's functions. Given a
specific microscopic theory, the imaginary-time Green's functions can be
formulated in finite-temperature field theory. Thus, by analytical
continuation, the real-time retarded Green's functions can be calculated to
obtain final results for these transport coefficients, which in turn are the
basis of numerical simulations of the evolution of spinful fluids. Accounting
for the spin degrees of freedom, one would need to perform the calculation
based on the theory of a spinor or vector field, which would be a non-trivial
extension of that based on a scalar field theory Jeon (1993). It would then be
interesting to see to what extent the results agree with the calculation of
Montenegro and Torrieri (2020), which is also based on linear response theory.
On the other hand, transport methods offer an efficient way to determine these
coefficients if the Green's functions show good behavior during some period of
the evolution of the system Arnold and Yaffe (1998). In that case, we are able
to perform simulations within a transport model like Greif et al. (2014); Chen
et al. (2019, 2020). A more straightforward way to calculate the transport
coefficients is to linearize quantum transport equations, bypassing the Kubo
formulation. Such a study will be performed in future work.
## V Acknowledgments
J.H. is grateful to Guojun Huang for helpful discussions and to Shuzhe Shi,
Zhengyu Chen and Ziyue Wang for reading the manuscript and valuable comments
and discussions. This work was supported by the NSFC Grant No.11890710 and
No.11890712.
## References
* Adamczyk et al. (2017) L. Adamczyk et al. (STAR), Nature 548, 62 (2017), eprint 1701.06657.
* Alpatov (2020) E. Alpatov (for the STAR Collaboration), J. Phys. Conf. Ser. 1690, 012120 (2020).
* Wei et al. (2019) D.-X. Wei, W.-T. Deng, and X.-G. Huang, Phys. Rev. C 99, 014905 (2019), eprint 1810.00151.
* Karpenko and Becattini (2017) I. Karpenko and F. Becattini, Eur. Phys. J. C 77, 213 (2017), eprint 1610.04717.
* Csernai et al. (2019) L. Csernai, J. Kapusta, and T. Welle, Phys. Rev. C 99, 021901 (2019), eprint 1807.11521.
* Li et al. (2017) H. Li, L.-G. Pang, Q. Wang, and X.-L. Xia, Phys. Rev. C 96, 054908 (2017), eprint 1704.01507.
* Bzdak (2017) A. Bzdak, Phys. Rev. D 96, 056011 (2017), eprint 1703.03003.
* Shi et al. (2019) S. Shi, K. Li, and J. Liao, Phys. Lett. B 788, 409 (2019), eprint 1712.00878.
* Sun and Ko (2017) Y. Sun and C. M. Ko, Phys. Rev. C 96, 024906 (2017), eprint 1706.09467.
* Ivanov et al. (2020) Y. B. Ivanov, V. D. Toneev, and A. A. Soldatov, Phys. Atom. Nucl. 83, 179 (2020), eprint 1910.01332.
* Xie et al. (2017) Y. Xie, D. Wang, and L. P. Csernai, Phys. Rev. C 95, 031901 (2017), eprint 1703.03770.
* Adam et al. (2019) J. Adam et al. (STAR), Phys. Rev. Lett. 123, 132301 (2019), eprint 1905.11917.
* Adam et al. (2018) J. Adam et al. (STAR), Phys. Rev. C 98, 014910 (2018), eprint 1805.04400.
* Becattini and Karpenko (2018) F. Becattini and I. Karpenko, Phys. Rev. Lett. 120, 012302 (2018), eprint 1707.07984.
* Xia et al. (2018) X.-L. Xia, H. Li, Z.-B. Tang, and Q. Wang, Phys. Rev. C 98, 024905 (2018), eprint 1803.00867.
* Liu and Huang (2020) Y.-C. Liu and X.-G. Huang, Nucl. Sci. Tech. 31, 56 (2020), eprint 2003.12482.
* Gao et al. (2020) J.-H. Gao, G.-L. Ma, S. Pu, and Q. Wang, Nucl. Sci. Tech. 31, 90 (2020), eprint 2005.10432.
* Florkowski et al. (2018) W. Florkowski, B. Friman, A. Jaiswal, and E. Speranza, Phys. Rev. C 97, 041901 (2018), eprint 1705.00587.
* Becattini and Tinti (2010) F. Becattini and L. Tinti, Annals Phys. 325, 1566 (2010), eprint 0911.0864.
* Montenegro et al. (2017) D. Montenegro, L. Tinti, and G. Torrieri, Phys. Rev. D 96, 056012 (2017), [Addendum: Phys.Rev.D 96, 079901 (2017)], eprint 1701.08263.
* Florkowski et al. (2019) W. Florkowski, A. Kumar, and R. Ryblewski, Prog. Part. Nucl. Phys. 108, 103709 (2019), eprint 1811.04409.
* Hattori et al. (2019) K. Hattori, M. Hongo, X.-G. Huang, M. Matsuo, and H. Taya, Phys. Lett. B 795, 100 (2019), eprint 1901.06615.
* Fukushima and Pu (2020a) K. Fukushima and S. Pu (2020a), eprint 2010.01608.
* Xu et al. (2008) Z. Xu, C. Greiner, and H. Stocker, J. Phys. G 35, 104016 (2008), eprint 0807.2986.
* Xu and Greiner (2008) Z. Xu and C. Greiner, Phys. Rev. Lett. 100, 172301 (2008), eprint 0710.5719.
* Zubarev (1974) D. N. Zubarev, _Nonequilibrium Statistical Thermodynamics_ (Plenum, New York, 1974).
* Hosoya et al. (1984) A. Hosoya, M.-a. Sakagami, and M. Takao, Annals Phys. 154, 229 (1984).
* Horsley and Schoenmaker (1987) R. Horsley and W. Schoenmaker, Nucl. Phys. B 280, 716 (1987).
* Landau and Lifshitz (1987) L. Landau and E. Lifshitz, _Fluid Mechanics_ (Butterworth Heinemann, Oxford, UK, 1987), second edition ed.
* Fukushima and Pu (2020b) K. Fukushima and S. Pu (2020b), eprint 2001.00359.
* Israel and Stewart (1979) W. Israel and J. Stewart, Annals Phys. 118, 341 (1979).
* Becattini (2012) F. Becattini, Phys. Rev. Lett. 108, 244502 (2012), eprint 1201.5278.
* Becattini et al. (2019) F. Becattini, W. Florkowski, and E. Speranza, Phys. Lett. B 789, 419 (2019), eprint 1807.10994.
* de Groot and Mazur (2011) S. de Groot and P. Mazur, _Non-Equilibrium Thermodynamics_ (Dover Publications, Inc., New York, 2011).
* Huang et al. (2011) X.-G. Huang, A. Sedrakian, and D. H. Rischke, Annals Phys. 326, 3075 (2011), eprint 1108.0602.
* Jeon (1993) S. Jeon, Phys. Rev. D 47, 4586 (1993), eprint hep-ph/9210227.
* Montenegro and Torrieri (2020) D. Montenegro and G. Torrieri, Phys. Rev. D 102, 036007 (2020), eprint 2004.10195.
* Arnold and Yaffe (1998) P. B. Arnold and L. G. Yaffe, Phys. Rev. D 57, 1178 (1998), eprint hep-ph/9709449.
* Greif et al. (2014) M. Greif, I. Bouras, C. Greiner, and Z. Xu, Phys. Rev. D 90, 094014 (2014), eprint 1408.7049.
* Chen et al. (2019) Z. Chen, C. Greiner, Z. Xu, and P. Zhuang, Phys. Rev. C 100, 014906 (2019), eprint 1806.07594.
* Chen et al. (2020) Z. Chen, C. Greiner, A. Huang, and Z. Xu, Phys. Rev. D 101, 056020 (2020), eprint 1910.13721.
# Spin liquid behavior of a three-dimensional magnetic system Ba3NiIr2O9 with
$S$ = 1
Siddharth Kumar Department of Physics, Indian Institute of Science, Bengaluru
560012, India S. K. Panda Department of Physics, Bennett University, Greater
Noida 201310, Uttar Pradesh, India Manju Mishra Patidar UGC-DAE Consortium
for Scientific Research, University Campus, Khandwa Road, Indore 452017, India
Shashank Kumar Ojha Department of Physics, Indian Institute of Science,
Bengaluru 560012, India Prithwijit Mandal Department of Physics, Indian
Institute of Science, Bengaluru 560012, India Gangadhar Das Chemistry and
Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific
Research, Jakkur, Bengaluru, 560064 India J. W. Freeland Advanced Photon
Source, Argonne National Laboratory, Argonne, Illinois 60439, USA V. Ganesan
UGC-DAE Consortium for Scientific Research, University Campus, Khandwa Road,
Indore 452017, India Peter J. Baker ISIS Pulsed Neutron and Muon Source,
STFC Rutherford Appleton Laboratory, Harwell Campus, Didcot OX11 0QX, United
Kingdom S. Middey<EMAIL_ADDRESS>Department of Physics, Indian
Institute of Science, Bengaluru 560012, India
###### Abstract
The quantum spin liquid (QSL) is an exotic phase of magnetic materials where
the spins continue to fluctuate without any symmetry breaking down to zero
temperature. Among the handful of reports of QSL with spin $S\geq$ 1, examples
with magnetic ions on a three-dimensional magnetic lattice are extremely rare,
since both larger spin and higher dimension tend to suppress quantum
fluctuations. In this work, we offer a new strategy to achieve 3-D QSL with
high spin by utilizing two types of transition metal ions, both magnetically
active but located at crystallographically inequivalent positions. We design a
3-D magnetic system Ba3NiIr2O9 consisting of interconnected corner-shared NiO6
octahedra and face-shared Ir2O9 dimers, both having triangular arrangements in
the a-b plane. X-ray absorption spectroscopy measurements confirm the presence
of Ni2+ ($S$=1). Our detailed thermodynamic and magnetic measurements reveal
that this compound is a realization of a gapless QSL state down to at least
100 mK. Ab-initio calculations find a strong magnetic exchange between the Ir
and Ni sublattices and in-plane antiferromagnetic coupling between the dimers,
resulting in dynamically fluctuating magnetic moments on both the Ir and Ni
sublattices.
Experimental realization and theoretical description of the highly entangled
quantum spin liquid phase remain challenging topics of quantum many-body
physics Anderson (1973). Over the last fifteen years, several spin-1/2 systems
with two-dimensional frustrated lattices have been reported as probable
candidates for QSL behavior Balents (2010); Savary and Balents (2016); Zhou,
Kanoda, and Ng (2017); Knolle and Moessner (2019); Broholm _et al._ (2020).
Since experimental reports of QSL with either spin $(S)\geq$ 1 Cheng _et al._
(2011); Chamorro _et al._ (2018) or a three-dimensional arrangement of spins
Okamoto _et al._ (2007a); Koteswararao _et al._ (2014); Balz _et al._
(2016); Gao _et al._ (2019); Chillal _et al._ (2020) are very few, it can
easily be anticipated that the chance of having a QSL with both higher spin
and 3-D geometry is extremely low Plumb _et al._ (2019). Many of the attempts
to obtain spin-1 QSL have focused on stabilizing Ni within various structural
networks Nakatsuji _et al._ (2005); Cheng _et al._ (2011); Lu _et al._
(2018); Chamorro _et al._ (2018); Plumb _et al._ (2019); Medrano _et al._
(2018). However, the magnetic behavior of these $S$ = 1 systems at low
temperature differs widely from one another, even in compounds with similar
structural geometry. For example, unlike the well-known 120∘ spin structure
Starykh (2015) observed in $A_{3}$NiNb2O9 Lu _et al._ (2018), Ba3NiSb2O9
shows characteristic spin liquid behavior Cheng _et al._ (2011); Fåk _et
al._ (2017); Quilliam _et al._ (2016), whereas NiGa2S4 hosts a spin nematic
phase Bhattacharjee, Shenoy, and Senthil (2006). The interaction of such a
Ni-based $S$ = 1 triangular lattice with another magnetically active
sublattice might result in an exotic magnetic phase in three-dimensional
materials. However, only very few 3-D compounds with such feasibility exist
Treiber, Kemmler-Sack, and Ehmann (1982); Lightfoot and Battle (1990);
Ferreira _et al._ (2018).
Six-layered hexagonal (6$H$) perovskite $A_{3}MM^{\prime}_{2}$O9 (Fig. 1(a))
with magnetic $M$ and $M^{\prime}$ ions constitutes a 3-D spin system. Both
the $M$O6 octahedral units and the face-shared $M^{\prime}_{2}$O9 dimers form
a triangular lattice in the $a$-$b$ plane (Fig. 1(b)-(c)) and would become
geometrically frustrated in the presence of antiferromagnetic interaction.
Moreover, the $M$-O-$M^{\prime}$ connectivity constitutes a buckled honeycomb
lattice (Fig. 1(d)), which could host a Kitaev spin liquid phase in the case
of spin-1/2 ions with strong spin-orbit coupling (SOC) Kitaev (2006); Takagi
_et al._ (2019). In search of the elusive SOC-driven nonmagnetic $J$ = 0
state and excitonic magnetism in $d^{4}$ systems, several Ba${}_{3}M$Ir2O9
compounds with nonmagnetic $M^{2+}$ have been investigated recently Khaliullin
(2013); Nag _et al._ (2016, 2018a); Khan _et al._ (2019). However, the
comparable strengths of SOC, non-cubic crystal field, Hund's coupling, and
superexchange interaction give rise to a small but finite moment on the Ir
sublattice. Moreover, interdimer hopping results in a spin-orbital liquid
phase in Ba3ZnIr2O9 Nag _et al._ (2016). Replacing the nonmagnetic Zn2+ by an
isovalent magnetic ion such as Ni2+ should provide a unique opportunity to
investigate the magnetic response of a triangular lattice with $S$ = 1 in the
presence of an interconnected Ir sublattice with fluctuating magnetic moments.
If both the Ni and Ir moments of Ba3NiIr2O9 fluctuate dynamically, it would
offer a new route to realizing 3-D QSL by involving two different magnetic
ions. It would also be a 3-D QSL with a new type of structural network
compared with all existing examples with pyrochlore Gao _et al._ (2019);
Plumb _et al._ (2019), hyperkagome Okamoto _et al._ (2007a), and
hyper-hyperkagome structures Koteswararao _et al._ (2014); Chillal _et al._
(2020).
Figure 1: (a) Unit cell of 6$H$ $A_{3}MM^{\prime}$O9 without any disorder.
(b) and (c) show the triangular lattice arrangement of $M^{\prime}_{2}$O9
dimers and $M$O6 octahedra, respectively, in the a-b plane. (d) The
$M$-O-$M^{\prime}$ connectivity forms a buckled honeycomb lattice. $A$, $M$
and $M^{\prime}$ correspond to Ba, Ni, and Ir, respectively, for BNIO.
Different magnetic exchange pathways ($J_{i}$) are also shown in (a)-(d). (e)
Observed and refined powder XRD pattern of BNIO. (f) XAS spectrum of the Ni
$L_{3,2}$-edge of BNIO along with NiO and NdNiO3 for comparison. The XAS data
for NdNiO3 have been adapted from Ref. Freeland, Van Veenendaal, and
Chakhalian, 2016.
In this paper, we report on the electronic and magnetic behavior of Ba3NiIr2O9
(BNIO). The phase purity and the absence of any cationic site disorder have
been demonstrated by powder X-ray diffraction (XRD) measurements. X-ray
absorption spectroscopy (XAS) experiments have confirmed the desired +2
oxidation state of Ni. Persistent spin fluctuations down to 100 mK have been
revealed by magnetization, specific heat, and muon spin-rotation ($\mu$SR)
measurements. We have also investigated BNIO by density functional theory
calculations including Hubbard $U$ and SOC within the framework of the
LSDA+$U$ (local spin density approximation + $U$) approach. We have found not
only appreciable magnetic exchange between the Ni and Ir sublattices but also
antiferromagnetic coupling within the triangular sublattice of Ir. This
geometrical frustration prohibits any long-range magnetic ordering and makes
BNIO a rare example of a three-dimensional QSL involving $S$ = 1.
## Results
Polycrystalline BNIO was synthesized by a solid state synthesis route. Members
of the $A_{3}MM^{\prime}_{2}$O9 series can have site disorder between
face-shared and corner-shared octahedra Middey _et al._ (2011). It is also
well known that structural disorder often jeopardizes QSL behavior, resulting
in magnetic order or spin glass freezing Zhong _et al._ (2019). All peaks of
the powder XRD pattern of BNIO (Fig. 1(e)) can be indexed and refined with the
6$H$ structure having space group $P6_{3}/mmc$. The refinement also confirms
that all corner-shared (face-shared) octahedral units are occupied by Ni (Ir)
without any Ni-Ir site disorder. The structural parameters obtained from the
refinement are listed in SM sup . Temperature dependent XRD measurements down
to 15 K also rule out any structural transition. Having confirmed that both
the Ni and Ir ions form triangular lattices in the a-b plane without any
disorder, we want to verify whether Ni indeed has an $S$ = 1 state. For this
purpose, XAS measurements were carried out Freeland, Van Veenendaal, and
Chakhalian (2016). The comparison of the Ni $L_{3,2}$ XAS line shape and
energy position of BNIO, Ni2+O, and NdNi3+O3 (Fig. 1(f)) testifies to the
desired +2 oxidation state of Ni in the present case. The octahedral crystal
field of Ni2+ ($d^{8}$: $t_{2g}^{6}$, $e_{g}^{2}$) ensures $S$ = 1 on the Ni
sublattice. Electrical measurements demonstrate the insulating nature of the
sample (inset of Fig.2 (a)), which can be fitted using Mott's variable range
hopping (VRH) Hill (1976) model in three dimensions
($\rho=\rho_{o}\exp(T_{o}/T)^{1/4}$), as shown in Fig.2 (a). We also note that
the insulating behavior of Ba3ZnIr2O9 is well described by VRH in two
dimensions. This difference between the two compounds is a manifestation of
electron hopping paths along the Ni-O-Ir bonds. Our electronic structure
calculations (see SM sup ) further demonstrate that the insulating state can
be obtained only by considering both electron correlation and SOC, implying
that BNIO is a SOC-driven Mott insulator Kim _et al._ (2008).
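For illustration, a minimal fitting sketch of the 3-D Mott VRH law quoted above is given below; the data are synthetic, and $\rho_{0}$, $T_{0}$, and the noise level are assumed values rather than the measured resistivity of BNIO:

```python
import numpy as np
from scipy.optimize import curve_fit

def vrh_3d(T, rho0, T0):
    """3-D Mott VRH: rho = rho0 * exp((T0 / T)^(1/4))."""
    return rho0 * np.exp((T0 / T) ** 0.25)

# synthetic "measurement" with assumed rho0, T0 and 5% multiplicative noise
T = np.linspace(50, 300, 60)
rng = np.random.default_rng(0)
rho = vrh_3d(T, 1e-3, 2e5) * (1 + 0.05 * rng.standard_normal(T.size))

popt, _ = curve_fit(vrh_3d, T, rho, p0=(1e-3, 1e5))
print(f"rho_0 = {popt[0]:.2e}, T_0 = {popt[1]:.2e} K")
```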
Figure 2: (a) Fit of $\rho$ by the 3-D variable range hopping model for BNIO.
Inset shows $\rho$ vs. $T$. (b) $\chi$ vs. $T$ on the left axis and
$1/(\chi-\chi_{0})$ along with the fit on the right axis. (c) $M$-$H$ at 2 K.
The temperature dependent field-cooled and zero-field-cooled magnetic
susceptibilities do not differ from each other and also do not show any
anomaly (Fig. 2(b)). This strongly implies the absence of any long-range
magnetic ordering and spin glass behavior. We have fitted the data by a
modified Curie-Weiss law ($\chi$ = $\chi_{0}$ \+ $\frac{C_{W}}{T-\theta_{CW}}$),
where $\chi_{0}$, $C_{W}$ and $\theta_{CW}$ represent the temperature
independent susceptibility contribution, the Curie constant and the
Curie-Weiss temperature, respectively. The fit, shown as a plot of
1/($\chi$-$\chi_{0}$) vs. $T$ on the right axis of Fig. 2(b), yields
$\theta_{CW}\sim$ -15 K. The negative value of the Curie-Weiss temperature
signifies a net antiferromagnetic interaction among the spins of BNIO. The
relatively small magnitude of $\theta_{CW}$ is related to the presence of
multiple exchange pathways, which will be discussed in a later part of this
paper. The effective magnetic moment $\mu_{eff}$ (= $\sqrt{8C_{W}}$) is found
to be $\sim$ 3.65 $\mu_{B}$ from the fit. Interestingly, the effective
magnetic moment of the similar compound Ba3NiSb2O9, with nonmagnetic Sb5+, was
reported to be around 3.54 $\mu_{B}$ Cheng _et al._ (2011). This gives an
estimate of the $g$-factor $\sim$ 2.5, similar to other Ni2+ based systems
Carlin (1986). If we assume a similar value for the present compound, the
effective Ir moment turns out to be 0.9 $\mu_{B}$ per dimer (=
$\sqrt{\mu_{eff}^{2}-\mu_{Ni}^{2}}$), i.e., 0.64 $\mu_{B}$/Ir. This is very
similar to the Ir moment (0.5 - 0.6 $\mu_{B}$) reported for Ba3MgIr2O9 Nag
_et al._ (2018a), though a nonmagnetic $J$ = 0 state is expected for Ir5+ from
a pure ionic picture. Thus, our analysis highlights that both Ni2+ and Ir5+
participate in the magnetism of the BNIO compound. The magnetic moments
estimated from our LSDA+U+SOC calculations (shown in SM sup ) are also in good
agreement with our experimental results. Fig. 2 (c) shows the $M$-$H$ curve
measured at 2 K between $\pm$ 9 T. The absence of any hysteresis again
confirms the absence of ferromagnetism and spin glass freezing at 2 K. The
presence of antiferromagnetic interaction without any long-range magnetic
ordering or spin glass transition strongly indicates that BNIO is a favorable
QSL candidate.
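The moment arithmetic of the preceding paragraph can be reproduced in a few lines; the following sketch takes the quoted values $\mu_{eff}$ = 3.65 $\mu_{B}$ and $\mu_{Ni}$ = 3.54 $\mu_{B}$ as inputs (it is not a fit to the raw susceptibility data):

```python
import numpy as np

mu_eff = 3.65     # total effective moment from the fit (mu_B)
mu_Ni_ref = 3.54  # Ba3NiSb2O9 value, Ni2+ only (mu_B)
S = 1.0

g = mu_Ni_ref / np.sqrt(S * (S + 1))             # g-factor estimate ~ 2.5
mu_Ir_dimer = np.sqrt(mu_eff**2 - mu_Ni_ref**2)  # moments add in quadrature
mu_Ir = mu_Ir_dimer / np.sqrt(2)                 # two Ir ions per dimer

print(f"g ~ {g:.2f}")  # ~ 2.50
# ~0.89 mu_B/dimer and ~0.63 mu_B/Ir (0.9 and 0.64 after rounding in the text)
print(f"Ir moment ~ {mu_Ir_dimer:.2f} mu_B/dimer ~ {mu_Ir:.2f} mu_B/Ir")
```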
Figure 3: (a) $C_{p}$ vs. $T$ curves at various fields. (b) Magnetic specific
heat ($C_{m}$) extracted by subtracting the lattice contribution. (c) Low
temperature part of $C_{m}$ on a log-log plot. (d) Magnetic entropy at zero
field.
In order to further investigate the magnetic nature of the sample, we have
measured the specific heat ($C_{p}$) from 100 mK to 200 K. $C_{p}$ not only
probes the presence/absence of any long-range magnetic ordering but also
provides crucial information about the nature of the low energy excitations.
The absence of any $\lambda$-anomaly (Fig. 3(a)) again confirms the absence of
long-range order and/or any structural transition down to 100 mK, consistent
with the magnetic measurements and XRD results, respectively. For an
insulating compound with magnetic spins, $C_{p}$ consists of the lattice
specific heat ($C_{lat}$) and the magnetic specific heat ($C_{m}$). In the
absence of any analogous non-magnetic compound, the contribution of $C_{lat}$
has been evaluated by fitting $C_{p}$ in the 30 K - 200 K range by a
Debye-Einstein equation with one Debye term and two Einstein terms (details
are in SM sup ) and extrapolating the fitted curve down to 100 mK. A broad
peak is observed around 7 K in the $C_{m}$ vs. $T$ plot (Fig. 3(b)). We cannot
capture this feature by a Schottky anomaly arising from energy level splitting
(see SM sup ). On the other hand, such a feature is commonly observed in spin
liquid materials and thus could be considered a signature of the crossover
from a thermally disordered paramagnet to a quantum disordered spin liquid
state Okamoto _et al._ (2007b); Balents (2010); Li _et al._ (2015); Dey _et
al._ (2017). The position of this broad peak shifts negligibly with the
application of a magnetic field (by $\sim$ 1 K for an applied field of 12 T).
At low temperature, $C_{m}$ follows the power-law behavior $C_{m}$ = $\gamma
T^{\alpha}$ (Fig. 3(c)). For zero field, the magnitude of the coefficient
$\gamma$ is 45 mJ/mol K2 and the exponent $\alpha$ is 1.0$\pm$0.05 within the
0.1 K - 0.6 K range. The value of $\gamma$ is very similar to other gapless
spin liquid candidates, like Ba3CuSb2O9 (43.4 mJ/mol K2) Zhou _et al._
(2011), Ba2YIrO6 (44 mJ/mol K2) Nag _et al._ (2018b), Ba3ZnIr2O9 (25.9
mJ/mol K2) Nag _et al._ (2016), and Sr2Cu(Te0.5W0.5)O6 (54.2 mJ/mol K2)
Mustonen _et al._ (2018). The linear-$T$ behavior with nonzero $\gamma$ in an
insulator arises from gapless spinon excitations with a Fermi surface and has
been reported in several organic and inorganic spin liquid candidates
Yamashita _et al._ (2008, 2011); Zhou _et al._ (2011); Cheng _et al._
(2011); Mustonen _et al._ (2018); Clark _et al._ (2014); Uematsu and
Kawamura (2019). Otherwise, any gapped excitation would result in an
exponential dependence of $C_{m}$ on $T$. $\alpha$ becomes 2.6$\pm$0.05 within
0.8 K - 2.1 K. Interestingly, the application of an external field destroys
the linear-$T$ behavior of $C_{m}$. For $\mu_{0}H$ = 4 T, $\alpha$ becomes 2
for 0.15 K $\leq T\leq$ 0.50 K and 2.9 for 0.5 K $\leq T\leq$ 2.4 K. We note
that $\alpha$ has been found to lie between 2 and 3 for several spin nematic
phases Nakatsuji _et al._ (2005, 2007); Povarov _et al._ (2019); Kohama
_et al._ (2019). Further studies are necessary to investigate the possibility
of a transition from the spin liquid to a spin nematic phase under an applied
magnetic field. The amount of released magnetic entropy ($S_{m}$), evaluated
by integrating $C_{m}/T$ with respect to $T$, is shown in Fig. 3(d). For BNIO,
the entropy saturates at 6.9 J/mol K in the zero field measurement, which is
only 75% of the total entropy expected even for an $S=1$ system
[$R$ln(2$S$+1), where $R$ is the universal gas constant]. The retention of a
large amount of entropy at low temperature is another signature of the
spin-liquid nature of BNIO, as has been reported for many other QSLs Zhou
_et al._ (2011); Cheng _et al._ (2011); Mustonen _et al._ (2018); Yamashita
_et al._ (2008).
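A minimal sketch of the power-law and entropy analysis described above is given below; the specific-heat curve is synthetic, generated from the quoted zero-field parameters $\gamma$ = 45 mJ/mol K2 and $\alpha$ = 1, rather than the measured $C_{m}$:

```python
import numpy as np

R = 8.314  # gas constant, J / (mol K)

# synthetic low-T magnetic specific heat: gamma = 45 mJ/mol K^2, alpha = 1
T = np.linspace(0.1, 0.6, 30)  # K
Cm = 0.045 * T                 # J / (mol K)

# power-law exponent and prefactor from a log-log linear fit
alpha, log_gamma = np.polyfit(np.log(T), np.log(Cm), 1)
print(f"alpha = {alpha:.2f}, gamma = {np.exp(log_gamma) * 1e3:.1f} mJ/mol K^2")

# entropy released over this window, S_m = int (C_m / T) dT
Sm = np.trapz(Cm / T, T)
print(f"S_m = {Sm:.3f} J/mol K  vs  R ln 3 = {R * np.log(3):.2f} J/mol K")
```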
To further understand the magnetic behavior at low temperature, we have
performed $\mu$SR measurements, a very sensitive local probe that can detect
weak magnetic fields, even of the order of 0.1 Oe Blundell (1999). Fig. 4(a)
shows asymmetry vs. time curves from zero-field (ZF) measurements at selected
temperatures. Neither any oscillation nor the characteristic one-third
recovery of the asymmetry is observed down to 60 mK, strongly implying the
absence of long-range magnetic ordering or spin freezing. For a system with
two interacting spin networks, the local magnetic field felt by a muon at a
stopping site receives contributions from both magnetic sublattices. In such
cases, the muon relaxation function is generally described by a product of two
response functions, representing the local fields from the two spin networks
Uemura _et al._ (1985). However, our attempts along these lines, considering
different possible combinations of relaxation functions, including a spin
glass like relaxation Morenzoni _et al._ (2008); Uemura _et al._ (1985),
did not provide a satisfactory fit of the experimentally observed data (see SM
sup ). This further supports the absence of spin glass freezing in the present
case.
Figure 4: (a) Asymmetry vs. time curves at various temperatures taken at zero
magnetic field and fitted curves (solid lines) using equation 1. (b) Variation
of $\nu$ with temperature (the shaded area is a guide to the eye). Inset shows
the variation of $\nu$ with applied LF at 100 mK.
Interestingly, similar to the other hexagonal Ba${}_{3}M$Ir2O9 compounds Nag
_et al._ (2016, 2018a), these asymmetry curves consist of one fast relaxing,
one moderately relaxing, and one hardly relaxing component. We have fitted
these curves using a combination of two dynamical relaxation functions with a
common fluctuation rate $\nu$ and a Kubo-Toyabe function (KT) Hayano _et al._
(1979),
$A(t)=A_{1}G(t,\Delta H_{1},\nu)+A_{2}G(t,\Delta H_{2},\nu)+A_{3}KT(t,\delta)$ (1)
where $A_{1}$, $A_{2}$, $A_{3}$ are amplitudes. The static KT function,
corresponding to the hardly relaxing component, accounts for the muons
stopping in the silver sample holder, as we find that the relaxation curve
from the bare sample holder can also be described by a KT function with a
similar $\delta$. The dynamical relaxation, arising from the presence of a
field distribution ($\Delta H$) with a fluctuation rate ($\nu$), is
represented by the Keren function $G(t,\Delta H,\nu)$ Keren (1994). The
presence of two dynamical relaxations implies two inequivalent muon stopping
sites, which are likely related to the two types of crystallographically
inequivalent oxygen in hexagonal Ba${}_{3}M$Ir2O9 Nag _et al._ (2016,
2018a). The asymmetry data over a large temperature range (60 mK - 150 K)
have been fitted by allowing $\nu$ to vary with $T$, and the extracted values
of $\nu$ are shown as a function of $T$ in Fig. 4(b). The background
contribution differs between measurements in the dilution refrigerator and
the helium cryostat and has been kept fixed for our analysis within the
corresponding temperature range. The inequality $\nu>\gamma\Delta H$
($\gamma$ = muon gyromagnetic ratio = 2$\pi\times$135.5 Mrad s-1 T-1) holds
for both relaxing components, as $\gamma\Delta H_{1}\sim$ 0.425 MHz and
$\gamma\Delta H_{2}\sim$ 0.09 MHz at the lowest temperature of our
measurement (60 mK). This justifies the use of dynamical relaxation functions
and establishes the spin liquid nature of BNIO. We note that the value of
$\nu$ ($\sim$ 4 MHz) at low temperature for BNIO is one order of magnitude
smaller than for Ba3ZnIr2O9 Nag _et al._ (2016) and is likely related to the
involvement of the large spins on the Ni sublattice.
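For concreteness, a sketch of the fit model of Eq. (1) follows. The static Gaussian Kubo-Toyabe form is standard; the dynamical component is approximated here by the Abragam form, a common stand-in for the Keren function in the fast-fluctuation regime, and all amplitudes and parameter values are illustrative assumptions rather than the fitted values:

```python
import numpy as np

GAMMA_MU = 2 * np.pi * 135.5e6  # muon gyromagnetic ratio, rad s^-1 T^-1

def kubo_toyabe(t, delta):
    """Static Gaussian Kubo-Toyabe function (zero field); delta in rad/s."""
    x = (delta * t) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-x / 2.0)

def dynamic_relax(t, dH, nu):
    """Abragam approximation to dynamical relaxation (field width dH in
    Tesla, fluctuation rate nu in s^-1); stand-in for the Keren function."""
    delta = GAMMA_MU * dH
    return np.exp(-2.0 * delta**2 * (np.exp(-nu * t) - 1.0 + nu * t) / nu**2)

def asymmetry(t, A1, A2, A3, dH1, dH2, nu, delta_bg):
    """Eq. (1): two dynamical components sharing nu, plus background KT."""
    return (A1 * dynamic_relax(t, dH1, nu)
            + A2 * dynamic_relax(t, dH2, nu)
            + A3 * kubo_toyabe(t, delta_bg))

# illustrative evaluation: dH1 = 5 G, dH2 = 1 G, nu = 4 MHz
t = np.linspace(0.0, 10e-6, 200)  # seconds
A = asymmetry(t, 0.10, 0.08, 0.05, dH1=5e-4, dH2=1e-4, nu=4e6, delta_bg=0.3e6)
```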
$\mu$SR spectra recorded at 100 mK in the presence of an applied longitudinal
field (LF) have further corroborated the QSL nature of BNIO. In the case of
relaxation arising from a static internal field distribution of width $\Delta
H_{i}$, an applied LF $\sim$ 5-10$\Delta H_{i}$ would completely suppress the
relaxation. From the analysis of the ZF $\mu$SR data, we found $\Delta
H_{1}\sim$ 5 Gauss and $\Delta H_{2}\sim$ 1 Gauss. No such decoupling is
observed in the present case in measurements up to 200 Gauss (see inset of
Fig. 4(b) and SM sup ), establishing the dynamic nature of the spins in BNIO
down to at least 100 mK.
## Discussion
To understand the underlying mechanism of the observed QSL state, we estimated
the inter-atomic magnetic exchange interactions from the converged
LSDA+$U$+SOC calculations using the formalism of Ref. Kvashnin _et al._ ,
2015a (see the Method section and SM sup for details). As shown in Table 1,
the strongest interaction is antiferromagnetic and acts between the Ir ions of
the structural dimer. The strong Ir-Ni interaction further testifies to the
three-dimensional nature of BNIO. Most importantly, the Ir-Ir exchange in the
a-b plane ($J_{4}$) is found to be antiferromagnetic, resulting in in-plane
magnetic frustration and explaining the origin of the QSL behavior of the
present system. However, the presence of strong Ir-Ni and Ni-Ni ferromagnetic
exchange reduces the net antiferromagnetic exchange of this system, resulting
in a relatively small magnitude of the negative $\theta_{CW}$. We further note
that ferromagnetic Ni-Ni exchange has also been observed in the
antiferromagnetic phase of the analogous compound Ba3NiRu2O9 Lightfoot and
Battle (1990).
Table 1: Exchange couplings obtained from $ab$-initio calculations. Exchange pathways are shown in Fig. 1. AFM and FM refer to antiferromagnetic and ferromagnetic interactions, respectively.

Exchange ($J_{i}$) | Interacting pair | Number of neighbors ($z_{i}$) | Magnitude (meV) | Type | $|z_{i}J_{i}/J_{1}|$
---|---|---|---|---|---
$J_{1}$ | Ir-Ir | 1 | -8.91 | AFM | 1
$J_{2}$ | Ir-Ni | 3 | 0.96 | FM | 0.32
$J_{3}$ | Ir-Ir | 3 | 0.10 | FM | 0.03
$J_{4}$ | Ir-Ir | 6 | -0.17 | AFM | 0.11
$J_{5}$ | Ni-Ni | 6 | 0.09 | FM | 0.06
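The last column of Table 1 can be checked directly from the listed couplings; the following sketch simply recomputes $|z_{i}J_{i}/J_{1}|$:

```python
couplings = {  # J_i: (pair, z_i, J_i in meV) from Table 1
    "J1": ("Ir-Ir", 1, -8.91),
    "J2": ("Ir-Ni", 3, 0.96),
    "J3": ("Ir-Ir", 3, 0.10),
    "J4": ("Ir-Ir", 6, -0.17),
    "J5": ("Ni-Ni", 6, 0.09),
}
J1 = couplings["J1"][2]
for name, (pair, z, J) in couplings.items():
    print(f"{name} ({pair}): |z J / J1| = {abs(z * J / J1):.2f}")
# -> 1.00, 0.32, 0.03, 0.11, 0.06, matching the table
```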
To summarize, our detailed measurements reveal that 6$H$ BNIO containing $S$=1
hosts a gapless spin liquid phase below 2 K. The involvement of Ir5+ and Ni2+
in the magnetic properties of BNIO is revealed by dc magnetic measurements,
$\mu$SR experiments, and electronic structure calculations. The
antiferromagnetic interaction between the Ir2O9 dimers in the a-b plane
facilitates the geometrical-frustration-driven QSL phase of BNIO. Since many
Ba3$MM_{2}^{\prime}$O9 compounds can be stabilized in the 6$H$ structure, we
believe this work will lead to the realization of many 3-D QSLs with large
spin through a judicious choice of $M$ and $M^{\prime}$.
### Method
Stoichiometric amounts of BaCO3, NiO and Ir metal powder were used as starting
materials for the solid state synthesis of BNIO. The mixture was heated
multiple times at 1175∘ C with intermediate grindings until the desired phase
was formed. Powder XRD was carried out using a lab-based Rigaku Smartlab
diffractometer and also at the Indian beamline (BL-18B) of the Photon Factory,
KEK, Japan. The diffraction pattern of the final phase was refined by the
Rietveld method using FULLPROF Rodríguez-Carvajal (1993).
XAS spectra of the Ni $L_{3,2}$-edges were recorded in bulk sensitive total
fluorescence yield mode at the 4-ID-C beamline of the Advanced Photon Source,
USA. DC magnetic measurements were carried out using a Quantum Design (QD)
SQUID magnetometer. Heat capacity measurements ($C_{p}$) were done in a
dilution refrigerator insert coupled with a 16 T QD-PPMS system using
relaxation calorimetry. $\mu$SR experiments down to 60 mK were performed using
the pulsed muon beam at the MuSR spectrometer of the ISIS Neutron and Muon
Source, UK. A dilution fridge was used to record $\mu$SR data from 60 mK to 4
K and a cryostat was used for temperatures above 1.5 K.
The density functional theory (DFT) calculations have been performed in the
local spin-density approximation + Hubbard $U$ (LSDA+U) approach, with and
without including spin-orbit coupling (SOC), by means of a full potential
linearized muffin-tin orbital method (FP-LMTO) Andersen (1975); Wills and
Cooper (1987) as implemented in the RSPt code Wills _et al._ (2000). The
Brillouin-zone (BZ) integration is carried out using the thermal smearing
method with a 10 $\times$ 10 $\times$ 4 k-mesh. For the charge density and
potential angular decomposition inside the muffin-tin (MT) spheres, the
maximum angular momentum was taken as $l_{max}=8$. To describe the
electron-electron correlation within the LSDA+U approach, we have taken $U$ =
6 eV, $J$ = 0.8 eV for the Ni-$d$ states and $U$ = 2 eV, $J$ = 0.6 eV for the
Ir-$d$ states. The set of correlated orbitals located on the Ni and Ir sites
was obtained by projecting the electron density onto the corresponding MT
sphere with a certain angular character (the so-called “MT-heads” projection
Grechnev _et al._ (2007)).
After obtaining the self-consistent, fully converged LSDA+$U$+SOC calculations,
the magnetic force theorem Liechtenstein _et al._ (1987); Katsnelson and
Lichtenstein (2000) was used to extract the effective inter-site magnetic-
interaction parameters ($J_{ij}$). In this approach the magnetic system is
mapped onto the Heisenberg Hamiltonian:
$\hat{H}=-\sum_{i\neq j}J_{ij}\vec{S_{i}}\cdot\vec{S_{j}}.$ (2)
The $J_{ij}$ are then extracted in a linear-response manner via a Green's
function technique. A detailed discussion of the implementation of the
magnetic force theorem in RSPt is provided in Ref. Kvashnin _et al._ , 2015b.
This method is considered one of the most accurate techniques for the
estimation of exchange interactions and has also been successfully employed
for many transition metal compounds Panda _et al._ (2016).
### Acknowledgement
S.M. acknowledges financial support from the ISRO-IISc Space Technology Cell
and the Infosys Foundation, Bangalore. S.M. and S.K. acknowledge insightful
discussions with Dr. Subhro Bhattacharjee, Dr. Yogesh Singh, and Dr. Pabitra
Biswas. The authors thank the Department of Science and Technology, India, for
financial support, and the Saha Institute of Nuclear Physics and the
Jawaharlal Nehru Centre for Advanced Scientific Research, India, for
facilitating the experiments at the Indian Beamline, Photon Factory, KEK,
Japan. The authors also acknowledge the experimental support from UGC-DAE CSR,
Indore, and the staff of LT & Cryogenics, especially Er. P. Saravanan, for
their technical support. This research used resources of the Advanced Photon
Source, a U.S. Department of Energy Office of Science User Facility operated
by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
Experiments at the ISIS Neutron and Muon Source were supported by a beamtime
allocation RB1920396 from the Science and Technology Facilities Council.
## References
* Anderson (1973) P. W. Anderson, Materials Research Bulletin 8, 153 (1973).
* Balents (2010) L. Balents, Nature 464, 199 (2010).
* Savary and Balents (2016) L. Savary and L. Balents, Reports on Progress in Physics 80, 016502 (2016).
* Zhou, Kanoda, and Ng (2017) Y. Zhou, K. Kanoda, and T.-K. Ng, Reviews of Modern Physics 89, 025003 (2017).
* Knolle and Moessner (2019) J. Knolle and R. Moessner, Annual Review of Condensed Matter Physics 10, 451 (2019).
* Broholm _et al._ (2020) C. Broholm, R. Cava, S. Kivelson, D. Nocera, M. Norman, and T. Senthil, Science 367 (2020).
* Cheng _et al._ (2011) J. G. Cheng, G. Li, L. Balicas, J. S. Zhou, J. B. Goodenough, C. Xu, and H. D. Zhou, Phys. Rev. Lett. 107, 197204 (2011), arXiv:1108.2897.
* Chamorro _et al._ (2018) J. R. Chamorro, L. Ge, J. Flynn, M. A. Subramanian, M. Mourigal, and T. M. McQueen, Phys. Rev. Materials 2, 034404 (2018).
* Okamoto _et al._ (2007a) Y. Okamoto, M. Nohara, H. Aruga-Katori, and H. Takagi, Physical Review Letters 99, 137207 (2007a).
* Koteswararao _et al._ (2014) B. Koteswararao, R. Kumar, P. Khuntia, S. Bhowal, S. K. Panda, M. R. Rahman, A. V. Mahajan, I. Dasgupta, M. Baenitz, K. H. Kim, and F. C. Chou, Phys. Rev. B 90, 035141 (2014).
* Balz _et al._ (2016) C. Balz, B. Lake, J. Reuther, H. Luetkens, R. Schönemann, T. Herrmannsdörfer, Y. Singh, A. N. Islam, E. M. Wheeler, J. A. Rodriguez-Rivera, _et al._ , Nature Physics 12, 942 (2016).
* Gao _et al._ (2019) B. Gao, T. Chen, D. W. Tam, C.-L. Huang, K. Sasmal, D. T. Adroja, F. Ye, H. Cao, G. Sala, M. B. Stone, _et al._ , Nature Physics 15, 1052 (2019).
* Chillal _et al._ (2020) S. Chillal, Y. Iqbal, H. O. Jeschke, J. A. Rodriguez-Rivera, R. Bewley, P. Manuel, D. Khalyavin, P. Steffens, R. Thomale, A. N. Islam, _et al._ , Nature communications 11, 1 (2020).
* Plumb _et al._ (2019) K. Plumb, H. J. Changlani, A. Scheie, S. Zhang, J. Krizan, J. Rodriguez-Rivera, Y. Qiu, B. Winn, R. J. Cava, and C. L. Broholm, Nature Physics 15, 54 (2019).
* Nakatsuji _et al._ (2005) S. Nakatsuji, Y. Nambu, H. Tonomura, O. Sakai, S. Jonas, C. Broholm, H. Tsunetsugu, Y. Qiu, and Y. Maeno, science 309, 1697 (2005).
* Lu _et al._ (2018) Z. Lu, L. Ge, G. Wang, M. Russina, G. Günther, C. dela Cruz, R. Sinclair, H. Zhou, and J. Ma, Physical Review B 98, 094412 (2018).
* Medrano _et al._ (2018) C. Medrano, D. Freitas, E. Passamani, J. Resende, M. Alzamora, E. Granado, C. Galdino, E. Baggio-Saitovitch, M. Continentino, and D. Sanchez, Physical Review B 98, 054435 (2018).
* Starykh (2015) O. A. Starykh, Reports on Progress in Physics 78, 052502 (2015).
* Fåk _et al._ (2017) B. Fåk, S. Bieri, E. Canévet, L. Messio, C. Payen, M. Viaud, C. Guillot-Deudon, C. Darie, J. Ollivier, and P. Mendels, Phys. Rev. B 95, 1 (2017).
* Quilliam _et al._ (2016) J. A. Quilliam, F. Bert, A. Manseau, C. Darie, C. Guillot-Deudon, C. Payen, C. Baines, A. Amato, and P. Mendels, Phys. Rev. B 93, 214432 (2016).
* Bhattacharjee, Shenoy, and Senthil (2006) S. Bhattacharjee, V. B. Shenoy, and T. Senthil, Phys. Rev. B 74, 092406 (2006).
* Treiber, Kemmler-Sack, and Ehmann (1982) U. Treiber, S. Kemmler-Sack, and A. Ehmann, Zeitschrift für anorganische und allgemeine Chemie 487, 189 (1982).
* Lightfoot and Battle (1990) P. Lightfoot and P. Battle, Journal of Solid State Chemistry 89, 174 (1990).
* Ferreira _et al._ (2018) T. Ferreira, S. M. Heald, M. D. Smith, and H.-C. zur Loye, Inorganic chemistry 57, 2973 (2018).
* Kitaev (2006) A. Kitaev, Annals of Physics 321, 2 (2006).
* Takagi _et al._ (2019) H. Takagi, T. Takayama, G. Jackeli, G. Khaliullin, and S. E. Nagler, Nature Reviews Physics 1, 264 (2019).
* Khaliullin (2013) G. Khaliullin, Phys. Rev. Lett. 111, 1 (2013).
* Nag _et al._ (2016) A. Nag, S. Middey, S. Bhowal, S. K. Panda, R. Mathieu, J. C. Orain, F. Bert, P. Mendels, P. G. Freeman, M. Mansson, H. M. Ronnow, M. Telling, P. K. Biswas, D. Sheptyakov, S. D. Kaushik, V. Siruguri, C. Meneghini, D. D. Sarma, I. Dasgupta, and S. Ray, Phys. Rev. Lett. 116, 1 (2016).
* Nag _et al._ (2018a) A. Nag, S. Bhowal, F. Bert, A. D. Hillier, M. Itoh, I. Carlomagno, C. Meneghini, T. Sarkar, R. Mathieu, I. Dasgupta, and S. Ray, Phys. Rev. B 97, 1 (2018a).
* Khan _et al._ (2019) M. S. Khan, A. Bandyopadhyay, A. Nag, V. Kumar, A. V. Mahajan, and S. Ray, Phys. Rev. B 100, 064423 (2019).
* Freeland, Van Veenendaal, and Chakhalian (2016) J. W. Freeland, M. Van Veenendaal, and J. Chakhalian, Journal of Electron Spectroscopy and Related Phenomena 208, 56 (2016).
* Middey _et al._ (2011) S. Middey, S. Ray, K. Mukherjee, P. L. Paulose, E. V. Sampathkumaran, C. Meneghini, S. D. Kaushik, V. Siruguri, K. Kovnir, and D. D. Sarma, Phys. Rev. B 83, 144419 (2011).
* Zhong _et al._ (2019) R. Zhong, S. Guo, G. Xu, Z. Xu, and R. J. Cava, Proceedings of the National Academy of Sciences 116, 14505 (2019).
* (34) See Supplemental Material.
* Hill (1976) R. M. Hill, Phys. status solidi 34, 601 (1976).
* Kim _et al._ (2008) B. J. Kim, H. Jin, S. J. Moon, J.-Y. Kim, B.-G. Park, C. S. Leem, J. Yu, T. W. Noh, C. Kim, S.-J. Oh, J.-H. Park, V. Durairaj, G. Cao, and E. Rotenberg, Phys. Rev. Lett. 101, 076402 (2008).
* Carlin (1986) R. L. Carlin, in _Magnetochemistry_, edited by R. L. Carlin (Springer, Berlin, Heidelberg, 1986), Chap. 4.
* Okamoto _et al._ (2007b) Y. Okamoto, M. Nohara, H. Aruga-Katori, and H. Takagi, Phys. Rev. Lett. 99, 137207 (2007b).
* Li _et al._ (2015) Y. Li, H. Liao, Z. Zhang, S. Li, F. Jin, L. Ling, L. Zhang, Y. Zou, L. Pi, Z. Yang, _et al._ , Scientific reports 5, 1 (2015).
* Dey _et al._ (2017) T. Dey, M. Majumder, J. C. Orain, A. Senyshyn, M. Prinz-Zwick, S. Bachus, Y. Tokiwa, F. Bert, P. Khuntia, N. Büttgen, A. A. Tsirlin, and P. Gegenwart, Phys. Rev. B 96, 174411 (2017), arXiv:1702.08305 .
* Zhou _et al._ (2011) H. D. Zhou, E. S. Choi, G. Li, L. Balicas, C. R. Wiebe, Y. Qiu, J. R. D. Copley, and J. S. Gardner, Phys. Rev. Lett. 106, 147204 (2011).
* Nag _et al._ (2018b) A. Nag, S. Bhowal, A. Chakraborty, M. M. Sala, A. Efimenko, F. Bert, P. K. Biswas, A. D. Hillier, M. Itoh, S. D. Kaushik, V. Siruguri, C. Meneghini, I. Dasgupta, and S. Ray, Phys. Rev. B 98, 014431 (2018b).
* Mustonen _et al._ (2018) O. Mustonen, S. Vasala, E. Sadrollahi, K. P. Schmidt, C. Baines, H. C. Walker, I. Terasaki, F. J. Litterst, E. Baggio-Saitovitch, and M. Karppinen, Nat. Commun. 9, 1085 (2018).
* Yamashita _et al._ (2008) S. Yamashita, Y. Nakazawa, M. Oguni, Y. Oshima, H. Nojiri, Y. Shimizu, K. Miyagawa, and K. Kanoda, Nat. Phys. 4, 459 (2008).
* Yamashita _et al._ (2011) S. Yamashita, T. Yamamoto, Y. Nakazawa, M. Tamura, and R. Kato, Nat. Commun. 2, 1 (2011).
* Clark _et al._ (2014) L. Clark, G. J. Nilsen, E. Kermarrec, G. Ehlers, K. S. Knight, A. Harrison, J. P. Attfield, and B. D. Gaulin, Phys. Rev. Lett. 113, 117201 (2014).
* Uematsu and Kawamura (2019) K. Uematsu and H. Kawamura, Phys. Rev. Lett. 123, 087201 (2019).
* Nakatsuji _et al._ (2007) S. Nakatsuji, H. Tonomura, K. Onuma, Y. Nambu, O. Sakai, Y. Maeno, R. T. Macaluso, and J. Y. Chan, Phys. Rev. Lett. 99, 157203 (2007).
* Povarov _et al._ (2019) K. Y. Povarov, V. K. Bhartiya, Z. Yan, and A. Zheludev, Phys. Rev. B 99, 24413 (2019), arXiv:1807.09549 .
* Kohama _et al._ (2019) Y. Kohama, H. Ishikawa, A. Matsuo, K. Kindo, N. Shannon, and Z. Hiroi, Proc. Natl. Acad. Sci. U. S. A. 166, 10686 (2019).
* Blundell (1999) S. J. Blundell, Contemporary Physics 40, 175 (1999), https://doi.org/10.1080/001075199181521 .
* Uemura _et al._ (1985) Y. J. Uemura, T. Yamazaki, D. R. Harshman, M. Senba, and E. J. Ansaldo, Phys. Rev. B 31, 546 (1985).
* Morenzoni _et al._ (2008) E. Morenzoni, H. Luetkens, T. Prokscha, A. Suter, S. Vongtragool, F. Galli, M. B. S. Hesselberth, N. Garifianov, and R. Khasanov, Phys. Rev. Lett. 100, 147205 (2008).
* Hayano _et al._ (1979) R. S. Hayano, Y. J. Uemura, J. Imazato, N. Nishida, T. Yamazaki, and R. Kubo, Physical Review B 20, 850 (1979).
* Keren (1994) A. Keren, Phys. Rev. B 50, 10039 (1994).
* Kvashnin _et al._ (2015a) Y. O. Kvashnin, O. Grånäs, I. Di Marco, M. I. Katsnelson, A. I. Lichtenstein, and O. Eriksson, Phys. Rev. B 91, 125133 (2015a).
* Rodríguez-Carvajal (1993) J. Rodríguez-Carvajal, Phys. B Phys. Condens. Matter 192, 55 (1993).
* Andersen (1975) O. K. Andersen, Phys. Rev. B 12, 3060 (1975).
* Wills and Cooper (1987) J. M. Wills and B. R. Cooper, Phys. Rev. B 36, 3809 (1987).
* Wills _et al._ (2000) J. M. Wills, O. Eriksson, M. Alouni, and D. L. Price, _Electronic Structure and Physical Properties of Solids: The Uses of the LMTO Method_ (Springer-Verlag, Berlin, 2000).
* Grechnev _et al._ (2007) A. Grechnev, I. Di Marco, M. I. Katsnelson, A. I. Lichtenstein, J. Wills, and O. Eriksson, Phys. Rev. B 76, 035107 (2007).
* Liechtenstein _et al._ (1987) A. Liechtenstein, M. Katsnelson, V. Antropov, and V. Gubanov, Journal of Magnetism and Magnetic Materials 67, 65 (1987).
* Katsnelson and Lichtenstein (2000) M. I. Katsnelson and A. I. Lichtenstein, Phys. Rev. B 61, 8906 (2000).
* Kvashnin _et al._ (2015b) Y. O. Kvashnin, O. Grånäs, I. Di Marco, M. I. Katsnelson, A. I. Lichtenstein, and O. Eriksson, Phys. Rev. B 91, 125133 (2015b).
* Panda _et al._ (2016) S. K. Panda, Y. O. Kvashnin, B. Sanyal, I. Dasgupta, and O. Eriksson, Phys. Rev. B 94, 064427 (2016).
# All-Day Object Tracking for Unmanned Aerial Vehicle
Bowen Li, Changhong Fu*, Fangqiang Ding,
Junjie Ye, and Fuling Lin, *Corresponding author The authors are with the
School of Mechanical Engineering, Tongji University, 201804 Shanghai, China.
(Email: EMAIL_ADDRESS)
###### Abstract
Visual object tracking, which represents a major interest in the image
processing field, has facilitated numerous real-world applications. Among
them, equipping unmanned aerial vehicles (UAVs) with real-time robust visual
trackers for all-day aerial maneuvers is currently attracting increasing
attention and has remarkably broadened the scope of applications of object
tracking. However, prior tracking methods have merely focused on robust
tracking in well-illuminated scenes, while ignoring trackers' capability to be
deployed in the dark. In darkness, the conditions can be more complex and
harsh, easily leading to inferior tracking robustness or even complete
tracking failure. To this end, this work proposes a novel discriminative
correlation filter-based tracker with illumination-adaptive and anti-dark
capability, namely ADTrack. ADTrack first exploits image illuminance
information to enable adaptability of the model to the given light condition.
Then, by virtue of an efficient and effective image enhancer, ADTrack carries
out image pretreatment, during which a target-aware mask is generated.
Benefiting from the mask, ADTrack solves a dual regression problem where dual
filters, i.e., a context filter and a target-focused filter, are trained with
mutual constraint. Thus ADTrack is able to maintain continuously favorable
performance in all-day conditions. Besides, this work also constructed a UAV
nighttime tracking benchmark, UAVDark135, comprising more than 125k manually
annotated frames, which is also the very first UAV dark tracking benchmark.
Exhaustive experiments are conducted on the authoritative daytime benchmarks
UAV123@10fps and DTB70, and on the newly built dark benchmark UAVDark135. Our
results validate the superiority of ADTrack in both bright and dark conditions
compared with other state-of-the-art trackers. Meanwhile, ADTrack achieves a
real-time speed of over 30 frames/s on a single CPU, remarkably enabling
real-world UAV object tracking in all-day scenes.
###### Index Terms:
Unmanned aerial vehicle, visual object tracking, discriminative correlation
filter, dark tracking benchmark, image illumination based mask, dual
regression model.
## I Introduction
Standing as one of the hotspots in the image processing field, visual object
tracking aims at estimating the location and scale of an initially given
object in the subsequent frames of an image sequence. Applying this
flourishing approach onboard unmanned aerial vehicles (UAVs) has enabled
extensive applications in practice, e.g., path optimization and planning [1],
disaster response [2], target following [3], and autonomous landing [4].
Specifically, transmission-line inspection [5], collision avoidance [6], and
aircraft tracking [7] often need around-the-clock operation.
Figure 1: Performance comparison of SOTA trackers on the newly constructed
UAVDark135, where tracking scenes are generally much darker and more onerous.
Clearly, ADTrack outperforms the other trackers in both distance precision
(DP) and area under curve (AUC), maintaining favorable robustness even in
harsh dark conditions. Figure 2: The first frames of representative scenes in
the newly constructed UAVDark135. Here, target ground-truths are marked out by
green boxes and sequence names are located at the top left corner of the
images. Dark-specific challenges such as unreliable object color features and
objects merging into the dark background can be seen clearly.
Unfortunately, state-of-the-art (SOTA) trackers [8, 9, 10, 11, 12, 13, 14, 15,
16] only focus on tracking in bright environments, where the external
illumination condition is favorable and the internal texture as well as the
outline of the object is representative. When night falls, even though the
content of the scene may still be discernible, the visual quality of the
captured images is barely satisfactory, hurting the performance of methods
that are primarily designed for object tracking with high-visibility inputs.
Compared with common tracking scenes, tracking in the dark onboard a UAV
confronts special undesirable hardships such as:
* •
Color distortion: Since objects in the dark are not well illuminated, very
little light is reflected from their surfaces, making them appear nearly all
dark. In this case, the objects' color is distorted. Without representative
color features, the discriminative ability of the tracker can decrease by a
notable margin.
* •
Low visibility: When the object enters a dimmer region in the dark, like a
shadow, it can merge into the dark background, ending up with low visibility.
Such scenes set barriers to a tracker's robust localization and precise scale
estimation of the object.
* •
High-level noise: Images captured by UAVs at night inevitably contain random
noise, which may distract trackers, resulting in inferior tracking robustness.
* •
Limited computation: For cost reasons and because of scarce battery power, a
UAV generally carries a single CPU as its computation platform. Meanwhile,
handling harsh darkness requires additional modules to maintain tracking
robustness, making real-time processing even more arduous.
Although the outstanding trackers [17, 8, 9, 10, 11, 18, 19, 14, 15, 16] have
achieved promising performance in well-illuminated scenes, they fall short at
night. The deep trackers [18, 19, 14, 15, 16], on the one hand, introduce too
much computational burden to be deployed on a single CPU for real-time
tracking. On the other hand, the discriminative ability of deep semantic
features was shown to drop significantly in our experiments, since the
off-the-shelf networks are trained on bright images. Meanwhile, the brilliant
discriminative correlation filter (CF)-based trackers [17, 8, 9, 10, 11],
which utilize online-learned handcrafted features, easily lose accuracy and
robustness in low-light nighttime scenes due to the challenges mentioned
above. In Fig. 1, both deep and handcrafted CF-based methods fail to meet
expectations. Prior work has contributed very little to robust tracking in the
dark, while there is an urgent need for such development to broaden the
service life and usage scenarios of UAVs.
In this regard, this work proposes a novel and pioneering tracker with
illumination-adaptive and anti-dark functions (ADTrack) to achieve robust
all-day real-time UAV tracking. Fig. 1 exhibits the superiority of the
proposed ADTrack against other state-of-the-art trackers in nighttime UAV
tracking scenes.
To be specific, ADTrack draws on image illumination processing methods [20,
21] and proposes innovative modules to be embedded into an efficient and
robust CF-based tracker [10]. To achieve robust tracking under two distinct
light conditions, i.e., day and night, ADTrack first exploits image
illuminance [20] to detect and adapt to the given condition, a step that is
inspired by the human visual system and proves to be both effective and
efficient. Then, a fast enhancer [21] generates appropriate image samples for
training according to the detection result.
Surprisingly, we found that the image illumination map in [21] can be utilized
to obtain a target-aware mask. By virtue of the mask, ADTrack solves a dual
regression model to train target-focused and context filters with mutual
constraint. During the detection phase, the dual responses, i.e., the
target-focused and context response maps, are fused by weighted sum to achieve
more precise object localization. With the proposed dual filters, ADTrack
proves to be more robust in around-the-clock UAV tracking scenes.
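A minimal sketch of these two ingredients (not the authors' implementation) is given below: a mean-illuminance day/night check and a weighted-sum fusion of the two response maps. The luminance weights are the standard Rec. 601 coefficients, while the darkness threshold and fusion weight are illustrative assumptions:

```python
import numpy as np

def mean_illuminance(frame_rgb):
    """Mean perceptual luminance of an RGB frame with values in [0, 1]."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return float(np.mean(0.299 * r + 0.587 * g + 0.114 * b))

def is_dark(frame_rgb, threshold=0.25):
    """Day/night check; the threshold is an illustrative assumption."""
    return mean_illuminance(frame_rgb) < threshold

def fuse_responses(r_target, r_context, w=0.5):
    """Weighted sum of target-focused and context response maps;
    the fusion weight w is an illustrative assumption."""
    fused = w * r_target + (1.0 - w) * r_context
    return np.unravel_index(np.argmax(fused), fused.shape)  # peak location
```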
Besides, to the best of our knowledge, there exists no dark UAV tracking
benchmark for large-scale evaluation in the literature. Thus, this work also
builds the very first UAV dark tracking benchmark, i.e., UAVDark135.
UAVDark135 consists of 135 sequences in total, most of which were newly shot
by a standard UAV at night, including more than 125k manually annotated
frames. The benchmark covers a wide range of scenes, e.g., road, ocean,
street, highway, and lakeside, and includes a large number of object types,
such as person, car, building, athlete, truck, and bike. Fig. 2 displays
representative scenes in UAVDark135, where dark-specific challenges are
evident.
Contributions of this work can be summarized as follows (the source code of
the proposed ADTrack and the newly constructed benchmark UAVDark135 are
available at https://github.com/vision4robotics/ADTrack_v2):
* •
This work proposes a novel tracker with illumination-adaptive and anti-dark
functions (ADTrack).
* •
The proposed ADTrack exploits image illumination to acquire a target-aware
mask, which is creatively utilized to solve a dual regression problem.
* •
This work constructed the pioneering UAV dark tracking benchmark UAVDark135
to enable large-scale evaluation.
* •
Exhaustive experiments have been conducted on two authoritative daytime
benchmarks, UAV123@10fps and DTB70, and the newly built nighttime benchmark
UAVDark135 to validate the around-the-clock tracking performance of the
proposed ADTrack.
The remainder of this work is organized as follows. Section II summarizes
related work on image enhancing approaches, CF-based tracking methods, and
target-aware tracking approaches. Section III elaborates on the proposed
ADTrack, which can be summarized as four modules: illumination adaptation,
pretreatment, filter training, and object detection. Section IV gives a
thorough introduction of the newly built UAVDark135, including its statistics,
platform, annotation, and attributes. Section V exhibits comprehensive
experimental evaluations on various benchmarks, i.e., UAV123@10fps [22], DTB70
[23], and UAVDark135, where the superiority of ADTrack is apparent. Finally,
Section VI concludes the article.
## II Related Work
### II-A Low-Light Image Enhancers
Existing SOTA enhancers can be generally divided into 2 categories, i.e.,
learning-based and model-based. learning-based methods aim at training an end-
to-end network specialized for domain transformation with paired images [24,
25, 26]. To name a few, C. Chen et al. [25] carefully designed an encoder-
decoder structured network and trained it with paired short-exposure low-light
images and corresponding long-exposure images, which can generate high-quality
enhanced images. W. Ren et al. [26] proposed a deep hybrid network made up of
distinct content stream and edge stream, which can handle the degraded images
captured at night. Such methods are usually carried out on GPU due to their
huge amount of calculation brought by convolution layers, which cannot realize
real-time processing on a single CPU for tracking onboard UAV.
Model-based methods [27, 28, 21, 29] build on the famous retinex theory
[30] and need no off-line training. For instance, M. Li et al. [27]
creatively proposed to estimate a noise map based on the traditional retinex
model, which yields de-noised enhanced images. Z. Rahman et al. [29]
replaced the original logarithm function in the multi-scale retinex model with
a sigmoid function that can suppress noise speckles in extremely low-light areas.
In particular, the enhancer of [21] proves to be both efficient and effective. This
work introduces it into the UAV tracking structure and constructs an
illumination adaptive anti-dark tracker. Different from roughly applying the
enhancer to preprocess the frames, ADTrack achieves light-condition awareness
and dual regression through an in-depth integration of image enhancement and
visual tracking.
### II-B CF-Based Tracking Methods
The key idea of CF-based tracking methods is to train a discriminative
correlation filter using current image samples and utilize the trained filter
to search for the object in the coming frame [31, 32, 10, 33, 11, 9, 8, 34].
J. F. Henriques et al. introduced the circular sample matrix, ridge regression,
and kernel correlation in the KCF tracker [31], considered the foundation of
all CF-based trackers. M. Danelljan et al. proposed the scale filter in the DSST
tracker [11], settling scale estimation efficiently. H. K. Galoogahi et al.
put forward a cropping matrix in the training regression equation [10], not only
immensely expanding the real negative samples but also alleviating the boundary
effect.
Generally speaking, the key enablers of the superiority of CF-type methods
are: $1$) the simplicity of calculation via the discrete Fourier transformation,
together with the myriad of implicitly circularly shifted samples generated in
this process, and $2$) the online learning scheme, which keeps the filter
robust in various scenarios. UAV tracking, owing to its limited
computation resources and wide applications, is precisely the field where CF-based
trackers can shine [35, 17, 36, 9, 8]. To be specific, Z. Huang et
al. exploited response map aberrance and proposed the ARCF tracker [9], which
ensures high robustness under volatile tracking distractors. The AutoTrack
tracker [8] proposed by Y. Li et al. targeted the onerous manual parameter
adjustment procedure and adapts to various given conditions automatically.
Though the trackers mentioned above can strike a balance between accuracy and
speed in common bright environments, they lose robustness and accuracy in the
dark, where the light condition becomes abominable. To realize all-day real-
time UAV tracking, this work proposes ADTrack, which adapts to the given
light condition and maintains predominant robustness even in terrible
darkness.
### II-C Target-Aware CF-Based Tracking
A target-aware mask is a foreground matrix that concentrates the filter’s
attention on pixels more likely to be included within the object outline. Such
a strategy raises the importance of features that indeed represent the object’s
characteristics. M. Danelljan et al. proposed a spatial punishment term in the
SRDCF tracker [32], which can be considered a foreground mask, making the
filter learn the center region more to alleviate the boundary effect. A. Lukezic et
al. improved the fixed spatial regularization term of the SRDCF tracker [32]
into an alterable reliability mask in the CSR-DCF tracker [37], which is based on
the Bayes principle. Recently, saliency-aware methods have also been introduced [35,
38]. Specifically, C. Fu et al. [35] creatively utilized an efficient saliency
detection method to generate an effective mask, which tremendously raised the
robustness of tracking onboard UAVs.
Figure 3: Pipeline of ADTrack. The proposed tracker contains four stages,
i.e., the illumination adaptation, pretreatment, training, and detection stages,
which are marked by boxes in different colors. In well-illuminated daytime
and dim nighttime, ADTrack is able to adjust its tracking modules
automatically, according to the light condition judgment in the illumination
adaptation stage. In the training and detection stages, ADTrack adopts dual
filter training and dual response fusion at the target-focused and context
levels respectively, as shown in the different routes.
Although the aforementioned methods can improve tracking performance, they have
two obvious drawbacks. Firstly, it is hard to obtain valid alterable masks
like [37], [38], and [35] in the dark, since nighttime images lack sufficient
information. Secondly, the aforementioned trackers directly embed the masks
into the CF training process as a regularization term, assigning higher
weights to the predicted target region in the filter. When an invalid mask is
generated, wrong pixels at odds with the actual target region possess higher
importance, easily leading to tracking failure.
Totally different from the above, ADTrack exploits image illumination
information to obtain effective masks in arbitrary light conditions. By virtue
of the target-aware mask, ADTrack proposes a dual regression, where target-
focused and context filters are trained under a mutual constraint. In this way,
both background and target information are learned and utilized, greatly
improving tracking robustness.
## III Illumination Adaptive and Anti-Dark Object Tracking
The pipeline of the proposed ADTrack can be divided into four stages, i.e., the
illumination adaptation stage, pretreatment stage, training stage, and detection
stage. As clearly exhibited in Fig. 3, for a given dark tracking scene,
ADTrack first runs an illumination adaptation decider on the first
frame captured by the UAV to judge whether it is daytime or nighttime. The
outcome enables automatic mode switching. In each subsequent
frame $f$, ADTrack carries out the pretreatment stage using the illumination
judgment result, where appropriate samples (from the original or the enhanced
image) and the target-aware mask are obtained simultaneously. Then, in the training
stage, dual filters are jointly trained, focusing on both context and target
appearance. As the next frame $f+1$ comes, the trained filters generate dual
response maps, which are fused to obtain the final response for target
localization in the detection stage.
### III-A Illumination Adaptation
To realize all-day adaptation, we adopt an effective illumination
expression algorithm, which condenses complex image illuminance information
into a single scalar. For a given RGB image
$\mathcal{I}\in\mathbb{R}^{w\times h\times 3}$, the pixel-level world
illumination value $\mathcal{L}^{\mathrm{W}}(\mathcal{I})$ is first computed
as:
$\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})=\sum_{\mathrm{m}}\alpha_{\mathrm{m}}\Psi_{\mathrm{m}}(\mathcal{I}(x,y)),~{}\mathrm{m}\in\{\mathrm{R,G,B}\}~{},$
(1)
where $\Psi_{\mathrm{m}}(\mathcal{I}(x,y))$ denotes the intensity value of
image $\mathcal{I}$ at coordinate $(x,y)$ in color channel $\mathrm{m}$, e.g.,
$\Psi_{\mathrm{G}}(\mathcal{I}(x,y))$ denotes the value in green channel. The
channel weight parameters
$\alpha_{\mathrm{R}},\alpha_{\mathrm{G}},\alpha_{\mathrm{B}}$ meet
$\alpha_{\mathrm{R}}+\alpha_{\mathrm{G}}+\alpha_{\mathrm{B}}=1$. Then, the
log-average luminance $\tilde{\mathcal{L}}^{W}(\mathcal{I})$ is given as in
[20]:
$\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})={\mathrm{exp}}\Big{(}\frac{1}{wh}\sum_{x,y}\mathrm{log}(\delta+\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I}))\Big{)}~{},$
(2)
where $\delta$ is a small constant that avoids taking the logarithm of zero.
Remark 1: Our experiments show that the log-average luminance
$\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})$ can effectively express the
light condition of image $\mathcal{I}$.
Under different light conditions, e.g., at night or in daytime, the log-average
luminance varies greatly. Thus, ADTrack introduces a threshold $\tau$ for
illumination judgment as:
$S(\mathcal{I})=\left\{\begin{array}{rcl}1&&{\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})<\tau}\\ 0&&{\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})\geq\tau}\end{array}\right.~{},$ (3)
where $S(\mathcal{I})$ can be seen as the night identifier; $S(\mathcal{I})=1$
indicates that image $\mathcal{I}$ is a night image.
Remark 2: To test the validity of the above night decider and determine a
proper $\tau$, this work selected the first frames of benchmark UAV123@10fps [22]
as daytime test samples and those of the newly constructed benchmark UAVDark135 as
nighttime test samples. The deciding success rates for different thresholds
$\tau$ are shown in TABLE I, where the decider achieves a surprising result
of over 99%.
Remark 3: During UAV tracking, ADTrack runs the illumination adaptation
decider on the first frame of a given sequence, then automatically adjusts its
mode and the following pretreatment stage according to the judgment outcome
$S(\mathcal{I})$ in Eq. (3).
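To make the computation concrete, below is a minimal Python sketch of the illumination adaptation stage (Eqs. (1)-(3)). The channel weights and the [0, 1] image-normalization convention are our own illustrative assumptions (standard luma weights); only the threshold value is taken from TABLE I.

```python
import numpy as np

def night_identifier(img, alpha=(0.299, 0.587, 0.114), tau=0.15, delta=1e-6):
    """Night decider of Eqs. (1)-(3): returns 1 for a night image, 0 otherwise.

    img: float array of shape (h, w, 3), RGB values scaled to [0, 1].
    alpha: channel weights summing to 1 (illustrative luma weights).
    tau: illumination threshold; 0.15 scores 0.991 in TABLE I.
    """
    # Eq. (1): pixel-level world illumination as a weighted channel sum.
    l_w = (alpha[0] * img[..., 0] + alpha[1] * img[..., 1]
           + alpha[2] * img[..., 2])
    # Eq. (2): log-average luminance; delta guards against log(0).
    l_tilde = np.exp(np.mean(np.log(delta + l_w)))
    # Eq. (3): threshold the log-average luminance.
    return 1 if l_tilde < tau else 0
```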
TABLE I: Success rates of the proposed illuminance decider with different thresholds $\tau$. Clearly, the results can surprisingly exceed 99%, enabling effective night judgment.

Thresholds $\tau$ | 0.12 | 0.13 | 0.14 | 0.15 | 0.16 | 0.17 | 0.18 | 0.19
---|---|---|---|---|---|---|---|---
Success rate | 0.979 | 0.983 | 0.987 | 0.991 | 0.987 | 0.987 | 0.991 | 0.983
### III-B Pretreatment
The pretreatment stage serves two purposes. Firstly, for sequences judged to be
at night, ADTrack adopts the efficient and effective image enhancer [21] to
obtain enhanced images for the subsequent training and detection stages.
Secondly, the target-aware mask is acquired by virtue of image illuminance
information, so that dual filter learning in the training process and dual
response map generation in the detection process can be realized.
Remark 4: Notably, both purposes can be served based on the world
illumination value $\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})$ in Eq. (1) and
the log-average luminance $\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})$ in Eq. (2).
To realize image enhancement, the global adaptation factor
$\mathcal{L}_{\mathrm{g}}(x,y,\mathcal{I})$, which is based on the original
world illumination map, is first calculated as in [21]:
$\mathcal{L}_{\mathrm{g}}(x,y,\mathcal{I})=\frac{\mathrm{log}(\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})/\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})+1)}{\mathrm{log}(\mathcal{L}^{\mathrm{W}}_{\mathrm{max}}(\mathcal{I})/\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})+1)}~{},$
(4)
where
$\mathcal{L}^{\mathrm{W}}_{\mathrm{max}}(\mathcal{I})=\mathrm{\mathrm{max}}(\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I}))$.
The calculated factor is then used to rescale the three intensity channels of
each pixel, realizing image enhancement as:
$\Psi_{\mathrm{m}}(\mathcal{I}_{\mathrm{e}}(x,y))=\Psi_{\mathrm{m}}(\mathcal{I}(x,y))\cdot\frac{\mathcal{L}_{\mathrm{g}}(x,y,\mathcal{I})}{\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})}~{},$
(5)
where $\mathcal{I}_{\mathrm{e}}$ denotes the enhanced image based on the original
$\mathcal{I}$. Since $\mathcal{L}_{\mathrm{g}}(x,y,\mathcal{I})$ varies across
regions that possess different illumination, Eq. (5) can readjust
the brightness of the whole image while keeping the proportion of the three
color channels constant, i.e., the image color unchanged.
Remark 5: The aforementioned strategy is merely the fast first stage of [21],
yet Eq. (5) already proves effective for image enhancement.
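As a rough illustration of Eqs. (4)-(5), the enhancement step can be sketched as follows; the variable names are ours, and l_w and l_tilde are assumed to be computed as in the previous snippet.

```python
import numpy as np

def enhance(img, l_w, l_tilde, eps=1e-6):
    """Fast retinex-style enhancement of Eqs. (4)-(5), following [21].

    img: original RGB image in [0, 1]; l_w, l_tilde: world illumination map
    and log-average luminance from the previous snippet.
    """
    # Eq. (4): global adaptation factor from the world illumination map.
    l_g = np.log(l_w / l_tilde + 1.0) / np.log(l_w.max() / l_tilde + 1.0)
    # Eq. (5): rescale each color channel by L_g / L^W, preserving the
    # channel proportions and hence the image color.
    ratio = l_g / (l_w + eps)
    return np.clip(img * ratio[..., None], 0.0, 1.0)
```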
For target-aware mask generation, ADTrack considers the illuminance change
$\bm{\Theta}_{\mathcal{L}}(\mathcal{I})$ after enhancement, which can be
written as:
$\begin{split}\bm{\Theta}_{\mathcal{L}}(\mathcal{I})&=\mathcal{L}^{\mathrm{W}}(\mathcal{I})-\mathcal{L}^{\mathrm{W}}(\mathcal{I}_{\mathrm{e}})\\ &=\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})-\frac{\mathrm{log}\big(\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})/\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})+1\big)}{\mathrm{log}\big(\mathcal{L}^{\mathrm{W}}_{\mathrm{max}}(\mathcal{I})/\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I})+1\big)}~{}.\end{split}$
(6)
Remark 6: The illumination change
$\bm{\Theta}_{\mathcal{L}}(\mathcal{I})(x,y)$ of pixel $(x,y)$ in a given
image depends only on the pixel-dependent terms of Eq. (6), i.e., essentially on
$\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})-\mathrm{log}(\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I}))$,
since the remaining quantities are constant for the image.
Since $\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})\in[0,1]$, the value of
$\bm{\Theta}_{\mathcal{L}}(\mathcal{I})$ apparently varies with the
original illumination $\mathcal{L}^{\mathrm{W}}(x,y,\mathcal{I})$. Assuming that
different objects exhibit different illumination under similar light conditions
due to their varying reflectivity, Eq. (6) can indicate whether a pixel belongs
to the target or the context.
To be specific, considering that the target is located in the center part, the
average value $\mu$ and standard deviation $\sigma$ of the center region of
$\bm{\Theta}_{\mathcal{L}}$ are computed. Following the three-sigma
criterion in statistics, which reflects the probability distribution
characteristics of samples, pixels in the range $\mu\pm 3\sigma$ are
considered targets while the others are context. Then, a binary mask
$\mathbf{m}_{r}$ is generated as:
$\mathbf{m}_{r}(x,y)=\left\{\begin{array}{rcl}1&&{\mu-3\sigma\leq\bm{\Theta}_{\mathcal{L}}(x,y)\leq\mu+3\sigma}\\ 0&&{\mathrm{else}}\end{array}\right.~{}.$ (7)
Ultimately, the expected mask is obtained by
$\mathbf{m}=\mathbf{m}_{r}\odot\mathbf{P}$, where $\odot$ denotes the element-wise
product. $\mathbf{P}\in\mathbb{R}^{w\times h}$ is the cropping matrix, which
keeps the values of the target-size area in the middle of the raw mask
$\mathbf{m}_{r}$ and sets the values of the remaining area to 0, shielding the
interference of similarly bright objects in the background. Fig. 4 displays
some representative examples of mask generation in all-day conditions.
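A minimal sketch of the mask generation of Eqs. (6)-(7), where the cropping matrix $\mathbf{P}$ is modeled as a centered window given by center_slice (a hypothetical parameter of ours):

```python
import numpy as np

def target_aware_mask(l_w, l_w_e, center_slice):
    """Mask generation of Eqs. (6)-(7) with a three-sigma rule.

    l_w, l_w_e: world illumination maps of the original and enhanced patch.
    center_slice: (row_slice, col_slice) of the target-size center region,
                  playing the role of the cropping matrix P.
    """
    theta = l_w - l_w_e                      # Eq. (6): illuminance change
    center = theta[center_slice]             # statistics over the center region
    mu, sigma = center.mean(), center.std()
    # Eq. (7): three-sigma thresholding yields the raw binary mask m_r.
    m_r = ((theta >= mu - 3 * sigma) & (theta <= mu + 3 * sigma)).astype(float)
    m = np.zeros_like(m_r)                   # element-wise product with P:
    m[center_slice] = m_r[center_slice]      # keep only the center area
    return m
```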
Figure 4: Visualization of mask generation in both nighttime and daytime. From
top to bottom, the images show the original patch, the illumination map, and
the generated mask. The sequences person10_1 and pedestrian5_2 are from the newly
constructed UAVDark135, and boat2 and person20 are from UAV123@10fps [22].
Clearly, in both conditions, the proposed method obtains valid masks with
vivid object contours.
Thus, according to the outcome of Eq. (3), for $S(\mathcal{I})=1$, ADTrack
uses Eq. (5) to obtain the enhanced image $\mathcal{I}_{e}$, while for
$S(\mathcal{I})=0$, the original image $\mathcal{I}$ is utilized. Both daytime
and nighttime sequences adopt Eq. (6) for mask generation.
### III-C Filter Training
#### III-C1 Review of BACF
Due to both its robustness and efficiency, this work adopts the background-aware
correlation filter (BACF) [10] as the baseline. The BACF tracker achieves its
satisfying performance mainly by virtue of the introduction of the cropping
matrix $\mathbf{P}$, which hugely expands the training samples without
introducing much boundary effect. The training regression equation of the BACF
tracker can be expressed as:
$\mathcal{E}(\mathbf{w})=\frac{1}{2}\sum_{j=1}^{T}\left\|\sum_{c=1}^{D}\mathbf{w}^{c\top}\mathbf{P}\mathbf{C}^{j}\mathbf{x}^{c}-\mathbf{y}(j)\right\|_{2}^{2}+\frac{\lambda}{2}\sum_{c=1}^{D}\left\|\mathbf{w}^{c}\right\|_{2}^{2}~{},$
(8)
where $\mathbf{w}^{c}\in\mathbb{R}^{N}~(c=1,2,\cdots,D)$ is the filter of the
$c$-th channel obtained in the current frame and
$\mathbf{w}=[\mathbf{w}^{1},\mathbf{w}^{2},\cdots,\mathbf{w}^{D}]$ denotes the
whole filter. $\mathbf{x}^{c}\in\mathbb{R}^{T}$ is the $c$-th channel of the
extracted feature map, and $\mathbf{y}(j)$ denotes the $j$-th element of the
expected Gaussian-shaped regression label $\mathbf{y}\in\mathbb{R}^{T}$.
The cropping matrix $\mathbf{P}\in\mathbb{R}^{N\times T}$ crops the
center region of the samples $\mathbf{x}^{c}$ for training, and the cyclic shift
matrix $\mathbf{C}^{j}\in\mathbb{R}^{T\times T}$, the same as in [31], is
employed to obtain cyclic samples. $\lambda$ is the regularization
parameter.
Remark 7: Since in Eq. (8) $T$ and $N$ satisfy $T\gg N$, the filter $\mathbf{w}$
learns far more samples, the negative samples in particular, than in other CF-
based trackers. This strategy makes the filter aware of the background
information, resulting in better discriminative ability.
#### III-C2 Proposed ADTrack
Unlike BACF [10], which trains a single filter $\mathbf{w}$ with both
negative and positive target-size samples, ADTrack trains dual filters
$\mathbf{w}_{g}$ and $\mathbf{w}_{o}$, which learn context and target
information separately. Besides, a constraint term is added to the overall
objective to ensure more robust tracking on-the-fly. The proposed regression
objective can be written as:
$\begin{split}\mathcal{E}(\mathbf{w}_{g},\mathbf{w}_{o})=&\sum_{k}\Big(\frac{1}{2}\Big\|\sum_{c=1}^{D}\mathbf{P}^{\top}\mathbf{w}_{k}^{c}\star\mathbf{x}_{k}^{c}-\mathbf{y}\Big\|_{2}^{2}+\frac{\lambda}{2}\sum_{c=1}^{D}\left\|\mathbf{w}_{k}^{c}\right\|_{2}^{2}\Big)\\ &+\frac{\mu}{2}\sum_{c=1}^{D}\left\|\mathbf{w}_{g}^{c}-\mathbf{w}_{o}^{c}\right\|_{2}^{2}~{},\quad k\in\{g,o\}~{},\end{split}$
(9)
where $\star$ denotes the circular correlation operator, which implicitly performs
sample augmentation by circular shifts. $\mathbf{x}_{g}$ denotes
the context feature map, while $\mathbf{x}_{o}$ indicates the target-region
feature map generated using the mask $\mathbf{m}$, i.e.,
$\mathbf{x}_{o}=\mathbf{m}\odot\mathbf{x}_{g}$. The second and fourth terms in
Eq. (9) serve as regularization terms to prevent overfitting of the
filters. The last term is the constraint term, through which
$\mathbf{w}_{g}$ and $\mathbf{w}_{o}$ bind each other during training; in this
way, the discriminative ability of both filters becomes more robust. $\mu$ is
a parameter controlling the impact of the constraint term.
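To make the objective concrete, here is a hypothetical single-channel sketch that evaluates Eq. (9) directly; the cropping matrix is omitted for brevity, and all names are ours.

```python
import numpy as np

def dual_objective(w_g, w_o, x_g, y, m, lam, mu):
    """Value of the dual regression objective in Eq. (9), single channel,
    cropping matrix omitted for brevity."""
    x_o = m * x_g                      # target-region features via the mask
    total = 0.0
    for w, x in ((w_g, x_g), (w_o, x_o)):
        # circular correlation via the DFT (the `star` operator in Eq. (9))
        resp = np.fft.ifft(np.conj(np.fft.fft(w)) * np.fft.fft(x)).real
        total += 0.5 * np.sum((resp - y) ** 2) + 0.5 * lam * np.sum(w ** 2)
    # mutual constraint term binding the two filters
    return total + 0.5 * mu * np.sum((w_g - w_o) ** 2)
```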
Remark 8: In order to maintain the historic appearance information of the object,
this work follows the conventional fashion of [10] for adaptive model updates,
using a linear interpolation strategy with a fixed learning rate $\eta$ as:
$\mathbf{x}^{f}_{k,\mathrm{model}}=(1-\eta)\,\mathbf{x}^{f-1}_{k,\mathrm{model}}+\eta\,\mathbf{x}^{f}_{k}~{},\quad k\in\{g,o\}~{},$
(10)
where $\mathbf{x}^{f}_{k,\mathrm{model}}$ denotes the training sample in the
$f$-th frame, which is utilized to train dual filters in Eq. (9).
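A one-line sketch of the update in Eq. (10); note that the $(1-\eta)$ weighting is the standard linear-interpolation form assumed here.

```python
def update_model(x_model_prev, x_new, eta):
    """Eq. (10): linear-interpolation (EMA) update of the appearance model."""
    return (1.0 - eta) * x_model_prev + eta * x_new
```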
#### III-C3 Optimization
Assuming that $\mathbf{w}_{o}$ is given, ADTrack first finds the optimal
solution of $\mathbf{w}_{g}$. We define an auxiliary variable
$\mathbf{v}=\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}_{g}\in\mathbb{R}^{TD}$,
where $\otimes$ denotes the Kronecker product and $\mathbf{I}_{N}$ is an $N$-order
identity matrix. Here,
$\mathbf{w}_{g}=[\mathbf{w}^{1\top}_{g},\mathbf{w}^{2\top}_{g},\cdots,\mathbf{w}^{D\top}_{g}]^{\top}\in\mathbb{R}^{ND}$.
Then, the augmented Lagrangian form of Eq. (9) is formulated as:
$\begin{split}\mathcal{E}(\mathbf{w}_{g},\mathbf{v},\bm{\theta})&=\frac{1}{2}\left\|\mathbf{v}\star\mathbf{x}-\mathbf{y}\right\|^{2}_{2}+\frac{\lambda}{2}\left\|\mathbf{w}_{g}\right\|_{2}^{2}\\\
&+\frac{\mu}{2}\left\|\mathbf{w}_{g}-\mathbf{w}_{o}\right\|_{2}^{2}+(\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}_{g}-\mathbf{v})^{\top}\bm{\theta}\\\
&+\frac{\gamma}{2}\left\|\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}_{g}-\mathbf{v}\right\|^{2}_{2}~{},\end{split}$
(11)
where
$\bm{\theta}=[\bm{\theta}^{1\top},\bm{\theta}^{2\top},\cdots,\bm{\theta}^{D\top}]^{\top}\in\mathbb{R}^{TD}$
is the Lagrangian vector and $\gamma$ denotes a penalty factor. Adopting ADMM
[39], Eq. (11) can be decomposed and solved by iterating over the
following three subproblems:
$\left\{\begin{aligned} \mathbf{w}^{e+1}_{g}&=\mathrm{arg}\min_{\mathbf{w}}\Big\{\frac{\lambda}{2}\left\|\mathbf{w}^{e}_{g}\right\|_{2}^{2}+\frac{\mu}{2}\left\|\mathbf{w}^{e}_{g}-\mathbf{w}_{o}\right\|_{2}^{2}\\ &\quad+(\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}^{e}_{g}-\mathbf{v})^{\top}\bm{\theta}+\frac{\gamma}{2}\left\|\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}^{e}_{g}-\mathbf{v}\right\|^{2}_{2}\Big\}\\ \mathbf{v}^{e+1}&=\mathrm{arg}\min_{\mathbf{v}}\Big\{\frac{1}{2}\left\|\mathbf{v}^{e}\star\mathbf{x}-\mathbf{y}\right\|^{2}_{2}\\ &\quad+(\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}_{g}-\mathbf{v}^{e})^{\top}\bm{\theta}+\frac{\gamma}{2}\left\|\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{w}_{g}-\mathbf{v}^{e}\right\|^{2}_{2}\Big\}\\ \bm{\theta}^{e+1}&=\bm{\theta}^{e}+\gamma(\mathbf{v}^{e+1}-(\mathbf{F}\mathbf{P}^{\top}\otimes\mathbf{I}_{D})\mathbf{w}^{e+1}_{g})~{},\end{aligned}\right.$ (12)
where the superscript $\cdot^{e}$ indicates the $e$-th iteration. In the
following, the superscript $\cdot^{\prime}$ denotes the optimal solution of each
subproblem.
Subproblem $\mathbf{w}^{\prime}_{g}$: By setting the partial derivative of the
first subproblem in Eq. (12) with respect to $\mathbf{w}_{g}$ to zero, we
find the closed-form solution of $\mathbf{w}^{\prime}_{g}$, which is expressed
as:
$\mathbf{w}^{\prime}_{g}=\frac{\mu\mathbf{w}_{o}+T\bm{\theta}+\gamma
T\mathbf{v}}{\lambda+\mu+\gamma T}~{}.$ (13)
Subproblem $\mathbf{v}^{\prime}$: To efficiently obtain the closed form of
$\mathbf{v}$, this work first transforms the second subproblem in Eq. (12) into
the Fourier domain using the discrete Fourier transform (DFT) as:
$\begin{split}\mathbf{v}^{\prime}=\mathrm{arg}\min_{\hat{\mathbf{v}}}&\Big\{\frac{1}{2T}\left\|\hat{\mathbf{v}}^{*}\odot\hat{\mathbf{x}}-\hat{\mathbf{y}}\right\|^{2}_{2}+\hat{\bm{\theta}}^{\top}(\sqrt{T}\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{F}_{N}\mathbf{w}_{g}-\hat{\mathbf{v}})\\ &+\frac{\gamma}{2T}\left\|\sqrt{T}\mathbf{I}_{N}\otimes\mathbf{P}^{\top}\mathbf{F}_{N}\mathbf{w}_{g}-\hat{\mathbf{v}}\right\|^{2}_{2}\Big\}~{},\end{split}$ (14)
where $\hat{\cdot}$ denotes the Fourier form of a variable, i.e.,
$\hat{\mathbf{x}}=\sqrt{T}\mathbf{F}_{T}\mathbf{x}$.
$\mathbf{F}_{T}\in\mathbb{C}^{T\times T}$ is the Fourier matrix. Superscript
$\cdot^{*}$ indicates the complex conjugate.
Remark 9: Since circular correlation in the time domain turns into an element-
wise product in the Fourier domain, the samples in Eq. (14) can be separated
across pixels, i.e.,
$\mathbf{x}(t)=[\mathbf{x}^{1}(t),\mathbf{x}^{2}(t),\cdots,\mathbf{x}^{D}(t)]^{\top}\in\mathbb{R}^{D}$
$(t=1,2,\cdots,T)$, and each $\hat{\mathbf{v}}^{\prime}(t)$ can be solved as:
$\begin{split}\hat{\mathbf{v}}^{\prime}(t)=&\Big{(}\hat{\mathbf{x}}(t)\hat{\mathbf{x}}(t)^{\top}+T\gamma\mathbf{I}_{D}\Big{)}^{-1}\\\
&\times\Big{(}\hat{\mathbf{y}}(t)\hat{\mathbf{x}}(t)-T\hat{\bm{\theta}}(t)+T\gamma\hat{\mathbf{w}}_{g}(t)\Big{)}~{}.\end{split}$
(15)
Sherman-Morrison formula [40] is applied to avoid the time-consuming matrix
inversion operation and Eq. (15) is turned into:
$\begin{split}\hat{\mathbf{v}}^{\prime}(t)=\frac{1}{\gamma
T}\Big{(}\hat{\mathbf{y}}(t)\hat{\mathbf{x}}(t)-T\hat{\bm{\theta}}(t)+\gamma
T\hat{\mathbf{w}}_{g}(t)\Big{)}-\\\ \frac{\hat{\mathbf{x}}(t)}{\gamma
Tb}\Big{(}\hat{\mathbf{y}}(t)\hat{\mathbf{s}}_{\mathbf{x}}(t)-T\hat{\mathbf{s}}_{\bm{\theta}}(t)+\gamma
T\hat{\mathbf{s}}_{\bm{w}_{g}}(t)\Big{)}~{},\end{split}$ (16)
where
$\hat{\mathbf{s}}_{\mathbf{x}}(t)=\hat{\mathbf{x}}(t)^{\top}\hat{\mathbf{x}}(t)$,
$\hat{\mathbf{s}}_{\bm{\theta}}(t)=\hat{\mathbf{x}}(t)^{\top}\hat{\bm{\theta}}(t)$,
$\hat{\mathbf{s}}_{\mathbf{w}_{g}}(t)=\hat{\mathbf{x}}(t)^{\top}\hat{\mathbf{w}}_{g}(t)$,
and $b=\hat{\mathbf{s}}_{\mathbf{x}}(t)+T\gamma$ are scalars.
The roles of $\mathbf{w}_{g}$ and $\mathbf{w}_{o}$ in Eq. (9) are
symmetric. When a solving iteration for $\mathbf{w}_{g}$ is completed, the
same ADMM iteration is performed to obtain the optimized solution of
$\mathbf{w}_{o}$.
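For intuition, the ADMM loop of Eqs. (12)-(16) can be sketched as follows. This is a heavily simplified, hypothetical single-channel version with the cropping matrix $\mathbf{P}$ omitted and our own naming conventions; it only illustrates the order of the three subproblem updates, not the exact multi-channel solver.

```python
import numpy as np

def admm_solve_wg(x_hat, y_hat, w_o, lam, mu, gamma, n_iter=3):
    """Simplified single-channel ADMM for w_g, mirroring Eqs. (12)-(16).

    x_hat, y_hat: length-T DFTs of the training sample and the Gaussian label.
    w_o: the (fixed) other filter in the time domain, length T here since the
         cropping matrix P is omitted in this sketch.
    """
    T = x_hat.size
    v_hat = np.zeros_like(x_hat)        # auxiliary variable (Fourier domain)
    theta_hat = np.zeros_like(x_hat)    # Lagrange multipliers (Fourier domain)
    w_g = np.zeros(T)
    for _ in range(n_iter):
        # Subproblem w'_g, Eq. (13): closed form from the zero-gradient condition.
        theta = np.fft.ifft(theta_hat).real
        v = np.fft.ifft(v_hat).real
        w_g = (mu * w_o + T * theta + gamma * T * v) / (lam + mu + gamma * T)
        # Subproblem v', Eq. (15): element-wise closed form in the Fourier
        # domain; with one channel the matrix inverse collapses to the scalar
        # division that Eq. (16) achieves via Sherman-Morrison for D > 1.
        w_g_hat = np.fft.fft(w_g)
        v_hat = (y_hat * np.conj(x_hat) - T * theta_hat + gamma * T * w_g_hat) \
                / (np.abs(x_hat) ** 2 + T * gamma)
        # Lagrange multiplier update, last line of Eq. (12).
        theta_hat = theta_hat + gamma * (v_hat - w_g_hat)
    return w_g
```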
### III-D Target Detection
Given the learned filters $\mathbf{w}^{f}_{g}$ and $\mathbf{w}^{f}_{o}$ in the
$f$-th frame, the response map $\mathbf{R}$ for the detection samples
$\mathbf{z}^{f+1}$ in the $(f+1)$-th frame is obtained by:
$\begin{split}\mathbf{R}=\mathcal{F}^{-1}\sum_{c=1}^{D}\big{(}\hat{\mathbf{w}}^{f,c*}_{g}\odot\hat{\mathbf{z}}^{f+1,c}_{g}+\psi\hat{\mathbf{w}}^{f,c*}_{o}\odot\hat{\mathbf{z}}^{f+1,c}_{o}\big{)}~{},\end{split}$
(17)
where $\mathcal{F}^{-1}$ denotes the inverse discrete Fourier transform.
$\mathbf{z}_{g}^{f+1,c}$ denotes the $c$-th channel of the resized search region
samples extracted in the $(f+1)$-th frame, and $\mathbf{z}_{o}^{f+1,c}$ is the
$c$-th channel of the masked samples, analogous to $\mathbf{x}_{o}$. $\psi$ is a
weight parameter that controls the relative impact of the response maps generated
by the context filter and the object filter. Finally, the object location in the
$(f+1)$-th frame is estimated at the peak of the response map $\mathbf{R}$.
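The detection step of Eq. (17) then amounts to a weighted sum of two correlation responses; a minimal sketch with assumed per-channel DFT inputs (our own naming):

```python
import numpy as np

def fuse_and_localize(w_g_hat, w_o_hat, z_g_hat, z_o_hat, psi):
    """Eq. (17): weighted fusion of context and target-focused responses.

    All inputs are per-channel DFT arrays of shape (D, T); psi weights the
    target-focused response against the context response.
    """
    r_hat = np.sum(np.conj(w_g_hat) * z_g_hat + psi * np.conj(w_o_hat) * z_o_hat,
                   axis=0)
    response = np.fft.ifft(r_hat).real
    return int(np.argmax(response))  # peak index -> estimated object location
```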
The complete pipeline pseudocode of ADTrack is summarized in Algorithm 1.
1
Input: A video sequence of $F$ frames.
Position ($\mathbf{p}^{1}$) and size ($\mathbf{s}^{1}$) of the tracked object
in the first frame $\mathcal{I}^{1}$.
Output: Estimated position ($\mathbf{p}^{f}$) and size ($\mathbf{s}^{f}$) of
the object in all upcoming frames.
2 Construct the Gaussian label function $\mathbf{y}$.
3 for _frame number $f=1$ to end_ do
4 if _$f=1$_ then
5 Calculate the log-average illuminance
$\tilde{\mathcal{L}}^{\mathrm{W}}(\mathcal{I}^{1})$ and the night identifier
$S(\mathcal{I}^{1})$, and adjust the mode of the tracker (Sec. III-A).
6 Crop the training patch from $\mathcal{I}^{1}$ with $\mathbf{p}^{1}$ and
$\mathbf{sc}\times\mathbf{s}^{1}$, where $\mathbf{sc}$ is a predefined scale
factor.
7 if _$S(\mathcal{I}^{1})==1$_ then
8 Do image enhancing to obtain enhanced patch (Sec. III-B).
9 end if
10 Obtain target-aware mask $\mathbf{m}$ (Sec. III-B).
11 Extract context features $\mathbf{x}^{1}_{g}$ and target features
$\mathbf{x}^{1}_{o}$ of the obtained patch.
12 Update the appearance model
$\mathbf{x}^{1}_{k,\mathrm{model}}=\mathbf{x}^{1}_{k}~{},k\in\{g,o\}$ for
filter training.
13 Learn the context and target filters $\mathbf{w}_{k}~{},k\in\{g,o\}$ (Sec.
III-C).
14
15 else
16 Crop the search patch from $\mathcal{I}^{f}$ with $\mathbf{p}^{f-1}$ and
$\mathbf{sc}\times\mathbf{s}^{f-1}$.
17 if _$S(\mathcal{I}^{1})==1$_ then
18 Do image enhancing to obtain enhanced patch (Sec. III-B).
19 end if
20 Obtain target-aware mask $\mathbf{m}$ (Sec. III-B).
21 Extract context and target search features
$\mathbf{z}^{f}_{k}~{},k\in\{g,o\}$ of the obtained patch.
22 Generate the fused response map $\mathbf{R}$ (Sec. III-D).
23 Estimate $\mathbf{p}^{f}$ and $\mathbf{s}^{f}$.
24 Crop the training patch from $\mathcal{I}^{f}$ with $\mathbf{p}^{f}$ and
$\mathbf{sc}\times\mathbf{s}^{f}$.
25 if _$S(\mathcal{I}^{1})==1$_ then
26 Do image enhancing to obtain enhanced patch (Sec. III-B).
27 end if
28 Obtain target-aware mask $\mathbf{m}$ (Sec. III-B).
29 Extract context features $\mathbf{x}^{f}_{g}$ and target features
$\mathbf{x}^{f}_{o}$ of the obtained patch.
30 Update the appearance model using Eq. (10) for dual filter training.
31 Learn the context and target filters $\mathbf{w}_{k}~{},k\in\{g,o\}$ (Sec.
III-C).
32 end if
33
34 end for
Algorithm 1 ADTrack tracker
## IV UAVDark135 Tracking Benchmark
TABLE II: Numbers of sequences; minimum, maximum, and mean frames per sequence; and total frames in 6 benchmarks, i.e., the newly constructed UAVDark135, UAV123, UAV123@10fps [22], DTB70 [23], UAVDT [41], and VisDrone2019-SOT [42]. Red, green, and blue denote the first, second, and third place respectively.

Benchmark | UAVDark135 | UAV123 | UAV123@10fps | DTB70 | UAVDT | VisDrone2019-SOT
---|---|---|---|---|---|---
Sequences | 135 | 123 | 123 | 70 | 50 | 132
Min frames | 216 | 109 | 37 | 68 | 82 | 90
Max frames | 4571 | 3085 | 1029 | 699 | 2969 | 2970
Mean frames | 929 | 915 | 306 | 225 | 742 | 833
Total frames | 125466 | 112578 | 37607 | 15777 | 37084 | 109909
TABLE III: Detailed explanation of the attributes in the newly built UAVDark135, which are commonly confronted in UAV tracking.

Attributes | Explanation
---|---
VC | Viewpoint Change: In the sequence, different aspects of the tracked object, e.g., the front, side, and top aspects, are captured and involved.
FM | Fast Motion: There exist two continuous frames where the center location of the tracked object changes by more than 20 pixels.
LR | Low Resolution: There exist frames where the tracked object is small, with a total resolution of fewer than 20 pixels.
OCC | Occlusion: There exist frames where the tracked object is partially or fully occluded by obstacles.
IV | Illumination Variation: In the sequence, the tracked object undergoes various light conditions.
### IV-A Platform and Statistics
Standing as the first UAV dark tracking benchmark, UAVDark135 contains
135 sequences in total, captured at night by a standard UAV (this work utilized
the Parrot Bebop 2 drone as the shooting platform; more detailed information can
be found at https://support.parrot.com/us/support/products/parrot-bebop-2). The
benchmark includes various tracking scenes, e.g., crossings, t-junctions,
roads, and highways, and consists of different kinds of tracked objects like people,
boats, buses, cars, trucks, athletes, houses, etc. To extend the covered scenes, the
benchmark also contains some sequences from YouTube, which were shot at
sea. The total, mean, maximum, and minimum frames of the
benchmark are 125466, 929, 4571, and 216 respectively, making it suitable for
large-scale evaluation. TABLE II exhibits the main statistics of
UAVDark135 against existing UAV tracking benchmarks, i.e., UAV123@10fps,
UAV123 [22], DTB70 [23], UAVDT [41], and VisDrone2019-SOT [42]
(the VisDrone2019-SOT testset-challenge is not included). The videos are captured
at a frame rate of 30 frames/s (FPS), with a resolution of 1920$\times$1080.
Remark 10: Despite the fact that some sequences in the benchmarks UAVDT [41]
and VisDrone2019-SOT [42] were captured at night, they are far from an
exhaustive dark tracking evaluation. Besides, the night sequences in [41, 42]
are actually well-illuminated, and thus cannot represent the more common dark
tracking scenes in UAVDark135, where the light conditions are much harsher.
### IV-B Annotation
The frames in UAVDark135 are all manually annotated, and each sequence is
completely processed by the same annotator to ensure consistency. Since in
some dark scenes the object is nearly invisible, the annotation process is much
more strenuous. After the first round, 5 professional annotators carefully
checked the results and made revisions over several rounds spanning nearly
2 months to reduce errors as much as possible.
Since the boundary contour of the object is not obvious in the dark, the boxes
from the first annotation round fluctuate across continuous frames, whereas the
actual motion of the object should be smooth. In light of this, for sequences
with extremely severe vibration we keep the original annotation every 5 frames,
and the boxes of the remaining frames are obtained by linear interpolation,
which is closer to the position and scale variation of the real object.
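This smoothing amounts to linearly interpolating the keyframe boxes; a minimal sketch (our own hypothetical helper, with boxes assumed to be in (x, y, w, h) format):

```python
import numpy as np

def interpolate_boxes(key_frames, key_boxes):
    """Linearly interpolate annotation boxes between keyframes.

    key_frames: sorted 1-D array of annotated frame indices (every 5th frame).
    key_boxes: array of shape (len(key_frames), 4) with (x, y, w, h) boxes.
    Returns boxes for every frame between the first and last keyframe.
    """
    frames = np.arange(key_frames[0], key_frames[-1] + 1)
    return np.stack([np.interp(frames, key_frames, key_boxes[:, i])
                     for i in range(4)], axis=1)
```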
Figure 5: Sequence distribution comparison of 6 UAV tracking benchmarks. The
abscissa shows the 5 attributes, and the ordinate the number of sequences.
The sequences of different benchmarks are marked in different colors, explained
in the legend. Note that not all benchmarks contribute to all 5 attributes.
Figure 6: Overall performance of SOTA handcrafted CF-based trackers on
UAV123@10fps [22], DTB70 [23], and the newly built UAVDark135. The evaluation
metric in the precision plots is the distance precision (DP) at a center
location error (CLE) of 20 pixels, and the metric in the success rate plots is
the area under the curve (AUC). Clearly, ADTrack maintains its robustness on
all 3 benchmarks by virtue of its dual regression.
### IV-C Attributes
To better evaluate the trackers’ abilities under special challenges,
UAVDark135 also provides 5 challenge attributes commonly encountered in UAV
tracking, following our prior work [43], i.e., viewpoint change (VC), fast
motion (FM), low resolution (LR), occlusion (OCC), and illumination variation
(IV). TABLE III explains the criterion for each attribute in detail.
Additionally, Fig. 5 displays the sequence distribution comparison of 6 UAV
tracking benchmarks. Clearly, UAVDark135 is distributed both evenly and
substantially across the five attributes.
## V Experiment and Evaluation
TABLE IV: Average results over all 328 sequences of the handcrafted trackers on the benchmarks UAV123@10fps [22], DTB70 [23], and the newly constructed UAVDark135. Red, green, and blue denote the first, second, and third place respectively. Here, the abilities of the trackers under all-day conditions are evaluated.

Tracker | ADTrack | AutoTrack | ARCF-HC | ARCF-H | MCCT-H | STRCF | KCC | fDSST | DSST | BACF | CSR-DCF | ECO-HC | Staple_CA | Staple | KCF | SRDCF | SAMF
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
Venue | Ours | ’20CVPR | ’19ICCV | ’19ICCV | ’18CVPR | ’18CVPR | ’18AAAI | ’17TPAMI | ’17TPAMI | ’17ICCV | ’17CVPR | ’17CVPR | ’17CVPR | ’16CVPR | ’15TPAMI | ’15ICCV | ’14ECCV
DP | 0.659 | 0.651 | 0.638 | 0.591 | 0.514 | 0.611 | 0.459 | 0.465 | 0.420 | 0.581 | 0.576 | 0.601 | 0.495 | 0.510 | 0.391 | 0.566 | 0.434
AUC | 0.480 | 0.468 | 0.462 | 0.433 | 0.380 | 0.451 | 0.330 | 0.354 | 0.321 | 0.429 | 0.415 | 0.449 | 0.367 | 0.377 | 0.266 | 0.427 | 0.312
FPS | 31.621 | 45.485 | 22.585 | 32.320 | 44.858 | 20.438 | 29.393 | 122.976 | 58.113 | 33.911 | 9.274 | 53.571 | 46.829 | 81.216 | 374.912 | 8.583 | 7.518
This section presents exhaustive experimental evaluations, involving the nighttime
benchmark UAVDark135 and the daytime benchmarks UAV123@10fps [22] and DTB70 [23]. In
subsection V-A, implementation details including the experimental platform,
parameter settings, features, and metrics are introduced. Subsection V-B gives
a comprehensive comparison of handcrafted CF-based trackers on the
benchmarks, where the superiority of the proposed ADTrack for all-day UAV tracking
is demonstrated. Subsection V-C presents an attribute-based evaluation of the
handcrafted CF-based trackers to test their abilities under UAV-specific
challenges. In subsection V-D, we also invite SOTA deep trackers that utilize
convolutional neural networks (CNNs) for the dark tracking comparison. Lastly, in
subsection V-E, we conduct an ablation study and parameter analysis to further
demonstrate the validity of the different modules in the proposed ADTrack.
### V-A Implementation Details
#### V-A1 Platform
The experiments in this work were mainly performed in MATLAB R2019a.
The main hardware consists of an Intel Core i7-8700K CPU and 32GB RAM.
#### V-A2 Parameters
To guarantee the fairness and objectivity of the evaluation, the tested
trackers from other works maintained their official initial parameters.
The parameters of ADTrack for the two conditions are as follows. In
daytime mode, $\mu$ is set to 280; during detection, the weight $\psi$ is set to
$0.02$; the translation filter takes a learning rate of $\eta_{t}=0.032$ for
model update, and for the scale filter, $\eta_{s}$ is set to 0.016. In nighttime
mode, $\mu$ is set to 200; during detection, the weight $\psi$ is set to $0.01$;
the translation filter takes a learning rate of $\eta_{t}=0.024$, and for the
scale filter, $\eta_{s}$ is set to 0.023.
Remark 11: ADTrack adapts to the given light condition in its first stage,
where the aforementioned tracking mode is switched automatically without manual
adjustment.
#### V-A3 Features and Scale Estimation
ADTrack uses handcrafted features for appearance representations, i.e., gray-
scale, a fast version of histogram of oriented gradient (fHOG) [44], and color
names (CN) [45]. Note that gray-scale and CN features can be valid in ADTrack
thanks to low-light enhancement. The cell size for feature extraction is set
as $4\times 4$. ADTrack adopts the scale filter proposed by [11] to perform
accurate scale estimation.
#### V-A4 Metrics
In the experimental evaluation, we mainly use two metrics: distance precision
(DP) and area under the curve (AUC). DP is based on the distance between the
center points of the predicted box and the ground-truth box, and AUC is
based on the intersection over union of the predicted box and the ground-
truth box.
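For reference, DP at the CLE = 20 pixel threshold can be computed as in the following sketch (the (x, y, w, h) box format is our assumption):

```python
import numpy as np

def distance_precision(pred_boxes, gt_boxes, threshold=20.0):
    """DP at CLE = 20 px: fraction of frames with center error below threshold.

    pred_boxes, gt_boxes: arrays of shape (N, 4) in (x, y, w, h) format.
    """
    pred_c = pred_boxes[:, :2] + pred_boxes[:, 2:] / 2.0
    gt_c = gt_boxes[:, :2] + gt_boxes[:, 2:] / 2.0
    cle = np.linalg.norm(pred_c - gt_c, axis=1)  # center location error
    return float(np.mean(cle <= threshold))
```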
### V-B Overall Evaluation
Using merely handcrafted features, most handcrafted CF-based trackers can
achieve a satisfying running speed by virtue of their light computation, while
ensuring robustness in various tracking scenes onboard UAVs. This work
compares the proposed ADTrack with 16 SOTA handcrafted CF-based trackers, i.e.,
AutoTrack [8], KCF [31], SAMF [46], SRDCF [32], STRCF [33], BACF [10], DSST &
fDSST [11], ECO-HC [47], ARCF-HC & ARCF-H [9], KCC [48], MCCT-H [49], CSR-DCF
[37], Staple [34], and Staple_CA [50], on the tracking benchmarks
UAV123@10fps [22], DTB70 [23], and UAVDark135 to comprehensively demonstrate the
robustness of the proposed ADTrack in all-day UAV tracking.
Figure 7: Visualization of some typical tracking scenes in both daytime and
nighttime. Sequences Car2, RcCar6, and uav3 are from the daytime benchmarks
DTB70 and UAV123@10fps. Sequences car13, pedestrian_l, and running_man are
from the new nighttime benchmark UAVDark135. Clearly, ADTrack favorably
maintains its robustness under all-day UAV tracking challenges. A video of
these typical sequences can be found at
https://youtu.be/cJMUKF4J38A.
#### V-B1 Daytime Performance
In Fig. 6, the DP and AUC comparisons on the benchmarks UAV123@10fps
[22] and DTB70 [23] are exhibited in the first 2 columns, where ADTrack ranks
first in both metrics. Specifically, in Fig. 6, ADTrack (0.682) surpasses the
second-best AutoTrack tracker (0.671) [8] by 1.6% in DP. In terms of AUC, ADTrack
(0.482) surpasses its baseline BACF tracker (0.413) [10] by over 19%. Fig. 6
further shows that ADTrack brings its baseline (0.581) up by 24% (0.722) in DP, and
exceeds the brilliant AutoTrack tracker (0.478) by nearly 4% (0.497) in AUC.
The outstanding daytime results achieved by ADTrack indicate its strong
robustness in real-world UAV tracking scenes by virtue of its dual filter
learning.
#### V-B2 Nighttime Performance
Fig. 6 also displays the handcrafted CF-based trackers’ DPs on the newly
constructed benchmark UAVDark135, where ADTrack clearly exceeds all other
trackers, surpassing the second-best tracker (0.599) by over 1% (0.605).
Additionally, ADTrack enjoys a satisfying advantage in success rate as
well, exceeding the baseline tracker (0.460) by over 1.9% (0.469). In dark
scenes like UAVDark135, ADTrack maintains its robustness, providing a
favorable choice for all-day UAV tracking.
TABLE V: Average performance of the handcrafted trackers by UAV-specific attributes. Obviously, ADTrack maintains its superiority in most challenges. The first five columns report DP and the last five AUC.

Tracker | VC | FM | LR | OCC | IV | VC | FM | LR | OCC | IV
---|---|---|---|---|---|---|---|---|---|---
ADTrack | 0.637 | 0.63 | 0.668 | 0.622 | 0.605 | 0.464 | 0.464 | 0.471 | 0.434 | 0.437
AutoTrack | 0.622 | 0.588 | 0.651 | 0.598 | 0.599 | 0.448 | 0.433 | 0.455 | 0.412 | 0.431
ARCF-HC | 0.61 | 0.61 | 0.649 | 0.595 | 0.597 | 0.438 | 0.448 | 0.458 | 0.417 | 0.433
ARCF-H | 0.565 | 0.551 | 0.606 | 0.537 | 0.56 | 0.413 | 0.4 | 0.427 | 0.373 | 0.411
MCCT-H | 0.504 | 0.471 | 0.504 | 0.503 | 0.476 | 0.366 | 0.361 | 0.367 | 0.353 | 0.353
STRCF | 0.584 | 0.568 | 0.611 | 0.584 | 0.59 | 0.424 | 0.43 | 0.442 | 0.406 | 0.437
KCC | 0.425 | 0.451 | 0.493 | 0.427 | 0.459 | 0.309 | 0.329 | 0.348 | 0.297 | 0.326
fDSST | 0.462 | 0.406 | 0.481 | 0.436 | 0.424 | 0.343 | 0.327 | 0.363 | 0.317 | 0.329
DSST | 0.425 | 0.342 | 0.413 | 0.385 | 0.391 | 0.316 | 0.275 | 0.303 | 0.274 | 0.298
BACF | 0.55 | 0.554 | 0.582 | 0.517 | 0.537 | 0.41 | 0.411 | 0.414 | 0.371 | 0.402
CSR-DCF | 0.57 | 0.536 | 0.576 | 0.561 | 0.54 | 0.399 | 0.383 | 0.405 | 0.387 | 0.381
ECO-HC | 0.584 | 0.524 | 0.572 | 0.599 | 0.56 | 0.434 | 0.409 | 0.426 | 0.423 | 0.421
Staple_CA | 0.476 | 0.465 | 0.534 | 0.484 | 0.486 | 0.353 | 0.353 | 0.387 | 0.346 | 0.36
Staple | 0.463 | 0.498 | 0.567 | 0.491 | 0.512 | 0.343 | 0.379 | 0.407 | 0.349 | 0.377
KCF | 0.376 | 0.31 | 0.38 | 0.363 | 0.353 | 0.251 | 0.227 | 0.262 | 0.242 | 0.24
SRDCF | 0.526 | 0.549 | 0.587 | 0.509 | 0.55 | 0.403 | 0.418 | 0.43 | 0.365 | 0.418
SAMF | 0.414 | 0.36 | 0.427 | 0.418 | 0.391 | 0.293 | 0.267 | 0.303 | 0.288 | 0.281
#### V-B3 All-Day Performance
To evaluate the abilities of the trackers in all-day tracking scenes, this
part averages the results over the sequences of 3 benchmarks, i.e., the daytime
benchmarks UAV123@10fps [22] and DTB70 [23] and the nighttime benchmark
UAVDark135, 328 sequences in total. TABLE IV exhibits the average
results of the SOTA handcrafted CF-based trackers. Obviously, the proposed
ADTrack possesses great advantages over all the other trackers in both DP and
AUC. Specifically, ADTrack (0.659) improves the precision of its baseline (0.581)
by more than 13%, surpassing the second place (0.651) by over 1%. In terms of
AUC, ADTrack (0.480) is ahead of the second place (0.468) by nearly 2.6%. Fig. 7
displays some representative tracking scenes in all-day conditions, where
ADTrack exhibits competitive robustness against the other trackers. In addition
to the satisfying tracking performance, ADTrack achieves an average speed of over
31 FPS on a single CPU, meeting the real-time requirement of onboard UAV tracking.
Evidently, ADTrack achieves promising tracking performance in all-day
conditions, through day and night, thus greatly expanding tracking-based
applications onboard UAVs around the clock.
### V-C Attribute-Based Evaluation
To clearly evaluate the abilities of the trackers under UAV-specific challenges,
this part displays their performance on the aforementioned 5 UAV tracking
attributes, i.e., VC, FM, LR, OCC, and IV. TABLE V gives the average results
over all 328 sequences of the 3 benchmarks. For the authoritative daytime
benchmarks, this subsection follows our previous work [43] to rewrite their
official attributes. Clearly, ADTrack outperforms all other trackers in most
attributes under both evaluation metrics. Especially in FM, ADTrack surpasses the
second-best tracker ARCF-HC [9] by over 3% in both DP and AUC. The results
demonstrate the satisfying comprehensive tracking performance and favorable
robustness of ADTrack under common challenges.
### V-D Against Deep Trackers
This subsection focuses on the comparison between the proposed ADTrack and deep
trackers, which utilize offline-trained deep networks for feature extraction or
template matching. This work invites 11 SOTA deep trackers in total, i.e.,
SiamRPN++ [18], DaSiamRPN [51], SiamFC++ [15], ASRCF [52], ECO [47], UDT+
[14], HCFT [53], CoKCF [54], CFWCR [55], DeepSTRCF [33], and MCCT [49], to
evaluate their performance on UAVDark135. As shown in Fig. 8, ADTrack outperforms
all deep trackers in terms of DP and AUC on the benchmark UAVDark135.
Using merely a single CPU, ADTrack still achieves a real-time speed of over 30
FPS, while many deep trackers are far from real-time even on a GPU,
demonstrating the excellence of ADTrack for real-time UAV tracking against the
deep trackers.
Remark 12: The results illustrate that the top-ranked deep trackers of recent
years, especially the trackers without online update, e.g., SiamRPN++ [18],
DaSiamRPN [51], and SiamFC++ [15], fail to maintain their robustness in common
real-world dark scenes, since the off-the-shelf CNNs they utilize are trained on
daytime images, resulting in their clear inferiority compared with the online-
learned ADTrack in the dark. Since sufficient dark images for training are
lacking, offline-trained deep trackers fall short in onboard UAV tracking at
night.
Figure 8: Comparison of the proposed ADTrack and the deep trackers on the self-
constructed UAVDark135. Evidently, most deep trackers fall short in harsh dark
scenes, while ADTrack maintains its robustness.
### V-E Ablation Study and Parameter Analysis
#### V-E1 Component Validity Analysis
To demonstrate the effectiveness of the proposed components, an ablation study is
conducted on all three benchmarks. The average AUC results over the sequences of
the 3 benchmarks are displayed in Fig. 9, where, from bottom to top, the proposed
components, i.e., weighted sum, dual filter constraint, and illumination
adaptation, were disabled one by one. ADTrack_aed denotes ADTrack without the
weighted sum in the detection phase. In ADTrack_ae, the dual filter training
constraint is disabled on the basis of ADTrack_aed. ADTrack_a denotes ADTrack_ae
without the image enhancer. ADTrack_b_day and ADTrack_b_dark respectively denote
ADTrack with daytime and nighttime parameters without any proposed
module. The first 3 bars in Fig. 9 demonstrate the validity of the
illumination adaptation module, and the last 3 bars illustrate how the dual
filter constraint and weighted sum boost the tracker’s performance.
#### V-E2 Impacts of Key Parameters
The key parameters in ADTrack are the constraint parameter $\mu$ in the
training regression equation and the weight parameter $\psi$ in the detection
stage. We investigate the impact of the two parameters on the tracking results,
i.e., DP and AUC, on the nighttime benchmark UAVDark135, as shown in Fig. 10.
Note that when $\mu$ varies, $\psi$ is fixed at 0.01, and when $\psi$ varies,
$\mu$ is fixed at 200.
Figure 9: Ablation study of the modules in ADTrack. ADTrack, ADTrack_aed,
ADTrack_ae, ADTrack_a, ADTrack_b_day, and ADTrack_b_dark respectively denote
ADTrack with different components activated. Note that ADTrack_ae performs
slightly worse than ADTrack_a, since the enhancer needs to cooperate with
the dual filter learning and dual response fusion modules to work effectively.
Figure 10: Parameter analysis of ADTrack on newly built benchmark UAVDark135.
With other parameters remaining fixed, the tracking performance with different
$\mu$ (Top) and with different $\psi$ (Bottom) are displayed. The chosen
parameters along with their results are marked out by dotted lines.
## VI Conclusion
This work puts forward a novel real-time tracker with illumination adaptation
and anti-dark capability, i.e., ADTrack. ADTrack first performs illumination
adaptation to decide the day-night condition and switch its tracking mode. Then,
pretreatment is carried out, where the proper training patch and target-aware
mask are generated based on an image enhancer. With the mask, ADTrack solves an
innovative dual filter regression model, in which the dual filters restrict
each other in training and compensate each other in detection. In addition,
the first large-scale dark tracking benchmark, UAVDark135, is also built in
this work for the visual tracking community. We strongly believe that the proposed
tracker and dark tracking benchmark will make an outstanding contribution to the
research of UAV tracking in all-day conditions.
## Acknowledgment
This work is supported by the National Natural Science Foundation of China
(No. 61806148) and Natural Science Foundation of Shanghai (No. 20ZR1460100).
## References
* [1] S. Xu, K. Doğançay, and H. Hmam, “Distributed Pseudolinear Estimation and UAV Path Optimization for 3D AOA Target Tracking,” _Signal Processing_ , vol. 133, pp. 64–78, 2017.
* [2] C. Yuan, Z. Liu, and Y. Zhang, “UAV-based Forest Fire Detection and Tracking Using Image Processing Techniques,” in _Proceedings of the International Conference on Unmanned Aircraft Systems (ICUAS)_ , 2015, pp. 639–643.
* [3] F. Vanegas, D. Campbell, N. Roy, K. J. Gaston, and F. Gonzalez, “UAV tracking and following a ground target under motion and localisation uncertainty,” in _Proceedings of the IEEE Aerospace Conference (AC)_ , 2017, pp. 1–10.
* [4] S. Lin, M. Garratt, and A. Lambert, “Monocular Vision-based Real-time Target Recognition and Tracking for Autonomously Landing an UAV in a Cluttered Shipboard Environment,” _Autonomous Robots_ , vol. 41, no. 4, pp. 881–901, 2017.
* [5] J. Bian, X. Hui, X. Zhao, and M. Tan, “A Novel Monocular-Based Navigation Approach for UAV Autonomous Transmission-Line Inspection,” in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2018, pp. 1–7.
* [6] T. Baca, D. Hert, G. Loianno, M. Saska, and V. Kumar, “Model Predictive Trajectory Tracking and Collision Avoidance for Reliable Outdoor Deployment of Unmanned Aerial Vehicles,” in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2018, pp. 6753–6760.
* [7] C. Fu, A. Carrio, M. A. Olivares-Mendez, R. Suarez-Fernandez, and P. Campoy, “Robust Real-time Vision-based Aircraft Tracking from Unmanned Aerial Vehicles,” in _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_ , 2014, pp. 5441–5446.
* [8] Y. Li, C. Fu, F. Ding, Z. Huang, and G. Lu, “AutoTrack: Towards High-Performance Visual Tracking for UAV with Automatic Spatio-Temporal Regularization,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2020, pp. 11 923–11 932.
* [9] Z. Huang, C. Fu, Y. Li, F. Lin, and P. Lu, “Learning Aberrance Repressed Correlation Filters for Real-Time UAV Tracking,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2019, pp. 2891–2900.
* [10] H. K. Galoogahi, A. Fagg, and S. Lucey, “Learning Background-Aware Correlation Filters for Visual Tracking,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2017, pp. 1144–1152.
* [11] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Discriminative Scale Space Tracking,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 39, no. 8, pp. 1561–1575, 2017.
* [12] N. Wang, W. Zhou, Y. Song, C. Ma, and H. Li, “Real-Time Correlation Tracking Via Joint Model Compression and Transfer,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 6123–6135, 2020.
* [13] R. Han, W. Feng, and S. Wang, “Fast Learning of Spatially Regularized and Content Aware Correlation Filter for Visual Tracking,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 7128–7140, 2020.
* [14] N. Wang, Y. Song, C. Ma, W. Zhou, W. Liu, and H. Li, “Unsupervised Deep Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 1308–1317.
* [15] Y. Xu, Z. Wang, Z. Li, Y. Yuan, and G. Yu, “SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines,” in _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_ , 2020, pp. 12 549–12 556.
* [16] Q. Guo, W. Feng, C. Zhou, R. Huang, L. Wan, and S. Wang, “Learning Dynamic Siamese Network for Visual Object Tracking,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2017, pp. 1781–1789.
* [17] Y. Li, C. Fu, Z. Huang, Y. Zhang, and J. Pan, “Keyfilter-Aware Real-Time UAV Object Tracking,” in _Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)_ , 2020, pp. 193–199.
* [18] B. Li, W. Wu, Q. Wang, F. Zhang, J. Xing, and J. Yan, “Siamrpn++: Evolution of Siamese Visual Tracking with Very Deep Networks,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 4282–4291.
* [19] X. Li, C. Ma, B. Wu, Z. He, and M.-H. Yang, “Target-Aware Deep Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 1369–1378.
* [20] E. Reinhard, M. Stark, P. Shirley, and J. Ferwerda, “Photographic Tone Reproduction for Digital Images,” in _Proceedings of the Annual Conference on Computer Graphics and Interactive Techniques (CGIT)_ , 2002, pp. 267–276.
* [21] H. Ahn, B. Keum, D. Kim, and H. S. Lee, “Adaptive Local Tone Mapping Based on Retinex for High Dynamic Range Images,” in _Proceedings of the IEEE International Conference on Consumer Electronics (ICCE)_ , 2013, pp. 153–156.
* [22] M. Mueller, N. Smith, and B. Ghanem, “A Benchmark and Simulator for UAV Tracking,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2016, pp. 445–461.
* [23] S. Li and D.-Y. Yeung, “Visual Object Tracking for Unmanned Aerial Vehicles: A Benchmark and New Motion Models,” in _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_ , 2017.
* [24] W. Wang, C. Wei, W. Yang, and J. Liu, “Gladnet: Low-light enhancement network with global awareness,” in _Proceedings of the IEEE International Conference on Automatic Face & Gesture Recognition (ICAFGR)_, 2018, pp. 751–755.
* [25] C. Chen, Q. Chen, J. Xu, and V. Koltun, “Learning to See in the Dark,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2018, pp. 3291–3300.
* [26] W. Ren, S. Liu, L. Ma, Q. Xu, X. Xu, X. Cao, J. Du, and M.-H. Yang, “Low-light Image Enhancement via A Deep Hybrid Network,” _IEEE Transactions on Image Processing_ , vol. 28, no. 9, pp. 4364–4375, 2019.
* [27] M. Li, J. Liu, W. Yang, X. Sun, and Z. Guo, “Structure-Revealing Low-Light Image Enhancement Via Robust Retinex Model,” _IEEE Transactions on Image Processing_ , vol. 27, no. 6, pp. 2828–2841, 2018.
* [28] L. Meylan and S. Susstrunk, “High Dynamic Range Image Rendering with A Retinex-Based Adaptive Filter,” _IEEE Transactions on Image Processing_ , vol. 15, no. 9, pp. 2820–2830, 2006.
* [29] Z.-u. Rahman, D. J. Jobson, and G. A. Woodell, “Multi-Scale Retinex for Color Image Enhancement,” in _Proceedings of the IEEE International Conference on Image Processing (ICIP)_ , vol. 3, 1996, pp. 1003–1006.
* [30] E. H. Land, “The Retinex Theory of Color Vision,” _Scientific American_ , vol. 237, no. 6, pp. 108–129, 1977.
* [31] J. F. Henriques, R. Caseiro, P. Martins, and J. Batista, “High-Speed Tracking with Kernelized Correlation Filters,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 37, no. 3, pp. 583–596, 2015.
* [32] M. Danelljan, G. Häger, F. S. Khan, and M. Felsberg, “Learning Spatially Regularized Correlation Filters for Visual Tracking,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2015, pp. 4310–4318.
* [33] F. Li, C. Tian, W. Zuo, L. Zhang, and M. Yang, “Learning Spatial-Temporal Regularized Correlation Filters for Visual Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2018, pp. 4904–4913.
* [34] L. Bertinetto, J. Valmadre, S. Golodetz, O. Miksik, and P. H. Torr, “Staple: Complementary Learners for Real-time Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2016, pp. 1401–1409.
* [35] C. Fu, J. Xu, F. Lin, F. Guo, T. Liu, and Z. Zhang, “Object Saliency-Aware Dual Regularized Correlation Filter for Real-Time Aerial Tracking,” _IEEE Transactions on Geoscience and Remote Sensing_ , vol. 58, no. 12, pp. 8940–8951, 2020.
* [36] C. Fu, F. Ding, Y. Li, J. Jin, and C. Feng, “DR^2Track: Towards Real-Time Visual Tracking for UAV via Distractor Repressed Dynamic Regression,” in _Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 2020, pp. 1–8.
* [37] A. Lukezic, T. Vojir, L. Cehovin Zajc, J. Matas, and M. Kristan, “Discriminative Correlation Filter with Channel and Spatial Reliability,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 6309–6318.
* [38] W. Feng, R. Han, Q. Guo, J. Zhu, and S. Wang, “Dynamic Saliency-Aware Regularization for Correlation Filter-Based Object Tracking,” _IEEE Transactions on Image Processing_ , vol. 28, no. 7, pp. 3232–3245, 2019.
* [39] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers,” _Foundations and Trends in Machine Learning_ , vol. 3, pp. 1–122, 2010.
* [40] J. Sherman and W. J. Morrison, “Adjustment of An Inverse Matrix Corresponding to A Change in One Element of A Given Matrix,” _The Annals of Mathematical Statistics_ , vol. 21, no. 1, pp. 124–127, 1950.
* [41] D. Du, Y. Qi, H. Yu, Y. Yang, K. Duan, G. Li, W. Zhang, Q. Huang, and Q. Tian, “The Unmanned Aerial Vehicle Benchmark: Object Detection and Tracking,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 370–386.
* [42] D. Du _et al._ , “VisDrone-SOT2019: The Vision Meets Drone Single Object Tracking Challenge Results,” in _Proceedings of the International Conference on Computer Vision Workshops (ICCVW)_ , 2019, pp. 1–14.
* [43] C. Fu, B. Li, F. Ding, F. Lin, and G. Lu, “Correlation Filter for UAV-Based Aerial Tracking: A Review and Experimental Evaluation,” _arXiv preprint arXiv:2010.06255_ , pp. 1–28, 2020.
* [44] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan, “Object Detection with Discriminatively Trained Part-Based Models,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 32, no. 9, pp. 1627–1645, 2010.
* [45] J. van de Weijer and C. Schmid, “Coloring Local Feature Extraction,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2006, pp. 334–348.
* [46] Y. Li and J. Zhu, “A Scale Adaptive Kernel Correlation Filter Tracker with Feature Integration,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2014, pp. 254–265.
* [47] M. Danelljan, G. Bhat, F. Shahbaz Khan, and M. Felsberg, “ECO: Efficient Convolution Operators for Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 6638–6646.
* [48] C. Wang, L. Zhang, L. Xie, and J. Yuan, “Kernel Cross-Correlator,” in _Proceedings of the AAAI Conference on Artificial Intelligence (AAAI)_ , 2018, pp. 4179–4186.
* [49] N. Wang, W. Zhou, Q. Tian, R. Hong, M. Wang, and H. Li, “Multi-cue Correlation Filters for Robust Visual Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2018, pp. 4844–4853.
* [50] M. Mueller, N. Smith, and B. Ghanem, “Context-Aware Correlation Filter Tracking,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2017, pp. 1396–1404.
* [51] Z. Zhu, Q. Wang, B. Li, W. Wu, J. Yan, and W. Hu, “Distractor-Aware Siamese Networks for Visual Object Tracking,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 101–117.
* [52] K. Dai, D. Wang, H. Lu, C. Sun, and J. Li, “Visual Tracking via Adaptive Spatially-Regularized Correlation Filters,” in _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019, pp. 4665–4674.
* [53] C. Ma, J. Huang, X. Yang, and M. Yang, “Hierarchical Convolutional Features for Visual Tracking,” in _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_ , 2015, pp. 3074–3082.
* [54] L. Zhang and P. N. Suganthan, “Robust Visual Tracking via Co-trained Kernelized Correlation Filters,” _Pattern Recognition_ , vol. 69, pp. 82–93, 2017.
* [55] Z. He, Y. Fan, J. Zhuang, Y. Dong, and H. Bai, “Correlation Filters with Weighted Convolution Responses,” in _Proceedings of the International Conference on Computer Vision Workshops (ICCVW)_ , 2017, pp. 1992–2000.
|
# Allocating Opportunities in a Dynamic Model of Intergenerational Mobility
Hoda Heidari
Carnegie Mellon University
<EMAIL_ADDRESS>
Jon Kleinberg
Cornell University
<EMAIL_ADDRESS>
###### Abstract
Opportunities such as higher education can promote intergenerational mobility,
leading individuals to achieve levels of socioeconomic status above that of
their parents. We develop a dynamic model for allocating such opportunities in
a society that exhibits bottlenecks in mobility; the problem of optimal
allocation reflects a trade-off between the benefits conferred by the
opportunities in the current generation and the potential to elevate the
socioeconomic status of recipients, shaping the composition of future
generations in ways that can benefit further from the opportunities. We show
how optimal allocations in our model arise as solutions to continuous
optimization problems over multiple generations, and we find in general that
these optimal solutions can favor recipients of low socioeconomic status over
slightly higher-performing individuals of high socioeconomic status — a form
of socioeconomic affirmative action that the society in our model discovers in
the pursuit of purely payoff-maximizing goals. We characterize how the
structure of the model can lead to either temporary or persistent affirmative
action, and we consider extensions of the model with more complex processes
modulating the movement between different levels of socioeconomic status.
## 1 Introduction
Intergenerational mobility — the extent to which an individual’s socioeconomic
status differs from the status of their prior generations of family members —
has emerged as a central notion in our understanding of inequality. A large
amount of empirical work has gone into estimating the extent of mobility for
different subsets of society; while many of the effects are complex and
challenging to measure, two broad and fairly robust principles emerge from
this work. First, socioeconomic status is persistent across generations: an
individual’s socioeconomic status is strongly dependent on parental status. As
Lee and Solon (2009) write in the opening to their survey of this topic, “Over
the past two decades, a large body of research has documented that the
intergenerational transmission of economic status in the United States is much
stronger than earlier sociological and economic analyses had suggested”.
Second, certain types of opportunities can serve as strong catalysts for
socioeconomic mobility; a canonical example is higher education, which has the
potential to raise an individual’s socioeconomic status (and, by the previous
principle, that of their current or future children as well). As Chetty et al.
(2014) write, “The fact that the college attendance is a good proxy for income
mobility is intuitive given the strong association between higher education
and subsequent earnings”.
An important question from a social planning perspective is thus the choice of
policy for allocating opportunities to people of different levels of
socioeconomic status. (Again, we can think of access to higher education as a
running example in this discussion.) Many goals can motivate the choice of
policy, including the reduction of socioeconomic inequality and the
prioritization of opportunities to those most in need. Such goals are often
viewed as operating in tension with the aim of maximizing the achievable
payoff from the available opportunities, which would seem to suggest targeting
the opportunities based only on the anticipated performance of the recipient,
not their socioeconomic status. In this view, society is implicitly being
asked to choose between these goals; this consideration forms a central
ingredient in the informal discourse and debate around the allocation of
opportunity. But through all of this, a challenging question remains: to what
extent is the tension between these goals genuine, and to what extent can they
be viewed as at least partially in alignment?
A large body of work in economics compares various allocation policies in
terms of the above seemingly-competing criteria — typically in simplified
settings in which only two generations are considered. The literature includes
seminal work by Nobel Laureate Gary Becker with Nigel Tomes (Becker and
Tomes, 1986) and by Glenn Loury (Loury, 1981). In multigenerational settings,
however, deriving the optimal policy becomes exceedingly challenging, and it
has been highlighted as a class of open questions in this literature. For
example, in his work on models of college admissions and intergenerational
mobility, Durlauf (2008) notes: “A college admissions rule has
intergenerational effects because it not only influences the human capital of
the next generation of adults, but also affects the initial human capital of
the generation after next. […] Efficiency in student allocation [in this case]
is far more complicated than before. I am unaware of any simple way of
describing efficiency conditions for college assignment rules analogous to
[the above setting].” In this work, we address this challenge and the
associated open questions concerning the behavior of multigenerational models.
A key ingredient in our progress on these questions is the development of
methods for working with a class of Markov Decision Processes that operate
over continuous states and continuous actions. Our analysis of
multigenerational models enables us to investigate the apparent tension
between efficiency and fairness considerations in allocating opportunities.
#### Allocating Opportunities in a Payoff-Maximizing Society.
We work with a simple mathematical model representing a purely payoff-
maximizing society, operating over multiple generations. As we discuss briefly
in Section 1 and at more length in Appendix A, our model is grounded in the
types of models proposed in economic theory work on these problems. The
society must decide how to allocate opportunities in each generation across a
population heterogeneous in its socioeconomic status. The payoff to the
society is the total performance of everyone who receives the opportunities,
summed (with discounting) over all generations. Although the set-up of the
model is highly streamlined, the analysis of the model becomes quite subtle
since society must solve a continuous-valued dynamic programming problem over
multiple generations.
What we find from the model is that the optimal solution will in general tend
to offer opportunities to individuals of lower socioeconomic status over
comparable individuals of higher socioeconomic status, even when these
competing individuals are predicted to have a slightly better performance from
receiving the opportunity. This is not arising because the optimal solution
has any a priori interest in reducing socioeconomic inequality (although such
goals are important in their own right (Forde-Mazrui, 2004)); rather it is
strictly trying to maximize payoff over multiple generations. But given two
individuals of equal predicted performance, the one with lower socioeconomic
status confers an added benefit to the payoff function: their success would
grow the size of the socioeconomically advantaged class, resulting in higher
payoffs in future generations. Because the difference in payoff contributions
between these two individuals is strictly positive, the same decision would be
optimal even if the individual of lower socioeconomic status had a slightly
lower predicted performance from receiving the opportunity. The optimal
solution should still favor the candidate with lower status in this case.
In other words, the society in this model discovers a form of socioeconomic
affirmative action in allocating opportunities, based purely on payoff-
maximizing motives. The model thus offers a view of a system in which reducing
inequality is compatible with direct payoff maximization. In this sense, our
results belong to a genre of analyses (popularized by Page (Page, 2008) and
others) asserting that policies and interventions that we think of as
motivated by equity concerns, can also be motivated by purely performance-
maximizing considerations: even if society only cares about performance, not
equity, it should still (at least in the underlying models) undertake these
policies. In addition to providing a purely utilitarian motivation for
socioeconomic affirmative action, our model provides novel insights regarding
the shape and extent of effective affirmative action policies by specifying
the way in which criteria for receiving the opportunity should be adjusted
based on socioeconomic status to maximize society’s performance across
multiple generations.
We now give a rough overview of the model and results; a complete description
of the model is provided in the following section.
#### A Model for Allocating Opportunities.
We consider a population that is partitioned into two groups of different
socioeconomic status: $D$ (disadvantaged), consisting of a $\phi_{0}$ fraction
of the population, and $A$ (advantaged), consisting of a $\phi_{1}=1-\phi_{0}$
fraction of the population. Each agent $i$ (from either group) has an ability
$a_{i}$ drawn uniformly at random from the interval $[0,1]$.
Society has the ability to offer an opportunity to an $\alpha$ fraction of the
population. Note that the parameter $\alpha$ specifies the inherent limitation
on the amount of opportunities available. Since opportunities are limited, the
society has to wrestle with the question of how to allocate them. An
individual $i$ in group $D$ who is offered the opportunity has a probability
$\sigma a_{i}$ of succeeding at it, for a parameter $0<\sigma<1$. An
individual $i$ in group $A$ who is offered the opportunity has a probability
$\sigma a_{i}+\tau$ of succeeding at it, for the same $\sigma$ and an
additional parameter $0<\tau\leq 1-\sigma$ reflecting the advantage. We will
refer to the above quantities as the success probabilities of the agents.
Success probabilities reflect various levels of performance when agents are
offered the opportunity.
Anyone in group $D$ who is offered the opportunity and succeeds at it moves up
to group $A$. Each individual is then replaced by one offspring of the same
socioeconomic status and the process continues to the next generation. In the
general form of the model, there is also some probability that an individual’s
offspring does not perfectly inherit their socioeconomic status. The payoff to
society is the number of individuals who succeed at the opportunity summed
over all generations, with the generation $t$ steps into future multiplied by
$\gamma^{t}$ for a discount factor $0<\gamma<1$.
#### Summary of Results.
In any given generation, society’s policy will consist of a threshold for
group $D$ and a (possibly different) threshold for group $A$: the opportunity
is given to every individual whose success probability is above the threshold
for their group. The optimal policy is given by a dynamic program over the
continuous set of all possible choices for the population composition
$(\phi_{0},\phi_{1})$ as state variables. We solve the most basic version of
the model analytically. We computationally solve more complex versions of the
model by discretizing the state space, then applying standard dynamic
programming solutions for finite decision processes.
If the problem of allocating the opportunity only spanned a single generation,
then the payoff-maximizing policy would use the same threshold for both
groups. But given the discounted sum over multiple generations, we find that
society’s optimal policy can, in general, use a lower threshold for group $D$
than for group $A$. The difference in thresholds is a form of socioeconomic
affirmative action, and it arises due to the intuition discussed above:
boosting the number of individuals from group $D$ who receive the opportunity
will increase the number of available candidates from group $A$ in future
generations, each of whom provides a (discounted) payoff in future generations
via their enhanced performance. Finding the correct trade-off in allocating
opportunity thus involves a delicate balance between immediate and future
utility.
Figure 1: Visualizing the triples $(\alpha,\tau,\gamma)$ for which the
optimal policy uses persistent affirmative action. (We set $\sigma=1-\tau$
for these plots.) Points that lie above the surfaces in panels (a) and (c),
and below the surface in panel (b), correspond to parameter values yielding
persistent affirmative action. (a) When $1-\frac{\alpha\sigma}{\tau}<0$,
any $\gamma>0$ suffices for persistent affirmative action and eventually
moving the entire population to group $A$. When
$1-\frac{\alpha\sigma}{\tau}>0$, there exists some $\gamma<1$ (and hence a
finite level of patience $\frac{\gamma}{1-\gamma}$) that suffices
for persistent affirmative action. (b) When $\tau$ is sufficiently large,
the optimal policy does not use persistent affirmative action; this is because
for a large $\tau$, the extent of affirmative action required to pick up the
best performing members of $D$ is large—which in turn significantly reduces
the immediate payoff. For any given value of $\alpha$, there exists a
sufficiently small $\tau$ that guarantees persistent affirmative action.
(c) When $\alpha$ is small relative to $\tau$, the optimal policy does not
use persistent affirmative action; this is because the cost of picking the
best performing members of $D$ is very high and a small $A$ group suffices for
filling the available opportunities. Note that for some values of $\tau$, no
matter how large $\alpha$ is, the optimal policy never employs persistent
affirmative action.
Whether socioeconomic affirmative action is employed by the optimal solution —
and the extent to which it is employed — depends on the fraction $\phi_{0}$ of
individuals from group $D$; in the most basic model, the amount of affirmative
action decreases monotonically as $\phi_{0}$ is reduced. The extent of
affirmative action is also determined by the amount of opportunity available
($\alpha$), the dependence of success on ability and socioeconomic status
($\sigma$ and $\tau$), and society’s patience in trading off immediate payoff
in return for payoff from future generations ($\gamma$). We characterize the
optimal solution in this respect as a function of these parameters, finding
that for some regions of the parameter space, the society employs temporary
affirmative action, reducing the size of group $D$ to a given level before
equalizing thresholds in subsequent generations; in other parts of the
parameter space, the society employs persistent affirmative action, in which
the threshold for group $D$ is strictly lower in every generation and the size
of group $D$ converges to 0 over time.
Figure 1 provides some ways of describing the regions of parameter space in
which the optimal solution uses persistent affirmative action. As the
partitions of the space there make apparent, the interactions among the key
parameters are fairly subtle. First, persistent affirmative action is promoted
by large values of $\alpha$ and small values of $\tau$, since these make it
easier to include high-performing members of group $D$ without a large
difference in thresholds; and it is promoted by larger values of $\gamma$,
indicating greater concern for the payoffs in future generations. One might
have suspected that persistent affirmative action would only be realized in
the optimal solution in the limit as society’s patience (essentially
$\gamma/(1-\gamma)$) goes to infinity; but in fact, a sufficiently large
finite amount of patience is sufficient for the optimal policy to use
persistent affirmative action.
In our model, we include a probabilistic background process by which
individuals can also move between groups $A$ and $D$; this reflects the idea
that there are many mechanisms operating simultaneously for socioeconomic
mobility, and we are studying only one of these mechanisms via the opportunity
under consideration. The most basic version posits a single probability $p$
that each individual independently loses their group membership and re-samples
it from the current distribution of group sizes. We also consider a version of
the model in which this probability of loss of group membership is different
for groups $A$ and $D$; in this case, we are only able to solve the model
computationally, and these computational results reveal interesting non-
monotonicities in the amount of affirmative action employed as a function of
the relative size of group $D$ ($\phi_{0}$).
#### Utilitarianism, Prioritarianism, and the Desert Principle.
Our simple mathematical model allows us to represent and distinguish among
several distinct worldviews toward allocation policies (see, e.g., (Arneson,
2013) for further discussion of these views): (1) a _utilitarian_ view, which
generally favors slightly lower-ability members of $A$ over comparable but
slightly higher-ability members of $D$ in pursuit of maximizing social utility
and productivity (recall that membership in $A$ confers a boost in success
probability); (2) a _prioritarian_ view, which evaluates a policy according to
its impact on the well-being of the worse-off members of society. Our model
can capture the priority view through large discount factors (recall that as
the society’s patience increases, it effectively increases the priority
assigned to the disadvantaged group members), or by adjusting the welfare
function; (3) a _desert-principle_ view, which advocates for allocating
opportunities based on some notion of deservingness. Deservingness in this
view is often defined in terms of the contributions people make to the social
utility. Hence success probability in our model is arguably the closest match
to individual desert. With that definition for desert, desert-based principles
would allocate opportunities myopically in each generation. As our analysis
illustrates, such policies often fail to maximize the social utility in the
long-run.
#### Limitations and Interpretations.
Our model is designed to incorporate the basic points we just mentioned in as
simplified a fashion as possible; as such, it is important to note some of its
key limitations. First, it is intended to model the effect of a single
opportunity, and it treats other forms of mobility probabilistically in the
background. It also assumes that the fundamental parameters
($\alpha,\sigma,\tau,\gamma$) are constant over all generations as well as
over individuals within one generation. It treats an individual’s group
membership ($A$ and $D$) and ability as a complete description of their
performance, rather than including any dependence on the group membership of
the individual’s parent. (That is, an individual in group $A$ performs the
same in the model regardless of whether their parent belonged to group $A$ or
$D$.) All of these would be interesting restrictions to relax in an extension
of the model. Second, much of the past theoretical work on intergenerational
mobility focuses on an issue that we do not consider here: the strategic
considerations faced by parents as they decide how much to consume in the
present generation and how much to pass on to their children. Our interest
instead has been in the optimization problem faced by a social planner in
allocating opportunities, treating the behavior of the agents as fixed and
simple. Here too, it would be interesting to explore models that address these
issues in combination. Finally, because our focus is on intergenerational
mobility in a socioeconomic sense, we do not model discrimination based on
race, ethnicity, or gender, and the role of race- or gender-based affirmative
action in combatting these effects. The model is instead concerned with
_socio-economic_ or _class-based_ (Malamud, 1995; Kahlenberg, 1996)
affirmative action. That said, the ingredients here could be combined with
models of statistical or taste-based discrimination on these attributes to
better understand their interaction (as outlined in Section 5).
The simplicity of our model, however, does allow us to make a correspondingly
fundamental point: that even a purely payoff-maximizing society can discover
affirmative action policies from first principles as it seeks to optimize the
allocation of opportunities over multiple generations. Moreover, the optimal
allocation policy is deeply connected to dynamic programming over the
generations; the society is essentially attempting to “steer” the balance of
group $A$ and group $D$ over time, making sure not to turn things too abruptly
(giving up present benefit) or too gradually (giving up future benefit). This
idea that society is searching for a way to turn optimally toward a better
outcome is not specific to our model; it is an image that has arisen in
qualitative discourse over several centuries. It can be seen in a quote
popularized by Martin Luther King, that “the arc of the moral universe is
long, but it bends toward justice” (Cohen, 2006). Interestingly, the original
form of this quote, by the American minister Theodore Parker in 1853, has an
even more abstractly mathematical flavor: “I do not pretend to understand the
moral universe; the arc is a long one, my eye reaches but little ways. I
cannot calculate the curve and complete the figure by the experience of sight;
I can divine it by conscience. And from what I see I am sure it bends towards
justice” (Parker, 1853). It is a curiously apt image for the way in which our
optimal solutions gradually turn through the state space to reshape the
distribution of socioeconomic groups, and it can be seen as added motivation
for the issues at the heart of the model.
### Related Work
Here, we briefly mention several lines of scholarship that are closely related
to our work. See Appendix A for a more in-depth discussion.
#### Long-term Implications of Fair ML.
Several recent articles study the long-term impact of ML-based decision-making
and fairness interventions on society, including the enforcement of
statistical parity in hiring (Hu and Chen, 2018), and responses by individuals
and populations to an ML-based decision rule (Liu et al., 2018; Mouzannar et
al., 2019; Kannan et al., 2019). Liu et al. (2018), for example, study the
conditions under which the choices of a myopic profit-maximizing institution
(e.g., a bank lending money to individuals) work in the interest of the
disadvantaged group. Dong et al. (2018); Hu et al. (2019); Milli et al. (2019)
address _strategic classification_ where the goal is to design classifiers
robust to strategic manipulation. This body of research focuses on strategic
responses by agents being evaluated; in contrast, we assume idealized
prediction so as to focus on the perspective of a social planner who seeks
optimal allocation of opportunities over time.
#### Intergenerational Income Mobility.
A substantial literature in economics studies how higher inequality results in
lower income mobility across generations (see, e.g., (Becker and Tomes, 1979;
Maoz and Moav, 1999; Corak, 2013)). The precise measurements of inequality and
mobility significantly influence the strength of this effect (see, e.g.,
(Solon, 1992; Piketty, 2000)). Theoretical models of income mobility have
studied utility-maximizing parents deciding how much of their capital to
consume and how much of it to invest in their offspring (Becker and Tomes,
1986; Becker and Tomes, 1979; Loury, 1981; Solon, 1999). We deliberately set
aside parental strategic considerations and focus instead on deriving the
optimal policy that maximizes the discounted payoff over generations.
#### Affirmative Action Policies.
A rich body of work in economics investigates statistical discrimination
(Arrow, 1973) and the role of affirmative action in redressing it (Fang and
Moro, 2011). Outcome-based policies including affirmative action targets have
long been proposed and implemented as temporary remedies to eliminate group-
level inequalities. Race-based affirmative action and socioeconomic
affirmative action can be viewed as distinct categories of interventions
(Kahlenberg, 1996; Carnevale and Rose, 2013), with the former addressing long-
term effects of racial bias and the latter facilitating access for
economically disadvantaged individuals (Reardon et al., 2017). While the
relationship between them is complex and contested in the literature (Kane,
1998; Reardon et al., 2006; Gaertner and Hart, 2013), they can co-exist
without necessarily competing.
#### Affirmative Action in College Admissions and Comparison with (Durlauf,
2008).
Durlauf (2008) provides a model to compare the equality and efficiency of
affirmative action policies with those of meritocratic policies in the context
of admission rules to public universities. In his concluding remarks, Durlauf
poses the key question our work sets out to answer: how do multigenerational
considerations impact the optimal allocation policy? As we address this open
question that he poses, we follow his model in many respects, although we
depart from it in a few key areas. (For a more detailed comparison, see
Section A.4.) Similar to our work, Durlauf provides a condition under which
efficiency and equality considerations are aligned, but he focuses on settings
where a diverse body of students on college campuses improves the human
capital development for all of them. Durlauf assumes the policymaker aims to
maximize the level of human capital among _the next generation_ of adult
citizens. He restricts attention to two generations only, and instead of
solving for the optimal policy, he compares the meritocratic policy with
affirmative action in terms of the average human capital of the next
generation. In contrast, we model the dynamics of human-capital development
across _multiple generations_ and derive the _optimal allocation policy_.
Moreover, groups in our model correspond to _socio-economic tiers_ , whereas
Durlauf defines them in terms of _race_.
In both models, generations are represented as follows: in each time step, a
new member is born into each family/dynasty, and he/she replaces the current
member of the family in the next generation. The _initial human capital_ in
Durlauf’s model corresponds to our notion of _success probability_. _Adult
human capital_ in his model is determined by college attendance, and it
roughly maps to our notion of _success_ (i.e., whether the individual succeeds
if given the opportunity.) In both models, an admission rule maps a student’s
human capital and group membership into a binary outcome indicating whether
the student is given the opportunity. In both models, the admission rule may
vary across time and generations; the level of state expenditures on education
is assumed constant across generations ($\alpha$ is fixed in our model); the
only output of universities is human capital (our objective function is made
up of the percentage of the population who succeed in each generation); and
finally, the initial human capital is accurately measured for every student
(we assume ability and success probability are perfectly observable.)
## 2 A Dynamic Model
### Agents, Circumstances, Abilities, & Generations
We consider a model of a society that consists of a continuum of agents in two
different sets of socioeconomic circumstances — a disadvantaged circumstance
$D$ and an advantaged circumstance $A$. These circumstances are
(probabilistically) inherited from one generation to the next, but we can try
to increase the number of agents in the advantaged circumstance in future
generations by offering opportunities to disadvantaged agents in the current
generation. This comes with a trade-off, however, since a competing option is
to offer these opportunities to advantaged agents in the current generation.
Our goal is to model this trade-off. We say that an agent $i$ has circumstance
$c_{i}=0$ if they are disadvantaged ($i\in D$), and circumstance $c_{i}=1$ if
they are advantaged ($i\in A$). Each agent $i$ also has an ability $a_{i}$,
which is a real number in $[0,1]$.
Time advances in discrete periods, beginning with period $t=0$. We think of
these as generations. Consider an agent $i$ who has circumstance
$c^{\text{init}}_{i}$ at the beginning of time $t$. Depending on whether $i$
receives the opportunity, his/her circumstance may change to
$c_{i}^{\text{post}}$. At the end of time step $t$, $i$ produces a new agent
$i^{\prime}$ in generation $t+1$. This new agent $i^{\prime}$ has an ability
$a_{i^{\prime}}$ drawn uniformly at random from $[0,1]$. With some fixed
probability (specified below) $i^{\prime}$ inherits the ex-post circumstance
of $i$ (so $c_{i^{\prime}}=c^{\text{post}}_{i}$), otherwise, it takes on a
circumstance randomly selected from the background distribution of
circumstances within the population in generation $t$. More specifically, in a
given period $t$, let $\phi_{j}(t)$ denote the fraction of agents who have
circumstance $j$, for $j=0,1$. If $c^{\text{post}}_{i}=0$, then with a fixed
probability $1-p_{D}$, $i^{\prime}$ inherits circumstance $D$, and with fixed
probability $p_{D}$, it receives a circumstance randomly selected from the
background distribution $(\phi_{0}(t),\phi_{1}(t))$. Similarly, If
$c^{\text{post}}_{i}=1$, then with a fixed probability $1-p_{A}$ ,
$i^{\prime}$ inherits circumstance $A$, and with probability $p_{A}$, it
receives a circumstance randomly selected from the background distribution
$(\phi_{0}(t),\phi_{1}(t))$.
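To make this inheritance process concrete, here is a minimal Python sketch of one offspring's circumstance draw; the function name `child_circumstance` and the 0/1 encoding of $D$/$A$ are our own illustrative choices, not part of the model's formal statement.

```python
import random

def child_circumstance(c_post, phi0, p_D, p_A):
    """Sample the circumstance of an offspring, given the parent's
    ex-post circumstance c_post (0 for D, 1 for A) and the background
    distribution (phi0, 1 - phi0)."""
    p_move = p_D if c_post == 0 else p_A
    if random.random() < p_move:
        # Re-sample from the background distribution of circumstances.
        return 0 if random.random() < phi0 else 1
    return c_post  # otherwise, inherit the parent's circumstance
```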
The movement probabilities, $p_{A}$ and $p_{D}$, capture all processes—other
than the opportunity we seek to allocate optimally—through which individuals
can change their circumstance from their parental inheritance. For example, in
the college admissions example, while our model focuses on how admission
decisions can reshape circumstances over generations, there are many other
forces and processes that impact the evolution of circumstances within society
(e.g., number of jobs in the economy, training opportunities outside college,
or pure luck). The movement probabilities summarize and capture all these
alternative upward or downward movement possibilities.
### Opportunities and Payoffs
We consider the problem of performing an intervention in this society, which
consists of offering an opportunity to a subset of the population. We only
have the resources to offer the opportunity to an $\alpha$ fraction of the
population. An agent who is offered the opportunity has some probability of
succeeding at it, as a function of their ability and circumstances that we
specify below. Succeeding at the opportunity confers two benefits on society:
1. (i)
it produces an immediate payoff/reward to the society in the form of
productivity;
2. (ii)
if the agent is disadvantaged, it moves them (and subsequently their future
generations) into the advantaged group.
The central problem to be solved in the model, as we will see below, is how to
balance the immediate gains from (i) against the long-term gains from (ii)
over multiple generations.
In particular, if an agent of ability $a_{i}\in[0,1]$ and circumstance
$c_{i}\in\\{0,1\\}$ is offered the opportunity, their probability of
succeeding at it is $a_{i}\sigma+c_{i}\tau,$ where $\sigma,\tau>0$ and
$\sigma+\tau\leq 1$. Note that since $c_{i}\in\\{0,1\\}$, this simply means
that $\tau$ gets added to the success probability of all agents whose
circumstance is equal to $1$.
Our payoff (or reward) $r(t)$ in period $t$ is simply the fraction of the
population that both receives the opportunity and succeeds at it. Our total
payoff is a discounted sum of payoffs over all periods, with discount factor
$0<\gamma<1$; that is, the total payoff $r$ is equal to
$\sum_{t=0}^{\infty}\gamma^{t}r(t)$. As noted earlier, agents with
circumstance $0$ who receive the opportunity and succeed at it will produce
offspring who (are more likely to) have circumstance $1$; this matters for the
payoff because the total payoff $r=\sum_{t=0}^{\infty}\gamma^{t}r(t)$ depends
on the fraction of agents with each type of circumstance in all time periods.
### Thresholds and Interventions
The way we allocate the opportunity at time $t$ is to set a threshold
$\theta_{j}(t)$ for agents with circumstance $j$, for $j\in\\{0,1\\}$, and to
offer the opportunity to all agents $i$ with circumstance $j$ whose success
probability, $a_{i}\sigma+c_{i}\tau$, is at least $\theta_{j}(t)$. That is,
$a_{i}\sigma+c_{i}\tau\geq\theta_{j}.$
We will sometimes write the threshold $\theta_{j}$ and the population fraction
with each circumstance $\phi_{j}$ without the explicit dependence “$(t)$” when
we are considering a single fixed time period.
Agents of circumstance $0$ make up a $\phi_{0}$ fraction of the population,
and a $1-\frac{\theta_{0}}{\sigma}$ fraction of them receive the opportunity,
for a total fraction of the population equal to
$\phi_{0}\times(1-\frac{\theta_{0}}{\sigma})$. Similarly, agents of
circumstance $1$ make up a $\phi_{1}$ fraction of the population, and a
$1-\frac{\theta_{1}-\tau}{\sigma}$ fraction of them receive the opportunity,
for a total fraction of the population equal to
$\phi_{1}\times(1-\frac{\theta_{1}-\tau}{\sigma})$. The sum of these two
fractions must add up to $\alpha$ — the portion of the population to whom we
can offer the opportunity:
$\forall\,0\leq\theta_{0}\leq\sigma\text{ and }\tau\leq\theta_{1}\leq\sigma+\tau:\quad\phi_{0}\left(1-\frac{\theta_{0}}{\sigma}\right)+\phi_{1}\left(1-\frac{\theta_{1}-\tau}{\sigma}\right)=\alpha\;\Leftrightarrow\;\phi_{0}\theta_{0}+\phi_{1}\theta_{1}=\sigma(1-\alpha)+\phi_{1}\tau.$ (1)
This also shows how our choice of thresholds is a one-dimensional problem in
the single variable $\theta_{0}$ (or equivalent in the single variable
$\theta_{1}$), since after setting one of the two thresholds, the other is
determined by this equation. More precisely, we have that:
$\begin{cases}\theta_{1}=\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}&\text{for }\phi_{0}<1,\\ \theta_{1}\in[\tau,\sigma+\tau]&\text{for }\phi_{0}=1.\end{cases}$ (2)
Note that if $\phi_{0}=1$, $\theta_{1}$ can take on any value in $[\tau,\sigma+\tau]$; the threshold will not affect how opportunities are allocated. The same holds for $\phi_{0}=0$ and any $\theta_{0}\in[0,\sigma]$.
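To make the coupling of Equation (2) concrete, the following is a minimal Python sketch; the function name `theta1_from_theta0` and the convention of returning $\tau$ when $\phi_{0}=1$ are our own illustrative choices.

```python
def theta1_from_theta0(theta0, phi0, alpha, sigma, tau):
    """Group-A threshold theta1 induced by the group-D threshold theta0
    so that exactly an alpha fraction of the population receives the
    opportunity (Equation (2))."""
    if phi0 == 1.0:
        # No advantaged agents: any theta1 in [tau, sigma + tau] works,
        # since the threshold affects nobody; we return tau arbitrarily.
        return tau
    return (sigma * (1 - alpha) + (1 - phi0) * tau - phi0 * theta0) / (1 - phi0)
```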
### Dynamics in a Single Period
Recall that our payoff $r(t)$ in period $t$ is simply the fraction of the
population that both receives the opportunity and succeeds at it. We can
decompose this as follows.
* •
Agents of circumstance $0$ make up a $\phi_{0}$ fraction of the population,
and a $1-\frac{\theta_{0}}{\sigma}$ fraction of them receive the opportunity,
for a total fraction of the population equal to
$\phi_{0}\left(1-\frac{\theta_{0}}{\sigma}\right)$. Not all of these agents
succeed at the opportunity; the average success probability in this group is
$\frac{1}{2}(\sigma+\theta_{0})$, so the expected quantity that succeeds is
$\phi_{0}\left(1-\frac{\theta_{0}}{\sigma}\right)\frac{(\sigma+\theta_{0})}{2}=\frac{\phi_{0}}{2\sigma}\left(\sigma^{2}-\theta_{0}^{2}\right).$
* •
Agents of circumstance $1$ make up a $\phi_{1}$ fraction of the population,
and a $1-\frac{\theta_{1}-\tau}{\sigma}$ fraction of them receive the
opportunity, for a total fraction of the population equal to
$\phi_{1}\left(1-\frac{\theta_{1}-\tau}{\sigma}\right)$. Again, not all of
these agents succeed at the opportunity; the average success probability in
this group is $\frac{1}{2}(\sigma+\tau+\theta_{1})$, so the expected quantity
that succeeds is
$\phi_{1}\left(1-\frac{\theta_{1}-\tau}{\sigma}\right)\frac{(\sigma+\tau+\theta_{1})}{2}=\frac{\phi_{1}}{2\sigma}\left((\sigma+\tau)^{2}-\theta_{1}^{2}\right).$
The total payoff in period $t$ is the sum of these two terms:
$r(t)=\frac{\phi_{0}(t)}{2\sigma}\left(\sigma^{2}-\theta_{0}(t)^{2}\right)+\frac{\phi_{1}(t)}{2\sigma}\left((\sigma+\tau)^{2}-\theta_{1}(t)^{2}\right).$
(3)
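As a direct transcription of Equation (3), here is a short sketch of the one-period payoff (the function and variable names are illustrative only):

```python
def period_payoff(theta0, theta1, phi0, sigma, tau):
    """One-period payoff r(t) of Equation (3): the expected fraction of
    the population that receives the opportunity and succeeds at it."""
    phi1 = 1.0 - phi0
    reward_D = (phi0 / (2 * sigma)) * (sigma**2 - theta0**2)
    reward_A = (phi1 / (2 * sigma)) * ((sigma + tau)**2 - theta1**2)
    return reward_D + reward_A
```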
### Dynamics over Multiple Periods
If we were just optimizing the payoff in this single time period, then we’d
have a single-variable optimization problem in the variable $\theta_{0}$ (or
equivalently in $\theta_{1}$), with the objective function given by (3). But
since there is also the set of discounted payoffs in future time periods, we
also need to look at the effect of our decisions on the quantities
$\phi_{0}(.)$ and $\phi_{1}(.)$ in future periods.
If $p_{A}=p_{D}=0$, $\phi_{1}(t+1)$ grows relative to $\phi_{1}(t)$ depending
on the fraction of the population that transitions from circumstance $0$ to
circumstance $1$ by succeeding at the opportunity. Thus we have
$\phi_{0}(t+1)=\phi_{0}(t)-\frac{\phi_{0}(t)}{2\sigma}\left(\sigma^{2}-\theta_{0}(t)^{2}\right).$
(4)
More generally when $p_{A}$ or $p_{D}$ are non-zero, let’s define
$\phi^{\text{post}}_{0}(t)=\phi_{0}(t)-\frac{\phi_{0}(t)}{2\sigma}\left(\sigma^{2}-\theta_{0}(t)^{2}\right)$.
(For simplicity, we drop “$(t)$” and simply use $\phi^{\text{post}}_{0}$ in
the remainder of this section). We have:
$\phi_{0}(t+1)=\phi^{\text{post}}_{0}(1-p_{D})+\phi^{\text{post}}_{0}p_{D}\phi^{\text{post}}_{0}+\left(1-\phi^{\text{post}}_{0}\right)p_{A}\phi^{\text{post}}_{0}.$
(5)
It is easy to see that:
###### Proposition 1
If $p_{A}=p_{D}$, then
$\phi_{0}(t+1)=\phi_{0}(t)-\frac{\phi_{0}(t)}{2\sigma}\left(\sigma^{2}-\theta_{0}(t)^{2}\right)$.
Proof Suppose $p_{A}=p_{D}=p$. Then we can re-write (5) as follows:
$\phi_{0}(t+1)=\phi^{\text{post}}_{0}(1-p)+\phi^{\text{post}}_{0}\,p\,\phi^{\text{post}}_{0}+\left(1-\phi^{\text{post}}_{0}\right)p\,\phi^{\text{post}}_{0}=\phi^{\text{post}}_{0}(1-p)+\phi^{\text{post}}_{0}\,p\left(\phi^{\text{post}}_{0}+\left(1-\phi^{\text{post}}_{0}\right)\right)=\phi^{\text{post}}_{0}(1-p)+\phi^{\text{post}}_{0}\,p=\phi^{\text{post}}_{0}=\phi_{0}(t)-\frac{\phi_{0}(t)}{2\sigma}\left(\sigma^{2}-\theta_{0}(t)^{2}\right),$
where in the last line, we replace $\phi^{\text{post}}_{0}$ with its
definition.
The above proposition shows that with respect to dynamics and optimal policy,
settings in which $p_{A}=p_{D}$ are essentially equivalent to settings in
which $p_{A}=p_{D}=0$.
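The recurrence (5) is equally mechanical to transcribe; a sketch that recovers Proposition 1's simplification when `p_D == p_A` (again, names are illustrative):

```python
def next_phi0(phi0, theta0, sigma, p_D, p_A):
    """Fraction of disadvantaged agents in the next generation,
    per Equations (4) and (5)."""
    # Ex-post fraction after successful group-D recipients move up to A.
    phi0_post = phi0 - (phi0 / (2 * sigma)) * (sigma**2 - theta0**2)
    # Background re-sampling of circumstances (Equation (5)); with
    # p_D == p_A this collapses to phi0_post, as Proposition 1 shows.
    return (phi0_post * (1 - p_D)
            + phi0_post * p_D * phi0_post
            + (1 - phi0_post) * p_A * phi0_post)
```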
In summary, the full problem is to choose thresholds
$\theta_{0}(t),\theta_{1}(t)$ for each time period $t$ so as to maximize the
infinite sum $r=\sum_{t=0}^{\infty}\gamma^{t}r(t)$. Each term $r(t)$ depends
not just on the chosen thresholds but also on the fractions of agents with
each type of circumstance $\phi_{0}(t),\phi_{1}(t)$, which evolve according to
the recurrence in Equation (5). Note that the intuitive trade-off between
$\theta_{0}$ and $\theta_{1}$ shows up in the formulation of (3) and (4):
lowering $\theta_{0}$ and lowering $\theta_{1}$ have different effects, both
in period $t$ and in future time periods.
## 3 Theoretical Analysis for $p_{D}\geq p_{A}$
In this section, we focus on settings in which $p_{D}\geq p_{A}$ and
characterize the optimal policy to maximize the discounted payoff over
generations. (We only provide the analysis for settings of $p_{D}=p_{A}$ but
the extension to $p_{D}>p_{A}$ is straightforward). We cast the problem as
deriving the infinite-time-horizon optimal policy in a continuous state- and
action-space (Markov) decision process. We characterize the optimal threshold
and value function for every state $\phi_{0}\in[0,1]$. Importantly, we show
that there exists a tipping point $\phi^{*}_{0}$ that splits the state space
into two distinct regions: states at which the optimal threshold uses strict
affirmative action, and states at which the optimal policy consists of
imposing equally high thresholds on both $A$ and $D$ groups.
### The Decision Process
Given $\alpha,\sigma,\tau$, and $\gamma$, we define a decision process
$\mathcal{D}_{\alpha,\sigma,\tau,\gamma}=(\Phi,\Theta,S,R)$ (or $\mathcal{D}$
for short) with a continuous state space $\Phi=[0,1]$, action space
$\Theta=[0,\sigma]$, state transition $S:\Phi\times\Theta\rightarrow\Phi$, and
reward function $R:\Phi\times\Theta\rightarrow[0,1]$. Each state
$\phi_{0}\in\Phi$ corresponds to a particular fraction of disadvantaged
individuals within the population. For instance, the states $0$ (or $1$)
represents a society in which no one (or everyone) belongs to $D$.
The set of thresholds admissible in each state $\phi_{0}$ is denoted by
$\Theta_{\phi_{0}}$. $\Theta_{\phi_{0}}$ consists of all thresholds $0\leq\theta_{0}\leq\sigma$ that can satisfy the capacity constraint $\alpha$. In other words, for any $\theta_{0}\in\Theta_{\phi_{0}}$, if we impose the threshold $\theta_{0}$ on group $D$, we can find a threshold $\theta_{1}\in[\tau,\sigma+\tau]$ for group $A$ such that exactly an $\alpha$ fraction of the overall population receives the opportunity. This
capacity constraint translates into two conditions on $\Theta_{\phi_{0}}$:
1. 1.
A threshold $\theta_{0}\in\Theta_{\phi_{0}}$ should not give the opportunity
to _more than_ $\alpha$ fraction of the population. Formally,
$\forall\theta_{0}\in\Theta_{\phi_{0}}:\phi_{0}\left(1-\frac{\theta_{0}}{\sigma}\right)\leq\alpha$,
which is equivalent to:
$\forall\theta_{0}\in\Theta_{\phi_{0}}:\quad\theta_{0}\geq\sigma\left(1-\frac{\alpha}{\phi_{0}}\right).$
(6)
2. 2.
A threshold $\theta_{0}\in\Theta_{\phi_{0}}$ should not waste opportunities,
that is, it should give the opportunity to _at least_ $\alpha$ fraction of the
overall population. Formally,
$\forall\theta_{0}\in\Theta_{\phi_{0}}:\phi_{0}\left(1-\frac{\theta_{0}}{\sigma}\right)+\phi_{1}\geq\alpha.$
Replacing $\phi_{1}$ with $1-\phi_{0}$ and rearranging terms, the above is
equivalent to
$\forall\theta_{0}\in\Theta_{\phi_{0}}:\quad\theta_{0}\leq\frac{\sigma(1-\alpha)}{\phi_{0}}.$
(7)
Figure 2 illustrates the actions satisfying conditions (6) and (7) for every
state $\phi_{0}\in[0,1]$.
Figure 2: The admissible thresholds for every state of the decision process,
$\mathcal{D}$. The x-axis specifies the state $\phi_{0}$, and the y-axis
highlights (in light red) the admissible thresholds, $\Theta_{\phi_{0}}$, at
every state, $\phi_{0}$.
The state transition function, $S:\Phi\times\Theta\rightarrow\Phi$, specifies
the state transitioned to for every (state, admissible threshold) pair.
Formally,
$\forall\phi_{0}\in\Phi,\forall\theta_{0}\in\Theta_{\phi_{0}}:\quad
S(\phi_{0},\theta_{0})=\phi_{0}-\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2}).$
(8)
The reward function, $R:\Phi\times\Theta\rightarrow[0,1]$, is defined as
follows: $R(\phi_{0},\theta_{0})$ denotes the immediate reward/payoff of
imposing threshold $\theta_{0}$ at state $\phi_{0}$. Formally,
$\forall\phi_{0}\in\Phi,\forall\theta_{0}\in\Theta_{\phi_{0}}:\quad
R(\phi_{0},\theta_{0})=\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2})+\frac{\phi_{1}}{2\sigma}\left((\sigma+\tau)^{2}-\theta_{1}^{2}\right).$
Replacing $\phi_{1}$ with $1-\phi_{0}$ and $\theta_{1}$ with the right hand
side of (2), we obtain the following equivalent expression for $R$:
$R(\phi_{0},\theta_{0})=\begin{cases}\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})&\text{for }\phi_{0}=1,\\ \frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2})+\frac{1-\phi_{0}}{2\sigma}\left((\sigma+\tau)^{2}-\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)^{2}\right)&\text{otherwise.}\end{cases}$ (9)
### Characterization of the Optimal Policy
Next, we illustrate and characterize the optimal policy for the decision
process $\mathcal{D}$ defined above. (For further information and references
on continuous-state decision processes, see Appendix A.5.)
A deterministic policy $\pi$ for a decision process $\mathcal{D}$ is a mapping
$\pi:\Phi\rightarrow\Theta$ such that $\pi(\phi_{0})$ prescribes the threshold
at state $\phi_{0}$. The value $V_{\pi}(\phi_{0})$ of a state $\phi_{0}$ under
policy $\pi$ is the discounted reward of executing policy $\pi$ on
$\mathcal{D}$ starting with initial state $\phi_{0}$. A policy $\pi$ is
optimal if its value function $V_{\pi}$ satisfies Bellman Optimality—defined
recursively as follows:
$V_{\pi}(\phi_{0})=\max_{\theta_{0}\in\Theta_{\phi_{0}}}\left[R(\phi_{0},\theta_{0})+\gamma V_{\pi}(S(\phi_{0},\theta_{0}))\right].$ (10)
We establish in Appendix C that for our decision process $\mathcal{D}$, the
value function satisfying the above functional equation is _unique_ ,
_continuous_ , and _differentiable_. (We prove these facts utilizing tools
from recursive analysis and dynamic programming (Stokey, 1989; Cotter and
Park, 2006)). For simplicity, from this point on we refer to this unique
optimal value function as $V(.)$ and drop the subscript $\pi$.
Let the correspondence $\Pi^{*}_{0}:\Phi\rightarrow 2^{\Theta}$ denote all
optimal policies for $\mathcal{D}$. More precisely, for any $0\leq\phi_{0}\leq
1$, the set $\Pi^{*}_{0}(\phi_{0})$ contains all optimal threshold values at
$\phi_{0}$. Figure 3(a) illustrates $\Pi^{*}_{0}$ for a sample setting of the
parameters $\alpha,\sigma,\tau,\gamma$. (See Figure 6 in the Appendix for more
instances.) Figure 7 in the Appendix illustrates the value function $V$. Note
that the value function is consistently decreasing and concave.
(a) The optimal $\theta_{0}$ at every state $0\leq\phi_{0}\leq 1$. The optimal
threshold decreases with $\phi_{0}$. The dashed lines indicate the tipping
points, $\phi^{*}_{0}$, below which $\sigma$ is the only optimal threshold,
and above it $\sigma$ is not optimal.
(b) The difference between $\theta_{0}$ and $\theta_{1}$ at every state
$0\leq\phi_{0}\leq 1$. The dashed lines specify the tipping points. Strict
affirmative action is employed beyond $\phi^{*}_{0}$ only and the extent of
affirmative action is increasing in $\phi_{0}$.
(c) The state to which the optimal policy converges given the initial state
$\phi_{0}$. The dashed lines specify the tipping point, $\phi^{*}_{0}$. Note
that the optimal policy never shrinks the size of group $D$ to a value less
than $\phi^{*}_{0}$.
Figure 3: Illustration of the (a) optimal policy, (b) extent of affirmative
action, and (c) absorbing state.
We say that the optimal policy, $\Pi^{*}_{0}$, uses affirmative action at a
state $\phi_{0}$ if at $\phi_{0}$, it imposes a lower threshold on group $D$
compared to $A$.
###### Definition 1 ((Strict) Affirmative Action)
The optimal policy uses (strict) affirmative action at state $\phi_{0}$ if for
all $\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and all
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$, $\theta_{0}<\theta_{1}$.
If the inequality above is not strict (i.e., $\theta_{0}\leq\theta_{1}$), we
say the optimal policy uses _weak_ affirmative action. Figure 3(b) shows the
extent of affirmative action for a sample setting of the parameters
$\alpha,\sigma,\tau,\gamma$. (See Figure 8 in the Appendix for more
instances).
Our main result (Theorem 1) characterizes the states at which the optimal
policy employs affirmative action. In particular, it establishes the existence
of a tipping point $0\leq\phi^{*}_{0}\leq 1-\alpha$ that designates the region of affirmative action: at any state $\phi_{0}\leq\phi^{*}_{0}$, the optimal policy assigns the same high threshold to both the advantaged and disadvantaged groups. In contrast, at any state $\phi_{0}>\phi^{*}_{0}$ the optimal policy imposes a strictly lower threshold on group $D$ than on group $A$.
###### Theorem 1
Given $\alpha,\sigma,\tau,\gamma$ and the decision process $\mathcal{D}$, let
$\phi^{*}_{0}=\max\left\\{0,\min\left\\{1-\alpha,1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\right\\}\right\\}.$
(11)
* •
For any state $\phi_{0}\leq\phi^{*}_{0}$, there exists
$\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and $\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$
such that $\theta_{0}=\theta_{1}$.
* •
For any state $\phi_{0}>\phi^{*}_{0}$, all $\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and all $\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$ satisfy $\theta_{0}<\theta_{1}$.
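Since the tipping point of Equation (11) is closed-form in the parameters, it can be evaluated directly; a one-function sketch (the name `tipping_point` is ours):

```python
from math import sqrt

def tipping_point(alpha, sigma, tau, gamma):
    """Tipping point phi0* of Theorem 1 (Equation (11)): affirmative
    action is optimal exactly at states phi0 > phi0*.
    Assumes tau > 0 and 0 < gamma < 1."""
    inner = 1 - (alpha * sigma / (2 * tau)) * (1 + sqrt(1 + 2 * tau * gamma / (1 - gamma)))
    return max(0.0, min(1 - alpha, inner))
```

Persistent affirmative action, in the sense discussed above, corresponds to parameter settings for which the inner expression is nonpositive, so the returned value is $0$.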
We prove Theorem 1 by establishing a series of Lemmas. Lemma 1 determines the
largest state $\phi^{*}_{0}\in[0,1]$ below which the optimal policy consists
of applying the high threshold of $\sigma$ to group $D$. Clearly, below such
point, the optimal policy does not use affirmative action. Next, we
investigate states larger than $\phi^{*}_{0}$. For every state
$\phi_{0}>\phi^{*}_{0}$, Lemma 2 identifies the set of thresholds that exhibit
affirmative action. Lemma 3 establishes that the optimal policy uses weak
affirmative action beyond $\phi^{*}_{0}$. That is, it shows that for any state
$\phi_{0}>\phi^{*}_{0}$, it is never optimal to impose a strictly higher
threshold on D compared to A. Proposition 2 shows that beyond $\phi^{*}_{0}$,
the optimal policy in fact uses _strict_ affirmative action. That is, at every
state $\phi_{0}>\phi^{*}_{0}$, the optimal policy imposes a strictly lower
threshold on D compared to A.
Lemma 1 determines the state $\phi^{*}_{0}\in[0,1]$ up to which $\sigma$ is an
optimal threshold. Note that if $\sigma$ is an optimal threshold at a state
$\phi_{0}$, the optimal policy does not use affirmative action at $\phi_{0}$.
To see this, note that when $\sigma$ is optimal, any action
$\theta_{0}>\sigma$ is also optimal (they all pick a 0-fraction of group $D$
and are, therefore, effectively equivalent). (All omitted proofs can be found
in Appendix C.1.)
###### Lemma 1 (The Tipping Point)
For any state $\phi_{0}<\phi^{*}_{0}$, $\sigma\in\Pi^{*}_{0}(\phi_{0})$.
The following Lemma characterizes the region of affirmative action in the
state-action space.
###### Lemma 2 (Region of Affirmative Action)
For a state $\phi_{0}^{*}<\phi_{0}<1$, the threshold
$\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ uses affirmative action if and only if
$\theta_{0}\leq\sigma(1-\alpha)+\tau(1-\phi_{0})$.
Proof Let $\theta$ be the threshold that, if applied to both $D$ and $A$ at $\phi_{0}$, exactly exhausts the capacity $\alpha$. We have that
$\phi_{0}\left(1-\frac{\theta}{\sigma}\right)+(1-\phi_{0})\left(1-\frac{\theta-\tau}{\sigma}\right)=\alpha$
or equivalently,
$\theta=\sigma(1-\alpha)+\tau(1-\phi_{0}).$
Note that $\theta_{0}<\theta$ if and only if $\theta_{1}>\theta$—otherwise the
capacity constraints would not be maintained. Therefore, for any
$\theta_{0}<\theta$, $\theta_{1}>\theta_{0}$, which implies affirmative
action.
The following Lemma shows that beyond $\phi^{*}_{0}$, the optimal threshold
for the disadvantaged is never higher than that for the advantaged.
###### Lemma 3 (Weak Affirmative Action)
Consider a state $\phi_{0}>\phi^{*}_{0}$. For all
$\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and all
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$, $\theta_{0}\leq\theta_{1}$.
Proof Suppose not, and there exist $\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$ such that $\theta_{0}>\theta_{1}$. If we
lower $\theta_{0}$ down to $\sigma(1-\alpha)+\tau(1-\phi_{0})$ and increase
$\theta_{1}$ up to $\sigma(1-\alpha)+\tau(1-\phi_{0})$, we maintain the
capacity constraints and at the same time achieve the following:
* (a)
We improve the immediate reward of the current time step— because we replace
advantaged agents with low success probabilities in the range of
$[\theta_{1},\sigma(1-\alpha)+\tau(1-\phi_{0})]$ with disadvantaged agents
with higher success probabilities in
$[\sigma(1-\alpha)+\tau(1-\phi_{0}),\theta_{0}]$.
* (b)
We move to a state with a relatively smaller size of group $D$ (simply because
we gave the opportunity to more disadvantaged agents). The value function is
strictly decreasing in $\phi_{0}$. Therefore, this new next state has a higher
value compared to the previous one.
The fact that we can improve the value contradicts the optimality of
$\theta_{0},\theta_{1}$. Therefore, $\theta_{0}>\theta_{1}$ cannot hold.
The following Proposition establishes that beyond $\phi^{*}_{0}$, the optimal
policy uses strict affirmative action. (The proofs establishing the
proposition can be found in Appendix C.1.)
###### Proposition 2 (Strict Affirmative Action)
Consider a state $\phi_{0}>\phi^{*}_{0}$, and let $V^{\prime}(\phi_{0})$
denote the derivative of the value function $V$ evaluated at $\phi_{0}$. If
$V^{\prime}(\phi_{0})<0$, then for all $\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$
and all $\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$, $\theta_{0}<\theta_{1}$.
#### Insights from the Analysis.
We end this section by making several observations about the optimal policy:
First, note that the optimal policy never shrinks the size of $D$ to a value
less than $\phi^{*}_{0}$. For every initial state $\phi_{0}\in[0,1]$, Figure 9
shows the state one converges to when the optimal policy is simulated for
$1000$ steps on $\mathcal{D}$. In other words, affirmative action is optimal
from a utilitarian point of view as long as group $D$ is sufficiently large.
Second, the precise derivation of $\phi_{0}^{*}$, as specified in (11), allows
us to gain new insights into how the interaction between the parameters of our
model can give rise to or avert affirmative action. Figure 1 depicts the
status of persistent affirmative action (i.e., $\phi_{0}^{*}\leq 0$) for
$\alpha,\tau,\gamma$ in settings where $\sigma=1-\tau$. (The derivation behind
the plots can be found in Appendix C.2.) Notice that persistent affirmative
action is promoted by large values of $\alpha$ and small values of $\tau$,
since these make it easier to include high-performing members of group $D$
without a large difference in thresholds. Persistent affirmative action is
also promoted by larger values of $\gamma$, indicating greater concern for the
rewards in future generations. Note, however, that a finite level of patience
often suffices for persistent affirmative action to be optimal. Finally, a
frequent objection to affirmative action policies is their potential for
reverse discrimination (see, e.g., Hopwood v. Texas, 78 F.3d 932 (5th Cir.
1996)). Translating these concerns into the terminology of our model, one may
object that “if the highest-ability member of $D$ has ability below the
lowest-ability member of $A$, then _any_ affirmative action in this scenario
will violate the desert principle”. Note, however, that in our model,
abilities for both $A$ and $D$ group members are uniform in the $[0,1]$
interval. So the optimal policy will never favor a lower-ability member of $D$
over a higher-ability member of $A$.
## 4 Computational Analysis for $p_{D}<p_{A}$
The focus of our theoretical analysis was on the settings in which $p_{A}\leq
p_{D}$. Next, we computationally investigate the optimal policy for cases
where $p_{A}>p_{D}$.
Recall that when $p_{A}$ and $p_{D}$ are non-zero, the circumstance of an
offspring is not deterministically specified by that of their parent. Instead,
$p_{A}$ and $p_{D}$ specify the offspring’s probability of spontaneous
movement to the background distribution of circumstances in society, that is,
$(\phi_{0},\phi_{1})$. When $p_{A}\leq p_{D}$, both the spontaneous movement
dynamics and the allocation of opportunities work toward reducing the relative
size of group $D$ over time. This fact allowed us to utilize backward
induction to characterize the optimal policy theoretically. When
$p_{A}>p_{D}$, however, the spontaneous movements work in the opposite
direction of opportunity allocation: with no opportunity allocated, the
spontaneous movement dynamics gradually shift the entire population to group
$D$. In such settings, the role of the allocation policy consists of
combatting the natural flow of the population to $D$.
Although in settings with $p_{A}>p_{D}$, it is significantly more challenging
to characterize the optimal allocation policy, we can still approximately
compute this policy via discretization followed by solution methods for finite
decision problems. More specifically, we discretize the state- and action-
space of the decision process, $\mathcal{D}$, then compute the optimal policy
of the resulting finite decision process using methods such as policy
iteration. In what follows, we report the result of our simulations when the
state- and action-space are approximated by 1000 equidistant states and
actions (i.e., $\tilde{\Phi}=(0,0.001,0.002,\cdots,1)$ and
$\tilde{\Theta}=\sigma(0,0.001,0.002,\cdots,1)$) and the reward and transition
functions are approximated accordingly.
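The following Python sketch illustrates this discretize-then-solve recipe. It is a simplified illustration rather than the exact setup behind our figures: it uses the baseline reward and transition maps that appear in the Appendix C derivations (the $p_{A},p_{D}$ spontaneous-movement terms would enter through the transition map), a coarser grid, and, for brevity, it does not impose the per-state budget constraints on admissible thresholds.

```python
import numpy as np

# Illustrative parameters (not the values used in the figures).
alpha, sigma, tau, gamma = 0.25, 0.5, 0.5, 0.9

n = 501                                    # grid resolution (the text uses ~1000)
phi = np.linspace(0.0, 1.0, n)             # discretized states
theta = sigma * np.linspace(0.0, 1.0, n)   # discretized actions

def S(p, t):
    # Baseline transition map: next state after applying threshold t at state p.
    return p - p / (2 * sigma) * (sigma**2 - t**2)

def R(p, t):
    # Reward (9): fraction of opportunity recipients who succeed.
    # At p = 1 we use the limit value R(1, t) = (sigma^2 - t^2)/(2 sigma) (Property 2).
    base = p / (2 * sigma) * (sigma**2 - t**2)
    with np.errstate(divide="ignore", invalid="ignore"):
        q = (sigma * (1 - alpha) + (1 - p) * tau - p * t) / (1 - p)
        tail = (1 - p) / (2 * sigma) * ((sigma + tau)**2 - q**2)
    return np.where(p < 1.0, base + tail, base)

P, T = np.meshgrid(phi, theta, indexing="ij")  # all (state, action) pairs
rew = R(P, T)
nxt = np.clip(np.rint(S(P, T) * (n - 1)), 0, n - 1).astype(int)  # snap to grid

# Policy iteration on the resulting finite decision process.
pi = np.full(n, n - 1)  # start from theta_0 = sigma (status quo) everywhere
for _ in range(100):
    # Policy evaluation: solve V = R_pi + gamma * V[next_pi] exactly
    # (transitions are deterministic, so P_pi is a 0/1 matrix).
    Ppi = np.zeros((n, n))
    Ppi[np.arange(n), nxt[np.arange(n), pi]] = 1.0
    V = np.linalg.solve(np.eye(n) - gamma * Ppi, rew[np.arange(n), pi])
    # Policy improvement: act greedily with respect to V.
    new_pi = np.argmax(rew + gamma * V[nxt], axis=1)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi

print("optimal threshold at phi_0 = 0.5:", theta[pi[n // 2]])
```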
Figure 4: The optimal policy $\theta_{0}$ at every state $0\leq\phi_{0}\leq 1$
for various $p_{A}$ values.
Figure 5: The absorbing state for every initial state $0\leq\phi_{0}\leq 1$.
The optimal policy is non-monotone and exhibits sharp drops when better
absorbing states become cost-effective.
Figure 4 shows how the optimal policy varies with $p_{A}$ for a fixed value of
$p_{D}$. Compared to the optimal policy for settings of $p_{A}\leq p_{D}$, we
observe two significant differences here:
* •
The optimal policy can be non-monotone in $\phi_{0}$.
* •
There exists a sharp drop in the optimal threshold at some intermediate state
$\phi_{0}\in[0,1]$.
Next, we provide some justifications for both the intuitive and potentially
counter-intuitive aspects of these phenomena.
For every initial state $\phi_{0}\in[0,1]$, Figure 5 illustrates the state one
converges to (i.e., the absorbing state) if the optimal policy is simulated
for 1000 steps. Note that in all cases, the optimal policy successfully
reduces the absorbing state to a more desirable value less than 1 (recall that
with no intervention, we always converge to $\phi_{0}=1$ through the
spontaneous movements).
As illustrated in Figure 5, the sharp drop in the optimal threshold coincides
with an abrupt change in the absorbing state. When the initial state is close
to 1, the allocation policy must employ extensive affirmative action to reduce
and maintain a better absorbing state. This feat, however, comes at the cost
of immediate reward. The optimal policy, therefore, settles for reaching and
maintaining a slightly more desirable absorbing state (a value between 0.6 and
0.8 in Figure 5). For smaller initial states, the cost of employing extensive
affirmative action goes down, and at some point, it becomes viable to reach
and maintain a much better absorbing state (a value between 0 and 0.3 in
Figure 5). The sharp drop in the optimal threshold happens precisely at the state
where the benefit of extensive affirmative action outweighs its cost.
In summary, when $p_{A}>p_{D}$, we need persistent (albeit non-monotone)
affirmative action to reach and maintain more desirable absorbing states. The
optimal extent of affirmative action crucially depends on the initial state of
the population: If the disadvantaged group is large to begin with, the optimal
policy has to forgo extensive affirmative action to maintain sufficient short-
term return. If the advantaged group initially has a sufficient mass,
extensive affirmative action becomes optimal, and in the long-run, manages to
significantly increase the fraction of advantaged individuals in the
population. These findings are in sharp contrast with settings of $p_{A}\leq
p_{D}$. In those settings, affirmative action is optimal under the much more
straightforward condition that the disadvantaged group exceeds a
$\phi_{0}^{*}$ fraction of the population, for a constant $\phi_{0}^{*}$ that
we can specify precisely, and it ceases to be optimal as soon as society gains
at least $1-\phi_{0}^{*}$ mass in the advantaged group.
## 5 Discussion and Future Directions
In this paper, we developed and analyzed a model for allocating opportunities
in a society with limited intergenerational mobility. These opportunities
produce benefits in the present generation, but they can also raise the
socioeconomic status of the recipients and their descendants. This creates a
trade-off: whether to maximize the current value achievable from the
opportunities or to increase the value achievable in future generations. We
have shown how to resolve this trade-off by solving a continuous optimization
problem over multiple generations, and we have seen how optimal solutions to
this problem can exhibit a form of socioeconomic affirmative action, favoring
individuals of low socioeconomic status over slightly higher-performing
individuals of high socioeconomic status. Characterizing the conditions under
which this type of affirmative action occurs in the model provides insights
into the interaction between the amount of opportunity available, the
magnitude of the gap between different socioeconomic classes, and the
“patience” of the society in evaluating the benefits achievable from future
generations.
#### Insights and Implications.
Our work provides a purely utilitarian account of a society (with no intrinsic
interest in reducing inequality) which nonetheless chooses to employ
affirmative action along socioeconomic lines. In that sense, our work responds
to concerns around affirmative action hurting social utility. In addition, our
analysis presents new insights on the shape and extent of effective
socioeconomic affirmative action policies across multiple generations. Our
findings offer several important insights to the Fair-ML literature: (a) If
temporal dynamics are taken into account, there are important cases where
fairness interventions can be fully aligned with performance-maximizing goals.
(b) Effective fairness interventions should often adapt to population changes
over time. (c) For a comprehensive assessment of fairness for an allocation
policy/algorithm, we may need to look beyond its immediate impact on its
direct subjects. Our work, for instance, characterizes optimal allocations
when decisions made for individuals today impact their future generations.
#### Toward a Broader Class of Models Leading to Affirmative Action.
Our work’s main focus was on _intergenerational dynamics_ and their impact on
the effectiveness of _socioeconomic affirmative action policies_. But we hope
that our work also serves as a stepping stone toward a broader class of models
characterizing conditions under which affirmative action comes out of optimal
policy derivations. In particular, a generalized version of our decision
process can capture _race-dependent_ dynamics of movement between various
socioeconomic levels. Race-dependent dynamics are motivated by the observation
that advantaged Black people are more likely to be downwardly mobile than
their advantaged white counterparts (Chetty et al., 2020). A more general
version of our decision process would consist of race-dependent $\sigma$’s and
$\tau$’s. Analyzing the resulting dynamics can shed light on the tradeoffs
between race/ethnicity-based and socioeconomic affirmative action policies. We
leave this analysis as a crucial direction for future work.
There are a number of interesting further research directions suggested by the
results in the paper, and several of them address the limits of the model
noted in the introduction. In particular, it would be interesting to consider
extensions of the model in which basic parameters of the society varied over
time rather than being fixed and could only be estimated with delayed feedback
(e.g., after several years); the ability distributions were non-uniform; the
dynamics would amplify the interaction between ability and socioeconomic
advantage; or there were multiple opportunities being allocated concurrently.
It would be interesting to incorporate strategic considerations as in other
work on intergenerational mobility. And finally, there is a natural
opportunity to try integrating the models here with work on the nature of
statistical discrimination and the design of interventions to alleviate it.
#### On the Scope and Limitations of Mathematical Models.
Our work addresses one potential role that affirmative action policies can
play in shaping future generations and mitigating socioeconomic inequality. Our work
follows a long tradition in the mathematical social sciences, where a stylized
model is proposed to capture certain aspects of a complex societal issue; the
model is then rigorously analyzed with the hope that the formal analysis
provides new insights about alternative policy choices and their
counterfactual effects. These insights can in turn inform policymakers at a
qualitative level. We conclude this article by acknowledging that all
mathematical models are by definition highly simplified representations of the
phenomena at hand, and as such it is important to understand and interpret
them keeping their limitations and scope of applicability in mind. We have
grounded our work in a broad literature from economics so as to draw on the
insights that earlier modelers have brought to this setting. But we emphasize
that theoretical models and their implications should never be taken as exact
representations of the way complex societal processes operate and evolve. As
such, it is important to not draw policy interpretations on the basis of such
models alone.
## Acknowledgments
This work was supported in part by a Simons Investigator Award, a Vannevar
Bush Faculty Fellowship, a MURI grant, AFOSR grant FA9550-19-1-0183, and
grants from the ARO and the MacArthur Foundation. We are grateful to Lawrence
E. Blume, Alexandra Chouldechova, Zachary C. Lipton, Rediet Abebe, Manish
Raghavan, Kate Donahue, the AI, Policy, and Practice (AIPP) group at Cornell,
and the FEAT reading group at Carnegie Mellon for invaluable discussions.
Finally, we would like to thank the anonymous reviewers of our work for their
insightful and constructive feedback.
## References
* Arneson [2013] Richard Arneson. Egalitarianism. In Edward N. Zalta, editor, The Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, summer 2013 edition, 2013.
* Arrow [1973] Kenneth Arrow. The theory of discrimination. Discrimination in labor markets, 3(10):3–33, 1973.
* Becker and Tomes [1979] Gary S. Becker and Nigel Tomes. An equilibrium theory of the distribution of income and intergenerational mobility. Journal of Political Economy, 87(6):1153–1189, 1979.
* Becker and Tomes [1986] Gary S. Becker and Nigel Tomes. Human capital and the rise and fall of families. Journal of labor economics, 4(3, Part 2):S1–S39, 1986.
* Black and Devereux [2010] Sandra E. Black and Paul J. Devereux. Recent developments in intergenerational mobility. Technical report, National Bureau of Economic Research, 2010.
* Carnevale and Rose [2013] Anthony P. Carnevale and Stephen Rose. Socioeconomic status, race/ethnicity, and selective college admissions. 2013.
* Chetty and Hendren [2018] Raj Chetty and Nathaniel Hendren. The impacts of neighborhoods on intergenerational mobility I: Childhood exposure effects. The Quarterly Journal of Economics, 133(3):1107–1162, 2018.
* Chetty et al. [2014] Raj Chetty, Nathaniel Hendren, Patrick Kline, Emmanuel Saez, and Nicholas Turner. Is the United States still a land of opportunity? Recent trends in intergenerational mobility. American Economic Review, 104(5):141–147, 2014.
* Chetty et al. [2020] Raj Chetty, Nathaniel Hendren, Maggie R. Jones, and Sonya R. Porter. Race and economic opportunity in the United States: An intergenerational perspective. The Quarterly Journal of Economics, 135(2):711–783, 2020.
* Chung [2000] Kim-Sau Chung. Role models and arguments for affirmative action. American Economic Review, 90(3):640–648, 2000.
* Coate and Loury [1993] Stephen Coate and Glenn C. Loury. Will affirmative-action policies eliminate negative stereotypes? The American Economic Review, pages 1220–1240, 1993.
* Cohen [2006] Joshua Cohen. The arc of the moral universe. Philosophy and Public Affairs, pages 91–134, 2006.
* Corak [2013] Miles Corak. Income inequality, equality of opportunity, and intergenerational mobility. Journal of Economic Perspectives, 27(3):79–102, 2013.
* Cotter and Park [2006] Kevin D. Cotter and Jee-Hyeong Park. Non-concave dynamic programming. Economics Letters, 90(1):141–146, 2006.
* Dong et al. [2018] Jinshuo Dong, Aaron Roth, Zachary Schutzman, Bo Waggoner, and Zhiwei Steven Wu. Strategic classification from revealed preferences. In Proceedings of the 2018 ACM Conference on Economics and Computation, pages 55–70. ACM, 2018.
* Durlauf [2008] Steven N. Durlauf. Affirmative action, meritocracy, and efficiency. Politics, Philosophy & Economics, 7(2):131–158, 2008.
* Fang and Moro [2011] Hanming Fang and Andrea Moro. Theories of statistical discrimination and affirmative action: A survey. In Handbook of social economics, volume 1, pages 133–200. Elsevier, 2011.
* Forde-Mazrui [2004] Kim Forde-Mazrui. Taking conservatives seriously: A moral justification for affirmative action and reparations. California Law Review, 92:683, 2004.
* Gaertner and Hart [2013] Matthew N. Gaertner and Melissa Hart. Considering class: College access and diversity. Harvard Law & Policy Review, 7:367, 2013.
* Hauskrecht and Kveton [2004] Milos Hauskrecht and Branislav Kveton. Linear program approximations for factored continuous-state Markov decision processes. In Advances in Neural Information Processing Systems, pages 895–902, 2004.
* Hu and Chen [2018] Lily Hu and Yiling Chen. A short-term intervention for long-term fairness in the labor market. In Proceedings of the 2018 World Wide Web Conference, pages 1389–1398, 2018.
* Hu et al. [2019] Lily Hu, Nicole Immorlica, and Jennifer Wortman Vaughan. The disparate effects of strategic manipulation. In Proceedings of the 2nd ACM Conference on Fairness, Accountability, and Transparency, 2019.
* Kahlenberg [1996] Richard D. Kahlenberg. Class-based affirmative action. California Law Review, 84:1037, 1996.
* Kane [1998] Thomas J. Kane. Racial and ethnic preferences in college admissions. Ohio State Law Journal, 59:971, 1998.
* Kannan et al. [2019] Sampath Kannan, Aaron Roth, and Juba Ziani. Downstream effects of affirmative action. In Proceedings of the 2nd ACM Conference on Fairness, Accountability, and Transparency, 2019.
* Lee and Solon [2009] Chul-In Lee and Gary Solon. Trends in intergenerational income mobility. The Review of Economics and Statistics, 91(4):766–772, 2009.
* Li and Littman [2005] Lihong Li and Michael L. Littman. Lazy approximation for solving continuous finite-horizon MDPs. In AAAI, volume 5, pages 1175–1180, 2005.
* Liu et al. [2018] Lydia T. Liu, Sarah Dean, Esther Rolf, Max Simchowitz, and Moritz Hardt. Delayed impact of fair machine learning. In Proceedings of the 35th International Conference on Machine Learning, 2018.
* Loury [1981] Glenn C. Loury. Intergenerational transfers and the distribution of earnings. Econometrica: Journal of the Econometric Society, pages 843–867, 1981.
* Malamud [1995] Deborah C. Malamud. Class-based affirmative action: Lessons and caveats. Texas Law Review, 74:1847, 1995.
* Maoz and Moav [1999] Yishay D. Maoz and Omer Moav. Intergenerational mobility and the process of development. The Economic Journal, 109(458):677–697, 1999.
* Marecki et al. [2006] Janusz Marecki, Zvi Topol, Milind Tambe, et al. A fast analytical algorithm for MDPs with continuous state spaces. In AAMAS-06 Proceedings of 8th Workshop on Game Theoretic and Decision Theoretic Agents, 2006.
* Milli et al. [2019] Smitha Milli, John Miller, Anca D Dragan, and Moritz Hardt. The social cost of strategic classification. In Proceedings of the 2nd ACM Conference on Fairness, Accountability, and Transparency, 2019.
* Mouzannar et al. [2019] Hussein Mouzannar, Mesrob I Ohannessian, and Nathan Srebro. From fair decision making to social equality. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 359–368. ACM, 2019.
* Page [2008] Scott E. Page. The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies. Princeton University Press, 2008.
* Parker [1853] Theodore Parker. Ten Sermons of Religion. Charles Francis and Company, 1853.
* Piketty [2000] Thomas Piketty. Theories of persistent inequality and intergenerational mobility. Handbook of income distribution, 1:429–476, 2000.
* Reardon et al. [2006] Sean F. Reardon, John T. Yun, and Michal Kurlaender. Implications of income-based school assignment policies for racial school segregation. Educational Evaluation and Policy Analysis, 28(1):49–75, 2006.
* Reardon et al. [2017] Sean F. Reardon, Rachel Baker, Matt Kasman, Daniel Klasik, and Joseph B. Townsend. Can socioeconomic status substitute for race in affirmative action college admissions policies? Evidence from a simulation model. 2017.
* Solon [1992] Gary Solon. Intergenerational income mobility in the United States. The American Economic Review, pages 393–408, 1992.
* Solon [1999] Gary Solon. Intergenerational mobility in the labor market. In Handbook of labor economics, volume 3, pages 1761–1800. Elsevier, 1999.
* Stokey [1989] Nancy L. Stokey. Recursive methods in economic dynamics. Harvard University Press, 1989.
* Torche [2011] Florencia Torche. Is a college degree still the great equalizer? Intergenerational mobility across levels of schooling in the United States. American Journal of Sociology, 117(3):763–807, 2011.
## Appendix A Expanded Related Work
### A.1 Long-term Implications of Fair-ML
Several recent papers study the long-term impact of decision-making models and
fairness interventions on society and individuals. Liu et al. [2018] and
Kannan et al. [2019] study how a utility-maximizing _decision-maker_ may
respond to the predictions made by the model. For instance, the decision-maker
may interpret and use the predictions in a certain way, or update the model
entirely. Dong et al. [2018]; Hu et al. [2019]; Milli et al. [2019] address
_strategic classification_ —a setting in which decision subjects are assumed
to respond _strategically_ and potentially _untruthfully_ to the choice of the
classification model, and the goal is to design classifiers that are robust to
strategic manipulation.
[Hu and Chen, 2018] and [Mouzannar et al., 2019] take inspiration from
existing models of statistical discrimination and affirmative action. Hu and
Chen [2018] study the impact of enforcing statistical parity on hiring
decisions made in a temporary labor market that precedes a permanent labor
market. They show that under certain conditions, statistical parity can result
in an equilibrium that Pareto-dominates the one that would emerge in an
unconstrained labor market.
Mouzannar et al. [2019] model the dynamics of how a population reacts to a
selection rule by changing their qualifications. The qualification of an
individual is assumed to be estimated via some classification function, and
this estimate is assumed to coincide with the individual’s true
qualification/label. They model a selection rule by its selection rates across the
qualified and unqualified members of two socially salient groups. In other
words, the rule is modeled by four non-negative numbers. They assume that
there exist continuously differentiable functions $f_{1}$ and $f_{2}$ that map
the selection rates in each group to the percentage of qualified individuals
in that group in the next round. These functions model how qualifications
change over time in response to the selection rule. Within this model,
Mouzannar et al. [2019] study two types of myopic utility-maximizing policies:
an affirmative action policy which forces the selection rates to be equal
across the two groups, and an unconstrained policy that simply picks the
qualified in each group. They then provide several conditions under which each
of these policies leads to social equality (i.e., groups that are
indistinguishable in terms of their qualifications). As for affirmative action
policies, under-acceptance of qualified individuals in one group (to guarantee
equal selection rates) is shown to be inefficient (both in terms of the
decision-maker’s utility and average qualifications). Over-acceptance of the
unqualified, on the other hand, can lead to a more qualified population
(although it may fail to guarantee social equality).
### A.2 Intergenerational Income Mobility
A large economic literature addresses the relationship between income
inequality and intergenerational mobility. At a high-level, the literature
suggests that higher inequality results in lower income mobility across
generations (see e.g., [Maoz and Moav, 1999; Corak, 2013]). The precise
measurements of inequality and mobility significantly influence the strength
of this effect (see, e.g., [Solon, 1992; Piketty, 2000]). The literature
recognizes two main mechanisms through which a parent may affect their
offspring’s income: (1) by transmitting endowments (e.g., innate abilities, connections,
etc.) to the next generation; (2) by investing in the human capital of the
next generation [Becker and Tomes, 1979; Solon, 1999]. In the existing
theoretical models of income mobility, utility-maximizing parents decide how
much of their income/human capital to consume and how much of it to invest in
their offspring (depending on their level of altruism) [Becker and Tomes,
1986; Becker and Tomes, 1979; Loury, 1981; Solon, 1999]. This decision,
combined with the social institutions and the technology that converts
parental investment to the next generation’s human capital, determines the fate
of the next generation.
For example, Loury [1981] provides a dynamic model of how the earnings
distribution of the next generation depends on the parent’s earnings and
his/her investment decision for his/her offspring. The earnings of the
offspring are determined by their innate ability/endowment $\alpha$ (similar
to our model, in Loury’s model the ability of each individual is drawn
randomly from some fixed distribution; the ability of the offspring is only
observed after the parent makes their investment decision) and by how much the
parent invests in their training $e$, through a function $h(\alpha,e)$. Unlike
our model, Loury assumes the parents are expected-utility maximizing when
making their investment decisions for their offspring.
utility $u(.,.)$ of a parent with earnings $y$ is a function of his/her own
consumption $c$ and the expected utility of the offspring as the result of the
investment he/she makes in their training, which is $y-c$. This recursive
definition of utility specifies how the utility of one generation depends on
that of all their subsequent generations (not just the immediate offspring).
Loury defines the indirect utility associated with an earning $y$ to be the
function the parent uses to estimate their offspring’s utility. If this
function is the same as the one obtained by solving the parent’s utility
maximization problem, the indirect utility function is called _consistent_. He
goes on to show that under certain (weak) assumptions, there is a unique
consistent indirect utility function. He also defines an _equilibrium_
earnings distribution as one that, if it characterizes the earnings
distribution of one generation, continues to do so for all subsequent
generations. He then provides several comparative statics results for various
policies (e.g., “education specific tax policies”). For example, he shows that
under certain conditions, egalitarian policies that redistribute earnings of
the next generation have insurance effects that make every member of society
today better off.
Other factors that have been shown to causally affect income mobility are
neighborhood [Chetty and Hendren, 2018], parental education [Torche,
2011], and family background characteristics (e.g., welfare receipt, health,
attitudes and social behavior [Black and Devereux, 2010]).
### A.3 Affirmative Action Policies
A rich body of work in economics investigates sources of statistical
discrimination (according to [Fang and Moro, 2011], “statistical
discrimination generally refers to the phenomenon of a decision-maker using
observable characteristics of individuals as a proxy for unobservable, but
outcome-relevant, characteristics”) and the role of affirmative action
policies in redressing it. For an excellent survey of this literature, see
[Fang and Moro, 2011].
In contrast to taste-based theories of discrimination—which attribute group
inequality to racial or gender preference against members of certain
groups—statistical discrimination theories cast group inequality as a
consequence of interaction between two _rational_ parties:
1. 1.
A utility maximizing decision-maker (e.g., an employer) who has imperfect
information about a decision subject’s characteristics and uses his/her group
membership as a signal for his/her outcome-relevant unobservable
characteristics (e.g., employee’s productivity);
2. 2.
Several groups of individuals (e.g., racial or gender groups) that are ex-ante
identical in terms of the distribution of qualifications. Individuals are
utility maximizing and best-respond to the decision maker’s strategy by
adjusting their investment in qualifications.
In a seminal work, Arrow [1973] argues that differences between groups can be
explained as a form of _coordination failure_ : In equilibrium, the decision
maker holds asymmetric beliefs about group qualifications, and this serves as
a _self-fulfilling stereotype_. Because of this belief, members of the
disadvantaged group do not have enough incentive to invest in skills and
qualifications, precisely because they know that the decision maker will treat
them unfavorably. This in turn rationalizes the decision maker’s belief:
members of the disadvantaged group indeed end up being less qualified (on
average) than the advantaged group.
Outcome-based policies, such as affirmative action quotas or the application
of disparate impact tests, have long been proposed and implemented as a
_temporary_ remedy to eliminate group-level inequalities. Such policies may
seem particularly effective when self-fulfilling stereotypes are the primary
cause of group inequalities. Imposing quota constraints can lead the players
to coordinate on a symmetric outcome, that is, an equilibrium in which the
decision maker holds symmetric beliefs about different groups. This, however,
is not the only possible consequence of imposing quota constraints. Coate and
Loury [1993] have shown that quota constraints may reduce the disadvantaged
group’s incentives to invest in skills, and subsequently, result in an even
worse asymmetric equilibrium. Coate and Loury call this phenomenon
_patronization_. (This is a potential consequence of algorithmic fairness-
enhancing interventions.)
Advocates of affirmative action have often argued that the larger
representation of minorities in higher social positions can generate _role
models_ that can positively influence future generations of minorities in
their investment decisions. Chung [2000] formalizes these arguments by
allowing for groups to differ in their costs of investment.
While the existing literature on affirmative action largely focuses on race-
based interventions and policies, some scholars have advocated for _socio-
economic_ or _class-based_ affirmative action as a race-neutral alternative
(see e.g., [Kahlenberg, 1996; Carnevale and Rose, 2013]). Race-neutral
alternatives become particularly important to understand when race-sensitive
interventions are constitutionally challenged (for example, see the case of
Fisher v. University of Texas at Austin (2013)). The extent to which
substituting socio-economic status for race in college admissions is effective
in improving minority enrollment is contested (see, e.g., [Kane, 1998; Reardon
et al., 2006; Gaertner and Hart, 2013; Reardon et al., 2017]). However, socio-
economic affirmative action can certainly facilitate access to college for
economically disadvantaged students. The resulting socioeconomic diversity is
a desirable outcome in and of itself [Reardon et al., 2017] and in essence, it
does not have to compete with or replace race-sensitive admission policies.
### A.4 Comparison with [Durlauf, 2008]
[Durlauf, 2008] is one of the closest papers to our work—in terms of the
motivation and modeling choices. In his concluding remarks, Durlauf poses the
key question our work sets out to answer: how do multigenerational
considerations impact the optimal allocation policy? Next, we provide a brief
summary of [Durlauf, 2008] and compare our work with it in terms of modeling
choices and results.
Durlauf focuses on a condition under which efficiency and equality
considerations are aligned, and that is when a diverse body of students on
college campuses improves the human capital development for all of them.
Durlauf [2008] provides a model to compare the equality and efficiency of
affirmative action policies with those of meritocratic policies in the context
of admission rules to public universities. An admission rule maps the initial
human capital of students (and possibly their demographic information, such as
race) to their admission decisions. If a student is admitted, their human
capital is then improved by the quality of the college they attend and the
average human capital of their classmates in that college. The state wishes to
employ an admission policy that maximizes the aggregate human capital. He
argues that while the meritocratic policy may be efficient under certain
conditions, the same holds for affirmative action policies (e.g., when
“diversity improves education for all students.”).
In many respects, our model is similar to [Durlauf, 2008], although we depart
from his model and analysis in a few key areas. Most importantly, we model the
dynamics of human-capital development across _multiple generations_ and derive
the _optimal allocation policy_.
Similar to our work, Durlauf studies a multi-generational model of a society,
wherein each generation, a new member is born into every family/dynasty, and
he/she replaces the current adult member of the family in the next generation.
Even though our model is not expressed in terms of _overlapping generations_ ,
one can equivalently cast it in those terms.
The _initial human capital_ in Durlauf’s model corresponds to our notion of
_success probability_. _Adult human capital_ in his model is determined by
college attendance, and it roughly maps to our notion of _success_ (i.e.,
whether the individual succeeds if given the opportunity.)
In Durlauf’s model, students always prefer classmates with higher human
capital and colleges with higher fundamental quality. Unlike his model, we do
not capture the _spillover effects_ of students attending the same college on
each other. We also don’t allow for various levels of college quality. (It may
be worth noting that while in his general model, Durlauf allows for various
college qualities, in his analysis he focuses on at most two quality
levels—one college of high quality, the other of low quality.)
In both models, an admission rule maps a student’s human capital and group
membership into a binary outcome indicating whether the student is admitted.
In our work, groups correspond to _socio-economic tiers_, whereas in
Durlauf’s, they correspond to _racial groups_. In both models, the admission
rule may vary across time and generations.
Other simplifying assumptions in Durlauf’s model are:
* •
The level of state expenditures on education is fixed across generations.
Similarly, $\alpha$ is fixed in our model.
* •
The only output of universities is human capital. Similarly, we only care
about the percentage of the population who succeed.
* •
The initial human capital is accurately measured for every student. Similarly,
we assume ability and success probability are perfectly observable.
As for the objective function, Durlauf assumes the policymaker wants to
maximize the level of human capital among _the next generation_ of adult
citizens. He restricts attention to two generations only, but he notes that
“When one moves to this dynamic perspective, one also needs to consider an
_inter-temporal_ objective function; in parallel to my simplest specification,
which is based on the level of human capital at one point in time, it is
appropriate to consider a _weighted average_ of the levels across time. A
standard way to make this move involves working with […] a _discounted sum_ of
adult human capital across two generations.” The latter is precisely the
objective function we aim to optimize. Unlike our work, Durlauf does not solve
for the optimal policy. He merely compares the meritocratic policy with
affirmative action in terms of the average human capital of the next
generation. (Similar to our work, he defines a meritocratic admissions policy
to mean that each student is admitted to college exclusively on the basis of
their initial human capital. An affirmative action policy in his view is one
that takes group membership into account as well.)
Both models take the perspective of a policymaker who needs to choose between
affirmative action and meritocratic rules and aim to understand how efficiency
and equity interact. Similar to our findings, he finds that depending on the
parameters of the model, affirmative action can be more efficient than myopic
meritocratic admission rules. In our model, this happens because the human
capital of a student is directly impacted by that of his/her parent. In
Durlauf’s work, in contrast, this can happen because of the spillover effects
of students on each other and the impact of college quality on developing
adult human capital. In more concrete terms, Durlauf’s model can capture the
competing claims that “diversity improves education for all students on
campus” or that “stronger students going to the same college leads to higher
social capital for all of them”. Our model does not capture such _spillover_
effects. We instead focus on the _intertemporal_ or intergenerational effects
of admission rules. As mentioned earlier, Durlauf emphasizes the importance of
intergenerational factors in Section 6.2 of “Affirmative action, meritocracy,
and efficiency”.
### A.5 MDPs with Continuous State and Action Spaces
Finding the optimal policy for a continuous state- and action-space MDP is
often a difficult task [Marecki et al., 2006]. Two main categories of
algorithms have been proposed to (approximately) calculate the optimal policy:
(1) A typical approach is to discretize the state and action space, then solve
the resulting MDP using standard methods, such as policy or value iteration. This
approach suffers from the well-known “curse of dimensionality”. (2) Another
approach approximates the optimal value function with a parametric form, then
sets out to fit those parameters such that they satisfy the Bellman equation
(see, e.g., [Li and Littman, 2005; Hauskrecht and Kveton, 2004]).
## Appendix B Limitations and Interpretations
Our model is designed to incorporate the basic points we mentioned in the
introduction in as simplified a fashion as possible; as such, it is important
to note some of its key limitations. First, it is intended to model the effect
of a single opportunity, and it treats other forms of mobility
probabilistically in the background. It also treats its fundamental
parameters ($\alpha,\sigma,\tau,\gamma$) as constant over all generations. It
treats an individual’s group membership ($A$ and $D$) and ability as a
complete description of their performance, rather than including any
dependence on the group membership of the individual’s parent. (That is, an
individual in group $A$ performs the same in the model regardless of whether
their parent belonged to group $A$ or $D$.) All of these would be interesting
restrictions to relax in an extension of the model.
Much of the past theoretical work on intergenerational mobility focuses on an
issue that we do not consider here: the strategic considerations faced by
parents as they decide how much to consume in the present generation and how
much to pass on to their children. Our interest instead has been in the
optimization problem faced by a social planner in allocating opportunities,
treating the behavior of the agents as fixed and simple. Here too, it would be
interesting to explore models that address these issues in combination.
Finally, because our focus is on intergenerational mobility in a socioeconomic
sense, we do not model discrimination based on race, ethnicity, or gender, and
the role of race-based and gender-based affirmative action in combatting these
effects. The model is instead concerned with _socio-economic_ or _class-based_
[Malamud, 1995; Kahlenberg, 1996] affirmative action. That said, the
ingredients here could be combined with models of statistical or taste-based
discrimination on these attributes to better understand their interaction.
The simplicity of our model, however, does allow us to make a correspondingly
fundamental point: that even a purely payoff-maximizing society can discover
affirmative action policies from first principles as it seeks to optimize the
allocation of opportunities over multiple generations. Moreover, the optimal
allocation policy is deeply connected to dynamic programming over the
generations; the society is essentially attempting to “steer” the balance of
group $A$ and group $D$ over time, making sure not to turn things too abruptly
(giving up present benefit) or too gradually (giving up future benefit).
## Appendix C Properties of the Value Function
In this section, we show that the value function $V(.)$ for the decision
process $\mathcal{D}$ is unique, continuous, differentiable, and monotonically
decreasing. We begin by offering an alternative formulation of the decision
process that has the exact same optimal policy and value function, but is more
conducive to recursive analysis. Next, we prove several properties of the
state space $\Phi$ and the reward function $R^{\prime}$ for this alternative
decision process. Then we apply previously-established theorems from dynamic
programming and recursive analysis [Stokey, 1989] to establish the
aforementioned properties of $V(.)$.
#### A Decision Process Equivalent to $\mathcal{D}$
Given the decision process $\mathcal{D}=(\Phi,\Theta,S,R)$ (as defined in
Section 3), we first provide an alternative formulation, called
$\mathcal{D}^{\prime}$, that fits the standard representation of decision
processes in the Dynamic Programming literature. This standard formulation
allows us to import tools and theorems from Dynamic Programming with minor
modifications.
We construct $\mathcal{D}^{\prime}$ from $\mathcal{D}$ by re-defining the
action space—not in terms of thresholds, but in terms of the states reachable
through an admissible threshold from a given state. The state transitions in
this formulation become trivial, but we need to re-write the reward function
in terms of the new parameters (i.e., current and the next state).
Formally, given $\mathcal{D}$, we define
$\mathcal{D}^{\prime}=(\Phi,\Gamma,I,R^{\prime})$ as follows:
* •
The correspondence $\Gamma:\Phi\rightarrow\Phi$ specifies the states reachable
by one admissible $\theta_{0}$ from any given state $\phi_{0}\in\Phi$. More
precisely
$\Gamma(\phi_{0})=\\{\omega\in\Phi|\exists\theta_{0}\in\Theta(\phi_{0})\text{
s.t. }\omega=S(\phi_{0},\theta_{0})\\}.$
* •
$I:\Phi\times\Phi\rightarrow\Phi$ simply returns its second
argument, that is, for all $\phi_{0},\omega\in[0,1]$, $I(\phi_{0},\omega)=\omega$.
* •
To recast the reward function in terms of $(\phi_{0},\omega)$, we first write
$\theta_{0}$ as a function of $\phi_{0},\omega$ (a numeric sanity check of this
inversion appears in the sketch after this list):
$\omega=\phi_{0}-\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta^{2}_{0})\;\Leftrightarrow\;\theta^{2}_{0}=2\sigma\frac{\omega}{\phi_{0}}+\sigma^{2}-2\sigma\;\Leftrightarrow\;\theta_{0}=\sqrt{2\sigma\frac{\omega}{\phi_{0}}+\sigma^{2}-2\sigma}$
Given the above change of variables, we can write:
$R^{\prime}(\phi_{0},\omega_{0})=\begin{cases}R\left(\phi_{0},\sqrt{2\sigma\frac{\omega}{\phi_{0}}+\sigma^{2}-2\sigma}\right)\quad\text{
if }\phi_{0}>0\\\ R(0,\sigma)\quad\text{ if }\phi_{0}=0\end{cases}$
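As a quick sanity check of this change of variables, the following minimal Python sketch (with an illustrative value of $\sigma$) applies the transition map and then inverts it to recover the threshold:

```python
import math

sigma = 0.5  # illustrative value

def S(phi0, theta0):
    # Transition map: omega = phi0 - (phi0 / 2 sigma)(sigma^2 - theta0^2).
    return phi0 - phi0 / (2 * sigma) * (sigma**2 - theta0**2)

def theta_from(phi0, omega):
    # Inverse map derived above (requires phi0 > 0 and omega in Gamma(phi0)).
    return math.sqrt(2 * sigma * omega / phi0 + sigma**2 - 2 * sigma)

# Round trip: applying theta0 = 0.3 at phi0 = 0.5 and inverting recovers theta0.
assert abs(theta_from(0.5, S(0.5, 0.3)) - 0.3) < 1e-9
```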
###### Property 1
$\Phi$ is a convex subset of $\mathbb{R}$, and the correspondence $\Gamma$ is
nonempty, compact-valued, and continuous.
Proof $\Phi=[0,1]$, which is clearly a convex subset of $\mathbb{R}$. The
correspondence $\Gamma$ can be characterized as follows:
$\displaystyle\Gamma(\phi_{0})$ $\displaystyle=$
$\displaystyle\\{\omega\in\Phi|\exists\theta_{0}\in\Theta(\phi_{0})\text{ s.t.
}\omega=S(\phi_{0},\theta_{0})\\}$ $\displaystyle=$
$\displaystyle\left\\{\omega\in\Phi|\omega=\phi_{0}-\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2})\text{
where
}\sigma\left(1-\frac{\alpha}{\phi_{0}}\right)\leq\theta_{0}\leq\frac{\sigma(1-\alpha)}{\phi_{0}}\text{
and }0\leq\theta_{0}\leq 1\right\\}$ $\displaystyle=$
$\displaystyle\begin{cases}\omega\in\left[\phi_{0}\left(1-\frac{1}{2\sigma}\right),\phi_{0}\right]\quad\text{
if }\phi_{0}\leq\alpha\\\
\omega\in\left[\phi_{0}-\sigma\alpha\left(1+\frac{\alpha}{2\phi_{0}}\right),\phi_{0}\right]\quad\text{
if }\alpha<\phi_{0}\leq 1-\alpha\\\
\omega\in\left[\phi_{0}-\sigma\alpha\left(1+\frac{\alpha}{2\phi_{0}}\right),\phi_{0}-\frac{\phi_{0}}{2\sigma}\left(1-\frac{1-\alpha}{\phi_{0}}\right)\left(1+\frac{1-\alpha}{\phi_{0}}\right)\right]\quad\text{
if }\phi_{0}\geq 1-\alpha\end{cases}$ $\displaystyle=$
$\displaystyle\left[\max\left\\{\phi_{0}\left(1-\frac{1}{2\sigma}\right),\phi_{0}-\sigma\alpha\left(1+\frac{\alpha}{2\phi_{0}}\right)\right\\},\min\left\\{\phi_{0},\phi_{0}-\frac{\phi_{0}}{2\sigma}\left(1-\frac{1-\alpha}{\phi_{0}}\right)\left(1+\frac{1-\alpha}{\phi_{0}}\right)\right\\}\right]$
Given the above definition, it is trivial to verify that $\Gamma$ is indeed
nonempty, compact-valued, and continuous.
###### Property 2
The reward function $R^{\prime}$ is bounded and continuous.
Proof The reward function $R$ specifies the fraction of the population who
succeed if given the opportunity, so clearly $0\leq R(.,.)\leq 1$ and $R$ is
bounded. As a result, $R^{\prime}$ is also bounded. To establish continuity of
$R^{\prime}$, we first establish the continuity of $R$. It is trivial to see
that $R(.,.)$ (defined in (9)) is continuous at any $\phi_{0}<1$. At
$\phi_{0}=1$, we have
$\displaystyle\lim_{\phi_{0}\rightarrow
1}\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2})+\frac{1-\phi_{0}}{2\sigma}\left((\sigma+\tau)^{2}-\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)^{2}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-\lim_{\phi_{0}\rightarrow
1}\frac{1-\phi_{0}}{2\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)^{2}$
$\displaystyle=$
$\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-\lim_{\phi_{0}\rightarrow
1}\frac{1}{2\sigma}\frac{\left(\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}\right)^{2}}{1-\phi_{0}}$
($\Gamma(1)=\\{\sigma(1-\alpha)\\}$) $\displaystyle=$
$\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-\lim_{\phi_{0}\rightarrow
1}\frac{1}{2\sigma}\frac{\left(\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\sigma(1-\alpha)\right)^{2}}{1-\phi_{0}}$
(L’Hospital’s rule) $\displaystyle=$
$\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-\lim_{\phi_{0}\rightarrow
1}\frac{1}{\sigma}\frac{(\tau+\sigma(1-\alpha))\left(\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\sigma(1-\alpha)\right)}{1}$
$\displaystyle=$ $\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-0$
$\displaystyle=$ $\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})$
$\displaystyle=$ $\displaystyle R(1,\theta_{0}).$
So $R$ is continuous at $\phi_{0}=1$ as well. Finally, note that
$\sqrt{2\sigma\frac{\omega}{\phi_{0}}+\sigma^{2}-2\sigma}$ is a continuous
function of $(\phi_{0},\omega)$ with $\omega\in\Gamma(\phi_{0})$, so $R^{\prime}$ is also
continuous.
Let $C(\Phi)$ be the space of bounded continuous functions
$f:\Phi\rightarrow\mathbb{R}$ with the sup norm
$\|f\|=\max_{\phi_{0}\in\Phi}|f(\phi_{0})|$. We define the operator $T$ on the
space $C(\Phi)$ as follows:
$(Tf)(\phi_{0})=\max_{\omega\in\Gamma(\phi_{0})}R^{\prime}(\phi_{0},\omega)+\gamma
f(\omega).$
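As Theorem 2 below guarantees, $T$ has a unique fixed point $V$, and a standard way to approximate it numerically is to iterate $T$ on a state grid until the sup-norm change falls below a tolerance. A minimal Python sketch, under the assumption (ours, for illustration) that `rew[i, j]` encodes $R^{\prime}(\phi_{i},\phi_{j})$ and is set to $-\infty$ whenever $\phi_{j}\notin\Gamma(\phi_{i})$:

```python
import numpy as np

def fixed_point_of_T(rew, gamma, tol=1e-10):
    # rew[i, j] encodes R'(phi_i, phi_j); entries with phi_j outside
    # Gamma(phi_i) should be -np.inf so they are never selected by the max.
    n = rew.shape[0]
    V = np.zeros(n)
    while True:
        TV = np.max(rew + gamma * V[None, :], axis=1)  # one application of T
        if np.max(np.abs(TV - V)) < tol:               # sup-norm stopping rule
            return TV
        V = TV
```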
###### Theorem 2 (Adapted version of Theorem 4.6 in [Stokey, 1989])
Let $\Phi$, $\Gamma$, and $R^{\prime}$ satisfy Properties 1 and 2. Then the
operator $T$ maps $C(\Phi)$ into itself, $T:C(\Phi)\rightarrow C(\Phi)$, and
$T$ has a unique fixed point $V\in C(\Phi)$. Moreover, given $V$, the optimal
policy correspondence $\Pi^{*}:\Phi\rightarrow\Phi$ is compact-valued and
upper hemicontinuous (u.h.c.).
According to Cotter and Park [2006], if the reward function $R^{\prime}$ is
differentiable, the value function $V$ is differentiable at any interior point
that is an optimal “next state” for some current state. More precisely,
###### Theorem 3 (Adapted version of Theorem 2 in [Cotter and Park, 2006])
Suppose $\omega\in\Pi^{*}(\phi_{0})\cap(0,1)$ for some $\phi_{0}\in[0,1]$. If
$R^{\prime}$ is continuously differentiable, then $V$ is differentiable at
$\omega$, with
$\frac{\partial}{\partial\phi_{0}}V|_{\omega}=\frac{\partial}{\partial\phi_{0}}R^{\prime}|_{(\omega,\omega^{\prime})}$
for any $\omega^{\prime}\in\Pi^{*}(\omega)$.
It only remains to show that:
###### Property 3
$R^{\prime}$ is continuously differentiable on the interior of its domain.
Proof To establish continuous differentiability of $R^{\prime}$ with respect
to $\omega$, note that:
$\frac{\partial}{\partial\omega}R^{\prime}(\phi_{0},\omega)=\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})\frac{\partial}{\partial\omega}\theta_{0}.$
Both terms on the right-hand side of the above equation are continuous:
$\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})=\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\theta_{0}}{1-\phi_{0}}\right)$
$\frac{\partial}{\partial\omega}\theta_{0}=\frac{\sigma}{\sqrt{2\sigma\omega\phi_{0}+(\sigma^{2}-2\sigma)\phi_{0}^{2}}}$
Therefore, $\frac{\partial}{\partial\omega}R^{\prime}(\phi_{0},\omega)$ is
trivially continuous at any $(\phi_{0},\omega)\in(0,1)^{2}$.
To establish continuous differentiability of $R^{\prime}$ with respect to
$\phi_{0}$, note that:
$\frac{\partial}{\partial\phi_{0}}R^{\prime}(\phi_{0},\omega)=\frac{\partial}{\partial\phi_{0}}R(\phi_{0},\theta_{0})+\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})\frac{\partial}{\partial\phi_{0}}\theta_{0}.$
It is easy to see that all three terms on the right-hand side of the above are
continuous:
$\displaystyle\frac{\partial}{\partial\phi_{0}}R(\phi_{0},\theta_{0})$
$\displaystyle=$
$\displaystyle\frac{1}{2\sigma}(\sigma^{2}-\theta_{0}^{2})-\frac{1}{2\sigma}\left((\sigma+\tau)^{2}-\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)^{2}\right)$
$\displaystyle-\frac{1}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)\left(\frac{\sigma(1-\alpha)-\theta_{0}}{1-\phi_{0}}\right)$
$\frac{\partial}{\partial\phi_{0}}\theta_{0}=\frac{\sigma\omega}{\phi_{0}^{2}\sqrt{2\sigma\omega\phi_{0}+(\sigma^{2}-2\sigma)\phi_{0}^{2}}}$
Therefore, $\frac{\partial}{\partial\phi_{0}}R^{\prime}(\phi_{0},\omega)$ is
continuous at any $(\phi_{0},\omega)$ in the interior of the domain of $R^{\prime}$.
###### Proposition 3
$V(\phi_{0})$ is monotonically decreasing at all $\phi_{0}\in(0,1)$.
Proof For any $\phi_{0}\leq\phi^{*}_{0}$, the statement is easy to verify
given the closed form expression for the value function in (12). For any
$\phi_{0}>\phi^{*}_{0}$ we show that $V$ is decreasing in an open neighborhood
on the left side of $\phi_{0}$. Since $V$ is differentiable at $\phi_{0}$,
this implies $V^{\prime}(\phi_{0})\leq 0$.
Let $\phi^{\prime}_{0}$ be the state we get to if we apply the optimal
threshold $\theta_{0}$ at $\phi_{0}$. We have that
$\phi^{\prime}_{0}<\phi_{0}$ (note that at $\phi_{0}>\phi^{*}_{0}$,
$\theta_{0}<\sigma$, which implies $\phi^{\prime}_{0}<\phi_{0}$). Now for any
state $\phi^{\prime}_{0}<\phi^{\prime\prime}_{0}<\phi_{0}$, we can show that
$V(\phi^{\prime\prime}_{0})>V(\phi_{0})$. This is simply because we can reach
$\phi^{\prime}_{0}$ from $\phi^{\prime\prime}_{0}$ using a threshold
$\theta^{\prime\prime}_{0}>\theta_{0}$. To see this, note that:
$S(\phi_{0},\theta_{0})=\phi^{\prime}_{0}=S(\phi^{\prime\prime}_{0},\theta^{\prime\prime}_{0})\;\Leftrightarrow\;\phi_{0}-\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2})=\phi^{\prime\prime}_{0}-\frac{\phi^{\prime\prime}_{0}}{2\sigma}(\sigma^{2}-{\theta^{\prime\prime}_{0}}^{2})\;\Leftrightarrow\;\phi_{0}\left(1-\frac{\sigma}{2}+\frac{\theta_{0}^{2}}{2\sigma}\right)=\phi^{\prime\prime}_{0}\left(1-\frac{\sigma}{2}+\frac{{\theta^{\prime\prime}_{0}}^{2}}{2\sigma}\right)$
Since $\phi_{0}>\phi^{\prime\prime}_{0}$, it must be the case that
${\theta^{\prime\prime}_{0}}>\theta_{0}$ for the above equation to hold.
Next, observe that compared to applying $\theta_{0}$ at $\phi_{0}$, using the
higher threshold $\theta^{\prime\prime}_{0}$ at $\phi^{\prime\prime}_{0}$
leads to a higher immediate reward and the same next state,
$\phi^{\prime}_{0}$. From this observation, we can conclude that the value of
$\phi^{\prime\prime}_{0}$ is higher than that of $\phi_{0}$, because:
$V(\phi^{\prime\prime}_{0})\geq R(\phi^{\prime\prime}_{0},\theta^{\prime\prime}_{0})+\gamma V(\phi^{\prime}_{0})\geq R(\phi_{0},\theta_{0})+\gamma V(\phi^{\prime}_{0})=V(\phi_{0}).$
Figure 6: The optimal policy $\theta_{0}$ at every state $0\leq\phi_{0}\leq 1$
for various settings of $\alpha,\sigma,\tau$, and $\gamma$. Note that in all
cases, the optimal threshold is monotonically decreasing with $\phi_{0}$.
Moreover, there exists a point below which $\sigma$ is the only optimal
threshold, and above which $\sigma$ is no longer optimal. Observe that this
point coincides with the tipping point, $\phi^{*}_{0}$, established in Theorem
1 (depicted by dashed lines). The dotted red line illustrates the line of
affirmative action, derived in Lemma 2. Notice that beyond $\phi^{*}_{0}$, the
optimal policy is always below the line of affirmative action.
Figure 7: The scaled value function at every state $0\leq\phi_{0}\leq 1$ for
various settings of $\alpha,\sigma,\tau$, and $\gamma$. The dashed lines
specify the tipping point, $\phi^{*}_{0}$. Note that in all cases, the value
function is continuous, concave, and decreasing.
Figure 8: The difference between $\theta_{0}$ and $\theta_{1}$ at every state
$0\leq\phi_{0}\leq 1$ for various settings of $\alpha,\sigma,\tau$, and
$\gamma$. The dashed lines specify the tipping point, $\phi^{*}_{0}$. Note
that strict affirmative action is only employed beyond $\phi^{*}_{0}$. Also
note that the extent of affirmative action is monotonically increasing in
$\phi_{0}$.
Figure 9: The state the optimal policy converges to with the initial state
$\phi_{0}$. The dashed lines specify the tipping point, $\phi^{*}_{0}$. Note
that the optimal policy never shrinks the size of group $D$ to a value less
than $\phi^{*}_{0}$.
### C.1 Omitted Proofs
#### Proof of Lemma 1.
Proof We utilize _backward induction_ to pinpoint the largest state at which
$\sigma$ is an optimal threshold. First, observe that if $\phi_{0}$ is
sufficiently small, $\sigma\in\Pi^{*}_{0}(\phi_{0})$. To see this, note that
at least for $\phi_{0}=0$, $\sigma\in\Pi^{*}_{0}(\phi_{0})$. Second, note that
if $\sigma\in\Pi^{*}_{0}(\phi_{0})$, Bellman optimality (10) must hold:
$V(\phi_{0})=R(\phi_{0},\sigma)+\gamma V(S(\phi_{0},\sigma))=R(\phi_{0},\sigma)+\gamma V(\phi_{0}),$
where in the second equality we utilized the fact that
$S(\phi_{0},\sigma)=\phi_{0}$ for all $\phi_{0}\in[0,1]$. This fact can be
readily verified through (8). Rearranging the above equation, we obtain:
$V(\phi_{0})=\frac{1}{1-\gamma}R(\phi_{0},\sigma).$
Replacing $R$ above with its definition (9) and plugging $\sigma$ in place of
$\theta_{0}$, we obtain:
$\displaystyle V(\phi_{0})$ $\displaystyle=$
$\displaystyle\frac{1}{1-\gamma}\frac{1-\phi_{0}}{2\sigma}\left((\sigma+\tau)^{2}-\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\sigma}{1-\phi_{0}}\right)^{2}\right)$
(12) $\displaystyle=$
$\displaystyle\frac{1}{1-\gamma}\frac{1-\phi_{0}}{2\sigma}\left((\sigma+\tau)^{2}-\left(\frac{-\sigma\alpha+(1-\phi_{0})\tau+(1-\phi_{0})\sigma}{1-\phi_{0}}\right)^{2}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{1-\gamma}\frac{1-\phi_{0}}{2\sigma}\left((\sigma+\tau)^{2}-\left(\sigma+\tau-\frac{\sigma\alpha}{1-\phi_{0}}\right)^{2}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{1-\gamma}\frac{1-\phi_{0}}{2\sigma}\frac{\sigma\alpha}{1-\phi_{0}}\left(2(\sigma+\tau)-\frac{\sigma\alpha}{1-\phi_{0}}\right)$
$\displaystyle=$
$\displaystyle\frac{\alpha}{(1-\gamma)}\left((\sigma+\tau)-\frac{\sigma\alpha}{2(1-\phi_{0})}\right)$
A corollary of the above is that
$V(0)=\frac{\alpha}{(1-\gamma)}\left((\sigma+\tau)-\frac{\sigma\alpha}{2}\right).$
Next, we derive the largest $\phi_{0}$ at which $\sigma$ remains an optimal
threshold. Let’s denote this point by $\tilde{\phi}_{0}$. From Bellman
optimality, we know that an action is optimal if and only if it is optimal
with respect to the (optimal) value function. So taking (12) as the value
function $V$ up to $\tilde{\phi}_{0}$, any optimal threshold at
$\phi_{0}\leq\tilde{\phi}_{0}$ must maximize $R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0}))$. Since $R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0}))$ is differentiable and $\tilde{\phi}_{0}$ is the
largest state at which $\sigma$ is an optimal threshold, $\sigma$ must satisfy
the first-order condition at $\tilde{\phi}_{0}$; that is, the partial
derivative of $R(\phi_{0},\theta_{0})+\gamma V(S(\phi_{0},\theta_{0}))$
(w.r.t. $\theta_{0}$) must be 0 at $(\tilde{\phi}_{0},\sigma)$.
We can derive the derivative of $R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0}))$ as follows:
$\frac{\partial}{\partial\theta_{0}}\left\\{R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0}))\right\\}=\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})+\gamma\frac{\partial}{\partial\theta_{0}}\
S(\phi_{0},\theta_{0})\frac{\partial}{\partial\phi_{0}}V(S(\phi_{0},\theta_{0})).$
(13)
We can calculate each term in (13) as follows:
$\displaystyle\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})$
$\displaystyle=$
$\displaystyle-\frac{\phi_{0}\theta_{0}}{\sigma}+\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}\right)$
(14) $\displaystyle=$
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\phi_{0}\theta_{0}}{1-\phi_{0}}-\theta_{0}\right)$
$\displaystyle=$
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\theta_{0}}{1-\phi_{0}}\right).$
$\frac{\partial}{\partial\theta_{0}}S(\phi_{0},\theta_{0})=\frac{\phi_{0}\theta_{0}}{\sigma}.$
(15)
$\frac{\partial}{\partial\phi_{0}}V(\phi_{0})=-\frac{\sigma\alpha^{2}}{2(1-\gamma)(1-\phi_{0})^{2}}.$
(16)
Plugging (8), (16), (14), and (15) into (13), we obtain:
$\displaystyle\frac{\partial}{\partial\theta_{0}}\left\\{R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0}))\right\\}=$ (17)
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\theta_{0}}{1-\phi_{0}}\right)-\gamma\frac{\phi_{0}\theta_{0}}{\sigma}\frac{\sigma\alpha^{2}}{2(1-\gamma)(1-\phi_{0}+\frac{\phi_{0}}{2\sigma}(\sigma^{2}-\theta_{0}^{2}))^{2}}.$
As mentioned earlier, at $(\tilde{\phi}_{0},\sigma)$, the derivative (17) must
amount to 0. Therefore, to find $\tilde{\phi}_{0}$, we must solve the
following equation (obtained by replacing $\theta_{0}$ in (17) with $\sigma$):
$0=\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\sigma}{1-\phi_{0}}\right)-\gamma\phi_{0}\frac{\sigma\alpha^{2}}{2(1-\gamma)(1-\phi_{0})^{2}}$
$\Rightarrow\quad 0=(-\alpha\sigma+(1-\phi_{0})\tau)(1-\phi_{0})-\gamma\frac{\sigma^{2}\alpha^{2}}{2(1-\gamma)}$
$\Rightarrow\quad 0=\tau(1-\phi_{0})^{2}-\alpha\sigma(1-\phi_{0})-\frac{\gamma\sigma^{2}\alpha^{2}}{2(1-\gamma)}$
(To obtain the second line, we multiplied both sides by
$\sigma(1-\phi_{0})^{2}/\phi_{0}$.) Solving the above quadratic
equation for $(1-\phi_{0})$, we have:
$\tilde{\phi}_{0}\in\left\{1-\frac{\alpha\sigma}{2\tau}\left(1\pm\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\right\}$
To obtain the tightest bound, we pick the smaller value among the above two
possibilities:
$\tilde{\phi}_{0}=1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)$
Note that $\tilde{\phi}_{0}$ must always be between $0$ and $(1-\alpha)$ (due
to the budget constraints illustrated in Figure 2). So the above derivation
only goes through if $\tilde{\phi}_{0}\leq(1-\alpha)$. Therefore, we have:
$\phi^{*}_{0}=\max\left\\{0,\min\left\\{1-\alpha,\tilde{\phi}_{0}\right\\}\right\\}.$
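To make the closed form concrete, here is a minimal Python sketch (the function name and parameter values are ours, chosen only for illustration) that evaluates $\tilde{\phi}_{0}$ and applies the clipping above:

```python
import math

def phi0_star(alpha, sigma, tau, gamma):
    """Evaluate the clipped threshold state phi0* derived above."""
    # Smaller root of tau*(1 - phi0)^2 - alpha*sigma*(1 - phi0)
    #   - gamma*sigma^2*alpha^2 / (2*(1 - gamma)) = 0,
    # i.e. the '+' branch inside the square root (the tightest bound).
    phi0_tilde = 1 - (alpha * sigma / (2 * tau)) * (
        1 + math.sqrt(1 + 2 * tau * gamma / (1 - gamma)))
    return max(0.0, min(1 - alpha, phi0_tilde))

# Hypothetical parameters, with sigma = 1 - tau as in Section C.2.
print(phi0_star(alpha=0.2, sigma=0.5, tau=0.5, gamma=0.9))  # ~0.584
```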
#### Proof of Proposition 2.
Proof. Note that according to Lemma 3, for all
$\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and all
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$, $\theta_{0}\leq\theta_{1}$. It only
remains to show that the inequality is strict. That is, for all
$\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and all
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$, $\theta_{0}\neq\theta_{1}$.
Suppose not, i.e., there exist $\theta_{0}\in\Pi^{*}_{0}(\phi_{0})$ and
$\theta_{1}\in\Pi^{*}_{1}(\phi_{0})$ such that $\theta_{0}=\theta_{1}$.
According to Lemma 2, this implies that
$\theta_{0}=\sigma(1-\alpha)+\tau(1-\phi_{0})$. Next we show that
$\theta_{0}=\sigma(1-\alpha)+\tau(1-\phi_{0})$ cannot be an optimal threshold
at $\phi_{0}$.
Recall that
$\Pi^{*}_{0}(\phi_{0})=\arg\max_{\theta_{0}}R(\phi_{0},\theta_{0})+\gamma
V(S(\phi_{0},\theta_{0})).$
So if $\sigma(1-\alpha)+\tau(1-\phi_{0})\in\Pi^{*}_{0}(\phi_{0})$, it must
satisfy the following first-order condition:
$\frac{\partial}{\partial\theta_{0}}\left(R(\phi_{0},\theta_{0})+\gamma V(S(\phi_{0},\theta_{0}))\right)=\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})+\gamma\frac{\partial}{\partial\theta_{0}}S(\phi_{0},\theta_{0})\frac{\partial}{\partial\phi_{0}}V(S(\phi_{0},\theta_{0}))=0$
But note that $\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})$ at
$\theta_{0}=\sigma(1-\alpha)+\tau(1-\phi_{0})$ is 0:
$\displaystyle\frac{\partial}{\partial\theta_{0}}R(\phi_{0},\theta_{0})$
$\displaystyle=$
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\theta_{0}}{1-\phi_{0}}\right)$
$\displaystyle=$
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{\sigma(1-\alpha)+(1-\phi_{0})\tau-\left(\sigma(1-\alpha)+\tau(1-\phi_{0})\right)}{1-\phi_{0}}\right)$
$\displaystyle=$
$\displaystyle\frac{\phi_{0}}{\sigma}\left(\frac{0}{1-\phi_{0}}\right)=0$
So $\theta_{0}=\sigma(1-\alpha)+\tau(1-\phi_{0})$ can only be optimal if
$\frac{\partial}{\partial\theta_{0}}S(\phi_{0},\theta_{0})\frac{\partial}{\partial\phi_{0}}V(S(\phi_{0},\theta_{0}))=0.$
But this equality cannot hold because
$\frac{\partial}{\partial\theta_{0}}S(\phi_{0},\theta_{0})=\frac{\phi_{0}\theta_{0}}{\sigma}>0,$
and $\frac{\partial}{\partial\phi_{0}}V(S(\phi_{0},\theta_{0}))<0$. So
$\theta_{0}=\sigma(1-\alpha)+\tau(1-\phi_{0})$ cannot be the optimal threshold
at $\phi_{0}$.
### C.2 Derivation of $\gamma^{*}$, $\tau^{*}$, and $\alpha^{*}$
The precise derivation of $\phi_{0}^{*}$ in Equation 11 allows us to gain
insight into how the interaction between the primitive parameters of our model
can promote or avert affirmative action. In Figure 1, we focus on the
interactions among $\alpha,\tau,\gamma$ (for simplicity assuming that
$\sigma=1-\tau$) and illustrate the regimes of persistent affirmative action
(i.e., $\phi_{0}^{*}\leq 0$). We define and investigate the following
quantities:
* •
Given $\tau$ and $\alpha$, $\gamma^{*}$ specifies the minimum discount factor
required for $\phi_{0}^{*}\leq 0$.
* •
Given $\gamma$ and $\alpha$, $\tau^{*}$ specifies the maximum level of $\tau$
that can maintain $\phi_{0}^{*}\leq 0$.
* •
Given $\tau$ and $\gamma$, $\alpha^{*}$ specifies the minimum level of
opportunities required for $\phi_{0}^{*}\leq 0$.
Next, we derive the above quantities using Equation 11. In what follows, we
assume $\tau>\alpha\sigma$. This is because $\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\geq 1$, so
$1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\;\leq\; 1-\frac{\alpha\sigma}{2\tau}\left(1+1\right)\;=\; 1-\frac{\alpha\sigma}{\tau},$
and if $\tau<\alpha\sigma$, then $1-\frac{\alpha\sigma}{\tau}<1-1=0$, hence
$\phi^{*}_{0}=0$.
#### Derivation of $\tau^{*}$
Assuming that $\alpha>0$, $\tau,\gamma\in(0,1)$, $\sigma=1-\tau$, and
$2\tau>\sigma\alpha$,
$1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\leq 0$
$\Leftrightarrow\quad 1\leq\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)$
$\Leftrightarrow\quad \frac{2\tau}{\alpha\sigma}\leq 1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}$ (since $\alpha>0$ and $0<\tau<1$)
$\Leftrightarrow\quad \frac{2\tau}{\alpha\sigma}-1\leq\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}$
$\Leftrightarrow\quad \left(\frac{2\tau}{\alpha\sigma}-1\right)^{2}\leq 1+\frac{2\tau\gamma}{1-\gamma}$ (squaring both sides; the left side is nonnegative since $2\tau>\alpha(1-\tau)$)
$\Leftrightarrow\quad \frac{4\tau^{2}}{\alpha^{2}\sigma^{2}}-\frac{4\tau}{\alpha\sigma}+1\leq 1+\frac{2\tau\gamma}{1-\gamma}$
$\Leftrightarrow\quad \frac{4\tau^{2}}{\alpha^{2}\sigma^{2}}-\frac{4\tau}{\alpha\sigma}\leq\frac{2\tau\gamma}{1-\gamma}$
$\Leftrightarrow\quad \frac{2\tau}{\alpha^{2}\sigma^{2}}-\frac{2}{\alpha\sigma}\leq\frac{\gamma}{1-\gamma}$ (dividing by $2\tau>0$)
$\Leftrightarrow\quad 2\tau-2\alpha\sigma\leq\frac{\gamma\alpha^{2}\sigma^{2}}{1-\gamma}$ (18)
$\Leftrightarrow\quad 2\tau-2\alpha(1-\tau)\leq\frac{\gamma\alpha^{2}}{1-\gamma}(1-\tau)^{2}$ (using $\sigma=1-\tau$)
$\Leftrightarrow\quad 2-2(1-\tau)-2\alpha(1-\tau)\leq\frac{\gamma\alpha^{2}}{1-\gamma}(1-\tau)^{2}$
$\Leftrightarrow\quad 0\leq\frac{\gamma\alpha^{2}}{1-\gamma}(1-\tau)^{2}+2(1+\alpha)(1-\tau)-2$
$\Leftrightarrow\quad 0\leq\frac{\gamma\alpha^{2}}{2(1-\gamma)}(1-\tau)^{2}+(1+\alpha)(1-\tau)-1$
$\Leftrightarrow\quad \tau\leq 1-\frac{-(1+\alpha)+\sqrt{(1+\alpha)^{2}+\frac{2\gamma\alpha^{2}}{1-\gamma}}}{\frac{\gamma\alpha^{2}}{1-\gamma}}$
where the last line is derived by obtaining the roots of the quadratic
function $q(x)=\frac{\gamma\alpha^{2}}{2(1-\gamma)}x^{2}+(1+\alpha)x-1$, as
follows:
$x^{*}_{1}=\frac{-(1+\alpha)-\sqrt{(1+\alpha)^{2}+\frac{2\gamma\alpha^{2}}{(1-\gamma)}}}{\frac{\gamma\alpha^{2}}{(1-\gamma)}}\text{
,
}x^{*}_{2}=\frac{-(1+\alpha)+\sqrt{(1+\alpha)^{2}+\frac{2\gamma\alpha^{2}}{(1-\gamma)}}}{\frac{\gamma\alpha^{2}}{(1-\gamma)}}.$
Note that $\frac{\gamma\alpha^{2}}{2(1-\gamma)}>0$, so $q(x)\geq 0$ if and
only if $x\leq x^{*}_{1}$ or $x\geq x^{*}_{2}$. Since $x^{*}_{1}<0$, for
positive $x$ we have $q(x)\geq 0$ if and only if $x\geq x^{*}_{2}$. Replacing
$x$ with $(1-\tau)$ gives equation (18).
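As a numerical sanity check of this derivation, the following Python sketch (function name ours) computes $\tau^{*}=1-x^{*}_{2}$ and verifies that the unclipped threshold $\tilde{\phi}_{0}$ vanishes there:

```python
import math

def tau_star(alpha, gamma):
    """tau* = 1 - x2*, where x2* is the positive root of
    q(x) = gamma*alpha^2/(2*(1-gamma)) * x^2 + (1+alpha)*x - 1."""
    a = gamma * alpha**2 / (2 * (1 - gamma))
    b = 1 + alpha
    x2 = (-b + math.sqrt(b**2 + 4 * a)) / (2 * a)  # positive root of q
    return 1 - x2

# Check: at tau = tau*, with sigma = 1 - tau, phi0_tilde should be ~0.
t = tau_star(alpha=0.2, gamma=0.9)
s = 1 - t
phi0_tilde = 1 - (0.2 * s / (2 * t)) * (1 + math.sqrt(1 + 2 * t * 0.9 / (1 - 0.9)))
print(t, phi0_tilde)  # ~0.2509, ~0.0
```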
#### Derivation of $\alpha^{*}$
Assuming that $\tau\in(0,1)$ and $\gamma<1$,
$1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\leq 0$
$\Leftrightarrow\quad 1\leq\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)$
$\Leftrightarrow\quad \frac{2\tau}{\sigma}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)^{-1}\leq\alpha$
#### Derivation of $\gamma^{*}$
Assuming that $\alpha,\tau,\sigma,\gamma\in(0,1)$ and $\tau>\sigma\alpha$,
$1-\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)\leq 0$
$\Leftrightarrow\quad 1\leq\frac{\alpha\sigma}{2\tau}\left(1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}\right)$
$\Leftrightarrow\quad \frac{2\tau}{\alpha\sigma}\leq 1+\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}$
$\Leftrightarrow\quad \frac{2\tau}{\alpha\sigma}-1\leq\sqrt{1+\frac{2\tau\gamma}{1-\gamma}}$
$\Leftrightarrow\quad \left(\frac{2\tau}{\alpha\sigma}-1\right)^{2}\leq 1+\frac{2\tau\gamma}{1-\gamma}$ (squaring both sides; the left side is nonnegative since $\tau>\sigma\alpha$)
$\Leftrightarrow\quad \left(\frac{2\tau}{\alpha\sigma}-1\right)^{2}-1\leq\frac{2\tau\gamma}{1-\gamma}$
$\Leftrightarrow\quad \frac{1}{2\tau}\left(\frac{2\tau}{\alpha\sigma}-1\right)^{2}-\frac{1}{2\tau}\leq\frac{\gamma}{1-\gamma}$
$\Leftrightarrow\quad 1-\left(\frac{1}{2\tau}\left(\frac{2\tau}{\alpha\sigma}-1\right)^{2}-\frac{1}{2\tau}+1\right)^{-1}\leq\gamma$
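The last lines of the two derivations above translate directly into code. The sketch below (function names ours) also round-trips the two quantities as a consistency check:

```python
import math

def alpha_star(tau, gamma):
    """Minimum alpha with phi0* <= 0, for sigma = 1 - tau."""
    sigma = 1 - tau
    return (2 * tau / sigma) / (1 + math.sqrt(1 + 2 * tau * gamma / (1 - gamma)))

def gamma_star(alpha, tau):
    """Minimum gamma with phi0* <= 0, for sigma = 1 - tau; needs tau > sigma*alpha."""
    sigma = 1 - tau
    c = ((2 * tau / (alpha * sigma) - 1) ** 2 - 1) / (2 * tau)
    return 1 - 1 / (1 + c)

a = alpha_star(tau=0.5, gamma=0.9)
print(a, gamma_star(a, tau=0.5))  # the second value recovers gamma = 0.9
```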
Simple finite elements and multigrid for efficient mass-consistent wind
downscaling in a coupled fire-atmosphere model
J. Mandel${}^{1}$, A. Farguell${}^{2}$, A. K. Kochanski${}^{2}$, D. V. Mallia${}^{3}$, K. Hilburn${}^{4}$
${}^{1}\,$University of Colorado Denver, Denver, CO
${}^{2}\,$San José State University, San José, CA
${}^{3}\,$University of Utah, Salt Lake City, UT
${}^{4}\,$Colorado State University, Fort Collins, CO
## 1 Introduction
In the coupled atmosphere-fire model WRF-SFIRE [6, 7], the Weather Research
and Forecasting (WRF) model [12] runs at 300 m–1 km horizontal resolution, while
the fire model runs at a resolution of 30 m or finer. The wind has a fundamental
effect on fire behavior and the topography details have a strong effect on the
wind, but WRF does not see the topography on the fire grid scale. We want to
downscale the wind from WRF to account for the fine-scale terrain. For this
purpose, we fit the wind from WRF with a divergence-free flow over the
detailed terrain. Such methods, called mass-consistent approximations, were
originally proposed on regular grids [10, 11] for urban and complex terrain
modeling, with terrain and surface features modeled by excluding entire grid
cells from the domain. For fire applications, WindNinja [13] uses finite
elements on a terrain-following grid. The resulting equations are generally
solved by iterative methods such as SOR, which converge slowly, so use of GPUs
is of interest [2]. A multigrid method with a terrain-following grid by a
change of coordinates was proposed in [15].
The method proposed here is to be used in every time step of WRF-SFIRE in the
place of interpolation to the fire model grid. Therefore, it needs to have the
potential to (1) scale to hundreds or thousands of processors using the WRF
parallel infrastructure [14]; (2) scale to domain sizes of at least 100 km by
100 km horizontally, with $3000\times 3000\times 15$ grid cells or more; (3)
have reasonable memory requirements per grid point; (4) not add to the cost of
the time step significantly when started from the solution in the previous
time step; and, (5) adapt to the problem automatically, with minimum or no
parameters to be set by the user.
## 2 Finite element formulation
Given vector field $\boldsymbol{u}_{0}$ on domain
$\Omega\subset\mathbb{R}^{d}$, subset $\Gamma\subset\partial\Omega$, and
$d\times d$ symmetric positive definite coefficient matrix
$\boldsymbol{A}=\boldsymbol{A}\left(\boldsymbol{x}\right)$, we want to find
the closest divergence-free vector field $\boldsymbol{u}$ by solving the
problem
$\min_{\boldsymbol{u}}\frac{1}{2}\int\limits_{\Omega}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)\cdot\boldsymbol{A}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)d\boldsymbol{x}\text{\quad
subject to }\operatorname{div}\boldsymbol{u}=0\text{ in }\Omega\text{ and
}\boldsymbol{u}\cdot\boldsymbol{n}=0\text{ on }\Gamma,$ (1)
where $\Gamma$ is the bottom of the domain (the surface), and
$\boldsymbol{A}\left(\boldsymbol{x}\right)$ is a $3\times 3$ diagonal matrix
with penalty constants $a_{1}^{2},a_{2}^{2},a_{3}^{2}$ on the diagonal.
Enforcing the constraints in (1) by a Lagrange multiplier $\lambda$, we obtain
the solution $\left(\boldsymbol{u},\lambda\right)$ as a stationary point of
the Lagrangean
$\mathcal{L}\left(\boldsymbol{u},\lambda\right)=\frac{1}{2}\int\limits_{\Omega}\boldsymbol{A}\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)\cdot\left(\boldsymbol{u}-\boldsymbol{u}_{0}\right)d\boldsymbol{x}+\int\limits_{\Omega}\lambda\operatorname{div}\boldsymbol{u}d\boldsymbol{x}-\int\limits_{\Gamma}\lambda\boldsymbol{n}\cdot\boldsymbol{u}d\boldsymbol{s}.$
(2)
Eliminating $\boldsymbol{u}$ from the stationarity conditions
$\partial\mathcal{L}(\boldsymbol{u},\lambda)/\partial\lambda=0$ and
$\partial\mathcal{L}(\boldsymbol{u},\lambda)/\partial\boldsymbol{u}=0$ by
$\boldsymbol{u}=\boldsymbol{u}_{0}+\boldsymbol{A}^{-1}\operatorname{grad}\lambda$
(3)
leads to the generalized Poisson equation for Lagrange multiplier $\lambda$,
$-\operatorname{div}\boldsymbol{A}^{-1}\operatorname{grad}\lambda=\operatorname{div}\boldsymbol{u}_{0}\text{
on }\Omega,\quad\lambda=0\text{ on }\partial\Omega\setminus\Gamma,\text{
\quad}\boldsymbol{n\cdot A}^{-1}\operatorname{grad}\lambda=-\boldsymbol{n\cdot
u}_{0}\text{ on }\Gamma.$ (4)
Multiplication of (4) by a test function $\mu$, $\mu=0$ on
$\partial\Omega\setminus\Gamma$, and integration by parts yields the
variational form to find $\lambda$ such that $\lambda=0$ on
$\partial\Omega\setminus\Gamma$ and
$\int_{\Omega}\boldsymbol{A}^{-1}\operatorname{grad}\lambda\cdot\operatorname{grad}\mu\,d\boldsymbol{x}=-\int_{\Omega}\operatorname{grad}\mu\cdot\boldsymbol{u}_{0}d\boldsymbol{x}$
(5)
for all $\mu$ such that $\mu=0$ on $\partial\Omega\setminus\Gamma$. The
solution is then recovered from (3). We proceed formally here; see [5] for a
different derivation of (5) in a function-space setting.
The variational problem (5) is discretized by standard isoparametric 8-node
hexahedral finite elements, e.g., [4]. The integral on the left-hand side of
(5) is evaluated by tensor-product Gauss quadrature with two nodes in each
dimension, while for the right-hand side, one-node quadrature at the center of
the element is sufficient. The same code for the derivatives of a finite
element function is used to evaluate $\operatorname{grad}$ $\lambda$ in (3) at
the center of each element.
The unknown $\lambda$ is represented by its values at element vertices, and
the wind vector is represented naturally by its values at element centers. No
numerical differentiation of $\lambda$ from its nodal values, computation of
the divergence of the initial wind field $\boldsymbol{u}_{0}$, or explicit
implementation of the boundary condition on $\operatorname{grad}\lambda$ in
(4) is needed. These are all taken care of by the finite elements naturally.
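For concreteness, the following NumPy sketch (ours, not the WRF-SFIRE production code) spells out the element-level computations just described: the $8\times 8$ element stiffness matrix for the left-hand side of (5) by $2\times 2\times 2$ Gauss quadrature, the one-point right-hand side, and the recovery of the wind at the element center via (3):

```python
import numpy as np

VERTS = np.array([[-1,-1,-1],[1,-1,-1],[1,1,-1],[-1,1,-1],
                  [-1,-1, 1],[1,-1, 1],[1,1, 1],[-1,1, 1]], float)
GP = (-1/np.sqrt(3.0), 1/np.sqrt(3.0))  # 2-node Gauss rule, unit weights

def grad_shape(xi, eta, zeta):
    # Gradients of the 8 trilinear shape functions w.r.t. (xi, eta, zeta).
    g = np.empty((8, 3))
    for i, (a, b, c) in enumerate(VERTS):
        g[i] = (a*(1+b*eta)*(1+c*zeta),
                (1+a*xi)*b*(1+c*zeta),
                (1+a*xi)*(1+b*eta)*c)
    return g / 8.0

def element_matrices(xyz, A_inv, u0):
    """Element stiffness and load for (5); xyz: (8,3) vertex coordinates,
    A_inv: the 3x3 matrix A^{-1}, u0: initial wind at the element center."""
    Ke = np.zeros((8, 8))
    for xi in GP:
        for eta in GP:
            for zeta in GP:
                G = grad_shape(xi, eta, zeta)
                J = G.T @ xyz                 # Jacobian of the isoparametric map
                B = G @ np.linalg.inv(J).T    # physical gradients of shape functions
                Ke += np.linalg.det(J) * (B @ A_inv @ B.T)
    # One-point quadrature (element center, weight 8) for the right-hand side.
    G0 = grad_shape(0.0, 0.0, 0.0)
    B0 = G0 @ np.linalg.inv(G0.T @ xyz).T
    fe = -8.0 * np.linalg.det(G0.T @ xyz) * (B0 @ u0)
    return Ke, fe, B0

def recover_wind(B0, lam, A_inv, u0):
    # Equation (3) at the element center: u = u0 + A^{-1} grad(lambda).
    return u0 + A_inv @ (B0.T @ lam)
```

The same gradient routine serves both the quadrature loop and the recovery step, mirroring the reuse of the finite element derivative code noted above.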
## 3 Multigrid iterations
The finite element method for (5) results in a system of linear equations
$Ku=f$. The values of the solution are defined on a grid, which we will call a
_fine grid_. One cycle of the multigrid method consists of several iterations
of a basic iterative method, such as Gauss-Seidel, called a _smoother_ ,
followed by a _coarse-grid correction_. A prolongation matrix $P$ is
constructed to interpolate values from a coarse grid, in the simplest case
consisting of every other node, to the fine grid. For a given approximate
solution $u$ after the smoothing, we seek an improved solution in the form
$u+Pu_{c}$ variationally, by solving
$P^{\top}K\left(u+Pu_{c}\right)=P^{\top}f$ (6)
for $u_{c}$, and obtain the coarse-grid correction procedure as
$f_{c}=P^{\top}\left(f-Ku\right)$   (form the coarse right-hand side)
$K_{c}=P^{\top}KP$   (form the coarse stiffness matrix)
$K_{c}u_{c}=f_{c}$   (solve the coarse-grid problem)   (7)
$u\leftarrow u+Pu_{c}$   (insert the coarse-grid correction)
The coarse grid correction is followed by several more smoothing steps, which
completes the multigrid cycle.
In the simplest case, $P$ is a linear interpolation and the coarse stiffness
matrix $K_{c}$ is the stiffness matrix for a coarse finite element
discretization on a grid with each coarse-grid element taking the place of a
$2\times 2\times 2$ agglomeration of fine-grid elements. That makes it
possible to apply the same method to the coarse-grid problem (7) recursively.
This process creates a hierarchy of coarser grids. Eventually, the coarsest
grid problem is solved by a direct method, or one can just do some more
iterations on it.
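Here is a minimal NumPy sketch of one such variational two-grid cycle on a 1D Poisson model problem; this is our own toy illustration of the procedure in (7), not code from the prototype cited below:

```python
import numpy as np

def gauss_seidel(K, f, u, sweeps):
    # Forward Gauss-Seidel sweeps, used as the smoother.
    for _ in range(sweeps):
        for i in range(len(u)):
            u[i] = (f[i] - K[i, :] @ u + K[i, i] * u[i]) / K[i, i]
    return u

def two_grid_cycle(K, f, u, P, sweeps=2):
    """Smooth, apply the coarse-grid correction (7), smooth again."""
    u = gauss_seidel(K, f, u, sweeps)
    fc = P.T @ (f - K @ u)               # coarse right-hand side
    Kc = P.T @ K @ P                     # coarse (Galerkin) stiffness matrix
    u = u + P @ np.linalg.solve(Kc, fc)  # insert the coarse-grid correction
    return gauss_seidel(K, f, u, sweeps)

# Toy test: 1D Poisson matrix on 15 interior nodes, linear-interpolation P.
n = 15
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P = np.zeros((n, (n - 1) // 2))
for j in range((n - 1) // 2):
    P[2 * j:2 * j + 3, j] = (0.5, 1.0, 0.5)
f, u = np.ones(n), np.zeros(n)
for _ in range(5):
    u = two_grid_cycle(K, f, u, P)
print(np.linalg.norm(f - K @ u))         # residual drops by orders of magnitude
```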
Multigrid methods gain their efficiency from the fact that simple iterative
methods like Gauss-Seidel update the value of the solution at a node based on
the differences between the values at that node and at its neighboring nodes.
When the error values at neighboring nodes become close, the error can be well
approximated in the range of the prolongation $P$, and the coarse-grid
correction can find $u_{c}$ such that $u+Pu_{c}$ is a much better
approximation of the solution.
For analysis of variational multigrid methods and further references, see [1,
8].
Multigrid methods are very efficient. For simple elliptic problems, such as
the Poisson equation on a regular grid, convergence rates of about $0.1$
(reduction of the error by a factor of $10$) at the cost of $4$ to $5$ Gauss-
Seidel sweeps on the finest grid are expected [3]. However, the convergence
rates get worse on more realistic grids, and adaptations are needed. As the
smoother, we choose vertical Gauss-Seidel sweeps from the bottom of the grid
to the top, with the columns ordered horizontally red-black into $4$ groups.
For the base method, we
use $2\times 2\times 2$ coarsening and construct $P$ so that the vertices of
every $2\times 2\times 2$ agglomeration of elements interpolate to the fine-
grid nodes in the agglomeration, with the same weights as the trilinear
interpolation on a regular grid. The interpolation is still trilinear on a
stretched grid, but only approximately trilinear on a deformed terrain-
following grid.
The base method works as expected as long as the grid directions are not too
tightly coupled. If some are, we mitigate the slower convergence by semicoarsening
[9]: After smoothing, the error is smoother in the tightly coupled
direction(s), which indicates that we should not coarsen the other
direction(s). When the grid is stretched vertically away from the ground, the
nodes are relatively closer and thus tightly coupled in the horizontal
direction. Similarly, when the penalty coefficient $a_{3}$ in the vertical
direction is larger than $a_{1}$ and $a_{2}$ in the horizontal directions, the
neighboring nodes in the vertical direction are tightly coupled numerically.
The algorithm we use to decide on coarsening is as follows. Suppose that the
penalty coefficients are $a_{1}=a_{2}=1$ and $a_{3}\geq 1$, and at the bottom
of the grid, the grid spacing is $h_{1}=h_{2}$ (horizontal) and $h_{3}$
(vertical). If $h_{3}/(h_{1}a_{3})>1/3$, coarsen in the horizontal directions
by $2$; otherwise do not coarsen. Then, replace $h_{1}$ and $h_{2}$ by their
new values, coarsened (multiplied by $2$) or not, and for every horizontal
layer from the ground up, if $h_{3}/(h_{1}a_{3})<3$, coarsen about that layer
vertically; otherwise do not coarsen. This algorithm keeps the coarse grids
logically Cartesian, which is important for computational efficiency and for
keeping the code simple, and it keeps the convergence rate at about $0.28$ or
better with four smoothing steps per cycle.
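In code, the layer-by-layer decision reads roughly as follows (a sketch under the stated assumptions $a_{1}=a_{2}=1$; the function and argument names are ours):

```python
def choose_coarsening(h1, h3_layers, a3):
    """Decide semicoarsening per the rule above.
    h1: horizontal spacing (h1 = h2); h3_layers: vertical spacing of each
    horizontal layer, ground up; a3: vertical penalty coefficient."""
    coarsen_horizontal = h3_layers[0] / (h1 * a3) > 1.0 / 3.0
    if coarsen_horizontal:
        h1 = 2 * h1                   # horizontal spacing after coarsening
    coarsen_vertical = [h3 / (h1 * a3) < 3.0 for h3 in h3_layers]
    return coarsen_horizontal, coarsen_vertical
```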
## 4 Conclusion
We have presented a simple and efficient finite element formulation of mass-
consistent approximation, and a multigrid iterative method with adaptive
semicoarsening, which maintains the convergence rate of the iterations over a
range of grids and penalty coefficients. A prototype code is available at
https://github.com/openwfm/wrf-fire-matlab/tree/femwind/femwind.
Acknowledgement: This work has been supported by NSF grant ICER-1664175 and
NASA grant 80NSSC19K1091.
## References
* [1] R. E. Bank and T. Dupont: _An optimal order process for solving finite element equations_ , Math. Comp., 36 (1981), pp. 35–51.
* [2] B. Bozorgmehr, Z. Patterson, P. Willemsen, J. A. Gibbs, R. Stoll, J. J. Kim, and E. R. Pardyjak: _A CUDA-based implementation of a fast response urban wind model_. 100th American Meteorological Society Annual Meeting, 2020. https://ams.confex.com/ams/2020Annual/meetingapp.cgi/Paper/366583: accessed December 28, 2020.
* [3] A. Brandt: _Multi-level adaptive solutions to boundary-value problems_ , Math. Comp., 31 (1977), pp. 333–390.
* [4] T. J. R. Hughes: _The finite element method_ , Prentice Hall, Inc., Englewood Cliffs, NJ, 1987.
* [5] L. H. Juárez, M. L. Sandoval, J. López, and R. Reséndiz: _Mass-consistent wind field models: Numerical techniques by $L^{2}$ projection methods_, in Fluid Dynamics, Computational Modeling and Applications, L. H. Juárez, ed., IntechOpen, Rijeka, 2012, ch. 2, pp. 23–40.
* [6] J. Mandel, S. Amram, J. D. Beezley, G. Kelman, A. K. Kochanski, V. Y. Kondratenko, B. H. Lynn, B. Regev, and M. Vejmelka: _Recent advances and applications of WRF-SFIRE_ , Natural Hazards and Earth System Sciences, 14 (2014), pp. 2829–2845.
* [7] J. Mandel, J. D. Beezley, and A. K. Kochanski: _Coupled atmosphere-wildland fire modeling with WRF 3.3 and SFIRE 2011_ , Geoscientific Model Development, 4 (2011), pp. 591–610.
* [8] J. Mandel, S. McCormick, and R. Bank: _Variational multigrid theory_ , in Multigrid methods, vol. 3 of Frontiers Appl. Math., SIAM, Philadelphia, PA, 1987, pp. 131–177.
* [9] E. Morano, D. J. Mavriplis, and V. Venkatakrishnan: _Coarsening strategies for unstructured multigrid techniques with application to anisotropic problems_ , SIAM J. Sci. Comput., 20 (1998), pp. 393–415.
* [10] C. A. Sherman: _A mass-consistent model for wind fields over complex terrain_ , Journal of Applied Meteorology, 17 (1978), pp. 312–319.
* [11] B. Singh, B. S. Hansen, M. J. Brown, and E. R. Pardyjak: _Evaluation of the QUIC-URB fast response urban wind model for a cubical building array and wide building street canyon_ , Environmental Fluid Mechanics, 8 (2008), pp. 281–312.
* [12] W. C. Skamarock, J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, M. G. Duda, X.-Y. Huang, W. Wang, and J. G. Powers: _A description of the Advanced Research WRF version 3_. NCAR Technical Note 475, 2008.
* [13] N. S. Wagenbrenner, J. M. Forthofer, B. K. Lamb, K. S. Shannon, and B. W. Butler: _Downscaling surface wind predictions from numerical weather prediction models in complex terrain with WindNinja_ , Atmospheric Chemistry and Physics, 16 (2016), pp. 5229–5241.
* [14] W. Wang, C. Bruyère, M. Duda, J. Dudhia, D. Gill, M. Kavulich, K. Werner, M. Chen, H.-C. Lin, J. Michalakes, S. Rizvi, X. Zhang, J. Berner, D. Munoz-Esparza, B. Reen, S. Ha, K. Fossell, J. D. Beezley, J. L. Coen, and J. Mandel: _ARW version 4 modeling system user’s guide_. National Center for Atmospheric Research, Boulder, CO, January 2019.
* [15] Y. Wang, C. Williamson, D. Garvey, S. Chang, and J. Cogan: _Application of a multigrid method to a mass-consistent diagnostic wind model_ , Journal of Applied Meteorology, 44 (2005), pp. 1078–1089.
Inhalation drug delivery has seen a swift rise in the use of dry powder inhalers (DPIs) to treat chronic respiratory conditions. However, universal adoption of DPIs has been restrained by their low efficiencies and significant drug losses in the mouth-throat region. The aerosol efficiency of DPIs is closely related to the fluid-dynamic characteristics of the inhalation flow generated by the devices, which in turn are influenced by the device design. In-vitro deposition measurements and particle image velocimetry (PIV) have been used in this study to assess the aerosol performance of a model carrier formulation delivered by DPI devices and to investigate their flow characteristics. Four DPI device models, with modifications to their tangential inlets and the addition of a grid, have been explored. Similar aerosol performance was observed for all four device models, with an FPF larger than 50%, indicating desirable lung deposition. A highly swirling and recirculating jet-flow emerging from the mouthpiece of the DPI models without the grid was observed, which contributed to particle deposition in the throat. The DPI models where the grid was present showed a straightened outflow without undesired lateral spreading, which reduced particle deposition in the throat and mass retention in the device. These findings demonstrate that PIV measurements strengthen in-vitro evaluation, and the two can be jointly used to develop high-performance DPIs.
§ INTRODUCTION
The last few decades have seen dry powder inhalers (DPIs) evolve as a clinically-appropriate and preferred device to treat chronic respiratory conditions via aerosol drug delivery. This rapid development is due to the advantages that DPIs offer, which include delivery of larger doses, greater drug stability, and ease of use. In addition, the patient's inspiratory flow is the primary energy source for drug detachment and dispersion, thereby removing the requirement for the patient's coordination during inhalation. The need for a forceful and deep inhalation, however, creates disadvantages, such as large variability in the required inhalation effort Azouz2015, especially for patients with severe airflow limitation, low dose-emission uniformity Hindle1995, and high mouth-throat losses DeHaan2004, restricting the widespread use of DPIs in different patient populations. Moreover, despite the advances in the last decades, DPIs still suffer from poor efficiency, with conventional devices delivering approximately 20 to 30% of the nominal dose to the lungs at a normal inhalation flow rate Buttini2016.
A DPI's performance is evaluated by the particle size distribution delivered to the lungs. Efficiency is assessed using impaction studies, based on the metrics of mass median aerodynamic diameter, emitted dose, fine particle dose, and fine particle fraction. Various factors affect this performance, such as particle entrainment and de-agglomeration, the device resistance at a given inhalation flow rate, and the formulation and properties of the drug Frijlink2004, Atkins2005. The first two factors are critical as they are mainly controlled by the device design, which in turn significantly affects not only the generation and properties of the delivered aerosol, but also drug losses due to particle deposition in the mouth-throat region DeBoer2017. The properties of the aerosol that emerges as a particle-laden jet from the DPI mouthpiece are closely correlated with the fluid-dynamics characteristics of that jet. When coupled with fluid motion in the human respiratory tract, these characteristics strongly control fine particle deposition in the lungs.
Characterisation of DPI jet-flow in this study has been performed using the experimental technique of particle image velocimetry (PIV). PIV is a non-intrusive, laser-based, optical imaging technique that enables measurement of the instantaneous 2-component - 2-dimensional (2C-2D) velocity field with high spatial and temporal resolutions, and has been used previously to investigate the fluid-dynamics characteristics of flows in DPIs. The velocity fields at different planes normal to the longitudinal axis of the mouthpiece of a Spiros® model DPI for three different flow rates were measured using PIV by Han et al. *Han2002. It was found that tangential inlet jets produced a cross-flow resulting in large re-circulation flow zones close to the mouthpiece wall. PIV measurements have also been performed in the mouthpiece of an idealized DPI model with an upstream grid, which showed an increase in turbulence intensities with grid voidage Ngoc2013. Furthermore, powder dispersion measurements in a Rotahaler® model DPI using PIV have revealed that particle-grid collisions and drag force were responsible for powder de-agglomeration, whereas particle-particle and particle-grid collisions assisted in dispersion of the de-agglomerated powder Kou2016.
Pasquali et al. *Pasquali2015 measured the axial and radial velocity components across a plane perpendicular to the mouthpiece exit of a Nexthaler® DPI. The device was tested at a transient inhalation airflow that had a peak flow rate of 60 L/min and a rise time of 0.3 s. They found a slight asymmetry in the velocity magnitude field across the jet center-line, indicating the presence of high swirl levels in the internal flow. Although the mean velocities represented a moving average over only 10 vector fields, they showed that the increase in mean velocity correlated with the decrease in measured pressure difference across the inhaler. Voss and Finlay *Voss2002 used laser Doppler velocimetry to measure turbulent flow velocities in an entrainment tube rig and a Diskhaler® DPI. They found that although higher turbulence velocities caused greater particle de-agglomeration, turbulence might not be the only or most effective de-agglomeration mechanism in DPIs. Wang et al. *Wang2004 examined experimentally the effect of an impinging air jet on powder dispersion in a Ventodisk® DPI for various jet velocities, nozzle-to-surface separation distances, and dosing cup shapes. Optimum dispersion was found to occur at higher jet velocities and a nozzle-to-surface separation distance of 5 jet diameters. A recent experimental study in a channel flow with a grid placed upstream of a pocket bed of lactose carrier powder Elserfy2020 has shown that powder de-agglomeration at air flow rates of 60 L/min and above depends more on the action of aerodynamic shear forces on the agglomerates, generated by the higher mean flow and grid turbulence, than on the powder properties.
An extensive examination of the flow emerging from a DPI is important to fully understand aerosol dispersion from the device. None of the previous experimental studies has extensively quantified the jet-flow from a DPI, including the distributions of mean and turbulent flow statistics and the changes in the flow caused by modifications to the DPI design. The present study addresses these issues by carrying out an experimental investigation of the fluid-dynamics characteristics of flows originating from DPIs having different inlet configurations and grid positions. These results are then used to corroborate the findings of in-vitro studies performed on the same DPIs.
§ MATERIALS AND METHODS
§.§ Fluid Mechanics Scaling
An important dimensionless quantity that characterises fluid flows is the Reynolds number, which for a DPI can be defined as
\begin{equation}
Re_a = \frac{U_aD_a}{\nu_a}
\end{equation}
where ${U_a}$ is the characteristic velocity, taken as the average flow velocity at the DPI mouthpiece exit, ${D_a}$ is the characteristic length, taken as the mouthpiece exit inner-diameter, and ${\nu_a}$ is the kinematic viscosity of the fluid. The subscript $'a'$ here refers to air as the fluid.
For an actual DPI with ${D_a}$ = 10 mm and an inspiratory air flow-rate of ${Q_a}$ = 60 L/min, as recommended for medium airflow resistance DPIs Byron1994, Ari2020, DeBoer2003, this yields ${U_a}$ = 12.74 m/s and ${Re_a \approx}$ 8400. PIV experiments can then be performed at dynamically similar conditions if the experimental flow is at the same Reynolds number. Let us consider that these experiments are to be performed using water as the working fluid, such that
\begin{equation}
Re_w = \frac{U_wD_w}{\nu_w}={Re_a}
\end{equation}
where the subscript, $w$, refers to water in this case. This results in the following relationship between the geometric and dynamic flow conditions required between the water-based experiment and the dynamically equivalent air-based DPI flows as
\begin{equation}
\frac{U_w}{U_a} \frac{D_w}{D_a} = \frac{\nu_w}{\nu_a}
\end{equation}
The value of \({\nu_w}/{\nu_a}\) at a normal room temperature of $20 \degree$C is 0.066, which means that the geometric scaling factor, defined by \({S_f}={D_w}/{D_a}\), can be chosen to be greater than 1 such that \({U_w}<{U_a}\). So, for a scale factor of \({S_f} = 3\), ${D_w}$ = 30 mm with ${U_w}$ = 0.281 m/s, which is well over an order of magnitude lower than ${U_a}$. This permits PIV measurements in the water-based model to be performed with higher spatial and temporal resolution than in the dynamically equivalent, smaller air-based model.
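The scaling argument is easy to verify numerically; a short Python sketch (the kinematic viscosities are assumed textbook values near 20 °C):

```python
import math

NU_AIR, NU_WATER = 1.5e-5, 1.0e-6      # m^2/s (assumed standard values)

D_a = 0.010                            # mouthpiece exit diameter, m
Q_a = 60e-3 / 60.0                     # 60 L/min in m^3/s
U_a = Q_a / (math.pi * (D_a / 2)**2)   # ~12.7 m/s
Re_a = U_a * D_a / NU_AIR              # ~8400

S_f = 3.0                              # geometric scale factor
D_w = S_f * D_a                        # 30 mm
U_w = U_a * (NU_WATER / NU_AIR) / S_f  # from U_w D_w / nu_w = U_a D_a / nu_a
print(U_a, Re_a, U_w)                  # ~12.7 m/s, ~8.5e3, ~0.28 m/s
```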
§.§ DPI Device Models
Four DPI models have been used in this study. The DPI models used with air for the in-vitro study and with water for the PIV experiments are geometrically similar, with the models for the latter being scaled up by a factor of three, as explained in the previous section. These models have a fixed mouthpiece exit inner-diameter of ${D_a}$ = 10 mm and ${D_w}$ = 30 mm, respectively, with a uniform circular inner cross-section for the mouthpiece.
The models differ in their configurations of tangential inlets and grid positions as shown in Fig. <ref>. The four models shown in the top row were used for the in-vitro study, while those in the bottom row were used for the PIV experiments. The model in Fig. <ref>(a) had 2 tangential inlets spaced $180\degree$ apart, while the model in Fig. <ref>(b) had 6 tangential inlets spaced $60\degree$ apart, with the summed area of the tangential inlets in the two models being the same. The model in Fig. <ref>(c) had a grid positioned just above the tangential inlets, whereas the model in Fig. <ref>(d) had the same grid positioned at the mouthpiece exit. The grid in the in-vitro models had square holes of side 1 mm, spaced 0.5 mm apart; for the experimental models it was geometrically scaled up by a factor of three. Each model had the same dosing cup design, a hollow hemisphere. For the experimental DPI models, the dosing cup was integrated into a base fixture with a bottom flange to facilitate mounting the model in the experimental rig.
DPI device models examined in this study
The in-vitro models were 3D printed in FormLabs Clear Resin v4 (Methylacrylic Oligomer 75-90%, Methylacrylic monomer 25-50%, Diphenyl(2,4,6-trimethylbenzoyl)phosphine oxide < 1%) in a Form 3 3D printer (FormLabs, Somerville, USA). The support material was removed after printing and the models were washed in the Form Wash (FormLabs, Somerville, USA) using fresh isopropyl alcohol for 15 minutes, followed by curing in the Form Cure (FormLabs, Somerville, USA) for 30 minutes at 65$^{\circ}$C. The experimental models were 3D printed in ABSplus thermoplastic material with a layer thickness of 0.254 mm, and the model outer surfaces were coated with urethane to prevent any structural porosity while immersed in water.
§.§ In-vitro Measurements
§.§.§ Materials
The formulation used, comprising micronised beclomethasone dipropionate (BDP), a pre-blend composed of micronised magnesium stearate and micronised lactose, and coarse granular α-lactose monohydrate (212 - 350), was prepared as described by Yeung et al. *Yeung2019. High-performance liquid chromatography (HPLC) grade methanol was purchased from Honeywell (North Carolina, USA); ethanol and isopropanol were obtained from ChemSupply (Sydney, Australia). Deionised water, used in this study, was purified by reverse osmosis (MilliQ, MilliPore, Australia). Brij 35 and glycerol were purchased from Sigma (Sigma Aldrich, USA).
§.§.§ Formulation preparation
Due to the direct influence of the formulation on aerosol performance, and to study the effect of different device parameters on aerosol performance, a model formulation of 1% BDP (w/w) with high aerosol performance was used Yeung2018. This formulation includes a pre-blend of magnesium stearate as an adjuvant to reduce the cohesive forces between the API (BDP) and the carrier (lactose), facilitating API detachment. A BDP-carrier based formulation containing 1% BDP (w/w), coarse lactose at 89.1% (w/w), and pre-blend magnesium stearate at 9.9% (w/w) was used in the study. The formulation was prepared as described by Yeung et al. *Yeung2019. Briefly, a pre-blend of magnesium stearate and coarse granular α-lactose was mixed at a 1:9 ratio (w/w) for 4 h at 32 rpm using a low-shear 3-dimensional shaker-mixer (Alphie-03, Hexagon Product Development PVT. LTD., Vadodara, India). Micronised BDP was added to the carrier 24 h after the carrier:pre-blend preparation to minimize the effect of electrostatic charges, and mixed for 90 minutes at 32 rpm. To remove any potential agglomerates, the formulation was sieved through a 400 sieve and mixed for a further 30 min at 32 rpm using the low-shear 3-dimensional shaker-mixer. A resting period of 24 h in a desiccator was used prior to any further analysis to minimize the effect of electrostatic charges.
§.§.§ Particle size distribution
The particle size distribution of micronised BDP was assessed using laser diffraction with a Mastersizer 3000 (Malvern, Worcestershire, UK) equipped with a Mastersizer® Aero S™ dry powder dispersion unit (Malvern, Worcestershire, UK), tray and hopper. Samples were dispersed at 4 mbar shear pressure. The refractive index used was 1.56. Five measurements were performed and analysed in the Mastersizer 3000 Software (Version 3.81). Results are reported as $D_{10}$, $D_{50}$, and $D_{90}$.
§.§.§ Particle morphology
The morphology of the micronised BDP and the BDP-loaded formulation was visualized under a scanning electron microscope (SEM, JCM-6000 Neoscope Scanning Electron Microscope, JEOL Ltd., Akishima, Tokyo, Japan). The samples were placed on circular carbon tape and sputter-coated with a 15 nm layer of gold (Smart Coater, JEOL Ltd., Akishima, Tokyo, Japan). The samples were imaged at an accelerating voltage of 10 kV.
§.§.§ Drug content uniformity
Drug content uniformity of the formulation was assessed to ensure that a homogeneous blend was obtained. The assay was performed based on the British Pharmacopoeia *Office2017. Briefly, after dispersion on wax paper, ten random powder samples of 10 mg were collected and dissolved in methanol:water (80:20, v/v) solution to dissolve the drug. Samples were vortexed for 30 s to ensure drug dissolution and filtered through 0.45 µm PTFE filters (Aireka Scientific, Zhejiang, China). The concentration of BDP was determined using a validated HPLC method as described in the following section. The content uniformity of the drug in the formulation is expressed as a mean percentage of the theoretical loaded dose ± standard deviation. The acceptance value (AV) was also calculated based on the British Pharmacopoeia *Office2017.
§.§.§ Drug quantification via HPLC
The concentration of BDP was quantified in a Shimadzu HPLC system consisting of an LC20AT pump, a SIL20AHT autosampler, and an SPD-20A UV-VIS detector (Shimadzu, Sydney, NSW, Australia) using a previously validated method Yeung2019. Chromatographic separation of BDP was achieved using a Luna C-18 column (150 $\times$ 4.6 mm, 3 µm, Phenomenex, Torrance, USA). Samples were run in a methanol:water (80:20) mobile phase at an isocratic flow of 0.8 mL/min. BDP was detected at 243 nm. The injection volume was kept at 100 µL. A calibration curve between 0.1 and 100 was used to extrapolate the concentration of BDP in the samples.
§.§.§ In-vitro aerosol performance using cascade impactor
Aerosol deposition profiling was conducted using British Pharmacopoeia Apparatus E – Next Generation Impactor (NGI, Copley, UK) connected to a critical flow controller (TPK 2100-R, Copley, UK) and a rotary pump (Copley, UK). The flow rate was set to ${Q_a}$ = 60 L/min using a calibrated flow meter (Model 4040, TSI Precision Measurement Instruments, Aachen, Germany). To minimize particle bouncing, 50 µL of Brij 35:glycerol:ethanol (10:50:40) solution was used per stage to coat all the stages of the NGI (S1-S7 and the micro-orifice collector, MOC). The USP induction port was coated with 2 ml of Brij 35:glycerol:ethanol solution, spread onto its internal surface using a brush, to prevent particle bouncing and assess throat deposition. Excess coating solution was removed by inverting the induction port for 1 minute prior to deposition. For this assay, the cascade impactor was tilted at a 45$^{\circ}$ angle to the bench to minimise any potential loss of the formulation loaded into the device from the air inlets. The NGI pre-separator was used to collect the remaining lactose carrier particles. Due to the tilted position of the NGI, a glass microfibre disc (Sartorius Stedim, Goettingen, Germany) was cut to fit into the central cup of the pre-separator and wetted with 2 ml of mobile phase, as a replacement for the 15 ml of mobile phase recommended by the Pharmacopoeia to collect the samples.
For each actuation, 10 mg of the BDP-loaded formulation was weighed into the device dosing cup. Aerosol deposition was conducted for 4 s, based on the cutoff diameters of the NGI at 60 L/min (cutoffs in µm: S1, 8.06; S2, 4.46; S3, 2.82; S4, 1.66; S5, 0.94; S6, 0.55; S7, 0.34; and micro-orifice collector, MOC, 0.00). The drug deposited in each stage was recovered using the mobile phase methanol:water (80:20) with the following volumes: device, 5 ml; adapter, 5 ml; induction port, 10 ml; pre-separator, 35 ml; S1 and MOC, 10 ml; S2-S7, 5 ml. All solutions were filtered using 0.45 µm PTFE filters prior to HPLC detection. The devices were weighed before and after actuation to determine the shot weight. As required by the British Pharmacopoeia Office2017, total mass recovery had to be within 85 - 115% of the nominal dose. Each device was tested in triplicate, with one actuation performed per run.
Data were analysed in Copley Inhaler Testing Data Analysis Software (CITDAS) (Version 3.10 Wibu, Copley, Nottingham, UK) based on the derived parameters of delivered dose (DD, the total dose recovered per experiment), fine particle dose (FPD, the mass of particles below 5 µm), fine particle fraction (FPF %ED, the percentage of particles below 5 µm relative to the emitted dose), mass median aerodynamic diameter (MMAD, calculated as the 50th percentile of the particle size distribution), and geometric standard deviation (GSD, calculated as the square root of the ratio of the 84.13th to the 15.87th percentile). As the throat was coated with Brij solution to assess the effect of the device design on throat deposition, the results of the throat and pre-separator depositions have been combined.
§.§.§ Statistical analysis of in-vitro analysis
Data are presented as mean ± standard deviation of three independent experiments (n=3). Statistical analysis was performed using GraphPad Prism Software version 8.0 (GraphPad, San Diego, USA). The $\twin$ and $\sxin$ device models were compared by a two-tailed t-test assuming a Gaussian distribution at 95% CI. The effect of the grid position was compared amongst the $\twin$, $\teng$, and $\texg$ device models using one-way analysis of variance (ANOVA) followed by a Tukey post hoc test. Differences were considered statistically significant at 95% CI (* P<0.05, ** P<0.01, *** P<0.001 and **** P<0.0001).
§.§.§ Pressure drop and device intrinsic resistance
The intrinsic resistance and pressure drop of each device were measured by connecting the induction port measurement adapter (Copley, UK) between the device and the induction port of the NGI cascade impactor. The system was connected to the critical flow controller to measure the pressure drop ($\Delta$P) over the inhaler under test (4 s), with the flow rate set to ${Q_a} = \SI{60}{\litre\per\minute}$ using a calibrated flow meter (Model 4040, TSI Precision Measurement Instruments, Aachen, Germany). The pressure drop is expressed as the mean of three independent measurements using different devices. The intrinsic resistance of the device was calculated as $\sqrt{{\Delta}P}/{Q_a}$ and is expressed in units of kPa$^{0.5}$·min·L$^{-1}$.
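The resistance values reported in Table 2 follow directly from this definition; a short Python check (the descriptive device labels stand in for the model names used in the tables):

```python
import math

def intrinsic_resistance(dp_kpa, q_lpm):
    # R = sqrt(delta P) / Q, in kPa^0.5 . min . L^-1
    return math.sqrt(dp_kpa) / q_lpm

for name, dp in [("2-inlet", 1.79), ("6-inlet", 1.37),
                 ("grid at entry", 4.14), ("grid at exit", 5.04)]:
    print(name, round(intrinsic_resistance(dp, 60.0), 4))
# reproduces 0.0223, 0.0195, 0.0339, 0.0374 from Table 2
```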
§.§ PIV Experiments
§.§.§ Experimental apparatus
A schematic of the PIV experimental setup is shown in Fig. <ref>. The experimental rig comprises the DPI model placed in a Perspex tank with a closed-loop water flow system (represented by blue lines). The inflow is at the tank base, while the outflow is from the tank top. The tank has a circular channel milled in its base from the midpoint of one of its edges to the tank centre. Water flows in through this channel with an axial outflow, impinging on a circular plate which is cut out from the base to form an annular region as shown in Fig. <ref>. Flow enters the tank from this annular region at an average velocity that is an order of magnitude lower than ${U_w}$, and then flows into the DPI model through the tangential inlets.
A confining plate with sides equal to the inner cross-section of the tank (300 mm $\times$ 300 mm) is placed flush with the DPI model mouthpiece end to ensure that the flow exits only from the mouthpiece, avoiding leakage at the interfaces between the plate, the DPI model, and the inner surfaces of the tank. The distance from the mouthpiece exit to the top of the tank is approximately 9${D_w}$. A magnetically coupled centrifugal pump provides the required pressure difference to drive the flow through the rig. The water flow rate is controlled via a globe valve and measured using a variable area flow meter placed downstream of the pump. A steady water flow rate ${Q_w}$ of approximately 12 L/min is maintained, resulting in ${Re_w \approx}$ 8400.
Experimental setup for PIV
§.§.§ PIV system and parameters
The PIV system is shown at the right of Fig. <ref>. Water in the tank is seeded with hollow glass spheres that have a mean particle diameter of 11 µm. These particles, which have been used in numerous previous fluid mechanics studies Kostas2005, Buchner2012, Gonzalez-Espinosa2014, have a density of 1.1 g/cc and a relaxation time of 7.38 µs in water, such that they follow the flow with high fidelity. A double-cavity pulsed Nd:YAG laser (New Wave Research) emitting at a wavelength of 532 nm is used to illuminate these particles. This laser can generate 120 mJ double-pulses of duration 3 - 5 ns at a repetition frequency of 15 Hz. The output laser beam is directed towards the experimental rig using an articulated mirror arm and then shaped into a thin light sheet using a telescope and plano-concave lens arrangement. The light sheet is approximately 1 mm thick and is aligned coincident with a vertical plane passing through the center of the mouthpiece exit as shown in Fig. <ref>. Single-exposed double-frame PIV images are acquired using the array sensor (4008 px $\times$ 2672 px, 9 µm pixel pitch) of a CCD camera (PCO AG pco.4000) at a frame rate of 2 Hz, with a time delay of 820 µs between the two laser pulses. A 105 mm Micro-Nikkor lens set at an aperture of f/4 is used for these experiments with an image magnification of 0.26, resulting in a spatial resolution of 35.2 µm/px. The synchronous timing signals that control the laser and the PIV image acquisition by the CCD camera are generated by a fully programmable in-house developed BBB control computer Fedrizzi2015.
The coordinate system used in this study is shown in the bottom left of Fig. <ref>, where $x$ represents the axial direction and $y$ the radial direction, with $u$ and $v$ being their respective velocity components. The particle images occupy an area of approximately 3.5${D_w}$ $\times$ 3${D_w}$ ($x$ $\times$ $y$) outwards from the mouthpiece exit. A total of 8000 PIV images were acquired for each experimental model.
§.§.§ PIV processing algorithm
Analysis of the single-exposed double-frame images is performed using multi-grid/multi-pass cross-correlation digital particle image velocimetry (MCCDPIV). The algorithm was developed by Soria *Soria1994 and is described in Soria *Soria1996 and Soria et al. *Soria1999. It employs an iterative and adaptive cross-correlation algorithm to increase the dynamic range of the measurements. This is done by adapting the sample window size to the local flow conditions and offsetting the discrete sampling window in the second frame by an amount approximately equal to the estimated particle displacement in the sampling window. The final sample window size is (48 px $\times$ 32 px) with a grid spacing of (24 px $\times$ 16 px).
The algorithm also employs a correlation-based correction, comparing correlation data from adjacent sampling windows to improve sub-pixel accuracy and eliminate spurious vectors Hart2000. A two-dimensional Gaussian function is least-squares fitted around the correlation peak region to locate the maximum of the spatial cross-correlation function to sub-pixel accuracy. A median value test and a dynamic mean value operator test are then performed to validate the resulting displacement vectors Westerweel1994.
The PIV velocity vector components are then computed from the measured pixel displacements in each sampling window, the time between the image double-frames (the time delay between the two laser pulses), and the optical magnification. The spacing between the vectors is 0.8424 mm $\times$ 0.5616 mm ($x$ $\times$ $y$). The performance and accuracy of this algorithm in the analysis of single-exposed double-frame PIV images is reported in Soria *Soria1998, wherein the uncertainty of a single-sample measurement is $\pm$ 0.06 px at a 95% confidence level.
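To make the correlation step concrete, the following simplified, single-pass NumPy sketch estimates the displacement of one interrogation window via FFT-based cross-correlation with a three-point Gaussian sub-pixel peak fit; it omits the iterative window adaptation, offsetting, correlation-based correction, and validation stages of the actual MCCDPIV algorithm, and assumes the correlation peak does not lie on the window border:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the (dx, dy) pixel displacement of win_b relative to win_a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Circular cross-correlation via the FFT; the peak sits at the displacement.
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = i - corr.shape[0] // 2, j - corr.shape[1] // 2

    def gauss_subpix(cm, c0, cp):
        # 3-point Gaussian peak interpolation (values clipped to stay positive).
        lm, l0, lp = (np.log(max(v, 1e-12)) for v in (cm, c0, cp))
        return (lm - lp) / (2 * lm - 4 * l0 + 2 * lp)

    dy += gauss_subpix(corr[i - 1, j], corr[i, j], corr[i + 1, j])
    dx += gauss_subpix(corr[i, j - 1], corr[i, j], corr[i, j + 1])
    return dx, dy  # velocity = displacement * (length per pixel) / pulse delay
```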
§.§.§ Measurement uncertainties
An uncertainty analysis based on the methodology reported by Moffat *Moffat1988 was performed. The uncertainty in the measurement of the mouthpiece exit inner-diameter is $\pm$ 0.029 mm. The variable area flow meter has an accuracy of $\pm$ 6.66% at a flow rate of ${Q_w}$ = 12 L/min. The pulse generator has a timing uncertainty of $\pm$ 5 at 2 Hz. The uncertainty in measuring the optical magnification is $\pm$ 0.61 µm/px, based on an uncertainty of $\pm$ 1 px in the image space of the calibration reference points and an uncertainty of $\pm$ 0.1 mm in the physical distance between these reference points. This gives an uncertainty of ${\epsilon_u}/{U_w}$ = 1.78% in the PIV velocity measurement. The uncertainty in the ensemble average velocity components is ${\epsilon_U}/{U_w}$ = ${\epsilon_V}/{U_w}$ = 0.02%.
§ RESULTS
§.§ In-vitro Measurements
The particle size distribution of the micronised BDP, measured via laser diffraction (Fig. <ref>(a)), showed that 90% of the particles were below 4.14 ± 0.38 µm, indicating that these particles are suitable for deep lung deposition. The mean diameter of the micronised BDP was 1.24 ± 0.06 µm, smaller than that previously reported by Yeung et al. *Yeung2018, who used the same formulation to assess the aerosol performance of lactose-loaded BDP.
Scanning electron micrographs confirmed the particle size of BDP (see Fig. <ref>(b)) and showed that the drug was successfully loaded onto the surface of the lactose carrier by the blending procedure (Fig. <ref>(c)). The use of fine and cohesive particles, like the micronised BDP in this study, may lead to issues regarding drug content, as the dose of API loaded onto the carrier is low and the mixing process can vary. To ensure that a homogeneous blend was produced, a content uniformity assay was carried out; it showed a mean content of 94.55 ± 2.23%, with an acceptance value of 9.31, complying with the British Pharmacopoeia requirements.
(a) Particle size distribution of micronised BDP; scanning electron micrographs of (b) micronised BDP and (c) pre-blend carrier system loaded with BDP (1% w/w)
Aerosol performance of all devices was assessed at ${Q_a}$ = 60 L/min using a standard 1% BDP-loaded lactose formulation (Fig. <ref>). The pressure drop and intrinsic resistance of all device models tested here are shown in Table 2. The addition of air inlets to the device models significantly decreased the mass of BDP remaining in the device, from 25.17 $\pm$ 1.40 for the $\twin$ model to 16.26 $\pm$ 1.95 for the $\sxin$ model (P<0.0001). As the total dose observed for the $\sxin$ model was significantly lower than for the $\twin$ model (P<0.05), the aerosol deposition was compared based on the percentage of the total dose delivered (%TD) using a two-tailed t-test. Throat deposition was similar between the devices (P>0.05), with a significant increase in the mass of BDP deposited on S4 and S5 (particles of aerodynamic size between 2.82 - 1.66 µm and 1.66 - 0.94 µm, respectively) (P<0.05). Similar mass depositions were observed in the remaining stages of the NGI (P>0.05).
When comparing the aerosol performance parameters (Table 1), although a significant increase in the ED (%TD) was observed for the $\sxin$ model compared with the $\twin$ model, the change in FPF (%ED) from 52.83 $\pm$ 3.45 to 61.41 $\pm$ 7.58 was not significant (P>0.05). No significant differences were observed for the FPD and MMAD values either (P>0.05), despite the lower intrinsic resistance observed for the $\sxin$ device model.
Table 1. Aerosol parameters of all 4 DPI models using a 1% (w/w) BDP – lactose formulation (mean ± SD)

Parameter                      $\twin$           $\sxin$           $\teng$           $\texg$
Total Dose                     104.96 ± 2.74     90.91 ± 4.10 b    100.90 ± 8.47     97.53 ± 8.94
Emitted Dose (%TD)             76.01 ± 1.47      82.03 ± 2.97 a    79.27 ± 1.74      73.78 ± 3.73
Fine Particle Dose             42.13 ± 2.62      45.76 ± 5.89      42.69 ± 8.73      40.42 ± 4.53
Fine Particle Fraction (%ED)   52.83 ± 3.45      61.41 ± 7.58      53.05 ± 7.17      56.25 ± 4.54
MMAD                           1.59 ± 0.09       1.72 ± 0.04       1.86 ± 0.07 a     1.78 ± 0.07 a
GSD                            1.91 ± 0.09       1.84 ± 0.15       1.99 ± 0.04       1.94 ± 0.09

Means were compared with the $\twin$ model using a two-tailed t-test; a: P<0.05; b: P<0.01
The effect of the flow straightener (grid) on the aerosol performance was investigated using the $\teng$ and $\texg$ models. Compared with the $\twin$ model without the grid, the addition of the grid increased the intrinsic device resistance to 0.0339 and 0.0374 kPa$^{0.5}$·min·L$^{-1}$ for the $\teng$ and $\texg$ models, respectively, without affecting the percentage of BDP that remained in the device. A comparison between the grid in the entry and exit positions showed that a greater mass of BDP remained in the device for the $\texg$ model (P<0.01), the device with the higher intrinsic resistance. The addition of the grid at the exit position also led to a decrease in BDP deposited in the throat + pre-separator (P<0.05). No significant differences were observed amongst the $\twin$, $\teng$, and $\texg$ models for the remaining stages of the NGI (P>0.05). Similarly, no changes in the aerodynamic performance parameters ED (%TD), FPD, and FPF (%ED) were observed, although a significantly greater MMAD was observed for the models with a grid (Table 1).
Aerosol performance of (a,c) the $\twin$ and $\sxin$ models and (b,d) the $\twin$, $\teng$, and $\texg$ models, expressed as (a,b) mass of BDP deposited and (c,d) percentage of TD (%TD)
Table 2. Pressure drop and intrinsic device resistance measured at ${Q_a}$ = 60 L/min

Device     Pressure drop (kPa)    Intrinsic resistance (kPa$^{0.5}$·min·L$^{-1}$)
$\twin$    1.79                   0.0223
$\sxin$    1.37                   0.0195
$\teng$    4.14                   0.0339
$\texg$    5.04                   0.0374
§.§ PIV Experiments
The mean velocities $U$, $V$ and root-mean-square (RMS) velocity fluctuations $u_{rms}$, $v_{rms}$ in the region outside of the DPI model mouthpiece are presented here. The mean velocities are ensemble averages calculated over the 8000 measured PIV velocity fields, whereas the RMS fluctuations are the standard deviations of those velocity components. The mean velocities and RMS fluctuations are non-dimensionalized using ${U_w}$, while the spatial coordinates are non-dimensionalized with ${D_w}$. The results are shown up to a location of $x$/${D_w}$ = 3 from the mouthpiece exit, wherein the location $x$/${D_w}$ = 0 is approximately 4 mm ($x$/${D_w}$ = 0.13) above the mouthpiece exit plane.
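In code, these statistics amount to a mean and a standard deviation over the ensemble axis; a minimal sketch (array and function names ours):

```python
import numpy as np

def ensemble_stats(u_fields, U_w):
    """u_fields: (N, ny, nx) array of N instantaneous velocity fields (m/s).
    Returns the non-dimensional ensemble-mean and RMS-fluctuation fields."""
    U_mean = u_fields.mean(axis=0) / U_w
    u_rms = u_fields.std(axis=0, ddof=1) / U_w
    return U_mean, u_rms
```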
The mean axial velocity $U/{U_w}$ distributions over the radial coordinate $y/{D_w}$ across the jet cross-section for all four models are shown in Fig. <ref>. The profiles for the $\twin$ and $\sxin$ models in Fig. <ref>(a) and <ref>(b) are similar to each other, with negative velocities in the jet central region (core) and maximum positive velocities in the jet edge regions (shear layers). Negative mean axial velocities signify a reverse flow region, which begins at the mouthpiece exit and persists downstream. These characteristics of the mean axial velocities are exemplary of a highly swirling jet flow that produces axial recirculation in the form of a central toroidal recirculation zone, due to strong radial and axial pressure gradients near the nozzle exit gupta1984swirl, Chigier1967, Giannadakis2008. The profiles for the $\teng$ model in Fig. <ref>(c) are representative of an axisymmetric jet without any swirl (Flow B in Liang and Maxworthy *Liang2005), which is a result of the flow-straightening effect produced by the grid placed after the tangential inlets. In contrast, the placement of the grid at the mouthpiece exit in the $\texg$ model does not entirely eliminate the flow swirl close to the mouthpiece exit, as shown by the negative mean axial velocities at $x/{D_w}$ = 0 in Fig. <ref>(d); these occur because the grid square holes, with a side and an axial length of 3 mm, allow the flow to exit while retaining most of its swirl. However, the profiles further downstream show that the flow-straightening effect begins to manifest, only further outside of the mouthpiece exit.
Mean axial velocities for: (a) $\twin$; (b) $\sxin$; (c) $\teng$; (d) $\texg$;
at $x$/${D_w}$ = 0, $x$/${D_w}$ = 1, $x$/${D_w}$ = 2, and $x$/${D_w}$ = 3
For the $\twin$ and $\sxin$ models, as we move downstream from the mouthpiece exit, the maximum values of the mean axial velocities decrease and the radial location at which they occur shifts away from the longitudinal jet axis $y/{D_w}$ = 0. On the other hand, for the $\teng$ model, the maximum mean axial velocities occur close to the longitudinal jet axis and do not vary much in value, whereas for the $\texg$ model, the radial locations of the maximum mean axial velocities move closer to the jet axis with increasing $x/{D_w}$. These observations show that the jet emerging from a DPI mouthpiece spreads and decays faster if a high level of swirl is present.
Mean radial velocities for the (a) $\twin$; (b) $\sxin$; (c) $\teng$; (d) $\texg$ models at $x/{D_w}$ = 0, 1, 2, and 3
The mean radial velocity $V/{U_w}$ distributions along the radial direction are shown in Fig. <ref>. The distributions for the $\twin$ and $\sxin$ models in Fig. <ref>(a) and Fig. <ref>(b) almost mirror each other, with the maximum positive and negative values at the mouthpiece exit attained at $y/{D_w}$ = $\pm$ 0.5, respectively. The negative mean radial velocities occurring at locations $y/D_w > 0.6$, and the positive mean radial velocities occurring at locations $y/D_w < -0.6$, signify entrainment of the ambient fluid into the jet core. This entrainment appears to be large for the $\twin$ and $\sxin$ models, as the maximum negative and positive values of $V/{U_w}$ at $y/D_w = \pm 0.6$, respectively, reach at least 30% of $U_w$ due to the high swirl levels. The mean radial velocities for the $\teng$ model in Fig. <ref>(c) are very small when compared with those for the $\twin$ and $\sxin$ models, which reinforces the flow-straightening effect of the grid. However, for the $\texg$ model in Fig. <ref>(d), the mean radial velocities are larger than those for the $\teng$ model but lower than those for the $\twin$ and $\sxin$ models, with the maximum negative and positive values occurring at $y/{D_w}$ = $\pm$ 0.5, respectively. The asymmetry in the mean velocity profiles across the jet centerline $y/{D_w}$ = 0, observed in Figs. <ref> and <ref>, arises from non-uniform inner cross-sections of the mouthpiece resulting from the 3D printing process, and also as a consequence of the swirling jet flow, as reported in previous studies Toh2010, Vanierschot2014.
RMS axial velocity fluctuations for the (a) $\twin$; (b) $\sxin$; (c) $\teng$; (d) $\texg$ models at $x/{D_w}$ = 0, 1, 2, and 3
The RMS axial fluctuations $u_{rms}/{U_w}$ in Fig. <ref> attain maximum values of the order of ${U_w}$ for all models except the $\teng$ model. These maxima occur in the jet shear layers, where there are large velocity gradients due to the sudden expansion and mixing of the jet with the quiescent surrounding fluid. The fluctuations decrease while spreading out radially as the jet decays outside of the mouthpiece exit due to further mixing with the ambient fluid. A similar observation holds for the RMS radial fluctuations $v_{rms}/{U_w}$ in Fig. <ref>. The maximum radial fluctuations for the $\twin$ and $\sxin$ models occur at the jet central axis, which is the boundary of the recirculation zone where the mean radial velocities are zero gupta1984swirl. The RMS axial and radial fluctuations for the $\teng$ model are very small compared with those of the other models at all axial distances reported, which is due to the elimination of the flow swirl that in turn reduces the amount of jet mixing with the ambient fluid.
RMS radial velocity fluctuations for the (a) $\twin$; (b) $\sxin$; (c) $\teng$; (d) $\texg$ models at $x/{D_w}$ = 0, 1, 2, and 3
Figure <ref> shows streamlines in the jet flow emerging from all DPI models, overlaid on the velocity magnitude contours. Large recirculation zones in the central region can be seen in Fig. <ref>(a) and Fig. <ref>(b) for the jet flows from the $\twin$ and $\sxin$ models, which occur due to the high levels of swirl. Swirl induces a radial pressure gradient to balance the centrifugal forces in the flow, thereby creating a toroidal vortex (vortex ring) with a low-pressure region in the jet central region Percin2017. This sub-ambient pressure region leads to reverse axial flow directed towards the jet nozzle. For a strong swirl, such as that present in the flows from these two DPI models, this reverse flow occurs close to the nozzle exit, in this case the DPI mouthpiece exit. The two regions of high velocity magnitude on either side of the central jet axis are a result of the highly swirling flow emerging from inside the device, which was previously observed in the Nexthaler® DPI Pasquali2015.
The flow-straightening effect of the grid in the $\teng$ model is clearly visible in Fig. <ref>(c), which shows the jet radial spread to be much lower than that for the $\twin$ and $\sxin$ models. For the $\texg$ model in Fig. <ref>(d), there is a small recirculation and stagnation zone in the jet core that extends up to about $x/{D_w}$ = 0.3. The size of this recirculation zone is smaller than that observed for the grid-free models, because the upstream swirling flow in the mouthpiece impacts the grid, which reduces the flow swirl level.
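Figures of this kind can be reproduced from any planar velocity field by overlaying streamlines on velocity magnitude contours; the sketch below illustrates the procedure with synthetic data standing in for the PIV fields (everything here is illustrative, not the study's data or plotting code):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in for a planar PIV field: x is the axial coordinate (x/D_w),
# y the radial coordinate (y/D_w); U, V are in units of U_w.
x1 = np.linspace(0.0, 3.0, 80)
y1 = np.linspace(-1.5, 1.5, 80)
X, Y = np.meshgrid(x1, y1)  # equally spaced grid, as streamplot requires
U = np.exp(-(Y / 0.5) ** 2) - 0.3 * np.exp(-X**2 - (Y / 0.3) ** 2)  # jet with core deficit
V = 0.3 * Y * np.exp(-X**2)                                          # near-exit radial spreading

speed = np.hypot(U, V)  # velocity magnitude |u|

fig, ax = plt.subplots()
c = ax.contourf(X, Y, speed, levels=20)            # velocity magnitude contours
ax.streamplot(X, Y, U, V, color="k", density=1.2)  # streamlines overlaid on top
ax.set_xlabel("$x/D_w$")
ax.set_ylabel("$y/D_w$")
fig.colorbar(c, label="$|u|/U_w$")
plt.show()
```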
Velocity magnitude contours and streamlines for the (a) $\twin$; (b) $\sxin$; (c) $\teng$; (d) $\texg$ models
§ DISCUSSION
DPI devices currently available on the market have a wide range of efficiencies, from as low as 19.3% and 22.9% FPF for salmeterol xinafoate and fluticasone propionate, respectively, in the Seretide® Diskus®, to as high as 62.3% and 62.6% for budesonide and formoterol fumarate (FF), respectively, in the Symbicort® Turbuhaler®, and 66.3% and 64.4% for BDP and FF, respectively, in the Foster® Nexthaler® Buttini2016. This large variation in aerosol performance confirms that, despite various studies investigating DPI performance, the interaction and inter-dependence of the powder formulation used, the device design, and the energy supplied by the inspiratory flow affect DPI efficiency in a complex way that is not fully understood.
In order to generate an inhaled particle aerosol from a DPI, both fluidization and de-agglomeration processes must occur. These processes are strongly affected by the characteristics of the flow generated from the device and by the powder formulation used in the device. The adhesive and cohesive forces between the particles must be overcome by the aerodynamic and inertial forces in the inhaled air flow in order to achieve particle entrainment (pick-up) and detachment Frijlink2004. Micronised particles (APIs) have high cohesive forces that form strong agglomerates and require the generation of higher forces in the inspiratory flow of a patient for their de-agglomeration and dispersion into the air stream. Thus, carriers such as lactose are blended with APIs not only to prevent agglomeration of the cohesive APIs but also to bulk the formulation, thereby facilitating powder flowability. In addition, particle engineers have also used ternary force control agents (FCAs) like MgSt and lactose fines, which modify the surface of the carrier, altering interparticle interaction and decreasing the API's adhesion to the carrier, thereby facilitating its detachment during aerosolization Jetzer2018. Therefore, the present study has used a model carrier-based formulation containing MgSt as an FCA to enhance API detachment from the lactose carrier.
The in-vitro results show that the $\sxin$ model has a lower drug mass retained in the device compared with the $\twin$ model. This is a consequence of the more uniform swirl in the flow cross-section, and hence more uniform mixing, produced by the 6 tangential inlets compared with only 2 tangential inlets in the $\twin$ model. The large BDP deposition of around 30% TD observed in the throat + pre-separator stages for the $\twin$ and $\sxin$ models occurs due to the axially recirculating and radially spreading jet flow emerging from the mouthpieces of these DPIs, as shown in Fig. <ref>(a) and (b), respectively.
Since the micronised API particles detached from the carrier have a very low Stokes number because of their mean size of 1.24 µm, their trajectories closely follow the jet flow, causing them to spread outwards and impact on the mouth cavity surface and the throat. The increased BDP deposition on stages S4 and S5 for the $\sxin$ model indicates a greater detachment/dispersion of API from the carrier than for the $\twin$ model. As similar jet-flows are observed emerging from these models, the aforesaid differences may be due to different flow characteristics within the device, particularly the swirl levels and distribution. Nevertheless, the high FPF (larger than 50% ED) observed for both models demonstrates that the highly swirling flow produced inside the device is able to generate sufficient forces to detach the API from the carrier and entrain the particles in the airflow.
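To make the low-Stokes-number argument concrete, here is a rough order-of-magnitude estimate (a sketch only; the particle density, jet velocity, and length scale are assumed representative values, not measurements from this study):

```python
# Stokes number Stk = rho_p * d_p**2 * U / (18 * mu * L): the ratio of the particle
# response time to a characteristic flow time scale; Stk << 1 means the particles
# closely follow the gas flow.
rho_p = 1.2e3   # particle density (kg/m^3) -- assumed, representative of BDP
d_p = 1.24e-6   # mean particle diameter (m), from the sizing data quoted above
mu = 1.8e-5     # dynamic viscosity of air (Pa*s)
U = 20.0        # characteristic jet velocity (m/s) -- assumed
L = 0.02        # characteristic length, e.g. mouthpiece diameter (m) -- assumed

Stk = rho_p * d_p**2 * U / (18 * mu * L)
print(f"Stk = {Stk:.1e}")  # ~6e-3, far below unity
```

With any plausible choice of these scales the result stays orders of magnitude below unity, consistent with the detached API particles acting as flow tracers in the emerging jet.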
The use of a grid in the $\teng$ model reduces mass retention in the device when compared with the $\texg$ model. In this case, the grid acts as a flow straightener as well as an `additional structure' for particle impaction/detachment. The flow-straightening effect in eliminating the flow swirl is evident in Figs. <ref>(c), <ref>(c), and <ref>(c), and has also been shown to exist in both carrier-based Zhou2013 and carrier-free formulation Wong2011 DPIs. API detachment from the carrier, as in the present carrier-based formulation, mainly occurs when most of the particles are trapped upstream of the grid, where they are subjected to particle-particle and particle-obstacle (grid) collisions. This was also observed in the powder dispersion study by Kou et al. *Kou2016. Placement of the grid at the mouthpiece exit in the $\texg$ model shows an increase in the percentage of BDP (%TD) retained in the device, which indicates lower particle de-agglomeration due to fewer particle collisions in the absence of a grid close to the tangential inlets.
The reduction in throat deposition for the $\texg$ model is due to the flow-straightening effect of the grid, illustrated in Fig. <ref>(d), as opposed to the larger throat deposition for the $\twin$ model, which results from the emerging highly swirling and radially spreading jet-flow, Fig. <ref>(a). However, this flow-straightening effect increases particle deposition on stage S1 (cut-off 8.06 µm) instead of on the lower stages of the NGI, which indicates that the API is still adhered to the lactose carrier or agglomerated after exiting the device, as the particle size distribution shows that 90% of the API was smaller than 4.14 µm.
Although a flow-straightening effect is observed upon addition of the grid, there is no significant change in FPD or FPF for the $\teng$ and $\texg$ models. This similarity in aerosol performance can be attributed to powder de-agglomeration by a larger number of inter-particle and particle-grid collisions when the grid is placed after the tangential inlets, and to a similar de-agglomeration potential achieved when high-velocity particles collide with the grid placed at the mouthpiece exit. Such high-velocity particle-grid collisions occur when the highly swirling flow in the DPI mouthpiece impacts an obstacle (grid) placed at the exit, resulting in a jet-flow with a higher velocity magnitude than when the grid is placed at the entry, as shown in Figs. <ref>(c) and <ref>(d).
To summarize, the preceding discussion synthesizes the relationship between the aerosol performance and the fluid-dynamic characteristics of the flows emerging from the four DPI models examined in this study. The increase in the number of tangential inlets leads to a lower drug retention within the device without changing the flow characteristics emerging from the DPIs. The addition of a grid, either close to the tangential inlets or at the mouthpiece exit, leads to a flow-straightening effect that removes swirl in the DPI flow. Although similar aerosol performances (FPD and FPF) are observed for all device models used in this study, the aerosol efficiency of these devices is high, with FPF larger than 50% ED and FPD larger than 40 µg. The present work highlights the conjoint application of PIV and in-vitro techniques to extensively characterise the flows emerging from DPIs and to quantify their aerosol performance, respectively, in order to improve DPI designs for effective pulmonary delivery.
§ CONCLUSION
Particle image velocimetry was used in this study to experimentally characterise the jet-flows emerging from four different DPI models, while in-vitro studies on the same DPIs were performed to corroborate their aerosol performance with their fluid-dynamic characteristics. The DPI models for which these performances and characteristics have been ascertained include modifications to the tangential inlets and the addition of a grid. The DPI models without the grid produce a highly swirling and recirculating jet-flow emerging from the mouthpiece, whereas those with the grid produce a straightened flow without the undesirable radial spreading, which yields a reduction in particle deposition in the mouth cavity and the throat. Similar aerosol performances were observed for all four device models, with FPF larger than 50%, indicating desirable lung deposition.
§ ACKNOWLEDGMENTS
The research was supported by the Australian Research Council. The research also benefited from computational resources provided by the Pawsey Supercomputing Centre and through the NCMAS, supported by the Australian Government. The computational facilities supporting this project included the NCI Facility, the partner share of the NCI facility provided by Monash University through an ARC LIEF grant, and the Multi-modal Australian ScienceS Imaging and Visualisation Environment (MASSIVE).
§ REFERENCES
Recent advances in aerosol devices for the delivery of inhaled medications. Expert Opinion on Drug Delivery. Taylor & Francis Ltd.
Dry powder inhalers: an overview. Respiratory Care.
The inhalation characteristics of patients when they use different dry powder inhalers. Journal of Aerosol Medicine and Pulmonary Drug Delivery. Mary Ann Liebert Inc.
Stereoscopic and tomographic PIV of a pitching plate. Experiments in Fluids.
Effect of flow rate on in vitro aerodynamic performance of NEXThaler® in comparison with Diskus® and Turbohaler® dry powder inhalers. Journal of Aerosol Medicine and Pulmonary Drug Delivery. Mary Ann Liebert Inc.
Recommendations of the USP advisory panel on aerosols on the USP general chapters on aerosols (601) and uniformity of dosage units (905). In: Pharmacopeial Forum. US Pharmacopeial Convention, Rockville, MD.
Experimental investigation of swirling vortex motion in jets. Journal of Applied Mechanics, Transactions ASME. American Society of Mechanical Engineers.
De Boer et al. Air classifier technology (ACT) in dry powder inhalation: Part 1. Introduction of a novel force distribution concept (FDC) explaining the performance of a basic air classifier on adhesive mixtures. International Journal of Pharmaceutics.
De Boer et al. Dry powder inhalation: past, present and future. Expert Opinion on Drug Delivery. Taylor & Francis Ltd.
Predicting extrathoracic deposition from dry powder inhalers. Journal of Aerosol Science. Elsevier Ltd.
Effect of an upstream grid on the fluidization of pharmaceutical carrier powders. International Journal of Pharmaceutics. Elsevier B.V.
Application of a single-board computer as a low-cost pulse generator. Measurement Science and Technology. Institute of Physics Publishing.
De Boer et al. Dry powder inhalers for pulmonary drug delivery. Expert Opinion on Drug Delivery.
A swirling jet under the influence of a coaxial flow. Experimental Thermal and Fluid Science.
Time-resolved stereo PIV measurements in the far-field of a turbulent zero-net-mass-flux jet. Experimental Thermal and Fluid Science. Elsevier Inc.
Swirl Flows. Energy and Engineering Science Series. Abacus Press, Tunbridge Wells, Kent.
Flow field measurement inside the mouthpiece of the Spiros inhaler using particle image velocimetry. Aerosol Science & Technology.
PIV error correction. Experiments in Fluids.
Dose emissions from marketed dry powder inhalers. International Journal of Pharmaceutics.
Investigations on the mechanism of magnesium stearate to modify aerosol performance in dry powder inhaled formulations. Journal of Pharmaceutical Sciences. Elsevier B.V.
A comparison between snapshot POD analysis of PIV velocity and vorticity data. Experiments in Fluids.
Powder dispersion mechanisms within a dry powder inhaler using microscale particle image velocimetry. International Journal of Pharmaceutics. Elsevier B.V.
An experimental investigation of swirling jets. Journal of Fluid Mechanics. Cambridge University Press.
Describing the uncertainties in experimental results. Experimental Thermal and Fluid Science.
Experimental investigation of design parameters on dry powder inhaler performance. International Journal of Pharmaceutics.
British Pharmacopoeia. The Stationery Office.
Optical diagnostics study of air flow and powder fluidisation in Nexthaler® - Part I: Studies with lactose placebo formulation. International Journal of Pharmaceutics. Elsevier B.V.
Analysis of the pressure fields in a swirling annular jet flow. Experiments in Fluids. Springer Verlag.
Digital cross-correlation particle image velocimetry measurements in the near wake of a circular cylinder. In: International Colloquium on Jets, Wakes and Shear Layers. CSIRO Australia.
An investigation of the near wake of a circular cylinder using a video-based digital cross-correlation particle image velocimetry technique. Experimental Thermal and Fluid Science. Elsevier Inc.
Multigrid approach to cross-correlation digital PIV and HPIV analysis. In: 13th Australasian Fluid Mechanics Conference. Monash University.
High resolution multigrid cross-correlation digital PIV measurements of a turbulent starting jet using half frame image shift film recording. Optics and Laser Technology. Elsevier Science Ltd.
Axial plus tangential entry swirling jet. Experiments in Fluids.
Van Dyck, Van den Bulck, et al. Symmetry breaking and vortex precession in low-swirling annular jets. Physics of Fluids. American Institute of Physics Inc.
Deagglomeration of dry powder pharmaceutical aerosols. International Journal of Pharmaceutics.
Use of an impinging jet for dispersion of dry powder inhalation aerosols. International Journal of Pharmaceutics.
Efficient detection of spurious vectors in particle image velocimetry data. Experiments in Fluids.
Influence of grid structures on the break-up and aerosol performance of a model inhalation formulation, evaluated with standardised entrainment tubes and computational fluid dynamics. John Wiley & Sons Inc.
journaltitleJournal of Pharmaceutical Sciences
titleParticle aerosolisation and break-up in dry powder inhalers: Evaluation and modelling of the influence of grid structures for agglomerated systems
Aerosols,Agglomerate,CFD,Deagglomeration,Dry powder inhaler,Impaction,In silico modelling,Particle size,Pulmonary drug delivery,Simulations
Elsevier B.V.
abstractPurpose: This study was performed to investigate how increasing the active pharmaceutical ingredient (API) content within a formulation affects the dispersion of particles and the aerosol performance efficiency of a carrier based dry powder inhalable (DPI) formulation, using a custom dry powder inhaler (DPI) development rig. Methods: Five formulations with varying concentrations of API beclomethasone dipropionate (BDP) between 1% and 30% (w/w) were formulated as a multi-component carrier system containing coarse lactose and fine lactose with magnesium stearate. The morphology of the formulation and each component were investigated using scanning electron micrographs while the particle size was measured by laser diffraction. The aerosol performance, in terms of aerodynamic diameter, was assessed using the British pharmacopeia Apparatus E cascade impactor (Next generation impactor). Chemical analysis of the API was observed by high performance liquid chromatography (HPLC). Results: Increasing the concentration of BDP in the blend resulted in increasing numbers and size of individual agglomerates and densely packed BDP multi-layers on the surface of the lactose carrier. BDP present within the multi-layer did not disperse as individual primary particles but as dense agglomerates, which led to a decrease in aerosol performance and increased percentage of BDP deposition within the Apparatus E induction port and pre-separator. Conclusion: As the BDP concentration in the blends increases, aerosol performance of the formulation decreases, in an inversely proportional manner. Concurrently, the percentage of API deposition in the induction port and pre-separator could also be linked to the amount of micronized particles (BDP and Micronized composite carrier) present in the formulation. The effect of such dose increase on the behaviour of aerosol dispersion was investigated to gain greater insight in the development and optimisation of higher dosed carrier-based formulations.
journaltitleInternational Journal of Pharmaceutics
titleLimitations of high dose carrier based formulations
Agglomerates,Carrier,Dry powder inhaler,High dose formulation,Multi-layer
Springer New York LLC
abstractThis study aims to investigate the implications of loaded formulation mass on aerosol performance using a reservoir novel dry powder inhaler containing a custom dosing cup to deliver carrier-based formulation to the lungs. A 3D printed dosing cup with volume size of 133.04 mm 3 was manufactured to allow for the progressive loading of different carrier formulation masses of 1% beclomethasone dipropionate BDP (w/w) formulation (10 to 60 mg, with increments of 10 mg), in a novel customizable DPI device. Scanning electron micrographs were used to investigate BDP detachment from carrier particles post-aerosolisation and particle deposition on the USP induction port. The subsequent aerosol performance analysis was performed using the next generation impactor (NGI). Incrementally increasing the loading mass to 60 mg led to decreases in BDP detachment from carrier particles, resulting in significant decreases in aerosol performance. Increases in loading dose mass led to progressively decreased detachment of BDP from the carrier and the overall aerosol performance in comparison to the initial mass of 10 mg. These results are likely to be due to a decrease in void volume within the dosing cup with increased loading mass leading to altered airflow, decreased impaction forces and the possibility of a significant quantity of large carrier particles introducing a ‘sweeping' effect on the inhaler inner surface. This study has shown that despite the decreased BDP detachment from the carrier and decreased aerosol performance, the dose delivered to the lung still increased due to the higher loaded dose.
journaltitleAAPS PharmSciTech
titleAssessing aerosol performance of a dry powder carrier formulation with increasing doses using a novel inhaler
aerosol performance,carrier formulation,dispersion forces,loading dose,novel dry powder inhaler
AAPS J
abstractThe objective of this study is to investigate the effect of device design of the Aerolizer® on the aerosolization of a carrier-based dry powder inhaler formulation (Foradile®). The Aerolizer was modified by reducing the air inlet size and mouthpiece length to 1/3 of the original dimensions, or by increasing the grid voidage. Aerosolization of the powder formulation was assessed on a multi-stage liquid impinger at air flow rates of 30, 60, and 100 L/min. Coupled CFD-DEM simulations were performed to investigate the air flow pattern and particle impaction. There was no significant difference in the aerosolization behavior between the original and 1/3 mouthpiece length devices. Significant increases in FPF total and FPF emitted were demonstrated when the inlet size was reduced, and the results were explained by the increases in air velocity and turbulence from the CFD analysis. No significant differences were shown in FPF total and FPF emitted when the grid voidage was increased, but more drugs were found to deposit in induction port and to a lesser extent, the mouthpiece. This was supported by the CFD-DEM analysis which showed the particle-device collisions mainly occurred in the inhaler chamber, and the cross-grid design increased the particle-device collisions on both mouthpiece and induction port. The air inlet size and grid structure of the Aerolizer® were found to impact significantly on the aerosolization of the carrier-based powder. © 2013 American Association of Pharmaceutical Scientists.
journaltitleAAPS Journal
titleEffect of device design on the aerosolization of a carrier-based dry powder inhaler - A case study on Aerolizer® Foradile®
aerosolization,computational fluid dynamics,device design,discrete element method,dry powder inhalers
|
# UNIT: Unifying Tensorized Instruction Compilation
Jian Weng1,2, Animesh Jain2, Jie Wang1,2, Leyuan Wang2, Yida Wang2, Tony Nowatzki1
1University of California, Los Angeles, USA; 2Amazon Web Services, USA
<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Because of the increasing demand for intensive computation in deep neural
networks, researchers have developed both hardware and software mechanisms to
reduce the compute and memory burden. A widely adopted approach is to use
mixed precision data types. However, it is hard to benefit from mixed
precision without hardware specialization because of the overhead of data
casting. Recently, hardware vendors offer _tensorized_ instructions
specialized for mixed-precision tensor operations, such as Intel VNNI, Nvidia
Tensor Core, and ARM DOT. These instructions involve a new computing idiom,
which reduces multiple low precision elements into one high precision element.
The lack of compilation techniques for this emerging idiom makes it hard to
utilize these instructions. In practice, one approach is to use vendor-
provided libraries for computationally-intensive kernels, but this is
inflexible and prevents further optimizations. Another approach is to manually
write hardware intrinsics, which is error-prone and difficult for programmers.
Some prior works tried to address this problem by creating compilers for each
instruction. This requires excessive effort when it comes to many tensorized
instructions.
In this work, we develop a compiler framework, UNIT, to unify the compilation
for tensorized instructions. The key to this approach is a unified semantics
abstraction which makes the integration of new instructions easy, and the
reuse of the analysis and transformations possible. Tensorized instructions
from different platforms can be compiled via UNIT with moderate effort for
favorable performance. Given a tensorized instruction and a tensor operation,
UNIT automatically detects the applicability of the instruction, transforms
the loop organization of the operation, and rewrites the loop body to take
advantage of the tensorized instruction. According to our evaluation, UNIT is
able to target various mainstream hardware platforms. The generated end-to-end
inference model achieves 1.3$\times$ speedup over Intel oneDNN on an x86 CPU,
1.75$\times$ speedup over Nvidia cuDNN on an Nvidia GPU, and 1.13$\times$
speedup over a carefully tuned TVM solution for ARM DOT on an ARM CPU.
Footnote: Work done during Jian and Jie's internship at AWS.
## I Introduction
Dense tensor operations like matrix multiplication (Matmul) and convolution
(Conv) have long been the workhorses in many domains, including deep learning
workloads [14]. The popularity of deep learning means that aggressively
optimizing these operations has a high payoff. Essentially, Matmul and Conv
are a series of multiply-accumulate (MAC) operations, which perform
accumulation over a number of elementwise multiplications.
To capture the reduction behavior and perform it more efficiently, recent
general-purpose processors offer native tensor operation specialized
instructions (hereinafter referred to as _tensorized instructions_), like
Intel VNNI [2], Nvidia Tensor Core [5], and ARM DOT [1]. Unlike the
conventional SIMD instructions, after performing elementwise arithmetic
operations, these instructions introduce a “horizontal computation” to
accumulate elementwise results. Further, tensorized instructions are often
mixed-precision, meaning that elementwise operations use less precise and
lower bitwidth operands (e.g., fp16 and int8), while accumulation occurs with
higher bitwidth, where it is needed. This offers a good balance between data
width and precision that is generally sufficient for deep learning workloads
[24, 18], and enables the use of quantized data types.
Mixed-precision is difficult to express in a single SIMD instruction, because
the output vector width is different than the input vector width. In most ISAs
this paradigm requires multiple SIMD instructions to express. In a tensorized
instruction, by definition there are fewer outputs, so allocating more
bitwidth to them for the output vector is natural. In addition, tensorized
instructions sometimes reuse the same inputs multiple times, which reduces the
required register file bandwidth. Overall, tensorized instructions offer
significant advantages over SIMD for executing MACs.
While promising, the absence of appropriate compilation techniques limits the
applicability of these tensorized instructions. Conventional SIMD instructions
are vector instructions, so industry standard compilers only try parallelizing
the innermost loops. In addition, it is difficult for the high-level language
programmer to express the compute flow in a tensorization-friendly way and to hint the compiler to attempt tensorization of a loop nest, because reduction dependencies are more complicated and error-prone to express.
In practice, there are normally two options to leverage tensorized
instructions. One way is to call the vendor-provided libraries such as Intel
oneDNN [6], Nvidia cuBLAS and cuDNN [4], which provides highly optimized
performance in some pre-defined single kernels using tensorized instructions
[17, 44]. However, it also brings inflexibility when it comes to new workloads
or when further performance exploitation is desired. The other option is to
manually write assembly intrinsics, which sets a high bar to ordinary
developers and hence lacks productivity. Some prior works tried to solve this
problem by developing a compiler [35, 36] for each instruction. This requires
too much effort when there are many tensorized instructions, both within and
across hardware platforms.
Our Goal: Although different processors may provide different tensorized
instructions, in the context of deep learning workloads, we observe that these
instructions essentially handle a similar compute pattern, i.e., elementwise
multiplication and then horizontal accumulation. They primarily differ in the
number of elementwise computation lanes and the accepted data types.
Therefore, we aim to develop a unified approach to compile these tensorized
instructions on multiple platforms to optimize the tensor operations in deep
learning workloads. Our techniques are extensible to the tensorized
instructions with other data types and operations as well.
Challenges: There are several challenges to attain a unified compilation
pipeline:
* •
_Instructions Integration:_ Instead of building a new specialized compiler for
each new instruction, it is desirable to create a unified and extensible
compilation flow;
* •
_Detecting the applicability:_ Given a tensorized instruction, a first
question is whether and how this instruction can be applied to the target
tensor operation, which may require loop reorganization to make it applicable;
* •
_Code rewriting:_ When applicable, the compiler must determine how the loops
involved should be rewritten by the tensorized instruction, and how the loops
should be rearranged to achieve high performance.
Our Insight: We envision that the key to addressing these three challenges is
to have a unified semantics abstraction for tensorized instructions so that
the analysis and transformation can also be unified.
This paper presents UNIT, an end-to-end compilation pipeline to surmount the
above three challenges. UNIT takes the tensorized instructions (e.g., Intel
VNNI instructions on CPUs, or Nvidia Tensor Core instructions on GPUs) and a
deep learning model as input, lowers the tensor operations of the model into
loop-based IRs to identify the tensorizable components, and inserts the
tensorized instructions by transforming and rewriting the loop. It achieves
high performance for tensor operations, and consequently, model inference. To
the best of our knowledge, this is the first work to tackle tensorized
instruction compilation and optimization with a unified solution. UNIT not
only achieves high performance for single tensor operations, but also provides
desirable model inference latency in practice.
Key Results: According to our evaluation, UNIT is expressive enough to target
many tensorized instructions on multiple hardware platforms, including Intel
VNNI, Nvidia Tensor Core, and ARM DOT. The generated programs for end-to-end
model inference are 1.3$\times$ and 1.75$\times$ faster than the solutions
backed up by Intel oneDNN and Nvidia cuDNN on CPU and GPU, respectively. In
addition, UNIT can be extended to new tensorized instructions with moderate
effort. Although we designed UNIT to target Intel CPUs and Nvidia GPUs, on an
ARM Cortex A-72 CPU with DOT instructions, UNIT achieves up to 1.13$\times$
speedup against a carefully manual tuned solution.
To sum up, our contribution is an end-to-end compilation pipeline of
tensorized instructions for deep learning workloads, which includes:
* •
A unified abstraction for tensorized instructions.
* •
An algorithm that detects the applicability of these tensorized instructions.
* •
A rewriting and tuning mechanism that looks for favorable loop transformations
of the tensor operations to plug in the tensorized instructions for high
performance.
Paper Organization: We first introduce the background and challenges of
tensorized compilation in Section II. The design of UNIT is presented in
Section III. We explain the implementation details in Section IV. We clarify
our experiment methodology in Section V, and evaluate our work in Section VI.
Finally, we discuss the related work in Section VII.
## II Background
UNIT is an end-to-end compilation pipeline capable of automatically mapping
tensorized instructions to the deep learning tensor operations. It defines the
tensorized instruction’s semantics using a suitable intermediate
representation (IR) and inserts them in proper places of the program of tensor
operations. In this section, we give an overview of popular mixed precision
tensorized instructions, followed by the limitations of existing solutions in
automatic mapping of these tensorized instructions. Finally, we discuss the
background of tensor domain specific language and the multi-level intermediate
representation.
### II-A Mixed Precision Tensorized Instructions
Deep learning is computationally expensive, requiring substantial compute and
memory resources. As deep learning becomes more pervasive, researchers are
designing both software and hardware techniques to reduce the compute and
memory burden. A widely adopted approach in this context is using mixed
precision for expensive operations, e.g., convolution or dense operations [24,
18]. In practice, this means representing 32-bit floating point (fp32)
operands with a lower bitwidth datatype - 16-bit floating point numbers (fp16)
or 8/16-bit integer numbers (int8, int16). To keep the accuracy in check, it
is helpful to accumulate the results in higher precision (fp32 or int32). This
type of mixed precision computation is often called _quantization_ for integer
values [18]. In this paper, we will always use _mixed precision_ for brevity.
Figure 1: Performance comparison on Nvidia V100-SXM2 between fp32 and fp16
without mixed precision instruction support.
While using mixed precision data types reduces memory footprint, it might not
necessarily lead to performance improvement. To investigate this, we conducted
an experiment to compare the performance of Nvidia cuDNN for fp16
and fp32 in the absence of Nvidia mixed precision tensorized instructions
(Tensor Core). As shown in Figure 1, we observe that blindly using mixed
precision leads to substantial slowdown because of the overhead of casting
between two data types.
Therefore, mainstream hardware vendors (Intel, ARM and Nvidia) have introduced
mixed precision tensorized instructions to achieve better performance. These
instructions add mixed precision arithmetic support where operands are of
lower precision while the accumulation happens in higher precision,
potentially leading to a 2$\times$–4$\times$ speedup. The most popular
examples of these tensorized instructions are Intel VNNI, ARM DOT and Nvidia
Tensor Core. We will discuss the semantics of these operations in Section III.
Figure 2: The semantics of Intel VNNI and Nvidia Tensor Core. The text beside each is the name of the corresponding LLVM intrinsic.
Hardware vendors have a long history of adding new instructions to accelerate
important applications. However, the mixed precision tensorized instructions
introduce a unique idiom - horizontal accumulation. These tensorized
instructions typically conduct a sequence of elementwise multiplications
governed by a memory access pattern, followed by a horizontal accumulation.
The accumulation is termed horizontal because all values to be accumulated are
present in the same vector register. For example, as it is shown in Figure
2(a), Intel VNNI executes a dot product of two vectors, each having 4 int8
elements, while performing the accumulation in int32. We observe a similar
pattern, though with different numbers of entries and data types, for Nvidia
Tensor Core (in Figure 2(b)) and ARM DOT instructions (this is omitted,
because it is similar to VNNI).
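To make the idiom concrete, the following scalar sketch (ours, not taken from the paper) models the arithmetic of the VNNI dot product in Figure 2(a): 16 output lanes, each horizontally accumulating four widened 8-bit products into an int32 accumulator.

```python
import numpy as np

# Scalar model of the Intel VNNI idiom in Figure 2(a): 64 uint8 and 64 int8
# inputs, 16 int32 accumulators; lane i reduces 4 products horizontally.
a = np.random.randint(0, 256, 64).astype(np.uint8)
b = np.random.randint(-128, 128, 64).astype(np.int8)
c = np.zeros(16, dtype=np.int32)
for i in range(16):                  # data-parallel lanes
    for j in range(4):               # horizontal accumulation
        c[i] += np.int32(a[4 * i + j]) * np.int32(b[4 * i + j])
```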
### II-B Limitations of Existing Solutions
Though tensorized instructions seem promising, their adoption pace is limited
because of the absence of an automatic technique that can detect and use these
instructions seamlessly. Currently, their usage in the deep learning domain is
limited to hardware vendor libraries like Intel oneDNN and Nvidia cuDNN, which
may provide high performance for the pre-defined operations but are inflexible
as discussed in Section I.
Similarly, conventional loop vectorizers find it hard to exploit these tensorized instructions profitably, as they are not designed to
work with the horizontal reduction idiom. Conventional loop vectorizers in
general-purpose compilers like GCC and LLVM mainly focus on either analyzing
the innermost loop body or combining instructions in the unrolled loop bodies.
When it comes to the horizontal reduction idiom, these compilers often reorder
the computation and generate epilogue reduction, preventing us from using the
tensorized instructions.
There have been some recent works in compiling programs to leverage tensorized
instructions. PolyDL [36] generates CPU programs for convolution kernels in
neural networks that call a GEMM micro-kernel using Intel VNNI instructions.
Bhaskaracharya et al. [35] generate CUDA programs for matrix computation
leveraging Nvidia Tensor Core. However, these works are limited to one
platform and its specific instruction, which lacks generalizability. A generic
solution to handle tensorized instructions from multiple platforms together is
still missing.
### II-C Multi-Level Intermediate Representation
Compilers often have multiple levels of intermediate representation (IR) to
express the program; each level is designed to enable different analyses and
transformations. In this section, we describe the background of a tensor
domain specific language (DSL) and the multi-level IR.
#### II-C1 Graph-Level IR
Deep learning compilers like TVM [10], Glow [34], and XLA [43] adopt a graph-
level IR to represent a deep learning model as a directed acyclic graph (DAG)
of operations. This graph-level IR is useful for inter-tensor-operation
optimization, like tensor shape padding, operation fusion, and choosing the
proper data layout [23]. Our tensorized analysis relies on tensor padding so
that loops can be tiled by the number of lanes of the instruction perfectly.
However, this IR has little knowledge about the implementation of each tensor
operation. When compiling a graph-level IR, each node of the DAG will be
dispatched to its implementation in tensor DSL as explained next.
#### II-C2 Tensor DSL
Tensor domain-specific languages, like Halide [31], TVM [10], and Tensor
Comprehension [37], have been developed to productively and portably express
tensor programs while enabling efficient performance tuning. As shown in
Figure 4 and Figure 5, programs written in tensor DSLs follow this paradigm:
Users first declare the tensors and the loop variables, and then the
computation is described by expressions involving the declared tensors and
loop variables. These DSLs also provide interfaces to split, reorder, and
annotate loops without affecting the computation semantics for performance
tuning.
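As an illustration of this paradigm, a minimal matmul in TVM's te DSL might look as follows (our sketch under TVM's public API, not a listing from the paper):

```python
from tvm import te

# Declare the tensors and loop variables, then describe the computation.
A = te.placeholder((128, 128), name="A")
B = te.placeholder((128, 128), name="B")
r = te.reduce_axis((0, 128), name="r")        # reduction loop variable
C = te.compute((128, 128),
               lambda i, j: te.sum(A[i, r] * B[r, j], axis=r),
               name="C")

# Scheduling primitives split/reorder/annotate loops for performance tuning
# without changing the computation's semantics.
s = te.create_schedule(C.op)
io, ii = s[C].split(C.op.axis[0], factor=16)  # tile the data-parallel loop i
```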
All the information gathered from the tensor DSL frontend will be stored in a
_tensor Op_ data structure, including the declared tensors, loop variables,
expressions, and loop manipulation.
#### II-C3 Tensor IR
Each _tensor Op_ is then lowered to _Tensor IR_ , which is an imperative
program IR with additional constraints: All the loops are canonical (starting
from 0, and increased by 1 each time), and all the array operations are
restricted (i.e., an element cannot be accessed by two different pointers).
These two properties enable making strong assumptions for analysis and
transformation. Our work conducts analysis on the _tensor Op_ data structure
level and then performs transformation on the tensor IR. Although the tensor
IR provides essentially identical information for analysis, as discussed
above, it is easier to reorganize the loops via the _tensor Op_ data
structure.
#### II-C4 Low-Level IR
The tensor IR is lowered to a general-purpose low-level IR such as LLVM, after
all the specialized analysis and transformations on the tensor IR are done, to
get ready for assembly code generation.
Figure 3: The overview of our framework, UNIT.
## III Unified Tensorization
Our goal is to automatically _tensorize_ (we coin the word to mean rewriting and optimizing given code with a tensorized instruction) mixed-precision deep learning tensor operations across a variety of hardware platforms. We resolve
the challenges discussed in Section I by presenting UNIT with the following
techniques:
1. 1.
_Tensorized Instruction in Tensor DSL:_ To abstract the diverse tensorized
instructions on different hardware platforms, we leverage the existing tensor
DSL to represent their semantics.
2. 2.
_Applicability Inspection:_ To determine if and how a tensorized instruction
can be applied to a tensor operation, we developed an analysis pass in the
_Inspector_ component of UNIT, which analyzes the _tensor Op_ data structure
of both the instruction and the operation. The result of analysis will guide
the loop reorganization and instruction injection.
3. 3.
_Code Rewriter:_ Once the tensorized instruction is determined applicable, the
Rewriter reorganizes the loop nests in accordance with the Inspector so that
the innermost loop nests resemble the tensorized instruction and are ready to
be replaced. Finally, it sets up the tuning space for the remaining loop nests
to exploit high performance.
These components of UNIT together enable a unified compilation flow to
simplify the mapping of tensorized instructions across a variety of hardware
platforms. In the rest of this section, the details of each of the above steps
will be discussed.
Figure 4: Tensorized instructions as abstracted in the tensor DSL.
### III-A Semantics Abstraction - Tensor DSL
In order to unify the compilation of tensorized instructions from different
platforms and keep the system open to integrate new instructions, the first
question to answer is how to have a unified description of the semantics of
tensorized instructions. As explained in Section II, we employ ubiquitous
tensor DSL and tensor IR to solve the abstraction problem. All mixed precision
tensorized instructions perform some elementwise operations for vectors,
followed by a horizontal reduction. Each tensorized instruction, therefore,
can be regarded as a small tensor operation program written in the tensor DSL.
Figure 4(a) shows how an Intel VNNI instruction is described in the tensor
DSL. Three source operands of Intel VNNI are 512-bit registers. Two of them
are 64 lanes of unsigned 8-bit integers (uint8) and signed 8-bit integers
(int8), and the other one is 16 lanes of signed 32-bit integers (int32), which
correspond to the tensors a, b, c we defined. The arithmetic behavior is
defined by the loop variables and the expression of d[i]. Here we annotate
that loop i is data parallel, since these 16 elements are independent from
each other; loop j is reduction since for every independent element it sums up
4 elements along this loop. A similar loop pattern appears in the other
tensor operations shown in Figure 5. The description of ARM DOT, shown in
Figure 4(b), is similar to Intel VNNI, with a different number of lanes and
data types.
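Under the same conventions, our reconstruction of the Figure 4(a) declaration in TVM's te DSL could read as follows (the paper's exact listing may differ; the accumulator c would be folded in when the intrinsic is tensorized):

```python
from tvm import te

# Sketch of the Intel VNNI semantics from Figure 4(a): a, b are the 64-lane
# 8-bit register operands; d has 16 data-parallel int32 lanes (loop i), each
# reducing 4 widened products along the reduction loop j.
a = te.placeholder((64,), dtype="uint8", name="a")
b = te.placeholder((64,), dtype="int8", name="b")
j = te.reduce_axis((0, 4), name="j")          # reduction loop
d = te.compute(
    (16,),                                    # loop i: data parallel
    lambda i: te.sum(a[4 * i + j].astype("int32")
                     * b[4 * i + j].astype("int32"), axis=j),
    name="d",
)
```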
Nvidia Tensor Core, on the other hand, performs a $16\times 16\times 16$ matrix multiplication as shown in Figure 4(c). Compared with (a) and (b), a key
difference is that it requires the accumulator register to be the same as the
addition register (note the +=). This is due to the data type opaqueness of
the Tensor Core instruction, which prevents us from giving arbitrary initial
values for the accumulators.
We describe the semantics of each tensorized instruction in tensor DSL. The
deep learning compiler pipeline parses the operation into _tensor Op_ , which
preserves tensor information like the expression tree, the loop trip count,
and the array buffers. This information is essential for the analysis and
transformation passes in Inspector and Rewriter.
### III-B Applicability Detection - Inspector
To determine if a tensorized instruction can be applied to a tensor operation,
the Inspector pass uses a two-step approach. It first determines if (part of)
the tensor operation program and the instruction can be arithmetically
equivalent by checking a form of isomorphism between their associated
expression trees. After that, it inspects the data access pattern to confirm
the assembly operands can be prepared so as to guide the Rewriter
transformation.
Figure 5: An example of applying Intel VNNI to Conv using UNIT.

Algorithm 1: Determine the isomorphism between expression trees. _a_ is for the instruction, and _b_ is for the operation.

function Inspect(a, b)
  if a.type = b.type then
    if isleaf(a) $\land$ isleaf(b) then
      if a is not bound then
        bind[a] := b
      else if bind[a] $\neq$ b then
        return False
      end if
      return True
    else if isarith(a) $\land$ isarith(b) then
      cond := a.opcode = b.opcode
      cond := cond $\land$ Inspect(a.lhs, b.lhs)
      cond := cond $\land$ Inspect(a.rhs, b.rhs)
      return cond
    end if
  end if
  return False
end function
#### III-B1 Compute Isomorphism
Algorithm 1 shows the algorithm we adopt to determine the isomorphism of two
expression trees. It recursively traverses both trees and matches the data
type and opcode of each pair of nodes. Figure 5(b).1 shows that the two trees
of convolution and pbpdusd (an Intel VNNI instruction) are in exactly the same
topology and data type, so these two programs are arithmetically isomorphic.
This analysis also finds a mapping from the operands in the tensor program to
the operands in the tensorized instruction. As we explained, tensor operands
in the tensorized instruction are the abstraction for registers. Therefore, a
register cannot correspond to multiple data sources. This property still
requires further checks, which will be explained in the next section.
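For readers who prefer an executable form, here is our direct Python transcription of Algorithm 1 (the node attributes type, opcode, lhs, rhs and the predicates is_leaf/is_arith are assumptions about the expression-tree representation):

```python
def inspect(a, b, bind):
    """Match instruction tree `a` against operation tree `b`.

    `bind` maps each leaf (register operand) of `a` to exactly one operand
    of `b`: a register cannot correspond to multiple data sources.
    """
    if a.type != b.type:
        return False
    if is_leaf(a) and is_leaf(b):
        if a not in bind:
            bind[a] = b            # first occurrence: record the binding
            return True
        return bind[a] == b        # later occurrences must agree
    if is_arith(a) and is_arith(b):
        return (a.opcode == b.opcode
                and inspect(a.lhs, b.lhs, bind)
                and inspect(a.rhs, b.rhs, bind))
    return False
```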
#### III-B2 Array Access Isomorphism
Once compute isomorphism is determined, the next concern is how the data are
fed to this instruction. The enforcement explained in the last subsection
already determines each register operand only corresponds to one array in the
tensor operation. On top of this, we need to determine that each element in the
operand tensor corresponds to only one memory address in the tensor program
when mapping to the tensorized instruction. To map a tensor program to a
tensorized instruction, we need to know which loop levels are tensorized. We
enumerate the loop levels to be tensorized, and these loop levels will be
mapped to loops in the tensorized instruction. Note that only loops with the
same annotation (data parallel or reduction) can be mapped to each other. Then
we check if this enumerated mapping is feasible, by scanning each pair of
operand correspondence determined in the last paragraph. If the operand in the
tensor program is a constant, we just skip it (if it is a constant, the correspondence was already checked in the last section; the register simply corresponds to this constant). If the operand is a memory operation, we
inspect the index expressions of both memory operations in the operation and
instruction. We define:
* •
$A$ is the set of loop variables to be mapped to the tensorized instruction.
* •
$B$ is the set of loop variables of the tensorized instruction.
* •
$f:A\to B$ is the mapping we enumerate.
* •
$S(u):=\{x \mid x\text{ is a loop variable in the index expression }u\}$
* •
$S^{\prime}(u):=\{f(x) \mid x\in S(u)\cap A\}$
A mapping is considered feasible if every pair of memory operations' index expressions $(u,v)$, where $u$ is from the operation and $v$ is from the instruction, satisfies $S^{\prime}(u)\subseteq S(v)$. Figure 5(b).2 shows an example of inspection. If $S^{\prime}(u)$ is a proper subset of $S(v)$, the data loaded by the tensor operation must be broadcast along the instruction's loop variables that do not occur in $S^{\prime}(u)$, to fill all the register lanes. If the inclusion does not hold, some register lane corresponds to multiple memory addresses under this mapping, which is not realistic for code generation, so we should try another enumeration.
If there are multiple feasible mappings, we leave this as a dimension of code
tuning space. Once this mapping is determined, it will guide the further loop
transformation and code generation.
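The feasibility check itself is small; the following is our compact rendering of the definitions above (the loop-variable sets are assumed to be precomputed per index expression):

```python
def mapping_feasible(index_pairs, f):
    """Check S'(u) is a subset of S(v) for every index-expression pair (u, v).

    index_pairs: list of (S_u, S_v), where S_u is the set of loop variables
                 in the operation's index expression u and S_v the set in
                 the instruction's index expression v.
    f:           the enumerated mapping from the operation's tensorized
                 loop variables (A) to the instruction's (B), as a dict.
    """
    for s_u, s_v in index_pairs:
        s_prime = {f[x] for x in s_u if x in f}   # S'(u) per the definition
        if not s_prime.issubset(s_v):
            return False   # a register lane would alias multiple addresses
    return True
```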
### III-C Code Transformation - Rewriter
There are three phases in the code transformation: loop reorganization,
tensorized instruction replacement, and tuning.
#### III-C1 Loop Reorganization
As discussed in Subsection III-B, the inspector selects the loop levels to be
executed by the given instruction. To get poised for code generation, as shown
in Figure 5(c), we need to tile these loops and reorder them to the innermost
loop levels so that those innermost loops perform exactly the same semantics
as the instruction. As we explained, tensor DSL provides the capability to
reorganize the loops nests easily.
#### III-C2 Tensorized Instruction Replacement
After identifying the code region to be replaced by a tensorized instruction,
the code generator should prepare each operand of this instruction. It is
difficult to fully automate the operand preparation for different platforms
because of their diverse execution models and assembly formats. Therefore, we
formalize a unified programming interface to compiler developers to manually
specify the rules of operand generation. In this interface, each loop variable to be replaced, together with its coefficients in the index expression, is exposed.
For example, as shown in Figure 5(c), by analyzing the strides and trip counts of ki and ci, the array access c[x,y,c] will be transformed to a 16-lane vector; a[x,y,rc] will be vectorized along ci by 4 and broadcast along ki by 16; b[r,s,k,c] will be vectorized along ci by 4, and unrolled and concatenated along ki.
#### III-C3 Tuner
All the other loop levels that are not involved in instruction rewriting can
be reorganized to tune the performance. Here, we develop strategies to
optimize the performance of tensor programs on both CPU and GPU. The generic
philosophy is to exploit both fine- and coarse-grained parallelism. We also
developed specialized strategies because of the different execution models and
memory hierarchy.
CPU Tuning: On CPU, data-parallel loops are distributed to multiple threads to
achieve coarse-grained parallelism. On the other hand, the loop-carried
dependence in reduction loops introduces RAW hazards in the execution
pipeline. To avoid this penalty, and achieve instruction-level parallelism, we
reorder and unroll a small degree of data parallel loops below the innermost
reduction loop.
The CPU tuning space involves two dimensions: the degrees of unrolling and parallelization. We enumerate these two parameters and profile the execution
time to search for the best one. If the unrolling degree is too small, there
will not be enough independent instructions to fill in the idle penalty cycles
caused by RAW hazards. If it is too large, it will cause I-cache misses.
Similarly, the number of threads can be neither too few nor too many. If it is
too few, the computing cores would have insufficient utilization and memory
latency would not be hidden. Too many threads introduce context switching
overhead. We rely on the tuning process to look for the best combination.
Figure 6: Accumulating a p$\times$p “square window” avoids loop-carried data
dependences, and reuses buffered submatrices.
GPU Tuning: On GPU, coarse-grained parallelism is achieved by distributing the
data parallel loops across the streaming multiprocessors. Similar to CPU,
fine-grained parallelism is also achieved by reordering and unrolling a small
degree of data parallel loops to avoid the pipeline penalty caused by loop-
carried dependences. Moreover, on GPU, data reuse is explicitly managed by the
software. Therefore, as it is shown in Figure 6, we adopt an outer-product
style matrix multiply accumulation to reuse the buffered submatrices.
Besides the generic optimization, we also developed optimization mechanisms
specialized for DNN kernels. Among popular DNN models, there are many layers
with relatively small width and height and deep channels. We apply _dimension
fusion_ to layers with small width and height – these two dimensions are fused
into one to save the redundant padding. In addition, we apply _split
reduction_ to layers with deep channels. For a reduction loop with large trip
count, we can split it and parallelize each split segment on threadIdx. After
all the segments are done, we synchronize the threads and reduce the split segments in shared memory.
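The split-reduction idea can be sketched in a few lines (a conceptual model of ours, not GPU code; sizes are illustrative):

```python
import numpy as np

# Split reduction: a length-1024 reduction is split into segments of 64;
# each segment is reduced independently (one per threadIdx), and the
# partial sums are combined in shared memory after synchronization.
x = np.random.rand(1024).astype("float32")
segments = x.reshape(-1, 64)       # one row per parallel segment
partial = segments.sum(axis=1)     # per-thread partial sums
total = partial.sum()              # final combine after the sync
assert np.isclose(total, x.sum(), rtol=1e-4)
```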
## IV Implementation
In this section, we will discuss technical details in our implementation. UNIT
is implemented by extending Apache TVM [10], a full-stack deep learning
compiler, with tensorized instruction support. We leverage TVM’s tensor DSL,
tensor Op, tensor IR infrastructure, and the tuning infrastructure mechanisms
[11, 23] to generate high performance kernels. In addition, implementing UNIT
on top of TVM enables end-to-end model inference with other optimizations such
as operator fusion, in addition to tensorization.
### IV-A Inspector
The inspector pass is implemented by analyzing TVM’s ComputeOp data structure.
It matches the expression trees of both the instruction and the program and
enumerates mappings between the loop variables. We enumerate the loops from
the tensor’s innermost dimension to outermost dimension, and greedily return
the first eligible one because of the better potential data locality for inner
dimensions. The enumerated mapping provides us with the correspondence of loop
variables between the instructions and the tensor operations.
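A sketch of this greedy enumeration (ours; `feasible` is assumed to check both the annotation compatibility, data-parallel versus reduction, and the array-access isomorphism of Section III-B):

```python
from itertools import permutations

def find_mapping(op_loops, instr_loops, feasible):
    """Map the operation's loops onto the instruction's loops, greedily
    preferring inner loops for their better potential data locality."""
    inner_first = list(reversed(op_loops))     # innermost -> outermost
    for cand in permutations(inner_first, len(instr_loops)):
        f = dict(zip(cand, instr_loops))
        if feasible(f):
            return f                           # first eligible mapping wins
    return None
```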
### IV-B Rewriter
The following steps are performed by the Rewriter:
1. 1.
According to the loop correspondence analyzed by the inspector, we reorganize
the loops to be tensorized by tiling these loops by the trip counts of the
corresponding loops in the instruction, and reorder them to be the innermost
loops. These loops will be annotated with a tensorize pragma to guide the instruction injection.
2. 2.
Based on the strategies discussed in Section III-C, we reorganize the loops
above not involved in instruction rewriting to tune the performance.
3. 3.
We lower the manipulated loop nest to the tensor IR, and replace the loop body
annotated with the tensorize pragma with the target instructions, as shown in
Figure 5(c).
Steps 1 and 2 are achieved by invoking TVM scheduling primitives on the tensor
DSL level, and step 3 is a tensor IR transformation pass.
Figure 7: The code sketch of CPU tuning.
Next, we discuss the implementation of the tuning strategies discussed in the
last section.
CPU Tuning: The code sketch of tuned CPU code is shown in Figure 7. To
implement the tuning we discussed in Section III-C, we enumerate two breaking
points on the data parallel loop nest, which define how the loop levels are
parallelized and unrolled. A breaking point is defined by a _loop level_ and
_tiling factor_ , giving more flexibility to the division. Loops before the
first breaking point, will be fused and parallelized. Loops between these two
points will be executed in serialized order. Loops after the second breaking
point will be reordered to the innermost and unrolled.
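In TVM terms, the two-breaking-point sketch of Figure 7 could be realized as follows (our illustration on a matmul; the breaking points and factors are example values, not the paper's):

```python
from tvm import te

# Declare a matmul, then schedule it: fuse and parallelize loops before the
# first breaking point, keep middle loops serial, and unroll a small
# data-parallel loop below the innermost reduction loop.
A = te.placeholder((256, 256), name="A")
B = te.placeholder((256, 256), name="B")
r = te.reduce_axis((0, 256), name="r")
C = te.compute((256, 256),
               lambda i, j: te.sum(A[i, r] * B[r, j], axis=r), name="C")
s = te.create_schedule(C.op)
i, j = C.op.axis
io, ii = s[C].split(i, factor=8)    # 1st breaking point: level i, factor 8
jo, ji = s[C].split(j, factor=8)    # 2nd breaking point: level j, factor 8
s[C].reorder(io, jo, ii, r, ji)     # ji sits below the reduction loop r
fused = s[C].fuse(io, jo)
s[C].parallel(fused)                # before the 1st point: parallelize
s[C].unroll(ji)                     # after the 2nd point: unroll
```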
GPU Tuning: As it is discussed in the last paragraph of Section III-C, both
coarse-grained and fine-grained parallelism optimizations are applied on data-
parallel loops, so there is a tradeoff between them: data reuse is increased
by increasing the unrolling degree (each buffered submatrix is reused p
times), but the coarse-grained parallelism is decreased. Also, a large
unrolling degree may overwhelm the register resources. Therefore, the key to
generic optimization is to choose a proper unrolling degree.
On the other hand, greedily applying each specialized optimization does not
always improve the performance. Though dimension fusion may save the memory
traffic, it also introduces software overhead on data rearrangement.
Similarly, though splitting the reduction loop introduces more parallelism, it
also introduces thread synchronization overhead and register pressure. We
enumerate each parameter, including the degree of reduction parallelization
and whether to fuse the width and height dimensions, and then apply these
transformations to the program and profile the performance to determine which
transformation leads to the best performance.
Figure 8: Quantized network inference (bs=1) accelerated by Intel VNNI.
Figure 9: Mixed precision network inference (bs=1) accelerated by Tensor Core.
## V Methodology
### V-A Target Hardware Platforms
We assess UNIT on three hardware platforms:
Intel x86 CPU: We use Amazon EC2 C5.12xlarge instance as our x86 platform with
24-core Intel Xeon Platinum 8275CL CPU @3.00GHz (codename: Cascade Lake) and
96GB memory.
ARM CPU: We use Amazon EC2 M6g.8xlarge instance as our ARM platform with AWS
Graviton2 CPU, which features 32-core ARM Cortex-A72 CPU @2.30GHz and 128GB
memory.
Nvidia GPU: We use Amazon EC2 P3.2xlarge instance as our GPU platform with
Nvidia Tesla V100 SXM2 GPU that has 16GB host memory.
### V-B Software Frameworks
Code Generation: All programs implemented in Apache TVM are emitted to LLVM IR
for code generation. We choose LLVM-10 as our backend, and to be compatible,
we use CUDA-10.0 as the NVPTX linker and runtime.
Baseline: We use vendor-provided libraries for baseline performance of
operators whenever possible. Specifically, Intel oneDNN v1.6.1 and Nvidia
cuDNN 7.6.5 are used as our CPU and GPU baselines, respectively. For end-to-
end model inference, we looked for the best available solutions with those
libraries, which was MXNet integrated with oneDNN for CPU and TVM integrated
with cuDNN for GPU. Another set of baselines is the manually written
implementation. To this end, we use the existing TVM solutions for Intel and
ARM CPUs, which involve heavy engineering effort to carefully write intrinsics
to use Intel VNNI and ARM DOT instructions. We did not find a manually written
Tensor Core implementation that covers our evaluated workloads.
### V-C Workloads
DNN Models: All DNN models are from the MXNet Model Zoo and converted to TVM’s
graph IR, Relay [32], for quantization [19], layout transformation, and data
padding. All these models adopt NCHW[x]c data layout [23] for the data and
KCRS[y]k[x]c for the kernel. Here N denotes the batch size, C denotes the
input channels, H and W are the width and height of the input image, and [x]c
denotes that the original C is split by x. Similarly, K denotes the number of
output channels, R and S are the height and width of the kernel, and [y]k
denotes that the original dimension K is split by y. [x] equals the number of lanes of the instruction output, and [y] equals the reduction width.
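For concreteness, the NCHW[x]c relayout with x = 16 can be sketched with plain numpy (our illustration; the shape is an example, not one of the evaluated layers):

```python
import numpy as np

# NCHW -> NCHW16c: split C into C//16 outer channels of 16 lanes, with the
# lanes moved innermost to match the instruction's output width.
N, C, H, W, x = 1, 64, 56, 56, 16
data = np.random.rand(N, C, H, W).astype("float32")
data_nchwxc = data.reshape(N, C // x, x, H, W).transpose(0, 1, 3, 4, 2)
assert data_nchwxc.shape == (N, C // x, H, W, x)
```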
In the evaluation, we target the N=1 cases, because they are hard to optimize yet critical for inference use cases. Compared with batched cases where N>1, we can neither reuse the kernel tensor across samples nor exploit the parallelism brought by the data-parallel batching dimension.
## VI Evaluation
Our evaluation of UNIT attempts to answer these questions:
1. 1.
What is the performance of the end-to-end deep learning model inference
powered by _tensorized_ instructions?
2. 2.
How does each optimization technique that UNIT uses impact the performance?
3. 3.
Can UNIT be extended to support new hardware platforms and tensor operations?
### VI-A End-to-End Performance
In this subsection, we show the UNIT end-to-end effectiveness on Intel x86 and
Nvidia GPU processors for tensorizing mixed precision instructions. For Intel
x86 experiments, we use MXNet integrated with Intel oneDNN (referred to as
MXNet-oneDNN) as the baseline. Another comparison of ours is TVM with manually
written schedules using Intel’s VNNI instruction. The findings of this
experiment are shown in Figure 8.
We observe that UNIT achieves significant speedup compared to MXNet-oneDNN.
Note that Intel oneDNN has access to manually written schedules that have been
aggressively optimized and tuned by domain experts. We also observe that TVM
overall achieves better performance than MXNet-oneDNN, but has suboptimal
performance on resnet50 and resnet50b, which were heavily tuned by oneDNN
engineers. On the other hand, UNIT outperforms both baselines, by 1.3$\times$
over MXNet-oneDNN and by 1.18$\times$ over TVM.
Next, we test the efficacy of UNIT on utilizing Nvidia Tensor Core
instructions for Nvidia GPUs. For the baseline, we integrate TVM with cuDNN,
which has access to manually written aggressively tuned Tensor Core schedules.
The findings of this experiment are shown in Figure 9. We observe that UNIT
consistently achieves better performance than cuDNN with a mean speedup of
1.75$\times$ and up to 2.2$\times$.
### VI-B Optimization Implications
In this subsection, we focus on the convolution operators of the DNN models to
perform an in-depth analysis of the impact of different optimization
techniques used by UNIT’s Rewriter. This is essentially an ablation study,
showing how important different parts of UNIT are. There are 148 different
convolution workloads (i.e., convolution with different feature map sizes,
kernel sizes, strides, etc.) in the models, out of which we choose 16
representative convolution layers. These kernels cover diverse input shapes
and strides. Other workloads behave similarly in the ablation study. We
summarize the characteristics of the selected workloads, namely convolution attributes such as shapes and strides, in Table I.
Intel x86 servers: As we discussed in Section III-C, we have two breaking
points in CPU scheduling. The loop nests before the first breaking point are
parallelized and the loop nests after the second breaking point are unrolled,
while the ones in between the breaking point are executed serially. As loop
nests can either be parallelized or unrolled (remaining one is serialized), we
have a search space represented by the tuning pairs. Rewriter tunes this
search space to generate a high-performance kernel. In this experiment, we
incrementally measure the performance improvements brought by parallelizing,
unrolling and tuning. The findings of this experiment are shown in Figure 11,
normalizing the speedup to Intel oneDNN execution latency.
First we fuse outer loop nests such that the loop bound of the fused loop nest
is $<$ 3000, and measure the latency of the resulting kernel (shown by
_Parallel_). Then, we take the remaining loop nests, and tile and unroll them
such the unrolling factor is $<$ 8, and measure this performance (shown by
_+Unroll_). Finally, instead of setting the limits as 3000 and 8, we tune the
search space and measure performance (shown by _+Tune_), getting the final
latency UNIT achieves. We observe that Parallel and Unroll together is
responsible for most of the speedup. The additional speedup introduced by
Tuning is quite small. It turns out that more than half of the kernels get the
optimal performance on the first tuning pair (i.e. 3000 and 8), and more than
95% of the kernels get the optimal performance within the first 8 tuning
pairs.
CPU does poorly on workloads #1 and #4, because their output shapes (OH/OW)
can neither be perfectly tiled nor fully unrolled. Inherited from TVM, loop
residues are handled by guarding them with a likely clause, which results in an if-branch that harms performance.
Nvidia GPU servers: As discussed in Section III-C, we employ three
optimizations on GPU: generic coarse- and fine-grained parallelism, fusing
width and height to save memory bandwidth, and parallelizing the reduction
dimension. In this subsection, we study the impact of these optimizations on
the performance. We show the findings in Figure 11, normalizing the speedup to
Nvidia cuDNN.
According to our evaluation, any unrolling degree (p in Figure 6) larger than
2 may overwhelm the registers, so we use p=2 to apply the generic
optimization. The generic optimization already beats cuDNN in most cases (shown
by _Generic_). Then, depending on the height and width values, Rewriter fuses
the height and width dimensions to save memory bandwidth (shown by
_+FuseDim_). Then, we split the reduction dimension K by 64 and measure the
performance (_+SplitK_). Finally, we let the Rewriter choose the sizes for
these 3 optimizations and measure performance (shown by _+Tune_).
We observe that SplitK leads to the maximal speedup, as it leads to
significant parallelism and keeps the Tensor Cores busy. More than 70% of the
kernels can get high performance by employing fusion and parallelizing the
reduction dimension. Similar to CPUs, the additional speedup by tuning is
small.
UNIT cannot outperform cuDNN on #1 and #15, because the strided data accesses
lead to less data locality. However, since these adversarial cases (both CPU
and GPU) only occupy a very small portion among all these models, we can still
outperform vendor-provided libraries because of the generality of our
optimization.
Figure 10: The performance impact of the code space exploration (Intel CPU).
Figure 11: The performance impact of the code space exploration (Nvidia GPU).

TABLE I: Characteristics of the selected convolution layers.

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| C | 288 | 160 | 1056 | 80 | 128 | 192 | 256 | 1024 | 128 | 576 | 96 | 1024 | 576 | 64 | 64 | 608 |
| IHW | 35 | 9 | 7 | 73 | 16 | 16 | 16 | 14 | 16 | 14 | 16 | 14 | 14 | 29 | 56 | 14 |
| K | 384 | 224 | 192 | 192 | 128 | 192 | 256 | 512 | 160 | 192 | 128 | 256 | 128 | 96 | 128 | 192 |
| R=S | 3 | 3 | 1 | 3 | 3 | 3 | 3 | 1 | 3 | 1 | 3 | 1 | 1 | 3 | 1 | 1 |
| Stride | 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 2 | 1 |
| OHW | 17 | 7 | 7 | 71 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 14 | 27 | 28 | 14 |
### VI-C Extensibility
We evaluate the extensibility of UNIT in two aspects: to new hardware
platforms and to new deep learning tensor operations. We observe that by just
representing the semantics of the new tensorized instruction in tensor DSL,
UNIT can easily extend to new tensorized instructions and tensor operations.
New Hardware Platforms: To demonstrate the capability of extending to new
hardware platforms, we apply UNIT to an ARM CPU supporting the ARM DOT
instruction. To the best of our knowledge, there is a lack of a deep learning
framework with well-integrated ARM backend library support. In the absence of
a framework baseline, we choose TVM compiling to ARM Neon assembly as the
baseline (shown by TVM-NEON). Additionally, we find that TVM has manually-
written schedules using ARM DOT instructions, which forms our second
comparison baseline (shown by TVM-Manual). Note that in contrast to UNIT’s
automatic approach, this is a manually written schedule requiring intense
engineering efforts. Finally, we represent the semantics of ARM DOT
instruction in UNIT’s tensor DSL and use UNIT to compile the models. The
findings of this experiment are shown in Figure 12, showing normalized speedup
compared to the TVM-NEON baseline. The results show that UNIT consistently
outperforms both TVM-NEON and TVM-Manual, proving UNIT’s effectiveness in
extending to new hardware platforms.
3D Convolution: We test UNIT on 3D convolution operation for mapping Intel
VNNI tensorized instructions. Note that this does not require any changes from
UNIT's perspective; we are just giving a new input (tensor-level IR for conv3d)
to UNIT. To evaluate this extensibility, we take all the 2D convolutions from
Resnet18 and manually convert them to 3D convolutions. We then apply UNIT on
these kernels and show the speedup compared to oneDNN baseline in Figure 13.
We observe that UNIT easily extends to 3D convolution, as it has comparable
performance for many convolution kernels, with an average of 1.2$\times$
speedup.
Figure 12: Model inference performance on the ARM CPU.
Figure 13: The performance of each layer on res18-3d.
## VII Related Work
Compilation support for hardware intrinsics: There exists a large body of
literature on compilation support for various hardware intrinsics [33, 20, 27,
29, 16, 22, 28, 15, 36, 35]. Existing production compilers such as GCC and
LLVM implement auto-vectorization to leverage SIMD intrinsics. Prior works
such as [20, 33] propose various approaches to further improve the performance
of the auto-vectorizer. These approaches cannot be extended to support tensor
computation intrinsics which introduce “horizontal computation” within each
lane. TVM [10] implements an extensible interface to support new hardware
intrinsics that are not limited to SIMD instructions. However, programmers
need to transform the program to match the behavior of the intrinsics and
declare the lowering rule for the intrinsics prior to compilation. TVM will
match the computation and replace it with the code snippets that call the
target hardware intrinsics. Compared to TVM, UNIT performs the code detection
and transformation automatically. This achieves higher flexibility and
productivity. There are some prior works that, similar to UNIT, also perform
program transformation and code generation automatically for tensor
computation [36, 35]. However, these are limited to one platform or certain
intrinsics and hence are not as flexible as UNIT.
Decoupled Computation and Data Access: The analysis pass of UNIT is inspired
by decoupled access-execute (DAE) architectures [21, 30, 26, 41, 13, 42, 40],
in which computation and data access are decoupled and specialized separately:
the computation is offloaded onto a programmable data path, and the data
access is encoded in hardware intrinsics and executed on a specialized address
generation unit (AGU). UNIT adopts the reverse approach: it matches
computation on a fixed data path and analyzes the data access fed to that data
path.
Polyhedral model: Many prior works have built program analysis and
transformation frameworks based on the polyhedral model for tensor programs
[20, 36, 35, 15, 37, 12, 25, 38]. Loop Tactics [9] is one representative work,
which matches pre-defined computation patterns in the polyhedral IR and
transforms the matched patterns into optimized programs. UNIT distinguishes
itself from Loop Tactics in two ways: 1) compared with the schedule tree [39]
of the polyhedral model, the tensor DSL provides more information, such as
loop reduction properties and operand types; 2) UNIT provides an end-to-end
solution including auto-tuning to obtain optimal performance, whereas Loop
Tactics requires the optimized schedules to be provided manually.
Deep learning frameworks: UNIT is complementary to existing deep learning
frameworks. Existing frameworks such as Tensorflow [8], PyTorch [7], and MXNet
[3] rely on vendor-crafted libraries to support new tensor intrinsics, and TVM
[10] requires code rewriting on the user side. UNIT is able to handle new
operators that might not be covered by the vendor libraries, and it spares the
user from performing manual rewriting. We have demonstrated the effectiveness
of UNIT's methodology on top of TVM; a similar technique can be applied to
other frameworks to further boost their performance.
## VIII Conclusion
Deep learning has prompted hardware vendors to add specialized tensorized
instructions for dense tensor operations. These instructions perform a
“horizontal reduction” that accumulates elementwise computations. While
promising, this new idiom complicates general-purpose applicability, as one
has to rely on hand-written kernels to obtain the high performance offered by
these instructions. In this paper, we introduce UNIT, a unified compilation
pipeline that represents the tensorized instructions of different hardware
platforms in the same IR, automatically detects the applicability of a
tensorized instruction in a given tensor operation, transforms the loop nest
to enable easy mapping of the tensorized instruction, and finally rewrites the
loop body with the tensorized instruction. UNIT enables automatic tensorized
instruction compilation across a variety of hardware platforms, including
Intel/ARM CPUs and Nvidia GPUs. Our evaluation shows that UNIT achieves
1.3$\times$ speedup over oneDNN (VNNI instruction), 1.75$\times$ over cuDNN
(Tensor Core instruction), and 1.13$\times$ over the manually written ARM
intrinsics in TVM (DOT instruction).
## Acknowledgements
This work is supported by NSF grant CCF-1751400 and Mu Li’s team at Amazon Web
Services.
## Appendix A Artifact Appendix
### A-A Abstract
This guide describes how to set up the _UNIT_ compilation infrastructure and
run the workloads we discussed in Section VI. This guide provides instructions
to:
* •
Set up the experiment environment for _UNIT_ through Docker.
* •
Run the end-to-end inference models shown in Figures 8, 9, and 12.
* •
Run the experiments demonstrating the effects of our tuning strategies, shown
in Figures 10 and 11.
* •
Run the 3D convolution experiments shown in Figure 13.
Our experiments are conducted on Amazon EC2: c5.12xlarge for Intel VNNI,
p3.2xlarge for Nvidia TensorCore, and m6g.8xlarge for ARM VDOT. Downloading
and installing our infrastructure requires approximately 32GB of disk space.
We provide a Dockerfile to set up the environment, and scripts to
automatically run the experiments and plot the figures.
### A-B Artifact Checklist
* •
Program: As demonstrated in Section VI, we use nine typical DNN models,
including ResNet, ImageNet, and MobileNet.
* •
Compilation: We need specific versions of TVM to run our experiments and
baselines. They are included in the zip release.
* •
Data set: The test data is included in our zip release.
* •
Runtime environment: We run our artifact entirely on Ubuntu 18.04. For the
GPU, the Nvidia GPU driver and an additional Docker runtime should be
installed.
* •
Hardware: We run our experiments on AWS EC2 instances — c5.12xlarge for Intel
VNNI, p3.2xlarge for Nvidia TensorCore, and m6g.8xlarge for ARM DOT.
* •
Execution: We provide scripts to run the experiments discussed in Section VI.
It takes 2 hours to compile the models in Figure 8, half an hour to compile
the models in Figure 9, and 1.4 hours to compile the models in Figure 12. It
takes half an hour to run the experiments in Figures 10 and 11.
* •
Output: Our scripts both run the experiments and plot the figures in PDF
files.
* •
Experiments: The results reported in our paper are generated by a physical
machine, but in this artifact evaluation they all run on a virtual machine in
Docker. Performance fluctuation may happen because of the overhead of
virtualization.
### A-C Description
#### A-C1 How Delivered
Download our Dockerfile, scripts, and model test data at
https://doi.org/10.5281/zenodo.4420522.
#### A-C2 Hardware Dependencies
* •
AVX512_VNNI: This is available on Intel CPUs with the Cascade Lake
architecture. In this work, we use AWS EC2 c5.12xlarge. The CPU model is
Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz. The rate is $2.04/hour, and it
takes approximately one hour to set up the environment and 5 hours to run all
the related experiments.
* •
TensorCore: This is available on Nvidia GPUs with the TensorCore extension.
In this work, we use AWS EC2 p3.2xlarge. The GPU model is Tesla V100. Please
install the GPU driver. The rate is $3.06/hour, and it takes approximately 1
hour to set up the environment and another hour to run all the related
experiments.
* •
ARM VDOT: This is available on ARMv8.2 CPUs with the dotprod extension. In
this work, we use AWS EC2 m6g.8xlarge. The CPU model is Amazon Graviton 2. The
rate is $1.232/hour, and it takes 1 hour to set up the environment and run the
experiments.
#### A-C3 Software Dependencies
All our software dependencies are installed automatically in Docker. Refer to
this link for Docker installation; when setting up the last step of the
package repository, be sure to choose the proper tab for your CPU platform
(x86 or ARM). Refer to this to install Docker with Nvidia GPU support. Nvidia
Docker requires the GPU driver to be installed; use this command to install
it:
$ sudo apt-get install nvidia-driver-455
### A-D Installation
Unzip the downloaded file; there are three sub-zips (tensorcore.zip, vnni.zip,
and arm.zip) corresponding to the three platforms we discussed in this paper.
### A-E Experiment Workflow
#### A-E1 GPU
We run the TensorCore experiment on an AWS EC2 p3.2xlarge instance.
* •
After building the docker image, an image hash value will be generated in the
console log:
$ unzip tensorcore.zip && cd tensorcore
$ sudo docker build . # 20 mins to build
$ sudo docker run -tid --runtime=nvidia <image>
$ sudo docker attach <container>
* •
After entering the container, the experiment scripts are all in $HOME
directory:
$ cd $HOME
* •
To replicate the experiments shown in Figures 9 and 11:
$ bash run_e2e.sh # Fig.9: e2e.pdf
$ bash run_ablation.sh # Fig.11: gpu-dse.pdf
* •
It takes half an hour to run these two scripts. Both the experiments and the
data plotting are done by these scripts. Use the following commands to take
the generated PDFs out of the container and view them:
$ <ctrl-p><ctrl-q> # Temporarily detach
$ sudo docker cp <container>:/root/e2e.pdf gpu-e2e.pdf
$ sudo docker cp <container>:/root/gpu-dse.pdf .
#### A-E2 CPU
We run the Intel VNNI experiment on an AWS EC2 c5.12xlarge instance. It is
also used to cross-compile ARM target.
* •
After building the docker image, an image hash value will be generated in the
console log:
$ unzip vnni.zip && cd vnni
$ sudo docker build .
$ sudo docker run -tid <image>
$ sudo docker attach <container>
* •
After entering the container, the experiment scripts are all in $HOME
directory:
$ cd $HOME
* •
To replicate the experiments shown in Figures 8, 10, and 13:
$ bash run_e2e.sh # Fig.8: e2e.pdf
$ bash run_ablation.sh # Fig.10: cpu-dse.pdf
$ bash run_3d.sh # Fig.13: conv3d.pdf
* •
It takes about 2.5 hours to run these experiments, and you can use the
following commands to take out these plotted figures and look at them:
$ <ctrl-p><ctrl-q> # Temporarily detach
$ sudo docker cp <container>:/root/e2e.pdf .
$ mv e2e.pdf cpu-e2e.pdf # Avoid conflict
$ sudo docker cp <container>:/root/cpu-dse.pdf .
$ sudo docker cp <container>:/root/conv3d.pdf .
* •
Use the following script to run ARM target compilation:
$ bash run_arm.sh
It takes about two hours to get all the models compiled for ARM. The compiled
models will be in $HOME/arm-base and $HOME/arm-unit.
* •
Copy the compiled model to the ARM machine:
$ scp -i key.pem -r arm-unit <arm-machine>:~
$ scp -i key.pem -r arm-base <arm-machine>:~
$ ssh -i key.pem <arm-machine>
* •
Set up the ARM environment and run the experiments on ARM machine:
$ unzip arm.zip && cd arm
$ mv ../arm-unit .
$ mv ../arm-base .
$ sudo docker build .
$ sudo docker run -tid <image>
$ sudo docker attach <container>
$ cd $HOME && bash run_e2e.sh
<ctrl-p> <ctrl-q>
$ sudo docker cp \
<container>:/root/baseline.result .
$ sudo docker cp \
<container>:/root/tensorize.result .
* •
Bring these two .result files to an x86 machine, and plot the graph:
$ python plot_e2e.py baseline.result tensorize.result
# Fig. 12
$ mv e2e.pdf arm-e2e.pdf
### A-F Evaluation and Expected Result
Finally, we have these PDF files:
* •
Figures 8, 9, and 12 should be compared against cpu-e2e.pdf, gpu-e2e.pdf, and
arm-e2e.pdf, respectively.
* –
The ARM results reported in this paper were generated by an old version of
TVM; the performance is improved in the newer version. We will fix this in the
camera-ready version.
* •
Figures 10 and 11 should be compared against cpu-dse.pdf and gpu-dse.pdf,
respectively.
* •
Figure 13 should be compared against conv3d.pdf.
## References
* [1] Exploring the Arm dot product instructions. https://community.arm.com/developer/tools-software/tools/b/tools-software-ides-blog/posts/exploring-the-arm-dot-product-instructions, 2017.
* [2] Introduction to Intel deep learning boost on second generation Intel Xeon scalable processors. https://software.intel.com/content/www/us/en/develop/articles/introduction-to-intel-deep-learning-boost-on-second-generation-intel-xeon-scalable.html, 2019.
* [3] Apache MXNet | a flexible and efficient library for deep learning. https://mxnet.apache.org/versions/1.6/, 2020.
* [4] Nvidia CUDA® deep neural network library (cuDNN). https://developer.nvidia.com/cudnn, 2020.
* [5] Nvidia tensor cores. https://www.nvidia.com/en-us/data-center/tensor-cores/, 2020.
* [6] oneAPI deep neural network library (oneDNN). https://github.com/oneapi-src/oneDNN, 2020.
* [7] Pytorch. https://pytorch.org/, 2020.
* [8] Tensorflow. https://www.tensorflow.org/, 2020.
* [9] Lorenzo Chelini, Oleksandr Zinenko, Tobias Grosser, and Henk Corporaal. Declarative loop tactics for domain-specific optimization. ACM Transactions on Architecture and Code Optimization (TACO), 16(4):1–25, 2019.
* [10] Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et al. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 578–594, 2018.
* [11] Tianqi Chen, Lianmin Zheng, Eddie Yan, Ziheng Jiang, Thierry Moreau, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. Learning to optimize tensor programs. In Advances in Neural Information Processing Systems, pages 3389–3400, 2018.
* [12] Jason Cong and Jie Wang. PolySA: Polyhedral-based systolic array auto-compilation. In 2018 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), pages 1–8. IEEE, 2018.
* [13] Vidushi Dadu and Tony Nowatzki. Towards general purpose acceleration by exploiting common data-dependence forms. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019.
* [14] J. Deng, W. Dong, R. Socher, L. Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255, 2009.
* [15] Andi Drebes, Lorenzo Chelini, Oleksandr Zinenko, Albert Cohen, Henk Corporaal, Tobias Grosser, Kanishkan Vadivel, and Nicolas Vasilache. TC-CIM: Empowering tensor comprehensions for computing-in-memory. In IMPACT 2020-10th International Workshop on Polyhedral Compilation Techniques, 2020.
* [16] Alexandre E Eichenberger, Peng Wu, and Kevin O’brien. Vectorization for simd architectures with alignment constraints. Acm Sigplan Notices, 39(6):82–93, 2004.
* [17] Qingchang Han, Yongmin Hu, Fengwei Yu, Hailong Yang, Bing Liu, Peng Hu, Ruihao Gong, Yanfei Wang, Rui Wang, Zhongzhi Luan, and Depei Qian. Extremely low-bit convolution optimization for quantized neural network on modern computer architectures. In ICPP ’20: 49th International Conference on Parallel Processing - ICPP, 2020.
* [18] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew G. Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient journal. CoRR, abs/1712.05877, 2017.
* [19] Animesh Jain, Shoubhik Bhattacharya, Masahiro Masuda, Vin Sharma, and Yida Wang. Efficient execution of quantized deep learning models: A compiler approach. arXiv preprint arXiv:2006.10226, 2020.
* [20] Martin Kong, Richard Veras, Kevin Stock, Franz Franchetti, Louis-Noël Pouchet, and Ponnuswamy Sadayappan. When polyhedral transformations meet SIMD code generation. In Proceedings of the 34th ACM SIGPLAN conference on Programming language design and implementation, pages 127–138, 2013.
* [21] Hyoukjun Kwon, Ananda Samajdar, and Tushar Krishna. MAERI: Enabling flexible dataflow mapping over DNN accelerators via reconfigurable interconnects. SIGPLAN Not., 53(2):461–475, March 2018.
* [22] Samuel Larsen and Saman Amarasinghe. Exploiting superword level parallelism with multimedia instruction sets. Acm Sigplan Notices, 35(5):145–156, 2000.
* [23] Yizhi Liu, Yao Wang, Ruofei Yu, Mu Li, Vin Sharma, and Yida Wang. Optimizing CNN model inference on CPUs. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), pages 1025–1040, Renton, WA, July 2019. USENIX Association.
* [24] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory F. Diamos, Erich Elsen, David García, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed precision training. CoRR, abs/1710.03740, 2017.
* [25] MLIR. Multi-level IR compiler framework. https://mlir.llvm.org.
* [26] Tony Nowatzki, Vinay Gangadhar, Newsha Ardalani, and Karthikeyan Sankaralingam. Stream-dataflow acceleration. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), 2017.
* [27] Dorit Nuzman and Richard Henderson. Multi-platform auto-vectorization. In Proceedings of the International Symposium on Code Generation and Optimization, CGO ’06, page 281–294, USA, 2006. IEEE Computer Society.
* [28] Dorit Nuzman, Ira Rosen, and Ayal Zaks. Auto-vectorization of interleaved data for SIMD. In Michael I. Schwartzbach and Thomas Ball, editors, Proceedings of the ACM SIGPLAN 2006 Conference on Programming Language Design and Implementation, Ottawa, Ontario, Canada, June 11-14, 2006, pages 132–143. ACM, 2006.
* [29] Phitchaya Mangpo Phothilimthana, Archibald Samuel Elliott, An Wang, Abhinav Jangda, Bastian Hagedorn, Henrik Barthels, Samuel J Kaufman, Vinod Grover, Emina Torlak, and Rastislav Bodik. Swizzle inventor: data movement synthesis for GPU kernels. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 65–78, 2019.
* [30] Raghu Prabhakar, Yaqi Zhang, David Koeplinger, Matt Feldman, Tian Zhao, Stefan Hadjis, Ardavan Pedram, Christos Kozyrakis, and Kunle Olukotun. Plasticine: A reconfigurable architecture for parallel paterns. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), 2017.
* [31] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. In Proceedings of the 34th ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI ’13, pages 519–530, New York, NY, USA, 2013. ACM.
* [32] Jared Roesch, Steven Lyubomirsky, Logan Weber, Josh Pollock, Marisa Kirisame, Tianqi Chen, and Zachary Tatlock. Relay: A new IR for machine learning frameworks. In Proceedings of the 2nd ACM SIGPLAN International Workshop on Machine Learning and Programming Languages, MAPL 2018, page 58–68, New York, NY, USA, 2018. Association for Computing Machinery.
* [33] Ira Rosen, D. Nuzman, and A. Zaks. Loop-aware SLP in GCC. pages 131–142, 01 2007.
* [34] Nadav Rotem, Jordan Fix, Saleem Abdulrasool, Summer Deng, Roman Dzhabarov, James Hegeman, Roman Levenstein, Bert Maher, Satish Nadathur, Jakob Olesen, Jongsoo Park, Artem Rakhov, and Misha Smelyanskiy. Glow: Graph lowering compiler techniques for neural networks. CoRR, abs/1805.00907, 2018.
* [35] Vinod Grover Somashekaracharya G. Bhaskaracharya, Julien Demouth. Automatic kernel generation for Volta tensor cores. arXiv preprint arXiv:2006.12645, 2020.
* [36] Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, and Bharat Kaul. PolyDL: Polyhedral optimizations for creation of high performance DL primitives. arXiv preprint arXiv:2006.02230, 2020.
* [37] Nicolas Vasilache, Oleksandr Zinenko, Theodoros Theodoridis, Priya Goyal, Zachary DeVito, William S Moses, Sven Verdoolaege, Andrew Adams, and Albert Cohen. Tensor comprehensions: Framework-agnostic high-performance machine learning abstractions. arXiv preprint arXiv:1802.04730, 2018.
* [38] Sven Verdoolaege, Juan Carlos Juega, Albert Cohen, Jose Ignacio Gomez, Christian Tenllado, and Francky Catthoor. Polyhedral parallel code generation for CUDA. ACM Transactions on Architecture and Code Optimization (TACO), 9(4):1–23, 2013.
* [39] Sven Verdoolaege, Serge Guelton, Tobias Grosser, and Albert Cohen. Schedule trees. In International Workshop on Polyhedral Compilation Techniques, Date: 2014/01/20-2014/01/20, Location: Vienna, Austria, 2014.
* [40] Z. Wang and T. Nowatzki. Stream-based memory access specialization for general purpose processors. In 2019 ACM/IEEE 46th Annual International Symposium on Computer Architecture (ISCA), pages 736–749, 2019.
* [41] J. Weng, S. Liu, V. Dadu, Z. Wang, P. Shah, and T. Nowatzki. DSAGEN: Synthesizing programmable spatial accelerators. In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), pages 268–281, 2020.
* [42] J. Weng, S. Liu, Z. Wang, V. Dadu, and T. Nowatzki. A hybrid systolic-dataflow architecture for inductive matrix algorithms. In 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), pages 703–716, 2020.
* [43] XLA Team. Xla - tensorflow, compiled, March 2017.
* [44] D. Yan, W. Wang, and X. Chu. Demystifying tensor cores to optimize half-precision matrix multiply. In 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 634–643, 2020.
# Fire Threat Detection From Videos with Q-Rough Sets
Debarati B. Chakraborty<EMAIL_ADDRESS>, Vinay Detani<EMAIL_ADDRESS>, Shah
Parshv Jigneshkumar<EMAIL_ADDRESS>
Dept. of Computer Science and Engineering, Indian Institute of Technology,
Jodhpur, India
###### Abstract
This article defines new methods for unsupervised fire region segmentation and
fire threat detection from video streams. Fire under control serves a number
of purposes for human civilization, but it can simultaneously be a threat once
its spread becomes uncontrolled. There exist many methods for fire region
segmentation and fire/non-fire classification, but approaches to determine the
threat associated with a fire are relatively scarce, and no such unsupervised
method has been formulated yet. Here we focus on developing an unsupervised
method with which the threat of fire can be quantified, and an alarm
accordingly generated, in automated surveillance systems both indoors and
outdoors. Fire region segmentation without any manual intervention or labelled
data set is a major challenge in formulating such a method. Here we use rough
approximations to approximate the fire region and to manage the incompleteness
of the knowledge base due to the absence of any prior information. Utility
maximization from Q-learning is used to minimize ambiguities in the rough
approximations. The new set approximation method thus developed is named
Q-rough set. It is used for fire region segmentation from video frames. The
threat index of a fire flame over the input video stream is defined in sync
with the relative growth of the fire segments in the recent frames. All
theories and indices defined here have been experimentally validated on
different types of fire videos, through demonstrations and comparisons, and
shown to be superior to the state of the art.
###### keywords:
Fire segmentation, video processing, rough sets, Q-learning, fire threat
detection, granular computing
## 1 Introduction
Fire is both a threat and a necessity for mankind. We light fires to fulfill
our needs (e.g., for cooking or generating heat), but fire can simultaneously
be a threat if it becomes uncontrolled. Some objects are easily flammable
(e.g., clothes, plastics) and some burn out gradually (e.g., candles, or wood
and logs in the fireplace). We burn the latter kind of materials to meet our
needs in a controlled environment, whereas fires involving the former group
can be dangerous. Fire spreads quite fast among flammable objects, and from
one object several others catch fire. Therefore, measuring the spread of the
fire could be a solution for automated fire detection systems to estimate the
possible threat of the visible fire. Here we have tried to come up with a
solution in which the fire threat can be detected automatically with only a
surveillance camera, so that no other sensors are required. Besides, we have
tried to make the method fit for both indoor and outdoor surveillance.
Proper classification of fire regions from non-fire ones and automated
segmentation of the fire region are very important steps in quantifying the
threat of a fire. The aims of this work can be summarized as follows: i) to
develop an unsupervised method for fire pixel classification, ii) to identify
fire regions in different varieties of videos, and iii) to determine the
threat associated with the fire flame over the input video stream. In this
article we formulate a new method of fire region segmentation in videos using
a hybridization of rough sets and Q-learning. Besides, we also define a new
measure, namely the fire threat index, to quantify the threat associated with
a fire in a video stream. Before discussing other methods of vision-based fire
detection, in this section we briefly discuss the key concepts of rough set
theory and Q-learning relevant to this article.
The theory of rough sets, as introduced by Pawlak [1], has become a popular
mathematical framework for granular computing. The focus of the theory is on
the ambiguity caused by limited discernibility of objects in the domain of
discourse. Its key concepts are those of object ’indiscernibility’ and ’set
approximation’. Two major characteristics of the theory that have drawn the
attention of applied researchers are uncertainty handling (using lower and
upper approximations) and granular computing (using information granules). The
theory of rough sets has proven successful in many areas, such as feature
extraction from video streams with information flow [2], online multi-label
feature selection from streaming data [3], and information-entropy-based fast
feature selection for big data analytics [4]. We aim to develop an
unsupervised fire detection method in this article. Since we have limited
access to the knowledge base while identifying fire in a video frame, we
approximate the fire region with the lower and upper approximations of rough
sets.
Q-learning is a reinforcement learning algorithm [5]. The primary difference
between Q-learning and other reinforcement learning techniques is that it is
model-free [6], i.e., the agent does not need any state-transition model,
either for learning or for action selection. Q-learning prefers the best
Q-value from the state reached in the observed transition [7]. The actual
policy being followed by the agent is not taken into account in Q-learning,
which is why it is also known as off-policy control. In Q-learning, the agent
learns the action-utility function (or Q-function) to estimate the expected
utility of choosing a certain action in a particular state.
In this work we do not consider any prior information on the fire region or
the possible spread of the fire pixels. All information is to be determined by
the unsupervised process itself. Therefore, the proposed solution starts from
an initial rough approximation of the fire regions, and then employs a Q-agent
to minimize, through utility maximization, the ambiguities present in the
rough approximation. This is the broad idea underlying the Q-rough set with a
utility-based agent. The states that the Q-agents occupy are different
granules (clumps of data points [8]), and the actions are insertion or
deletion of those granules from fire regions. The same Q-agents are then
employed over the video stream to extract the fire regions in each frame. The
fire threat index is then formulated using the segmented fire regions over the
video stream.
The novelty of the technique described in this article can be summarized as:
i) definition and application of the Q-rough set (which uses unsupervised set
estimation) to minimize ambiguities in rough approximations, ii) segmentation
of fire regions in a video frame with Q-rough sets, iii) employment of
Q-agents over a sequence of frames to extract fire regions in each frame, and
iv) formulation of a fire threat index to quantify the threat of a fire flame
in a video stream. All the theoretical formulations have been experimentally
verified and shown to be effective compared to state-of-the-art methods.
The rest of the article is organised as follows. A few state-of-the-art
methods for fire detection are discussed in Section 2. The underlying steps of
the proposed work are described in Section 3. The Q-rough set is defined in
Section 4, along with brief descriptions of rough sets and Q-learning. The
method of fire region segmentation with Q-rough sets is developed in Section
5. The quantification of the threat of a fire flame is carried out in Section
6. The qualitative and quantitative experimental results of the proposed
methods are given in Section 7, along with suitable comparative studies. The
overall conclusions of this article are drawn in Section 8.
## 2 Related Work
The problem of fire detection from images has been addressed for over a decade
[9]. Here we discuss a few benchmark methods. The problem was first addressed
by Chen _et al._ [10], where fire was detected with RGB values and a rule
base. Fernandes et al. then developed a method for forest fire detection by
classifying lidar signals with a neural network committee machine in [11].
Flame detection by modelling the RGB distributions of fire pixels with a
mixture of Gaussians (MoG) model [12], and with motion information and a
hidden Markov model (HMM) [13], was then done by Toreyin _et al._ Fire pixel
identification in videos was then carried out by Celik [14], where the CIE
L*a*b color space was used to identify fire pixels. A method of forest fire
detection incorporating static and dynamic features of videos in the HSV color
space was proposed by Zhao _et al._ in [15]. Chino _et al._ then developed a
way of detecting fire in still images by combining color and texture features
in [16]. A method of fire detection in which the spatio-temporal consistency
energy of each candidate fire region is estimated was proposed by
Dimitropoulos _et al._ [17], using prior knowledge about the possible
existence of fire in neighboring blocks of the current and previous video
frames; an SVM-based classification of fire and non-fire regions was also
executed there. Recently, a couple of fire detection methods have been
developed with deep learning techniques [18]. Muhammad _et al._ [19] came up
with a solution incorporating deep CNN features in fire detection, and
developed high-priority cameras based on cognitive radio networks. Kim and Lee
[20] developed a method of detecting fire with a faster region-growing method
using a deep CNN (convolutional neural network). Cai et al. [21] formulated an
improved deep CNN that uses a global average pooling layer instead of the
fully connected layer to fuse the acquired depth features and detect fire.
All the methods discussed so far either require some initial information about
fire patterns or need initial manual labelling for training. Most of the
methods are focused on specific applications, such as forest fire, indoor
fire, or outdoor fire. The method that we develop here is completely
unsupervised and does not need any prior information or manual intervention.
Besides, it is very effective in classifying fire pixels in any type of video
frame; that is, it is a general method for fire segmentation, applicable
anywhere for fire detection. The method of quantifying the fire threat is also
new to the literature. We describe the basic steps of the method in the
following section.
## 3 Proposed Work
Our work can be subdivided into two parts, viz. i) detection of fire in videos
and ii) determination of the spread of fire, i.e., quantification of the
threat associated with the fire. The step-wise formulations of the two methods
are shown in Figs. 1(a) and 1(b).
Figure 1: Block diagram of (a) fire region segmentation and (b) threat
detection methods
The proposed fire-flame detection method, shown in Fig. 1(a), accumulates
information from different color spaces and collates the information
judiciously to classify the fire pixels correctly in the video frames. Here
Q-learning is ’rough’ because the state-action policy is determined from the
lower approximated region (the obvious fire region), and the ambiguity between
the two approximations is minimized by utility maximization of the Q-agents.
The same Q-agents are then employed over the rest of the video sequence for
faster decision making over known states. This part is described in detail in
Section 5.
In the second part of the work, shown in Fig. 1(b), we aim to quantify the
threat associated with the fire flame. Here the relative growth of fire over
the recent past of the video stream is considered. The average fire-flame
region throughout the video stream and over a few recent frames are computed
in this part. The fire threat index ($\mathcal{T_{F}}$) is then quantified
from this information. This part is explained in Section 6.
## 4 Formulation of Q-Rough Sets
This section discusses the formulation of Q-rough sets. Prior to that
formulation, the basics of rough set theory and Q-learning have been briefly
described.
### 4.1 Rough Sets
Let there be an information system $S=(\mathcal{U},A)$, where $\mathcal{U}$ is
the universe and $A$ is the set of attributes. For any set $B\subseteq A$,
there is an equivalence relation $IND(B)$ such that
$IND(B)=\\{(x,y)\in\mathcal{U}^{2}|\forall p\in B,p(x)=p(y)\\}$, where the
function $p(x)$ returns the value of the attribute $p$ for the data point $x$.
The relation $IND(B)$ is called the B-indiscernibility relation, and any two
points $(x,y)\in IND(B)$, i.e., satisfying the B-indiscernibility relation,
indicate that $x$ and $y$ cannot be distinguished using the attribute set $B$.
Let the equivalence classes of the B-indiscernibility relation be denoted by
$[x]_{B}$, and let $\mathcal{U}|B$ denote the set of all such equivalence
classes. Here $[x]_{B}$ is called a ’granule’ around the data point $x$,
created by the B-indiscernibility relation. (As stated before, a granule is a
clump of objects which cannot be discriminated with a given attribute set.)
Let us denote this granulated information system by
$S_{B}=(\mathcal{U},A,[x]_{B})$.
Let $X$ be a set in the universe $\mathcal{U}$ ($X\subseteq\mathcal{U}$) to be
approximated based on the equivalence classes $[x]_{B}$ (i.e., granules)
defined over $B$. Then, $X$ can be approximated in terms of granules, from the
inner and outer sides, by the _B-lower approximation_ $\underline{B}X$ and the
_B-upper approximation_ $\overline{B}X$, respectively. They are defined as
follows:
$\underline{B}X=\\{x\in\mathcal{U}:[x]_{B}\subseteq X\\}$ (1)
$\overline{B}X=\\{x\in\mathcal{U}:[x]_{B}\cap X\neq\emptyset\\}$ (2)
$\underline{B}X$ contains the granules definitely belonging to $X$, while
$\overline{B}X$ contains the granules definitely and possibly belonging to
$X$. That is, all the elements of $\underline{B}X$ can certainly be classified
as members of $X$ on the basis of the knowledge in $B$, while some objects of
$\overline{B}X$ can only be classified as possible members of $X$ on the basis
of $B$.
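To make the two approximations concrete, the following minimal Python sketch
computes $\underline{B}X$ and $\overline{B}X$ for a toy universe whose
granulation is given explicitly; all names and data are illustrative, not
taken from any implementation of the paper.

```python
# A minimal sketch of Eqns. (1) and (2) on a toy universe; the
# granulation is given as a list of disjoint equivalence classes.

def lower_approximation(granules, X):
    """B-lower approximation: union of granules entirely contained in X."""
    return set().union(*(g for g in granules if g <= X))

def upper_approximation(granules, X):
    """B-upper approximation: union of granules that intersect X."""
    return set().union(*(g for g in granules if g & X))

granules = [{0, 1}, {2, 3, 4}, {5}, {6, 7}, {8, 9}]  # U | B
X = {2, 3, 4, 5, 6}                                  # set to approximate

print(lower_approximation(granules, X))  # {2, 3, 4, 5}
print(upper_approximation(granules, X))  # {2, 3, 4, 5, 6, 7}
```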
### 4.2 Q-Learning
Let the state and action in a given environment be denoted as $s$ and $a$
respectively, and the $Q$-value of doing action $a$ in state $s$ be denoted as
$Q(s,a)$. The relation between the direct utility of the state ($U(s)$) and
Q-value is as follows.
$U(s)=\max_{a}Q(s,a)$ (3)
If the Q-values are computed correctly, the following equation (Eqn. 4) will
then only reach to an equilibrium. That is, $LHS=RHS$, iff Q-value is correct
in Eqn. (4).
$Q(s,a)=R(s)+\gamma\sum_{s^{\prime}}P(s^{\prime}|s,a)\max_{a^{\prime}}Q(s^{\prime},a^{\prime})$
(4)
In Eqn. (4), $R(s)$ represents the reward associated with the state $s$;
$\gamma$ is the discount factor, which determines the importance of future
rewards; $P(s^{\prime}|s,a)$ is the probability of reaching state $s^{\prime}$
from state $s$ given action $a$, as determined by the state-transition model;
and $Q(s^{\prime},a^{\prime})$ is the $Q$-value of the future action
$a^{\prime}$ in the future state $s^{\prime}$. As Eqn. (4) shows, an estimated
state-transition model is required here; therefore, this formulation is not
completely model-free. The temporal-difference approach, or TD Q-learning, is
the truly model-free one, i.e., no state-transition model is required. The
Q-value update in TD Q-learning is carried out as follows.
$Q(s,a)\leftarrow Q(s,a)+\alpha\,(R(s)+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime})-Q(s,a))$
(5)
$\alpha$ is the learning rate in Eqn. (5).
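The tabular TD update of Eqn. (5) can be sketched in a few lines of Python.
This is a generic illustration, not the paper's code; the learning parameters
and the $\epsilon$-greedy behaviour policy are assumptions made only for the
example.

```python
import random
from collections import defaultdict

# Generic tabular sketch of the TD update in Eqn. (5); alpha, gamma
# and epsilon are illustrative assumptions.
Q = defaultdict(float)          # Q[(state, action)], defaults to 0
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def td_update(s, a, r, s_next, actions):
    # Bootstrap on the best estimated future action (off-policy target).
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

def choose_action(s, actions):
    # Epsilon-greedy behaviour policy; the update above ignores which
    # policy generated the data, which is what makes it off-policy.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```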
### 4.3 Q-Rough Set: Definition
In this article we focus on the unsupervised estimation of a set. The method
is envisioned to be completely unsupervised; only a basic set of features of
the set can be provided a priori. Let the set to be estimated be denoted by
$X:X\subseteq\mathcal{U}$, and let the given set of features be $B:B\subseteq
A$, where $A$ is the complete set of features. The lower and upper
approximations of the set, $\underline{B}X$ and $\overline{B}X$, can now be
estimated according to Eqns. (1) and (2), respectively. However, the exact
region of the set cannot be estimated through these equations. To extract the
exact set and minimize the vagueness in the approximation, we employ a
Q-learning agent. Here the set $\underline{B}X$ is used to determine the value
function, the policy and the state-transition model in Q-learning. The states
($s$) are taken to be the clumps of similar data points (granules $[x]_{B}$)
present in the set $\overline{B}X$. The set of actions has cardinality 2 and
consists of $a=\\{$update
$\underline{B}X\rightarrow\underline{B}X\cup[x]_{B}$ and move to the next
boundary granule connected to $[x]_{B}$; keep $\underline{B}X$ the same and
move to the next boundary granule connected to $\underline{B}X\\}$.
The reward function $R(s)$ is chosen based on the similarity between the state
$s\equiv[x]_{B}$ and $\underline{B}X$ over the $n$-dimensional feature space.
The distance ($D$) between the state ($s\equiv[x]_{B}$) and the model
($M\equiv\underline{B}X$), and the reward for that granule ($R([x]_{B})$), are
defined by the following equations.
$\displaystyle D(M,s)=||M-s||_{n}$ (6) $\displaystyle R([x]_{B})=1-2D(M,s).$
(7)
In Eqn. (7), $R([x]_{B})$ is defined such that the value of the reward lies
within the range $[-1,1]$: the maximum reward (+1) is generated upon total
similarity and the minimum reward (-1) upon no similarity.
While determining the Q-value of a boundary granule ($s\equiv[x]_{B}$), it can
be observed that the state-action model leads to two different future states
($s^{\prime}\equiv[x]_{B}^{\prime}$) depending on the two different actions,
but there is a single possible future state for any particular action. Let
$[x]_{B1}^{\prime}$ and $[x]_{B2}^{\prime}$ be the two possible future states
that the agent can reach on taking the actions $a_{1}$ and $a_{2}$
respectively. Therefore we obtain a simple binarized state-transition model as
follows.
$\displaystyle P([x]_{B1}^{\prime}|[x]_{B},a_{1})=1$ $\displaystyle
P([x]_{B2}^{\prime}|[x]_{B},a_{1})=0$ $\displaystyle
P([x]_{B1}^{\prime}|[x]_{B},a_{2})=0$ $\displaystyle
P([x]_{B2}^{\prime}|[x]_{B},a_{2})=1$ (8)
Therefore, Eqn. (4) reduces to Eqn. (9) as follows.
$Q([x]_{B},a)=R([x]_{B})+\gamma\max_{a^{\prime}}Q([x]_{B}^{\prime},a^{\prime})$
(9)
The action that maximizes the $Q$-value is selected. Upon completion of this
process over all the boundary granules, the final set $\underline{B}X$ that we
obtain is named the Q-rough approximation of the set $X$. This is how the
uncertainty present in the rough approximation of the set $X$ is minimized and
the exact set is found despite the incomplete knowledge base.
## 5 Q-Rough Set in Fire Detection
Let the fire region $F$ be approximated in the input frame $f_{t}$, given a
set of features ($B:B\subseteq A$). We use the sets of rules defined in [10]
for the R-G-B and Y-Cr-Cb color spaces to segment out initial fire regions
from a video frame. Since our method is unsupervised, we initially approximate
the fire region in a video frame with these rules. The lower approximation of
the fire region deploys the information of the segmented regions in the
Y-Cr-Cb color space, and the upper approximation additionally considers the
information of the segmented region in the RGB color space on top of the lower
approximated region. The final approximation of the fire region in a frame is
carried out with the Q-rough set. A visual illustration of fire segmentation
with Q-rough sets is shown in Fig. 2. The processes of set formation and
approximation are described in the following sections.
Figure 2: Pictorial Representation of Fire Segmentation with Q-Rough Set
### 5.1 Formation of Granules
Formation of proper granules plays an important role in approximation for
decision-making systems. Here, while defining the Q-rough set for fire
segmentation, we aim to granulate the image frame based on spatio-color
similarities, so that the separation is closer to the natural one. Therefore,
we use the spatio-color granulation technique defined in [2]. That is, a
granule $\aleph$ around a point $p_{i}$ in the frame $f_{t}$ is formed
according to Eqn. (10):
$\aleph_{sp-clr}(p_{i})=\bigcup\\{p_{j}\in\mathcal{U}:p_{j}\text{ is binary connected to }p_{i}\text{ and }|RGB(p_{j})-RGB(p_{i})|<Thr\\}$ (10)
where $Thr$ is a colour-similarity threshold. The upper and lower
approximations of the fire region ($F$) are carried out over these granules.
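As an illustration of Eqn. (10), the following Python sketch produces one
granule id per pixel by labelling binary-connected components of near-equal
colour. Quantizing the channels by $Thr$ is a simplification of pairwise
region growing introduced here only for brevity; the threshold value and all
names are assumptions.

```python
import numpy as np
from scipy import ndimage

def spatio_color_granules(frame_rgb, thr=16):
    """Sketch of Eqn. (10): group binary-connected pixels whose RGB
    values differ by less than `thr`. As a simplification, channels
    are quantized by `thr` and connected components of equal
    quantized colour are labelled."""
    q = frame_rgb.astype(np.int32) // thr
    # Encode the quantized (R, G, B) triple as one integer per pixel.
    code = q[..., 0] * 1_000_000 + q[..., 1] * 1_000 + q[..., 2]
    granules = np.zeros(code.shape, dtype=np.int32)
    next_id = 1
    for c in np.unique(code):
        lab, n = ndimage.label(code == c)   # 4-connected components
        granules[lab > 0] = lab[lab > 0] + (next_id - 1)
        next_id += n
    return granules  # integer granule id (>= 1) per pixel
```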
### 5.2 Lower Approximation of Fire-Region in a Frame
The lower approximated region consumes the region segmented in the Y-Cr-Cb
color space. The rule base mentioned in [15] for fire segmentation is used
here in Y-Cr-Cb. In the Y-Cr-Cb color space [22], ’Y’ represents the luma
component (very similar to the grayscale conversion of the original image),
whereas ’Cb’ and ’Cr’ represent the blue-difference and red-difference chroma
components, respectively. The rule base followed for fire segmentation in the
Y-Cr-Cb feature space is as follows.
A pixel $p_{i}$, with Y-Cr-Cb values $Y(p_{i})$, $Cr(p_{i})$ and $Cb(p_{i})$,
is treated as a fire pixel if it satisfies any of the following rules.
* •
Rule 1: $Y(p_{i})\geq Cb(p_{i})$ and $Cr(p_{i})\geq Cb(p_{i})$
* •
Rule 2: $Y(p_{i})>Y_{mean}$ and $Cr(p_{i})>Cr_{mean}$ and $Cb(p_{i})>Cb_{mean}$
* •
Rule 3: $Cb(p_{i})\leq 120$ and $Cr(p_{i})>150$
$Y_{mean}$, $Cr_{mean}$, and $Cb_{mean}$ are the mean values of Y, Cr and Cb
over the image frame. Let $F_{YCrCb}$ be the fire-segmented region in the
frame $f_{t}$. Then the lower approximation of the fire region
($\underline{B}F$) is defined according to Eqn. (11): the spatio-color
granules that are a subset of $F_{YCrCb}$ are considered to be in the lower
approximation of the set $F$.
$\underline{B}F=\\{\aleph(p):\aleph(p)\subseteq F_{YCrCb}\\}$ (11)
### 5.3 Upper Approximation of Fire Region in a Frame
The upper approximated region consumes both the segmented output of the
Y-Cr-Cb feature space and that of the RGB feature space. The rules defined in
[10, 14, 15] for fire detection in the RGB feature space are as follows.
A pixel $p_{i}$ is detected as a fire pixel if it satisfies the following
rules.
* •
Rule 1: $R(p_{i})\geq R_{mean}$
* •
Rule 2: $R(p_{i})>G(p_{i})>B(p_{i})$
$R_{mean}$ is the mean value of R over the image frame. Let $F_{RGB}$ be the
segmented region of fire in the RGB feature space. The upper approximated fire
region is derived from the region $F_{RGB}\cup F_{YCrCb}$. The upper
approximated fire region ($\overline{B}F$) is defined according to Eqn. (12).
$\overline{B}F=\\{\aleph(p):\aleph(p)\cap(F_{RGB}\cup
F_{YCrCb})\neq\emptyset\\}$ (12)
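The two rule bases and the granule-level approximations of Eqns. (11) and
(12) can be sketched as follows, assuming OpenCV/NumPy and the granule
labelling from Section 5.1; this is an illustrative reading of the rules as
stated, not the authors' implementation.

```python
import cv2
import numpy as np

def fire_masks(frame_bgr):
    """Per-pixel fire masks from the rule bases of Sections 5.2-5.3,
    applied exactly as stated in the text (a sketch)."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    Y, Cr, Cb = ycrcb[..., 0], ycrcb[..., 1], ycrcb[..., 2]
    f_ycrcb = ((Y >= Cb) & (Cr >= Cb)) \
        | ((Y > Y.mean()) & (Cr > Cr.mean()) & (Cb > Cb.mean())) \
        | ((Cb <= 120) & (Cr > 150))
    B, G, R = [frame_bgr[..., i].astype(np.float32) for i in range(3)]
    f_rgb = (R >= R.mean()) & (R > G) & (G > B)
    return f_ycrcb, f_rgb

def approximate_fire(granules, f_ycrcb, f_rgb):
    """Eqns. (11)-(12): a granule fully inside F_YCrCb joins the lower
    approximation; one touching F_RGB | F_YCrCb joins the upper."""
    lower = np.zeros(granules.shape, dtype=bool)
    upper = np.zeros(granules.shape, dtype=bool)
    union = f_ycrcb | f_rgb
    for g in np.unique(granules):
        m = granules == g
        if f_ycrcb[m].all():
            lower |= m
        if union[m].any():
            upper |= m
    return lower, upper
```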
### 5.4 Computation of Reward Function
Proper computation of the reward function is another major concern while
defining Q-rough sets. As defined in Eqn. (6), the distance between the lower
approximated region and a boundary granule is to be computed to determine the
reward function. The number of data points in $\underline{B}F$ and in a
boundary granule $\aleph(p)$ can never be the same. Therefore, instead of
computing a point-to-point distance between these two sets, we compute the
distance between their mean values in the different feature spaces. Eqn. (6)
can thus be re-written as follows.
$\displaystyle
D(\underline{B}F,\aleph(p))=||mean(\underline{B}F)-mean(\aleph(p))||_{RGBYCrCb}$
(13) $\displaystyle R(\aleph(p))=1-2D(\underline{B}F,\aleph(p)).$ (14)
In Eqn. (13) the normalized Euclidean distance is used as the distance metric.
The Q-values will now be updated according to Eqn. (15).
$Q(\aleph(p),a)=R(\aleph(p))+\gamma\max_{a^{\prime}}Q(\aleph(p)^{\prime},a^{\prime})$
(15)
### 5.5 Algorithm for Fire Detection with Q-Rough Set
We now describe the proposed method for fire detection from a video frame
$f_{t}$ with Q-rough sets step by step. The theoretical details are presented
in the preceding sections. The detailed methodology is summarized in
Algorithm 1.
Algorithm 1 Fire Detection from A Frame with Q-Rough Set
INPUT: $f_{t}$
OUTPUT: $f_{t}$ with segmented fire region $F$
INITIALIZE: $\underline{B}F=\overline{B}F$ $\Leftarrow\emptyset$
1: Granulate the frame $f_{t}$ as described in Section 5.1
2: Segment out $F_{YCrCb}$ and $F_{RGB}$ as described in Sections 5.2 and 5.3
respectively.
3: Define $\underline{B}F$ and $\overline{B}F$ following Eqns. (11) and (12).
4: For a boundary granule $\aleph(p)\in\\{\overline{B}F-\underline{B}F\\}$ do
the following. i) Compute $R(\aleph(p))$ with Eqns. (13) and (14). ii) Compute
$Q(\aleph(p),a_{1})$ and $Q(\aleph(p),a_{2})$ with Eqn. (15).
if $Q(\aleph(p),a_{1})>Q(\aleph(p),a_{2})$ then
Set $\underline{B}F=\underline{B}F\cup\aleph(p)$ and move to the next granule
connected to $\aleph(p)$
else
Remove $\aleph(p)$ from $\overline{B}F$ and move to the next granule connected
to $\underline{B}F$.
end if
5: Repeat Step 4 for the next boundary granule.
6: Repeat Steps 4 and 5 till $\\{\overline{B}F-\underline{B}F\\}=\emptyset$
7: Set $F=\underline{B}F$
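A compact Python sketch of Steps 4-7 is given below. Since Eqn. (15) shares
the reward term between the two actions, the per-action future values used
here (defaulting to zero on the first visit, so that a granule is inserted
exactly when its reward $R=1-2D$ is positive) are an illustrative
instantiation; the masks and the feature dictionary are assumed to come from
the earlier sketches.

```python
import numpy as np

def q_rough_fire(granules, lower, upper, feats, gamma=0.9, max_sweeps=5):
    """Sketch of Algorithm 1, Steps 4-7. `feats[g]` is the mean
    RGB-YCrCb feature vector of granule g, normalized to [0, 1];
    `lower`/`upper` are boolean masks from Eqns. (11)-(12), with
    `lower` assumed non-empty."""
    Q = {}
    for _ in range(max_sweeps):
        boundary = [g for g in np.unique(granules)
                    if upper[granules == g].any()
                    and not lower[granules == g].any()]
        if not boundary:
            break
        # Model M = mean features of the current lower approximation.
        model = np.mean([feats[g] for g in np.unique(granules[lower])], axis=0)
        for g in boundary:
            D = np.linalg.norm(model - feats[g])             # Eqn. (13)
            R = 1.0 - 2.0 * D                                # Eqn. (14)
            q_insert = R + gamma * Q.get((g, 'insert'), 0.0)  # Eqn. (15)
            q_skip = gamma * Q.get((g, 'skip'), 0.0)
            Q[(g, 'insert')], Q[(g, 'skip')] = q_insert, q_skip
            m = granules == g
            if q_insert > q_skip:
                lower |= m      # insert granule into B-lower(F)
            else:
                upper &= ~m     # drop granule from B-upper(F)
    return lower                # final Q-rough approximation of F
```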
### 5.6 Fire Segmentation Over The Video Sequence
So far we have discussed the method of fire segmentation with Q-rough sets
over a single video frame. Since we are dealing with a video sequence, we do
not need to repeat the entire process for each frame. Rather, we employ our
trained Q-agents to explore the fire region in the upcoming frames. That is,
the Q-agents check the spatio-color granules in the rest of the sequence, and
similar granules are automatically included in the lower approximated fire
region.
Let the fire region in the $t^{th}$ frame ($f_{t}$) be approximated as $F_{t}$
with Algorithm 1, and let $\aleph(p_{t+1})$ be a granule under consideration
in the frame $f_{t+1}$. The action over $\aleph(p_{t+1})$ is decided
automatically by the Q-agent in $f_{t+1}$, since the agent already knows from
$f_{t}$ which action maximizes the utility. The computation of a Q-value is
only required in $f_{t+1}$ if $\aleph(p_{t+1})$ is an unknown state to the
Q-agent.
## 6 Determination of Threat of Fire
After determining the fire regions in video sequences, we focus on determining
the threat associated with the fire. Fire is a necessity as well as a threat
to human civilization: it is used for light, for heat, for cooking, etc., but
it should be kept under control, since once it becomes uncontrolled it creates
threat and damage. Here we develop a methodology to determine whether the fire
is in use or is becoming a threat. We have observed that fire is generally
under control while the flames stay within a certain space, but it becomes a
threat when the flame starts to spread; the faster the flames spread, the
greater the threat. Here we aim to quantify this phenomenon and accordingly
generate an alarm with the possible threat.
We estimate the relative spread of the fire regions in a video sequence to
quantify the threat. If the flame is under control, the fire regions may
change from frame to frame, but only with a flicker effect; that is, the fire
regions remain within some limits throughout the sequence. But if the fire
becomes uncontrolled, the fire region spreads rapidly compared to the previous
frames. This is why the relative spread is considered for the threat
computation. Let $F_{t}$ be the segmented fire region in frame $f_{t}$, and
let the information from $P$ previous frames be used to determine the threat.
The average fire segment ($F_{\mu}$) throughout the sequence and the recent
average fire segment ($F_{\mu P}$) are computed by the following equations.
$\displaystyle F_{\mu}=\frac{\sum_{i=1}^{N}F_{i}}{N}$ (16) $\displaystyle
F_{\mu P}=\frac{\sum_{i=N-P}^{N}F_{i}}{P}:P<N$ (17)
The threat index of fire ($\mathcal{T_{F}}$) is now defined as the relative
increment of spread in the recent frames. It is computed with Eqn. (18).
$\mathcal{T_{F}}=\frac{F_{\mu P}-F_{\mu}}{F_{\mu}}$ (18)
Note that a signed difference is taken in the numerator of the right-hand side
of Eqn. (18). This is because the threat should only be positive if there is a
relative spread of the fire region; if the region is shrinking or the fire is
extinguishing, the threat becomes negative, thereby eliminating occurrences of
false alarms.
The selection of the value of $P$ plays a major role in this index. Here we
have decided to consider the relative growth of fire over the last one second.
Therefore, $P$ is chosen based on the frame rate of video acquisition (frames
per second, fps). That is, if the video is acquired at 30 fps, $P$ is set to
$30$.
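Eqns. (16)-(18) reduce to a few lines of Python over the running list of
per-frame fire areas; the sketch below, including the handling of the first
$P$ frames and the toy data, is illustrative only.

```python
import numpy as np

def fire_threat_index(areas, fps):
    """Eqns. (16)-(18): relative growth of the fire segment over the
    last second (P = fps frames) against the whole-sequence average.
    `areas[i]` is the fire-pixel count of frame i; assumes a fire is
    present (F_mu > 0). A minimal sketch, not the authors' code."""
    N, P = len(areas), int(fps)
    if N <= P:
        return 0.0                    # not enough history yet (assumed)
    f_mu = np.mean(areas)             # Eqn. (16)
    f_mu_p = np.mean(areas[N - P:])   # Eqn. (17)
    return (f_mu_p - f_mu) / f_mu     # Eqn. (18): signed, can go negative

# Example at 30 fps: flickering (controlled) vs. suddenly spreading fire.
steady = [100 + (i % 3) for i in range(120)]
spread = steady[:90] + [100 + 5 * i for i in range(30)]
print(fire_threat_index(steady, 30))  # ~0  -> no threat
print(fire_threat_index(spread, 30))  # > 0 -> spreading fire, raise alarm
```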
## 7 Experimental Results
In this section we experimentally demonstrate the effectiveness of the
proposed methods. The unsupervised fire segmentation method developed here
proves superior to a few state-of-the-art methods in terms of qualitative and
quantitative results over different types of input videos. The threat
detection method is also found to be effective in quantifying the threat
associated with the fire.
We have conducted our experiments with 30 different types of videos containing
fire flames, comprising more than 25000 video frames. Ten of the thirty videos
contain spreading or uncontrolled fire, and the rest contain controlled fire.
Ten video sequences were acquired from the ’Firesense’ database [23], and
twenty video sequences from different links freely available on YouTube. To
limit the size of the article, we show the qualitative and quantitative
results obtained over seven different video sequences. Of these, only two
videos have a fire region that remains within a range; the fire is either
rapidly growing or slowly spreading in the other five videos shown here.
The video sequences over which we show the results are described as follows.
’PosVideo1’ [23] shows a part of a bus that is set on fire; the fire is
gradually growing here. ’PosVideo2’ [23] shows a fire set in a kitchen that
grows gradually. In ’PosVideo4’ and ’PosVideo5’ [23] the fire is under
control; two different burning fireplaces are captured in these videos. Two
men setting fire in a forest region, with the fire starting to grow, is the
content of ’PosVideo6’ [23]. In the ’WeddingDress’ [24] video, the fire
initially spreads over the wedding dresses and then becomes constant, whereas
in the ’ChristmasTree’ [25] video a Christmas tree catches fire that spreads
rapidly. The effectiveness of Q-rough sets in segmenting out the fire regions
is shown in the next section.
### 7.1 Fire Region Segmentation with Q-rough Set
In this section we demonstrate the effectiveness of Q-rough sets in
identifying the fire segments throughout the video sequences. The visual
segmentation outputs for four different frames of each of the seven sequences
described above are shown in Figs. 3(1) to 4(7). It can be concluded from the
visual results that the fire regions are well segmented by the Q-rough sets.
The segmentation accuracy is quantitatively demonstrated in Table 1. Note that
no ground truth was available for the video sequences used in these
experiments. Therefore, we manually annotated ten frames, selected randomly
from each video sequence, and performed the quantitative analysis on them. The
average $False\,Positive\;(FP)$, $False\,Negative\;(FN)$, $Precision$, and
$Recall$ values are given in Table 1.
Table 1: Fire Segmentation Accuracy of Q-Rough Sets
Sequence | $FP(\%)$ | $FN(\%)$ | $Precision$ | $Recall$
---|---|---|---|---
PosVideo1 | 0.5 | 15 | 0.99 | 0.87
PosVideo2 | 3 | 5 | 0.97 | 0.95
PosVideo4 | 4 | 1 | 0.96 | 0.99
PosVideo5 | 3 | 2 | 0.97 | 0.98
PosVideo11 | 2 | 10 | 0.98 | 0.9
WeddingDress | 2 | 8 | 0.98 | 0.91
ChristmasTree | 23 | 5 | 0.77 | 0.93
From Table 1 it can be observed that the Q-rough set method is quite efficient
in segmenting out the fire regions in almost every video except ChristmasTree.
The fire regions are well segmented even when they are small in size
(PosVideo11 and PosVideo1), continuously burning (PosVideo4 and PosVideo5), or
spreading rapidly (PosVideo2 and WeddingDress). In the ChristmasTree video the
fire is reflected on the wall, and the wall reflects almost the same color as
that of the fire; therefore, some parts of the wall also get segmented out as
fire.
Figure 3: Fire segmentation results with Q-rough sets for frame nos. (1) 15,
45, 75, 105 of ’PosVideo1’ sequence (2) 25, 45, 85, 105 of ’PosVideo2’
sequence (3) 25, 45, 85, 105 of ’PosVideo4’ sequence, (4) 25, 45, 85, 105 of
’PosVideo5’ sequence
Figure 4: Continuation of Fig. 3 (5) 25, 45, 85, 105 of ’PosVideo11’ sequence,
(6) 25, 45, 85, 105 of ’WeddingDress’ sequence, and (7) 25, 45, 85, 105 of
’ChristmasTree’ sequence; (i) input, (ii) segmented
### 7.2 Comparative Study
In this section we present qualitative and quantitative comparisons of our
fire segmentation method with four state-of-the-art methods: i) an RGB and
motion feature based method with HMM (RGB-M) [13], ii) an HSV-model-based
method developed for forest fire detection with static and dynamic features
(HSV-SD) [15], iii) a spatio-temporal flame modelling method (ST-F) [16], and
iv) fire detection with an improved deep learning method (F-IDL) [21].
The visual comparison of the five methods over a frame of each of the seven
video sequences is shown in Fig. 5. From the visual results it can be seen
that the Q-rough set method segments out the fire region most efficiently.
Figure 5: Visual comparison of fire segmentation on (1) PosVideo1 sequence,
(2) PosVideo2 sequence, (3) PosVideo4 sequence, (4) PosVideo5 sequence, (5)
PosVideo11 sequence, (6) WeddingDress sequence, and (7) ChristmasTree
sequence; (a) input, (b) RGB-M method, (c) HSV-SD method, (d) ST-F method, (e)
F-IDL method, and (f) Q-rough set (proposed) method
Note that none of the methods used in the comparative study focuses on
classifying each fire pixel against the non-fire ones; therefore, measures of
classification accuracy, such as true positives and false negatives, would not
be fair parameters for the comparative study. Instead, the RMSE (root mean
square error) is considered for the quantitative comparison: the root mean
square error between the four corner pixels of the bounding boxes covering the
ground-truth fire regions and those of the segmented fire regions. The average
RMSE values for the seven video sequences with the five methods are given in
Table 2.
Table 2: Fire Region Segmentation: Comparison with Average RMSE
Sequence | $RGB-M$ | $HSV-SD$ | $ST-F$ | $F-IDL$ | $Q-RoughSet$
---|---|---|---|---|---
PosVideo1 | 52.3 | 8.2 | 6.6 | 13.4 | 5.8
PosVideo2 | 46.8 | 11.1 | 7.2 | 5.8 | 6.9
PosVideo4 | 9.2 | 6.3 | 5.9 | 6.1 | 5.5
PosVideo5 | 8.8 | 5.2 | 4.7 | 5 | 4.9
PosVideo11 | 48.6 | 11.2 | 9.5 | 7.3 | 4.9
WeddingDress | 19.4 | 17.8 | 8.3 | 12.2 | 5.1
ChristmasTree | 34.6 | 13.5 | 12.6 | 21.2 | 16.3
It can be observed from Fig. 5 and Table 2 that the proposed unsupervised
method performs superior or equally well compared to the state-of-the-art
methods. Besides, it can also be seen from the visual results (Fig. 5) that
the Q-rough set method classifies fire pixels from non-fire ones better than
the other methods in comparison.
### 7.3 Effectiveness of Fire Threat Index $\mathcal{T_{F}}$
The effectiveness of the $\mathcal{T_{F}}$ index in identifying the threat of
fire is validated here with extensive experiments. The values of
$\mathcal{T_{F}}$ for each frame throughout the seven video sequences are
plotted in Fig. 6. The $\mathcal{T_{F}}$ values for all five methods in the
comparative study are computed and plotted accordingly in Fig. 6: the red line
represents the $\mathcal{T_{F}}$ values obtained by the Q-rough set method,
the green line the $RGB-M$ method, the blue line the $HSV-SD$ method, the gray
dashed line the $ST-F$ method, and the solid black line the $F-IDL$ method.
The increase or decrease of the fire flame is well reflected by the proposed
$\mathcal{T_{F}}$ index. For example, the fire initially increases and then
decreases in the $PosVideo1$ sequence, which is reflected in the curve in Fig.
6(1) by almost all the methods. The increasing threat of fire in the
$PosVideo2$ sequence can be inferred from Fig. 6(2) from the increasing values
of $\mathcal{T_{F}}$. The fire flames flicker, with no threat, in the
$PosVideo4$ and $PosVideo5$ sequences; this is best reflected by the Q-rough
set method in Figs. 6(3) and 6(4), where the $\mathcal{T_{F}}$ values stay
within a fixed range close to zero. The gradually spreading fire in the
$PosVideo11$ sequence can be inferred from Fig. 6(5) with the Q-rough set
method. The initial spread and subsequent containment of the fire in the
$WeddingDress$ sequence is well reflected in Fig. 6(6). The sudden spread of
fire in the $ChristmasTree$ sequence is well detected in Fig. 6(7). Therefore,
the proposed $\mathcal{T_{F}}$ index is successful in identifying the threat
of fire correctly.
Figure 6: $\mathcal{T_{F}}$ index values over (1) PosVideo1 sequence, (2)
PosVideo2 sequence, (3) PosVideo4 sequence, (4) PosVideo5 sequence, (5)
PosVideo11 sequence, (6) WeddingDress sequence, and (7) ChristmasTree
sequence
## 8 Conclusions and Future Work
In this article we have primarily addressed two tasks related to fire threat
detection from videos: unsupervised fire region segmentation with Q-rough
sets, and the definition of a fire threat index. The unsupervised method of
fire region segmentation with Q-rough sets has proven effective, both
visually and quantitatively, in classifying fire pixels correctly. Its
performance in identifying the correct fire region has proven superior
across different types of video sequences with respect to several
state-of-the-art methods. The fire threat index has proven effective in
quickly reflecting the spread of fire. This index works with all the fire
segmentation methods, but it reflects the threat best with the Q-rough set
method. Therefore, the proposed methods can be integrated into video
surveillance systems to generate quick fire alarms. The Q-rough set based
segmentation method could also be used in other areas of video analysis,
such as object tracking, since the Q-agents can learn and explore the object
of interest by themselves along with the input stream data. Besides, the
lower-upper approximations with Q-rough sets, defined here, could be used in
other areas to deal with incompleteness of the knowledge base and stream
data.
The fire threat index is defined here assuming that the surveillance videos
are captured by static cameras, so the method may fail if surveillance is
carried out with moving cameras or ego-centric videos. The definition of the
index could be modified in future work to make it more general and
applicable to any kind of surveillance system.
# Segmenting Transparent Object in the Wild with Transformer
Enze Xie1, Wenjia Wang2, Wenhai Wang3, Peize Sun1,
Hang Xu1, Ding Liang2, Ping Luo1
1The University of Hong Kong 2Sensetime Research 3Nanjing University
###### Abstract
This work presents a new fine-grained transparent object segmentation
dataset, termed Trans10K-v2, extending Trans10K-v1, the first large-scale
transparent object segmentation dataset. Unlike Trans10K-v1, which only has
two limited categories, our new dataset has several appealing benefits. (1)
It has 11 fine-grained categories of transparent objects, commonly occurring
in human domestic environments, making it more practical for real-world
applications. (2) Trans10K-v2 poses more challenges for current advanced
segmentation methods than its former version. Furthermore, a novel
transformer-based segmentation pipeline termed Trans2Seg is proposed. First,
the transformer encoder of Trans2Seg provides a global receptive field, in
contrast to CNN's local receptive field, which shows excellent advantages
over pure CNN architectures. Second, by formulating semantic segmentation as
a problem of dictionary look-up, we design a set of learnable prototypes as
the queries of Trans2Seg's transformer decoder, where each prototype learns
the statistics of one category in the whole dataset. We benchmark more than
20 recent semantic segmentation methods, demonstrating that Trans2Seg
significantly outperforms all the CNN-based methods and showing the proposed
algorithm's potential to solve transparent object segmentation. Code is
available at github.com/xieenze/Trans2Seg.
## 1 Introduction
Modern robots, mainly mobile robots and mechanical manipulators, would
benefit greatly from efficient perception of transparent objects in
residential environments, since such environments vary drastically. The
increasing use of glass walls and transparent doors in building interiors,
and of glass cups and bottles in residential rooms, causes erroneous
detections by various range sensors. In robotics research, most systems
perceive the environment by multi-sensor data fusion via sonars or lidars.
These sensors are relatively reliable in detecting opaque objects but are
still affected by scan mismatching caused by transparent objects: their
unique reflection, refraction, and light-projection behavior may confuse the
sensors. Thus a reliable vision-based method, which is much cheaper and more
robust than high-precision sensors, would be valuable.
(a) Selected images and corresponding high-quality masks.
(b) Performance comparison on Trans10K-v2.
Figure 1: (a) shows the high diversity of our dataset and its high-quality
annotations. (b) compares Trans2Seg with other CNN-based semantic
segmentation methods. All methods are trained on Trans10K-v2 for the same
number of epochs; mIoU is the metric. Darker bars indicate methods with
larger FLOPs. Our Trans2Seg significantly surpasses the other methods with
lower FLOPs.
Although some transparent object datasets Xu et al. (2015); Chen et al.
(2018a); Mei et al. (2020) have been proposed, they have obvious problems.
(1) Limited scale: these datasets often have fewer than 1K images captured
from the real world and fewer than 10 unique objects. (2) Poor diversity:
the scenes in these datasets are monotonous. (3) Few classes: all these
datasets have only two classes, background and transparent objects; they
lack fine-grained categories, which limits their practicality. Recently, Xie
et al. (2020) proposed a large-scale, high-diversity dataset termed
Trans10K, which divides transparent objects into 'things' and 'stuff'. The
dataset is highly diverse, but it also lacks fine-grained transparent
categories.
In this paper, we propose a fine-grained transparent object segmentation
dataset termed Trans10K-v2 with more elaborately defined categories. The
images are inherited from Trans10K-v1 Xie et al. (2020). We annotate the
10,428 images with 11 fine-grained categories: shelf, jar, freezer, window,
glass door, eyeglass, cup, glass wall, glass bowl, water bottle, and storage
box. In Trans10K-v1, transparent things are defined to be grabbed by
manipulators and stuff is defined for robot navigation. Although these two
basic categories can partially help robots interact with transparent
objects, the fine-grained classes provided in Trans10K-v2 can provide more.
We analyze these objects' functions and how robots interact with them in the
appendix.
Based on this challenging dataset, we design Trans2Seg, introducing the
Transformer into the segmentation pipeline for its encoder-decoder
architecture. First, the transformer encoder provides a global receptive
field via self-attention. A larger receptive field is essential for
segmenting transparent objects because transparent objects often share
similar textures and context with their surroundings. Second, the decoder
stacks successive layers that let query embeddings interact with the
transformer encoder output. To improve robustness on transparent objects, we
carefully design a set of learnable class prototype embeddings as the
queries of the transformer decoder, with the feature map from the
transformer encoder as the keys. Compared with the convolutional paradigm,
where the class prototypes are the fixed parameters of the convolution
kernel weights, our design provides a dynamic and context-aware
implementation.
As shown in Figure 1(b), we train and evaluate 20 existing representative
segmentation methods on Trans10K-v2 and find that simply applying previous
methods to this task is far from sufficient. By successfully introducing the
Transformer into this task, our Trans2Seg surpasses the best method,
TransLab Xie et al. (2020), by a large margin (72.1 vs. 69.0 mIoU).
In summary, our main contributions are three-fold:
* •
We propose the largest glass segmentation dataset (Trans10K-v2), with 11
fine-grained glass image categories, diverse scenarios, and high resolution.
All the images are elaborately annotated with finely shaped masks and
function-oriented categories.
* •
We introduce a new transformer-based network for transparent object
segmentation with a transformer encoder-decoder architecture. Our method
provides a global receptive field and is more dynamic in mask prediction,
which shows excellent advantages.
* •
We evaluate more than 20 semantic segmentation methods on Trans10K-v2, and
our Trans2Seg significantly outperforms them. Moreover, we show that this
task is largely unsolved, so more research is needed.
## 2 Related Work
Semantic Segmentation. In the deep learning era, convolutional neural
networks (CNNs) have driven the development of semantic segmentation on
various datasets, such as ADE20K, CityScapes and PASCAL VOC. One of the
pioneering approaches, FCN Long et al. (2015), casts semantic segmentation
as an end-to-end fully convolutional classification network. To improve
performance, especially around object boundaries, Chen et al. (2017); Lin et
al. (2016); Zheng et al. (2015) propose to use a structured prediction
module, conditional random fields (CRFs) Chen et al. (2014), to refine the
network output. Dramatic improvements in performance and inference speed
have been driven by aggregating features at multiple scales, for example,
PSPNet Zhao et al. (2017) and DeepLab Chen et al. (2017, 2018b), and by
propagating structured information across intermediate CNN representations
Gadde et al. (2016); Liu et al. (2017); Wang et al. (2018).
Transparent Object Datasets. Xu et al. (2015) introduce the TransCut
dataset, which contains only 49 images of 7 unique objects. To generate the
segmentation result, Xu et al. (2015) optimize an energy function based on
LF-linearity, which requires light-field cameras. Chen et al. (2018a)
propose TOM-Net, which contains 876 real images and 178K synthetic images
generated by POV-Ray; however, only 4 unique objects are used in
synthesizing the training data. Recently, Xie et al. (2020) introduced the
first large-scale real-world transparent object segmentation dataset, termed
Trans10K, with over 10K images. However, this dataset has only two
categories, which limits its practical use. In this work, our Trans10K-v2
inherits the data and annotates it with 11 fine-grained categories.
Figure 2: Images in the Trans10K-v2 dataset are carefully annotated with
high quality. The first row shows sample images and the second shows the
segmentation masks. The color scheme encoding the object categories is
listed on the right of the figure. Zoom in for best view.
Trans10K-v2 | shelf | door | wall | box | freezer | window | cup | bottle | jar | bowl | eyeglass
---|---|---|---|---|---|---|---|---|---|---|---
image num | 280 | 1572 | 3059 | 603 | 90 | 501 | 3315 | 1472 | 997 | 340 | 410
CMCC | 3.36 | 5.19 | 5.61 | 2.57 | 3.36 | 4.27 | 1.97 | 1.82 | 1.99 | 1.31 | 2.56
pixel ratio(%) | 2.49 | 9.23 | 38.42 | 3.67 | 1.02 | 4.28 | 22.61 | 6.23 | 6.75 | 3.67 | 0.78
Table 1: Statistics of Trans10K-v2. 'CMCC' denotes the mean number of
connected components per image for each category, 'image num' denotes the
number of images, and 'pixel ratio' is the fraction of all transparent-
object pixels in Trans10K-v2 that belong to a certain category.
Transformer in Vision Tasks. The Transformer Vaswani et al. (2017) has been
successfully applied to both high-level and low-level vision Han et al.
(2020). In ViT Dosovitskiy et al. (2020), the Transformer is applied
directly to sequences of image patches for image classification. In object
detection Carion et al. (2020); Zhu et al. (2020), DETR reasons about the
relations of object queries and the global image context via a Transformer
and outputs the final set of predictions in parallel, without non-maximum
suppression (NMS) or anchor generation. SETR Zheng et al. (2020) views
semantic segmentation from a sequence-to-sequence perspective with a
Transformer. IPT Chen et al. (2020) applies a Transformer model to low-level
computer vision tasks, such as denoising, super-resolution and deraining. In
video processing, the Transformer has received significantly growing
attention: VisTR Wang et al. (2020) accomplishes instance sequence
segmentation with a Transformer, and multiple-object trackers Sun et al.
(2020); Meinhardt et al. (2021) employ Transformers to decode object queries
and feature queries of the previous frame into bounding boxes of the current
frame, which are merged by the Hungarian algorithm or NMS.
## 3 Trans10K-v2 Dataset
Dataset Introduction. Our Trans10K-v2 dataset is based on the Trans10K
dataset Xie et al. (2020). Following Trans10K, we use 5000, 1000 and 4428
images for training, validation and testing, respectively. The images
exhibit rich variation in occlusion, spatial scale and perspective
distortion. We further annotate the images with more fine-grained categories
according to the functional usage of different objects. Trans10K-v2 contains
10,428 images, with two main categories and 11 fine-grained categories: (1)
transparent things, comprising cup, bottle, jar, bowl and eyeglass; (2)
transparent stuff, comprising window, shelf, box, freezer, glass wall and
glass door. With respect to its fine-grained categories and high diversity,
Trans10K-v2 is very challenging and has promising potential in both computer
vision and robotics research.
Annotation Principle. The transparent objects are manually labeled by expert
annotators with a professional labeling tool. The annotators were asked to
provide more than 100 points when tracing the boundary of each transparent
object, which ensures high-quality outlines of the mask shapes. The
annotation procedure is mostly the same as in semantic segmentation datasets
such as ADE20K. We set the background to 0 and the 11 categories to 1
through 11. We also provide the scene environment in which each image is
located. The annotators were asked to strictly follow these principles when
labeling the images: (I) Only highly transparent pixels are annotated as
masks; other semi-transparent and non-transparent pixels are ignored. Highly
transparent objects, whether made of glass, plastic or crystal, are also
annotated. (II) When an object is occluded by opaque objects, the occluded
pixels are cropped from the masks. (III) All 11 fine-grained categories are
carefully derived from a functional point of view. We first analyze how
robots need to deal with transparent objects (avoiding, grasping or
manipulating), then group objects similar in shape and function into a
fine-grained category. The detailed principle of how we categorize the
objects is listed in the appendix.
Dataset Statistics. The CMCC, image number and pixel proportion of each
category are listed in Table 1. In Table 1, the sum of all image numbers is
larger than 10,428 since some images contain multiple categories of objects.
CMCC denotes the mean connected components of each category; it is
calculated by dividing the number of connected components of a certain
category by the number of images containing it. The connected components are
counted from the boundaries of the masks. CMCC represents the complexity of
the transparent objects.
Evaluation Metrics. Results are reported with three metrics widely used in
semantic segmentation to benchmark fine-grained transparent object
segmentation: (1) pixel accuracy, the proportion of correctly classified
pixels; (2) mean IoU, the mean intersection over union over categories; and
(3) category IoU, the intersection over union of each category.
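All three metrics can be derived from one confusion matrix. The following is a minimal, generic sketch of that computation (not the authors' evaluation code):

```python
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    """Accumulate a (num_classes x num_classes) confusion matrix,
    rows = ground truth, columns = prediction."""
    valid = (gt >= 0) & (gt < num_classes)
    idx = num_classes * gt[valid].astype(int) + pred[valid].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def segmentation_metrics(conf):
    pixel_acc = np.diag(conf).sum() / conf.sum()
    # Per-category IoU = TP / (TP + FP + FN); NaN for absent classes.
    category_iou = np.diag(conf) / (conf.sum(1) + conf.sum(0) - np.diag(conf))
    return pixel_acc, np.nanmean(category_iou), category_iou
```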
## 4 Method
### 4.1 Overall Pipeline
The overall Trans2Seg architecture contains a CNN backbone, an
encoder-decoder transformer, and a small convolutional head, as shown in
Figure 3. For an input image of shape $(H,W,3)$:
* •
The CNN backbone generates an image feature map of shape
$(\frac{H}{16},\frac{W}{16},C)$.
* •
The encoder takes the sum of the flattened feature of shape
$(\frac{H}{16}\frac{W}{16},C)$ and a positional embedding of shape
$(\frac{H}{16}\frac{W}{16},C)$, and outputs an encoded feature of shape
$(\frac{H}{16}\frac{W}{16},C)$.
* •
The decoder lets the learned class prototypes of shape $(N,C)$ attend to the
encoded feature and generates an attention map of shape
$(N,M,\frac{H}{16}\frac{W}{16})$, where $N$ is the number of categories and
$M$ is the number of heads in multi-head attention.
* •
The small convolutional head up-samples the attention map to
$(N,M,\frac{H}{4},\frac{W}{4})$, fuses it with the high-resolution feature
map Res2, and outputs an attention map of shape $(N,\frac{H}{4},\frac{W}{4})$.
The final segmentation is obtained by a pixel-wise argmax over the output
attention map; a shape walk-through is sketched below.
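The following is an illustrative PyTorch walk-through of these shapes, with the batch dimension and multi-head attention omitted for brevity; all tensors are dummies and the variable names are ours.

```python
import torch

# Dummy shape walk-through of the four steps above (batch dimension
# and multi-head attention omitted; variable names are ours).
H, W, C, N = 512, 512, 256, 12

feat = torch.randn(C, H // 16, W // 16)            # CNN backbone output
seq = feat.flatten(1).t()                          # (HW/256, C) flattened feature
pos = torch.randn(seq.shape[0], C)                 # positional embedding, same shape
encoded = seq + pos                                # what the encoder consumes/produces

prototypes = torch.randn(N, C)                     # learnable class prototypes
attn = (prototypes @ encoded.t()).view(N, H // 16, W // 16)  # decoder attention map

up = torch.nn.functional.interpolate(attn[None], scale_factor=4)[0]  # (N, H/4, W/4)
seg = up.argmax(dim=0)                             # pixel-wise argmax -> (H/4, W/4)
```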
Figure 3: The whole pipeline of our hybrid CNN-Transformer architecture.
First, the input image is fed to the CNN to extract features $F$. Second,
the features and the position embedding are flattened and fed to the
transformer encoder for self-attention, which outputs the feature $F_{e}$.
Third, for the transformer decoder, we define a set of learnable class
prototype embeddings $E_{cls}$ as queries and $F_{e}$ as keys, and calculate
the attention map between $E_{cls}$ and $F_{e}$. Each class prototype
embedding corresponds to one category of the final prediction. We also add a
small conv head to fuse the attention map with the Res2 feature from the CNN
backbone. Details of the transformer decoder and the small conv head are
given in Figure 4. Finally, the prediction is obtained by a pixel-wise
argmax over the attention map. For example, in this figure, the segmentation
masks of two categories (bottle and eyeglass) correspond to two class
prototypes of the same colors.
Figure 4: Details of the transformer decoder and the small conv head.
Inputs: the learnable category prototypes as queries, and the features from
the transformer encoder as keys and values. The inputs are fed to the
transformer decoder, which consists of several decoder layers. The attention
map from the last decoder layer and the Res2 feature from the CNN backbone
are combined and fed to a small conv head to obtain the final prediction. We
also provide pseudo code for the small conv head for better understanding.
### 4.2 Encoder
The Transformer encoder takes a sequence as input, so the spatial dimensions
of the feature map $(\frac{H}{16},\frac{W}{16},C)$ are flattened into one
dimension, giving $(\frac{H}{16}\frac{W}{16},C)$. To compensate for the lost
spatial structure, a positional embedding Gehring et al. (2017) is added to
the one-dimensional feature to provide information about the relative or
absolute position of each element in the sequence. The positional embedding
has the same shape $(\frac{H}{16}\frac{W}{16},C)$ as the flattened feature.
The encoder is composed of stacked encoder layers, each of which consists of
a multi-head self-attention module and a feed-forward network Vaswani et al.
(2017).
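A minimal encoder sketch using PyTorch's built-in transformer layers is given below; the layer count, head count, feed-forward width, and zero-initialized learned positional embedding are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal encoder sketch; hyper-parameters are illustrative assumptions.
B, C, h, w = 2, 256, 32, 32
feat = torch.randn(B, C, h, w)                     # backbone feature map

seq = feat.flatten(2).permute(2, 0, 1)             # (hw, B, C) token sequence
pos = torch.zeros(seq.shape[0], 1, C)              # learned positional embedding (init)
layer = nn.TransformerEncoderLayer(d_model=C, nhead=8, dim_feedforward=3 * C)
encoder = nn.TransformerEncoder(layer, num_layers=4)

encoded = encoder(seq + pos)                       # (hw, B, C), same shape as input
```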
### 4.3 Decoder
The Transformer decoder takes as input a set of learnable class prototype
embeddings as queries, denoted by $E_{cls}$, and the encoded feature as keys
and values, denoted by $F_{e}$; it outputs an attention map which is passed
to the small conv head to obtain the final segmentation result, as shown in
Figure 4.
The class prototype embeddings are learned category prototypes, updated
iteratively by a series of decoder layers through multi-head attention
mechanisms. Denoting the iterative update rule by $\bigodot$, the class
prototype at decoder layer $s$ is:
$E_{cls}^{s}=\bigodot_{i=0,\dots,s-1}\mathrm{softmax}(E_{cls}^{i}F_{e})F_{e}$
(1)
In the final decoder layer, the attention map is extracted and fed into the
small conv head:
$\mathrm{attention\ map}=E_{cls}^{s}F_{e}$ (2)
Pseudo code for the small conv head is shown in Figure 4. The attention map
from the Transformer decoder has shape $(N,M,\frac{H}{16}\frac{W}{16})$,
where $N$ is the number of categories and $M$ is the number of heads in
multi-head attention. It is up-sampled to $(N,M,\frac{H}{4},\frac{W}{4})$,
fused with the high-resolution feature map Res2 along the second dimension
to give $(N,M+C,\frac{H}{4},\frac{W}{4})$, and finally transformed into an
output attention map of shape $(N,\frac{H}{4},\frac{W}{4})$. The final
segmentation is obtained by a pixel-wise argmax over the output attention
map.
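The decoder update of Eq. (1) and the small conv head can be sketched as follows, using a single attention head for brevity; the number of decoder layers, the Res2 channel width, and the 3x3 fusion convolution are our assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, N, C, h, w = 2, 12, 256, 32, 32                 # single head (M = 1) for brevity
F_e = torch.randn(B, h * w, C)                     # encoder output
E_cls = torch.randn(B, N, C)                       # learnable class prototypes

for _ in range(4):                                 # stacked decoder layers, Eq. (1)
    attn = torch.softmax(E_cls @ F_e.transpose(1, 2), dim=-1)  # (B, N, hw)
    E_cls = attn @ F_e                             # update prototypes with F_e

attn_map = (E_cls @ F_e.transpose(1, 2)).view(B * N, 1, h, w)  # Eq. (2)
up = F.interpolate(attn_map, scale_factor=4, mode='bilinear', align_corners=False)

res2 = torch.randn(B, C, 4 * h, 4 * w)             # high-resolution Res2 feature
res2 = res2.unsqueeze(1).expand(-1, N, -1, -1, -1).reshape(B * N, C, 4 * h, 4 * w)

head = nn.Conv2d(1 + C, 1, kernel_size=3, padding=1)           # small conv head
logits = head(torch.cat([up, res2], dim=1)).view(B, N, 4 * h, 4 * w)
seg = logits.argmax(dim=1)                         # final (B, H/4, W/4) prediction
```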
### 4.4 Discussion
The works most related to Trans2Seg are SETR and DETR Zheng et al. (2020);
Carion et al. (2020). In this section we discuss the relations and
differences in detail.
SETR. Trans2Seg and SETR are both segmentation pipelines. Their key
difference lies in the design of the decoder. In SETR, the decoder is simply
several convolutional layers, similar to most previous methods. In contrast,
the decoder of Trans2Seg is also a transformer, which fully utilizes the
advantages of the attention mechanism in semantic segmentation.
DETR. Trans2Seg and DETR share similar components in the pipeline, including
a CNN backbone and a Transformer encoder and decoder. The biggest difference
is the definition of the queries. In DETR, the decoder's queries represent
$N$ learnable objects because DETR is designed for object detection. In
Trans2Seg, the queries represent $N$ learnable class prototypes, where each
query represents one category. This minor change in query design generalizes
the Transformer architecture to diverse vision tasks, such as object
detection and semantic segmentation.
## 5 Experiments
### 5.1 Implementation Details.
We implement Trans2Seg in PyTorch. ResNet-50 He et al. (2016) with dilated
convolution at the last stage is adopted as the CNN feature extractor. For
loss optimization, we use the Adam optimizer with epsilon 1e-8 and weight
decay 1e-4. The batch size is 8 per GPU. We set the learning rate to 1e-4,
decayed by the poly strategy Yu et al. (2018), and train for 50 epochs. We
use 8 V100 GPUs for all experiments. For all CNN-based methods, we randomly
scale and crop the image to $480\times 480$ in training and resize the image
to $513\times 513$ in inference, following the common setting on PASCAL VOC
Everingham and Winn (2011). For our Trans2Seg, we adopt a transformer
architecture and need to keep the shape of the learned position embedding
the same in training and inference, so we directly resize the image to
$512\times 512$. Code has been released for the community.
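A hedged sketch of this training setup is given below; `model` is a stand-in module, `max_iters` is arbitrary, and the poly power of 0.9 is a common convention assumed here rather than stated in the paper.

```python
import torch

# `model` is a stand-in; the poly power of 0.9 is a common convention
# and an assumption here, not taken from the paper.
model = torch.nn.Conv2d(3, 12, kernel_size=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, eps=1e-8, weight_decay=1e-4)

base_lr, max_iters = 1e-4, 10_000

def poly_lr(it, power=0.9):
    return base_lr * (1.0 - it / max_iters) ** power

for it in range(max_iters):
    for group in optimizer.param_groups:
        group['lr'] = poly_lr(it)   # poly decay of the learning rate
    # ... forward pass, loss, loss.backward(), optimizer.step() ...
```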
### 5.2 Ablation Studies.
We use FCN Long et al. (2015) as our baseline. FCN is a fully convolutional
network with a very simple design, and it is also a classic semantic
segmentation method. First, we demonstrate that the transformer encoder
builds long-range attention between pixels, which gives a much larger
receptive field than CNN filters. Second, we remove the CNN decoder in FCN
and replace it with our Transformer decoder, designing a set of learnable
class prototypes as queries, and show that this design further improves
accuracy. Third, we study our method with transformers at different scales.
id | Trans. Enc. | Trans. Dec. | CNN Dec. | mIoU
---|---|---|---|---
0 | $\times$ | $\times$ | ✓ | 62.7
1 | ✓ | $\times$ | ✓ | 68.8
2 | ✓ | ✓ | $\times$ | 72.1
Table 2: Effectiveness of the Transformer encoder and decoder. 'Trans.'
indicates Transformer; 'Enc.' and 'Dec.' mean encoder and decoder.
Scale | hyper-param. | GFlops | MParams | mIoU
---|---|---|---|---
small | e128-n1-m2 | 40.9 | 30.5 | 69.2
medium | e256-n4-m3 | 49.0 | 56.2 | 72.1
large | e768-n12-m4 | 221.8 | 327.5 | 70.3
Table 3: Performance of the Transformer at different scales. 'e{a}-n{b}-m{c}'
means a transformer with embedding dimension 'a', 'b' layers, and MLP ratio
'c'.
Self-Attention of the Transformer Encoder. As shown in Table 2, the FCN
baseline without the transformer encoder achieves 62.7% mIoU; adding the
transformer encoder improves the mIoU by 6.1%, to 68.8%. This demonstrates
that the self-attention module in the transformer encoder provides a global
receptive field, which is better than the CNN's local receptive field for
transparent object segmentation.
Category Prototypes of the Transformer Decoder. In Table 2, we verify the
effectiveness of the learnable category prototypes in the transformer
decoder. With the traditional CNN decoder (row id 1 in Table 2), the mIoU is
68.8%; with our transformer decoder, the mIoU rises to 72.1%, a 3.3%
improvement. The strong performance benefits from the flexible
representation in which learnable category prototypes serve as queries to
find the corresponding pixels in the feature map.
Scale of the Transformer. The scale of the transformer is mainly influenced
by three hyper-parameters: (1) the embedding dimension of the features, (2)
the number of attention layers, and (3) the MLP ratio in the feed-forward
layer. We are interested in whether enlarging the model continuously
improves performance, so we evaluate three combinations, as shown in Table
3. As the transformer size increases, the mIoU first increases and then
decreases. We argue that, without massive pre-training data (e.g., BERT
Devlin et al. (2019) used large-scale NLP data), a larger transformer is not
necessarily better for our task.
Figure 5: Visual comparison of Trans2Seg with other CNN-based semantic
segmentation methods. Our Trans2Seg clearly outperforms the others thanks to
the transformer's global receptive field and attention mechanism, especially
in the dashed regions. Zoom in for best view. Refer to the supplementary
materials for more visualized results.
### 5.3 Comparison to the state-of-the-art.
Method | FLOPs | ACC $\uparrow$ | mIoU $\uparrow$ | Category IoU $\uparrow$
---|---|---|---|---
| | bg | shelf | jar | freezer | window | door | eyeglass | cup | wall | bowl | bottle | box
FPENet | 0.76 | 70.31 | 10.14 | 74.97 | 0.01 | 0.00 | 0.02 | 2.11 | 2.83 | 0.00 | 16.84 | 24.81 | 0.00 | 0.04 | 0.00
ESPNetv2 | 0.83 | 73.03 | 12.27 | 78.98 | 0.00 | 0.00 | 0.00 | 0.00 | 6.17 | 0.00 | 30.65 | 37.03 | 0.00 | 0.00 | 0.00
ContextNet | 0.87 | 86.75 | 46.69 | 89.86 | 23.22 | 34.88 | 32.34 | 44.24 | 42.25 | 50.36 | 65.23 | 60.00 | 43.88 | 53.81 | 20.17
FastSCNN | 1.01 | 88.05 | 51.93 | 90.64 | 32.76 | 41.12 | 47.28 | 47.47 | 44.64 | 48.99 | 67.88 | 63.80 | 55.08 | 58.86 | 24.65
DFANet | 1.02 | 85.15 | 42.54 | 88.49 | 26.65 | 27.84 | 28.94 | 46.27 | 39.47 | 33.06 | 58.87 | 59.45 | 43.22 | 44.87 | 13.37
ENet | 2.09 | 71.67 | 8.50 | 79.74 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 22.25 | 0.00 | 0.00 | 0.00
HRNet_w18 | 4.20 | 89.58 | 54.25 | 92.47 | 27.66 | 45.08 | 40.53 | 45.66 | 45.00 | 68.05 | 73.24 | 64.86 | 52.85 | 62.52 | 33.02
HardNet | 4.42 | 90.19 | 56.19 | 92.87 | 34.62 | 47.50 | 42.40 | 49.78 | 49.19 | 62.33 | 72.93 | 68.32 | 58.14 | 65.33 | 30.90
DABNet | 5.18 | 77.43 | 15.27 | 81.19 | 0.00 | 0.09 | 0.00 | 4.10 | 10.49 | 0.00 | 36.18 | 42.83 | 0.00 | 8.30 | 0.00
LEDNet | 6.23 | 86.07 | 46.40 | 88.59 | 28.13 | 36.72 | 32.45 | 43.77 | 38.55 | 41.51 | 64.19 | 60.05 | 42.40 | 53.12 | 27.29
ICNet | 10.64 | 78.23 | 23.39 | 83.29 | 2.96 | 4.91 | 9.33 | 19.24 | 15.35 | 24.11 | 44.54 | 41.49 | 7.58 | 27.47 | 3.80
BiSeNet | 19.91 | 89.13 | 58.40 | 90.12 | 39.54 | 53.71 | 50.90 | 46.95 | 44.68 | 64.32 | 72.86 | 63.57 | 61.38 | 67.88 | 44.85
DenseASPP | 36.20 | 90.86 | 63.01 | 91.39 | 42.41 | 60.93 | 64.75 | 48.97 | 51.40 | 65.72 | 75.64 | 67.93 | 67.03 | 70.26 | 49.64
DeepLabv3+ | 37.98 | 92.75 | 68.87 | 93.82 | 51.29 | 64.65 | 65.71 | 55.26 | 57.19 | 77.06 | 81.89 | 72.64 | 70.81 | 77.44 | 58.63
FCN | 42.23 | 91.65 | 62.75 | 93.62 | 38.84 | 56.05 | 58.76 | 46.91 | 50.74 | 82.56 | 78.71 | 68.78 | 57.87 | 73.66 | 46.54
OCNet | 43.31 | 92.03 | 66.31 | 93.12 | 41.47 | 63.54 | 60.05 | 54.10 | 51.01 | 79.57 | 81.95 | 69.40 | 68.44 | 78.41 | 54.65
RefineNet | 44.56 | 87.99 | 58.18 | 90.63 | 30.62 | 53.17 | 55.95 | 42.72 | 46.59 | 70.85 | 76.01 | 62.91 | 57.05 | 70.34 | 41.32
Translab | 61.31 | 92.67 | 69.00 | 93.90 | 54.36 | 64.48 | 65.14 | 54.58 | 57.72 | 79.85 | 81.61 | 72.82 | 69.63 | 77.50 | 56.43
DUNet | 123.69 | 90.67 | 59.01 | 93.07 | 34.20 | 50.95 | 54.96 | 43.19 | 45.05 | 79.80 | 76.07 | 65.29 | 54.33 | 68.57 | 42.64
UNet | 124.55 | 81.90 | 29.23 | 86.34 | 8.76 | 15.18 | 19.02 | 27.13 | 24.73 | 17.26 | 53.40 | 47.36 | 11.97 | 37.79 | 1.77
DANet | 198.00 | 92.70 | 68.81 | 93.69 | 47.69 | 66.05 | 70.18 | 53.01 | 56.15 | 77.73 | 82.89 | 72.24 | 72.18 | 77.87 | 56.06
PSPNet | 187.03 | 92.47 | 68.23 | 93.62 | 50.33 | 64.24 | 70.19 | 51.51 | 55.27 | 79.27 | 81.93 | 71.95 | 68.91 | 77.13 | 54.43
Trans2Seg | 49.03 | 94.14 | 72.15 | 95.35 | 53.43 | 67.82 | 64.20 | 59.64 | 60.56 | 88.52 | 86.67 | 75.99 | 73.98 | 82.43 | 57.17
Table 4: Evaluated state-of-the-art semantic segmentation methods, sorted by FLOPs. Our proposed Trans2Seg surpasses all the other methods in pixel accuracy and mean IoU, as well as in most of the category IoUs (8 of 11).
Method | #Param (M) | GFLOPs | mIoU (%)
---|---|---|---
R50-SemanticFPN | 28.5 | 45.6 | 36.7
R50-d8+DeeplabV3+ | 26.8 | 120.5 | 41.5
R50-d16+DeeplabV3+ | 26.8 | 45.5 | 40.6
R50-d16+Trans2Seg | 56.1 | 79.3 | 39.7
Table 5: Performance of Trans2Seg on the ADE20K dataset. Trans2Seg also
works well on general semantic segmentation tasks. 'd8' and 'd16' mean
dilation 8 and 16, respectively; 'R50' means a ResNet-50 backbone.
We select more than 20 semantic segmentation methods Xie et al. (2020); Chen
et al. (2018c); Li et al. (2019a); Zhao et al. (2017); Yuan and Wang (2018);
Yang et al. (2018); Long et al. (2015); Ronneberger et al. (2015); Yu et al.
(2018); Lin et al. (2017); Chao et al. (2019); Wang et al. (2019a); Poudel et
al. (2019, 2018); Wang et al. (2019b); Jin et al. (2019); Zhao et al. (2018);
Li et al. (2019a); Liu and Yin (2019); Li et al. (2019b); Fu et al. (2019);
Mehta et al. (2019) to evaluate on our Trans10K-v2 dataset; the method
selection largely follows the benchmark of TransLab Xie et al. (2020). For a
fair comparison, we train all methods for 50 epochs.
Table 4 reports the overall quantitative comparison on the test set. Our
Trans2Seg achieves a state-of-the-art 72.15% mIoU and 94.14% pixel accuracy,
significantly outperforming the other, purely CNN-based methods. For
example, our method is 2.1% higher than TransLab, the previous SOTA method.
We also find that our method tends to perform much better on small objects,
such as 'bottle' and 'eyeglass' (10.0% and 5.0% higher than the previous
SOTA). We attribute this to the transformer's long-range attention, which
benefits small transparent object segmentation.
In Figure 5, we visualize the mask predictions of Trans2Seg and other
CNN-based methods. Benefiting from the transformer's large receptive field
and attention mechanism, our method distinguishes the background and
transparent objects of different categories much better than the other
methods, especially when multiple objects of different categories occur in
one image. Moreover, our method obtains high-quality details, e.g., object
boundaries and tiny transparent objects, while other CNN-based methods fail
to do so. More results are shown in the supplementary material.
### 5.4 General Semantic Segmentation
We transfer Trans2Seg to general semantic segmentation, where it also
achieves satisfactory performance.
Experiment Settings. We choose ADE20K Zhou et al. (2017), a challenging
scene parsing benchmark for semantic segmentation. ADE20K contains 150
fine-grained semantic categories, with 20,210, 2,000 and 3,352 images for
training, validation and testing, respectively. We set the learning rate to
2.5e-5 for the ADE20K experiments. We train all models for 40k iterations
with 8 images/GPU on 8 GPUs and use single-scale testing in inference. The
data augmentation is the same as in DeeplabV3+ Chen et al. (2018c).
Results. As shown in Table 5, compared with Semantic FPN and DeeplabV3+, our
Trans2Seg achieves 39.7 mIoU, a satisfactory result that verifies its
transferability to a challenging general segmentation dataset. Note that we
did not carefully tune the hyper-parameters of Trans2Seg on ADE20K. We are
highly interested in designing a better transformer-based general semantic
segmentation pipeline in the future.
## 6 Conclusion
In this paper, we present a new fine-grained transparent object segmentation
dataset with 11 common categories, termed Trans10K-v2, whose data is based
on the previous Trans10K. We also discuss the challenges and practicality of
the proposed dataset. Moreover, we propose a transformer-based pipeline,
termed Trans2Seg, to solve this challenging task. In Trans2Seg, the
transformer encoder provides a global receptive field, which is essential
for transparent object segmentation. In the transformer decoder, we model
segmentation as dictionary look-up with a set of learnable queries, where
each query represents one category. Finally, we evaluate more than 20
mainstream semantic segmentation methods and show that our Trans2Seg clearly
surpasses these CNN-based segmentation methods.
In the future, we are interested in exploring our Transformer
encoder-decoder design on general segmentation tasks, such as Cityscapes and
PASCAL VOC. We will also put more effort into solving the transparent object
segmentation task.
## 7 Appendix
### 7.1 Detailed Dataset Information
#### 7.1.1 More Visualized Demonstration of Trans10K-v2.
In this section we show more visualized demonstrations of the diversity and
quality of Trans10K-v2. Figures 6 and 7 show more cropped objects to
illustrate the high diversity of the objects, and Figure 8 shows more images
and ground-truth masks. All images and transparent objects in Trans10K-v2
are selected from complex real-world scenarios with large variations in
scale, viewpoint, contrast, occlusion, category and transparency. Figure 8
also shows that the dataset is challenging for current semantic segmentation
methods.
Figure 6: Cropped objects of the 5 kinds of transparent things: cup, jar,
bottle, bowl, eyeglass. Zoom in for the best view.
Figure 7: Cropped objects of the 6 kinds of transparent stuff: wall,
freezer, box, door, shelf, window. Zoom in for the best view.
Figure 8: More images and corresponding high-quality masks in Trans10K-v2.
Our dataset is highly diverse in scale, category, pose, contrast, occlusion,
and transparency. Zoom in for the best view.
#### 7.1.2 Scene information
We also provide each image with a scene label that represents where the
objects are located. As shown in the upper part of Table 6, we list detailed
statistics of the distribution of each category over different scenes. The
distribution closely follows that of our residential environments; for
example, cups, bowls and bottles are mostly placed on desks, while glass
walls are often located in mega-malls or office buildings.
A visual demonstration of our diverse scene distribution is shown in Figure
9. Trans10K-v2 contains abundant scenarios, which we group into 13
categories: on the desk, mega-mall, store, bedroom, sitting room, kitchen,
bathroom, windowsill, office, office building, outdoor, in the vehicle, and
study-room. This information mainly demonstrates that our image distribution
covers most common real-life scenarios. Each image is provided with a scene
label.
Figure 9: The image number distribution and selected images of different
scenes in Trans10K-v2. For better demonstration, the image numbers on the
vertical axis are shown on a logarithmic scale.
#### 7.1.3 How Robots Deal with Transparent Objects
Transparent objects are widespread in human residential environments, so
human-aiding robots must find ways to deal with them. Earlier robotics
research illustrates the substantial value of solving this problem, mainly
for grasping and navigation. That research primarily focuses on modifying
algorithms to deal with the optical signals reflected from transparent
objects.
For manipulator grasping, previous work mainly focuses on grabbing water
cups. Klank et al. (2011) propose an approach to reconstruct an approximate
surface of transparent cups and bottles from the internal sensory
contradiction between two ToF (time-of-flight) images captured by an SR4k
camera, so that a robot arm can grasp and manipulate the objects. Spataro et
al. (2015) set up a BCI-robot platform to help patients suffering from limb
muscle paralysis by grasping a glass of water for them. Starting from the
observation that common glass materials absorb light at specific
wavelengths, Zhou et al. (2018) propose the Depth Likelihood Volume (DLV),
which uses a Monte Carlo object localization algorithm to help the Michigan
Progress Fetch robot localize and manipulate translucent objects.
For mobile robot navigation, other work finds ways to exclude the side
effects of transparent stuff in residential scenarios. Foster et al. (2013)
modify the standard occupancy grid algorithm so that an autonomous-mapping
robot can localize transparent objects from certain angles. Kim and Chung
(2016) design a novel scan matching algorithm that compares candidate
distances for laser range finder beams that penetrate or reflect from glass
walls. Singh et al. (2018) use information fusion, combining a laser scanner
and a sonar on an autonomous-mapping mobile robot, to reduce the uncertainty
caused by glass.
We analyze how robots deal with transparent objects in previous work and
group the interactions into 4 patterns: navigation, grasping, manipulation,
and human-aiding. Navigation and grasping are the two fundamental
interactions between robots and objects. Manipulation happens with complex
objects like windows, doors, or bottles with lids. Human-aiding is the
highest level of robot mission, and this kind of interaction always involves
humans, especially disabled patients. From these 4 patterns, we can then
analyze and categorize the transparent objects with respect to their
functions.
#### 7.1.4 Categorization Principle
The 11 fine-grained categories are based on how robots need to deal with
transparent objects, e.g., avoiding, grasping or manipulating them. For
example, the goblet and the cup are both open-mouthed and mainly used to
drink water; these objects need to be grasped carefully since they do not
have lids, and they afford the same interactive actions with robots, so they
are both categorized as cup. We detail each category as follows. (1) Shelf:
including bookshelf, showcase, cabinet, etc. They mostly have sliding glass
doors and are used to store goods. (2) Freezer: including vending machines,
horizontal freezers, etc. They are electrical equipment used to store drinks
and food. (3) Door: including automatic and standard glass doors, located in
mega-malls, bathrooms or office buildings. They are highly transparent and
widespread, and can be used for navigation and for helping disabled people
pass through. (4) Wall: glass walls look like doors, but walls cannot be
opened. This clue should be perceived during a mobile robot's mapping
procedure. Glass walls are common in mega-malls and office buildings. (5)
Window: windows can be opened like glass doors but should not be traveled
through. (6) Box: large boxes may not need to be grasped, but a manipulator
robot may need to open a box and search for specific items. (7) Cup: we
categorize all open-mouthed cups, like goblets and regular cups, into this
category. Cups are used for drinking water; manipulators need to grasp a cup
carefully and be able to assist disabled people in drinking. (8) Bottle:
bottles are also used to drink water, but bottles have lids, so they need
careful manipulation. (9) Eyeglass: eyeglasses need careful grasping and
manipulation to help disabled people wear them. (10) Jar: this category
contains jars, kettles and other transparent containers used to hold water,
flavoring and food. (11) Bowl: bowls are usually used to contain water or
food. Unlike jars, they do not have lids and need careful grasping. Sample
objects of these categories can be found in Figure 8; we show the most
common types of the different categories by cropping the objects through
their masks.
As shown in the lower part of Table 6, we analyze and list the interactive
patterns of all 11 fine-grained categories of objects. Navigation is the
basic interactive pattern for stuff, and grasping is the basic interactive
pattern for things. Objects with more complex interactions need to be
manipulated, as when a robot helps a person open a shelf or window.
Human-aiding is the highest level of interaction and always involves
patients, who need robots to help with opening doors or feeding water from a
cup or bottle.
(a) Occlusion and Crowd.
(b) Extreme Transparency.
Figure 10: Failure case analysis. Our Trans2Seg fails to segment transparent
objects in some complex scenarios.
### 7.2 More Visual Results Comparison.
In this section, we visualize more test examples produced by our Trans2Seg
and by other CNN-based methods on the Trans10K-v2 dataset in Figure 11. From
these results, we can easily observe that our Trans2Seg outputs much
higher-quality transparent object segmentation masks than the other methods.
These strong results mainly come from successfully introducing the
Transformer into transparent object segmentation, which the other CNN-based
methods lack.
$\frac{Scene/Category}{Interaction}$ | Stuff | | Things
---|---|---|---
| shelf | freezer | door | wall | window | box | | cup | bottle | eyeglass | jar | bowl
on the desk | 3 | 0 | 0 | 2 | 4 | 227 | | 1946 | 834 | 239 | 302 | 117
mega-mall | 219 | 35 | 450 | 1762 | 76 | 128 | | 169 | 36 | 75 | 94 | 14
store | 13 | 36 | 5 | 19 | 3 | 75 | | 444 | 111 | 1 | 175 | 57
bedroom | 6 | 0 | 4 | 9 | 23 | 2 | | 23 | 33 | 6 | 6 | 1
living room | 10 | 0 | 7 | 14 | 19 | 52 | | 310 | 167 | 25 | 139 | 67
kitchen | 0 | 8 | 6 | 4 | 4 | 19 | | 79 | 23 | 0 | 46 | 66
bathroom | 0 | 0 | 33 | 31 | 8 | 4 | | 5 | 3 | 4 | 0 | 2
windowsill | 0 | 0 | 0 | 31 | 209 | 4 | | 17 | 8 | 8 | 17 | 2
office room | 15 | 7 | 25 | 43 | 12 | 84 | | 298 | 235 | 51 | 158 | 2
office building | 8 | 3 | 1021 | 1107 | 131 | 5 | | 1 | 5 | 0 | 2 | 0
outdoor | 0 | 0 | 13 | 20 | 2 | 0 | | 0 | 2 | 0 | 0 | 0
in the vehicle | 0 | 0 | 2 | 0 | 1 | 0 | | 4 | 0 | 0 | 0 | 0
study-room | 4 | 0 | 3 | 2 | 4 | 1 | | 4 | 1 | 0 | 2 | 0
navigation | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | | | |
grasping | | | | | | | | ✓ | ✓ | ✓ | ✓ | ✓
manipulation | ✓ | ✓ | ✓ | | ✓ | ✓ | | | ✓ | ✓ | ✓ |
human-aiding | | | ✓ | | | | | ✓ | ✓ | ✓ | | ✓
Table 6: Upper part: the number of images in each scene. Lower part: the
interaction patterns of each category.
Figure 11: Visualized comparison with state-of-the-art methods. Our
Trans2Seg produces the best mask predictions among all methods. Zoom in for
the best view.
### 7.3 Failure Case Analysis
As shown in Figure 10, our method has some limitations. For instance, in
Figure 10(a), when transparent objects are occluded by objects of different
categories, our method becomes confused and fails to segment some of the
items. In Figure 10(b), when objects are extremely transparent, our method
also becomes confused and outputs wrong segmentation results; in such cases,
even humans may fail to distinguish these transparent objects.
## References
* Carion et al. [2020] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-End object detection with transformers. In ECCV, 2020.
* Chao et al. [2019] Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. Hardnet: A low memory traffic network. In ICCV, 2019.
* Chen et al. [2014] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv, 2014.
* Chen et al. [2017] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. TPAMI, 2017.
* Chen et al. [2018a] Guanying Chen, Kai Han, and Kwan-Yee K. Wong. Tom-net: Learning transparent object matting from a single image. In CVPR, 2018.
* Chen et al. [2018b] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
* Chen et al. [2018c] Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In ECCV, 2018.
* Chen et al. [2020] Hanting Chen, Yunhe Wang, Tianyu Guo, Chang Xu, Yiping Deng, Zhenhua Liu, Siwei Ma, Chunjing Xu, Chao Xu, and Wen Gao. Pre-trained image processing transformer. arXiv preprint arXiv:2012.00364, 2020.
* [Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019.
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
* Everingham and Winn [2011] Mark Everingham and John Winn. The pascal visual object classes challenge 2012 (voc2012) development kit. Pattern Analysis, Statistical Modelling and Computational Learning, Tech. Rep, 2011.
* Foster et al. [2013] Paul Foster, Zhenghong Sun, Jong Jin Park, and Benjamin Kuipers. Visagge: Visible angle grid for glass environments. In 2013 IEEE International Conference on Robotics and Automation, pages 2213–2220. IEEE, 2013.
* Fu et al. [2019] Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3146–3154, 2019.
* Gadde et al. [2016] Raghudeep Gadde, Varun Jampani, Martin Kiefel, Daniel Kappler, and Peter V Gehler. Superpixel convolutional networks using bilateral inceptions. In ECCV, 2016.
* Gehring et al. [2017] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
* Han et al. [2020] Kai Han, Yunhe Wang, Hanting Chen, Xinghao Chen, Jianyuan Guo, Zhenhua Liu, Yehui Tang, An Xiao, Chunjing Xu, Yixing Xu, et al. A survey on visual transformer. arXiv preprint arXiv:2012.12556, 2020.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016.
* Jin et al. [2019] Qiangguo Jin, Zhaopeng Meng, Tuan D Pham, Qi Chen, Leyi Wei, and Ran Su. Dunet: A deformable network for retinal vessel segmentation. Knowledge-Based Systems, 2019.
* [Kim and Chung [2016] Jiwoong Kim and Woojin Chung. Localization of a mobile robot using a laser range finder in a glass-walled environment. IEEE Transactions on Industrial Electronics, 63(6):3616–3627, 2016.
* Klank et al. [2011] Ulrich Klank, Daniel Carton, and Michael Beetz. Transparent object detection and reconstruction on a mobile platform. In IEEE International Conference on Robotics & Automation, 2011\.
* Li et al. [2019a] Gen Li, Inyoung Yun, Jonghyun Kim, and Joongkyu Kim. Dabnet: Depth-wise asymmetric bottleneck for real-time semantic segmentation. arXiv, 2019.
* Li et al. [2019b] Hanchao Li, Pengfei Xiong, Haoqiang Fan, and Jian Sun. Dfanet: Deep feature aggregation for real-time semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9522–9531, 2019.
* Lin et al. [2016] Guosheng Lin, Chunhua Shen, Anton Van Den Hengel, and Ian Reid. Efficient piecewise training of deep structured models for semantic segmentation. In CVPR, 2016.
* Lin et al. [2017] Guosheng Lin, Anton Milan, Chunhua Shen, and Ian Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In CVPR, 2017.
* Liu and Yin [2019] Mengyu Liu and Hujun Yin. Feature pyramid encoding network for real-time semantic segmentation. arXiv, 2019.
* Liu et al. [2017] Sifei Liu, Shalini De Mello, Jinwei Gu, Guangyu Zhong, Ming-Hsuan Yang, and Jan Kautz. Learning affinity via spatial propagation networks. In NIPS, 2017.
* Long et al. [2015] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
* Mehta et al. [2019] Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi. Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9190–9200, 2019.
# Programming Boundary Deformation Patterns in Active Networks
Zijie Qu1, Jialong Jiang1, Heun Jin Lee2, Rob Phillips1,2,3, Shahriar Shadkhoo1 <EMAIL_ADDRESS>and Matt Thomson1 <EMAIL_ADDRESS>
1Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA. 2Department of Applied Physics, California Institute of Technology, Pasadena, CA, USA. 3Department of Physics, California Institute of Technology, Pasadena, CA, USA.
###### Abstract
Active materials take advantage of their internal sources of energy to self-
organize in an automated manner. This feature provides a novel opportunity to
design micron-scale machines with minimal required control. However, self-
organization goes hand in hand with predetermined dynamics that are hardly
susceptible to environmental perturbations. Therefore, utilizing this feature of active systems requires harnessing and directing the macroscopic dynamics to achieve specific functions, which in turn necessitates understanding the underlying mechanisms of active forces. Here we devise an optical control
protocol to engineer the dynamics of active networks composed of microtubules
and light-activatable motor proteins. The protocol enables carving activated
networks of different shapes, and isolating them from the embedding solution.
Studying a large set of shapes, we observe that the active networks contract
in a shape-preserving manner that persists over the course of contraction. We
formulate a coarse-grained theory and demonstrate that self-similarity of
contraction is associated with viscous-like active stresses. These findings
help us program the dynamics of the network through manipulating the light
intensity in space and time, and maneuver the network into bending in specific
directions, as well as in temporally alternating directions. Our work improves our understanding of the active dynamics in contractile networks, and paves a new path towards engineering the dynamics of a large class of active materials.
The rich and exotic dynamical behavior of active systems originates from
energy consumption at the level of their constituents, which drives them out
of equilibrium and endows them with the capability of self-organizing into
micron-scale machines [1, 2, 3, 4]. A central goal is to harness the internally generated dynamics and program active stresses to accomplish desired tasks through modulating the system boundaries and forces at
macroscopic scales. Biology has served as the major source of inspiration in
designing synthetic active systems [5, 6, 7]. In cells, cross-linked polymer
networks mediate the active forces that are generated by motor proteins
through hydrolyzing ATP. In vitro experiments with cell extracts and
reconstituted networks of microtubules (MTs) and kinesin motor proteins show
self-organization into structures including asters and contractile/extensile
networks [8, 9, 10, 11]. Mechanical properties of active networks have been
extensively studied, experimentally [12, 13, 14, 15, 11, 16, 17, 18] as well
as theoretically [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30].
Important questions to be answered include: What modes of dynamics can
potentially be probed in a controllable way, and how do we accomplish that? In
this paper we address these questions in MT-motor-protein active networks.
The interactions of such networks can be categorized into active and passive
internal interactions, and network–environment interactions. The latter depend
on the specific instrumentation of the experiments, often in an uncontrollable
manner.
Here, we develop an optical control protocol to activate the motor proteins
within a region of illumination, form active MT-motor networks, and isolate
them from the surrounding solution. Our strategy utilizes a recently developed
optical experimental system to form and isolate active networks of different
geometries. Dynamics of the isolated networks are dominated by active stresses
with negligible fluid drag; Fig. (1a) [31, 32]. For a large set of distinct
geometries we demonstrate that the active networks undergo shape-preserving
contractions. Using a hydrodynamic model we demonstrate that the shape
preservation is the direct consequence of viscous-like active stresses. The
model teaches us how to program active stresses by modulating the light
pattern and intensity. Specifically, we design protocols for spatiotemporal
modulations of light intensity to achieve static bending, as well as
temporally-alternating bending directions in the network.
Figure 1: Optical-control protocol first activates cross-linking motor
proteins to form the MT networks and isolates the network from embedding
solution, allowing them to contract self-similarly. (a) An initial pulse of
light activates motor proteins within a region of illumination. Activated
motor proteins crosslink the MTs and form a contractile network. Isolation of the network from the solution requires a second pulse at around 50–80 s. (b) shows the macroscopic (top row) and microscopic (second row)
snapshots of the network, from left to right: during the activation and
network formation, at the time of isolation, and shape preserving contraction.
The colored dots in the second row track the loci of four distinct microscopic
asters in time. The bottom panel shows the profile of the contracting network
in time (horizontal axis). The major three phases of the dynamics are
separated by vertical lines. (c) For three different shapes the self-similar
contraction of networks is portrayed by overlaying the networks’ boundaries as
they shrink in time.
Figure 2: Self-similarity persists over time and is the consequence of linear
scaling of velocity with radius. (a) depicts the algorithm for evaluating
self-similarity between two timepoints. The boundaries of the two shapes are
extracted. The larger shape (earlier time) is scaled down to match the linear
scale of the smaller one. The self-similarity is then found by calculating the
correlation between the two shapes. (b) For various geometries (and initial
sizes) the deviation from self-similarity $(\delta)$ is measured between
$t=100$s and later timepoints $t\leq 360$s (end of contraction). The bars
start from zero deviation at $t=100$s and reach their maximum at $t=360$s. All
shapes retain their initial shape up to at least $90\%$ accuracy. (c) The
magnitude of radial velocity of the contracting networks, at different times.
While the radial component increases linearly with distance from the center
(hence the cones), the slopes ($\tau_{\xi}^{-1}=|v_{r}(r)|/r$) decrease in
time. This linearity is assessed in (d): top panel shows the scatter plots of
$v_{r}(r,t)$ vs. $r$. The Pearson correlation coefficient $\varrho(r,v_{r})$
is calculated at different timepoints, which varies in time between $0.999$
and $0.952$. The bottom panel shows the relative contributions of the radial and angular components of the velocity. The contribution of the radial component never drops below $(v_{r}/v)^{2}\simeq 0.65$.
### Activity preserves the shape memory of contracting networks
Performing experiments on several distinct geometries reveals striking
universal dynamics that shed light on the underlying active mechanism. We
first studied contracting circles as well as polygonal networks (squares,
triangles, and hexagons) of different sizes: $460,\,600,\,750$ and $900\,\mu\text{m}$. We used a combination of microscopy and image analysis
to track and infer network dynamics using labeled MTs.
We found that across a wide range of geometries the MT-motor networks generate
a contraction that is self-similar, i.e. shape preserving. We realized that
the dynamics of networks consist of three phases: (I) Formation of MT-motor
contractile networks, the shapes of which are determined by the region of
illumination. The activated network is isolated from the background solution
by the end of this phase. (II) Contraction phase, during which the area of the network decreases over time while the density of the cross-linked network increases.
(III) Deceleration of contraction as the density of filaments, and thus the
MT-MT steric interactions increase. During the contractile phases (II and
III), the network retains the initial shape of the light pattern.
In order to assess self-similarity, we first segment images to find the
regions occupied by the networks at different times. Next, for two shapes at
timepoints $t_{1}$ and $t_{2}>t_{1}$, with areas $A_{1}$ and $A_{2}<A_{1}$, we
scale down the larger shape by $\sqrt{A_{2}/A_{1}}$, and align the centers of
the two shapes. Self-similarity is defined as the ratio of the bitwise overlap
(AND operator) area, and $A_{2}$ (Fig. (2a)). To account for stochastic rigid
rotations of each network around its center of mass, we maximize the self-
similarity with respect to relative rotations over the range of $(-20\,,+20)$
degrees. The deviation from self-similarity, $\delta(t_{1},t_{2})$, is
calculated by subtracting the self-similarity from unity. Across all networks
examined we found that $\delta\lesssim 10\%$ over the entire course of the
dynamics; Fig. (2b).
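This measure is straightforward to implement; the sketch below is a hypothetical Python realization of the procedure just described (scaling, centre alignment, rotation search), not the code used for the analysis:

```python
import numpy as np
from scipy import ndimage

def deviation_from_self_similarity(mask_early, mask_late, angles=np.arange(-20, 21)):
    """delta(t1, t2) between two binary masks of the segmented network:
    scale the earlier (larger) shape down by sqrt(A2/A1), align both
    centres of mass, and maximise the bitwise overlap over rigid
    rotations in (-20, +20) degrees."""
    A1, A2 = mask_early.sum(), mask_late.sum()
    small = ndimage.zoom(mask_early.astype(float), np.sqrt(A2 / A1), order=1) > 0.5

    # Embed both masks on a common canvas with aligned centres of mass.
    H = 2 * max(small.shape[0], mask_late.shape[0])
    W = 2 * max(small.shape[1], mask_late.shape[1])

    def embed(m):
        canvas = np.zeros((H, W), dtype=bool)
        cy, cx = ndimage.center_of_mass(m)
        oy, ox = int(H / 2 - cy), int(W / 2 - cx)
        canvas[oy:oy + m.shape[0], ox:ox + m.shape[1]] = m
        return canvas

    a, b = embed(small), embed(mask_late)
    # Self-similarity = (AND-overlap area) / A2, maximised over rotations.
    best = max(
        np.logical_and(
            ndimage.rotate(a.astype(float), ang, reshape=False, order=0) > 0.5, b
        ).sum()
        for ang in angles
    )
    return 1.0 - best / A2
```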
The self-similar scaling of the network boundary over time is strongly
suggestive of an underlying contractile mechanism that is distinct from those
in passive systems. In a passive system, competition between bulk and boundary
energies, along with the dissipative drag forces induced by the fluid, lead to
distortions in the curvature of the initial network boundary that increase in
time. The absence of these “equilibrating” (stress releasing) deformations in
our system is indicative of strongly activity-driven dynamics, counteracting
the dissipative effects.
In comparison to convex shapes that are identified by uniformly positive
boundary curvature, the richer geometric features of concave shapes (arcs of
positive and negative curvatures) make the deviations from self-similar
contraction easier to detect. Furthermore, boundary deformations in concave
shapes are more probable to occur due to the bulk-boundary couplings, making
the dynamics of concave shapes more informative from a physical perspective.
Passive systems with free boundaries equilibrate to round shapes to minimize
the sum of the bulk and boundary free energy, and perturbing the boundaries
induces stresses in the bulk. Therefore, probing concave active networks
provides a more stringent test for verifying the activity-dominated and drag-
free contraction.
We prepared networks in two concave geometries: hexagrams and cardioids, and
found that these shapes contract with self-similarities indistinguishable from
those generated in convex networks. In Figure (2b), we show for all shapes,
the maximum deviation from self-similarity over the course of contraction
measured with respect to the reference time $t=100$s. The deviation from self-
similar contraction remains below $10\%$ for all convex shapes—in many cases
below $5\%$. Between the two concave shapes, the cardioid shows a very small deviation of $2\%$, while the hexagram reaches almost $10\%$ deviation, comparable to triangles and rectangles. The absence of such equilibrating deformations in concave shapes of
active networks indicates that the contractile motion of our system is stress-
free. More precisely, the contraction corresponds to uniform scaling of the
intrinsic metric, in accord with the uniform velocity gradient.
Figure 3: Comparison and agreement between experiments and theory supports the
role of activity-induced viscous interaction in shape-preserving contraction.
(a) For three shapes of hexagram, triangle and ellipse the velocity vector
fields are shown as extracted via PIV in experiments (red) and simulated
(blue); note the radial velocity in all cases. (b) The theoretical counterpart
of the previously shown $v_{r}$ vs. $v$ experimental data in Fig. (2d). (c)
For closer comparison the velocity along $x-$axis is plotted at different
times for a contracting square. (d) The linear length scale $\xi_{\text{th.}}$
in time as predicted by theory, for different initial sizes. In (e) the fractional contraction $\lambda-\lambda_{\text{eq.}}$ for the theoretical results collapses onto a single curve that is compared with those extracted from experiments. The curves decay exponentially over the timescale $\tau_{c}$.
### Persistent self-similarity suggests linear radial velocity field
The high degree of persistent shape preservation suggests a spatially uniform and isotropic contraction of the networks. In accord with self-similarity, we
found that the contracting networks generate a velocity field, as inferred
from Particle Image Velocimetry (PIV), that remains linear throughout the
dynamics across all network shapes and sizes. Specifically, Fig. (2c) shows
the radial component of velocity field in the $x-y$ plane, generated by a
contracting circle at different times. Similarly in Fig. (2d) top panel, the
radial velocity is plotted as a function of distance $r$ from the center of
mass. Linearity of velocity is evident from the Pearson correlation
coefficient $\varrho(r,v_{r})$ which remains very close to unity. The slope of
$v_{r}$ vs. $r$, corresponding to $\tau_{\xi}^{-1}$ in Fig. (2c), changes as a
function of time and size of the network, hence the subscript $\xi$. The
inverse of this slope ($\tau_{\xi}$) can be interpreted as the time it takes
for a network of size $\xi$ to shrink to zero, if the contraction would not
decelerate. However the contraction of the network leads to accumulation of
mass which slows down the contraction, and $\tau_{\xi}$ diverges at an
equilibrium density. For a system with free boundary conditions, locally
uniform and isotropic contraction implies zero angular velocities. To verify
this, we measured the contributions of radial and angular velocity components;
Fig. (2d) bottom panel, and observed that the contribution of angular velocity
remains very low for almost the entire course of contraction. In Fig. (2c),
for visual clarity, we only show the velocity cones for a circle. However, the
linearity of velocity as a function of distance, and the decrease of the
slopes in time, hold true across all networks with different shapes and sizes.
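The linearity check itself reduces to a one-line fit per timepoint. The sketch below (our illustration with hypothetical inputs, not the original analysis code) extracts the slope $\tau_{\xi}^{-1}$ and the Pearson coefficient $\varrho(r,v_{r})$ from scattered PIV samples:

```python
import numpy as np

def radial_linearity(r, v_r):
    """Fit v_r ~ slope * r to flattened PIV samples at one timepoint.

    Returns (tau_xi, rho): the contraction timescale tau_xi = 1/|slope|
    and the Pearson correlation coefficient, which is close to +/-1
    whenever the radial velocity field is linear in r.
    """
    slope = np.polyfit(r, v_r, 1)[0]     # intercept left free in this sketch
    rho = np.corrcoef(r, v_r)[0, 1]
    return 1.0 / abs(slope), rho
```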
### Hydrodynamic model reveals mechanism of universality of self-similar
contraction
Programming active contractile networks requires quantitative understanding of
the response of the system to the external probes, e.g. light in our
experiments. To understand how self-similar contractions emerge in response to
internally generated stress, we developed and analyzed a coarse-grained
hydrodynamic model of active networks. Our phenomenology draws on the
following experimentally grounded postulates: (1) Isotropy: the initially randomly oriented MTs organize into small asters that are connected to each other via intermediate MTs. The asters are, however, connected in random directions. Therefore, at length scales of several aster sizes, isotropy seems to be a reasonable assumption; see the zoomed panels in Fig. (1b). (2)
Activated motor proteins induce contractile stress. (3) Steric interactions
become progressively stronger as the network contracts, and balance out the
contractile stress at an equilibrium density of the network.
The hydrodynamics of the system is governed by the conservation laws of total
mass and momentum, where total refers to the MT network and the fluid. Mass
conservation demands
$\partial_{t}(\rho_{n}+\rho_{f})=-\nabla\cdot(\rho_{n}\mathbf{v}_{n}+\rho_{f}\mathbf{v}_{f})=0$,
where $\partial_{t}$ denotes the partial time derivative, and $\rho_{n/f}$ are
network/fluid densities. We drop the network’s subscript hereafter. Neglecting
the inertial terms on macroscopic time scales, momentum conservation (force
balance) for the network requires
$\nabla\cdot\bm{\sigma}^{p}=\gamma(\mathbf{v}-\mathbf{v}_{f}).$ Here
$\nabla\cdot\bm{\sigma}^{p}$ is the passive external force exerted by the
surrounding fluid on the network, and $\gamma$ is the effective drag
coefficient. On the other hand, the viscoelastic response of the network to
the total stress reads $\bm{\sigma}^{p}+\bm{\sigma}^{a}=\eta\nabla\mathbf{v}$,
in which $\bm{\sigma}^{a}$ is the active stress, and $\eta$ is the effective
network viscosity. Under the assumption of $|\mathbf{v}_{f}|\ll|\mathbf{v}|$,
we get
$\partial_{t}\,\rho+\nabla\cdot(\rho\mathbf{v})=0,$ (1a)
$\eta\nabla^{2}\mathbf{v}-\gamma\mathbf{v}=\nabla\cdot\bm{\sigma}^{a}.$ (1b)
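To make Eq. (1b) concrete, the toy computation below (our illustration with arbitrary parameter values, not the simulation code of the paper) solves its 1D analogue $\eta v''-\gamma v=\partial_{x}\sigma^{a}$ for a uniform contractile stress confined to a slab. Since $\sigma^{a}$ is constant inside the slab, $v''=0$ there and the velocity grows linearly with distance from the slab centre, anticipating the radial solutions discussed below.

```python
import numpy as np

# 1D toy version of Eq. (1b): eta * v'' - gamma * v = d(sigma_a)/dx.
N, L, eta, gamma = 401, 1.0, 1.0, 0.1                  # illustrative values
x = np.linspace(0.0, L, N)
dx = x[1] - x[0]
sigma_a = np.where(np.abs(x - 0.5) < 0.25, -1.0, 0.0)  # stress inside network
rhs = np.gradient(sigma_a, dx)

# Tridiagonal finite-difference operator; v pinned to 0 at the far boundaries.
A = np.zeros((N, N))
i = np.arange(1, N - 1)
A[i, i - 1] = A[i, i + 1] = eta / dx**2
A[i, i] = -2.0 * eta / dx**2 - gamma
A[0, 0] = A[-1, -1] = 1.0
v = np.linalg.solve(A, rhs)

# Inside the network v is linear in the distance from the centre and points
# inward: v(0.5) ~ 0 and v < 0 for x > 0.5 (contraction).
print(v[N // 2], v[N // 2 + 40])
```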
The dependence of the active stress on the intensity of light is crucial to programming the dynamics of the network. In order to understand this dependence we simulate the dynamics of contractile networks based on the following
assumptions, and assess their validity by comparing the results against
experiments. Active stress is assumed to be isotropic, namely proportional to
the identity matrix $\mathbb{1}$. In 2D we have
$\bm{\sigma}^{a}=\frac{1}{2}\text{tr}({\bm{\sigma}^{a}})\mathbb{1}\equiv\sigma^{a}\mathbb{1}$.
The active stress can be decomposed into two opposing terms: a contractile
term $(\propto\rho)$, and an expansile steric term $(\propto\rho^{2})$.
Strictly speaking, steric interactions are not intrinsically active, but
emerge due to the activity-induced compression. The proportionality constants
are assumed to increase linearly with the density of activated motor proteins,
in turn an increasing function of the light intensity. The competition between
the contractile and the steric interactions vanishes at an equilibrium density
$\rho_{\text{eq.}}$, corresponding to the final size of the network when the
contraction stops $\xi_{\text{eq.}}=\xi(t\to\infty)$.
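For definiteness, one concrete form consistent with these assumptions (our notation; the text fixes only the proportionalities) is
$\sigma^{a}(\rho)=a_{\mathcal{I}}\,\rho\left(1-\rho/\rho_{\text{eq.}}\right),\qquad a_{\mathcal{I}}\propto\mathcal{I}\,,$
which is contractile for $\rho<\rho_{\text{eq.}}$, is opposed by the $\rho^{2}$ steric term, and vanishes at the equilibrium density $\rho_{\text{eq.}}$.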
Simulating the network contraction over a range of convex and concave shapes
we observe self-similar contractions across all geometries. In the activity
dominated regime, the model yields a linear velocity field whose magnitude
scales linearly with the distance from the network’s center of mass.
Specifically, the ratio $\gamma\xi^{2}/\eta$ specifies the relative magnitude
of passive and active forces over the longest contractile mode of contraction.
In the high-activity regime, the model asymptotically reduces to
$\eta\nabla\mathbf{v}=\bm{\sigma}^{a}$, and the velocity field can be solved
given a MT network density. For a network of instantaneous size $\xi(t)$, with
uniform MT density and free boundary conditions, the solutions of Eqs. (1) are
radially symmetric vector fields with constant radial gradient of the form
$|\nabla\mathbf{v}(t)|=|v(r=\xi(t))|/\xi(t)=|\sigma^{a}(t)|/\eta$.
The linearity of the radial vector field
$\mathbf{v}=-\sigma^{a}\,\mathbf{r}/\eta$ persists over the course of
dynamics, when the two Eqs. (1) are solved simultaneously. As such, the
velocity field generates angle-preserving dynamics: given points
$\mathbf{x}_{1}$ and $\mathbf{x}_{2}$ in material (Lagrangian) coordinates of
the network, their relative position vector in the Eulerian description is scaled
by a factor that depends on the time points $t,s$, such that
$\left[\mathbf{r}(\mathbf{x}_{1},t)-\mathbf{r}(\mathbf{x}_{2},t)\right]\propto\left[\mathbf{r}(\mathbf{x}_{1},s)-\mathbf{r}(\mathbf{x}_{2},s)\right]$.
Thus, the linear velocity field generated in the activity-dominated regime,
induces a self-similar, distance scaling map.
Figure 4: Contraction of the network can be programmed via modulating the
pattern of illumination in space and time. (a)/(b) show purely-spatial
modulations of light, where the top segment of the rectangle is illuminated
more/less strongly. Greater intensity of light activates a larger number of
motor proteins and thus generates larger active stresses, which leads to
larger and faster contraction on the brighter side, and causes bending. In (c)
the pattern of illumination varies in time to interpolate between the two
static patterns of (a) and (b). Using this dynamic modulation we manage to
change the bending direction as the network contracts.
A linear velocity field with a density-dependent gradient leads to universal dynamics. The density dependence of the velocity field appears through the
stress tensor which determines the instantaneous slope. Active stress
$\sigma^{a}$ is proportional to (a) $\rho-\rho_{\text{eq.}}$, and (b) activity
$a_{\mathcal{I}}$, determined by the concentration of activated motor
proteins, assumed to be proportional to the light intensity $\mathcal{I}$.
Together with the continuity equation, our model suggests a universality in the
velocity field across different shapes and initial conditions. For linear
contractions, the density of the MT network remains uniform during the initial
phases of the contraction. From the continuity equation the density of the network can be expressed as a function of the contracted area as
$\rho(t)\xi^{2}(t)=\rho_{0}\xi_{0}^{2}=\rho_{\text{eq.}}\xi_{\text{eq.}}^{2}$.
Here $t_{0}$ is a reference time at which the density equals $\rho_{0}$.
Combined with momentum conservation which determines the velocity field, we
obtain:
$\rho(t)=\rho_{\text{eq.}}+(\rho_{0}-\rho_{\text{eq.}})\exp(-2(t-t_{0})/\tau_{c})$,
where $\tau_{c}$ is the contraction timescale and can be expressed in terms of
model parameters $\tau_{c}=\eta/a_{\mathcal{I}}$; i.e. inversely proportional
to activity. Correspondingly the linear size of the network can be expressed
as
$\xi(t)=\xi_{\text{eq.}}+(\xi_{0}-\xi_{\text{eq.}})\exp(-(t-t_{0})/\tau_{c})$.
The normalized fractional contraction thus follows an exponential decay of the
form $\lambda(t)-\lambda_{\text{eq.}}=\exp(-(t-t_{0})/\tau_{c})$.
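As a sanity check, the exponential law can be reproduced by integrating the model numerically. The sketch below (illustrative constants, with all normalizations absorbed into $a_{\mathcal{I}}$) evolves $d\xi/dt=-\sigma^{a}(\rho)\,\xi/\eta$ with $\rho=\rho_{0}\xi_{0}^{2}/\xi^{2}$ and the active stress of the form introduced above; the fractional contraction $\lambda=\xi/\xi_{0}$ then falls on the same curve for every initial size:

```python
import numpy as np
from scipy.integrate import solve_ivp

eta, a_I, rho_eq = 1.0, 1.0, 1.0                     # illustrative constants

def sigma_a(rho):
    # Active stress vanishing at the equilibrium density.
    return a_I * rho * (1.0 - rho / rho_eq)

def contract(xi0, rho0=0.5, t_end=12.0, n=200):
    """Integrate d(xi)/dt = -sigma_a(rho) * xi / eta, rho = rho0*xi0^2/xi^2."""
    rhs = lambda t, y: [-sigma_a(rho0 * xi0**2 / y[0]**2) * y[0] / eta]
    sol = solve_ivp(rhs, (0.0, t_end), [xi0], dense_output=True)
    t = np.linspace(0.0, t_end, n)
    return t, sol.sol(t)[0]

# lambda(t) = xi/xi0 collapses across initial sizes (same rho0 everywhere).
for xi0 in (460.0, 600.0, 750.0, 900.0):             # um, as in the experiments
    t, xi = contract(xi0)
    print(xi0, (xi / xi0)[-1])                       # identical for every xi0
```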
The results of the simulations for all shapes reproduce the same dynamics as
observed in experiments, specifically for $\mathbf{v}(r,t)$ and $\lambda(t)$.
The velocity fields extracted by PIV from contracting networks and those obtained from simulations are both linear and radial across shapes and over time; see Fig. (3a) for a qualitative comparison. Consistent with experiments, we observe a linear velocity field over the complete contractile dynamics for all shapes analyzed, and the divergence/slope of the velocity field decreases with decreasing size, or equivalently increasing density; Fig. (3b,c). In our
experiments, we held the initial MT density constant across networks of different sizes and shapes. The fractional contractions for experiments on several shapes, as well as those obtained from the theory, as plotted in Fig. (3e), collapse onto an exponential curve with decay time $\tau_{c}$, which is inversely proportional to the activity. Given that activity is an increasing function of the light intensity, we expect the contraction to speed up upon increasing the intensity.
### Programming deformation through spatial and temporal modulation of
activity
The hydrodynamic model suggests a simple strategy for programming the
mechanical properties of MT networks through spatial-temporal modulation of
activity. In our hydrodynamic model, the divergence/slope of the contractile
velocity field depends on MT density and the activity $a_{\mathcal{I}}$, which
sets the magnitude of stress and thus the contraction timescale. Activity can
be modulated experimentally in time and space with light, providing a
mechanism to modulate the mechanical behavior of the networks. Spatially-
uniform illumination induces uniform activity and isotropic stress which leads
to shape-preserving contraction. However, modulation of light pattern in space
can generate a nonuniform stress tensor which leads to network regions that
contract at different rates. By modulating light levels we modulate the
relative local contraction rates which no longer preserve the shape of the
network.
Specifically, we generated networks where spatially distinct regions
experience two different light intensities and, thus, generate two different
contractile fields in close proximity. Differing contractile forces along the
boundary between the two networks lead to deformation and bending. Thus, by
modulating relative activity, we can induce deformation along the boundary of
a network and program novel mechanical behaviors that deviate from the self-
similar contractions observed in networks at uniform activity.
We created a series of light patterns that modulate the relative activity to
induce bending deformations. For example, we created a hinge pattern where
distinct contractile networks are separated by a joint region, and in the
joint region differences in activity lead to relative differences in
contractile velocity fields and network bending; Fig. (4a). In a complementary hinge pattern, we induce bending along the opposite direction by switching the orientation of the joint; see Fig. (4b).
In addition to generating static deformations, spatial and temporal modulation
of light patterns allow the generation of dynamical contraction and
deformation through temporal modulation of relative activity. In particular,
we temporally modulated the relative light intensity in the two regions of the
hinge according to the following protocol. First we shine a light pattern that
induces downward bending. The light pattern is subsequently swapped to the
complementary pattern at around $t=100$s after the initial illumination. The
differential intensities lead to reversal of the bending direction. The rates
of the bending and reversal depend on the relative sizes of the two regions of
illumination, relative light intensities, and the time at which swapping to
complementary pattern takes place. Here we chose a relatively straightforward protocol with the same intensities and MT densities as in the previously discussed case of self-similar contractions.
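A minimal sketch of such a time-dependent illumination protocol is given below (hypothetical intensity values and geometry; the experimental patterns, intensities, and the joint region are more elaborate):

```python
import numpy as np

def hinge_pattern(shape, t, swap_time=100.0, hi=1.0, lo=0.6):
    """Two-region intensity mask for a hinge-like experiment: the two halves
    of a rectangular region receive different intensities, and at swap_time
    the pattern flips to its complement, reversing the bending direction."""
    pattern = np.zeros(shape)
    top, bottom = (hi, lo) if t < swap_time else (lo, hi)
    pattern[: shape[0] // 2, :] = top
    pattern[shape[0] // 2 :, :] = bottom
    return pattern
```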
Broadly, these experiments show that spatial and temporal modulation of light intensity allows us to induce programmed patterns of mechanical deformation into active MT networks. In this way, the natural shape-preservation property of active MT networks can be simply modulated through relative differences in activity in distinct parts of an induced network. This controllability of MT networks allows us to program units of networks in which different regions possess engineered mechanical properties and can perform work in a programmed and predetermined manner through internal couplings.
### Discussion
Active networks are ubiquitous in biology, and their non-equilibrium
properties are poorly understood. Our work reveals signatures of activity in the mechanical properties at macroscopic scales. The self-similar contraction is intrinsically related to the non-equilibrium nature of the system, which preserves a geometric memory, unlike in passive systems where equilibration increases entropy and erases the memory of the initial state. This memory-preservation property makes the behavior of the system more controllable without the need to tune the microscopic degrees of freedom.
Previous works analyzed active contractions in networks of MTs and actin in
cell extracts, where the contracting network is embedded in a viscous
solution, thus subjected to drag forces. Our optical control strategies allow
us to isolate the networks from passive boundaries while using light to
modulate the shape and activity. Further, in conventional materials altering
mechanical properties requires changing the microscopic structure of the
material, for example, through doping. These changes are generically
irreversible (plastic), and are hard to be modulated at the microscopic level.
In our systems, the degree of linking of the network and the active stresses
can be tuned in space and time, enabling a separate strategy for the
programming and control over material mechanics. Activity induced deformations
provide a strategy for engineering novel behaviors at micron length scales.
###### Acknowledgements.
The authors are grateful to Inna-Marie Strazhnik for making illustrations, and
to John Brady, Dominik Schildknecht and Enrique Amaya Perez for useful
discussions. MT was supported by Packard Foundation, Rosen Center for
Bioengineering, and Heritage Medical Research Institute. RP was supported by
NIH grant number 1R35 GM118043-01. RP and MT would like to thank Foundational
Questions Institute and Fetzer Franklin Fund through FQXi 1816 for funding the
research.
## Appendix A Instrumentation and Imaging
### A.1 Active Matter System and Sample Chambers
The system consists of stabilized microtubules, kinesin motors (constructed with a light-induced hetero-dimerization system) and an energy mix. All ingredients and the buffer preparation protocol are documented in a previous paper by Ross et al. [32], and we follow the exact same procedure in our study. The sample chambers are made by sandwiching pre-cut Parafilm M between coated slides and coverslips [32, 33]. The measured depth of the chamber is approximately 80 $\mu$m.
### A.2 Microscope Instrumentation
The experiments are conducted on an epifluorescence microscope (Nikon Ti2) with 10X magnification (Nikon Plan Apo $\lambda$ 10X). We customize the system by adding a programmable digital light projector (EKB Technologies DLP LightCrafter E4500 MKII Fiber Couple), which is used to project the light pattern activating the dimerization of kinesin motors. The DLP chip is illuminated by a four-channel LED (ThorLabs LED4D067) at a wavelength of 470 nm. Fluorescently labeled microtubules are illuminated at 660 nm and imaged with a digital camera (Hamamatsu ORCA-Flash 4.0). The system is controlled with Micro-Manager on a PC.
### A.3 Control Strategy for Isolating the Contracting Network
When the light patterns are constantly projected onto the reaction sample, the
contraction is accompanied by the formation of canals at the sharp corners of
the pattern (e.g. vertices of polygons). These canals pave paths for the background solution—containing floating MTs—to pour into the region of illumination. These MTs get cross-linked upon entering this region, and form a steady state of flow, hence coupling the network to the background fluid. To isolate the cross-linked network from the ambient solution, we decrease the
size of the projected pattern to prevent new MT-solution mix from flowing in.
As shown in Fig. (1a), we first project a pattern at full size to initiate the network cross-linking. After 80 s, a shrunken pattern is projected, with the same geometry and light intensity, but with 70% of the linear size of the initial pattern. After this initial phase, the sample is illuminated every 10 s with 30 ms pulses, during which the light pattern is gradually decreased to 50% of the original linear size over the course of contraction, which stops at 5 min.
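For concreteness, this schedule can be summarized as a scale factor versus time; in the sketch below the linear ramp between 70% and 50% is our assumption, since the text only states that the size is decreased gradually:

```python
def pattern_scale(t):
    """Projected pattern size (fraction of the initial linear size) at time t
    in seconds: full size until 80 s, then 70%, ramping down to 50% by the
    end of contraction at 5 min."""
    if t < 80.0:
        return 1.0
    if t >= 300.0:
        return 0.5
    return 0.7 - 0.2 * (t - 80.0) / (300.0 - 80.0)
```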
## Appendix B Image Processing
### B.1 Segmentation and Detection of the Network
The time-lapse images of the contracting network are segmented and isolated from the background solution utilizing a few built-in functions of the MATLAB Image Processing Toolbox. During the contraction (phase II) the boundaries of the network are well separated from the solution, which allows for segmentation. The steps are as follows: we first subtract the local background intensity using the imflatfield function over regions of size $\sim 300\,\mu$m. This is required to remove artificial shadows. Next we use the watershed algorithm to separate the network from the background fluid.
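An equivalent pipeline can be sketched in Python with scikit-image (a simplified analogue of the MATLAB steps above, substituting a Gaussian flat-field and Otsu threshold for illustration, not the original code):

```python
import numpy as np
from skimage import filters, measure, morphology

def segment_network(frame, bg_sigma=50):
    """Return a boolean mask of the network in a single fluorescence frame.

    Assumes one bright, dense network on a darker background solution.
    """
    # Flat-field correction: divide out a heavily smoothed background,
    # playing the role of imflatfield over ~300 um windows.
    background = filters.gaussian(frame.astype(float), sigma=bg_sigma)
    flat = frame / (background + 1e-9)
    # Threshold separates the dense network from the dilute solution.
    mask = flat > filters.threshold_otsu(flat)
    mask = morphology.remove_small_holes(mask, area_threshold=256)
    # Keep the largest connected component (the network itself).
    labels = measure.label(mask)
    if labels.max() == 0:
        return mask
    largest = np.argmax(np.bincount(labels.ravel())[1:]) + 1
    return labels == largest
```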
### B.2 Measuring Velocity and Density Fields
Velocity field is extracted at different time points using the built-in MATLAB
function imregtform. This function estimates the displacement field
$\mathbf{D}_{1\to 2}$ that warps the images at times $t_{1}$ onto the image at
$t_{2}$. In the Lagrangian picture for a point labeled by $\mathbf{p}$, we get
$\mathbf{r}(\mathbf{p},t_{2})=\mathbf{r}(\mathbf{p},t_{1})+\mathbf{D}_{1\to
2}(\mathbf{p})$. The displacement field is then converted to our units using
the pixel value of $0.65\mu$m. We define the velocity field in terms of
$\overline{\mathbf{r}}$ and $\overline{t}$, where
$\overline{\mathbf{r}}=\frac{1}{2}\left(\mathbf{r}(\mathbf{p},t_{2})+\mathbf{r}(\mathbf{p},t_{1})\right),\qquad\overline{t}=\frac{1}{2}\left(t_{1}+t_{2}\right).$ (2)
The velocity field reads
$\mathbf{v}(\overline{\mathbf{r}},\overline{t})=\mathbf{D}_{1\to 2}(\mathbf{p})\times 0.65/(t_{2}-t_{1})\,.$ (3)
In order to measure the velocity and density as a function of distance from
center of mass (CoM), the center of mass of the network is found at each time
point. Under the assumption that the local density of the network
$\rho(\mathbf{r})$, is proportional to the intensity of light captured in
gray-scale images $\mathcal{I}(\mathbf{r})$, the center of mass is obtained by
$\mathbf{R}(t)=\frac{\int_{\text{net.}}d^{2}r\;\mathbf{r}\,\mathcal{I}(\mathbf{r})}{\int_{\text{net.}}d^{2}r\;\mathcal{I}(\mathbf{r})}\,,$ (4)
where $\int_{\text{net.}}d^{2}r$ integrates over the area of the network.
For later time points, when the intensity is saturated and hence no longer proportional to density, an alternative method is to use the velocity field of the network to estimate the position (and velocity) of the CoM. From Eq. (4), we have:
$\mathbf{V}_{\text{CoM}}(t)=M^{-1}\,\int_{\text{net.}}d^{2}r\;\mathbf{v}\,\rho(\mathbf{r})\,.$ (5)
Here $M$ is the total mass of the network, assumed to be conserved during the course of contraction and thus calculable from earlier time points, when the density can safely be assumed to be proportional to intensity. Redefining the
position vector and velocities relative to those of the CoM we get
$\mathbf{r}\equiv\mathbf{r}-\mathbf{R}$; and
$\mathbf{v}(\mathbf{r},t)\equiv\mathbf{v}(\mathbf{r},t)-\mathbf{v}(\mathbf{R},t)$.
Note that although on average the CoM is stationary, on short timescales it is subject to small and fast random fluctuations due to the noisy background flows. The redefinition of the velocity field ensures
$\int_{\text{net.}}d^{2}r\,\mathbf{v}(\mathbf{r})\,\mathcal{I}(\mathbf{r})=0$.
Therefore the CoM is now determined as the point at which the relative
velocity vanishes. To find the velocity as a function of $|\mathbf{r}|$ (from the CoM), the magnitude of the relative velocity is averaged over all points at a given radius.
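The centre-of-mass and radial-averaging steps translate directly into array operations; the sketch below is a hypothetical implementation of Eq. (4) and the annular averaging, not the original analysis code:

```python
import numpy as np

def center_of_mass(intensity):
    """Eq. (4): intensity-weighted centre of mass (row, column) of the image."""
    ys, xs = np.indices(intensity.shape)
    total = intensity.sum()
    return (ys * intensity).sum() / total, (xs * intensity).sum() / total

def radial_velocity_profile(vy, vx, intensity, nbins=30, px=0.65):
    """Average |v_r| over annuli around the CoM.

    vy, vx: per-pixel velocity components (e.g. from the displacement field);
    px: micrometres per pixel (0.65, as quoted in the text).
    """
    cy, cx = center_of_mass(intensity)
    ys, xs = np.indices(intensity.shape)
    ry, rx = (ys - cy) * px, (xs - cx) * px
    r = np.hypot(ry, rx)
    v_r = (vy * ry + vx * rx) / (r + 1e-9)            # radial component of v
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    vr_flat = np.abs(v_r.ravel())
    profile = np.array([vr_flat[idx == i].mean() if np.any(idx == i) else np.nan
                        for i in range(nbins)])
    return 0.5 * (edges[1:] + edges[:-1]), profile
```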
## References
* [1] M Cristina Marchetti, Jean-François Joanny, Sriram Ramaswamy, Tanniemola B Liverpool, Jacques Prost, Madan Rao, and R Aditi Simha. Hydrodynamics of soft active matter. Reviews of Modern Physics, 85(3):1143, 2013.
* [2] Sriram Ramaswamy. The mechanics and statistics of active matter. Annual Review of Condensed Matter Physics, 2010.
* [3] Clemens Bechinger, Roberto Di Leonardo, Hartmut Löwen, Charles Reichhardt, Giorgio Volpe, and Giovanni Volpe. Active particles in complex and crowded environments. Reviews of Modern Physics, 88(4):045006, 2016.
* [4] Federica Burla, Yuval Mulla, Bart E Vos, Anders Aufderhorst-Roberts, and Gijsje H Koenderink. From mechanical resilience to active material properties in biopolymer networks. Nature Reviews Physics, 1(4):249–263, 2019.
* [5] Edwin A Peraza-Hernandez, Darren J Hartl, Richard J Malak Jr, and Dimitris C Lagoudas. Origami-inspired active structures: a synthesis and review. Smart Materials and Structures, 23(9):094001, 2014.
* [6] Matthew B Pinson, Menachem Stern, Alexandra Carruthers Ferrero, Thomas A Witten, Elizabeth Chen, and Arvind Murugan. Self-folding origami at any energy scale. Nature communications, 8(1):1–8, 2017.
* [7] Sebastian Fürthauer, Daniel J Needleman, and Michael J Shelley. A design framework for actively crosslinked filament networks. arXiv preprint arXiv:2009.09006, 2020.
* [8] FJ Nedelec, Thomas Surrey, Anthony C Maggs, and Stanislas Leibler. Self-organization of microtubules and motors. Nature, 389(6648):305, 1997.
* [9] Thomas Surrey, François Nédélec, Stanislas Leibler, and Eric Karsenti. Physical properties determining self-organization of motors and microtubules. Science, 292(5519):1167–1171, 2001.
* [10] Tim Sanchez, Daniel TN Chen, Stephen J DeCamp, Michael Heymann, and Zvonimir Dogic. Spontaneous motion in hierarchically assembled active matter. Nature, 491(7424):431–434, 2012.
* [11] Peter J Foster, Sebastian Fürthauer, Michael J Shelley, and Daniel J Needleman. Active contraction of microtubule networks. Elife, 4:e10837, 2015.
* [12] Daisuke Mizuno, Catherine Tardin, Christoph F Schmidt, and Frederik C MacKintosh. Nonequilibrium mechanics of active cytoskeletal networks. Science, 315(5810):370–373, 2007.
* [13] Todd Thoresen, Martin Lenz, and Margaret L Gardel. Reconstitution of contractile actomyosin bundles. Biophysical journal, 100(11):2698–2705, 2011.
* [14] Simone Köhler, Volker Schaller, and Andreas R Bausch. Structure formation in active networks. Nature materials, 10(6):462–468, 2011.
* [15] Simone Köhler and Andreas R Bausch. Contraction mechanisms in composite active actin networks. PloS one, 7(7):e39869, 2012.
* [16] Matthias Schuppler, Felix C Keber, Martin Kröger, and Andreas R Bausch. Boundaries steer the contraction of active gels. Nature communications, 7(1):1–10, 2016.
* [17] Peter J Foster, Wen Yan, Sebastian Fürthauer, Michael J Shelley, and Daniel J Needleman. Connecting macroscopic dynamics with microscopic properties in active microtubule network contraction. New Journal of Physics, 19(12):125011, 2017.
* [18] Kazuya Suzuki, Makito Miyazaki, Jun Takagi, Takeshi Itabashi, and Shin’ichi Ishiwata. Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow. Proceedings of the National Academy of Sciences, 114(11):2922–2927, 2017.
* [19] Ha Youn Lee and Mehran Kardar. Macroscopic equations for pattern formation in mixtures of microtubules and molecular motors. Physical Review E, 64(5):056113, 2001.
* [20] Tanniemola B Liverpool and M Cristina Marchetti. Instabilities of isotropic solutions of active polar filaments. Physical review letters, 90(13):138102, 2003.
* [21] Karsten Kruse, Jean-François Joanny, Frank Jülicher, Jacques Prost, and Ken Sekimoto. Asters, vortices, and rotating spirals in active gels of polar filaments. Physical review letters, 92(7):078101, 2004.
* [22] Igor S Aranson and Lev S Tsimring. Pattern formation of microtubules and motors: Inelastic interaction of polar rods. Physical Review E, 71(5):050901, 2005.
* [23] Tanniemola B Liverpool and M Cristina Marchetti. Rheology of active filament solutions. Physical review letters, 97(26):268101, 2006.
* [24] Frank Juelicher, Karsten Kruse, Jacques Prost, and J-F Joanny. Active behavior of the cytoskeleton. Physics reports, 449(1-3):3–28, 2007.
* [25] Fred C MacKintosh and Alex J Levine. Nonequilibrium mechanics and dynamics of motor-activated gels. Physical review letters, 100(1):018104, 2008.
* [26] Gijsje H Koenderink, Zvonimir Dogic, Fumihiko Nakamura, Poul M Bendix, Frederick C MacKintosh, John H Hartwig, Thomas P Stossel, and David A Weitz. An active biopolymer network controlled by molecular motors. Proceedings of the National Academy of Sciences, 106(36):15192–15197, 2009.
* [27] Tong Gao, Robert Blackwell, Matthew A Glaser, Meredith D Betterton, and Michael J Shelley. Multiscale polar theory of microtubule and motor-protein assemblies. Physical review letters, 114(4):048101, 2015.
* [28] Pierre Ronceray, Chase P Broedersz, and Martin Lenz. Fiber networks amplify active stress. Proceedings of the national academy of sciences, 113(11):2827–2832, 2016.
* [29] J Gladrow, N Fakhri, FC MacKintosh, CF Schmidt, and CP Broedersz. Broken detailed balance of filament dynamics in active networks. Physical review letters, 116(24):248301, 2016.
* [30] Sebastian Fürthauer, Bezia Lemma, Peter J Foster, Stephanie C Ems-McClung, Che-Hang Yu, Claire E Walczak, Zvonimir Dogic, Daniel J Needleman, and Michael J Shelley. Self-straining of actively crosslinked microtubule networks. Nature Physics, 15(12):1295–1300, 2019.
* [31] Gurkan Guntas, Ryan A Hallett, Seth P Zimmerman, Tishan Williams, Hayretin Yumerefendi, James E Bear, and Brian Kuhlman. Engineering an improved light-induced dimer (ilid) for controlling the localization and activity of signaling proteins. Proceedings of the National Academy of Sciences, 112(1):112–117, 2015.
* [32] Tyler D Ross, Heun Jin Lee, Zijie Qu, Rachel A Banks, Rob Phillips, and Matt Thomson. Controlling organization and forces in active matter through optically defined boundaries. Nature, 572(7768):224–229, 2019.
* [33] A. W. C. Lau, A. Prasad, and Z. Dogic. Condensation of isolated semi-flexible filaments driven by depletion interactions. EPL (Europhysics Letters), 87(4):48006, 2009.
# Momentum dependent mean-fields of (anti)hyperons
T. Gaitanos, A. Chorozidou Department of Theoretical Physics, Aristotle
University of Thessaloniki, GR-54124 Thessaloniki, Greece email:
<EMAIL_ADDRESS>
###### Abstract
We investigate the in-medium properties of hyperons and anti-hyperons in the
framework of the Non-Linear Derivative (NLD) model. We focus on the momentum
dependence of in-medium strangeness optical potentials. The NLD model is based
on the simplicity of the well-established Relativistic Mean-Field (RMF)
approximation, but it incorporates an explicit momentum dependence on a field-
theoretical level. The extension of the NLD model to the (anti)baryon-octet is
formulated in the spirit of SU(6) and G-parity arguments. It is shown that
with an appropriate choice of momentum cut-offs the $\Lambda$, $\Sigma$ and
$\Xi$ optical potentials are consistent with recent chiral effective field theory studies and Lattice-QCD calculations over a wide momentum region. In addition, we present NLD predictions for the in-medium momentum dependence of $\overline{\Lambda}$-, $\overline{\Sigma}$- and $\overline{\Xi}$-hyperons. This work is important for future experimental studies such as CBM and PANDA at the Facility for Antiproton and Ion Research (FAIR), and it is relevant for nuclear astrophysics as well.
###### keywords:
Equations of state of hadronic matter, optical potential, in-medium hyperon
potentials.
## 1 Introduction
Astrophysical observations of particularly massive neutron stars [1, 2, 3] have driven the nuclear physics and astrophysics communities to detailed investigations of the nuclear equation of state (EoS) under conditions far beyond ordinary matter [4]. On one hand, theoretical and experimental studies of heavy-ion collisions over the last few decades concluded that the high-density EoS softens, in agreement with phenomenological and microscopic models [5, 6, 7]. On the other hand, the observations of two-solar-mass pulsars [1, 2, 3], together with additional constraints on the high-density limit of the speed of sound [8], gave some controversial insights into the EoS of compressed baryonic matter. They constrain the maximum neutron star mass and exclude soft-type hadronic EoS's at high baryon densities.
Compressed baryonic matter may consist not only of nucleons. It can include
fractions of heavier baryons, when their production is energetically allowed.
These are the hyperons $\Lambda,~{}\Sigma$ and $\Xi$ as a part of the
irreducible representations of SU(3). While the nucleon-nucleon (NN)
interaction is very well known, the hyperon interactions are still not fully
understood. Indeed, there are many experimental data for NN-scattering in free
space and inside hadronic media (finite nuclei, heavy-ion collisions, hadron-
induced reactions) allowing a precise determination of the NN-interaction.
Concerning the strangeness sector (hyperon-nucleon (YN) or hyperon-hyperon
(YY) interactions), there exist phenomenological and microscopic models with
predictions for the in-medium hyperon properties at matter densities close to
saturation and at higher densities. However, the experimental access to the
strangeness sector is still scarce. A common prediction of theoretical models
is a considerable softening of the hadronic EoS at high densities upon adding more degrees of freedom, such as strange particles, to the system. The inclusion
of hyperons into nuclear approaches made many of them, which were successfully
applied to nuclear systems (nuclear matter, finite nuclei, nuclear reactions),
incompatible with the astrophysical observations of two-solar mass pulsars [1,
2]. This is the so-called hyperon puzzle [9, 10]. This puzzle has recently attracted renewed theoretical attention following the observation of another quite massive neutron star [3]. A comprehensive theoretical view concerning the microscopic
descriptions of in-medium properties of the baryon-octet is given in Ref.
[11]. There also exist theoretical reviews based on the RMF approximation; see for instance Refs. [12, 13, 14].
It is thus of great interest to address the in-medium behaviour of hyperons in
nuclear matter, as we do in this work. We use an alternative RMF approach based on the fact that compressed matter consists of particles with high relative momenta. Therefore, not only the density dependence but also the momentum dependence of the in-medium interactions is important. The reason for doing so is that conventional RMF models do not explain the empirical saturation of the in-medium interactions of high-momentum (anti)nucleons; in terms of SU(6) this issue appears for high-momentum (anti)hyperons too. Our approach is the Non-Linear Derivative (NLD) model [15]. It retains the basic RMF
Lagrangian formulation, but it includes higher-order derivatives in the NN-
interaction Lagrangians. It has been demonstrated that this Ansatz corrects
the high-momentum behaviour of the interaction and makes the EoS softer at densities just above saturation, while at the same time reproducing the two-solar-mass pulsars at densities far beyond saturation [15]. Here we extend the NLD approach by including strangeness in nuclear matter and discuss the momentum dependence of the in-medium hyperon potentials.
## 2 The NLD Model for the baryon octet
In this section we briefly introduce the non-linear derivative (NLD) model and
extend it to the baryon octet. A detailed description of the NLD model for
nucleons can be found in Ref. [15]. The NLD-Lagrangian is based on the
conventional Relativistic Hadro-Dynamics (RHD) [16] and it reads as
${\cal L}=\frac{1}{2}\sum_{B}\left[\overline{\Psi}_{B}\gamma_{\mu}i\overrightarrow{\partial}^{\mu}\Psi_{B}-\overline{\Psi}_{B}i\overleftarrow{\partial}^{\mu}\gamma_{\mu}\Psi_{B}\right]-\sum_{B}m_{B}\overline{\Psi}_{B}\Psi_{B}$
$-\frac{1}{2}m^{2}_{\sigma}\sigma^{2}+\frac{1}{2}\partial_{\mu}\sigma\,\partial^{\mu}\sigma-U(\sigma)+\frac{1}{2}m^{2}_{\omega}\omega_{\mu}\omega^{\mu}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}$
$+\frac{1}{2}m^{2}_{\rho}\vec{\rho}\,_{\mu}\vec{\rho}\,^{\mu}-\frac{1}{4}\vec{G}\,_{\mu\nu}\vec{G}\,^{\mu\nu}-\frac{1}{2}m^{2}_{\delta}\vec{\delta}\,^{2}+\frac{1}{2}\partial_{\mu}\vec{\delta}\,\partial^{\mu}\vec{\delta}$
$+{\cal L}_{int}^{\sigma}+{\cal L}_{int}^{\omega}+{\cal L}_{int}^{\rho}+{\cal L}_{int}^{\delta}\,.$ (1)
The sum over $B$ runs over the baryonic octet
$\Psi_{B}=(\Psi_{N},\Psi_{\Lambda},\Psi_{\Sigma},\Psi_{\Xi})^{T}$ (2)
with
$\Psi_{N}=(\psi_{p},\psi_{n})^{T},\qquad\Psi_{\Lambda}=\psi_{\Lambda},$ (3)
$\Psi_{\Sigma}=(\psi_{\Sigma^{+}},\psi_{\Sigma^{0}},\psi_{\Sigma^{-}})^{T},\qquad\Psi_{\Xi}=(\psi_{\Xi^{0}},\psi_{\Xi^{-}})^{T}$ (4)
for the isospin-doublets $\Psi_{N}$ and $\Psi_{\Xi}$, isospin-triplet
$\Psi_{\Sigma}$ and the neutral $\Psi_{\Lambda}$. The interactions between the baryon fields are described by the exchange of meson fields. These are the
scalar $\sigma$ and vector $\omega^{\mu}$ mesons in the isoscalar channel, as
well as the scalar $\vec{\delta}\,$ and vector $\vec{\rho}\,^{\mu}$ mesons in
the isovector channel. Their corresponding Lagrangian densities are of the
Klein-Gordon and Proca types, respectively. The term
$U(\sigma)=\frac{1}{3}b\sigma^{3}+\frac{1}{4}c\sigma^{4}$ contains the usual
selfinteractions of the $\sigma$ meson. The notations for the masses of fields
in Eq. (1) are obvious. The field strength tensors are defined as
$F^{\mu\nu}=\partial^{\mu}\omega^{\nu}-\partial^{\nu}\omega^{\mu}$,
$\vec{G}\,^{\mu\nu}=\partial^{\mu}\vec{\rho}\,^{\nu}-\partial^{\nu}\vec{\rho}\,^{\mu}$
for the isoscalar and isovector fields, respectively. In the following we restrict ourselves to a minimal set of interaction degrees of freedom. In the isoscalar sector, the $\sigma$- and $\omega$-fields are obviously considered. In the isovector channel, we keep the vector $\rho$-meson field and neglect the scalar $\delta$-field.
The NLD interaction Lagrangians contain the conventional RHD combinations between the bilinear baryon fields and the linear meson fields; however, they are extended by the inclusion of non-linear derivative operators
$\overrightarrow{{\cal D}},\overleftarrow{{\cal D}}$ for each baryon species
$B$:
${\cal L}_{int}^{\sigma}=\sum_{B}\frac{g_{\sigma B}}{2}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\Psi_{B}\,\sigma+\sigma\,\overline{\Psi}_{B}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (5)
${\cal L}_{int}^{\omega}=-\sum_{B}\frac{g_{\omega B}}{2}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\gamma^{\mu}\Psi_{B}\,\omega_{\mu}+\omega_{\mu}\,\overline{\Psi}_{B}\gamma^{\mu}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (6)
${\cal L}_{int}^{\rho}=-\sum_{B}\frac{g_{\rho B}}{2}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\gamma^{\mu}\vec{\tau}\Psi_{B}\,\vec{\rho}\,_{\mu}+\vec{\rho}\,_{\mu}\,\overline{\Psi}_{B}\vec{\tau}\gamma^{\mu}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (7)
for the isoscalar-scalar, isoscalar-vector and isovector-vector vertices,
respectively. The arrows on the non-linear operator ${\cal D}_{B}$ indicate
the direction of their action. The only difference with respect to the
conventional RHD Lagrangian is the presence of additional operator functions
$\overrightarrow{{\cal D}}_{B},~{}\overleftarrow{{\cal D}}_{B}$. As we will see, they regulate the high-momentum component of hyperons; for this reason we will also refer to them as regulators. The operator functions (or regulators) $\overrightarrow{{\cal D}}_{B},~{}\overleftarrow{{\cal D}}_{B}$ are hermitian, generic functions of the partial derivative operator. That is,
$\overrightarrow{{\cal D}}_{B}:={\cal D}\left(\overrightarrow{\xi}_{B}\right)$
and $\overleftarrow{{\cal D}}_{B}:={\cal
D}\left(\overleftarrow{\xi}_{B}\right)$ with the operator arguments
$\overrightarrow{\xi}_{B}=-\zeta_{B}^{\alpha}i\overrightarrow{\partial}_{\alpha},~{}\overleftarrow{\xi}_{B}=i\overleftarrow{\partial}_{\alpha}\zeta_{B}^{\alpha}$.
The four vector $\zeta_{B}^{\mu}=v^{\mu}/\Lambda_{B}$ contains the cut-off
$\Lambda_{B}$ and $v^{\mu}$ is an auxiliary vector. These regulators are
assumed to act on the baryon spinors $\Psi_{B}$ and $\overline{\Psi}_{B}$ by a
formal Taylor expansion with respect to the operator argument. The functional
form of the regulators is constructed such that in the limit
$\Lambda_{B}\to\infty$ the original RHD Lagrangians are recovered, that is,
$\overrightarrow{{\cal D}}_{B}=\overleftarrow{{\cal D}}^{\dagger}_{B}\to 1$.
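For concreteness, acting with this Taylor series on a plane wave replaces the operator argument by a c-number (a worked step following the definitions above; the functional form of ${\cal D}$ is left unspecified here):
$\overrightarrow{{\cal D}}_{B}\,e^{-ip^{\mu}x_{\mu}}=\sum_{n=0}^{\infty}\frac{{\cal D}^{(n)}(0)}{n!}\left(-\zeta_{B}^{\alpha}\,i\overrightarrow{\partial}_{\alpha}\right)^{n}e^{-ip^{\mu}x_{\mu}}={\cal D}\!\left(-\frac{v_{\alpha}p^{\alpha}}{\Lambda_{B}}\right)e^{-ip^{\mu}x_{\mu}}\,,$
since $i\partial_{\alpha}\,e^{-ip^{\mu}x_{\mu}}=p_{\alpha}\,e^{-ip^{\mu}x_{\mu}}$. This is precisely the scalar argument $\xi_{B}$ that appears in the RMF application below.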
The presence of higher-order partial derivatives in the Lagrangian mediates a modification of the field-theoretical prescriptions. As discussed in detail in the original work of Ref. [15], the generalized Euler-Lagrange equations as well as the Noether currents contain additional infinite series of higher-order partial derivative contributions. However, the main advantage of the NLD approach lies in the fact that these terms can be resummed to compact expressions.
From the generalized Euler-Lagrange formalism we obtain the equations of
motion for the degrees of freedom in the NLD model. The meson field equations
of motion read
$\partial_{\alpha}\partial^{\alpha}\sigma+m_{\sigma}^{2}\sigma+\frac{\partial U}{\partial\sigma}=\frac{1}{2}\sum_{B}g_{\sigma B}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\Psi_{B}+\overline{\Psi}_{B}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (8)
$\partial_{\mu}F^{\mu\nu}+m_{\omega}^{2}\omega^{\nu}=\frac{1}{2}\sum_{B}g_{\omega B}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\gamma^{\nu}\Psi_{B}+\overline{\Psi}_{B}\gamma^{\nu}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (9)
$\partial_{\mu}\vec{G}\,^{\mu\nu}+m_{\rho}^{2}\vec{\rho}\,^{\nu}=\frac{1}{2}\sum_{B}g_{\rho B}\left[\overline{\Psi}_{B}\,\overleftarrow{{\cal D}}_{B}\gamma^{\nu}\vec{\tau}\Psi_{B}+\overline{\Psi}_{B}\vec{\tau}\gamma^{\nu}\,\overrightarrow{{\cal D}}_{B}\Psi_{B}\right]\,,$ (10)
for the isoscalar-scalar, isoscalar-vector and isovector-vector exchange
mesons, respectively.
Each baryon-field obeys a Dirac-equation of the following type
$\left[\gamma_{\mu}(i\partial^{\mu}-\Sigma^{\mu}_{B})-(m_{B}-\Sigma_{sB})\right]\psi_{B}=0\,,$ (11)
with the selfenergies $\Sigma^{\mu}_{B}$ and $\Sigma_{sB}$ defined as
$\Sigma^{\mu}_{B}=g_{\omega B}\,\omega^{\mu}\,\overrightarrow{{\cal D}}_{B}+g_{\rho B}\,\vec{\tau}\,_{B}\cdot\vec{\rho}\,^{\mu}\,\overrightarrow{{\cal D}}_{B}\,,$ (12)
$\Sigma_{sB}=g_{\sigma B}\,\sigma\,\overrightarrow{{\cal D}}_{B}\,.$ (13)
Both Lorentz-components of the selfenergy, $\Sigma^{\mu}$ and $\Sigma_{s}$,
show an explicit linear behaviour with respect to the meson fields $\sigma$,
$\omega^{\mu}$ and $\vec{\rho}\,^{\mu}$ as in the standard RHD. However, they
contain an additional dependence on the regulators. General expressions for
the Noether-current and energy-momentum tensor can also be derived. We give
them below in the RMF approximation.
The RMF application of the NLD formalism to static hadronic matter follows the
same procedure as in the conventional RHD. The spatial components of the meson
fields in Minkowski- and isospin-spaces vanish,
$\omega^{\mu}\to(\omega^{0},~{}\vec{0}\,)$ and
$\vec{\rho}\,^{\mu}\to(\rho^{0}_{3},~{}\vec{0}\,_{3})$. For simplicity, we
denote in the following the remaining isospin component of the isovector
fields as $\rho^{\mu}$. The solutions of the RMF equations start with the
usual plane wave ansatz
$\psi_{B}(s,\vec{p}\,)=u_{B}(s,\vec{p}\,)e^{-ip^{\mu}x_{\mu}}$ where $B$
stands for the various isospin states of the baryons and
$p^{\mu}=(E,\vec{p}\,)$ is the single baryon 4-momentum. The application of
the non-linear derivative operator ${\cal D}_{B}$ to the plane wave Ansatz of
the spinor fields results in regulators ${\cal D}_{B}$ which are now functions
of the scalar argument $\xi_{B}=-\frac{v_{\alpha}p^{\alpha}}{\Lambda_{B}}$.
That is, they depend explicitly on the single baryon momentum $p$ (with an
appropriate choice of the auxiliary vector $v^{\alpha}$) and on the cut-off
$\Lambda_{B}$, which may differ for each baryon type $B$. Each baryon fulfils
a Dirac equation with the same form as in Eq. (11) and with corresponding
explicitly momentum dependent scalar and vector selfenergies. Their vector
components are given by
$\Sigma^{\mu}_{p}=g_{\omega N}\,\omega^{\mu}\,{\cal D}_{N}+g_{\rho N}\,\rho^{\mu}\,{\cal D}_{N}\,,$ (14)
$\Sigma^{\mu}_{n}=g_{\omega N}\,\omega^{\mu}\,{\cal D}_{N}-g_{\rho N}\,\rho^{\mu}\,{\cal D}_{N}\,,$ (15)
$\Sigma^{\mu}_{\Lambda}=g_{\omega\Lambda}\,\omega^{\mu}\,{\cal D}_{\Lambda}\,,$ (16)
$\Sigma^{\mu}_{\Sigma^{+}}=g_{\omega\Sigma}\,\omega^{\mu}\,{\cal D}_{\Sigma}+g_{\rho\Sigma}\,\rho^{\mu}\,{\cal D}_{\Sigma}\,,$ (17)
$\Sigma^{\mu}_{\Sigma^{-}}=g_{\omega\Sigma}\,\omega^{\mu}\,{\cal D}_{\Sigma}-g_{\rho\Sigma}\,\rho^{\mu}\,{\cal D}_{\Sigma}\,,$ (18)
$\Sigma^{\mu}_{\Sigma^{0}}=g_{\omega\Sigma}\,\omega^{\mu}\,{\cal D}_{\Sigma}\,,$ (19)
$\Sigma^{\mu}_{\Xi^{-}}=g_{\omega\Xi}\,\omega^{\mu}\,{\cal D}_{\Xi}-g_{\rho\Xi}\,\rho^{\mu}\,{\cal D}_{\Xi}\,,$ (20)
$\Sigma^{\mu}_{\Xi^{0}}=g_{\omega\Xi}\,\omega^{\mu}\,{\cal D}_{\Xi}+g_{\rho\Xi}\,\rho^{\mu}\,{\cal D}_{\Xi}\,.$ (21)
Similar expressions result for the scalar selfenergies. In the following the
scalar and time-like component of the baryon selfenergy will be denoted as
$S_{B}$ and $V_{B}$, respectively. Note that the selfenergies are explicitly
momentum dependent due to the regulators ${\cal D}_{B}={\cal D}_{B}(p)$ as
specified below. The solutions of the Dirac equation are the standard Dirac-
spinors with a proper normalization $N_{B}$
$u_{B}(s,\vec{p}\,)=N_{B}\left(\begin{array}[]{c}\varphi_{s}\\\ \\\
\displaystyle\frac{\vec{\sigma}\,\cdot\vec{p}\,}{E^{*}_{B}+m^{*}_{B}}\varphi_{s}\\\
\end{array}\right)\;,$ (22)
but now for quasi-free baryons $B$ with an in-medium energy
$E^{*}_{B}:=E_{B}-V_{B}(p)~{},$ (23)
and a Dirac mass
$m^{*}_{B}:=m_{B}-S_{B}(p)~{}.$ (24)
At a given momentum the single-particle energy $E$ is obtained from the in-medium on-shell relation (23). These expressions are needed for the evaluation of expectation values, for instance, the source terms of the meson-field equations. For the definition of nuclear matter we need a conserved nucleon density. It is obtained from the time-like component of the Noether-
current $J^{\mu}$ defined as
$\displaystyle
J^{\mu}=\frac{\kappa}{(2\pi)^{3}}\,\sum_{B=p,n}\,\int\limits_{|\vec{p}\,|\leq
p_{F_{B}}}\\!\\!\\!\\!\\!\\!d^{3}p\,\frac{\Pi^{\mu}_{B}}{\Pi^{0}_{B}}$ (25)
with the generalized $4$-momentum
$\displaystyle\Pi^{\mu}_{B}=p^{*\mu}_{B}+m^{*}_{B}\Big{(}\partial_{p}^{\mu}S_{B}\Big{)}-\Big{(}\partial_{p}^{\mu}\Sigma^{\beta}_{B}\Big{)}p^{*}_{B\beta}$
(26)
and the usual effective $4$-momentum
$p^{*\mu}_{B}=p^{\mu}-\Sigma^{\mu}_{B}\,.$ (27)
The EoS (Equation of State) is obtained from the time-like components of the
energy-momentum tensor. In nuclear matter the resummation procedure of the NLD
model results in the following expression
$\displaystyle
T^{\mu\nu}=\sum_{B}\frac{\kappa}{(2\pi)^{3}}\int\limits_{|\vec{p}\,|\leq
p_{F_{B}}}\\!\\!\\!\\!\\!\\!d^{3}p\,\frac{\Pi^{\mu}_{B}p^{\nu}}{\Pi^{0}_{B}}-g^{\mu\nu}\langle{\cal
L}\rangle\,,$ (28)
from which the energy density $\varepsilon\equiv T^{00}$ and the pressure $P$
can be calculated, see for details Ref. [15]. Finally, the NLD meson-field
equations in the RMF approach to nuclear matter can be resummed to the
following forms
$\displaystyle m_{\sigma}^{2}\sigma+\frac{\partial U}{\partial\sigma}=$
$\displaystyle\sum_{B}g_{\sigma B}\,\Big{<}\overline{\psi}_{B}{\cal
D}_{B}\psi_{B}\Big{>}=\sum_{B}g_{\sigma B}\,\rho_{sB}~{},$ (29) $\displaystyle
m_{\omega}^{2}\omega=$ $\displaystyle\sum_{B}g_{\omega
B}\,\Big{<}\overline{\Psi}_{B}\gamma^{0}{\cal
D}_{B}\Psi_{B}\Big{>}=\sum_{B}g_{\omega B}\,\rho_{0B}\,,$ (30)
with the scalar and vector density sources
$\rho_{sB}=\frac{\kappa}{(2\pi)^{3}}\int\limits_{|\vec{p}\,|\leq
p_{F_{B}}}\\!\\!\\!\\!\\!\\!d^{3}p\,\frac{m^{*}_{B}}{\Pi^{0}_{B}}\,{\cal
D}_{B}(p)~{},$ (31)
$\rho_{0B}=\frac{\kappa}{(2\pi)^{3}}\int\limits_{|\vec{p}\,|\leq
p_{F_{B}}}\\!\\!\\!\\!\\!\\!d^{3}p\,\frac{E^{*}_{B}}{\Pi^{0}_{B}}\,{\cal
D}_{B}(p)\,.$ (32)
The isovector densities are calculated through the standard isospin relations.
For a hyperon with a given momentum relative to nuclear matter at rest (at a
given nucleon density and isospin asymmetry) the mesonic sources contain only
nucleons, that is $B=p,n$.
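To make the structure of the source integrals (31) and (32) concrete, the following minimal Python sketch (ours, not the authors' code) evaluates both densities for one baryon species. The regulator enters as a user-supplied function, `m_eff` and `kappa` are illustrative inputs (momenta and masses in fm$^{-1}$, spin degeneracy $\kappa=2$), and the simplification $\Pi^{0}_{B}=E^{\star}_{B}$, noted further below for momentum-dependent regulators, is assumed.

```python
import numpy as np

def density_sources(p_F, m_eff, regulator, kappa=2, n=2000):
    """Scalar and vector density sources, Eqs. (31)-(32), for one baryon B.

    Illustrative sketch: spherically symmetric integration, momenta in fm^-1,
    `regulator` is the momentum-dependent D_B(p), and Pi^0_B = E*_B is used
    (the simplification stated in the text for momentum-dependent regulators).
    """
    p = np.linspace(0.0, p_F, n)
    e_star = np.sqrt(p**2 + m_eff**2)             # in-medium on-shell energy E*_B
    weight = kappa / (2.0 * np.pi**2) * p**2 * regulator(p)
    rho_s = np.trapz(weight * m_eff / e_star, p)  # Eq. (31): factor m*_B / Pi^0_B
    rho_0 = np.trapz(weight, p)                   # Eq. (32): factor E*_B / Pi^0_B = 1
    return rho_s, rho_0
```

As a sanity check of the normalisation, setting ${\cal D}_{B}\equiv 1$ and $\kappa=2$ reduces the vector density to the free Fermi-gas value $p_{F}^{3}/(3\pi^{2})$.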
The meson-field equations of motion show a structure similar to those of the standard RMF approximation. However, the substantial difference between NLD and other conventional RMF models appears in the source terms, which now contain in addition the momentum-dependent regulators ${\cal D}_{B}$. This is an important feature of the NLD model. The cut-off leads naturally to a particular suppression of the vector field at high densities or high Fermi momenta, in agreement with phenomenology, as discussed in detail in the previous work [15]. This feature is absent in conventional RHD approaches, unless one introduces additional scalar/vector self-interactions by hand.
The key observable for general discussions related to momentum or energy
dependencies of in-medium hadronic potentials is the Schroedinger-equivalent
optical potential $U_{opt}$, which is a complex quantity. The imaginary part
describes the scattering processes of a given particle, e.g., a hyperon, with
a nucleon of the nuclear matter. The real part of the optical potential is
related to the mean-field that a particle, e.g., a hyperon with a given
momentum, experiences in the nuclear medium at a given density and isospin-
asymmetry. The imaginary part of $U_{opt}$ cannot be calculated within a
conventional RMF prescription. In RMF models one is usually interested in the real part of the optical potential, which can then be examined in more realistic systems, for instance in heavy-ion collisions or hadron-induced reactions within relativistic transport theory. The missing imaginary part is then modelled within a collision term in terms of cross sections for elastic, quasi-elastic and inelastic channels with a proper counting of Pauli-blocking effects.
In the NLD model, too, one cannot calculate the imaginary part of $U_{opt}$ precisely. However, the NLD approach contains an explicit momentum dependence of the mean-fields, and thus of the optical potential. This particular feature allows us to give at least estimates for the imaginary part of the optical potential. This will be discussed in the case of the anti-hyperons; here we will mainly focus on the real part of the optical potentials.
The real part of the Schroedinger-equivalent optical potential for hyperons is
obtained from a non-relativistic reduction of the Dirac-equation and reads
$\displaystyle
U_{opt}^{B}=-S_{B}+\frac{E_{B}}{m_{B}}V_{B}+\frac{1}{2m_{B}}\left(S_{B}^{2}-V_{B}^{2}\right)\,.$
(33)
It describes the in-medium interaction of a baryon species $B$, e.g., a
hyperon, with a momentum $p$ (or single-particle energy $E_{B}=E_{B}(p)$, see
Eq. (23)) relative to nuclear matter at rest at a given density and isospin
asymmetry. We will use Eq. (33) to compare the NLD results with the
microscopic calculations from $\chi$-EFT and Lattice-QCD for the hyperon in-
medium potentials.
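Eq. (33) is straightforward to evaluate; the short Python sketch below (ours, with hypothetical input values) illustrates the familiar cancellation between the large scalar attraction and the vector repulsion at low momentum.

```python
def u_opt(S, V, E, m):
    """Real part of the Schroedinger-equivalent optical potential, Eq. (33).

    S = S_B(p) and V = V_B(p) are the momentum-dependent scalar and vector
    self-energy components, E = E_B(p) the single-particle energy and m = m_B
    the vacuum mass; all quantities in the same units (e.g. MeV).
    """
    return -S + (E / m) * V + (S**2 - V**2) / (2.0 * m)

# Hypothetical nucleon-like numbers, for illustration only: a large scalar
# attraction and vector repulsion largely cancel at low momentum.
print(u_opt(S=350.0, V=300.0, E=940.0, m=939.0))   # approx. -32 MeV
```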
## 3 Results and discussion
### 3.1 Nucleonic sector
NLD parameters | $\Lambda_{sN}$ $[\,{\rm GeV}]$ | $\Lambda_{vN}$ $[\,{\rm GeV}]$ | $g_{\sigma N}$ | $g_{\omega N}$ | $g_{\rho N}$ | $b$ $[{\rm fm}^{-1}]$ | $c$
---|---|---|---|---|---|---|---
values | $0.95$ | $1.125$ | $10.08$ | $10.13$ | $3.50$ | $15.341$ | $-14.735$
Bulk saturation properties | $\rho_{sat}$ $[{\rm fm}^{-3}]$ | $E_{b}$ $[{\rm MeV}/A]$ | $K$ $[\,{\rm MeV}]$ | $a_{sym}$ $[\,{\rm MeV}]$
---|---|---|---|---
values | $0.156$ | $-15.30$ | $251$ | $30$
Table 1: (Top) NLD parameters: meson-nucleon couplings
$g_{mN},~{}(m=\sigma,\omega,\rho)$, $\sigma$ self-interaction constants $b,c$,
and NLD cut-off for scalar ($\Lambda_{sN}$) and vector ($\Lambda_{vN}$) meson-
nucleon isoscalar vertices. The isovector meson-nucleon cut-off is the same as
the isoscalar-vector one. (Bottom) Bulk saturation properties of nuclear
matter: saturation density $\rho_{sat}$, binding energy per nucleon $E_{b}$,
compression modulus $K$ and asymmetry parameter $a_{sym}$ in the NLD model.
See Ref. [15] for more details.
We briefly review the status of the NLD model for in-medium nucleons before starting the discussion of the in-medium hyperon potentials. As discussed in detail in [15], a momentum-dependent monopole form
${\cal D}(p)=\frac{\Lambda^{2}}{\Lambda^{2}+\vec{p\,}^{2}}$ (34)
for the regulators turned out to be very effective for a simultaneous
description of the low- and high-density nuclear matter properties. The extracted saturation properties are shown in Table 1 together with the model parameters. It is seen that the NLD model leads to a very good
description of the empirical values at saturation. The NLD EoS is rather soft
and similar to the density dependence of Dirac-Brueckner-Hartree-Fock
microscopic calculations. At high densities, however, the NLD EoS becomes
stiff. This feature makes a prediction of the maximum mass of neutron stars of
$2M_{\odot}$ possible even with a soft compression modulus. Note that the NLD
model gives a correct description of the Schroedinger-equivalent optical
potential for in-medium protons and antiprotons simultaneously by imposing
G-parity only [15].
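To make the suppression quantitative, a minimal evaluation of the monopole regulator of Eq. (34) follows; the momenta are chosen by us for illustration, and the cut-off is the vector value $\Lambda_{vN}$ of Table 1.

```python
def regulator(p, lam):
    """Monopole regulator of Eq. (34); p and lam in GeV."""
    return lam**2 / (lam**2 + p**2)

# Vector cut-off Lambda_vN = 1.125 GeV from Table 1; momenta chosen by us.
# p = 0.27 GeV roughly corresponds to the saturation Fermi momentum.
for p in (0.0, 0.27, 0.63, 1.0):
    print(f"p = {p:4.2f} GeV  ->  D(p) = {regulator(p, 1.125):.3f}")
# The vertex is undamped at p = 0 and suppressed by roughly 44% at p = 1 GeV,
# which is the origin of the softening of the vector field at high Fermi momenta.
```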
### 3.2 Strangeness sector
For the strangeness sector we consider again nuclear matter at rest, at a
given density, isospin-asymmetry and at zero temperature, in which hyperons
($\Lambda,\Sigma,\Xi$) are situated at a given momentum relative to the
nuclear matter at rest. The quantity of interest is the optical potential $U_{opt}$ of the in-medium hyperons, see Eq. (33). Since there is no experimental information on the momentum dependence of the in-medium hyperonic potentials, we use the recent microscopic calculations from Refs. [17] (see also Ref. [18]) and [19] as guidance for our comparisons. They are based on the $\chi$-EFT approach at next-to-leading order (NLO) and on Lattice-QCD, respectively.
In the NLD calculations we assume no additional parameters for the in-medium hyperon interactions except for the strangeness cut-offs of the hyperons. That
is, the various hyperon-nucleon couplings are fixed from the corresponding
nucleon-nucleon ones by means of SU(6). The hyperon cut-offs retain their
monopole form as in Eq. (34). In particular, they take the form
${\cal
D}_{Y}(p)=\frac{\Lambda^{2}_{\gamma_{1}}}{\Lambda^{2}_{\gamma_{2}}+\vec{p\,}^{2}}\,,$
(35)
with $\gamma=\sigma,~{}\omega,~{}\rho$ indicating the cut-off values for the
hyperon-nucleon $\sigma,~{}\omega$\- and $\rho$-vertices, respectively, and
$Y=\Lambda,\Sigma,\Xi$ denotes the hyperon type. In principle, one could use a
single cut-off $\Lambda_{\gamma_{1}}=\Lambda_{\gamma_{2}}=\Lambda_{\gamma}$
for each meson-hyperon vertex. However, in order to describe the non-trivial
momentum dependence of the microscopic calculations as precise as possible we
allow for different cut-off values for the vector-isoscalar $\omega$\- and
vector-isovector $\rho$-hyperon vertices, as shown in Eq. (35). For the
isoscalar meson-hyperon interactions a single cut-off
$\Lambda_{\sigma}=\Lambda_{\sigma_{1}}=\Lambda_{\sigma_{2}}$ for each hyperon
type is used. This prescription was found to be the most appropriate one when
comparing to the microscopic calculations. In fact, the scalar-like
interactions are in any case better controlled with increasing density
(respectively momentum) by $m^{\star}/E^{\star}$-suppression factors while the
vector-like vertices do not include them, besides the NLD-regulators in the
source terms of the meson-field equations (31,32). Note that $\Pi^{0}=E^{\star}$ holds for momentum-dependent regulators and for each baryon type $B$. Similar studies concerning the peculiar role of the
vector $\omega$-meson exist in the literature. For instance, in Refs. [20, 21,
22] non-linear quadratic $\omega$-field contributions were considered as an
alternative approach for the vector-like interaction Lagrangian leading to
more complex density dependencies of their mean-fields. In the NLD model all
higher-order non-linear terms are summed up into regulators. The novel feature
of NLD is that these regulators mediate a non-linear density and, at the same
time, a non-linear momentum dependence of in-medium potentials not only for
nucleons, but for hyperons too. This will become clear in the following
discussions.
Figure 1: Optical potential of $\Lambda$-hyperons as a function of their
momentum $p$ in symmetric nuclear matter at saturation density. The NLD-
results (thick-solid curve) are compared with $\chi$-EFT microscopic
calculations (taken from [17]) at different orders LO (band with closed dashed
borders) and NLO (band with closed solid borders) [17]. Further microscopic
calculations from the Jülich group (dot-dashed curve) are shown too [23].
At first, the cut-offs of the hyperons have to be determined. The
strangeness $|S|=1$ cut-offs are adjusted to the corresponding hyperonic optical
potentials at saturation density of symmetric and cold nuclear matter from
$\chi$-EFT calculations. This is shown in Fig. 1 for the optical potential of
$\Lambda$-hyperons. The gray bands correspond to the microscopic calculations
at different orders in $\chi$-EFT, while the solid curve represents the NLD
result. At low momenta the $\Lambda$ in-medium interaction is attractive, but
it becomes repulsive at high momenta. The non-trivial momentum dependence in
NLD arises from the explicitly momentum dependent regulators which show up
twice: in the scalar and vector selfenergies and in the source terms of the
meson fields. As a consequence, the cut-off regulates the $\Lambda$-potential
not only at zero momentum, but particularly over a wide momentum region. The
in-medium $\Lambda$-potential does not diverge with increasing $p$-values (not
shown here), but it saturates. Furthermore, the in-medium $\Lambda$-potential
at zero kinetic energy leads to a value of
$U_{opt}^{\Lambda}\simeq-28~{}\mbox{$\,{\rm MeV}$}$, which is consistent with
the NLO-calculations and also consistent with phenomenology. There thus exists an appropriate choice of cut-off regulators that reproduces the microscopic calculations very well over a wide momentum range up to $p\simeq 1~{}\mbox{$\,{\rm GeV}$}$. A similar picture occurs for the in-
medium potential of $\Sigma$-hyperons, as shown in Fig. 2. The NLD cut-off for
the $\Sigma$-particles can be adjusted in such a way as to reproduce a repulsive potential at vanishing momentum with a weak momentum dependence at finite $\Sigma$-momentum. Again, the NLD calculations are able to describe the microscopic $\chi$-EFT results at NLO very well. The corresponding values for the strangeness cut-offs are tabulated in Table 2. Even if the origin of the cut-offs differs between the NLD model and the microscopic calculations, it may be interesting to note that these NLD cut-off values are close to the region between 500 and 650 $\,{\rm MeV}$ used in the $\chi$-EFT calculations.
Hyperon | $\Lambda_{\sigma}$ | $\Lambda_{\omega_{1}}$ | $\Lambda_{\omega_{2}}$ | $\Lambda_{\rho_{1}}$ | $\Lambda_{\rho_{2}}$
---|---|---|---|---|---
$\Lambda$ | 0.7 | 0.85 | 0.79 | – | –
$\Sigma$ ($\Sigma^{-}$) | 0.67 | 0.95 | 0.79 | 0.47 | 0.47
$\Sigma$ ($\Sigma^{+}$) | 0.67 | 0.95 | 0.79 | 0.63 | 0.5
$\Xi$ | 0.6 | 0.8 | 0.71 | 1.3 | 1.2
Table 2: $\Lambda$, $\Sigma$ and $\Xi$ cut-offs for $\sigma$\-
($\Lambda_{\sigma}$), $\omega$\- ($\Lambda_{\omega_{1,2}}$) and
$\rho$-hyperon-nucleon ($\Lambda_{\rho_{1,2}}$) vertices in units of $\,{\rm
GeV}$. For $\Sigma$ and $\Xi$ the isospin cut-offs ($\Lambda_{\rho_{1,2}}$) are relevant for the charged particles only. For the $\Sigma$-hyperon different cut-off values $\Lambda_{\rho_{1,2}}$ are used for $\Sigma^{-}$ and for $\Sigma^{+}$, as indicated. Figure 2: Same as in Fig. 1, but for the $\Sigma$-hyperon.
We emphasize again the non-trivial momentum dependence of the in-medium
hyperon-potentials, as manifested in the $\chi$-EFT calculations at different
orders, see for instance Ref. [17]. This prescription modifies the momentum dependencies in such a complex way that they cannot be reproduced in standard RMF models by imposing SU(6) arguments alone. Furthermore, any standard RMF model leads
to a divergent behaviour of optical potentials at high momenta. Note that a
weak repulsive character of the $\Sigma$-potential, as proposed by the
microscopic calculations, cannot be achieved in conventional RMF. The
momentum-dependent NLD model resolves these issues effectively through
momentum cut-offs of natural hadronic scale. Since we are dealing with
hadronic matter, values of hadronic scale in the $\,{\rm GeV}$-regime for the
NLD regulators seem to be an adequate choice.
So far we have discussed the momentum dependence of the $\Lambda$ and $\Sigma$
hyperons at saturation density (Figs. 1 and 2). These comparisons served also
as a guideline for the NLD cut-offs for the $\Lambda$ and $\Sigma$ baryons.
Now we discuss the predictive power of the NLD approach by comparing in more
detail the density and momentum dependence of the NLD formalism with the
microscopic $\chi$-EFT calculations. This is shown in Figs. 3 and 4, where the
momentum dependence of the $\Lambda$ (Fig. 3) and $\Sigma$ (Fig. 4) particles
is displayed again, but now at various densities of symmetric nuclear matter.
At first, the $\Lambda$ and $\Sigma$ optical potentials become more repulsive
with increasing nuclear matter density in NLD. However, the non-trivial
momentum and density dependence, as manifested in the NLD selfenergies and the
meson-field sources, weakens the in-medium potentials with increasing
momentum. In particular, the NLD model predicts astonishingly well the complex
microscopic behaviours in momentum and at various densities of symmetric
nuclear matter.
In asymmetric matter, besides the standard isoscalar-scalar and isoscalar-vector vertices (the $\sigma$ and $\omega$ meson fields, respectively), the isovector Lorentz-vector $\rho$-meson must be taken into account. In NLD we assume a monopole form also for the $\rho$-meson coupling to the hyperons, using the coupling constant of Table 1 and the cut-off values of Table 2 for the isospin sector. Relevant are the cut-off values $\Lambda_{\rho_{1,2}}$ for the charged
$\Sigma^{\pm}$-hyperons. They have been fixed from the corresponding
$\chi$-EFT calculations for $\Sigma^{-}$ and $\Sigma^{+}$ at saturation
density. The NLD calculations for the neutral $\Lambda$\- and
$\Sigma^{0}$-hyperons are free of parameters here.
The results for pure neutron matter at three different baryon densities are
summarized in Fig. 5. The NLD model does predict the general microscopic
trends. In particular, in the case of the neutral hyperons ($\Lambda$ and
$\Sigma^{0}$), where within the RMF approximation the $\rho$-meson does not
appear at all, one would expect identical results between symmetric and pure
neutron matter (at same total baryon density and momentum). This is in fact
not the case. There is an inherent isospin dependence in the source terms of the meson-field equations, see Eqs. (29) and (30), even for the $\sigma$- and $\omega$-fields. The upper limits in the integrals (31, 32) differ for protons and neutrons between symmetric and asymmetric nuclear matter at the same total density. This leads to different values of the regulators ${\cal D}_{p,n}$ and thus to different results for symmetric and asymmetric matter. This NLD feature induces a hidden isospin dependence which is qualitatively consistent with the microscopic calculations at the three total densities, as indicated in Fig. 5 for the "isospin-blind" hyperons.
Concerning the charged $\Sigma^{\pm}$-hyperons, the agreement between the NLD and $\chi$-EFT calculations is clearly best for densities close to saturation. In general, the NLD predictions follow satisfactorily the details
of the microscopic in-medium potentials as function of momentum and matter
density.
Finally we discuss the in-medium properties of the cascade-hyperons as shown
in Figs. 6 and 7 for symmetric nuclear matter (SNM) and pure neutron matter
(PNM). Here we apply for comparison the latest microscopic calculations from
Lattice-QCD. The same NLD scheme with appropriate monopole-type regulators
leads to the results in Fig. 6 for symmetric nuclear matter at saturation
density. It is seen that a simple monopole-like regulator with hadronic cut-
off values can explain the microscopic Lattice calculations. Indeed, a soft
attractive potential for in-medium $\Xi$-hyperons is obtained in the NLD model
over a wide momentum range. The prediction of NLD is then displayed in Fig. 7
for pure neutron matter at the same total (saturation) density as in the previous figure. The hidden isospin dependence slightly modifies the momentum dependence of the neutral $\Xi^{0}$-hyperon. In this case the Lattice calculations are reproduced only qualitatively by the NLD model, while for the charged cascade partner ($\Xi^{-}$) the agreement between NLD and Lattice is very good for pure neutron matter at saturation and over a broad region in single-particle cascade momentum.
In future experiments, such as those at FAIR, the in-medium properties of anti-hadrons will be investigated as well. We thus also give predictions for anti-hyperon in-medium potentials. We recall the novel feature of the NLD formalism [15], that is, parameter-free predictions for anti-baryon optical potentials in the spirit of G-parity. In fact, once the cut-off parameters are fixed from saturation properties, the application of NLD to anti-matter gave very successful results by imposing G-parity only. Note that in conventional RMF models one has to introduce additional scaling factors by hand in order to reproduce the weak attractiveness of the anti-proton optical potential at vanishing momenta [24]. We therefore use the same NLD formalism for the description of anti-hyperons and performed additional calculations for the
$\overline{\Lambda}$, $\overline{\Sigma}$ and $\overline{\Xi}$ optical
potentials as function of momentum and density. These results are shown in
Fig. 8 for anti-$\Lambda$ (left), anti-$\Sigma$ (middle) and anti-$\Xi$
optical potentials versus their momentum at three densities of symmetric
nuclear matter. Due to the negative sign in the Lorentz-vector component of
the hyperon self-energy these potentials are in general attractive over a wide
momentum range. Compared to the anti-proton potential at saturation these
potentials are less attractive with a similar dependence on single-particle
momentum.
Since the anti-hyperon results are predictions, and since for anti-particles in general one may expect significant contributions to the imaginary part of $U_{opt}$, we also briefly discuss the imaginary part of the anti-hyperon optical potentials. An exact treatment of the imaginary part of the optical potential is not possible within an RMF model. However, within the NLD approach one can estimate the strength of $Im~{}U_{opt}$ from dispersion relations [15]. This prescription was successfully applied to the antiproton case in a previous work (see Ref. [15]); thus we apply it here to the anti-hyperons as well. The results for $Im~{}U_{opt}$ are shown in Fig. 8 by the thin curves. One generally observes a strong contribution to the in-medium anti-hyperon interactions from the imaginary parts of the optical potentials. These contributions are quite similar to the imaginary potential of antiprotons, with a value around $-150\,{\rm MeV}$ at very low kinetic energies (see the second entry of Ref. [15]). However, in the antiproton case the imaginary potential is rather strong relative to its real part, while for anti-hyperons both parts of the potential are sizeable.
Even if the NLD results for the $Im~{}U_{opt}$ are only estimations, we can
give a physical interpretation. In antinucleon-nucleon scattering annihilation
can occur through the production of light pions. On the other hand, the
interaction of anti-hyperons with nucleons can happen via the production of
the heavier kaons due to strangeness conservation, which may influence the
imaginary potential at low energies. This might be one reason why the
imaginary part of the anti-hyperon optical potential is comparable with its
corresponding real part particularly at very low energies. These calculations
can be applied to anti-hadron induced reactions in the spirit of relativistic
transport theory and can be tested in the future experiments at FAIR.
Figure 3: Optical potential of $\Lambda$-hyperons versus their momentum at various densities of symmetric nuclear matter, as indicated by the Fermi-momenta in units of fm$^{-1}$. The NLD calculations at these three Fermi-momenta (thick-solid, thick-dashed and thick-dot-dashed curves) are compared to the $\chi$-EFT calculations at NLO [17]. Figure 4: Same as in Fig. 3, but for the $\Sigma$-hyperons. Figure 5: Optical potentials for hyperons (as indicated) versus their momentum for pure neutron matter. Solid curves with symbols indicate the NLD calculations while plain curves without symbols are the microscopic $\chi$-EFT results at NLO from Ref. [17]. Green pairs (circles-solid for NLD and solid for $\chi$-EFT) refer to a low density of $p_{F}=1$ fm$^{-1}$, red pairs (diamonds-dashed for NLD and dashed for $\chi$-EFT) refer to saturation density of $p_{F}=1.35$ fm$^{-1}$ and blue pairs (triangles-dot-dashed for NLD and dot-dashed for $\chi$-EFT) refer to a density of $p_{F}=1.53$ fm$^{-1}$. Figure 6: Optical potential of cascade hyperons versus their momentum
for symmetric nuclear matter (SNM) at saturation density. The solid curve
indicates the NLD predictions while the dashed curve and the gray band refer
to recent Lattice calculations from Refs. [19] (Lattice2016 and Lattice2019).
Figure 7: Same as in Fig. 6, but for pure neutron matter (PNM). The curves and
bands belonging to $\Xi^{-}$ and $\Xi^{0}$ are indicated in this figure.
Figure 8: Optical potentials for anti-hyperons versus their kinetic energy
for symmetric nuclear matter (SNM) at various densities, as indicated. The NLD
predictions for saturation density $\rho_{0}$ (thick-solid) and higher
densities of $2\rho_{0}$ (thick-dashed) and $3\rho_{0}$ (thick-dashed-dot) are
shown. For the anti-hyperons we show estimates for the imaginary part of their
optical potentials too at saturation density $\rho_{0}$ (thin-solid), at
$2\rho_{0}$ (thin-dashed) and at $3\rho_{0}$ (thin-dashed-dot).
## 4 Summary
We have investigated the properties of strangeness particles inside nuclear
matter in the framework of the NLD approach. The NLD model is based on the
simplicity of the relativistic mean-field theory, but it includes the missing
momentum dependence in a manifestly covariant fashion. This is realized by the
introduction of non-linear derivative series in the interaction Lagrangian. In
momentum space this prescription leads to momentum dependent regulators, which
are determined by a cut-off. The NLD approach not only resolves the optical potential issues of protons and antiprotons at high momenta, but also affects the density dependence. That is, the cut-off regulators make the EoS
softer at densities close to saturation and stiffer at very high densities
relevant for neutron stars.
Because of the successful application of the NLD model to infinite nuclear
matter (and to finite nuclei [25]), it is natural to extend this
approach to hadronic matter by taking strangeness degrees of freedom into
account. This is realized in the spirit of SU(6) symmetry. We applied the NLD
model to the description of in-medium hyperon interactions for ordinary
nuclear matter. It was found that the strangeness cut-off regulates the
momentum dependence of the optical potentials of hyperons in multiple ways. At
first, the optical potentials do not diverge with increasing hyperon momentum.
Furthermore, the NLD model predicts an attractive $\Lambda$-optical potential
at low momenta, which becomes repulsive at high energies and finally
saturates. In particular, it is possible to predict a weak and repulsive in-
medium interaction for $\Sigma$-hyperons inside nuclear matter at saturation
density. These results are in consistent agreement with calculations based on
the chiral effective field theory. Regarding $\Xi$-hyperons, the NLD
predictions turned out to be in agreement with recent Lattice-QCD
calculations. In symmetric nuclear matter the cascade optical potential is
attractive and it follows the Lattice-QCD results. In pure neutron matter the
isospin-separation as predicted by the NLD model agrees with the Lattice-QCD
behaviours qualitatively. While the potential of the neutral cascade particle
remains attractive, the $\Xi^{-}$-hyperon shows a weak repulsion in neutron
matter. The weak repulsion of those hyperons may well lead to a stiffer EoS for neutron star matter.
We also briefly discussed the imaginary part of $U_{opt}$ of the anti-hyperons. These estimates indicate a significant contribution of the imaginary part to
the anti-hyperon dynamics that could be explored in anti-hadron induced
reactions. For instance, the present calculations can be tested in anti-proton
induced reactions and in reactions with secondary $\Xi$-beams, as they are
planned at FAIR in the future PANDA experiment.
Obviously this study is relevant not only for hadron physics, but also for
nuclear astrophysics. The application of the NLD approach to $\beta$-equilibrated compressed matter is in progress, in order to investigate the hyperon puzzle in neutron stars. Another interesting application concerns the dynamics of neutron star binaries. To this end, an extension to hot and compressed hadronic matter is necessary and also in progress. Note that the NLD formalism is fully thermodynamically consistent, which is an important requirement before applying it to hot and dense systems. In summary, our studies are relevant for future experiments at FAIR and for nuclear astrophysics.
## Acknowledgments
This work is partially supported by COST (THOR, CA 15213) and by the European
Union’s Horizon 2020 research and innovation programme under grant agreement
No. 824093. We also acknowledge H. Lenske and J. Haidenbauer for fruitful
discussions and for providing us the $\chi$-EFT calculations.
## References
* [1] P. Demorest, T. Pennucci, S. Ransom, M. Roberts, and J. Hessels, Nature 467 (2010) 1081.
* [2] J. Antoniadis et al., Science 340 (2013) 6131.
* [3] M. Linares, T. Shahbaz and J. Casares, Astrophys. J. 859 (1) (2018) 54
H.T. Cromartie, E. Fonseca, et al., Nature Astronomy 4 (2020) 72.
* [4] J.M. Lattimer, A.W. Steiner, Astrophys. J. 784 (2014) 123.
* [5] T. Klähn, et al., Phys. Rev. C 74 (2006) 035802.
* [6] C. Fuchs, Prog. Part. Nucl. Phys. 56 (2006) 1.
* [7] C. Hartnack, et al., Phys. Rept. 510 (2012) 119.
* [8] Ch. Moustakidis, T. Gaitanos, Ch. Margaritis, G.A. Lalazissis, Phys. Rev. C 95 (2017) 045801.
* [9] I. Bombaci, arXiv:1601.05339 [nucl-th],
D. Chatterjee, I. Vidana, Eur. Phys. J. A 52 (2016) 29.
* [10] J. Haidenbauer, U. G. Meißner, N. Kaiser and W. Weise, Eur. Phys. J. A 53 (2017) no.6, 121.
* [11] S. Petschauer, J. Haidenbauer, N. Kaiser, U. G. Meißner and W. Weise, Front. in Phys. 8 (2020) 12.
* [12] J. Schaffner and I. N. Mishustin, Phys. Rev. C 53 (1996) 1416.
* [13] N. Hornick, L. Tolos, A. Zacchi, J. E. Christian and J. Schaffner-Bielich, Phys. Rev. C 98 (2018) no.6, 065804.
* [14] J. E. Christian and J. Schaffner-Bielich, Astrophys. J. Lett. 894 (2020) no.1, L8.
* [15] T. Gaitanos, M. Kaskulov, Nucl. Phys. A 899 (2013) 133,
T. Gaitanos, M. Kaskulov, Nucl. Phys. A 940 (2015) 181.
* [16] H.-P. Duerr, Phys. Rev. 103 (1956) 469,
J.D. Walecka, Ann. Phys. 83 (1974) 491,
J. Boguta, A. Bodmer, Nucl. Phys. A 292 (1977) 413.
* [17] S. Petschauer, et al., Eur. Phys. J. A 52 (2016) 15,
J. Haidenbauer, private communication.
* [18] J. Haidenbauer, U. G. Meißner and A. Nogga, Eur. Phys. J. A 56 (2020) no.3, 91.
* [19] T. Inoue [LATTICE-HALQCD], PoS INPC2016 (2016), 277,
T. Inoue [HAL QCD], AIP Conf. Proc. 2130 (2019) no.1, 020002.
* [20] M. Fortin, S. S. Avancini, C. Providência and I. Vidaña, Phys. Rev. C 95 (2017) no.6, 065803.
* [21] C. Providência and A. Rabhi, Phys. Rev. C 87 (2013) 055801.
* [22] Y. Sugahara, and H. Toki, Nucl. Phys. A 579 (1994) 557.
* [23] J. Haidenbauer and U.-G. Meissner, Phys. Rev. C 72 (2005) 044005.
* [24] A.B. Larionov, et al., Phys. Rev. C 80 (2009) 021601(R).
* [25] S. Antic and S. Typel, AIP Conf. Proc. 1645 (2015) 276.
|
# Collaborative Teacher-Student Learning via Multiple Knowledge Transfer
Liyuan Sun Jianping Gou<EMAIL_ADDRESS>School of Computer Science
and Telecommunication Engineering, Jiangsu University, Zhenjiang, 212013,
China Baosheng Yu Lan Du Dacheng Tao Faculty of information technology,
Monash University, Australia UBTECH Sydney AI Centre, School of Computer
Science, Faculty of Engineering, The University of Sydney, Darlington, NSW
2008, Australia.
###### Abstract
Knowledge distillation (KD), as an efficient and effective model compression
technique, has been receiving considerable attention in deep learning. The key
to its success is to transfer knowledge from a large teacher network to a
small student one. However, most of the existing knowledge distillation
methods consider only one type of knowledge learned from either instance
features or instance relations via a specific distillation strategy in
teacher-student learning. There are few works that explore the idea of
transferring different types of knowledge with different distillation
strategies in a unified framework. Moreover, the frequently used offline
distillation suffers from a limited learning capacity due to the fixed
teacher-student architecture. In this paper we propose collaborative teacher-student learning via multiple knowledge transfer (CTSL-MKT), which promotes both self-learning and collaborative learning. It allows multiple students to learn knowledge from both individual instances and instance relations in a collaborative way. While learning from themselves with self-distillation, they can also guide each other via online distillation.
ablation studies on four image datasets demonstrate that the proposed CTSL-MKT
significantly outperforms the state-of-the-art KD methods.
###### keywords:
## 1 Introduction
Deep neural networks have achieved state-of-the-art performance on many
applications such as computer vision, natural language processing, and speech
recognition in recent years. The remarkable performance of deep learning
relies on designing deeper or wider network architectures with many layers and
millions of parameters to enhance the learning capacity. However, it is almost
impossible to deploy the large-scale networks on platforms with limited
computation and storage resources, e.g., mobile devices and embedded systems.
Thus, model compression and acceleration techniques, mainly including network pruning [2, 3], model quantization [34, 35] and knowledge distillation [20, 21, 39, 40, 36], have been proposed for training lightweight deep models. Among these compression methods, knowledge distillation, which carries out knowledge transfer from a high-capacity teacher network to a low-capacity student one, has received increasing interest since it was first introduced in [20].
In knowledge distillation, the type of knowledge, the distillation strategy
and the teacher-student architecture are three crucial factors that determine
the KD performance [1]. As pointed out in [1], there are three kinds of
knowledge, i.e., the response-based, the feature-based and the relation-based
knowledge. Generally, most KD methods distill the response-based knowledge
(e.g., soft logits of the output layer) from a large teacher network and
transfer it to a small student [20, 32, 22]. To overcome the limitation of
knowledge from the output layer of teacher, the feature-based knowledge from
the middle layers of teacher is also used to train the student [21, 18, 19].
Unlike both the response-based and the feature-based knowledge from individual
instances, the relation-based knowledge from instance relations is modelled
for improving student learning [37, 38, 15, 16, 17]. Each kind of knowledge can provide student training with informative teacher guidance, and the different kinds can also complement each other to enrich learning. However, most existing KD methods only consider either knowledge from individual instance features, to maintain instance consistency between teacher and student, or knowledge from instance relations, to preserve instance correlation consistency. Only a few works consider more than one kind of knowledge at the same time [37, 14] and explore the efficacy of each kind of knowledge.
Transferring different types of knowledge can be implemented with different
distillation methods, e.g., offline distillation, online distillation and
self-distillation [1]. Most of the KD methods employ offline distillation,
which is one-way knowledge transfer from a pre-trained large teacher to a
small student [20, 22, 13]. In offline distillation, the capacity gap caused
by a fixed teacher-student architecture and the requirement of a large dataset
for pre-training the teacher often result in a degraded performance [22].
Thus, finding a proper teacher-student architecture in offline distillation is
challenging. In contrast, online distillation provides a one-phase end-to-end
training scheme via teacher-student collaborative learning on a peer-network
architecture instead of a fixed one [25, 33, 36, 32, 28, 12]. Self-
distillation performs online distillation within the same network to reduce
model over-fitting [23, 24]. Online distillation and self-distillation are
promising methods for knowledge distillation as they bridge the capacity gap by avoiding the need for a large teacher network, leading to improved performance. However, used individually, both methods are limited to knowledge distillation from a single source, i.e., individual instances; moreover, online distillation could suffer from poor instance consistency between peer networks, caused by the discrepancy in their outputs.
Figure 1: The overview diagram of CTSL-MKT.
Consequently, it is desirable to have a unified framework that can integrate
the advantages of different KD methods and make efficient use of different
types of knowledge. Inspired by the idea of knowledge distillation via
multiple distillation strategies to transfer more than one types of knowledge,
we propose a collaborative teacher-student learning via multiple knowledge
transfer (CTSL-MKT), which fuses self-distillation and online distillation in
such a way that the former transfers the response-based knowledge within each
peer network and the latter bidirectionally transfers both the response-based
knowledge and the relation-based knowledge between peer networks. CTSL-MKT can
overcome the aforementioned issues faced by existing KD methods that often use
only one distillation strategy to transfer a single type of knowledge. The
overview framework of CTSL-MKT is illustrated in Figure 1. To our knowledge,
this is the first framework that integrates different distillation strategies
together to transfer more than one type of knowledge simultaneously.
In CTSL-MKT, each peer network conducts self-learning via self-distillation.
Meanwhile, they carry out teacher-student collaborative learning to mutually
teach each other. CTSL-MKT can also adopt a variety of peer network
architectures, where the two peer networks can either share the same network
architecture or have different ones. We believe that multiple knowledge
transfer can provide much more informative knowledge to guide each peer
network so that they can obtain better performance with a better
generalization ability. We conduct a set of image classification experiments
on four commonly-used datasets, i.e., CIFAR-10, CIFAR-100, Tiny-ImageNet, and
Market-1501. Experimental results demonstrate the superior performance of the
proposed CTSL-MKT over the state-of-the-art KD methods. The main contributions of our work can be summarized as follows:
* •
A new teacher-student mutual learning framework that effectively fuses the knowledge from individual instances and the knowledge from instance relationships.
* •
A self-learning enhanced collaborative learning scheme that integrates the advantages of both self-learning and online learning.
* •
Extensive experiments on a variety of peer teacher-student networks that compare CTSL-MKT with the state-of-the-art methods and validate its effectiveness in image classification tasks.
* •
A set of ablation studies of different combinations of knowledge and distillation methods that provides insights into how multiple knowledge transfer contributes to knowledge distillation.
## 2 Related Work
### 2.1 Self-Distillation
Self-distillation is a novel training scheme for knowledge transfer [23, 24,
27, 11, 10]. In self-distillation, the teacher and student networks are
identical and knowledge transfer is carried out within the same network. Yuan
et al. empirically analyzed the performance of normal, reversed and defective
KD methods, and showed that a weak teacher can strengthen the student and
vice-versa [23]. A teacher-free knowledge distillation method (Tf-KD) instead
makes student model conduct self-learning. To enhance the generalization and
overcome over-fitting, class-wise self-knowledge distillation makes use of
soft logits of different intra-class samples within a model [27]. Phuong and
Lampert [11] proposed a distillation-based training method to reduce time
complexity, where the output of later exit layer supervises the early exit
layer via knowledge transfer. Rather than at the layer level, snapshot
distillation [10] transfers knowledge from earlier to later epochs while
training a deep model.
Overall, self-distillation can overcome the issue of over-fitting and the
capacity gap on the teacher-student architectures, improve the generalization
ability and reduce the inference time of a deep model. However, the self-distillation performance could be limited by the one-sided response-based knowledge from the model itself. To further improve knowledge distillation, we
integrate both online and self-distillation into CTSL-MKT with more
informative relation-based knowledge.
### 2.2 Collaborative Learning
Recently, many new online distillation methods have been proposed that train a teacher and a student simultaneously during knowledge transfer. Collaborative learning is the strategy used most often [25, 29, 32, 8, 9, 7]: the teacher and the student, as peer networks, collaboratively teach and learn from each other, and the peer network architectures can differ. In particular, Zhang et al.
[25] proposed a deep mutual learning method (DML) for online distillation
using the response-based knowledge. DML uses an ensemble of soft logits as
knowledge and transfers it among arbitrary peer networks via collaborative
learning [9]. Yao and Sun [8] further extended DML with dense cross-layer
mutual-distillation, which learns both the teacher and the student
collaboratively from scratch.
Unlike the ensemble of peer networks, the advantage of a mutual knowledge
distillation method is that it can fuse features of peer networks to
collaboratively learn a powerful classifier [32]. However, the knowledge
distilled by those online mutual distillation methods is limited to the
response-based knowledge from individual instance features. In contrast, our
work can further make use of the relation-based knowledge from instance
relationships to further enrich the transferred knowledge.
### 2.3 Structural Knowledge
Most knowledge distillation approaches adopt the output logits of a deep model on individual samples as knowledge and make the logits of the teacher and the student match each other. However, such response-based knowledge ignores the structural knowledge in the mutual relations of data examples, known as relation-based knowledge. In recent years, several knowledge distillation methods based on structural relations of data samples have been proposed
[26, 30, 31, 6, 5, 4]. Park et al. [26] proposed a relational knowledge
distillation method (RKD), which transfers the instance relation knowledge
from a teacher to a student. Chen et al. [4] borrowed the idea of manifold
learning to design a novel knowledge distillation method, in which the student
preserves the feature embedding similarities of samples from the teacher. Peng
et al. [31] designed a knowledge transfer method that ensures the student matches the instance correlations of the teacher consistently.
However, those structural knowledge distillation methods often ignore the
knowledge directly from individual samples. Our proposed CTSL-MKT instead
considers the knowledge from both individual instances and instance
relationships, and the bidirectional knowledge transfer is carried out between
peer networks via collaborative learning.
## 3 The Proposed CTSL-MKT Method
Figure 2: The framework of CTSL-MKT with two peer networks. Note that the
losses $L_{KL}(\emph{{p}}_{1},\emph{{p}}_{2})$ and
$L_{KL}(\emph{{p}}_{2},\emph{{p}}_{1})$ are for mutual learning via the
response-based knowledge transfer, $L_{RD}$ for mutual learning via the
relation-based knowledge transfer,
$L_{SD}^{k}(\emph{{p}}_{k}^{t},\bar{\emph{{p}}}_{k}^{t})$ for self-learning
via the response-based knowledge transfer.
CTSL-MKT unifies student self-learning and teacher-student mutual learning under one framework in such a way that it can utilise multiple types of knowledge and more than one distillation method during the teacher-student learning. The teacher-student architectures used in CTSL-MKT are peer networks, such as ResNet [41] and MobileNetV2 [42]. Different from previous works, CTSL-MKT distills both the response-based knowledge from individual instance features and the relation-based knowledge from instance relationships. During teacher-student mutual learning, peer networks trained collaboratively can teach each other via online distillation with the two kinds of knowledge. Meanwhile, each peer network can also self-learn via self-distillation with the response-based knowledge. The two learning processes working together can complement each other to explore different knowledge spaces and enhance the learning. Our CTSL-MKT can be seen as a new model compression technique that generalises the existing self-distillation and online distillation methods, enabling fast computation and improving the generalization ability. The overall framework of CTSL-MKT with two peer networks is shown in Figure 2 as an example, and notations are summarized in Table 1.
Notations | Descriptions
---|---
$X=\\{x_{1},x_{2},\cdots,x_{n}\\}$ | $n$ input samples from $m$ classes
$\emph{{y}}=\\{y^{1},y^{2},\cdots,y^{m}\\}$ | the one-hot label vector for $x\in X$
$\emph{{z}}_{k}(x)=\\{z_{k}^{1},z_{k}^{2},\ldots,z_{k}^{m}\\}$ | logits of a network $N_{k}$ for $x\in X$, where $z_{k}^{i}$ is the logit for class $i$
$\sigma_{i}(\emph{{z}}_{k}(x),t)$ | softmax function with temperature $t$
$p_{k}^{it}=\sigma_{i}(\emph{{z}}_{k}(x),t)$ | output of the softmax for $z_{k}^{i}$
$\emph{{p}}_{k}^{t}=\\{p_{k}^{1t},p_{k}^{2t},\ldots,p_{k}^{mt}\\}$ | predictions of $N_{k}$ with temperature $t$
$\emph{{p}}_{k}=\\{p_{k}^{1},p_{k}^{2},\ldots,p_{k}^{m}\\}$ | predictions of $N_{k}$ when $t=1$
$f(s_{1}^{k},s_{2}^{k},\cdots,s_{n}^{k})$ | similarity loss of $n$ samples in $N_{k}$
Table 1: Notations used in CTSL-MKT.
### 3.1 Teacher-Student Mutual Learning
Teacher-student mutual learning contains the response-based knowledge transfer
and the relation-based knowledge transfer among peer network architectures.
Response-Based Knowledge Transfer: The response-based knowledge (i.e., the
output of a peer network) is learned from individual instances. Given a peer
network $N_{k}$ and its output $\emph{{p}}_{k}$ with temperature parameter
$t=1$, the collaborative response-based knowledge transfer makes the student
network $N_{k}$ imitate the teacher network $N_{k^{\prime}}$ ($k\neq
k^{\prime}$) with the following Kullback-Leibler (KL) divergence loss,
$\small L_{KL}(\emph{{p}}_{k},\emph{{p}}_{k^{\prime}})=\sum_{x\in
X}\sum_{i=1}^{m}\sigma_{i}(\emph{{z}}_{k^{\prime}}(x),1)log\frac{\sigma_{i}(\emph{{z}}_{k^{\prime}}(x),1)}{\sigma_{i}(\emph{{z}}_{k}(x),1)}.$
(1)
Similarly, the loss that the student network $N_{k^{\prime}}$ uses to learn
from the teacher network $N_{k}$ is
$L_{KL}(\emph{{p}}_{k^{\prime}},\emph{{p}}_{k})$.
During the collaborative learning for a classification task, each peer network
$N_{k}$ will then be trained with both the KL divergence loss (Eq (1)) and the
cross-entropy (CE) loss (Eq (2)).
$L_{CE}(\emph{{y}},\emph{{p}}_{k})=-\sum_{x\in
X}\sum_{i=1}^{m}y^{i}log(\sigma_{i}(\emph{{z}}_{k}(x),1))~{}.$ (2)
Taking the two peer networks in Figure 2 as an example, the losses used to train
$N_{1}$ and $N_{2}$ will be
$L_{CE}(\emph{{y}},\emph{{p}}_{1})+L_{KL}(\emph{{p}}_{1},\emph{{p}}_{2})$ and
$L_{CE}(\emph{{y}},\emph{{p}}_{2})+L_{KL}(\emph{{p}}_{2},\emph{{p}}_{1})$,
respectively.
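For readers who prefer code, the response-based part of the mutual learning objective can be sketched in PyTorch as follows; this is our rendering of Eqs. (1)-(2) for one peer at temperature $t=1$, not the authors' released implementation, and detaching the other peer's logits is an implementation choice that treats them as a fixed target.

```python
import torch.nn.functional as F

def mutual_response_loss(logits_k, logits_kp, labels):
    """Training loss of peer N_k for the response-based transfer:
    L_CE(y, p_k) + L_KL(p_k, p_k') of Eqs. (1)-(2), temperature t = 1.
    Gradients flow only into N_k because the other peer is detached.
    """
    ce = F.cross_entropy(logits_k, labels)                    # Eq. (2)
    kl = F.kl_div(F.log_softmax(logits_k, dim=1),
                  F.softmax(logits_kp.detach(), dim=1),
                  reduction="batchmean")                      # Eq. (1)
    return ce + kl
```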
Relation-Based Knowledge Transfer: CTSL-MKT further integrates the relation-
based knowledge learned from the instance relationships via the teacher-
student mutual leaning in order to enrich the transferred knowledge and
enhance the teacher guidance. Let $s_{j}^{k}=\phi_{k}(x_{j})$ (where
$\phi_{k}(.)$ is a feature mapping function of $N_{k}$) be the output of any
layer of the network $N_{k}$ for $x_{j}$, and $\chi^{\tau}$ denote a set of
$\tau$-tuples of different samples. A set of $2$-tuples and a set of
$3$-tuples thus correspond to $\chi^{2}=\left\\{(x_{u},x_{v})|u\neq v\right\\}$ and $\chi^{3}=\left\\{(x_{u},x_{v},x_{w})|u\neq v\neq w\right\\}$,
respectively. As in [26], the relation-based knowledge learned by the network
$N_{k}$ can be modelled jointly by a distance-wise function and an angle-wise
function.
Given $N_{k}$, the distance-wise function captures the similarities between
two samples in a $2$-tuple, which is defined as
$f(s_{u}^{k},s_{v}^{k})=\frac{1}{\pi}||s_{u}^{k}-s_{v}^{k}||_{2}~{},$ (3)
where
$\pi=\frac{1}{|\chi^{2}|}\sum_{(x_{u},x_{v})\in\chi^{2}}||s_{u}^{k}-s_{v}^{k}||_{2}$
is a normalization constant. Accordingly, the instance relationships between
any two peer networks $N_{k}$ and $N_{k^{\prime}}$ are transferred by the
following distance-wise distillation loss
$L_{DD}(x_{u},x_{v})=\sum_{(x_{u},x_{v})\in\chi^{2}}R\big{(}f(s_{u}^{k},s_{v}^{k}),f(s_{u}^{k^{\prime}},s_{v}^{k^{\prime}})\big{)}~{},$
(4)
where $R(.)$ is Huber loss that reflects instance relationships and is defined
as
$R(a,b)=\left\\{\begin{array}[]{lr}\frac{1}{2}(a-b)^{2},\quad if~{}|a-b|\leq
1&\\\ |a-b|-\frac{1}{2},\quad otherwise&\end{array}\right.~{}.$ (5)
Furthermore, the similarities between samples in a $3$-tuple are measured by
an angle-wise function
$f(s_{u}^{k},s_{v}^{k},s_{w}^{k})=\cos\angle
s_{u}^{k}s_{v}^{k}s_{w}^{k}=<e^{uv},e^{wv}>~{},$ (6)
where $e^{uv}=\frac{s_{u}^{k}-s_{v}^{k}}{||s_{u}^{k}-s_{v}^{k}||_{2}}$ and
$e^{wv}=\frac{s_{w}^{k}-s_{v}^{k}}{||s_{w}^{k}-s_{v}^{k}||_{2}}$. The instance
relationships are transferred between any two peer networks $N_{k}$ and
$N_{k^{\prime}}$ with the angle-wise distillation loss, defined as
$\displaystyle L_{AD}(x_{u},x_{v},x_{w})$ (7) $\displaystyle=$
$\displaystyle\sum_{(x_{u},x_{v},x_{w})\in\chi^{3}}R\big{(}f(s_{u}^{k},s_{v}^{k},s_{w}^{k}),f(s_{u}^{k^{\prime}},s_{v}^{k^{\prime}},s_{w}^{k^{\prime}})\big{)}~{}.$
It has been shown that the relation-based knowledge transfer can be more
effective if the distance-wise function is used jointly with the angle-wise
function [26], as they capture different degrees of similarities between
samples. We formulate the instance relation distillation loss used in the
collaborative learning between peer networks as
$L_{RD}=L_{DD}(x_{u},x_{v})+\beta_{1}L_{AD}(x_{u},x_{v},x_{w})~{},$ (8)
where $\beta_{1}$ is a tuning parameter that controls the balance between loss
terms.
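The relation-based loss of Eqs. (3)-(8) admits a compact PyTorch sketch; note that the Huber loss of Eq. (5) coincides with `F.smooth_l1_loss` with its default threshold, while the mean reduction over tuples (the text sums over the tuple sets) is our implementation choice.

```python
import torch
import torch.nn.functional as F

def rkd_loss(s_k, s_kp, beta1=2.0):
    """Instance-relation distillation loss L_RD of Eq. (8) between the
    embeddings s_k, s_kp (shape [n, d]) of two peer networks.
    """
    def pdist_normalized(s):
        d = torch.cdist(s, s)                    # pairwise ||s_u - s_v||_2
        return d / d[d > 0].mean()               # Eq. (3), pi = mean pair distance

    def angle_matrix(s):
        diff = s.unsqueeze(0) - s.unsqueeze(1)   # diff[v, u] = s_u - s_v
        e = F.normalize(diff, p=2, dim=2)        # unit vectors e^{uv}
        return torch.bmm(e, e.transpose(1, 2))   # cosines of Eq. (6)

    l_dd = F.smooth_l1_loss(pdist_normalized(s_k), pdist_normalized(s_kp))  # Eq. (4)
    l_ad = F.smooth_l1_loss(angle_matrix(s_k), angle_matrix(s_kp))          # Eq. (7)
    return l_dd + beta1 * l_ad                                              # Eq. (8)
```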
Consequently, the mutual distillation loss with both the response-based and
the relation-based knowledge between two peer networks ($N_{k}$ and
$N_{k^{\prime}}$) is defined as: for network $N_{k}$, we have
$L_{MD}^{k}=L_{RD}+\beta_{2}L_{KL}(\emph{{p}}_{k},\emph{{p}}_{k^{\prime}})~{},$
(9)
where $\beta_{2}$ is a tuning parameter; for network $N_{k^{\prime}}$, we have
$L_{MD}^{k^{\prime}}=L_{RD}+\beta_{2}L_{KL}(\emph{{p}}_{k^{\prime}},\emph{{p}}_{k})~{}.$
(10)
Algorithm 1 The proposed CTSL-MKT
0: Input samples $X$ with labels, learning rate $\eta$, hyperparameters
$\alpha$, $\beta$, $\gamma$, $\beta_{1}$ and $\beta_{2}$.
1: Initialize: Initialize peer networks $N_{1}$ and $N_{2}$ to different
conditions.
2: Stage 1: Pre-train $N_{1}$ and $N_{2}$ for use in the self-learning stage.
3: for k=1 to 2 do
4: Repeat:
5: Compute the stochastic gradient of $L_{CE}$ in Eq. (2) and update $N_{k}$:
6: $N_{k}\leftarrow N_{k}-\eta\frac{\partial L_{CE}}{\partial N_{k}}$.
7: Until: $L_{CE}$ converges.
8: end for
9: Stage 2: Train $N_{1}$ and $N_{2}$ collaboratively.
10: Repeat:
11: for k=1 to 2 do
12: Compute the stochastic gradient of $L_{KD}^{k}$ in Eq. (12) and update
$N_{k}$:
13: $N_{k}\leftarrow N_{k}-\eta\frac{\partial L_{KD}^{k}}{\partial N_{k}}$.
14: end for
15: Until: $L_{KD}^{k}$ converges.
16: return $N_{1}$ and $N_{2}$.
### 3.2 Student Self-learning
During the collaborative learning between peer networks, if the outputs of the peer networks are very diverse, the mutual knowledge transfer could become poor. Since self-learning via self-distillation can improve the power of knowledge transfer [23], CTSL-MKT further introduces self-learning of each peer network into the collaborative learning via response-based knowledge self-distillation. To conduct self-learning for each peer network $N_{k}$, we
use the outputs $\bar{\emph{{p}}}_{k}^{t}$ of the pre-trained network $N_{k}$
to supervise itself with the following self-distillation loss:
$\small L_{SD}^{k}(\emph{{p}}_{k}^{t},\bar{\emph{{p}}}_{k}^{t})=\sum_{x\in
X}\sum_{i=1}^{m}\sigma_{i}^{t}(\bar{\emph{{z}}}_{k}(x),t)log\frac{\sigma_{i}^{t}(\bar{\emph{{z}}}_{k}(x),t)}{\sigma_{i}^{t}(\emph{{z}}_{k}(x),t)}~{}.$
(11)
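A one-function PyTorch sketch of Eq. (11) (ours, not a reference implementation) reads:

```python
import torch.nn.functional as F

def self_distillation_loss(logits, logits_pretrained, t):
    """Self-distillation loss L_SD^k of Eq. (11): the temperature-softened
    predictions of the frozen, pre-trained copy of N_k supervise N_k itself.
    """
    return F.kl_div(F.log_softmax(logits / t, dim=1),
                    F.softmax(logits_pretrained.detach() / t, dim=1),
                    reduction="batchmean")
```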
### 3.3 The CTSL-MKT Algorithm
Finally, CTSL-MKT conducts mutual learning and self-learning simultaneously in a unified framework, as shown in Algorithm 1. Its objective function for each peer network is defined as
where $\alpha$, $\beta$ and $\gamma$ are the tuning parameters, which balance
the contribution of each loss in the collaborative learning, $L_{CE}^{k}$ is
defined in Eq. (2), $L_{MD}^{k}$ for two peer networks in Eqs. (9) or (10),
and $L_{SD}^{k}$ in Eq. (11).
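Putting the pieces together, the stage-2 objective of Eq. (12) for one peer can be sketched as below, reusing the `rkd_loss` and `self_distillation_loss` helpers from the previous sketches; the `detach` calls mirror Algorithm 1, where the other peer and the pre-trained copy act as fixed targets.

```python
import torch.nn.functional as F

def ctsl_mkt_loss(logits_k, logits_kp, emb_k, emb_kp, logits_k_pre, labels,
                  t, alpha, beta, gamma, beta1=2.0, beta2=2.0):
    """Stage-2 objective L_KD^k of Eq. (12) for peer N_k."""
    ce = F.cross_entropy(logits_k, labels)                       # L_CE^k, Eq. (2)
    kl = F.kl_div(F.log_softmax(logits_k, dim=1),
                  F.softmax(logits_kp.detach(), dim=1),
                  reduction="batchmean")                         # Eq. (1)
    md = rkd_loss(emb_k, emb_kp.detach(), beta1) + beta2 * kl    # L_MD^k, Eq. (9)
    sd = self_distillation_loss(logits_k, logits_k_pre, t)       # L_SD^k, Eq. (11)
    return alpha * ce + beta * md + gamma * sd
```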
## 4 Experiments
Network | Parameter size (CIFAR-100) | B_10 | B_100 | B_Tiny
---|---|---|---|---
ResNet14 | 6.50M | 94.94 | 76.12 | -
ResNet18 | 11.22M | 95.13 | 75.77 | 62.90
ResNet34 | 21.33M | 95.39 | 77.66 | -
VGG19 | 139.99M | 92.83 | 69.42 | -
MobileNetV2 | 2.37M | 90.97 | 68.23 | -
ShuffleNetV2 | 1.36M | 91.03 | 70.10 | -
AlexNet | 57.41M | - | - | 50.25
SqueezeNet | 0.77M | - | - | 43.68
Table 2: The parameter size of each peer network on CIFAR-100 and its
classification performance on three datasets. Note that B_10, B_100, and
B_Tiny denote Top-1 accuracy (%) achieved by each peer network on CIFAR-10,
CIFAR-100 and Tiny-ImageNet, respectively.
We conducted extensive experiments to verify the effectiveness of CTSL-MKT on
image classification tasks using datasets including CIFAR-10 [47], CIFAR-100
[47], Tiny-ImageNet [48] and Market-1501 [49]. The peer network architectures
were chosen from ResNet [41], MobileNet [42], ShuffleNet [43], VGG [45],
AlexNet [44] and SqueezeNet [46]. CTSL-MKT was compared to the state-of-the-
art KD methods, which are DML [25], Tf-KD [23] and RKD [26]. For a fair
comparison, RKD uses online distillation with the peer networks. In all the
experiments, the relation-based knowledge was modelled on the final feature embeddings output by the peer networks.
### 4.1 Datasets and Settings
CIFAR-10 and CIFAR-100. Both datasets have 60,000 $32\times 32$ images, where
50,000 images are for training and the other 10,000 images are for testing.
The number of classes is 10 for CIFAR-10 and 100 for CIFAR-100, and each class
has the same numbers of samples in both the training and the testing sets. On
each dataset, data augmentation with random crops and horizontal flips was
used to change the zero-padded $40\times 40$ images to $32\times 32$ ones. The
peer networks were trained for 200 epochs with batch size 128 and initial
learning rate 0.1 which is then multiplied by 0.2 at 60, 120, and 160 epochs.
Temperature parameter was set to 10 for CIFAR-10 and 3 for CIFAR-100.
Tiny-ImageNet. Tiny-ImageNet contains 100,000 training and 10,000 testing
$64\times 64$ images from 200 classes, each of which has the same number of
samples. Each image was randomly resized to $224\times 224$. The peer networks
were trained for 90 epochs with batch size 64 and an initial learning rate of
0.1, which is multiplied by 0.1 at epochs 30, 60 and 80. The temperature
parameter was set to 2.
Market-1501. Market-1501 includes 32,688 images of 1,501 identities captured
from six camera views. It provides 751 identities for training and 750 for
testing. Each image was zero-padded by 10 pixels on each side, and data
augmentation with random crops (to $256\times 128$) and horizontal flips was
applied. The peer networks were trained for 60 epochs with batch size 32 and
an initial learning rate of 0.05, which is multiplied by 0.2 at epoch 40. The
temperature parameter was set to 6.
We used the SGD optimizer for training the peer networks with momentum 0.9 and
weight decay 5e-4, and all input images were normalized by Mean-Std
normalization. All the hyper-parameters were greedily searched and set as
follows. On CIFAR and Tiny-ImageNet, we set $\alpha=0.1$, $\beta=0.05$ and
$\gamma=0.9$ for MobileNet and ShuffleNet, and $\alpha=0.4$, $\beta=0.4$ and
$\gamma=0.6$ for the other networks. On Market-1501, we set $\alpha=1$,
$\beta=0.9$ and $\gamma=1$ for all the networks. In addition, both $\beta_{1}$
and $\beta_{2}$ were set to 2. In all the experiments, we considered a pair of
peer networks with either the same architecture or different architectures.
Table 2 shows the parameter size of each network on CIFAR-100 and its Top-1
accuracy on the two CIFAR datasets and Tiny-ImageNet, which serves as a
baseline.
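For concreteness, the reported optimizer and CIFAR learning-rate schedule translate into the following sketch; the `model` placeholder and the epoch-loop stub are ours, not the authors' code:

```python
import torch

model = torch.nn.Linear(10, 10)  # placeholder for a peer network
# SGD with momentum 0.9 and weight decay 5e-4, as reported above.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
# On CIFAR the initial lr 0.1 is multiplied by 0.2 at epochs 60/120/160.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[60, 120, 160], gamma=0.2)

for epoch in range(200):
    # ... one epoch of CTSL-MKT training steps goes here ...
    scheduler.step()
```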
### 4.2 Results on CIFAR-10
Table 3 reports the average Top-1 accuracy of CTSL-MKT and the state-of-the-
art competitors on CIFAR-10. It is not surprising that the knowledge
distillation methods perform better than the corresponding single networks
thanks to the knowledge transfer, except for DML and RKD with ResNet14 and
ResNet18. A possible reason for the slightly poorer performance of DML and RKD
with ResNet14 or ResNet18 is that the discrepancies between the outputs of
small peer networks on individual instances hinder mutual learning. Among the
knowledge distillation methods, CTSL-MKT performs the best with a significant
improvement, which indicates that our idea of collaborative learning with
multiple knowledge transfer is effective. Meanwhile, CTSL-MKT outperforms all
the corresponding baselines shown in Table 2 by a noticeable margin. For
example, CTSL-MKT with ShuffleNetV2-MobileNetV2 increases the Top-1 accuracy
by 1.68% and 1.18% over the corresponding single-network baselines, i.e.,
ShuffleNetV2 and MobileNetV2, respectively. Moreover, although CTSL-MKT, DML
and RKD all learn two peer networks collaboratively, which can have the same
network structure or different ones, each peer network in CTSL-MKT (i.e.,
CTSL-MKT_$N_{1}$ or CTSL-MKT_$N_{2}$) performs much better than its
counterpart in DML and RKD, owing to the multiple knowledge transfer.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet14 | ResNet14 | 95.08$\pm$0.01 | 94.78$\pm$0.02 | 94.92$\pm$0.01 | 94.95$\pm$0.01 | 94.83$\pm$0.02 | 95.28$\pm$0.01 | 95.22$\pm$0.01
ResNet18 | ResNet18 | 95.20$\pm$0.01 | 94.88$\pm$0.01 | 94.99$\pm$0.01 | 94.98$\pm$0.04 | 94.92$\pm$0.01 | 95.29$\pm$0.04 | 95.33$\pm$0.03
ResNet34 | ResNet34 | 95.41$\pm$0.01 | 95.42$\pm$0.01 | 95.32$\pm$0.01 | 95.45$\pm$0.01 | 95.45$\pm$0.01 | 95.69$\pm$0.03 | 95.59$\pm$0.01
MobileNetV2 | MobileNetV2 | 91.72$\pm$0.01 | 91.19$\pm$0.07 | 91.32$\pm$0.04 | 91.12$\pm$0.03 | 90.71$\pm$0.06 | 92.12$\pm$0.02 | 92.12$\pm$0.02
ShuffleNetV2 | ShuffleNetV2 | 92.47$\pm$0.01 | 91.97$\pm$0.03 | 91.92$\pm$0.01 | 92.08$\pm$0.01 | 91.59$\pm$0.01 | 92.64$\pm$0.01 | 92.49$\pm$0.01
ResNet18 | ResNet34 | - | 95.09$\pm$0.01 | 95.41$\pm$0.03 | 95.12$\pm$0.01 | 95.31$\pm$0.01 | 95.24$\pm$0.01 | 95.60$\pm$0.01
ResNet18 | VGG19 | - | 95.11$\pm$0.03 | 93.49$\pm$0.02 | 95.03$\pm$0.01 | 93.50$\pm$0.01 | 95.16$\pm$0.01 | 93.91$\pm$0.01
ShuffleNetV2 | MobileNetV2 | - | 91.78$\pm$0.03 | 91.25$\pm$0.08 | 91.73$\pm$0.01 | 90.72$\pm$0.01 | 92.71$\pm$0.01 | 92.15$\pm$0.01
Table 3: The average Top-1 accuracy (%) over three individual runs on
CIFAR-10.
### 4.3 Results on CIFAR-100
Table 4 reports the average Top-1 accuracy of all the competing knowledge
distillation methods with various network architectures on CIFAR-100. We have
similar observations as those on CIFAR-10. Overall, each competing method
improves on the performance of the corresponding baseline, and CTSL-MKT gains
the largest improvement. Compared to Tf-KD, DML and RKD, the Top-1 accuracy of
each peer network in CTSL-MKT improves by about 1% on average. For example,
the accuracy of the two MobileNetV2 networks in CTSL-MKT increases by 2.46%
and 2.68% respectively over their counterparts in DML, and by 3.08% and 2.96%
over those in RKD.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet14 | ResNet14 | 76.67$\pm$0.02 | 75.97$\pm$0.01 | 76.16$\pm$0.11 | 76.36$\pm$0.03 | 76.30$\pm$0.01 | 77.00$\pm$0.05 | 76.85$\pm$0.04
ResNet18 | ResNet18 | 77.04$\pm$0.12 | 76.10$\pm$0.10 | 76.27$\pm$0.07 | 76.43$\pm$0.01 | 76.09$\pm$0.01 | 77.43$\pm$0.10 | 77.46$\pm$0.01
ResNet34 | ResNet34 | 77.93$\pm$0.01 | 77.88$\pm$0.12 | 77.61$\pm$0.03 | 77.63$\pm$0.03 | 77.65$\pm$0.05 | 78.58$\pm$0.01 | 78.24$\pm$0.02
MobileNetV2 | MobileNetV2 | 70.82$\pm$0.02 | 68.98$\pm$0.01 | 68.58$\pm$0.18 | 68.36$\pm$0.01 | 68.30$\pm$0.01 | 71.44$\pm$0.06 | 71.26$\pm$0.08
ShuffleNetV2 | ShuffleNetV2 | 71.79$\pm$0.02 | 70.47$\pm$0.15 | 70.29$\pm$0.04 | 70.24$\pm$0.01 | 69.98$\pm$0.03 | 72.13$\pm$0.02 | 71.69$\pm$0.05
ResNet18 | ResNet34 | - | 76.15$\pm$0.10 | 77.71$\pm$0.01 | 76.41$\pm$0.05 | 77.83$\pm$0.01 | 77.61$\pm$0.08 | 78.15$\pm$0.12
ResNet18 | VGG19 | - | 76.51$\pm$0.02 | 68.80$\pm$3.74 | 76.29$\pm$0.02 | 68.28$\pm$0.87 | 77.23$\pm$0.02 | 72.72$\pm$0.06
ShuffleNetV2 | MobileNetV2 | - | 70.47$\pm$0.13 | 68.83$\pm$0.14 | 70.50$\pm$0.28 | 67.87$\pm$0.01 | 72.46$\pm$0.15 | 71.34$\pm$0.09
Table 4: The average Top-1 accuracy (%) over three individual runs on
CIFAR-100.
Figure 3: The training loss over epochs on the CIFAR-100 training set for (a) ShuffleNetV2 and (b) MobileNetV2.
Figure 4: The Top-1 accuracy over epochs on the CIFAR-100 testing set for (a) ShuffleNetV2 and (b) MobileNetV2.
To further illustrate the learning process of the peer networks in CTSL-MKT,
Figure 3 plots the training loss of ShuffleNetV2 and MobileNetV2 as a function
of epochs, compared to Tf-KD, DML and RKD. It shows that CTSL-MKT and Tf-KD
converge better than DML and RKD. A possible reason is that the self-learning
in CTSL-MKT and Tf-KD lets each network overcome the discrepancy between the
outputs of the peer networks that affects DML and RKD during learning.
Although CTSL-MKT with multiple knowledge transfer introduces extra
hyper-parameters, it still converges fast, achieving a training loss
comparable with Tf-KD. The loss generally becomes stable around 120 epochs.
Furthermore, Figure 4 displays the corresponding Top-1 accuracy of each peer
network after each epoch on the testing dataset. It shows that the proposed
CTSL-MKT outperforms the others after convergence; its performance improves as
the training loss decreases. Overall, these patterns show that the two peer
networks in CTSL-MKT work collaboratively, teaching and learning from each
other at each epoch, and each network gradually improves itself to achieve a
better performance.
### 4.4 Results on Tiny-ImageNet
Table 5 shows the average Top-1 accuracy of the competing methods with five
peer-network pairings on Tiny-ImageNet. From the comparative results, it can
be seen that CTSL-MKT significantly outperforms the baselines, DML, RKD and
Tf-KD. However, on these five pairings, some peer networks in DML, RKD and
Tf-KD achieve poor performance compared to their baselines. A possible reason
is that the peer networks used here are smaller, carry less informative
knowledge for the transfer, and their outputs on the same individual instances
might differ, which degrades the effectiveness of mutual learning. With
multiple kinds of knowledge and distillation strategies, our CTSL-MKT
nevertheless improves the performance via mutual learning.
Network $N_{1}$ | Network $N_{2}$ | Tf-KD | DML_$N_{1}$ | DML_$N_{2}$ | RKD_$N_{1}$ | RKD_$N_{2}$ | CTSL-MKT_$N_{1}$ | CTSL-MKT_$N_{2}$
---|---|---|---|---|---|---|---|---
ResNet18 | ResNet18 | 63.29$\pm$0.02 | 62.30$\pm$0.01 | 62.39$\pm$0.03 | 62.80$\pm$0.01 | 62.42$\pm$0.08 | 63.63$\pm$0.08 | 63.64$\pm$0.02
AlexNet | AlexNet | 49.78$\pm$0.01 | 44.47$\pm$0.01 | 44.80$\pm$0.01 | 43.54$\pm$0.01 | 42.97$\pm$0.01 | 51.39$\pm$0.01 | 51.28$\pm$0.01
SqueezeNet | SqueezeNet | 41.66$\pm$0.01 | 47.16$\pm$0.03 | 46.95$\pm$0.19 | 48.22$\pm$0.01 | 48.55$\pm$0.09 | 48.60$\pm$0.30 | 48.86$\pm$0.03
AlexNet | SqueezeNet | - | 44.35$\pm$0.51 | 46.15$\pm$0.30 | 44.66$\pm$1.87 | 46.86$\pm$0.41 | 50.98$\pm$0.08 | 47.99$\pm$0.03
ResNet18 | AlexNet | - | 62.62$\pm$0.11 | 43.53$\pm$0.62 | 62.37$\pm$0.01 | 46.64$\pm$0.03 | 63.37$\pm$0.01 | 51.56$\pm$0.02
Table 5: The average Top-1 accuracy (%) over three individual runs on Tiny-
ImageNet.
### 4.5 Results on Market-1501
We further compared those methods on Market-1501, which is used for a re-
identification (re-id) task. In this set of experiments, we adopted ResNet50,
which is commonly used on this dataset, as the architecture of both peer
networks. Figure 5 shows the performance of Tf-KD, DML, RKD and CTSL-MKT,
measured by Rank-1, Rank-5, Rank-10 and mAP. Note that these results were
computed for only one peer network. DML and RKD with collaborative learning
consistently perform better than Tf-KD with self-learning. Our CTSL-MKT
outperforms both DML and RKD across all the metrics. Specifically, in terms of
mAP, the improvement of CTSL-MKT over DML, RKD and Tf-KD is 1.22%, 0.8% and
4.69%, respectively.
Figure 5: Comparative results (%) on Market-1501.
### 4.6 Ablation Study
CTSL-MKT contains three knowledge distillation strategies, i.e., mutual
learning via response-based knowledge transfer from individual instances
(MLI), mutual learning via relation-based knowledge transfer from instance
relationships (MLR) and self-learning via response-based knowledge transfer
from individual instances (SLI). To study how each strategy contributes to the
model performance, we consider the following four variants of CTSL-MKT (a
loss-toggling sketch follows the list):
1. _A_)
the full model using the three strategies altogether, where we used both
online distillation and self-distillation with the two kinds of knowledge;
2. _B_)
the model using online distillation only with both the response-based
knowledge (MLI) and the relation-based knowledge (MLR);
3. _C_)
the model using online distillation with the relation-based knowledge (MLR)
and self-distillation with the response-based knowledge (SLI);
4. _D_)
the model using both online distillation and self-distillation with only the
response-based knowledge, corresponding to MLI + SLI.
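The four variants amount to switching individual loss terms of Eq. (12) on or off; a minimal sketch of this encoding (the flag layout and helper are ours, and we abbreviate the split of $L_{MD}^{k}$ into its response- and relation-based parts) is:

```python
# (MLI, MLR, SLI) flags per ablation case, mirroring Table 6.
VARIANTS = {
    "A": (True,  True,  True),   # full model
    "B": (True,  True,  False),  # no self-distillation
    "C": (False, True,  True),   # no response-based mutual learning
    "D": (True,  False, True),   # no relation-based mutual learning
}

def ablation_loss(l_ce, l_mli, l_mlr, l_sli, alpha, beta, gamma, case):
    """Eq. (12) with individual strategies toggled on/off (a sketch)."""
    mli, mlr, sli = VARIANTS[case]
    l_md = mli * l_mli + mlr * l_mlr   # surviving mutual-learning terms
    return alpha * l_ce + beta * l_md + gamma * (sli * l_sli)
```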
Table 6 reports the average Top-1 accuracy of these four variants with
different pairs of peer network architectures. We have the following
observations: 1) Variant A (i.e., the full model) outperforms the other
variants, in each of which one knowledge distillation strategy has been
removed. This implies that the use of multiple types of knowledge with both
online distillation and self-distillation plays a crucial role in the
performance gain. 2) Variant B, without self-distillation, has the largest
performance drop compared with variants C and D. This indicates that
self-distillation contributes substantially to the overall performance, as it
can offset the diversity issue caused by mutual learning and further enhance
the knowledge distillation efficiency. 3) DML, Tf-KD and RKD can be seen as
special cases of CTSL-MKT using only one knowledge distillation strategy.
Jointly looking at Tables 6 and 4 reveals that knowledge distillation methods
with two or more strategies almost always outperform those using only one
strategy. Therefore, it is clear that knowledge distillation via proper
multiple knowledge transfer is very beneficial for improving the performance
of model compression.
Case | MLI | MLR | SLI | ResNet14 | ResNet14 | ResNet18 | ResNet18 | MobileNetV2 | MobileNetV2
---|---|---|---|---|---|---|---|---|---
A | ✓ | ✓ | ✓ | 77.00$\pm$0.05 | 76.85$\pm$0.04 | 77.43$\pm$0.10 | 77.46$\pm$0.01 | 71.44$\pm$0.06 | 71.26$\pm$0.08
B | ✓ | ✓ | ✗ | 76.57$\pm$0.04 | 76.37$\pm$0.02 | 76.67$\pm$0.04 | 76.66$\pm$0.09 | 69.10$\pm$0.01 | 69.23$\pm$0.04
C | ✗ | ✓ | ✓ | 76.69$\pm$0.02 | 76.70$\pm$0.04 | 77.35$\pm$0.04 | 77.29$\pm$0.08 | 71.18$\pm$0.05 | 71.10$\pm$0.08
D | ✓ | ✗ | ✓ | 76.73$\pm$0.03 | 76.70$\pm$0.08 | 77.26$\pm$0.07 | 77.12$\pm$0.04 | 71.14$\pm$0.02 | 71.04$\pm$0.10
(a) Ablation experiments on the same peer network architectures; each pair of
accuracy columns lists the two peers $N_{1}$, $N_{2}$.
Case | MLI | MLR | SLI | ResNet14 | ResNet18 | ResNet18 | ResNet34 | ShuffleNetV2 | MobileNetV2
---|---|---|---|---|---|---|---|---|---
A | ✓ | ✓ | ✓ | 77.07$\pm$0.03 | 77.28$\pm$0.04 | 77.61$\pm$0.08 | 78.15$\pm$0.12 | 72.46$\pm$0.15 | 71.34$\pm$0.09
B | ✓ | ✓ | ✗ | 76.35$\pm$0.01 | 76.53$\pm$0.07 | 76.53$\pm$0.02 | 77.83$\pm$0.01 | 70.57$\pm$0.03 | 68.69$\pm$0.01
C | ✗ | ✓ | ✓ | 76.69$\pm$0.23 | 77.06$\pm$0.02 | 77.12$\pm$0.01 | 77.99$\pm$0.03 | 72.06$\pm$0.04 | 71.10$\pm$0.11
D | ✓ | ✗ | ✓ | 76.68$\pm$0.03 | 77.13$\pm$0.02 | 77.39$\pm$0.01 | 77.68$\pm$0.01 | 72.20$\pm$0.09 | 71.05$\pm$0.15
(b) Ablation experiments on different peer network architectures
Table 6: Ablation study of CTSL-MKT in terms of the average Top-1 accuracy
over three individual runs on CIFAR-100.
### 4.7 Experiment Discussion
The experimental results reported above have demonstrated the effectiveness of
the proposed CTSL-MKT, while being compared with several state-of-the-art
knowledge distillation methods. We have the following remarks:
* •
Collaborative learning can make peer networks teach and learn from each other,
and iteratively improve themselves.
* •
Self-learning of each peer network can further enhance the ability of mutual
learning among peer networks by compensating the loss caused by the diversity
issue.
* •
Multiple knowledge transfer with more than one type of knowledge and
distillation strategy can significantly improve the KD performance.
* •
Various peer network architectures (i.e., teacher-student architectures) can
be easily adopted for knowledge transfer via collaborative learning.
## 5 Conclusions
In this paper, we propose a novel knowledge distillation method called
collaborative teacher-student learning via multiple knowledge transfer (CTSL-
MKT). It naturally integrates both self-learning via self-distillation and
collaborative learning via online distillation in a unified framework, so that
multiple kinds of knowledge can be transferred effectively between different
teacher-student architectures, and CTSL-MKT can achieve individual-instance
consistency and instance-correlation consistency among the peer networks.
Experimental results on four image classification datasets demonstrate that
CTSL-MKT outperforms the competitors by a noticeable margin, which confirms
the benefit of using different distillation schemes to transfer multiple types
of knowledge simultaneously. We believe that our proposed framework opens the
door to designing multiple knowledge transfer for knowledge distillation.
## References
* [1] J. Gou, B. Yu, S. J. Maybank, and D. Tao, “Knowledge Distillation: A Survey,” _arXiv:2006.05525_ , 2020.
* [2] H. Li, A. Kadav, I. Durdanovic, H. Samet, and H. P. Graf, “Pruning Filters for Efficient ConvNets,” _Int. Conf. Learn. Represent._ , 2017.
* [3] S. Han, J. Pool, J. Tran, and W. J. Dally, “Learning both Weights and Connections for Efficient Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2015.
* [4] H. Chen, Y. Wang, C. Xu, C. Xu, and D. Tao, “Learning student networks via feature embedding,” _IEEE Trans. Neur. Net. Lear._ , 2020.
* [5] F. Tung, and G. Mori, “Similarity-preserving knowledge distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [6] Y. Liu, C. Shu, J. Wang, and C. Shen, “Structured knowledge distillation for dense prediction,” _IEEE Trans. Pattern Anal. Mach. Intell._ , 2020.
* [7] K. Li, L. Yu, S. Wang, and P. A. Heng, “Towards Cross-Modality Medical Image Segmentation with Online Mutual Knowledge Distillation,” _AAAI_ , 2020.
* [8] A. Yao, and D. Sun, “Knowledge Transfer via Dense Cross-Layer Mutual-Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [9] Q. Guo, X. Wang, Y. Wu, Z. Yu, D. Liang, X. Hu, and P. Luo, “Online Knowledge Distillation via Collaborative Learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [10] C. Yang, L. Xie, C. Su, and A. L. Yuille, “Snapshot distillation: Teacher-student optimization in one generation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [11] M. Phuong, and C. H. Lampert, “Distillation-based training for multi-exit architectures,” _Int. Conf. Comput. Vis._ , 2019.
* [12] D. Walawalkar, Z. Shen, and M. Savvides, “Online Ensemble Model Compression using Knowledge Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [13] T. Li, J. Li, Z. Liu, and C. Zhang, “Few sample knowledge distillation for efficient network compression,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [14] C. Shen, M. Xue, X. Wang, J. Song, L. Sun, and M. Song, “Customizing student networks from heterogeneous teachers via adaptive knowledge amalgamation,” _Int. Conf. Comput. Vis._ , 2019.
* [15] J. Yim, D. Joo, J. Bae, and J. Kim, “A gift from knowledge distillation: Fast optimization, network minimization and transfer learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2017.
* [16] N. Passalis, M. Tzelepi, and A. Tefas, “Heterogeneous Knowledge Distillation using Information Flow Modeling,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [17] L. Yu, V. O. Yazici, X. Liu, J. Weijer, Y. Cheng, and A. Ramisa, “Learning metrics from teachers: Compact networks for image embedding,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [18] K. Xu, L. Rui, Y. Li, and L. Gu, “Feature Normalized Knowledge Distillation for Image Classification,” _Eur. Conf. Comput. Vis._ , 2020.
* [19] Y. Guan, P. Zhao, B. Wang, Y. Zhang, C. Yao, K. Bian, and J. Tang, “Differentiable Feature Aggregation Search for Knowledge Distillation,” _Eur. Conf. Comput. Vis._ , 2020.
* [20] G. Hinton, O. Vinyals, and J. Dean, “Distilling the Knowledge in a Neural Network,” _arXiv: 1503.02531_ , 2015.
* [21] A. Romero, N. Ballas, and S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio, “FitNets: Hints for Thin Deep Nets,” _Int. Conf. Learn. Represent._ , 2015.
* [22] S.-I. Mirzadeh, M. Farajtabar, A. Li, and H. Ghasemzadeh, “Improved Knowledge Distillation via Teacher Assistant,” _AAAI_ , 2020.
* [23] L. Yuan, F. E. Tay, G. Li, T. Wang, and J. Feng, “Revisiting Knowledge Distillation via Label Smoothing Regularization,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [24] L. Zhang, J. Song, A. Gao, J. Chen, C. Bao, and K. Ma, “Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [25] Y. Zhang, T. Xiang, T. M. Hospedales, and H. Lu, “Deep Mutual Learning,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [26] W. Park, D. Kim, Y. Lu, and M. Cho, “Relational Knowledge Distillation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [27] S. Yun, J. Park, K. Lee, and J. Shin, “Regularizing Class-wise Predictions via Self-knowledge Distillation,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2020.
* [28] X. Lan, X. Zhu, and S. Gong, “Knowledge Distillation by On-the-Fly Native Ensemble,” _Adv. Neural Inform. Process. Syst._ , 2018.
* [29] G. Wu and S. Gong, “Peer Collaborative Learning for Online Knowledge Distillation,” _arXiv:2006.04147_ , 2020.
* [30] Y. Liu, J. Cao, B. Li, C. Yuan, W. Hu, Y. Li, and Y. Duan, “Knowledge Distillation via Instance Relationship Graph,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2019.
* [31] B. Peng, X. Jin, J. Liu, D. Li, Y. Wu, Y. Liu, S. Zhou, and Z. Zhang, “Correlation Congruence for Knowledge Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [32] J. Kim, M. Hyun, I. Chung, and N. Kwak, “Feature Fusion for Online Mutual Knowledge Distillation,” _Int. Conf. Pattern Recog._ , 2019.
* [33] S. Hou, X. Liu, and Z. Wang, “DualNet: Learn Complementary Features for Image Recognition,” _Int. Conf. Comput. Vis._ , 2017.
* [34] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, “XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks,” _Eur. Conf. Comput. Vis._ , 2016.
* [35] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, “Binarized Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2016.
* [36] D. Chen, J.-P. Mei, C. Wang, Y. Feng, and C. Chen, “Online Knowledge Distillation with Diverse Peers,” _AAAI_ , 2020.
* [37] S. You, C. Xu, C. Xu, and D. Tao, “Learning from Multiple Teacher Networks,” _ACM SIGKDD_ , 2017.
* [38] X. Wu, R. He, Y. Hu, and Z. Sun, “Learning an Evolutionary Embedding via Massive Knowledge Distillation,” _Int. J. Comput. Vis._ , 2020, pp. 2089–2106.
* [39] T. Xu, and C. Liu, “Data-Distortion Guided Self-Distillation for Deep Neural Networks,” _AAAI_ , 2019.
* [40] J. H. Cho, and B. Hariharan, “On the Efficacy of Knowledge Distillation,” _Int. Conf. Comput. Vis._ , 2019.
* [41] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2016.
* [42] M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L.-C. Chen, “MobileNetV2: Inverted Residuals and Linear Bottlenecks,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2018.
* [43] N. Ma, X. Zhang, H.-T. Zheng, and J. Sun, “ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design,” _Eur. Conf. Comput. Vis._ , 2018.
* [44] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” _Adv. Neural Inform. Process. Syst._ , 2012.
* [45] K. Simonyan, and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” _Int. Conf. Learn. Represent._ , 2015.
* [46] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, “SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size,” _arXiv:1602.07360_ , 2016.
* [47] A. Krizhevsky, and G. Hinton, “Learning multiple layers of features from tiny images,” _Technical Report._ , 2009.
* [48] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and F.-F. Li, “ImageNet: A large-scale hierarchical image database,” _IEEE Conf. Comput. Vis. Pattern Recog._ , 2009.
* [49] L. Zheng, L. Shen, L. Tian, S. Wang, J. Wang, Q. Tian, “Scalable Person Re-Identification: A Benchmark,” _Int. Conf. Comput. Vis._ , 2015.
# A priori and a posteriori error analysis of the lowest-order NCVEM for second-order linear indefinite elliptic problems
Carsten Carstensen, Rekha Khot and Amiya K. Pani
Department of Mathematics, Humboldt-Universität zu Berlin, 10099 Berlin, Germany. Email: <EMAIL_ADDRESS>
Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai, 400076. Email: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
The nonconforming virtual element method (NCVEM) for the approximation of the
weak solution to a general linear second-order non-selfadjoint indefinite
elliptic PDE in a polygonal domain $\Omega$ is analyzed under reduced elliptic
regularity. The main tool in the a priori error analysis is the connection
between the nonconforming virtual element space and the Sobolev space
$H^{1}_{0}(\Omega)$ by a right-inverse $J$ of the interpolation operator
$I_{h}$. The stability of the discrete solution allows for the proof of
existence of a unique discrete solution, of a discrete inf-sup estimate and,
consequently, for optimal error estimates in the $H^{1}$ and $L^{2}$ norms.
The explicit residual-based a posteriori error estimate for the NCVEM is
reliable and efficient up to oscillation terms. Numerical experiments on
different types of polygonal meshes illustrate the robustness of the error
estimator and support the improved convergence rate of adaptive mesh-
refinement in comparison to uniform mesh-refinement.
Keywords: second-order linear indefinite elliptic problems, virtual elements,
nonconforming, polytopes, enrichment, stability, a priori error estimates,
residual-based a posteriori error estimate, adaptive mesh-refinement.
AMS subject classifications: 65N12, 65N15, 65N30, 65N50.
## 1 Introduction
The nonconforming virtual element method approximates the weak solution $u\in
H^{1}_{0}(\Omega)$ to the second-order linear elliptic boundary value problem
$\displaystyle{\cal L}u:=-\text{div}(\textbf{A}\nabla u+\textbf{b}u)+\gamma
u=f\quad\mbox{in}\quad\Omega$ (1.1)
for a given $f\in L^{2}(\Omega)$ in a bounded polygonal Lipschitz domain
$\Omega\subset{\mathbb{R}}^{2}$ subject to homogeneous Dirichlet boundary
conditions.
### 1.1 General introduction
The virtual element method (VEM) introduced in [4] is one of the well-received
polygonal methods for approximating the solutions to partial differential
equations (PDEs) in the continuation of the mimetic finite difference method
[7]. This method is becoming increasingly popular [1, 6, 5, 16, 17, 3] for its
ability to deal with fairly general polygonal/polyhedral meshes. On account of
its versatility with respect to the shape of the polygonal domains, the local
finite-dimensional space (the space of shape functions) comprises
non-polynomial functions. The novelty of this approach lies in the fact that
it does not demand the explicit construction of these non-polynomial
functions: the knowledge of the degrees of freedom, along with suitable
projections onto polynomials, is sufficient to implement the method.
Recently, Beirão da Veiga et al. discuss a conforming VEM for the indefinite
problem (1.1) in [6]. Cangiani et al. [17] develop a nonconforming VEM under
the additional condition
$\displaystyle 0\leq\gamma-\frac{1}{2}\text{div}(\textbf{b}),$ (1.2)
which makes the bilinear form coercive and significantly simplifies the
analysis. The two papers [6, 17] prove a priori error estimates for a solution
$u\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)$ in a convex domain $\Omega$. The a
priori error analysis for the nonconforming VEM in [17] can be extended to the
case when the exact solution $u\in H^{1+\sigma}(\Omega)\cap H^{1}_{0}(\Omega)$
with $\sigma>1/2$ as it is based on traces. This paper shows it for all
$\sigma>0$ and circumvents any trace inequality. Huang et al. [31] discuss the
a priori error analysis of the nonconforming VEM applied to the Poisson and
biharmonic problems for $\sigma>0$. The a posteriori error estimate in [16]
explores the conforming VEM for (1.1) under the assumption (1.2). There are a
few contributions [16, 9, 34] on residual-based a posteriori error control for
the conforming VEM. This paper presents a priori and a posteriori error
estimates for the nonconforming VEM without (1.2), but under the assumption
that the Fredholm operator ${\cal L}$ is injective.
### 1.2 Assumptions on (1.1)
This paper solely imposes the following assumptions (A1)-(A3) on the
coefficients $\textbf{A},\textbf{b},\gamma$ and the operator ${\cal L}$ in
(1.1) with $f\in L^{2}(\Omega)$.
1. (A1)
The coefficients $\textbf{A}_{jk},\textbf{b}_{j},\gamma$ for $j,k=1,2$ are
piecewise Lipschitz continuous functions. For any decomposition $\cal{T}$
(admissible in the sense of Subsection $2.1$) and any polygonal domain
$P\in\cal{T}$, the coefficients $\textbf{A},\textbf{b},\gamma$ are bounded
pointwise a.e. by
$\|\textbf{A}\|_{\infty},\|\textbf{b}\|_{\infty},\|\gamma\|_{\infty}$ and
their piecewise first derivatives by
$|\textbf{A}|_{1,\infty},|\textbf{b}|_{1,\infty},|\gamma|_{1,\infty}$.
2. (A2)
There exist positive constants $a_{0}$ and $a_{1}$ such that, for a.e.
$x\in\Omega$, $\textbf{A}(x)$ is SPD and
$\displaystyle
a_{0}|\xi|^{2}\leq\sum_{j,k=1}^{2}\textbf{A}_{jk}(x)\xi_{j}\xi_{k}\leq
a_{1}|\xi|^{2}\quad\text{for all}\;\xi\in{\mathbb{R}}^{2}.$ (1.3)
3. (A3)
The linear operator ${\cal L}:H^{1}_{0}(\Omega)\to H^{-1}(\Omega)$ is
injective, i.e., zero is not an eigenvalue of ${\cal L}$ .
Since the bounded linear operator ${\cal L}$ is a Fredholm operator [30, p.
321], (A3) implies that ${\cal L}$ is bijective with bounded inverse ${\cal
L}^{-1}:H^{-1}(\Omega)\to H^{1}_{0}(\Omega)$. The Fredholm theory also entails
the existence of a unique solution to the adjoint problem, that is, for every
$g\in L^{2}(\Omega)$, there exists a unique solution $\Phi\in
H^{1}_{0}(\Omega)$ to
$\displaystyle{\cal
L}^{*}\Phi:=-\text{div}(\textbf{A}\nabla\Phi)+\textbf{b}\cdot\nabla\Phi+\gamma\Phi=g.$
(1.4)
The bounded polygonal Lipschitz domain $\Omega$, the homogeneous Dirichlet
boundary conditions, and (A1)-(A2) lead to some $0<\sigma\leq 1$ and positive
constants $C_{\text{reg}}$ and $C^{*}_{\text{reg}}$ (depending only on
$\sigma,\Omega$ and coefficients of ${\cal L}$) such that, for any $f,g\in
L^{2}(\Omega)$, the unique solution $u$ to (1.1) and the unique solution
$\Phi$ to (1.4) belong to $H^{1+\sigma}(\Omega)\cap H^{1}_{0}(\Omega)$ and
satisfy
$\displaystyle\|u\|_{1+\sigma,\Omega}\leq
C_{\text{reg}}\|f\|_{L^{2}(\Omega)}\;\text{
and}\;\;\|\Phi\|_{1+\sigma,\Omega}\leq
C^{*}_{\text{reg}}\|g\|_{L^{2}(\Omega)}.$ (1.5)
(The restriction $\sigma\leq 1$ is for convenience owing to the limitation to
first-order convergence of the scheme.)
### 1.3 Weak formulation
Given the coefficients $\textbf{A},\textbf{b},\gamma$ with (A1)-(A2), define,
for all $u,v\in V:=H^{1}_{0}(\Omega)$,
$a(u,v):=(\textbf{A}\nabla u,\nabla v)_{L^{2}(\Omega)},\hskip
14.22636ptb(u,v):=(u,\textbf{b}\cdot\nabla v)_{L^{2}(\Omega)},\hskip
14.22636ptc(u,v):=(\gamma u,v)_{L^{2}(\Omega)}$ (1.6)
and
$B(u,v):=a(u,v)+b(u,v)+c(u,v)$ (1.7)
(with piecewise versions $a_{\mathrm{pw}},b_{\mathrm{pw}},c_{\mathrm{pw}}$ and
$B_{\mathrm{pw}}$ for $\nabla$ replaced by the piecewise gradient
$\nabla_{\mathrm{pw}}$ and local contributions $a^{P},b^{P},c^{P}$ defined in
Subsection 3.1 throughout this paper). The weak formulation of the problem
(1.1) seeks $u\in V$ such that
$B(u,v)=(f,v)\quad\text{for all}\;v\in V.$ (1.8)
Assumptions (A1)-(A3) imply that the bilinear form $B(\cdot,\cdot)$ is
continuous and satisfies an inf-sup condition [11]
$\displaystyle 0<\beta_{0}:=\inf_{0\neq v\in V}\sup_{0\neq w\in
V}\frac{B(v,w)}{\|v\|_{1,\Omega}\|w\|_{1,\Omega}}.$ (1.9)
### 1.4 Main results and outline
Section $2$ introduces the VEM and guides the reader to the first-order
nonconforming VEM on polygonal meshes. It explains the continuity of the
interpolation operator and related error estimates in detail. Section $3$
starts with the discrete bilinear forms and their properties, followed by some
preliminary estimates for the consistency error and the nonconformity error.
The nonconformity error uses a new conforming companion operator resulting in
the well-posedness of the discrete problem for sufficiently fine meshes.
Section $4$ proves the discrete inf-sup estimate and optimal a priori error
estimates. Section $5$ discusses both reliability and efficiency of an
explicit residual-based a posteriori error estimator. Numerical experiments in
Section $6$ for three computational benchmarks illustrate the performance of
an error estimator and show the improved convergence rate in adaptive mesh-
refinement.
### 1.5 Notation
Throughout this paper, standard notation applies to Lebesgue and Sobolev
spaces $H^{m}$ with norm $\|\cdot\|_{m,\cal{D}}$ (resp. seminorm
$|\cdot|_{m,\cal{D}}$) for $m>0$, while $(\cdot,\cdot)_{L^{2}({\cal D})}$ and
$\|\cdot\|_{L^{2}({\cal D})}$ denote the $L^{2}$ scalar product and $L^{2}$
norm on a domain ${\cal D}$. The space $C^{0}(\cal D)$ consists of all
continuous functions vanishing on the boundary of a domain ${\cal D}$. The
dual space of $H^{1}_{0}(\Omega)$ is denoted by $H^{-1}(\Omega)$ with dual
norm $\|\cdot\|_{-1}$. An inequality $A\lesssim B$ abbreviates $A\leq CB$ for
a generic constant $C$, that may depend on the coefficients of ${\cal L}$, the
universal constants $\sigma$, $\rho$ (from (M2) below), but that is
independent of the mesh-size. Let $\mathcal{P}_{k}({\cal D})$ denote the set
of polynomials of degree at most $k\in\mathbb{N}_{0}$ defined on a domain
${\cal D}$ and let $\Pi_{k}$ denote the piecewise $L^{2}$ projection on
$\mathcal{P}_{k}({\cal T})$ for any admissible partition
$\mathcal{T}\in\mathbb{T}$ (hidden in the notation $\Pi_{k}$). The notation
$H^{s}(P):=H^{s}(\text{int}P)$ for a compact polygonal domain $P$ means the
Sobolev space $H^{s}$ [30] defined in the interior $\text{int}(P)$ of $P$
throughout this paper. The outward normal derivative is denoted by
$\frac{\partial\;\bullet}{\partial\textbf{n}_{P}}=\textbf{n}_{P}\cdot\nabla\bullet$
for the exterior unit normal vector $\textbf{n}_{P}$ along the boundary
$\partial P$ of the domain $P$.
## 2 First-order virtual element method on a polygonal mesh
This section describes the class of admissible partitions of $\Omega$ into
polygonal domains and the lowest-order nonconforming virtual element method
for the problem (1.1) [17, 3].
### 2.1 Polygonal meshes
A polygonal domain $P$ in this paper is a non-void compact simply-connected
set $P$ with polygonal boundary $\partial P$ so that $\text{int}(P)$ is a
Lipschitz domain. The polygonal boundary $\partial P$ is a simple closed
polygon described by a finite sequence of distinct points. The set ${\cal
N}(\partial P)=\\{z_{1},z_{2},\dots,z_{J}\\}$ of nodes of a polygon $P$ is
enumerated with $z_{J+1}:=z_{1}$ such that
$E(j):=\text{conv}\\{z_{j},z_{j+1}\\}$ defines an edge and all $J$ edges cover
the boundary $\partial P=E(1)\cup\dots\cup E(J)$ with an intersection
$E(j)\cap E(j+1)=\\{z_{j+1}\\}$ for $j=1,\dots,J-1$ and $E(J)\cap
E(1)=\\{z_{1}\\}$, and with $\text{dist}(E(j),E(k))>0$ for any two distinct
non-adjacent edges $E(j)$ and $E(k)$.
Let $\mathbb{T}$ be a family of partitions of $\overline{\Omega}$ into
polygonal domains, which satisfies the conditions (M1)-(M2) with a universal
positive constant $\rho$.
1. (M1)
Admissibility. Any two distinct polygonal domains $P$ and $P^{\prime}$ in
$\mathcal{T}\in\mathbb{T}$ are disjoint or share a finite number of edges or
vertices.
2. (M2)
Mesh regularity. Every polygonal domain $P$ of diameter $h_{P}$ is star-shaped
with respect to every point of a ball of radius greater than or equal to $\rho
h_{P}$, and every edge $E$ of $P$ has length $|E|$ greater than or equal to
$\rho h_{P}$.
Here and throughout this paper, $h_{\mathcal{T}}|_{P}:=h_{P}$ denotes the
piecewise constant mesh-size and
$\mathbb{T}(\delta):=\\{\mathcal{T}\in\mathbb{T}:h_{\text{max}}\leq\delta\leq
1\\}$ with the maximum diameter $h_{\text{max}}$ of the polygonal domains in
$\mathcal{T}$ denotes the subclass of partitions of $\overline{\Omega}$ into
polygonal domains of maximal mesh-size $\leq\delta$. Let $|P|$ denote the area
of polygonal domain $P$ and $|E|$ denote the length of an edge $E$. With a
fixed orientation to a polygonal domain $P$, assign the outer unit normal
$\textbf{n}_{P}$ along the boundary $\partial P$ and
$\textbf{n}_{E}:=\textbf{n}_{P}|_{E}$ for an edge $E$ of $P$. Let
$\mathcal{E}$ (resp. $\widehat{\mathcal{E}}$) denote the set of edges $E$ of
$\mathcal{T}$ (resp. of $\widehat{{\cal T}}$) and $\mathcal{E}(P)$ denote the
set of edges of polygonal domain $P\in\mathcal{T}$. For a polygonal domain
$P$, define
$\displaystyle\text{mid}(P):=\frac{1}{|P|}\int_{P}x\,dx\quad\text{and}\quad\text{mid}(\partial
P):=\frac{1}{|\partial P|}\int_{\partial P}x\,ds.$
Let $\mathcal{P}_{k}({\cal T}):=\\{v\in L^{2}(\Omega):\forall
P\in\mathcal{T}\quad v|_{P}\in\mathcal{P}_{k}(P)\\}$ for $k\in\mathbb{N}_{0}$
and $\Pi_{k}$ denote the piecewise $L^{2}$ projection onto
$\mathcal{P}_{k}({\cal T})$. The notation $\Pi_{k}$ hides its dependence on
$\mathcal{T}$, and $\Pi_{k}$ is understood to apply componentwise to vectors.
Given a decomposition ${\cal T}\in\mathbb{T}$ of $\Omega$ and a function $f\in
L^{2}(\Omega)$, its oscillation reads
$\displaystyle\mathrm{osc}_{k}(f,P):=\|h_{P}(1-\Pi_{k})f\|_{L^{2}(P)}\quad\text{and}\quad\mathrm{osc}_{k}(f,{\cal
T}):=\left(\sum_{P\in{\cal
T}}\|h_{P}(1-\Pi_{k})f\|_{L^{2}(P)}^{2}\right)^{\displaystyle\nicefrac{{1}}{{2}}}$
with $\mathrm{osc}(f,\bullet):=\mathrm{osc}_{0}(f,\bullet)$.
###### Remark 1 (consequence of mesh regularity assumption).
There exists an interior node $c$ in the sub-triangulation $\widehat{{\cal
T}}(P):=\\{T(E)=\text{conv}(c,E):E\in\mathcal{E}(P)\\}$ of a polygonal domain
$P$ with $h_{T(E)}\leq h_{P}\leq C_{\text{sr}}h_{T(E)}$ as illustrated in
Figure 2.2. Each polygonal domain $P$ can be divided into triangles so that
the resulting sub-triangulation $\widehat{{\cal T}}|_{P}:=\widehat{{\cal
T}}(P)$ of $\mathcal{T}$ is shape-regular. The minimum angle in the sub-
triangulation solely depends on $\rho$ [13, Sec. 2.1].
Figure 2.2: (a) Polygon $P$ and (b) its sub-triangulation $\widehat{{\cal T}}(P)$.
###### Lemma 2.1 (Poincaré-Friedrichs inequality).
There exists a positive constant $C_{\mathrm{PF}}$, that depends solely on
$\rho$, such that
$\displaystyle\|f\|_{L^{2}(P)}\leq C_{\mathrm{PF}}h_{P}|f|_{1,P}$ (2.1)
holds for any $f\in H^{1}(P)$ with $\sum_{j\in J}\int_{E(j)}f\,ds=0$ for a
nonempty subset $J\subseteq\\{1,\dots,m\\}$ of indices in the notation
$\partial P=E(1)\cup\dots\cup E(m)$ of Figure 2.2. The constant
$C_{\mathrm{PF}}$ depends exclusively on the number $m:=|\mathcal{E}(P)|$ of
the edges in the polygonal domain $P$ and the quotient of the maximal area
divided by the minimal area of a triangle in the triangulation $\widehat{{\cal
T}}(P)$.
Some comments on $C_{\mathrm{PF}}$ for anisotropic meshes are in order before
the proof gives an explicit expression for $C_{\mathrm{PF}}$.
###### Example 2.1.
Consider a rectangle $P$ with a large aspect ratio divided into four congruent
sub-triangles all with vertex $c=\text{mid}(P)$. Then, $m=4$ and the quotient
of the maximal area divided by the minimal area of a triangle in the criss-
cross triangulation $\widehat{{\cal T}}(P)$ is one. Hence $C_{\mathrm{PF}}\leq
1.4231$ (from the proof below) is independent of the aspect ratio of $P$.
###### Proof of Lemma 2.1.
The case $J=\\{1,\dots,m\\}$ with $f\in H^{1}(P)$ and $\int_{\partial
P}f\,ds=0$ is well-known, cf. e.g. [13, Sec. 2.1.5], and follows from the
Bramble-Hilbert lemma [14, Lemma 4.3.8] and the trace inequality [13, Sec.
2.1.1]. The remaining part of the proof shows the inequality (2.1) for the
case $J\subseteq\\{1,\dots,m\\}$. The polygonal domain $P$ and its
triangulation $\widehat{{\cal T}}(P)$ from Figure 2.2 have the center $c$ and
the nodes $z_{1},\dots,z_{m}$ for the $m:=|\mathcal{E}(P)|=|\widehat{{\cal
T}}(P)|$ edges $E(1),\dots,E(m)$ and the triangles $T(1),\dots,T(m)$ with
$T(j)=T(E(j))=\text{conv}\\{c,E(j)\\}=\text{conv}\\{c,z_{j},z_{j+1}\\}$ for
$j=1,\dots,m$. Here and throughout this proof, all indices are understood
modulo $m$, e.g., $z_{0}=z_{m}$. The proof uses the trace identity
$\displaystyle\fint_{E(j)}f\,ds=\fint_{T(j)}f\,dx+\frac{1}{2}\fint_{T(j)}(x-c)\cdot\nabla f(x)\,dx$ (2.2)
for $f\in H^{1}(P)$ as in the lemma. This follows from an integration by parts
and the observation that $(x-c)\cdot\textbf{n}_{F}=0$ on
$F\in\mathcal{E}(T(j))\backslash\\{E(j)\\}$ and the height
$(x-c)\cdot\textbf{n}_{E(j)}=\frac{2|T(j)|}{|E(j)|}$ of the edge $E(j)$ in the
triangle $T(j)$, for $x\in E(j)$; cf. [24, Lemma 2.1] or [25, Lemma 2.6] for
the remaining details. Another version of the trace identity (2.2) concerns
$\text{conv}\\{z_{j},c\\}=:F(j)=\partial T(j-1)\cap\partial T(j)$ and reads
$\displaystyle\fint_{F(j)}f\,ds=\fint_{T(j-1)}f\,dx+\frac{1}{2}\fint_{T(j-1)}(x-z_{j-1})\cdot\nabla f(x)\,dx=\fint_{T(j)}f\,dx+\frac{1}{2}\fint_{T(j)}(x-z_{j+1})\cdot\nabla f(x)\,dx$ (2.3)
in $T(j-1)$ and $T(j)$. The three trace identities in (2.2)-(2.3) are
rewritten with the following abbreviations, for $j=1,\dots,m$,
$\displaystyle x_{j}:=\fint_{E(j)}f\,ds,\quad f_{j}:=\fint_{T(j)}f\,dx,\quad a_{j}:=\frac{1}{2}\fint_{T(j)}(x-c)\cdot\nabla f(x)\,dx,$
$\displaystyle b_{j}:=\frac{1}{2}\fint_{T(j)}(x-z_{j})\cdot\nabla f(x)\,dx,\quad c_{j}:=\frac{1}{2}\fint_{T(j)}(x-z_{j+1})\cdot\nabla f(x)\,dx.$
Let $t_{\text{min}}=\min_{T\in\widehat{{\cal T}}(P)}|T|$ and
$t_{\text{max}}=\max_{T\in\widehat{{\cal T}}(P)}|T|$ abbreviate the minimal
and maximal area of a triangle in $\widehat{{\cal T}}(P)$ and let
$\widehat{\Pi}_{0}f\in\mathcal{P}_{0}(\widehat{{\cal T}}(P))$ denote the
piecewise integral mean of $f$ with respect to the triangulation
$\widehat{{\cal T}}(P)$. The Poincaré inequality in a triangle with the
constant $C_{\text{P}}:=1/j_{1,1}$ and the first positive root $j_{1,1}\approx
3.8317$ of the Bessel function $J_{1}$ from [24, Thm. 2.1] allows for
$\displaystyle\|f-\widehat{\Pi}_{0}f\|_{L^{2}(T(j))}\leq C_{\text{P}}h_{T(j)}|f|_{1,T(j)}\quad\text{for}\;j=1,\dots,m.$
Hence $\|f-\widehat{\Pi}_{0}f\|_{L^{2}(P)}\leq C_{\text{P}}h_{P}|f|_{1,P}$.
This and the Pythagoras theorem (with
$f-\widehat{\Pi}_{0}f\perp\mathcal{P}_{0}(\widehat{{\cal T}}(P))$ in
$L^{2}(P)$) show
$\displaystyle\|f\|^{2}_{L^{2}(P)}=\|\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}+\|f-\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}\leq\|\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}+C_{\text{P}}^{2}h_{P}^{2}|f|^{2}_{1,P}.$ (2.4)
It remains to bound the term $\|\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}$. The
assumption on $f$ reads $\sum_{j\in J}\int_{E(j)}f\,ds=\sum_{j\in
J}|E(j)|x_{j}=0$ for a subset $J\subseteq\\{1,\dots,m\\}$ so that
$0\in\text{conv}\\{|E(1)|x_{1},\dots,|E(m)|x_{m}\\}$. It follows that
$0\in\text{conv}\\{x_{1},\dots,x_{m}\\}$ and it is known that this implies
$\displaystyle\sum_{k=1}^{m}x_{k}^{2}\leq{\cal{M}}\sum_{k=1}^{m}(x_{k}-x_{k-1})^{2}$ (2.5)
for a constant ${\cal{M}}=\frac{1}{2(1-\cos(\pi/m))}$ that depends exclusively
on $m$ [25, Lemma 4.2]. Recall (2.2) in the form $x_{j}=f_{j}+a_{j}$ to deduce
from a triangle inequality and (2.5) that
$\displaystyle\frac{1}{2}\sum_{j=1}^{m}f_{j}^{2}\leq\sum_{k=1}^{m}x_{k}^{2}+\sum_{\ell=1}^{m}a_{\ell}^{2}\leq{\cal{M}}\sum_{k=1}^{m}(x_{k}-x_{k-1})^{2}+\sum_{\ell=1}^{m}a_{\ell}^{2}.$
This shows that
$\displaystyle t_{\text{max}}^{-1}\|\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}=t_{\text{max}}^{-1}\sum_{j=1}^{m}|T(j)|f_{j}^{2}\leq\sum_{j=1}^{m}f_{j}^{2}\leq 2{\cal{M}}\sum_{k=1}^{m}(x_{k}-x_{k-1})^{2}+2\sum_{\ell=1}^{m}a_{\ell}^{2}.$
Recall (2.2)-(2.3) in the form $f_{j}-f_{j-1}=b_{j-1}-c_{j}$ and
$x_{j}-x_{j-1}=f_{j}-f_{j-1}+a_{j}-a_{j-1}=b_{j-1}-a_{j-1}+a_{j}-c_{j}$ for
all $j=1,\dots,m$. This and the Cauchy-Schwarz inequality imply the first two
estimates in
$\displaystyle 2|x_{j}-x_{j-1}|=\bigg{|}\fint_{T(j-1)}(c-z_{j-1})\cdot\nabla f(x)\,dx+\fint_{T(j)}(z_{j+1}-c)\cdot\nabla f(x)\,dx\bigg{|}\leq\max\\{|c-z_{j-1}|,|c-z_{j+1}|\\}\Big{(}|T(j-1)|^{-1/2}|f|_{1,T(j-1)}+|T(j)|^{-1/2}|f|_{1,T(j)}\Big{)}\leq h_{P}t_{\text{min}}^{-1/2}|f|_{1,T(j-1)\cup T(j)}$
with the definition of $h_{P}$ and $t_{\text{min}}$ in the end. The inequality
$\int_{T(j)}|x-c|^{2}\,dx\leq\frac{1}{2}h^{2}_{T(j)}|T(j)|$ [25, Lemma 2.7]
and the Cauchy-Schwarz inequality show, for $j=1,\dots,m$, that
$\displaystyle|a_{j}|\leq 2^{-3/2}h_{T(j)}|T(j)|^{-1/2}|f|_{1,T(j)}\leq 2^{-3/2}h_{P}t_{\text{min}}^{-1/2}|f|_{1,T(j)}.$
The combination of the previous three displayed estimates results in
$\displaystyle 4h_{P}^{-2}(t_{\text{min}}/t_{\text{max}})\|\widehat{\Pi}_{0}f\|^{2}_{L^{2}(P)}\leq 2{\cal{M}}\sum_{k=1}^{m}|f|^{2}_{1,T(k-1)\cup T(k)}+\sum_{\ell=1}^{m}|f|^{2}_{1,T(\ell)}=(4{\cal{M}}+1)|f|^{2}_{1,P}.$
This and (2.4) conclude the proof with the constant
$C_{\mathrm{PF}}^{2}=({\cal{M}}+1/4)(t_{\text{max}}/t_{\text{min}})+C_{\text{P}}^{2}$.
∎
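As a quick numeric cross-check (ours, not part of the paper), evaluating the final constant $C_{\mathrm{PF}}^{2}=({\cal{M}}+1/4)(t_{\text{max}}/t_{\text{min}})+C_{\text{P}}^{2}$ for the criss-cross triangulation of Example 2.1 reproduces the stated bound:

```python
from math import cos, pi, sqrt

# Example 2.1: m = 4 edges, t_max/t_min = 1, C_P = 1/j_{1,1}.
m, ratio, j11 = 4, 1.0, 3.8317
M = 1.0 / (2.0 * (1.0 - cos(pi / m)))       # M = 1/(2(1 - cos(pi/m)))
C_PF = sqrt((M + 0.25) * ratio + (1.0 / j11) ** 2)
print(round(C_PF, 4))                        # 1.4231, as in Example 2.1
```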
In the nonconforming VEM, the finite-dimensional space $V_{h}$ is a subset of
the piecewise Sobolev space
$H^{1}(\mathcal{T}):=\\{v\in L^{2}(\Omega):\forall P\in\mathcal{T}\quad
v|_{P}\in H^{1}(P)\\}\equiv\prod_{P\in\mathcal{T}}H^{1}(P).$
The piecewise $H^{1}$ seminorm (piecewise with respect to $\mathcal{T}$ hidden
in the notation for brevity) reads
$|v_{h}|_{1,\text{pw}}:=\bigg{(}\sum_{P\in\mathcal{T}}|v_{h}|_{1,P}^{2}\bigg{)}^{1/2}\quad\text{for
any}\;v_{h}\in H^{1}(\mathcal{T}).$
### 2.2 Local virtual element space
The first nonconforming virtual element space [3] is a subspace of harmonic
functions with edgewise constant Neumann boundary values on each polygon. The
extended nonconforming virtual element space [1, 17] reads
$\displaystyle\widehat{V}_{h}(P):=\left\\{v_{h}\in H^{1}(P):\Delta v_{h}\in\mathcal{P}_{1}(P)\quad\text{and}\quad\forall E\in\mathcal{E}(P)\quad{\frac{\partial v_{h}}{\partial\textbf{n}_{P}}}\Big{|}_{E}\in\mathcal{P}_{0}(E)\right\\}.$ (2.6)
###### Definition 2.2 (Ritz projection).
Let $\Pi^{\nabla}_{1}$ be the Ritz projection from $H^{1}(P)$ onto the affine
functions $\mathcal{P}_{1}(P)$ in the $H^{1}$ seminorm defined, for $v_{h}\in
H^{1}(P)$, by
$\displaystyle(\nabla\Pi^{\nabla}_{1}v_{h}-\nabla
v_{h},\nabla\chi)_{L^{2}(P)}=0\quad\text{for
all}\;\chi\in\mathcal{P}_{1}(P)\quad\text{and}\quad\int_{\partial
P}\Pi^{\nabla}_{1}v_{h}\,ds=\int_{\partial P}v_{h}\,ds.$ (2.7)
###### Remark 2 (integral mean).
For $P\in\mathcal{T}$ and $f\in H^{1}(P)$,
$\nabla\Pi^{\nabla}_{1}f=\Pi_{0}\nabla f$. (This follows from (2.7.a) and the
definition of the $L^{2}$ projection operator $\Pi_{0}$ (acting componentwise)
onto the piecewise constants $\mathcal{P}_{0}(P;\mathbb{R}^{2})$.)
###### Remark 3 (representation of $\Pi^{\nabla}_{1}$).
For $P\in\mathcal{T}$ and $f\in H^{1}(P)$, the Ritz projection
$\Pi^{\nabla}_{1}f$ reads
$\displaystyle(\Pi^{\nabla}_{1}f)(x)=\frac{1}{|P|}\Big{(}\int_{\partial P}f\textbf{n}_{P}\,ds\Big{)}\cdot\Big{(}x-\text{mid}(\partial P)\Big{)}+\fint_{\partial P}f\,ds\quad\text{for}\;x\in P.$ (2.8)
(The proof of (2.8) consists in the verification of (2.7): The equation
(2.7.a) follows from Remark 2 with an integration by parts. The equation
(2.7.b) follows from the definition of $\text{mid}(\partial P)$ as the
barycenter of $\partial P$. ∎)
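A short numeric sanity check of (2.8) (our sketch, not part of the paper): for an affine $f$ the projection must reproduce $f$ itself, since $\int_{\partial P}f\textbf{n}_{P}\,ds=|P|\nabla f$ and the boundary mean of $f$ equals $f(\text{mid}(\partial P))$; edge integrals of affine functions are exact under the trapezoidal rule.

```python
import numpy as np

# Verify (2.8) on the unit square for the affine f(x) = grad.x + c0.
verts = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # ccw order
grad, c0 = np.array([2., -1.]), 3.
f = lambda p: grad @ p + c0

n_v = len(verts)
area = 0.5 * abs(sum(verts[i, 0] * verts[(i + 1) % n_v, 1]
                     - verts[(i + 1) % n_v, 0] * verts[i, 1]
                     for i in range(n_v)))                  # shoelace |P|
int_fn, int_f, int_x, length = np.zeros(2), 0.0, np.zeros(2), 0.0
for i in range(n_v):
    a, b = verts[i], verts[(i + 1) % n_v]
    L = np.linalg.norm(b - a)
    n = np.array([(b - a)[1], -(b - a)[0]]) / L   # outward normal (ccw)
    mf = 0.5 * (f(a) + f(b))                      # exact mean of affine f
    int_fn += mf * n * L                          # int_{dP} f n_P ds
    int_f  += mf * L                              # int_{dP} f ds
    int_x  += 0.5 * (a + b) * L                   # int_{dP} x ds
    length += L

x = np.array([0.3, 0.7])
ritz = (int_fn / area) @ (x - int_x / length) + int_f / length
assert abs(ritz - f(x)) < 1e-12                   # (2.8) reproduces f
```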
The enhanced virtual element spaces [1, 17] are designed with a computable
$L^{2}$ projection $\Pi_{1}$ onto $\mathcal{P}_{1}(\mathcal{T})$. The
resulting local discrete space under consideration throughout this paper reads
$\displaystyle V_{h}(P):=\left\\{v_{h}\in\widehat{V}_{h}(P):v_{h}-\Pi^{\nabla}_{1}v_{h}\perp\mathcal{P}_{1}(P)\quad\text{in}\;L^{2}(P)\right\\}.$ (2.9)
The point in the selection of $V_{h}(P)$ is that the Ritz projection
$\Pi^{\nabla}_{1}v_{h}$ coincides with the $L^{2}$ projection $\Pi_{1}v_{h}$
for all $v_{h}\in V_{h}(P)$. The degrees of freedom on $P$ are given by
$\displaystyle\text{dof}_{E}(v)=\frac{1}{|E|}\int_{E}v\,ds\quad\textrm{for
all}\;E\in\mathcal{E}(P)\;\text{and}\;v\in V_{h}(P).$ (2.10)
###### Proposition 2.3.
$(a)$ The vector space $\widehat{V}_{h}(P)$ from (2.6) is of dimension
$3+|\mathcal{E}(P)|$. $(b)$ $V_{h}(P)$ from (2.9) is of dimension
$|\mathcal{E}(P)|$ and the triplet
$(P,V_{h}(P),\text{dof}_{E}:E\in\mathcal{E}(P))$ is a finite element in the
sense of Ciarlet [28].
###### Proof.
Let $E(1),\dots,E(m)$ be an enumeration of the edges $\mathcal{E}(P)$ of the
polygonal domain $P$ in a consecutive way as depicted in Figure 2.2.a and
define
$W(P):=\mathcal{P}_{1}(P)\times\mathcal{P}_{0}(E{(1)})\times\dots\times\mathcal{P}_{0}(E{(m)})$.
Recall $\widehat{V}_{h}(P)$ from (2.6) and identify the quotient space
$\widehat{V}_{h}(P)/\mathbb{R}\equiv\left\\{f\in\widehat{V}_{h}(P):\int_{\partial P}f\,ds=0\right\\}$
with all functions in $\widehat{V}_{h}(P)$ having zero integral over the
boundary $\partial P$ of $P$. Since the space $\widehat{V}_{h}(P)$ consists of
functions with an affine Laplacian and edgewise constant Neumann data, the map
$\displaystyle S:\widehat{V}_{h}(P)/\mathbb{R}\to W(P),\quad\quad f\mapsto\left(-\Delta f,\frac{\partial f}{\partial\textbf{n}_{P}}\Big{|}_{E{(1)}},\dots,\frac{\partial f}{\partial\textbf{n}_{P}}\Big{|}_{E{(m)}}\right)$
is well-defined and linear. The compatibility conditions for the existence of
a solution of a Laplacian problem with Neumann data show that the image of $S$
is equal to
$\displaystyle\mathcal{R}(S)=\left\\{(f_{1},g_{1},\dots,g_{m})\in W(P):\int_{P}f_{1}\,dx+\sum_{j=1}^{m}g_{j}|E(j)|=0\right\\}.$
(The proof of this identity takes compatible data $(f_{1},g_{1},\dots,g_{m})$
from the set on the right-hand side and solves the Neumann problem with a
unique solution $\widehat{u}$ in $\widehat{V}_{h}(P)/\mathbb{R}$ and
$S\widehat{u}=(f_{1},g_{1},\dots,g_{m})$.) It is known that the Neumann
problem has a unique solution up to an additive constant, so $S$ is a
bijection and the dimension $m+2$ of $\widehat{V}_{h}(P)/\mathbb{R}$ is that
of $\mathcal{R}(S)$. In particular, the dimension of $\widehat{V}_{h}(P)$ is
$m+3$. This proves $(a)$.
Let $\Lambda_{0},\Lambda_{1},\Lambda_{2}:H^{1}(P)\to\mathbb{R}$ be the linear
functionals
$\displaystyle\Lambda_{0}f:=\Pi_{0}f,\quad\Lambda_{j}f:={\cal M}_{j}((\Pi^{\nabla}_{1}-\Pi_{1})f)$
with ${\cal M}_{j}f:=\Pi_{0}((x_{j}-c_{j})f)$ for $j=1,2$ and $f\in H^{1}(P)$.
These functionals determine an affine function $p_{1}\in\mathcal{P}_{1}(P)$
such that $(P,\mathcal{P}_{1}(P),(\Lambda_{0},\Lambda_{1},\Lambda_{2}))$ is a
finite element in the sense of Ciarlet. For any edge $E(j)\in\mathcal{E}(P)$,
define $\Lambda_{j+2}f=\fint_{E(j)}f\,ds$ as the integral mean of the trace of
$f\in H^{1}(P)$ on $E(j)$. It is elementary to see that
$\Lambda_{0},\dots,\Lambda_{m+2}$ are linearly independent: If $f$ in
$\widehat{V}_{h}(P)$ belongs to the kernel of all the linear functionals, then
$\Pi^{\nabla}_{1}f=0$ from (2.8) with $\Lambda_{j}f=0$ for each
$j=3,\dots,2+m$. Since $\Lambda_{j}f=0$ for $j=1,2$, the moments ${\cal
M}_{j}((\Pi^{\nabla}_{1}-\Pi_{1})f)$ vanish; this and $\Pi^{\nabla}_{1}f=0$
imply $\Pi_{1}f=0$. An integration by parts leads to
$\displaystyle\|\nabla f\|_{L^{2}(P)}^{2}=(-\Delta f,f)_{L^{2}(P)}+\Big{(}f,\frac{\partial f}{\partial\textbf{n}_{P}}\Big{)}_{L^{2}(\partial P)}=0.$
This and $\fint_{\partial P}f\,ds=0$ show $f\equiv 0$. Consequently, the
intersection $\cap_{j=0}^{m+2}\text{Ker}(\Lambda_{j})$ of all kernels
$\text{Ker}(\Lambda_{0}),\dots,\text{Ker}(\Lambda_{m+2})$ is trivial, so the
functionals $\Lambda_{0},\dots,\Lambda_{m+2}$ are linearly independent. Since
the number of the linear functionals is equal to the dimension of
$\widehat{V}_{h}(P)$,
$(P,\widehat{V}_{h}(P),\\{\Lambda_{0},\dots,\Lambda_{m+2}\\})$ is a finite
element in the sense of Ciarlet and there exists a nodal basis
$\psi_{0},\dots,\psi_{m+2}$ of $\widehat{V}_{h}(P)$ with
$\displaystyle\Lambda_{j}(\psi_{k})=\delta_{jk}\quad\text{for all}\;j,k=0,\dots,m+2.$
The linearly independent functions $\psi_{3},\dots,\psi_{m+2}$ belong to
$V_{h}(P)$ and so $\dim(V_{h}(P))\geq m$. Since
$V_{h}(P)\subset\widehat{V}_{h}(P)$ and three linearly independent conditions
$(1-\Pi^{\nabla}_{1})v_{h}\perp\mathcal{P}_{1}(P)$ in $L^{2}(P)$ are imposed
on $\widehat{V}_{h}(P)$ to define $V_{h}(P)$, $\dim(V_{h}(P))\leq m$. This
shows that $\dim(V_{h}(P))=m$ and hence the linear functionals
$\text{dof}_{E}=\fint_{E}\bullet\,ds$ for $E\in\mathcal{E}(P)$ form a dual
basis of $V_{h}(P)$. This concludes the proof of $(b)$. ∎
###### Remark 4 (stability of $L^{2}$ projection).
The $L^{2}$ projection $\Pi_{k}$ for $k=0,1$ is $H^{1}$ and $L^{2}$ stable in
$V_{h}(P)$, in the sense that any $v_{h}$ in $V_{h}(P)$ satisfies
$\displaystyle\|\Pi_{k}v_{h}\|_{L^{2}(P)}\leq\|v_{h}\|_{L^{2}(P)}\;\text{and}\;\|\nabla(\Pi_{k}v_{h})\|_{L^{2}(P)}\leq\|\nabla
v_{h}\|_{L^{2}(P)}.$ (2.11)
(The first inequality follows from the definition of $\Pi_{k}$. The
orthogonality in (2.9) and the definition of $\Pi_{1}$ imply that the Ritz
projection $\Pi^{\nabla}_{1}$ and the $L^{2}$ projection $\Pi_{1}$ coincide on
the space $V_{h}(P)$ for $P\in\mathcal{T}$. This with the definition of the
Ritz projection $\Pi^{\nabla}_{1}$ verifies the second inequality. ∎)
###### Definition 2.4 (Fractional order Sobolev space [14]).
Let $\alpha:=(\alpha_{1},\alpha_{2})$ denote a multi-index with
$\alpha_{j}\in\mathbb{N}_{0}$ for $j=1,2$ and
$|\alpha|:=\alpha_{1}+\alpha_{2}.$ For a real number $m$ with $0<m<1$, define
$\displaystyle H^{1+m}(\omega):=\left\\{v\in
H^{1}(\omega):\frac{|v^{\alpha}(x)-v^{\alpha}(y)|}{|x-y|^{(1+m)}}\in
L^{2}(\omega\times\omega)\quad\text{for all}\;|\alpha|=1\right\\}$
with $v^{\alpha}$ as the partial derivative of $v$ of order $\alpha$. Define
the seminorm $|\cdot|_{1+m}$ and Sobolev-Slobodeckij norm $\|\cdot\|_{1+m}$ by
$\displaystyle|v|_{1+m,\omega}^{2}=\sum_{|\alpha|=1}\int_{\omega}\int_{\omega}\frac{{|v^{\alpha}(x)-v^{\alpha}(y)|}^{2}}{|x-y|^{2(1+m)}}\,dx\,dy\quad\text{and}\quad\|v\|_{1+m,\omega}^{2}=\|v\|^{2}_{1,\omega}+|v|_{1+m,\omega}^{2}.$
###### Proposition 2.5 (approximation by polynomials [29, Thm. 6.1]).
Under the assumption (M2), there exists a positive constant $C_{\mathrm{apx}}$
(depending on $\rho$ and on the polynomial degree $k$) such that, for every
$v\in H^{m}(P)$, the $L^{2}$ projection $\Pi_{k}(P)$ on $\mathcal{P}_{k}$ for
$k\in\mathbb{N}_{0}$ satisfies
$\displaystyle\|v-\Pi_{k}v\|_{L^{2}(P)}+h_{P}|v-\Pi_{k}v|_{1,P}\leq
C_{\mathrm{apx}}h_{P}^{m}|v|_{m,P}\quad\text{for}\;1\leq m\leq k+1.$ (2.12)
### 2.3 Global virtual element space
Define the global nonconforming virtual element space, for any
$\mathcal{T}\in\mathbb{T}$, by
$\displaystyle V_{h}:=\left\\{v_{h}\in H^{1}(\mathcal{T}):\forall
P\in\mathcal{T}\quad v_{h}|_{P}\in V_{h}(P)\quad\text{and}\quad\forall
E\in\mathcal{E}\quad\int_{E}[v_{h}]_{E}\,ds=0\right\\}.$ (2.13)
Let $[\cdot]_{E}$ denote the jump across an edge $E\in\mathcal{E}$: For two
neighboring polygonal domains $P^{+}$ and $P^{-}$ sharing a common edge
$E\in\mathcal{E}(P^{+})\cap\mathcal{E}(P^{-})$,
$[v_{h}]_{E}:=v_{h|P^{+}}-v_{h|P^{-}}$, where $P^{+}$ denotes the adjacent
polygonal domain with $\textbf{n}_{P^{+}|E}=\textbf{n}_{E}$ and $P^{-}$ denotes
the polygonal domain with $\textbf{n}_{P^{-}|E}=-\textbf{n}_{E}$. If
$E\subset\partial\Omega$ is a boundary edge, then $[v_{h}]_{E}:=v_{h}|_{E}$.
###### Example 2.2.
If each polygonal domain $P$ is a triangle, then the finite-dimensional space
$V_{h}$ coincides with the Crouzeix-Raviart finite element space. (Since the dimension of the vector space
$V_{h}(P)$ is three and $\mathcal{P}_{1}(P)\subset V_{h}(P)$,
$V_{h}(P)=\mathcal{P}_{1}(P)$ for $P\in\mathcal{T}$.)
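A minimal numerical check of this observation (added for illustration; the affine function below is arbitrary): on a triangle, the edge-average degrees of freedom reduce to the classical Crouzeix-Raviart midpoint values, because the average of an affine function over a segment equals its midpoint value.

```python
import numpy as np

# For an affine v, fint_E v ds = v(mid(E)), so the dofs of V_h(P) on a
# triangle are the classical CR midpoint values; the trapezoidal rule is
# exact for affine functions, so the edge average is (v(p) + v(q))/2.
v = lambda p: 2*p[..., 0] - 3*p[..., 1] + 1   # an arbitrary affine function
a, b, c = np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])
for p, q in [(a, b), (b, c), (c, a)]:
    assert np.isclose(0.5*(v(p) + v(q)), v(0.5*(p + q)))
```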
###### Lemma 2.6.
There exists a universal constant $C_{\mathrm{F}}$ (that depends only on
$\rho$ from (M2)) such that, for all ${\cal T}\in\mathbb{T}$, any $v_{h}\in
V_{h}$ from (2.13) satisfies
$\displaystyle\|v_{h}\|_{L^{2}(\Omega)}\leq
C_{\mathrm{F}}|v_{h}|_{1,\mathrm{pw}}.$ (2.14)
###### Proof.
Recall from Remark 1 that $\widehat{{\cal T}}$ is a shape-regular sub-
triangulation of $\mathcal{T}$ into triangles. Since $V_{h}\subset
H^{1}(\widehat{{\cal T}})$ and the Friedrichs’ inequality holds for all
functions in $H^{1}(\widehat{{\cal T}})$ [14, Thm. 10.6.16], there exists a
positive constant $C_{\text{F}}$ such that the (first) inequality holds in
$\displaystyle\|v_{h}\|_{L^{2}(\Omega)}\leq
C_{\text{F}}\left(\sum_{T\in\widehat{{\cal T}}}\|\nabla
v_{h}\|_{L^{2}(T)}^{2}\right)^{1/2}=C_{\text{F}}|v_{h}|_{1,\mathrm{pw}}.$
The (second) equality follows since $v_{h}|_{P}\in H^{1}(P)$ for each $P\in\mathcal{T}$.
∎
Lemma 2.6 implies that the seminorm $|\cdot|_{1,\mathrm{pw}}$ is equivalent to
the norm
$\|\cdot\|_{1,\mathrm{pw}}:=(\|\cdot\|^{2}_{L^{2}(\Omega)}+|\cdot|^{2}_{1,\mathrm{pw}})^{1/2}$
in $V_{h}$ with mesh-size independent equivalence constants.
### 2.4 Interpolation
###### Definition 2.7 (interpolation operator).
Let $(\psi_{E}:E\in\mathcal{E})$ be the nodal basis of $V_{h}$ defined by
$\text{dof}_{E}(\psi_{E})=1$ and $\text{dof}_{F}(\psi_{E})=0$ for all other
edges $F\in\mathcal{E}\setminus\\{E\\}$. The global interpolation operator
$I_{h}:H^{1}_{0}(\Omega)\to V_{h}$ reads
$\displaystyle
I_{h}v:=\sum_{E\in\mathcal{E}}\Big(\fint_{E}v\,ds\Big)\psi_{E}\quad\text{for}\;v\in V.$
Since a Sobolev function $v\in V$ has traces and the jumps $[v]_{E}$ vanish
across any edge $E\in\mathcal{E}$, the interpolation operator $I_{h}$ is well-
defined. Recall $\rho$ from (M2), $C_{\mathrm{PF}}$ from Lemma 2.1, and
$C_{\text{apx}}$ from Proposition 2.5.
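As a computational aside (a sketch, not part of the original text; the callable `v` and the edge endpoints are hypothetical inputs), each degree of freedom $\fint_{E}v\,ds$ can be evaluated by a one-dimensional Gauss rule per edge:

```python
import numpy as np

def edge_average(v, a, b, nq=4):
    # dof_E(v) = fint_E v ds on the segment E = conv{a, b}: map a
    # Gauss-Legendre rule from [-1, 1] to E; the edge length cancels
    # in the average, leaving a plain weighted mean (weights sum to 2).
    t, w = np.polynomial.legendre.leggauss(nq)
    pts = 0.5*(1 - t)[:, None]*a + 0.5*(1 + t)[:, None]*b
    return 0.5 * np.dot(w, v(pts))

# I_h v then collects one such average per edge E and expands in the
# nodal basis: I_h v = sum_E edge_average(v, *E) * psi_E (Definition 2.7).
```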
###### Theorem 2.8 (interpolation error).
1. $\left(a\right)$
There exists a positive constant $C_{\mathrm{Itn}}$ (depending on $\rho$) such
that any $v\in H^{1}(P)$ and its interpolation $I_{h}v\in V_{h}(P)$ satisfy
$\displaystyle\|\nabla I_{h}v\|_{L^{2}(P)}\leq C_{\mathrm{Itn}}\|\nabla
v\|_{L^{2}(P)}.$
2. $\left(b\right)$
Any $P\in\mathcal{T}\in\mathbb{T}$ and $v\in H^{1}(P)$ satisfy
$|v-I_{h}v|_{1,P}\leq(1+C_{\mathrm{Itn}})\|(1-\Pi_{0})\nabla v\|_{L^{2}(P)}$
and
$\displaystyle
h_{P}^{-1}\|(1-\Pi_{1}I_{h})v\|_{L^{2}(P)}+|(1-\Pi_{1}I_{h})v|_{1,P}\leq(1+C_{\mathrm{PF}})\|(1-\Pi_{0})\nabla
v\|_{L^{2}(P)}.$
3. $\left(c\right)$
The positive constant
$C_{\mathrm{I}}:=C_{\mathrm{apx}}(1+C_{\mathrm{Itn}})(1+C_{\mathrm{PF}})$, any
$0<\sigma\leq 1$, and any $v\in H^{1+\sigma}(P)$ with the local interpolation
$I_{h}v|_{P}\in V_{h}(P)$ satisfy
$\displaystyle\|v-I_{h}v\|_{L^{2}(P)}+h_{P}|v-I_{h}v|_{1,P}\leq
C_{\mathrm{I}}h^{1+\sigma}_{P}|v|_{1+\sigma,P}.$ (2.15)
###### Proof of $(a)$.
The boundedness of the interpolation operator in $V_{h}(P)$ is mentioned in
[17] with a soft proof in its appendix. The subsequent analysis clarifies
that $C_{\mathrm{Itn}}$ depends exclusively on the parameter $\rho$
in (M2). The elementary arguments apply to more general situations, in
particular to 3D. Given $I_{h}v\in V_{h}(P)$, $q_{1}:=-\Delta
I_{h}v\in\mathcal{P}_{1}(P)$ is affine and $\int_{E}(v-I_{h}v)\,ds=0$. Since
$\frac{\partial I_{h}v}{\partial\textbf{n}_{P}}$ is edgewise constant, this
shows $\int_{E}{\frac{\partial
I_{h}v}{\partial\textbf{n}_{P}}}|_{E}(v-I_{h}v)\,ds=0$ for all
$E\in\mathcal{E}(P)$ and so $\big{\langle}\frac{\partial
I_{h}v}{\partial\textbf{n}_{P}},v-I_{h}v\big{\rangle}_{\partial P}=0$. An
integration by parts leads to
$\displaystyle(\nabla
I_{h}v,\nabla(I_{h}v-v))_{L^{2}(P)}=(q_{1},I_{h}v-v)_{L^{2}(P)}=(q_{1},\Pi^{\nabla}_{1}I_{h}v-v)_{L^{2}(P)}$
with $q_{1}\in\mathcal{P}_{1}(P)$ and $\Pi_{1}v_{h}=\Pi^{\nabla}_{1}v_{h}$ for
$v_{h}\in V_{h}(P)$ in the last step. Consequently,
$\displaystyle\|\nabla I_{h}v\|_{L^{2}(P)}^{2}$ $\displaystyle=(\nabla
I_{h}v,\nabla(I_{h}v-v))_{L^{2}(P)}+(\nabla I_{h}v,\nabla v)_{L^{2}(P)}$
$\displaystyle=(q_{1},\Pi^{\nabla}_{1}I_{h}v-v)_{L^{2}(P)}+(\nabla
I_{h}v,\nabla v)_{L^{2}(P)}$
$\displaystyle\leq\|q_{1}\|_{L^{2}(P)}\|v-\Pi^{\nabla}_{1}I_{h}v\|_{L^{2}(P)}+\|\nabla
I_{h}v\|_{L^{2}(P)}\|\nabla v\|_{L^{2}(P)}$ (2.16)
with the Cauchy inequality in the last step. Remarks 2 and 3 on the Ritz
projection, and the definition of $I_{h}$ show
$\displaystyle\Pi_{0}\nabla v=\nabla\Pi^{\nabla}_{1}v=|P|^{-1}\int_{\partial
P}v\,\textbf{n}_{P}\,ds=|P|^{-1}\int_{\partial
P}I_{h}v\textbf{n}_{P}\,ds=\Pi_{0}\nabla I_{h}v=\nabla\Pi^{\nabla}_{1}I_{h}v.$
(2.17)
The function $f:=v-\Pi^{\nabla}_{1}I_{h}v\in H^{1}(P)$ satisfies
$\int_{\partial P}f\,ds=\int_{\partial P}(v-I_{h}v)\,ds=0$ and the Poincaré-
Friedrichs inequality from Lemma 2.1.a shows
$\displaystyle\|v-\Pi^{\nabla}_{1}I_{h}v\|_{L^{2}(P)}\leq
C_{\text{PF}}h_{P}\|\nabla(v-\Pi^{\nabla}_{1}I_{h}v)\|_{L^{2}(P)}=C_{\text{PF}}h_{P}\|(1-\Pi_{0})\nabla
v\|_{L^{2}(P)}$ (2.18)
with (2.17) in the last step. Let $\phi_{c}\in S^{1}_{0}(\widehat{{\cal
T}}(P)):=\\{w\in C^{0}(P):w|_{T(E)}\in\mathcal{P}_{1}(T(E))\quad\text{for
all}\;E\in\mathcal{E}(P)\\}$ denote the piecewise linear nodal basis function
of the interior node $c$ with respect to the triangulation $\widehat{{\cal
T}}(P)=\\{T(E):E\in\mathcal{E}(P)\\}$ (cf. Figure 2.2.b for an illustration of
$\widehat{{\cal T}}(P)$). An inverse estimate
$\displaystyle\|f_{1}\|_{L^{2}(T(E))}\leq
C_{1}\|\phi_{c}^{1/2}f_{1}\|_{L^{2}(T(E))}\quad\text{for
all}\;f_{1}\in\mathcal{P}_{1}(\widehat{{\cal T}}(P))$
on the triangle $T(E):=\text{conv}(E\cup\\{c\\})$ holds with the universal
constant $C_{1}$. A constructive proof computes the mass matrices for $T(E)$ with
and without the weight $\phi_{c}$ to verify that the universal constant
$C_{1}$ does not depend on the shape of the triangle $T(E)$. This implies
$\displaystyle
C_{1}^{-1}\|q_{1}\|_{L^{2}(P)}^{2}\leq(\phi_{c}q_{1},q_{1})_{L^{2}(P)}=(-\Delta
I_{h}v,\phi_{c}q_{1})=(\nabla I_{h}v,\nabla(\phi_{c}q_{1}))_{L^{2}(P)}$ (2.19)
with an integration by parts for $\phi_{c}q_{1}\in H^{1}_{0}(P)$ and $I_{h}v$
in the last step. The mesh-size independent constant $C_{2}$ in the standard
inverse estimate
$\displaystyle h_{T(E)}\|\nabla q_{2}\|_{L^{2}(T(E))}\leq
C_{2}\|q_{2}\|_{L^{2}(T(E))}\quad\text{for
all}\;q_{2}\in\mathcal{P}_{2}(T(E))$
depends merely on the angles in the triangle $T(E),E\in\mathcal{E}(P),$ and so
exclusively on $\rho$. With $C^{-1}_{\text{sr}}h_{P}\leq h_{T(E)}$ from Remark
1, this shows
$\displaystyle
C_{2}^{-1}C_{\text{sr}}^{-1}h_{P}\|\nabla(\phi_{c}q_{1})\|_{L^{2}(P)}\leq\|\phi_{c}q_{1}\|_{L^{2}(P)}\leq\|q_{1}\|_{L^{2}(P)}.$
This and (2.19) lead to
$\displaystyle\|q_{1}\|_{L^{2}(P)}\leq
C_{1}C_{2}C_{\text{sr}}h_{P}^{-1}\|\nabla I_{h}v\|_{L^{2}(P)}.$ (2.20)
The combination with (2.16)-(2.18) proves
$\displaystyle\|\nabla I_{h}v\|_{L^{2}(P)}^{2}$
$\displaystyle\leq(C_{1}C_{2}C_{\text{sr}}C_{\text{PF}}\|(1-\Pi_{0})\nabla
v\|_{L^{2}(P)}+\|\nabla v\|_{L^{2}(P)})\|\nabla I_{h}v\|_{L^{2}(P)}$
$\displaystyle\leq(1+C_{1}C_{2}C_{\text{sr}}C_{\text{PF}})\|\nabla
v\|_{L^{2}(P)}\|\nabla I_{h}v\|_{L^{2}(P)}.\qed$
###### Proof of $(b)$.
The identity (2.17) reads $\Pi_{0}\nabla(1-I_{h})v=0$ and the triangle
inequality results in
$\displaystyle|v-I_{h}v|_{1,P}$
$\displaystyle=\|(1-\Pi_{0})\nabla(1-I_{h})v\|_{L^{2}(P)}$
$\displaystyle\leq\|(1-\Pi_{0})\nabla v\|_{L^{2}(P)}+\|(1-\Pi_{0})\nabla
I_{h}v\|_{L^{2}(P)}.$ (2.21)
Since $I_{h}$ is the identity on $\mathcal{P}_{1}(P)$, it follows that
$(1-\Pi_{0})\nabla I_{h}v=(1-\Pi_{0})\nabla I_{h}(v-\Pi^{\nabla}_{1}v).$ This
and the boundedness of the interpolation operator $I_{h}$ lead to
$\displaystyle\|(1-\Pi_{0})\nabla I_{h}v\|_{L^{2}(P)}$
$\displaystyle\leq\|\nabla I_{h}(1-\Pi^{\nabla}_{1})v\|_{L^{2}(P)}$
$\displaystyle\leq
C_{\mathrm{Itn}}\|\nabla(1-\Pi^{\nabla}_{1})v\|_{L^{2}(P)}=C_{\mathrm{Itn}}\|(1-\Pi_{0})\nabla
v\|_{L^{2}(P)}$ (2.22)
with Remark 2 in the last step. The combination of (2.21) and (2.22) proves
the first part of $(b)$.
The identity $|(1-\Pi_{1}I_{h})v|_{1,P}=\|(1-\Pi_{0})\nabla v\|_{L^{2}(P)}$
follows from (2.17). Since $\Pi_{1}=\Pi^{\nabla}_{1}$ in $V_{h}$ and
$\int_{\partial P}v\,ds=\int_{\partial P}I_{h}v\,ds=\int_{\partial
P}\Pi^{\nabla}_{1}I_{h}v\,ds$, the Poincaré-Friedrichs inequality
$\|(1-\Pi_{1}I_{h})v\|_{L^{2}(P)}\leq
C_{\text{PF}}h_{P}|(1-\Pi_{1}I_{h})v|_{1,P}$
follows from Lemma 2.1.a. This concludes the proof of $(b)$. ∎
###### Proof of $(c)$.
This is an immediate consequence of the part $(b)$ with (2.12) and the
Poincaré-Friedrichs inequality for $v-I_{h}v$ (from above) in Lemma 2.1.a. ∎
## 3 Preliminary estimates
This section formulates the discrete problem along with properties of
the discrete bilinear form such as boundedness and a G\aa{}rding-type
inequality.
### 3.1 The discrete problem
Denote the restrictions of the bilinear forms $a(\cdot,\cdot)$, $b(\cdot,\cdot)$,
and $c(\cdot,\cdot)$ to a polygonal domain
$P\in\mathcal{T}$ by $a^{P}(\cdot,\cdot)$, $b^{P}(\cdot,\cdot)$,
and $c^{P}(\cdot,\cdot)$. The corresponding local discrete bilinear forms are
defined for $u_{h},v_{h}\in V_{h}(P)$ by
$\displaystyle a_{h}^{P}(u_{h},v_{h})$
$\displaystyle:=(\textbf{A}\nabla\Pi_{1}u_{h},\nabla\Pi_{1}v_{h})_{L^{2}(P)}+S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})v_{h}),$
(3.1) $\displaystyle b_{h}^{P}(u_{h},v_{h})$ $\displaystyle:=\
(\Pi_{1}u_{h},\textbf{b}\cdot\nabla\Pi_{1}v_{h})_{L^{2}(P)},$ (3.2)
$\displaystyle c_{h}^{P}(u_{h},v_{h})$
$\displaystyle:=(\gamma\Pi_{1}u_{h},\Pi_{1}v_{h})_{L^{2}(P)},$ (3.3)
$\displaystyle B_{h}^{P}(u_{h},v_{h})$
$\displaystyle:=a_{h}^{P}(u_{h},v_{h})+b_{h}^{P}(u_{h},v_{h})+c_{h}^{P}(u_{h},v_{h}).$
(3.4)
Choose the stability term $S^{P}(u_{h},v_{h})$ as a symmetric positive
definite bilinear form on $V_{h}(P)\times V_{h}(P)$ such that, for a positive
constant $C_{s}$ independent of $P$ and $h_{P}$,
$C_{s}^{-1}a^{P}(v_{h},v_{h})\leq S^{P}(v_{h},v_{h})\leq
C_{s}a^{P}(v_{h},v_{h})\quad\text{for all}\;v_{h}\in
V_{h}(P)\;\text{with}\;\Pi_{1}v_{h}=0.$ (3.5)
For a positive constant approximation $\overline{\textbf{A}}_{P}$ of $\textbf{A}$ over
$P$ and the number $N_{P}:=|\mathcal{E}(P)|$ of degrees of freedom (2.10)
of $V_{h}(P)$, a standard example of a stabilization term from [4], [36, Sec.
4.3] with the scaling coefficient $\overline{\textbf{A}}_{P}$ reads
$\displaystyle
S^{P}(v_{h},w_{h}):=\overline{\textbf{A}}_{P}\sum_{r=1}^{N_{P}}\text{dof}_{r}(v_{h})\text{dof}_{r}(w_{h})\quad\text{for
all}\;v_{h},w_{h}\in V_{h}.$ (3.6)
Note that the approximation $\overline{\textbf{A}}_{P}$ is a positive real
number (not a matrix) and can be chosen as $\sqrt{a_{0}a_{1}}$ with the
positive constants $a_{0}$ and $a_{1}$ from (A2). For $f\in L^{2}(\Omega)$ and
$v_{h}\in V_{h}$, define the right-hand side functional $f_{h}$ on $V_{h}$ by
$\displaystyle(f_{h},v_{h})_{L^{2}(P)}$
$\displaystyle:=(f,\Pi_{1}v_{h})_{L^{2}(P)}.$ (3.7)
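For illustration (a sketch, not part of the original text), the dofi-dofi stabilization (3.6) is straightforward to evaluate; in the nodal basis with $\text{dof}_{r}(\psi_{k})=\delta_{rk}$, the local stabilization matrix is a scaled identity:

```python
import numpy as np

def stabilization(A_bar_P, dofs_v, dofs_w):
    # S^P(v_h, w_h) = A_bar_P * sum_r dof_r(v_h) dof_r(w_h), cf. (3.6);
    # dofs_v, dofs_w are the vectors of edge-average dofs (2.10).
    return A_bar_P * np.dot(dofs_v, dofs_w)

def stabilization_matrix(A_bar_P, N_P):
    # In the nodal basis, dof_r(psi_k) = delta_rk, so the local
    # stabilization matrix is A_bar_P times the N_P x N_P identity.
    return A_bar_P * np.eye(N_P)
```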
The sum over all the polygonal domains $P\in\mathcal{T}$ reads
$\displaystyle a_{h}(u_{h},v_{h})$
$\displaystyle:=\sum_{P\in\mathcal{T}}a_{h}^{P}(u_{h},v_{h}),\hskip
14.22636ptb_{h}(u_{h},v_{h}):=\sum_{P\in\mathcal{T}}b_{h}^{P}(u_{h},v_{h}),$
$\displaystyle c_{h}(u_{h},v_{h})$
$\displaystyle:=\sum_{P\in\mathcal{T}}c_{h}^{P}(u_{h},v_{h}),\hskip
14.22636pts_{h}(u_{h},v_{h}):=\sum_{P\in\mathcal{T}}S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})v_{h}),$
$\displaystyle B_{h}(u_{h},v_{h})$
$\displaystyle:=\sum_{P\in\mathcal{T}}B_{h}^{P}(u_{h},v_{h}),\hskip
14.22636pt(f_{h},v_{h}):=\sum_{P\in\mathcal{T}}(f_{h},v_{h})_{P}\quad\text{for
all}\;u_{h},v_{h}\in V_{h}.$
The discrete problem seeks $u_{h}\in V_{h}$ such that
$\displaystyle B_{h}(u_{h},v_{h})=(f_{h},v_{h})\quad\text{for all}\;v_{h}\in
V_{h}.$ (3.8)
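Schematically (a sketch under stated assumptions, not part of the original text), solving (3.8) is a standard assembly-and-solve loop; the local matrices, local right-hand sides, and the local-to-global edge numbering `loc2glb` are hypothetical inputs:

```python
import numpy as np

def assemble_and_solve(local_B, local_f, loc2glb, n_dofs):
    # Accumulate the local contributions (3.1)-(3.7) of each polygon P
    # into the global matrix/vector and solve the linear system (3.8).
    B = np.zeros((n_dofs, n_dofs))
    F = np.zeros(n_dofs)
    for BP, fP, idx in zip(local_B, local_f, loc2glb):
        B[np.ix_(idx, idx)] += BP
        F[idx] += fP
    return np.linalg.solve(B, F)   # coefficients of u_h in the edge basis
```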
###### Remark 5 (polygonal mesh with small edges).
The conditions (M1)-(M2) are well established and apply throughout the paper.
The sub-triangulation $\widehat{{\cal T}}$ may not be shape-regular without
the edge condition $|E|\geq\rho h_{P}$ for an edge $E\in\mathcal{E}(P)$ and
$P\in\mathcal{T}$, but satisfies the maximal angle condition and the arguments
employed in the proof of [8, Lemma 6.3] can be applied to show (2.20) in
Theorem 2.8.a. For more general star-shaped polygonal domains with short edges,
the recent anisotropic analysis [8, 15, 18] indicates that the stabilization
term has to be modified as well to avoid a logarithmic factor in the optimal
error estimates.
### 3.2 Properties of the discrete bilinear form
The following proposition provides two main properties of the discrete
bilinear form $B_{h}$.
###### Proposition 3.1.
There exist positive universal constants $M,\alpha$ and a universal
nonnegative constant $\beta$ depending on the coefficients
$\textbf{A},\textbf{b},\gamma$ such that
1. $\left(a\right)$
Boundedness: $|B_{h}(u_{h},v_{h})|\leq
M|u_{h}|_{1,\mathrm{pw}}|v_{h}|_{1,\mathrm{pw}}\quad\text{for all}\;u_{h},v_{h}\in V_{h}.$
2. $\left(b\right)$
G\aa{}rding-type inequality:
$\alpha|v_{h}|^{2}_{1,\mathrm{pw}}-\beta\|v_{h}\|^{2}_{L^{2}(\Omega)}\leq
B_{h}(v_{h},v_{h})\quad\text{for all}\;v_{h}\in V_{h}.$
###### Proof of $\left(a\right)$.
The upper bound of the coefficients from the assumption (A1), the Cauchy-
Schwarz inequality, the stability (2.11) of $\Pi_{1}$, and the definition
(3.5) of the stabilization term imply the boundedness of $B_{h}$ with
$M:=(1+C_{s})\|\textbf{A}\|_{\infty}+C_{\mathrm{F}}\|\textbf{b}\|_{\infty}+C_{\mathrm{F}}^{2}\|\gamma\|_{\infty}$.
The details of the proof follow as in [6, Lemma 5.2] with the constant
$C_{\mathrm{F}}$ from Lemma 2.6. ∎
###### Proof of $\left(b\right)$.
The first step shows that $a_{h}(\cdot,\cdot)$ is coercive. For $v_{h}\in
V_{h}(P)$, $\Pi_{1}v_{h}=\Pi^{\nabla}_{1}v_{h}$ and
$\nabla\Pi_{1}v_{h}\perp\nabla(v_{h}-\Pi^{\nabla}_{1}v_{h})$ in
$L^{2}(P;\mathbb{R}^{2})$. This orthogonality, the assumption (A2), and the
definition of the stability term (3.5) with the constant $C_{s}^{-1}\leq 1$
imply for $\alpha_{0}=a_{0}C_{s}^{-1}$ that
$\displaystyle\alpha_{0}|v_{h}|_{1,\mathrm{pw}}^{2}\leq
a_{0}\|\nabla_{\mathrm{pw}}\Pi_{1}v_{h}\|^{2}_{L^{2}(\Omega)}+a_{0}C_{s}^{-1}\|\nabla_{\mathrm{pw}}(1-\Pi_{1})v_{h}\|^{2}_{L^{2}(\Omega)}$
$\displaystyle\quad\leq(\textbf{A}\nabla_{\mathrm{pw}}\Pi_{1}v_{h},\nabla_{\mathrm{pw}}\Pi_{1}v_{h})_{L^{2}(\Omega)}+C_{s}^{-1}(\textbf{A}\nabla_{\mathrm{pw}}(1-\Pi_{1})v_{h},\nabla_{\mathrm{pw}}(1-\Pi_{1})v_{h})_{L^{2}(\Omega)}$
$\displaystyle\quad\leq(\textbf{A}\nabla_{\mathrm{pw}}\Pi_{1}v_{h},\nabla_{\mathrm{pw}}\Pi_{1}v_{h})_{L^{2}(\Omega)}+s_{h}((1-\Pi_{1})v_{h},(1-\Pi_{1})v_{h})=a_{h}(v_{h},v_{h}).$
(3.9)
The Cauchy-Schwarz inequality, (2.11), and the Young inequality lead to
$\displaystyle|b_{h}(v_{h},v_{h})+c_{h}(v_{h},v_{h})|$
$\displaystyle\leq\|\textbf{b}\|_{\infty}\|\Pi_{1}v_{h}\|_{L^{2}(\Omega)}\|\nabla_{\mathrm{pw}}\Pi_{1}v_{h}\|_{L^{2}(\Omega)}+\|\gamma\|_{\infty}\|\Pi_{1}v_{h}\|_{L^{2}(\Omega)}^{2}$
$\displaystyle\leq\|\textbf{b}\|_{\infty}\|v_{h}\|_{L^{2}(\Omega)}|v_{h}|_{1,\mathrm{pw}}+\|\gamma\|_{\infty}\|v_{h}\|_{L^{2}(\Omega)}^{2}$
$\displaystyle\leq\frac{\|\textbf{b}\|^{2}_{\infty}}{2\alpha_{0}}\|v_{h}\|_{L^{2}(\Omega)}^{2}+\frac{\alpha_{0}}{2}|v_{h}|^{2}_{1,\mathrm{pw}}+\|\gamma\|_{\infty}\|v_{h}\|_{L^{2}(\Omega)}^{2}.$
(3.10)
The combination of (3.9)-(3.10) proves
$\displaystyle\frac{\alpha_{0}}{2}|v_{h}|^{2}_{1,\mathrm{pw}}-\left(\frac{\|\textbf{b}\|^{2}_{\infty}}{2\alpha_{0}}+\|\gamma\|_{\infty}\right)\|v_{h}\|^{2}_{L^{2}(\Omega)}\leq
B_{h}(v_{h},v_{h}).$
This concludes the proof of $\left(b\right)$ with
$\alpha=\frac{\alpha_{0}}{2}$ and
$\beta=\frac{\|\textbf{b}\|^{2}_{\infty}}{2\alpha_{0}}+\|\gamma\|_{\infty}$. ∎
###### Remark 6 ($\|\cdot\|_{h}\approx|\cdot|_{1,\mathrm{pw}}$).
The discrete space $V_{h}$ of the nonconforming VEM is endowed with the
natural norm $\|\cdot\|_{h}:=a_{h}(\cdot,\cdot)^{1/2}$ induced by the scalar
product $a_{h}$. The boundedness of $a_{h}$ is proven in $(a)$, while (3.9)
shows the converse estimate in the equivalence
$\|\cdot\|_{h}\approx|\cdot|_{1,\mathrm{pw}}$ in $V_{h}$, namely
$\alpha_{0}|v_{h}|^{2}_{1,\mathrm{pw}}\leq
a_{h}(v_{h},v_{h})\leq\|\textbf{A}\|_{\infty}(1+C_{s})|v_{h}|_{1,\mathrm{pw}}^{2}\quad\text{for
all}\;v_{h}\in V_{h}.$
### 3.3 Consistency error
This subsection discusses the consistency error between the continuous
bilinear form $B$ and the corresponding discrete bilinear form $B_{h}$. Recall
the definition $B^{P}(\cdot,\cdot)\equiv
a^{P}(\cdot,\cdot)+b^{P}(\cdot,\cdot)+c^{P}(\cdot,\cdot)$ and
$B_{h}^{P}(\cdot,\cdot)\equiv
a_{h}^{P}(\cdot,\cdot)+b_{h}^{P}(\cdot,\cdot)+c_{h}^{P}(\cdot,\cdot)$ for a
polygonal domain $P\in\mathcal{T}$ from Subsection 2.1.
###### Lemma 3.2 (consistency).
$(a)$ There exists a positive constant $C_{\text{cst}}$ (depending only on
$\rho$) such that any $v\in H^{1}(\Omega)$ and $w_{h}\in V_{h}$ satisfy
$\displaystyle B^{P}(\Pi_{1}v,w_{h})-B_{h}^{P}(\Pi_{1}v,w_{h})\leq
C_{\mathrm{cst}}\,h_{P}\|v\|_{1,P}|w_{h}|_{1,P}\quad\text{for
all}\;P\in\mathcal{T}.$ (3.11)
$(b)$ Any $f\in L^{2}(\Omega)$ and $f_{h}:=\Pi_{1}f$ satisfy
$\displaystyle\|f-f_{h}\|_{V_{h}^{*}}:=\sup_{0\neq v_{h}\in
V_{h}}\frac{(f-f_{h},v_{h})}{\|v_{h}\|_{1,\mathrm{pw}}}\leq
C_{\mathrm{PF}}\,\mathrm{osc}_{1}(f,\mathcal{T}).$ (3.12)
###### Proof.
Observe that $S^{P}((1-\Pi_{1})\Pi_{1}v,(1-\Pi_{1})w_{h})=0$ follows from
$(1-\Pi_{1})\Pi_{1}v=0$. The definitions of $B^{P}$ and $B_{h}^{P}$ show
$\displaystyle
B^{P}(\Pi_{1}v,w_{h})-B_{h}^{P}(\Pi_{1}v,w_{h})=:T_{1}+T_{2}+T_{3}.$ (3.13)
The term $T_{1}$ in (3.13) is defined as the difference of the contributions
from $a^{P}$ and $a^{P}_{h}$. Their definitions prove the equality (at the end
of the first line below) and the definition of $\Pi_{1}$ proves the next
equality in
$\displaystyle T_{1}$
$\displaystyle:=a^{P}(\Pi_{1}v,w_{h})-a_{h}^{P}(\Pi_{1}v,w_{h})=(\textbf{A}\nabla\Pi_{1}v,\nabla(1-\Pi_{1})w_{h})_{L^{2}(P)}$
$\displaystyle=((\textbf{A}-\Pi_{0}\textbf{A})(\nabla\Pi_{1}v),\nabla(1-\Pi_{1})w_{h})_{L^{2}(P)}\leq
h_{P}|\textbf{A}|_{1,\infty}|v|_{1,P}|w_{h}|_{1,P}.$
The last inequality follows from the Cauchy-Schwarz inequality, the Lipschitz
continuity of A, and the stabilities
$\|\nabla\Pi_{1}v_{h}\|_{L^{2}(P)}\leq\|\nabla v_{h}\|_{L^{2}(P)}$ and
$\|\nabla(1-\Pi_{1})w_{h}\|_{L^{2}(P)}\leq\|\nabla w_{h}\|_{L^{2}(P)}$ from
Remark 4. Similar arguments apply to $T_{2}$ from the differences of $b^{P}$
and $b^{P}_{h}$, and $T_{3}$ from those of $c^{P}$ and $c_{h}^{P}$ in (3.13).
This leads to
$\displaystyle T_{2}$
$\displaystyle:=b^{P}(\Pi_{1}v,w_{h})-b_{h}^{P}(\Pi_{1}v,w_{h})$
$\displaystyle=((\textbf{b}-\Pi_{0}\textbf{b})\Pi_{1}v,\nabla(1-\Pi_{1})w_{h})_{L^{2}(P)}+((\Pi_{0}\textbf{b})(1-\Pi_{0})(\Pi_{1}v),\nabla(1-\Pi_{1})w_{h})_{L^{2}(P)}$
$\displaystyle\leq(|\textbf{b}|_{1,\infty}+C_{\mathrm{apx}}\|\textbf{b}\|_{\infty})h_{P}\|v\|_{1,P}|w_{h}|_{1,P},$
$\displaystyle T_{3}$
$\displaystyle:=c^{P}(\Pi_{1}v,w_{h})-c_{h}^{P}(\Pi_{1}v,w_{h})=(\gamma\Pi_{1}v,(1-\Pi_{1})w_{h})_{L^{2}(P)}\leq
C_{\mathrm{PF}}\,\|\gamma\|_{\infty}h_{P}\|v\|_{L^{2}(P)}|w_{h}|_{1,P}.$
The inequality for the last step in $T_{2}$ follows from the Cauchy-Schwarz
inequality, the Lipschitz continuity of b, the estimate
$\|(1-\Pi_{0})\Pi_{1}v\|_{L^{2}(P)}\leq\|(1-\Pi_{0})v\|_{L^{2}(P)}\leq
C_{\text{apx}}h_{P}|v|_{1,P}$ from (2.12), and the above stabilities
$\|\nabla\Pi_{1}v_{h}\|_{L^{2}(P)}\leq\|\nabla v_{h}\|_{L^{2}(P)}$ and
$\|\nabla(1-\Pi_{1})w_{h}\|_{L^{2}(P)}\leq\|\nabla w_{h}\|_{L^{2}(P)}$. The
inequality for the last step in $T_{3}$ follows from the Cauchy-Schwarz
inequality, $\|\Pi_{1}v\|_{L^{2}(P)}$ $\leq\|v\|_{L^{2}(P)}$ from (2.11) and
the Poincaré-Friedrichs inequality in Lemma 2.1.a for $w_{h}-\Pi_{1}w_{h}$
with $\int_{\partial P}(w_{h}-\Pi_{1}w_{h})\,ds=0$ from
$\Pi_{1}=\Pi^{\nabla}_{1}$ in $V_{h}$. The combination of the above estimates
shows (3.11). The proof of (3.12) adapts the arguments in the above analysis
of $T_{3}$ and the definition of $\mathrm{osc}_{1}(f,\mathcal{T})$ in
Subsection 2.1 for the proof of
$\displaystyle(f-f_{h},w_{h})_{L^{2}(P)}=(f-\Pi_{1}f,w_{h}-\Pi_{1}w_{h})_{L^{2}(P)}\leq
C_{\text{PF}}|w_{h}|_{1,P}\,\mathrm{osc}_{1}(f,P).$
This concludes the proof. ∎
### 3.4 Nonconformity error
Enrichment operators play a vital role in the analysis of nonconforming finite
element methods [12]. For any $v_{h}\in V_{h},$ the objective is to find a
corresponding function $Jv_{h}\in H_{0}^{1}(\Omega)$. The idea is to map the
VEM nonconforming space into the Crouzeix-Raviart finite element space
$\displaystyle\text{CR}_{0}^{1}(\widehat{{\cal T}}):=\\{v\in{\cal
P}_{1}(\widehat{{\cal T}}):\;\forall\;E\in\widehat{{\cal E}}\quad v\;\text{is
continuous at mid}(E)\quad\text{and}\quad\forall\;E\in{\cal
E}(\partial\Omega)\quad v(\text{mid}(E))=0\\}$
with respect to the shape-regular triangulation $\widehat{{\cal T}}$ from
Remark 1. Let $\psi_{E}$ be the edge-oriented basis functions of
CR${}_{0}^{1}(\widehat{{\cal T}})$ with $\psi_{E}(\text{mid}\,E)=1$ and
$\psi_{E}(\text{mid}\,F)=0$ for all other
edges $F\in\widehat{{\cal E}}\setminus\\{E\\}.$ Define the interpolation
operator $I_{\text{CR}}:V_{h}\to\text{CR}_{0}^{1}(\widehat{{\cal T}})$, for
$v_{h}\in V_{h}$, by
$\displaystyle I_{\text{CR}}v_{h}=\sum_{F\in\widehat{{\cal
E}}}\left(\fint_{F}v_{h}\,ds\right)\psi_{F}.$ (3.14)
The definition of $V_{h}$ implies $\int_{F}[v_{h}]\,ds=0$ for $v_{h}\in V_{h}$
and for all $F\in\mathcal{E}$. Since $v_{h}|_{P}\in H^{1}(P)$, it follows
$\int_{F}[v_{h}]\,ds=0$ for all $F\in\widehat{{\cal E}}\setminus\mathcal{E}$.
This shows $\int_{F}v_{h|T^{\pm}}\,ds$ is unique for all edges $F=\partial
T^{+}\cap\partial T^{-}\in\widehat{{\cal E}}$ and, consequently,
$I_{\text{CR}}v_{h}$ is well-defined (independent of the choice of traces
selected in the evaluation of
$\fint_{F}v_{h}\,ds=\fint_{F}v_{h}|_{T^{+}}\,ds=\fint_{F}v_{h}|_{T^{-}}\,ds$). The approximation property
of $I_{\text{CR}}$ on each $T\in\widehat{{\cal T}}$ reads
$\displaystyle
h_{T}^{-1}\|v_{h}-I_{\text{CR}}v_{h}\|_{L^{2}(T)}+|v_{h}-I_{\text{CR}}v_{h}|_{1,T}\leq
2|v_{h}|_{1,T}$ (3.15)
(cf. [23, Thm 2.1] or [21, Thm 4] for explicit constants). Define an
enrichment operator $E_{h}:\text{CR}_{0}^{1}(\widehat{{\cal T}})\to
H_{0}^{1}(\Omega)$ by averaging the function values at each interior vertex
$z$, that is,
$\displaystyle E_{h}v_{\text{CR}}(z)=\frac{1}{|\widehat{{\cal
T}}(z)|}\sum_{T\in\widehat{{\cal T}}(z)}{v_{\text{CR}}}|_{T}(z)$ (3.16)
and zero on boundary vertices. In (3.16) the set $\widehat{{\cal
T}}(z):=\\{T\in\widehat{{\cal T}}\;|\;z\in T\\}$
of neighboring triangles has the cardinality $|\widehat{{\cal T}}(z)|\geq 3$.
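As a computational aside (a sketch, not part of the original text), the vertex averaging (3.16) reads as follows; the mesh data structure supplying the patch $\widehat{{\cal T}}(z)$ and the trace evaluator `trace(T, z)` is hypothetical:

```python
import numpy as np

def enrich_at_vertex(z, patch, trace, is_boundary):
    # E_h v_CR(z) from (3.16): average the (possibly distinct) traces
    # v_CR|_T(z) over the patch T(z) of triangles containing z, and set
    # the value to zero at boundary vertices.  'patch' lists the
    # triangles of T(z); 'trace(T, z)' evaluates v_CR|_T at the vertex z.
    if is_boundary[z]:
        return 0.0
    return np.mean([trace(T, z) for T in patch])
```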
The following lemma describes the construction of a modified companion
operator $J:V_{h}\to H_{0}^{1}(\Omega)$, which is a right-inverse of the
interpolation operator $I_{h}$ from Definition 2.7.
###### Lemma 3.3 (conforming companion operator).
There exists a linear map $J:V_{h}\to H^{1}_{0}(\Omega)$ and a universal
constant $C_{\mathrm{J}}\lesssim 1$ such that any $v_{h}\in V_{h}$ satisfies
$I_{h}Jv_{h}=v_{h}$ and
1. (a)
$\displaystyle\fint_{E}Jv_{h}\,ds=\fint_{E}v_{h}\,ds$ for any edge $E\in\widehat{{\cal
E}},$
2. (b)
$\displaystyle\nabla_{\mathrm{pw}}(v_{h}-Jv_{h})\perp\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2})$
in $L^{2}(\Omega;\mathbb{R}^{2}),$
3. (c)
$\displaystyle v_{h}-Jv_{h}\perp\mathcal{P}_{1}(\mathcal{T})$ in
$L^{2}(\Omega),$
4. (d)
$\|h_{\mathcal{T}}^{-1}(v_{h}-Jv_{h})\|_{L^{2}(\Omega)}+|v_{h}-Jv_{h}|_{1,\mathrm{pw}}\leq
C_{\mathrm{J}}|v_{h}|_{1,\mathrm{pw}}.$
###### Design of $J$ in Lemma 3.3.
Given $v_{h}\in V_{h}$, let
$v_{\text{CR}}:=I_{\text{CR}}v_{h}\in\text{CR}^{1}_{0}(\widehat{{\cal T}})$.
There exists an operator $J^{\prime}:\text{CR}_{0}^{1}(\widehat{{\cal T}})\to
H_{0}^{1}(\Omega)$ from [22, Prop. 2.3] such that any
$v_{\text{CR}}\in\text{CR}_{0}^{1}(\widehat{{\cal T}})$ satisfies
1. (a’)
$\displaystyle\fint_{E}J^{\prime}v_{\text{CR}}\,ds=\fint_{E}v_{\text{CR}}\,ds$ for any edge $E\in\widehat{{\cal
E}},$
2. (b’)
$\displaystyle\int_{P}\nabla_{\mathrm{pw}}(v_{\text{CR}}-J^{\prime}v_{\text{CR}})\,dx=0$
for all $P\in\mathcal{T}$,
3. (c’)
$\displaystyle\|h_{\widehat{{\cal
T}}}^{-1}(v_{\text{CR}}-J^{\prime}v_{\text{CR}})\|_{L^{2}(\Omega)}+|v_{\text{CR}}-J^{\prime}v_{\text{CR}}|_{1,\mathrm{pw}}\leq
C_{\mathrm{J^{\prime}}}\min_{v\in
H^{1}_{0}(\Omega)}|v_{\text{CR}}-v|_{1,\mathrm{pw}}$
with a universal constant $C_{\mathrm{J^{\prime}}}$ from [25]. Set
$v:=J^{\prime}I_{\text{CR}}v_{h}\in V:=H^{1}_{0}(\Omega)$. Recall that
$\widehat{{\cal T}}(P)$ is a shape-regular triangulation of $P$ into a finite
number of triangles. For each $T\in\widehat{{\cal T}}(P)$, let $b_{T}\in
W_{0}^{1,\infty}(T)$ denote the cubic bubble-function
$27\lambda_{1}\lambda_{2}\lambda_{3}$ for the barycentric co-ordinates
$\lambda_{1},\lambda_{2},\lambda_{3}\in\mathcal{P}_{1}(T)$ of $T$ with
$\fint_{T}b_{T}\,dx=9/20$ and $\|\nabla
b_{T}\|_{L^{2}(T)}\lesssim h_{T}^{-1}|T|^{1/2}\approx 1.$ Let $b_{T}$ be
extended by zero outside $T$ and, for $P\in\mathcal{T}$, define
$\displaystyle b_{P}:=\frac{20}{9}\sum_{T\in\widehat{{\cal T}}(P)}b_{T}\in
W_{0}^{1,\infty}(P)\subset W_{0}^{1,\infty}(\Omega)$ (3.17)
with $\fint_{P}b_{P}\,dx=1$ and $\|\nabla
b_{P}\|_{L^{2}(P)}\lesssim h_{P}^{-1}|P|^{1/2}\approx 1$. Let
$v_{P}\in\mathcal{P}_{1}(\mathcal{T})$ be the Riesz representation of the
linear functional $\mathcal{P}_{1}(\mathcal{T})\to\mathbb{R}$ defined by
$w_{1}\mapsto(v_{h}-v,w_{1})_{L^{2}(\Omega)}$ for
$w_{1}\in\mathcal{P}_{1}(\mathcal{T})$ in the Hilbert space
$\mathcal{P}_{1}(\mathcal{T})$ endowed with the weighted $L^{2}$ scalar
product $(b_{P}\bullet,\bullet)_{L^{2}(P)}$. Hence $v_{P}$ exists, is unique,
and satisfies $\Pi_{1}(v_{h}-v)=\Pi_{1}(b_{P}v_{P})$. Given the bubble-functions
$(b_{P}:P\in\mathcal{T})$ from (3.17) and the above functions
$(v_{P}:P\in\mathcal{T})$ for $v_{h}\in V_{h}$, define
$\displaystyle Jv_{h}:=v+\sum_{P\in\mathcal{T}}v_{P}b_{P}\in V.$ (3.18)
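For orientation (a worked equation added here, not part of the original argument), the normalisation $\fint_{T}b_{T}\,dx=9/20$ behind (3.17) follows from the standard integration formula $\int_{T}\lambda_{1}^{a}\lambda_{2}^{b}\lambda_{3}^{c}\,dx=\frac{a!\,b!\,c!\,2!}{(a+b+c+2)!}|T|$ for barycentric coordinates:

$\displaystyle\fint_{T}27\lambda_{1}\lambda_{2}\lambda_{3}\,dx=\frac{27}{|T|}\cdot\frac{1!\,1!\,1!\,2!}{(1+1+1+2)!}\,|T|=\frac{27}{60}=\frac{9}{20},$

so $\int_{P}b_{P}\,dx=\frac{20}{9}\sum_{T\in\widehat{{\cal T}}(P)}\frac{9}{20}|T|=|P|$, which confirms $\fint_{P}b_{P}\,dx=1$.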
###### Proof of (a).
Since $b_{P}$ vanishes at any $x\in E\in\mathcal{E}$, it follows for any
$E\in\widehat{\mathcal{E}}$ that
$\displaystyle\fint_{E}Jv_{h}\,ds=\fint_{E}v\,ds=\fint_{E}J^{\prime}v_{\text{CR}}\,ds=\fint_{E}v_{\text{CR}}\,ds=\fint_{E}v_{h}\,ds,$
where the definition of $v=J^{\prime}v_{\text{CR}}$, (a’), and
$v_{\text{CR}}=I_{\text{CR}}v_{h}$ lead to the second, third, and fourth
equality. This proves (a). ∎
###### Proof of (b).
An integration by parts and (a) show, for all $v_{h}\in V_{h}$ with $Jv_{h}$
from (3.18), that
$\displaystyle\int_{P}\nabla Jv_{h}\,dx=\int_{\partial
P}Jv_{h}\textbf{n}_{P}\,ds=\sum_{E\in\mathcal{E}(P)}\Big{(}\int_{E}Jv_{h}\textbf{n}_{E}\,ds\Big{)}=\sum_{E\in\mathcal{E}(P)}\Big{(}\int_{E}v_{h}\textbf{n}_{E}\,ds\Big{)}=\int_{P}\nabla
v_{h}\,dx.$
Since this holds for all $P\in\mathcal{T}$, it proves (b). ∎
###### Proof of (c).
This is $\Pi_{1}v_{h}=\Pi_{1}Jv_{h}$ and guaranteed by the design of $J$ in
(3.18). ∎
###### Proof of (d).
This relies on the definition of $J$ in (3.18) and $J^{\prime}$ with (c’).
Since (a) allows for $\int_{\partial P}(v_{h}-Jv_{h})\,ds=0$, the Poincaré-
Friedrichs inequality from Lemma 2.1.a implies
$\displaystyle h_{P}^{-1}\|v_{h}-Jv_{h}\|_{L^{2}(P)}\leq
C_{\text{PF}}|v_{h}-Jv_{h}|_{1,P}.$
Hence it remains to prove
$|v_{h}-Jv_{h}|_{1,\mathrm{pw}}\lesssim|v_{h}|_{1,\mathrm{pw}}.$ Triangle
inequalities with $v_{h},Jv_{h},v=J^{\prime}v_{\text{CR}}$ and
$v_{\text{CR}}=I_{\text{CR}}v_{h}$ show the first and second inequality in
$\displaystyle|v_{h}-Jv_{h}|_{1,\mathrm{pw}}-|v-Jv_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq|v-v_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq|v_{h}-I_{\text{CR}}v_{h}|_{1,\mathrm{pw}}+|v_{\text{CR}}-J^{\prime}v_{\text{CR}}|_{1,\mathrm{pw}}\leq(1+C_{\mathrm{J^{\prime}}})|v_{h}|_{1,\mathrm{pw}}$
(3.19)
with (b’) for
$|v_{\text{CR}}|_{1,\mathrm{pw}}=\|\Pi_{0}\nabla_{\mathrm{pw}}v_{h}\|_{L^{2}(\Omega)}\leq\|\nabla_{\mathrm{pw}}v_{h}\|_{L^{2}(\Omega)}=|v_{h}|_{1,\mathrm{pw}}$
in the last step. The equivalence of norms in the finite-dimensional space
$\mathcal{P}_{1}(P)$ assures the existence of a positive constant $C_{b}$,
independent of $h_{P}$, such that any $\chi\in\mathcal{P}_{1}(P)$ satisfies
the inverse inequalities
$\displaystyle C_{b}^{-1}\|\chi\|^{2}_{L^{2}(P)}\leq(b_{P},\chi^{2})_{L^{2}(P)}\leq
C_{b}\|\chi\|^{2}_{L^{2}(P)},$ (3.20)
$\displaystyle
C_{b}^{-1}\|\chi\|_{L^{2}(P)}\leq\|b_{P}\chi\|_{L^{2}(P)}+h_{P}\|\nabla(b_{P}\chi)\|_{L^{2}(P)}\leq
C_{b}\|\chi\|_{L^{2}(P)}.$ (3.21)
These estimates are completely standard on shape-regular triangles [2, p. 27]
or [37]; so they hold on each $T\in\widehat{{\cal T}}$ and, by definition of
$b_{P}$, their sum is (3.20)-(3.21). The analysis of the term
$|v-Jv_{h}|_{1,\mathrm{pw}}$ starts with one $P\in\mathcal{T}$ and (3.18) for
$\displaystyle|v-Jv_{h}|_{1,P}=|v_{P}b_{P}|_{1,P}\leq
C_{b}h_{P}^{-1}\|v_{P}\|_{L^{2}(P)}$ (3.22)
with (3.21) in the last step. The estimate (3.20) leads to the first
inequality in
$\displaystyle
C_{b}^{-1}\|v_{P}\|^{2}_{L^{2}(P)}\leq(b_{P}v_{P},v_{P})_{L^{2}(P)}=(v_{h}-v,v_{P})_{L^{2}(P)}\leq\|v_{h}-v\|_{L^{2}(P)}\|v_{P}\|_{L^{2}(P)}.$
The equality results from $\Pi_{1}(v_{h}-v)=\Pi_{1}(v_{P}b_{P})$ and
$v_{P}\in\mathcal{P}_{1}(\mathcal{T})$, while the last step is the Cauchy-
Schwarz inequality. Consequently, $\|v_{P}\|_{L^{2}(P)}\leq
C_{b}\|v_{h}-v\|_{L^{2}(P)}$. This and (3.22) show
$\displaystyle|v-Jv_{h}|_{1,\mathrm{pw}}\leq
C_{b}^{2}\|h^{-1}_{\mathcal{T}}(v-v_{h})\|_{L^{2}(\Omega)}\leq
C_{b}^{2}C_{\mathrm{PF}}|v-v_{h}|_{1,\mathrm{pw}}$
with $\int_{\partial P}(v-v_{h})\,ds=0$ from $(a)$ and hence the Poincaré-
Friedrichs inequality for $v-v_{h}$ from Lemma 2.1.a in the last step. Recall
$|v-v_{h}|_{1,\mathrm{pw}}\lesssim|v_{h}|_{1,\mathrm{pw}}$ from (3.19) to
conclude $|v-Jv_{h}|_{1,\mathrm{pw}}\lesssim|v_{h}|_{1,\mathrm{pw}}$ from the
previous displayed inequality. This concludes the proof of (d). ∎
###### Proof of $I_{h}J=\text{id}\;\text{ in}\;V_{h}$.
Definition 2.7 and Lemma 3.3.a show, for all $v_{h}\in V_{h}$, that
$\displaystyle
I_{h}Jv_{h}=\sum_{E\in\mathcal{E}}\Big(\fint_{E}Jv_{h}\,ds\Big)\psi_{E}=\sum_{E\in\mathcal{E}}\Big(\fint_{E}v_{h}\,ds\Big)\psi_{E}=v_{h}.$
This concludes the proof of Lemma 3.3. ∎
Since $V_{h}$ is not a subset of $H^{1}_{0}(\Omega)$ in general, the
substitution of a discrete function $v_{h}$ into the weak formulation leads to
a nonconformity error.
###### Lemma 3.4 (nonconformity error).
There exist positive universal constants $C_{\mathrm{NC}},C^{*}_{\mathrm{NC}}$
(depending on the coefficients $\textbf{A},\textbf{b}$ and the universal
constants $\rho,\sigma$) such that all $f,g\in L^{2}(\Omega)$ and all
$\mathcal{T}\in\mathbb{T}(\delta)$ (with the assumption
$h_{\text{max}}\leq\delta\leq 1$) satisfy $(a)$ and $(b)$.
$(a)$ The solution $u\in H^{1+\sigma}(\Omega)\cap H^{1}_{0}(\Omega)$ to
$(\ref{1})$ satisfies
$\displaystyle\sup_{0\neq v_{h}\in
V_{h}}\frac{|B_{\mathrm{pw}}(u,v_{h})-(f,v_{h})_{L^{2}(\Omega)}|}{\|v_{h}\|_{1,\mathrm{pw}}}\leq
C_{\mathrm{NC}}h_{\text{max}}^{\sigma}\|f\|_{L^{2}(\Omega)}.$ (3.23)
$(b)$ The solution $\Phi\in H^{1+\sigma}(\Omega)\cap H^{1}_{0}(\Omega)$ to the
dual problem $(\ref{5})$ satisfies
$\displaystyle\sup_{0\neq v_{h}\in
V_{h}}\frac{|B_{\mathrm{pw}}(v_{h},\Phi)-(g,v_{h})_{L^{2}(\Omega)}|}{\|v_{h}\|_{1,\mathrm{pw}}}\leq
C^{*}_{\mathrm{NC}}h_{\text{max}}^{\sigma}\|g\|_{L^{2}(\Omega)}.$ (3.24)
###### Proof of $(a)$.
Given $v_{h}\in V_{h}$, define $Jv_{h}\in V$ and the piecewise averages
$\overline{\textbf{A}}:=\Pi_{0}(\textbf{A}),\overline{\textbf{b}}:=\Pi_{0}(\textbf{b})$,
and $\overline{\gamma}:=\Pi_{0}(\gamma)$ of the coefficients
$\textbf{A},\textbf{b}$, and $\gamma$. The choice of test function
$v:=Jv_{h}\in V$ in the weak formulation (1.8) having extra properties
provides the terms with oscillations in the further analysis. Abbreviate
$\bm{\sigma}:=\textbf{A}\nabla u+\textbf{b}u$. The weak formulation (1.8),
Lemma 3.3.b-c, and the Cauchy-Schwarz inequality reveal that
$\displaystyle
B_{\mathrm{pw}}(u,v_{h})-(f,v_{h})_{L^{2}(\Omega)}=B_{\mathrm{pw}}(u,v_{h}-Jv_{h})-(f,v_{h}-Jv_{h})_{L^{2}(\Omega)}$
$\displaystyle\leq\|\bm{\sigma}-\Pi_{0}\bm{\sigma}\|_{L^{2}(\Omega)}\|\nabla_{\mathrm{pw}}(1-J)v_{h}\|_{L^{2}(\Omega)}+\|h_{\mathcal{T}}(1-\Pi_{1})(f-\gamma
u)\|_{L^{2}(\Omega)}\|h_{\mathcal{T}}^{-1}(1-J)v_{h}\|_{L^{2}(\Omega)}.$
(3.25)
The first term on the right-hand side of (3.25) involves the factor
$\displaystyle\|\bm{\sigma}-\Pi_{0}\bm{\sigma}\|_{L^{2}(\Omega)}$
$\displaystyle\leq\|\textbf{A}\nabla u-\Pi_{0}(\textbf{A}\nabla
u)\|_{L^{2}(\Omega)}+\|\textbf{b}u-\Pi_{0}(\textbf{b}u)\|_{L^{2}(\Omega)}$
$\displaystyle\leq\|(\textbf{A}-\overline{\textbf{A}})\nabla
u+\overline{\textbf{A}}(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}+\|(\textbf{b}-\overline{\textbf{b}})u+\overline{\textbf{b}}(1-\Pi_{0})u\|_{L^{2}(\Omega)}$
$\displaystyle\leq\Big{(}h_{\text{max}}(|\textbf{A}|_{1,\infty}+|\textbf{b}|_{1,\infty})+C_{\text{apx}}(h_{\text{max}}^{\sigma}\|\textbf{A}\|_{\infty}+h_{\text{max}}\|\textbf{b}\|_{\infty})\Big{)}\;\|u\|_{1+\sigma,\Omega}.$
The last inequality follows from the Lipschitz continuity of the coefficients
A and b, and the estimate (2.12). Lemma 3.3.d leads to the estimates
$\|\nabla_{\mathrm{pw}}(1-J)v_{h}\|_{L^{2}(\Omega)}\leq
C_{J}|v_{h}|_{1,\mathrm{pw}}$ and
$\displaystyle\|h_{\mathcal{T}}(1-\Pi_{1})(f-\gamma
u)\|_{L^{2}(\Omega)}\|h_{\mathcal{T}}^{-1}(1-J)v_{h}\|_{L^{2}(\Omega)}\leq\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})C_{J}|v_{h}|_{1,\mathrm{pw}}.$
The substitution of the previous estimates in (3.25) with
$h_{\mathrm{max}}\leq 1$ (from $\delta\leq 1$ by assumption) and the
regularity (1.5) show
$\displaystyle B_{\mathrm{pw}}(u,v_{h})-(f,v_{h})\leq
C_{\mathrm{NC}}h_{\text{max}}^{\sigma}\|f\|_{L^{2}(\Omega)}\|v_{h}\|_{1,\mathrm{pw}}$
with
$C_{\mathrm{NC}}:=C_{J}\Big{(}(|\textbf{A}|_{1,\infty}+|\textbf{b}|_{1,\infty}+C_{\text{apx}}(\|\textbf{A}\|_{\infty}+\|\textbf{b}\|_{\infty})+\|\gamma\|_{\infty})C_{\text{reg}}+1\Big{)}$.
This concludes the proof of Lemma 3.4.a. ∎
###### Proof of $(b)$.
The solution $\Phi\in V$ to (1.4) satisfies $B(v,\Phi)=(g,v)_{L^{2}(\Omega)}$
for all $v\in V.$ This implies
$B_{\mathrm{pw}}(v_{h},\Phi)-(g,v_{h})_{L^{2}(\Omega)}=B_{\mathrm{pw}}(v_{h}-Jv_{h},\Phi)-(g,v_{h}-Jv_{h})_{L^{2}(\Omega)}.$
The arguments in the proof of $(a)$ lead to the bound (3.24) with
$C^{*}_{\mathrm{NC}}:=C_{J}\Big{(}(|\textbf{A}|_{1,\infty}+C_{\text{apx}}\|\textbf{A}\|_{\infty}+\|\textbf{b}\|_{\infty}+\|\gamma\|_{\infty})C^{*}_{\text{reg}}+1\Big{)}.$
The remaining analogous details are omitted in the proof of Lemma 3.4.b for
brevity. ∎
## 4 A priori error analysis
This section focuses on the stability, existence, and uniqueness of the
discrete solution $u_{h}$. The a priori error analysis uses the discrete inf-
sup condition.
### 4.1 Existence and uniqueness of the discrete solution
###### Theorem 4.1 (stability).
There exist positive constants $\delta\leq 1$ and $C_{\mathrm{stab}}$
(depending on $\alpha,\beta,\sigma,\rho,$ and $C_{\mathrm{F}}$) such that, for
all $\mathcal{T}\in\mathbb{T}(\delta)$ and for all $f\in L^{2}(\Omega)$, the
discrete problem (3.8) has a unique solution $u_{h}\in V_{h}$ and
$\displaystyle|u_{h}|_{1,\mathrm{pw}}\leq
C_{\mathrm{stab}}\|f_{h}\|_{V_{h}^{*}}.$
###### Proof.
In the first part of the proof, suppose there exists some solution $u_{h}\in
V_{h}$ to the discrete problem (3.8) for some $f\in L^{2}(\Omega)$. (This is
certainly true for $f\equiv 0\equiv u_{h}$; the general case is discussed at
the end of the proof and leads to the uniqueness of
discrete solutions.) Since $u_{h}$ satisfies a G\aa{}rding-type
inequality in Proposition 3.1.b,
$\displaystyle\alpha|u_{h}|_{1,\mathrm{pw}}^{2}$
$\displaystyle\leq\beta\|u_{h}\|^{2}_{L^{2}(\Omega)}+B_{h}(u_{h},u_{h})=\beta\|u_{h}\|^{2}_{L^{2}(\Omega)}+(f_{h},u_{h})_{L^{2}(\Omega)}.$
This, (2.14), and the definition of the dual norm in (3.12) lead to
$\displaystyle\alpha|u_{h}|_{1,\mathrm{pw}}\leq\beta
C_{\text{F}}\|u_{h}\|_{L^{2}(\Omega)}+\|f_{h}\|_{V_{h}^{*}}.$ (4.1)
Given $g:=u_{h}\in L^{2}(\Omega)$, let $\Phi\in V\cap H^{1+\sigma}(\Omega)$
solve the dual problem ${\cal L}^{*}\Phi=g$ and let $I_{h}\Phi\in V_{h}$ be
the interpolation of $\Phi$ from Subsection 2.4. Elementary algebra shows
$\displaystyle\|u_{h}\|^{2}_{L^{2}(\Omega)}$
$\displaystyle=\Big{(}(g,u_{h})_{L^{2}(\Omega)}-B_{\mathrm{pw}}(u_{h},\Phi)\Big{)}+B_{\mathrm{pw}}(u_{h},\Phi-
I_{h}\Phi)$
$\displaystyle\quad+\Big{(}B_{\mathrm{pw}}(u_{h},I_{h}\Phi)-B_{h}(u_{h},I_{h}\Phi)\Big{)}+(f_{h},I_{h}\Phi)_{L^{2}(\Omega)}.$
(4.2)
Rewrite the part of the third term on the right-hand side of (4.2) that
corresponds to diffusion as
$\displaystyle
a^{P}(u_{h},I_{h}\Phi)-a_{h}^{P}(u_{h},I_{h}\Phi)=(\textbf{A}\nabla
u_{h},\nabla(1-\Pi_{1})I_{h}\Phi)_{L^{2}(P)}$
$\displaystyle+(\nabla(1-\Pi_{1})u_{h},(\textbf{A}-\Pi_{0}\textbf{A})(\nabla\Pi_{1}I_{h}\Phi))_{L^{2}(P)}-S^{P}\big{(}(1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}\Phi\big{)}.$
The Cauchy-Schwarz inequality in the semi-scalar product
$S^{P}(\bullet,\bullet)$, and (3.5) with the upper bound
$\|\textbf{A}\|_{\infty}$ for the coefficient A in $a^{P}(\bullet,\bullet)$
lead to the estimate
$\displaystyle C_{s}^{-1}$ $\displaystyle
S^{P}\big{(}(1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}\Phi\big{)}\leq|(1-\Pi_{1})u_{h}|_{1,P}|(1-\Pi_{1})I_{h}\Phi|_{1,P}$
$\displaystyle\qquad\leq\|\textbf{A}\|_{\infty}|u_{h}|_{1,P}\Big{(}\|\nabla(I_{h}\Phi-\Phi)\|_{L^{2}(P)}+\|\nabla(1-\Pi_{1}I_{h})\Phi\|_{L^{2}(P)}\Big{)}$
$\displaystyle\qquad\leq\|\textbf{A}\|_{\infty}C_{\text{apx}}\Big{(}2+C_{\mathrm{PF}}+C_{\text{Itn}}\Big{)}h_{P}^{\sigma}|u_{h}|_{1,P}|\Phi|_{1+\sigma,P}$
(4.3)
with Theorem 2.8.b followed by (2.12) in the final step. This and Theorem 2.8
imply that
$\displaystyle|a^{P}(u_{h},I_{h}\Phi)-a_{h}^{P}(u_{h},I_{h}\Phi)|$
$\displaystyle\leq h_{P}^{\sigma}|u_{h}|_{1,P}\|\Phi\|_{1+\sigma,P}$
$\displaystyle\quad\times\Big{(}\|\textbf{A}\|_{\infty}C_{\text{apx}}(2+C_{\mathrm{PF}}+C_{\text{Itn}})(1+C_{s})+|\textbf{A}|_{1,\infty}C_{\text{Itn}}\Big{)}.$
The terms $b^{P}-b_{h}^{P}$ and $c^{P}-c_{h}^{P}$ are controlled by
$\displaystyle|b^{P}(u_{h},I_{h}\Phi)-b_{h}^{P}(u_{h},I_{h}\Phi)|+|c^{P}(u_{h},I_{h}\Phi)-c_{h}^{P}(u_{h},I_{h}\Phi)|$
$\displaystyle\leq
h_{P}^{\sigma}\|\Phi\|_{1+\sigma,P}\big{(}\|\textbf{b}\|_{\infty}(C_{\text{apx}}(2+C_{\mathrm{PF}}+C_{\text{Itn}})\|u_{h}\|_{L^{2}(P)}+C_{\mathrm{Itn}}C_{\mathrm{PF}}|u_{h}|_{1,P})$
$\displaystyle\hskip
71.13188pt+\|\gamma\|_{\infty}C_{\mathrm{PF}}(C_{\text{Itn}}\|u_{h}\|_{L^{2}(P)}+|u_{h}|_{1,P})\big{)}.$
The combination of the previous four displayed estimates with Lemma 2.6 leads
to an estimate for $P$. The sum over all polygonal domains $P\in\mathcal{T}$
reads
$\displaystyle B_{\mathrm{pw}}(u_{h},I_{h}\Phi)-B_{h}(u_{h},I_{h}\Phi)\leq
C_{d}h_{\text{max}}^{\sigma}|u_{h}|_{1,\mathrm{pw}}\|\Phi\|_{1+\sigma,\Omega}$
(4.4)
with a universal constant $C_{d}$. The bound for (4.2) results from Lemma
3.4.b for the first term, the boundedness of $B_{\mathrm{pw}}$ (with a
universal constant
$M_{b}:=\|\textbf{A}\|_{\infty}+C_{\mathrm{F}}\|\textbf{b}\|_{\infty}+C_{\mathrm{F}}^{2}\|\gamma\|_{\infty}$)
and (2.15) for the second term, (4.4) for the third term, and Theorem 2.8.a
for the last term on the right-hand side of (4.2). This shows
$\displaystyle\|u_{h}\|^{2}_{L^{2}(\Omega)}$
$\displaystyle\leq\Big{(}C^{*}_{\mathrm{NC}}+C_{\text{I}}M_{b}+C_{d}\Big{)}h_{\text{max}}^{\sigma}|u_{h}|_{1,\mathrm{pw}}\|\Phi\|_{1+\sigma,\Omega}+C_{\text{Itn}}\|f_{h}\|_{V_{h}^{*}}\|\Phi\|_{1,\Omega}.$
This and the regularity estimate (1.5) lead to
$C_{3}=C^{*}_{\mathrm{NC}}+C_{\text{I}}M_{b}+C_{d}$ in
$\displaystyle\|u_{h}\|_{L^{2}(\Omega)}\leq
C_{3}\,C^{*}_{\text{reg}}h_{\text{max}}^{\sigma}|u_{h}|_{1,\mathrm{pw}}+C_{\text{Itn}}\|f_{h}\|_{V_{h}^{*}}.$
The substitution of this in (4.1) proves
$\displaystyle\alpha|u_{h}|_{1,\mathrm{pw}}\leq\beta
C_{\text{F}}C_{3}C^{*}_{\text{reg}}h_{\text{max}}^{\sigma}|u_{h}|_{1,\mathrm{pw}}+(\beta
C_{\text{F}}C_{\text{Itn}}+1)\|f_{h}\|_{V_{h}^{*}}.$ (4.5)
For all $0<h_{\text{max}}\leq\delta:=(\frac{\alpha}{2\beta
C_{\text{F}}C_{3}C^{*}_{\text{reg}}})^{1/\sigma}$, the constant
$\overline{c}=(1-\frac{\beta}{\alpha}C_{\text{F}}C_{3}C^{*}_{\text{reg}}h_{\text{max}}^{\sigma})$
is positive and $C_{\text{stab}}:=\frac{\beta
C_{\text{F}}C_{\text{Itn}}+1}{\alpha-\beta
C_{\mathrm{F}}C_{3}C^{*}_{\text{reg}}\delta^{\sigma}}$ is well-defined. This
leads in (4.5) to
$\displaystyle|u_{h}|_{1,\mathrm{pw}}\leq
C_{\text{stab}}\|f_{h}\|_{V_{h}^{*}}.$ (4.6)
In the last part of the proof, suppose $f_{h}\equiv 0$ and let $u_{h}$ be any
solution to the resulting homogeneous linear discrete system. The stability
result (4.6) proves $u_{h}\equiv 0$. Hence, the linear system of equations
(3.8) has a unique solution and the coefficient matrix is regular. This proves
that there exists a unique solution $u_{h}$ to (3.8) for any right-hand side
$f_{h}\in V_{h}^{*}$. The combination of this with (4.6) concludes the proof.
∎
An immediate consequence of Theorem 4.1 is the following discrete inf-sup
estimate.
###### Theorem 4.2 (discrete inf-sup).
There exist $0<\delta\leq 1$ and $\overline{\beta}_{0}>0$ such that, for all
$\mathcal{T}\in\mathbb{T}(\delta)$,
$\displaystyle\overline{\beta}_{0}\leq\inf_{0\neq u_{h}\in V_{h}}\sup_{0\neq
v_{h}\in
V_{h}}\frac{B_{h}(u_{h},v_{h})}{|u_{h}|_{1,\mathrm{pw}}|v_{h}|_{1,\mathrm{pw}}}.$
(4.7)
###### Proof.
Define the operator ${\cal L}_{h}:V_{h}\to V_{h}^{*},$ $v_{h}\mapsto
B_{h}(v_{h},\bullet)$. The stability Theorem 4.1 can be interpreted as
follows: For any $f_{h}\in V_{h}^{*}$ there exists $u_{h}\in V_{h}$ such that
${\cal L}_{h}u_{h}=f_{h}$ and
$\displaystyle\overline{\beta}_{0}|u_{h}|_{1,\mathrm{pw}}\leq\|f_{h}\|_{V_{h}^{*}}=\sup_{0\neq
v_{h}\in V_{h}}\frac{(f_{h},v_{h})}{|v_{h}|_{1,\mathrm{pw}}}=\sup_{0\neq
v_{h}\in V_{h}}\frac{B_{h}(u_{h},v_{h})}{|v_{h}|_{1,\mathrm{pw}}}.$
The discrete problem $B_{h}(u_{h},\bullet)=(f_{h},\bullet)$ has a unique
solution in $V_{h}$. Therefore, $f_{h}$ and $u_{h}$ are in one-to-one
correspondence and the last displayed estimate holds for any $u_{h}\in V_{h}$.
The infimum over $u_{h}\in V_{h}$ therein proves (4.7) with
$\overline{\beta}_{0}=C_{\text{stab}}^{-1}$. ∎
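As a numerical aside (a sketch, not part of the original analysis), the left-hand side of (4.7) can be evaluated from the assembled matrices; the basis, the matrix names, and the SPD Gram matrix are assumptions:

```python
import numpy as np
from scipy.linalg import sqrtm, svdvals

def discrete_infsup(B, H):
    # Numerical evaluation of the inf-sup in (4.7): with
    # B[i, j] = B_h(phi_j, phi_i) in a basis (phi_k) of V_h and H the
    # (SPD, by Lemma 2.6) Gram matrix of the |.|_{1,pw} scalar product,
    # substituting u = H^{-1/2} a and v = H^{-1/2} b turns the inf-sup
    # into the smallest singular value of H^{-1/2} B H^{-1/2}.
    Hs = np.real(np.linalg.inv(sqrtm(H)))
    return svdvals(Hs @ B @ Hs).min()
```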
### 4.2 A priori error estimates
This subsection establishes the error estimate in the energy norm
$|\cdot|_{1,\mathrm{pw}}$ and in the $L^{2}$ norm. The discrete inf-sup
condition allows for an error estimate in the $H^{1}$ norm and an Aubin-
Nitsche duality argument leads to an error estimate in the $L^{2}$ norm.
Recall $u\in H^{1}_{0}(\Omega)$ is a unique solution of (1.8) and $u_{h}\in
V_{h}$ is a unique solution of (3.8). Recall the definition of the bilinear
form $s_{h}(\cdot,\cdot)$ from Section 3.1 and define the induced seminorm
$|v_{h}|_{\mathrm{s}}:=s_{h}(v_{h},v_{h})^{1/2}$ for $v_{h}\in V_{h}$ as a
part of the norm $\|\cdot\|_{h}$ from Remark 6.
###### Theorem 4.3 (error estimate).
Set $\bm{\sigma}:=\textbf{A}\nabla u+\textbf{b}u\in H(\text{div},\Omega)$.
There exist positive constants $C_{4},C_{5},$ and $\delta$ such that, for all
$\mathcal{T}\in\mathbb{T}(\delta)$, the discrete problem (3.8) has a unique
solution $u_{h}\in V_{h}$ and
$\displaystyle|u-u_{h}|_{1,\mathrm{pw}}+|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+h_{\mathrm{max}}^{-\sigma}(\|u-u_{h}\|_{L^{2}(\Omega)}+\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)})+|u_{h}|_{\mathrm{s}}+|I_{h}u-u_{h}|_{\mathrm{s}}$
$\displaystyle\quad\leq
C_{4}\Big{(}\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)}+\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}+\mathrm{osc}_{1}(f-\gamma u,\mathcal{T})\Big{)}\leq
C_{5}h_{\mathrm{max}}^{\sigma}\|f\|_{L^{2}(\Omega)}.$ (4.8)
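In computations, the rate $\sigma$ predicted by (4.8) is typically verified via the experimental order of convergence (a small utility sketch, not part of the original text):

```python
import numpy as np

def eoc(h, e):
    # Experimental order of convergence: for errors e_k on meshes of
    # size h_k, EOC_k = log(e_k/e_{k+1}) / log(h_k/h_{k+1}) should
    # approach the rate sigma in (4.8) as the mesh is refined.
    h, e = np.asarray(h, float), np.asarray(e, float)
    return np.log(e[:-1] / e[1:]) / np.log(h[:-1] / h[1:])
```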
###### Proof.
Step 1 (initialization). Let $I_{h}u\in V_{h}$ be the interpolation of $u$
from Definition 2.7. The discrete inf-sup condition (4.7) for $I_{h}u-u_{h}\in
V_{h}$ leads to some $v_{h}\in V_{h}$ with $|v_{h}|_{1,\mathrm{pw}}\leq 1$
such that
$\displaystyle\overline{\beta}_{0}|I_{h}u-u_{h}|_{1,\mathrm{pw}}=B_{h}(I_{h}u-u_{h},v_{h}).$
Step 2 (error estimate for $|u-u_{h}|_{1,\mathrm{pw}}$). Rewrite the last
equation with the continuous and the discrete problem (1.8) and (3.8) as
$\displaystyle\overline{\beta}_{0}|I_{h}u-u_{h}|_{1,\mathrm{pw}}=B_{h}(I_{h}u,v_{h})-B(u,v)+(f,v)_{L^{2}(\Omega)}-(f_{h},v_{h})_{L^{2}(\Omega)}.$
This equality is rewritten with the definition of $B(u,v)$ in (1.7), the
definition of $B_{h}(I_{h}u,v_{h})$ in Section 3.1, and with $f_{h}=\Pi_{1}f$.
Recall $v:=Jv_{h}\in V$ from Lemma 3.3 and recall
$\nabla_{\mathrm{pw}}\Pi_{1}I_{h}u=\Pi_{0}\nabla u$ from (2.17). This results
in
LHS
$\displaystyle:=\overline{\beta}_{0}|I_{h}u-u_{h}|_{1,\mathrm{pw}}-s_{h}((1-\Pi_{1})I_{h}u,(1-\Pi_{1})v_{h})$
$\displaystyle=(\textbf{A}\Pi_{0}\nabla
u+\textbf{b}\Pi_{1}I_{h}u,\nabla_{\mathrm{pw}}\Pi_{1}v_{h})_{L^{2}(\Omega)}+(\gamma\Pi_{1}I_{h}u,\Pi_{1}v_{h})_{L^{2}(\Omega)}-(\bm{\sigma},\nabla
v)_{L^{2}(\Omega)}$ $\displaystyle\qquad+(f-\gamma
u,v)_{L^{2}(\Omega)}-(f,\Pi_{1}v_{h})_{L^{2}(\Omega)}.$
Abbreviate $w:=v-\Pi_{1}v_{h}$ and observe the orthogonalities
$\nabla_{\mathrm{pw}}w\perp\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2})$ in
$L^{2}(\Omega;\mathbb{R}^{2})$ and $w\perp\mathcal{P}_{1}(\mathcal{T})$ in
$L^{2}(\Omega)$ from Lemma 3.3.b-c and the definition of $\Pi_{1}$ with
$\Pi_{1}=\Pi^{\nabla}_{1}$ in $V_{h}$. Lemma 3.3.d, the bound
$|(1-\Pi^{\nabla}_{1})v_{h}|_{1,\mathrm{pw}}\leq|v_{h}|_{1,\mathrm{pw}}\leq
1$, and the Poincaré-Friedrichs inequality for $v_{h}-\Pi^{\nabla}_{1}v_{h}$
from Lemma 2.1.a lead to
$\displaystyle|w|_{1,\mathrm{pw}}$
$\displaystyle\leq|v-v_{h}|_{1,\mathrm{pw}}+|v_{h}-\Pi_{1}v_{h}|_{1,\mathrm{pw}}\leq
C_{\mathrm{J}}+1,$ (4.9)
$\displaystyle\|h_{\mathcal{T}}^{-1}w\|_{L^{2}(\Omega)}$
$\displaystyle\leq\|h_{\mathcal{T}}^{-1}(v-v_{h})\|_{L^{2}(\Omega)}+\|h_{\mathcal{T}}^{-1}(v_{h}-\Pi_{1}v_{h})\|_{L^{2}(\Omega)}\leq
C_{\mathrm{J}}+C_{\mathrm{PF}}.$ (4.10)
Elementary algebra and the above orthogonalities prove that
LHS $\displaystyle=((\textbf{A}-\Pi_{0}\textbf{A})(\Pi_{0}-1)\nabla
u+\textbf{b}(\Pi_{1}I_{h}u-u),\nabla_{\mathrm{pw}}\Pi_{1}v_{h})_{L^{2}(\Omega)}-((1-\Pi_{0})\bm{\sigma},\nabla_{\mathrm{pw}}w)_{L^{2}(\Omega)}$
$\displaystyle\qquad+(\gamma(\Pi_{1}I_{h}u-u),\Pi_{1}v_{h})_{L^{2}(\Omega)}+(h_{\mathcal{T}}(1-\Pi_{1})(f-\gamma
u),h_{\mathcal{T}}^{-1}w)_{L^{2}(\Omega)}$
$\displaystyle\leq\Big{(}|\textbf{A}|_{1,\infty}+(1+C_{\mathrm{PF}})(\|\textbf{b}\|_{\infty}+C_{\mathrm{F}}\|\gamma\|_{\infty})\Big{)}h_{\text{max}}\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}$
$\displaystyle\quad+(C_{\mathrm{J}}+1)\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)}+(C_{\mathrm{J}}+C_{\text{PF}})\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})$ (4.11)
with the Lipschitz continuity of A, Theorem 2.8.b, the stabilities of $\Pi_{1}$
from (2.11), and (4.9)-(4.10) in the last step. The definition of stability
term (3.5) and Theorem 2.8.b lead to
$\displaystyle C_{s}^{-1}s_{h}((1-\Pi_{1})I_{h}u,(1-\Pi_{1})v_{h})$
$\displaystyle\leq\|\textbf{A}\|_{\infty}|(1-\Pi_{1})I_{h}u|_{1,\mathrm{pw}}|(1-\Pi_{1})v_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq\|\textbf{A}\|_{\infty}(|I_{h}u-u|_{1,\mathrm{pw}}+|u-\Pi_{1}I_{h}u|_{1,\mathrm{pw}})|v_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq\|\textbf{A}\|_{\infty}(2+C_{\text{Itn}}+C_{\mathrm{PF}})\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}|v_{h}|_{1,\mathrm{pw}}.$ (4.12)
The triangle inequality, the bound (2.15) for the term
$|u-I_{h}u|_{1,\mathrm{pw}}$, and (4.11)-(4.12) for the term
$|I_{h}u-u_{h}|_{1,\mathrm{pw}}$ conclude the proof of (4.8) for the term
$|u-u_{h}|_{1,{\mathrm{pw}}}$.
Step $3$ (duality argument). To prove the bound for $u-u_{h}$ in the $L^{2}$
norm with a duality technique, let $g:=I_{h}u-u_{h}\in L^{2}(\Omega)$. The
solution $\Phi\in H^{1}_{0}(\Omega)\cap H^{1+\sigma}(\Omega)$ to the dual
problem (1.4) satisfies the elliptic regularity (1.5),
$\displaystyle\|\Phi\|_{1+\sigma,\Omega}\leq
C^{*}_{\text{reg}}\|I_{h}u-u_{h}\|_{L^{2}(\Omega)}.$ (4.13)
Step $4$ (error estimate for $\|u-u_{h}\|_{L^{2}(\Omega)}$). Let $I_{h}\Phi\in
V_{h}$ be the interpolation of $\Phi$ from Definition 2.7. Elementary algebra
reveals the identity
$\displaystyle\|g\|^{2}_{L^{2}(\Omega)}$
$\displaystyle=((g,g)_{L^{2}(\Omega)}-B_{\mathrm{pw}}(g,\Phi))+B_{\mathrm{pw}}(g,\Phi-
I_{h}\Phi)$
$\displaystyle\quad+(B_{\mathrm{pw}}(g,I_{h}\Phi)-B_{h}(g,I_{h}\Phi))+B_{h}(g,I_{h}\Phi).$
(4.14)
The bound (4.4) with $g$ as the first argument shows
$\displaystyle B_{\mathrm{pw}}(g,I_{h}\Phi)-B_{h}(g,I_{h}\Phi)\leq
C_{d}h_{\text{max}}^{\sigma}|g|_{1,\mathrm{pw}}\|\Phi\|_{1+\sigma,\Omega}.$
This controls the third term in (4.14), Lemma 3.4.b controls the first term,
the boundedness of $B_{\mathrm{pw}}$ and the interpolation error estimate
(2.15) control the second term on the right-hand side of (4.14). This results
in
$\displaystyle\|I_{h}u-u_{h}\|^{2}_{L^{2}(\Omega)}\leq(C^{*}_{\mathrm{NC}}+C_{\mathrm{I}}M_{b}+C_{d})h_{\mathrm{max}}^{\sigma}|g|_{1,\mathrm{pw}}\|\Phi\|_{1+\sigma,\Omega}+B_{h}(g,I_{h}\Phi).$
(4.15)
It remains to bound $B_{h}(g,I_{h}\Phi)$. The continuous and the discrete
problem (1.8) and (3.8) imply
$\displaystyle
B_{h}(g,I_{h}\Phi)=B_{h}(I_{h}u,I_{h}\Phi)-B(u,\Phi)+(f,\Phi)_{L^{2}(\Omega)}-(f_{h},I_{h}\Phi)_{L^{2}(\Omega)}.$
The definition of $B_{h}$ and $\Pi_{0}$ lead to
$\displaystyle
B_{h}(g,I_{h}\Phi)-s_{h}((1-\Pi_{1})I_{h}u,(1-\Pi_{1})I_{h}\Phi)$
$\displaystyle=((\textbf{A}-\Pi_{0}\textbf{A})(\Pi_{0}-1)\nabla
u+\textbf{b}(\Pi_{1}I_{h}u-u),\nabla_{\mathrm{pw}}\Pi_{1}I_{h}\Phi)_{L^{2}(\Omega)}+(\gamma(\Pi_{1}I_{h}u-u),\Pi_{1}I_{h}\Phi)_{L^{2}(\Omega)}$
$\displaystyle\qquad-((1-\Pi_{0})\bm{\sigma},\nabla_{\mathrm{pw}}(1-\Pi_{1}I_{h})\Phi)_{L^{2}(\Omega)}+(f-\gamma
u,\Phi-\Pi_{1}I_{h}\Phi)_{L^{2}(\Omega)}.$ (4.16)
The bound for the stability term as in (4.12) is
$\displaystyle s_{h}((1-\Pi_{1})I_{h}u,(1-\Pi_{1})I_{h}\Phi)$
$\displaystyle\leq
C_{s}\|\textbf{A}\|_{\infty}|(1-\Pi_{1})I_{h}u|_{1,\mathrm{pw}}|(1-\Pi_{1})I_{h}\Phi|_{1,\mathrm{pw}}$
$\displaystyle\leq
C_{s}\|\textbf{A}\|_{\infty}(2+C_{\mathrm{Itn}}+C_{\mathrm{PF}})^{2}C_{\text{apx}}h_{\text{max}}^{\sigma}\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}|\Phi|_{1+\sigma,\Omega}.$ (4.17)
Step $5$ (oscillation). The last term in (4.16) is of optimal order
$O(h_{\text{max}}^{1+\sigma})$, but the following arguments allow us to rewrite it
as an oscillation. Recall the bubble-function $b_{\mathcal{T}}|_{P}:=b_{P}\in
H^{1}_{0}(P)$ from (3.17) extended by zero outside $P$. Given
$\Psi:=\Phi-\Pi_{1}I_{h}\Phi$, let $\Psi_{1}\in\mathcal{P}_{1}(\mathcal{T})$
be the Riesz representation of the linear functional
$\mathcal{P}_{1}(\mathcal{T})\to\mathbb{R}$ defined by
$w_{1}\mapsto(\Psi,w_{1})_{L^{2}(\Omega)}$ in the Hilbert space
$\mathcal{P}_{1}(\mathcal{T})$ endowed with the weighted scalar product
$(b_{\mathcal{T}}\bullet,\bullet)_{L^{2}(\Omega)}$. That means
$\Pi_{1}(b_{\mathcal{T}}\Psi_{1})=\Pi_{1}\Psi$. The identity $(f-\gamma
u,b_{\mathcal{T}}\Psi_{1})_{L^{2}(\Omega)}=(\bm{\sigma},\nabla(b_{\mathcal{T}}\Psi_{1}))_{L^{2}(\Omega)}$
follows from (1.8) with the test function $b_{\mathcal{T}}\Psi_{1}\in
H^{1}_{0}(\Omega)$. The $L^{2}$ orthogonalities $\Psi-
b_{\mathcal{T}}\Psi_{1}\perp\mathcal{P}_{1}(\mathcal{T})$ in $L^{2}(\Omega)$
and
$\nabla(b_{\mathcal{T}}\Psi_{1})\perp\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2})$
in $L^{2}(\Omega;\mathbb{R}^{2})$ allow the rewriting of the latter identity
as
$\displaystyle(f-\gamma
u,\Psi)_{L^{2}(\Omega)}=(h_{\mathcal{T}}(1-\Pi_{1})(f-\gamma
u),h_{\mathcal{T}}^{-1}(\Psi-
b_{\mathcal{T}}\Psi_{1}))_{L^{2}(\Omega)}+((1-\Pi_{0})\bm{\sigma},\nabla(b_{\mathcal{T}}\Psi_{1}))_{L^{2}(\Omega)}$
$\displaystyle\qquad\leq\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})\|h_{\mathcal{T}}^{-1}(\Psi-
b_{\mathcal{T}}\Psi_{1})\|_{L^{2}(\Omega)}+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)}|b_{\mathcal{T}}\Psi_{1}|_{1,\mathrm{pw}}.$
(4.18)
It remains to control the terms $\|h_{\mathcal{T}}^{-1}(\Psi-
b_{\mathcal{T}}\Psi_{1})\|_{L^{2}(\Omega)}$ and
$|b_{\mathcal{T}}\Psi|_{1,\mathrm{pw}}$. The definition of $I_{h}$ and the
definition of $\Pi^{\nabla}_{1}$ with $\Pi_{1}=\Pi^{\nabla}_{1}$ in $V_{h}$
imply $\int_{\partial P}\Psi\,ds=\int_{\partial
P}(\Phi-\Pi_{1}I_{h}\Phi)\,ds=0$, so the Poincaré-Friedrichs inequality for
$\Psi$ from Lemma 2.1.a applies on each $P\in\mathcal{T}$. This shows
$\displaystyle\|h_{\mathcal{T}}^{-1}\Psi\|_{L^{2}(\Omega)}\leq
C_{\mathrm{PF}}|\Psi|_{1,\mathrm{pw}}\leq
C_{\mathrm{PF}}C_{\text{apx}}h_{\text{max}}^{\sigma}|\Phi|_{1+\sigma,\Omega}$
(4.19)
with Theorem 2.8.b and (2.12) in the last inequality. Since $b_{P}\Psi_{1}\in
H^{1}_{0}(P)$ for $P\in\mathcal{T}$, the Poincaré-Friedrichs inequality from
Lemma 2.1.a leads to
$\displaystyle\|h_{P}^{-1}(b_{P}\Psi_{1})\|_{L^{2}(P)}\leq
C_{\mathrm{PF}}|b_{P}\Psi_{1}|_{1,P}.$ (4.20)
The first estimate in (3.20), the identity
$\Pi_{1}(b_{\mathcal{T}}\Psi_{1})=\Pi_{1}\Psi$, and the Cauchy-Schwarz
inequality imply
$\displaystyle
C_{b}^{-1}\|h_{P}^{-1}\Psi_{1}\|_{L^{2}(P)}^{2}\leq\|h_{P}^{-1}b_{P}^{1/2}\Psi_{1}\|_{L^{2}(P)}^{2}=(h_{P}^{-1}\Psi_{1},h_{P}^{-1}\Psi)_{L^{2}(P)}\leq\|h_{P}^{-1}\Psi_{1}\|_{L^{2}(P)}\|h_{P}^{-1}\Psi\|_{L^{2}(P)}.$
This proves $\|h_{P}^{-1}\Psi_{1}\|_{L^{2}(P)}\leq
C_{b}\|h_{P}^{-1}\Psi\|_{L^{2}(P)}$. The second estimate in (3.21) followed by
the first estimate in (3.20) leads to the first inequality and the arguments
as above lead to the second inequality in
$\displaystyle
C_{b}^{-3/2}|b_{P}\Psi_{1}|_{1,P}\leq\|h_{P}^{-1}b_{P}^{1/2}\Psi_{1}\|_{L^{2}(P)}\leq\|h_{P}^{-1}\Psi_{1}\|_{L^{2}(P)}^{1/2}\|h_{P}^{-1}\Psi\|_{L^{2}(P)}^{1/2}$
$\displaystyle\leq C_{b}^{1/2}\|h_{P}^{-1}\Psi\|_{L^{2}(P)}$
with $\|h_{P}^{-1}\Psi_{1}\|_{L^{2}(P)}^{1/2}\leq
C_{b}^{1/2}\|h_{P}^{-1}\Psi\|_{L^{2}(P)}^{1/2}$ from above in the last step.
The combination of the previous displayed estimate and (4.18)-(4.20) results
with $C_{6}:=C_{\mathrm{PF}}C_{\text{apx}}(1+C_{b}^{2}(1+C_{\mathrm{PF}}))$ in
$\displaystyle(f-\gamma u,\Psi)_{L^{2}(\Omega)}\leq
C_{6}(\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)})h_{\text{max}}^{\sigma}|\Phi|_{1+\sigma,\Omega}.$
(4.21)
Step $6$ (continued proof of estimate for $\|u-u_{h}\|_{L^{2}(\Omega)}$). The
estimate in Step 2 for $|g|_{1,\mathrm{pw}}$, (4.15)-(4.17), and (4.21) with
the regularity (4.13) show
$\displaystyle\|I_{h}u-u_{h}\|_{L^{2}(\Omega)}\lesssim
h_{\mathrm{max}}^{\sigma}\Big{(}\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)}+\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})\Big{)}.$ (4.22)
Rewrite the difference $u-u_{h}=(u-I_{h}u)+(I_{h}u-u_{h})$, and apply the
triangle inequality with (2.15) for the first term
$\|u-I_{h}u\|_{L^{2}(\Omega)}\leq
C_{\mathrm{I}}h_{\text{max}}^{1+\sigma}|u|_{1+\sigma,\Omega}.$
This and (4.22) for the second term $I_{h}u-u_{h}$ conclude the proof of the
estimate for the term $h_{\text{max}}^{-\sigma}\|u-u_{h}\|_{L^{2}(\Omega)}$ in
(4.8).
Step $7$ (stabilisation error $|u_{h}|_{\mathrm{s}}$ and
$|I_{h}u-u_{h}|_{\mathrm{s}}$). The triangle inequality and the upper bound of
the stability term (3.5) lead to
$\displaystyle|u_{h}|_{\mathrm{s}}\leq|I_{h}u-u_{h}|_{\mathrm{s}}+|I_{h}u|_{\mathrm{s}}\leq
C_{s}^{1/2}\|\textbf{A}\|_{\infty}^{1/2}(|I_{h}u-u_{h}|_{1,\mathrm{pw}}+|(1-\Pi_{1})I_{h}u|_{1,\mathrm{pw}})$
with
$|(1-\Pi_{1})(I_{h}u-u_{h})|_{1,\mathrm{pw}}\leq|I_{h}u-u_{h}|_{1,\mathrm{pw}}$
in the last inequality. The arguments as in (4.12) prove that
$|(1-\Pi_{1})I_{h}u|_{1,\mathrm{pw}}\leq(2+C_{\text{Itn}}+C_{\mathrm{PF}})\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}$. This and the arguments in Step 2 for the estimate of
$|I_{h}u-u_{h}|_{1,\mathrm{pw}}$ show the upper bound in (4.8) for the terms
$|u_{h}|_{\mathrm{s}}$ and $|I_{h}u-u_{h}|_{\mathrm{s}}$.
Step $8$ (error estimate for $u-\Pi_{1}u_{h}$). The VEM solution $u_{h}$ is
defined by the computed degrees of freedom given in (2.10), but the evaluation
of the function itself requires expensive additional calculations. The latter
are avoided if $u_{h}$ is replaced by the Ritz projection $\Pi_{1}u_{h}$ in
the numerical experiments. The triangle inequality leads to
$\displaystyle|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}\leq|u-u_{h}|_{1,\mathrm{pw}}+|u_{h}-\Pi_{1}u_{h}|_{1,\mathrm{pw}}.$
(4.23)
A lower bound of the stability term (3.5) and the assumption (A2) imply
$\displaystyle|u_{h}-\Pi_{1}u_{h}|_{1,P}\leq
a_{0}^{-1/2}C_{s}^{1/2}S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})u_{h})^{1/2}.$ (4.24)
This shows that the second term in (4.23) is bounded by
$|u_{h}|_{\mathrm{s}}$. Hence Step 2 and Step 7 prove the estimate for
$|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}$. Since $\int_{\partial
P}(u_{h}-\Pi_{1}u_{h})\,ds=0$ from the definition of $\Pi^{\nabla}_{1}$ and
$\Pi_{1}=\Pi^{\nabla}_{1}$ in $V_{h}$, the combination of Poincaré-Friedrichs
inequality for $u_{h}-\Pi_{1}u_{h}$ from Lemma 2.1.a and (4.24) result in
$\displaystyle
C_{\mathrm{PF}}^{-1}a_{0}^{1/2}C_{s}^{-1/2}\|u_{h}-\Pi_{1}u_{h}\|_{L^{2}(P)}\leq
h_{P}S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})u_{h})^{1/2}.$ (4.25)
The analogous arguments for $\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}$, (4.25), and
the estimate for $|u_{h}|_{\mathrm{s}}$ prove the bound (4.8) for the term
$h_{\text{max}}^{-\sigma}\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}$. This concludes
the proof of Theorem 4.3. ∎
## 5 A posteriori error analysis
This section presents the reliability and efficiency of a residual-type a
posteriori error estimator.
### 5.1 Residual-based explicit a posteriori error control
Recall $u_{h}\in V_{h}$ is the solution to the problem (3.8), and the
definition of jump $[\cdot]_{E}$ along an edge $E\in\mathcal{E}$ from Section
2. For any polygonal domain $P\in\mathcal{T}$, set
$\displaystyle\eta_{P}^{2}:=h_{P}^{2}\|f-\gamma\Pi_{1}u_{h}\|_{L^{2}(P)}^{2}$ (volume residual),
$\displaystyle\zeta_{P}^{2}:=S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})u_{h})$ (stabilization),
$\displaystyle\Lambda_{P}^{2}:=\|(1-\Pi_{0})(\textbf{A}\nabla\Pi_{1}u_{h}+\textbf{b}\Pi_{1}u_{h})\|_{L^{2}(P)}^{2}$ (inconsistency),
$\displaystyle\Xi_{P}^{2}:=\sum_{E\in\mathcal{E}(P)}|E|^{-1}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}^{2}$ (nonconformity).
These local quantities $\bullet|_{P}$ form a family
($\bullet|_{P}:P\in\mathcal{T}$) over the index set $\mathcal{T}$, and their
Euclidean vector norm $\bullet|_{\mathcal{T}}$ enters the upper error bound:
$\eta_{\mathcal{T}}:=(\sum_{P\in\mathcal{T}}\eta_{P}^{2})^{1/2}$,
$\zeta_{\mathcal{T}}:=(\sum_{P\in\mathcal{T}}\zeta_{P}^{2})^{1/2}$,
$\Lambda_{\mathcal{T}}:=(\sum_{P\in\mathcal{T}}\Lambda_{P}^{2})^{1/2}$, and
$\Xi_{\mathcal{T}}:=(\sum_{P\in\mathcal{T}}\Xi_{P}^{2})^{1/2}$. The following
theorem provides an upper bound to the error $u-u_{h}$ in the $H^{1}$ and the
$L^{2}$ norm. Recall the elliptic regularity (1.5) with the index
$0<\sigma\leq 1$, and recall the assumption $h_{\text{max}}\leq 1$ from
Subsection 2.1.
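For readers implementing the estimator, the aggregation of the four local families into $\eta_{\mathcal{T}},\zeta_{\mathcal{T}},\Lambda_{\mathcal{T}},\Xi_{\mathcal{T}}$ is a plain $\ell^{2}$ sum over the polygons. The following minimal Python sketch records this bookkeeping; the function and array names are illustrative and not part of any implementation referenced in this paper.

```python
import numpy as np

def aggregate_estimator(eta_loc, zeta_loc, lam_loc, xi_loc):
    """Euclidean (l^2) aggregation of the local contributions
    (eta_P, zeta_P, Lambda_P, Xi_P : P in T) into the global
    quantities eta_T, zeta_T, Lambda_T, Xi_T of Subsection 5.1;
    each argument holds one entry per polygonal domain P."""
    l2 = lambda q: float(np.sqrt(np.sum(np.asarray(q) ** 2)))
    return l2(eta_loc), l2(zeta_loc), l2(lam_loc), l2(xi_loc)

# The squared sum eta_T**2 + zeta_T**2 + Lam_T**2 + Xi_T**2 is the
# upper bound of (5.1) up to the constant C_rel1**2.
```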
###### Theorem 5.1 (reliability).
There exist positive constants $C_{\text{rel}1}$ and $C_{\text{rel}2}$ (both
depending on $\rho$) such that
$\displaystyle
C_{\mathrm{rel}1}^{-2}|u-u_{h}|_{1,\mathrm{pw}}^{2}\leq\eta_{\mathcal{T}}^{2}+\zeta_{\mathcal{T}}^{2}+\Lambda_{\mathcal{T}}^{2}+\Xi_{\mathcal{T}}^{2}$
(5.1)
and
$\displaystyle\|u-u_{h}\|_{L^{2}(\Omega)}^{2}\leq
C_{\mathrm{rel}2}^{2}\sum_{P\in\mathcal{T}}\Big{(}h_{P}^{2\sigma}(\eta_{P}^{2}+\zeta_{P}^{2}+\Lambda_{P}^{2}+\Xi_{P}^{2})\Big{)}.$
(5.2)
The proof of this theorem in Subsection 5.3 relies on a conforming companion
operator elaborated in the next subsection. The upper bound in Theorem 5.1 is
efficient in the following local sense, where
$\omega_{E}:=\textrm{int}(\cup\mathcal{T}(E))$ denotes the patch of an edge
$E$ and consists of the one or two neighbouring polygons in the set
$\mathcal{T}(E):=\{P^{\prime}\in\mathcal{T}:E\subset\partial P^{\prime}\}$
that share $E$. Recall $\bm{\sigma}=\textbf{A}\nabla u+\textbf{b}u$ from
Subsection 4.2 and the data-oscillation
$\mathrm{osc}_{1}(f,P):=\|h_{P}(1-\Pi_{1})f\|_{L^{2}(P)}$ from Subsection 2.1.
###### Theorem 5.2 (local efficiency up to oscillation).
The quantities $\eta_{P},\zeta_{P},\Lambda_{P},$ and $\Xi_{P}$ from Theorem
5.1 satisfy
$\displaystyle\zeta^{2}_{P}$
$\displaystyle\lesssim|u-u_{h}|^{2}_{1,P}+|u-\Pi_{1}u_{h}|^{2}_{1,P}$ (5.3)
$\displaystyle\eta_{P}^{2}$
$\displaystyle\lesssim\|u-u_{h}\|^{2}_{1,P}+|u-\Pi_{1}u_{h}|^{2}_{1,P}+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(P)}^{2}+\mathrm{osc}_{1}^{2}(f-\gamma
u,P),$ (5.4) $\displaystyle\Lambda_{P}^{2}$
$\displaystyle\lesssim\|u-u_{h}\|^{2}_{1,P}+|u-\Pi_{1}u_{h}|^{2}_{1,P}+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(P)}^{2},$
(5.5) $\displaystyle\Xi_{P}^{2}$
$\displaystyle\lesssim\sum_{E\in\mathcal{E}(P)}\sum_{P^{\prime}\in\omega_{E}}(\|u-u_{h}\|^{2}_{1,P^{\prime}}+|u-\Pi_{1}u_{h}|^{2}_{1,P^{\prime}}).$
(5.6)
The proof of Theorem 5.2 follows in Subsection 5.4. The reliability and
efficiency estimates in Theorem 5.1 and 5.2 lead to an equivalence up to the
approximation term
$\text{apx}:=\|\bm{\sigma}-\Pi_{0}\bm{\sigma}\|_{L^{2}(\Omega)}+\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T}).$
Recall the definition of $|u_{h}|_{\mathrm{s}}$ from Subsection 4.2. In this
paper, the norm $|\cdot|_{1,\mathrm{pw}}$ in the nonconforming space $V_{h}$
has been utilised for simplicity and one alternative is the norm
$\|\cdot\|_{h}$ from Remark 6 induced by $a_{h}$. Then it appears natural to
have the total error with the stabilisation term as
$\text{total
error}:=|u-u_{h}|_{1,\mathrm{pw}}+|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+h_{\text{max}}^{-\sigma}\|u-u_{h}\|_{L^{2}(\Omega)}+h_{\text{max}}^{-\sigma}\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}+|u_{h}|_{\mathrm{s}}.$
The point is that Theorem 4.3 assures that total error $+$ apx converges with
the expected optimal convergence rate.
###### Corollary 5.3 (equivalence).
The quantity
$\mathrm{estimator}:=\eta_{\mathcal{T}}+\zeta_{\mathcal{T}}+\Lambda_{\mathcal{T}}+\Xi_{\mathcal{T}}$ satisfies $\mathrm{estimator}\approx\mathrm{total\;error}+\mathrm{apx}$.
###### Proof.
Theorem 5.2 motivates apx and shows
$\mathrm{estimator}\lesssim\|u-u_{h}\|_{1,\mathrm{pw}}+\|\bm{\sigma}-\Pi_{0}\bm{\sigma}\|_{L^{2}(\Omega)}+\mathrm{osc}_{1}(f-\gamma
u,\mathcal{T})+|u_{h}|_{\mathrm{s}}\leq\mathrm{total\;error}+\mathrm{apx}.$
This proves the first inequality $\lesssim$ in the assertion. Theorem 5.1, the
estimates in Subsection 5.3.3.1, and the definition of $|u_{h}|_{s}$ show
$\text{total error}\lesssim\text{estimator}$. The first of the terms in apx is
$\|\bm{\sigma}-\Pi_{0}\bm{\sigma}\|_{L^{2}(\Omega)}\leq\|\bm{\sigma}-\Pi_{0}\bm{\sigma}_{h}\|_{L^{2}(\Omega)}\leq\|\bm{\sigma}-\bm{\sigma}_{h}\|_{L^{2}(\Omega)}+\|(1-\Pi_{0})\bm{\sigma}_{h}\|_{L^{2}(\Omega)}.$
The definition of $\bm{\sigma}$ and $\bm{\sigma}_{h}$ plus the triangle and
the Cauchy-Schwarz inequality show
$\displaystyle\|\bm{\sigma}-\bm{\sigma}_{h}\|_{L^{2}(\Omega)}\leq\|\textbf{A}\|_{\infty}|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+\|\textbf{b}\|_{\infty}\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}\lesssim\|u-\Pi_{1}u_{h}\|_{1,\mathrm{pw}}.$
The upper bound is $\lesssim$ estimator as mentioned above. Since the term
$\|(1-\Pi_{0})\bm{\sigma}_{h}\|_{L^{2}(\Omega)}=\Lambda_{\mathcal{T}}$ is a
part of the estimator,
$\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(\Omega)}\lesssim\mathrm{estimator}$. The
other term in apx is
$\displaystyle\mathrm{osc}_{1}(f-\gamma u,\mathcal{T})$
$\displaystyle\leq\mathrm{osc}_{1}(f-\gamma\Pi_{1}u_{h},\mathcal{T})+\|h_{\mathcal{T}}\gamma(u-\Pi_{1}u_{h})\|_{L^{2}(\Omega)}$
$\displaystyle\leq\eta_{\mathcal{T}}+\|\gamma\|_{\infty}h_{\text{max}}\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}\lesssim\text{estimator}.\qed$
Section 5 establishes the a posteriori error analysis of the nonconforming
VEM. Related results are known for the conforming VEM and the nonconforming
FEM.
###### Remark 7 (comparison with nonconforming FEM).
Theorem 5.1 generalizes a result for the nonconforming FEM in [19, Thm. 3.4]
from triangulations into triangles to partitions into polygons (recall Example 2.2).
The only difference is the extra stabilization term that can be dropped in the
nonconforming FEM.
###### Remark 8 (comparison with conforming VEM).
The volume residual, the inconsistency term, and the stabilization also arise
in the a posteriori error estimator for the conforming VEM in [16, Thm. 13].
The estimator there also includes an additional term with normal jumps absent
from (5.1). The extra nonconformity term in this paper is caused by the
nonconformity $V_{h}\not\subset V$ in general.
### 5.2 Enrichment and conforming companion operator
The link from the nonconforming approximation $u_{h}\in V_{h}$ to a global
Sobolev function in $H^{1}_{0}(\Omega)$ can be designed with the help of the
underlying refinement $\widehat{{\cal T}}$ of the triangulation $\mathcal{T}$
(from Section 2). The interpolation
$I_{\text{CR}}:V+V_{h}\to\textrm{CR}^{1}_{0}(\widehat{{\cal T}})$ in the
Crouzeix-Raviart finite element space $\textrm{CR}^{1}_{0}(\widehat{{\cal
T}})$ from Subsection 3.4 allows for a right-inverse $J^{\prime}$. A companion
operator $J^{\prime}\circ I_{\text{CR}}:V_{h}\to H^{1}_{0}(\Omega)$ acts as
displayed
$V_{h}\xrightarrow{\;I_{\text{CR}}\;}\text{CR}^{1}_{0}(\widehat{{\cal T}})\xrightarrow{\;J^{\prime}\;}H^{1}_{0}(\Omega).$
Define an enrichment operator $E_{\mathrm{pw}}:\mathcal{P}_{1}(\widehat{{\cal
T}})\to S^{1}_{0}(\widehat{{\cal T}})$ by averaging nodal values: For any
vertex $z$ in the refined triangulation $\widehat{{\cal T}}$, let
$\widehat{{\cal T}}(z)=\{T\in\widehat{{\cal T}}:z\in T\}$ denote the set of
$|\widehat{{\cal T}}(z)|\geq 1$ many triangles that share the vertex $z$, and
define
$\displaystyle E_{\mathrm{pw}}v_{1}(z)=\frac{1}{|\widehat{{\cal
T}}(z)|}\sum_{T\in\widehat{{\cal T}}(z)}{v_{1}}|_{T}(z)$
for an interior vertex $z$ (and zero for a boundary vertex $z$ according to
the homogeneous boundary conditions). This defines $E_{\mathrm{pw}}v_{1}$ at
any vertex of a triangle $T$ in $\widehat{{\cal T}}$, and linear interpolation
then defines $E_{\mathrm{pw}}v_{1}$ in $T\in\widehat{{\cal T}}$, so that
$E_{\mathrm{pw}}v_{1}\in S^{1}_{0}(\widehat{{\cal T}})$. Huang et al. [31]
design an enrichment operator by an extension of [32] to polygonal domains,
while we deduce it from a sub-triangulation. The following lemma provides an
approximation property of the operator $E_{\mathrm{pw}}$.
###### Lemma 5.4.
There exists a positive constant $C_{\mathrm{En}}$ that depends only on the shape
regularity of $\widehat{{\cal T}}$ such that any
$v_{1}\in\mathcal{P}_{1}(\mathcal{T})$ satisfies
$\displaystyle\|h_{\mathcal{T}}^{-1}(1-E_{\mathrm{pw}})v_{1}\|_{L^{2}(\Omega)}+|(1-E_{\mathrm{pw}})v_{1}|_{1,\mathrm{pw}}\leq
C_{\mathrm{En}}\left(\sum_{E\in{\mathcal{E}}}|E|^{-1}\|[v_{1}]_{E}\|_{L^{2}(E)}^{2}\right)^{1/2}.$
(5.7)
###### Proof.
There exists a positive constant $C_{\mathrm{En}}$ independent of $h$ and $v_{1}$ [32,
p. 2378] such that
$\displaystyle\|h_{\widehat{{\cal
T}}}^{-1}(1-E_{\mathrm{pw}})v_{1}\|_{L^{2}(\Omega)}+\left(\sum_{T\in\widehat{{\cal
T}}}\|\nabla(1-E_{\mathrm{pw}})v_{1}\|_{L^{2}(T)}^{2}\right)^{1/2}\leq
C_{\mathrm{En}}\left(\sum_{E\in\widehat{\mathcal{E}}}|E|^{-1}\|[v_{1}]_{E}\|_{L^{2}(E)}^{2}\right)^{1/2}.$
Note that any edge $E\in\mathcal{E}$ is unrefined in the sub-triangulation
$\widehat{{\cal T}}$. Since $v_{1|P}\in H^{1}(P)$ is continuous in each
polygonal domain $P\in\mathcal{T}$ and $h_{T}\leq h_{P}$ for all
$T\in\widehat{{\cal T}}(P)$, the above inequality reduces to (5.7). This
concludes the proof. ∎
Recall the $L^{2}$ projection $\Pi_{1}$ onto the piecewise affine functions
$\mathcal{P}_{1}(\mathcal{T})$ from Section 2. An enrichment operator
$E_{\mathrm{pw}}\circ\Pi_{1}:V_{h}\to H^{1}_{0}(\Omega)$ acts as displayed
$V_{h}\xrightarrow{\;\Pi_{1}\;}\mathcal{P}_{1}(\mathcal{T})\hookrightarrow\mathcal{P}_{1}(\widehat{{\cal T}})\xrightarrow{\;E_{\mathrm{pw}}\;}H^{1}_{0}(\Omega).$
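A minimal Python sketch of the nodal averaging that defines $E_{\mathrm{pw}}$ may clarify the construction; it assumes the sub-triangulation $\widehat{{\cal T}}$ is stored as an array of global vertex indices per triangle, with the (possibly discontinuous) piecewise affine function given by its corner values on each triangle. All names are illustrative, not part of the implementation in the experiments below.

```python
import numpy as np

def enrich_by_averaging(tri_vertices, corner_vals, boundary_nodes):
    """Nodal averaging E_pw: a possibly discontinuous piecewise affine
    function on the sub-triangulation is given by its value at each
    corner of each triangle; average these values per vertex and set
    boundary vertices to zero (homogeneous Dirichlet boundary data).

    tri_vertices   : (nt, 3) int array of global vertex indices
    corner_vals    : (nt, 3) float array, corner_vals[t, i] = v1|_T(z_i)
    boundary_nodes : indices of the boundary vertices
    Returns the nodal values of E_pw v1 in S^1_0(T_hat)."""
    n = int(tri_vertices.max()) + 1
    sums = np.zeros(n)
    counts = np.zeros(n)
    np.add.at(sums, tri_vertices.ravel(), corner_vals.ravel())
    np.add.at(counts, tri_vertices.ravel(), 1.0)
    averaged = sums / np.maximum(counts, 1.0)  # |T_hat(z)| >= 1 triangles
    averaged[np.asarray(boundary_nodes, dtype=int)] = 0.0
    return averaged
```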
### 5.3 Proof of Theorem 5.1
#### 5.3.1 Reliable $H^{1}$ error control
Define $E_{1}u_{h}:=E_{\mathrm{pw}}\Pi_{1}u_{h}\in H^{1}_{0}(\Omega)$ so that
$u-E_{1}u_{h}\in H^{1}_{0}(\Omega)$. The inf-sup condition (1.9) leads to some
$v\in H^{1}_{0}(\Omega)$ with $\|v\|_{1,\Omega}\leq 1$ and
$\displaystyle\beta_{0}\|u-E_{1}u_{h}\|_{1,\Omega}=B(u-E_{1}u_{h},v)=((f,v)_{L^{2}(\Omega)}-B_{\mathrm{pw}}(\Pi_{1}u_{h},v))+B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{1}u_{h},v)$
(5.8)
with $B(u,v)=(f,v)$ from (1.8) and the piecewise version $B_{\mathrm{pw}}$ of
$B$ in the last step. The definition of $B_{h}$ from Subsection 3.1 and the
discrete problem (3.8) with $v_{h}=I_{h}v$ imply
$\displaystyle
B_{\mathrm{pw}}(\Pi_{1}u_{h},\Pi_{1}I_{h}v)+s_{h}((1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}v)=B_{h}(u_{h},I_{h}v)=(f,\Pi_{1}I_{h}v)_{L^{2}(\Omega)}.$
(5.9)
Abbreviate $w:=v-\Pi_{1}I_{h}v$ and
$\bm{\sigma}_{h}:=\textbf{A}\nabla_{\mathrm{pw}}\Pi_{1}u_{h}+\textbf{b}\Pi_{1}u_{h}$.
This and (5.9) simplify
$\displaystyle(f,v)_{L^{2}(\Omega)}-B_{\mathrm{pw}}(\Pi_{1}u_{h},v)=(f,w)_{L^{2}(\Omega)}-B_{\mathrm{pw}}(\Pi_{1}u_{h},w)+s_{h}((1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}v)$
$\displaystyle=(f-\gamma\Pi_{1}u_{h},w)_{L^{2}(\Omega)}-((1-\Pi_{0})\bm{\sigma}_{h},\nabla_{\mathrm{pw}}w)_{L^{2}(\Omega)}+s_{h}((1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}v)$
(5.10)
with $\int_{P}\nabla w\,dx=0$ for any $P\in\mathcal{T}$ from (2.17) in the
last step. Recall the notation $\eta_{P},\Lambda_{P}$, and $\zeta_{P}$ from
Subsection 5.1. The Cauchy-Schwarz inequality and Theorem 2.8.b followed by
$\|(1-\Pi_{0})\nabla v\|_{L^{2}(\Omega)}\leq|v|_{1,\Omega}\leq 1$ in the
second step show
$\displaystyle(f-\gamma\Pi_{1}u_{h},w)_{L^{2}(P)}$
$\displaystyle\leq\eta_{P}h_{P}^{-1}\|w\|_{L^{2}(P)}\leq(1+C_{\mathrm{PF}})\eta_{P},$
(5.11) $\displaystyle((1-\Pi_{0})\bm{\sigma}_{h},\nabla w)_{L^{2}(P)}$
$\displaystyle\leq\Lambda_{P}|w|_{1,P}\leq(1+C_{\mathrm{PF}})\Lambda_{P}.$
(5.12)
The upper bound $\|\textbf{A}\|_{\infty}$ of the coefficient A, (3.5), and the
Cauchy-Schwarz inequality for the stabilization term lead to the first
inequality in
$\displaystyle C_{s}^{-1/2}S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}v)$
$\displaystyle\leq\|\textbf{A}\|_{\infty}^{1/2}S^{P}((1-\Pi_{1})u_{h},(1-\Pi_{1})u_{h})^{1/2}|(1-\Pi_{1})I_{h}v|_{1,P}$
$\displaystyle\leq\|\textbf{A}\|_{\infty}^{1/2}(2+C_{\mathrm{PF}}+C_{\text{Itn}})\zeta_{P}.$
(5.13)
The second inequality in (5.13) follows as in (4.3) and with
$\|(1-\Pi_{0})\nabla v\|_{L^{2}(P)}\leq 1$. Recall the boundedness constant
$M_{b}$ of $B_{\mathrm{pw}}$ from Subsection 4.1 and deduce from (5.7) and the
definition of $\Xi_{\mathcal{T}}$ from Subsection 5.1 that
$\displaystyle B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{1}u_{h},v)\leq
M_{b}|\Pi_{1}u_{h}-E_{1}u_{h}|_{1,\mathrm{pw}}\leq
M_{b}C_{\mathrm{En}}\Xi_{\mathcal{T}}.$ (5.14)
The substitution of (5.10)-(5.14) in (5.8) reveals that
$\displaystyle\|u-E_{1}u_{h}\|_{1,\Omega}$ $\displaystyle\leq
C_{7}(\eta_{\mathcal{T}}+\Lambda_{\mathcal{T}}+\zeta_{\mathcal{T}}+\Xi_{\mathcal{T}})$
(5.15)
with
$\beta_{0}C_{7}=1+C_{\mathrm{PF}}+C_{s}^{1/2}\|\textbf{A}\|_{\infty}^{1/2}(2+C_{\mathrm{PF}}+C_{\text{Itn}})+M_{b}C_{\mathrm{En}}.$
The combination of (4.24), (5.15) and (5.7) leads in the triangle inequality
$\displaystyle|u-u_{h}|_{1,\mathrm{pw}}\leq|u-E_{1}u_{h}|_{1,\Omega}+|E_{1}u_{h}-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+|\Pi_{1}u_{h}-u_{h}|_{1,\mathrm{pw}}$
to (5.1) with
$C_{\text{rel}1}/2=C_{7}+C_{\mathrm{En}}+a_{0}^{-1/2}C_{s}^{1/2}$. ∎
#### 5.3.2 Reliable $L^{2}$ error control
Recall $I_{\text{CR}}$ from (3.14) and $J^{\prime}$ from the proof of Lemma
3.3, and define $E_{2}u_{h}:=J^{\prime}I_{\text{CR}}u_{h}\in
H^{1}_{0}(\Omega)$ from Subsection 5.2. Let $\Psi\in H^{1}_{0}(\Omega)\cap
H^{1+\sigma}(\Omega)$ solve the dual problem $B(v,\Psi)=(u-E_{2}u_{h},v)$ for
all $v\in V$ and recall (from (1.5)) the regularity estimate
$\displaystyle\|\Psi\|_{1+\sigma,\Omega}\leq
C^{*}_{\text{reg}}\|u-E_{2}u_{h}\|_{L^{2}(\Omega)}.$ (5.16)
The substitution of $v:=u-E_{2}u_{h}\in V$ in the dual problem shows
$\displaystyle\|u-E_{2}u_{h}\|^{2}_{L^{2}(\Omega)}=B(u-E_{2}u_{h},\Psi).$
The algebra in (5.8)-(5.10) above leads with $v=\Psi$ to the identity
$\displaystyle\|u-E_{2}u_{h}\|^{2}_{L^{2}(\Omega)}-s_{h}((1-\Pi_{1})u_{h},(1-\Pi_{1})I_{h}\Psi)=(f-\gamma\Pi_{1}u_{h},\Psi-\Pi_{1}I_{h}\Psi)_{L^{2}(\Omega)}$
$\displaystyle\quad\quad-((1-\Pi_{0})\bm{\sigma}_{h},\nabla_{\mathrm{pw}}(\Psi-\Pi_{1}I_{h}\Psi))_{L^{2}(\Omega)}+B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h},\Psi).$
(5.17)
The definition of $I_{\text{CR}}$ and $J^{\prime}$ proves the first and second
equality in
$\int_{E}u_{h}\,ds=\int_{E}I_{\text{CR}}u_{h}\,ds=\int_{E}E_{2}u_{h}\,ds\quad\text{for
all}\;E\in\mathcal{E}.$
This and an integration by parts imply
$\int_{P}\nabla(u_{h}-E_{2}u_{h})\,dx=0$ for all $P\in\mathcal{T}$. Hence
Definition 2.2 of the Ritz projection $\Pi^{\nabla}_{1}=\Pi_{1}$ in $V_{h}$ shows
$\int_{P}\nabla(\Pi_{1}u_{h}-E_{2}u_{h})\,dx=0$ for all $P\in\mathcal{T}$.
This $L^{2}$ orthogonality
$\nabla_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h})\perp\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2})$
and the definition of $B_{\mathrm{pw}}$ in the last term of (5.17) result with
elementary algebra in
$\displaystyle
B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h},\Psi)=((\textbf{A}-\Pi_{0}\textbf{A})\nabla_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h}),\nabla\Psi)_{L^{2}(\Omega)}$
$\displaystyle\;+(\nabla_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h}),(\Pi_{0}\textbf{A})(1-\Pi_{0})\nabla\Psi)_{L^{2}(\Omega)}+(\Pi_{1}u_{h}-E_{2}u_{h},\textbf{b}\cdot\nabla\Psi+\gamma\Psi)_{L^{2}(\Omega)}.$
(5.18)
The triangle inequality and (c’) from the proof of Lemma 3.3 imply the first
inequality in
$\displaystyle|\Pi_{1}u_{h}-E_{2}u_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq|\Pi_{1}u_{h}-I_{\text{CR}}u_{h}|_{1,\mathrm{pw}}+C_{\mathrm{J^{\prime}}}\min_{v\in
V}|I_{\text{CR}}u_{h}-v|_{1,\mathrm{pw}}$
$\displaystyle\leq|\Pi_{1}u_{h}-I_{\text{CR}}u_{h}|_{1,\mathrm{pw}}+C_{\mathrm{J^{\prime}}}|I_{\text{CR}}u_{h}-E_{1}u_{h}|_{1,\mathrm{pw}}$
$\displaystyle\leq|\Pi_{1}u_{h}-I_{\text{CR}}u_{h}|_{1,\mathrm{pw}}+C_{\mathrm{J^{\prime}}}(|I_{\text{CR}}u_{h}-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+|\Pi_{1}u_{h}-E_{1}u_{h}|_{1,\mathrm{pw}})$
$\displaystyle\leq(1+C_{\mathrm{J^{\prime}}})|u_{h}-\Pi_{1}u_{h}|_{1,\mathrm{pw}}+C_{\mathrm{J^{\prime}}}|\Pi_{1}u_{h}-E_{1}u_{h}|_{1,\mathrm{pw}}.$
(5.19)
The second estimate in (5.19) follows from $E_{1}u_{h}\in V$, the third is a
triangle inequality, and eventually
$|\Pi_{1}u_{h}-I_{\text{CR}}u_{h}|_{1,\mathrm{pw}}\leq|u_{h}-\Pi_{1}u_{h}|_{1,\mathrm{pw}}$
results from the orthogonality
$\nabla_{\mathrm{pw}}(u_{h}-I_{\text{CR}}u_{h})\perp\mathcal{P}_{0}(\widehat{{\cal
T}};\mathbb{R}^{2})$ and $\Pi_{1}u_{h}\in\mathcal{P}_{1}(\mathcal{T})$. The
Cauchy-Schwarz inequality, the Lipschitz continuity of A, and the
approximation estimate $\|(1-\Pi_{0})\nabla\Psi\|_{L^{2}(P)}\leq
C_{\text{apx}}h_{P}^{\sigma}|\Psi|_{1+\sigma,P}$ in (5.18) lead to the first
inequality in
$\displaystyle B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h},\Psi)$
$\displaystyle\leq\sum_{P\in\mathcal{T}}\Big{(}(h_{P}|\textbf{A}|_{1,\infty}+\|\textbf{A}\|_{\infty}C_{\text{apx}}h_{P}^{\sigma})|\Pi_{1}u_{h}-E_{2}u_{h}|_{1,P}$
$\displaystyle\qquad+\|\Pi_{1}u_{h}-E_{2}u_{h}\|_{L^{2}(P)}(\|\textbf{b}\|_{\infty}+\|\gamma\|_{\infty})\Big{)}\|\Psi\|_{1+\sigma,P}$
$\displaystyle\leq\sum_{P\in\mathcal{T}}\Big{(}h_{P}|\textbf{A}|_{1,\infty}+\|\textbf{A}\|_{\infty}C_{\text{apx}}h_{P}^{\sigma}+C_{\mathrm{PF}}(\|\textbf{b}\|_{\infty}+\|\gamma\|_{\infty})h_{P}\Big{)}|\Pi_{1}u_{h}-E_{2}u_{h}|_{1,P}\|\Psi\|_{1+\sigma,P}$
$\displaystyle\leq
C_{8}\sum_{P\in\mathcal{T}}h_{P}^{\sigma}((1+C_{\mathrm{J^{\prime}}})|u_{h}-\Pi_{1}u_{h}|_{1,P}+C_{\mathrm{J^{\prime}}}|\Pi_{1}u_{h}-E_{1}u_{h}|_{1,P})\|\Psi\|_{1+\sigma,P}.$
(5.20)
The second inequality in (5.20) follows from the Poincaré-Friedrichs
inequality in Lemma 2.1.a for $\Pi_{1}u_{h}-E_{2}u_{h}$ with $\int_{\partial
P}(\Pi_{1}u_{h}-E_{2}u_{h})\,ds=0$ (from above); the constant
$C_{8}:=|\textbf{A}|_{1,\infty}+C_{\text{apx}}\|\textbf{A}\|_{\infty}+C_{\mathrm{PF}}(\|\textbf{b}\|_{\infty}+\|\gamma\|_{\infty})$
results from (5.19) and $h_{P}\leq h_{P}^{\sigma}$ (recall $h_{\text{max}}\leq
1$). Lemma 5.4 with $v_{1}=\Pi_{1}u_{h}$ and (4.24) in (5.20) show
$\displaystyle B_{\mathrm{pw}}(\Pi_{1}u_{h}-E_{2}u_{h},\Psi)$
$\displaystyle\leq
C_{8}\sum_{P\in\mathcal{T}}h_{P}^{\sigma}((1+C_{\mathrm{J^{\prime}}})a_{0}^{-1/2}C_{s}^{1/2}\zeta_{P}+C_{\mathrm{J^{\prime}}}C_{\mathrm{En}}\Xi_{P})\|\Psi\|_{1+\sigma,P}.$
(5.21)
Rewrite (5.11)-(5.13) with $w=\Psi-\Pi_{1}I_{h}\Psi$ and
$h_{P}^{-1}\|w\|_{L^{2}(P)}+|w|_{1,P}\leq(1+C_{\mathrm{PF}})\|(1-\Pi_{0})\nabla\Psi\|_{L^{2}(P)}\leq
C_{\text{apx}}(1+C_{\mathrm{PF}})h_{P}^{\sigma}|\Psi|_{1+\sigma,P}$ from
(2.12). This and (5.21) lead in (5.17) to
$\displaystyle\|u-E_{2}u_{h}\|_{L^{2}(\Omega)}^{2}\leq
C_{9}\sum_{P\in\mathcal{T}}h_{P}^{\sigma}(\eta_{P}+\zeta_{P}+\Lambda_{P}+\Xi_{P})\|\Psi\|_{1+\sigma,P}$
for
$C_{9}:=C_{\text{apx}}(1+C_{\mathrm{PF}}+C_{s}^{1/2}\|\textbf{A}\|_{\infty}^{1/2}(2+C_{\mathrm{PF}}+C_{\text{Itn}}))+C_{8}((1+C_{\mathrm{J^{\prime}}})a_{0}^{-1/2}C_{s}^{1/2}+C_{\mathrm{J^{\prime}}}C_{\mathrm{En}}).$
This and the regularity (5.16) result in
$\displaystyle\|u-E_{2}u_{h}\|_{L^{2}(\Omega)}\leq
C_{9}C^{*}_{\text{reg}}\sum_{P\in\mathcal{T}}h_{P}^{\sigma}(\eta_{P}+\zeta_{P}+\Lambda_{P}+\Xi_{P}).$
(5.22)
The arguments in the proof of (5.20)-(5.21) also lead to
$\displaystyle\|E_{2}u_{h}-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}\leq
C_{\mathrm{PF}}((1+C_{\mathrm{J^{\prime}}})a_{0}^{-1/2}C_{s}^{1/2}+C_{\mathrm{J^{\prime}}}C_{\mathrm{En}})\sum_{P\in\mathcal{T}}h_{P}(\zeta_{P}+\Xi_{P}).$
(5.23)
The combination of (4.25), (5.22)-(5.23) and the triangle inequality
$\|u-u_{h}\|_{L^{2}(\Omega)}\leq\|u-E_{2}u_{h}\|_{L^{2}(\Omega)}+\|E_{2}u_{h}-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}+\|\Pi_{1}u_{h}-u_{h}\|_{L^{2}(\Omega)}$
lead to (5.2) with
$C_{\mathrm{rel}2}/2=C_{9}C^{*}_{\text{reg}}+C_{\mathrm{PF}}\big{(}(2+C_{\mathrm{J^{\prime}}})a_{0}^{-1/2}C_{s}^{1/2}+C_{\mathrm{J^{\prime}}}C_{\mathrm{En}}\big{)}.$
This concludes the proof of the $L^{2}$ error estimate in Theorem 5.1. ∎
#### 5.3.3 Comments
##### 5.3.3.1 Estimator for $u-\Pi_{1}u_{h}$
The triangle inequality with (5.1) and (4.24) provides an upper bound for the
$H^{1}$ error
$\displaystyle\frac{1}{2}|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}^{2}\leq|u-u_{h}|_{1,\mathrm{pw}}^{2}+|(1-\Pi_{1})u_{h}|_{1,\mathrm{pw}}^{2}\leq
2C_{\text{rel}1}^{2}(\eta_{\mathcal{T}}^{2}+\zeta_{\mathcal{T}}^{2}+\Lambda_{\mathcal{T}}^{2}+\Xi_{\mathcal{T}}^{2}).$
The same arguments for an upper bound of the $L^{2}$ error in Theorem 5.1 show
that
$\displaystyle\frac{1}{2}\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}^{2}$
$\displaystyle\leq\|u-u_{h}\|_{L^{2}(\Omega)}^{2}+\|(1-\Pi_{1})u_{h}\|_{L^{2}(\Omega)}^{2}$
$\displaystyle\leq
C_{\text{rel}2}^{2}\sum_{P\in\mathcal{T}}h_{P}^{2\sigma}(\eta_{P}^{2}+2\zeta_{P}^{2}+\Lambda_{P}^{2}+\Xi_{P}^{2}).$
The numerical experiments do not display $C_{\text{rel}1}$ and
$C_{\text{rel}2}$, and directly compare the error
$H1e:=|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}$ in the piecewise $H^{1}$ norm and the
error $L2e:=\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}$ in the $L^{2}$ norm with the
upper bounds $H1\mu$ and $L2\mu$ (see, e.g., Figure 6.3).
##### 5.3.3.2 Motivation and discussion of apx
We first argue that those extra terms have to be expected and utilize the
abbreviations $\bm{\sigma}:=\textbf{A}\nabla u+\textbf{b}u$ and $g:=f-\gamma
u$ for the exact solution $u\in H^{1}_{0}(\Omega)$ to (1.8), which reads
$\displaystyle(\bm{\sigma},\nabla
v)_{L^{2}(\Omega)}=(g,v)_{L^{2}(\Omega)}\quad\text{for all}\;v\in
H^{1}_{0}(\Omega).$ (5.24)
Recall the definition of $s_{h}(\cdot,\cdot)$ from Subsection 3.1. The
discrete problem (3.8) with the discrete solution $u_{h}\in V_{h}$ assumes the
form
$\displaystyle(\bm{\sigma}_{h},\nabla\Pi_{1}v_{h})_{L^{2}(\Omega)}+s_{h}((1-\Pi_{1})u_{h},(1-\Pi_{1})v_{h})=(g_{h},\Pi_{1}v_{h})_{L^{2}(\Omega)}\quad\text{for
all}\;v_{h}\in V_{h}$ (5.25)
for $\bm{\sigma}_{h}:=\textbf{A}\nabla\Pi_{1}u_{h}+\textbf{b}\Pi_{1}u_{h}$,
and $g_{h}:=f-\gamma\Pi_{1}u_{h}$. Notice that $\bm{\sigma}_{h}$ and $g_{h}$
may be replaced in (5.25) by $\Pi_{0}\bm{\sigma}_{h}$ and $\Pi_{1}g_{h}$
because the test functions $\nabla\Pi_{1}v_{h}$ and $\Pi_{1}v_{h}$ belong to
$\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2})$ and
$\mathcal{P}_{1}(\mathcal{T})$, respectively. In other words, the discrete
problems (3.8) and (5.25) cannot distinguish $\bm{\sigma}_{h}$ and
$g_{h}$ from $\Pi_{0}\bm{\sigma}_{h}$ and $\Pi_{1}g_{h}$, and so the
errors $\bm{\sigma}_{h}-\Pi_{0}\bm{\sigma}_{h}$ and $g_{h}-\Pi_{1}g_{h}$ may
arise in a posteriori error control. This motivates the a posteriori error
term
$\|\bm{\sigma}_{h}-\Pi_{0}\bm{\sigma}_{h}\|_{L^{2}(\Omega)}=\Lambda_{\mathcal{T}}$
as well as the approximation terms $\bm{\sigma}-\Pi_{0}\bm{\sigma}$ and
$g-\Pi_{1}g$ on the continuous level. The natural norm for the dual variable
$\bm{\sigma}$ is $L^{2}$ and that of $g$ is $H^{-1}$ and hence their norms
form the approximation term apx as defined in Subsection 5.1.
###### Example 5.1 ($\textbf{b}=0$).
The term $(1-\Pi_{0})\bm{\sigma}$ may not be visible in the case of no advection
$\textbf{b}=0$, at least if A is piecewise constant. Suppose
$\textbf{A}\in\mathcal{P}_{0}(\mathcal{T};\mathbb{R}^{2\times 2})$ and
estimate
$\|(1-\Pi_{0})(\textbf{A}\nabla
u)\|_{L^{2}(\Omega)}\leq\|\textbf{A}\|_{\infty}\|(1-\Pi_{0})\nabla
u\|_{L^{2}(\Omega)}\lesssim|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}.$
If A is not constant, there are oscillation terms that can be treated properly
in adaptive mesh-refining algorithms, e.g., in [27].
###### Example 5.2 ($\gamma$ piecewise constant).
While the data approximation term $\mathrm{osc}_{1}(f,\mathcal{T})$ [10] is
widely accepted as a part of the total error in the approximation of nonlinear
problems, the term $\mathrm{osc}_{1}(\gamma u,\mathcal{T})=\|\gamma
h_{\mathcal{T}}(u-\Pi_{1}u)\|_{L^{2}(\Omega)}\lesssim
h_{\text{max}}^{1+\sigma}\|f\|_{L^{2}(\Omega)}$ is of higher order and may
even be absorbed in the overall error analysis for a piecewise constant
coefficient $\gamma\in\mathcal{P}_{0}(\mathcal{T})$. In the general case
$\gamma\in L^{\infty}(\Omega)\backslash\mathcal{P}_{0}(\mathcal{T})$, however,
$\mathrm{osc}_{1}(u,\mathcal{T})$ leads in particular to terms with
$\|\gamma-\Pi_{0}\gamma\|_{L^{\infty}(\Omega)}$.
##### 5.3.3.3 Higher-order nonconforming VEM
The analysis applied in Theorem 5.1 can be extended to the nonconforming VEM
space of higher order $k\in\mathbb{N}$ (see [17, Sec. 4] for the definition of
discrete space). Since the projection operators $\nabla\Pi_{k}^{\nabla}$ and
$\Pi_{k-1}\nabla$ are not the same for general $k$, and the first operator
does not lead to optimal order of convergence for $k\geq 3$, the discrete
formulation uses $\Pi_{k-1}\nabla$ (cf. [6, Rem. 4.3] for more details). The
definition and approximation properties of the averaging operator
$E_{\mathrm{pw}}$ extend to the operator $E^{k}:\mathcal{P}_{k}(\widehat{{\cal
T}})\to H^{1}_{0}(\Omega)$ (see [32, p. 2378] for a proof). The identity (5.9)
does not hold in general, but analogous algebraic calculations lead to the error estimator contributions
$\displaystyle\eta_{P}^{2}$
$\displaystyle:=h_{P}^{2}\|f-\gamma\Pi_{k}u_{h}\|_{L^{2}(P)}^{2},\qquad\Lambda_{P}^{2}:=\|(1-\Pi_{k-1})(\textbf{A}\Pi_{k-1}\nabla
u_{h}+\textbf{b}\Pi_{k}u_{h})\|_{L^{2}(P)}^{2}$ $\displaystyle\zeta_{P}^{2}$
$\displaystyle:=S^{P}((1-\Pi_{k})u_{h},(1-\Pi_{k})u_{h}),\qquad\Xi_{P}^{2}:=\sum_{E\in\mathcal{E}(P)}|E|^{-1}\|[\Pi_{k}u_{h}]_{E}\|^{2}_{L^{2}(E)}.$
The analysis developed for the upper bound of the $L^{2}$ norm also extends to
the general case. The model problem is posed in 2D for simplicity of
presentation. The results of this work can be extended to the three-
dimensional case with appropriate modifications. The present analysis holds
for any higher regularity index $\sigma>0$ and avoids any trace inequality for
higher derivatives. This is possible by a medius analysis in the form of
companion operators [26].
##### 5.3.3.4 Inhomogeneous boundary data
The error estimator for a general Dirichlet condition $u|_{\partial\Omega}=g\in
H^{1/2}(\partial\Omega)$ can be obtained from Theorem 5.1 with some
modifications as in [33]. The only difference lies in the modified jump
contributions of the boundary edges in the nonconformity term
$\displaystyle\Xi_{\mathcal{T}}^{2}=\sum_{E\in\mathcal{E}(\Omega)}|E|^{-1}\|[\Pi_{1}u_{h}]\|_{L^{2}(E)}^{2}+\sum_{E\in\mathcal{E}(\partial\Omega)}|E|^{-1}\|g-\Pi_{1}u_{h}\|_{L^{2}(E)}^{2}.$
### 5.4 Proof of Theorem 5.2
Recall the notation $\bm{\sigma}=\textbf{A}\nabla u+\textbf{b}u$ and
$\bm{\sigma}_{h}=\textbf{A}\nabla\Pi_{1}u_{h}+\textbf{b}\Pi_{1}u_{h}$ from
Subsection 5.3.
###### Proof of (5.3).
The upper bound (3.5) for the stabilisation term and the triangle inequality
show
$\displaystyle\zeta_{P}^{2}\leq C_{s}|(1-\Pi_{1})u_{h}|^{2}_{1,P}\leq
2C_{s}(|u-u_{h}|^{2}_{1,P}+|u-\Pi_{1}u_{h}|^{2}_{1,P}).$
This concludes the proof of (5.3). ∎
###### Proof of (5.5).
The definition of $\Lambda_{P},\Pi_{0}$, and the triangle inequality lead to
$\displaystyle\Lambda_{P}=\|\bm{\sigma}_{h}-\Pi_{0}\bm{\sigma}_{h}\|_{L^{2}(P)}$
$\displaystyle\leq\|\bm{\sigma}_{h}-\Pi_{0}\bm{\sigma}\|_{L^{2}(P)}$
$\displaystyle\leq\|\textbf{A}\nabla(\Pi_{1}u_{h}-u)+\textbf{b}(\Pi_{1}u_{h}-u)\|_{L^{2}(P)}+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(P)}.$
(5.26)
The upper bound $\|\textbf{A}\|_{\infty}$ and $\|\textbf{b}\|_{\infty}$ for
the coefficients and the triangle inequality lead to
$\displaystyle\Lambda_{P}-\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(P)}\leq(\|\textbf{A}\|_{\infty}+\|\textbf{b}\|_{\infty})\|\Pi_{1}u_{h}-u\|_{1,P}$
$\displaystyle\qquad\leq(\|\textbf{A}\|_{\infty}+\|\textbf{b}\|_{\infty})(\|u_{h}-\Pi_{1}u_{h}\|_{1,P}+\|u-u_{h}\|_{1,P})\leq
C_{10}(\zeta_{P}+\|u-u_{h}\|_{1,P})$ (5.27)
with
$\|u_{h}-\Pi_{1}u_{h}\|_{1,P}\leq(1+h_{P}C_{\mathrm{PF}})a_{0}^{-1/2}C_{s}^{1/2}\zeta_{P}$
from (4.24)-(4.25) and with
$C_{10}:=(\|\textbf{A}\|_{\infty}+\|\textbf{b}\|_{\infty})((1+h_{P}C_{\mathrm{PF}})a_{0}^{-1/2}C_{s}^{1/2}+1)$.
This followed by (5.3) concludes the proof of (5.5). ∎
Recall the bubble-function $b_{\mathcal{T}}|_{P}=b_{P}$ supported on a
polygonal domain $P\in\mathcal{T}$ from (3.17) as the sum of interior bubble-
functions supported on each triangle $T\in\widehat{{\cal T}}(P)$.
###### Proof of (5.4).
Rewrite the term
$\displaystyle
f-\gamma\Pi_{1}u_{h}=\Pi_{1}(f-\gamma\Pi_{1}u_{h})+(1-\Pi_{1})(f-\gamma\Pi_{1}u_{h})=:R+\theta,$
(5.28)
and denote $R_{P}:=R|_{P}$ and $\theta_{P}:=\theta|_{P}$. The definition of
$B_{\mathrm{pw}}(u-\Pi_{1}u_{h},v)$ and the weak formulation $B(u,v)=(f,v)$
from (1.8) for any $v\in V$ imply
$\displaystyle B_{\mathrm{pw}}(u-\Pi_{1}u_{h},v)+(\bm{\sigma}_{h},\nabla
v)_{L^{2}(\Omega)}$
$\displaystyle=(f-\gamma\Pi_{1}u_{h},v)_{L^{2}(\Omega)}=(R+\theta,v)_{L^{2}(\Omega)}.$
(5.29)
Since $b_{P}R_{P}$ belongs to $H^{1}_{0}(\Omega)$ (extended by zero outside
$P$), $v:=b_{P}R_{P}\in V$ is admissible in (5.29). An integration by parts
proves that $(\Pi_{0}\bm{\sigma}_{h},\nabla(b_{P}R_{P}))_{L^{2}(P)}=0$.
Therefore, (5.29) shows
$\displaystyle(R_{P},b_{P}R_{P})_{L^{2}(P)}=B^{P}(u-\Pi_{1}u_{h},b_{P}R_{P})-(\theta_{P},b_{P}R_{P})_{L^{2}(P)}+((1-\Pi_{0})\bm{\sigma}_{h},\nabla(b_{P}R_{P}))_{L^{2}(P)}.$
The substitution of
$\chi=R_{P}=\Pi_{1}(f-\gamma\Pi_{1}u_{h})|_{P}\in\mathcal{P}_{1}(P)$ in (3.20)
and the previous identity with the boundedness of $B$ and the Cauchy-Schwarz
inequality lead to the first two estimates in
$\displaystyle
C_{b}^{-1}\|R_{P}\|_{L^{2}(P)}^{2}\leq(R_{P},b_{P}R_{P})_{L^{2}(P)}$
$\displaystyle\quad\leq\Big{(}M_{b}|u-\Pi_{1}u_{h}|_{1,P}+\|(1-\Pi_{0})\bm{\sigma}_{h}\|_{L^{2}(P)}\Big{)}|b_{P}R_{P}|_{1,P}+\|\theta_{P}\|_{L^{2}(P)}\|b_{P}R_{P}\|_{L^{2}(P)}$
$\displaystyle\quad\leq
C_{b}\Big{(}M_{b}|u-\Pi_{1}u_{h}|_{1,P}+\Lambda_{P}+h_{P}\|\theta_{P}\|_{L^{2}(P)}\Big{)}h_{P}^{-1}\|R_{P}\|_{L^{2}(P)}.$
The last inequality follows from the definition of $\Lambda_{P}$, and (3.21)
with $\chi=R_{P}$. This proves that $C_{b}^{-2}h_{P}\|R_{P}\|_{L^{2}(P)}\leq
M_{b}|u-\Pi_{1}u_{h}|_{1,P}+\Lambda_{P}+h_{P}\|\theta_{P}\|_{L^{2}(P)}.$
Recall $\eta_{P}$ from Subsection 5.1 and
$\eta_{P}=h_{P}\|f-\gamma\Pi_{1}u_{h}\|_{L^{2}(P)}\leq
h_{P}\|R_{P}\|_{L^{2}(P)}+h_{P}\|\theta_{P}\|_{L^{2}(P)}$ from the split in
(5.28) and the triangle inequality. This and the previous estimate of
$h_{P}\|R_{P}\|_{L^{2}(P)}$ show the first estimate in
$\displaystyle\eta_{P}$ $\displaystyle\leq
C_{b}^{2}(M_{b}|u-\Pi_{1}u_{h}|_{1,P}+\Lambda_{P})+(C_{b}^{2}+1)h_{P}\|\theta_{P}\|_{L^{2}(P)}$
$\displaystyle\leq(C_{b}^{2}+1)\Big{(}M_{b}|u-\Pi_{1}u_{h}|_{1,P}+\Lambda_{P}+h_{P}\|(f-\gamma\Pi_{1}u_{h})-\Pi_{1}(f-\gamma
u)\|_{L^{2}(P)}\Big{)}$
$\displaystyle\leq(C_{b}^{2}+1)\Big{(}(M_{b}+h_{P}\|\gamma\|_{\infty})\|u-\Pi_{1}u_{h}\|_{1,P}+\Lambda_{P}+\mathrm{osc}_{1}(f-\gamma
u,P)\Big{)}.$
The second step results from the definition of
$\theta_{P}=(1-\Pi_{1})(f-\gamma\Pi_{1}u_{h})|_{P}$ in (5.28) followed by the
$L^{2}$ orthogonality of $\Pi_{1}$, and the last step results from an
elementary algebra with the triangle inequality and $\mathrm{osc}_{1}(f-\gamma
u,P)=h_{P}\|(1-\Pi_{1})(f-\gamma u)\|_{L^{2}(P)}$ from Subsection 5.1. The
triangle inequality for the term $u-\Pi_{1}u_{h}$ and the estimate of
$\|u_{h}-\Pi_{1}u_{h}\|_{1,P}$ as in (5.27) lead to
$\displaystyle
C_{11}^{-1}\eta_{P}\leq\|u-u_{h}\|_{1,P}+\zeta_{P}+\Lambda_{P}+\mathrm{osc}_{1}(f-\gamma
u,P)$
with
$C_{11}:=(C_{b}^{2}+1)(M_{b}+h_{P}\|\gamma\|_{\infty})((1+h_{P}C_{\mathrm{PF}})a_{0}^{-1/2}C_{s}^{1/2}+1)$.
The combination of (5.3) and (5.5) in the last displayed estimate concludes
the proof of (5.4). ∎
###### Proof of (5.6).
Recall for $u\in H^{1}_{0}(\Omega)$ and $u_{h}\in V_{h}$ that the integral
means $|E|^{-1}\int_{E}u\,ds$ and $|E|^{-1}\int_{E}u_{h}\,ds$ are well defined
for all edges $E\in\mathcal{E}$, and so the constant
$\alpha_{E}:=|E|^{-1}\int_{E}(u-u_{h})\,ds$ is uniquely defined as well.
Since the jump of $u-\alpha_{E}$ across any edge $E\in\mathcal{E}$ vanishes,
$[\Pi_{1}u_{h}]_{E}=[\Pi_{1}u_{h}-u+\alpha_{E}]_{E}$. Recall
$\omega_{E}=\text{int}(P^{+}\cup P^{-})$ for $E\in\mathcal{E}(\Omega)$ and
$\omega_{E}=\text{int}(P)$ for $E\in\mathcal{E}(\partial\Omega)$ from
Subsection 5.1. The trace inequality $\|v\|^{2}_{L^{2}(E)}\leq
C_{T}(|E|^{-1}\|v\|^{2}_{L^{2}(\omega_{E})}+|E|\;\|\nabla
v\|^{2}_{L^{2}(\omega_{E})})$ (cf. [13, p. 554]) leads to
$\displaystyle|E|^{-1/2}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}\leq
C_{T}\left(|E|^{-1}\|\Pi_{1}u_{h}-u+\alpha_{E}\|_{L^{2}(\omega_{E})}+\|\nabla_{\mathrm{pw}}(\Pi_{1}u_{h}-u)\|_{L^{2}(\omega_{E})}\right).$
This and the triangle inequality show the first estimate in
$\displaystyle|E|^{-1/2}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}$ $\displaystyle\leq
C_{T}\Big{(}|E|^{-1}(\|u_{h}-\Pi_{1}u_{h}\|_{L^{2}(\omega_{E})}+\|u_{h}-u+\alpha_{E}\|_{L^{2}(\omega_{E})})$
$\displaystyle\qquad+\|\nabla_{\mathrm{pw}}(u_{h}-\Pi_{1}u_{h})\|_{L^{2}(\omega_{E})}+\|\nabla_{\mathrm{pw}}(u-u_{h})\|_{L^{2}(\omega_{E})}\Big{)}.$
(5.30)
The estimates (4.24)-(4.25) control the term $\|u_{h}-\Pi_{1}u_{h}\|_{1,P}$ as
in (5.27), and the Poincaré-Friedrichs inequality from Lemma 2.1.b for
$u_{h}-u+\alpha_{E}$ with $\int_{E}(u_{h}-u+\alpha_{E})\,ds=0$ (by the
definition of $\alpha_{E}$) implies that
$\|u_{h}-u+\alpha_{E}\|_{L^{2}(P)}\leq C_{\mathrm{PF}}h_{P}|u_{h}-u|_{1,P}$.
This with the mesh assumption $h_{P}\leq\rho^{-1}|E|$ and (5.30) result in
$\displaystyle|E|^{-1/2}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}\leq
C_{T}((C_{\mathrm{PF}}\rho^{-1}+1)a_{0}^{-1/2}C_{s}^{1/2}+C_{\mathrm{PF}}+1)\sum_{P^{\prime}\in\omega_{E}}(\Lambda_{P^{\prime}}+|u-u_{h}|_{1,P^{\prime}}).$
Since this holds for any edge $E\in\mathcal{E}(P)$, the sum over all these
edges and the bound (5.3) in the above estimate conclude the proof of (5.6). ∎
###### Remark 9 (convergence rates of $L^{2}$ error control for $0<\sigma\leq
1$).
The efficiency estimates (5.4)-(5.6) multiplied by
$h_{P}^{2\sigma}$ show that the local quantity
$h_{P}^{2\sigma}(\eta_{P}^{2}+\Lambda_{P}^{2}+\Xi_{P}^{2})$ converges to zero
with the expected convergence rate.
###### Remark 10 (efficiency up to stabilisation and oscillation for $L^{2}$
error control when $\sigma=1$).
For convex domains and $\sigma=1$, there is even a local efficiency result
that is briefly described in the sequel: The arguments in the above proof of
(5.4)-(5.5) lead to
$\displaystyle h_{P}^{2}\eta_{P}^{2}$
$\displaystyle\lesssim\|u-u_{h}\|^{2}_{L^{2}(P)}+h_{P}^{2}(\zeta_{P}^{2}+\mathrm{osc}_{1}^{2}(f-\gamma
u,P)+\|(1-\Pi_{0})\bm{\sigma}\|_{L^{2}(P)}^{2}),$ $\displaystyle
h_{P}^{2}\Lambda_{P}^{2}$
$\displaystyle\lesssim\|u-u_{h}\|^{2}_{L^{2}(P)}+h_{P}^{2}(\zeta_{P}^{2}+\|\textbf{A}-\Pi_{0}\textbf{A}\|_{L^{\infty}(P)}^{2}\|f\|^{2}_{L^{2}(\Omega)}+\|(1-\Pi_{0})\textbf{b}u\|_{L^{2}(P)}^{2}).$
The observation $[\Pi_{1}u_{h}]_{E}=[\Pi_{1}u_{h}-u]_{E}$ for the term
$\Xi_{P}$, the trace inequality, and the triangle inequality show, for any
$E\in\mathcal{E}$, that
$\displaystyle|E|^{1/2}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}$ $\displaystyle\leq
C_{T}\left(\|u_{h}-\Pi_{1}u_{h}\|_{L^{2}(\omega_{E})}+\|u-u_{h}\|_{L^{2}(\omega_{E})}\right.$
$\displaystyle\quad\left.+|E|(\|\nabla\Pi_{1}(u-u_{h})\|_{L^{2}(\omega_{E})}+\|\nabla(u-\Pi_{1}u)\|_{L^{2}(\omega_{E})})\right).$
The bound (4.25) for the first term and the inverse estimate
$\|\nabla\chi\|_{L^{2}(P)}\leq C_{\text{inv}}h_{P}^{-1}\|\chi\|_{L^{2}(P)}$
for $\chi\in\mathcal{P}_{k}(P)$ for the third term result in
$\displaystyle|E|^{1/2}\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}\lesssim\|u-u_{h}\|_{L^{2}(\omega_{E})}+|E|\sum_{P^{\prime}\in\omega_{E}}\Big{(}\|\nabla(1-\Pi_{1})u\|_{L^{2}(P^{\prime})}+\Lambda_{P^{\prime}}\Big{)}.$
The mesh assumption (M2) implies that
$h_{P}^{2}\Xi_{P}^{2}\leq\rho^{-1}\sum_{E\in\mathcal{E}(P)}|E|\;\|[\Pi_{1}u_{h}]_{E}\|_{L^{2}(E)}^{2}$.
This and the above displayed inequality prove the efficiency estimate for
$h_{P}^{2}\Xi_{P}^{2}$.
## 6 Numerical experiments
This section demonstrates the performance of the a posteriori error estimator
and an associated adaptive mesh-refining algorithm with Dörfler marking
[37]. The numerical results investigate three computational benchmarks for the
indefinite problem (1.1).
### 6.1 Adaptive algorithm
Input: initial partition ${\cal T}_{0}$ of $\Omega$.
For $\ell=0,1,2,\dots$ do
1. SOLVE. Compute the discrete solution $u_{h}$ to (3.8) with respect to $\mathcal{T}_{\ell}$ (cf. [5] for more details on the implementation).
2. ESTIMATE. Compute the four terms $\eta_{\ell}:=\eta_{\mathcal{T}_{\ell}},\zeta_{\ell}:=\zeta_{\mathcal{T}_{\ell}},\Lambda_{\ell}:=\Lambda_{\mathcal{T}_{\ell}}$, and $\Xi_{\ell}:=\Xi_{\mathcal{T}_{\ell}}$, whose sum of squares forms the upper bound in (5.1).
3. MARK. Mark the polygons $P$ in a subset ${\cal M}_{\ell}\subset{\cal T}_{\ell}$ of minimal cardinality with
$\displaystyle 0.5\,{H1\mu}_{\ell}^{2}\leq\sum_{P\in{\cal M}_{\ell}}(\eta_{P}^{2}+\zeta_{P}^{2}+\Lambda_{P}^{2}+\Xi_{P}^{2})\quad\text{for}\quad{H1\mu}_{\ell}^{2}:=H1\mu^{2}({\cal T}_{\ell}):=\eta_{\ell}^{2}+\zeta_{\ell}^{2}+\Lambda_{\ell}^{2}+\Xi_{\ell}^{2}$
(a minimal sketch of this greedy selection follows the algorithm).
4. REFINE. Refine the marked polygonal domains by connecting the mid-points of the edges to the centroid of the respective polygonal domains, and update ${\cal T}_{\ell}$ (cf. Figure 6.1 for an illustration of the refinement strategy).
Figure 6.1: Refinement of a polygon into quadrilaterals
end do
Output: The sequences $\mathcal{T}_{\ell}$, and the bounds
$\eta_{\ell},\zeta_{\ell},\Lambda_{\ell},\Xi_{\ell}$, and $H1\mu_{\ell}$ for
$\ell=0,1,2,\dots$.
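The MARK step is the Dörfler bulk criterion with parameter $0.5$; a set of minimal cardinality is obtained by greedily collecting the largest local contributions. A minimal Python sketch of this selection (illustrative names, not the MATLAB code used in the experiments) reads as follows.

```python
import numpy as np

def doerfler_mark(local_sq, theta=0.5):
    """MARK step: return indices of a minimal-cardinality set M with
    theta * sum(local_sq) <= sum(local_sq[M]), where local_sq holds the
    per-polygon contribution eta_P^2 + zeta_P^2 + Lambda_P^2 + Xi_P^2.
    Greedily collecting the largest contributions realises the minimal
    cardinality for this bulk criterion."""
    local_sq = np.asarray(local_sq, dtype=float)
    order = np.argsort(local_sq)[::-1]          # descending order
    cumulative = np.cumsum(local_sq[order])
    k = int(np.searchsorted(cumulative, theta * local_sq.sum())) + 1
    return order[:k]                            # indices of marked polygons
```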
The adaptive algorithm is displayed for mesh adaptation with respect to the
energy error in $H^{1}$. Replace the estimator $H1\mu_{\ell}$ in the algorithm
by $L2\mu_{\ell}$ (the upper bound in (5.2)) for local mesh-refinement with
respect to the $L^{2}$ error. Both uniform and adaptive mesh-refinement are
run to compare the empirical convergence rates and provide numerical evidence
for the superiority of adaptive mesh-refinement. Note that uniform refinement
means all the polygonal
domains are refined. In all examples below, $\overline{\textbf{A}}_{P}=1$ in
(3.6). The numerical realizations are based on a MATLAB implementation
explained in [35] with a Gauss-like cubature formula over polygons. The
cubature formula is exact for all bivariate polynomials of degree at most
$2n-1$, so the choice $n\geq(k+1)/2$ allows the exact integration of any
polynomial of degree $k$. The quadrature errors in the computation of the examples presented
below appear negligible for the input parameter $n=5$.
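The empirical convergence rates quoted below are slopes in the log-log convergence history plots; a minimal sketch of their extraction from pairs (ndof, error), under the convention that a slope $-s$ corresponds to the rate $s$ with respect to ndof, could read as follows (illustrative, not part of the implementation in [35]).

```python
import numpy as np

def empirical_rate(ndof, err):
    """Least-squares slope of log(err) versus log(ndof); the returned
    value s means err ~ ndof**(-s), e.g. s = 1/2 for the optimal H^1
    rate of the lowest-order method (h ~ ndof**(-1/2) in 2D)."""
    slope = np.polyfit(np.log(ndof), np.log(err), 1)[0]
    return -slope
```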
### 6.2 Square domain (smooth solution)
This subsection discusses the problem (1.1) with the coefficients
$\textbf{A}=I,\textbf{b}=(x,y)$ and $\gamma=x^{2}+y^{3}$ on a square domain
$\Omega=(0,1)^{2}$, and the exact solution
$\displaystyle u=16x(1-x)y(1-y)\arctan(25x-100y+50)$
with $f={\cal L}u$. Since
$\gamma-\frac{1}{2}\text{div}(\textbf{b})=x^{2}+y^{3}-1$ is not always
positive on $\Omega$, this is an indefinite problem. Initially, the error and
the estimators are large because of an internal layer around the line
$25x-100y+50=0$, where the first derivative of $u$ is large; this layer is
resolved after a few refinements as displayed in Figure 6.2.
Figure 6.2: Output $\mathcal{T}_{1},\mathcal{T}_{8},\mathcal{T}_{15}$ of the
adaptive algorithm
(a)
(b)
Figure 6.3: Convergence history plot of estimator $\mu$ and error
$e:=u-\Pi_{1}u_{h}$ in the (a) piecewise $H^{1}$ norm (b) $L^{2}$ norm vs
number ndof of degrees of freedom for both uniform and adaptive refinement
### 6.3 L-shaped domain (non-smooth solution)
This subsection shows an advantage of using adaptive mesh-refinement over
uniform meshing for the problem (1.1) with the coefficients
$\textbf{A}=I$, $\textbf{b}=(x,y)$, and $\gamma=-4$ on an L-shaped domain
$\Omega=(-1,1)^{2}\backslash([0,1)\times(-1,0])$ and the exact solution
$\displaystyle u=r^{2/3}\sin\left(\frac{2\theta}{3}\right)$
with $f:={\cal L}u$. Since the exact solution is not zero along the boundary
$\partial\Omega$, the error estimators are modified according to Subsection
5.3.3.4. Since $\gamma-\frac{1}{2}\text{div}(\textbf{b})=-5<0$, the problem is
non-coercive. Observe that, as the number of iterations increases, the
refinement concentrates at the singularity as highlighted in Figure 6.4. Since
the exact solution $u$ belongs to $H^{(5/3)-\epsilon}(\Omega)$ for all
$\epsilon>0$, the a priori error estimates predict the convergence order $1/3$
in the $H^{1}$ norm and at least $2/3$ in the $L^{2}$ norm with respect to the
number of degrees of freedom for uniform refinement (recall that
$h\sim\mathrm{ndof}^{-1/2}$ in 2D, so a rate $h^{\sigma}$ translates into
$\mathrm{ndof}^{-\sigma/2}$). Figure 6.5 shows that uniform refinement gives
the sub-optimal convergence rate, whereas adaptive refinement leads to
optimal convergence rates ($1/2$ in the $H^{1}$ norm and $5/6$ in the $L^{2}$ norm).
Figure 6.4: Output $\mathcal{T}_{1},\mathcal{T}_{10},\mathcal{T}_{15}$ of the
adaptive refinement
(a)
(b)
Figure 6.5: Convergence history plot of estimator $\mu$ and error
$e:=u-\Pi_{1}u_{h}$ in the (a) piecewise $H^{1}$ norm (b) $L^{2}$ norm vs
number ndof of degrees of freedom for both uniform and adaptive refinement
### 6.4 Helmholtz equation
This subsection considers the exact solution $u=1+\tanh(-9(x^{2}+y^{2}-0.25))$
to the problem
$\displaystyle-\Delta u-9u=f\quad\quad\text{in}\quad\Omega=(-1,1)^{2}.$
There is an internal layer around the circle centered at $(0,0)$ with radius
$0.25$, where the second derivatives of $u$ are large because of a steep
increase in the solution; this results in a large error at the beginning,
which is resolved with refinement as displayed in Figure 6.6.
Figure 6.6: Output $\mathcal{T}_{1},\mathcal{T}_{5},\mathcal{T}_{11}$ of the
adaptive refinement
(a)
(b)
Figure 6.7: Convergence history plot of estimator $\mu$ and error
$e:=u-\Pi_{1}u_{h}$ in the (a) piecewise $H^{1}$ norm (b) $L^{2}$ norm vs
number ndof of degrees of freedom for both uniform and adaptive refinement
### 6.5 Conclusion
The three computational benchmarks provide empirical evidence for the
sharpness of the mathematical a priori and a posteriori error analysis in this
paper and illustrate the superiority of adaptive over uniform mesh-refining.
The empirical convergence rates in all examples for the $H^{1}$ and $L^{2}$
errors coincide with the predicted convergence rates in Theorem 4.3, in
particular, for the non-convex domain and reduced elliptic regularity. The a
posteriori error bounds from Theorem 5.1 confirm these convergence rates as
well. The ratio of the error estimator $\mu_{\ell}$ to the $H^{1}$ error
$e_{\ell}$, sometimes called the efficiency index, remains bounded by a typical
value of 6; we regard this as a typical overestimation factor for residual-
based a posteriori error estimates. Recall that the constant $C_{\text{reg}}$
has not been displayed so the error estimator $\mu_{\ell}$ does not provide a
guaranteed error bound. Figure 6.8 and 6.9 display the four different
contributions volume residual $(\sum_{P}\eta_{P}^{2})^{1/2}$, stabilization
term $(\sum_{P}\zeta_{P}^{2})^{1/2}$, inconsistency term
$(\sum_{P}\Lambda_{P}^{2})^{1/2}$ and the nonconformity term
$(\sum_{P}\Xi_{P}^{2})^{1/2}$ that add up to the error estimator $\mu_{\ell}$.
All four terms converge with the overall rates, which proves
that none of them is a higher-order term and makes it doubtful that any of
them can be neglected. The volume residual clearly dominates the a
posteriori error estimates, while the stabilisation term remains significantly
smaller for the natural stabilisation (with undisplayed parameter one). The
proposed adaptive mesh-refining algorithm leads to superior convergence
properties and recovers the optimal convergence rates. This holds for the
first example with optimal convergence rates in the large pre-asymptotic
computational range as well as in the second with suboptimal convergence rates
under uniform mesh-refining according to the typical corner singularity and
optimal convergence rates for the adaptive mesh-refining. The third example
with the Helmholtz equation and a moderate wave number shows certain moderate
local mesh-refining in Figure 6.6 but no large improvement over the optimal
convergence rates for uniform mesh-refining. The adaptive refinement generates
hanging nodes because of the way the refinement strategy is defined, but this
is not troublesome in the VEM setting, as a hanging node can be treated as
just another vertex in the decomposition of the domain. However, an increasing
number of hanging nodes under further mesh refinements may violate the mesh
assumption (M2), but numerically the method seems robust without any
restriction on the number of hanging nodes. Future work on the theoretical
investigation of the performance of the adaptive mesh-refining algorithm is
clearly motivated by the successful numerical experiments. The aforementioned
empirical observation that the stabilisation terms do not dominate the a
posteriori error estimates raises the hope for a possible convergence analysis
of the adaptive mesh-refining strategy with the axioms of adaptivity [20]
towards a proof of optimal convergence rates: The numerical results in this
section support this conjecture at least for the lowest-order VEM in 2D for
indefinite non-symmetric second-order elliptic PDEs.
Figure 6.8: Estimator components corresponding to the error
$H1e=|u-\Pi_{1}u_{h}|_{1,\mathrm{pw}}$ of the adaptive refinement presented in
Subsection 6.2-6.4
Figure 6.9: Estimator components corresponding to the error
$L2e=\|u-\Pi_{1}u_{h}\|_{L^{2}(\Omega)}$ of the adaptive refinement presented
in Subsection 6.2-6.4
Acknowledgements The authors sincerely thank one anonymous referee for
suggestions that led to Remark 5. The authors thankfully acknowledge the
support from the MHRD SPARC project (ID 235) titled “The Mathematics and
Computation of Plates”, and the third author also thanks the
Humboldt-Universität zu Berlin for its hospitality during the period 1st July
2019-31st July 2019. The second author acknowledges the financial support of
the University Grants Commission (UGC), Government of India.
## References
* [1] B. Ahmad, A. Alsaedi, F. Brezzi, L. D. Marini, and A. Russo. Equivalent projectors for virtual element methods. Comput. Math. Appl., 66(3):376–391, 2013.
* [2] M. Ainsworth and J. T. Oden. A posteriori error estimation in finite element analysis, volume 37. John Wiley & Sons, 2011.
* [3] B. Ayuso de Dios, K. Lipnikov, and G. Manzini. The nonconforming virtual element method. ESAIM: M2AN, 50(3):879–904, 2016.
* [4] L. Beirão da Veiga, F. Brezzi, A. Cangiani, G. Manzini, L. D. Marini, and A. Russo. Basic principles of virtual element methods. Math. Models Methods Appl. Sci., 23(01):199–214, 2013.
* [5] L. Beirão da Veiga, F. Brezzi, L. D. Marini, and A. Russo. The hitchhiker’s guide to the virtual element method. Math. Models Methods Appl. Sci., 24(08):1541–1573, 2014.
* [6] L. Beirão da Veiga, F. Brezzi, L.D. Marini, and A. Russo. Virtual element method for general second-order elliptic problems on polygonal meshes. Math. Models Methods Appl. Sci., 26(04):729–750, 2016.
# Online Streaming End-to-End Neural Diarization
Handling Overlapping Speech and Flexible Numbers of Speakers
###### Abstract
We propose a streaming diarization method based on an end-to-end neural
diarization (EEND) model, which handles flexible numbers of speakers and
overlapping speech. In our previous study, the speaker-tracing buffer (STB)
mechanism was proposed to achieve a chunk-wise streaming diarization using a
pre-trained EEND model. STB traces the speaker information in previous chunks
to map the speakers in a new chunk. However, it only worked with two-speaker
recordings. In this paper, we propose an extended STB for flexible numbers of
speakers, FLEX-STB. The proposed method uses a zero-padding followed by
speaker-tracing, which alleviates the difference in the number of speakers
between a buffer and a current chunk. We also examine buffer update strategies
to select important frames for tracing multiple speakers. Experiments on
CALLHOME and DIHARD II datasets show that the proposed method achieves
comparable performance to the offline EEND method with 1-second latency. The
results also show that our proposed method outperforms recently proposed
chunk-wise diarization methods based on EEND (BW-EDA-EEND).
Index Terms: online speaker diarization, EEND, overlapping speech, flexible
numbers of speakers
## 1 Introduction
Speaker diarization, the challenging task of answering the question
“who spoke when” [1, 2, 3, 4, 5, 6], assigns speaker labels to audio regions.
Its outcomes can be utilized by downstream tasks. For example,
it can provide turn-taking information and serve as a pre-processing pipeline
for automatic speech recognition in meetings [7, 8, 9, 10], call-center
telephone conversations [11, 12, 13], and home environments [14, 15, 16].
The three challenging requirements that current speaker diarization systems
should satisfy are handling overlapping speech, unknown numbers of speakers, and
online operation. However, satisfying all three at once is still an open
problem. Conventional clustering-based systems primarily focus on clustering
algorithms and speaker embeddings such as Gaussian mixture models (GMM) [17,
18], i-vector [19, 20, 21], d-vector [22, 23], and x-vector [24, 25]. However,
most clustering-based systems assume that there is only one speaker per
segment. As a result, these systems cannot deal with the overlapping speech in
general except for a few studies, e.g., [26].
To solve the overlapping issue, an end-to-end neural diarization model (EEND)
was proposed [27]. EEND directly minimizes the diarization error by mapping
the multi-speaker mixture recording to joint speech activities using a single
neural network. The model estimates the speech activity using a dedicated
stream for every speaker; hence, EEND inherently assigns two or more labels to
the overlapping regions. EEND has already shown significant performance
improvement on overlapping speech, especially after adopting the self-
attention mechanism (SA-EEND) [28], and with a fixed number of speakers.
Table 1: Comparison of speaker diarization methods.

| Method | Online | Overlapping | Flexible #speakers |
|---|---|---|---|
| x-vector+clustering [24] | – | – | ✓ |
| UIS-RNN [22, 23] | ✓ | – | ✓ |
| EEND/SA-EEND [27, 29, 28] | – | ✓ | – |
| EEND-EDA/SC-EEND [30, 31] | – | ✓ | ✓ |
| RSAN [32, 33] | ✓ | ✓ | ✓ |
| BW-EDA-EEND [34] | ✓ | ✓ | ✓ |
| This work | ✓ | ✓ | ✓ |
To deal with overlapping speech and flexible numbers of speakers, Horiguchi et
al. introduced the encoder-decoder based attractor (EDA) module to SA-EEND
[30], and Fujita et al. extended the SA-EEND to speaker-wise conditional EEND
(SC-EEND) [31, 35]. Both extensions have only been evaluated in offline mode.
To cope with online applications, the speaker-tracing buffer (STB) [36] was
proposed to trace the speaker permutation information across chunks which
enables the offline pre-trained SA-EEND model to work in an online manner. The
original STB achieved comparable diarization accuracy to the offline EEND with
$1\text{\,}\mathrm{s}$ chunk size but this method was limited to two-speaker
recordings. In [34], Han et al. proposed the block-wise-EDA-EEND (BW-EDA-EEND)
which makes the EDA-EEND work in an online fashion. Motivated by Transformer-
XL [37], this approach utilizes the previous hidden states of the transformer
encoder as input to the EDA-EEND.
Among the existing diarization methods shown in Table 1, the Recurrent
Selective Attention Network (RSAN) [32, 33] and BW-EDA-EEND satisfy all three
requirements. However, due to its speech separation-based training objective,
RSAN is hard to adapt to real recordings, and evaluations under real scenarios
have not been reported. BW-EDA-EEND [34], on the other hand, conducted online
experiments only with a $10\text{\,}\mathrm{s}$ chunk size, which causes large
latency. In this paper, we consider more realistic streaming applications with
a smaller chunk size such as $1\text{\,}\mathrm{s}$.
In this work, we extend the inference algorithm of an existing offline model
(e.g., EEND-EDA) to operate in an online mode using the speaker-tracing buffer
for flexible numbers of speakers (FLEX-STB), without re-training the offline
model. FLEX-STB is designed to deal with variable numbers of speakers using a
zero-padding mechanism with reasonable latency. Four frame selection
strategies are also proposed to retain the speaker permutation information in
FLEX-STB. The proposed diarization system operates in an online mode, handles
overlapping speech and flexible numbers of speakers, and works in real
scenarios such as CALLHOME and DIHARD II with a $1\text{\,}\mathrm{s}$ chunk
size.
## 2 Preliminary
In this section, we briefly explain two key elements: EEND for flexible
numbers of speakers and the original STB that enables the offline SA-EEND
systems to work online.
### 2.1 EEND for flexible numbers of speakers
Given a $T$-length sequence of $D$-dimensional log-scaled Mel-filterbank-based
acoustic features $\mathbf{X}\in\mathbb{R}^{D\times T}$, a neural network-
based function $\mathrm{EEND}:\mathbb{R}^{D\times T}\rightarrow(0,1)^{S\times
T}$ calculates posterior probabilities of speech activities at each time frame
$\hat{\mathbf{Y}}=(\hat{\mathbf{y}}_{t})_{t=1}^{T}\in(0,1)^{S\times T}$ as
follows:
$\hat{\mathbf{Y}}=\mathrm{EEND}(\mathbf{X}),$ (1)
Here,
$\hat{\mathbf{y}}_{t}\coloneqq\left[\hat{y}_{1,t},\dots,\hat{y}_{S,t}\right]^{\mathsf{T}}$
is the posterior of speech activities calculated for each speaker
$s\in\\{1,\dots,S\\}$ independently, where $\left(\cdot\right)^{\mathsf{T}}$
denotes the matrix transpose and $S$ is the number of speakers. Diarization
results $\tilde{\mathbf{Y}}=(\tilde{y}_{s,t})_{s,t}\in\\{0,1\\}^{S\times T}$
are obtained by applying a threshold value $\theta$ (e.g., 0.5) to the
posteriors $\hat{\mathbf{Y}}$. If
$\tilde{y}_{s,t}=\tilde{y}_{s^{\prime},t}=1~{}(s\neq s^{\prime})$, it means
that both speakers $s$ and $s^{\prime}$ are estimated to have spoken at time
$t$, which is regarded as the overlapping region. If $\forall
s\in\\{1,\dots,S\\},~{}\tilde{y}_{s,t}=0$, it indicates that no speaker is
estimated to have spoken at time $t$. Note that EEND uses permutation
invariant training [27], so the order of the output speakers is not
determined.
While the original EEND [27, 29] fixes the number of speakers $S$ by its
network structure, variants of EEND [30, 31, 35] have been proposed to
estimate the number of speakers $\hat{S}$. However, these methods perform only
in the offline setting.
### 2.2 Speaker-tracing buffer for fixed number of speakers
A straightforward online extension of EEND is to perform the diarization
process on each chunk of acoustic features and concatenate the diarization
results across chunks. However, this does not yield a consistent speaker
permutation over the whole recording, again because permutation invariant
training [27] leaves the order of the output speakers undetermined. We call
this the speaker permutation problem. To solve it, we previously proposed the
speaker-tracing buffer (STB) [36] for the original EEND, which assumes that
the number of speakers is known a priori.
Let $\mathbf{X}_{i}\in\mathbb{R}^{D\times\Delta}$ denote the subsequence
of $\mathbf{X}$ at chunk $i\in\left\\{1,\dots,I\right\\}$ with a fixed chunk
length $\Delta$, i.e.,
$\mathbf{X}=\left[\mathbf{X}_{1},\dots,\mathbf{X}_{i},\dots,\mathbf{X}_{I}\right]$.
The $\mathrm{EEND}:\mathbb{R}^{D\times T}\rightarrow(0,1)^{S\times T}$
function accepts the input features of flexible length $T$ and produces the
posteriors of speech activities of the same length for each speaker. Note that
the number of speakers $S$ is fixed in this section.
#### 2.2.1 Initialization
The STB possesses two matrices: acoustic features
$\mathbf{X}^{(\text{buf})}_{i}\in\mathbb{R}^{D\times L_{i}}$ and the
corresponding posteriors $\mathbf{Y}_{i}^{\text{(buf)}}\in\mathbb{R}^{S\times
L_{i}}$ from $\mathrm{EEND}\left(\cdot\right)$, where $L_{i}$ is the buffer
length after the $i$-th update. The matrices are initialized at the first
chunk as follows:
$\displaystyle\mathbf{X}_{1}^{\text{(buf)}}=\mathbf{X}_{1},$ (2)
$\displaystyle\mathbf{Y}_{1}^{\text{(buf)}}=\hat{\mathbf{Y}}_{1}=\mathrm{EEND}(\mathbf{X}_{1}).$ (3)
As we assume that the chunk size $\Delta$ is smaller than the maximum number
of frames $L_{\text{max}}$ in the buffer, all the inputs and outputs of the
first chunk can be fed into the STB.
#### 2.2.2 Chunk-wise processing handling speaker permutation
From the second chunk on, posteriors $\hat{\mathbf{Y}}_{i}$ are computed using
the STB. First, the input concatenated with the buffer is fed into
$\mathrm{EEND}\left(\cdot\right)$:
$\left[\mathbf{\hat{Y}}_{i-1}^{\text{(buf)}},\mathbf{\hat{Y}}_{i}\right]=\mathrm{EEND}\left(\left[\mathbf{X}_{i-1}^{\text{(buf)}},\mathbf{X}_{i}\right]\right)\in(0,1)^{S\times(L_{i-1}+\Delta)}.$
(4)
Next, the optimal speaker permutation for the current chunk is calculated as
follows:
$\psi=\operatorname*{arg\,max}_{\phi\in\mathrm{Perm}(S)}\mathrm{Corr}\left(\mathbf{Y}_{i-1}^{\text{(buf)}},\mathbf{P}_{\phi}\mathbf{\hat{Y}}^{\text{(buf)}}_{i-1}\right),$
(5)
where $\mathbf{P}_{\phi}\in[0,1]^{S\times S}$ is the permutation matrix for the
$\phi$-th permutation in $\mathrm{Perm}(S)$, the set of all possible
permutations of the sequence $\left(1,\dots,S\right)$.
$\mathrm{Corr}\left(\mathbf{A},\mathbf{B}\right)$ calculates the correlation
between two matrices $\mathbf{A}=\left(a_{jk}\right)_{jk}$ and
$\mathbf{B}=\left(b_{jk}\right)_{jk}$, defined as
$\displaystyle\mathrm{Corr}\left(\mathbf{A},\mathbf{B}\right)\coloneqq\sum_{j,k}\left(a_{jk}-\bar{a}\right)\left(b_{jk}-\bar{b}\right),$
(6)
where $\bar{a}$ and $\bar{b}$ are the mean values of $\mathbf{A}$’s and
$\mathbf{B}$’s elements, respectively. Finally, the posterior probabilities of
the $i$-th chunk are calculated with the permutation matrix that gives the
highest correlation as follows:
$\mathbf{Y}_{i}=\mathbf{P}_{\psi}\mathbf{\hat{Y}}_{i}.$ (7)
If the length of $\left[\mathbf{Y}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}\right]$
exceeds the predetermined maximum buffer length $L_{\text{max}}$, we select
the frames to be kept in the STB; these are used to resolve the speaker
permutation problem raised by future inputs. In [36], four selection
strategies were proposed.
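As an illustration of Eqs. (5)–(7), the following sketch (our code, not the authors' released implementation; it enumerates all $S!$ permutations, which is feasible only for small $S$) aligns the speaker order of a new chunk to the stored buffer:

```python
import itertools
import numpy as np

def corr(a: np.ndarray, b: np.ndarray) -> float:
    """Eq. (6): sum of products of mean-centered matrix entries."""
    return float(((a - a.mean()) * (b - b.mean())).sum())

def align_chunk(y_buf: np.ndarray, y_hat_buf: np.ndarray,
                y_hat_new: np.ndarray) -> np.ndarray:
    """Find the row permutation maximizing Corr between the stored buffer
    posteriors y_buf and the re-decoded buffer part y_hat_buf (Eq. (5)),
    then apply it to the new chunk's posteriors (Eq. (7))."""
    S = y_buf.shape[0]
    best = max(itertools.permutations(range(S)),
               key=lambda p: corr(y_buf, y_hat_buf[list(p)]))
    return y_hat_new[list(best)]
```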
The STB is a solution to the online diarization problem; however, it cannot be
directly applied to EEND for unknown and flexible numbers of speakers. One
reason is that the number of speakers may differ across chunks, so the
correlation in Eq. (6) cannot be calculated. The other reason is that the most
promising selection strategy used the absolute difference between the speech
activity probabilities of two speakers; thus, the method is limited to
two-speaker EENDs.
## 3 Proposed method
In this paper, we propose FLEX-STB, which extends the STB to cope with these
two obstacles so that it can be used with EEND for unknown numbers of speakers
[30, 31]. FLEX-STB deals with the varying number of speakers across chunks by
increasing the number of speaker slots in the speaker-tracing buffer with
zero-padding, as described in Section 3.1. When the system detects new
speakers, it adds new zero-speaker-activity slots to the speaker buffer. We
also propose four selection strategies to update the buffer, none of which is
limited by the number of speakers, in Section 3.2.
Figure 1: Proposed speaker-tracing buffer for unknown numbers of speakers.
Zero-padding is applied to mitigate the differing numbers of speakers between
$\mathbf{Y}^{\text{(buf)}}_{i-1}$ and $\mathbf{\hat{Y}}^{\text{(buf)}}_{i-1}$.
### 3.1 Speaker-tracing buffer for flexible numbers of speakers (FLEX-STB)
In this section, we assume that EEND estimates not only speech activities but
also the number of speakers $S$, i.e., $\mathrm{EEND}:\mathbb{R}^{D\times
T}\rightarrow(0,1)^{S\times T}$. First, to reconcile the differing numbers of
speakers between the buffer $\mathbf{Y}_{i-1}^{\text{(buf)}}$ and the current
chunk’s output $\mathbf{\hat{Y}}_{i}$, the posterior of a speaker with no
speech activity is taken to be zero, so the zero-padding function is applied
as follows:
$\displaystyle\mathbf{Z}^{\text{(buf)}}_{i-1}=\mathsf{ZeroPadding}\left(\mathbf{Y}^{\text{(buf)}}_{i-1},S_{i}\right),$ (8)
$\displaystyle\left[\mathbf{\hat{Z}}_{i-1}^{\text{(buf)}},\mathbf{\hat{Z}}_{i}\right]=\mathsf{ZeroPadding}\left(\left[\mathbf{\hat{Y}}^{\text{(buf)}}_{i-1},\hat{\mathbf{Y}}_{i}\right],S_{i}\right),$ (9)
where $S_{i}=\max(S_{i-1},\hat{S}_{i})$, with $\hat{S}_{i}$ the number of
speakers estimated in the $i$-th chunk, and $\mathsf{ZeroPadding}(\mathbf{A},S)$
appends row zero vectors to $\mathbf{A}$ so that the first dimension becomes
$S$. Next, the speaker permutation $\mathbf{P}_{\psi}$ for the current chunk
is calculated between $\mathbf{Z}_{i-1}^{\text{(buf)}}$ and
$\mathbf{\hat{Z}}^{\text{(buf)}}_{i-1}$ using Eq. (5). Then, the
output for the current chunk is permuted as follows:
$\mathbf{Y}_{i}=\mathbf{P}_{\psi}\mathbf{\hat{Z}}_{i},$ (10)
where $\mathbf{Y}_{i}$ is the final diarization result of the chunk $i$. After
that, at most $L_{\text{max}}$ time indexes
$\mathcal{T}\subseteq\left\\{1,\dots,L_{i-1}+\Delta\right\\}$ are selected
based on the concatenated outputs
$\left[\mathbf{Z}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}\right]\in(0,1)^{S_{i}\times(L_{i-1}+\Delta)}$,
and the FLEX-STB is updated as follows:
$\displaystyle\mathbf{X}^{\text{(buf)}}_{i}=\left[\mathbf{x}_{\tau}\mid\tau\in\mathcal{T}\right],\;\mathbf{Y}^{\text{(buf)}}_{i}=\left[\mathbf{y}_{\tau}\mid\tau\in\mathcal{T}\right],$
(11)
where $\mathbf{x}_{\tau}$ is the $\tau$-th column vector of
$[\mathbf{X}^{\text{(buf)}}_{i-1},\mathbf{X}_{i}]$, $\mathbf{y}_{\tau}$ is the
$\tau$-th column vector of $[\mathbf{Z}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}]$.
The frame selection strategies are described in Section 3.2.
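A minimal sketch of one FLEX-STB update step (our illustration; `select_frames` stands for any of the Section 3.2 strategies, and the new chunk's posteriors are assumed to be already permuted as in Eq. (10)):

```python
import numpy as np

def zero_pad(y: np.ndarray, s: int) -> np.ndarray:
    """Eqs. (8)/(9): append zero rows so that y has s speaker rows."""
    return np.vstack([y, np.zeros((s - y.shape[0], y.shape[1]))])

def flex_stb_update(x_buf, y_buf, x_new, y_new, l_max, select_frames):
    """Concatenate buffer and new chunk, then keep at most l_max frames
    chosen by the given selection strategy (Eq. (11))."""
    s = max(y_buf.shape[0], y_new.shape[0])
    y_cat = np.hstack([zero_pad(y_buf, s), zero_pad(y_new, s)])
    x_cat = np.hstack([x_buf, x_new])
    keep = select_frames(y_cat, l_max)  # time indices, at most l_max of them
    return x_cat[:, keep], y_cat[:, keep]
```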
### 3.2 Selection strategy
When the number of accumulated features becomes larger than the buffer size
$L_{\text{max}}$, a selection strategy is needed to keep relevant features
that contain the speaker permutation information from
$\left[\mathbf{X}^{\text{(buf)}}_{i-1},\mathbf{X}_{i}\right]$ and
$\left[\mathbf{Z}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}\right]$. In this
section, four selection functions are proposed for flexible numbers of
speakers; a code sketch of all four is given after the list.
* •
Uniform sampling: Uniform distribution sampling is applied to extract
$L_{\text{max}}$ frames.
* •
First-in-first-out (FIFO): The most recent $L_{\text{max}}$ features and the
corresponding diarization results are stored in the buffer, which follows the
first-in-first-out manner.
* •
Kullback-Leibler divergence based selection: We utilize the Kullback-Leibler
divergence (KLD) to measure the difference between two probability
distributions: the speaker activities distribution and the uniform
distribution at time $t$, which can be represented as follows:
$\displaystyle\text{KLD}_{t}=\sum_{s=1}^{S_{i}}p_{s,t}\log{\frac{p_{s,t}}{q_{s,t}}},$ (12)
$\displaystyle p_{s,t}=\frac{r_{s,t}}{\sum_{s^{\prime}=1}^{S_{i}}r_{s^{\prime},t}},$ (13)
$\displaystyle q_{s,t}=\frac{1}{S_{i}},$ (14)
where $\left[\mathbf{Z}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}\right]=(r_{s,t})_{1\leq s\leq S_{i},\,1\leq t\leq L_{i-1}+\Delta}$
are the posteriors from EEND with FLEX-STB and $q_{s,t}$ is the uniform
distribution. The top $L_{\text{max}}$ samples with the highest KLD values are
selected from
$\left[\mathbf{Z}^{\text{(buf)}}_{i-1},\mathbf{Y}_{i}\right]$ and the
corresponding $\left[\mathbf{X}^{\text{(buf)}}_{i-1},\mathbf{X}_{i}\right]$.
* •
Weighted sampling using KLD: The combination of uniform sampling and KLD based
selection. $L_{\text{max}}$ features are randomly selected with probabilities
proportional to $\text{KLD}_{t}$.
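The four strategies can be sketched as follows (our illustration; the function names, the random seed, and the small `eps` smoothing are ours). Each function takes concatenated posteriors `y` of shape (S, L) and returns at most `l_max` time indices to keep:

```python
import numpy as np

rng = np.random.default_rng(0)

def uniform_sampling(y, l_max):
    return np.sort(rng.choice(y.shape[1], size=l_max, replace=False))

def fifo(y, l_max):
    return np.arange(y.shape[1] - l_max, y.shape[1])  # most recent frames

def kld_scores(y, eps=1e-10):
    """Eqs. (12)-(14): KL divergence of the per-frame speaker distribution
    from the uniform distribution over the S speakers."""
    p = y / (y.sum(axis=0, keepdims=True) + eps)    # Eq. (13)
    q = 1.0 / y.shape[0]                            # Eq. (14)
    return (p * np.log((p + eps) / q)).sum(axis=0)  # Eq. (12)

def kld_selection(y, l_max):
    return np.sort(np.argsort(kld_scores(y))[-l_max:])  # top-l_max frames

def weighted_sampling_kld(y, l_max):
    w = kld_scores(y) + 1e-10
    return np.sort(rng.choice(y.shape[1], size=l_max, replace=False,
                              p=w / w.sum()))
```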
Table 2: DERs (%) of online EEND-EDA with chunk size $\Delta=1\text{\,}\mathrm{s}$ using FLEX-STB, and of offline EEND-EDA with chunk size $\Delta=\infty$. Note that all results are based on the estimated number of speakers, including the overlapping regions without oracle SAD. Online columns are labelled by the buffer size $L_{\text{max}}$.

| FLEX-STB selection strategy | CALLHOME, $L_{\text{max}}=10\text{\,}\mathrm{s}$ | CALLHOME, $50\text{\,}\mathrm{s}$ | CALLHOME, $100\text{\,}\mathrm{s}$ | DIHARD II, $10\text{\,}\mathrm{s}$ | DIHARD II, $50\text{\,}\mathrm{s}$ | DIHARD II, $100\text{\,}\mathrm{s}$ | CALLHOME, offline | DIHARD II, offline |
|---|---|---|---|---|---|---|---|---|
| Uniform sampling | 27.6 | 20.2 | 19.3 | 52.4 | 39.3 | 36.8 | – | – |
| FIFO | 29.5 | 19.4 | 19.1 | 57.2 | 41.1 | 37.0 | – | – |
| KLD selection | 30.0 | 22.3 | 20.9 | 52.6 | 40.8 | 37.7 | – | – |
| Weighted sampling using KLD | 26.6 | 20.0 | 19.5 | 50.3 | 37.9 | 36.0 | – | – |
| Without FLEX-STB (offline) | – | – | – | – | – | – | 15.3 | 32.9 |
## 4 Experiment
### 4.1 Data
We generated 100k simulated mixtures of one to four speakers following the
procedure in [30], using Switchboard-2 (Phase I, II, III), Switchboard Cellular
(Part 1, 2), and the NIST Speaker Recognition Evaluation (SRE) datasets.
Additionally, we added noises from the MUSAN corpus [38] and room impulse
responses (RIRs) from the Simulated Room Impulse Response Database [39]. These
simulated mixtures were used for training the EEND-based model. Two real
conversation datasets, CALLHOME [11] and DIHARD II [3], were used for
evaluation.
### 4.2 Experiment setting
In this paper, we evaluated the proposed method with the offline EEND-EDA
model. The EEND-EDA model consists of four Transformer encoder blocks with 256
attention units and four attention heads [30]. We first trained the model on a
two-speaker dataset for 100 epochs and then fine-tuned it on the concatenation
of the one- to four-speaker simulated datasets for 25 epochs. Finally, the
EEND-EDA model was fine-tuned on the development set of CALLHOME or DIHARD II,
respectively.
We evaluated all systems with the diarization error rate (DER) metric, scoring
both overlapping and non-speech regions. A collar tolerance of 250 ms was applied
at the start and end of each segment for the CALLHOME dataset. Following the
regulation of the second DIHARD challenge [3], we did not use collar tolerance
for the DIHARD II dataset.
### 4.3 Results
#### 4.3.1 Effect of selection strategies and buffer size
The left part of Table 2 shows the effect of the selection strategies and the
buffer size of FLEX-STB on the EEND-EDA model. The experimental conditions
varied over the four selection methods and buffer sizes of
$10\text{\,}\mathrm{s}$, $50\text{\,}\mathrm{s}$ and $100\text{\,}\mathrm{s}$,
with the chunk size $\Delta$ fixed to $1\text{\,}\mathrm{s}$. All results were
calculated with the estimated number of speakers including the overlapping
regions without oracle sound activity detection (SAD). Increasing the buffer
size, which provides more input information, improved the accuracy regardless
of the selection strategy. Regarding the selection strategies, weighted
sampling using KLD outperformed the other strategies in most cases on both
datasets. The best online results are $19.1\,\%$ and $36.0\,\%$ DER for
CALLHOME and DIHARD II, respectively.
#### 4.3.2 Comparison with the offline EEND-EDA system
We also compared the performance of our proposed online and baseline offline
systems in Table 2. The offline EEND-EDA system takes the whole recording as
input during inference, while the online system takes $1\text{\,}\mathrm{s}$
chunks. Compared with the offline system, the DER of the online system
increases by $3.8\,\%$ and $3.1\,\%$ on the two datasets, which is an
acceptable degradation considering the benefit of streaming diarization. This
degradation presumably comes from the mismatch between the offline model,
which was trained with a fixed large chunk size, and the online mechanism,
whose input size grows incrementally.
#### 4.3.3 Comparison with other online diarization systems
First, we compared our method with the recently proposed BW-EDA-EEND [34] on
the CALLHOME dataset. For a comparison under the same condition, we evaluated
our method with a $10\text{\,}\mathrm{s}$ chunk size. As shown in Table 3,
with a $10\text{\,}\mathrm{s}$ chunk size and estimated SAD, our proposed
method outperforms BW-EDA-EEND for all numbers of speakers on the CALLHOME
dataset.
Next, we compared our proposed method with other systems in a more realistic
scenario, i.e., DIHARD II. For a fair comparison with other online methods, we
followed DIHARD II track 1, where oracle SAD information is provided, and used
it to filter out non-speech frames from the estimated diarization result.
Table 4 shows the comparison with other systems. The proposed online EEND-EDA
with FLEX-STB achieved a DER of $25.8\,\%$, which outperforms UIS-RNN-SML and
is comparable to the offline DIHARD II baseline.
#### 4.3.4 Real-time factor and latency
Our experiment was conducted on one NVIDIA Tesla P100 GPU. To calculate the
average computing time of one buffer, we filled the buffer with dummy values
for the first iteration to keep the buffer size always the same among chunks.
The real-time factor was 0.13 when we applied FLEX-STB to EEND-EDA with a
chunk size of $1\text{\,}\mathrm{s}$ and a buffer size of
$100\text{\,}\mathrm{s}$. This means that the average computation time for a
$1\text{\,}\mathrm{s}$ chunk was $0.13\text{\,}\mathrm{s}$, which is
acceptable for online processing.
Table 3: DERs (%) for each number of speakers on the CALLHOME dataset with a $10\text{\,}\mathrm{s}$ chunk size. Both systems estimated the number of speakers; results include overlapping regions without oracle SAD.

| Method | 2 speakers | 3 speakers | 4 speakers |
|---|---|---|---|
| BW-EDA-EEND [34] | 11.8 | 18.3 | 26.0 |
| EEND-EDA w/ FLEX-STB | 10.0 | 14.0 | 21.1 |
Table 4: DERs (%) on the DIHARD II dataset computed using oracle SAD, including overlapping regions. Online systems with STB were evaluated with a $1\text{\,}\mathrm{s}$ chunk size $\Delta$ and a $100\text{\,}\mathrm{s}$ buffer size $L_{\mathrm{max}}$.

| Method | DER |
|---|---|
| DIHARD-2 baseline (offline) [3] | 26.0 |
| UIS-RNN-SML [23] | 27.3 |
| EEND-EDA w/ FLEX-STB | 25.8 |
## 5 Conclusion
In this paper, we proposed an online streaming speaker diarization method that
handles overlapping speech and flexible numbers of speakers. A speaker-tracing
buffer for flexible numbers of speakers was proposed to handle differing
numbers of speakers across chunks. Experimental results showed that the
proposed online system achieves results comparable to the offline method and
better results than the online BW-EDA-EEND method. One of our future studies
is to
incorporate various extensions developed at the recent DIHARD III challenge,
including semi-supervised training and model fusion [6].
## References
* [1] S. E. Tranter and D. A. Reynolds, “An overview of automatic speaker diarization systems,” _IEEE Trans. on ASLP_ , vol. 14, no. 5, pp. 1557–1565, 2006.
* [2] X. Anguera, S. Bozonnet, N. W. D. Evans, C. Fredouille, G. Friedland, and O. Vinyals, “Speaker diarization: A review of recent research,” _IEEE Trans. on ASLP_ , vol. 20, no. 2, pp. 356–370, 2012.
* [3] N. Ryant, K. Church, C. Cieri, A. Cristia, J. Du, S. Ganapathy, and M. Liberman, “The second DIHARD diarization challenge: Dataset, task, and baselines,” in _INTERSPEECH_ , 2019, pp. 978–982.
* [4] N. Ryant, K. Church, C. Cieri, J. Du, S. Ganapathy, and M. Liberman, “Third DIHARD challenge evaluation plan,” _arXiv preprint arXiv:2006.05815_, 2020.
* [5] T. J. Park, N. Kanda, D. Dimitriadis, K. J. Han, S. Watanabe, and S. Narayanan, “A review of speaker diarization: Recent advances with deep learning,” _arXiv preprint arXiv:2101.09624_ , 2021.
* [6] S. Horiguchi, N. Yalta, P. Garcia, Y. Takashima, Y. Xue, D. Raj, Z. Huang, Y. Fujita, S. Watanabe, and S. Khudanpur, “The Hitachi-JHU DIHARD III system: Competitive end-to-end neural diarization and x-vector clustering systems combined by DOVER-lap,” _arXiv preprint arXiv:2102.01363_, 2021.
* [7] W. Kang, B. C. Roy, and W. Chow, “Multimodal speaker diarization of real-world meetings using d-vectors with spatial features,” in _ICASSP_ , 2020, pp. 6509–6513.
* [8] T. Yoshioka, I. Abramovski, C. Aksoylar, Z. Chen, M. David, D. Dimitriadis, Y. Gong, I. Gurvich, X. Huang, Y. Huang _et al._ , “Advances in online audio-visual meeting transcription,” in _ASRU_ , 2019, pp. 276–283.
* [9] J. Carletta, S. Ashby, S. Bourban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, W. Kraaij, M. Kronenthal _et al._ , “The AMI meeting corpus: A pre-announcement,” in _MLMI_ , 2005, pp. 28–39.
* [10] A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke _et al._ , “The ICSI meeting corpus,” in _ICASSP_ , vol. 1, 2003, pp. I–I.
* [11] “2000 NIST Speaker Recognition Evaluation,” https://catalog.ldc.upenn.edu/LDC2001S97.
* [12] A. Martin and M. Przybocki, “The NIST 1999 speaker recognition evaluation—an overview,” _Digital signal processing_ , vol. 10, no. 1-3, pp. 1–18, 2000.
* [13] M. Senoussaoui, P. Kenny, T. Stafylakis, and P. Dumouchel, “A study of the cosine distance-based mean shift for telephone speech diarization,” _IEEE Trans. on ASLP_ , vol. 22, no. 1, pp. 217–227, 2013.
* [14] J. Barker, S. Watanabe, E. Vincent, and J. Trmal, “The fifth “CHiME” speech separation and recognition challenge: Dataset, task and baselines,” in _INTERSPEECH_ , 2018.
* [15] N. Kanda, R. Ikeshita, S. Horiguchi, Y. Fujita, K. Nagamatsu, X. Wang, V. Manohar, N. E. Y. Soplin, M. Maciejewski, S.-J. Chen _et al._, “The Hitachi/JHU CHiME-5 system: Advances in speech recognition for everyday home environments using multiple microphone arrays,” in _CHiME-5_, 2018.
* [16] S. Watanabe, M. Mandel, J. Barker, and E. Vincent, “CHiME-6 challenge: Tackling multispeaker speech recognition for unsegmented recordings,” in _CHiME-6_ , 2020.
* [17] J. Geiger, F. Wallhoff, and G. Rigoll, “GMM-UBM based open-set online speaker diarization,” in _INTERSPEECH_ , 2010.
* [18] K. Markov and S. Nakamura, “Improved novelty detection for online GMM based speaker diarization,” in _INTERSPEECH_ , 2008.
* [19] S. Madikeri, I. Himawan, P. Motlicek, and M. Ferras, “Integrating online i-vector extractor with information bottleneck based speaker diarization system,” in _INTERSPEECH_ , 2015, pp. 3105–3109.
* [20] D. Garcia-Romero, D. Snyder, G. Sell, D. Povey, and A. McCree, “Speaker diarization using deep neural network embeddings,” in _ICASSP_ , 2017, pp. 4930–4934.
* [21] W. Zhu and J. Pelecanos, “Online speaker diarization using adapted i-vector transforms,” in _ICASSP_ , 2016, pp. 5045–5049.
* [22] A. Zhang, Q. Wang, Z. Zhu, J. Paisley, and C. Wang, “Fully supervised speaker diarization,” in _ICASSP_ , 2019, pp. 6301–6305.
* [23] E. Fini and A. Brutti, “Supervised online diarization with sample mean loss for multi-domain data,” in _ICASSP_ , 2020, pp. 7134–7138.
* [24] M. Diez, L. Burget, S. Wang, J. Rohdin, and J. Černockỳ, “Bayesian HMM based x-vector clustering for speaker diarization,” in _INTERSPEECH_ , 2019, pp. 346–350.
* [25] A. McCree, G. Sell, and D. Garcia-Romero, “Speaker diarization using leave-one-out gaussian PLDA clustering of dnn embeddings.” in _INTERSPEECH_ , 2019, pp. 381–385.
* [26] Z. Huang, S. Watanabe, Y. Fujita, P. García, Y. Shao, D. Povey, and S. Khudanpur, “Speaker diarization with region proposal network,” in _ICASSP_ , 2020, pp. 6514–6518.
* [27] Y. Fujita, N. Kanda, S. Horiguchi, K. Nagamatsu, and S. Watanabe, “End-to-end neural speaker diarization with permutation-free objectives,” in _INTERSPEECH_ , 2019, pp. 4300–4304.
* [28] Y. Fujita, S. Watanabe, S. Horiguchi, Y. Xue, and K. Nagamatsu, “End-to-end neural diarization: Reformulating speaker diarization as simple multi-label classification,” _arXiv preprint arXiv:2003.02966_ , 2020.
* [29] Y. Fujita, N. Kanda, S. Horiguchi, Y. Xue, K. Nagamatsu, and S. Watanabe, “End-to-end neural speaker diarization with self-attention,” in _ASRU_ , 2019, pp. 296–303.
* [30] S. Horiguchi, Y. Fujita, S. Watanabe, Y. Xue, and K. Nagamatsu, “End-to-end speaker diarization for an unknown number of speakers with encoder-decoder based attractors,” in _INTERSPEECH_ , 2020, pp. 269–273.
* [31] Y. Fujita, S. Watanabe, S. Horiguchi, Y. Xue, J. Shi, and K. Nagamatsu, “Neural speaker diarization with speaker-wise chain rule,” _arXiv preprint arXiv:2006.01796_ , 2020.
* [32] T. von Neumann, K. Kinoshita, M. Delcroix, S. Araki, T. Nakatani, and R. Haeb-Umbach, “All-neural online source separation, counting, and diarization for meeting analysis,” in _ICASSP_ , 2019, pp. 91–95.
* [33] K. Kinoshita, M. Delcroix, S. Araki, and T. Nakatani, “Tackling real noisy reverberant meetings with all-neural source separation, counting, and diarization system,” in _ICASSP_ , 2020, pp. 381–385.
* [34] E. Han, C. Lee, and A. Stolcke, “BW-EDA-EEND: Streaming end-to-end neural speaker diarization for a variable number of speakers,” _arXiv preprint arXiv:2011.02678_ , 2020.
* [35] Y. Takashima, Y. Fujita, S. Watanabe, S. Horiguchi, P. García, and K. Nagamatsu, “End-to-end speaker diarization conditioned on speech activity and overlap detection,” in _SLT_ , 2021, pp. 849–856.
* [36] Y. Xue, S. Horiguchi, Y. Fujita, S. Watanabe, P. García, and K. Nagamatsu, “Online end-to-end neural diarization with speaker-tracing buffer,” in _SLT_ , 2021, pp. 841–848.
* [37] Z. Dai, Z. Yang, Y. Yang, J. G. Carbonell, Q. Le, and R. Salakhutdinov, “Transformer-XL: Attentive language models beyond a fixed-length context,” in _ACL_, 2019, pp. 2978–2988.
* [38] D. Snyder, G. Chen, and D. Povey, “MUSAN: A music, speech, and noise corpus,” _arXiv preprint arXiv:1510.08484_ , 2015.
* [39] T. Ko, V. Peddinti, D. Povey, M. L. Seltzer, and S. Khudanpur, “A study on data augmentation of reverberant speech for robust speech recognition,” in _ICASSP_ , 2017, pp. 5220–5224.
# Sharp upper bounds for moments of quadratic Dirichlet $L$-functions
Peng Gao School of Mathematical Sciences, Beihang University, Beijing 100191,
P. R. China<EMAIL_ADDRESS>
###### Abstract.
We establish unconditional sharp upper bounds for the $k$-th moments of the
family of quadratic Dirichlet $L$-functions at the central point for $0\leq
k\leq 2$.
Mathematics Subject Classification (2010): 11M06
Keywords: moments, quadratic Dirichlet $L$-functions, upper bounds
## 1\. Introduction
Moments of central values of families of $L$-functions have attracted much
attention, as they can be applied to address the non-vanishing of these
values, which in turn carries significant arithmetic implications. Much
progress has been made in recent years that has greatly enhanced our
understanding of these moments. A conjecture concerning asymptotic
expressions for the moments of various families of $L$-functions has been made
by J. P. Keating and N. C. Snaith in [Keating-Snaith02], in connection with
random matrix theory. Another conjecture of J. B. Conrey, D. W. Farmer, J. P.
Keating, M. O. Rubinstein and N. C. Snaith in [CFKRS] gives more precise
predictions by including lower order terms on the asymptotic behaviors of the
moments for certain families of $L$-functions.
In [R&Sound] and [R&Sound1], Z. Rudnick and K. Soundararajan developed a
simple and powerful method towards establishing lower bounds for rational
moments of families of $L$-functions of the conjectured order of magnitude.
The method is further extended by M. Radziwiłł and K. Soundararajan in
[Radziwill&Sound1] to all moments larger than the first. On the other hand, an
approach due to K. Soundararajan in [Sound01] enables one to obtain the
corresponding upper bounds under the generalized Riemann hypothesis (GRH).
This approach was further sharpened by A. J. Harper [Harper] to give upper
bounds of desired order of magnitude for moments of $L$-functions in many
families conditionally.
In [Radziwill&Sound], M. Radziwiłł and K. Soundararajan developed a new
principle that allows one to seek upper bounds of all smaller moments with the
knowledge of an upper bound for a particular moment. This principle is further
implemented by W. Heap and K. Soundararajan in [H&Sound] to treat the case of
lower bounds.
In this paper, we are interested in the family of quadratic Dirichlet
$L$-functions. Asymptotic formulas for the first two moments of this family
were obtained by M. Jutila in [Jutila], with the error terms subsequently
improved in [ViTa, DoHo, Young1, sound1, Sono]. For the purpose of this paper,
we are interested in the family given by $\\{L(s,\chi_{8d})\\}$ for $d$ being
odd and square-free integers. Here $\chi_{8d}=\left(\frac{8d}{\cdot}\right)$
is the Kronecker symbol. The study of the above family was initiated by K.
Soundararajan in [sound1], who obtained mollified first two moments of the
family to show that at least $87.5\%$ of the members of this family have non-
vanishing central values. The third moment of the above family was also
obtained for the first time in [sound1]. The error term was improved by M. P.
Young in [Young2] for a smoothed version. In [Shen], Q. Shen obtained the
fourth moment of the above family under GRH.
Asymptotic formulas for all positive real moments of
$\\{L(\tfrac{1}{2},\chi_{8d})\\}$ were conjectured by J. C. Andrade and J. P.
Keating in [Andrade-Keating01]. Combining the above mentioned work of
[Harper, Sound01, R&Sound, R&Sound1, Radziwill&Sound1], we now know that
$\displaystyle X(\log
X)^{\frac{k(k+1)}{2}}\ll_{k}\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}0<d<X\\\
(d,2)=1\end{subarray}}|L(\tfrac{1}{2},\chi_{8d})|^{k}\ll_{k}X(\log
X)^{\frac{k(k+1)}{2}},$
where the lower bound above holds for all real $k\geq 1$ unconditionally and
the upper bound above holds for all $k\geq 0$ under GRH. Here and throughout
the paper, we write $\sideset{}{{}^{*}}{\sum}$ for the sum over square-free
integers.
It is our goal in this paper to apply the principle for obtaining upper bounds
given by M. Radziwiłł and K. Soundararajan in [Radziwill&Sound] to our
setting. Before we state our result, we would like to recall the principle.
Although the principle works for general $L$-functions, we only adapt it for
the family $\\{L(s,\chi_{8d})\\}$ for simplicity. For this, we recall that a
result of B. Hough [Hough] shows that under GRH, the quantity
$\log|L(\tfrac{1}{2},\chi_{8d})|$ with $X<d\leq 2X$ for large $X$ is normally
distributed with mean $\tfrac{1}{2}\log\log d$ and variance $\log\log d$. On
the other hand, it is shown in [Hough] that the sum
(1.1) $\displaystyle\sum_{n\leq
z}\frac{\Lambda(n)\chi_{8d}(n)}{n^{\frac{1}{2}}}$
is distributed similarly to $\log|L(\tfrac{1}{2},\chi_{8d})|$ for
$z=X^{1/(\log\log X)^{2}}$, say. Here $\Lambda(n)$ is the von Mangoldt
function. As the contribution of prime powers $p^{k}$ with $k\geq 3$ is
negligible in the above sum and the contribution of prime squares is about
$\log\log X$, we see that the difference between the expression in (1.1) and
$\log\log X$ is mainly determined by
$\displaystyle{\mathcal{P}}(d)=\sum_{p\leq z}\frac{1}{\sqrt{p}}\chi_{8d}(p).$
Taking exponentials, this implies that the quantity
$|L(\tfrac{1}{2},\chi_{8d})|(\log|d|)^{1/2}\exp(-{\mathcal{P}}(d))$ is usually
small. Thus, for any given real numbers $n>0,0<k<1$, we may estimate the
quantity $\big(|L(\tfrac{1}{2},\chi_{8d})|(\log|d|)^{1/2}\big)^{nk}$ by writing it as
$\displaystyle\Big{(}|L(\tfrac{1}{2},\chi_{8d})|(\log|d|)^{1/2}\Big{)}^{nk}=|L(\tfrac{1}{2},\chi_{8d})|^{nk}(\log|d|)^{nk/2}\exp(-nk(1-k){\mathcal{P}}(d))\cdot\exp(nk(1-k){\mathcal{P}}(d)).$
We now recall that Young’s inequality asserts that $ab\leq a^{p}/p+b^{q}/q$
for real numbers $a,b\geq 0$ and $p,q>1$ satisfying $1/p+1/q=1$. Applying this
with
$a=|L(\tfrac{1}{2},\chi_{8d})|^{nk}(\log|d|)^{nk/2}\exp(-nk(1-k){\mathcal{P}}(d)),b=\exp(nk(1-k){\mathcal{P}}(d))$
and $p=1/k,q=1/(1-k)$, we see that
(1.2)
$\displaystyle\Big{(}|L(\tfrac{1}{2},\chi_{8d})|(\log|d|)^{1/2}\Big{)}^{nk}\leq
k|L(\tfrac{1}{2},\chi_{8d})|^{n}(\log|d|)^{n/2}\exp(-n(1-k){\mathcal{P}}(d))+(1-k)\exp(kn{\mathcal{P}}(d)).$
As we expect
$|L(\tfrac{1}{2},\chi_{8d})|^{n}(\log|d|)^{n/2}\exp(-n{\mathcal{P}}(d))$ to be
small most of the time, the right side of (1.2) should be bounded above by
$\exp(kn{\mathcal{P}}(d))$ on average. Expanding $\exp(kn{\mathcal{P}}(d))$
into an Euler product would lead to a Dirichlet polynomial too long to
estimate, so we instead approximate it by a suitably long Taylor expansion,
which suffices since ${\mathcal{P}}(d)$ is typically small in size.
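To make this device concrete (our restatement; the paper formalizes it via the truncated exponential $E_{\ell}$ in (2.1) below): replacing the exponential by its degree-$\ell$ Taylor truncation,
$\displaystyle\exp(kn{\mathcal{P}}(d))\approx\sum_{j=0}^{\ell}\frac{(kn{\mathcal{P}}(d))^{j}}{j!},\qquad{\mathcal{P}}(d)^{j}=\sum_{p_{1},\dots,p_{j}\leq z}\frac{\chi_{8d}(p_{1}\cdots p_{j})}{\sqrt{p_{1}\cdots p_{j}}},$
turns it into a Dirichlet polynomial of length at most $z^{\ell}$, which can be averaged over $d$ by orthogonality of characters (Lemma 2.3 below), while the truncation error is controlled since, heuristically, ${\mathcal{P}}(d)$ is typically of size $O(\sqrt{\log\log X})$ by the distributional result of [Hough] mentioned above.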
We note that the bound given in (1.2) may be modified to establish similar
bounds for general $L$-functions. If we further know that the corresponding
$L$-function satisfies $L(\tfrac{1}{2})\geq 0$, then we may replace
$|L(\tfrac{1}{2})|^{n}$ on the right side of (1.2) by $L(\tfrac{1}{2})^{n}$ to
see that if we have a good understanding of the $n$-th moment (twisted by
another character) of the corresponding family of $L$-functions at the central
value, then we shall be able to obtain sharper upper bounds for every $m$-th
moment of the same family of $L$-functions with $0\leq m\leq n$. This is
precisely what has been carried out by Radziwiłł and Soundararajan in
[Radziwill&Sound] to treat the moments of quadratic twists of $L$-functions
attached to elliptic curves, since in this case it is known that the
corresponding $L$-functions have non-negative values at $1/2$ and the first
moment of the family can be evaluated.
For the family of quadratic Dirichlet $L$-functions, although our current
knowledge falls short of determining whether $L(\tfrac{1}{2},\chi_{8d})\geq 0$
always holds, we do know that these values are real, so that
$L(\tfrac{1}{2},\chi_{8d})^{2}\geq 0$ is always true.
Thanks to the work of K. Soundararajan in [sound1], we also have a good
knowledge of the twisted second moment of the same family. Combining these
with the above mentioned principle of Radziwiłł and Soundararajan, we are
therefore able to establish unconditionally the correct order of magnitude of
every moment less than the second of the family of quadratic Dirichlet
$L$-functions. This is given in the following result.
###### Theorem 1.1.
Unconditionally, for every $0\leq k\leq 2$, we have
(1.3) $\displaystyle\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}0<d<X\\\
(d,2)=1\end{subarray}}|L(\tfrac{1}{2},\chi_{8d})|^{k}\ll_{k}X(\log
X)^{\frac{k(k+1)}{2}}.$
We notice that the twisted third moment of the quadratic family of Dirichlet
$L$-functions has been evaluated by M. P. Young in [Young2]. Thus, if one
assumes that $L(\tfrac{1}{2},\chi_{8d})\geq 0$ for all $d$ under consideration
(which follows from GRH), then we are able to obtain the following result.
###### Theorem 1.2.
Assume that $L(\tfrac{1}{2},\chi_{8d})\geq 0$ for all odd, square-free $d$.
Then the bound given in (1.3) is valid for every $0\leq k\leq 3$. In
particular, this is true under GRH.
We omit the proof of Theorem 1.2 in the paper as it is similar to that of
Theorem 1.1. We also notice that using the approach of Radziwiłł and
Soundararajan in [Radziwill&Sound], we may be able to give an alternative
proof of a result of B. Hough [Hough, Corollary 1.1] that says the
distribution of logarithms of central values of $L(\tfrac{1}{2},\chi_{8d})$ is
bounded above by the Gaussian distribution.
We would like to point out that in order for the principle of Radziwiłł and
Soundararajan for obtaining upper bounds on moments of $L$-functions to work,
one in general needs to evaluate a given moment of $L$-functions twisted by a
certain character, rather than just the moment itself. Such twisted moments
were already known to play vital roles when using mollifiers to study the
non-vanishing of central values of $L$-functions. They are also necessary in
the work of M. P. Young [Young1, Young2], who used a recursive method to
reduce the sizes of the error terms in the moments of $L$-functions at the
central point. The principle of Radziwiłł and Soundararajan now provides
further evidence of the importance of studying these twisted moments.
In the last section of this paper, we propose a variant of the above mentioned
principle of Radziwiłł and Soundararajan for obtaining upper bounds of moments
of $L$-functions. This may potentially lead to a slightly simpler treatment
(at least in some cases) when acquiring such upper bounds.
## 2\. Preliminaries
We include here some tools needed in our proof of Theorem 1.1 together with an
initial treatment of the proof.
### 2.1. Tools
From now on, we reserve the letter $p$ for a prime number and we recall the
following well-known Mertens’ formula (see [MVa1, Theorem 2.7]) and a
consequence of it (via partial summation).
###### Lemma 2.2.
Let $x\geq 2$. We have, for some constant $b$,
$\sum_{p\leq x}\frac{1}{p}=\log\log x+b+O\Big{(}\frac{1}{\log x}\Big{)}.$
Also, for any integer $j\geq 1$, we have
$\sum_{p\leq x}\frac{(\log p)^{j}}{p}=\frac{(\log x)^{j}}{j}+O((\log
x)^{j-1}).$
Now, we write $\Phi$ for a smooth, non-negative function compactly supported
on $[1/2,5/2]$ with $\Phi(x)=1$ for $x\in[1,2]$, and define, for any complex
number $s$,
${\widehat{\Phi}}(s)=\int_{0}^{\infty}\Phi(x)x^{s}\frac{dx}{x}.$
We define $\delta_{n=\square}$ to be $1$ when $n=\square$ and $0$ otherwise,
where we write $\square$ for a perfect square. Similar to the proof of
[Radziwill&Sound, Proposition 1], we have the following result concerning a
smoothed sum of quadratic characters.
###### Lemma 2.3.
For large $X$ and any odd positive integer $n$, we have
$\displaystyle\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}(d,2)=1\end{subarray}}\chi_{8d}(n)\Phi\Big{(}\frac{d}{X}\Big{)}=\displaystyle\delta_{n=\square}{\widehat{\Phi}}(1)\frac{2X}{3\zeta(2)}\prod_{p|n}\Big{(}\frac{p}{p+1}\Big{)}+O(X^{\frac{1}{2}+\epsilon}\sqrt{n}).$
We denote further $d(n)$ for the divisor function and $\sigma(n)$ for the sum
of the positive divisors of $n$. Also define $\Lambda_{j}(n)$ for all integers
$j\geq 0$ to be the coefficient of $n^{-s}$ in the Dirichlet series expansion
of $(-1)^{j}\zeta^{(j)}(s)/\zeta(s)$. Note that this implies that
$\Lambda_{1}(n)=\Lambda(n)$ and that $\Lambda_{j}(n)$ is supported on integers
having at most $j$ distinct prime factors such that
$\Lambda_{j}(n)\ll_{j}(\log n)^{j}$.
Combining Proposition 1.1 and Proposition 1.3 in [sound1] and setting
$Y=X^{1/4},M=1$ there, we readily deduce the following asymptotic result
concerning the twisted second moment of quadratic Dirichlet $L$-functions.
###### Lemma 2.4.
Writing any odd $l$ as $l=l_{1}l^{2}_{2}$ with $l_{1}$ square-free, we have
for any $\varepsilon>0$,
$\displaystyle\begin{split}\sideset{}{{}^{*}}{\sum}_{(d,2)=1}L(\tfrac{1}{2},\chi_{8d})^{2}\chi_{8d}(l)\Phi(\frac{d}{X})=&\frac{D\widehat{\Phi}(1)}{36\zeta(2)}\frac{d(l_{1})}{\sqrt{l_{1}}}\frac{l_{1}}{\sigma(l_{1})h(l)}X\Big{(}\log^{3}\Big{(}\frac{X}{l_{1}}\Big{)}-3\sum_{\begin{subarray}{c}p|l_{1}\end{subarray}}\log^{2}p\log\Big{(}\frac{X}{l_{1}}\Big{)}+O(l)\Big{)}+O\left(X^{\frac{3}{4}+\varepsilon}l^{\tfrac{1}{2}+\varepsilon}_{1}\right),\end{split}$
where $D=\frac{1}{8}\displaystyle\prod_{\begin{subarray}{c}p\geq
3\end{subarray}}\left(1-\frac{1}{p}\right)h(p)$ and $h$ is the multiplicative
function defined on prime powers by
$\displaystyle h(p^{k})=1+\frac{1}{p}+\frac{1}{p^{2}}-\frac{4}{p(p+1)},\quad
k\geq 1.$
Also,
$\displaystyle O(l)=$
$\displaystyle\sum^{3}_{j,k=0}\sum_{\begin{subarray}{c}m|l_{1}\end{subarray}}\sum_{\begin{subarray}{c}n|l_{1}\end{subarray}}\frac{\Lambda_{j}(m)}{m}\frac{\Lambda_{k}(n)}{n}D(m,n)Q_{j,k}\Big{(}\log\frac{X}{l_{1}}\Big{)}-3\Big{(}A+B\frac{\widehat{\Phi}^{\prime}(1)}{\widehat{\Phi}(1)}\Big{)}\sum_{\begin{subarray}{c}p|l\end{subarray}}\log^{2}p,$
where $A$ and $B$ are absolute constants and $D(m,n)\ll 1$ uniformly for all
$m$ and $n$. The $Q_{j,k}$ are polynomials of degree $\leq 2$ whose
coefficients involve only absolute constants and linear combinations of
$\frac{\widehat{\Phi}^{(j)}(1)}{\widehat{\Phi}(1)}$ for $1\leq j\leq 3$.
Lastly, we define for any non-negative integer $\ell$ and any real number $x$,
(2.1) $E_{\ell}(x)=\sum_{j=0}^{\ell}\frac{x^{j}}{j!}.$
We recall the following two key inequalities from [Radziwill&Sound, Lemma 1,
Lemma 2].
###### Lemma 2.5.
Let $\ell\geq 0$ be an even integer. The function $E_{\ell}(x)$ is positive,
convex and satisfies $E_{\ell}(x)\geq e^{x}$ for $x\leq 0$. Moreover, for
$x\leq\ell/e^{2}$, we have
$e^{x}\leq\Big{(}1+\frac{e^{-\ell}}{16}\Big{)}E_{\ell}(x).$
###### Lemma 2.6.
Let $x_{1}$, $\ldots$, $x_{R}$ be real numbers and let
$C=\exp((e^{-\ell_{1}}+\ldots+e^{-\ell_{R}})/16)$, where $\ell_{1}$,
$\ldots,\ell_{R}$ are positive even integers. Then we have for any $y\geq 0$
and $0\leq k\leq 1$,
$\displaystyle y^{k}\leq$ $\displaystyle
Cky\prod_{j=1}^{R}E_{\ell_{j}}((k-1)x_{j})+C(1-k)\prod_{j=1}^{R}E_{\ell_{j}}(kx_{j})$
$\displaystyle+\sum_{r=0}^{R-1}\Big{(}Cky\prod_{j=1}^{r}E_{\ell_{j}}((k-1)x_{j})+C(1-k)\prod_{j=1}^{r}E_{\ell_{j}}(kx_{j})\Big{)}\Big{(}\frac{e^{2}x_{r+1}}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}.$
### 2.7. Initial treatment of the proof of Theorem 1.1
Let $X$ be a large number and $\\{\ell_{j}\\}_{1\leq j\leq R}$ be a sequence
of even natural numbers defined by $\ell_{1}=2\lceil 100\log\log X\rceil$ and
$\ell_{j+1}=2\lceil 100\log\ell_{j}\rceil$ for $j\geq 1$. Here we choose $R$
to be the largest natural number such that $\ell_{R}>10^{4}$ and observe that
we have $\ell_{j}>\ell_{j+1}^{2}$ for all $1\leq j\leq R-1$. Let ${P}_{1}$ be
the set of odd primes below $X^{1/\ell_{1}^{2}}$. For $2\leq j\leq R$, we
define ${P_{j}}$ to be the set of primes lying in the interval
$(X^{1/\ell_{j-1}^{2}},X^{1/\ell_{j}^{2}}]$. We also define
${\mathcal{P}}_{j}(d)=\sum_{p\in P_{j}}\frac{1}{\sqrt{p}}\chi_{8d}(p).$
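For concreteness, a small sketch (ours; natural logarithms are assumed, and the starting value $\ell_{1}$ is passed in directly, since $\ell_{1}=2\lceil 100\log\log X\rceil$ forces $X$ to be astronomically large before $R>1$) that generates the sequence $\ell_{1},\dots,\ell_{R}$:

```python
import math

def ell_sequence(ell1: int, floor: int = 10**4) -> list[int]:
    """Iterate l_{j+1} = 2*ceil(100*log(l_j)) starting from l_1 and keep
    every term exceeding `floor`, as in the construction above."""
    ells = [ell1]
    while (nxt := 2 * math.ceil(100 * math.log(ells[-1]))) > floor:
        ells.append(nxt)
    return ells

# With l_1 = 10**30 (i.e. loglog X ~ 5 * 10**27) we get R = 2:
print(ell_sequence(10**30))  # [1000000000000000000000000000000, 13816]
```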
Given a real number $\alpha$, we denote
(2.2)
$\displaystyle{\mathcal{N}}_{j}(d,\alpha)=E_{\ell_{j}}(\alpha{\mathcal{P}}_{j}(d)),\quad\mathcal{N}(d,\alpha)=\prod_{j=1}^{R}{\mathcal{N}}_{j}(d,\alpha).$
We further set for two real numbers $n,k$ satisfying $0\leq k\leq 1$,
${\mathcal{A}}_{j}(d)={\mathcal{N}}_{j}(d,(k-1)n),\quad{\mathcal{B}}_{j}(d)={\mathcal{N}}_{j}(d,nk).$
We apply Lemma 2.6 with $y=|L(\frac{1}{2},\chi_{8d})|^{n}(\log
d)^{-\frac{n}{2}}$ and $x_{j}=n{\mathcal{P}}_{j}(d)$ to obtain the following
bounds for the moments of $L(\frac{1}{2},\chi_{8d})$.
###### Proposition 2.8.
With notations as above, we have
(2.3) $\displaystyle\begin{split}\Big{(}|L(\frac{1}{2},\chi_{8d})|^{n}(\log
d)^{-\frac{n}{2}}\Big{)}^{k}\leq&Ck|L(\frac{1}{2},\chi_{8d})|^{n}(\log
d)^{-\frac{n}{2}}\Big{(}\prod_{j=1}^{R}{\mathcal{A}}_{j}(d)+\sum_{r=0}^{R-1}\prod_{j=1}^{r}{\mathcal{A}}_{j}(d)\Big{(}\frac{e^{2}n{\mathcal{P}}_{r+1}(d)}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}\Big{)}\\\
&+C(1-k)\Big{(}\prod_{j=1}^{R}{\mathcal{B}}_{j}(d)+\sum_{r=0}^{R-1}\prod_{j=1}^{r}{\mathcal{B}}_{j}(d)\Big{(}\frac{e^{2}n{\mathcal{P}}_{r+1}(d)}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}\Big{)}.\end{split}$
Arguing similarly to [Radziwill&Sound, Section 3.4], we see that in order to
prove Theorem 1.1, it suffices to show that the right side of (2.3), averaged
over $d$, is $\ll X(\log X)^{\frac{(nk)^{2}}{2}}$. We close this section by
giving such an estimate for the terms involving ${\mathcal{B}}_{j}(d)$.
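For orientation, we record why this target exponent suffices (our bookkeeping, not in the original text): since $\Phi$ is supported in $[1/2,5/2]$, we have $\log d\asymp\log X$ on the support, so a bound of $\ll X(\log X)^{\frac{(nk)^{2}}{2}}$ for the average of $y^{k}=|L(\frac{1}{2},\chi_{8d})|^{nk}(\log d)^{-\frac{nk}{2}}$ yields, after a standard dyadic decomposition,
$\displaystyle\sideset{}{{}^{*}}{\sum}_{\begin{subarray}{c}0<d<X\\\ (d,2)=1\end{subarray}}|L(\tfrac{1}{2},\chi_{8d})|^{nk}\ll(\log X)^{\frac{nk}{2}}\cdot X(\log X)^{\frac{(nk)^{2}}{2}}=X(\log X)^{\frac{nk(nk+1)}{2}},$
which is the bound (1.3) with exponent $nk$ in place of $k$; taking $n=2$ and letting $k$ range over $[0,1]$ covers all the moments $0\leq nk\leq 2$ of Theorem 1.1.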
###### Proposition 2.9.
With notations as above, we have
$\sideset{}{{}^{*}}{\sum}_{(d,2)=1}\Big{(}\prod_{j=1}^{R}{\mathcal{B}}_{j}(d)+\sum_{r=0}^{R-1}\prod_{j=1}^{r}{\mathcal{B}}_{j}(d)\Big{(}\frac{e^{2}n{\mathcal{P}}_{r+1}(d)}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}\Big{)}\Phi\Big{(}\frac{d}{X}\Big{)}\ll
X(\log X)^{\frac{(nk)^{2}}{2}}.$
###### Proof.
Let $w(n)$ be the multiplicative function defined by $w(p^{\alpha})=\alpha!$
for prime powers $p^{\alpha}$ and let $\Omega(n)$ denote the number of prime
factors of $n$ counted with multiplicity. We also define functions
$b_{j}(n),p_{j}(n)$ for $1\leq j\leq R$ taking only the values $0$ or $1$,
with $b_{j}(n)=1$ (respectively $p_{j}(n)=1$) if and only if $n$ is composed
of at most (respectively exactly) $\ell_{j}$ primes, counted with
multiplicity, all from the set $P_{j}$. Using these notations, we see that
(2.4)
${\mathcal{B}}_{j}(d)=\sum_{n_{j}}\frac{1}{\sqrt{n_{j}}}\frac{(nk)^{\Omega(n_{j})}}{w(n_{j})}b_{j}(n_{j})\chi_{8d}(n_{j}),\quad\frac{1}{\ell_{j}!}{\mathcal{P}}_{j}(d)^{\ell_{j}}=\sum_{n_{j}}\frac{1}{w(n_{j})\sqrt{n_{j}}}p_{j}(n_{j})\chi_{8d}(n_{j}),\quad
1\leq j\leq R.$
We note here that both ${\mathcal{B}}_{j}(d)$ and
${\mathcal{P}}_{j}(d)^{\ell_{j}}$ are short Dirichlet polynomials since
$b_{j}(n_{j}),p_{j}(n_{j})=0$ unless
$n_{j}\leq(X^{1/\ell_{j}^{2}})^{\ell_{j}}=X^{1/\ell_{j}}$. It follows that the
expressions
$\prod_{j=1}^{R}{\mathcal{B}}_{j}(d),\prod_{j=1}^{r}{\mathcal{B}}_{j}(d){\mathcal{P}}_{r+1}^{\ell_{r+1}}(d)$
are all short Dirichlet polynomials of length at most
$X^{1/\ell_{1}+\ldots+1/\ell_{R}}<X^{1/1000}$.
We expand the term
$\prod_{j=1}^{r}{\mathcal{B}}_{j}(d){\mathcal{P}}_{r+1}^{\ell_{r+1}}(d)$ for
some $0\leq r\leq R-1$ using (2.4) and apply Lemma 2.3 to estimate it. By
doing so, we may ignore the error term in Lemma 2.3 as both
${\mathcal{B}}_{j}$ and ${\mathcal{P}_{j}}^{\ell_{j}}(d)$ are short Dirichlet
polynomials. Considering the main term contributions from Lemma 2.3, we see
that
$\displaystyle\sideset{}{{}^{*}}{\sum}_{(d,2)=1}\prod_{j=1}^{r}{\mathcal{B}}_{j}(d){\mathcal{P}}_{r+1}^{\ell_{r+1}}(d)\Phi\Big{(}\frac{d}{X}\Big{)}\ll X\prod_{j=1}^{r}\Big{(}\sum_{n_{j}=\square}\frac{1}{\sqrt{n_{j}}}\frac{(nk)^{\Omega(n_{j})}}{w(n_{j})}\prod_{p|n_{j}}\Big{(}\frac{p}{p+1}\Big{)}b_{j}(n_{j})\Big{)}\times\Big{(}\ell_{r+1}!\sum_{n_{r+1}=\square}\frac{1}{w(n_{r+1})\sqrt{n_{r+1}}}\prod_{p|n_{r+1}}\Big{(}\frac{p}{p+1}\Big{)}p_{r+1}(n_{r+1})\Big{)}.$
The proof of the proposition now follows by arguing in the same way as in the
proof of Proposition 4 in [Radziwill&Sound]. ∎
## 3\. Proof of Theorem 1.1
In view of our discussion in the previous section, it remains to show that the
right side of (2.3), averaged over $d$, is also $\ll X(\log
X)^{\frac{(nk)^{2}}{2}}$ for the terms involving ${\mathcal{A}}_{j}(d)$ when
$n=2$. As our approach may be applied to treat other values of $n$, we shall
retain the symbol $n$ in most places in the rest of this section instead of
specifying it to be $2$. Thus, to conclude the proof of Theorem 1.1, it
suffices to establish the following result.
###### Proposition 3.1.
With notations as above, we have for $n=2$,
(3.1)
$\displaystyle\sideset{}{{}^{*}}{\sum}_{(d,2)=1}|L(\frac{1}{2},\chi_{8d})|^{n}\Big{(}\prod_{j=1}^{R}{\mathcal{A}}_{j}(d)+\sum_{r=0}^{R-1}\prod_{j=1}^{r}{\mathcal{A}}_{j}(d)\Big{(}\frac{e^{2}n{\mathcal{P}}_{r+1}(d)}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}\Big{)}\Phi\Big{(}\frac{d}{X}\Big{)}\ll
X(\log X)^{\frac{(nk)^{2}+n}{2}}.$
In the remainder of this section, we give a proof of Proposition 3.1. First
note that we may replace $|L(\frac{1}{2},\chi_{8d})|^{n}$ by
$L(\frac{1}{2},\chi_{8d})^{n}$ when $n=2$, since $L(\frac{1}{2},\chi_{8d})$ is
real. As the arguments are similar, it suffices to show that
(3.2) $\displaystyle\sideset{}{{}^{*}}{\sum}_{(d,2)=1}$ $\displaystyle
L(\tfrac{1}{2},\chi_{8d})^{n}\sum_{r=0}^{R-1}\prod_{j=1}^{r}{\mathcal{A}}_{j}(d)\Big{(}\frac{e^{2}n{\mathcal{P}}_{r+1}(d)}{\ell_{r+1}}\Big{)}^{\ell_{r+1}}\Phi\Big{(}\frac{d}{X}\Big{)}\ll
X(\log X)^{\frac{(nk)^{2}+n}{2}}.$
Note first that we have
${\mathcal{A}}_{j}(d)=\sum_{n_{j}}\frac{1}{\sqrt{n_{j}}}\frac{(n(k-1))^{\Omega(n_{j})}}{w(n_{j})}b_{j}(n_{j})\chi_{8d}(n_{j}),\quad
1\leq j\leq R.$
Analogously to our discussion above, the products
$\prod_{j=1}^{r}{\mathcal{A}}_{j}(d){\mathcal{P}}_{{r+1}}^{\ell_{r+1}}(d)$ for
all $0\leq r\leq R-1$ are short Dirichlet polynomials of length at most
$X^{1/1000}$.
We now apply Lemma 2.4 to evaluate
$\prod_{j=1}^{r}{\mathcal{A}}_{j}(d){\mathcal{P}}_{r+1}^{\ell_{r+1}}(d)$ for
some $0\leq r\leq R-1$ by expanding it into Dirichlet series. Once again we
may focus only on the main term to see that, upon writing
$n_{j}=(n_{j})_{1}(n_{j})_{2}^{2}$ with $(n_{j})_{1}$ being square-free,
(3.3)
$\displaystyle\begin{split}&\sideset{}{{}^{*}}{\sum}_{(d,2)=1}L(\tfrac{1}{2},\chi_{8d})^{n}\prod_{j=1}^{r}{\mathcal{A}}_{j}(d){\mathcal{P}}_{r+1}^{\ell_{r+1}}(d)\Phi\Big{(}\frac{d}{X}\Big{)}\\\
\ll&X\sum_{n_{1},\cdots,n_{r+1}}\Big{(}\prod_{j=1}^{r}\frac{1}{\sqrt{n_{j}(n_{j})_{1}}}\frac{(n(k-1))^{\Omega(n_{j})}}{w(n_{j})}b_{j}(n_{j})\frac{d((n_{j})_{1})(n_{j})_{1}}{\sigma((n_{j})_{1})h(n_{j})}\Big{)}\Big{(}\frac{\ell_{r+1}!}{\sqrt{n_{r+1}(n_{r+1})_{1}}}\frac{p_{r+1}(n_{r+1})}{w(n_{r+1})}\frac{d((n_{r+1})_{1})(n_{r+1})_{1}}{\sigma((n_{r+1})_{1})h(n_{r+1})}\Big{)}\\\
&\times\Big{(}\log^{3}\Big{(}\frac{X}{(n_{1})_{1}\cdots(n_{r+1})_{1}}\Big{)}-3\sum_{\begin{subarray}{c}p|(n_{1})_{1}\cdots(n_{r+1})_{1}\end{subarray}}\log^{2}p\log\Big{(}\frac{X}{(n_{1})_{1}\cdots(n_{r+1})_{1}}\Big{)}+O(n_{1}\cdots
n_{r+1})\Big{)}.\end{split}$
As the estimations are similar, we may consider only the sums above involving
the terms $\log^{3}(X/((n_{1})_{1}\cdots(n_{r+1})_{1}))=(\log
X-\log((n_{1})_{1}\cdots(n_{r+1})_{1}))^{3}$. Upon expanding, we observe that
we can write $\log^{3}(X/((n_{1})_{1}\cdots(n_{r+1})_{1}))$ as a linear
combination of sums of the form
X)^{m_{0}}\sum_{\begin{subarray}{c}p_{i}|(n_{i})_{1}\\\ 1\leq i\leq
r+1\end{subarray}}\prod_{1\leq i\leq r+1}(\log p_{i})^{m_{i}},$
where $m_{j},0\leq j\leq r+1$ are non-negative integers satisfying
$\sum_{0\leq j\leq r+1}m_{j}=3$ and where $C(m_{0},\ldots,m_{r+1})$ are
bounded constants. Without loss of generality, we may group terms to consider
the total contribution to (3.3) from all terms of the above form corresponding
to $m_{i_{1}}=m_{i_{2}}=m_{i_{3}}=1$ for some $1\leq i_{1}<i_{2}<i_{3}\leq
r+1$. For example, when the corresponding $p_{i_{1}}\in P_{1},p_{i_{2}}\in
P_{2},p_{i_{3}}\in P_{r+1}$, the contribution is
(3.5) $\displaystyle\begin{split}\ll&\sum_{l_{1},l_{2},l_{3}\geq
0}\prod^{3}_{s=1}\Big{(}\frac{\log
p_{i_{s}}}{p^{l_{s}+1}_{i_{s}}}\frac{|(n(k-1))|^{2l_{s}+1}}{(2l_{s}+1)!}\frac{np_{i_{s}}}{(p_{i_{s}}+1)h(p^{2l_{s}+1}_{i_{s}})}\Big{)}\\\
&\times\prod_{j=1}^{r}\Big{(}\sum_{(n_{j},p_{i_{1}}p_{i_{2}})=1}\frac{1}{\sqrt{n_{j}(n_{j})_{1}}}\frac{(n(k-1))^{\Omega(n_{j})}}{w(n_{j})}\widetilde{b}_{j,l_{1},l_{2}}(n_{j})\frac{d((n_{j})_{1})(n_{j})_{1}}{\sigma((n_{j})_{1})h(n_{j})}\Big{)}\\\
&\times\Big{(}\ell_{r+1}!\sum_{(n_{r+1},p_{i_{3}})=1}\frac{1}{\sqrt{n_{r+1}(n_{r+1})_{1}}}\frac{p_{r+1}(n_{r+1}p^{2l_{3}+1}_{i_{3}})}{w(n_{r+1})}\frac{d((n_{r+1})_{1})(n_{r+1})_{1}}{\sigma((n_{r+1})_{1})h(n_{r+1})}\Big{)},\end{split}$
where we define
$\widetilde{b}_{j,l_{1},l_{2}}(n_{j})=b_{j}(n_{j}p^{l_{j}}_{i_{j}})$ for
$j=1,2$ and $\widetilde{b}_{j,l_{1},l_{2}}(n_{j})=b_{j}(n_{j})$ otherwise.
Let us consider the sum over $n_{1}$ in (3.5). If we replace the factor
$\widetilde{b}_{1,l_{1},l_{2}}(n_{1})$ by $1$, then the sum becomes
(3.6) $\displaystyle\begin{split}&\prod_{\begin{subarray}{c}p\in P_{1}\\\
(p,p_{i_{1}})=1\end{subarray}}\Big{(}\sum_{j=0}^{\infty}\frac{1}{p^{j}}\frac{(n(k-1))^{2j}}{(2j)!h(p^{2j})}+\sum_{j=0}^{\infty}\frac{1}{p^{j+1}}\frac{(n(k-1))^{2j+1}}{(2j+1)!}\frac{np}{(p+1)h(p^{2j+1})}\Big{)}\\\
\ll&\Big{(}\prod_{\begin{subarray}{c}p\in P_{1}\\\
(p,p_{i_{1}})=1\end{subarray}}C(p)\Big{)}\times\exp\Big{(}\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\sum_{\begin{subarray}{c}p\in
P_{1}\end{subarray}}\frac{1}{p}\Big{)},\end{split}$
where for some constant $A$ independent of $p$,
$\displaystyle\begin{split}C(p)=&\exp(\frac{A}{p^{2}})\Big{(}1-\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\frac{1}{p}\Big{)}\Big{(}\sum_{j=0}^{\infty}\frac{1}{p^{j}}\frac{(n(k-1))^{2j}}{(2j)!h(p^{2j})}+\sum_{j=0}^{\infty}\frac{1}{p^{j+1}}\frac{(n(k-1))^{2j+1}}{(2j+1)!}\frac{np}{(p+1)h(p^{2j+1})}\Big{)}.\end{split}$
Note here that $C(p)$ is well-defined, as one readily checks that each
factor in the above product is positive for $n=2$, $0\leq k\leq 1$ and $p\geq 3$.
Meanwhile, we note that the left side of (3.6) is also
(3.7)
$\displaystyle\begin{split}\gg\exp\Big{(}\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\sum_{\begin{subarray}{c}p\in
P_{1}\end{subarray}}\frac{1}{p}\Big{)}.\end{split}$
On the other hand, using Rankin’s trick by noticing that
$2^{\Omega(n_{1})-\ell_{1}}\geq 1$ if $\Omega(n_{1})>\ell_{1}$, we see that
the error introduced by replacing $\widetilde{b}_{1,l_{1},l_{2}}(n_{1})$ with
$1$ does not exceed
(3.8)
$\displaystyle\begin{split}&\sum_{n_{1}}\frac{1}{\sqrt{n_{1}(n_{1})_{1}}}\frac{|n(k-1)|^{\Omega(n_{1})}}{w(n_{1})}2^{\Omega(n_{1})-\ell_{1}}\frac{d((n_{1})_{1})(n_{1})_{1}}{\sigma((n_{1})_{1})h(n_{1})}\\\
\leq&2^{-\ell_{1}}\prod_{\begin{subarray}{c}p\in P_{1}\\\
(p,p_{i_{1}})=1\end{subarray}}\Big{(}1+\sum_{j=1}^{\infty}\frac{1}{p^{j}}\frac{(n(k-1))^{2j}2^{2j}}{(2j)!}\frac{1}{h(p^{2j})}+\sum_{j=0}^{\infty}\frac{1}{p^{j+1}}\frac{|n(k-1)|^{2j+1}2^{2j+1}}{(2j+1)!}\frac{np}{(p+1)h(p^{2j+1})}\Big{)}\\\
\ll&2^{-\ell_{1}}\exp\Big{(}\big{(}2(n(k-1))^{2}+2n^{2}(1-k)\big{)}\sum_{p\in
P_{1}}\frac{1}{p}\Big{)}.\end{split}$
We deduce from (3.6), (3.7) and (3.8) that the error term is
(3.9) $\displaystyle\ll$ $\displaystyle
2^{-\ell_{1}}\exp\Big{(}\big{(}\frac{3}{2}(n(k-1))^{2}+3n^{2}(1-k)\big{)}\sum_{p\in
P_{1}}\frac{1}{p}\Big{)}\Big{(}\prod_{\begin{subarray}{c}p\in P_{1}\\\
(p,p_{i_{1}})=1\end{subarray}}C(p)\Big{)}\times\exp\Big{(}\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\sum_{\begin{subarray}{c}p\in
P_{1}\end{subarray}}\frac{1}{p}\Big{)}.$
Note that Lemma 2.2 implies that for all $1\leq j\leq R$, we have $\sum_{p\in
P_{j}}1/p\leq 2\log\ell_{j-1}\leq\ell_{j}/36$ from our definition of
$\ell_{j}$. We obtain from this and (3.9) that for $n=2$, we have
(3.10)
$\displaystyle\begin{split}&\sum_{(n_{1},p_{i_{1}})=1}\frac{1}{\sqrt{n_{1}(n_{1})_{1}}}\frac{(n(k-1))^{\Omega(n_{1})}}{w(n_{1})}\widetilde{b}_{1,l_{1},l_{2}}(n_{1})\frac{d((n_{1})_{1})(n_{1})_{1}}{\sigma((n_{1})_{1})h(n_{1})}\\\
\ll&(1+O(2^{-\ell_{1}/2}))\Big{(}\prod_{\begin{subarray}{c}p\in P_{1}\\\
(p,p_{i_{1}})=1\end{subarray}}C(p)\Big{)}\times\exp\Big{(}\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\sum_{\begin{subarray}{c}p\in
P_{1}\end{subarray}}\frac{1}{p}\Big{)}.\end{split}$
We may also establish similar estimations for sums over $n_{j},2\leq j\leq r$.
Next, we apply Rankin’s trick again to see that the contribution of the
$n_{r+1}$ terms in (3.5) is
$\leq\ell_{r+1}!10^{-\ell_{r+1}}\prod_{p\in
P_{r+1}}\Big{(}\sum_{j=0}^{\infty}\frac{10^{2j}}{p^{j}(2j)!h(p^{2j})}+\sum_{j=1}^{\infty}\frac{10^{2j+1}}{p^{j+1}(2j+1)!}\frac{np}{(p+1)h(p^{2j+1})}\Big{)}.$
We apply Lemma 2.2, the estimation
$\ell_{r+1}!\leq\ell_{r+1}(\ell_{r+1}/e)^{\ell_{r+1}}$ and the definition of
$\ell_{r+1}$ to see that the above is
$\ll\ell_{r+1}\Big{(}\frac{\ell_{r+1}}{10e}\Big{)}^{\ell_{r+1}}\exp\Big{(}70\sum_{p\in
P_{r+1}}\frac{1}{p}\Big{)}\ll\ell_{r+1}\Big{(}\frac{\ell_{r+1}}{10e}\Big{)}^{\ell_{r+1}}\exp(\tfrac{5}{7}\ell_{r+1}).$
Combining this with (3.5) and (3.10), we conclude that the contribution
from all terms of the forms given in (3.4) corresponding to
$m_{i_{1}}=m_{i_{2}}=m_{i_{3}}=1$ for some $1\leq i_{1}<i_{2}<i_{3}\leq r+1$
to the left side of (3.2) is
$\displaystyle\ll$ $\displaystyle{X}e^{-\ell_{r+1}/7}\prod_{1\leq j\leq
r}\Big{(}1+O(2^{-\ell_{j}/2})\Big{)}\prod_{\begin{subarray}{c}p\in\bigcup_{j=1}^{r}P_{j}\end{subarray}}C(p)\times\exp\Big{(}\big{(}\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)\big{)}\sum_{\begin{subarray}{c}p\in\bigcup_{j=1}^{r}P_{j}\end{subarray}}\frac{1}{p}\Big{)}$
$\displaystyle\times\Big{(}\sum_{p\in\bigcup_{j=1}^{r}P_{j}}\sum_{l\geq
0}\frac{\log
p}{p^{l+1}}\frac{|(n(k-1))|^{2l+1}}{(2l+1)!}\frac{np}{(p+1)h(p^{2l+1})}\Big{)}^{3}$
$\displaystyle\ll$ $\displaystyle e^{-\ell_{r+1}/7}X(\log
X)^{\frac{(n(k-1))^{2}}{2}+n^{2}(k-1)+3},$
by using Lemma 2.2 while noting that $\displaystyle\prod_{p}C(p)\ll 1$ and
$\displaystyle\prod_{1\leq j\leq r}\Big{(}1+O(2^{-\ell_{j}/2})\Big{)}\ll 1$
since $\ell_{j}>\ell_{j+1}^{2}$.
Summing over $r$, we deduce that the estimate in (3.2) is valid, and this
completes the proof of the proposition.
## 4\. A further remark
In this section we describe a variant of the principle of Radziwiłł and
Soundararajan mentioned in Section 1 for obtaining upper bounds on moments of
$L$-functions. This is based on the following simple observation.
###### Lemma 4.1.
Let $\ell$ be a non-negative even integer. For any real number $x$, let
$E_{\ell}(x)$ be as defined in (2.1). We have
$E_{\ell}(x)E_{\ell}(-x)\geq 1.$
###### Proof.
We expand out the product $f_{\ell}(x):=E_{\ell}(x)E_{\ell}(-x)-1$ as a
polynomial to see that it suffices to show that the coefficients of
$x^{i}$, $0\leq i\leq 2\ell$, are all non-negative. Note that the function
$f_{\ell}(x)$ is even, so that only even powers of $x$ appear in the expansion.
Furthermore, it follows from the binomial theorem that the only non-zero even
powers of $x$ involved are those of the form $x^{2j}$ for $2j>\ell$. We then set
$\ell=2k$ to see that the coefficient of $x^{2(k+j)}$ for some integer $j>0$
is given by
$\frac{1}{(2(k+j))!}\sum^{2k}_{i=2j}\binom{2(k+j)}{i}(-1)^{i}=-\frac{2}{(2(k+j))!}\sum^{2j-1}_{i=0}\binom{2(k+j)}{i}(-1)^{i}\geq
0,$
where the last inequality above follows from [FI10, (6.6)] and this completes
the proof. ∎
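Since the definition (2.1) is not restated in this section, the following minimal numerical sketch assumes the standard choice $E_{\ell}(x)=\sum_{0\leq j\leq\ell}x^{j}/j!$, i.e., the degree-$\ell$ truncation of the exponential; under that assumption it illustrates both the inequality of Lemma 4.1 and why the evenness of $\ell$ is essential.

```python
# Numerical sanity check of Lemma 4.1, assuming E_l(x) = sum_{j<=l} x^j / j!
# (the degree-l truncated exponential; (2.1) is not restated in this section).
from math import factorial

def E(l, x):
    """Degree-l Taylor truncation of exp(x)."""
    return sum(x**j / factorial(j) for j in range(l + 1))

# For even l, E_l(x) * E_l(-x) >= 1 for all real x; scan a grid as a check.
for l in (2, 4, 10, 40):
    worst = min(E(l, x) * E(l, -x) for x in (t / 10 for t in range(-300, 301)))
    print(f"l = {l:2d}: min of E_l(x)E_l(-x) on [-30, 30] = {worst:.6f}")

# For odd l the inequality fails: l = 1, x = 2 gives (1 + 2)(1 - 2) = -3.
print(E(1, 2) * E(1, -2))
```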
Now, for two real numbers $n,k$ with $0<k<1$, we apply Lemma 4.1 to see that
$\displaystyle\sideset{}{{}^{*}}{\sum}_{(d,2)=1}|L(\tfrac{1}{2},\chi_{8d})|^{nk}\Phi(\frac{d}{X})\leq\sideset{}{{}^{*}}{\sum}_{(d,2)=1}|L(\tfrac{1}{2},\chi_{8d})|^{nk}{\mathcal{N}}(d,n(k-1))^{k}{\mathcal{N}}(d,n(1-k))^{k}\Phi(\frac{d}{X}),$
where we recall that the definition of ${\mathcal{N}}(d,\alpha)$ is given in
(2.2). We further apply Hölder’s inequality to the right side expression above
to deduce that
$\displaystyle\sideset{}{{}^{*}}{\sum}_{(d,2)=1}|L(\tfrac{1}{2},\chi_{8d})|^{nk}\Phi(\frac{d}{X})\leq\Big{(}\sideset{}{{}^{*}}{\sum}_{(d,2)=1}|L(\tfrac{1}{2},\chi_{8d})|^{n}{\mathcal{N}}(d,n(k-1))\Phi(\frac{d}{X})\Big{)}^{k}\Big{(}\sideset{}{{}^{*}}{\sum}_{(d,2)=1}{\mathcal{N}}(d,n(1-k))^{k/(1-k)}\Phi(\frac{d}{X})\Big{)}^{1-k}.$
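The Hölder step above uses the conjugate exponents $1/k$ and $1/(1-k)$. As a quick numerical sanity sketch of the inequality being invoked (with random non-negative stand-ins for the weighted summands; none of the numbers below come from the paper):

```python
# Hölder's inequality with p = 1/k, q = 1/(1-k), so that 1/p + 1/q = 1:
# sum a_d b_d <= (sum a_d**p)**(1/p) * (sum b_d**q)**(1/q) for a_d, b_d >= 0.
# Here a_d stands in for (|L|^n N(d, n(k-1)) Phi)^k and b_d for the remaining
# factor; the random values are illustrative only.
import random

random.seed(0)
k = 0.5
p, q = 1 / k, 1 / (1 - k)
a = [random.random() for _ in range(1000)]
b = [random.random() for _ in range(1000)]

lhs = sum(x * y for x, y in zip(a, b))
rhs = sum(x**p for x in a) ** (1 / p) * sum(y**q for y in b) ** (1 / q)
print(lhs <= rhs)   # always True
```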
This approach can be applied to obtain upper bounds for the $k$-th moment of
$L$-functions. In particular, it is convenient to study the $\tfrac{1}{2}$-th
moment as it suffices to evaluate the average over $d$ of
$|L(\tfrac{1}{2},\chi_{8d})|{\mathcal{N}}(d,-\tfrac{1}{2})$ and
${\mathcal{N}}(d,\tfrac{1}{2})$ by taking $n=1,k=1/2$ above. If we assume that
$L(\tfrac{1}{2},\chi_{8d})\geq 0$ (which follows from GRH), then we just need
to consider $L(\tfrac{1}{2},\chi_{8d}){\mathcal{N}}(d,-\tfrac{1}{2})$, which
is simpler than our work above. The same method applies equally to the study
of the moments of quadratic twists of $L$-functions attached to elliptic
curves (as we know that in this case the corresponding $L$-functions have
non-negative values at the central point).
More generally, when $k=1/m$ with $m>2$ a positive integer, writing
$\{L(s,f)\}_{f\in\mathcal{F}}$ for a general family of $L$-functions, we may
apply Lemma 4.1 (with a suitable adjustment to the definition of
${\mathcal{N}}$) to see that
$\displaystyle\sum_{f\in\mathcal{F}}|L(\tfrac{1}{2},f)|^{k}\leq\sum_{f\in\mathcal{F}}|L(\tfrac{1}{2},f)|^{k}{\mathcal{N}}(f,k-1)^{k}\Big{(}\prod^{m-2}_{i=1}\big{(}{\mathcal{N}}(f,1-ik){\mathcal{N}}(f,(i+1)k-1)\big{)}^{k}\Big{)}{\mathcal{N}}(f,1-(m-1)k)^{k}.$
Applying Hölder’s inequality $m-1$ times to the right side expression above,
we deduce that
$\displaystyle\sum_{f\in\mathcal{F}}|L(\tfrac{1}{2},f)|^{k}\leq$
$\displaystyle\Big{(}\sum_{f\in\mathcal{F}}|L(\tfrac{1}{2},f)|{\mathcal{N}}(f,k-1)\Big{)}^{k}\prod^{m-2}_{i=1}\Big{(}\sum_{f\in\mathcal{F}}{\mathcal{N}}(f,1-ik){\mathcal{N}}(f,(i+1)k-1)\Big{)}^{k}\Big{(}\sum_{f\in\mathcal{F}}{\mathcal{N}}(f,1-(m-1)k)\Big{)}^{k}.$
It follows that in order to obtain upper bounds for the $1/m$-th moment of the
corresponding family, we only need to be able to evaluate the average over
$f\in\mathcal{F}$ of $|L(\tfrac{1}{2},f)|{\mathcal{N}}(f,k-1)$ and quantities
involving products of at most two copies of ${\mathcal{N}}$. The above
approach can also be adapted to treat moments of the Riemann zeta function on
the critical line. We shall however not go any further in this direction here.
Acknowledgments. P. G. is supported in part by NSFC grant 11871082.
## References
# Critical behaviors of the $O(4)$ and $Z(2)$ symmetries in the QCD phase
diagram
Yong-rui Chen School of Physics, Dalian University of Technology, Dalian,
116024, P.R. China Rui Wen School of Physics, Dalian University of
Technology, Dalian, 116024, P.R. China Wei-jie Fu<EMAIL_ADDRESS>School of
Physics, Dalian University of Technology, Dalian, 116024, P.R. China
###### Abstract
In this work we have studied the QCD phase structure and critical dynamics
related to the 3-$d$ $O(4)$ and $Z(2)$ symmetry universality classes in the
two-flavor quark-meson low energy effective theory within the functional
renormalization group approach. We have employed an expansion in Chebyshev
polynomials to solve the flow equation for the order-parameter potential. The
chiral phase transition line of $O(4)$ symmetry in the chiral limit, and the
$Z(2)$ line of critical end points related to the explicit chiral symmetry
breaking are depicted in the phase diagram. Various critical exponents related
to the order parameter, chiral susceptibilities and correlation lengths have
been calculated for the 3-$d$ $O(4)$ and $Z(2)$ universality classes in the
phase diagram, respectively. We find that the critical exponents obtained in
the computation, where a nontrivial field-dependent mesonic dispersion
relation is taken into account, are in quantitative agreement with results
from other approaches, e.g., the conformal bootstrap, Monte Carlo simulations
and $d=3$ perturbation expansion, etc. Moreover, the size of the critical
regime in the QCD phase diagram is found to be very small.
## I Introduction
Significant progress has been made in studies of QCD phase structure over the
last decade, both from the experimental and theoretical sides; see, e.g.
Stephanov (2006); Friman _et al._ (2011); Luo and Xu (2017); Andronic _et
al._ (2018); Fischer (2019); Bzdak _et al._ (2020); Fu _et al._ (2020);
Bazavov _et al._ (2020); Borsanyi _et al._ (2020); Fu _et al._ (2021). One
of the most prominent features of the QCD phase structure is the probable
presence of a second order critical end point (CEP) in the phase diagram
spanned by the temperature $T$ and baryon chemical potential $\mu_{B}$ or
densities, which separates the first order phase transition at high $\mu_{B}$
from the continuous crossover at low $\mu_{B}$ Stephanov (2006). The existence
and location of the CEP are, however, still open questions, whose answers
would help us to unravel the mysteries related to the properties of strongly
interacting matter under extreme conditions. The Beam
Energy Scan (BES) Program at the Relativistic Heavy Ion Collider (RHIC) is
aimed at searching for and locating the critical end point, where fluctuation
observables sensitive to the critical dynamics, e.g., high-order cumulants of
net-proton, net-charge, net-kaon multiplicity distributions, have been
measured Adamczyk _et al._ (2014a, b); Luo (2015); Adamczyk _et al._ (2018).
Notably, a non-monotonic dependence of the kurtosis of the net-proton
multiplicity distribution on the beam energy with $3.1\sigma$ significance in
central collisions has been reported by the STAR collaboration recently Adam
_et al._ (2020).
On the other hand, lattice QCD simulations have provided us with a plethora of
knowledge about the QCD phase structure, e.g., the crossover nature of the
chiral phase transition at finite $T$ and vanishing $\mu_{B}$ with physical
current quark mass Aoki _et al._ (2006), pseudo-critical temperature Borsanyi
_et al._ (2014); Bazavov _et al._ (2014), curvature of the phase boundary
Bazavov _et al._ (2019); Borsanyi _et al._ (2020), etc. Because of the
notorious sign problem at finite chemical potential, the regime of reliability
of lattice calculations is restricted to $\mu_{B}/T\lesssim 2\sim 3$, where no
CEP has been found. Free from the sign problem, the first-principle functional
approaches, e.g., the functional renormalization group (fRG) and Dyson-
Schwinger equations (DSE), could potentially extend the regime of reliability
to $\mu_{B}/T\sim 4$ Fischer (2019); Fu _et al._ (2020). With benchmark tests
of observables at finite $T$ and low $\mu_{B}$ in comparison to lattice
calculations, e.g., the quark condensate, curvature of the phase boundary,
etc., functional approaches, both fRG and DSE, have predicted a CEP located in
a region of $450\,\mathrm{MeV}\lesssim\mu_{B}\lesssim 650\,\mathrm{MeV}$
Fischer (2019); Fu _et al._ (2020); Isserstedt _et al._ (2019); Gao and
Pawlowski (2020a, b) recently.
An alternative way to constrain the possible location of the CEP is to
determine the critical temperature $T_{c}$ of the chiral phase transition in
the chiral limit, i.e., with massless light up and down quarks and a physical
strange quark mass, since it is believed that the value of $T_{c}$ sets an
upper bound for the temperature of the CEP Halasz _et al._ (1998); Buballa and
Carignano (2019). Very recently, the critical temperature
$T_{c}$ in the chiral limit has been investigated, and its value has been
extrapolated from both lattice simulations Ding _et al._ (2019) and the
functional approach Braun _et al._ (2020). Moreover, further lattice
calculations indicate that the axial anomaly remains manifest at $T\approx
1.6\,T_{c}$, which implies that the chiral phase transition of QCD in the
chiral limit is of 3-$d$ $O(4)$ universality class Ding _et al._ (2020); see,
e.g., Pisarski and Wilczek (1984) for more discussions about the relation
between the axial anomaly and the symmetry universality classes.
In this work, we would like to study the QCD phase structure in the chiral
limit and finite current quark mass, i.e., with a finite pion mass, in the
two-flavor quark-meson low energy effective theory (LEFT) within the fRG
approach. For more discussions about the fRG approach, see, e.g., QCD related
reviews Berges _et al._ (2002); Pawlowski (2007); Schaefer and Wambach
(2008); Gies (2012); Rosten (2012); Braun (2012); Pawlowski (2014); Dupuis
_et al._ (2020). In contrast with the lattice simulation and the first-
principle fRG-QCD calculation Ding _et al._ (2019); Braun _et al._ (2020),
the chiral limit could be accessed strictly in the LEFT. Furthermore, we would
also like to study the critical behaviors of the 3-$d$ $O(4)$ and $Z(2)$
universality classes, including various critical exponents, which belong to
the second-order chiral phase transitions in the chiral limit and at the
critical end point with finite quark mass, respectively. To that end, we
expand the effective potential of order parameter as a sum of Chebyshev
polynomials in the computation of fRG flow equations; see Risch (2013) for
more details. The Chebyshev expansion of solutions to a set of
integro-differential equations is, in fact, a specific instance of the more
generic pseudo-spectral methods Boyd (2000), and see also, e.g., Borchardt and
Knorr (2015, 2016); Knorr (2020) for applications of pseudo-spectral methods
in the fRG.
In fact, two other numerical methods are more commonly used in solving the
flow equation for the effective potential: one is the Taylor expansion of the
effective potential around some value Pawlowski and Rennecke (2014); Yin _et
al._ (2019), and the other is the discretization of the effective potential on
a grid Schaefer and Wambach (2005). The (dis)advantages of these two methods
are distinct. The former is easy to implement numerically, but lacks the
global properties of the effective potential, which are, however,
indispensable for studies of the chiral phase transition in the chiral limit
or around the CEP; the latter encodes global information on the potential, but
loses the numerical accuracy near the phase transition point that is necessary
especially for the computation of critical exponents. The Chebyshev expansion
used in this work combines the merits of both approaches, i.e., the global
potential and the numerical accuracy, and thus it
is very suitable for the studies of critical behaviors in the QCD phase
diagram. Remarkably, a discontinuous Galerkin scheme has been applied in the
context of fRG recently Grossi and Wink (2019), which is well-suited for
studies of the first-order phase transition.
This paper is organized as follows: In Sec. II we briefly introduce the flow
equations in the quark-meson LEFT and the method of the Chebyshev expansion
for the effective potential. The obtained phase diagram and QCD phase
structure are presented and discussed in Sec. III. In Sec. IV scaling analyses
for the 3-$d$ $O(4)$ and $Z(2)$ universality classes are performed, and
various critical exponents are obtained. We also discuss the size of the
critical regime there. In Sec. V we give a summary and conclusions. The
threshold functions and the anomalous dimension appearing in the flow
equations, together with some relations for the Chebyshev polynomials, are
collected in Appendix A and Appendix B, respectively.
## II Functional renormalization group and the low energy effective theories
Thanks to Wilson’s idea of the renormalization group (RG), see, e.g.,
Wilson and Kogut (1974), it is well known that the active degrees of freedom
are usually quite different when the energy scale of a system evolves from one
hierarchy into another. The relevant dynamics in different hierarchies are
connected with each other through the evolution of RG equations. To be more
specific, in QCD the partonic degrees of freedom, i.e.,
the quarks and gluons, in the high energy perturbative regime are transformed
into the collective hadronic ones in the nonperturbative region of low energy,
with the RG scale evolving from the ultraviolet (UV) to infrared (IR) limits
Weinberg (1979), and see also, e.g., Gies and Wetterich (2002, 2004);
Pawlowski (2007); Floerchinger and Wetterich (2009); Braun _et al._ (2016);
Mitter _et al._ (2015); Cyrol _et al._ (2018a); Eser _et al._ (2018); Fu
_et al._ (2020) for recent development of the relevant ideas within the fRG
approach. When the momentum or RG scale is below, say $\sim 1$ GeV, which is
related to a narrow transition region from the perturbative to nonperturbative
QCD, calculated results of Yang-Mills theory and QCD in Landau gauge indicate
that the gluons develop a finite mass gap and decouple from the system, and
see, e.g. Mitter _et al._ (2015); Cyrol _et al._ (2016); Fu _et al._
(2020); Huber (2020) for more details. As a consequence, contributions to the
flow equations of effective action from the glue sector could be safely
neglected, if the initial evolution scale is set at a UV scale
$\Lambda\lesssim 1$ GeV.
Hence, within the fRG approach, one is left with the flow equation for the low
energy effective theory, which reads
$\displaystyle\partial_{t}\Gamma_{k}[\Phi]=$
$\displaystyle-\mathrm{Tr}\Big{(}G_{q\bar{q},k}\partial_{t}R_{q,k}\Big{)}+\frac{1}{2}\mathrm{Tr}\Big{(}G_{\phi\phi,k}\partial_{t}R_{\phi,k}\Big{)}\,,$
(1)
with the RG scale $k$ and the RG time defined as $t=\ln(k/\Lambda)$.
Eq. (1) is an ordinary differential equation for the $k$-dependent effective
action $\Gamma_{k}[\Phi]$, whose arguments $\Phi=(q,\bar{q},\phi)$ are the
quark and mesonic fields in the LEFT. Eq. (1), which describes the evolution
of the effective action with the RG scale, is also well known as the Wetterich
equation Wetterich (1993), see also Ellwanger
(1994); Morris (1994). The flow receives contributions from both the quark and
mesonic degrees of freedom, as shown on the r.h.s. of Eq. (1), where
$G_{q\bar{q},k}$ and $G_{\phi\phi,k}$ are the $k$-dependent full quark and
meson propagators, respectively, and are related to the quadratic derivatives
of $\Gamma_{k}[\Phi]$ with respect to their respective fields, viz.
$\displaystyle
G_{\phi\phi/q\bar{q}}[\Phi]=\left(\frac{1}{\frac{\delta^{2}\Gamma_{k}[\Phi]}{\delta\Phi^{2}}+R_{\Phi,k}}\right)_{\phi\phi/q\bar{q}}\,,$
(2)
where $R_{q,k}$ and $R_{\phi,k}$, appearing also in Eq. (1), are the IR regulators,
which are employed to suppress quantum fluctuations of momenta $q\lesssim k$,
and their explicit expressions used in the work are given in Eqs. (60) and
(61). Moreover, interested readers could refer to QCD related fRG review
articles Berges _et al._ (2002); Pawlowski (2007); Schaefer and Wambach
(2008); Gies (2012); Rosten (2012); Braun (2012); Pawlowski (2014); Dupuis
_et al._ (2020) for more details about the formalism of fRG, and also Braun
_et al._ (2010); Braun (2009); Braun _et al._ (2011a); Mitter _et al._
(2015); Braun _et al._ (2016); Cyrol _et al._ (2016, 2018a, 2018b); Fu _et
al._ (2020); Braun _et al._ (2020); Fu _et al._ (2021) for recent progress
on relevant studies.
In this work, we adopt a truncation for the effective action in Eq. (1) as
follows
$\displaystyle\Gamma_{k}[\Phi]=$
$\displaystyle\int_{x}\bigg{\\{}Z_{q,k}\bar{q}\big{(}\gamma_{\mu}\partial_{\mu}-\gamma_{0}\hat{\mu}\big{)}q+\frac{1}{2}Z_{\phi,k}(\rho)\big{(}\partial_{\mu}\phi\big{)}^{2}$
$\displaystyle+h_{y,k}\bar{q}\big{(}T^{0}\sigma+i\gamma_{5}\vec{T}\cdot\vec{\pi}\big{)}q+V_{k}(\rho)-c\sigma\bigg{\\}}\,,$
(3)
with the shorthand notation $\int_{x}=\int_{0}^{1/T}dx_{0}\int d^{3}x$, where
the quark field $q=(u\,,d)^{T}$ and the meson field
$\phi=\left(\sigma,\vec{\pi}\right)$ are in the fundamental and adjoint
representations of $SU(N_{f})$ in the flavor space with $N_{f}=2$,
respectively. They interact with each other via a Yukawa coupling of strength
$h_{y,k}$, where the subscript $y$ is used to distinguish it from the reduced
external field $h$ in Eq. (15). Here $T^{i}$ ($i=1\,,2\,,3$)
are the generators of $SU(2)$ with
$\operatorname{Tr}(T^{i}T^{j})=\frac{1}{2}\delta^{ij}$ and
$T^{0}=\frac{1}{\sqrt{2N_{f}}}\mathbb{1}_{N_{f}\times N_{f}}$. Note that both
the effective potential $V_{k}(\rho)$ and the mesonic wave function
renormalization $Z_{\phi,k}(\rho)$ in Eq. (3) depend on the meson field by
means of $\rho=\phi^{2}/2$, which are $O(4)$ invariant. $Z_{q,k}$ is the quark
wave function renormalization. Notice that the term linear in the order
parameter field, i.e., $-c\sigma$ in Eq. (3), breaks the chiral symmetry
explicitly, and thus here $c$ is essentially an external “magnetic” field in
the language of magnetization. Moreover,
$\hat{\mu}=\mathrm{diag}(\mu_{u},\mu_{d})$ is the matrix of quark chemical
potentials in the flavor space, and $\mu=\mu_{u}=\mu_{d}$ is assumed
throughout this work, which is related to the baryon chemical potential via
$\mu=\mu_{B}/3$. For more discussions about the quark-meson LEFT in Eq. (3) or
its extensions, e.g., Polyakov-loop quark-meson LEFT, QCD assisted LEFT, etc.,
and their applications in calculations of QCD thermodynamics and phase
structure, fluctuations and correlations of conserved charges, etc., see,
e.g., Schaefer and Wambach (2005); Schaefer _et al._ (2007); Skokov _et al._
(2010); Herbst _et al._ (2011); Skokov _et al._ (2011); Karsch _et al._
(2011); Morita _et al._ (2011); Skokov _et al._ (2012); Haas _et al._
(2013); Herbst _et al._ (2013, 2014); Fu and Pawlowski (2016, 2015); Fu _et
al._ (2016); Sun _et al._ (2018); Fu _et al._ (2018, 2019); Wen _et al._
(2019); Wen and Fu (2019); Yin _et al._ (2019); Hansen _et al._ (2020); Fu
_et al._ (2021).
### II.1 Flow equations
Substituting the effective action in Eq. (3) into the Wetterich equation in
Eq. (1), one readily obtains the flow equation of the effective potential as
follows
$\displaystyle\partial_{t}V_{k}(\rho)=$
$\displaystyle\frac{k^{4}}{4\pi^{2}}\bigg{[}\big{(}N^{2}_{f}-1\big{)}l^{(B,4)}_{0}(\bar{m}^{2}_{\pi,k},\eta_{\phi,k};T)$
$\displaystyle+l^{(B,4)}_{0}(\bar{m}^{2}_{\sigma,k},\eta_{\phi,k};T)$
$\displaystyle-4N_{c}N_{f}l^{(F,4)}_{0}(\bar{m}^{2}_{q,k},\eta_{q,k};T,\mu)\bigg{]}\,,$
(4)
with the threshold functions $l^{(B,4)}_{0}$ and $l^{(F,4)}_{0}$ given in Eq.
(64) and Eq. (65), respectively. Here, the scale-dependent meson and quark
masses read
$\displaystyle\bar{m}^{2}_{\pi,k}$
$\displaystyle=\frac{V^{\prime}_{k}(\rho)}{k^{2}Z_{\phi,k}}\,,\qquad\bar{m}^{2}_{\sigma,k}=\frac{V^{\prime}_{k}(\rho)+2\rho
V^{\prime\prime}_{k}(\rho)}{k^{2}Z_{\phi,k}}\,,$ (5)
$\displaystyle\bar{m}^{2}_{q,k}$
$\displaystyle=\frac{h_{y,k}^{2}\rho}{2k^{2}Z^{2}_{q,k}}\,,$ (6)
which are RG invariant and dimensionless.
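As a concrete illustration of Eqs. (5) and (6), the sketch below evaluates the dimensionless masses for the quartic UV potential of Eq. (13), using the parameter values quoted later in Sec. III; the field value passed in and the normalization $Z_{\phi,k}=Z_{q,k}=1$ (exact only at $k=\Lambda$) are illustrative assumptions, not results of this work.

```python
# Dimensionless, RG-invariant masses of Eqs. (5)-(6) for the quartic potential
# V(rho) = lam/2 * rho**2 + nu * rho of Eq. (13). Parameter values are the UV
# inputs of Sec. III; Z_phi = Z_q = 1 holds only at k = Lambda (illustrative).
lam, nu = 20.0, 0.0      # lambda_Lambda, nu_Lambda
h_y = 6.4                # Yukawa coupling, k-independent by Eq. (9)
Z_phi = Z_q = 1.0        # wave-function renormalizations at the UV cutoff
k = 500.0                # RG scale in MeV (here the UV cutoff Lambda)

def masses(rho):
    """Return (m2_pi, m2_sigma, m2_quark) at field value rho (in MeV^2)."""
    dV = lam * rho + nu                                # V'(rho)
    d2V = lam                                          # V''(rho)
    m2_pi = dV / (k**2 * Z_phi)                        # Eq. (5), pion
    m2_sigma = (dV + 2 * rho * d2V) / (k**2 * Z_phi)   # Eq. (5), sigma
    m2_quark = h_y**2 * rho / (2 * k**2 * Z_q**2)      # Eq. (6)
    return m2_pi, m2_sigma, m2_quark

print(masses(rho=4.5e3))   # a representative field value, not a fit result
```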
The meson and quark anomalous dimensions in the threshold functions in Eq. (4)
are defined as follows
$\displaystyle\eta_{\phi,k}$
$\displaystyle=-\frac{\partial_{t}Z_{\phi,k}}{Z_{\phi,k}}\,,\quad\eta_{q,k}=-\frac{\partial_{t}Z_{q,k}}{Z_{q,k}}\,,$
(7)
where the meson anomalous dimension is obtained by projecting the flow
equation in Eq. (1) onto the inverse pion propagator, to wit,
$\displaystyle\eta_{\phi,k}(\rho)$
$\displaystyle=-\frac{1}{3Z_{\phi,k}}\delta_{ij}\frac{\partial}{\partial(|\bm{p}|^{2})}\frac{\delta^{2}\partial_{t}\Gamma_{k}}{\delta\pi_{i}(-p)\delta\pi_{j}(p)}\Bigg{|}_{\begin{subarray}{c}p_{0}=0\\\
\bm{p}=0\end{subarray}}\,,$ (8)
the explicit expression of which is presented in Eq. (68). Note that
$\eta_{\phi,k}$ is dependent on the meson field via $\rho$.
In comparison to the effects of the meson wave function renormalization on the
chiral phase transition at finite temperature and density, it has been found
that those of quark wave function renormalization and the running Yukawa
coupling are relatively milder, see, e.g., Pawlowski and Rennecke (2014); Fu
and Pawlowski (2015); Yin _et al._ (2019). Therefore, in this work we adopt
the simplification as follows
$\displaystyle\eta_{q,k}$
$\displaystyle=0\,,\quad\quad\partial_{t}\bar{h}_{y,k}=0\,,$ (9)
with the renormalized Yukawa coupling given in Eq. (69), and use two different
truncations: one is the usual local potential approximation (LPA), where the
mesonic anomalous dimension is vanishing as well, and the $k$-dependent term
in Eq. (3) is just the effective potential; the other is the truncation with
the field-dependent mesonic anomalous dimension in Eq. (8) taken into account
besides the potential, which is denoted as LPA′ in this work. Note that the
notation LPA′ in the literature, e.g., Helmboldt _et al._ (2015); Fu and
Pawlowski (2015), usually stands for the truncation with a field-independent
mesonic anomalous dimension which is, strictly speaking, different from the
case in this work.
Figure 1: Dependence of the mesonic wave function renormalization $Z_{\phi}$
on the order-parameter field $\bar{\sigma}$ at vanishing baryon chemical
potential $\mu_{B}=0$ and several values of temperature $T=\Delta T+T_{c}$.
See text for more details.
As an illustrative example, we show the mesonic wave function renormalization
$Z_{\phi}\equiv Z_{\phi,k=k_{\mathrm{IR}}}$ as a function of the renormalized
sigma field $\bar{\sigma}=Z_{\phi}^{1/2}\sigma$ obtained in LPA′ in Fig. 1,
where $k_{\mathrm{IR}}$ is the RG scale in the IR limit; in principle one
would take $k_{\mathrm{IR}}\rightarrow 0$, which, however, is impossible to
realize in numerical calculations. In our calculation the value of
$k_{\mathrm{IR}}$ is reduced as far as possible, and we find that convergence
is reached at $k_{\mathrm{IR}}=1$ MeV. Note that the mesonic wave function
renormalization at the scale of UV cutoff $\Lambda$, see Sec. III in the
following, is assumed to be identical to unity, i.e., $Z_{\phi,k=\Lambda}=1$.
In Fig. 1, we choose several values of temperature $T=\Delta T+T_{c}$ at and
above the critical temperature that is $T_{c}=143.6$ MeV in the chiral limit
and at vanishing $\mu_{B}$. One observes that with the increase of the
temperature, the peak structure of $Z_{\phi}$ as a function of the
renormalized sigma field $\bar{\sigma}$ becomes smoother.
### II.2 Chebyshev expansion of the effective potential
Figure 2: Phase diagrams in the plane of $T$ and $\mu_{B}$, obtained in the
quark-meson low energy effective theory within the fRG approach. Two
truncations for the fRG calculations have been employed: one is the local
potential approximation (LPA) and the other is that beyond the LPA, in which a
field-dependent mesonic wave function renormalization is taken into account,
i.e., the truncation LPA′, and see text for more details. The relevant results
are presented in the left and right panels, respectively.
The black dashed lines in both panels denote the $O(4)$ chiral phase
transition in the chiral limit, and the black circles indicate the location of
the tricritical point. The solid lines of different colors in the left panel
denote the first-order phase transitions with different pion masses in the
vacuum, i.e. different values of $c$ in Eq. (3), and the solid one in the
right panel is the first-order phase transition line in the chiral limit. The
red dashed lines in both panels stand for line composed of critical end points
(CEP) corresponding to continuously varying pion masses, which belong to the
$Z(2)$ symmetry class. The star in the left panel indicates the location of
CEP with physical pion mass. In both phase diagrams we use red and blue
crosses to label the locations where critical exponents in Sec. IV are
calculated for the $O(4)$ and $Z(2)$ universality classes, respectively.
In this work we solve the flow equation in Eq. (4) by expanding the effective
potential as a sum of Chebyshev polynomials up to an order $N_{v}$, to wit,
$\displaystyle\bar{V}_{k}(\bar{\rho})$
$\displaystyle=\sum^{N_{v}}_{n=1}c_{n,k}T_{n}(\bar{\rho})+\frac{1}{2}c_{0,k}\,,$
(10)
with $\bar{V}_{k}(\bar{\rho})=V_{k}(\rho)$, $\bar{\rho}=Z_{\phi,k}\rho$, where
quantities with a bar denote renormalized variables. The Chebyshev polynomial
$T_{n}(\bar{\rho})$ is given in Eq. (78), and the superscript
$[0,\bar{\rho}_{\mathrm{max}}]$ in Eq. (78), denoting the interval of
$\bar{\rho}$, is omitted for brevity here. Differentiating Eq. (10) with
respect to the RG time $t$ with $\rho$ fixed, one is led to
$\displaystyle\partial_{t}\big{|}_{\rho}\bar{V}_{k}(\bar{\rho})=$
$\displaystyle\sum^{N_{v}}_{n=1}\Big{(}\partial_{t}c_{n,k}-d_{n,k}\eta_{\phi,k}(\bar{\rho})\bar{\rho}\Big{)}T_{n}(\bar{\rho})$
$\displaystyle+\frac{1}{2}\Big{(}\partial_{t}c_{0,k}-d_{0,k}\eta_{\phi,k}(\bar{\rho})\bar{\rho}\Big{)}\,,$
(11)
where we have used the Chebyshev expansion for the derivative of the effective
potential as shown in Eq. (82), and the $d_{n,k}$ are the corresponding
expansion coefficients. Employing the discrete orthogonality relation in Eq.
(76) by summing over the $N+1$ zeros of $T_{N+1}(\bar{\rho})$ in Eq. (79), one
arrives at
$\displaystyle\partial_{t}c_{m,k}=$
$\displaystyle\frac{2}{N+1}\sum^{N}_{i=0}\Big{(}\partial_{t}\big{|}_{\rho}\bar{V}_{k}(\bar{\rho}_{i})\Big{)}T_{m}(\bar{\rho}_{i})$
$\displaystyle+\frac{2}{N+1}\sum^{N_{v}}_{n=1}\sum^{N}_{i=0}d_{n,k}T_{m}(\bar{\rho}_{i})T_{n}(\bar{\rho}_{i})\eta_{\phi,k}(\bar{\rho}_{i})\bar{\rho}_{i}$
$\displaystyle+\frac{1}{N+1}d_{0,k}\sum^{N}_{i=0}T_{m}(\bar{\rho}_{i})\eta_{\phi,k}(\bar{\rho}_{i})\bar{\rho}_{i}\,,$
(12)
which is the flow equation for the expansion coefficients in Eq. (10).
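The mechanics behind Eqs. (10)-(12) is the projection onto Chebyshev coefficients through the discrete orthogonality relation evaluated at the zeros of $T_{N+1}$. A minimal, self-contained sketch of this projection (the test function standing in for the right-hand side of the flow is arbitrary, and the interval and truncation orders anticipate the values quoted in Sec. III):

```python
# Project a sampled function onto Chebyshev coefficients via the discrete
# orthogonality relation, c_m = 2/(N+1) * sum_i f(rho_i) T_m(rho_i), with the
# rho_i the N+1 zeros of T_{N+1} mapped to [0, rho_max]; cf. Eqs. (10)-(12).
import numpy as np

N, N_v = 81, 21                 # number of zeros / expansion order (Sec. III)
rho_max = 9.0e3                 # upper bound of the rho-interval, in MeV^2

z = np.cos(np.pi * (np.arange(N + 1) + 0.5) / (N + 1))   # zeros of T_{N+1}
rho = 0.5 * rho_max * (z + 1.0)                          # mapped to [0, rho_max]

def T(m, x):
    """Chebyshev polynomial T_m rescaled to the interval [0, rho_max]."""
    return np.cos(m * np.arccos(2.0 * x / rho_max - 1.0))

f = np.log1p(rho / 1.0e3)       # arbitrary smooth stand-in for d_t V_k

c = [2.0 / (N + 1) * np.sum(f * T(m, rho)) for m in range(N_v + 1)]

# Reconstruction with the conventional 1/2 on the m = 0 term, as in Eq. (10).
f_rec = 0.5 * c[0] + sum(c[m] * T(m, rho) for m in range(1, N_v + 1))
print("max reconstruction error:", np.max(np.abs(f - f_rec)))
```

For a smooth test function the reconstruction error decays spectrally with $N_{v}$, which is the numerical-accuracy advantage referred to above.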
## III Phase diagram
It is left to specify the parameters in the LEFT, prior to presenting our
calculated results. The UV cutoff of flow equations in the LEFT is chosen to
be $\Lambda=500$ MeV, and the effective potential in Eq. (3) at $k=\Lambda$
reads
$\displaystyle V_{\Lambda}(\rho)$
$\displaystyle=\frac{\lambda_{\Lambda}}{2}\rho^{2}+\nu_{\Lambda}\rho\,,$ (13)
with $\lambda_{\Lambda}=20$ and $\nu_{\Lambda}=0$. The Yukawa coupling is
$k$-independent as shown in Eq. (9) and is given by $\bar{h}_{y}=6.4$.
Concerning the Chebyshev expansion, we choose $N=81$ for the number of zeros
and $N_{v}=21$ for the maximal order of the Chebyshev polynomials. We have
also checked that the results do not change when the value of $N_{v}$ is
increased.
Moreover, the upper bound of $\bar{\rho}$ is chosen to be
$\bar{\rho}_{\mathrm{max}}=9\times 10^{3}\,\mathrm{MeV}^{2}$, well above the
value of the minimum of the potential in the IR. In the LPA, these values of
the parameters lead to the pion decay constant $f_{\pi}=87$ MeV and the
constituent quark mass $m_{q}=278.4$ MeV in the vacuum and in the chiral
limit. If the explicit breaking strength of the chiral symmetry in Eq. (3) is
increased to $c=1.85\times 10^{-3}\,(\mathrm{GeV})^{3}$, one obtains the
physical pion mass $m_{\pi}=138$ MeV, as well as $f_{\pi}=93$ MeV and
$m_{q}=297.6$ MeV in the vacuum. Note that in order to facilitate the
comparison between the calculation with the truncation LPA and that with LPA′,
we use the same values of parameters above in the LPA′ computation as in LPA.
In Fig. 2 we show the phase diagrams of LEFT in the $T\\!-\\!\mu_{B}$ plane,
calculated within the fRG approach with the truncations LPA and LPA′, in the
left and right panels, respectively. The black dashed lines in both panels
denote the second-order $O(4)$ chiral phase transition of $N_{f}=2$ flavor in
the chiral limit. The black circles indicate the location of the tricritical
point, beyond which the second-order phase transition evolves into a
discontinuous first-order one, which are shown by the solid lines. Note that
the solid lines of different colors in the left panel denote the first-order
phase transitions with different pion masses in the vacuum, i.e. different
values of $c$ in Eq. (3), and in the right panel, we only give the first-order
phase transition line in the chiral limit, since numerical calculations become
quite difficult in the region of high $\mu_{B}$ and low $T$ with the
truncation LPA′. The red dashed lines in both panels are the trajectories of
the critical end points with the change of the strength of explicit chiral
symmetry breaking $c$, which belong to the 3-$d$ $Z(2)$ Ising universality
class.
The critical temperature at vanishing baryon chemical potential is found to be
$T_{c}=144$ MeV in LPA and 143 MeV in LPA′ in the chiral limit. The
tricritical point is located at
$(T_{\mathrm{tri}},{\mu_{B}}_{\mathrm{tri}})_{{}_{\tiny{\mathrm{LPA}}}}=(50,764)$
MeV in the LPA and
$(T_{\mathrm{tri}},{\mu_{B}}_{\mathrm{tri}})_{{}_{\tiny{\mathrm{\mathrm{LPA}^{\prime}}}}}=(47,687)$
MeV in the LPA′, which are shown in the phase diagrams by the black circles.
The location of CEP corresponding to the physical pion mass in the LPA, shown
in the left panel of Fig. 2 by the star, is
$(T_{{}_{\tiny{\mathrm{CEP}}}},{\mu_{B}}_{{}_{\tiny{\mathrm{CEP}}}})_{{}_{\tiny{\mathrm{LPA}}}}=(8,885)$
MeV. In both phase diagrams in Fig. 2 we also use red and blue crosses to
label the locations where the critical exponents in Sec. IV are calculated
for the 3-$d$ $O(4)$ and $Z(2)$ universality classes, respectively. The
calculated points for the $O(4)$ and $Z(2)$ phase transition in the LPA are
given by
$(T_{{}_{O(4)}},{\mu_{B}}_{{}_{O(4)}})_{{}_{\tiny{\mathrm{LPA}}}}=(144,0)$ MeV
and
$(T_{{}_{Z(2)}},{\mu_{B}}_{{}_{Z(2)}})_{{}_{\tiny{\mathrm{LPA}}}}=(38,795)$
MeV, respectively; and the relevant values in the LPA′ read
$(T_{{}_{O(4)}},{\mu_{B}}_{{}_{O(4)}})_{{}_{\tiny{\mathrm{LPA}^{\prime}}}}=(143,0)$
MeV and
$(T_{{}_{Z(2)}},{\mu_{B}}_{{}_{Z(2)}})_{{}_{\tiny{\mathrm{LPA}^{\prime}}}}=(41,702)$
MeV.
## IV Critical behavior and critical exponents
A variety of scaling analyses have been performed for the $O(4)$ universality
class, e.g., in the $O(N)$ model Toussaint (1997); Engels and Mendes (2000);
Parisen Toldin _et al._ (2003); Engels _et al._ (2003); Braun and Klein
(2008); Engels and Vogt (2010) and two-flavor quark-meson model Berges _et
al._ (1999); Schaefer and Pirner (1999); Bohr _et al._ (2001); Stokic _et
al._ (2010). The dynamics of a system in the critical regime near a second-
order critical point is governed by long-wavelength fluctuations, and the
correlation length tends to be divergent as the system moves towards the
critical point. Critical exponents play a pivotal role in studies of the
critical dynamics; they are independent of the microscopic interactions and
are instead universal for a given symmetry class, dimension of the system,
etc., see Stokic _et al._ (2010); Braun and Klein (2008) for more details. In
the following, we follow the standard procedure and fix our notation for the
various relevant critical exponents.
To begin with, from the effective action in Eq. (3) one readily obtains the
thermodynamic potential density, which reads
$\displaystyle\Omega\big{(}T,\mu_{B},\,c\big{)}$
$\displaystyle=V_{k=0}(\rho)-c\sigma\,,$ (14)
where the order parameter field $\sigma\equiv\langle\sigma\rangle$ or
$\rho=\sigma^{2}/2$ is on its equation of motion. We then introduce the
reduced temperature and reduced external “magnetic” field as follows
$\displaystyle t$ $\displaystyle=\frac{T-T_{c}}{T_{0}}\,,\qquad
h=\frac{c}{c_{0}}\,,$ (15)
where $T_{c}$ is the critical temperature, and they are normalized by $T_{0}$
and $c_{0}$, i.e., some appropriate values of $T$ and $c$. In the language of
magnetization under an external magnetic field, the order parameter $\sigma$
here is just the corresponding magnetization density, i.e., $M\equiv\sigma$,
and the explicit chiral symmetry breaking parameter $c$ is equivalent to the
magnetic field strength $H\equiv c$. We will not distinguish between them in
what follows. In the critical regime the thermodynamic potential in Eq.
(14) is dominated by its singular part $f_{s}$, i.e.,
$\displaystyle\Omega\big{(}t,h\big{)}$
$\displaystyle=f_{s}(t,h)+f_{reg}(t,h)\,,$ (16)
where the second term on the r.h.s. is the regular one, and the notation for
the baryon chemical potential is suppressed. In what follows we adopt the
notations in Braun _et al._ (2011b), and the scaling function $f_{s}(t,h)$ on
the r.h.s. of Eq. (16) satisfies the scale relation to leading order, viz.
$\displaystyle f_{s}(t,h)$
$\displaystyle=\ell^{-d}f_{s}(t\,\ell^{y_{t}},\,h\,\ell^{y_{h}})\,,$ (17)
where $\ell$ is a dimensionless rescaling factor. The scaling function in Eq.
(17) leads us to a variety of relations for various critical exponents Berges
_et al._ (1999); Tetradis (2003); Schaefer and Pirner (1999); Braun and Klein
(2008), e.g.,
$\displaystyle y_{t}$
$\displaystyle=\frac{1}{\nu}\,,\quad\\!\\!y_{h}=\frac{\beta\delta}{\nu}\,,\quad\\!\\!\beta=\frac{\nu}{2}(d-2+\eta)\,,\quad\\!\\!\gamma=\beta(\delta-1)\,,$
$\displaystyle\gamma$
$\displaystyle=(2-\eta)\nu\,,\quad\delta=\frac{d+2-\eta}{d-2+\eta}\,,\quad\nu
d=\beta(1+\delta)\,,$ (18)
with the spatial dimension $d$. The critical exponents $\beta$ and $\delta$
describe the critical behavior of the order parameter in the direction of $t$
or $h$, respectively, i.e.,
$\displaystyle M(t,h=0)$ $\displaystyle\sim(-t)^{\beta}\quad\mathrm{with}\quad
t<0\,,$ (19) $\displaystyle M(t=0,h)$ $\displaystyle\sim h^{1/\delta}\,.$ (20)
The exponent $\gamma$ is related to the susceptibility of the order parameter
$\chi$, and $\nu$ to the correlation length $\xi$, which read
$\displaystyle\chi$
$\displaystyle\sim|t|^{-\gamma}\,,\quad\mathrm{and}\quad\xi\sim|t|^{-\nu}\,.$
(21)
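Note that the relations in Eq. (18) over-determine the exponents: given $d$, $\nu$ and $\eta$, all the others follow, which makes them convenient for consistency checks. A small sketch (the input values are merely illustrative, not results of this work):

```python
# Derive the remaining exponents from (d, nu, eta) via Eq. (18) and verify the
# internal consistency of the relations; input values are illustrative only.
def exponents(d, nu, eta):
    beta = nu / 2 * (d - 2 + eta)
    delta = (d + 2 - eta) / (d - 2 + eta)
    gamma = beta * (delta - 1)
    y_t, y_h = 1 / nu, beta * delta / nu
    assert abs(gamma - (2 - eta) * nu) < 1e-12        # gamma = (2 - eta) nu
    assert abs(nu * d - beta * (1 + delta)) < 1e-12   # hyperscaling
    return dict(beta=beta, delta=delta, gamma=gamma, y_t=y_t, y_h=y_h)

print(exponents(d=3, nu=0.73, eta=0.03))
```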
The scaling relation in Eq. (17) allows us to readily obtain the critical
behavior for various observables. For instance, the order parameter and its
susceptibilities read
$\displaystyle M$ $\displaystyle=-\frac{\partial f_{s}}{\partial
H}\,,\quad\chi_{\sigma}=\frac{\partial M}{\partial
H}\,,\quad\chi_{\pi}=\frac{M}{H}\,,$ (22)
where $\chi_{\sigma}\equiv\chi_{l}$ and $\chi_{\pi}\equiv\chi_{t}$ are also
called the longitudinal and transverse susceptibilities, respectively.
Choosing an appropriate value of the rescaling factor such that
$h\,\ell^{y_{h}}=1$ in Eq. (17), one is led to
$\displaystyle f_{s}(t,h)$ $\displaystyle=h^{d/y_{h}}f_{s}(z,1)\,,$ (23)
with the scaling variable $z=t/h^{1/(\beta\delta)}$. Inserting Eq. (23) into
the first equation in Eq. (22), one arrives at
$\displaystyle M$ $\displaystyle=h^{1/\delta}f(z)\,,$ (24)
where we have introduced
$\displaystyle f(z)$
$\displaystyle\equiv\frac{1}{H_{0}}\Big{[}\frac{z}{\beta\delta}\frac{\partial
f_{s}(z,1)}{\partial z}-\frac{d\nu}{\beta\delta}f_{s}(z,1)\Big{]}\,,$ (25)
which is a scaling function dependent only on $z$. With appropriate values of
$H_{0}$ and $T_{0}$ in Eq. (15), it can be shown that the scaling function in
Eq. (25) has the properties $f(0)=1$ and $f(z)\simeq(-z)^{\beta}$ with
$z\rightarrow-\infty$ Braun _et al._ (2011b).
Consequently, it is straightforward to express the longitudinal and transverse
susceptibilities in Eq. (22) in terms of the scaling function $f(z)$, to wit,
$\displaystyle\chi_{\sigma}$
$\displaystyle=\frac{1}{H_{0}}h^{1/\delta-1}f_{\chi}(z)\,,$ (26)
with
$\displaystyle f_{\chi}(z)$
$\displaystyle\equiv\frac{1}{\delta}\Big{[}f(z)-\frac{z}{\beta}f^{\prime}(z)\Big{]}\,,$
(27)
and
$\displaystyle\chi_{\pi}$ $\displaystyle=\frac{1}{H_{0}}h^{1/\delta-1}f(z)\,.$
(28)
As an alternative to the choice of $h\,\ell^{y_{h}}=1$ in Eq. (17), one can also
employ $t\,\ell^{y_{t}}=1$, which is equivalent to the Widom-Griffiths
parametrization Widom (1965); Griffiths (1967) of the equation of state by
means of the scaling variables, as follows
$\displaystyle x$ $\displaystyle\equiv\frac{t}{M^{1/\beta}}\,,\qquad
y\equiv\frac{h}{M^{\delta}}\,,$ (29)
which are obviously related to the other parametrization by the relations
which read
$\displaystyle z$ $\displaystyle=\frac{x}{y^{1/(\beta\delta)}}\,,\qquad
f(z)=\frac{1}{y^{1/\delta}}\,.$ (30)
Hence the scaling function $y(x)$ has the properties $y(0)=1$ and $y(-1)=0$.
In the same way, one readily obtains the expressions of susceptibilities in
this parametrization, which read
$\displaystyle\chi_{\sigma}$
$\displaystyle=\frac{1}{H_{0}M^{\delta-1}}\Big{[}\delta
y(x)-\frac{1}{\beta}xy^{\prime}(x)\Big{]}^{-1}\,,$ (31)
$\displaystyle\chi_{\pi}$
$\displaystyle=\frac{1}{H_{0}M^{\delta-1}}\frac{1}{y}\,.$ (32)
### IV.1 Order parameter
Figure 3: Logarithm of the reduced order parameter $\tilde{\sigma}$ in Eq. (33) as a function of $\ln(-t)$ (left panel) or $\ln(h)$ (right panel) for the second-order $O(4)$ and $Z(2)$ phase transitions with truncations LPA and LPA′, where the phase transition points are chosen to be the locations of the red and blue crosses in the phase diagrams in Fig. 2 for the $O(4)$ and $Z(2)$ universality classes, respectively. The solid lines represent linear fits to the calculated discrete data points, from which values of the critical exponents $\beta$ and $\delta$ are extracted.

$T_{c}-T$ (MeV) | ($10^{-4}$, $5\times 10^{-3}$) | ($10^{-2}$, 0.1) | (0.1, 0.5) | (0.5, 1) | (1, 5)
---|---|---|---|---|---
$\beta^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$ | 0.3989(41) | 0.5164(65) | 0.4374(36) | 0.4077(44) | 0.3921(43)
$\beta^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$ | 0.3352(12) | 0.2830(26) | 0.2724(18) | 0.2689(17) | 0.247(17)
Table 1: Values of the critical exponent $\beta$ extracted from different
ranges of temperature, which are denoted by their distances to the
corresponding critical temperature, i.e., $T_{c}-T$. The calculations are
performed with the truncation LPA, and the phase transition points are chosen
to be the locations of the red and blue crosses in the phase diagrams in Fig.
2 for the $O(4)$ and $Z(2)$ universality classes, respectively.
The flow equation of the effective potential in Eq. (4) is solved by means of
the Chebyshev expansion as discussed in Sec. II.2, i.e., by evolving the flow
equations of the expansion coefficients in Eq. (12) from the UV cutoff
$\Lambda$ to the infrared limit $k\rightarrow 0$, and then the expectation
value of the order parameter $\sigma$ is determined by minimizing the
thermodynamic potential in Eq. (14). Note that two different truncations,
i.e., LPA and LPA′ as shown in Sec. II.1, are employed in the calculations.
The critical exponents $\beta$ and $\delta$ are given in Eqs. (19) and (20),
which are related to the scaling behavior of the order parameter as the phase
transition is approached in the temperature or external field direction,
respectively. Note, however, that in the case of the $Z(2)$ phase
transition as indicated by the blue cross in the phase diagram in Fig. 2, the
order parameter should be modified slightly and we introduce the reduced order
parameter which reads
$\displaystyle\tilde{\sigma}$
$\displaystyle=\frac{\sigma-\sigma^{\prime}}{f_{\pi}}\,,$ (33)
where $f_{\pi}$ is the pion decay constant in the vacuum and $\sigma^{\prime}$
is the expectation value of sigma field at the phase transition point, which
is nonvanishing on the red dashed lines of $Z(2)$ in the phase diagrams in
Fig. 2. Correspondingly, the reduced external field in Eq. (15) is modified
into
$\displaystyle h$ $\displaystyle=\frac{c-c^{\prime}}{c_{0}}\,,$ (34)
where $c^{\prime}$ is the $\sigma^{\prime}$-related external field on the
$Z(2)$ phase transition line. Notice that both $c^{\prime}$ and
$\sigma^{\prime}$ are vanishing on the $O(4)$ phase transition line, viz., the
black dashed lines in Fig. 2. In our calculations below, the normalized
external field strength $c_{0}$ in Eq. (34) is chosen to be the value
corresponding to the physical pion mass, and the normalized temperature in Eq.
(15) is to be the critical one $T_{0}=T_{c}$.
In Fig. 3 we show the log-log plots of the reduced order parameter
$\tilde{\sigma}$ versus the reduced temperature $-t$ or external field $h$ for
the second-order $O(4)$ and $Z(2)$ phase transitions. The calculations are
performed in the quark-meson LEFT with the fRG in both LPA and LPA′. The phase
transition points are chosen to be the locations of the red and blue crosses
in the phase diagrams in Fig. 2 for the $O(4)$ and $Z(2)$ universality
classes, respectively. A linear relation is used to fit the calculated
discrete data points in Fig. 3, and as shown in Eq. (19) and Eq. (20), one
could extract the values of the critical exponents $\beta$ and $\delta$ from
the slope of these linear curves. This leads us to
$\displaystyle\beta^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.3989(41)\,,\qquad\beta^{{}^{O(4)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.3832(31)\,,$
(35)
for the $O(4)$ universality class in LPA and LPA′, respectively. In the case
of the $Z(2)$ universality class, one arrives at
$\displaystyle\beta^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.3352(12)\,,\qquad\beta^{{}^{Z(2)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.3259(01)\,.$
(36)
In the same way, the values of $\delta$ are obtained as follows
$\displaystyle\delta^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=4.975(57)\,,\qquad\delta^{{}^{O(4)}}_{{}_{\mathrm{LPA}^{\prime}}}=4.859(37)\,,$
(37) $\displaystyle\delta^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=4.941(22)\,,\qquad\delta^{{}^{Z(2)}}_{{}_{\mathrm{LPA}^{\prime}}}=4.808(14)\,.$
(38)
It is found that the critical exponents $\beta$ and $\delta$ of the $O(4)$ and
$Z(2)$ phase transitions in 3-$d$ systems calculated in this work are
consistent with previous results, e.g., Monte Carlo simulations of spin models
Kanaya and Kaya (1995) and the $d=3$ expansion for $Z(2)$ Zinn-Justin (2001).
Comparing the relevant results in LPA and LPA′, one observes that both $\beta$
and $\delta$ obtained in LPA′ are slightly smaller than those in LPA.
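The extractions above amount to ordinary least-squares fits in log-log variables, cf. Eqs. (19) and (20). A minimal sketch of the procedure on synthetic data (the exponent and the noise level below are arbitrary stand-ins, not data of this work):

```python
# Read beta off the slope of ln(M) versus ln(-t), as in Fig. 3 and Eq. (19),
# using synthetic data M ~ (-t)**beta_true with a little multiplicative noise.
import numpy as np

rng = np.random.default_rng(1)
beta_true = 0.39                              # arbitrary stand-in exponent
t = -np.logspace(-4, -2, 40)                  # reduced temperatures, t < 0
M = (-t)**beta_true * (1 + 1e-3 * rng.standard_normal(t.size))

slope, intercept = np.polyfit(np.log(-t), np.log(M), 1)
print(f"fitted beta = {slope:.4f}")           # close to beta_true
```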
### IV.2 Preliminary assessment of the size of the critical region
It is well known that critical exponents are universal for the same
universality classes. The size of the critical region is, however, non-
universal and depends on the interactions and other details of the system
concerned. Furthermore, there has been a longstanding debate on the size of
the critical region in QCD. Lattice QCD simulations show that the chiral
condensate, i.e., the order parameter in Eq. (24), for physical quark masses
is well described by Eq. (24) plus a small analytic regular term Ejiri _et
al._ (2009); Kaczmarek _et al._ (2011); Ding _et al._ (2019), which, in
other words, implies that the size of the critical regime of QCD is large
enough that QCD with physical quark mass is still in the chiral critical
regime. On the contrary, it is found in Braun and Klein (2008); Braun _et
al._ (2011b); Klein (2017) that the pion mass required to observe the scaling
behavior is very small, at least one order of magnitude smaller than the
physical pion mass. Moreover, it is also found that the critical region around
the CEP in the QCD phase diagram is very small Schaefer and Wambach (2007). In
Tab. 1 we present the values of the critical exponent $\beta$ extracted from
different ranges of temperature. One observes that once the temperature range
lies farther than $0.01$ MeV from the critical temperature, the value of
$\beta$ deviates pronouncedly from its universal value. This applies to both
the $O(4)$ and $Z(2)$ universality classes. Given the systematic errors in the
computation of this work, one could safely conclude that our calculation
indicates that the critical region in the QCD phase diagram is probably very
small, and it is smaller than 1 MeV in the direction of temperature.
### IV.3 Chiral susceptibility
Figure 4: Logarithm of the longitudinal susceptibility $\chi_{\sigma}$ as a function of $\ln(t)$ in the chiral symmetric phase. The calculation is done in the quark-meson LEFT within the fRG approach with truncations LPA and LPA′, where the phase transition points are chosen to be at the locations of the red and blue crosses in the phase diagrams in Fig. 2 for the $O(4)$ and $Z(2)$ universality classes, respectively. The solid lines represent linear fits to the calculated discrete data points, from which the value of the critical exponent $\gamma$ is extracted.

Figure 5: Longitudinal susceptibility of the order parameter $\chi_{\sigma}$ as a function of the reduced temperature $t$ with several different values of the reduced external field $h$, calculated in the LPA (left panel) and LPA′ (right panel). The phase transition is chosen to be near the location of the red cross in the phase diagrams in Fig. 2 for the $O(4)$ symmetry universality class.

Figure 6: Left panel: logarithm of the reduced pseudo-critical temperature $t_{pc}$, defined by the peak of the susceptibility $\chi_{\sigma}$ as shown in Fig. 5, as a function of the logarithm of the reduced external field strength $h$. Right panel: logarithm of the peak height of the susceptibility, $\chi_{\sigma}\big{|}_{t_{pc}}$, versus the logarithm of the reduced pseudo-critical temperature. Calculations are done within the fRG approach with the truncations LPA and LPA′. The phase transition is chosen to be near the location of the red cross in the phase diagrams in Fig. 2 for the $O(4)$ symmetry universality class.

Figure 7: Logarithms of the transverse (left panel) and longitudinal (right panel) susceptibilities as functions of the logarithm of $-t$ with a fixed value of the reduced external field $h=8.4\times 10^{-9}$ in the chiral broken phase near the coexistence line. Calculations are performed within the fRG approach with the truncations LPA and LPA′. The phase transition is chosen to be near the location of the red cross in the phase diagrams in Fig. 2 for the $O(4)$ symmetry universality class, where the baryon chemical potential is vanishing.

Figure 8: Left panel: logarithm of the correlation length as a function of the logarithm of the reduced external field strength with $t=0$. Right panel: logarithm of the correlation length as a function of the logarithm of the reduced temperature with $h=0$. Both calculations are performed in the quark-meson LEFT within the fRG approach with truncations LPA and LPA′, where the phase transition points are chosen to be at the locations of the red and blue crosses in the phase diagrams in Fig. 2 for the $O(4)$ and $Z(2)$ universality classes, respectively. The solid lines represent linear fits to the calculated discrete data points, from which the values of the critical exponents $\nu_{c}$ and $\nu$ are extracted.
According to Eq. (29), the reduced order parameter reads
$\displaystyle\tilde{\sigma}$ $\displaystyle\sim
h^{1/\delta}y(x)^{-1/\delta}\,.$ (39)
Moreover, it has been shown in Griffiths (1967) that given $x>0$ and $M>M_{0}$
for some value $M_{0}$ in Eq. (29), the scaling function can be expanded as
$\displaystyle y(x)$
$\displaystyle=\sum_{n=1}^{\infty}c_{n}x^{\gamma-2\beta(n-1)}$
$\displaystyle=x^{\gamma}\big{(}c_{1}+c_{2}x^{-2\beta}+c_{3}x^{-4\beta}+\dots\big{)}\,.$
(40)
Inserting the leading term in Eq. (40) into Eq. (39) and utilizing the
relation $\gamma=\beta(\delta-1)$ as shown in Eq. (18), one is led to the
reduced order parameter with $t>0$ and $h\rightarrow 0$, which reads
$\displaystyle\tilde{\sigma}$ $\displaystyle\sim t^{-\gamma}h\,.$ (41)
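The step from Eqs. (39) and (40) to Eq. (41) is pure exponent algebra; the small symbolic check below makes it explicit.

```python
# With y(x) ~ x**gamma and x = t * sigma**(-1/beta), Eq. (39) gives
# sigma**(1 - gamma/(beta*delta)) ~ h**(1/delta) * t**(-gamma/delta).
# Using gamma = beta*(delta - 1) the left-hand exponent collapses to 1/delta,
# hence sigma ~ h * t**(-gamma), which is Eq. (41).
import sympy as sp

beta, delta = sp.symbols('beta delta', positive=True)
gamma = beta * (delta - 1)

lhs_exp = sp.simplify(1 - gamma / (beta * delta))
print(lhs_exp)                                  # -> 1/delta
print(sp.simplify((1 / delta) / lhs_exp))       # exponent of h -> 1
print(sp.simplify((-gamma / delta) / lhs_exp))  # exponent of t -> -gamma
```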
Consequently, the longitudinal and transverse susceptibilities of the order
parameter as defined in Eq. (22) are readily obtained as follows
$\displaystyle\chi_{\sigma}$ $\displaystyle=\chi_{\pi}\sim t^{-\gamma}\,,$
(42)
which is in agreement with Eq. (21) in the limit $h\rightarrow 0$ and in the
symmetric phase, as it should be. Equation (41) also allows us to extract the
value of the exponent $\gamma$ by directly investigating the scaling relation
between $\tilde{\sigma}$ and $t$ in the chiral symmetric phase at a fixed, small
value of $h$. In Fig. 4 we show the logarithm of the longitudinal
susceptibility $\chi_{\sigma}$ versus that of the reduced temperature, where
$h=3.5\times 10^{-10}$ is chosen in the calculations. We have checked that
this value of $h$ is small enough to make sure that the value of $\gamma$
obtained from the linear fit of $\ln(\chi_{\sigma})$-$\ln(t)$ is convergent.
In the same way, the fRG flow equations are solved with the two truncations
LPA and LPA′, and the phase transition points are chosen to be at the
locations of the red and blue crosses in the phase diagrams in Fig. 2 for the
$O(4)$ and $Z(2)$ universality classes, respectively. The values of the
exponent $\gamma$ are obtained as follows
$\displaystyle\gamma^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=1.5458(68)\,,\qquad\gamma^{{}^{O(4)}}_{{}_{\mathrm{LPA}^{\prime}}}=1.4765(76)\,,$
(43) $\displaystyle\gamma^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=1.3313(96)\,,\qquad\gamma^{{}^{Z(2)}}_{{}_{\mathrm{LPA}^{\prime}}}=1.2362(77)\,.$
(44)
Once more, one observes that these values, in particular those obtained in the
LPA′, are in good agreement with the values of $\gamma$ for the $O(4)$ and
$Z(2)$ symmetry universality classes, respectively; see, e.g., Kanaya and Kaya
(1995); Zinn-Justin (2001).
In Fig. 5 the longitudinal susceptibility of the order parameter
$\chi_{\sigma}$, as shown in Eq. (22), is depicted versus the reduced
temperature with several different values of the reduced external field. Here
we only focus on the case of $O(4)$ symmetry, and thus choose the phase
transition to be near the location of the red cross in the phase diagrams in
Fig. 2, i.e., the phase transition with vanishing baryon chemical potential.
When the external field $h$ that breaks the chiral symmetry explicitly is
nonzero, the second-order phase transition becomes a continuous crossover, as
shown in Fig. 5. One can define a pseudo-critical temperature $T_{pc}$, which
is the peak position of the curve $\chi_{\sigma}$ versus $T$, and thus the
reduced pseudo-critical temperature reads
$\displaystyle t_{pc}$ $\displaystyle=\frac{T_{pc}-T_{c}}{T_{c}}\,.$ (45)
One observes from Fig. 5 that with increasing $h$, the peak height of the
susceptibility decreases and the pseudo-critical temperature $t_{pc}$
increases. The scaling relation between $t_{pc}$ and $h$, as well as that
between the peak height of $\chi_{\sigma}$ and $t_{pc}$, reads
$\displaystyle t_{pc}$ $\displaystyle\sim
h^{1/(\gamma+\beta)}\,,\qquad\chi_{\sigma}\big{|}_{t_{pc}}\sim
t_{pc}^{-\gamma}\,,$ (46)
see, e.g., Pelissetto and Vicari (2002) for more details.
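As a quick consistency check (not an independent result), using $\gamma=\beta(\delta-1)$ from Eq. (18), so that $\gamma+\beta=\beta\delta$, the two relations in Eq. (46) combine to
$\displaystyle t_{pc}\sim h^{1/(\beta\delta)}\,,\qquad\chi_{\sigma}\big{|}_{t_{pc}}\sim h^{-\gamma/(\beta\delta)}=h^{1/\delta-1}\,,$
which reproduces the expected scaling of the susceptibility on the critical isotherm.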
In Fig. 6 we show the logarithm of the reduced pseudo-critical temperature
versus the logarithm of the reduced external field strength, and the logarithm
of the peak height of the susceptibility versus the logarithm of the reduced
pseudo-critical temperature in the left and right panels, respectively. The
phase transition is also chosen to be near the location of the red cross in
the phase diagrams in Fig. 2 for the $O(4)$ symmetry universality class, where
the baryon chemical potential is vanishing. Linear fitting to the calculated
discrete data in Fig. 6 yields $\beta=0.403(19)$ and $\gamma=1.543(15)$ for
the LPA, and $\beta=0.405(22)$ and $\gamma=1.454(17)$ for the LPA′, which are
in agreement with the relevant values in Eq. (35) and Eq. (43) within errors
for the $O(4)$ second-order phase transition in 3-$d$ space. In turn, the
agreement of critical exponents obtained from different scaling relations also
provides a check of the internal consistency of
our computations. Note, however, that the critical exponents $\beta$ and $\gamma$
determined from the scaling relations in Eq. (46) are significantly less
accurate than those in Eq. (35) and Eq. (43).
As another consistency check, we consider the susceptibilities in the
chiral broken phase near the coexistence line, i.e., $x=-1$, with $t<0$ and
$h\rightarrow 0$. Inserting Eq. (39) into Eqs. (31) and (32), one is led to
$\displaystyle\chi_{\sigma}$ $\displaystyle\sim h^{1/\delta-1}\frac{\beta
y(x)^{1-1/\delta}}{\beta\delta y(x)-xy^{\prime}(x)}\,,$ (47)
$\displaystyle\chi_{\pi}$ $\displaystyle\sim
h^{1/\delta-1}y(x)^{-1/\delta}\,.$ (48)
When the system is near the coexistence line, one has $x\rightarrow-1$ and
$y\sim h/(-t)^{\beta\delta}$. Hence, the transverse susceptibility is readily
obtained as follows
$\displaystyle\chi_{\pi}$ $\displaystyle\sim h^{-1}(-t)^{\beta}\,.$ (49)
In order to obtain a similar expression for the longitudinal susceptibility,
one needs further information on the equation of state $y(x)$. As the system
is located in the broken phase near the coexistence line, the dynamics is
dominated by Goldstone modes, which are massless in the chiral limit. The
relevant critical behavior in this regime is governed by a Gaussian fixed
point, and thus the corresponding exponents take their mean-field values
Wallace and Zia (1975); Brezin and Wallace (1973), which leaves us with
$\displaystyle y(x)$ $\displaystyle\sim(1+x)^{2}\,,\qquad\mathrm{for}\qquad
x\rightarrow-1\,,$ (50)
see, e.g., Braun and Klein (2008); Stokic _et al._ (2010) for related
discussions. Substituting the equation above into Eq. (47), one arrives
at
$\displaystyle\chi_{\sigma}$ $\displaystyle\sim
h^{-1/2}(-t)^{\beta-(\beta\delta/2)}\,.$ (51)
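In more detail: Eq. (50) implies $y^{\prime}(x)\sim 2(1+x)=2y^{1/2}$ as $x\rightarrow-1$, so the denominator of Eq. (47) is dominated by $-xy^{\prime}(x)\approx 2y^{1/2}$; together with $y\sim h/(-t)^{\beta\delta}$ this yields
$\displaystyle\chi_{\sigma}\sim h^{1/\delta-1}\,y^{1/2-1/\delta}\sim h^{1/\delta-1}\left(\frac{h}{(-t)^{\beta\delta}}\right)^{1/2-1/\delta}=h^{-1/2}(-t)^{\beta-\beta\delta/2}\,.$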
As Eqs. (49) and (51) show, the transverse and longitudinal susceptibilities
scale with the external field with different powers in the broken phase,
viz. $-1$ and $-1/2$ for the former and the latter, respectively.
In Fig. 7 we show $\ln(\chi_{\pi})$ and $\ln(\chi_{\sigma})$ versus $\ln(-t)$
with a fixed value of the reduced external field $h=8.4\times 10^{-9}$ in the
chiral broken phase near the coexistence line. Similarly, here we only
consider the phase transition of $O(4)$ symmetry with $\mu_{B}=0$ in the phase
diagrams in Fig. 2. As shown in Eqs. (49) and (51), the slopes of the linear
fits to $\ln(\chi_{\pi})$-$\ln(-t)$ and $\ln(\chi_{\sigma})$-$\ln(-t)$ are
just the values of $\beta$ and $\beta-(\beta\delta/2)$, respectively.
Consequently, one arrives at $\beta=0.3979(41)$ and $\delta=4.984(74)$ in LPA
and $\beta=0.3832(54)$ and $\delta=4.86(10)$ in LPA′, which agree very well
with the relevant values in Eq. (35) and Eq. (37).
### IV.4 Correlation length
Theory | Method | $\beta$ | $\delta$ | $\gamma$ | $\nu$ | $\nu_{c}$ | $\eta$
---|---|---|---|---|---|---|---
$O(4)$ QM LPA (this work) | fRG Chebyshev | 0.3989(41) | 4.975(57) | 1.5458(68) | 0.7878(25) | 0.3982(17) | 0
$O(4)$ QM LPA′ (this work) | fRG Chebyshev | 0.3832(31) | 4.859(37) | 1.4765(76) | 0.7475(27) | 0.4056(19) | 0.0252(91)*
$Z(2)$ QM LPA (this work) | fRG Chebyshev | 0.3352(12) | 4.941(22) | 1.3313(96) | 0.6635(17) | 0.4007(45) | 0
$Z(2)$ QM LPA′ (this work) | fRG Chebyshev | 0.3259(01) | 4.808(14) | 1.2362(77) | 0.6305(23) | 0.4021(43) | 0.0337(38)*
$O(4)$ scalar theories Tetradis and Wetterich (1994) | fRG Taylor | 0.409 | 4.80* | 1.556 | 0.791 | | 0.034
$O(4)$ KT phase transition Von Gersdorff and Wetterich (2001) | fRG Taylor | 0.387* | 4.73* | | 0.739 | | 0.047
$Z(2)$ KT phase transition Von Gersdorff and Wetterich (2001) | fRG Taylor | | | | 0.6307 | | 0.0467
$O(4)$ scalar theories Litim and Pawlowski (2001) | fRG Taylor | 0.4022* | 5.00* | | 0.8043 | |
$O(4)$ scalar theories LPA Braun and Klein (2008) | fRG Taylor | 0.4030(30) | 4.973(30) | | 0.8053(60) | |
$O(4)$ QM LPA Stokic _et al._ (2010) | fRG Taylor | 0.402 | 4.818 | 1.575 | 0.787 | 0.396 |
$O(4)$ scalar theories Bohr _et al._ (2001) | fRG Grid | 0.40 | 4.79 | | 0.78 | | 0.037
$Z(2)$ scalar theories Bohr _et al._ (2001) | fRG Grid | 0.32 | 4.75 | | 0.64 | | 0.044
$O(4)$ scalar theories De Polsi _et al._ (2020) | fRG DE $\mathcal{O}(\partial^{4})$ | | | | 0.7478(9) | | 0.0360(12)
$Z(2)$ scalar theories Balog _et al._ (2019); De Polsi _et al._ (2020) | fRG DE $\mathcal{O}(\partial^{6})$ | | | | 0.63012(5) | | 0.0361(3)
$O(4)$ CFTs Kos _et al._ (2015) | conformal bootstrap | | | | 0.7472(87) | | 0.0378(32)
$Z(2)$ CFTs Kos _et al._ (2014) | conformal bootstrap | | | | 0.629971(4) | | 0.0362978(20)
$O(4)$ spin model Kanaya and Kaya (1995) | Monte Carlo | 0.3836(46) | 4.851(22) | 1.477(18) | 0.7479(90) | 0.4019(71)* | 0.025(24)*
$Z(2)$ $d=3$ expansion Zinn-Justin (2001) | summed perturbation | 0.3258(14) | 4.805(17)* | 1.2396(13) | 0.6304(13) | 0.4027(23) | 0.0335(25)
Mean Field | | 1/2 | 3 | 1 | 1/2 | 1/3 | 0
Table 2: Critical exponents for the $O(4)$ and $Z(2)$ symmetry universality
classes in 3-$d$ space, obtained in the quark-meson LEFT within the fRG
approach with truncations LPA and LPA′, where the effective potential is
expanded as a sum of Chebyshev polynomials. Our calculated results are also
compared with relevant results from previous fRG calculations, e.g., scalar
theories with the effective potential expanded in a Taylor series Tetradis and
Wetterich (1994); Von Gersdorff and Wetterich (2001); Litim and Pawlowski
(2001); Braun and Klein (2008), or discretized on a grid Bohr _et al._
(2001), the quark-meson (QM) low energy effective theory with LPA Stokic _et
al._ (2010), derivative expansions (DE) up to orders of
$\mathcal{O}(\partial^{4})$ and $\mathcal{O}(\partial^{6})$ Balog _et al._
(2019); De Polsi _et al._ (2020). Moreover, results from other approaches,
such as the conformal bootstrap for the 3-$d$ conformal field theories (CFTs)
Kos _et al._ (2014, 2015), Monte Carlo simulation Kanaya and Kaya (1995), and
$d=3$ perturbation expansion Zinn-Justin (2001), as well as the mean-field
values of exponents are also presented. Note that values with an asterisk are
obtained with scaling laws in Eq. (18).
It is well known that the correlation length $\xi$ plays a pivotal role in
the critical dynamics, since fluctuations of wavelength $\sim\xi$ are
inevitably involved. As a system approaches a
second-order phase transition, the most relevant degrees of freedom are the
long-wavelength modes of low energy, and the correlation length is divergent
at the phase transition Landau and Lifshitz (1980).
The critical behavior of the correlation length is described by the critical
exponent $\nu$, as shown in Eq. (21). In the symmetric phase $t>0$, it reads
$\displaystyle\xi$ $\displaystyle\sim t^{-\nu}\,,\qquad\mathrm{with}\qquad
h=0\,,$ (52)
which illustrates the scaling relation between the correlation length and the
reduced temperature. Moreover, one can also define another critical exponent
$\nu_{c}$ related to the scaling relation between the correlation length and
the reduced external field, to wit,
$\displaystyle\xi$ $\displaystyle\sim h^{-\nu_{c}}\,,\qquad\mathrm{with}\qquad
t=0\,.$ (53)
In our setup in the quark-meson LEFT, cf. Sec. II, the correlation length is
proportional to the inverse of the renormalized $\sigma$-meson mass, viz.,
$\displaystyle\xi$ $\displaystyle\sim\frac{1}{m_{\sigma}}\,,$ (54)
where $m_{\sigma}$ is related to the dimensionless $k$-dependent sigma mass
$\bar{m}_{\sigma,k}$ in Eq. (5) via the following relation
$\displaystyle m_{\sigma}$
$\displaystyle=\bar{m}_{\sigma,k}(\sigma=\sigma_{{}_{\mathrm{EoM}}})k\,,\quad\mathrm{with}\quad
k\rightarrow 0\,,$ (55)
where the scale $k$ is chosen to be in the IR limit $k\rightarrow 0$, and the
mass is calculated on the equation of motion of the order parameter field. In
Fig. 8 we show the scaling relation between the correlation length and the
reduced external field strength, and that between the correlation length and
the reduced temperature, respectively. In the same way, we adopt the two
different truncations: LPA and LPA′. The phase transition points are also
chosen to be at the locations of the red and blue crosses in the phase
diagrams in Fig. 2 for the $O(4)$ and $Z(2)$ universality classes,
respectively. By linear fits to the calculated data, one
obtains values of the critical exponent $\nu$ as follows
$\displaystyle\nu^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.7878(25)\,,\qquad\nu^{{}^{O(4)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.7475(27)\,,$
(56) $\displaystyle\nu^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.6635(17)\,,\qquad\nu^{{}^{Z(2)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.6305(23)\,,$
(57)
as well as those of the critical exponent $\nu_{c}$, i.e.,
$\displaystyle{\nu_{c}}^{{}^{O(4)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.3982(17)\,,\qquad{\nu_{c}}^{{}^{O(4)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.4056(19)\,,$
(58) $\displaystyle{\nu_{c}}^{{}^{Z(2)}}_{{}_{\mathrm{LPA}}}$
$\displaystyle=0.4007(45)\,,\qquad{\nu_{c}}^{{}^{Z(2)}}_{{}_{\mathrm{LPA}^{\prime}}}=0.4021(43)\,.$
(59)
Finally, we close Sec. IV with a summary of various critical exponents
calculated in this work in Tab. 2. Respective results for the $O(4)$ and
$Z(2)$ symmetry universality classes with truncation LPA or LPA′ are presented
in the first several rows in Tab. 2. As we have discussed in Sec. II, the
effective potential is expanded as a sum of Chebyshev polynomials in our
calculations, which captures global properties of the order-parameter
potential very well. In Tab. 2 we also present values of critical exponents
obtained from other computations, e.g., scalar theories calculated within the
fRG with the effective potential expanded in a Taylor series Tetradis and
Wetterich (1994); Von Gersdorff and Wetterich (2001); Litim and Pawlowski
(2001); Braun and Klein (2008), or discretized on a grid Bohr _et al._
(2001), quark-meson LEFT within the fRG in LPA Stokic _et al._ (2010),
derivative expansion of the fRG up to orders of $\mathcal{O}(\partial^{4})$
and $\mathcal{O}(\partial^{6})$ Balog _et al._ (2019); De Polsi _et al._
(2020), the conformal bootstrap for the 3-$d$ conformal field theories Kos
_et al._ (2014, 2015), Monte Carlo simulation Kanaya and Kaya (1995), and the
$d=3$ perturbation expansion Zinn-Justin (2001). One observes that our
calculated results are in good agreement with the relevant results from
previous fRG calculations as well as those from the conformal bootstrap, Monte
Carlo simulation, and the $d=3$ perturbation expansion. Remarkably, the
calculation with the truncation LPA′ is superior to that with LPA, and
already provides quantitatively reliable predictions of the critical
exponents in comparison to other approaches.
## V Summary
QCD phase structure and related critical behaviors have been studied in the
two-flavor quark-meson low energy effective theory within the fRG approach in
this work. More specifically, we have expanded the effective potential as a
sum of Chebyshev polynomials to solve its flow equation. Consequently, both
the global properties of the effective potential and the numerical accuracy
necessary for the computation of critical exponents are retained in our
calculations. Moreover, we have employed two different truncations for the
effective action: the commonly used local potential approximation (LPA), and
an extension beyond it (LPA′) that encodes a field-dependent mesonic wave
function renormalization.
With the numerical setup within the fRG approach described above, we have
obtained the phase diagram in the plane of $T$ and $\mu_{B}$ for the two-
flavor quark-meson LEFT in the chiral limit, including the second-order phase
transition line of $O(4)$, the tricritical point and the first-order phase
transition line. Furthermore, we also show the $Z(2)$ line in the phase
diagram, which is the trajectory of the critical end point as the strength
of explicit chiral symmetry breaking, i.e., the pion mass, is varied.
In the phase diagram, we have performed detailed scaling analyses for the
3-$d$ $O(4)$ and $Z(2)$ symmetry universality classes, and investigated the
critical behaviors in the vicinity of the phase transition both in the chiral
symmetric and broken phases. Moreover, the transverse and longitudinal
susceptibilities of the order parameter have been calculated in the chiral
broken phase near the coexistence line.
A variety of critical exponents related to the order parameter, chiral
susceptibilities and correlation lengths have been calculated for the 3-$d$
$O(4)$ and $Z(2)$ symmetry universality classes in the phase diagram,
respectively. The calculated results are also compared with those from
previous fRG calculations, either employing the Taylor expansion for the
order-parameter potential or discretizing it on a grid, derivative expansion
of the effective action, the conformal bootstrap, Monte Carlo simulations, and
the $d=3$ perturbation expansion. We find that the critical exponents obtained
in the quark-meson LEFT within the fRG approach, where the order-parameter
potential is expanded in terms of Chebyshev polynomials and a field-dependent
mesonic wave function renormalization is taken into account, are in
quantitative agreement with results from the aforementioned approaches.
Furthermore, we have also investigated the size of the critical regime,
finding that the critical region in the QCD phase diagram is probably very
small, less than 1 MeV in the temperature direction.
###### Acknowledgements.
We thank Jan M. Pawlowski for illuminating discussions. We also would like to
thank other members of the fQCD collaboration Braun _et al._ (2021) for work
on related subjects. The work was supported by the National Natural Science
Foundation of China under Contract No. 11775041, and the Fundamental Research
Funds for the Central Universities under Contract No. DUT20GJ212.
## Appendix A Threshold functions and anomalous dimensions
We employ the $3d$ flat regulators Litim (2001, 2000) for quarks and mesons in
this paper
$\displaystyle R_{\phi,k}(q_{0},\bm{q})$
$\displaystyle=Z_{\phi,k}\bm{q}^{2}r_{B}(\bm{q}^{2}/k^{2})\,,$ (60)
$\displaystyle R_{q,k}(q_{0},\bm{q})$
$\displaystyle=Z_{q,k}i\bm{\gamma}\cdot\bm{q}r_{F}(\bm{q}^{2}/k^{2})\,,$ (61)
with
$\displaystyle r_{B}(x)$
$\displaystyle=\left(\frac{1}{x}-1\right)\Theta(1-x)\,,$ (62) $\displaystyle
r_{F}(x)$ $\displaystyle=\left(\frac{1}{\sqrt{x}}-1\right)\Theta(1-x)\,.$ (63)
The threshold functions in Eq. (4) are given by
$\displaystyle l_{0}^{(B,d)}(\bar{m}^{2}_{\phi,k},\eta_{\phi,k};T)$
$\displaystyle=$
$\displaystyle\frac{2}{d-1}\left(1-\frac{\eta_{\phi,k}}{d+1}\right)\frac{1}{\sqrt{1+\bar{m}^{2}_{\phi,k}}}$
$\displaystyle\times\bigg{(}\frac{1}{2}+n_{B}(\bar{m}^{2}_{\phi,k};T)\bigg{)}\,,$
(64)
and
$\displaystyle l_{0}^{(F,d)}(\bar{m}^{2}_{q,k},\eta_{q,k};T,\mu)$
$\displaystyle=$
$\displaystyle\frac{2}{d-1}\left(1-\frac{\eta_{q,k}}{d}\right)\frac{1}{2\sqrt{1+\bar{m}^{2}_{q,k}}}$
$\displaystyle\times\Big{(}1-n_{F}(\bar{m}^{2}_{q,k};T,\mu)-n_{F}(\bar{m}^{2}_{q,k};T,-\mu)\Big{)}\,,$
(65)
with the bosonic and fermionic distribution functions reading
$\displaystyle n_{B}(\bar{m}^{2}_{\phi,k};T)$
$\displaystyle=\frac{1}{\exp\Big\{\frac{k}{T}\sqrt{1+\bar{m}_{\phi,k}^{2}}\Big\}-1}\,,$
(66)
and
$\displaystyle n_{F}(\bar{m}^{2}_{q,k};T,\mu)$
$\displaystyle=\frac{1}{\exp\Big\{\frac{1}{T}\Big[k\sqrt{1+\bar{m}^{2}_{q,k}}-\mu\Big]\Big\}+1}\,,$
(67)
respectively.
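For illustration, the distribution functions in Eqs. (66) and (67) translate directly into code; a minimal R sketch (all dimensionful quantities assumed to be in the same units):
n_B <- function(mbar2, k, T) 1 / (exp(k * sqrt(1 + mbar2) / T) - 1)             # Eq. (66)
n_F <- function(mbar2, k, T, mu) 1 / (exp((k * sqrt(1 + mbar2) - mu) / T) + 1)  # Eq. (67)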
The meson anomalous dimension in Eq. (8) is given by
$\displaystyle\eta_{\phi,k}(\rho)$
$\displaystyle=\frac{1}{6\pi^{2}}\Bigg\{\frac{4}{k^{2}}\bar{\rho}(\bar{V}^{\prime\prime}_{k}(\bar{\rho}))^{2}\mathcal{BB}_{(2,2)}(\bar{m}^{2}_{\pi,k},\bar{m}^{2}_{\sigma,k};T)$
$\displaystyle\quad+N_{c}\bar{h}^{2}_{y,k}\bigg[\mathcal{F}_{(2)}(\bar{m}^{2}_{q,k};T,\mu)(2\eta_{q,k}-3)-4(\eta_{q,k}-2)\mathcal{F}_{(3)}(\bar{m}^{2}_{q,k};T,\mu)\bigg]\Bigg\}\,,$
(68)
with
$\displaystyle\bar{h}_{y,k}$
$\displaystyle=\frac{h_{y,k}}{Z_{q,k}(Z_{\phi,k})^{1/2}}\,.$ (69)
Note that the threshold functions $\mathcal{BB}_{(2,2)}$, $\mathcal{F}_{(2)}$ and
$\mathcal{F}_{(3)}$ in Eq. (68) can be found in, e.g., Fu and Pawlowski (2015);
Yin _et al._ (2019).
## Appendix B Some relations for the Chebyshev polynomials
In this appendix we collect some relations for the Chebyshev polynomials,
which are used in solving the flow equation for the effective potential in Eq.
(10). The Chebyshev polynomial of order $n$ reads
$\displaystyle T_{n}(x)$ $\displaystyle=\cos\big{(}n\arccos(x)\big{)}\,,$ (70)
for nonnegative integers $n$ and $x\in[-1,1]$. Explicit expressions for
the Chebyshev polynomials can be obtained from the recursion relation
$\displaystyle T_{n+2}(x)$ $\displaystyle=2xT_{n+1}(x)-T_{n}(x)\,,\quad n\geq
0\,,$ (71)
with $T_{0}(x)=1$ and $T_{1}(x)=x$.
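As an illustration, the recursion in Eq. (71) can be coded directly; a minimal R sketch evaluating $T_{0}(x),\ldots,T_{N}(x)$ at a point $x\in[-1,1]$:
chebyshev_eval <- function(x, N) {
  Tn <- numeric(N + 1)
  Tn[1] <- 1              # T_0(x) = 1
  if (N >= 1) Tn[2] <- x  # T_1(x) = x
  if (N >= 2) {
    for (n in 3:(N + 1)) Tn[n] <- 2 * x * Tn[n - 1] - Tn[n - 2]  # Eq. (71)
  }
  Tn
}
# e.g. chebyshev_eval(0.5, 3) returns 1.0, 0.5, -0.5, -1.0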
The $N+1$ zeros of $T_{N+1}(x)$ in the region $-1\leq x\leq 1$ are given by
$\displaystyle x_{k}$
$\displaystyle=\cos\left(\frac{\pi(k+\frac{1}{2})}{N+1}\right)\,,\quad
k=0,\,1,\,\cdots N\,.$ (72)
A discrete orthogonality relation is fulfilled by the Chebyshev polynomials,
to wit,
$\displaystyle\sum_{k=0}^{N}T_{i}(x_{k})T_{j}(x_{k})$
$\displaystyle=\begin{cases}0\,,&i\neq j\,,\\ (N+1)/2\,,&i=j\neq 0\,,\\ N+1\,,&i=j=0\,,\end{cases}$ (76)
where $x_{k}$’s are the $N+1$ zeros of $T_{N+1}(x)$ in Eq. (72), and
$i,\,j\leq N$. The interval $[-1,1]$ for $x$ can be mapped to an arbitrary
interval $[y_{\mathrm{min}},y_{\mathrm{max}}]$ for $y$ via the linear
relation
$\displaystyle x$
$\displaystyle=\frac{2y-(y_{\mathrm{max}}+y_{\mathrm{min}})}{y_{\mathrm{max}}-y_{\mathrm{min}}}\,,$
(77)
and the generalized Chebyshev polynomials are defined by
$\displaystyle T_{n}^{[y_{\mathrm{min}},y_{\mathrm{max}}]}(y)$
$\displaystyle\equiv T_{n}\big{(}x(y)\big{)}\,.$ (78)
Therefore, the zeros in $y$ corresponding to Eq. (72) read
$\displaystyle y_{k}$
$\displaystyle=\frac{y_{\max}-y_{\min}}{2}\cos\left(\frac{\pi(k+\frac{1}{2})}{N+1}\right)+\frac{y_{\max}+y_{\min}}{2}\,,$
(79)
with $k=0,\,1,\,\cdots N$. Then, a function $f(y)$ with
$y\in[y_{\mathrm{min}},y_{\mathrm{max}}]$ can be approximated as
$\displaystyle f(y)$
$\displaystyle\approx\left[\sum_{i=1}^{N}c_{i}T_{i}^{[y_{\mathrm{min}},y_{\mathrm{max}}]}(y)\right]+\frac{1}{2}c_{0}\,,$
(80)
where the coefficients can be readily obtained from the
orthogonality relation in Eq. (76), which yields
$\displaystyle c_{i}$
$\displaystyle=\frac{2}{N+1}\sum_{k=0}^{N}f(y_{k})T_{i}^{[y_{\mathrm{min}},y_{\mathrm{max}}]}(y_{k})\,,$
(81)
with $i=0,\,1,\,\cdots N$.
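Putting Eqs. (77), (79) and (81) together, and using $T_{i}(x)=\cos(i\arccos x)$ from Eq. (70), the fit and the evaluation of the approximation in Eq. (80) can be sketched in R as:
chebyshev_fit <- function(f, ymin, ymax, N) {
  k  <- 0:N
  yk <- (ymax - ymin) / 2 * cos(pi * (k + 0.5) / (N + 1)) +
    (ymax + ymin) / 2                               # zeros y_k, Eq. (79)
  xk <- (2 * yk - (ymax + ymin)) / (ymax - ymin)    # map back to x, Eq. (77)
  sapply(0:N, function(i)
    2 / (N + 1) * sum(f(yk) * cos(i * acos(xk))))   # coefficients c_i, Eq. (81)
}
chebyshev_approx <- function(y, coefs, ymin, ymax) {
  x <- (2 * y - (ymax + ymin)) / (ymax - ymin)
  i <- seq_along(coefs) - 1
  sum(coefs * cos(i * acos(x))) - coefs[1] / 2      # Eq. (80)
}
# e.g. chebyshev_approx(0.3, chebyshev_fit(exp, 0, 1, 16), 0, 1)
# closely reproduces exp(0.3)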
With the Chebyshev approximation of the function $f(y)$ in Eq. (80), it is
straightforward to obtain its derivative, viz.
$\displaystyle f^{\prime}(y)$
$\displaystyle\approx\sum_{i=1}^{N}c_{i}\frac{d}{dy}T_{i}^{[y_{\mathrm{min}},y_{\mathrm{max}}]}(y)$
$\displaystyle=\left[\sum_{i=1}^{N}d_{i}T_{i}^{[y_{\mathrm{min}},y_{\mathrm{max}}]}(y)\right]+\frac{1}{2}d_{0}\,,$
(82)
where the coefficients $d_{i}$ can be deduced from the recursion relation,
which reads
$\displaystyle d_{N}$ $\displaystyle=0\,,\qquad
d_{N-1}=\frac{2}{y_{\mathrm{max}}-y_{\mathrm{min}}}2Nc_{N}\,,$ $\displaystyle
d_{i-1}$
$\displaystyle=d_{i+1}+\frac{2}{y_{\mathrm{max}}-y_{\mathrm{min}}}2ic_{i}\quad(i=N-1,\cdots,1)\,.$
(83)
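A minimal R sketch of this recursion (with $c_{0},\ldots,c_{N}$ stored as coefs[1], ..., coefs[N + 1]):
chebyshev_deriv_coefs <- function(coefs, ymin, ymax) {
  N <- length(coefs) - 1
  d <- numeric(N + 1)                    # d[i + 1] stores d_i
  d[N + 1] <- 0                          # d_N = 0
  if (N >= 1) d[N] <- 2 / (ymax - ymin) * 2 * N * coefs[N + 1]
  if (N >= 2) {
    for (i in (N - 1):1)                 # d_{i-1} = d_{i+1} + 4 i c_i / (ymax - ymin)
      d[i] <- d[i + 2] + 2 / (ymax - ymin) * 2 * i * coefs[i + 1]
  }
  d
}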
## References
* Stephanov (2006) M. A. Stephanov, _Proceedings, 24th International Symposium on Lattice Field Theory (Lattice 2006): Tucson, USA, July 23-28, 2006_ , PoS LAT2006, 024 (2006), arXiv:hep-lat/0701002 [hep-lat] .
* Friman _et al._ (2011) B. Friman, C. Hohne, J. Knoll, S. Leupold, J. Randrup, R. Rapp, and P. Senger, Lect. Notes Phys. 814, pp.1 (2011).
* Luo and Xu (2017) X. Luo and N. Xu, Nucl. Sci. Tech. 28, 112 (2017), arXiv:1701.02105 [nucl-ex] .
* Andronic _et al._ (2018) A. Andronic, P. Braun-Munzinger, K. Redlich, and J. Stachel, Nature 561, 321 (2018), arXiv:1710.09425 [nucl-th] .
* Fischer (2019) C. S. Fischer, Prog. Part. Nucl. Phys. 105, 1 (2019), arXiv:1810.12938 [hep-ph] .
* Bzdak _et al._ (2020) A. Bzdak, S. Esumi, V. Koch, J. Liao, M. Stephanov, and N. Xu, Phys. Rept. 853, 1 (2020), arXiv:1906.00936 [nucl-th] .
* Fu _et al._ (2020) W.-j. Fu, J. M. Pawlowski, and F. Rennecke, Phys. Rev. D 101, 054032 (2020), arXiv:1909.02991 [hep-ph] .
* Bazavov _et al._ (2020) A. Bazavov _et al._ , Phys. Rev. D 101, 074502 (2020), arXiv:2001.08530 [hep-lat] .
* Borsanyi _et al._ (2020) S. Borsanyi, Z. Fodor, J. N. Guenther, R. Kara, S. D. Katz, P. Parotto, A. Pasztor, C. Ratti, and K. K. Szabo, Phys. Rev. Lett. 125, 052001 (2020), arXiv:2002.02821 [hep-lat] .
* Fu _et al._ (2021) W.-j. Fu, X. Luo, J. M. Pawlowski, F. Rennecke, R. Wen, and S. Yin, (2021), arXiv:2101.06035 [hep-ph] .
* Adamczyk _et al._ (2014a) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 112, 032302 (2014a), arXiv:1309.5681 [nucl-ex] .
* Adamczyk _et al._ (2014b) L. Adamczyk _et al._ (STAR), Phys. Rev. Lett. 113, 092301 (2014b), arXiv:1402.1558 [nucl-ex] .
* Luo (2015) X. Luo (STAR), _Proceedings, 9th International Workshop on Critical Point and Onset of Deconfinement (CPOD 2014): Bielefeld, Germany, November 17-21, 2014_ , PoS CPOD2014, 019 (2015), arXiv:1503.02558 [nucl-ex] .
* Adamczyk _et al._ (2018) L. Adamczyk _et al._ (STAR), Phys. Lett. B785, 551 (2018), arXiv:1709.00773 [nucl-ex] .
* Adam _et al._ (2020) J. Adam _et al._ (STAR), (2020), arXiv:2001.02852 [nucl-ex] .
* Aoki _et al._ (2006) Y. Aoki, G. Endrodi, Z. Fodor, S. D. Katz, and K. K. Szabo, Nature 443, 675 (2006), arXiv:hep-lat/0611014 [hep-lat] .
* Borsanyi _et al._ (2014) S. Borsanyi, Z. Fodor, C. Hoelbling, S. D. Katz, S. Krieg, and K. K. Szabo, Phys. Lett. B 730, 99 (2014), arXiv:1309.5258 [hep-lat] .
* Bazavov _et al._ (2014) A. Bazavov _et al._ (HotQCD), Phys. Rev. D 90, 094503 (2014), arXiv:1407.6387 [hep-lat] .
* Bazavov _et al._ (2019) A. Bazavov _et al._ (HotQCD), Phys. Lett. B795, 15 (2019), arXiv:1812.08235 [hep-lat] .
* Isserstedt _et al._ (2019) P. Isserstedt, M. Buballa, C. S. Fischer, and P. J. Gunkel, Phys. Rev. D 100, 074011 (2019), arXiv:1906.11644 [hep-ph] .
* Gao and Pawlowski (2020a) F. Gao and J. M. Pawlowski, Phys. Rev. D 102, 034027 (2020a), arXiv:2002.07500 [hep-ph] .
* Gao and Pawlowski (2020b) F. Gao and J. M. Pawlowski, (2020b), arXiv:2010.13705 [hep-ph] .
* Halasz _et al._ (1998) A. M. Halasz, A. Jackson, R. Shrock, M. A. Stephanov, and J. Verbaarschot, Phys. Rev. D 58, 096007 (1998), arXiv:hep-ph/9804290 .
* Buballa and Carignano (2019) M. Buballa and S. Carignano, Phys. Lett. B 791, 361 (2019), arXiv:1809.10066 [hep-ph] .
* Ding _et al._ (2019) H. T. Ding _et al._ , Phys. Rev. Lett. 123, 062002 (2019), arXiv:1903.04801 [hep-lat] .
* Braun _et al._ (2020) J. Braun, W.-j. Fu, J. M. Pawlowski, F. Rennecke, D. Rosenblüh, and S. Yin, Phys. Rev. D 102, 056010 (2020), arXiv:2003.13112 [hep-ph] .
* Ding _et al._ (2020) H.-T. Ding, S.-T. Li, S. Mukherjee, A. Tomiya, X.-D. Wang, and Y. Zhang, (2020), arXiv:2010.14836 [hep-lat] .
* Pisarski and Wilczek (1984) R. D. Pisarski and F. Wilczek, Phys. Rev. D29, 338 (1984).
* Berges _et al._ (2002) J. Berges, N. Tetradis, and C. Wetterich, Phys. Rept. 363, 223 (2002), arXiv:hep-ph/0005122 [hep-ph] .
* Pawlowski (2007) J. M. Pawlowski, Annals Phys. 322, 2831 (2007), arXiv:hep-th/0512261 [hep-th] .
* Schaefer and Wambach (2008) B.-J. Schaefer and J. Wambach, _Helmholtz International Summer School on Dense Matter in Heavy Ion Collisions and Astrophysics Dubna, Russia, August 21-September 1, 2006_ , Phys. Part. Nucl. 39, 1025 (2008), arXiv:hep-ph/0611191 [hep-ph] .
* Gies (2012) H. Gies, _Renormalization group and effective field theory approaches to many-body systems_ , Lect. Notes Phys. 852, 287 (2012), arXiv:hep-ph/0611146 [hep-ph] .
* Rosten (2012) O. J. Rosten, Phys. Rept. 511, 177 (2012), arXiv:1003.1366 [hep-th] .
* Braun (2012) J. Braun, J. Phys. G39, 033001 (2012), arXiv:1108.4449 [hep-ph] .
* Pawlowski (2014) J. M. Pawlowski, _Proceedings, 24th International Conference on Ultra-Relativistic Nucleus-Nucleus Collisions (Quark Matter 2014): Darmstadt, Germany, May 19-24, 2014_ , Nucl. Phys. A931, 113 (2014).
* Dupuis _et al._ (2020) N. Dupuis, L. Canet, A. Eichhorn, W. Metzner, J. Pawlowski, M. Tissier, and N. Wschebor, (2020), arXiv:2006.04853 [cond-mat.stat-mech] .
* Risch (2013) A. Risch, _On the chiral phase transition in QCD by means of the functional renormalisation group_ , Master’s thesis, University of Heidelberg (2013).
* Boyd (2000) J. P. Boyd, _Chebyshev and Fourier Spectral Methods_ , 2nd ed. (Dover Publications, Inc., 2000).
* Borchardt and Knorr (2015) J. Borchardt and B. Knorr, Phys. Rev. D 91, 105011 (2015), [Erratum: Phys.Rev.D 93, 089904 (2016)], arXiv:1502.07511 [hep-th] .
* Borchardt and Knorr (2016) J. Borchardt and B. Knorr, Phys. Rev. D 94, 025027 (2016), arXiv:1603.06726 [hep-th] .
* Knorr (2020) B. Knorr, (2020), arXiv:2012.06499 [hep-th] .
* Pawlowski and Rennecke (2014) J. M. Pawlowski and F. Rennecke, Phys. Rev. D90, 076002 (2014), arXiv:1403.1179 [hep-ph] .
* Yin _et al._ (2019) S. Yin, R. Wen, and W.-j. Fu, Phys. Rev. D100, 094029 (2019), arXiv:1907.10262 [hep-ph] .
* Schaefer and Wambach (2005) B.-J. Schaefer and J. Wambach, Nucl. Phys. A757, 479 (2005), arXiv:nucl-th/0403039 [nucl-th] .
* Grossi and Wink (2019) E. Grossi and N. Wink, (2019), arXiv:1903.09503 [hep-th] .
* Wilson and Kogut (1974) K. Wilson and J. B. Kogut, Phys. Rept. 12, 75 (1974).
* Weinberg (1979) S. Weinberg, Physica A 96, 327 (1979).
* Gies and Wetterich (2002) H. Gies and C. Wetterich, Phys. Rev. D65, 065001 (2002), arXiv:hep-th/0107221 [hep-th] .
* Gies and Wetterich (2004) H. Gies and C. Wetterich, Phys. Rev. D69, 025001 (2004), arXiv:hep-th/0209183 [hep-th] .
* Floerchinger and Wetterich (2009) S. Floerchinger and C. Wetterich, Phys. Lett. B680, 371 (2009), arXiv:0905.0915 [hep-th] .
* Braun _et al._ (2016) J. Braun, L. Fister, J. M. Pawlowski, and F. Rennecke, Phys. Rev. D94, 034016 (2016), arXiv:1412.1045 [hep-ph] .
* Mitter _et al._ (2015) M. Mitter, J. M. Pawlowski, and N. Strodthoff, Phys. Rev. D91, 054035 (2015), arXiv:1411.7978 [hep-ph] .
* Cyrol _et al._ (2018a) A. K. Cyrol, M. Mitter, J. M. Pawlowski, and N. Strodthoff, Phys. Rev. D97, 054006 (2018a), arXiv:1706.06326 [hep-ph] .
* Eser _et al._ (2018) J. Eser, F. Divotgey, M. Mitter, and D. H. Rischke, Phys. Rev. D 98, 014024 (2018), arXiv:1804.01787 [hep-ph] .
* Cyrol _et al._ (2016) A. K. Cyrol, L. Fister, M. Mitter, J. M. Pawlowski, and N. Strodthoff, Phys. Rev. D94, 054005 (2016), arXiv:1605.01856 [hep-ph] .
* Huber (2020) M. Q. Huber, Phys. Rev. D 101, 114009 (2020), arXiv:2003.13703 [hep-ph] .
* Wetterich (1993) C. Wetterich, Phys. Lett. B301, 90 (1993).
* Ellwanger (1994) U. Ellwanger, _Proceedings, Workshop on Quantum field theoretical aspects of high energy physics: Bad Frankenhausen, Germany, September 20-24, 1993_ , Z. Phys. C62, 503 (1994), [,206(1993)], arXiv:hep-ph/9308260 [hep-ph] .
* Morris (1994) T. R. Morris, Int. J. Mod. Phys. A9, 2411 (1994), arXiv:hep-ph/9308265 [hep-ph] .
* Braun _et al._ (2010) J. Braun, H. Gies, and J. M. Pawlowski, Phys.Lett. B684, 262 (2010), arXiv:0708.2413 [hep-th] .
* Braun (2009) J. Braun, Eur. Phys. J. C64, 459 (2009), arXiv:0810.1727 [hep-ph] .
* Braun _et al._ (2011a) J. Braun, L. M. Haas, F. Marhauser, and J. M. Pawlowski, Phys. Rev. Lett. 106, 022002 (2011a), arXiv:0908.0008 [hep-ph] .
* Cyrol _et al._ (2018b) A. K. Cyrol, M. Mitter, J. M. Pawlowski, and N. Strodthoff, Phys. Rev. D97, 054015 (2018b), arXiv:1708.03482 [hep-ph] .
* Schaefer _et al._ (2007) B.-J. Schaefer, J. M. Pawlowski, and J. Wambach, Phys. Rev. D76, 074023 (2007), arXiv:0704.3234 [hep-ph] .
* Skokov _et al._ (2010) V. Skokov, B. Stokic, B. Friman, and K. Redlich, Phys. Rev. C82, 015206 (2010), arXiv:1004.2665 [hep-ph] .
* Herbst _et al._ (2011) T. K. Herbst, J. M. Pawlowski, and B.-J. Schaefer, Phys. Lett. B696, 58 (2011), arXiv:1008.0081 [hep-ph] .
* Skokov _et al._ (2011) V. Skokov, B. Friman, and K. Redlich, Phys. Rev. C83, 054904 (2011), arXiv:1008.4570 [hep-ph] .
* Karsch _et al._ (2011) F. Karsch, B.-J. Schaefer, M. Wagner, and J. Wambach, Phys. Lett. B 698, 256 (2011), arXiv:1009.5211 [hep-ph] .
* Morita _et al._ (2011) K. Morita, V. Skokov, B. Friman, and K. Redlich, Phys. Rev. D 84, 074020 (2011), arXiv:1108.0735 [hep-ph] .
* Skokov _et al._ (2012) V. Skokov, B. Friman, and K. Redlich, Phys. Lett. B 708, 179 (2012), arXiv:1108.3231 [hep-ph] .
* Haas _et al._ (2013) L. M. Haas, R. Stiele, J. Braun, J. M. Pawlowski, and J. Schaffner-Bielich, Phys. Rev. D87, 076004 (2013), arXiv:1302.1993 [hep-ph] .
* Herbst _et al._ (2013) T. K. Herbst, J. M. Pawlowski, and B.-J. Schaefer, Phys. Rev. D88, 014007 (2013), arXiv:1302.1426 [hep-ph] .
* Herbst _et al._ (2014) T. K. Herbst, M. Mitter, J. M. Pawlowski, B.-J. Schaefer, and R. Stiele, Phys. Lett. B731, 248 (2014), arXiv:1308.3621 [hep-ph] .
* Fu and Pawlowski (2016) W.-j. Fu and J. M. Pawlowski, Phys. Rev. D93, 091501 (2016), arXiv:1512.08461 [hep-ph] .
* Fu and Pawlowski (2015) W.-j. Fu and J. M. Pawlowski, Phys. Rev. D92, 116006 (2015), arXiv:1508.06504 [hep-ph] .
* Fu _et al._ (2016) W.-j. Fu, J. M. Pawlowski, F. Rennecke, and B.-J. Schaefer, Phys. Rev. D 94, 116020 (2016), arXiv:1608.04302 [hep-ph] .
* Sun _et al._ (2018) K.-x. Sun, R. Wen, and W.-j. Fu, Phys. Rev. D98, 074028 (2018), arXiv:1805.12025 [hep-ph] .
* Fu _et al._ (2018) W.-j. Fu, J. M. Pawlowski, and F. Rennecke, (2018), 10.21468/SciPostPhysCore.2.1.002, arXiv:1808.00410 [hep-ph] .
* Fu _et al._ (2019) W.-j. Fu, J. M. Pawlowski, and F. Rennecke, Phys. Rev. D 100, 111501 (2019), arXiv:1809.01594 [hep-ph] .
* Wen _et al._ (2019) R. Wen, C. Huang, and W.-J. Fu, Phys. Rev. D 99, 094019 (2019), arXiv:1809.04233 [hep-ph] .
* Wen and Fu (2019) R. Wen and W.-j. Fu, (2019), arXiv:1909.12564 [hep-ph] .
* Hansen _et al._ (2020) H. Hansen, R. Stiele, and P. Costa, Phys. Rev. D 101, 094001 (2020), arXiv:1904.08965 [hep-ph] .
* Helmboldt _et al._ (2015) A. J. Helmboldt, J. M. Pawlowski, and N. Strodthoff, Phys. Rev. D 91, 054010 (2015), arXiv:1409.8414 [hep-ph] .
* Toussaint (1997) D. Toussaint, Phys. Rev. D 55, 362 (1997), arXiv:hep-lat/9607084 .
* Engels and Mendes (2000) J. Engels and T. Mendes, Nucl. Phys. B 572, 289 (2000), arXiv:hep-lat/9911028 .
* Parisen Toldin _et al._ (2003) F. Parisen Toldin, A. Pelissetto, and E. Vicari, JHEP 07, 029 (2003), arXiv:hep-ph/0305264 [hep-ph] .
* Engels _et al._ (2003) J. Engels, L. Fromme, and M. Seniuch, Nucl. Phys. B675, 533 (2003), arXiv:hep-lat/0307032 [hep-lat] .
* Braun and Klein (2008) J. Braun and B. Klein, Phys. Rev. D 77, 096008 (2008), arXiv:0712.3574 [hep-th] .
* Engels and Vogt (2010) J. Engels and O. Vogt, Nucl. Phys. B 832, 538 (2010), arXiv:0911.1939 [hep-lat] .
* Berges _et al._ (1999) J. Berges, D. U. Jungnickel, and C. Wetterich, Phys. Rev. D59, 034010 (1999), arXiv:hep-ph/9705474 [hep-ph] .
* Schaefer and Pirner (1999) B.-J. Schaefer and H.-J. Pirner, Nucl. Phys. A660, 439 (1999), arXiv:nucl-th/9903003 [nucl-th] .
* Bohr _et al._ (2001) O. Bohr, B. Schaefer, and J. Wambach, Int. J. Mod. Phys. A 16, 3823 (2001), arXiv:hep-ph/0007098 .
* Stokic _et al._ (2010) B. Stokic, B. Friman, and K. Redlich, Eur. Phys. J. C67, 425 (2010), arXiv:0904.0466 [hep-ph] .
* Braun _et al._ (2011b) J. Braun, B. Klein, and P. Piasecki, Eur. Phys. J. C 71, 1576 (2011b), arXiv:1008.2155 [hep-ph] .
* Tetradis (2003) N. Tetradis, Nucl. Phys. A726, 93 (2003), arXiv:hep-th/0303244 [hep-th] .
* Widom (1965) B. Widom, J. Phys. Chem. 43, 3898 (1965).
* Griffiths (1967) R. B. Griffiths, Phys. Rev. 158, 176 (1967).
* Kanaya and Kaya (1995) K. Kanaya and S. Kaya, Phys. Rev. D 51, 2404 (1995), arXiv:hep-lat/9409001 .
* Zinn-Justin (2001) J. Zinn-Justin, Phys. Rept. 344, 159 (2001), arXiv:hep-th/0002136 .
* Ejiri _et al._ (2009) S. Ejiri, F. Karsch, E. Laermann, C. Miao, S. Mukherjee, P. Petreczky, C. Schmidt, W. Soeldner, and W. Unger, Phys. Rev. D 80, 094505 (2009), arXiv:0909.5122 [hep-lat] .
* Kaczmarek _et al._ (2011) O. Kaczmarek, F. Karsch, E. Laermann, C. Miao, S. Mukherjee, P. Petreczky, C. Schmidt, W. Soeldner, and W. Unger, Phys. Rev. D 83, 014504 (2011), arXiv:1011.3130 [hep-lat] .
* Klein (2017) B. Klein, Phys. Rept. 707-708, 1 (2017), arXiv:1710.05357 [hep-ph] .
* Schaefer and Wambach (2007) B.-J. Schaefer and J. Wambach, Phys. Rev. D 75, 085015 (2007), arXiv:hep-ph/0603256 .
* Pelissetto and Vicari (2002) A. Pelissetto and E. Vicari, Phys. Rept. 368, 549 (2002), arXiv:cond-mat/0012164 .
* Wallace and Zia (1975) D. Wallace and R. Zia, Phys. Rev. B 12, 5340 (1975).
* Brezin and Wallace (1973) E. Brezin and D. Wallace, Phys. Rev. B 7, 1967 (1973).
* Tetradis and Wetterich (1994) N. Tetradis and C. Wetterich, Nucl. Phys. B 422, 541 (1994), arXiv:hep-ph/9308214 .
* Von Gersdorff and Wetterich (2001) G. Von Gersdorff and C. Wetterich, Phys. Rev. B 64, 054513 (2001), arXiv:hep-th/0008114 .
* Litim and Pawlowski (2001) D. F. Litim and J. M. Pawlowski, Phys. Lett. B 516, 197 (2001), arXiv:hep-th/0107020 .
* De Polsi _et al._ (2020) G. De Polsi, I. Balog, M. Tissier, and N. Wschebor, Phys. Rev. E 101, 042113 (2020), arXiv:2001.07525 [cond-mat.stat-mech] .
* Balog _et al._ (2019) I. Balog, H. Chaté, B. Delamotte, M. Marohnic, and N. Wschebor, Phys. Rev. Lett. 123, 240604 (2019), arXiv:1907.01829 [cond-mat.stat-mech] .
* Kos _et al._ (2015) F. Kos, D. Poland, D. Simmons-Duffin, and A. Vichi, JHEP 11, 106 (2015), arXiv:1504.07997 [hep-th] .
* Kos _et al._ (2014) F. Kos, D. Poland, and D. Simmons-Duffin, JHEP 11, 109 (2014), arXiv:1406.4858 [hep-th] .
* Landau and Lifshitz (1980) L. Landau and E. Lifshitz, _Statistical Physics (Part 1)_ , 3rd ed. (Pergamon Press Ltd., 1980).
* Braun _et al._ (2021) J. Braun, Y.-r. Chen, W.-j. Fu, C. Huang, F. Ihssen, J. Horak, J. M. Pawlowski, F. Rennecke, D. Rosenblüh, B. Schallmo, C. Schneider, S. Töpfel, Y.-y. Tan, R. Wen, N. Wink, and S. Yin, (members as of January 2021).
* Litim (2001) D. F. Litim, Phys. Rev. D64, 105007 (2001), arXiv:hep-th/0103195 [hep-th] .
* Litim (2000) D. F. Litim, Phys. Lett. B 486, 92 (2000), arXiv:hep-th/0005245 .
# bssm: Bayesian Inference of Non-linear and Non-Gaussian State Space Models
in R
by Jouni Helske and Matti Vihola
###### Abstract
We present an R package bssm for Bayesian non-linear/non-Gaussian state space
modelling. Unlike the existing packages, bssm allows for easy-to-use
approximate inference based on Gaussian approximations such as the Laplace
approximation and the extended Kalman filter. The package also accommodates
discretely observed latent diffusion processes. The inference is based on
fully automatic, adaptive Markov chain Monte Carlo (MCMC) on the
hyperparameters, with optional importance sampling post-correction to
eliminate any approximation bias. The package also implements a direct pseudo-
marginal MCMC and a delayed acceptance pseudo-marginal MCMC using intermediate
approximations. The package offers an easy-to-use interface to define models
with linear-Gaussian state dynamics with non-Gaussian observation models, and
has an Rcpp interface for specifying custom non-linear and diffusion models.
## Introduction
State space models (SSM) are a flexible class of latent variable models
commonly used in analysing time series data (cf. Durbin and Koopman, 2012).
There are a number of packages available for state space modelling for R,
especially for two special cases: a linear-Gaussian SSM (LGSSM) where both the
observation and state densities are Gaussian with linear relationships with
the states, and an SSM with discrete state space, which is sometimes called a
hidden Markov model (HMM). These classes admit analytically tractable marginal
likelihood functions and conditional state distributions (conditioned on the
observations), making inference relatively straightforward. See for example
(Petris and Petrone, 2011; Tusell, 2011; Helske, 2017; Helske and Helske,
2019) for a review of some of the R packages dealing with these types of models.
The present R package bssm is designed for Bayesian inference of general state
space models with non-Gaussian and/or non-linear observational and state
equations. The package's primary aim is to provide easy-to-use and fast
functions for fully Bayesian inference with common time series models such as
the basic structural time series model (Harvey, 1989) with exogenous covariates
and simple stochastic volatility models. The package also accommodates custom
non-linear models and discretised diffusion models.
When extending the state space modelling to non-linear or non-Gaussian models,
some difficulties arise. As the likelihood is no longer analytically
tractable, computing the latent state distributions, as well as hyperparameter
estimation of the model becomes more challenging. One general option is to use
Markov chain Monte Carlo (MCMC) methods targeting the full joint posterior of
hyperparameters and the latent states, for example by Gibbs sampling or
Hamiltonian Monte Carlo. Unfortunately, the joint posterior is typically very
high dimensional and due to the strong autocorrelation structures of the state
densities, the efficiency of such methods can be relatively poor. Another
asymptotically exact approach is based on the pseudo-marginal particle MCMC
approach (Andrieu et al., 2010), where the likelihood function and the state
distributions are estimated using sequential Monte Carlo (SMC) i.e. the
particle filter (PF) algorithm. Instead of computationally demanding Monte
Carlo methods, approximation-based methods such as the extended and unscented
Kalman filters may be used, as well as Laplace approximations, which are provided for
example by the INLA (Lindgren and Rue, 2015) R package. The latter are
computationally appealing, but may lead to hard-to-quantify biases of the
posterior.
Some of the R packages suitable for Bayesian state space modelling include
pomp (King et al., 2016), rbi (Jacob and Funk, 2020), nimbleSMC (Michaud et
al., 2020; NIMBLE Development Team, 2020), and rstan (Stan Development Team,
2020). With the package pomp, the user defines the model using R or C snippets for
simulation from and evaluation of the latent state and observation level
densities, allowing flexible model construction. The rbi package is an
interface to LibBi (Murray, 2015), a standalone software with a focus on
Bayesian state space modelling on high-performance computers. The pomp package
provides several simulation-based inference methods mainly based on iterated
filtering and maximum likelihood, whereas rbi is typically used for Bayesian
inference via particle MCMC. For a more detailed comparison of differences of
rbi/LibBi and pomp with examples, see (Funk and King, 2020). The nimbleSMC
package contains some particle filtering algorithms which can be used in the
general Nimble modelling system (de Valpine et al., 2017), whereas the rstan
package provides an R interface to the Stan C++ package, a general statistical
modelling platform (Carpenter et al., 2017).
The key difference to the aforementioned packages and motivation behind the
present bssm package is to combine the use of fast approximation-based methods
with a Monte Carlo correction step, leading to computationally efficient and
unbiased (approximation error free) inference of the joint posterior of
hyperparameters and latent states, as suggested in (Vihola et al., 2020). In a
nutshell, the method uses MCMC which targets an approximate marginal posterior
of the hyperparameters, and an importance sampling type weighting which
provides asymptotically exact inference on the joint posterior of
hyperparameters and the latent states. In addition to this two-stage
procedure, bssm also supports delayed acceptance pseudo-marginal MCMC
(Christen and Fox, 2005) using the approximations, and direct pseudo-marginal
MCMC. To our knowledge, importance sampling and delayed acceptance in this
form are not available in other Bayesian state space modelling packages in R.
## Supported models
We denote the sequence of observations $(y_{1},\ldots,y_{T})$ as $y$, and the
sequence of latent state variables $(\alpha_{1},\ldots,\alpha_{T})$ as
$\alpha$. The latent states $\alpha_{t}\in\mathbb{R}^{d}$ are typically
vector-valued, whereas we focus mainly on scalar observations
$y_{t}\in\mathbb{R}$ (vector-valued observations are also supported, assuming
conditional independence (given $\alpha_{t}$) in case of non-Gaussian
observations).
A general state space model consists of two parts: observation level densities
$g_{t}^{(\theta)}(y_{t}|\alpha_{t})$ and latent state transition densities
$\mu_{t}^{(\theta)}(\alpha_{t+1}|\alpha_{t})$. Typically both
$g_{t}^{(\theta)}$ and $\mu_{t}^{(\theta)}$ depend on unknown parameter vector
$\theta$ for which we can define arbitrary prior $p(\theta)$.
In a linear-Gaussian SSM, both $g_{t}^{(\theta)}$ and $\mu_{t}^{(\theta)}$ are
Gaussian densities and they depend linearly on the current and previous state
vectors, respectively. Section Models with linear-Gaussian state dynamics
describes a common extension to these models supported by bssm, which relaxes
the assumptions on observational density $g_{t}^{(\theta)}$, by allowing
exponential family links, and stochastic volatility models. While the main
focus of bssm is on state space models with linear-Gaussian state dynamics,
there is also support for more general non-linear models, discussed briefly in
Section Other state space models. Section Using the bssm package describes how
arbitrary models based on these definitions are constructed in bssm.
### Models with linear-Gaussian state dynamics
The primary class of models supported by bssm consists of SSMs with linear-
Gaussian state dynamics of the form
$\displaystyle\alpha_{t+1}$
$\displaystyle=c_{t}+T_{t}\alpha_{t}+R_{t}\eta_{t},$
where $c_{t}\in\mathbb{R}^{d}$, $T_{t}\in\mathbb{R}^{d\times d}$, and
$R_{t}\in\mathbb{R}^{d\times k}$ can depend on the unknown parameters $\theta$
and covariates. The noise terms $\eta_{t}\sim N(0,I_{k})$ and $\alpha_{1}\sim
N(a_{1},P_{1})$ are independent. These state dynamics can be combined with the
observational level density $g_{t}$ of the form
$g_{t}(y_{t}|d_{t}+Z_{t}\alpha_{t},\phi,u_{t}),$
where parameters $\phi$ and the known vector $u_{t}$ are distribution specific
and can be omitted in some cases. Currently, the following observational level
distributions are supported:
* •
Gaussian distribution: $y_{t}=d_{t}+Z_{t}\alpha_{t}+H_{t}\epsilon_{t}$ with
$\epsilon_{t}\sim N(0,I)$.
* •
Poisson distribution:
$g_{t}(y_{t}|d_{t}+Z_{t}\alpha_{t},u_{t})=\textrm{Poisson}(u_{t}\exp(d_{t}+Z_{t}\alpha_{t}))$,
where $u_{t}$ is the known exposure at time $t$.
* •
Binomial distribution:
$g_{t}(y_{t}|d_{t}+Z_{t}\alpha_{t},u_{t})=\textrm{B}(u_{t},\operatorname{logit}^{-1}(d_{t}+Z_{t}\alpha_{t}))$,
where $u_{t}$ is the number of trials and
$\operatorname{logit}^{-1}(d_{t}+Z_{t}\alpha_{t})$ is the probability of the
success.
* •
Negative binomial distribution:
$g_{t}(y_{t}|d_{t}+Z_{t}\alpha_{t},\phi,u_{t})=\textrm{NB}(\exp(d_{t}+Z_{t}\alpha_{t}),\phi,u_{t})$,
where $u_{t}\exp(d_{t}+Z_{t}\alpha_{t})$ is the expected value, $\phi$ is the
dispersion parameter, and $u_{t}$ is a known offset term.
* •
Gamma distribution:
$g_{t}(y_{t}|d_{t}+Z_{t}\alpha_{t},\phi,u_{t})=\textrm{Gamma}(\exp(d_{t}+Z_{t}\alpha_{t}),\phi,u_{t})$,
where $u_{t}\exp(d_{t}+Z_{t}\alpha_{t})$ is the expected value, $\phi$ is the
shape parameter, and $u_{t}$ is a known offset term.
* •
Stochastic volatility model:
$y_{t}=\exp(\alpha_{t}/2)\epsilon_{t}$, with
$\epsilon_{t}\sim N(0,1)$. Here the state dynamics is also fixed as
$\alpha_{t+1}=\mu+\rho(\alpha_{t}-\mu)+\sigma_{\eta}\eta_{t}$, with
$\eta_{t}\sim N(0,1)$ and $\alpha_{1}\sim
N(\mu,\sigma^{2}_{\eta}/(1-\rho^{2}))$.
For multivariate models, these distributions can be combined arbitrarily,
except for the stochastic volatility model, which is currently handled
separately. Also, for a fully Gaussian model, the observational level errors
$\epsilon_{t}$ can be correlated across time series.
### Other state space models
The general non-linear Gaussian model in bssm has the following form:
$\displaystyle y_{t}$
$\displaystyle=Z(t,\alpha_{t},\theta)+H(t,\alpha_{t},\theta)\epsilon_{t},$
$\displaystyle\alpha_{t+1}$
$\displaystyle=T(t,\alpha_{t},\theta)+R(t,\alpha_{t},\theta)\eta_{t},$
$\displaystyle\alpha_{1}$ $\displaystyle\sim N(a_{1}(\theta),P_{1}(\theta)),$
with $t=1,\ldots,n$, $\epsilon_{t}\sim N(0,\textrm{I}_{p})$, and
$\eta_{t}\sim N(0,\textrm{I}_{k})$.
The bssm package also supports models where the state equation is defined as a
continuous-time diffusion model of the form
$\textrm{d}\alpha_{t}=\mu(\alpha_{t},\theta)\textrm{d}t+\sigma(\alpha_{t},\theta)\textrm{d}B_{t},\quad
t\geq 0,$
where $B_{t}$ is a Brownian motion and where $\mu$ and $\sigma$ are scalar-
valued functions, with the univariate observation density
$p(y_{k}|\alpha_{k})$ defined at integer times $k=1,\ldots,n$.
## Inference methods
The main goal of bssm is to facilitate easy-to-use full Bayesian inference of
the joint posterior $p(\alpha,\theta|y)$ for models discussed in Section
Supported models. The inference methods implemented in bssm are based on a
factorised approach where the joint posterior of hyperparameters $\theta$ and
latent states $\alpha$ is given as
$p(\alpha,\theta|y)\propto
p(\theta)p(\alpha,y|\theta)=p(\theta)p(y|\theta)p(\alpha|y,\theta),$
where $p(y|\theta)$ is the parameter marginal likelihood and
$p(\alpha|y,\theta)$ is the smoothing distribution.
All the inference algorithms are based on a Markov chain Monte Carlo on the
parameters $\theta$, whose single iteration may be summarised as follows:
1: Draw a proposal $\theta^{\prime}\sim N(\theta^{i-1},\Sigma_{i-1})$.
2: Calculate the (approximate) marginal likelihood
$\hat{p}(y|\theta^{\prime})$.
3: Accept the proposal with probability
$\alpha:=\min\Big\{1,\frac{p(\theta^{\prime})\hat{p}(y|\theta^{\prime})}{p(\theta^{i-1})\hat{p}(y|\theta^{i-1})}\Big\}$.
4: If the proposal $\theta^{\prime}$ is accepted, set
$\theta^{i}=\theta^{\prime}$. Otherwise, set $\theta^{i}=\theta^{i-1}$.
5: Adapt the proposal covariance matrix $\Sigma_{i-1}\to\Sigma_{i}$.
Algorithm 1 One iteration of MCMC algorithm for sampling $p(\theta|y)$.
The adaptation step 5 in bssm currently implements the robust adaptive
Metropolis algorithm (Vihola, 2012) with a fixed target acceptance rate (0.234
by default) provided by the ramcmc package (Helske, 2016). The (approximate)
marginal likelihood $\hat{p}(y|\theta)$ takes different forms, leading to
different inference algorithms, discussed below.
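For intuition, steps 1-4 of Algorithm 1 can be sketched in plain R as follows. This is a simplified illustration only: the adaptation step is omitted, and log_prior and approx_loglik are hypothetical user functions returning $\log p(\theta)$ and $\log\hat{p}(y|\theta)$:
mcmc_step <- function(theta, loglik, Sigma, log_prior, approx_loglik) {
  theta_prop  <- theta + drop(t(chol(Sigma)) %*% rnorm(length(theta)))  # step 1
  loglik_prop <- approx_loglik(theta_prop)                              # step 2
  log_alpha <- min(0, log_prior(theta_prop) + loglik_prop -
    log_prior(theta) - loglik)                                          # step 3
  if (log(runif(1)) < log_alpha) {                                      # step 4
    theta <- theta_prop
    loglik <- loglik_prop
  }
  list(theta = theta, loglik = loglik)   # step 5 (adaptation of Sigma) omitted
}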
### Direct inference: marginal algorithm and particle MCMC
The simplest case is with a linear-Gaussian SSM, where we can use the exact
marginal likelihood $\hat{p}(y|\theta)=p(y|\theta)$, in which case Algorithm 1
reduces to (an adaptive) random-walk Metropolis algorithm targeting the
posterior marginal of the parameters $\theta$. Inference from the full
posterior may be done using the simulation smoothing algorithm (Durbin and
Koopman, 2002) conditional on the sampled hyperparameters.
The other ‘direct’ option, which can be used with any model, is using the
bootstrap particle filter (BSF) (Gordon et al., 1993), which leads to a
_random_ $\hat{p}(y|\theta)$ which is an unbiased estimator of $p(y|\theta)$.
In this case, Algorithm 1 reduces to (an adaptive) particle marginal
Metropolis-Hastings (Andrieu et al., 2010). Full posterior inference is
achieved simultaneously, by picking particle trajectories based on their
ancestries as in the filter-smoother algorithm (Kitagawa, 1996). Note that
with BSF, the desired acceptance rate needs to be lower, depending on the
number of particles used (Doucet et al., 2015).
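For reference, in standard SMC notation the BSF likelihood estimate with $M$ particles takes the form
$\displaystyle\hat{p}(y|\theta)=\prod_{t=1}^{T}\frac{1}{M}\sum_{m=1}^{M}w_{t}^{(m)}\,,$
where $w_{t}^{(m)}=g_{t}^{(\theta)}(y_{t}|\alpha_{t}^{(m)})$ are the unnormalised particle weights; the unbiasedness of this estimator is what makes the pseudo-marginal algorithm exact.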
### Approximate inference: Laplace approximation and the extended Kalman
filter
The direct BSF discussed above may be used with any non-linear and/or non-
Gaussian model, but may be slow and/or mix poorly. To alleviate this, bssm
provides computationally efficient (intermediate) approximate inference in
case of non-Gaussian observation models in Section Models with linear-Gaussian
state dynamics, and in case of non-linear dynamics in Section Other state
space models.
With non-Gaussian models of Section Models with linear-Gaussian state
dynamics, we use an approximating Gaussian model $\tilde{p}(y,\alpha|\theta)$
which is a Laplace approximation of $p(\alpha,y|\theta)$ following (Durbin and
Koopman, 2000). We write the likelihood as follows
$\displaystyle p(y|\theta)$ $\displaystyle=\int
p(\alpha,y|\theta)\textrm{d}\alpha=\tilde{p}(y|\theta)E\left[\frac{p(y|\alpha,\theta)}{\tilde{p}(y|\alpha,\theta)}\right],$
where $\tilde{p}(y|\theta)$ is the likelihood of the Laplace approximation and
the expectation is taken with respect to its conditional
$\tilde{p}(\alpha|y,\theta)$ (Durbin and Koopman, 2012). Indeed, denoting
$\hat{\alpha}$ as the mode of $\tilde{p}(\alpha|\theta,y)$, we may write
$\displaystyle\log p(y|\theta)$
$\displaystyle=\log\tilde{p}(y|\theta)+\log\frac{p(y|\hat{\alpha},\theta)}{\tilde{p}(y|\hat{\alpha},\theta)}+\log
E\left[\frac{p(y|\alpha,\theta)/p(y|\hat{\alpha},\theta)}{\tilde{p}(y|\alpha,\theta)/\tilde{p}(y|\hat{\alpha},\theta)}\right].$
If $\tilde{p}$ resembles $p$ for typical values of $\alpha$, the last term,
the logarithm of the expectation, is approximately zero. We take
$\hat{p}(y|\theta)$ to be the expression on the right, dropping the expectation.
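Explicitly, the resulting approximate log-likelihood used in Algorithm 1 is
$\displaystyle\log\hat{p}(y|\theta)=\log\tilde{p}(y|\theta)+\log\frac{p(y|\hat{\alpha},\theta)}{\tilde{p}(y|\hat{\alpha},\theta)}\,.$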
When $\hat{p}$ is approximate, the MCMC algorithm targets an approximate
posterior marginal. Approximate full inference may be done analogously as in
Section Direct inference: marginal algorithm and particle MCMC, by simulating
trajectories conditional on the sampled parameter configurations $\theta^{i}$.
We believe that approximate inference is often good enough for model
development, but strongly recommend using post-correction as discussed in
Section Post-processing by importance weighting to check the validity of the
final inference.
In addition to these algorithms, bssm also supports $\hat{p}(y|\theta)$ based
on the extended KF (EKF) or iterated EKF (IEKF) (Jazwinski, 1970) which can be
used for models with non-linear dynamics (Section Other state space models).
Approximate smoothing based on (iterated) EKF is also supported. It is also
possible to perform direct inference as in Section Direct inference: marginal
algorithm and particle MCMC, but instead of the BSF, employ a particle filter
based on the EKF (Van Der Merwe et al., 2001).
### Post-processing by importance weighting
The inference methods in Section Approximate inference: Laplace approximation
and the extended Kalman filter are computationally efficient, but come with a
bias. The bssm implements importance-sampling type post-correction as
discussed in (Vihola et al., 2020). Indeed, having MCMC samples $(\theta^{i})$
from the approximate posterior constructed as in Section Approximate
inference: Laplace approximation and the extended Kalman filter, we may
produce (random) weights and latent states, such that the weighted samples
form estimators which are consistent with respect to the true posterior
$p(\alpha,\theta|y)$.
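Concretely, given weighted samples $(\theta^{i},\alpha^{i},w_{i})$, posterior expectations of a function $h$ are estimated with a standard self-normalised importance sampling estimator of the form
$\displaystyle E[h(\alpha,\theta)|y]\approx\frac{\sum_{i}w_{i}\,h(\alpha^{i},\theta^{i})}{\sum_{i}w_{i}}\,.$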
The primary approach which we recommend for post-correction is based on a
“$\psi$-APF”, a particle filter using the Gaussian approximations of
Section Approximate inference: Laplace approximation and the extended Kalman
filter. In essence, this particle filter employs the dynamics and a look-ahead
strategy coming from the approximation, which leads to low-variance
estimators; see (Vihola et al., 2020) and the package vignettes
(https://cran.r-project.org/package=bssm/vignettes/psi_pf.html) for a
more detailed description. Naturally, the $\psi$-APF can also be used in place of
BSF in direct inference of Section Direct inference: marginal algorithm and
particle MCMC.
### Direct inference using approximation-based delayed acceptance
An alternative to approximate MCMC and post-correction, bssm also supports an
analogous delayed acceptance method (Christen and Fox, 2005; Banterle et al.,
2019) (here denoted by DA-MCMC). This algorithm is similar to Algorithm 1, but in case
of first-stage ‘acceptance’, leads to a second-stage acceptance using the same weights as
the post-correction would; see (Vihola et al., 2020) for details. Note that, as
in the direct approach for non-Gaussian/non-linear models, the desired acceptance
rate with DA-MCMC should be lower than the default 0.234.
The DA-MCMC also leads to consistent posterior estimators, and often
outperforms the direct particle marginal Metropolis-Hastings. However,
empirical findings (Vihola et al., 2020) and theoretical considerations
(Franks and Vihola, 2020) suggest that approximate inference with post-
correction may often be preferable. The bssm supports parallelisation of the
post-correction using OpenMP, which may further favour the latter approach.
### Inference with diffusion state dynamics
For general continuous-time diffusion models, the transition densities are
intractable. The bssm uses the Milstein time-discretisation scheme for
approximate simulation, and inference is based on the corresponding BSF. A finer
time-discretisation mesh gives less bias than a coarser one, at the cost of
increased computational complexity. The DA and IS approaches can be used to
speed up the inference by using a coarse discretisation in the first stage and
a finer mesh in the second stage. For a comparison of the DA and IS approaches
in the case of a geometric Brownian motion model, see (Vihola et al., 2020).
## Using the bssm package
Main functions of bssm related to the MCMC sampling, approximations, and
particle filtering are written in C++, with the help of the Rcpp (Eddelbuettel and
François, 2011) and RcppArmadillo (Eddelbuettel and Sanderson, 2014) packages.
On the R side, the package uses S3 methods to provide a relatively unified
workflow independent of the type of the model one is working with. The model
building functions such as bsm_ng and svm are used to construct model
objects of the same name, which can then be passed to other methods, such as logLik
and run_mcmc, which compute the log-likelihood value and run the MCMC algorithm,
respectively. We will now briefly describe the main functionality of bssm. For
more detailed descriptions of different functions and their arguments, see the
corresponding documentation in R and the package vignettes.
### Constructing the model
For models with linear-Gaussian state dynamics, bssm includes some predefined
models such as bsm_lg and bsm_ng for univariate Gaussian and non-Gaussian
structural time series models with external covariates, for which the user only
needs to supply the data and priors for unknown model parameters. In addition,
bssm supports general model building functions ssm_ulg, ssm_mlg for custom
univariate and multivariate Gaussian models and ssm_ung, and ssm_mng for their
non-Gaussian counterparts. For these models, users need to supply their own R
functions for the evaluation of the log prior density and for updating the
model matrices given the current value of the parameter vector $\theta$. It is
also possible to avoid defining the matrices manually by leveraging the
formula interface of the KFAS package (Helske, 2017) together with the as_bssm
function, which converts a KFAS model to an equivalent bssm model object. This is
especially useful in the case of complex multivariate models with covariates.
As an example, consider a Gaussian local linear trend model of the form
$\displaystyle y_{t}=\mu_{t}+\epsilon_{t},$
$\displaystyle\mu_{t+1}=\mu_{t}+\nu_{t}+\eta_{t},$
$\displaystyle\nu_{t+1}=\nu_{t}+\xi_{t},$
with zero-mean Gaussian noise terms $\epsilon_{t},\eta_{t},\xi_{t}$ with
unknown standard deviations. Using the time series of the mean annual
temperature (in Fahrenheit) in New Haven, Connecticut, from 1912 to 1971
(available in the datasets package) as an example, this model can be built
with the bsm_lg function as
library("bssm")data("nhtemp", package = "datasets")prior <\- halfnormal(1,
10)bsm_model <\- bsm_lg(y = nhtemp, sd_y = prior, sd_level = prior, sd_slope =
prior)
Here we use the helper function halfnormal, which defines a half-Normal prior
distribution for the standard deviation parameters; the first argument
defines the initial value of the parameter, and the second the scale
parameter of the half-Normal distribution. Other prior options are normal,
tnormal (truncated normal), gamma, and uniform; a brief sketch of their use follows.
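As a rough sketch of these constructors (the signatures of normal and halfnormal appear in the appendix code of this paper; for the others we assume the analogous order of an initial value followed by the distribution parameters, so check the documentation before use):
b_prior <- normal(0, 0, 10)    # init, mean, sd
g_prior <- gamma(0.5, 2, 0.01) # init, shape, rate (assumed order)
u_prior <- uniform(0.5, 0, 1)  # init, min, max (assumed order)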
As an example of a multivariate model, consider a bivariate Poisson model with a
latent random walk, defined as
$\displaystyle y_{i,t}\sim\textrm{Poisson}(\exp(x_{t})),\quad i=1,2,$
$\displaystyle x_{t+1}=x_{t}+\eta_{t},$
with $\eta_{t}\sim N(0,\sigma^{2})$, and prior
$\sigma\sim\textrm{Gamma}(2,0.01)$. This model can be built with the ssm_mng
function as
# Generate observations
set.seed(1)
x <- cumsum(rnorm(50, sd = 0.2))
y <- cbind(rpois(50, exp(x)), rpois(50, exp(x)))
# Log prior density function
prior_fn <- function(theta) {
  dgamma(theta, 2, 0.01, log = TRUE)
}
# Model parameters from hyperparameters
update_fn <- function(theta) {
  list(R = array(theta, c(1, 1, 1)))
}
# Define the model
mng_model <- ssm_mng(y = y, Z = matrix(1, 2, 1), T = 1, R = 0.1, P1 = 1,
  distribution = "poisson", init_theta = 0.1,
  prior_fn = prior_fn, update_fn = update_fn)
Here the user-defined functions prior_fn and update_fn define the log-prior
for the model and how the model components depend on the hyperparameters
$\theta$, respectively.
For models where the state equation is no longer linear-Gaussian, we use a
pointer-based interface, defining all model components, as well as the functions
for the Jacobians of $Z(\cdot)$ and $T(\cdot)$ needed by the extended
Kalman filter, as C++ snippets. A general non-linear Gaussian model can be
defined with the function ssm_nlg. Discretely observed diffusion models, where
the state process is assumed to be a continuous stochastic process, can be
constructed using the ssm_sde function, which takes pointers to C++ functions
defining the drift, the diffusion, the derivative of the diffusion function, and
the log-densities of the observations and the prior. As an example of the
latter, let us consider an Ornstein–Uhlenbeck process
$\textrm{d}\alpha_{t}=\rho(\nu-\alpha_{t})\textrm{d}t+\sigma\textrm{d}B_{t},$
with parameters $\theta=(\rho,\nu,\sigma)=(0.5,2,1)$ and the initial condition
$\alpha_{0}=1$. For the observation density, we use the Poisson distribution with
parameter $\exp(\alpha_{k})$. We first simulate a trajectory
$x_{0},\ldots,x_{n}$ using the sde.sim function from the sde package (Iacus,
2016) and use it to simulate the observations $y$:
library("sde")x <\- sde.sim(t0 = 0, T = 100, X0 = 1, N = 100, drift =
expression(0.5 * (2 - x)), sigma = expression(1), sigma.x = expression(0))y
<\- rpois(100, exp(x[-1]))
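As a point of reference (a standard result, not specific to bssm), the Ornstein–Uhlenbeck process has an exact Gaussian transition density,
$\alpha_{t+\Delta}\mid\alpha_{t}\sim N\!\left(\nu+(\alpha_{t}-\nu)e^{-\rho\Delta},\ \frac{\sigma^{2}}{2\rho}\left(1-e^{-2\rho\Delta}\right)\right),$
which makes it a convenient test case for assessing the bias of the time-discretisation.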
We then compile and build the model as
Rcpp::sourceCpp("ssm_sde_template.cpp")
pntrs <- create_xptrs()
sde_model <- ssm_sde(y, pntrs$drift, pntrs$diffusion, pntrs$ddiffusion,
  pntrs$obs_density, pntrs$prior, c(0.5, 2, 1), 1, FALSE)
The templates for the C++ functions for SDE and non-linear Gaussian models can
be found in the package vignettes on CRAN
(https://CRAN.R-project.org/package=bssm).
### Markov chain Monte Carlo in bssm
The main purpose of bssm is to allow computationally efficient MCMC-based
inference for various state space models. For this task, the method run_mcmc can
be used. The function takes a number of arguments, depending on the model
class, but defaults are provided for many of them. For linear-Gaussian
models, we only need to supply the number of iterations. Using the previously
created local linear trend model for the New Haven temperature data of Section
Constructing the model, we run an MCMC with 100,000 iterations, of which the first
10,000 are discarded as burn-in (the burn-in phase is also used for the
adaptation of the proposal distribution):
mcmc_bsm <- run_mcmc(bsm_model, iter = 1e5, burnin = 1e4)
The print method for the output of the MCMC algorithms gives a summary of the
results, and detailed summaries for $\theta$ and $\alpha$ can be obtained
using the summary function. For all MCMC algorithms, bssm uses a so-called jump
chain representation of the Markov chain $X_{1},\ldots,X_{n}$, where we only
store each accepted $X_{k}$ and the number of steps the chain stayed in the same
state. For example, if $X_{1:n}=(1,2,2,1,1,1)$, we represent such a chain as
$\tilde{X}=(1,2,1)$, $N=(1,2,3)$. This approach reduces the storage space and
makes it more computationally efficient to use importance sampling type
correction algorithms. One drawback of this approach is that the results from
the MCMC run correspond to weighted samples from the target posterior, so some
of the commonly used postprocessing tools need to be adjusted. Of course, for
methods other than IS-weighting, the simplest option is to expand the samples
back to a typical Markov chain using the stored counts $N$. This can be done
with the function expand_sample, which returns an object of class "mcmc"
of the coda package (Plummer et al., 2006), so the plotting and diagnostic
methods of coda can also be used; a minimal sketch of the expansion is shown below.
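As a minimal sketch of what this expansion amounts to (base R only; expand_sample additionally sets the coda attributes):
x_tilde <- c(1, 2, 1) # stored accepted states
N <- c(1, 2, 3)       # number of steps spent in each state
x_full <- rep(x_tilde, times = N)
# x_full is c(1, 2, 2, 1, 1, 1), the original chain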
We can also directly transform the posterior samples to a "data.frame" object
by using the as.data.frame method for the MCMC output (for IS-weighting, the
returned data frame contains an additional column weights). This is useful, for
example, for visualization with the ggplot2 (Wickham, 2016) package:
library("ggplot2")d <\- as.data.frame(mcmc_bsm, variable = "theta")ggplot(d,
aes(x = value)) + geom_density(bw = 0.1, fill = "#9ebcda") + facet_wrap(~
variable, scales = "free") + theme_bw()
Figure 1: Posterior densities of hyperparameters $\theta$ of the linear-
Gaussian model for nhtemp data.
Figure 1 shows the estimated posterior densities of the three standard
deviation parameters of the model. The relatively large observational level
standard deviation $\sigma_{y}$ suggests that the underlying latent
temperature series is much smoother than the observed series, which can also
be seen from Figure 2, which shows the original observations (black dots) spread
around the estimated temperature series (solid line).
library("dplyr")d <\- as.data.frame(mcmc_bsm, variable = "states")summary_y
<\- d %>% filter(variable == "level") %>% group_by(time) %>% summarise(mean =
mean(value), lwr = quantile(value, 0.025), upr = quantile(value,
0.975))ggplot(summary_y, aes(x = time, y = mean)) + geom_ribbon(aes(ymin =
lwr, ymax = upr), alpha = 0.25) + geom_line() + geom_point(data =
data.frame(mean = nhtemp, time = time(nhtemp))) + theme_bw() + xlab("Year") +
ylab("Mean annual temperature in New Haven")
Figure 2: Observed annual average temperatures in New Haven (black dots) and
predicted mean (solid line) with 95% prediction intervals (grey ribbon) from
bssm.
For non-Gaussian models, the default MCMC algorithm is approximate inference
based on a Laplace approximation combined with importance sampling post-
correction (Section Post-processing by importance weighting). It is also
possible to first perform approximate MCMC using the argument mcmc_type =
"approx", and then perform the post-correction step using the results from the
approximate MCMC. In doing so, we can also use the function suggest_N to find
a suitable number of particles $N$ for the $\psi$-APF in the spirit of Doucet et
al. (2015):
out_approx <- run_mcmc(mng_model, mcmc_type = "approx", iter = 50000)
est_N <- suggest_N(mng_model, out_approx)
out_exact <- post_correct(mng_model, out_approx, particles = est_N$N)
The function suggest_N computes the standard deviation of the logarithm of the
post-correction weights (i.e. the random part of the log-likelihood of the
$\psi$-APF) at the approximate MAP estimate of $\theta$ over a range of $N$, and
returns a list with component N, the smallest number of particles for which this
standard deviation is less than one. For small and moderate problems,
10-20 particles are typically enough.
### Filtering and smoothing
The bssm also offers separate methods for performing (approximate) state
filtering and smoothing which may be useful in some custom settings.
For LGSSMs, the methods kfilter and smoother perform Kalman filtering and
smoothing. For non-Gaussian models with linear-Gaussian dynamics, approximate
filtering and smoothing estimates can be obtained by calls to kfilter and
smoother, in which case these functions first construct an approximating
Gaussian model, to which the Kalman filter/smoother is then applied. For non-
linear models defined by ssm_nlg, we can run approximate filtering using the
extended Kalman filter with the function ekf, the unscented Kalman filter with
the function ukf, or the iterated EKF (IEKF) by changing the argument
iekf_iter of the ekf function. The function ekf_smoother can be used for smoothing
based on the EKF/IEKF.
For particle filtering, the bssm package supports a general bootstrap particle
filter for all model classes of the bssm (function bootstrap_filter). For
ssm_nlg models, extended Kalman particle filtering (Van Der Merwe et al., 2001) is
also supported (function ekpf_filter).
For particle smoothing, the function particle_smoother with smoothing based on
the BSF is available for all models. In addition, the $\psi$-APF (using the argument
method = "psi") is available for all models except the ssm_sde class.
Currently, only the filter-smoother approach (Kitagawa, 1996) to particle
smoothing is supported; a short usage sketch follows.
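As a brief usage sketch for the models built earlier (the function names appear above; the exact argument names, such as particles, are assumptions to be checked against the documentation):
out_kf <- kfilter(bsm_model)   # Kalman filter for the linear-Gaussian model
out_sm <- smoother(bsm_model)  # Kalman smoother
out_bsf <- bootstrap_filter(mng_model, particles = 1000)
out_ps <- particle_smoother(mng_model, particles = 1000, method = "psi")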
## Comparison of IS-MCMC and HMC
Vihola et al. (2020) compared the computational efficiency of delayed
acceptance MCMC and importance sampling type MCMC approaches in various
settings. Here we present a small experiment comparing generic Hamiltonian
Monte Carlo using the NUTS sampler (Hoffman and Gelman, 2014) in rstan with
IS-MCMC in bssm. Given that bssm is specialized for state space models
whereas Stan is a general-purpose tool suitable for a much wider range of problems,
it is to be expected that bssm performs better in terms of computational
efficiency. The purpose of this experiment is to illustrate this fact, i.e.,
that there is still demand for specialized algorithms for various types of
statistical models. For the complete code of the experiment, see the Appendix: Code
for section Comparison of IS-MCMC and HMC.
We consider a random walk with drift model with negative binomial
observations and a known covariate $x_{t}$, defined as
$\displaystyle y_{t}\sim\textrm{NB}(\exp(\beta x_{t}+\mu_{t}),\phi),$
$\displaystyle\mu_{t+1}=\mu_{t}+\nu_{t}+\eta_{t},$
$\displaystyle\nu_{t+1}=\nu_{t},$
with zero-mean Gaussian noise term $\eta_{t}$ with unknown standard deviation
$\sigma_{\mu}$. Based on this, we simulate one realization of $y$ and $x$ with
$n=200$, $\phi=5$, $\beta=-0.9$, $\nu=0.01$, $\sigma_{\mu}=0.1$.
For the IS approach we use the bsm_ng function for model building, with prior
variances 100 and 0.01 for the initial states $\mu_{1}$ and $\nu_{1}$. For the
hyperparameters, we used fairly uninformative half-Normal distributions with
standard deviation 0.5 for $\sigma_{\mu}$ and 0.1 for $\sigma_{\nu}$. We then
ran the IS-MCMC algorithm with run_mcmc, using a burn-in phase of length 10,000
followed by 50,000 iterations, with 10 particles per SMC.
Using the same setup, we ran the MCMC with rstan using 15,000 iterations
(with the first 5,000 used for warm-up). Note that in order to avoid sampling
problems, it was necessary to tweak the default control parameters of the
sampler (see the Appendix).
Table 1 shows the results. Both methods produce identical results
(within the Monte Carlo error), and while rstan achieves similar Monte Carlo
standard errors with a smaller total number of iterations than bssm, the total
computation time of rstan is almost 80 times higher than that of bssm (58 minutes
versus 45 seconds). This suggests that for these types of problems it is
highly beneficial to take advantage of the known model structure and available
approximations, compared to general-purpose Bayesian software such as Stan, which
makes no distinction between the latent states $\alpha$ and the hyperparameters $\theta$.
Table 1: Estimates of posterior mean, standard deviation (SD) and Monte Carlo standard error of the mean (MCSE) for the hyperparameters $\theta$ and the latent states at the last time point for the example model.
| | bssm | | | rstan | |
---|---|---|---|---|---|---
| Mean | SD | MCSE | Mean | SD | MCSE
$\sigma_{\mu}$ | $0.092$ | $0.037$ | $9\times 10^{-4}$ | $0.090$ | $0.036$ | $9\times 10^{-4}$
$\sigma_{\nu}$ | $0.003$ | $0.003$ | $5\times 10^{-5}$ | $0.003$ | $0.003$ | $7\times 10^{-5}$
$\phi$ | $5.392$ | $0.910$ | $2\times 10^{-2}$ | $5.386$ | $0.898$ | $1\times 10^{-2}$
$\beta$ | $-0.912$ | $0.056$ | $1\times 10^{-3}$ | $-0.911$ | $0.056$ | $7\times 10^{-4}$
$\mu_{200}$ | $6.962$ | $0.346$ | $5\times 10^{-3}$ | $6.965$ | $0.349$ | $4\times 10^{-3}$
$\nu_{200}$ | $0.006$ | $0.020$ | $3\times 10^{-4}$ | $0.006$ | $0.019$ | $2\times 10^{-4}$
## Conclusions
State space models are a flexible tool for analysing a variety of time series
data. Here we introduced the R package bssm for fully Bayesian state space
modelling for a large class of models, with several alternative MCMC sampling
strategies. All computationally intensive parts of the package are implemented
in C++, with parallel computation support for IS-MCMC, making the package an
attractive option for many common models where relatively accurate Gaussian
approximations are available.
Compared to early versions of the bssm package, the option to define R
functions for model updating and prior evaluation has lowered the bar for
analysing custom models. The package is also written in a way that makes it
relatively easy to extend to new model types similar to the current bsm_lg in the
future. The bssm package could also be expanded to allow other proposal adaptation
schemes, such as the adaptive Metropolis algorithm of Haario et al. (2001), as well
as support for multivariate SDE models and automatic differentiation for EKF-
type algorithms.
## Acknowledgements
This work has been supported by the Academy of Finland research grants 284513,
312605, 315619, 311877, and 331817.
## References
* Andrieu et al. (2010) C. Andrieu, A. Doucet, and R. Holenstein. Particle Markov chain Monte Carlo methods. _Journal of Royal Statistical Society B_ , 72(3):269–342, 2010.
* Banterle et al. (2019) M. Banterle, C. Grazian, A. Lee, and C. P. Robert. Accelerating Metropolis-Hastings algorithms by delayed acceptance. _Foundations of Data Science_ , 1(2):103, 2019. URL https://doi.org/10.3934/fods.2019005.
* Carpenter et al. (2017) B. Carpenter, A. Gelman, M. Hoffman, D. Lee, B. Goodrich, M. Betancourt, M. Brubaker, J. Guo, P. Li, and A. Riddell. Stan: A probabilistic programming language. _Journal of Statistical Software_ , 76(1):1–32, 2017. URL https://doi.org/10.18637/jss.v076.i01.
* Christen and Fox (2005) J. A. Christen and C. Fox. Markov chain Monte Carlo using an approximation. _Journal of Computational and Graphical Statistics_ , 14(4):795–810, 2005. doi: 10.1198/106186005X76983. URL https://doi.org/10.1198/106186005X76983.
* de Valpine et al. (2017) P. de Valpine, D. Turek, C. Paciorek, C. Anderson-Bergman, D. Temple Lang, and R. Bodik. Programming with models: writing statistical algorithms for general model structures with NIMBLE. _Journal of Computational and Graphical Statistics_ , 26:403–413, 2017. doi: 10.1080/10618600.2016.1172487.
* Doucet et al. (2015) A. Doucet, M. K. Pitt, G. Deligiannidis, and R. Kohn. Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. _Biometrika_ , 102(2):295–313, 03 2015. ISSN 0006-3444. URL https://doi.org/10.1093/biomet/asu075.
* Durbin and Koopman (2000) J. Durbin and S. J. Koopman. Time series analysis of non-Gaussian observations based on state space models from both classical and Bayesian perspectives. _Journal of Royal Statistical Society B_ , 62:3–56, 2000.
* Durbin and Koopman (2002) J. Durbin and S. J. Koopman. A simple and efficient simulation smoother for state space time series analysis. _Biometrika_ , 89:603–615, 2002.
* Durbin and Koopman (2012) J. Durbin and S. J. Koopman. _Time Series Analysis by State Space Methods_. Oxford University Press, New York, 2nd edition, 2012.
* Eddelbuettel and François (2011) D. Eddelbuettel and R. François. Rcpp: Seamless R and C++ integration. _Journal of Statistical Software_ , 40(8):1–18, 2011. URL https://doi.org/10.18637/jss.v040.i08.
* Eddelbuettel and Sanderson (2014) D. Eddelbuettel and C. Sanderson. RcppArmadillo: Accelerating R with high-performance C++ linear algebra. _Computational Statistics and Data Analysis_ , 71:1054–1063, March 2014. URL http://doi.org/10.1016/j.csda.2013.02.005.
* Franks and Vihola (2020) J. Franks and M. Vihola. Importance sampling correction versus standard averages of reversible MCMCs in terms of the asymptotic variance. _Stochastic Processes and their Applications_ , 130(10):6157 – 6183, 2020. ISSN 0304-4149. URL http://www.sciencedirect.com/science/article/pii/S0304414919304053.
* Funk and King (2020) S. Funk and A. A. King. Choices and trade-offs in inference with infectious disease models. _Epidemics_ , 30:100383, 2020. ISSN 1755-4365. URL https://doi.org/10.1016/j.epidem.2019.100383.
* Gordon et al. (1993) N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. _IEE Proceedings-F_ , 140(2):107–113, 1993.
* Haario et al. (2001) H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. _Bernoulli_ , 7(2):223–242, 2001. URL https://projecteuclid.org:443/euclid.bj/1080222083.
* Harvey (1989) A. C. Harvey. _Forecasting, Structural Time Series Models and the Kalman Filter_. Cambridge University Press, 1989.
* Helske (2016) J. Helske. _ramcmc: Robust Adaptive Metropolis Algorithm_ , 2016. URL https://CRAN.R-project.org/package=ramcmc. R package version 0.1.0-1.1.
* Helske (2017) J. Helske. KFAS: Exponential family state space models in R. _Journal of Statistical Software_ , 78(10):1–39, 2017. URL https://doi.org/10.18637/jss.v078.i10.
* Helske and Helske (2019) S. Helske and J. Helske. Mixture hidden Markov models for sequence data: The seqHMM package in R. _Journal of Statistical Software_ , 88(3):1–32, 2019. URL https://doi.org/10.18637/jss.v088.i03.
* Hoffman and Gelman (2014) M. D. Hoffman and A. Gelman. The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. _The Journal of Machine Learning Research_ , 15(1):1593–1623, 2014.
* Iacus (2016) S. M. Iacus. _sde: Simulation and Inference for Stochastic Differential Equations_ , 2016. URL https://CRAN.R-project.org/package=sde. R package version 2.0.15.
* Jacob and Funk (2020) P. E. Jacob and S. Funk. _rbi: Interface to LibBi_ , 2020. URL https://CRAN.R-project.org/package=rbi. R package version 0.10.3.
* Jazwinski (1970) A. Jazwinski. _Stochastic Processes and Filtering Theory_. Academic Press, 1970.
* King et al. (2016) A. A. King, D. Nguyen, and E. L. Ionides. Statistical inference for partially observed Markov processes via the R package pomp. _Journal of Statistical Software_ , 69(12):1–43, 2016. URL https://doi.org/10.18637/jss.v069.i12.
* Kitagawa (1996) G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. _Journal of Computational and Graphical Statistics_ , 5(1):1–25, 1996.
* Lindgren and Rue (2015) F. Lindgren and H. Rue. Bayesian spatial modelling with R-INLA. _Journal of Statistical Software_ , 63(19):1–25, 2015. URL https://doi.org/10.18637/jss.v063.i19.
* Michaud et al. (2020) N. Michaud, P. de Valpine, D. Turek, C. Paciorek, and D. Nguyen. Sequential Monte Carlo methods in the nimble R package. Technical Report arxiv:1703.06206, arXiv preprint, 2020.
* Murray (2015) L. Murray. Bayesian state-space modelling on high-performance hardware using LibBi. _Journal of Statistical Software, Articles_ , 67(10):1–36, 2015. ISSN 1548-7660. URL https://doi.org/10.18637/jss.v067.i10.
* NIMBLE Development Team (2020) NIMBLE Development Team. nimbleSMC: Sequential Monte Carlo methods for NIMBLE, 2020. URL https://cran.r-project.org/package=nimbleSMC. R package version 0.10.0.
* Petris and Petrone (2011) G. Petris and S. Petrone. State space models in R. _Journal of Statistical Software_ , 41(4):1–25, 2011. URL https://doi.org/10.18637/jss.v041.i04.
* Plummer et al. (2006) M. Plummer, N. Best, K. Cowles, and K. Vines. CODA: Convergence diagnosis and output analysis for MCMC. _R News_ , 6(1):7–11, 2006. URL https://CRAN.R-project.org/doc/Rnews/.
* Stan Development Team (2020) Stan Development Team. RStan: the R interface to Stan, 2020. URL http://mc-stan.org/. R package version 2.21.2.
* Tusell (2011) F. Tusell. Kalman filtering in R. _Journal of Statistical Software_ , 39(2):1–27, 2011. URL https://doi.org/10.18637/jss.v039.i02.
* Van Der Merwe et al. (2001) R. Van Der Merwe, A. Doucet, N. De Freitas, and E. A. Wan. The unscented particle filter. In _Advances in neural information processing systems_ , pages 584–590, 2001.
* Vihola (2012) M. Vihola. Robust adaptive Metropolis algorithm with coerced acceptance rate. _Statistics and Computing_ , 22(5):997–1008, 2012. ISSN 1573-1375. URL https://doi.org/10.1007/s11222-011-9269-5.
* Vihola et al. (2020) M. Vihola, J. Helske, and J. Franks. Importance sampling type estimators based on approximate marginal MCMC. _Scandinavian Journal of Statistics_ , 2020. URL https://doi.org/10.1111/sjos.12492.
* Wickham (2016) H. Wickham. _ggplot2: Elegant Graphics for Data Analysis_. Springer-Verlag New York, 2016. ISBN 978-3-319-24277-4. URL https://ggplot2.tidyverse.org.
## Appendix: Code for section Comparison of IS-MCMC and HMC
library("bssm")# Simulate the dataset.seed(123)n <\- 200sd_level <\- 0.1drift
<\- 0.01beta <\- -0.9phi <\- 5level <\- cumsum(c(5, drift + rnorm(n - 1, sd =
sd_level)))x <\- 3 + (1:n) * drift + sin(1:n + runif(n, -1, 1))y <\-
rnbinom(n, size = phi, mu = exp(beta * x + level))# Construct model for
bssmbssm_model <\- bsm_ng(y, xreg = x, beta = normal(0, 0, 10), phi =
halfnormal(1, 10), sd_level = halfnormal(0.1, 1), sd_slope = halfnormal(0.01,
0.1), a1 = c(0, 0), P1 = diag(c(10, 0.1)^2), distribution = "negative
binomial")# run the MCMCfit_bssm <\- run_mcmc(bssm_model, iter = 60000, burnin
= 10000, particles = 10, seed = 1)# create the Stan
modellibrary("rstan")stan_model <\- "data { int<lower=0> n; // number of data
points int<lower=0> y[n]; // time series vector[n] x; // covariate}parameters
{ real<lower=0> sd_slope; real<lower=0> sd_level; real beta; real<lower=0>
phi; // instead of working directly with true state variables // it is often
suggested use standard normal variables in sampling // and reconstruct the
true parameters in transformed parameters block // this should make sampling
more efficient although coding the model // is less intuitive. vector[n]
level_std; // N(0, 1) level noise vector[n] slope_std; // N(0, 1) slope
noise}transformed parameters { vector[n] level; vector[n] slope; // construct
the actual states level[1] = 10 * level_std[1]; slope[1] = 0.1 * slope_std[1];
slope[2:n] = slope[1] + cumulative_sum(sd_slope * slope_std[2:n]); level[2:n]
= level[1] + cumulative_sum(slope[1:(n-1)]) + cumulative_sum(sd_level *
level_std[2:n]);}model { beta ~ normal(0, 10); phi ~ normal(0, 10); sd_slope ~
normal(0, 0.1); sd_level ~ std_normal(); // standardised noise terms level_std
~ std_normal(); slope_std ~ std_normal(); y ~ neg_binomial_2_log(level + beta
* x, phi);}"stan_data <\- list(n = n, y = y, x = x)stan_inits <\-
list(list(sd_level = 0.1, sd_slope = 0.01, phi = 1, beta = 0))# need to
increase adapt_delta and max_treedepth in order to avoid divergencesfit_stan
<\- stan(model_code = stan_model, data = stan_data, iter = 15000, warmup =
5000, control = list(adapt_delta = 0.99, max_treedepth = 12), init =
stan_inits, chains = 1, refresh = 0, seed = 1)d_stan <\- summary(fit_stan,
pars = c("sd_level", "sd_slope", "phi", "beta", "level[200]", "slope[200]"
))$summary[,c("mean", "sd", "se_mean")]d_bssm <\- summary(fit_bssm, variable =
"both", return_se = TRUE)# Parameter
estimates:d_stand_bssm$thetad_bssm$states$Mean[200,]d_bssm$states$SD[200,]d_bssm$states$SE[200,]#
Timings:sum(get_elapsed_time(fit_stan))fit_bssm$time[3]
_Jouni Helske
Department of Mathematics and Statistics
University of Jyväskylä
Finland
ORCiD: 0000-0001-7130-793X
<EMAIL_ADDRESS>_
_Matti Vihola
Department of Mathematics and Statistics
University of Jyväskylä
Finland
ORCiD: 0000-0002-8041-7222
<EMAIL_ADDRESS>_
|
Present address: Samsung Electronics, Gyeonggi–do 16677, Republic of Korea.
# Characterization of a flux-driven Josephson parametric amplifier with near
quantum-limited added noise for axion search experiments
Çağlar Kutlu<EMAIL_ADDRESS>Korea Advanced Institute of Science and
Technology, Daejeon 34051, Republic of Korea Center for Axion and Precision
Physics Research, Institute for Basic Science, Daejeon 34051, Republic of
Korea Arjan F. van Loo Center for Emergent Matter Science (CEMS), RIKEN,
Wako, Saitama 351–0198, Japan Sergey V. Uchaikin Andrei N. Matlashov Center
for Axion and Precision Physics Research, Institute for Basic Science, Daejeon
34051, Republic of Korea Doyu Lee Center for Axion and Precision Physics
Research, Institute for Basic Science, Daejeon 34051, Republic of Korea
Seonjeong Oh Jinsu Kim Korea Advanced Institute of Science and Technology,
Daejeon 34051, Republic of Korea Center for Axion and Precision Physics
Research, Institute for Basic Science, Daejeon 34051, Republic of Korea
Woohyun Chung Center for Axion and Precision Physics Research, Institute for
Basic Science, Daejeon 34051, Republic of Korea Yasunobu Nakamura Center for
Emergent Matter Science (CEMS), RIKEN, Wako, Saitama 351–0198, Japan Research
Center for Advanced Science and Technology (RCAST), The University of Tokyo,
Meguro–ku, Tokyo 153–8904, Japan Yannis K. Semertzidis Korea Advanced
Institute of Science and Technology, Daejeon 34051, Republic of Korea Center
for Axion and Precision Physics Research, Institute for Basic Science, Daejeon
34051, Republic of Korea
###### Abstract
The axion, a hypothetical elementary pseudoscalar, is expected to solve the
strong _CP_ problem of QCD and is also a promising candidate for dark matter.
The most sensitive axion search experiments operate at millikelvin
temperatures and hence rely on signal chains that carry signals from the
cryogenic environment up to room-temperature instrumentation. One of
the biggest limiting factors affecting the parameter scanning speed of these
detectors is the noise added by the components in the signal detection chain.
Since the first amplifier in the chain limits the minimum noise, low-noise
amplification is of paramount importance. This paper reports on the operation
of a flux-driven Josephson parametric amplifier (JPA) operating at around
$2.3\text{\,}\mathrm{GHz}$ with added noise approaching the quantum limit. The
JPA was employed as a first stage amplifier in an experimental setting similar
to the ones used in haloscope axion detectors. By operating the JPA at a gain
of $19\text{\,}\mathrm{dB}$ and cascading it with two cryogenic amplifiers
operating at $4\text{\,}\mathrm{K}$, noise temperatures as low as
$120\text{\,}\mathrm{mK}$ were achieved for the whole signal detection chain.
## I Introduction
Axions are spin-0 particles that emerge as a result of the Peccei-Quinn
mechanism which was originally proposed as a solution to the strong _CP_
problem of quantum chromodynamics[1, 2]. They were also identified as viable
candidates for all or a fraction of the cold dark matter in our universe[3, 4,
5]. It is possible to detect axions upon their conversion to microwave
photons, using resonant cavities immersed in high magnetic fields[6]. Since
the axion mass is unknown, these detectors employ a mechanism to scan
different frequencies corresponding to different axion masses. The scanning
rate of such detectors scales with $1/T_{\mathrm{sys}}^{2}$, where
$T_{\mathrm{sys}}$ is the system noise background characterized in units of
temperature. It can be decomposed as
$T_{\mathrm{sys}}=T_{\mathrm{cav}}+T_{\mathrm{add}}$, where the first term
denotes the noise temperature accompanying the signal itself and the second
one denotes the noise added by the signal detection chain. Throughout this
work, noise temperature refers to the added noise unless otherwise stated. In
order to reduce $T_{\mathrm{cav}}$, the cavity is cooled to millikelvin
temperatures. If the first amplifier has sufficiently high gain ($G_{1}$), its
noise temperature ($T_{1}$) will be the dominant contribution to
$T_{\mathrm{add}}$, as given by the well-known Friis relation[7]
$T_{\mathrm{add}}=T_{1}+\frac{T_{\mathrm{rest}}}{G_{1}}$, where
$T_{\mathrm{rest}}$ is the noise temperature of the whole chain except the
first amplifier (a worked example is given below). Amplifiers based on Josephson
junctions, including microstrip
already been shown to be capable of gains higher than
$30\text{\,}\mathrm{dB}$, and noise temperatures approaching the quantum
limit[8, 9]. While an MSA has an internal shunt resistor used for biasing
which hinders noise performance[10, 11], by design the JPA requires no
resistive element to operate. Several experiments presently searching for dark
matter axions have already adopted the JPA as the first amplifier[12, 13, 14].
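As a rough illustration of the Friis relation with numbers representative of this setup (taking, purely for the sake of the estimate, $T_{1}=100\,\mathrm{mK}$ for the JPA and $T_{\mathrm{rest}}\approx 1.5\,\mathrm{K}$ for the cryogenic amplifiers):
$T_{\mathrm{add}}=T_{1}+\frac{T_{\mathrm{rest}}}{G_{1}}\approx 100\,\mathrm{mK}+\frac{1.5\,\mathrm{K}}{10^{19/10}}\approx 119\,\mathrm{mK},$
so a first-stage gain of $19\,\mathrm{dB}$ already suppresses the downstream noise contribution to below $20\,\mathrm{mK}$.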
In this work, the frequency coverage, gain and noise properties of a flux-
driven JPA for use in an axion dark matter experiment operating around
$2.3\text{\,}\mathrm{GHz}$ are investigated.
The power spectral density of the noise accompanying a signal measured in an
impedance matched environment can be given as[15]:
$\displaystyle
S_{n}(f,T)=hf\left[\frac{1}{\exp{\left(\frac{hf}{k_{B}T}\right)}-1}+\frac{1}{2}\right]$
(1)
where $h$ is Planck’s constant and $k_{B}$ is Boltzmann’s constant. The first
term in the brackets is the mean number of quanta at frequency $f$ at the bath
temperature $T$ and the second term is the contribution from zero-point
fluctuations. The lower limit on noise temperature for linear phase-
insensitive amplifiers is given by[16] $T_{Q}=\lim_{T\rightarrow
0}S_{n}(f,T)/(k_{B})=hf/(2k_{B})$ which is about $55.2\text{\,}\mathrm{mK}$ at
$2.3\,\mathrm{GHz}$. Using a $2.3\,\mathrm{GHz}$ flux-driven JPA,
$T_{\mathrm{add}}\approx 120\,\mathrm{mK}$ is achieved. This
corresponds to $T_{\mathrm{sys}}\approx 190\,\mathrm{mK}$ for an
axion haloscope experiment running at a bath temperature of
$50\,\mathrm{mK}$. The lower bound for $T_{\mathrm{sys}}$ is given by
the standard quantum limit[17] $T_{\mathrm{SQL}}=2T_{Q}$, which is about
$110\,\mathrm{mK}$ at $2.3\,\mathrm{GHz}$.
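For concreteness, the quoted figures follow directly from the physical constants:
$T_{Q}=\frac{hf}{2k_{B}}=\frac{(6.626\times 10^{-34}\,\mathrm{J\,s})(2.3\times 10^{9}\,\mathrm{Hz})}{2\,(1.381\times 10^{-23}\,\mathrm{J/K})}\approx 55.2\,\mathrm{mK},\qquad T_{\mathrm{SQL}}=2T_{Q}\approx 110\,\mathrm{mK}.$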
## II Flux-driven JPA
The equivalent circuit diagram of the tested device is shown in Figure 1. It
consists of a superconducting quantum interference device (SQUID) attached to
the end of a coplanar waveguide $\lambda/4$ resonator that is coupled via a
capacitor ($C_{c}$) to the transmission line for the signal input and output.
The SQUID acts as a variable inductor whose value depends on the magnetic flux
passing through its loop. In the setup, a superconducting coil is used to
provide the necessary DC flux ($\phi$) through the SQUID loop in order to tune
the resonance frequency ($f_{r}$). Parametric amplification is achieved by
modulating the flux through the SQUID using a pump signal. The pump tone is
provided by a separate transmission line inductively coupled to the SQUID. The
JPA is operated in the three-wave mixing mode[18] where the pump ($f_{p}$),
the signal ($f_{s}$), and the idler ($f_{i}$) frequencies satisfy the relation
$f_{p}=f_{s}+f_{i}$. The signal input and output share the same port. A
circulator is used to separate them. Since the $\lambda/4$ resonator only
allows odd harmonics, there is no measurable pump leakage to the output line.
This prevents the stronger pump tone from saturating the rest of the
amplifiers in the chain[19]. Figure 2 shows a schematic for the axion search
experimental setup.
Figure 1: Equivalent circuit diagram of the JPA sample. The JPA was fabricated
by photolithography of a Nb layer, deposited on a $0.3\text{\,}\mathrm{mm}$
thick Si substrate. The SQUID was placed on top of the Nb layer by E-beam
lithography followed by shadow evaporation[20, 21]. The sample was attached to
a printed circuit board (PCB) and the transmission lines were bonded with Al
wires. The PCB was fixed onto a gold plated copper structure and placed inside
a superconducting coil. The whole structure was covered tightly with a lead
shield and attached to the mixing-chamber (MC) plate using a gold plated
copper rod.
## III Measurements
When there is no pump tone present, the JPA can be modeled as a resonator with
a well-defined quality factor and resonance frequency which are functions of
flux. The resonance frequency is estimated from the frequency domain phase
response using a parameter fit[22]. The phase response is obtained by doing a
transmission S-parameter measurement using a vector network analyzer (VNA) in
the configuration as shown in Figure 2. The resonance frequency was measured
as a function of the coil current (see Figure 3). It was found that the
minimum observable resonance frequency was at $2.18\text{\,}\mathrm{GHz}$ and
the maximum was $2.309\text{\,}\mathrm{GHz}$. The lower bound is due to the
frequency band of the circulators which spans from $2.15\text{\,}\mathrm{GHz}$
to $2.60\text{\,}\mathrm{GHz}$. At the lower frequencies, the JPA becomes much
more sensitive to flux noise due to a higher $\frac{\partial
f_{r}}{\partial\phi}$. This work mainly focused on operation with frequencies
above $2.2\text{\,}\mathrm{GHz}$.
Figure 2: The experimental setup used in all the characterization
measurements. SG, VNA, and SA stand for the signal generator, vector network
analyzer, and spectrum analyzer, respectively. During this work, the switch
that selects between the cavity and the noise source was always kept at the
position shown in the figure. The ports IN2 and OUT were used to directly
measure the JPA characteristics, bypassing the cavity. The microwave short
element shown next to the JPA was used to bypass the JPA for calibration
measurements. U1 and U2 are HEMT amplifiers with noise temperatures of
$1.5\text{\,}\mathrm{K}$ and $5\text{\,}\mathrm{K}$, respectively.
During the experiments, the MC plate temperature was stabilized at
$50\pm{}1\text{\,}\mathrm{mK}$. With the temperature fixed, the frequency
response of the JPA is determined by three experimental variables: the coil
current ($i_{b}$), the pump frequency ($f_{p}$), and the pump power ($P_{p}$).
The measurements shown in this work had $i_{b}$ confined to the region where
the flux through the SQUID loop is given by $-0.5\phi_{0}<\phi<0$, where
$\phi_{0}$ is the magnetic flux quantum. Therefore, $f_{r}$ can be
unambiguously converted to $\phi$ or $i_{b}$. All experiments began with a
transmission measurement, with the resonance frequency tuned to
$2.18\text{\,}\mathrm{GHz}$. This becomes the baseline measurement to be used
for the duration of the experiment. When the result was compared to a separate
measurement, in which a microwave short was put in place of the JPA, it was
found that the baseline obtained via such an off-resonance measurement was at
most $0.2\text{\,}\mathrm{dB}$ lossier than an ideal mirror. The JPA gain
($G_{J}$) was estimated by dividing the transmission magnitude response with
the baseline’s magnitude response.
To investigate the gain behavior, a sweep over the parameters $i_{b}$,
$f_{p}$, $P_{p}$ was made and the maximum gain was measured at each point.
After each $i_{b}$ tuning step, the resonance frequency is estimated by
performing a phase measurement and applying a parameter fit. With the detuning
defined as $\delta=f_{p}/2-f_{r}$, the equigain contours had a minimum in
necessary pump power around $\delta=0$, as shown in Figure 4(a). It was
observed that for resonance frequencies above $2.299\text{\,}\mathrm{GHz}$ the
minimum starts to shift to lower detunings which is attributed to pump-induced
shifts in resonance frequency[22]. Figure 4(b) shows that the slice of
$\delta=0$ can be used to achieve peak gains of up to $30\text{\,}\mathrm{dB}$
along the frequency range of the device.
Figure 3: Resonance frequency versus flux obtained by sweeping the coil
current and measuring the phase response at each step. One period corresponds
to a current of $324.4\,\mu\mathrm{A}$. The inset shows
the fit performed to estimate the resonance frequency for each applied flux.
(a)
(b)
Figure 4: (a) Maximum gain measured as a function of detuning and pump power
for a flux bias corresponding to $f_{r}=2.29\,\mathrm{GHz}$. (b)
Maximum gain as a function of frequency and pump power with $f_{p}=2f_{r}$.
To investigate the noise temperature, a methodology similar to the well-known
Y-factor method[23] was used. A $50\,\Omega$ cryogenic microwave terminator was
used as the noise source. A bias-tee was attached in front of it for improved
thermalization of its inner conductor. These two components were fixed onto a
gold-plated copper plate along with a ruthenium oxide temperature sensor and a
$100\,\Omega$ resistor functioning as a heater. This
plate was then fixed onto the MC plate so that the dominant thermalization was
through a thin copper wire attached to the MC plate. The noise source was
connected to the switch input using a superconducting coaxial cable, which
provides thermal isolation while minimizing losses. Using a PID controller,
the terminator temperature could be adjusted from $50\text{\,}\mathrm{mK}$ to
$1\text{\,}\mathrm{K}$ without affecting the MC plate temperature. The noise
power generated by the noise source was measured using a spectrum analyzer
(SA) with $1\text{\,}\mathrm{kHz}$ resolution bandwidth after being amplified
by the JPA and the rest of the signal detection chain. The power spectra were
recorded at noise source temperatures ($T_{s}$) of $60$, $120$, and
$180\,\mathrm{mK}$. The power values were converted into power
spectral densities (PSD) by dividing them by the noise bandwidth
corresponding to the SA settings used. Before each PSD measurement, the JPA
gain and passive resonance were measured. From these measurements, it was
concluded that there were neither gain changes nor resonance shifts. From the
obtained PSD values $S(T_{s})$, a fit was done to a function of the following
form independently for each frequency bin (see Figure 5):
$\displaystyle
S(T_{s})=(2G_{J}-1)\frac{G_{L}G_{\mathrm{tot}}}{G_{J}}(S_{n}(T_{s})+rk_{B}T_{n}+\gamma)$
(2)
where $S_{n}(T_{s})$ is the noise PSD of the source, $G_{\mathrm{tot}}$ is the
total gain seen from the reference plane, $G_{L}$ is the loss factor between
the $50\text{\,}\mathrm{\SIUnitSymbolOhm}$ terminator and the reference plane
and $T_{n}$ is the noise temperature. The reference plane is at the end of the
superconducting cable connected to the noise source (see Figure 2). Here, $r$
and $\gamma$ are factors that are explained in Appendix A.
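For context, the textbook two-temperature Y-factor method[23], of which the fit above is a multi-temperature refinement with quantum (rather than Rayleigh-Jeans) source noise, estimates the noise temperature from the hot/cold power ratio $Y=P(T_{\mathrm{hot}})/P(T_{\mathrm{cold}})$ as
$T_{n}=\frac{T_{\mathrm{hot}}-Y\,T_{\mathrm{cold}}}{Y-1}.$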
Figure 5: The upper plot shows the set of power spectra obtained during a
noise temperature measurement performed for a tuning at
$f_{r}=2.305\,\mathrm{GHz}$ with $f_{p}=2f_{r}$. The offset $\nu$ is
defined as $\nu=f-f_{r}$ where $f$ is the center of the frequency bin at which
the power was measured using the spectrum analyzer. $T_{s}$ is the temperature
of the noise source. The inset shows three vertical slices which were fit with
Equation 2. The lower plot shows the estimated noise temperature of the whole
chain as a function of $\nu$.
Since the amplifier needs to be tuned along with the cavity during the axion
experiment, the noise temperature was investigated at different frequencies.
The measurements were done in $5\,\mathrm{MHz}$ steps from $2.28$ to
$2.305\,\mathrm{GHz}$. At each step, the pump power and resonance
frequency were tuned such that the JPA gain was about $20\,\mathrm{dB}$. From
these measurements (Figure 6) a minimum noise temperature of
$120\text{\,}\mathrm{mK}$ was observed at $2.28\text{\,}\mathrm{GHz}$.
Figure 6: Total gain and the noise temperature of the whole chain for 6 tuning
points with $19.3\pm{}0.5\text{\,}\mathrm{dB}$ JPA gain. Both quantities were
estimated from noise temperature measurements. The small variations along
tuning frequencies were mainly attributed to the losses due to the microwave
components before the JPA.
Another important characteristic is the saturation that occurs when a
narrowband signal is applied. For stable and predictable operation the JPA
must be operated away from the effects of saturation. A common way to quantify
the saturation of an amplifier is to determine the input power at which the
gain is reduced by $1\text{\,}\mathrm{dB}$ ($P_{1\mathrm{dB}}$). The
$P_{1\mathrm{dB}}$ was measured at $\delta=0$ for different frequencies and
different pump powers corresponding to different gains. It is evident from the
results (see Figure 7) that an axion-like signal with an expected power of
$-180\,\mathrm{dBm}$ is far from saturating the device. While
saturation from narrowband signals is avoidable to a certain extent, it was
observed that thermal noise at the input can also saturate and alter the
behavior of the device. For frequencies below $2.28\text{\,}\mathrm{GHz}$ with
gains above $23\text{\,}\mathrm{dB}$, the device started showing saturated
behavior with thermal noise when the noise source temperature was raised above
$120\text{\,}\mathrm{mK}$, which was done to measure noise temperature. While
this does not necessarily mean that the device is unusable below these
frequencies, it renders the direct measurement of the noise temperature using
a noise source unreliable for these frequency and gain regions.
Figure 7: Saturation measurements for three different $f_{r}$. Each
measurement was done by sweeping the signal powers from the VNA and observing
at which input power the gain reduces by 1 dB. The horizontal axis corresponds
to the unsaturated gain measured with the lowest signal power available from
VNA.
## IV Conclusion
In conclusion, a flux-driven JPA, tunable in the range
$2.2$–$2.305\,\mathrm{GHz}$, was demonstrated and determined to be
operational for use in axion search experiments. The added noise temperatures
of the receiver chain were measured using a noise source at a location as
close as possible to the origin of the axion signal. With an added noise
temperature of $120\text{\,}\mathrm{mK}$ the system was shown to reach
$T_{\mathrm{sys}}\approx 1.7T_{\mathrm{SQL}}$. This is the first record of
$T_{\mathrm{sys}}$ below $2T_{\mathrm{SQL}}$ for an axion haloscope setup
operating below $10\text{\,}\mathrm{GHz}$. The saturation input power for the
JPA was observed to be more than adequate for an axion-like signal. Currently,
the tested JPA is being used as part of a KSVZ[24, 25] sensitive axion search
experiment at the Center for Axion and Precision Physics Research (CAPP). The
system is taking physics data with a scanning speed that has been improved
by more than an order of magnitude. We expect that further optimization of the
JPA design could result in improved instantaneous bandwidth and tuning range.
This work was supported by the Institute for Basic Science
(IBS-R017–D1–2021–a00) and JST ERATO (Grant No. JPMJER1601). A. F. van Loo is
supported by a JSPS postdoctoral fellowship.
## Appendix A Noise Temperature Estimation
Figure 8: The simplified model used for the noise temperature estimations
conducted in this work. Bold letters denote the power gains of components. The
reference plane marks the input of the detector chain. Arrows denote the flow
of power entering the nodes shown with a small circle. $G_{L}$ is a composite
gain factor for everything between the noise source and the reference plane.
$G_{c}$ is the circulator gain factor. $G_{J}$ is the signal and $G_{I}$ is
the idler gain for the JPA. The amplifier gain $G_{R}$ and noise temperature
$T_{nR}$ contain the effects of all elements after the last circulator,
including SA noise. For simplicity, circulators are assumed to have complete
rotational symmetry with respect to their ports and to be completely identical
to each other.
The output PSD from a component with its input connected to a matched source
can be written as:
$\displaystyle S_{\mathrm{O}}=GS_{\mathrm{in}}+S_{\mathrm{added}}$ (3)
where $S_{\mathrm{in}}$ is the source PSD, $G$ is the power gain of the
component, and $S_{\mathrm{added}}$ is the noise added by it. The noise
temperature ($T_{n}$) is a measure of the added noise at the output of a
component. By convention, it is defined as if it is for noise entering the
device itself: $T_{n}=S_{\mathrm{added}}/(k_{B}G)$. The entire detection chain
(see Figure 8), from the reference plane to the spectrum analyzer, can be
described as a single composite component with $G=G_{\mathrm{tot}}$ and noise
temperature $T_{n}$.
The noise temperature can be defined for a situation similar to the
experimental one where a narrowband axion signal with power $A$ is present.
This signal enters the chain from a source connected to the reference plane.
Assuming the source is thermalized to the MC plate with the temperature
$T_{f}$, then the defining relation for $T_{n}$ can be written as :
$\displaystyle
S_{O}=G_{\mathrm{tot}}(\underbrace{A\delta(f-f_{s})+S_{n}(T_{f})}_{S_{\mathrm{in}}}+k_{B}T_{n})$
(4)
where $S_{O}$ is the PSD at the output, $G_{\mathrm{tot}}$ is the total power
gain for the signal from the reference plane, $S_{n}$ is the noise coming from
the source itself. The main idea here is that if one has a reliable estimate
of $T_{n}$, and understands the source environment well ($S_{n}(T_{f})$), it
is straightforward to estimate $A$ without the precise knowledge of
$G_{\mathrm{tot}}$. This is possible since $S_{O}$ can be easily measured at
two frequencies $f_{s}$ and $f_{s}^{\prime}$ using a spectrum analyzer.
Provided that $|f_{s}-f_{s}^{\prime}|$ is small enough so that $T_{n}$ is
approximately the same for both frequencies, $A$ can be estimated from these
two measurements. This approach forms the basis of the analysis methods
applied in axion dark matter search experiments[26, 27, 28, 29].
The detection chain consists of passive components, the JPA and the HEMT
amplifiers. Each one of these adds noise in a different way. A passive
component at physical temperature $T_{f}$ has
$S_{\mathrm{added}}=(1-G)S_{n}(T_{f})$. The HEMT amplifier noise is usually
estimated from measurements. The JPA adds noise by two main mechanisms. The
first one is by amplifying the input noise at the idler mode onto the signal
mode. The second one is via the losses or other dissipation mechanisms inside
or before the sample. Ideally, the latter can be made zero, whereas the former
will approach the half-photon added noise in the limit of a
$0\,\mathrm{K}$ bath temperature.
Using the model shown in Figure 8, it is straightforward to write a relation
for the output PSD. For clarity, the explicit frequency dependence of the
thermal noise $S_{n}$ and of the gains will be omitted. Also, the
approximation $S_{n}(f,T)\approx S_{n}(f_{p}-f,T)$ will be denoted with the
shorthand $S_{nf}=S_{n}(f,T_{f})$. This approximation has less than 30 ppm
error given that $|2f-f_{p}|<100\,\mathrm{kHz}$. Note that
$100\,\mathrm{kHz}$ is the typical bandwidth of the JPA tested in this
work. Furthermore, the transmission characteristics of the microwave
components will be assumed to not vary on a scale of
$100\text{\,}\mathrm{kHz}$. Using the gain symbols for components as shown in
Figure 8, the power flow at each node, in terms of PSDs, is written as:
$\displaystyle\begin{split}S_{B}&=G_{L}S_{A}+(1-G_{L})S_{nf}\\ S_{C}&=G_{c}S_{B}+(1-G_{c})S_{nf}\\ S_{D}&=G_{c}S_{C}+(1-G_{c})S_{nf}\\ S_{E}&=G_{J}S_{D}+G_{I}S_{D}+G_{J}S_{j}\\ S_{F}&=G_{c}S_{E}+(1-G_{c})S_{nf}\\ S_{O}&=G_{R}(S_{F}+k_{B}T_{nR})\end{split}$ (5)
The idler gain is denoted by $G_{I}$, and is substituted using
$G_{I}=G_{J}-1$[30] in the following derivations. As shown in Equation 5, the
idler contribution to the noise appears as $G_{I}S_{D}$. The symbol $S_{j}$
denotes an unknown noise density added at the JPA stage which does not
contribute to the quantum limit but rather contains losses or other mechanisms
of stationary noise. Note that the noise propagating back from the later
stages is also included in $S_{j}$. The output $S_{O}$ can be written for two
cases. In the first case, the noise source is operational at temperature
$T_{s}$, and in the second case, a signal source at temperature $T_{f}$ is
connected to the reference plane. The former case describes the measurement
situation, whereas the latter case is only used to define $T_{n}$ in terms of
the parameters in the model. For the first case, i.e. $S_{A}=S_{n}(T_{s})$,
the output PSD can be written as:
$\displaystyle S_{O}^{(1)}=\underbrace{G_{R}G_{c}^{3}G_{L}(2G_{J}-1)}_{G_{\mathrm{noise}}}\left[S_{n}(T_{s})+S_{\alpha}\right]$ (6)
$\displaystyle S_{\alpha}=\lambda^{(1)}S_{nf}+\frac{G_{J}S_{j}}{(2G_{J}-1)G_{c}^{2}G_{L}}+\frac{k_{B}T_{nR}}{G_{c}^{3}G_{L}(2G_{J}-1)}$ (7)
$\displaystyle\lambda^{(1)}=\beta_{l}+\frac{\beta_{c}}{G_{L}}+\frac{\beta_{c}}{G_{c}G_{L}}+\frac{\beta_{c}}{G_{c}^{2}G_{L}}$ (8)
$\displaystyle\beta_{\bullet}\equiv\frac{1}{G_{\bullet}}-1$ (9)
The output for the second case, where $S_{B}=A\delta(f-f_{s})+S_{nf}$, is
written as:
$\displaystyle S_{O}^{(2)}=\underbrace{G_{J}G_{c}^{3}G_{R}}_{G_{\mathrm{tot}}}\left[S_{B}+k_{B}T_{n}\right]$ (10)
$\displaystyle k_{B}T_{n}=\lambda^{(2)}S_{nf}+\frac{S_{j}}{G_{c}^{2}}+k_{B}\frac{T_{nR}}{G_{c}^{3}G_{J}}$ (11)
$\displaystyle\lambda^{(2)}=\frac{G_{J}-1}{G_{J}}+\left(\beta_{c}+\frac{\beta_{c}}{G_{c}}\right)\frac{2G_{J}-1}{G_{J}}+\frac{\beta_{c}}{G_{c}^{2}}$ (12)
Here, the unknowns are $S_{j}$, $T_{nR}$ and $G_{R}$. It is clear from
Equations 11 and 12 that $T_{n}$ approaches $T_{Q}$, as expected, in the limits
$G_{J}\gg 1$, $G_{c}\rightarrow 1$, and $G_{L}\rightarrow 1$ (i.e., lossless
components). Using Equations 6, 7 and 11, $S_{O}^{(1)}$ can be rewritten as:
$\displaystyle\begin{split}S_{O}^{(1)}&=\frac{G_{\mathrm{tot}}}{r}(S_{n}(T_{s})+rk_{B}T_{n}+\gamma)\\ r&=\frac{G_{J}}{G_{L}(2G_{J}-1)}\\ \gamma&=\left(\lambda^{(1)}-r\lambda^{(2)}\right)S_{nf}\end{split}$ (13)
This relation is used to perform a fit with $G_{\mathrm{tot}}$ and $T_{n}$ as
the fit parameters. For the estimations, the parameters $G_{L}$ and $G_{c}$
were taken as $-0.05\,\mathrm{dB}$ and $-0.4\,\mathrm{dB}$, respectively.
Some typical values of $r$ and $\gamma/k_{B}$ can be found in Table 1.
Table 1: Calculated $r$ and $\gamma/k_{B}$ for typical component losses and $T_{f}=50\,\mathrm{mK}$.
$G_{J}$ (dB) | $G_{L}$ (dB) | $G_{c}$ (dB) | $r$ | $\gamma/k_{B}$ (mK)
---|---|---|---|---
15 | 0 | -0.3 | 0.508 | -31.1
15 | -0.2 | -0.5 | 0.532 | -26.8
20 | 0 | -0.3 | 0.503 | -31.4
20 | -0.2 | -0.5 | 0.526 | -27.1
19 | -0.05 | -0.4 | 0.509 | -29.8
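As a quick consistency check of the first row (our arithmetic, from Equation 13): with $G_{J}=15\,\mathrm{dB}\approx 31.6$ and $G_{L}=0\,\mathrm{dB}=1$,
$r=\frac{G_{J}}{G_{L}(2G_{J}-1)}=\frac{31.6}{62.2}\approx 0.508,$
matching the tabulated value.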
## References
* Peccei and Quinn [1977] R. D. Peccei and H. R. Quinn, Phys. Rev. Lett. 38, 1440 (1977).
* Weinberg [1978] S. Weinberg, Phys. Rev. Lett. 40, 223 (1978).
* Preskill, Wise, and Wilczek [1983] J. Preskill, M. B. Wise, and F. Wilczek, Phys. Lett. B 120, 127 (1983).
* Dine and Fischler [1983] M. Dine and W. Fischler, Phys. Lett. B 120, 137 (1983).
* Abbott and Sikivie [1983] L. F. Abbott and P. Sikivie, Phys. Lett. B 120, 133 (1983).
* Sikivie [1983] P. Sikivie, Phys. Rev. Lett. 51, 1415 (1983).
* Friis [1944] H. Friis, Proceedings of the IRE 32, 419 (1944).
* Castellanos-Beltran and Lehnert [2007] M. A. Castellanos-Beltran and K. W. Lehnert, Appl. Phys. Lett. 91 (2007), 10.1063/1.2773988.
* Kinion and Clarke [2011] D. Kinion and J. Clarke, Appl. Phys. Lett. 98, 10 (2011).
* Uchaikin _et al._ [2019] S. Uchaikin, A. Matlashov, D. Lee, W. Chung, S. J. Oh, Y. Semertzidis, V. Zakosarenko, Ç Kutlu, A. van Loo, Y. Urade, S. Kono, M. Schmelz, R. Stolz, and Y. Nakamura, in _2019 IEEE International Superconductive Electronics Conference (ISEC)_ (2019) pp. 1–3.
* André _et al._ [1999] M. O. André, M. Mück, J. Clarke, J. Gail, and C. Heiden, Appl. Phys. Lett. 75, 698 (1999).
* Braine _et al._ [2020] T. Braine, R. Cervantes, N. Crisosto, N. Du, S. Kimes, L. J. Rosenberg, G. Rybka, J. Yang, D. Bowring, A. S. Chou, R. Khatiwada, A. Sonnenschein, W. Wester, G. Carosi, N. Woollett, L. D. Duffy, R. Bradley, C. Boutan, M. Jones, B. H. Laroque, N. S. Oblath, M. S. Taubman, J. Clarke, A. Dove, A. Eddins, S. R. O’kelley, S. Nawaz, I. Siddiqi, N. Stevenson, A. Agrawal, A. V. Dixit, J. R. Gleason, S. Jois, P. Sikivie, J. A. Solomon, N. S. Sullivan, D. B. Tanner, E. Lentz, E. J. Daw, J. H. Buckley, P. M. Harrington, E. A. Henriksen, and K. W. Murch, Phys. Rev. Lett. 124, 101303 (2020).
* Brubaker _et al._ [2017a] B. M. Brubaker, L. Zhong, Y. V. Gurevich, S. B. Cahn, S. K. Lamoreaux, M. Simanovskaia, J. R. Root, S. M. Lewis, S. Al Kenany, K. M. Backes, I. Urdinaran, N. M. Rapidis, T. M. Shokair, K. A. van Bibber, D. A. Palken, M. Malnou, W. F. Kindel, M. A. Anil, K. W. Lehnert, and G. Carosi, Phys. Rev. Lett. 118, 061302 (2017a).
* Crescini _et al._ [2020] N. Crescini, D. Alesini, C. Braggio, G. Carugno, D. D’Agostino, D. Di Gioacchino, P. Falferi, U. Gambardella, C. Gatti, G. Iannone, C. Ligi, A. Lombardi, A. Ortolan, R. Pengo, G. Ruoso, and L. Taffarello, Phys. Rev. Lett. 124, 171801 (2020).
* Callen and Welton [1951] H. B. Callen and T. A. Welton, Phys. Rev. 83, 34 (1951).
* Clerk _et al._ [2010] A. A. Clerk, M. H. Devoret, S. M. Girvin, F. Marquardt, and R. J. Schoelkopf, Rev. Mod. Phys. 82, 1155 (2010).
* Caves [1982] C. M. Caves, Phys. Rev. D 26, 1817 (1982).
* Roy and Devoret [2016] A. Roy and M. Devoret, Comptes Rendus Physique 17, 740 (2016), quantum microwaves / Micro-ondes quantiques.
* Yamamoto _et al._ [2008] T. Yamamoto, K. Inomata, M. Watanabe, K. Matsuba, T. Miyazaki, W. D. Oliver, Y. Nakamura, and J. S. Tsai, Appl. Phys. Lett. 93, 042510 (2008).
* Dolan [1977] G. J. Dolan, Appl. Phys. Lett. 31, 337 (1977).
* Zhong _et al._ [2013] L. Zhong, E. P. Menzel, R. Di Candia, P. Eder, M. Ihmig, A. Baust, M. Haeberlein, E. Hoffmann, K. Inomata, T. Yamamoto, Y. Nakamura, E. Solano, F. Deppe, A. Marx, and R. Gross, New Journal of Physics 15 (2013), 10.1088/1367-2630/15/12/125013.
* Krantz _et al._ [2013] P. Krantz, Y. Reshitnyk, W. Wustmann, J. Bylander, S. Gustavsson, W. D. Oliver, T. Duty, V. Shumeiko, and P. Delsing, New Journal of Physics 15, 105002 (2013).
* Engen [1970] G. F. Engen, IEEE Transactions on Instrumentation and Measurement 19, 344 (1970).
* Kim [1979] J. E. Kim, Phys. Rev. Lett. 43, 103 (1979).
* Shifman, Vainshtein, and Zakharov [1980] M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, Nuclear Physics, Section B 166, 493 (1980).
* Lee _et al._ [2020] S. Lee, S. Ahn, J. Choi, B. R. Ko, and Y. K. Semertzidis, Phys. Rev. Lett. 124, 101802 (2020).
* Jeong _et al._ [2020] J. Jeong, S. Youn, S. Bae, J. Kim, T. Seong, J. E. Kim, and Y. K. Semertzidis, Phys. Rev. Lett. 125, 221302 (2020).
* Asztalos _et al._ [2001] S. Asztalos, E. Daw, H. Peng, L. J. Rosenberg, C. Hagmann, D. Kinion, W. Stoeffl, K. van Bibber, P. Sikivie, N. S. Sullivan, D. B. Tanner, F. Nezrick, M. S. Turner, D. M. Moltz, J. Powell, M.-O. André, J. Clarke, M. Mück, and R. F. Bradley, Phys. Rev. D 64, 092003 (2001).
* Brubaker _et al._ [2017b] B. M. Brubaker, L. Zhong, S. K. Lamoreaux, K. W. Lehnert, and K. A. van Bibber, Phys. Rev. D 96, 123008 (2017b).
* Yurke _et al._ [1989] B. Yurke, L. R. Corruccini, P. G. Kaminsky, L. W. Rupp, A. D. Smith, A. H. Silver, R. W. Simon, and E. A. Whittaker, Phys. Rev. A 39, 2519 (1989).
|
# Tuning the performance of a micrometer-sized Stirling engine through
reservoir engineering
Niloyendu Roy (corresponding author;<EMAIL_ADDRESS>Chemistry
and Physics of Materials Unit, Jawaharlal Nehru Centre for Advanced Scientific
Research, Jakkur, Bangalore - 560064, INDIA Nathan Leroux Unité Mixte de
Physique CNRS/Thales, 91767 Palaiseau, France A K Sood Department of
Physics, Indian Institute of Science, Bangalore- 560012, INDIA International
Centre for Materials Science, Jawaharlal Nehru Centre for Advanced Scientific
Research, Jakkur, Bangalore - 560064, INDIA Rajesh Ganapathy
(corresponding author;<EMAIL_ADDRESS>International Centre for
Materials Science, Jawaharlal Nehru Centre for Advanced Scientific Research,
Jakkur, Bangalore - 560064, INDIA School of Advanced Materials (SAMat),
Jawaharlal Nehru Centre for Advanced Scientific Research, Jakkur, Bangalore -
560064, INDIA
###### Abstract
Colloidal heat engines are paradigmatic models to understand the conversion of
heat into work in a noisy environment - a domain where biological and
synthetic nano/micro machines function. While the operation of these engines
across thermal baths is well-understood, how they function across baths with
noise statistics that are non-Gaussian and also lack memory, the simplest
departure from equilibrium, remains unclear. Here we quantified the
performance of a colloidal Stirling engine operating between an engineered
memoryless non-Gaussian bath and a Gaussian one. In the quasistatic limit, the
non-Gaussian engine functioned like an equilibrium one as predicted by theory.
On increasing the operating speed, due to the nature of the noise statistics, the onset of irreversibility for the non-Gaussian engine preceded that of its thermal counterpart and thus shifted the operating speed at which the power is maximum.
The performance of nano/micro machines can be tuned by altering only the
nature of reservoir noise statistics.
Experimental advances in nano/micro manipulation have made feasible the realization of mesoscale heat engines with only a single atom rossnagel2016single or colloidal particle blickle2012realization ; quinto2014microscopic ; martinez2016brownian ; ciliberto2017experiments ; martinez2017colloidal as the working fluid. Even though the functioning of these engines is strongly influenced by fluctuations in the local environment, with parameters like work and efficiency becoming stochastic quantities, when operating between equilibrium heat baths their cycle-averaged performance mirrors their macroscopic counterparts and standard thermodynamic relations apply sekimoto1998langevin ; sekimoto2010stochastic ; esposito2010efficiency ; seifert2012stochastic ; verley2014unlikely ; rana2014single . Recently,
Krishnamurthy et al. krishnamurthy2016micrometre experimentally realized an
active stochastic heat engine by replacing the isothermal branches of a
Stirling cycle with isoactive ones. Here, a colloidal particle in a time-
varying optical potential was periodically cycled across two bacterial
reservoirs characterized by different levels of activity. Unlike in
equilibrium thermal baths where the displacement distribution of the colloid,
$\rho(x)$, is a Gaussian, in active reservoirs, it was non-Gaussian and heavy-
tailed krishnamurthy2016micrometre ; wu2000particle . These rare large
displacement events resulted in large work output and the efficiency of this
active engine was found to surpass that of equilibrium engines, even those operating
between thermal baths with an infinite temperature difference. Since the
metabolic activity of the bacteria could not be altered rapidly, this engine
was operated only in the quasistatic limit, i.e. for a cycle duration $\tau$
larger than the relaxation time of the colloid. Subsequent theoretical
calculations for the $\tau\to\infty$ limit posited that a departure from
equilibrium efficiencies requires noise not just with non-Gaussian statistics
but also with memory, a feature typical of active baths due to the persistent
motion of the particles zakine2017stochastic . In fact, when the bath noise is
non-Gaussian and white, an effective temperature $T_{eff}$ defined through the
variance of $\rho(x)$ is thought to act like a bona fide temperature
zakine2017stochastic ; fodor2018non and engines operating between such baths
are expected to perform like equilibrium ones in the quasistatic limit.
Whether this similarity persists when $\tau$ is reduced and irreversibility
begins to set in is not known and is worth exploring since real heat engines
never operate in the quasistatic limit, where their power $P\to 0$. On the
experimental front, memoryless non-Gaussian heat baths are yet to be realised
and predictions even in the quasistatic limit remain untested.
Here we engineered a memoryless non-Gaussian heat bath and then constructed
and quantified the functioning of a colloidal Stirling heat engine operating
between such a bath and a thermal one for different $\tau$. In the quasistatic
limit, the performance of this non-Gaussian engine mirrored a classical
Stirling engine operating between thermal/Gaussian baths in agreement with
theoretical predictions. Strikingly, due primarily to differences in the noise
statistics of the baths, the small $\tau$ behaviour of these engines was quite
different. On lowering $\tau$, not only did the distribution of work done per cycle, $\rho(W_{\text{cyc}})$, by the non-Gaussian engine become increasingly negatively skewed, unlike the standard Stirling case where it remained Gaussian, but the onset of irreversibility for these two engines also differed. Importantly, we demonstrate that even sans memory, changing the
nature of noise statistics of the reservoirs between which an engine operates
allows tuning its performance characteristics, specifically, the $\tau$ at
which the power goes through a maximum.
## Results
### Reservoir engineering by flashing optical traps
Our experimental scheme for reservoir engineering is illustrated in Figure 1a.
A polystyrene colloidal particle of radius $R=2.5$ $\mu$m suspended in water
is held in a harmonic optical potential, $U={1\over 2}k_{1}x^{2}$, created by tightly focusing a laser beam (1064 nm ALS-IR-5-SF,
Azur Light Systems France) through a microscope objective (Leica Plan
Apochromat 100X, N.A. 1.4, oil) that is also used for imaging the particle
(see methods). Here, $k_{1}$ is the stiffness of this primary trap, $x$ is the
displacement of the colloid from the centre of the optical trap and
$\langle\rangle$ denotes an average. At equilibrium, the trap stiffness can be
determined through the equipartition relation ${1\over 2}k_{1}\langle
x^{2}\rangle={1\over 2}k_{B}T$ where $k_{B}$ is the Boltzmann constant and $T$
is the bath temperature, which in our experiments is fixed at 300 K. As a
first step, we attempted to engineer a reservoir that mimicked a thermal bath,
i.e. with Gaussian noise statistics, but with a desired $T_{eff}$. To this
end, we imposed an additional noise on the colloidal particle along one
spatial dimension, here the $x-$axis (Figure 1a), from a second optical trap
of fixed intensity but with a time-dependent centre that was flashed at a
distance $\delta a(t)$ away from the primary one (Figure 1b). This was made
possible by using a second laser (Excelsior 1064 nm, Spectra Physics USA)
coupled to the microscope through a spatial light modulator (Boulder Nonlinear
Systems USA) and the flashing frequency was held fixed at $34$ Hz (see
Methods). Earlier reservoir engineering studies wherein the colloidal particle
experienced only the potential from the flashing trap found that when $\delta
a$ was drawn from a Gaussian distribution, the particle indeed behaved like
one in a thermal bath but at a $T_{eff}>T$ and furthermore, when $\delta
a(t)<R$, the trap stiffness also remained unaltered berut2014energy ; chupeau2018thermal . Here we adhered to the same protocol and further ensured
that the peak of the $\delta a$ distribution coincided with the centre of the
primary trap. Thus, the effective trap stiffness in our experiments
$k=k_{1}+k_{2}$, where $k_{2}$ is the stiffness of the flashing trap. Like in
a thermal bath, $\rho(x)$ of the trapped colloidal particle was a Gaussian
(solid circles in Figure 1d) and its power spectral density (PSD) a
Lorentzian, allowing us to determine $k_{2}$ and hence $T_{eff}$
chupeau2018thermal (Supplementary Figure 1 and Supplementary Note 1). For the
$\delta a(t)$ profile shown in Figure 1b, the particle experienced a
$T_{eff}=1331$ K.
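To make the reservoir calibration concrete, the sketch below estimates $T_{eff}$ from the equipartition relation $k\langle x^{2}\rangle=k_{B}T_{eff}$; it is a minimal illustration, not the analysis code used here, and the trajectory and stiffness values are assumptions of the same order as those quoted in the text.

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def effective_temperature(x, k):
    # Equipartition: (1/2) k <x^2> = (1/2) k_B T_eff
    return k * np.var(x) / k_B

# Synthetic Gaussian displacements (metres) at T_eff = 1331 K for an
# assumed total stiffness k = 2.6e-6 N/m (= 2.6 pN/um).
rng = np.random.default_rng(0)
k = 2.6e-6
x = rng.normal(0.0, np.sqrt(k_B * 1331.0 / k), size=100_000)
print(effective_temperature(x, k))  # ~1331 K
```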
Engineering a memoryless non-Gaussian reservoir involved only a small tweak to
the manner in which the external noise was imposed on the colloidal particle.
The instantaneous $\delta a$ was now drawn randomly from a distribution with
zero mean and skew, as before, but with a high kurtosis (see Methods and
Supplementary Figure 2). Such a distribution has a narrow central region with
heavy tails. The flashing optical trap is thus mostly coincident with the
primary trap, thereby confining the particle strongly, and is occasionally
positioned a large distance away from the centre leading to a large excursion
by the particle (Figure 1c and Supplementary Movie). The overall noise
experienced by the particle is $\delta$-correlated as the thermal and imposed
noise are individually $\delta$-correlated. Under the influence of such a
noise, the corresponding $\rho(x)$ of the colloidal particle was also non-
Gaussian (hollow squares in Figure 1d). The PSD of the particle could be fit
to a Lorentzian over the accessible dynamic range and, since all other experimental parameters were held fixed, the roll-off frequency of the PSD was also the same as in the Gaussian case (Supplementary Figure 3 and Supplementary Note
1). For an appropriate choice of the variance and kurtosis of the $\delta a$
distribution, we could engineer the $T_{eff}$ of the non-Gaussian bath, again
defined through the variance of $\rho(x)$, to be nearly identical to that in a
Gaussian bath (Figure 1d).
### Performing a Stirling cycle between engineered reservoirs
Armed with the capability to engineer reservoirs, we built a colloidal
Stirling engine operating between a hot non-Gaussian and a cold Gaussian bath
held at temperatures $T_{eff}^{H}=1824$ K and $T_{eff}^{C}=1570$ K,
respectively. We also compared the performance of this non-Gaussian engine
with a standard Stirling engine operating between engineered Gaussian baths
with similar effective temperature difference (see Supplementary Note 2). The
Stirling cycle we executed with the trapped colloid (Figure 1e), like in
previous studies blickle2012realization ; krishnamurthy2016micrometre ;
schmiedl2007efficiency , comprised an isothermal compression (path ① - ②)
and expansion step (path ③ - ④) linked by two isochoric transitions (paths ② -
③ and ④ - ①). In the isothermal compression (expansion) steps, $k$ was
increased (decreased) linearly from $k_{min}=2.5$ pN$\mu$m$^{-1}$ to $k_{max}=2.7$ pN$\mu$m$^{-1}$ by changing $k_{1}$ alone. The isochoric transitions were near-instantaneous and occurred on millisecond time scales. We exploited the
ability to rapidly alter $T_{eff}$ and also the nature of noise statistics
through the SLM to explore engine performance over a range of $\tau$ which
spanned from 2 s to 32 s (see Methods).
### Elucidating the origins of irreversibility in the non-Gaussian Stirling
engine
The framework of stochastic thermodynamics provides a prescription for
calculating thermodynamic quantities like the work, power, and efficiency of
mesoscopic machines sekimoto1998langevin ; sekimoto2010stochastic ;
seifert2012stochastic ; schmiedl2007efficiency . The work done per cycle,
$W_{cyc}$, by the particle due to a modulation in the stiffness of the trap is
just the change in potential energy and is given by
$W_{cyc}=\int_{t_{i}}^{t_{i}+\tau}\frac{\partial U}{\partial k}\circ
dk\equiv\frac{1}{2}\int_{t_{i}}^{t_{i}+\tau}x^{2}\circ dk$. Here, the $\circ$
signifies that the product is taken in the Stratonovich sense and $t_{i}$ is
the starting time of the $i^{\text{th}}$ cycle. Owing to its stochastic nature,
$W_{cyc}$ of the engine fluctuates from cycle-to-cycle and we quantified the
nature of these fluctuations through the probability distribution function
$\rho(W_{cyc})$. Figure 2a and b show $\rho(W_{cyc})$ at different $\tau$ for
the thermal and non-Gaussian Stirling cycles, respectively. Focusing on the
large cycle duration ($\tau=32$ s) first, we observed that $\rho(W_{cyc})$ is
a Gaussian for the thermal and also for the non-Gaussian cycles (circles in
Figure 2a and b). The experimentally calculated average work done per cycle,
$\langle W_{cyc}\rangle$, is negative, indicating that the engine extracts heat
from the bath to perform work on the surroundings. Further, $\tau=32$ s
corresponds to the quasistatic limit for both since the value of $\langle
W_{cyc}\rangle$ is in excellent agreement with the theoretically calculated
quasistatic Stirling work output,
$W_{\infty}=k_{B}(T_{eff}^{C}-T_{eff}^{H})\ln\sqrt{\frac{k_{max}}{k_{min}}}$
(short solid horizontal lines in Figure 2c).
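As a minimal sketch of how the Stratonovich integral for $W_{cyc}$ above can be evaluated from sampled data, the following discretizes $\frac{1}{2}\int x^{2}\circ dk$ with the midpoint rule; the time series $x$ and $k$ are hypothetical inputs, not the experimental records.

```python
import numpy as np

def work_per_cycle(x, k):
    # Midpoint (Stratonovich) discretization of W = (1/2) int x^2 o dk.
    # x : sampled particle displacements over one cycle
    # k : trap stiffness sampled at the same instants
    x2_mid = 0.5 * (x[:-1] ** 2 + x[1:] ** 2)  # x^2 at increment midpoints
    dk = np.diff(k)                            # stiffness increments
    return 0.5 * np.sum(x2_mid * dk)
```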
On lowering $\tau$, $\rho(W_{cyc})$ for the thermal Stirling engine remained a
Gaussian (Figure 2a) and $\langle W_{cyc}(\tau)\rangle\approx\langle
W_{cyc}(\tau=32\text{ s})\rangle$ (hollow circles Figure 2c). As expected of
such a distribution, $\langle W_{cyc}\rangle$ was the same as the most-
probable work $W^{*}$ \- the value of $W_{cyc}$ where $\rho(W_{cyc})$ is a
maximum (solid circles Figure 2c). For the non-Gaussian engine on the other
hand, on reducing $\tau$, $\rho(W_{cyc})$ became increasingly negatively
skewed (Figure 2b) and $W^{*}(\tau)$ also became increasingly positive (solid
squares Figure 2c). $\langle W_{cyc}(\tau)\rangle$, however, was marginally smaller than $\langle W_{cyc}(\tau=32\text{ s})\rangle$ (hollow squares
Figure 2c). We note that the work done by a thermal Stirling engine at a
finite $\tau$ is given by the empirical relation schmiedl2007efficiency ;
blickle2012realization
$W(\tau)=W_{\infty}+W_{diss}\equiv W_{\infty}+\frac{\Sigma}{\tau}$ (1)
where, $W_{diss}$ is the dissipative work which accounts for the particle's
inability to fully explore the available phase space when $k$ is rapidly
lowered during the hot isotherm and $\Sigma$ is a constant also called the
irreversibility parameter. Since $W_{diss}$ is a positive quantity by definition, at small enough $\tau$, the overall work done itself can be positive, indicating the stalling of the engine. Clearly there is no buildup of
irreversibility for the thermal engine as $\tau$ is lowered since $\langle
W_{cyc}(\tau)\rangle\equiv W^{*}(\tau)\approx W_{\infty}$, while for the non-
Gaussian one, there is, even if only in the most-probable sense ($\langle
W_{cyc}(\tau)\rangle\approx W_{\infty}<W^{*}(\tau)$), and the engine stalls
for $\tau\leq 10$ s. We also found excellent agreement between equation (1)
and our data allowing us to determine $\Sigma=0.11\text{ }k_{B}T_{eff}^{C}$
(red solid line in Figure 2c).
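A minimal sketch of this fit to equation (1) with scipy; the $(\tau, W^{*})$ pairs below are illustrative numbers of the same magnitude as in Figure 2c, not the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

def finite_time_work(tau, W_inf, Sigma):
    # Equation (1): W(tau) = W_inf + Sigma / tau
    return W_inf + Sigma / tau

tau = np.array([2.0, 5.6, 10.6, 16.0, 32.0])             # s (illustrative)
W_star = np.array([0.049, 0.014, 0.004, 0.001, -0.003])  # k_B T_eff^C units
(W_inf, Sigma), _ = curve_fit(finite_time_work, tau, W_star)
print(W_inf, Sigma)  # Sigma ~ 0.11 for these illustrative numbers
```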
The observed behaviour of the non-Gaussian engine can be easily rationalized
by analyzing the relaxation of the particle in the hot isotherm at the level
of an individual cycle. For the particle to fully sample the statistical
properties of the non-Gaussian hot reservoir, it should also experience the
occasional large kicks that displace it far from the center and not just the
ones that predominantly keep it confined close to it. As $\tau$ is lowered, in
most cycles, the probability that the particle encounters a large kick in the
isothermal expansion step also becomes increasingly small. Due to the
incomplete exploration of the available phase volume in these cycles, less
useful work is performed and $W^{*}(\tau)$ lifts off with decreasing $\tau$.
In a few cycles, where these large kicks are present, anomalously large work
is done by the engine and this results in $\rho(W_{cyc})$ being negatively
skewed. When an adequate number of cycles, which has to be increased when
$\tau$ is lowered, have been performed, all features of the noise are sampled
and the engine operates like one in the quasistatic limit in an average sense
with $\langle W_{cyc}(\tau)\rangle\to W_{\infty}$ (Figure 2c). This inference
can be strengthened by quantifying the equilibration of the particle over a
fixed, but limited, number of cycles for all $\tau$. In Figure 2d, we show
${k\langle x^{2}\rangle\over k_{B}T_{eff}^{H}}$ calculated over a small window
in the middle of the hot isotherm and averaged over $N=50$ cycles for the
thermal (squares) and the non-Gaussian engine (circles). Despite $N$ being
small, ${k\langle x^{2}\rangle\over k_{B}T_{eff}^{H}}$ is close to 1 at all
$\tau$ for the thermal engine implying that it is truly in the quasistatic
limit, while for the non-Gaussian engine this is the case only at large $\tau$
with a clear violation of quasistaticity setting in for $\tau\leq 10$ s.
Evidently, for a non-Gaussian engine $W^{*}(\tau)$, and not $\langle
W_{cyc}(\tau)\rangle$, is a more precise metric for performance.
### Tuning the performance of a Stirling engine through memoryless non-
Gaussian noise
We now examined how differences in the nature of noise-statistics affected the
power output of our engines. In the quasistatic limit $P(\tau)=-{\langle
W_{cyc}(\tau)\rangle\over\tau}\to 0$ as $\tau\to\infty$, while at high
cycle frequencies $W_{diss}$ is large and $P$ is once again small. At
intermediate $\tau$, however, these effects compete resulting in a maximum in
$P$ and this is a feature of both macroscopic and mesoscopic engines
blickle2012realization ; curzon1975efficiency . Figure 3a shows the most-
probable power, $P^{*}(\tau)={-W^{*}(\tau)\over\tau}$, for the Gaussian
Stirling engine (circles) and the non-Gaussian one (squares). Since for the
thermal engine, over the range of $\tau$ studied $\Sigma=0$, $P^{*}(\tau)$,
which is the same as $P(\tau)$, only increases monotonically on lowering
$\tau$ and does not exhibit a maximum. Whereas for the non-Gaussian engine, on
reducing $\tau$, $P^{*}(\tau)$ first appears to increase slightly, crosses
zero for $\tau<10$ s and then becomes more negative indicating stalling of the
engine. We emphasize that for a Stirling cycle executed under conditions
identical to that in Figure 1e but where the non-Gaussian reservoir is
replaced by a Gaussian one with the same $T_{eff}^{H}$, the maximum in $P$ is
expected to be at a $\tau$ lower than that of even the Gaussian engine studied
here (see Supplementary Note 3). This clearly shows that, even sans memory,
altering the statistical properties of the noise bath alone allows for tuning
the performance characteristics of mesoscopic heat engines.
For a complete understanding of the operation of the non-Gaussian engine, we
calculated its efficiency at various $\tau$ and benchmarked it with the
thermal engine. Conventionally, the efficiency, $\varepsilon={W_{cyc}\over
Q}$, where $Q$ is the heat absorbed by the particle when it is in contact with
the hot reservoir. $Q$ is the sum of the isochoric heat during the transition
from state point ② to ③, $Q_{2\to 3}=-{1\over
2}k_{max}(T_{eff}^{H}-T_{eff}^{C})$ and the isothermal heat during transition
from ③ to ④, $Q_{3\to 4}=\int_{(3)}^{(4)}\frac{\partial U}{\partial
x}\dot{x}dt=W_{H}+Q_{boundary}$. Here,
$W_{H}=\frac{1}{2}\int_{(3)}^{(4)}x^{2}\circ dk$ is the work done in the hot
isotherm and $Q_{boundary}=-\frac{1}{2}[k(t)x^{2}(t)]_{(3)}^{(4)}$. For the
non-Gaussian engine, we naturally chose $W^{*}$ instead of $W_{cyc}$ and
defined the most-probable efficiency $\varepsilon^{*}={W^{*}\over{\langle
W_{H}\rangle+\langle Q_{boundary}\rangle+\langle Q_{isochoric}\rangle}}$ (See
Supplementary Note 4). For the thermal engine, the experimentally determined
$\varepsilon^{*}$ (black circles in Figure 3b) hovers around the theoretically
calculated saturation Stirling efficiency
$\varepsilon_{Sat}=\varepsilon_{c}[1+{\varepsilon_{c}\over\ln(k_{max}/k_{min})}]^{-1}$
(solid blue line). Here, $\varepsilon_{c}=1-{T_{eff}^{C}\over T_{eff}^{H}}$ is
the Carnot efficiency. Whereas for the non-Gaussian engine,
$\varepsilon^{*}(\tau)$ converges to $\varepsilon_{Sat}$ only at large $\tau$
(red squares in Figure 3b). When $\tau$ is reduced, $\varepsilon^{*}(\tau)$
drops and becomes negative for $\tau<10$ s indicating stalling of the engine.
Of particular importance in the operation of real heat engines is the
efficiency at maximum power $\varepsilon_{Max}$, which for the non-Gaussian
engine is at $\tau=10.3$ s with $\varepsilon_{Max}=0.025$. Most remarkably,
this value is in excellent agreement with the theoretically predicted Curzon-Ahlborn
efficiency, $\varepsilon_{CA}={\varepsilon_{Sat}\over
2-\alpha\varepsilon_{Sat}}=0.026$ curzon1975efficiency ;
schmiedl2007efficiency . In our experiments, $\alpha\sim 0$ is a constant
calculated from the irreversibility parameters corresponding to the work done
in the hot and cold isotherms (Supplementary Figure 4 and Supplementary Note
5). While it is known that $\varepsilon_{Max}\approx\varepsilon_{CA}$ for both
macro and mesoscopic thermal engines, ours is the first observation of this
being the case even for a non-Gaussian engine.
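As a quick check on these numbers, the quoted efficiencies follow directly from the temperatures and stiffnesses given above; a short computation (our own arithmetic, not part of the original analysis):

```python
import numpy as np

T_C, T_H = 1570.0, 1824.0      # effective temperatures, K
k_min, k_max = 2.5, 2.7        # trap stiffnesses, pN/um

eps_c = 1.0 - T_C / T_H                                  # Carnot
eps_sat = eps_c / (1.0 + eps_c / np.log(k_max / k_min))  # saturation Stirling
eps_CA = eps_sat / 2.0                                   # Curzon-Ahlborn, alpha ~ 0
print(eps_c, eps_sat, eps_CA)  # ~0.139, ~0.050, ~0.025 (cf. quoted 0.026)
```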
## Discussion
Collectively, our experiments show that a micrometer-sized Stirling engine
operating between a Gaussian and a non-Gaussian bath, without memory, indeed
performs like a conventional engine in the quasistatic limit as anticipated by
theory. On lowering the cycle times, the buildup of irreversibility in the
engine, due entirely to the non-Gaussian nature of noise, results in work
distributions that become increasingly negatively skewed, unlike a thermal engine where they remain Gaussian. Strikingly, this noise-induced enhancement
of irreversibility modulates the performance characteristics of the non-
Gaussian engine in a manner similar to predictions by Curzon and Ahlborn for
thermal engines where irreversibility sets in purely due to the rapid change
of the control parameter. Our experiments thus reveal a new strategy for
optimizing the performance of a mesoscale engine by tuning only the nature of
noise statistics. Importantly, the ease with which the noise can be engineered
and also applied locally, i.e. on the particle scale, in our approach presents
advantages over other reservoir engineering methods where this can prove to be
difficult, if not impossible martinez2013effective ; martinez2017colloidal .
This should now make feasible the experimental realization of new stochastic
machines like the non-Gaussian and the Büttiker–Landauer ratchet
luczka1997symmetric ; buttiker1987transport ; landauer1988motion .
## Methods
### Experimental set-up for Reservoir Engineering
In order to impart additional noise into the trapped colloid, a secondary
optical trap was flashed along a line passing through the time-averaged centre
of the particle at variable distances from it. This was achieved by
coupling a second laser (Excelsior 1064 nm, Spectra Physics USA) to the
microscope which is reflected from a Spatial Light Modulator (Boulder
Nonlinear Systems USA). The Spatial Light Modulator (SLM) contains a
$512\times 512$ array of reflective electrodes covered with a transparent liquid
crystal layer so that an electric potential modulation across the electrodes
imposes an additional phase pattern on the incident beam. We interfaced the
SLM to a computer so that a series of desired phase patterns can be fed to the
SLM at a fixed frequency of $34$ Hz. This enabled us to dynamically reconfigure
the position of the first order diffraction spot by applying a series of
linear diffraction grating patterns with varying periodicity which is
controlled through a computer. We blocked the zeroth order spot so that only
the first order spot is incident on the back of the microscope objective
resulting in a flickering optical trap in the vicinity of the tweezed
colloidal particle.
### Image acquisition and processing
Images of the trapped colloid were captured at $250$ Hz using a fast camera (Photron 500K-M3) attached to the microscope. The position of the particle's centre in each frame was located at the subpixel level using the particle tracking codes by R. Parthasarathy parthasarathy2012rapid . This allowed us to determine the particle's position with an accuracy of $5$ nm.
### Non-Gaussian Reservoir Engineering
For engineering the non-Gaussian reservoir, the values of $\delta a$ were chosen from a $\delta$-correlated distribution with zero mean and skewness but an extremely high kurtosis of 50. One such distribution with a standard deviation of $\sigma=0.28$ $\mu$m is represented in Supplementary Figure 2(b). To create this distribution, we first generate two highly asymmetric distributions $\delta a_{L}$ and $\delta a_{R}$ (Supplementary Figure 2(a)) with a standard deviation of $0.28$ $\mu$m, a kurtosis of $60$ and a skewness of $-6.5$ for $\delta a_{L}$ and $+6.5$ for $\delta a_{R}$ through Pearson's protocol in MATLAB. Next, we add/subtract a suitable offset to $\delta a_{L}$ and $\delta a_{R}$ so that their peaks coincide at zero. Then we take the union of $\delta a_{L}$ and $\delta a_{R}$ and randomly permute all the elements to finally obtain the set of $\delta a$. In order to realize a desired effective temperature with such a noise, the standard deviation of $\delta a$ is optimized. It should be noted that heavy tails arise due to extremely rare events that can only be captured with large statistics. Since we are limited by a flashing frequency of $34$ Hz, it is not possible to completely sample the statistics within one isotherm, even for the largest $\tau$. To address this issue, the engine was cycled a sufficient number of times (depending on $\tau$) so that the collection of all the hot isotherms exhausts all the rare events.
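A minimal sketch of this protocol, under the assumption that scipy's pearson3 (Pearson type III) can stand in for MATLAB's Pearson routine; the histogram-based peak shift and the parameter values mirror the description above, but the code itself is illustrative.

```python
import numpy as np
from scipy.stats import pearson3

rng = np.random.default_rng(1)
sigma, skew, n = 0.28, 6.5, 200_000  # std in micrometres, per the text

def peaked_at_zero(samples, bins=2001):
    # Shift a sample set so its histogram peak (mode) sits at zero.
    counts, edges = np.histogram(samples, bins=bins)
    i = np.argmax(counts)
    return samples - 0.5 * (edges[i] + edges[i + 1])

# Two strongly skewed halves (skew -6.5 and +6.5), peaks shifted to zero,
# then pooled and permuted to obtain a zero-mean, zero-skew, heavy-tailed set.
da_L = peaked_at_zero(pearson3.rvs(-skew, scale=sigma, size=n, random_state=rng))
da_R = peaked_at_zero(pearson3.rvs(+skew, scale=sigma, size=n, random_state=rng))
delta_a = rng.permutation(np.concatenate([da_L, da_R]))
```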
### Instantaneous isochoric transitions
The isochoric transitions ②$\rightarrow$③ and ④$\rightarrow$① shown in Figure 1e of the main text are realised by changing the statistics and the variance of the $\delta a$-distribution. The transition ②$\rightarrow$③ is realised by changing the $\delta a$ distribution from a Gaussian resulting in $T_{eff}=1570$ K to a non-Gaussian producing $T_{eff}=1824.3$ K, while the transition ④$\rightarrow$① is realised by the reverse. Since the secondary laser is diffracted by a computer-controlled SLM, the distribution from which the $\delta a$ values are chosen can be altered in $1/34$th of a second. Thus the particle is decoupled from one engineered reservoir and coupled to the other in less than $33$ ms, which is negligible even compared to the lowest cycle time, and hence effectively instantaneous.
## Acknowledgements
N.R. thanks Dr. Sudeesh Krishnamurthy for fruitful discussions. N.R. thanks
Jawaharlal Nehru Centre for Advanced Scientific Research (JNCASR) for
financial support. A.K.S. thanks the Department of Science and Technology (DST), Govt. of India for a Year of Science Fellowship. R.G. thanks JNCASR for
financial support.
## Author Contributions
N.R., N.L., A.K.S. and R.G. designed experiments. N.R., N.L. and R.G. devised
experimental procedures. N.R. performed experiments and carried out data
analysis. N.R. and R.G. wrote the paper with inputs from N.L. and A.K.S.
## References
* (1) Roßnagel, J. _et al._ A single-atom heat engine. _Science_ 352, 325–329 (2016).
* (2) Blickle, V. & Bechinger, C. Realization of a micrometre-sized stochastic heat engine. _Nature Physics_ 8, 143–146 (2012).
* (3) Quinto-Su, P. A. A microscopic steam engine implemented in an optical tweezer. _Nature Communications_ 5, 1–7 (2014).
* (4) Martínez, I. A. _et al._ Brownian Carnot engine. _Nature Physics_ 12, 67–70 (2016).
* (5) Ciliberto, S. Experiments in stochastic thermodynamics: Short history and perspectives. _Physical Review X_ 7, 021051 (2017).
* (6) Martínez, I. A., Roldán, É., Dinis, L. & Rica, R. A. Colloidal heat engines: a review. _Soft Matter_ 13, 22–36 (2017).
* (7) Sekimoto, K. Langevin equation and thermodynamics. _Progress of Theoretical Physics Supplement_ 130, 17–27 (1998).
* (8) Sekimoto, K. _Stochastic energetics_, vol. 799 (Springer, 2010).
* (9) Esposito, M., Kawai, R., Lindenberg, K. & Van den Broeck, C. Efficiency at maximum power of low-dissipation Carnot engines. _Physical Review Letters_ 105, 150603 (2010).
* (10) Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. _Reports on Progress in Physics_ 75, 126001 (2012).
* (11) Verley, G., Esposito, M., Willaert, T. & Van den Broeck, C. The unlikely Carnot efficiency. _Nature Communications_ 5, 1–5 (2014).
* (12) Rana, S., Pal, P., Saha, A. & Jayannavar, A. Single-particle stochastic heat engine. _Physical Review E_ 90, 042146 (2014).
* (13) Krishnamurthy, S., Ghosh, S., Chatterji, D., Ganapathy, R. & Sood, A. A micrometre-sized heat engine operating between bacterial reservoirs. _Nature Physics_ 12, 1134–1138 (2016).
* (14) Wu, X.-L. & Libchaber, A. Particle diffusion in a quasi-two-dimensional bacterial bath. _Physical Review Letters_ 84, 3017 (2000).
* (15) Zakine, R., Solon, A., Gingrich, T. & Van Wijland, F. Stochastic Stirling engine operating in contact with active baths. _Entropy_ 19, 193 (2017).
* (16) Fodor, É., Hayakawa, H., Tailleur, J. & van Wijland, F. Non-Gaussian noise without memory in active matter. _Physical Review E_ 98, 062610 (2018).
* (17) Bérut, A., Petrosyan, A. & Ciliberto, S. Energy flow between two hydrodynamically coupled particles kept at different effective temperatures. _EPL (Europhysics Letters)_ 107, 60004 (2014).
* (18) Chupeau, M. _et al._ Thermal bath engineering for swift equilibration. _Physical Review E_ 98, 010104 (2018).
* (19) Schmiedl, T. & Seifert, U. Efficiency at maximum power: An analytically solvable model for stochastic heat engines. _EPL (Europhysics Letters)_ 81, 20003 (2007).
* (20) Curzon, F. L. & Ahlborn, B. Efficiency of a Carnot engine at maximum power output. _American Journal of Physics_ 43, 22–24 (1975).
* (21) Martinez, I. A., Roldán, E., Parrondo, J. M. & Petrov, D. Effective heating to several thousand kelvins of an optically trapped sphere in a liquid. _Physical Review E_ 87, 032159 (2013).
* (22) Łuczka, J., Czernik, T. & Hänggi, P. Symmetric white noise can induce directed current in ratchets. _Physical Review E_ 56, 3968 (1997).
* (23) Büttiker, M. Transport as a consequence of state-dependent diffusion. _Zeitschrift für Physik B Condensed Matter_ 68, 161–167 (1987).
* (24) Landauer, R. Motion out of noisy states. _Journal of Statistical Physics_ 53, 233–248 (1988).
* (25) Parthasarathy, R. Rapid, accurate particle tracking by calculation of radial symmetry centers. _Nature Methods_ 9, 724–726 (2012).
Figure 1: Experimental realization of a non-Gaussian Stirling heat engine. a
The big red spot represents the primary optical trap and the small red spots
represent the secondary flashing optical trap at different time instances
$t_{1}$, $t_{2}$ and $t_{3}$. b and c show the distance $\delta a(t)$ from the
primary trap at which the secondary trap was flashed as a function of $t$ for
engineering a Gaussian and a non-Gaussian reservoir, respectively. d shows the
probability distribution of particle displacements, $\rho(x)$, for the
engineered Gaussian/thermal (solid blue circles) and the non-Gaussian
reservoir (red hollow squares) for a nearly identical $T_{eff}$. e shows a
quintessential Stirling cycle between a hot non-Gaussian bath at
$T_{eff}^{H}=1824$ K and a cold Gaussian reservoir with $T_{eff}^{C}=1570$ K.
The trap stiffness $k$ is varied linearly in the expansion/compression steps.
Having a fixed primary trap and a second flashing optical trap, as opposed to
just the latter, prevented the trapped particle from escaping the trap and
allowed for long experiments. $\rho(x)$ of the particle measured at the four
state points labeled ① to ④ is also shown. The black lines are Gaussian fits.
Figure 2: Buildup of irreversibility in the non-Gaussian Stirling engine at
finite $\tau$. a and b show the probability distribution of work done per
cycle $\rho(W_{cyc})$ for the Gaussian and the non-Gaussian engine,
respectively, for $\tau=32$ s (blue circles), $10.6$ s (red triangles) and
$5.6$ s (black squares). Solid lines represent corresponding Gaussian fits to
the data. c Red hollow and solid squares show the average work done per cycle
$\langle W_{cyc}\rangle$ and the most-probable work $W^{*}$, respectively, for
the non-Gaussian engine at various $\tau$. The red solid line is a fit to
Equation 1. Black hollow and solid circles show $\langle W_{cyc}\rangle$ and
$W^{*}$ respectively for the thermal/Gaussian engine. At large $\tau$, the
experimentally calculated work for these engines agrees with theoretically
calculated quasistatic work $W_{\infty}$ indicated by the red and black short horizontal lines for the non-Gaussian and Gaussian engine, respectively. d Red
squares (black circles) represent the ratio $k\langle
x^{2}\rangle/k_{B}T_{eff}^{H}$ calculated at the midpoint of the hot isotherm
of the non-Gaussian (Gaussian) engine at various $\tau$. The horizontal line
indicates the equilibrium condition, which is strongly violated inside the
shaded grey region, in the case of the non-Gaussian engine. Figure 3: Quantifying
the performance of a non-Gaussian Stirling engine. In a, red squares (black
circles) show the most-probable power $P^{*}$ of the non-Gaussian (Gaussian)
engine at various $\tau$. $P^{*}$ increases slightly and then rapidly falls
for the non-Gaussian engine for $\tau\leq 10.6$ s. The red solid line is
calculated from the fit to Equation 1 and is overlaid on the experimental
data. b Red squares (black circles) represent the most-probable efficiency
$\varepsilon^{*}$ of the non-Gaussian (Gaussian) engine at various $\tau$. The
blue solid lines indicate the theoretically calculated saturation Stirling
efficiency, $\varepsilon_{Sat}$. The efficiency $\varepsilon_{Max}$ just before the rapid drop in power ($\tau=10.6$ s) of the non-Gaussian engine agrees with
the Curzon-Ahlborn efficiency $\varepsilon_{CA}$. Note that the black vertical
line through the first data point (smallest $\tau$) is a portion of a large
error bar. The error bars at other $\tau$ values are smaller than the symbol
size.
|
# triSurfaceImmersion: Computing volume fractions and signed distances from
triangulated surfaces immersed in unstructured meshes
Tobias Tolle<EMAIL_ADDRESS>Dirk Gründing<EMAIL_ADDRESS>darmstadt.de Dieter Bothe<EMAIL_ADDRESS>Tomislav Marić
<EMAIL_ADDRESS>Mathematical Modeling and Analysis Institute,
Mathematics department, TU Darmstadt,
Alarich-Weiss-Straße 10, 64287 Darmstadt, Germany
###### Abstract
We propose a numerical method that enables the calculation of volume fractions
from triangulated surfaces immersed in unstructured meshes. First, the signed
distances are calculated geometrically near the triangulated surface. For this
purpose, the computational complexity has been reduced by using an octree
space subdivision. Second, an approximate solution of the Laplace equation is
used to propagate the inside/outside information from the surface into the
solution domain. Finally, volume fractions are computed from the signed
distances in the vicinity of the surface. The volume fraction calculation
utilizes either geometrical intersections or a polynomial approximation based
on signed distances. An adaptive tetrahedral decomposition of polyhedral cells
ensures a high absolute accuracy. The proposed method extends the admissible
shape of the fluid interface (surface) to triangulated surfaces that can be
open or closed, disjoint, and model objects of technical geometrical
complexity.
Current results demonstrate the effectiveness of the proposed algorithm for
two-phase flow simulations of wetting phenomena, but the algorithm has broad
applicability. For example, the calculation of volume fractions is crucial for
achieving numerically stable simulations of surface tension-driven two-phase
flows with the unstructured Volume-of-Fluid method. The method is applicable
as a discrete phase-indicator model for the unstructured hybrid Level Set /
Front Tracking method.
The implementation is available on GitLab [27].
This is a pre-print of the accepted article https://doi.org/10.1016/j.cpc.2021.108249; when citing, please refer to the accepted article.
###### keywords:
volume of fluid , triangular surface mesh , signed distances , unstructured
mesh
PROGRAM SUMMARY
Program Title: argo/triSurfaceImmersion
CPC Library link to program files: (to be added by Technical Editor)
Developer’s repository link: https://gitlab.com/leia-methods/argo
Code Ocean capsule: (to be added by Technical Editor)
Licensing provisions: GPLv3
Programming language: C++
Nature of problem:
Computing volume fractions and signed distances from triangulated surfaces
immersed in unstructured meshes.
Solution method:
First, the algorithm computes minimal signed distances between mesh points
(cell centers and cell corner-points) and the triangulated surface, in the
close vicinity of the surface. The sign is computed with respect to the
surface normal orientation. Afterwards, the sign is propagated throughout the
unstructured volume mesh by an approximate solution of a diffusion equation.
The bulk cells’ volume fractions are set, and interface cells are identified
based on the signed distances. Volume fractions in cells intersected by the
triangulated surface mesh are either computed by geometric intersections
between surface triangles and a cell or by a polynomial approximation of the volume fraction from signed distances, coupled with tetrahedral cell
decomposition and refinement.
Additional comments including restrictions and unusual features:
The volume mesh can consist of cells of arbitrary shape. The surface mesh
normal vectors need to be oriented consistently.
## 1 Introduction
We present a new numerical algorithm that calculates initial conditions for
simulations of two-phase flow problems for fluid interfaces of complex shapes.
The initial conditions are calculated in the form of signed distances and
volume fractions from fluid interfaces approximated as arbitrarily shaped
triangular surfaces immersed into unstructured meshes. The signed distances
are relevant as initial conditions for the Level Set method [48, 49] for
multiphase flow simulation. Volume fractions on unstructured meshes are
required for the unstructured Volume-of-Fluid (VOF) method (cf. [30] for a
recent review). In fact, we have applied the proposed algorithms to model
experimental fluid interfaces from wetting experiments [16], which was not
possible using available contemporary approaches that model fluid interfaces
using (compositions of) implicit functions or parameterized surfaces. The
proposed algorithm approximates the surfaces using triangle meshes that are
omnipresent in Computer-Aided Design (CAD) because of their versatility: they
can approximate basic surfaces such as spheres and ellipsoids, but also
surfaces of mechanical parts, disjoint surfaces in mechanical assemblies, or
surfaces resulting from imaging scans.
The overall simulation domain $\Omega\subset\mathbb{R}^{3}$ is separated into
two subdomains $\Omega=\Omega^{+}(t)\cup\Omega^{-}(t)$, representing phase $1$
and phase $2$, respectively, as illustrated for a liquid drop on a surface in
fig. 1. At the contact line
$\Gamma:=\partial\Omega\cap\overline{\Omega^{+}}\cap\overline{\Omega^{-}}$,
the liquid-gas interface $\Sigma$ encloses a contact angle $\theta$ with the
solid surface $\partial\Omega_{\text{wall}}$. Furthermore, the normal vector
$\boldsymbol{n}_{\Sigma}$ of the interface $\Sigma$ is oriented such that it
points into the gas phase.
Figure 1: The different domains for a liquid ($-$) drop on a solid surface
surrounded by a gas ($+$) phase.
Typically, a continuum mechanical model is used for the description of such
fluid mechanical problems. This description is often based on a sharp
interface model, as depicted in fig. 1. With this model, the liquid-gas
interface can be described using an indicator function
$\chi(\mathbf{x},t):=\begin{cases}1,&\mathbf{x}\in\Omega^{-}\subset\mathbb{R}^{3}\\ 0,&\text{otherwise}.\end{cases}$ (1)
An approximate solution of this model requires a decomposition of the solution
domain into volumes that have no volume overlaps, the closed _cells_
$\Omega_{c}$, denoted by
$\Omega\approx\tilde{\Omega}=\\{\Omega_{c}\\}_{c\in C}$ (2)
where $C=\\{1,2,3,\dots,N_{c}\\}$ is a set of indices to mesh cells. As can be
seen in fig. 2, the mesh is a set of non-overlapping subsets (_cells_)
$\Omega_{c}\subset\tilde{\Omega}$. By non-overlapping, we mean that the
volume of an intersection between any two cells is zero. _Index sets_
represent the unstructured mesh data [15]. We consider a set of cell corner-
points $P_{h}$ where each point in $P_{h}$ is an element of $\mathbb{R}^{3}$.
Geometrically, each cell $\Omega_{c}$ is a volume bounded by polygons, so-
called _faces_. A global set of faces $F_{h}$ is defined, and each face is a
sequence of _indices_ of points in $P_{h}$. In this context, we define a cell
set $C_{c}$ as a set of indices of faces in the set of mesh faces $F_{h}$.
Therefore, when referring to a volume defined by the cell, we use $\Omega_{c}$
and its magnitude is then $|\Omega_{c}|$, and when we refer to the cell as an
unordered index set, we use $C_{c}$ and its magnitude $|C_{c}|$ is the number
of faces that bound the cell.
Solutions of continuum mechanical problems in geometrically complex solution
domains significantly benefit from unstructured meshes. For example, gradients
of solution variables are resolved at geometrically complex boundaries by
employing mesh boundary layers, strongly reducing the number of cells required
to achieve a specific accuracy. Hence, this approach reduces the overall required
computational resources.
As the phase indicator $\chi(\mathbf{x},t)$ given by eq. 1 contains a jump
discontinuity, it poses difficulties for numerical simulations of two-phase
flows. With Volume-of-Fluid (VOF) methods, this non-continuous description is
discretized by introducing the so-called _volume fraction_
$\alpha_{c}=\dfrac{1}{|\Omega_{c}|}\int_{\Omega_{c}}\chi(\mathbf{x},t)dx.$ (3)
The unstructured VOF methods [30] rely on the volume fraction field
$\alpha_{c}$ to track the interface with the advecting velocity obtained from the
solution of two-phase Navier-Stokes equations in a single-field formulation.
All multiphase flow simulation methods that utilize the single-field
formulation of Navier-Stokes equations approximate the phase-indicator
function similarly to eq. 3. The phase-indicator approximation utilizes signed
distances in the Level Set [48, 47, 49] method, while the volume fractions approximate the phase indicator in the Volume-of-Fluid [8, 37, 17, 40] method.
Various methods exist that compute the volume fraction $\alpha_{c}$ based on
the exact phase indicator $\chi(\mathbf{x},t)$. The majority of methods
calculate the integral in eq. 3 numerically, as schematically shown in fig. 2,
using numerical quadrature.
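For the setting of fig. 2, a minimal sketch of such a quadrature: the volume (here, area) fraction of a square cell cut by a circular interface, obtained by sampling the indicator $\chi$ on an $n\times n$ sub-grid of midpoints. The cell and circle geometry are illustrative assumptions.

```python
import numpy as np

def volume_fraction(cell_min, cell_max, center, radius, n=200):
    # Midpoint quadrature of eq. (3): average chi over n x n sub-cells.
    hx = (cell_max[0] - cell_min[0]) / n
    hy = (cell_max[1] - cell_min[1]) / n
    xs = cell_min[0] + hx * (np.arange(n) + 0.5)
    ys = cell_min[1] + hy * (np.arange(n) + 0.5)
    X, Y = np.meshgrid(xs, ys)
    chi = (X - center[0]) ** 2 + (Y - center[1]) ** 2 <= radius ** 2
    return chi.mean()

print(volume_fraction((0, 0), (1, 1), (0, 0), 1.0))  # ~ pi/4 ~ 0.785
```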
Figure 2: Calculating volume fractions of a circular interface by numerical
integration.
Different approaches are outlined below with increasing complexity in terms of
admissible shapes of the fluid interface. The admissible shapes range from
analytic descriptions of basic geometric shapes such as spheres and ellipsoids
to implicit functions (or their combinations) and more general shapes
approximated with volume meshes.
Strobl et al. [46] propose an exact intersection between a sphere and a
tetrahedron, a wedge, or a hexahedron. The proposed algorithm is exact and
fast, though it is limited to the spherical interface shape.
Fries and Omerović [13] represent the fluid interface as a level set and
propose a higher-order quadrature for the integral on the right-hand side of
eq. 3. The parametrization of the surface uses roots of the implicit function
found by the closest-point algorithm. Results are presented for hexahedral and
tetrahedral unstructured meshes that may also be strongly deformed. Fries and
Omerović [13, fig. 52, fig. 53] also show results with higher-order ($>2$)
convergence for the volume integration of an arbitrary non-linear function on
hexahedral and tetrahedral meshes. However, the volume and area integration
error is reported for a single function. While a relative global volume error
between ${1}\mathrm{e}{-08}$ and ${1}\mathrm{e}{-06}$ is reported, no
information about the required CPU times is provided. In the approach proposed
by Fries and Omerović [13], fluid interfaces with complex shapes are modeled
as a composition of implicit functions.
Kromer and Bothe [25] propose an efficient third-order accurate quadrature for
eq. (3). Contrary to Jones et al. [21], who decompose cells into
tetrahedrons, Kromer and Bothe [25] locally approximate the hypersurface by a
paraboloid based on the principal curvatures. Applying the Gaussian divergence
theorem to eq. (3) then yields contributions from the cell boundary and the
approximated hypersurface patch. Using the surface divergence theorem, Kromer
and Bothe [25] reformulate the contribution from the hypersurface patch into a
set of line integrals, where the associated integrand emerges from the
solution of a Laplace-Beltrami-type problem. The method of Kromer and Bothe
[22] is directly applicable to unstructured meshes. However, locally, i.e.,
within a cell, the fluid interface must be $C^{2}$ and simply connected.
Aulisa et al. [3] and Bnà et al. [5, 6] calculate the volume fraction by
representing the indicator function as a height function inside cubic cells,
using the structure of the underlying Cartesian mesh. Numerical integration of
the height function is illustrated by fig. 2. However, extending this approach
to unstructured meshes raises many questions. First, constructing a height
function in a specific direction is complex and computationally expensive
[38]. Second, the orientation of the interface in the chosen coordinate system
may easily make the problem ill-conditioned. Finally, required mesh-search
operations are complicated as the face normals of polyhedral cells are
typically not aligned with the coordinate axes.
The calculation of the volume fraction given by
$\alpha_{c}=\frac{|\Omega^{-}\cap\Omega_{c}|}{|\Omega_{c}|}$ can be
reformulated into the integration of a function $f=1$ within
$\Omega^{-}\cap\Omega_{c}$. Since $\partial\Omega_{c}$ consists of piecewise-
planar surfaces (faces), the complexity lies in the non-planar part of the
surface $\partial\Omega^{-}\cap\Omega_{c}=\Sigma(t)\cap\Omega_{c}$. Trimmed
isogeometric analysis can be used to integrate $f=1$ within
$\Omega^{-}\cap\Omega_{c}$ by representing $\partial\Omega^{-}\cap\Omega_{c}$
using a trimmed NURBS surface, effectively resulting in
$\alpha_{c}=\frac{|\Omega^{-}\cap\Omega_{c}|}{|\Omega_{c}|}$ for complex non-
linear CAD surfaces. Although not yet applied to volume fraction calculation
($f=1$ integration), trimmed isogeometric analysis has been applied to solving
PDEs in solution domains bounded by NURBS surfaces [24, 43, 36]. Similarly,
the immersed isogeometric analysis (e.g. [11]) requires function integration
in cut cells, where the integration of $f=1$ in the cut cell is equivalent to
computing $|\Omega^{-}\cap\Omega_{c}|$ used in volume fraction calculation.
Although it is a potentially interesting alternative approach for computing
volume fractions from CAD surfaces, the isogeometric analysis requires NURBS
trimming, octree refinement, and higher-order quadratures. These efforts are
worthwhile for the goal of achieving higher-order solutions for PDEs in
complex solution domains. However, as demonstrated in the results section, our
proposed algorithms achieve sufficient accuracy for signed distances and
volume fractions on unstructured meshes while relying on straightforward
second-order accurate discretization.
The signed distances in the Level Set Method require re-distancing
(correction). The re-distancing methods are usually based on approximate
solutions of Partial Differential Equations (PDEs) that ensure the signed-
distance property [41]. Contrary to this approach, the unstructured Level Set
/ Front Tracking method [28, 51] _geometrically_ computes minimal signed
distances from $\tilde{\Sigma}$. This calculation is relatively
straightforward on structured meshes [44, 45], but significantly more complex
on unstructured meshes [28, 51]. Here we significantly extend the calculation
of signed distances from [28, 51] by introducing an efficient approximate
propagation of the inside/outside information from $\tilde{\Sigma}$.
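To convey the geometric idea, the sketch below computes approximate signed distances from a consistently oriented triangle mesh: the nearest triangle is found via a k-d tree over triangle centroids and the sign is taken from its outward normal. This is a crude stand-in for the method described here, which uses exact point-triangle distances with octree acceleration.

```python
import numpy as np
from scipy.spatial import cKDTree

def signed_distances(points, vertices, triangles):
    # points    : (P, 3) query points
    # vertices  : (V, 3) surface mesh vertices
    # triangles : (T, 3) vertex indices, consistently oriented
    tri = vertices[triangles]                    # (T, 3, 3)
    centroids = tri.mean(axis=1)
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    dist, idx = cKDTree(centroids).query(points)  # nearest-centroid distance
    side = np.einsum('ij,ij->i', points - centroids[idx], normals[idx])
    return np.sign(side) * dist
```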
Volume fraction calculation methods outlined so far model the fluid interface
using exact functions and handle more complex interface shapes via
combinations of these functions. A combination of exact functions cannot
accurately capture the shape of the fluid interface in many cases. For
example, when the interface shape is prescribed experimentally, as in Hartmann et al. [16].
One approach exists that can handle arbitrarily complex interface shapes. In
this approach, the fluid interface encloses a volumetric mesh as its boundary
surface mesh. This mesh given by the fluid interface is intersected with a
”background” mesh that stores volume fractions. This approach is called
_volume mesh intersection_. An example for such an intersection between
$\tilde{\Omega}$ and cells from $\tilde{\Omega}^{-}$ is shown in fig. 3. In
principle, this approach is relatively straightforward, provided an accurate
geometrical intersection of tetrahedrons is available. However, geometrical
operations based on floating-point numbers are not stable and can lead to
severe errors [52, chap. 45].
Figure 3: Calculating volume fractions from a circular interface by volume
mesh intersection.
Ahn and Shashkov [1] have initialized volume fractions by volume mesh
intersection as shown in fig. 3. In this approach, the approximated phase
$\tilde{\Omega}^{-}(t)$ is decomposed into volumes (an unstructured mesh),
equivalently to the decomposition $\tilde{\Omega}$ given by eq. 2. The
boundary $\partial\Omega^{-}$ is the fluid interface $\Sigma(t)$, and it is
approximated as a polygonal surface mesh, leading to
$\Omega^{-}\approx\tilde{\Omega}^{-}:=\\{\tilde{\Omega}^{-}_{l}\\}_{l\in L},$
(4)
i.e. an approximation of $\Omega^{-}$. Generally, as shown in the detail in
fig. 3, a cell $\Omega_{c}$ of the background mesh $\tilde{\Omega}$ may
overlap with multiple cells $\Omega_{l}$ from the $\tilde{\Omega}^{-}$ mesh,
and vice versa. We define a set of indices $l$ of cells
$\tilde{\Omega}^{-}_{l}$ in $\tilde{\Omega}^{-}$ that overlap with the cell
$\Omega_{c}$: the so-called _cell stencil_ of $\Omega_{c}$ in
$\tilde{\Omega}^{-}_{l}$, namely
$\mathcal{S}(\Omega_{c},\tilde{\Omega}^{-})=\\{l\in
L:\Omega_{c}\cap\tilde{\Omega}^{-}_{l}\neq\emptyset,\text{where}~{}\Omega_{c}\in\tilde{\Omega},\tilde{\Omega}^{-}_{l}\in\tilde{\Omega}^{-}\\},$
(5)
where $L$ is an index set, containing indices of cells from
$\tilde{\Omega}^{-}$. Volume fractions $\\{\alpha_{c}\\}_{c\in C}$ can then be
calculated by performing the intersection
$\alpha_{c}=\frac{|\cup_{l\in\mathcal{S}(\Omega_{c},\tilde{\Omega}^{-})}\Omega_{c}\cap\tilde{\Omega}^{-}_{l}|}{|\Omega_{c}|}.$
(6)
Since each $\tilde{\Omega}^{-}_{l}$ overlaps with at least one cell from
$\tilde{\Omega}$, and we can approximate the number of cells from
$\tilde{\Omega}$ that intersect each cell from $\tilde{\Omega}^{-}$ as
$N(\tilde{\Omega}^{-},\tilde{\Omega})\approx|\tilde{\Omega}^{-}|\underset{l\in
L}{\text{mean}}(|\mathcal{S}(\tilde{\Omega}^{-}_{l},\tilde{\Omega})|),$ (7)
where $|\tilde{\Omega}^{-}|$ denotes the number of cells in the mesh
$\tilde{\Omega}^{-}$. The average number of cells $\Omega_{c}$ overlapping
$\tilde{\Omega}^{-}_{l}$, $\underset{l\in L}{\text{mean}}|\mathcal{S}(\tilde{\Omega}^{-}_{l},\tilde{\Omega})|$, depends on the mesh densities of both meshes, $\tilde{\Omega}$ and $\tilde{\Omega}^{-}$. However, we do know that $\underset{l\in L}{\text{mean}}|\mathcal{S}(\tilde{\Omega}^{-}_{l},\tilde{\Omega})|>1$. Next, we know
that $|\tilde{\Omega}^{-}|$ grows quadratically in $2D$ and cubically in $3D$
with a uniform increase in mesh resolution, taken as the worst case scenario.
It grows linearly in $2D$ and quadratically in $3D$ if $\tilde{\Omega}^{-}$ is
refined only near the interface $\tilde{\Sigma}:=\partial\tilde{\Omega}^{-}$.
Consequently, the computational complexity of the volume mesh intersection
algorithm in terms of cell/cell intersections is quadratic in $2D$ and cubic
in $3D$ in the worst case, and linear in $2D$ and quadratic in $3D$ if local
refinement is used to increase the resolution of $\tilde{\Sigma}$. The
quadratic complexity in $3D$ is a serious drawback of this algorithm,
especially for large simulations where $|\tilde{\Omega}^{-}|$ easily reaches
a hundred thousand cells per CPU core. Menon and Schmidt [34] have extended the
volume mesh intersection algorithm from Ahn and Shashkov [1] to perform a
volume conservative remapping of variables in the collocated Finite Volume
Method (FVM) with second-order accuracy on unstructured meshes. Their results
confirm the polynomial computational complexity in terms of absolute CPU times
for this volume mesh intersection algorithm [34, table 3].
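In two dimensions, eq. (6) reduces to polygon clipping; a minimal sketch with shapely follows (the geometry is an illustrative assumption, and since the phase-mesh cells do not overlap each other, summing the clipped areas equals the union in eq. (6)).

```python
from shapely.geometry import Polygon

def alpha(cell, stencil_cells):
    # eq. (6): area of the cell covered by the phase mesh, over the cell area.
    overlap = sum(cell.intersection(c).area for c in stencil_cells)
    return overlap / cell.area

cell = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
phase_cells = [Polygon([(0.5, -1), (2, -1), (2, 2), (0.5, 2)])]
print(alpha(cell, phase_cells))  # 0.5
```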
López et al. [26] propose a volume truncation algorithm for non-convex cells
and apply it to the initialization of volume fractions from exact functions on
unstructured meshes. Cell-subdivision is introduced to handle cases for which
the interface crosses an edge of a cell twice. Non-planar truncated volumes
are triangulated [26, fig 18], and second-order accuracy is demonstrated in
terms of the relative global volume error for a uniform resolution and a
higher-order accuracy when locally refined sub-grid meshes are used.
Ivey and Moin [18] initialize volume fractions on unstructured meshes using
tetrahedral decomposition of non-convex cells and perform geometrical
intersections with a similar approach as the approach from Ahn and Shashkov
[1]. Unlike Ahn and Shashkov [1], Ivey and Moin [18] compute volume fractions
of intersected tetrahedrons by intersecting them with exact signed distance
functions that are used to model the fluid interface. Therefore, this
algorithm cannot directly utilize arbitrarily shaped interfaces. However,
their approach utilizes a linear interpolation of intersection points between
the tetrahedron and the signed-distance function and yields second-order
accuracy. Accuracy is further increased using adaptive mesh refinement.
The approaches reviewed so far require an exact representation of the
interface using explicit analytic expressions, which hinders the direct
application of such algorithms to initial conditions resulting from
experiments as these are typically not available as function compositions. The
volume mesh intersection algorithm [1] is flexible but computationally
expensive, and it requires highly accurate and robust geometrical
intersections.
The following sections outline the proposed algorithm that uses an
unstructured surface mesh $\tilde{\Sigma}$ to compute signed distances and
volume fractions on unstructured meshes. Relying on unstructured surface
meshes retains the ability to handle arbitrary-shaped surfaces while avoiding
computationally expensive cell/cell intersections. Of course, using surface
meshes to approximate the fluid interface renders the proposed algorithm
second-order accurate; however, sufficient absolute accuracy is achievable
with second-order accurate methods using local mesh refinement on the
background mesh [7, 12]. Applying local mesh refinement on the background mesh
in the close vicinity of the triangulated surface increases the accuracy and
limits it to the resolution of the surface mesh, not the background mesh that
stores volume fractions and signed distances. The proposed algorithm
geometrically computes signed distances near the fluid interface. These signed
distances (so-called _narrow-band_ signed-distances) are then propagated
throughout $\tilde{\Omega}$ by an approximate solution of a diffusion
equation. The propagated signed distances determine the value of the phase
indicator $\chi(\mathbf{x},t)$ in those cells that are either completely empty
$(\alpha_{c}=0)$, or completely full $(\alpha_{c}=1)$. Finally, second-order
accurate volume fraction values are calculated in intersected cells
$(0<\alpha_{c}<1)$. This work enables the calculation of complex initial
conditions for different multiphase simulation methods. These include in
particular geometric [20, 18, 39, 29, 42] and algebraic VOF methods [54, 9].
The calculation of volume fractions from a surface mesh (marker points in 2D) was done in the mixed markers / VOF method by Aulisa et al. [2]: the proposed algorithm significantly extends this idea towards an accurate and fast volume fraction model for Front Tracking methods [53], as well as for the hybrid Level Set / Front Tracking methods on structured [44, 45] or unstructured [28, 51] meshes. Signed distances and the respective inside/outside information from triangulated surfaces also become available for unstructured Level Set and Immersed Boundary methods.
## 2 Surface mesh / cell intersection algorithm
The calculation of volume fractions by the proposed Surface Mesh Cell Intersection/Approximation (SMCI/A) algorithms, outlined in fig. 4, requires signed distances to the interface at cell centres and cell corner points. Since a naive computation is computationally expensive (section 2.2), we employ an octree-based approach to the calculation of signed distances. The starting point of the octree-based search is the calculation of search radii at the relevant points.
(a) Calculation of the search radii $r_{c}$ at cell centres $\mathbf{x}_{c}$ and $r_{p}$ at cell-corner points $\mathbf{x}_{p}\in P_{h}$.
(b) Octree sub-division of the bounding-box of the surface mesh $\tilde{\Sigma}$ with interface normals $\mathbf{n}_{\tilde{\Sigma}}$.
(c) Narrow-band signed distances $\phi_{c}^{\pm}$, $\phi_{p}^{\pm}$ from the search radii and the octree.
(d) Positive and negative sign diffusion throughout $\tilde{\Omega}$.
(e) Phase indicator $\tilde{\chi}_{c}$ is $1$ or $0$ in cells strictly inside/outside of $\tilde{\Sigma}$, respectively.
(f) Computing the approximated phase indicator $\tilde{\chi}$ in intersected cells.
Figure 4: Steps of the Surface Mesh Cell Intersection / Approximation (SMCI/A) algorithms.
### 2.1 Calculation of search radii
In the first step, search radii $r_{c}$ and $r_{p}$ are calculated at each cell center and cell-corner point, respectively, as illustrated in fig. 4(a). Here, the cell search radius $r_{c}$ is defined by
$r_{c}=\lambda_{s}\operatorname{min}_{f\in
F_{c}}\|\mathbf{x}_{f,O}-\mathbf{x}_{f,N}\|_{2},$ (8)
where $\mathbf{x}_{c}$ is the cell center, $\lambda_{s}>0$ is the _search
radius factor_ detailed below and $\mathbf{x}_{f,O}$, $\mathbf{x}_{f,N}$ are
the cell centers of two cells that share the face with index $f$ of the cell
$\Omega_{c}$ ($O$ for owner cell with a smaller cell index than the neighbor
cell $N$). Here, the index set $F_{c}$ contains the indices of those faces
that form the boundary of $\Omega_{c}$. Based on (8), the corner-point search
radius $r_{p}$ is defined by
$r_{p}=\lambda_{s}\operatorname{min}_{c\in C_{p}(\mathbf{x}_{p})}r_{c},$ (9)
where $\mathbf{x}_{p}$ is the cell-corner point and the _point-cell stencil_ $C_{p}(\mathbf{x}_{p})$ is the index set that contains the indices of all cells from $\tilde{\Omega}$ that have $\mathbf{x}_{p}$ as a corner point.
The search radii introduced above define search balls in 3D (circles in 2D), which reduce the number of signed-distance calculations between the cell corner points $\mathbf{x}_{p}$, the cell centers $\mathbf{x}_{c}$, and the provided surface mesh $\tilde{\Sigma}$.
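To make eqs. 8 and 9 concrete, the following C++ sketch computes both search radii. It is a minimal illustration, not the actual implementation: the containers holding the owner/neighbour face-centre pairs and the stencil cell radii are hypothetical stand-ins for the mesh connectivity.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <limits>
#include <vector>

using Vec3 = std::array<double, 3>;

double distance(const Vec3& a, const Vec3& b) {
    double s = 0.0;
    for (int i = 0; i < 3; ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Cell search radius (eq. 8): the smallest owner/neighbour cell-centre
// distance over the faces of the cell, scaled by the factor lambda_s.
double cellSearchRadius(const std::vector<Vec3>& ownerCentres,
                        const std::vector<Vec3>& neighbourCentres,
                        double lambdaS)
{
    double rMin = std::numeric_limits<double>::max();
    for (std::size_t f = 0; f < ownerCentres.size(); ++f)
        rMin = std::min(rMin, distance(ownerCentres[f], neighbourCentres[f]));
    return lambdaS * rMin;
}

// Corner-point search radius (eq. 9): the minimum of the cell radii r_c
// over the point-cell stencil C_p, scaled by lambda_s.
double pointSearchRadius(const std::vector<double>& stencilCellRadii,
                         double lambdaS)
{
    return lambdaS * *std::min_element(stencilCellRadii.begin(),
                                       stencilCellRadii.end());
}
```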
### 2.2 Octree decomposition of the surface mesh and signed distance
calculation
In contrast to various other approaches for volume fraction initialization, the proposed algorithm does not represent the fluid interface by a function, but as a surface mesh consisting of triangles. To define the interface $\tilde{\Sigma}$, we first denote the convex hull of a set of $n$ points $P^{n}=\\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\\},\mathbf{x}_{i}\in\mathbb{R}^{3}$ by
$\operatorname{conv}(P^{n}):=\left\\{\mathbf{x}\in\mathbb{R}^{3}:\mathbf{x}=\sum_{\mathbf{x}_{i}\in P^{n}}\gamma_{i}\mathbf{x}_{i},\ \gamma_{i}\geq 0,\ \sum_{i=1}^{n}\gamma_{i}=1\right\\}.$ (10)
Using this, a triangle is defined as the convex hull of a point triple:
$\mathcal{T}:=\operatorname{conv}(P^{3})$. Consequently, the surface mesh is
defined as
$\tilde{\Sigma}:=\\{\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{n}\\}.$
(11)
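In code, eqs. 10 and 11 translate into very simple data structures. The following is a minimal C++ sketch of the layout assumed in this section; the `Vec3` point type is a hypothetical stand-in for the vector type of the actual mesh library.

```cpp
#include <array>
#include <vector>

// A point in R^3; a minimal stand-in for the mesh library's vector type.
using Vec3 = std::array<double, 3>;

// A triangle T = conv(P^3) from eq. 10, stored by its three vertices.
struct Triangle {
    std::array<Vec3, 3> vertex;
};

// The surface mesh Sigma~ (eq. 11): a flat list of triangles.
using SurfaceMesh = std::vector<Triangle>;
```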
With the structure of $\tilde{\Sigma}$ in mind, we want to emphasize why an octree-based approach is the key to obtaining reasonable computation times. Consider the case where the minimal distance between a point $\mathbf{x}$ and $\tilde{\Sigma}$ is calculated naively for each cell center $\mathbf{x}_{c}$ and cell-corner point $\mathbf{x}_{p}$. This would require a distance computation between each point of the interface mesh and each cell center and cell-corner point of the background mesh, which makes the need for spatial subdivision and search operations obvious. Computing the geometric signed distances at cell centers alone would require $|C||\tilde{\Sigma}|$ operations, with additional computations for evaluating signed distances at cell-corner points. For our computations below, $|C|$ often reaches the order of $10^{5}$ per CPU core, while $|\tilde{\Sigma}|$ is typically on the order of $10^{4}$ per CPU core. Aiming at redistancing computations for a dynamic setting in multiphase flows, where $\tilde{\Sigma}=\tilde{\Sigma}(t)$, such a large number of distance computations makes a brute-force redistancing approach prohibitively expensive.
The first step of the signed distance calculation is the computation of an
Axis-Aligned Bounding Box (AABB) from the surface mesh $\tilde{\Sigma}$. The
AABB is used to build an octree data structure, illustrated as a $2D$ quadtree
subdivision in fig. 4(b), which is used to access $\tilde{\Sigma}$. The octree
data structure enables fast search queries involving cell centers and cell
corner-points that are close to the surface mesh $\tilde{\Sigma}$, with a
logarithmic computational complexity with respect to the number of vertices in
$\tilde{\Sigma}$ [32, 33]. The structure of the octree depends on the ordering
of vertices in $\tilde{\Sigma}$: since $\tilde{\Sigma}$ is an unstructured
surface mesh, its vertices are generally sufficiently unordered, which makes
the octree well-balanced. Once the octree has been constructed, it can be used
to find the closest points $\mathbf{x}\in\tilde{\Sigma}$ to cell centres
$\mathbf{x}_{c}$ and cell corner points $\mathbf{x}_{p}$. Note that this is
only true for those $\mathbf{x}_{c},\mathbf{x}_{p}$ which are sufficiently
close to $\tilde{\Sigma}$ in terms of their search radius $r_{c},r_{p}$. Thus,
the search radii define a so-called _narrow band_ around $\tilde{\Sigma}$,
where the nearest distances are calculated geometrically. We denote the narrow
band of $\tilde{\Sigma}$ with $\mathcal{N}(\tilde{\Sigma})$, and the closed ball with radius $r$ around a point $\mathbf{x}^{*}$ with $\mathcal{B}(\mathbf{x}^{*},r):=\\{\mathbf{x}\in\mathbb{R}^{3}\,|\,\|\mathbf{x}-\mathbf{x}^{*}\|_{2}\leq r\\}$. Then
$\mathcal{N}(\tilde{\Sigma}):=\left\\{\mathbf{x}\in\mathbb{R}^{3}|~{}\exists~{}\mathcal{T}\in\tilde{\Sigma}\text{
such that }\mathcal{T}\cap\mathcal{B}(\mathbf{x},r)\neq\emptyset\right\\},$
(12)
where $r$ is either $r_{p}$ or $r_{c}$.
For a point $\mathbf{x}\in\mathcal{N}(\tilde{\Sigma})$, the octree provides the closest point $\mathbf{x}_{\text{min}}\in\mathcal{T}_{\text{min}}$ and the corresponding triangle $\mathcal{T}_{\text{min}}\in\tilde{\Sigma}$ itself. While the absolute distance can be directly
computed as $\|\mathbf{x}-\mathbf{x}_{\text{min}}\|_{2}$, care must be taken
when computing the sign with respect to the orientation of $\tilde{\Sigma}$.
Directly using the triangle normals $\mathbf{n}_{\mathcal{T}}$ may lead to
false signs and consequently, to erroneous volume fractions. Thus, we follow
the work of [50, 4] and compute _angle weighted normals_
$\mathbf{n}_{\mathbf{x}_{v}}=\frac{\sum_{\mathcal{T}\in\text{ngh}(\mathbf{x}_{v})}\beta_{\mathcal{T}}\mathbf{n}_{\mathcal{T}}}{\sum_{\mathcal{T}\in\text{ngh}(\mathbf{x}_{v})}\beta_{\mathcal{T}}}$
(13)
at the vertices $\mathbf{x}_{v}$ of $\tilde{\Sigma}$. Here,
$\text{ngh}(\mathbf{x}_{v})$ denotes the set of all triangles containing
$\mathbf{x}_{v}$, $\mathbf{n}_{\mathcal{T}}$ a triangle normal and
$\beta_{\mathcal{T}}$ the inner angle of $\mathcal{T}$ at $\mathbf{x}_{v}$.
Baerentzen and Aanaes [4] propose classifying the point $\mathbf{x}_{\text{min}}$ according to whether it is located within a triangle, on an edge, or on a vertex, and base the choice of the normal on this classification. While such a classification is simple in theory, a robust implementation is difficult due to the limited precision of floating point arithmetic. Thus, we opt for a linear interpolation of the vertex normals $\mathbf{n}_{\mathbf{x}_{v}}$ within $\mathcal{T}_{\text{min}}$ to $\mathbf{x}_{\text{min}}$, denoted $\mathbf{n}_{I}(\mathbf{x}_{\text{min}},\mathcal{T}_{\text{min}})$. With this
normal computation, the signed distance between $\mathbf{x}$ and
$\mathbf{x}_{\text{min}}$ is calculated by
$\phi^{g}(\mathbf{x},\tilde{\Sigma})=\text{sign}((\mathbf{x}-\mathbf{x}_{\text{min}})\cdot\mathbf{n}_{I}(\mathbf{x}_{\text{min}},\mathcal{T}_{\text{min}}))\,\|\mathbf{x}-\mathbf{x}_{\text{min}}\|_{2},$ (14)
where the superscript $g$ indicates a geometric construction. This procedure is illustrated in fig. 4(c). The robustness of this approach with regard to inside/outside classification is demonstrated in section 4.3.
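The sign computation of eq. 14 reduces to a dot product once the octree query has returned the closest point and the normal has been interpolated there. The following C++ sketch assumes both are given; `angleWeightedNormal` illustrates eq. 13 for a single vertex and is a simplified stand-in for the actual implementation.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Angle-weighted vertex normal (eq. 13): the normals n_T of the triangles
// incident to the vertex, weighted by their inner angles beta_T.
Vec3 angleWeightedNormal(const std::vector<Vec3>& triangleNormals,
                         const std::vector<double>& innerAngles)
{
    Vec3 n{0.0, 0.0, 0.0};
    double wSum = 0.0;
    for (std::size_t t = 0; t < triangleNormals.size(); ++t) {
        for (int i = 0; i < 3; ++i)
            n[i] += innerAngles[t] * triangleNormals[t][i];
        wSum += innerAngles[t];
    }
    for (int i = 0; i < 3; ++i) n[i] /= wSum;
    return n;
}

// Signed distance (eq. 14): the Euclidean distance to the closest point
// x_min, signed by the interpolated normal n_I at x_min.
double signedDistance(const Vec3& x, const Vec3& xMin, const Vec3& nI)
{
    Vec3 d{x[0] - xMin[0], x[1] - xMin[1], x[2] - xMin[2]};
    const double dist = std::sqrt(dot(d, d));
    return dot(d, nI) >= 0.0 ? dist : -dist;
}
```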
Using the spatial subdivision provided by the octree, the computational
complexity for finding the minimal distances between mesh points and
$\tilde{\Sigma}$ is reduced severely, as the vast majority of cell centers
$\mathbf{x}_{c}$ are not even considered for calculation as no triangle
$\mathcal{T}\in\tilde{\Sigma}$ exists within the corresponding search ball.
The closest triangles of those points $\mathbf{x}_{c}$, whose ball
$\mathcal{B}(\mathbf{x}_{c},r_{c})$ intersects $\tilde{\Sigma}$ are found with
logarithmic search complexity with respect to $|\tilde{\Sigma}|$. This
significant reduction of complexity can potentially enable a future
application of the proposed algorithm on moving interfaces $\tilde{\Sigma}(t)$
as a geometrically exact marker field model for unstructured Front Tracking
methods. Note that the $\operatorname{min}_{\mathcal{T}\in\tilde{\Sigma}}$ operation underlying eq. 14 throughout this text relies on the octree spatial subdivision and search queries.
### 2.3 Signed distance propagation
After the calculation of geometric signed distances in the narrow band around
$\tilde{\Sigma}$, the signed distances are propagated to the bulk of different
phases, as shown in fig. 4(d). In [28, 51], the geometric signed distances are
set to large positive numbers throughout the domain, and a graph-traversal
algorithm is used to iteratively correct the signs of signed distances using
face-cell and point-point graph connectivity provided by the unstructured
mesh. Graph-traversal is computationally expensive and complicated to
implement in parallel. Here we propose a straightforward alternative that
instantaneously propagates signs of signed distances through the solution
domain and is parallelized easily. We rely on the diffusion equation for the
signed distances, namely
$\displaystyle-\Delta\phi=0,\qquad\nabla\phi\cdot\mathbf{n}=0\quad\text{for}\quad\mathbf{x}\in\partial\Omega,$ (15)
and its discretization using the unstructured finite volume method in OpenFOAM
[19, 23, 35], giving a linear system of equations. The key idea of the sign propagation is to apply a few iterations ($<5$) of an iterative linear solver to this system; in our case, a Conjugate Gradient solver with an incomplete lower-upper (ILU) preconditioner is used. With the initial field set to
$\displaystyle\phi(\mathbf{x})=\begin{cases}\phi^{g}(\mathbf{x},\tilde{\Sigma}),&\quad\text{if }\mathbf{x}\in\mathcal{N}(\tilde{\Sigma})\\\ 0,&\quad\text{otherwise,}\end{cases}$ (16)
this small number of iterations suffices to properly propagate
$\text{sign}(\phi)$ with respect to the orientation of $\tilde{\Sigma}$
throughout $\tilde{\Omega}$. A prerequisite for this approach to work is that the narrow band has a certain minimum width in the interface-normal direction: at least four cells on each side of the interface are required to ensure a robust propagation. This is achieved by setting a global search radius factor $\lambda_{s}:=4$ in eq. 8, which is used to calculate $r_{c}$ at cell centers. Note that increasing $\lambda_{s}$ beyond this value only increases computational costs and does not impact the accuracy of the proposed algorithm: with a larger value of $\lambda_{s}$, the narrow band $\mathcal{N}(\tilde{\Sigma})$ becomes wider and, consequently, the geometric signed distances are calculated at more points $\mathbf{x}_{c},\mathbf{x}_{p}$ in eqs. 17 and 20, respectively.
Two aspects have to be considered when solving the linear system of equations
resulting from the discretization of eq. 15. First, cells for which
$\mathbf{x}_{c}\in\mathcal{N}(\tilde{\Sigma})$ have to be excluded from the
vector of unknowns as $\phi^{g}(\mathbf{x}_{c})$ is already known for those.
Second, for cells away from $\mathcal{N}(\tilde{\Sigma})$ the only relevant
information is $\text{sign}(\phi_{c})$ indicating $\Omega_{c}\in\Omega^{-}$ or
$\Omega_{c}\in\Omega^{+}$, respectively. A few iterations of a linear solver
suffice to reliably propagate $\text{sign}(\phi_{c})$ to the entire domain.
The resulting field is
$\phi_{c}=\begin{cases}\phi^{g}_{c},&\text{if
}\mathbf{x}_{c}\in\mathcal{N}(\tilde{\Sigma}),\\\
\phi^{a}_{c},&\text{otherwise,}\end{cases}$ (17)
with $\phi^{g}_{c}$ denoting geometric signed distances and $\phi^{a}_{c}$
approximate values from the solution of eq. 15 carrying inside/outside
information but without geometric meaning.
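The role of the approximate solve can be illustrated without any OpenFOAM machinery. The following C++ sketch replaces the preconditioned CG solver by a few Jacobi sweeps over a hypothetical cell-connectivity graph: narrow-band cells keep their geometric values (eq. 16), all other cells are repeatedly averaged, which is enough to spread the correct sign.

```cpp
#include <vector>

// A few Jacobi sweeps of a discrete Laplacian: a minimal stand-in for the
// CG/ILU solve of eq. 15. 'neighbours[c]' lists the face-neighbour cells
// of cell c; cells in the narrow band keep phi^g fixed (eqs. 16 and 17).
void propagateSign(std::vector<double>& phi,
                   const std::vector<std::vector<int>>& neighbours,
                   const std::vector<bool>& inNarrowBand,
                   int nSweeps = 4)
{
    for (int sweep = 0; sweep < nSweeps; ++sweep) {
        std::vector<double> phiNew = phi;
        for (std::size_t c = 0; c < phi.size(); ++c) {
            if (inNarrowBand[c] || neighbours[c].empty()) continue;
            double sum = 0.0;
            for (int n : neighbours[c]) sum += phi[n];
            phiNew[c] = sum / neighbours[c].size(); // zero-Laplacian update
        }
        phi = phiNew;
    }
}
```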
Once the cell-centered signed distances $\phi_{c}$ are computed, they are used
to calculate the signed distances at cell corner-points via
$\phi^{I}_{p}=\sum_{c\in C_{p}}w_{p,c}\phi_{c},$ (18)
where $C_{p}$ is the index set of cells that contain the cell corner point $\mathbf{x}_{p}$ and the superscript $I$ indicates interpolation. Furthermore, $w_{p,c}$ is the _inverse-distance weighted_ (IDW) interpolation weight
$w_{p,c}=\frac{\|\mathbf{x}_{c}-\mathbf{x}_{p}\|_{2}^{-1}}{\sum_{\tilde{c}\in
C_{p}}\|\mathbf{x}_{\tilde{c}}-\mathbf{x}_{p}\|_{2}^{-1}}.$ (19)
As with $\phi_{c}$, the accuracy of $\phi_{p}$ is irrelevant outside of the narrow band of $\tilde{\Sigma}$; only the sign of the signed distance is important in the bulk. To correct for the error introduced by the IDW interpolation in eq. 18, signed distances at cell-corner points within the narrow band are calculated geometrically
$\phi_{p}=\begin{cases}\phi^{g}_{p},&\quad\text{if
}\mathbf{x}_{p}\in\mathcal{N}(\tilde{\Sigma}),\\\
\phi^{I}_{p},&\quad\text{otherwise.}\end{cases}$ (20)
Equations 17 and 20 define the final signed distances at cell centers and
cell-corner points, respectively. These quantities will have the value of a
geometrical distance to $\tilde{\Sigma}$ in the narrow band, while outside of
the narrow band only the correct sign resulting from the approximative
solution of eq. 15 is relevant.
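A C++ sketch of the IDW interpolation of eqs. 18 and 19 is given below; the stencil arrays are hypothetical and would be provided by the point-cell connectivity $C_{p}$ of the mesh.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// IDW interpolation (eqs. 18 and 19) of cell-centred signed distances to a
// cell-corner point x_p, over the point-cell stencil C_p.
double interpolatedCornerDistance(const Vec3& xp,
                                  const std::vector<Vec3>& cellCentres,
                                  const std::vector<double>& phiCell)
{
    double num = 0.0, den = 0.0;
    for (std::size_t c = 0; c < cellCentres.size(); ++c) {
        double d2 = 0.0;
        for (int i = 0; i < 3; ++i) {
            const double di = cellCentres[c][i] - xp[i];
            d2 += di * di;
        }
        const double w = 1.0 / std::sqrt(d2); // inverse-distance weight (eq. 19)
        num += w * phiCell[c];
        den += w;
    }
    return num / den; // phi_p^I of eq. 18
}
```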
### 2.4 Volume fraction calculation
Once the signed distances at cell centers
$\\{\phi_{c}\\}_{c=1,2,\dots,|\tilde{\Omega}|}$ and cell corner points
$\\{\phi_{p}\\}_{p=1,2,\dots|P_{h}|}$ are calculated as outlined in the
previous section, the SMCI algorithm calculates the volume fractions in a
straightforward way. The volume fraction calculation is shown schematically
for the SMCI algorithm in fig. 5(b). Each cell is decomposed into tetrahedra, using the cell centroid $\mathbf{x}_{c}$ as the base point of each tetrahedron, the centroid of the face $\mathbf{x}_{c,f}$, and two successive points from the cell-face, $\mathbf{x}_{c,f,i},\mathbf{x}_{c,f,i+1}$. The resulting tetrahedron has the distance $\phi_{c}$ associated to the cell centroid, the distance $\phi_{c,f}$ associated to the face centroid, and the pair of distances $(\phi_{c,f,i},\phi_{c,f,i+1})$ associated with a pair of points that belong to the cell-face $(c,f)$, as shown in fig. 5(b). If all the distances of the tetrahedron are
negative, the tetrahedron lies in the negative halfspace with respect to
$\tilde{\Sigma}$, and its total volume contributes to the sum of the volume of
phase $1$ inside the volume $\Omega_{c}$. If two distances in a tetrahedron have different signs, the tetrahedron is intersected by the interface approximated by the surface mesh $\tilde{\Sigma}$. The volume of this intersection is calculated by geometrically intersecting the tetrahedron with those triangles from $\tilde{\Sigma}$ that have a non-zero intersection with a ball $\mathcal{B}$ enclosing the tetrahedron. The center of the ball $\mathcal{B}_{c,f,i}:=\mathcal{B}(\mathbf{x}_{c,f,i},R_{c,f,i})$ is the centroid of the tetrahedron $\mathbf{x}_{c,f,i}=0.25(\mathbf{x}_{c}+\mathbf{x}_{c,f}+\mathbf{x}_{c,f,i}+\mathbf{x}_{c,f,\text{mod}(i+1,|F_{c,f}|)})$, where $i=0,\dots,|F_{c,f}|-1$, and $F_{c,f}$ is the oriented set of indices of the points $\mathbf{x}$ (cf. fig. 5(b)) that belong to the face $f$ of the cell $\Omega_{c}$. The radius of the tetrahedron-ball $\mathcal{B}_{c,f,i}$ is then
$R_{c,f,i}=\operatorname{max}(\|\mathbf{x}_{c}-\mathbf{x}_{c,f,i}\|,\|\mathbf{x}_{c,f}-\mathbf{x}_{c,f,i}\|,\|\mathbf{x}_{c,f,j}-\mathbf{x}_{c,f,i}\|,\|\mathbf{x}_{c,f,\text{mod}(j+1,|F_{c,f}|)}-\mathbf{x}_{c,f,i}\|),$ (21)
$j=0,\dots,|F_{c,f}|-1$. This sub-set of $\tilde{\Sigma}$ is found using the
octree data structure with logarithmic complexity with respect to
$|\tilde{\Sigma}|$, as outlined in the previous section. For the example
tetrahedron in the cell shown in fig. 5(b), the resulting intersection between
the approximated interface $\tilde{\Sigma}$ and a tetrahedron from the cell
$\Omega_{c}$ is shown as the shaded volume. The magnitude of this volume is
computed by applying the Gauss divergence theorem using eq. 31. The phase-specific volumes of the cell tetrahedra are summed into the total phase-specific volume of phase $1$ within the cell $\Omega_{c}$, and the volume fraction is computed as
$\alpha_{c}=\dfrac{\sum_{f=0,\dots,|F_{c}|-1}\sum_{i=0,\dots,|F_{c,f}|-1}|T(\mathbf{x}_{c},\mathbf{x}_{c,f},\mathbf{x}_{c,f,{i}},\mathbf{x}_{c,f,\text{mod}(i+1,|F_{c,f}|)})\cap(\mathcal{B}_{c,f,i}\cap\tilde{\Sigma})|}{|\Omega_{c}|}$
(22)
with $T:=\\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\mathbf{x}_{4}\\}$
denoting a tetrahedron.
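The accumulation structure of eq. 22 can be sketched as follows in C++: each tetrahedron of the centroid decomposition is classified by the signs of its four distances, fully negative tetrahedra contribute their whole volume, fully positive ones contribute nothing, and only mixed-sign tetrahedra require the geometric intersection with the nearby surface triangles (omitted here, as it relies on the octree ball query described above).

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Volume of a tetrahedron via the scalar triple product.
double tetVolume(const Vec3& a, const Vec3& b, const Vec3& c, const Vec3& d)
{
    Vec3 u, v, w;
    for (int i = 0; i < 3; ++i) {
        u[i] = b[i] - a[i];
        v[i] = c[i] - a[i];
        w[i] = d[i] - a[i];
    }
    const double det = u[0] * (v[1] * w[2] - v[2] * w[1])
                     - u[1] * (v[0] * w[2] - v[2] * w[0])
                     + u[2] * (v[0] * w[1] - v[1] * w[0]);
    return std::abs(det) / 6.0;
}

enum class TetClass { Inside, Outside, Intersected };

// Sign-based classification used by eq. 22: only intersected tetrahedra
// need the geometric tetrahedron/surface intersection.
TetClass classify(const std::array<double, 4>& phi)
{
    bool allNeg = true, allPos = true;
    for (double p : phi) {
        allNeg = allNeg && (p <= 0.0);
        allPos = allPos && (p > 0.0);
    }
    if (allNeg) return TetClass::Inside;   // full volume counts towards alpha_c
    if (allPos) return TetClass::Outside;  // contributes nothing
    return TetClass::Intersected;          // geometric intersection required
}
```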
(a) A cell $\Omega_{c}$ intersected by $\tilde{\Sigma}$.
(b) Tetrahedral cell decomposition, with signed distances $\phi(\mathbf{x}_{c})$, $\phi(\mathbf{x}_{c,f})$, $\phi(\mathbf{x}_{c,f,i})$, $\phi(\mathbf{x}_{c,f,i+1})$ and the enclosing-ball radius $R_{c,f,i}$.
Figure 5: Centroid decomposition of an interface cell into tetrahedra and
calculation of $\alpha_{c}$ using the SMCI/A algorithms.
The SMCI algorithm is summarized by algorithm 1.
Algorithm 1 The Surface-Mesh / Cell Intersection Algorithm (SMCI)
1:$\alpha_{c}=0$, $\phi_{c,p}=0$
2:Compute search radii at cell centers $\\{r_{c}\\}_{c\in C}$ using eq. 8.
3:Place the vertices of $\tilde{\Sigma}$ into an octree (section 2.2).
4:for cell centroids $\\{\mathbf{x}_{c}\\}_{c\in C}$ do
5: Find the triangle $\mathcal{T}_{n}\in\tilde{\Sigma}$ nearest to $\mathbf{x}_{c}$ within the ball $\mathcal{B}(\mathbf{x}_{c},r_{c})$.
6: Set $\phi^{g}_{c}:=\phi^{g}(\mathbf{x}_{c},\mathcal{T}_{n})$ using eq. 14.
7:end for
8:Approximately solve eq. 15 to propagate $\text{sign}(\phi_{c})$.
9:Compute search radii at cell corner points $\\{r_{p}\\}_{p\in P}$ using eq. 9.
10:Find all intersected cells $I=\\{c:\phi_{c}\phi_{p}<0\text{ for at least one corner point }p\text{ of }\Omega_{c}\\}$.
11:Use eq. 17 to correct $\phi_{c}$ within the narrow band.
12:Compute $\phi_{p}$ in the bulk using eq. 18.
13:Use eq. 20 to correct $\phi_{p}$ within the narrow band.
14:for cells $\\{\Omega_{c}\\}_{c\in C}$ do
15: if $\phi_{c}\leq 0$ and all corner-point distances $\phi_{p}\leq 0$ then
$\triangleright$ Cell is inside the negative $\tilde{\Sigma}$-halfspace.
16: $\alpha_{c}=1$
17: end if
18: if cell $\Omega_{c}$ is intersected, $c\in I$ then $\triangleright$ Cell
is intersected by $\tilde{\Sigma}$.
19: $\alpha_{c}$ given by eq. 22.
20: end if
21:end for
## 3 Surface-Mesh / Cell Approximation algorithm
This section presents an alternative approach to the computation of volume
fractions presented in section 2.4. While section 2.4 details a method based
on geometric intersections, this section introduces an algorithm based on
volumetric reconstruction by adaptive mesh refinement. Detrixhe and Aslam [10]
introduce a second order accurate approximation for the volume fraction of a
triangle (2D) or a tetrahedron (3D). Their model is an algebraic expression
taking the signed distances $\phi$ of the vertices as arguments. In contrast,
we propose a volume fraction initialization algorithm that employs this model
in combination with an adaptive tetrahedral cell decomposition and the octree-
based signed distance calculation described in section 2. We term this
algorithm _Surface-Mesh/Cell Approximation_ (SMCA) and it is outlined below.
(a) Identify potential interface cells (marked grey) using bounding ball
criterion. Shown are circles with radii $|\phi_{c}|$.
(b) Adaptive, tetrahedral decomposition of interface cells. Compute $\phi$ at
new vertices.
(c) Compute the volume fraction $\alpha_{c}$ using the model of Detrixhe and
Aslam [10] (detail view).
Figure 6: Steps of the SMCA algorithm following signed distance computation
and inside/outside propagation.
The SMCA algorithm is based on the signed distance results of the SMCI algorithm introduced in section 2. The steps depicted in figs. 4(a)-4(d) are used to compute $\phi_{c},\phi_{p}$ in the narrow band and to propagate the inside/outside information to the rest of the mesh points.
Subsequent steps for the computation of volume fractions are displayed in fig.
6. First, all cells intersected by $\tilde{\Sigma}$ are identified to reduce
computational costs, as only these cells have intermediate values
$0<\alpha_{c}<1$. This step is depicted in fig. 6(a). Each cell for which
$\mathbf{x}_{c}\in\mathcal{N}(\tilde{\Sigma})$ is checked with the _bounding
ball criterion_. We define a bounding ball (bb) for a point
$\mathbf{x}_{\text{bb}}\in\Omega_{c}$ using
$r_{bb}=\operatorname{max}_{\mathbf{x}\in\Omega_{c}}\|\mathbf{x}-\mathbf{x}_{bb}\|_{2}$.
This ball is the smallest ball centered at $\mathbf{x}_{\text{bb}}$ that contains all points of $\Omega_{c}$. We compare this bounding ball to $\mathcal{B}(\mathbf{x}_{\text{bb}},|\phi(\mathbf{x}_{\text{bb}})|)$. These balls are shown in fig. 7, where the bounding ball is illustrated by a dashed line and the other ball by a continuous line. As a general observation, if the bounding ball is contained in the ball with the radius $|\phi(\mathbf{x}_{bb})|$, i.e.
$\mathcal{B}(\mathbf{x}_{bb},r_{bb})\subseteq\mathcal{B}(\mathbf{x}_{bb},|\phi(\mathbf{x}_{bb})|)$,
then such a cell is guaranteed to be a bulk cell. This cell can then be
removed from the set of cells in the narrow band to reduce the number of cells
which are considered for decomposition in the next step. If the criterion is
not satisfied, the cell is considered an interface cell. Two remarks on this criterion: first, the existence of such an $\mathbf{x}_{\text{bb}}$ is not a necessary but a sufficient condition. Second, in a practical implementation, evaluating this criterion is only feasible for a small number of points if computational costs are to remain reasonable. Thus, the actual check is
performed by evaluating
$f_{\text{bb}}(\mathbf{x},\phi_{\mathbf{x}},\Omega_{c})=\begin{cases}1,\quad\operatorname{max}_{\mathbf{x}_{i}\in\Omega_{c}}\|\mathbf{x}_{i}-\mathbf{x}\|_{2}\leq|\phi_{\mathbf{x}}|,\\\
0,\quad\text{otherwise}\end{cases}$ (23)
with $\mathbf{x}\in\Omega_{c}$. The evaluation of the
$\operatorname{max}$-operator is based on a comparison to the corner points
$\mathbf{x}_{i}$ of the cell $\Omega_{c}$. For example, in our implementation
this function is only evaluated at cell centres $\mathbf{x}_{c}$ (original
mesh cells, see below) or cell corner points (tetrahedra resulting from
decomposition). As a consequence, a few bulk cells are considered as interface
cells (fig. 7(b)). We deem this acceptable as this only has a minor impact on
the computational time, but not on the computed volume fractions.
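A compact C++ sketch of the bounding ball criterion of eq. 23, evaluated against the corner points of a cell (or of a tetrahedron), could look as follows; the container of corner points is a hypothetical stand-in for the cell geometry.

```cpp
#include <algorithm>
#include <array>
#include <vector>

using Vec3 = std::array<double, 3>;

// Bounding ball criterion (eq. 23): returns true (f_bb = 1) if the ball of
// radius |phi_x| around x contains all corner points x_i, i.e. the cell is
// guaranteed to be a bulk cell; squared distances avoid the square root.
bool isBulkCell(const Vec3& x, double phiX,
                const std::vector<Vec3>& cornerPoints)
{
    double rbb2 = 0.0; // squared bounding-ball radius: max_i |x_i - x|^2
    for (const Vec3& xi : cornerPoints) {
        double d2 = 0.0;
        for (int k = 0; k < 3; ++k) d2 += (xi[k] - x[k]) * (xi[k] - x[k]);
        rbb2 = std::max(rbb2, d2);
    }
    return rbb2 <= phiX * phiX;
}
```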
(a) Bulk cell: the ball $\mathcal{B}(\mathbf{x}_{c},|\phi_{c}|)$ contains the cell bounding ball $\mathcal{B}(\mathbf{x}_{c},r_{bb})$.
(b) False positive: a bulk cell which is not detected by the bounding ball criterion, as $\mathcal{B}(\mathbf{x}_{c},r_{bb})\nsubseteq\mathcal{B}(\mathbf{x}_{c},|\phi_{c}|)$.
Figure 7: Illustration of the bounding ball criterion, shown in 2D for clarity. The solid grey line represents $\mathcal{B}(\mathbf{x}_{c},|\phi_{c}|)$, the grey dashed one $\mathcal{B}(\mathbf{x}_{c},r_{bb})$.
After identification of interface cells, the cell volume fractions are
initialized according to the sign of $\phi_{c}$,
$\alpha_{c}=\begin{cases}1,\quad\phi_{c}\leq 0,\\\
0,\quad\text{otherwise}.\end{cases}$ (24)
This gives correct volume fractions for bulk cells, while the values of
interface cells are updated as described below. Each cell flagged as an
interface cell by the method described above is decomposed into tetrahedra
using its centroid and cell face centroids as shown in fig. 5. Each resulting tetrahedron is further refined in an adaptive manner, such that the resolution is only increased where a new tetrahedron is again intersected by the interface. To achieve this, a tetrahedron $T$ is checked with the bounding
ball criterion eq. 23. The criterion is only evaluated at the vertex
$\mathbf{x}_{\text{max}}\in T$ for which
$|\phi(\mathbf{x}_{\text{max}})|=\operatorname{max}_{\mathbf{x}\in
T}|\phi(\mathbf{x})|$. Only if
$f_{\text{bb}}(\mathbf{x}_{\text{max}},\phi,T)~{}=~{}0$ (eq. 23), $T$ is
considered for further decomposition.
(a) Original tetrahedron with vertices ($\mathbf{x}_{i}$, black) and edge midpoints ($\mathbf{x}_{ij}$, grey).
(b) Four tetrahedra are created by combining each vertex with its connected
edge midpoints (indicated by dashed lines).
(c) Decompose the octahedron into four tetrahedra by combining each grey edge with the black line formed by two opposite edge midpoints (here $\mathbf{x}_{12}$, $\mathbf{x}_{34}$).
Figure 8: Decomposition of a tetrahedron into eight tetrahedra using edge
midpoints.
An obvious choice would be decomposition at the centroid of $T$. However,
repeated application of this approach results in increasingly flattened
tetrahedra. To avoid this problem, we apply the decomposition shown in fig. 8.
First, the edge centres of the tetrahedron,
$\mathbf{x}_{ij}=\frac{1}{2}(\mathbf{x}_{i}+\mathbf{x}_{j}),\quad i,j\in\\{1,2,3,4\\},i\neq j,$ (25)
are computed from its vertices (fig. 8(a)). By combining each vertex $\mathbf{x}_{i}$ with the three edge centres of the adjacent edges, four new tetrahedra are created (fig. 8(b)). The remainder of the original tetrahedron is an octahedron (fig. 8(b), grey dashed lines) constituted by the edge centres $\mathbf{x}_{ij}$. This octahedron is decomposed into four additional tetrahedra by choosing two opposite edge centres $\mathbf{x}_{ij}$ and $\mathbf{x}_{kl}$, whose four indices $i,j,k,l$ are all distinct, as shown by the black line in fig. 8(c); among the possible choices, the pair of edge centres with the smallest distance between each other is used. From the remaining four edge centres, point pairs of the form $\\{\mathbf{x}_{mn},\mathbf{x}_{mo}\\}$ or $\\{\mathbf{x}_{mn},\mathbf{x}_{on}\\}$ are created, yielding four pairs. Combining each pair with $\\{\mathbf{x}_{ij},\mathbf{x}_{kl}\\}$ (e.g. the black edge in fig. 8(c)) gives the aforementioned four tetrahedra. Subsequently, $\phi$ is computed for the added vertices $\mathbf{x}_{ij}$. Refinement is completed when a maximum refinement level
$l_{\text{max}}$ is reached. This can either be an arbitrary prescribed value
or can be computed such that the edge length of the refined tetrahedra is
comparable to the edge length of surface triangles. In the latter case,
$l_{\text{max}}=\operatorname{min}\left\\{l\in\mathbb{N}:\frac{L_{\text{tet}}}{L_{\text{tri}}}<2^{l}\right\\}$ (26)
with $L_{\text{tet}}$ and $L_{\text{tri}}$ being cell specific reference
lengths for tetrahedra and surface triangles, respectively. Different choices
for $L_{\text{tet}}$ and $L_{\text{tri}}$ are possible. We choose
$\displaystyle L_{\text{tet}}=\frac{1}{n_{t}}\sum_{\mathbf{e}\in
E_{\text{cdc}}}|\mathbf{e}|,$ $\displaystyle
L_{\text{tri}}=\operatorname{min}_{\mathbf{e}\in
E_{\tilde{\Sigma},c}}|\mathbf{e}|$
with $E_{\text{cdc}}$ denoting the set of edges resulting from tetrahedral
decomposition of a cell $\Omega_{c}$ at its centroid, $n_{t}$ the number of
edges in $E_{\text{cdc}}$ and $E_{\tilde{\Sigma},c}$ a subset of edges of
$\tilde{\Sigma}$. The set $E_{\tilde{\Sigma},c}$ consists of all edges of
$\mathcal{T}\in\tilde{\Sigma}$ for which
$\mathcal{T}\cap\mathcal{B}(\mathbf{x}_{\text{cp}},r_{\text{cp}})\neq\emptyset$.
Here, $\mathbf{x}_{\text{cp}}=\frac{1}{|P_{\text{cp}}|}\sum_{\mathbf{x}_{i}\in P_{\text{cp}}}\mathbf{x}_{i}$ is the centroid of $P_{\text{cp}}$, the set of points of $\tilde{\Sigma}$ that are closest to the corner points of $\Omega_{c}$, and the radius is $r_{\text{cp}}=\operatorname{max}_{\mathbf{x}\in P_{\text{cp}}}\|\mathbf{x}-\mathbf{x}_{\text{cp}}\|_{2}$.
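Computing $l_{\text{max}}$ from eq. 26 is a short loop; the following C++ sketch (with hypothetical argument names) returns the smallest level $l$ with $L_{\text{tet}}/L_{\text{tri}}<2^{l}$.

```cpp
// Maximum refinement level (eq. 26): the smallest l in N with
// L_tet / L_tri < 2^l, given the cell-specific reference lengths.
int maxRefinementLevel(double lTet, double lTri)
{
    int l = 0;
    double power = 1.0; // 2^l
    while (lTet / lTri >= power) {
        ++l;
        power *= 2.0;
    }
    return l;
}
```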
Finally, after computing a tetrahedral decomposition of each interface cell,
the volume fraction of a cell $\Omega_{c}$ is calculated as
$\alpha_{c}=\frac{1}{|\Omega_{c}|}\sum_{T\in
T_{c}}\alpha(T)|\operatorname{conv}(T)|$ (27)
where $T_{c}$ denotes the set of tetrahedra resulting from the decomposition
of $\Omega_{c}$ and $|\operatorname{conv}(T)|$ the volume of $T$. The volume
fraction $\alpha(T)$ is computed with the approach of Detrixhe and Aslam [10]
(eq. 7), repeated here
$\alpha(T)=\left\\{\begin{aligned} &1,&&\phi_{4}\leq 0,\\\
&1-\frac{\phi_{4}^{3}}{(\phi_{4}-\phi_{1})(\phi_{4}-\phi_{2})(\phi_{4}-\phi_{3})},&&\phi_{3}\leq
0<\phi_{4},\\\
&1-\frac{\phi_{1}\phi_{2}(\phi_{3}^{2}+\phi_{3}\phi_{4}+\phi_{4}^{2})+\phi_{3}\phi_{4}(\phi_{3}\phi_{4}-(\phi_{1}+\phi_{2})(\phi_{3}+\phi_{4}))}{(\phi_{1}-\phi_{3})(\phi_{2}-\phi_{3})(\phi_{1}-\phi_{4})(\phi_{2}-\phi_{4})},&&\phi_{2}\leq
0<\phi_{3},\\\
&-\frac{\phi_{1}^{3}}{(\phi_{2}-\phi_{1})(\phi_{3}-\phi_{1})(\phi_{4}-\phi_{1})},&&\phi_{1}\leq
0<\phi_{2},\\\ &0&&\phi_{1}>0,\end{aligned}\right.$ (28)
where $\phi_{4}\geq\phi_{3}\geq\phi_{2}\geq\phi_{1}$ are the signed distances
at the vertices $\mathbf{x}_{i}$ of $T$. The overall approach is summarized in
algorithm 2.
Algorithm 2 The Surface-Mesh / Cell Approximation Algorithm (SMCA)
1:Follow algorithm 1 up to step 13.
2:Identify interface cells (eq. 23)
3:Set bulk $\alpha_{c}$ (eq. 24)
4:Centroid decomposition of cells into tetrahedra (fig. 5)
5:for $l\in\\{1,\ldots,l_{\text{max}}\\}$ do
6: Flag tetrahedra for further refinement (eq. 23)
7: Decompose flagged tetrahedra (fig. 8)
8: Compute $\phi$ for new points (eq. 14)
9:end for
10:Compute $\alpha_{c}$ for interface cells (eq. 27)
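Eq. 28 maps directly onto a small function. The following C++ sketch sorts the four vertex distances and evaluates the corresponding branch of the model of Detrixhe and Aslam [10]; degenerate tetrahedra with coinciding distances across the interface would need additional safeguards that are omitted here.

```cpp
#include <algorithm>
#include <array>

// Volume fraction of a tetrahedron from the signed distances at its four
// vertices, following Detrixhe and Aslam [10], eq. 28.
double tetVolumeFraction(std::array<double, 4> phi)
{
    std::sort(phi.begin(), phi.end()); // phi[0] <= phi[1] <= phi[2] <= phi[3]
    const double p1 = phi[0], p2 = phi[1], p3 = phi[2], p4 = phi[3];

    if (p4 <= 0.0) return 1.0; // fully inside
    if (p1 > 0.0) return 0.0;  // fully outside
    if (p3 <= 0.0)             // phi_3 <= 0 < phi_4
        return 1.0 - (p4 * p4 * p4) / ((p4 - p1) * (p4 - p2) * (p4 - p3));
    if (p2 <= 0.0)             // phi_2 <= 0 < phi_3
        return 1.0 - (p1 * p2 * (p3 * p3 + p3 * p4 + p4 * p4)
                      + p3 * p4 * (p3 * p4 - (p1 + p2) * (p3 + p4)))
                     / ((p1 - p3) * (p2 - p3) * (p1 - p4) * (p2 - p4));
    // phi_1 <= 0 < phi_2
    return -(p1 * p1 * p1) / ((p2 - p1) * (p3 - p1) * (p4 - p1));
}
```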
## 4 Results
The software implementation is available on GitLab [27]: we refer to the
specific version (git tag) used to generate results described below. Detailed
information on how to build and use the software is provided in the README.md
file in the root folder of the software repository.
We use the relative difference between the total volume given by the volume fractions calculated from the surface on the unstructured mesh and the exact volume bounded by the surface, namely
$E_{v}=\frac{1}{V_{e}}\left|V_{e}-\sum_{c\in C}\alpha_{c}|\Omega_{c}|\right|,$
(29)
as the measure of accuracy of the proposed algorithms. Here, $V_{e}$ is the
volume given by the exact surface function, or the volume that is bounded by a
given surface mesh if an exact surface function is not available, e.g. in
sections 4.2 and 4.3. In these cases, we calculate $V_{e}$ using
$V_{e}=\frac{1}{3}\left|\int_{V_{e}}\nabla\cdot\mathbf{x}\,dV\right|=\frac{1}{3}\left|\int_{\partial
V_{e}}\mathbf{x}\cdot\mathbf{n}\,dS\right|$ (30)
where $\partial V_{e}$ is the surface that bounds $V_{e}$. As this surface is triangulated, eq. 30 can be expanded further:
$\displaystyle V_{e}=\frac{1}{3}\left|\sum_{t=1}^{N_{\tilde{\Sigma}}}\int_{T_{t}}\mathbf{x}\cdot\mathbf{n}\,dS\right|=\frac{1}{3}\left|\sum_{t=1}^{N_{\tilde{\Sigma}}}\int_{T_{t}}(\mathbf{x}-\mathbf{x}_{t}+\mathbf{x}_{t})\cdot\mathbf{n}\,dS\right|=\frac{1}{3}\left|\sum_{t=1}^{N_{\tilde{\Sigma}}}\mathbf{x}_{t}\cdot\mathbf{S}_{t}\right|$ (31)
where $N_{\tilde{\Sigma}}$ is the number of triangles in $\tilde{\Sigma}$,
$T_{t}\in\tilde{\Sigma}$ are triangles that form the interface mesh, and
$\mathbf{x}_{t},\mathbf{S_{t}}$ are their respective centroids and area normal
vectors.
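The last equality of eq. 31 holds because the face normal is constant on a flat triangle, so $\int_{T_{t}}(\mathbf{x}-\mathbf{x}_{t})\cdot\mathbf{n}\,dS=0$ when $\mathbf{x}_{t}$ is the triangle centroid. A self-contained C++ sketch of this reference-volume computation reads:

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

// Volume bounded by a closed, consistently oriented triangulated surface
// via the divergence theorem (eq. 31): V = (1/3) |sum_t x_t . S_t|, with
// x_t the triangle centroid and S_t the triangle area normal vector.
double enclosedVolume(const std::vector<std::array<Vec3, 3>>& triangles)
{
    double sum = 0.0;
    for (const auto& t : triangles) {
        Vec3 xt, e1, e2;
        for (int k = 0; k < 3; ++k) {
            xt[k] = (t[0][k] + t[1][k] + t[2][k]) / 3.0; // centroid x_t
            e1[k] = t[1][k] - t[0][k];
            e2[k] = t[2][k] - t[0][k];
        }
        // Area normal vector S_t = 0.5 * (e1 x e2).
        const Vec3 St{0.5 * (e1[1] * e2[2] - e1[2] * e2[1]),
                      0.5 * (e1[2] * e2[0] - e1[0] * e2[2]),
                      0.5 * (e1[0] * e2[1] - e1[1] * e2[0])};
        sum += xt[0] * St[0] + xt[1] * St[1] + xt[2] * St[2];
    }
    return std::abs(sum) / 3.0;
}
```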
| Computing architecture | |
| --- | --- |
| CPU | AMD Ryzen Threadripper 3990X 64-Core Processor (AuthenticAMD, family 23, model 49) |
| CPU frequency | 2.90 GHz |
| Compiler | g++ (Ubuntu 10.2.0-5ubuntu1~20.04) 10.2.0 |
| Optimization flags | -std=c++2a -O3 |

Table 1: Used computing architecture.
Table 1 contains the details of the computing architecture used to report the absolute CPU times in the results section. We have fixed the CPU frequency to 2.9 GHz to stabilize the CPU time measurements.
### 4.1 Sphere and ellipsoid
Exact initialization algorithms for spheres are available on unstructured
meshes [46, 25]. We use the sphere and ellipsoid test cases to confirm the
second-order convergence of SMCI/A algorithms and their applicability as a
volume fraction model for the unstructured Level Set / Front Tracking method
[28, 51]. The sphere case consists of a sphere with radius $R=0.15$, and the ellipsoid half-axes are $(0.4,0.3,0.2)$. Both the sphere and the ellipsoid are centered at $(0.5,0.5,0.5)$ in a unit box domain. Error convergence, CPU time and
additional data are publicly available [31].
#### 4.1.1 SMCI Algorithm
Figure 9 shows the expected second-order convergence of the global error $E_{v}$ given by eq. 29 on cubic (fig. 9(a)) and irregular hexahedral (fig. 9(b)) unstructured meshes. In fig. 9, $N_{c}$ is the number of cells used along each spatial dimension of $\tilde{\Omega}$ and $N_{T}$ is the number of triangles used to resolve the sphere.
(a) Equidistant mesh.
(b) Irregular hexahedral mesh.
Figure 9: $E_{v}$ errors of the SMCI algorithm for the sphere. The grey dashed
line indicates second order convergence.
The CPU times reported in fig. 10 for the architecture in table 1 show that the SMCI algorithm is a promising candidate for a volume fraction model for the unstructured Level Set / Front Tracking method. The complexity of the algorithm, expressed in terms of the measured CPU time, remains linear for a constant ratio $\sqrt{N_{T}}/N_{c}$. The computational complexity increases to quadratic with an increasing number of triangles per cell $\sqrt{N_{T}}/N_{c}$: this happens when a very fine surface mesh is used to compute volume fractions on a very coarse volume mesh. An intersection between a highly resolved surface mesh and a single cell of a relatively coarse mesh is shown in fig. 11(a).
This configuration is relevant for accurate initialization of volume fractions
on coarse meshes, but irrelevant for calculating the phase indicator for Front
Tracking, where only a small number of triangles per multimaterial cell ($\leq
10$) is present. Therefore, linear complexity of the SMCI algorithm for small
ratios $\sqrt{N_{T}}/N_{c}$ makes SMCI a potential candidate for a highly
accurate geometrical volume fraction model for the unstructured Level Set /
Front Tracking method. We will investigate this possibility in our future
work. When considering the absolute CPU times, it is important to note that
the SMCI algorithm has not yet been optimized for performance.
Figure 10: CPU times of the SMCI algorithm for the sphere initialized on a
cubic unstructured mesh.
The volume error $E_{v}$ for a sphere is shown in fig. 9(b) for a perturbed
hexahedral mesh. An example perturbed mesh from this parameter study is shown
in fig. 11(b). The mesh is distorted by randomly perturbing cell corner points, using a length scale factor $\alpha_{e}\in[0,1]$ for the edges $e$ that surround each mesh point. We have used $\alpha_{e}=0.25$, resulting in perturbations of the size of $0.25$ times the edge length. This results in a severe perturbation of the mesh, shown in fig. 11(b), as well as non-planarity of the faces of hexahedral cells. Still, as shown in fig. 9(b), SMCI retains second-order convergence, which is also the case for the initialization of the ellipsoid on the equidistant (fig. 12(a)) and perturbed hexahedral (fig. 12(b)) meshes.
(a) SMCI: intersected cell.
(b) SMCI: sphere and ellipsoid volume fractions, colored by $\alpha_{c}\in[0,1]$.
Figure 11: SMCI algorithm used with a sphere and an ellipsoid on an
unstructured hexahedral mesh.
(a) Equidistant mesh.
(b) Irregular hexahedral mesh.
Figure 12: $E_{v}$ errors of the SMCI algorithm for the ellipsoid. The grey
dashed line indicates second order convergence.
#### 4.1.2 SMCA algorithm
First, the effectiveness of the local adaptivity employed in the SMCA
algorithm is examined with a spherical interface as described in section 4.1.
Resolution of the volume mesh is fixed to $N_{c}=16$ cells in each direction
while the sphere is resolved with $\sqrt{N_{T}}\approx 410$ triangles. Maximum
refinement levels $l_{\text{max}}$ from $0$ to $3$ are manually prescribed. In
fig. 13, the resulting global volume errors $E_{v}$ are displayed. This test
case confirms the expected second-order convergence of $E_{v}$ with adaptive
refinement.
Figure 13: $E_{v}$ errors of the SMCA algorithm using different refinement
levels $l_{\text{max}}$ for a sphere. Resolution of volume and surface mesh
are fixed to $N_{c}=16$ and $\sqrt{N_{T}}\approx 410$. The grey dashed line
indicates second order convergence.
An exemplary tetrahedral decomposition of a perturbed hexahedral cell with a part of the surface mesh is displayed in fig. 14.
Figure 14: Tetrahedral decomposition of a perturbed hexahedral cell used to approximate $\alpha_{c}$. Tetrahedra from different refinement levels are shown in different colors (level 1: blue, level 2: grey, level 3: red). Due to adaptivity, the highest refinement level is localized in the vicinity of the surface mesh.
It demonstrates that the adaptive refinement based on the bounding ball
criterion eq. 23 works as intended. Refinement is localized to the vicinity
around the interface. Yet, the approach ensures all tetrahedra intersected by
the interface are actually refined. The effectiveness of the local adaptive
refinement compared to a uniform one becomes apparent when comparing the
resulting number of tetrahedra. Our adaptive approach yields around $2247$
tetrahedra per interface cell on average for the spherical interface with
$\sqrt{N_{T}}\approx 410$, $N_{c}=16$ and $l_{\text{max}}=3$. A uniform decomposition, on the contrary, would result in $M_{i}\times M_{r}^{l_{\text{max}}}=24\times 8^{3}\approx 12.3\times 10^{3}$ tetrahedra, where $M_{i}$ denotes the number of tetrahedra from the initial cell decomposition and $M_{r}$ the number of tetrahedra created by refining a tetrahedron. Thus, the local adaptive refinement reduces the required overall number of tetrahedra by a factor of $5.5$ in comparison to a uniform refinement, without affecting the accuracy.
Having verified the refinement procedure, the accuracy of the SMCA algorithm and its convergence with respect to the surface mesh resolution are assessed in the following. As for the SMCI algorithm, a sphere and an ellipsoid are used for this purpose. Results for the sphere in terms of the global volume error $E_{v}$ (eq. 29) are shown in fig. 15 for cubic cells (fig. 15(a)) and perturbed hexahedral cells (fig. 15(b)). Domain size, sphere centre and radius, as well as the perturbation factor $\alpha_{e}=0.25$, are identical to the SMCI setup. The maximum refinement level is computed according to eq. 26. Both mesh types yield nearly identical results and show second-order
convergence. Resolution of the volume mesh $N_{c}$ has a minor influence for
coarser surface meshes which vanishes for $\sqrt{N_{T}}>100$.
(a) Equidistant mesh.
(b) Irregular hexahedral mesh.
Figure 15: $E_{v}$ errors of the SMCA algorithm for the sphere. The grey
dashed line indicates second order convergence.
For the ellipsoidal interface, the errors $E_{v}$ are shown in fig. 16. The
results are qualitatively and quantitatively similar to those of the spherical
interface.
(a) Equidistant mesh.
(b) Irregular hexahedral mesh.
Figure 16: $E_{v}$ errors of the SMCA algorithm for the ellipsoid. The grey dashed line indicates second order convergence.
Figure 17: CPU times of the SMCA algorithm for the sphere initialized on a cubic unstructured mesh.
Absolute computational times required for the initialization of a sphere with
the SMCA algorithm are displayed in fig. 17. Run times have been measured on
the architecture listed in table 1. As with the implementation of the SMCI algorithm, our implementation of the SMCA algorithm has not yet been optimized for performance.
Because of the algebraic calculation of volume fractions from signed
distances, the SMCA algorithm allows a direct comparison with volume fraction
initialization methods on unstructured meshes that represent the fluid
interface using function composition. Considering section 1, logical choices
for the comparison are the methods of Ahn and Shashkov [1], Fries and Omerović
[13], Jones et al. [21]. However, Ahn and Shashkov [1] do not provide
convergence results for the 3D initialization and Fries and Omerović [13]
integrate a function that is $\neq 1$ within their 3D surface, so the result
of the quadrature does not correspond to the volume enclosed by the surface.
We therefore provide a direct comparison with Jones et al. [21], specifically
Jones et al. [21, table 3].
Absolute volume errors are computed for an octant of a sphere with radius
$R=0.5$, placed at $(0,0,0)$ within a unit-length cubical domain, and are
shown in fig. 18. Tetrahedral unstructured meshes are generated using the
Delaunay algorithm in gmsh [14], by providing a discretization length that
results in a number of mesh points comparable to Jones et al. [21, table 3, No
Nodes]. As shown in fig. 18, the accuracy of the SMCA algorithm depends on the
volume mesh resolution and the number of refinement levels when an implicit
(exact) sphere is used as interface description. This is expected since both
parameters influence the size of the refined tetrahedra which are used to
approximate the volume fraction. Consequently, the achievable accuracy is not
limited by the volume mesh resolution and can be controlled through the number
of refinement levels. The lowest absolute errors are in the order of magnitude
of $10^{-9}$, achieved by SMCA using $10$ refinement levels, and correspond to
relative errors in the order of magnitude of $10^{-8}$, which is around $4$
orders of magnitude lower than minimal VOF advection errors reported so far in
the literature [30], and are therefore admissible as initial volume fraction
values. Even higher levels of absolute accuracy, comparable to Jones et al.
[21, table 3, $\overline{\epsilon}_{6},\overline{\epsilon}_{9}$], can be
achieved with further refinement, with substantially increased computational
expense. However, such further increase in accuracy is without significance to
the volume fraction advection [30]. Contrary to the implicit (exact) sphere,
resolving a sphere using a triangular mesh is more challenging, as the
absolute accuracy depends on the resolution of the surface mesh. Results for
spheres triangulated using the Frontal Algorithm in gmsh [14] are shown in
fig. 18. Doubling the resolution of the surface mesh, as expected, doubles the
accuracy of SMCA with triangulated surfaces as input. This approach of course
does not make sense for a sphere, whose implicit (exact) function is easily
defined. For geometrically complex surfaces shown below, it is important to
have in mind that the resolution of the surface mesh together with the
refinement level determine the absolute accuracy and computational costs.
Figure 18: Comparing the SMCA algorithm and Jones et al. [21, table 3] on
tetrahedral meshes.
### 4.2 Surface of a fluid from an experiment
Some methods that are surveyed in section 1 can initialize volume fractions
from exact implicit surfaces, such as a sphere or an ellipsoid, analyzed in
section 4.1. One novelty of SMCI/A algorithms is their ability to compute
volume fractions from arbitrary surfaces on arbitrary unstructured meshes. For
example, volume fractions given by an experimental surface were calculated by
the SMCI algorithm in Hartmann et al. [16] for studying breakup dynamics of a
capillary bridge on a hydrophobic stripe between two hydrophilic stripes. In
[16], the experimental setup involves a liquid bridge that is formed between
two larger droplets across a hydrophobic stripe. The hydrophobic stripe drives
the collapse of this liquid bridge, that is observed experimentally and in a
simulation in [16]. The quantitative comparison of the simulation and the
experiment from [16] is shown in fig. 19(a). The experimental surface from
Hartmann et al. [16], used to initialize volume fractions, is shown in fig. 19(b). The SMCI algorithm computes the volume fractions of the experimental fluid interface from [16] with the volume error $E_{v}=7.789\times 10^{-6}$. As shown in section 4.1, the accuracy of the initialization depends on the quality of the surface mesh, not on the resolution of the volume mesh, which is chosen in this case to appropriately resolve the hydrodynamics in [16].
(a) Qualitative comparison with experiment, image from [16].
(b) Initialization of volume fractions $f\in[0,1]$ for the wetting experiment, image adapted from [16].
Figure 19: Simulation of the wetting experiment with the fluid interface given
as a triangular surface mesh [16].
### 4.3 CAD model
To demonstrate that the SMCI/A algorithms are able to handle interfaces more
complex than shown in section 4.1 and section 4.2, the surface mesh from a CAD
model displayed in fig. 20(a) is used.
(a) Surface mesh from a CAD model.
(b) Cross section of the volume mesh with part of the surface mesh, colored by
signed distance.
Figure 20: Surface and volume mesh of the CAD model test case.
In contrast to the previous interfaces, this one features sharp edges and
geometric features of distinctly different sizes. The mesh for this test case
has been generated with the _cartesianMesh_ tool of cfMesh [22]. Refinement is used in the vicinity of the interface. This meshing procedure is chosen to obtain a mesh that more closely resembles that of an industrial application than a uniform cubic mesh does. A cross section of the mesh is depicted in fig. 20(b).
Before examining the computed volume fractions for this case, the signed
distance calculation (section 2.2) and sign propagation (section 2.3) are
verified. The presence of sharp edges (see fig. 20(a)) makes this test case
more prone to false inside/outside classifications than the others shown so
far.
(a) Cells for which $\phi_{c}\geq 0$ (blue) overlaid with the surface mesh (grey).
(b) Cross section through the mesh with cells colored by volume fraction.
Figure 21: Inside/outside computation and resulting volume fractions for the
CAD geometry.
Yet our procedure yields the correct sign for the distance in all cells as
shown in fig. 21(a). The enclosed volume of the surface mesh is considered as
$\Omega^{+}$, thus $\phi>0$ for all points $\mathbf{x}\in\Omega^{+}$. As
displayed in fig. 21(a) and confirmed by further manual inspection of the
results, the proposed signed distance calculation correctly classifies all
cells within the narrow band and robustly propagates this information to the
entire domain. This is reflected in the computed volume fractions, shown in fig. 21(b). Bulk cells are assigned values of either $1$ or $0$, depending on whether they are located in $\Omega^{+}$ or $\Omega^{-}$, and mixed cells with $0<\alpha_{c}<1$ are only found where the surface mesh is located. Accuracy-
wise, the global errors $E_{v}$ depicted in fig. 22 have been obtained with
the SMCA algorithm using different refinement levels. As for the spherical
interface (see fig. 13), second-order convergence is achieved, even though the
surface mesh approximates a non-smooth interface here.
Figure 22: $E_{v}$ errors of the SMCA algorithm using different refinement
levels $l_{\text{max}}$ for the CAD model with the reference volume $V_{e}$
computed by eq. 31. The grey dashed line indicates second order convergence.
## 5 Conclusions
The proposed Surface-Mesh Cell Intersection / Approximation algorithms
accurately compute signed distances from arbitrary surfaces intersecting
arbitrary unstructured meshes. Geometrical calculations ensure the accuracy of
signed distances near the discrete surface. The signed distances (actually
their inside / outside information) are propagated into the bulk using the
approximate solution of a Laplace equation. Once the signed distances are
available in the full simulation domain, the SMCI algorithm computes volume
fractions by intersecting arbitrarily-shaped mesh cells with the given surface
mesh, while the SMCA algorithm approximates volume fractions using signed
distances stored at cell corner points. Both algorithms are robust and show
second-order convergence for exact surfaces and arbitrarily shaped surface
meshes. The SMCI algorithm scales linearly with a small number of surface
triangles per cut-cell. Since a small number of triangles per cell is a
requirement for Front Tracking, this linear-complexity makes SMCI an
interesting candidate for computing volume fractions in the 3D unstructured
Level Set / Front Tracking method [28, 51], which will be the subject of
future investigations.
## 6 Acknowledgments
Calculations for this research were conducted on the Lichtenberg high
performance computer of the TU Darmstadt.
Funded by the German Research Foundation (DFG) – Project-ID 265191195 – SFB
1194, Projects B02, B01 and Z-INF.
We are grateful for the discussions on the phase-indicator calculation in the
LCRM method on structured meshes [45] with Prof. Dr. Seungwon Shin, Dr. Damir
Juric, and Dr. Jalel Chergui within the project "Initiation of International Cooperations" (MA 8465/1-1).
## References
* Ahn and Shashkov [2007] H. T. Ahn and M. Shashkov. Multi-material interface reconstruction on generalized polyhedral meshes. Technical Report LA-UR-07-0656, 2007.
* Aulisa et al. [2003] E. Aulisa, S. Manservisi, and R. Scardovelli. A mixed markers and volume-of-fluid method for the reconstruction and advection of interfaces in two-phase and free-boundary flows. _J. Comput. Phys._ , 188(2):611–639, 2003. ISSN 00219991. doi: 10.1016/S0021-9991(03)00196-7. URL https://doi.org/10.1016/S0021-9991(03)00196-7.
* Aulisa et al. [2007] E. Aulisa, S. Manservisi, R. Scardovelli, and S. Zaleski. Interface reconstruction with least-squares fit and split advection in three-dimensional Cartesian geometry. _J. Comput. Phys._ , 225(2):2301–2319, 2007. ISSN 00219991. doi: 10.1016/j.jcp.2007.03.015. URL https://dx.doi.org/10.1016/j.jcp.2007.03.015.
* Baerentzen and Aanaes [2005] J. A. Baerentzen and H. Aanaes. Signed distance computation using the angle weighted pseudonormal. _IEEE Transactions on Visualization and Computer Graphics_ , 11(3):243–253, 2005. doi: 10.1109/TVCG.2005.49. URL https://doi.org/10.1109/TVCG.2005.49.
* Bnà et al. [2015] S. Bnà, S. Manservisi, R. Scardovelli, P. Yecko, and S. Zaleski. Numerical integration of implicit functions for the initialization of the VOF function. _Comput. Fluids_ , 113:42–52, 2015. doi: 10.1016/j.compfluid.2014.04.010. URL http://dx.doi.org/10.1016/j.compfluid.2014.04.010.
* Bnà et al. [2016] S. Bnà, S. Manservisi, R. Scardovelli, P. Yecko, and S. Zaleski. Vofi - A library to initialize the volume fraction scalar field. _Comput. Phys. Commun._ , 200:291–299, 2016. ISSN 00104655. doi: 10.1016/j.cpc.2015.10.026. URL http://dx.doi.org/10.1016/j.cpc.2015.10.026.
* Cummins et al. [2005] S. J. Cummins, M. M. Francois, and Douglas B. Kothe. Estimating curvature from volume fractions. _Comput. Struct._ , 83(6-7):425–434, 2005. ISSN 00457949. doi: 10.1016/j.compstruc.2004.08.017. URL http://dx.doi.org/10.1016/j.compstruc.2004.08.017.
* DeBar [1974] R. B. DeBar. Fundamentals of the KRAKEN code. Technical Report UCID-17366, 1974.
* Desphande et al. [2012] S. S. Desphande, L. Anumolu, and M. F. Trujillo. Evaluating the performance of the two-phase flow solver interFoam. _Comput. Sci. Discov._ , 5(014016):1–36, 2012. doi: 10.1088/1749-4699/5/1/014016. URL https://doi.org/10.1088/1749-4699/5/1/014016.
* Detrixhe and Aslam [2016] M. Detrixhe and T. D. Aslam. From level set to volume of fluid and back again at second-order accuracy. _Int. J. Numer. Methods Fluids_ , 80:231–255, 2016. doi: 10.1002/fld. URL https://dx.doi.org/10.1002/fld.
* Divi et al. [2020] Sai C Divi, Clemens V Verhoosel, Ferdinando Auricchio, Alessandro Reali, and E Harald Van Brummelen. Error-estimate-based adaptive integration for immersed isogeometric analysis. _Comput. Math. with Appl._ , 80(11):2481–2516, 2020. ISSN 0898-1221. doi: 10.1016/j.camwa.2020.03.026. URL https://doi.org/10.1016/j.camwa.2020.03.026.
* Francois et al. [2006] M. M. Francois, S. J. Cummins, E. D. Dendy, D. B. Kothe, J. M. Sicilian, and M. W. Williams. A balanced-force algorithm for continuous and sharp interfacial surface tension models within a volume tracking framework. _J. Comput. Phys._ , 213(1):141–173, 2006. ISSN 00219991. doi: 10.1016/j.jcp.2005.08.004. URL http://dx.doi.org/10.1016/j.jcp.2005.08.004.
* Fries and Omerović [2016] T. P. Fries and S. Omerović. Higher-order accurate integration of implicit geometries. _Int. J. Numer. Methods Eng._ , 106(5):323–371, 2016. ISSN 10970207. doi: 10.1002/nme.5121. URL https://dx.doi.org/10.1002/nme.5121.
* Geuzaine and Remacle [2009] Christophe Geuzaine and Jean-François Remacle. Gmsh: A 3-d finite element mesh generator with built-in pre-and post-processing facilities. _International journal for numerical methods in engineering_ , 79(11):1309–1331, 2009.
* Ghali [2008] S. Ghali. _Introduction to geometric computing_. Springer Science & Business Media, 2008. URL https://www.springer.com/gp/book/9781848001145.
* Hartmann et al. [2021] Maximilian Hartmann, Mathis Fricke, Lukas Weimar, Dirk Gründing, Tomislav Marić, Dieter Bothe, and Steffen Hardt. Breakup dynamics of capillary bridges on hydrophobic stripes. _International Journal of Multiphase Flow_ , page 103582, 2021.
* Hirt and Nichols [1981] C. W. Hirt and B. D. Nichols. Volume of fluid/VOF/ method for the dynamics of free boundaries. _J. Comput. Phys._ , 39(1):201–225, 1981. ISSN 00219991. doi: 10.1016/0021-9991(81)90145-5. URL https://dx.doi.org/10.1016/0021-9991(81)90145-5.
* Ivey and Moin [2015] C. B. Ivey and P. Moin. Accurate interface normal and curvature estimates on three-dimensional unstructured non-convex polyhedral meshes. _J. Comput. Phys._ , 300:365–386, 2015. ISSN 10902716. doi: 10.1016/j.jcp.2015.07.055. URL http://dx.doi.org/10.1016/j.jcp.2015.07.055.
* Jasak [1996] H. Jasak. _Error analysis and estimation for the finite volume method with applications to fluid flows._ PhD thesis, 1996.
* Jofre et al. [2014] L. Jofre, O. Lehmkuhl, J. Castro, and A. Oliva. A 3-D Volume-of-Fluid advection method based on cell-vertex velocities for unstructured meshes. _Comput. Fluids_ , 94:14–29, 2014. ISSN 00457930. doi: 10.1016/j.compfluid.2014.02.001. URL http://dx.doi.org/10.1016/j.compfluid.2014.02.001.
* Jones et al. [2019] B. W.S. Jones, A. G. Malan, and N. A. Ilangakoon. The initialisation of volume fractions for unstructured grids using implicit surface definitions. _Comput. Fluids_ , 179:194–205, 2019. ISSN 00457930. doi: 10.1016/j.compfluid.2018.10.021. URL https://doi.org/10.1016/j.compfluid.2018.10.021.
* [22] F. Juretić. The cfMesh library for polyhedral mesh generation. https://sourceforge.net/projects/cfmesh/. Accessed: 2020-01-15.
* Juretić [2005] F. Juretić. _Error analysis in finite volume CFD_. PhD thesis, Imperial College London (University of London), 2005.
* Kim et al. [2009] Hyun-Jung Kim, Yu-Deok Seo, and Sung-Kie Youn. Isogeometric analysis for trimmed CAD surfaces. _Comput. Methods Appl. Mech. Eng._ , 198(37-40):2982–2995, 2009.
* Kromer and Bothe [2019] J. Kromer and D. Bothe. Highly accurate computation of volume fractions using differential geometry. _J. Comput. Phys._ , 396(July):761–784, 2019\. ISSN 00219991. doi: 10.1016/j.jcp.2019.07.005. URL https://dx.doi.org/10.1016/j.jcp.2019.07.005.
* López et al. [2019] J. López, J. Hernández, P. Gómez, and F. Faura. Non-convex analytical and geometrical tools for volume truncation, initialization and conservation enforcement in VOF methods. _J. Comput. Phys._ , 392:666–693, 2019. ISSN 10902716. doi: 10.1016/j.jcp.2019.04.055. URL https://doi.org/10.1016/j.jcp.2019.04.055.
* [27] T. Marić, T. Tolle, and D. Gründing. The argo OpenFOAM module: the implementation of Surface Mesh Cell Intersection / Approximation algorithms. https://gitlab.com/leia-methods/argo/-/tree/2021-10-27-SMCIA-R1. Accessed: 2021-02-16.
* Marić et al. [2015] T. Marić, H. Marschall, and D. Bothe. lentFoam – A hybrid Level Set/Front Tracking method on unstructured meshes. _Comput. Fluids_ , 113:20–31, may 2015. ISSN 00457930. doi: 10.1016/j.compfluid.2014.12.019. URL https://dx.doi.org/10.1016/j.compfluid.2014.12.019.
* Marić et al. [2018] T. Marić, H. Marschall, and D. Bothe. An enhanced un-split face-vertex flux-based VoF method. _J. Comput. Phys._ , 371:967–993, apr 2018. ISSN 10902716. doi: 10.1016/j.jcp.2018.03.048. URL https://doi.org/10.1016/j.jcp.2018.03.048.
* Marić et al. [2020] T. Marić, D. B. Kothe, and D. Bothe. Unstructured un-split geometrical volume-of-fluid methods–a review. _Journal of Computational Physics_ , 420:109695, 2020. doi: 10.1016/j.jcp.2020.109695. URL https://doi.org/10.1016/j.jcp.2020.109695.
* Maric et al. [2021] Tomislav Maric, Tobias Tolle, and Dirk Gruending. Computing volume fractions and signed distances from arbitrary surfaces on unstructured meshes, October 2021. URL https://doi.org/10.5281/zenodo.5603255.
* Meagher [1982] D. Meagher. Geometric modeling using octree encoding. _Comput. Graph. Image Process._ , 19(2):129–147, 1982. ISSN 0146664X. doi: 10.1016/0146-664X(82)90104-6. URL http://dx.doi.org/10.1016/0146-664X(82)90104-6.
* Mehta and Sahni [2004] D. P. Mehta and S. Sahni. _Handbook of data structures and applications_. CRC Press, 2004.
* Menon and Schmidt [2011] S. Menon and D. P. Schmidt. Conservative interpolation on unstructured polyhedral meshes: An extension of the supermesh approach to cell-centered finite-volume variables. _Comput. Methods Appl. Mech. Eng._ , 200(41-44):2797–2804, 2011. ISSN 00457825. doi: 10.1016/j.cma.2011.04.025. URL http://dx.doi.org/10.1016/j.cma.2011.04.025.
* Moukalled et al. [2016] F. Moukalled, L. Mangani, and M. Darwish. _The finite volume method in computational fluid dynamics_ , volume 113. Springer, 2016. URL https://www.springer.com/de/book/9783319168739.
* Nitti et al. [2020] Alessandro Nitti, Josef Kiendl, Alessandro Reali, and Marco D de Tullio. An immersed-boundary/isogeometric method for fluid–structure interaction involving thin shells. _Computer Methods in Applied Mechanics and Engineering_ , 364:112977, 2020.
* Noh and Woodward [1976] W. F. Noh and P. R. Woodward. SLIC (Simple Line Interface Calculation) method. _Proc. Fifth Int. Conf. Numer. Methods Fluid Dyn. June 28–July 2, 1976 Twente Univ. Enschede_ , pages 330–340, 1976. doi: 10.1007/3-540-08004-X˙336. URL https://dx.doi.org/10.1007/3-540-08004-X_336.
* Owkes and Desjardins [2015] M. Owkes and O. Desjardins. A mesh-decoupled height function method for computing interface curvature. _J. Comput. Phys._ , 281:285–300, 2015. ISSN 10902716. doi: 10.1016/j.jcp.2014.10.036. URL https://dx.doi.org/10.1016/j.jcp.2014.10.036.
* Owkes and Desjardins [2017] M. Owkes and O. Desjardins. A mass and momentum conserving unsplit semi-Lagrangian framework for simulating multiphase flows. _J. Comput. Phys._ , 332:21–46, 2017. ISSN 10902716. URL http://dx.doi.org/10.1016/j.jcp.2016.11.046.
* Rider and Kothe [1998] W. J. Rider and D. B. Kothe. Reconstructing Volume Tracking. _J. Comput. Phys._ , 141(2):112–152, 1998. ISSN 00219991. doi: 10.1006/jcph.1998.5906. URL https://doi.org/10.1006/jcph.1998.5906.
* Russo and Smereka [2000] G. Russo and P. Smereka. A Remark on Computing Distance Functions. _J. Comput. Phys._ , 163(1):51–67, 2000. ISSN 00219991. doi: 10.1006/jcph.2000.6553. URL https://dx.doi.org/10.1006/jcph.2000.6553.
* Scheufler and Roenby [2019] H. Scheufler and J. Roenby. Accurate and efficient surface reconstruction from volume fraction data on general meshes. _J. Comput. Phys._ , 383:1–23, apr 2019. ISSN 10902716. doi: 10.1016/j.jcp.2019.01.009. URL https://doi.org/10.1016/j.jcp.2019.01.009.
* Schmidt et al. [2012] Robert Schmidt, Roland Wüchner, and Kai-Uwe Bletzinger. Isogeometric analysis of trimmed NURBS geometries. _Comput. Methods Appl. Mech. Eng._ , 241:93–111, 2012.
* Shin and Juric [2002] S. Shin and D. Juric. Modeling Three-Dimensional Multiphase Flow Using a Level Contour Reconstruction Method for Front Tracking without Connectivity. _J. Comput. Phys._ , 180(2):427–470, 2002. ISSN 00219991. doi: 10.1006/jcph.2002.7086. URL https://dx.doi.org/10.1006/jcph.2002.7086.
* Shin et al. [2011] S. Shin, I. Yoon, and D. Juric. The Local Front Reconstruction Method for direct simulation of two- and three-dimensional multiphase flows. _J. Comput. Phys._ , 230(17):6605–6646, 2011\. ISSN 00219991. doi: 10.1016/j.jcp.2011.04.040. URL http://dx.doi.org/10.1016/j.jcp.2011.04.040.
* Strobl et al. [2016] S. Strobl, A. Formella, and T. Pöschel. Exact calculation of the overlap volume of spheres and mesh elements. _J. Comput. Phys._ , 311:158–172, 2016. ISSN 10902716. doi: 10.1016/j.jcp.2016.02.003. URL http://dx.doi.org/10.1016/j.jcp.2016.02.003.
* Sussman and Fatemi [1999] M. Sussman and E. Fatemi. An efficient, interface-preserving level set redistancing algorithm and its application to interfacial incompressible fluid flow. _SIAM Journal on scientific computing_ , 20(4):1165–1191, 1999. doi: 10.1137/S1064827596298245. URL https://doi.org/10.1137/S1064827596298245.
* Sussman et al. [1998] M. Sussman, E. Fatemi, P. Smereka, and S. Osher. An improved level set method for incompressible two-phase flows. _Computers & Fluids_, 27(5-6):663–680, 1998\. doi: 10.1016/S0045-7930(97)00053-4. URL https://doi.org/10.1016/S0045-7930(97)00053-4.
* Sussman et al. [1999] M. Sussman, A. S. Almgren, J. B. Bell, P. Colella, L. H. Howell, and M. L. Welcome. An adaptive level set approach for incompressible two-phase flows. _Journal of Computational Physics_ , 148(1):81–124, 1999. doi: 10.1006/jcph.1998.6106. URL https://doi.org/10.1006/jcph.1998.6106.
* Thürrner and A. [1998] G. Thürrner and Wüthrich C. A. Computing vertex normals from polygonal facets. _Journal of Graphics Tools_ , 3(1):43–46, 1998\. doi: 10.1080/10867651.1998.10487487. URL https://doi.org/10.1080/10867651.1998.10487487.
* Tolle et al. [2020] T. Tolle, D. Bothe, and T. Marić. SAAMPLE: A Segregated Accuracy-driven Algorithm for Multiphase Pressure-Linked Equations. _Comput. Fluids_ , 200:104450, 2020. ISSN 00457930. doi: 10.1016/j.compfluid.2020.104450. URL https://dx.doi.org/10.1016/j.compfluid.2020.104450.
* Toth et al. [2017] C. D. Toth, J. O’Rourke, and Jacob E. Goodman. _Handbook of discrete and computational geometry_. Chapman and Hall/CRC, 2017.
* Tryggvason et al. [2001] G. Tryggvason, B. Bunner, A. Esmaeeli, D. Juric, N. Al-Rawahi, W. Tauber, J. Han, S. Nas, and Y. J Jan. A front-tracking method for the computations of multiphase flow. _J. Comput. Phys._ , 169(2):708–759, 2001. ISSN 00219991. doi: 10.1006/jcph.2001.6726. URL https://doi.org/10.1006/jcph.2001.6726.
* Ubbink [1997] O. Ubbink. _Numerical prediction of two fluid systems with sharp interfaces_. PhD thesis, Imperial College of Science, Technology and Medicine, 1997\.
|
# SN 2017hpa: A Nearby Carbon-Rich Type Ia Supernova with a Large Velocity
Gradient
Xiangyun Zeng Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, Xinjiang 830011, People’s Republic of China School of Astronomy and Space Science, University of Chinese Academy of Sciences, Beijing 100049, People’s Republic of China Xiaofeng Wang<EMAIL_ADDRESS>Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China Beijing Planetarium, Beijing Academy of Science of Technology, Beijing 100044, People’s Republic of China Ali Esamdin<EMAIL_ADDRESS>Xinjiang Astronomical Observatory, Chinese Academy of Sciences, Urumqi, Xinjiang 830011, People’s Republic of China Craig Pellegrino Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117-5575, USA WeiKang Zheng Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Jujia Zhang Yunnan Observatories (YNAO), Chinese Academy of Sciences, Kunming 650216, People’s Republic of China Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming 650216, People’s Republic of China Center for Astronomical Mega-Science, Chinese Academy of Sciences, 20A Datun Road, Chaoyang District, Beijing, 100012, People’s Republic of China Jun Mo Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China Wenxiong Li Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China The School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel D. Andrew Howell Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Alexei V. Filippenko Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Miller Senior Fellow, Miller Institute for Basic Research in Science, University of California, Berkeley, CA 94720, USA Han Lin Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China Thomas G. Brink Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Edward A. Baron Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, USA Jamison Burke Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA James M. DerKacy Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, USA Curtis McCully Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Daichi Hiramatsu Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Griffin Hosseinzadeh Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138-1516, USA Las Cumbres Observatory, 6740 Cortona Drive Suite 102, Goleta, CA 9311-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Benjamin T. Jeffers Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Timothy W. Ross Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Benjamin E. 
Stahl Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Department of Physics, University of California, Berkeley, CA 94720-7300, USA Marc J. Staley Graduate Fellow Samantha Stegman Department of Astronomy, University of California, Berkeley, CA 94720-3411, USA Department of Chemistry, University of Wisconsin, Madison, WI 53706, USA Stefano Valenti Department of Physics, University of California, Davis, CA 95616, USA Lifan Wang George P. and Cynthia Woods Mitchell Institute for Fundamental Physics & Astronomy, Texas A&M University, Department of Physics and Astronomy, 4242 TAMU, College Station, TX 77843, USA Danfeng Xiang Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China Jicheng Zhang Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing, 100084, People’s Republic of China Tianmeng Zhang Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, People’s Republic of China
###### Abstract
We present extensive, well-sampled optical and ultraviolet photometry and
optical spectra of the Type Ia supernova (SN Ia) 2017hpa. The light curves
indicate that SN 2017hpa is a normal SN Ia with an absolute peak magnitude of
$M_{\rm max}^{B}\approx-19.12\pm 0.11$ mag and a post-peak decline rate
${\Delta}m_{15}(B)=1.02\pm 0.07$ mag. From the quasibolometric
light curve, we derive a peak luminosity of $1.25\times 10^{43}$ erg s$^{-1}$ and a
$^{56}$Ni mass of $0.63\pm 0.02\,M_{\odot}$. The spectral evolution of SN 2017hpa
is similar to that of normal SNe Ia, while it exhibits unusually rapid
velocity evolution resembling that of SN 1991bg-like SNe Ia or the high-
velocity subclass of SNe Ia, with a post-peak velocity gradient of $\sim
130\pm 7$ km s$^{-1}$ d$^{-1}$. Moreover, its early spectra ($t<-7.9$ d) show a prominent
C ii $\lambda$6580 absorption feature, which disappeared in the near-maximum-light
spectra but reemerged at phases from $t\,\sim\,+8.7$ d to $t\,\sim\,+11.7$ d
after maximum light. This implies that some unburned carbon may be mixed deep into
the inner layers, a picture supported by the low C ii $\lambda$6580 to Si ii
$\lambda$6355 velocity ratio ($\sim 0.81$) observed in SN 2017hpa. The O i
$\lambda$7774 line shows a velocity evolution similar to that of carbon. The
prominent carbon feature, the low velocities seen in carbon and oxygen, and the large
velocity gradient make SN 2017hpa stand out from other normal SNe Ia, and are
more consistent with predictions from a violent merger of two white dwarfs.
Detailed modelling is still needed to reveal the nature of SN 2017hpa.
supernovae: individual: SN 2017hpa — supernovae: general: high velocity
gradient
Software: SNooPy (Burns et al., 2011, 2014), SN-Spectral Evolution
(https://github.com/mwvgroup/SN-Spectral-Evolution), Minim Code (Chatzopoulos
et al., 2013), IRAF (Tody, 1986, 1993), DAOPHOT (Stetson, 1987), PyZOGY
(Zackay et al., 2016; Guevel & Hosseinzadeh, 2017), lcogtsnpipe (Valenti et
al., 2016), LOSS data-reduction pipeline (Ganeshalingam et al., 2010; Stahl et
al., 2019, 2020), Astropy (Astropy Collaboration et al., 2013), Matplotlib
(Hunter et al., 2007), Scipy (https://www.scipy.org/), Numpy
(https://numpy.org/)
## 1 Introduction
Type Ia supernovae (SNe Ia) are widely believed to arise from explosions of
carbon-oxygen (CO) white dwarfs (WDs) in a binary system, which have a typical
absolute $V$-band peak magnitude of $\sim-19$ mag (Phillips, 1993; Perlmutter
et al., 1999; Wang et al., 2006). The relatively uniform stellar explosions of
SNe Ia make them useful as standardizable candles in measuring extragalactic
distances (Phillips, 1993; Riess et al., 1996; Wang et al., 2005; Guy et al.,
2005; Howell, 2011; Burns et al., 2018), leading to the discovery of the
accelerating expansion of the Universe (Riess et al., 1998; Perlmutter et al., 1999). In
recent years, larger samples of SNe Ia have been used to further constrain the
nature of dark energy driving the acceleration (e.g., Betoule et al., 2014;
Abbott et al., 2019).
However, the progenitor systems and explosion mechanism of SNe Ia still remain
controversial (e.g., Maoz et al., 2014). Two popular scenarios are the violent
merger-triggered explosion of two WDs, known as the double-degenerate (DD)
scenario (Webbink, 1984; Iben & Tutukov, 1984), and the accretion-triggered
explosion of a WD with a nondegenerate companion, known as the single-
degenerate (SD) scenario (Whelan & Iben, 1973; Nomoto, 1982; Nomoto et al.,
1997). In general, the detection of signatures of circumstellar material (CSM)
around some SNe Ia supports the SD scenario (Hamuy et al., 2003; Aldering et
al., 2006; Patat et al., 2007; Sternberg et al., 2011; Dilday et al., 2012;
Maguire et al., 2013; Silverman et al., 2013; Wang et al., 2019), though some
theoretical studies show that the CSM could also be produced in the DD
scenario (Shen et al., 2013; Raskin & Kasen, 2013). On the other hand, there
is also evidence for nondetections of companion signatures for some SNe Ia,
thus favoring the DD scenario (González Hernández et al., 2012; Schaefer &
Pagnotta, 2012; Olling et al., 2015; Tucker et al., 2019).
Popular explosion models of SNe Ia include the following cases. (1) The CO WD
accretes material from the companion star until its mass nearly reaches the
Chandrasekhar mass limit ($M_{\rm Ch}$, $\sim$1.4 $M_{\odot}$, Chandrasekhar,
1957) and compressional heating at the center causes the explosion (Piersanti
et al., 2004). (2) The detonation of a thin layer of He on the surface of a WD
(Kromer et al., 2010; Shen & Moore, 2014) triggers a second detonation in the
WD center and hence the explosion of a sub-$M_{\rm Ch}$ CO WD. (3) The
violent merger or secular merger of two WDs, accompanied by radiation of
gravitational waves (Röpke et al., 2012; García-Berro & Lorén-Aguilar, 2017).
(4) In triple systems, orbital oscillations induced by the third star cause a direct collision
of two WDs and trigger the SN explosion (Thompson, 2011; Mazzali et al.,
2018). In view of these explosion mechanisms, the delayed detonation model is
one of the most suitable ones to account for the observed properties of SNe
Ia, which initially involves a deflagration of a $M_{\rm Ch}$ CO WD and later
a supersonic detonation (Khokhlov, 1991; Höflich et al., 2017). Nevertheless,
the double detonation model of sub-$M_{\rm Ch}$ CO WDs shows many striking
features, and can also explain the observed properties of some SNe Ia (Shen et
al., 2018).
Observationally, there is increasing evidence for spectroscopic and
photometric diversity of SNe Ia. Most SNe Ia can be classified as
spectroscopically normal ones, while a small fraction exhibit peculiar
properties in some respects (e.g., Branch et al., 1993; Filippenko, 1997),
such as the SN 1991T-like overluminous SNe (Filippenko et al., 1992a; Ruiz-
Lapuente et al., 1992; Phillips, 1993), the SN 1991bg-like subluminous SNe
(Filippenko et al., 1992b; Leibundgut et al., 1993), or the SN 2002cx-like
subclasses (Filippenko, 2003; Li et al., 2003). Based on differences in Si ii
velocity evolution, Benetti et al. (2005) divided SNe Ia into three
subclasses: high velocity gradient (HVG), low velocity gradient (LVG), and
FAINT. According to the equivalent width (EW) of Si ii $\lambda$6355 and Si ii
$\lambda$5972 absorption lines near maximum brightness, Branch et al. (2006)
divided SNe Ia into core normal (CN), broad line (BL), cool (CL), and shallow
silicon (SS) subgroups. The subluminous SN 1991bg-like and overluminous SN
1991T-like SNe Ia have large overlap with the CL and SS subclasses,
respectively (Branch et al., 2006). Based on the Si ii $\lambda$6355 velocity
measured near the time of $B$-band maximum, Wang et al. (2009) classified SNe
Ia into normal-velocity (NV) and high-velocity (HV) subsets. The HV subclass
is found to share some common properties such as red $B-V$ color, slow decay
in blue bands starting at $t\approx 40$ d from the peak, and abundant
surrounding CSM (Wang et al., 2008, 2009, 2019; Foley et al., 2011; Mandel et
al., 2014). Although asymmetric explosions have been proposed to explain the
occurrence of HV and NV subclasses of SNe Ia (Maeda et al., 2010), it is
difficult to account for the fact that these two subgroups have different
birth environments (Wang et al., 2013).
Early-time observations can place important constraints on the explosion
physics of SNe Ia, including the size of the primary WD (Bloom et al., 2012),
the radius of the companion star (Hosseinzadeh et al., 2017), the distribution
of $^{56}$Ni in the ejecta, and the possible existence of CSM (Piro & Morozova,
2016). Therefore, clarifying the progenitor systems and explosion mechanisms
affects our understanding of stellar evolution and precision cosmology. The
unburned carbon detected in early-time spectra can provide important clues to
the progenitor system and explosion mechanism of SNe Ia (Yamanaka et al.,
2009; Silverman et al., 2011; Taubenberger et al., 2011; Thomas et al., 2011;
Silverman & Filippenko, 2012; Hsiao et al., 2013; Li et al., 2019a).
Previous studies indicate that nearly 30% of SNe Ia show signatures of C ii
$\lambda$6580 absorption at $t\approx-4$ d (relative to the time of maximum
light), while this fraction is over 40% when examining the $t\approx-10$ d
spectra (Parrent et al., 2011; Thomas et al., 2011; Folatelli et al., 2012;
Silverman & Filippenko, 2012; Maguire et al., 2014). These studies show that
carbon-positive SNe Ia tend to be LVG subtypes (Folatelli et al., 2012) and
have bluer optical colors around maximum light (Thomas et al., 2011; Silverman
& Filippenko, 2012). Among those carbon-positive SNe Ia, there are two events
which show carbon absorption lasting until 1–3 weeks after maximum light. One
is SN 2002fk, which has detectable carbon absorption lines in the $t\approx 10$
d spectrum (Cartier et al., 2014). Another example is SN 2018oh, studied by Li
et al. (2019a), the first Kepler-discovered SN Ia with a spectroscopic
classification; the carbon feature can be detected even in the $t\approx 20.5$
d spectrum, representing the latest detection of carbon in SNe Ia. The origin
of these carbon detections in post-maximum spectra still remains unclear. SN
2017hpa is the third SN Ia with a persistent carbon feature; it exploded in the
spiral galaxy UGC 3122 (see Fig. 1) at a distance of $\sim$ 65.6 Mpc (redshift
$z\approx 0.0156$). The prominent carbon features and small distance of SN
2017hpa provide us with another excellent chance to study the observed
diversity of SNe Ia.
In this paper, the optical observations and data reduction are presented in
Section 2. Section 3 discusses the light and color curves, while Section 4
shows the spectroscopic evolution. The quasibolometric light curve and the origin
of the prominent carbon feature of SN 2017hpa are discussed in Section 5. We
summarize in Section 6.
## 2 Observations and Data Reduction
### 2.1 Discovery and Host Galaxy
SN 2017hpa was discovered at $\alpha=04^{h}39^{m}50^{s}.750$,
$\delta=07^{\circ}03\arcmin 54\arcsec.90$ (J2000) on 2017 Oct. 25.35 (UT dates
are adopted throughout this paper) during the Puckett Observatory World
Supernova Search (POSS; Gagliano et al., 2017). Figure 1 shows a color image
of SN 2017hpa. A spectrum taken $\sim 0.65$ d after the discovery classified
it as a normal SN Ia (Floers et al., 2017).
Figure 1: The left panel shows a color image synthesized from $gri$-band
observations from PanSTARRS; the faint star immediately beside the position of
SN 2017hpa is completely blended with the SN in the follow-up observations. The
right panel shows a color image synthesized from $gri$-band observations from
TNT; SN 2017hpa is marked with a red circle and the reference stars are numbered.
The host galaxy of SN 2017hpa is UGC 3122, which is classified as SAB(rs)c at
$z=0.015631\pm 0.000005$ (Paturel et al., 2002; Springob et al., 2005). This
redshift corresponds to a distance modulus $\mu=34.05\pm 0.38$ mag with a
velocity uncertainty of 500 km s$^{-1}$ (Willick et al., 1997), assuming a Hubble
constant of 73.5 km s$^{-1}$ Mpc$^{-1}$ (Riess et al., 2018).
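For illustration, the Hubble-flow part of this estimate can be reproduced with the short sketch below; peculiar-velocity corrections, which the Willick et al. (1997) value includes, are neglected here, so the numbers differ at the few-hundredths-of-a-magnitude level.

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]
H0 = 73.5            # Hubble constant [km/s/Mpc]
z = 0.015631         # host-galaxy redshift

d_mpc = C_KMS * z / H0                    # Hubble-flow distance [Mpc]
mu = 5.0 * np.log10(d_mpc * 1e6 / 10.0)   # distance modulus
dmu = 5.0 / np.log(10.0) * (500.0 / (C_KMS * z))  # 500 km/s velocity error
print(f"d = {d_mpc:.1f} Mpc, mu = {mu:.2f} +/- {dmu:.2f} mag")
# -> d ~ 63.8 Mpc, mu ~ 34.02 +/- 0.23 mag (cf. 34.05 +/- 0.38 quoted)
```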
### 2.2 Photometry
After the discovery of SN 2017hpa, we triggered follow-up photometric
observations on several telescopes, including the 0.8 m Tsinghua-NAOC
telescope (TNT; Huang et al., 2012; Zhang et al., 2015), the Las Cumbres
Observatory (LCO) telescope network (Shporer et al., 2011; Brown et al., 2013),
the 0.76 m Katzman Automatic Imaging Telescope (KAIT) at Lick Observatory (Li
et al., 2001; Filippenko et al., 2001; Filippenko, 2005), and the 1 m Nickel
reflector at Lick Observatory. The TNT, Nickel, and KAIT monitored SN 2017hpa
in the $BVRI$ bands, while the LCO 1 m telescope sampled its light curves in
the $BVgri$ bands.
For photometric observations obtained from the LCO during the Global Supernova
Project, PyZOGY (Zackay et al., 2016; Guevel & Hosseinzadeh, 2017) is employed
for image subtraction, while lcogtsnpipe (Valenti et al., 2016) is
used to measure the SN flux. An ad hoc pipeline (based on the IRAF
DAOPHOT package; Stetson, 1987) is applied to reduce images from the TNT and
extract instrumental magnitudes of the SN. Photometric images from the Lick
Observatory are reduced using the Lick Observatory Supernova Search (LOSS;
Filippenko et al., 2001) data-reduction pipeline (Ganeshalingam et al., 2010;
Stahl et al., 2019, 2020), while DAOPHOT (Stetson, 1987) is applied to
implement the point-spread-function (PSF) photometry.
For the TNT instrumental magnitudes, the $BV$-band images are calibrated using
the APASS catalog (Henden et al., 2016) and the $gri$-band magnitudes are
calibrated using the PanSTARRS catalog (Chambers et al., 2016; Waters et al.,
2016; Flewelling et al., 2016; Magnier et al., 2016). The local standard stars
with photometric magnitudes from APASS and PanSTARRS are listed in Table 1.
The unfiltered instrumental magnitudes from LOSS are calibrated to the
standard Landolt $R$-band magnitudes based on the transformed local standards
of SDSS (Li et al., 2003; Zheng et al., 2017). The LOSS $BVRI$ instrumental
magnitudes are calibrated to the Johnson system using a series of Landolt
(Landolt, 1992) standard stars taken on a number of photometric nights.
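In essence, each of these calibrations reduces to solving for a frame zero point against the local standards of Table 1 (plus color terms, omitted here). A minimal sketch, with hypothetical instrumental magnitudes:

```python
import numpy as np

# Catalog B magnitudes of four local standards (stars 1, 3, 4, 8 of Table 1)
m_cat = np.array([14.125, 15.836, 14.140, 15.626])
# Hypothetical instrumental magnitudes measured on one frame
m_inst = np.array([-6.870, -5.150, -6.850, -5.370])

zp = np.median(m_cat - m_inst)   # robust frame zero point
m_sn = -5.520 + zp               # calibrate a hypothetical SN measurement
print(f"ZP = {zp:.3f} mag, m_SN = {m_sn:.3f} mag")
```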
Table 1: Photometric Standards in the SN 2017hpa Field. These standard stars are used for calibration of the instrumental magnitudes. Star | $\alpha$(J2000) | $\delta$(J2000) | $B$ (mag) | $V$ (mag) | $g$ (mag) | $r$ (mag) | $i$ (mag)
---|---|---|---|---|---|---|---
1 | 04:40:00.331 | +07:02:17.167 | 14.125(020) | 13.368(015) | 13.689(014) | 13.114(034) | 12.915(040)
2 | 04:40:03.904 | +07:01:18.451 | 16.618(065) | 15.697(037) | 16.160(089) | 15.398(107) | 15.110(070)
3 | 04:39:40.185 | +07:02:54.100 | 15.836(029) | 15.046(049) | 15.403(030) | 14.779(034) | 14.593(045)
4 | 04:40:00.268 | +07:06:44.968 | 14.140(034) | 13.228(016) | 13.625(016) | 12.887(036) | 12.632(035)
5 | 04:40:02.771 | +07:00:42.491 | 16.812(121) | 16.013(077) | 16.389(088) | 15.752(053) | 15.491(174)
6 | 04:39:52.514 | +07:01:50.567 | 17.515(169) | 16.242(070) | 16.645(040) | 15.957(026) | 15.637(040)
7 | 04:39:49.255 | +07:04:03.612 | 17.125(095) | 16.130(077) | 16.594(044) | 15.802(069) | 15.477(068)
8 | 04:39:57.276 | +07:04:14.560 | 15.626(052) | 14.823(023) | 15.191(018) | 14.556(044) | 14.326(043)
9 | 04:39:54.303 | +07:01:52.794 | 17.553(151) | 16.262(008) | 16.918(114) | 15.686(037) | 15.101(034)
10 | 04:39:58.351 | +07:00:56.540 | 17.480(238) | 16.547(158) | 16.996(066) | 16.230(041) | 15.995(040)
11 | 04:39:41.224 | +07:05:43.976 | 17.430(026) | 16.468(078) | 17.033(093) | 16.288(066) | 16.102(040)
Optical and ultraviolet (UV) observations of SN 2017hpa were also obtained
with the Neil Gehrels Swift Observatory (Gehrels et al., 2004). The Swift/UVOT
observations started at relatively early phases in six bands including $uvw2$,
$uvm2$, $uvw1$, $u$, $b$, and $v$ (Roming et al., 2005); lower-case filter
names are used throughout this paper for photometric observations in the UVOT
bands. Using zeropoints from Breeveld et al. (2011) in the Vega
system, the data-reduction pipeline of the Swift Optical/Ultraviolet Supernova
Archive (SOUSA; Brown et al., 2014) is applied to obtain the Swift optical/UV
light curves of SN 2017hpa. The source counts are measured using a 3$\arcsec$
aperture and corrections are based on the average PSF. The template-
subtraction technique has also been applied to the Swift images and the final
uncertainty in the photometry is the combination of statistical uncertainties
in galaxy subtraction count rates and a 2% systematic fluctuation at each
pixel caused by differences in response sensitivity across the photon
detector. The final observed Swift and ground-based light curves are shown in
Figure 2, and the corresponding magnitudes are tabulated in Table 2 and Table
3.
Table 2: Photometric Observations of SN 2017hpa by Ground-Based Telescopes. Epochs are relative to the epoch of $B$-band maximum brightness (MJD = 58,066.6); magnitudes are calibrated to the AB magnitude system. MJD | Epoch | $B$ (mag) | $V$ (mag) | $R$ (mag) | $I$ (mag) | $g$ (mag) | $r$ (mag) | $i$ (mag) | $Clear$ (mag) | Telescope
---|---|---|---|---|---|---|---|---|---|---
58053.34 | -13.30 | 17.235(035) | 16.967(022) | $\cdots$ | $\cdots$ | 17.054(081) | 17.060(097) | 17.311(143) | $\cdots$ | TNT
58053.54 | -13.10 | 16.971(031) | 16.869(027) | 16.741(023) | 16.668(039) | $\cdots$ | $\cdots$ | $\cdots$ | 16.623(022) | KAIT4
58053.82 | -12.83 | 16.946(041) | 16.848(030) | $\cdots$ | $\cdots$ | 16.981(025) | 16.863(065) | 17.078(043) | $\cdots$ | LCO
58054.35 | -12.29 | 16.897(033) | 16.684(020) | $\cdots$ | $\cdots$ | 16.896(167) | 16.806(102) | 17.122(136) | $\cdots$ | TNT
58054.36 | -12.28 | 16.741(012) | 16.634(008) | 16.539(010) | 16.463(015) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | Nickel
58054.36 | -12.28 | 16.744(012) | 16.642(011) | 16.546(011) | 16.463(016) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | KAIT4
58054.54 | -12.11 | 16.717(027) | 16.614(021) | 16.505(019) | 16.442(190) | $\cdots$ | $\cdots$ | $\cdots$ | 16.355(024) | KAIT4
58054.84 | -11.80 | 16.705(043) | 16.659(042) | $\cdots$ | $\cdots$ | 16.692(022) | 16.614(058) | 16.856(045) | $\cdots$ | LCO
58055.19 | -11.45 | 16.674(031) | 16.484(017) | $\cdots$ | $\cdots$ | 16.469(074) | 16.507(089) | 16.846(130) | $\cdots$ | TNT
58055.54 | -11.10 | 16.460(021) | 16.409(017) | 16.276(017) | 16.217(030) | $\cdots$ | $\cdots$ | $\cdots$ | 16.147(016) | KAIT4
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$
58203.00 | +136.35 | 19.821(478) | 18.920(192) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | TNT
58207.17 | +140.52 | 20.258(264) | 19.730(330) | 20.210(455) | 19.657(309) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | Nickel
58207.17 | +140.52 | 20.281(285) | 19.851(363) | 20.501(477) | 19.562(302) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | KAIT4
58210.16 | +143.52 | 20.056(159) | 19.796(416) | 19.774(295) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | Nickel
58210.16 | +143.52 | 20.021(159) | 19.960(435) | 19.991(337) | 19.606(277) | $\cdots$ | $\cdots$ | $\cdots$ | $\cdots$ | KAIT4
Table 3: Swift UVOT Photometry of SN 2017hpa. Epochs are relative to the epoch of $B$-band maximum (MJD = 58,066.6); magnitudes are calibrated to the Vega magnitude system. MJD | Epoch | $uvw2$ (mag) | $uvm2$ (mag) | $uvw1$ (mag) | $u$ (mag) | $b$ (mag) | $v$ (mag)
---|---|---|---|---|---|---|---
58053.19 | -13.45 | 19.73(16) | 20.18(33) | 18.87(13) | 17.51(06) | 17.06(04) | 16.66(05)
58054.59 | -12.06 | 20.18(50) | $\cdots$ | 18.98(26) | 16.92(11) | 16.73(07) | 16.47(11)
58055.71 | -10.93 | 19.41(17) | 20.40(33) | 18.25(12) | 16.35(05) | 16.29(04) | 16.22(06)
58060.76 | -5.89 | 18.70(09) | 19.72(24) | 17.24(06) | 15.43(03) | 15.62(03) | 15.59(04)
58067.35 | 0.70 | 18.30(13) | 19.52(28) | 17.24(12) | 15.36(05) | 15.42(04) | 15.22(06)
58069.00 | 2.35 | 18.61(11) | 19.63(20) | 17.45(09) | 15.42(03) | 15.42(03) | 15.25(04)
58071.00 | 4.34 | 18.68(10) | 20.23(30) | 17.69(10) | 15.63(04) | 15.54(03) | 15.31(04)
58073.05 | 6.41 | 18.88(12) | 19.59(20) | 17.67(10) | 15.87(04) | 15.66(03) | 15.34(04)
58074.59 | 7.95 | 18.97(12) | 19.67(19) | 17.90(11) | 15.98(04) | 15.79(03) | 15.39(04)
58080.95 | 14.31 | 19.17(13) | 20.00(24) | 18.55(13) | 16.85(06) | 16.40(04) | 15.77(04)
58082.35 | 15.71 | 19.12(19) | 20.52(51) | 18.62(18) | 16.97(09) | 16.56(05) | 15.80(07)
58085.14 | 18.49 | 19.70(15) | 19.94(18) | 19.14(14) | 17.36(06) | 16.92(04) | 16.00(04)
58088.60 | 21.96 | 19.96(20) | 20.11(22) | 19.50(17) | 17.63(08) | 17.21(05) | 16.27(05)
58095.17 | 28.52 | 20.02(19) | 20.29(23) | 19.96(22) | 18.38(12) | 17.75(06) | 16.46(06)
Figure 2: The observed UV and optical light curves of SN 2017hpa.
### 2.3 Spectroscopy
A total of 26 low-resolution optical spectra of SN 2017hpa have been obtained
using different telescopes and equipment, including the AFOSC mounted on the
Asiago Ekar telescope, the BFOSC mounted on the Xinglong 2.16 m telescope
(XLT; Jiang et al., 1999; Zhang et al., 2016; Fan et al., 2016), the YFOSC on
the Lijiang 2.4 m telescope (LJT; Chen et al., 2001; Zhang et al., 2012; Wang
et al., 2019) of Yunnan Astronomical Observatories, the Kast spectrograph on
the Lick 3 m Shane telescope (Miller & Stone, 1993; Stahl et al., 2020), and
the LCO 2 m Faulkes Telescope North (FTN; Brown et al., 2013). The journal of
spectroscopic observations is presented in Table 4, including one spectrum
from the Transient Name Server (https://wis-tns.weizmann.ac.il/) and six
spectra from Stahl et al. (2020). When no dedicated data-reduction pipeline was
available, we applied standard IRAF routines to reduce the spectra.
Spectrophotometric standard stars observed at an airmass comparable to the
target on the same night were used to calibrate the flux density of SN
2017hpa. The extinction curves of the various observatories are utilized to
correct for atmospheric extinction, and spectra of the standard stars are used
to eliminate the telluric absorption lines.
Table 4: Spectroscopic Observations of SN 2017hpa. Epochs are relative to the epoch of $B$-band maximum (MJD = 58,066.6); wavelengths are in $\rm\AA$. MJD | Epoch | $\lambda_{\rm Start}$ | $\lambda_{\rm End}$ | Instrument
---|---|---|---|---
58052.5 | -14.1 | 3387 | 8249 | Asiago (public)
58053.2 | -13.5 | 3496 | 9173 | LJT
58054.2 | -12.4 | 3502 | 9171 | LJT
58056.5 | -10.1 | 3622 | 10400 | Lick 3 m
58058.7 | -7.9 | 3399 | 9999 | LCO
58064.5 | -2.1 | 3249 | 9999 | LCO
58068.7 | 2.0 | 3249 | 9999 | LCO
58071.3 | 4.7 | 3746 | 8840 | XLT
58075.4 | 8.7 | 3250 | 10000 | LCO
58076.1 | 9.5 | 3744 | 8839 | XLT
58078.4 | 11.7 | 3630 | 10400 | Lick 3 m
58091.7 | 25.1 | 3500 | 9100 | LJT
58094.3 | 27.7 | 3299 | 9999 | LCO
58099.3 | 32.6 | 3299 | 9999 | LCO
58099.3 | 32.6 | 3630 | 10400 | Lick 3 m
58105.4 | 38.7 | 3632 | 10400 | Lick 3 m
58107.7 | 41.1 | 3500 | 9100 | LJT
58110.3 | 43.7 | 5896 | 8182 | Lick 3 m
58111.3 | 44.7 | 3249 | 10000 | LCO
58118.7 | 52.1 | 3500 | 9100 | LJT
58119.4 | 52.8 | 3300 | 10000 | LCO
58121.3 | 54.7 | 3299 | 10000 | LCO
58127.8 | 61.1 | 3500 | 9100 | LJT
58131.4 | 64.7 | 3632 | 10400 | Lick 3 m
58132.3 | 65.7 | 3400 | 9300 | LCO
58153.3 | 86.6 | 3300 | 9299 | LCO
## 3 Light Curves
### 3.1 Optical and Ultraviolet Light Curves
The multiband UV/optical light curves of SN 2017hpa are shown in Figure 2; one
can see that the observations in optical bands have nearly daily sampling,
ranging from about 2 weeks before to over 100 d after $B$-band maximum light.
The light curves of SN 2017hpa are similar to those of normal SNe Ia, reaching
maximum slightly earlier in the $I/i$ and $UV$ bands than the $B$ band, and
having a prominent shoulder in $R/r$ as well as a secondary maximum in $I/i$.
The slight deviations between the $BVgri$ light curves of different telescopes
are primarily due to different filter transmission functions, as shown in
Figure 3. The transmission differences at the red edge of the $I$-band filters
may cause the $I$-band discrepancies between LCO and TNT. Applying a
polynomial fit to the $B$-band light curves around maximum light yields a peak
of $15.48\pm 0.03$ mag on MJD = 58066.6 (UT 2017 November 9.6). The $V$-band
light curve reached its peak of $15.35\pm 0.2$ mag on MJD = 58068.4, $\sim
1.8$ d after the $B$-band peak.
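A minimal sketch of this measurement, with synthetic points standing in for the $B$-band data of Table 2: fit a low-order polynomial near peak, locate the minimum magnitude, and read off ${\Delta}m_{15}(B)$ fifteen days later.

```python
import numpy as np

# Synthetic B-band light-curve points near maximum (MJD, mag)
t = np.array([58058., 58061., 58064., 58067., 58070., 58075., 58081.])
m = np.array([15.75, 15.57, 15.50, 15.48, 15.53, 15.72, 16.10])

coef = np.polyfit(t - 58066., m, deg=4)   # low-order polynomial around peak
grid = np.linspace(-8., 15., 2301)
fit = np.polyval(coef, grid)

i_pk = np.argmin(fit)                     # minimum magnitude = light-curve peak
t_max, m_max = 58066. + grid[i_pk], fit[i_pk]
dm15 = np.polyval(coef, grid[i_pk] + 15.) - m_max
print(f"t_max(B) = MJD {t_max:.1f}, B_max = {m_max:.2f} mag, dm15 = {dm15:.2f} mag")
```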
Figure 3: The transmission curves of the TNT and LCO filters; each curve is
normalized to the peak.
Figures 4 and 5 compare the multiband UV/optical light curves of SN 2017hpa
with those of several well-observed normal SNe Ia which have comparable
${\Delta}m_{15}(B)$, including SN 2003du (Stanishev et al., 2007), SN 2005cf
(Wang et al., 2009), SN 2011fe (Maguire et al., 2013), SN 2012cg (Munari et
al., 2013; Brown et al., 2014), SN 2013dy (Pan et al., 2015; Zhai et al.,
2016), and SN 2018oh (Li et al., 2019a). The UV/optical light curves of the
comparison SNe Ia have been normalized to SN 2017hpa. As can be seen from
Figure 4, SN 2017hpa and the other normal comparison SNe Ia have similar light-
curve shapes near $B$-band maximum. Although the UV light curves of SN 2017hpa
are similar to those of other comparison SNe Ia, they seem to show excess
emission at early phases, especially the first two data points. This may
suggest additional energy beyond the radioactive decay of centrally-located
nickel, such as surface nickel mixing (Piro & Nakar, 2013) or interaction of
SN ejecta with a companion star or with CSM (Kasen, 2010). The post-peak
decline rate ${\Delta}m_{15}(B)$ of the $B$-band light curve is measured to be
$1.02\pm 0.07$ mag, and the color stretch (Burns et al., 2014) is determined
to be $s_{BV}$= $0.94\pm 0.03$.
Figure 4: Comparison of the optical light curves of SN 2017hpa with other
well-observed SNe Ia having similar decline rates. The light curves of the
comparison SNe Ia have been normalized to match the observed peak magnitudes
of SN 2017hpa. Figure 5: Comparison of the UV light curves of SN 2017hpa with
other well-observed SNe Ia having similar decline rates. The light curves of
the comparison SNe Ia have been normalized to match the peak magnitudes of SN
2017hpa.
### 3.2 Reddening and Color Curves
Assuming $R_{V}=3.1$ (Cardelli et al., 1989), we obtain the line-of-sight
Galactic extinction for SN 2017hpa to be $A_{V}=0.485$ mag (Schlegel et al.,
1998; Schlafly & Finkbeiner, 2011), corresponding to a color excess of
$E(B-V)_{\rm gal}=0.156$ mag. After removing the Galactic reddening, the $B-V$
color is found to be $0.002\pm 0.05$ mag at $t=0$ d and $1.08\pm 0.06$ mag at
$t=35$ d relative to $B$ maximum, consistent with typical values of normal SNe
Ia (Phillips et al., 1999; Wang et al., 2009).
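The quoted color excess follows directly from the adopted extinction law, $E(B-V)=A_{V}/R_{V}$; a one-line check:

```python
A_V, R_V = 0.485, 3.1
print(f"E(B-V)_gal = {A_V / R_V:.3f} mag")   # -> 0.156 mag
```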
We applied SuperNovae in object-oriented Python (SNooPy; Burns et al., 2011,
2014) to fit the multiband light curves of SN 2017hpa, as shown in Figure 6.
Both $EBV$ and $st$ models in SNooPy2 are adopted to estimate the host-galaxy
extinction, and an average host reddening is derived to be $E(B-V)_{\rm host}$
= $0.06\pm 0.06$ mag. The relatively low host-galaxy reddening is consistent
with the fact that the SN is located far away from the center of the host
galaxy. Moreover, the spectra of SN 2017hpa show no detectable Na i D
absorption from the host galaxy.
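For reference, the SNooPy fit reduces to a few calls; the sketch below follows the package's documented interface and assumes a light-curve file in SNooPy's text format (the file name and band list are illustrative).

```python
from snpy import get_sn   # SNooPy (Burns et al. 2011, 2014)

s = get_sn('SN2017hpa.txt')               # hypothetical light-curve file
s.choose_model('EBV_model2', stype='st')  # reddening model with color stretch
s.fit(['B', 'V', 'g', 'r', 'i'])          # fit the optical bands
s.summary()                               # prints E(B-V)_host, s_BV, t_max(B), ...
```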
Figure 6: Best-fit light-curve model from SNooPy2. The light curves are
shifted vertically for clarity. The dashed lines represent the 1$\sigma$
uncertainty of the best-fit light-curve templates.
The optical intrinsic color evolution of SN 2017hpa is shown in Figure 7. At
$t\gtrsim-10$ d, both the $B-V$ and $g-r$ color curves evolve toward the red
until reaching the reddest color at 4–5 weeks after $B$ maximum. Both the
$V-I$ and $g-i$ color curves show a short-term evolution from red to blue
until $t\approx-10$ d; then they evolve redward and reach the red peak at
$t\approx 35$ d. After that, the $V-I$ and $g-i$ color curves become
progressively bluer.
Figure 7: The $B-V$, $V-I$, $g-r$, and $g-i$ color curves of SN 2017hpa
compared with those of SNe 2003du, 2005cf, 2011fe, 2012cg, 2013dy, and 2018oh.
All light curves including those of SN 2017hpa have been dereddened using
SNooPy2.
Overall, the color-curve evolution of SN 2017hpa is similar to that of SN
2005cf and SN 2018oh, except it has a bluer color at very early phases
(especially the $g-r$ color). Based on the near-UV (NUV) colors, SNe Ia can be
classified into NUV-red and NUV-blue subgroups (Milne et al., 2013). Figure 8
shows the observed $uvw1-V$ color evolution of SN 2017hpa together with that
of SNe 2005cf, 2011fe, and 2018oh. One can see that SN 2017hpa can be put into
the NUV-red group.
Figure 8: The $uvw1-v$ color of SN 2017hpa compared with the NUV-blue and
NUV-red groups of SNe Ia (Milne et al., 2013). The pink and blue shaded
regions mark the loci of the NUV-red and NUV-blue subgroups, respectively.
The overplotted curve shows the unreddened color evolution of SN 2017hpa.
### 3.3 First-Light Time
The rise time and first-light time can put additional constraints on the
radius of the exploding star itself (Bloom et al., 2012; Piro & Nakar, 2013).
The observation on 2017 Oct. 13 by Gagliano et al. (2017) provides a
nondetection limit of 20.5 mag. However, this observation was taken about 12
days before the discovery and cannot usefully constrain the explosion date,
and hence the rise time of the light curves. We thus only utilized the
discovery magnitude, $17.9\pm 0.3$ mag in the clear band (close to broadband
$R$), obtained $\sim 2.0$ days before our multi-color observations, when
performing the rise-time fitting. The ideal expanding
fireball (Riess et al., 1999) model and broken-power-law (Zheng & Filippenko,
2017) model are both adopted to fit the $R$-band light curve of SN 2017hpa (as
shown in Figure 9), and the first-light time is estimated as
MJD = 58,047.08 $\pm$ 0.73 and MJD = 58,049.65 $\pm$ 0.24, respectively. The
mean fitted first-light time (FFLT) is adopted as MJD = 58,048.37 $\pm$ 0.97.
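A sketch of the fireball fit, with synthetic fluxes standing in for the early $R$-band data: the model is $f(t)=a\,(t-t_{0})^{2}$, so the first-light time $t_{0}$ is one of two free parameters in a least-squares fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def fireball(t, t0, a):
    """Ideal expanding-fireball model: flux rises as (t - t0)^2."""
    return a * np.clip(t - t0, 0.0, None) ** 2

# Synthetic early-time epochs (MJD) and R-band fluxes (arbitrary units)
t_obs = np.array([58051.4, 58053.5, 58054.5, 58055.5, 58057.5, 58059.5])
f_obs = np.array([0.60, 2.91, 4.61, 6.71, 12.08, 19.02])

popt, pcov = curve_fit(fireball, t_obs, f_obs, p0=[58049., 1.])
print(f"t_first = MJD {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
```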
Figure 9: Fit to the observed $R$-band light curves using the analytic
function from Zheng & Filippenko (2017) and the ideal fireball model (Riess et
al., 1999). The black triangle represents the discovery magnitude, i.e., 17.9
mag $\pm$ 0.3 mag in clear band (close to broadband R), obtained at $\sim$2.0
days before our multi-color observations. The bottom panel shows the residual
relative to the best-fit curves. The horizontal dashed line in the bottom
panel represents zero residual.
With the time of maximum light in $B$ and the derived FFLT, the rise time of
SN 2017hpa is estimated to be $18.26\pm 0.97$ d, comparable to that of typical
SNe Ia (Zheng et al., 2017). The first multi-color observation of SN 2017hpa
is thus estimated to be $\sim 5$ d after the FFLT, $\sim 13$ d prior to $B$
maximum.
## 4 Optical Spectra
### 4.1 Temporal Evolution of the Spectra
The evolution of the optical spectra of SN 2017hpa is displayed in Figure 10.
The early-time spectra are characterized by prominent absorption lines of
intermediate-mass elements (IMEs), such as Fe ii $\lambda\lambda$4404,5018, Mg
ii $\lambda$4481, Si ii $\lambda$6355, S ii $\lambda\lambda$5468,5654, Ca ii
NIR triplet and Ca ii H&K. At $t\sim$ 2 weeks before the $B$-band maximum, the
absorption troughs near 4300 $\rm\AA$ and 4800 $\rm\AA$ could be attributed to
Fe ii/Fe iii/Mg ii, while the distinct absorption notches near 6300 $\rm\AA$
and 7000 $\rm\AA$ could be due to C ii $\lambda$6580 and C ii $\lambda$7234,
respectively. The C ii $\lambda$6580 absorption is relatively strong while the
C ii $\lambda$7234 absorption is weaker. The Si ii $\lambda$6355 absorption
lines at this phase display a perfect gaussian profile without invoking the
high-velocity feature (HVF), while a Ca ii NIR HVF could be detected through
multi-gaussian fitting (Zhao et al., 2015, 2016). After $t\sim$ 1 week before
maximum light, both C ii $\lambda$6580 and C ii $\lambda$7234 absorptions are
still prominent in the spectra of SN 2017hpa, and the absorption lines of
“W”-shaped S ii and Si ii $\lambda$5972 start to emerge in the spectra of SN
2017hpa. With the decreasing of the expansion velocity of the photosphere, the
absorption minimum of Si ii $\lambda$6355 line gradually shifted redward, and
the absorption lines of iron group elements and sulfur gradually increase in
its strength. At around $B$-band maximum, the spectra are primarily dominated
by “W”-shaped S ii absorption features near 5400 $\rm\AA$, the blended
absorption lines of Fe ii and Si ii/Si iii near 4500 $\rm\AA$ and Si ii
$\lambda$6355, while the C II features become invisible at this phase. By
$t\sim$ 0 days, the HVFs of Ca ii NIR triplet become relatively weak and the
photospheric component started to emerge in the spectra. At $t\sim+10$ days
after the $B$ maximum, the photospheric components of the Ca ii NIR continute
to gain the strength and start to dominate the spectra features.
Interestingly, the C ii $\lambda$6580 absorption feature seems to reemerge in
the spectra of SN 2017hpa around this phase, which is rarely seen in other SNe
Ia. At about one month after the $B$-band maximum, the Ca ii H&K lines and NIR
triplet are the main spectral features. Meanwhile, the features of iron group
element begin to dominate in the spectra when the SN enter the early nebular
phase. Figure 11 compares spectra of SN 2017hpa at several epochs with those
of well-observed SNe Ia with similar ${\Delta}m_{15}(B)$.
Figure 10: Optical spectral evolution of SN 2017hpa. All of the spectra have
been corrected for the redshift of the host galaxy and reddening. The epochs
shown on the right side represent the phases in days relative to $B$-band
maximum light. The dashed line marks the center of the Si ii $\lambda$6355 line
profile at +2.04 d from $B$-band maximum. Colors denote different instruments.
The spectra have been shifted vertically for clarity.
Figure 11: Spectra of SN 2017hpa at $t\approx-14$, $-8$, $-2$, and $+32$ d
relative to $B$-band maximum, compared with spectra of SNe 2005cf (Garavini et
al., 2007; Wang et al., 2009), 2011fe (Mazzali et al., 2014; Zhang et al.,
2016), and 2013dy (Zheng et al., 2013; Pan et al., 2015; Zhai et al., 2016) at
comparable phases. All spectra have been corrected for reddening and the
redshift of the host galaxy, and have been shifted
vertically for clarity.
For SN 2017hpa, the C ii $\lambda$6580 and C ii $\lambda$7234 absorptions
appear stronger than those of the comparison SNe Ia in the early-phase
spectra, as shown in Figure 11(a). Moreover, the O i $\lambda$7774 absorption
line of SN 2017hpa seems to also be stronger than in the comparison SNe Ia
except for SN 2011fe and SN 2012cg. The pseudo-equivalent width (pEW) of C ii
$\lambda$6580 is measured to be 14.0$\pm$1.0 $\rm\AA$ for SN 2017hpa at
$t\approx-13$ d, while those measured for SNe 2005cf, 2011fe and 2012cg are
8.0$\pm$1.1 $\rm\AA$, 0.8$\pm$0.2 $\rm\AA$ and 1.0$\pm$0.6 $\rm\AA$,
respectively. No C ii absorption feature is detected in the spectra of SN
2013dy at similar phase. The corresponding pEW of O i $\lambda$7774 is
measured as 52.0$\pm$6.3 $\rm\AA$ for SN 2017hpa at this epoch, comparable to
that of SN 2011fe (48.2$\pm$0.4 $\rm\AA$), while those measured for SNe 2012cg
and 2013dy are 16.2$\pm$1.4 $\rm\AA$ and 16.3$\pm$3.4 $\rm\AA$, respectively.
No prominent O i $\lambda$7774 is detected in the spectra of SN 2005cf at
similar epoch, which is consistent with the findings by Wang et al. (2009).
Following the discovery by Zhao et al. (2016) that the velocity of the O i
$\lambda$7774 line is positively correlated with that of C ii $\lambda$6580,
we propose that more unburned carbon and oxygen may be retained in the
explosion ejecta of SNe 2017hpa, 2011fe, and 2012cg, although the stronger
O i $\lambda$7774 absorption observed in SN 2017hpa could also result from a
higher oxygen abundance of the exploding white dwarf (Cui et al., 2020). A
detached Ca ii NIR HVF can likewise be detected through the multi-Gaussian
fitting proposed by Zhao et al. (2015, 2016).
Figure 11(b) shows the comparison at $\sim 1$ week before maximum light. All
spectra show an increase in absorption strength of IMEs. The Si ii
$\lambda$6355 velocity of SN 2017hpa derived from absorption minimum is
$12,500\pm 180$ km s$^{-1}$, which is comparable to that of the comparison sample.
The C ii $\lambda$6580 absorption line remained visible in the red wing of Si
ii $\lambda$6355 at this epoch.
The spectra near maximum light are displayed in Figure 11(c). The absorption
features due to IMEs such as Si ii at 4130 $\rm\AA$, Si iii at 4560 $\rm\AA$,
and S ii at 5468, 5612, and 5654 $\rm\AA$ become prominent at this phase. The
C ii absorption features at around 6300 and 7000 $\rm\AA$ remain noticeable in
SN 2017hpa but they are barely seen in the comparison SNe Ia. The $R$(Si ii),
defined as the line-strength ratio of Si ii $\lambda$5972 to Si ii
$\lambda$6355 (Nugent et al., 1995), can be used as indicator of the
photospheric temperature. A lower value of $R$(Si ii) corresponds to a higher
photospheric temperature for the SNe Ia. At around maximum light, $R$(Si ii)
is measured to be $0.18\pm 0.03$, comparable to that of SN 2018oh ($R$(Si ii)
$=0.15\pm 0.04$), suggesting that these two SNe have similar photospheric
temperatures around maximum light; the relatively large ratio indicates a
somewhat lower photospheric temperature for SN 2017hpa compared with SNe Ia
of smaller $R$(Si ii). The pseudo-equivalent widths
(pEWs) of Si ii $\lambda$5972 and Si ii $\lambda$6355 near maximum light are
measured to be $15.5\pm 0.6$ $\rm\AA$ and $83.9\pm 2.2$ $\rm\AA$,
respectively, putting SN 2017hpa into the CN subtype of Branch et al. (2006)
classification.
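The pEW values quoted here integrate the fractional depth of a feature beneath a pseudo-continuum drawn between its endpoints, $\mathrm{pEW}=\int(1-F_{\lambda}/F_{c})\,d\lambda$; a minimal sketch on a synthetic Gaussian line:

```python
import numpy as np

# Synthetic spectrum: flat continuum with one Gaussian absorption line
wave = np.linspace(5850., 6150., 601)
flux = 1.0 - 0.35 * np.exp(-0.5 * ((wave - 6000.) / 30.) ** 2)

# Pseudo-continuum: straight line between the blue/red feature endpoints
w1, w2 = 5900., 6100.
f1, f2 = np.interp(w1, wave, flux), np.interp(w2, wave, flux)
cont = f1 + (f2 - f1) * (wave - w1) / (w2 - w1)

sel = (wave >= w1) & (wave <= w2)
pew = np.trapz(1.0 - flux[sel] / cont[sel], wave[sel])
print(f"pEW = {pew:.1f} Angstrom")   # ~26 A for this depth-0.35, sigma-30 line
```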
Figure 11(d) shows the spectral evolution at $t\approx 30$ d. The C ii
$\lambda$6580 absorption line has disappeared by this phase in all of our objects.
With the receding of the photosphere, the Fe ii features gain strength and
gradually dominate at wavelengths between 4700 and 5000 $\rm\AA$. The
absorption profiles of SN 2017hpa and the comparison sample are well developed
and tend to have uniform morphologies.
### 4.2 Carbon Features
The presence of C ii absorption can be easily identified in the early-time
spectra of SN 2017hpa around 6300 and 7000 $\rm\AA$. The C ii absorption
features of SN 2017hpa are stronger than in the comparison SNe Ia. The left
panel of Figure 12 shows that the C ii $\lambda$6580 absorption lines persist
in the spectra from $t\approx-14.1$ to $-7.9$ d. This absorption feature
disappeared in the spectra approaching maximum light and then reemerged at
$t\approx 9.5$ d. As a possible explanation, we propose that the C ii will be
highly excited when the detonation front or the deflagration front propagates
outward through the ejecta of SNe Ia (Ciaraldi-Schoolmann et al., 2013;
Seitenzahl et al., 2013) and this will make the C ii absorption features
disappear temporarily. With the receding and cooling of the photosphere of SNe
Ia, the C ii absorption trough will reemerge in the spectra. The right panel
of Figure 12 shows the relatively weak C ii $\lambda$7234 absorption; it is
noticeable in the earliest four spectra, and it then became flattened.
Inspection of the spectra does not reveal significant absorption of C ii
$\lambda$4267 in SN 2017hpa. Both C ii $\lambda$6580 and C ii $\lambda$7234
became barely visible in spectra taken $\sim 10$ d after maximum light.
Figure 12: The left panel shows the temporal evolution of C ii $\lambda$6580,
while the right panel shows that of C ii $\lambda$7234. The dashed lines mark
the Doppler velocity range from $-18,000$ km s$^{-1}$ to $-5000$ km s$^{-1}$.
The SN Spectroscopic Evolution package
(https://mwvg-spec-evolve.readthedocs.io/en/latest/) is employed to fit the
absorption components of Si ii $\lambda$6355 and C ii $\lambda$6580. For SN
2017hpa, the C ii $\lambda$6580 velocity is found to range from $\sim 13,000$
km s$^{-1}$ at $t\approx-14.1$ d to $\sim 9300$ km s$^{-1}$ at $t\approx-7.9$
d. According to Silverman & Filippenko (2012), the average velocity ratio
between C ii $\lambda$6580 and Si ii $\lambda$6355 is $\sim 1.05$ for SNe Ia
with observations at least four days prior to $B$-band maximum light. However,
the mean C ii $\lambda$6580 to Si ii $\lambda$6355 velocity ratio (hereafter
$R$(C ii/Si ii)) measured for SN 2017hpa is only $\sim 0.81$ (as shown in
Figure 13), suggesting that significant unburned carbon may have mixed deep
into the ejecta.
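For reference, converting an absorption-minimum wavelength into an expansion velocity uses the relativistic Doppler relation; in the sketch below the minimum wavelengths are hypothetical, chosen to give velocities similar to the measured ones.

```python
C_KMS = 299792.458

def line_velocity(lam_min, lam_rest):
    """Ejecta velocity [km/s] from a blueshifted absorption minimum
    (relativistic Doppler formula; positive = expansion toward us)."""
    r = (lam_min / lam_rest) ** 2
    return C_KMS * (1.0 - r) / (1.0 + r)

v_si = line_velocity(6091., 6355.)   # Si ii 6355 (hypothetical minimum)
v_c = line_velocity(6366., 6580.)    # C ii 6580 (hypothetical minimum)
print(f"v(Si ii) = {v_si:.0f} km/s, v(C ii) = {v_c:.0f} km/s, "
      f"R(C ii/Si ii) = {v_c / v_si:.2f}")   # ratio ~0.8
```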
Figure 13: Top panel: temporal evolution of the C ii $\lambda$6580 expansion
velocity for SNe Ia with carbon detections. Bottom panel: temporal evolution
of the C ii $\lambda$6580 to Si ii $\lambda$6355 velocity ratio. The
comparison data are taken from Silverman & Filippenko (2012).
### 4.3 Ejecta Velocity
The ejecta velocities measured from the absorption lines, such as S ii
$\lambda\lambda$5468,5640, Si ii $\lambda$6355, C ii $\lambda$6580, C ii
$\lambda$7234, and O i $\lambda$7774, are shown in Figure 15. The photospheric
velocity measured from Si ii $\lambda$6355 at $t\approx-13.5$ d is $\sim
16,000$ km s$^{-1}$, which is comparable to that of the Ca ii NIR triplet
($\sim 16,200$ km s$^{-1}$) but faster than the C ii velocity ($\sim 12,000$
km s$^{-1}$ for both C ii $\lambda$6580 and C ii $\lambda$7234). The velocity
of the C ii $\lambda$6580 absorption is roughly within the typical range of
expansion velocities of normal SNe Ia (Silverman & Filippenko, 2012). At the
time of $B$-band maximum, the velocity of Si ii $\lambda$6355 is estimated to
be $\sim 9550\pm 170$ km s$^{-1}$, which puts SN 2017hpa into the NV subclass
of the Wang et al. (2009) classification scheme (the basic parameters of SN
2017hpa are listed in Table 5), as shown in Figure 14. However, the Si ii
$\lambda$6355 velocity of SN 2017hpa exhibits a large gradient, $\sim 130\pm
7$ km s$^{-1}$ d$^{-1}$, measured within about 10 d after maximum light.
Table 5: Parameters of SN 2017hpa Parameter | Value
---|---
Photometric
$B_{\rm max}$ | $14.88\pm 0.02$ mag
$B_{\rm max}-V_{\rm max}$ | $0.005\pm 0.007$ mag
$M_{\rm max}(B)$ | $-19.12\pm 0.11$ mag
$E(B-V)_{\rm host}$ | $0.06\pm 0.06$ mag
$\Delta m_{15}(B)$ | $1.02\pm 0.07$ mag
$s_{BV}$ | $0.94\pm 0.03$
$t_{\rm max}(B)$ | $58,066.64\pm 0.36$ d
$t_{0}$ | $58,050.22\pm 1.20$ d
$\tau_{\rm rise}$ | $18.26\pm 0.97$ d
$L_{\rm bol}^{\rm max}$ | $1.25\times 10^{43}$ erg s$^{-1}$
$M_{{}^{56}\rm Ni}$ | $0.63\pm 0.02\,M_{\odot}$
Spectroscopic
$v_{0}$(Si ii) | $9550\pm 170$ km s$^{-1}$
$\dot{v}$(Si ii) | $130\pm 7$ km s$^{-1}$ d$^{-1}$
$R$(Si ii) | $0.18\pm 0.03$
Figure 14: Velocity evolution of SN 2017hpa as derived from the absorption
minimum of Si ii $\lambda$6355, compared with SNe 2005cf, 2011fe, 2013dy, and
2018oh. The average velocity curves obtained for SN 1991T-like and SN 1991bg-
like SNe are overplotted in red and blue dashed lines, respectively. The
normal subclass of SNe Ia is plotted with a black solid line. The shaded
region represents the 1$\sigma$ uncertainty for the mean velocity curve of
normal SNe Ia. Data for the comparison SNe and the region of normal SNe Ia are
extracted from (Li et al., 2019a). Figure 15: Velocity evolution of different
elements measured from spectra of SN 2017hpa.
### 4.4 High-Velocity Feature
At early phases, the HVFs of the Ca ii NIR triplet can be clearly recognized
from the corresponding absorption-line profiles in the spectra. We utilize a
multi-Gaussian function to fit the absorption profile of the Ca ii NIR triplet
following the method described by Zhao et al. (2015, 2016). For SN 2017hpa,
the HVF of the Ca ii NIR triplet seen in the $t\approx-13.5$ d spectrum has a
velocity of $\sim 24,000$ km s$^{-1}$, comparable to that of SN 2011fe ($\sim
22,000$ km s$^{-1}$; Zhang et al., 2016). The Ca ii NIR HVFs exhibit a velocity
plateau of $\sim 20,000$ km s$^{-1}$ from $t\approx-10$ to $-2$ d, which is similarly seen
in SN 2018oh but at different epochs (Li et al., 2019a). Note that there are
no obvious Si ii HVFs in the early-phase spectra of SN 2017hpa. It is
suggested that HVFs are more commonly detected in line profiles of the Ca ii
NIR triplet than in Si ii (Maguire et al., 2012; Childress et al., 2014;
Maguire et al., 2014; Pan et al., 2015; Silverman et al., 2015). Most SNe Ia
are found to have strong Ca ii NIR HVFs in their spectra at $t<7$ d while no
more than 30% of them have strong Si ii HVFs (Zhao et al., 2015).
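A sketch of the two-component decomposition (photospheric plus high-velocity Gaussian) in the spirit of Zhao et al. (2015, 2016), applied here to synthetic data on a velocity grid:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(v, d1, v1, s1, d2, v2, s2):
    """Normalized flux: continuum minus photospheric and HV Gaussians."""
    g1 = d1 * np.exp(-0.5 * ((v - v1) / s1) ** 2)   # photospheric component
    g2 = d2 * np.exp(-0.5 * ((v - v2) / s2) ** 2)   # high-velocity component
    return 1.0 - g1 - g2

v = np.linspace(-35000., 5000., 400)                # velocity grid [km/s]
rng = np.random.default_rng(0)
flux = two_gauss(v, 0.40, -16000., 3000., 0.25, -24000., 3500.)
flux += rng.normal(0.0, 0.01, v.size)               # add noise

p0 = [0.3, -15000., 2500., 0.2, -23000., 3000.]
popt, _ = curve_fit(two_gauss, v, flux, p0=p0)
print(f"photospheric: {popt[1]:.0f} km/s, HVF: {popt[4]:.0f} km/s")
```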
## 5 Discussion
### 5.1 Distance and Quasibolometric Light Curve
Applying a standard cosmological model and assuming $H_{0}=73.5$ km s$^{-1}$
Mpc$^{-1}$, $\Omega_{M}=0.3$, and $\Omega_{\Lambda}=0.7$ (Riess et al., 2018), a
distance modulus of $\sim 34.05$ mag can be obtained for the host galaxy of SN
2017hpa. We also utilize the latest $EBV$ model of SNooPy2 to fit the light
curves of SN 2017hpa in several optical bands, and the best-fit result gives
an average distance modulus of $34.00\pm 0.09$ mag. These two distance moduli
agree well with each other within the uncertainties. Adopting the distance
modulus as $34.00\pm 0.09$ mag and assuming $R_{V}=3.1$, we derive the
absolute $B$-band peak magnitude to be $M_{\rm max}(B)$ = $-19.12\pm 0.11$ mag
after correcting for both Galactic and host-galaxy extinction. This value
agrees well with the typical value of normal SNe Ia ($M_{\rm
max}(B)\approx-19.3$ mag; Phillips et al., 1999; Wang et al., 2009).
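The Hubble-flow arithmetic behind the first estimate is a one-liner; a sketch (assuming NumPy), where the host recession velocity is a hypothetical input chosen only to reproduce $\mu\approx 34.05$ mag:

```python
import numpy as np

H0 = 73.5          # km/s/Mpc (Riess et al., 2018)
cz = 4750.0        # km/s; hypothetical Hubble-flow velocity of the host,
                   # chosen here only to reproduce the quoted modulus
d_pc = (cz / H0) * 1.0e6                 # luminosity distance in pc
mu = 5.0 * np.log10(d_pc / 10.0)         # distance modulus
print(f"mu ~ {mu:.2f} mag")              # ~34.05 mag (d ~ 64.6 Mpc)
```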
Our extensive photometric observations are used to establish the quasibolometric light curve of SN 2017hpa. The spectral energy distribution (SED) includes flux contributions from the following bands: $uvw2$, $uvm2$, $uvw1$, $B$, $g$, $V$, $R$, $r$, $I$, and $i$. We adopt the procedure used for SN 2018oh to establish the SED at several epochs (Li et al., 2019a). The observed magnitudes are dereddened and converted into flux densities, which are then integrated over the effective wavelengths using Simpson's rule (Rogers, 1920; Syam, 2003).
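A minimal sketch of this integration step (assuming NumPy/SciPy); the effective wavelengths, flux densities, and distance below are illustrative placeholders rather than the measured values:

```python
import numpy as np
from scipy.integrate import simpson

# Approximate effective wavelengths (A) of the SED bands (uvw2, uvm2, uvw1,
# B, g, V, r, R, i, I); exact values depend on the filter curves used.
lam = np.array([1928., 2246., 2600., 4353., 4770., 5477.,
                6215., 6410., 7625., 7980.])
# Hypothetical dereddened flux densities (erg/s/cm^2/A) at one epoch;
# in practice these come from the dereddened magnitudes.
f_lam = np.array([0.2, 0.3, 0.5, 2.0, 2.2, 2.1, 1.8, 1.7, 1.2, 1.0]) * 1e-15

order = np.argsort(lam)
d_cm = 64.6 * 3.086e24                        # 64.6 Mpc in cm (mu ~ 34.05 mag)
flux = simpson(f_lam[order], x=lam[order])    # erg/s/cm^2, Simpson's rule
L = 4.0 * np.pi * d_cm**2 * flux
print(f"L_uvoir ~ {L:.2e} erg/s")
```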
To better constrain the peak luminosity, we use the UV and optical observations to construct the quasibolometric light curve, assuming the NIR contribution to be 5% at maximum light (Leloudas et al., 2009; Wang et al., 2009; Zhang et al., 2016; Zhai et al., 2016). Applying a polynomial fit, the maximum luminosity is estimated to be $L_{\rm peak}$ = $1.25\times 10^{43}$ erg s$^{-1}$ at about 0.85 d prior to $B$-band maximum. This peak luminosity is comparable to that of SN 2011fe ($\sim 1.13\times 10^{43}$ erg s$^{-1}$; Zhang et al., 2016) but lower than that of SN 2018oh ($\sim 1.49\times 10^{43}$ erg s$^{-1}$; Li et al., 2019a).
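The peak estimate amounts to fitting a low-order polynomial near maximum and reading off its vertex; a toy sketch (assuming NumPy) with synthetic data shaped like the observed light curve:

```python
import numpy as np

# Toy quasibolometric points near maximum (days relative to B max, erg/s),
# shaped like the observed light curve; real values come from the SED step.
t = np.array([-6.0, -4.0, -2.0, 0.0, 2.0, 4.0, 6.0])
L = 1.25e43 * (1.0 - 0.004 * (t + 0.85) ** 2)

coef = np.polyfit(t, L, 2)               # low-order polynomial fit
t_peak = -coef[1] / (2.0 * coef[0])      # vertex of the parabola
L_peak = np.polyval(coef, t_peak)
print(f"t_peak = {t_peak:+.2f} d, L_peak = {L_peak:.2e} erg/s")
```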
The modified radiation diffusion model of Arnett (Arnett, 1982; Chatzopoulos et al., 2012; Li et al., 2019a) is applied to evaluate the initial nickel mass together with other physical parameters of the SN ejecta. The Minim code (Chatzopoulos et al., 2013) is used to fit the quasibolometric light curve with a constant-opacity approximation. The model input parameters are the first-light time (FLT) $t_{0}$, the radioactive $^{56}$Ni mass $M_{\rm Ni}$, the light-curve timescale $t_{\rm lc}$, and the gamma-ray leaking timescale $t_{\gamma}$ (see, e.g., Chatzopoulos et al., 2012, 2013). We leave all of these parameters free when performing the model fitting. The final best-fit result for the quasibolometric luminosity evolution of SN 2017hpa is shown in Figure 16. Based on $\chi^{2}$ minimization, we find $t_{0}=-0.94\pm 1.06$ d, $t_{\rm lc}=15.86\pm 0.76$ d, $M_{\rm Ni}$= $0.63\pm 0.02\,M_{\odot}$, and $t_{\gamma}=28.37\pm 3.44$ d. The initial nickel mass is comparable to the estimates of $M_{\rm Ni}\approx 0.57$ $M_{\odot}$ for SN 2011fe (Zhang et al., 2016) and $0.55\pm 0.04$ $M_{\odot}$ for SN 2018oh (Li et al., 2019a), but smaller than the values of $0.77\pm 0.11$ $M_{\odot}$ for SN 2005cf (Wang et al., 2009) and $0.68\pm 0.14$ $M_{\odot}$ for SN 2003du (Stanishev et al., 2007).
Figure 16: The quasibolometric light curve (dots) with an Arnett (1982)
radiation diffusion model (blue curve).
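A minimal sketch of this fitting step (assuming NumPy/SciPy; this is not the Minim code) using a commonly adopted parametrization of the Arnett (1982) model, with a gamma-ray leakage factor as in Chatzopoulos et al. (2012); the heating constants and the application of the leakage factor to the total heating are standard approximations:

```python
import numpy as np
from scipy.integrate import quad

DAY = 86400.0
MSUN = 1.989e33                       # g
EPS_NI, EPS_CO = 3.9e10, 6.78e9       # 56Ni/56Co heating rates (erg/s/g)
TAU_NI, TAU_CO = 8.8 * DAY, 111.3 * DAY

def arnett_lum(t_day, t0, t_lc, m_ni, t_gamma):
    """Arnett-type luminosity (erg/s). Times in days: t0 = first light,
    t_lc = light-curve timescale, t_gamma = gamma-ray leaking timescale;
    m_ni = 56Ni mass in solar masses. The (1 - exp[-(t_gamma/t)^2]) factor
    is applied to the total heating, a common approximation."""
    t = (t_day - t0) * DAY
    if t <= 0.0:
        return 0.0
    tm = t_lc * DAY
    x = t / tm
    y = tm / (2.0 * TAU_NI)
    s = tm * (TAU_CO - TAU_NI) / (2.0 * TAU_CO * TAU_NI)
    A = quad(lambda z: 2.0 * z * np.exp(-2.0 * z * y + z * z), 0.0, x)[0]
    B = quad(lambda z: 2.0 * z * np.exp(-2.0 * z * y + 2.0 * z * s + z * z),
             0.0, x)[0]
    lum = m_ni * MSUN * np.exp(-x * x) * ((EPS_NI - EPS_CO) * A + EPS_CO * B)
    return lum * (1.0 - np.exp(-(t_gamma * DAY / t) ** 2))

# chi^2 fit of (t0, t_lc, m_ni, t_gamma), cf. the Minim fit in the text;
# t_obs, L_obs, L_err are the quasibolometric data (assumed inputs):
# from scipy.optimize import curve_fit
# popt, pcov = curve_fit(np.vectorize(arnett_lum), t_obs, L_obs,
#                        sigma=L_err, p0=(-1.0, 16.0, 0.6, 28.0))
```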
Adopting the method used by Li et al. (2019a), the average opacity $\kappa$ is estimated to be $0.36\pm 0.15$ cm$^{2}$ g$^{-1}$. With the best-fit $t_{\rm lc}$ and $t_{\gamma}$, we then obtain the ejecta mass and kinetic energy as $M_{\rm ej}=0.70\pm 0.22$ $M_{\odot}$ and $E_{\rm kin}=(0.70\pm 0.50)\times 10^{51}$ erg. These values are within the range of typical SNe Ia as suggested by Scalzo et al. (2019).
### 5.2 High Velocity Gradient
The ejecta velocity measured for SN 2017hpa near maximum light (i.e., $9550\pm 170$ km s$^{-1}$) is comparable to that of normal SNe Ia. According to the velocity gradient of Si ii $\lambda$6355, SNe Ia can be divided into LVG, HVG, and FAINT subtypes (Benetti et al., 2005). Most normal-velocity (as opposed to HV) SNe Ia tend to be LVG or FAINT objects (Silverman et al., 2012). The left panel of Figure 17 shows ${\Delta}m_{15}(B)$ versus the velocity gradient of SNe Ia, and the right panel displays the velocity gradient versus the velocity measured around the time of maximum light. It can be seen that SN 2017hpa falls in the HVG subcategory, contradicting the trend that SNe Ia showing prominent carbon features tend to be LVG objects (Parrent et al., 2011). According to previous studies, HV SNe Ia tend to have larger velocity gradients and vice versa; SN 2017hpa appears to break this tendency.
Figure 17: Spectroscopic subclassification of SN 2017hpa (marked with a black dot) based on the scheme of Benetti et al. (2005). Left panel: ${\Delta}m_{15}(B)$ plotted against the velocity gradient measured from Si ii $\lambda$6355. The SNe from different subtypes are taken from Benetti et al. (2005) and Chakradhari et al. (2018), the four transitional objects are from Pastorello et al. (2007) and Sahu et al. (2013), SN 2005cf is from Wang et al. (2009), and SN 2018oh is from Li et al. (2019a). Right panel: the velocity measured from Si ii $\lambda$6355 near maximum light versus the velocity gradient. The velocities are taken from Silverman et al. (2012) and Wang et al. (2019). The horizontal dashed line in the left panel marks the boundary between HVG and LVG, which is 70 km s$^{-1}$ d$^{-1}$ (Benetti et al., 2005).
Previous studies have shown that for the HVG and LVG subclasses, the difference in velocity gradient may be due to the different nature of the explosion or the degree of mixing of heavy elements (Sahu et al., 2013). An off-center ignition will cause an SN to explode asymmetrically; in this case, different viewing angles will cause the observed velocity gradient to vary greatly (Maeda et al., 2010). It has also been suggested that varying the criterion for the deflagration-to-detonation transition (DDT; Woosley et al., 2009) in explosions can result in a wide range of velocity gradients (Blondin et al., 2011). However, neither scenario suits SN 2017hpa, which has a low velocity but a high velocity gradient. Alternatively, efficient mixing of heavy elements in the SN ejecta may lead to the high velocity gradient of the HVG subclass, while inefficient mixing may cause the low velocity gradient of the LVG subclass (Blondin et al., 2012; Sahu et al., 2013).
### 5.3 Prominent Carbon Features
Detection of unburned carbon is important for constraining the explosion
mechanisms or progenitor systems of SNe Ia, and different explosion models
predict the presence of unburned material in different regions of the ejecta
(Fink et al., 2010; Pakmor et al., 2012; Sim et al., 2012; Seitenzahl et al.,
2013; Shen et al., 2018; Li et al., 2021). The velocities of IMEs in DDT models will increase with the explosion strength, leading to the diffusion of unburned material farther outward (Fink et al., 2010; Blondin et al., 2011). Carbon and oxygen have distinct velocity distributions in the delayed-detonation model, while similar velocity distributions are predicted in the violent merger model (Röpke et al., 2012). For carbon-positive SNe Ia such as SNe 2005di, 2005el, 2005ki, and SNF20080514-002, the absorption notch due to C ii $\lambda$6580 usually disappears about one week before maximum light (Thomas et al., 2011). Recent studies suggest that some SNe Ia or peculiar SNe Ia exhibit unusually persistent carbon features in their spectra, such as SN 2002fk (Cartier et al., 2014), iPTF14atg (Cao et al., 2015), and SN 2018oh (Li et al., 2019a). SN 2017hpa is another example showing such a prominent C ii $\lambda$6580 feature at early epochs ($t\leq -7.9$ d), and this carbon feature disappeared in the near-maximum-light spectra. However, unlike in other SNe Ia, the carbon feature seems to reemerge at phases from $t\sim +8.7$ d to $\sim +11.7$ d after maximum light.
The velocity ratio of C ii $\lambda$6580 to Si ii $\lambda$6355 is an
important parameter for setting constraints on the explosion models of SNe Ia
(Parrent et al., 2011; Folatelli et al., 2012). As shown in Figure 13, the
typical value of such a velocity ratio is $\sim 1.05$ for SNe Ia (Silverman &
Filippenko, 2012). However, the mean $R$(C ii/Si ii) is measured to be 0.81
for SN 2017hpa, much lower than the typical value. As noted by Silverman et al. (2012), for a given object, $R$(C ii/Si ii) usually increases somewhat with time; this conclusion may also be supported by the data presented by Parrent et al. (2011). Scalzo et al. (2010) suggested that a prominent C ii $\lambda$6580 feature, concurrent with low velocities, could be associated with a pre-explosion envelope of progenitor material originating from the merger of two white dwarfs. As the photosphere of an SN Ia recedes, ejecta of the inner layer with a more uniform velocity distribution begin to show up, which leads to the observed slow increase of $R$(C ii/Si ii) with time. The abundance distribution inferred from the violent merger model indicates that both carbon and oxygen can be mixed deep into the inner layer of the ejecta (Röpke et al., 2012). The prominent carbon features and high velocity gradient may suggest that SN 2017hpa is reminiscent of a low-luminosity subclass like SN 1991bg. However, the light curves and color curves of SN 2017hpa are quite different from those of low-luminosity objects like SN 2005bl (Taubenberger et al., 2008) or even transitional objects like SN 2004eo (Pastorello et al., 2007).
To investigate the abnormal behavior of SN 2017hpa, we perform a further comparison of the C ii $\lambda$6580 absorption in Figure 18, where the sample includes SNe 2002fk, 2005el, 2005cf, 2009dc, 2011fe, 2012cg, 2013dy, and 2018oh. The spectra of the comparison SNe are taken from Wang et al. (2009); Yaron & Gal-Yam (2012); Silverman et al. (2012); Zhai et al. (2016); Zhang et al. (2016); Guillochon et al. (2017); Li et al. (2019a); Stahl et al. (2020). In the spectra of SN 2013dy, a strong C ii $\lambda$6580 absorption feature can be found with a velocity up to $\sim 16,300$ km s$^{-1}$ at early epochs, but this absorption feature quickly fades away at $t\approx-12.9$ d, $\sim 3$ d after explosion (Zheng et al., 2013; Pan et al., 2015). SN 2012cg shows moderately strong C ii $\lambda$6580 at phases similar to those of SN 2017hpa, and the C ii absorption feature lasts until $\sim 8$ d before maximum light (Silverman et al., 2012). The spectra of SN 2009dc exhibit very prominent C ii $\lambda$6580 absorption that lasts for a long time, and this SN is proposed to result from a super-Chandrasekhar-mass progenitor system (Howell et al., 2006; Silverman et al., 2011; Taubenberger et al., 2011; Tanaka et al., 2010). SNe 2005cf, 2005el, and 2011fe show moderate C ii features throughout their spectral evolution, while SNe 2002fk and 2018oh have C ii absorption comparable to that of SN 2017hpa; all three of these normal SNe Ia show prominent C ii absorption features, detectable even in spectra taken at $\sim$7 days after $B$-band maximum light.
Figure 18: The C ii $\lambda$6580 evolution of SN 2017hpa compared with some well-observed SNe Ia, including SNe 2002fk, 2005el, 2005cf, 2009dc, 2011fe, 2012cg, 2013dy, and 2018oh.
Previous studies suggest that carbon-positive SNe Ia tend to have bluer
optical or UV colors (Thomas et al., 2011; Silverman et al., 2012; Milne et
al., 2013). Swift/UVOT observations also suggest that SNe Ia with prominent
carbon features are NUV-blue objects (Roming et al., 2005; Milne et al., 2010;
Thomas et al., 2011). The only exception is SN 2005cf, which belongs to the NUV-red subgroup but shows signatures of carbon features (Silverman et al., 2012; Milne et al., 2013). SN 2017hpa is also an NUV-red SN Ia with carbon in its early-time spectra. Based on model comparisons, Brown et al. (2019) suggested that the physical origin of the NUV-blue and NUV-red subclasses is likely related to metal abundance. As suggested by Heringer et al. (2017), C ii absorption features can be hidden by iron emission, implying a lower metallicity in the outer layers of SNe Ia with carbon signatures. However, if metallicity is the dominant origin of both the NUV differences and the presence of C ii features, a continuous distribution is expected in each of them (Brown et al., 2019). A large sample of SNe Ia with positive C ii detections is needed for modeling metallicity effects on the C ii absorption in the spectra.
Based on the above discussion, SN 2017hpa shows prominent carbon features with distinct evolution, a low C ii $\lambda$6580 to Si ii $\lambda$6355 velocity ratio, and a normal ejecta velocity but a high velocity gradient, all of which are unusual among the known subtypes of normal SNe Ia. We suggest that SN 2017hpa could result from a violent merger of two carbon-oxygen white dwarfs, which would account for the prominent and distinct C ii features in its spectra. Deep mixing of the SN ejecta may be responsible for the high velocity gradient of SN 2017hpa.
## 6 Conclusion
In this paper, we present extensive optical photometry and spectroscopy of the
Type Ia SN 2017hpa, which was discovered at a relatively young phase. This
object can be put into the category of normal and NUV-red SNe Ia, with
${\Delta}m_{15}(B)$= $1.02\pm 0.07$ mag and an absolute $B$-band magnitude
$M_{\rm max}(B)$ = $-19.12\pm 0.11$ mag.
The quasibolometric light curve of SN 2017hpa is established by using extensive UV/optical photometric observations. Arnett's $^{56}$Ni and $^{56}$Co radioactive-decay-driven radiation diffusion model is utilized to fit the quasibolometric light curve, yielding a peak luminosity of $L_{\rm peak}$ = $1.25\times 10^{43}$ erg s$^{-1}$. The mass of nickel synthesized during the explosion is estimated to be $M_{\rm Ni}$= $0.63\pm 0.02\,M_{\odot}$, and the ejecta mass is $M_{\rm ej}=0.70\pm 0.22$ $M_{\odot}$.
The spectral evolution of SN 2017hpa is roughly the same as that of normal SNe Ia such as SN 2018oh. However, prominent C ii absorption and abnormal velocity evolution distinguish it from other normal SNe Ia. The carbon and oxygen features appear stronger than in normal SNe Ia and last until about 10 d after maximum light, and both carbon and oxygen have lower velocities than intermediate-mass elements such as Si ii and Ca ii. Although SN 2017hpa has a typical ejecta velocity, $\sim 9550\pm 170$ km s$^{-1}$ as measured near maximum light, it has an unusually large velocity gradient ($\sim 130\pm 7$ km s$^{-1}$ d$^{-1}$) in comparison with other normal SNe Ia. The significant amount of unburned C and O in the ejecta, the lower velocity relative to IMEs, and the large velocity gradient are more consistent with the merger model. More observations and detailed modeling are needed to reveal the exact explosion physics of objects like SN 2017hpa.
## Acknowledgments
We thank the anonymous referee for their helpful comments, which improved the
manuscript. Funding for this work was provided by the National Natural Science
Foundation of China (NSFC, grants 11873081, U2031209, 12033002, 11633002, and
11761141001) and the National Program on Key Research and Development Project
(grant 2016YFA0400803), the High Level Talent-Heaven Lake Program of Xinjiang
Uygur Autonomous Region of China. This work is partially supported by the
Scholar Program of Beijing Academy of Science and Technology (DZ:BS202002). We
acknowledge the staff of the Lijiang 2.4 m telescope (LJT), the Xinglong 2.16 m telescope (XLT), and Lick Observatory for their support. The Chinese Academy of Sciences and the People's Government of Yunnan Province provide support for the LJT, which is jointly operated and maintained by Yunnan Observatories and the Center for Astronomical Mega-Science (CAS). JuJia Zhang is supported by
the National Natural Science Foundation of China (NSFC; grants 11773067,
11403096), the Youth Innovation Promotion Association of the CAS (grant
2018081), and the Ten Thousand Talents Program of Yunnan for Top-notch Young
Talents. Support for A.V.F.’s group at U.C. Berkeley was provided by the
TABASGO Foundation, the Christopher R. Redlich Fund, and the Miller Institute
for Basic Research in Science (U.C. Berkeley).
This work makes use of data from the Las Cumbres Observatory network. JB, DH,
DAH, and CP were supported by NSF grant AST-1911225. The Swift/UVOT data were
reduced by P.J. Brown and released in the Swift Optical/Ultraviolet Supernova
Archive (SOUSA), which is supported by NASA’s Astrophysics Data Analysis
Program (grant NNX13AF35G). Some of the observations with the Lick Observatory
1 m Nickel telescope were conducted by U.C. Berkeley undergraduate students
Sanyum Channa, Edward Falcon, Nachiket Girish, Romain Hardy, Julia Hestenes,
Andrew Hoffman, Evelyn Liu, Shaunak Modak, Costas Soler, Kevin Tang, Sameen
Yunus, and Keto Zhang; we thank them for their excellent work. Lick/KAIT and
its ongoing operation were made possible by donations from Sun Microsystems,
Inc., the Hewlett-Packard Company, AutoScope Corporation, Lick Observatory,
the U.S. National Science Foundation, the University of California, the Sylvia
& Jim Katzman Foundation, and the TABASGO Foundation. A major upgrade of the
Kast spectrograph on the Shane 3 m telescope at Lick Observatory was made
possible through generous gifts from the Heising-Simons Foundation as well as
William and Marina Kast. Research at Lick Observatory is partially supported
by a generous gift from Google.
## References
* Abbott et al. (2019) Abbott, T. M. C., Allam, S., Andersen, P., et al. 2019, ApJ, 872, L30
* Aldering et al. (2006) Aldering, G., Antilogus, P., Bailey, S., et al. 2006, ApJ, 650, 510
* Arnett (1982) Arnett, W. D. 1982, ApJ, 253, 785
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33. doi:10.1051/0004-6361/201322068
* Benetti et al. (2005) Benetti, S., Cappellaro, E., Mazzali, P. A., et al. 2005, ApJ, 623, 1011
* Betoule et al. (2014) Betoule, M., Kessler, R., Guy, J., et al. 2014, A&A, 568, A22
* Bloom et al. (2012) Bloom, J. S., Kasen, D., Shen, K. J., et al. 2012, ApJ, 744, L17
* Blondin et al. (2011) Blondin, S., Kasen, D., Röpke, F. K., et al. 2011, MNRAS, 417, 1280. doi:10.1111/j.1365-2966.2011.19345.x
* Blondin et al. (2012) Blondin, S., Matheson, T., Kirshner, R. P., et al. 2012, AJ, 143, 126. doi:10.1088/0004-6256/143/5/126
* Branch et al. (2006) Branch, D., Dang, L. C., Hall, N., et al. 2006, PASP, 118, 560
* Branch et al. (1993) Branch, D., Fisher, A., & Nugent, P. 1993, AJ, 106, 2383
* Breeveld et al. (2011) Breeveld, A. A., Landsman, W., Holland, S. T., et al. 2011, AIPC, 1358, 373
* Brown et al. (2014) Brown, P. J., Breeveld, A. A., Holland, S., et al. 2014, Ap&SS, 354, 89. doi:10.1007/s10509-014-2059-8
* Brown et al. (2019) Brown, P. J., Hosseinzadeh, G., Jha, S. W., et al. 2019, ApJ, 877, 152
* Brown et al. (2013) Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031
* Burns et al. (2018) Burns, C. R., Parent, E., Phillips, M. M., et al. 2018, ApJ, 869, 56
* Burns et al. (2011) Burns, C. R., Stritzinger, M., Phillips, M. M., et al. 2011, AJ, 141, 19
* Burns et al. (2014) Burns, C. R., Stritzinger, M., Phillips, M. M., et al. 2014, ApJ, 789, 32
* Cao et al. (2015) Cao, Y., Kulkarni, S. R., Howell, D. A., et al. 2015, Nature, 521, 328
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
* Cartier et al. (2014) Cartier, R., Hamuy, M., Pignata, G., et al. 2014, ApJ, 789, 89
* Ciaraldi-Schoolmann et al. (2013) Ciaraldi-Schoolmann, F., Seitenzahl, I. R., & Röpke, F. K. 2013, A&A, 559, A117. doi:10.1051/0004-6361/201321480
* Cui et al. (2020) Cui, X., Wang, B., Wu, C.-Y., et al. 2020, Research in Astronomy and Astrophysics, 20, 003. doi:10.1088/1674-4527/20/1/3
* Chakradhari et al. (2018) Chakradhari, N. K., Sahu, D. K., Anupama, G. C., et al. 2018, MNRAS, 474, 2502. doi:10.1093/mnras/stx2839
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv:1612.05560
* Chandrasekhar (1957) Chandrasekhar, S. 1957, An Introduction to the Study of Stellar Structure
* Chatzopoulos et al. (2012) Chatzopoulos, E., Wheeler, J. C., & Vinko, J. 2012, ApJ, 746, 121
* Chatzopoulos et al. (2013) Chatzopoulos, E., Wheeler, J. C., Vinko, J., et al. 2013, ApJ, 773, 76
* Chen et al. (2001) Chen, D., Wang, J.-C., Xu, J., et al. 2001, Publications of the Yunnan Observatory, 4, 42
* Childress et al. (2014) Childress, M. J., Filippenko, A. V., Ganeshalingam, M., et al. 2014, MNRAS, 437, 338
* Contreras et al. (2010) Contreras, C., Hamuy, M., Phillips, M. M., et al. 2010, AJ, 139, 519. doi:10.1088/0004-6256/139/2/519
* Dilday et al. (2012) Dilday, B., Howell, D. A., Cenko, S. B., et al. 2012, Science, 337, 942
* Dessart et al. (2014) Dessart, L., Blondin, S., Hillier, D. J., et al. 2014, MNRAS, 441, 532
* Fan et al. (2016) Fan, Z., Wang, H., Jiang, X., et al. 2016, PASP, 128, 115005. doi:10.1088/1538-3873/128/969/115005
* Filippenko (1997) Filippenko, A. V. 1997, ARA&A, 35, 309
* Filippenko (2003) Filippenko, A. V. 2003, in From Twilight to Highlight: The Physics of Supernovae, ed. W. Hillebrandt & B. Leibundgut (Berlin: Springer-Verlag), 171
* Filippenko (2005) Filippenko, A. V. 2005, in 1604–2004, Supernovae as Cosmological Lighthouses, ed. M. Turatto, et al. (San Francisco: ASP), 87
* Filippenko et al. (2001) Filippenko, A. V., Li, W. D., Treffers, R. R., & Modjaz, M. 2001, in Small-Telescope Astronomy on Global Scales, ed. W. P. Chen, C. Lemme, & B. Paczyński (San Francisco: ASP), 121
* Filippenko et al. (1992b) Filippenko, A. V., Richmond, M. W., Branch, D., et al. 1992b, AJ, 104, 1543
* Filippenko et al. (1992a) Filippenko, A. V., Richmond, M. W., Matheson, T., et al. 1992a, ApJ, 384, L15
* Fink et al. (2010) Fink, M., Röpke, F. K., Hillebrandt, W., et al. 2010, A&A, 514, A53
* Flewelling et al. (2016) Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2016, arXiv:1612.05243
* Floers et al. (2017) Floers, A., Taubenberger, S., Vogl, C., et al. 2017, The Astronomer’s Telegram, 10896
* Folatelli et al. (2012) Folatelli, G., Phillips, M. M., Morrell, N., et al. 2012, ApJ, 745, 74
* Foley et al. (2011) Foley, R. J., Sanders, N. E., & Kirshner, R. P. 2011, ApJ, 742, 89
* Fukugita et al. (1996) Fukugita, M., Ichikawa, T., Gunn, J. E., et al. 1996, AJ, 111, 1748
* Gagliano et al. (2017) Gagliano, R., Post, R., Weinberg, E., et al. 2017, Transient Name Server Discovery Report, 2017-1164
* Ganeshalingam et al. (2010) Ganeshalingam, M., Li, W., Filippenko, A. V., et al. 2010, ApJS, 190, 418
* Garavini et al. (2007) Garavini, G., Nobili, S., Taubenberger, S., et al. 2007, A&A, 471, 527
* García-Berro & Lorén-Aguilar (2017) García-Berro, E., & Lorén-Aguilar, P. 2017, Dynamical Mergers, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin, 1237
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005
* González Hernández et al. (2012) González Hernández, J. I., Ruiz-Lapuente, P., Tabernero, H. M., et al. 2012, Nature, 489, 533
* Guevel & Hosseinzadeh (2017) Guevel, D., & Hosseinzadeh, G. 2017, Dguevel/Pyzogy: Initial Release, v0.0.1, Zenodo, doi:10.5281/zenodo.1043973
* Guillochon et al. (2010) Guillochon, J., Dan, M., Ramirez-Ruiz, E., et al. 2010, ApJ, 709, L64
* Guillochon et al. (2017) Guillochon, J., Parrent, J., Kelley, L. Z., et al. 2017, ApJ, 835, 64. doi:10.3847/1538-4357/835/1/64
* Gustafsson (1996) Gustafsson, F. 1996, IEEE Transactions on Signal Processing, 44, 988
* Guy et al. (2005) Guy, J., Astier, P., Nobili, S., et al. 2005, A&A, 443, 781
* Hamuy et al. (2003) Hamuy, M., Phillips, M. M., Suntzeff, N. B., et al. 2003, Nature, 424, 651
* Henden et al. (2016) Henden, A. A., Templeton, M., Terrell, D., et al. 2016, VizieR Online Data Catalog, 2336
* Heringer et al. (2017) Heringer, E., van Kerkwijk, M. H., Sim, S. A., et al. 2017, ApJ, 846, 15
* Heringer et al. (2019) Heringer, E., van Kerkwijk, M. H., Sim, S. A., et al. 2019, ApJ, 871, 250
* Höflich et al. (2017) Höflich, P., Hsiao, E. Y., Ashall, C., et al. 2017, ApJ, 846, 58
* Höflich et al. (1996) Höflich, P., Khokhlov, A., Wheeler, J. C., et al. 1996, ApJ, 472, L81
* Hosseinzadeh et al. (2017) Hosseinzadeh, G., Sand, D. J., Valenti, S., et al. 2017, ApJ, 845, L11
* Howell et al. (2006) Howell, D. A., Sullivan, M., Nugent, P. E., et al. 2006, Nature, 443, 308. doi:10.1038/nature05103
* Howell (2011) Howell, D. A. 2011, Nature Communications, 2, 350
* Hsiao et al. (2013) Hsiao, E. Y., Marion, G. H., Phillips, M. M., et al. 2013, ApJ, 766, 72
* Huang et al. (2012) Huang, F., Li, J.-Z., Wang, X.-F., et al. 2012, Research in Astronomy and Astrophysics, 12, 1585
* Huang et al. (2018) Huang, F., Wang, X.-F., Hosseinzadeh, G., et al. 2018, MNRAS, 475, 3959
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90. doi:10.1109/MCSE.2007.55
* Iben & Tutukov (1984) Iben, I., & Tutukov, A. V. 1984, ApJS, 54, 335. doi:10.1086/190932
* Iwamoto et al. (1999) Iwamoto, K., Brachwitz, F., Nomoto, K., et al. 1999, ApJS, 125, 439
* Jiang et al. (1999) Jiang, X., Xu, D., & Hu, J. 1999, Acta Astrophysica Sinica, 19, 220
* Johnson et al. (1966) Johnson, H. L., et al. 1966, Comm. Lunar Planet. Lab., 4, 99
* Kasen (2010) Kasen, D. 2010, ApJ, 708, 1025. doi:10.1088/0004-637X/708/2/1025
* Khokhlov (1991) Khokhlov, A. M. 1991, A&A, 245, 114
* Komatsu et al. (2011) Komatsu, E., Smith, K. M., Dunkley, J., et al. 2011, ApJS, 192, 18
* Kromer et al. (2010) Kromer, M., Sim, S. A., Fink, M., et al. 2010, ApJ, 719, 1067
* Landolt (1992) Landolt, A. U., 1992, AJ, 104, 340
* Leibundgut et al. (1993) Leibundgut, B., Kirshner, R. P., Phillips, M. M., et al. 1993, AJ, 105, 301
* Leloudas et al. (2009) Leloudas, G., Stritzinger, M. D., Sollerman, J., et al. 2009, A&A, 505, 265
* Li et al. (2011) Li, W., Bloom, J. S., Podsiadlowski, P., et al. 2011, Nature, 480, 348
* Li et al. (2003) Li, W., Filippenko, A. V., Chornock, R., et al. 2003, PASP, 115, 844
* Li et al. (2001) Li, W., Filippenko, A. V., & Riess, A. G. 2001, ApJ, 546, 719
* Li et al. (2019b) Li, W. X., Wang, X., Hu, M., et al. 2019, ApJ, 882, 30
* Li et al. (2019a) Li, W. X., Wang, X., Vinkó, J., et al. 2019, ApJ, 870, 12
* Li et al. (2021) Li, W., Wang, X., Bulla, M., et al. 2021, ApJ, 906, 99. doi:10.3847/1538-4357/abc9b5
* Livne (1990) Livne, E. 1990, ApJ, 354, L53
* Livne & Glasner (1990) Livne, E., & Glasner, A. S. 1990, ApJ, 361, 244
* Maeda et al. (2010) Maeda, K., Benetti, S., Stritzinger, M., et al. 2010, Nature, 466, 82
* Magnier et al. (2016) Magnier, E. A., Schlafly, E. F., Finkbeiner, D. P., et al. 2016, arXiv:1612.05242
* Maguire et al. (2012) Maguire, K., Sullivan, M., Ellis, R. S., et al. 2012, MNRAS, 426, 2359. doi:10.1111/j.1365-2966.2012.21909.x
* Maguire et al. (2014) Maguire, K., Sullivan, M., Pan, Y.-C., et al. 2014, MNRAS, 444, 3258
* Maguire et al. (2013) Maguire, K., Sullivan, M., Patat, F., et al. 2013, MNRAS, 436, 222
* Mandel et al. (2014) Mandel, K. S., Foley, R. J., & Kirshner, R. P. 2014, ApJ, 797, 75
* Maoz et al. (2014) Maoz, D., Mannucci, F., & Nelemans, G. 2014, ARA&A, 52, 107
* Mazzali et al. (2018) Mazzali, P. A., Ashall, C., Pian, E., et al. 2018, MNRAS, 476, 2905
* Mazzali et al. (2005) Mazzali, P. A., Benetti, S., Altavilla, G., et al. 2005, ApJ, 623, L37. doi:10.1086/429874
* Mazzali et al. (2014) Mazzali, P. A., Sullivan, M., Hachinger, S., et al. 2014, MNRAS, 439, 1959
* Miller & Stone (1993) Miller, J. S., & Stone, R. P. S. 1993, Lick Obs. Tech. Rep. 66, Lick Obs., Santa Cruz
* Milne et al. (2010) Milne, P. A., Brown, P. J., Roming, P. W. A., et al. 2010, ApJ, 721, 1627
* Milne et al. (2013) Milne, P. A., Brown, P. J., Roming, P. W. A., et al. 2013, ApJ, 779, 23
* Munari et al. (2013) Munari, U., Henden, A., Belligoli, R., et al. 2013, New A, 20, 30. doi:10.1016/j.newast.2012.09.003
* Nomoto (1982) Nomoto, K. 1982, ApJ, 253, 798. doi:10.1086/159682
* Nomoto et al. (1997) Nomoto, K., Iwamoto, K., & Kishimoto, N. 1997, Science, 276, 1378
* Nomoto et al. (1984) Nomoto, K., Thielemann, F.-K., & Yokoi, K. 1984, ApJ, 286, 644
* Nugent et al. (1995) Nugent, P., Phillips, M., Baron, E., et al. 1995, ApJ, 455, L147
* Olling et al. (2015) Olling, R. P., Mushotzky, R., Shaya, E. J., et al. 2015, Nature, 521, 332
* Pakmor et al. (2012) Pakmor, R., Kromer, M., Taubenberger, S., et al. 2012, ApJ, 747, L10
* Pakmor et al. (2013) Pakmor, R., Kromer, M., Taubenberger, S., et al. 2013, ApJ, 770, L8
* Pan et al. (2015) Pan, Y.-C., Foley, R. J., Kromer, M., et al. 2015, MNRAS, 452, 4307
* Pan et al. (2015) Pan, Y.-C., Sullivan, M., Maguire, K., et al. 2015, MNRAS, 446, 354. doi:10.1093/mnras/stu2121
* Parrent et al. (2011) Parrent, J. T., Thomas, R. C., Fesen, R. A., et al. 2011, ApJ, 732, 30
* Pastorello et al. (2007) Pastorello, A., Mazzali, P. A., Pignata, G., et al. 2007, MNRAS, 377, 1531. doi:10.1111/j.1365-2966.2007.11700.x
* Patat et al. (2007) Patat, F., Chandra, P., Chevalier, R., et al. 2007, Science, 317, 924
* Paturel et al. (2002) Paturel, G., Dubois, P., Petit, C., et al. 2002, LEDA, 0
* Pauldrach et al. (1996) Pauldrach, A. W. A., Duschinger, M., Mazzali, P. A., et al. 1996, A&A, 312, 525
* Perlmutter et al. (1999) Perlmutter, S., Aldering, G., Goldhaber, G., et al. 1999, ApJ, 517, 565
* Phillips (1993) Phillips, M. M. 1993, ApJ, 413, L105
* Phillips et al. (1999) Phillips, M. M., Lira, P., Suntzeff, N. B., et al. 1999, AJ, 118, 1766
* Phillips et al. (1992) Phillips, M. M., Wells, L. A., Suntzeff, N. B., et al. 1992, AJ, 103, 1632
* Piersanti et al. (2004) Piersanti, L., Tornambé, A., & Castellani, V. 2004, MNRAS, 353, 243
* Piro & Nakar (2013) Piro, A. L. & Nakar, E. 2013, ApJ, 769, 67. doi:10.1088/0004-637X/769/1/67
* Piro & Morozova (2016) Piro, A. L., & Morozova, V. S. 2016, ApJ, 826, 96
* Plewa et al. (2004) Plewa, T., Calder, A. C., & Lamb, D. Q. 2004, ApJ, 612, L37
* Raskin & Kasen (2013) Raskin, C. & Kasen, D. 2013, ApJ, 772, 1. doi:10.1088/0004-637X/772/1/1
* Riess et al. (2018) Riess, A. G., Casertano, S., Yuan, W., et al. 2018, ApJ, 855, 136
* Riess et al. (1998) Riess, A. G., Filippenko, A. V., Challis, P., et al. 1998, AJ, 116, 1009
* Riess et al. (1996) Riess, A. G., Press, W. H., & Kirshner, R. P. 1996, ApJ, 473, 88
* Riess et al. (1999) Riess, A. G., Filippenko, A. V., Li, W., et al. 1999, AJ, 118, 2675. doi:10.1086/301143
* Rogers (1920) Rogers, R. A. P. 1920, Nature, 105, 138
* Roming et al. (2005) Roming, P. W. A., Kennedy, T. E., Mason, K. O., et al. 2005, Space Sci. Rev., 120, 95
* Röpke et al. (2012) Röpke, F. K., Kromer, M., Seitenzahl, I. R., et al. 2012, ApJ, 750, L19. doi:10.1088/2041-8205/750/1/L19
* Ruiz-Lapuente et al. (1992) Ruiz-Lapuente, P., Cappellaro, E., Turatto, M., et al. 1992, ApJ, 387, L33
* Sahu et al. (2013) Sahu, D. K., Anupama, G. C., & Anto, P. 2013, MNRAS, 430, 869. doi:10.1093/mnras/sts609
* Seitenzahl et al. (2013) Seitenzahl, I. R., Ciaraldi-Schoolmann, F., Röpke, F. K., et al. 2013, MNRAS, 429, 1156. doi:10.1093/mnras/sts402
* Scalzo et al. (2010) Scalzo, R. A., Aldering, G., Antilogus, P., et al. 2010, ApJ, 713, 1073
* Scalzo et al. (2019) Scalzo, R. A., Parent, E., Burns, C., et al. 2019, MNRAS, 483, 628
* Schaefer & Pagnotta (2012) Schaefer, B. E. & Pagnotta, A. 2012, Nature, 481, 164
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Shen et al. (2018) Shen, K. J., Kasen, D., Miles, B. J., et al. 2018, ApJ, 854, 52
* Shen et al. (2013) Shen, K. J., Guillochon, J., & Foley, R. J. 2013, ApJ, 770, L35. doi:10.1088/2041-8205/770/2/L35
* Shen & Moore (2014) Shen, K. J., & Moore, K. 2014, ApJ, 797, 46
* Shporer et al. (2011) Shporer, A., Brown, T., Lister, T., et al. 2011, The Astrophysics of Planetary Systems: Formation, Structure, and Dynamical Evolution, 276, 553. doi:10.1017/S1743921311021193
* Silverman & Filippenko (2012) Silverman, J. M., & Filippenko, A. V. 2012, MNRAS, 425, 1917
* Silverman et al. (2012) Silverman, J. M., Foley, R. J., Filippenko, A. V., et al. 2012, MNRAS, 425, 1789
* Silverman et al. (2011) Silverman, J. M., Ganeshalingam, M., Li, W., et al. 2011, MNRAS, 410, 585
* Silverman et al. (2012) Silverman, J. M., Kong, J. J., & Filippenko, A. V. 2012, MNRAS, 425, 1819. doi:10.1111/j.1365-2966.2012.21269.x
* Silverman et al. (2013) Silverman, J. M., Nugent, P. E., Gal-Yam, A., et al. 2013, ApJS, 207, 3
* Silverman et al. (2015) Silverman, J. M., Vinkó, J., Marion, G. H., et al. 2015, MNRAS, 451, 1973. doi:10.1093/mnras/stv1011
* Sim et al. (2012) Sim, S. A., Fink, M., Kromer, M., et al. 2012, MNRAS, 420, 3003
* Springob et al. (2005) Springob, C. M., Haynes, M. P., Giovanelli, R., et al. 2005, ApJS, 160, 149
* Stahl et al. (2020) Stahl, B. E., Martínez-Palomera, J., Zheng, W., et al. 2020, MNRAS, 496, 3553. doi:10.1093/mnras/staa1706
* Stahl et al. (2019) Stahl, B. E., Zheng, W., de Jaeger, T., et al. 2019, MNRAS, 490, 3882. doi:10.1093/mnras/stz2742
* Stahl et al. (2020) Stahl, B. E., Zheng, W., de Jaeger, T., et al. 2020, MNRAS, 492, 4325. doi:10.1093/mnras/staa102
* Stanishev et al. (2007) Stanishev, V., Goobar, A., Benetti, S., et al. 2007, A&A, 469, 645
* Sternberg et al. (2011) Sternberg, A., Gal-Yam, A., Simon, J. D., et al. 2011, Science, 333, 856
* Stetson (1987) Stetson, P. B. 1987, PASP, 99, 191
* Su et al. (1989) Su, D. Q., Zhou, B. F., & Yu, X. M. 1989, ScChA, 11, 1187
* Syam (2003) Syam, M. I. 2003, Journal of Applied Sciences, 3, 9
* Tanaka et al. (2010) Tanaka, M., Kawabata, K. S., Yamanaka, M., et al. 2010, ApJ, 714, 1209
* Tanikawa et al. (2019) Tanikawa, A., Nomoto, K., Nakasato, N., et al. 2019, ApJ, 885, 103
* Taubenberger et al. (2011) Taubenberger, S., Benetti, S., Childress, M., et al. 2011, MNRAS, 412, 2735
* Taubenberger et al. (2008) Taubenberger, S., Hachinger, S., Pignata, G., et al. 2008, MNRAS, 385, 75. doi:10.1111/j.1365-2966.2008.12843.x
* Thomas et al. (2011) Thomas, R. C., Aldering, G., Antilogus, P., et al. 2011, ApJ, 743, 27
* Thompson (2011) Thompson, T. A. 2011, ApJ, 741, 82
* Tody (1986) Tody, D. 1986, Proc. SPIE, 627, 733. doi:10.1117/12.968154
* Tody (1993) Tody, D. 1993, Astronomical Data Analysis Software and Systems II, 52, 173
* Tucker et al. (2019) Tucker, M. A., Shappee, B. J., & Wisniewski, J. P. 2019, ApJ, 872, L22. doi:10.3847/2041-8213/ab0286
* Valenti et al. (2016) Valenti, S., Howell, D. A., Stritzinger, M. D., et al. 2016, MNRAS, 459, 3939
* Wang et al. (2019) Wang, C.-J., Bai, J.-M., Fan, Y.-F., et al. 2019, Research in Astronomy and Astrophysics, 19, 149
* Wang & Hu (1994) Wang, L., & Hu, J. 1994, Nature, 369, 380
* Wang et al. (2019) Wang, X., Chen, J., Wang, L., et al. 2019, ApJ, 882, 120
* Wang et al. (2009) Wang, X., Filippenko, A. V., Ganeshalingam, M., et al. 2009, ApJ, 699, L139
* Wang et al. (2008) Wang, X., Li, W., Filippenko, A. V., et al. 2008, ApJ, 675, 626
* Wang et al. (2009) Wang, X., Li, W., Filippenko, A. V., et al. 2009, ApJ, 697, 380
* Wang et al. (2013) Wang, X., Wang, L., Filippenko, A. V., et al. 2013, Science, 340, 170
* Wang et al. (2006) Wang, X., Wang, L., Pain, R., et al. 2006, ApJ, 645, 488
* Wang et al. (2005) Wang, X., Wang, L., Zhou, X., et al. 2005, ApJ, 620, L87
* Waters et al. (2016) Waters, C. Z., Magnier, E. A., Price, P. A., et al. 2016, arXiv:1612.05245
* Webbink (1984) Webbink, R. F. 1984, ApJ, 277, 355. doi:10.1086/161701
* Whelan & Iben (1973) Whelan, J., & Iben, I. 1973, ApJ, 186, 1007. doi:10.1086/152565
* Willick et al. (1997) Willick, J. A., Courteau, S., Faber, S. M., et al. 1997, ApJS, 109, 333
* Woosley et al. (2009) Woosley, S. E., Kerstein, A. R., Sankaran, V., et al. 2009, ApJ, 704, 255. doi:10.1088/0004-637X/704/1/255
* Woosley et al. (1986) Woosley, S. E., Taam, R. E., & Weaver, T. A. 1986, ApJ, 301, 601
* Yamanaka et al. (2009) Yamanaka, M., Kawabata, K. S., Kinugasa, K., et al. 2009, ApJ, 707, L118. doi:10.1088/0004-637X/707/2/L118
* Yaron & Gal-Yam (2012) Yaron, O., & Gal-Yam, A. 2012, PASP, 124, 668. doi:10.1086/666656
* Zackay et al. (2016) Zackay, B., Ofek, E. O., & Gal-Yam, A. 2016, ApJ, 830, 27
* Zhai et al. (2016) Zhai, Q., Zhang, J.-J., Wang, X.-F., et al. 2016, AJ, 151, 125
* Zhang et al. (2016) Zhang, J.-C., Fan, Z., Yan, J.-Z., et al. 2016, PASP, 128, 105004. doi:10.1088/1538-3873/128/968/105004
* Zhang et al. (2015) Zhang, J.-C., Ge, L., Lu, X.-M., et al. 2015, PASP, 127, 1292. doi:10.1086/684369
* Zhang et al. (2016) Zhang, K., Wang, X., Zhang, J., et al. 2016, ApJ, 820, 67
* Zhang et al. (2019) Zhang, T., Wang, X., Zhao, X., et al. 2019, ApJ, 872, 14
* Zhang et al. (2012) Zhang, W., Yu, W., & Yan, Z. 2012, The Astronomer’s Telegram, 4395
* Zhao et al. (2016) Zhao, X., Maeda, K., Wang, X., et al. 2016, ApJ, 826, 211
* Zhao et al. (2015) Zhao, X., Wang, X., Maeda, K., et al. 2015, ApJS, 220, 20
* Zheng & Filippenko (2017) Zheng, W., & Filippenko, A. V. 2017, ApJ, 838, L4
* Zheng et al. (2017) Zheng, W., Filippenko, A. V., Mauerhan, J., et al. 2017, ApJ, 841, 64
* Zheng et al. (2013) Zheng, W., Silverman, J. M., Filippenko, A. V., et al. 2013, ApJ, 778, L15
# Unconventional Superfluidity in a model of Fermi-Bose Mixtures
K. Sheshadri$^{1}$<EMAIL_ADDRESS>and A. Chainani$^{2}$<EMAIL_ADDRESS>
$^{1}$1226, Bagalur, Bangalore North, Karnataka State, India 562149
$^{2}$Condensed Matter Physics Group, National Synchrotron Radiation Research Center, Hsinchu 30076, Taiwan
###### Abstract
A finite-temperature ($T>0$) study of a model of a mixture of spin-zero
hardcore bosons and spinless fermions, with filling fractions $\rho_{B}$ and
$\rho_{F}$, respectively, on a two-dimensional square lattice with composite
hopping $t$ is presented. The composite hopping swaps the locations of a
fermion and a boson that occupy nearest-neighbor sites of the lattice. The
superfluid order parameter $\psi$, the fermion hopping amplitude $\phi$, the chemical potential $\mu$, the free-energy minimum $\tilde{F}$, and the entropy $S$
are calculated in the limit $\rho_{B}+\rho_{F}=1$ within a mean-field
approximation, and lead to a phase diagram in the $\rho_{F}-T$ plane. This
phase diagram consists of a metallic superfluid phase under a dome-shaped
$T(\rho_{F})$, and insulating normal liquid and insulating normal gas phases
outside the dome. These phases are separated by coupled discontinuous
transitions, as indicated by jumps in $\psi$ and $\phi$. The maximum critical temperature $T_{c}$ occurs very close to $\rho_{F}=1/2$. While
$\tilde{F}(T)$ is continuous with a derivative discontinuity at
$T=T_{c}(\rho_{F})$ for $0<\rho_{F}\leq 1/2$ (first-order transition), it
becomes discontinuous for $\rho_{F}>1/2$ (zeroth-order transition), where the
entropy becomes negative for a range of temperatures below $T_{c}$. The ratio of $T_{c}$ to the Fermi band width agrees remarkably well with the ratio $T_{c}/T_{F}$ (where $T_{F}$ is the Fermi temperature) of unconventional superfluids and superconductors such as Fermi-Bose mixtures, the high-$T_{c}$ cuprates, and iron-based and hydride superconductors, whose experimental values of $T_{c}$ are spread over nine orders of magnitude, from $\sim 200$ nK to $\sim 260$ K.
## I Introduction
Fermi-Bose mixtures (FBMs) constitute an unusual and important state of matter, including well-known examples like He3-He4 mixtures Ebner , the mixed phase of type-II superconductors, ultracold atom systems Truscott ; Schreck ; Hadzibabic , unconventional superconductors which exhibit Bardeen-Cooper-Schrieffer to Bose-Einstein condensation (BCS-BEC) crossover Lubashevsky ; Okazaki ; Kasahara ; Rinott , and so on. Experimental and theoretical studies of FBMs have shown remarkable results, particularly in terms of the BCS-BEC crossover across a Feshbach resonance Regal , that have revealed their distinct aspects compared to the limiting cases of BCS superconductivity and BEC superfluidity Randeria ; Ketterle .
The BCS-BEC crossover was originally predicted to occur for excitons in semiconductors Keldysh and quarks in high-energy physics Kerbikov . However, it was first reported experimentally in ultracold fermionic atoms with s-wave interactions Bartenstein . Unusual and unexpected results include the formation of a Feshbach molecule Cumby and the role of three-body physics Bloom in FBMs. On the other hand, the role of the BCS-BEC crossover in condensed matter involves experimental results on iron-based superconductors Lubashevsky ; Okazaki ; Kasahara ; Rinott and their relation to well-accepted theoretical results Randeria ; Randeria2 ; quick . While interactions between fermions
mediated by phonons define the BCS theory of superconductivity, several
studies have also considered their importance in mixtures of ultracold atoms
Bijlsma ; Viverit ; Capuzzi ; Albus . The Boson-Fermion (BF) model Schafroth ,
which preceded the BCS theory, discusses itinerant fermions hybridizing with
bosons composed of bound pairs of fermions of opposite spins. The BF model was
subsequently used to study electrons interacting with local lattice
deformations Ranninger as well as high temperature superconductivity
Ranninger2 ; Friedberg ; Gershkenbein ; Domanski . Recent studies have applied it to describe resonance superfluids in the BCS-BEC crossover regime Shin , as well as a temperature-driven crossover in an FBM Maska2 . These studies have shown the importance and interplay of bosonic and fermionic degrees of freedom in various physical systems.
Early studies on mixtures investigated the role of an attractive interaction between fermions and bosons. It was shown that an FBM with attractive interactions undergoes a collapse when the fermion number exceeds a critical value Ufrecht . The breakthrough in controlling a Feshbach resonance in FBMs allowed researchers to effectively tune the boson-fermion interaction and keep the system from collapsing at high densities Ospelkaus . In contrast, theoretical studies employing repulsive interactions could describe various stable density configurations of FBMs. The role of finite temperatures and of going beyond the mean-field approximation was also investigated Viverit . In the case of a strongly repulsive quasi-one-dimensional FBM Guan , it was shown that the phase diagram as a function of applied magnetic field $H$ displays a pure boson phase for $H=0$, polarized fermions and bosons coexisting for $0<H<H_{c}$, and a fully polarized fermion phase for $H>H_{c}$. More
interestingly, for an FBM on a 2D optical lattice in the framework of an
extended single band Hubbard model with Coulomb interaction terms between
bosons ($U_{BB}$), between fermions ($U_{FF}$) and an additional Coulomb
interaction between bosons and fermions ($U_{BF}$), it was shown that the
bosons can mediate an attractive interaction between fermions, leading to
fermion paired states with different $s,~{}p$ and $d$ orbital symmetries Wang
. Further, the phase diagram as a function of $U_{BF}$ versus fermion number
also revealed the existence of spin density wave and charge density wave
phases. The authors also predicted that for experimentally accessible regime
of parameters, the 2D FBM would exhibit superfluidity with an unconventional
fermion pairing having a transition temperature around a percent of the Fermi
energy. On the other hand, the role of interaction-dependent temperature
effects in an FBM were investigated by CramerCramer . It was shown that
adiabatic temperature changes of the FBM occur which depend on the interaction
between fermions and bosonsCramer .
In addition, the dynamics of FBMs has also been investigated, and it was shown that long-range density-wave phases can be obtained for fermions and bosons hopping independently in the presence of on-site boson-boson ($U_{BB}$) and boson-fermion ($U_{BF}$) Coulomb interactions Lewenstein ; Pollet . However, a composite hopping that exchanges a fermion with a boson when they occupy neighboring sites was not considered in earlier work. This form of hopping was proposed recently by us zeroTarxiv ; zeroTPRO and distinguishes our work from earlier work on FBMs. In this work, we calculate the thermodynamic properties of a model of an FBM on a two-dimensional square lattice with composite hopping between neighboring spinless fermions and hardcore bosons, extending our earlier study of the $T=0$ properties zeroTarxiv ; zeroTPRO . As in the previous work, we use a mean-field approximation and restrict ourselves to the case
$\rho_{F}+\rho_{B}=1,$ (1)
where $\rho_{F}$ and $\rho_{B}$ are the filling fractions of the fermions and
bosons, respectively. To recall, at $T=0$, the model displays two distinct
phases separated by coupled first-order transitions at Fermi filling fraction
$\rho_{F}\simeq 0.3$: for $\rho_{F}<0.3$ the Fermi sector is insulating and
the Bose sector is a normal liquid, while for $\rho_{F}>0.3$ the Fermi sector
is metallic and the Bose sector is a superfluid. In the present work, we find
that thermal fluctuations suppress superfluidity, and at a certain
$T=T_{c}(\rho_{F})$ there is a discontinuous transition to an insulating non-
superfluid phase as shown by the superfluid amplitude $\psi(T)$ and the
fermion hopping amplitude $\phi(T)$. We further find that the transition
occurring at $T_{c}(\rho_{F})$ is first order for $0.3<\rho_{F}\leq 1/2$ (the
minimum free energy $\tilde{F}(T)$ is continuous with a discontinuity in its
first derivative), but is zeroth order for $1/2<\rho_{F}<1$ (the minimum free
energy $\tilde{F}(T)$ is discontinuous). In the latter regime, the entropy
becomes negative for a range of temperatures below $T_{c}$. We compute the ratio of $T_{c}$ to the Fermi band width and find remarkable agreement with measured values in the range of $0.02$ to $0.20$ for a wide variety of unconventional superfluids and superconductors, including Fermi-Bose mixtures, the high-$T_{c}$ cuprates, iron-based superconductors, and hydrides, whose values of $T_{c}$ are spread over nine orders of magnitude, from a few hundred nanokelvins to a few hundred kelvins zeroTPRO . Our estimates of the superconducting $T_{c}$ in the solid-state context, using known experimental band widths or Fermi temperatures, are consistent with the observed $T_{c}$'s of the cuprates as well as the iron-based superconductors.
## II The Composite-Hopping Model and Its Mean-Field Thermodynamics
We consider the composite-hopping model with Hamiltonian
$$H=-\alpha\sum_{i}\left[b_{i}^{\dagger}b_{i}+f_{i}^{\dagger}f_{i}-1\right]-\mu\sum_{i}f_{i}^{\dagger}f_{i}-t\sum_{\langle ij\rangle}f_{i}^{\dagger}f_{j}b_{j}^{\dagger}b_{i}\qquad(2)$$
that was proposed in a recent study at $T=0$ zeroTarxiv ; zeroTPRO , where the notation used above is also explained. In this work too, we consider an FBM on a two-dimensional square lattice. The composite hopping term, the last term above, swaps a hardcore boson and a spinless fermion when they occupy nearest-neighbor sites. Within the mean-field approximation, this term is decoupled according to
$$f_{i}^{\dagger}f_{j}b_{j}^{\dagger}b_{i}\simeq\langle f_{i}^{\dagger}f_{j}\rangle\left(\langle b_{j}^{\dagger}\rangle b_{i}+\langle b_{i}\rangle b_{j}^{\dagger}-\langle b_{j}^{\dagger}\rangle\langle b_{i}\rangle\right)+\langle b_{j}^{\dagger}\rangle\langle b_{i}\rangle f_{i}^{\dagger}f_{j}-\langle f_{i}^{\dagger}f_{j}\rangle\langle b_{j}^{\dagger}\rangle\langle b_{i}\rangle,\qquad(3)$$
so $H$ is approximated by a mean-field Hamiltonian
$$H^{MF}=H_{0}+H_{1}+H_{2},\quad\text{where}$$
$$H_{0}=N(2\phi\psi^{2}+\alpha),$$
$$H_{1}=-(\alpha+\mu)\sum_{i}f_{i}^{\dagger}f_{i}-\frac{1}{z}\psi^{2}\sum_{\langle ij\rangle}f_{i}^{\dagger}f_{j},\quad\text{and}$$
$$H_{2}=\sum_{i}\left[-\alpha b_{i}^{\dagger}b_{i}-\phi\psi(b_{i}+b_{i}^{\dagger})\right].\qquad(4)$$
We have taken $zt=1$ ($z$ is the coordination number of the lattice), and
introduced the thermodynamic expectation values
$\phi=\langle f_{i}^{\dagger}f_{j}\rangle,~{}~{}\psi=\langle
b_{i}\rangle=\langle b_{j}^{\dagger}\rangle.$ (5)
We assume $\phi~{}\mathrm{and}~{}\psi$ to be real and homogeneous and consider
$\psi$ to be the superfluid order parameter FisherWeichman89 ; shesh93 ;
shesh95 ; gutz ; spin1_1 ; spin1_2 ; spin1_3 . For the hardcore bosons, we use
the single-site boson occupation number basis $\\{|0\rangle,~{}|1\rangle\\}$
for diagonalizing the $2\times 2$ matrix $h_{2}$ of $H_{2}/N$, i.e.,
$$h_{2}=\begin{bmatrix}0&-\phi\psi\\ -\phi\psi&-\alpha\end{bmatrix}\qquad(6)$$
that has the eigenvalues
$\lambda_{\pm}=\frac{1}{2}\left[-\alpha\pm
R\right],~{}~{}\mathrm{where}~{}~{}R=\sqrt{\alpha^{2}+4\phi^{2}\psi^{2}}.$ (7)
Using the Fourier transform
$f_{i}=\frac{1}{\sqrt{N}}\sum_{\bf k}e^{i{\bf k.r_{i}}}f_{\bf k},$ (8)
the Hamiltonian $H_{1}$ of the fermion sector becomes
$H_{1}=\sum_{\bf k}(\varepsilon_{\bf k}-\mu)f_{\bf k}^{\dagger}f_{\bf k},$ (9)
where
$$\varepsilon_{\bf k}=-\alpha-\psi^{2}\gamma_{\bf k},\quad\text{and}\quad\gamma_{\bf k}=\frac{2}{z}(\cos k_{x}+\cos k_{y}).\qquad(10)$$
The free energy per lattice site $F=F(\alpha,\psi)$ is now
$$F=\frac{1}{2}(\alpha-R)+2\phi\psi^{2}-T\ln(1+e^{-R/T})-\frac{T}{N}\sum_{\bf k}\ln\left[1+e^{(\mu-\varepsilon_{\bf k})/T}\right].\qquad(11)$$
We take $k_{B}=1$ here and in the following. To calculate
$\phi=\phi(\alpha,\psi)$, we use its definition in (5) and go over to
$k$-space to get
$\phi=\frac{1}{Nz}\sum_{<ij>}\langle
f_{i}^{\dagger}f_{j}\rangle=\frac{1}{N}\sum_{\bf k}\gamma_{\bf k}\langle
f_{\bf k}^{\dagger}f_{\bf k}\rangle.$ (12)
By definition,
$\langle f_{\bf k}^{\dagger}f_{\bf k}\rangle=\frac{\mathrm{Tr}(f_{\bf
k}^{\dagger}f_{\bf
k}e^{-H_{1}/T})}{\mathrm{Tr}(e^{-H_{1}/T})}=\frac{1}{1+e^{(\varepsilon_{\bf
k}-\mu)/T}},$ (13)
and so
$\phi=\frac{1}{N}\sum_{\bf k}\frac{\gamma_{\bf k}}{1+e^{(\varepsilon_{\bf
k}-\mu)/T}}.$ (14)
Using $\gamma_{\bf k}=-(\alpha+\varepsilon_{\bf k})/\psi^{2}$ (from the first relation in (10)) we obtain
$\phi=-\frac{1}{N}\sum_{\bf k}\frac{\alpha+\varepsilon_{\bf
k}}{\psi^{2}}\frac{1}{1+e^{(\varepsilon_{\bf k}-\mu)/T}}.$ (15)
Introducing the density of states
$\rho(E)=\frac{1}{N}\sum_{\bf k}\delta(E-\varepsilon_{\bf k}),$ (16)
we can write
$\phi=-\frac{1}{\psi^{2}}\int_{E_{0}}^{\mu}dE~{}\frac{\alpha+E}{1+e^{(E-\mu)/T}}\rho(E),$
(17)
where $\mu$ is chosen such that the fermion filling fraction
$\rho_{F}(\alpha,\psi)=\int_{E_{0}}^{\mu}dE\frac{1}{1+e^{(E-\mu)/T}}\rho(E)$
(18)
has a desired value. Here, $E_{0}=-\alpha-\psi^{2}$ is the minimum value of
fermion energy. To calculate the density of states (16), we convert the
$k$-sum into an integral according to $(1/N)\sum_{\bf k}\to(1/4\pi^{2})\int
d{\bf k}$. Since $\varepsilon_{\bf-k}=\varepsilon_{\bf k}$, the $k$-space
integral is four times the integral over the first quadrant of the Brillouin
zone, and so we have
$\rho(E)=\frac{1}{\pi^{2}}\int_{0}^{\pi}dk_{x}\int_{0}^{\pi}dk_{y}~{}\delta(E+\alpha+\psi^{2}\gamma_{\bf
k}).$ (19)
The integral over $k_{y}$ is easily evaluated, and we get
$$\rho(E)=\frac{2}{\pi^{2}\psi^{2}}\,f\!\left(\frac{\alpha+E}{\psi^{2}}\right),\quad\text{where}\quad f(u)=\int_{0}^{\pi}\frac{dk_{x}}{\sqrt{1-(2u+\cos k_{x})^{2}}}.\qquad(20)$$
We can readily see that the function $f(u)$ is real only when $-1\leq u\leq
1$, and is non-negative. Therefore we have the inequality
$-\alpha-\psi^{2}\leq E\leq-\alpha+\psi^{2}$ for the fermion energy $E$. We substitute the above expression for $\rho(E)$ into equations (17) and (18) and transform the integrals to obtain
$$\rho_{F}=\frac{2}{\pi^{2}}\int_{-1}^{u_{F}}du\,\frac{f(u)}{1+e^{(u-u_{F})\psi^{2}/T}}\quad\text{and}\quad\phi=-\frac{2}{\pi^{2}}\int_{-1}^{u_{F}}du\,\frac{u\,f(u)}{1+e^{(u-u_{F})\psi^{2}/T}},\qquad(21)$$
where $u_{F}=(\alpha+\mu)/\psi^{2}$. This helps us choose the value of $\mu$ for a desired $\rho_{F}$, given the values of $(\alpha,\psi)$. For any $(\rho_{F},T)$, we determine $\psi$ and $\alpha$ by solving $\partial F/\partial\psi=0~\mathrm{and}~\partial F/\partial\alpha=0$ simultaneously. These two equations, together with the two equations in (21) above, are solved iteratively to obtain $(\alpha,\psi)$ as well as $(\phi,\mu)$ for any chosen $(\rho_{F},T)$. In general, there can be multiple solutions $(\alpha,\psi)$. We substitute each solution in $F(\alpha,\psi)$ and denote the resulting free-energy minimum for the solution by $\tilde{F}$; the correct solution is the one that corresponds to the lowest $\tilde{F}$.
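Before turning to the stationarity conditions, we note that $f(u)$ of equation (20) is straightforward to evaluate numerically. A minimal sketch (assuming NumPy/SciPy; the variable names are ours):

```python
import numpy as np
from scipy.integrate import quad

def f_dos(u):
    """f(u) of Eq. (20). Substituting c = cos k_x (dk_x = -dc/sqrt(1-c^2))
    and keeping only the region where the integrand is real, |2u + c| < 1.
    f(u) diverges logarithmically at u = 0 (the square-lattice van Hove
    singularity), so quad may warn there; the divergence is integrable."""
    lo = max(-1.0, -1.0 - 2.0 * u)   # allowed range of c = cos k_x
    hi = min(1.0, 1.0 - 2.0 * u)
    if hi <= lo:
        return 0.0
    integrand = lambda c: 1.0 / np.sqrt((1.0 - c * c) *
                                        (1.0 - (2.0 * u + c) ** 2))
    return quad(integrand, lo, hi, limit=200)[0]

# rho(E) then follows from Eq. (20):
# rho_E = (2.0 / (np.pi**2 * psi**2)) * f_dos((alpha + E) / psi**2)
```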
From equation (11), we obtain
$$\frac{\partial F}{\partial\psi}=2\psi(\phi+\phi_{\psi}\psi)\left[1-\frac{\phi}{R}\chi\right]\quad\text{and}\quad\frac{\partial F}{\partial\alpha}=\frac{1}{2}\left[(1-2\rho_{F})-\frac{\chi}{R}\alpha\right]+2\phi_{\alpha}\psi^{2}\left[1-\frac{\chi}{R}\phi\right],\qquad(22)$$
where $\phi_{\psi}=\partial\phi/\partial\psi$, $\phi_{\alpha}=\partial\phi/\partial\alpha$, and
$$\chi(R,T)=\frac{e^{R/T}-1}{e^{R/T}+1}.\qquad(23)$$
Since $\phi~\mathrm{and}~\phi_{\psi}$ are positive (see equations (14) and (21)), we always have $\phi+\phi_{\psi}\psi>0$, so $\partial F/\partial\psi=0$
gives
$\psi=0~{}~{}\mathrm{or}~{}~{}R=\phi\chi(R,T).$ (24)
Using this in equation (22), $\partial F/\partial\alpha=0$ gives
$\alpha=(1-2\rho_{F})R/\chi.$ (25)
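Equations (21), (24), and (25) suggest the iteration described above. A minimal numerical sketch (assuming NumPy/SciPy and the `f_dos` helper from the previous sketch; it follows the integration limits of equation (21) as printed, and the bracketing intervals and starting guess are pragmatic choices for moderate $\rho_{F}$, not part of the theory):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def band_integrals(u_f, psi2, T):
    """rho_F and phi of Eq. (21), using f_dos(u) from the previous sketch."""
    w = lambda u: 1.0 / (1.0 + np.exp((u - u_f) * psi2 / T))
    rho = (2.0 / np.pi**2) * quad(lambda u: f_dos(u) * w(u),
                                  -1.0, u_f, limit=200)[0]
    phi = -(2.0 / np.pi**2) * quad(lambda u: u * f_dos(u) * w(u),
                                   -1.0, u_f, limit=200)[0]
    return rho, phi

def ordered_solution(rho_f, T, phi0=0.5, tol=1e-5, max_iter=100):
    """Iterate Eqs. (21), (24), (25): chi = tanh(phi*chi/(2T)) (from
    R = phi*chi with chi(R,T) = tanh(R/2T)), psi^2 = [chi^2-(1-2rho_F)^2]/4,
    alpha = (1-2rho_F)*phi, then phi recomputed from Eq. (21)."""
    d = 1.0 - 2.0 * rho_f
    phi = phi0
    for _ in range(max_iter):
        if phi <= 2.0 * T:          # only the chi = 0 root survives: no order
            return None
        chi = brentq(lambda c: c - np.tanh(phi * c / (2.0 * T)), 1e-9, 1.0)
        psi2 = 0.25 * (chi * chi - d * d)
        if psi2 <= 0.0:             # Eq. (24) has no psi > 0 solution
            return None
        u_f = brentq(lambda u: band_integrals(u, psi2, T)[0] - rho_f,
                     -0.999, 0.999)
        phi_new = band_integrals(u_f, psi2, T)[1]
        if abs(phi_new - phi) < tol:
            return d * phi_new, np.sqrt(psi2), phi_new, u_f  # alpha, psi, phi, u_F
        phi = phi_new
    return None

# Example: print(ordered_solution(0.4, 0.05))
```

At each $(\rho_{F},T)$ one would compare $\tilde{F}$ of the converged ordered solution with the disordered value before accepting it, as described above.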
## III Tractable Limits and Some General Observations
Now that we have derived the implicit equations for $\psi$ and $\alpha$, we
first analyze two simple limits: the $\psi=0$ (disordered phase) and the $T=0$
limits. We obtain closed-form solutions for $\alpha,~{}\psi$ and $\tilde{F}$
in these limits. For the general case, we make some observations before moving
on to a discussion of the numerical results in the next section.
From equation (25), we get
$\alpha^{2}[\chi^{2}-(1-2\rho_{F})^{2}]=4\phi^{2}\psi^{2}(1-2\rho_{F})^{2}$.
The solution $\psi=0$ corresponds to the disordered state, i.e., the Bose
sector is in a non-superfluid, normal phase. We then get $\alpha=0$ or $\chi=|1-2\rho_{F}|$. In this state, $R=|\alpha|$, so using the definition of $\chi$ in equation (23) we get
$$\alpha=0\quad\mathrm{or}\quad\alpha=T[\ln(1-\rho_{F})-\ln\rho_{F}]\qquad(26)$$
in the disordered state ($\psi=0$). Of these two solutions for $\alpha$, we
must pick the one corresponding to the lower $\tilde{F}$. Substituting the
above in equation (11) for $F$, we obtain $\tilde{F}_{I}=-T(2\ln 2)$ for the
first solution $\alpha=0$, and $\tilde{F}_{II}=-T[\ln 2-\ln(1-\rho_{F})]$ for
the second solution $\alpha=T[\ln(1-\rho_{F})-\ln\rho_{F}]$. We can see that
$\tilde{F}_{I}<\tilde{F}_{II}$ for $\rho_{F}<1/2$ and
$\tilde{F}_{II}<\tilde{F}_{I}$ for $\rho_{F}>1/2$. This shows that in the
disordered phase, $\alpha=0$ for $\rho_{F}\leq 1/2$ and
$\alpha=T[\ln(1-\rho_{F})-\ln\rho_{F}]$ (which is the same as
$\chi=|1-2\rho_{F}|$) for $\rho_{F}>1/2$. In Fig. (1) we show the behavior of
$-\tilde{F}_{D}/T$ (which is the disordered-phase entropy $S_{D}$) as a
function of $\rho_{F}$.
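A one-line check of these branches (assuming NumPy; a sketch, not the code used for Fig. (1)):

```python
import numpy as np

def S_disordered(rho_f):
    """S_D = -dF_D/dT from the two disordered branches derived above:
    F_I = -2T ln2 for rho_F <= 1/2, and
    F_II = -T[ln2 - ln(1 - rho_F)] for rho_F > 1/2."""
    if rho_f <= 0.5:
        return 2.0 * np.log(2.0)
    return np.log(2.0) - np.log(1.0 - rho_f)

print(S_disordered(0.3), S_disordered(0.75))  # 1.386..., 2.079...
```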
At $T=0$, we get $\tilde{F}=0$ for the disordered state (since both
$\tilde{F}_{I}$ and $\tilde{F}_{II}$ vanish). For the ordered state, $\chi=1$
so that
$\psi_{0}^{2}=\rho_{F}(1-\rho_{F}),~{}~{}\alpha_{0}=(1-2\rho_{F})\phi_{0},$
(27)
and the minimum free energy, obtained by taking the $T\rightarrow 0$ limit in equation (11) and substituting the above expressions for $\psi_{0}$ and $\alpha_{0}$, is $\tilde{F}_{0}=\rho_{F}^{2}[-\phi_{0}+u_{F}(\rho_{F}-1)]$. Here $\psi_{0},~\phi_{0},~\alpha_{0}$, and $\tilde{F}_{0}$ denote $T=0$ values. This gives coupled first-order transitions
at $\rho_{F}\simeq 0.3$ at $T=0$ between a metallic superfluid for
$\rho_{F}>0.3$ and an insulating normal liquid for $\rho_{F}\leq 0.3$. We
referred to the latter as an insulating normal gas in our earlier work zeroTarxiv ; zeroTPRO . However, since the destruction of superfluidity of the Bose
sector in this regime is due to an interplay between correlation and quantum
effects and not due to temperature, this unusual phase should be more
appropriately called an insulating normal liquid rather than an insulating
normal gas.
As we increase the temperature at fixed $\rho_{F}$, thermal fluctuations suppress superfluidity, reducing $\psi$. To see how this happens, we rewrite the ordered-phase self-consistency equation $R=\phi\chi$ in the form $T=J(\psi)$, where $J(\psi)=\phi x/\ln[(1+x)/(1-x)]$ and $x=\sqrt{(1-2\rho_{F})^{2}+4\psi^{2}}$. The function $J(\psi)$ has zeros at $\psi=0$ (where $\phi=0$) and at $\psi=\psi_{0}$ (where $x=1$). Since it is positive between these zeros, it must attain a maximum in the interval $(0,\psi_{0})$. This is graphically illustrated in Fig. (2). As $T$ increases from zero, the line $y=T$ intersects the curve $y=J(\psi)$ at two points $\psi=\psi_{1},\psi_{2}$. Since $J(\psi)$ is single valued, we have $\psi_{2}<\psi_{1}<\psi_{0}$; moreover, $\psi_{1}$ decreases and $\psi_{2}$ increases as $T$ is raised. The solution $\psi=\psi_{1}$ corresponds to the ordered minimum of the free energy. As the temperature increases further, the line $y=T$ rises while the maximum of $J(\psi)$ decreases, and at a certain temperature $T_{2}(\rho_{F})$ the line $y=T$ becomes tangent to $y=J(\psi)$ at the maximum, where $\psi_{1}=\psi_{2}$. For $T>T_{2}(\rho_{F})$, the self-consistency equation $T=J(\psi)$ has no solution, and therefore $T_{c}\leq T_{2}$.
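A minimal numerical sketch of this construction is given below. In the full model $\phi$ is fixed by its own self-consistency condition (equation (14)) and vanishes with $\psi$; here a hypothetical placeholder $\phi(\psi)=\psi$ stands in for it, purely so the sketch runs end to end. The code locates $T_{2}$ as the maximum of $J$ and finds $\psi_{1}$ by bisection on the descending branch.

```python
import numpy as np

# Graphical solution of T = J(psi), sketched numerically.
# NOTE: phi below is a hypothetical stand-in, NOT the model's phi from eq. (14).
rho_F = 0.35
phi = lambda psi: psi

def J(psi):
    """J(psi) = phi * x / ln[(1+x)/(1-x)], x = sqrt((1-2*rho_F)^2 + 4*psi^2)."""
    x = np.sqrt((1.0 - 2.0 * rho_F) ** 2 + 4.0 * psi ** 2)
    return phi(psi) * x / np.log((1.0 + x) / (1.0 - x))

psi0 = np.sqrt(rho_F * (1.0 - rho_F))       # upper zero of J, where x = 1
grid = np.linspace(1e-6, psi0 * (1.0 - 1e-9), 20000)
Jg = J(grid)
T2 = Jg.max()                               # ordered solution disappears here

def psi1(T, tol=1e-10):
    """Larger root psi_1 of T = J(psi): bisection on the descending branch."""
    lo, hi = grid[Jg.argmax()], psi0 * (1.0 - 1e-9)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) > T else (lo, mid)
    return 0.5 * (lo + hi)

for T in (0.3 * T2, 0.6 * T2, 0.95 * T2):
    print(f"T = {T:.5f} -> psi_1 = {psi1(T):.5f} (psi_0 = {psi0:.5f}, T_2 = {T2:.5f})")
```

As $T$ approaches $T_{2}$ from below, the printed $\psi_{1}$ collapses toward the location of the maximum of $J$, mirroring the tangency construction described above.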
For $T\leq T_{2}$, when the ordered solution exists, there are two possibilities. (1) The ordered free energy minimum ($\tilde{F}_{O}$) remains lower than the disordered free energy minimum ($\tilde{F}_{D}$) for all $T\leq T_{2}$. In this case the transition temperature is $T_{c}=T_{2}$, and the transition occurs when the ordered minimum ceases to exist. The value of the free energy minimum is discontinuous at the transition, since the system switches to the only remaining solution, namely $\psi=0$. This corresponds to a zeroth-order transition. A zeroth-order phase transition has previously been considered in the theory of superfluidity and superconductivity Maslov2004. More recently, the reentrant phase transition in black holes has been discussed as a zeroth-order transition Gunasekaran; Altamirano01; Zou; Hennigar; Altamirano02; Amin.
Our numerical computations yield this scenario for $\rho_{F}>1/2$. (2) There exists a temperature $T_{1}<T_{2}$ such that $\tilde{F}_{O}<\tilde{F}_{D}$ for $T<T_{1}$ and $\tilde{F}_{O}>\tilde{F}_{D}$ for $T>T_{1}$, with the two phases coexisting at $T_{c}=T_{1}$, which is a point of first-order transition. We obtain this scenario numerically for $\rho_{F}\leq 1/2$.
We can show that the transition is indeed zeroth order for $\rho_{F}>1/2$ based on the disordered-phase behavior of $\alpha$ derived above. Formally, the solution of the self-consistency equation $R=\phi\chi$ is $\psi^{2}=(1/4)[\chi^{2}-(1-2\rho_{F})^{2}]$. As we saw above, $\chi=|1-2\rho_{F}|$ in the disordered phase when $\rho_{F}>1/2$, so the ordered solution $\psi>0$ does not exist in the disordered phase. If $T_{c}<T_{2}$, the ordered solution would persist into the disordered phase for $T_{c}<T<T_{2}$, which is a contradiction. So we must have $T_{c}=T_{2}$ in this case, resulting in a zeroth-order transition.
For $\rho_{F}\leq 1/2$, however, the ordered-phase self-consistency equation in the disordered phase (where $\alpha=0$) becomes $2\psi=(e^{2\phi\psi/T}-1)/(e^{2\phi\psi/T}+1)$, which has a finite-$\psi$ solution for $T<T_{2}$. If $T_{c}=T_{2}$, we would have an ordered solution existing in the disordered phase for $T>T_{c}=T_{2}$, a contradiction. We therefore have $T_{c}<T_{2}$ in this case. The transition criterion here is clearly $\tilde{F}_{O}=\tilde{F}_{D}$, since the ordered and disordered minima both exist at the transition. The transition for $\rho_{F}\leq 1/2$ is therefore first order.
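As a quick illustration (not the model's actual solution, since $\phi$ is held fixed here rather than determined self-consistently), the finite-$\psi$ root of this equation can be found by direct fixed-point iteration:

```python
import numpy as np

# Fixed-point iteration for 2*psi = tanh(phi*psi/T), the alpha = 0 form of the
# self-consistency equation. phi and T are hypothetical illustrative values.
phi, T = 1.0, 0.2
psi = 0.49                              # start near the upper bound 1/2
for _ in range(200):
    psi = 0.5 * np.tanh(phi * psi / T)  # contraction near the finite root
print(psi)                              # finite root for sufficiently small T
```

For large enough $T$ the iteration collapses to $\psi=0$, mirroring the disappearance of the ordered solution.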
We now turn to the correlation function $C({\bf r})=(1/N)\sum_{i}\langle\delta\rho_{{\bf r}_{i}}\delta\rho_{{\bf r}_{i}+{\bf r}}\rangle$, where $\delta\rho_{{\bf r}_{i}}=f_{i}^{\dagger}f_{i}-\rho_{F}$. We obtain $C({\bf r})=-n^{2}({\bf r})$, where $n({\bf r})=(1/N)\sum_{\bf k}\langle f_{\bf k}^{\dagger}f_{\bf k}\rangle e^{i{\bf k}\cdot{\bf r}}$, and $C_{\bf q}=-(1/N)\sum_{\bf k}\langle f_{\bf k}^{\dagger}f_{\bf k}\rangle\langle f_{\bf k+q}^{\dagger}f_{\bf k+q}\rangle$ for the Fourier transform of $C({\bf r})$ zeroTarxiv; zeroTPRO. In our earlier work, we showed that $n({\bf r})$ has a periodicity of twice the lattice spacing for $\rho_{F}=1/2$, while no apparent periodicity was obtained for other values of $\rho_{F}$. Using the expression for the Fermi function in Eq. (13), we can readily see that in the disordered phase ($\psi=0$) we obtain $C_{\bf q}=-1/4$. This shows that there is no density-wave (DW) order in the insulating normal gas phase, and that the DW order obtained for the $T=0$ metallic superfluid zeroTarxiv; zeroTPRO is stabilized by the nesting of the Fermi surface and the superfluidity of the Bose sector.
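A few lines suffice to check this value directly from the quoted formula for $C_{\bf q}$; the flat occupation $\langle n_{\bf k}\rangle=1/2$ used below is an assumption consistent with the quoted result, since a flat band makes $\langle n_{\bf k}\rangle$ $k$-independent:

```python
import numpy as np

# C_q = -(1/N) * sum_k <n_k> <n_{k+q}>, evaluated for a flat occupation.
# With n_k = 1/2 for all k this reduces to -(1/2)^2 = -1/4 for every q.
N = 64
n_k = np.full(N, 0.5)                  # flat occupation (disordered phase)

def C_q(q):
    return -(1.0 / N) * np.sum(n_k * np.roll(n_k, -q))

print([round(C_q(q), 6) for q in range(4)])   # [-0.25, -0.25, -0.25, -0.25]
```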
## IV Discussion
Figure 1: The disordered-phase entropy $S_{D}=-\tilde{F}_{D}/T$ (equal to $-\tilde{F}_{I}/T$ for $\rho_{F}\leq 1/2$ and $-\tilde{F}_{II}/T$ for $\rho_{F}>1/2$) plotted as a function of $\rho_{F}$. The high-temperature entropy is $2\ln 2$, as expected, for $0<\rho_{F}\leq 1/2$. However, it is anomalously high for $1/2<\rho_{F}\leq 1$, the regime of the zeroth-order transition.

Figure 2: A figure illustrating the graphical solution of the self-consistency equation $T=J(\psi)$. The function $J(\psi)$ vanishes at $\psi=0,\psi_{0}$ and has a maximum in between. The horizontal lines show plots of $y=T$ (for $0<T<T_{2}$, $T=T_{2}$, $T>T_{2}$). For $0<T<T_{2}$, the line $y=T$ intersects the curve $y=J(\psi)$ (red curve) at two points $\psi=\psi_{1},\psi_{2}$, with $\psi_{0}\geq\psi_{1}>\psi_{2}$, that correspond, respectively, to a minimum and a maximum of $F$. For $T=T_{2}$, the two points merge and the line is tangent to the curve $J(\psi)$ at its maximum. For $T>T_{2}$, the self-consistency equation has no solution for any real value of $\psi$.

Figure 3: Panels (a)-(f) show the $T$-dependence plots, respectively, of $\psi$, $\phi$, $\mu$, $\tilde{F}$, $\alpha$, and $S$. Each panel has plots at five different values of $\rho_{F}$, namely $0.35,1/2,0.70,0.81,0.95$.

Figure 4: The phase diagram of model (2) in the $(\rho_{F},T)$ plane showing the metallic superfluid ($\phi,~\psi>0$) and insulating non-superfluid ($\phi=\psi=0$) phases (the insulating normal liquid at $T=0$ and the insulating normal gas for $T>0$). These phases are separated by lines of discontinuous transitions: at the phase boundary, the minimum free energy $\tilde{F}$ is continuous with a derivative discontinuity for $\rho_{F}\leq 1/2$ (first-order transition, dashed line), while $\tilde{F}$ has a jump for $\rho_{F}>1/2$ (zeroth-order transition, dotted line).

Figure 5: The temperatures $T_{1}$ and $T_{2}$ plotted as functions of $\rho_{F}$. As explained in the text, $T_{1}$ is the critical temperature for the first-order transition, where ${\tilde{F}}_{O}={\tilde{F}}_{D}$. It is not defined for $\rho_{F}>1/2$, where ${\tilde{F}}_{O}<{\tilde{F}}_{D}$ for $T\leq T_{2}$. The ordered minimum disappears at $T_{2}$. The critical temperature $T_{c}$ is $T_{1}$ for $\rho_{F}\leq 1/2$ (first-order transition) and $T_{2}$ for $\rho_{F}>1/2$ (zeroth-order transition). Note that $T_{2}>T_{1}$. This is responsible for a segment of the phase boundary in Fig. 4 being vertical at $\rho_{F}=1/2$.

Figure 6: Plot of $\psi_{c}^{2}$, where $\psi_{c}$ is the jump in the superfluid order parameter at the discontinuous transition temperature $T_{c}$, as a function of $\rho_{F}$.

Figure 7: Plot of $T_{c}/B_{0}$ as a function of $\rho_{F}$. Here, $B_{0}=2\psi_{0}^{2}$ is the Fermi band width at $T=0$ zeroTarxiv; zeroTPRO.

Figure 8: Plot of $T_{c}/B_{c}$ as a function of $\rho_{F}$. Here, $B_{c}=2\psi_{c}^{2}$ is the $T=T_{c}$ analogue of the $T=0$ Fermi band width $B_{0}$.
Our numerical results are presented in figures 1-8. In Fig. 1 we show the behavior of the disordered-phase entropy as a function of $\rho_{F}$. The entropy in this case is $S_{D}=-\tilde{F}_{D}/T$, where $\tilde{F}_{D}$ is the disordered-phase free energy minimum. As discussed above, this is $-\tilde{F}_{I}/T$ for $\rho_{F}\leq 1/2$ and $-\tilde{F}_{II}/T$ for $\rho_{F}>1/2$. We can see that the entropy is $2\ln 2$ for $\rho_{F}\leq 1/2$, as one might expect in the disordered phase. However, the entropy is anomalously large for $\rho_{F}>1/2$, which is also the regime where the temperature-driven transition is zeroth order. We believe this is a consequence of the filling constraint (1), which leads to the solution in equation (25) for $\alpha$, so that $\alpha=T[\ln(1-\rho_{F})-\ln\rho_{F}]$ in this case (see equation (26)).
To understand how temperature suppresses the metallic-superfluid order, we solve the self-consistency equations (24) and (25) for $\psi$ and $\alpha$ using the approach described graphically in Fig. 2. As shown in the figure, at a temperature $T<T_{2}$, the line $y=T$ intersects the curve $y=J(\psi)$ at two points $\psi_{1},\psi_{2}$ ($\psi_{0}\geq\psi_{1}>\psi_{2}$). The solution $\psi=\psi_{1}$ corresponds to the free energy minimum at this temperature. As the temperature increases, it is clear from the figure that this solution moves to the left, i.e., decreases, leading to a thermal suppression of superfluidity. For $T>T_{2}$, the line $y=T$ no longer intersects the curve $y=J(\psi)$: the self-consistency equation $T=J(\psi)$ has no solution at temperatures above $T_{2}$, the temperature at which the line $y=T$ becomes tangent to the curve at its maximum.
By numerically implementing this graphical method of solution, we can obtain $\psi$, $\phi$, $\mu$, $\tilde{F}$, $\alpha$, and $S$ (the entropy) as the temperature $T$ is varied; these results are plotted in panels (a)-(f) of Fig. 3. Each panel shows the temperature dependence of one of these quantities for five different values of $\rho_{F}=0.35,~1/2,~0.70,~0.81,~0.95$. This choice of $\rho_{F}$ values, which spans the superfluid metallic phase, is the same as for the $T=0$ case reported earlier zeroTarxiv; zeroTPRO. The figures show that at each of these fillings there is a certain critical temperature $T_{c}$ where the model has a discontinuous phase transition: the quantities $\psi,\phi,\mu$, and $\alpha$ all show discontinuous changes at $T_{c}$ (figures 3(a, b, c, e)). We can observe that at $\rho_{F}=0.35$ and $1/2$, the free energy minimum $\tilde{F}$ is continuous, but with a derivative discontinuity, whereas it is discontinuous at $\rho_{F}=0.70,~0.81$, and $0.95$ (Fig. 3(d)).
In figure 3(f) we show plots of the entropy $S(T)=-\partial\tilde{F}/\partial T$, computed by numerical differentiation of $\tilde{F}(T)$. The unusual feature that can be readily seen is that the entropy becomes negative for certain temperatures below $T_{c}$ when $\rho_{F}>1/2$. While the concept of negative entropy has been applied to quantum information systems earlier cerf; delrio2011, its relevance for physical systems has been discussed only recently Chatzi2020. Cerf and Adami showed that, unlike in classical information theory, quantum conditional entropies can be negative for quantum entangled systems cerf. Subsequently, del Rio et al. explained its thermodynamic meaning: negative entropy is related to a possible cooling of an environment connected to a quantum information system, when quantum information contained in the system is erased delrio2011. In a very recent study Chatzi2020, it has been proposed that the results of two independent inelastic neutron scattering experiments Olsen; Callear, which showed anomalous scattering from H2 molecules in nanoscale confined geometries, can be explained on the basis of negative conditional entropy and quantum thermodynamics.
Our numerical results for $\rho_{F}\leq 1/2$ show that at $T=T_{c}=T_{1}$, the ordered and disordered phases coexist and we have $\tilde{F}_{O}=\tilde{F}_{D}$. The ordered minimum survives well into the disordered phase, and disappears at a temperature $T_{2}>T_{1}$. We therefore have a standard first-order transition in this case. On the other hand, for $\rho_{F}>1/2$, the ordered minimum disappears abruptly at a certain temperature $T_{2}(\rho_{F})$. The system then assumes the only minimum available to it, namely the disordered minimum. The free energy then shows a discontinuous jump. The transition is therefore zeroth order. These numerical results are consistent with the qualitative remarks made above in Section III.
As a result, we can define $T_{1}$ only for $\rho_{F}\leq 1/2$, where clearly $T_{c}=T_{1}<T_{2}$. These results are summarized in figures 4 and 5. Figure 4 shows the phase diagram of our model in the $\rho_{F}-T$ plane, with the metallic superfluid under the dome and two insulating non-superfluid phases outside it. We refer to the insulating phase at $T=0$ for $0<\rho_{F}<0.3$ as an insulating normal liquid and the insulating phase for $T>0$ as an insulating normal gas. While the latter is dominated by thermal effects, the former is an unusual phase of bosons at $T=0$ that is non-superfluid because of correlation and quantum effects.
Figure 5 shows the plots of $T_{1}(\rho_{F})$ and $T_{2}(\rho_{F})$. The function $T_{2}(\rho_{F})$ can be defined for $0\leq\rho_{F}\leq 1$, and has a maximum at a certain $\rho_{F}<1/2$. The temperature $T_{1}$, on the other hand, remains lower than $T_{2}$; it vanishes for $0\leq\rho_{F}\leq 0.3$, in agreement with the $T=0$ results zeroTarxiv; zeroTPRO.
The line separating the two phases in Fig. 4 has a vertical segment at the van Hove point $\rho_{F}=1/2$. This is a consequence of two facts: (a) $T_{c}$ is the lower of $T_{1}$ and $T_{2}$, and (b) $T_{1}<T_{2}$ for $\rho_{F}\leq 1/2$.
As we saw in Fig. 3, the superfluid order parameter $\psi$ is discontinuous with a jump $\psi_{c}$ at $T_{c}$. In Fig. 6 we show the plot of $\psi_{c}^{2}(\rho_{F})$. The jump in the order parameter appears to be largest around the same $\rho_{F}$ where $T_{2}(\rho_{F})$ is maximum. It shows an abrupt drop at $\rho_{F}=1/2$ and, after a broad maximum, decreases slowly towards zero at $\rho_{F}=1$.
Based on the zero-temperature Fermi band width $B_{0}=2\psi_{0}^{2}$, the ratio $T_{c}/B_{0}$ agreed very well with measured values for several types of unconventional superfluids and superconductors in our earlier work zeroTarxiv; zeroTPRO. The value of $T_{c}\simeq 0.12$ used there was not based on a calculation, but only on an estimate based on the zero-temperature free energy minimum for $\rho_{F}=1/2$. In our present work, however, we have performed an explicit computation of $T_{c}$, based on the $T_{1}$ and $T_{2}$ calculations, for the whole range of $\rho_{F}$ from $0$ to $1$, as shown in figures (4) and (5). Figure 7 presents a plot of the ratio $T_{c}/B_{0}$ as a function of $\rho_{F}$. We note that the calculated value of the ratio for $\rho_{F}=1/2$ is in almost exact agreement with our earlier estimate zeroTarxiv; zeroTPRO. The ratio is between $0.01$ and $0.10$ for most $\rho_{F}$ values in the range 0.3 to 1.0.
Figure 8 plots the ratio $T_{c}/B_{c}$ as a function of $\rho_{F}$, using $B_{c}=2\psi_{c}^{2}$, the $T=T_{c}$ analogue of the $T=0$ Fermi band width $B_{0}$. It is interesting to compare this with experimentally determined values of the ratio $T_{c}/T_{F}$ of the superfluid or superconducting transition temperature $T_{c}$ to the Fermi temperature $T_{F}$. This ratio also scales approximately with the ratio $\Delta/E_{F}$, the pairing strength of single-band superconductors, where $\Delta$ is the superconducting gap and $E_{F}$ is the Fermi energy. In particular, it has been recognized Uemura that the unconventional superconductors show a relatively large value of $T_{c}/T_{F}\sim 0.02-0.2$, while the conventional superconductors show a small value of $T_{c}/T_{F}\sim 10^{-4}$ to $10^{-5}$. For example, the elemental metal tin (Sn) Ashcroft has $T_{c}/T_{F}\sim 3\times 10^{-5}$, while the high-$T_{c}$ cuprates Yamamoto; Brookes; Xie exhibit $T_{c}/T_{F}\sim 0.02-0.03$. On the other hand, the iron-based superconductors (estimated from experimental data Lubashevsky; Okazaki; Kasahara; Rinott; MonoFeSe) and the newly discovered hydride superconductors (as estimated from experimental $T_{c}$ values Drozdov; Drozdov2; Somayazulu and band structure calculations Bianconi; Jarlborg; Liu) show $T_{c}/T_{F}\sim 0.1-0.2$. Surprisingly, the corresponding value for a Fermi-Bose mixture Ferrier estimated from experimental data is $T_{c}/T_{F}\simeq 0.19$, although $T_{c}\sim 200$ nK. Even for a purely ultracold Fermi atomic system Chin, $T_{c}/T_{F}\simeq 0.2$. Thus, the high-$T_{c}$ cuprates, iron-based superconductors, hydrides, and ultracold atomic systems are clearly classified as unconventional superconductors/superfluids Millev90.
A very recent study on an iron-based superconductor has reported evidence for the first solid-state BEC superconductor, although the observed value of the ratio $\Delta/E_{F}$ ($\simeq 0.12$, corresponding to $T_{c}/T_{F}\simeq 0.025$) is reduced across the BCS-BEC crossover, which the authors interpret as most probably arising from interband coupling effects Hashimoto.
The $T_{c}$ values of these unconventional materials span nine orders of magnitude: while the ultracold FBMs have $T_{c}\simeq 200$ nK, the hydrides have $T_{c}\simeq 250$ K. But the important point is that the ratio calculated in our model is remarkably close to these values for several classes of unconventional superfluids and superconductors. In particular, we obtain $T_{c}/B_{0}\sim 0.01-0.1$ ($T_{c}/B_{c}\sim 0.01-0.25$); the value of this ratio is a very robust feature of our model, and compares well with the range $0.03-0.22$ for most unconventional materials (see Table 1 of our previous paper zeroTPRO for details).
An important question is that of a potential experimental realization of the composite hopping model. Consider a lattice with $N$ sites and $M$ electrons ($2N>M>N$) with nearest-neighbor hopping and on-site repulsion. In this case, by the pigeonhole principle, at least one site must host more than one electron. Taking into account the repulsion between electrons, in the minimum energy configuration we have $M-N$ local pairs (which can be considered as hardcore bosons) and $2N-M$ electrons each occupying a site. When one of the electrons of a pair hops to a neighboring site occupied by one electron, we have a realization of composite hopping (see our earlier work zeroTarxiv; zeroTPRO for a more detailed explanation). This situation is perhaps also well described by a one-band fermion Hubbard model above half filling, and the composite hopping model might offer a good effective description. Our model approximates the electrons by spinless fermions, but in subsequent work we plan to treat the spin-half case. In such a realization, the composite hopping strength $t$ is just the single-electron hopping strength. For a narrow-band Hubbard model, we can take $t\sim 0.1-0.2$ eV, and if we further use $T_{c}\simeq 0.06zt$ (corresponding to its peak value in Fig. 4 at the van Hove point $\rho_{F}=1/2$), we obtain a value of $T_{c}\simeq 250-500$ K.
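As a sanity check on this estimate, the arithmetic can be spelled out. The coordination number $z$ is not specified in this excerpt, so $z=4$ (a square lattice) is an assumption made purely for illustration:

```python
# Back-of-the-envelope conversion of T_c ~ 0.06*z*t to kelvin.
k_B = 8.617e-5                    # Boltzmann constant in eV/K
z = 4                             # ASSUMED coordination number
for t in (0.1, 0.2):              # narrow-band hopping strength in eV
    T_c = 0.06 * z * t / k_B
    print(f"t = {t:.1f} eV -> T_c ~ {T_c:.0f} K")
# prints roughly 280 K and 560 K, i.e., the few-hundred-kelvin range quoted
```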
If room-temperature superconductivity is theoretically possible within a composite-hopping framework, then this provides a strong reason to look for practical examples of it. In the pigeonhole context discussed above, external pressure might help overcome the repulsion between electrons and keep them paired. It is interesting to explore whether high-pressure room-temperature superconductors like the sulphides dias2020 are indeed solid-state realizations of the composite-hopping model, and are superconducting because of such a 'pigeonhole' pairing mechanism.
A second possibility might be a realization of composite hopping in the context of ultracold atoms in an optical lattice. Quantum simulation is an exciting area of research lewenstein2012, and given the many unusual physical properties displayed by our model, such as negative compressibility (discussed in our earlier work zeroTarxiv; zeroTPRO), negative entropy cerf; delrio2011; Chatzi2020, a zeroth-order transition Maslov2004; Gunasekaran; Altamirano01; Zou; Hennigar; Altamirano02; Amin; kundu2020, and the remarkable agreement of the calculated ratio of $T_{c}/T_{F}$ with a wide range of unconventional superfluids and superconductors zeroTPRO, it would be interesting to explore the physics of composite hopping from this perspective. In our model, there is a clear distinction between the boson-dominated ($\rho_{F}\leq 1/2$) and the fermion-dominated ($\rho_{F}>1/2$) regimes: in the former, the temperature-driven transition is first order and the entropy remains positive throughout, while in the latter the transition is zeroth order and the entropy is negative in the ordered phase close to $T_{c}$. In the latter regime, the zero-temperature bulk modulus also becomes negative over a range of $\rho_{F}$ zeroTPRO.
An important open question is whether the zeroth-order temperature-driven transition for $\rho_{F}>1/2$ is a consequence of the mean-field approximation (3) or an inherent property of the composite-hopping model. This question assumes importance given that the model offers an irreducible nontrivial description of an FBM perhaps not explored before, and given that zeroth-order transitions are relatively rare in the solid state Maslov2004; kundu2020. It would therefore be of interest to study the model using other numerical techniques of quantum many-body theory, such as quantum Monte Carlo (QMC) calculations bhmqmc.
## V Conclusion
In this work, we have extended the zero-temperature study of the composite hopping model of FBMs to explore finite-temperature thermodynamics. We computed the $\rho_{F}-T$ phase diagram and found that the temperature-driven transition from the metallic superfluid phase to the insulating normal phase is discontinuous: for $\rho_{F}\leq 1/2$ it is first order, while for $\rho_{F}>1/2$ it is zeroth order. We calculated the temperature-dependent superfluid amplitude $\psi$, the fermion hopping amplitude $\phi$, the Fermi chemical potential, and the free energy within a mean-field approximation. We also computed the entropy and found that it becomes negative for a certain range of temperatures below $T_{c}$ when $\rho_{F}>1/2$. We also calculated the ratios $T_{c}/B_{0}$ and $T_{c}/B_{c}$, where $B_{0}$ and $B_{c}$ are the Fermi band widths at $T=0$ and $T=T_{c}$, respectively. The calculated ratios of $T_{c}$ to the Fermi band widths match well with experimental values of $T_{c}/T_{F}$ (where $T_{F}$ is the Fermi temperature) for unconventional superfluids and superconductors, including Fermi-Bose mixtures, the high-$T_{c}$ cuprates, and iron-based superconductors, spanning a $T_{c}$ range of nine orders of magnitude. The results indicate the important role of composite hopping in describing the superfluid properties of Fermi-Bose mixtures.
## Acknowledgments
A.C. thanks the Ministry of Science and Technology of the Republic of China,
Taiwan, for financially supporting this research under Contract No. MOST 108-2112-M-213-001-MY3.
## References
* (1) C. Ebner and D. O. Edwards, “The low temperature thermodynamic properties of dilute solutions of 3He in 4He”, Phys. Rep. 2, 77 (1970).
* (2) A. G. Truscott, K. E. Strecker, W. I. McAlexander, G. B. Patridge, and R. G. Hulet, “Observation of Fermi Pressure in a Gas of Trapped Atoms”, Science 291, 2570 (2001).
* (3) F. Schreck, L. Khaykovich, K.L. Corwin, G. Ferrari, T. Bourdel, J. Cubizolles, and C. Salomon, “Quasipure Bose-Einstein Condensate Immersed in a Fermi Sea”, Phys. Rev. Lett. 87, 080403 (2001).
* (4) Z. Hadzibabic, C.A. Stan, K. Dieckmann, S. Gupta, M.W. Zwierlein, A. Gorlitz, and W. Ketterle, “Two-Species Mixture of Quantum Degenerate Bose and Fermi Gases”, Phys. Rev. Lett. 88, 160401 (2002).
* (5) Y. Lubashevsky, E. Lahoud, K. Chashka, D. Podolsky, and A. Kanigel, “Shallow pockets and very strong coupling superconductivity in FeSexTe1-x”, Nat Phys 8, 309 (2012).
* (6) K. Okazaki, Y. Ito, Y. Ota, Y. Kotani, T. Shimojima, T. Kiss, S. Watanabe, C. -T. Chen, S. Niitaka, T. Hanaguri, H. Takagi, A.Chainani, and S. Shin, “Superconductivity in an electron band just above the Fermi level: possible route to BCS-BEC superconductivity”, Sci. Rep. 4, 4109; DOI:10.1038/srep04109 (2014).
* (7) Shigeru Kasahara, Tatsuya Watashige, Tetsuo Hanaguri, Yuhki Kohsaka, Takuya Yamashita, Yusuke Shimoyama, Yuta Mizukami, Ryota Endo, Hiroaki Ikeda, Kazushi Aoyama, Taichi Terashima, Shinya Uji, Thomas Wolf, Hilbert von Lohneysen, Takasada Shibauchi, and Yuji Matsuda, “Field-induced superconducting phase of FeSe in the BCS-BEC cross-over”, PNAS, 111, 16311 (2014).
* (8) Shahar Rinott, K. B. Chashka, Amit Ribak, Emile D. L. Rienks, Amina Taleb-Ibrahimi, Patrick Le Fevre, Francois Bertran, Mohit Randeria and Amit Kanigel, “Tuning across the BCS-BEC crossover in the multiband superconductor Fe1+ySexTe1-x: An angle-resolved photoemission study”, Science Advances 3, e1602372 DOI: 10.1126/sciadv.1602372.
* (9) C. A. Regal, C. Ticknor, J. L. Bohn, and D. S. Jin, “Tuning p-wave interactions in an ultracold Fermi gas of atoms”, Phys. Rev. Lett. 90, 053201 (2003).
* (10) M. Randeria and E. Taylor, “Crossover from Bardeen-Cooper-Schrieffer to Bose-Einstein condensation and the unitary Fermi gas”, Annual Review of Condensed Matter Physics 5, 209 (2014).
* (11) W. Ketterle and M. W. Zwierlein, in Proceedings of the International School of Physics: Enrico Fermi, edited by M. Inguscio, W. Ketterle, and C. Salomon (IOS Press, 2008) pp.95-287.
* (12) L. V. Keldysh and A. N. Kozlov, “Collective properties of excitons in semiconductors,” Sov. Phys. JETP 27, 521 (1968).
* (13) B. O. Kerbikov, “BCS-Bose crossover in color superconductivity,” Phys. At. Nucl. 65, 1918 (2002), arXiv:hep-ph/0204209.
* (14) M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, C. Chin, J. Hecker Denschlag, and R. Grimm, “Crossover from a molecular Bose-Einstein condensate to a degenerate Fermi gas”, Phys. Rev. Lett. 92, 120401 (2004).
* (15) Tyler D. Cumby, Ruth A. Shewmon, Ming-Guang Hu, John D. Perreault, and Deborah S. Jin, “Feshbach molecule formation in a Bose Fermi mixture”, Phys. Rev. A 87, 012703.
* (16) Ruth S. Bloom, Ming-Guang Hu, Tyler D. Cumby, and Deborah S. Jin, “Tests of Universal Three Body Physics in an Ultracold Bose-Fermi Mixture”, Phys. Rev. Lett. 111, 105301 (2013).
* (17) M. Randeria, J.-M.Duan, and L.-Y. Shieh, “Bound states, Cooper pairing, and Bose condensation in two dimensions”, Phys. Rev. Lett. 62, 981 (1989).
* (18) R. M. Quick, C. Esebbag, and M. De Llano. “BCS theory tested in an exactly solvable fermion fluid.” Physical Review B 47, 11512 (1993).
* (19) M. J. Bijlsma, B. A. Heringa and H. T. C. Stoof, “Phonon exchange in dilute Fermi-Bose mixtures: Tailoring the Fermi-Fermi interaction”, Phys. Rev. A 61, 053601 (2000).
* (20) P. Capuzzi and E. S. Hernandez, “Phase separation and response of 3 He-4 He mixtures within a magnetic trap”, Phys. Rev. A 64, 043607 (2001).
* (21) A. P. Albus, S. A. Gardiner, F. Illuminati, and M. Wilkens, “Quantum field theory of dilute homogeneous Bose-Fermi mixtures at zero temperature: General formalism and beyond mean-field corrections”, Phys. Rev. A 65, 053607 (2002).
* (22) L. Viverit and S. Giorgini, Ground-state properties of a dilute Bose-Fermi mixture, Phys. Rev. A 66, 063604 (2002).
* (23) M. R. Schafroth, “Theory of superconductivity”, Phys. Rev. 96, 1442 (1954).
* (24) J. Ranninger and S. Robaszkiewicz, “Superconductivity of locally paired electrons”, Physica B 135, 468 (1985).
* (25) J. Ranninger, J. M. Robin, and M. Eschrig, “Superfluid precursor effects in a model of hybridized bosons and fermions”, Phys. Rev. Lett. 74, 4027 (1995).
* (26) R. Friedberg and T. D. Lee, “Gap energy and long-range order in the boson-fermion model of superconductivity”, Phys. Rev. B 40, 6745 (1989).
* (27) V. B. Geshkenbein, L. B. Ioffe, and A. I. Larkin, “Superconductivity in a system with preformed pairs”, Phys. Rev. B 55, 3173 (1997).
* (28) T. Domanski, M. M. Maska, and M. Mierzejewski, “Upward curvature of the upper critical field in the boson-fermion model”, Phys. Rev. B 67, 134507 (2003).
* (29) Y.-I. Shin, C. H. Schunck, A. Schirotzek, and W. Ketterle, “Phase diagram of a two-component Fermi gas with resonant interactions”, Nature 451, 689 (2008).
* (30) M. M. Maska and N. Trivedi, “Temperature-driven BCS-BEC crossover and Cooper-paired metallic phase in coupled boson-fermion systems”, Phys. Rev. B 102, 144506 (2020).
* (31) Christian Ufrecht, Matthias Meister, Albert Roura, and Wolfgang P. Schleich, “Comprehensive classification for Bose-Fermi mixtures”, New J. Phys. 19, 085001 (2017).
* (32) C. Ospelkaus, S. Ospelkaus, K. Sengstock and K. Bongs, Interaction-driven dynamics of 40K-87Rb fermion-boson gas mixtures in the large-particle-number limit, Phys. Rev. Lett. 96 020401 (2006) ; S. Ospelkaus, C. Ospelkaus, L. Humbert, K Sengstock and K. Bongs, “Tuning of heteronuclear interactions in a degenerate Fermi-Bose mixture”, Phys. Rev. Lett. 97 120403 (2007)
* (33) X.-W. Guan, M. T. Batchelor, and J.-Y. Lee, “Magnetic ordering and quantum statistical effects in strongly repulsive Fermi-Fermi and Bose-Fermi mixtures”, Physical Review A 78, 023621 (2008).
* (34) D. W. Wang, M. D. Lukin, and E. Demler, “Engineering superfluidity in Bose-Fermi mixtures of ultracold atoms”, Phys. Rev. A 72, 051604 (2005).
* (35) M. Cramer, “Interaction-Dependent Temperature Effects in Bose-Fermi Mixtures in Optical Lattices”, Phys. Rev. Lett. 106, 215302 (2011).
* (36) M. Lewenstein, L. Santos, M. A. Baranov and H. Fehrmann, “Atomic Bose-Fermi mixtures in an optical lattice”, Phys. Rev. Lett. 92, 050401 (2004).
* (37) L. Pollet, M. Troyer, K. Van Houcke, and S. M. A. Rombouts, “Phase diagram of Bose-Fermi mixtures in one-dimensional optical lattices”, Phys. Rev. Lett. 96, 190402 (2006).
* (38) K Sheshadri and A Chainani, “A Composite-Hopping Model of Fermi-Bose Mixtures”, arxiv/1912.00132 (2019).
* (39) K Sheshadri and A Chainani, “Coupled first-order transitions and unconventional superfluidity in a Fermi-Bose mixture”, Physical Review Research 2 (2), 023291 (2020).
* (40) K. Sheshadri, H. R. Krishnamurthy, R. Pandit, and T. V. Ramakrishnan, “Superfluid and Insulating Phases in an Interacting-Boson Model: Mean-Field Theory and the RPA”, Europhys. Lett. 22, 267 (1993).
* (41) K. Sheshadri, H. R. Krishnamurthy, Rahul Pandit, and T. V. Ramakrishnan, “Percolation-Enhanced Localization in the Disordered Bosonic Hubbard Model”, Phys. Rev. Lett. 75, 4075 (1995).
* (42) Matthew P. A. Fisher, Peter B. Weichman, G. Grinstein, and Daniel S. Fisher , “Boson localization and the superfluid-insulator transition”, Phys. Rev. B 40, 546 (1989).
* (43) W. Krauth, M. Caffarel, J. P. Bouchaud, “Gutzwiller wave function for a model of strongly interacting bosons”, Physical Review B 45, 3137 (1992).
* (44) R. V. Pai, K. Sheshadri, R. Pandit, “Phases and transitions in the spin-1 Bose-Hubbard model: Systematics of a mean-field theory”, Physical Review B 77, 014503 (2008).
* (45) R. V. Pai, J. M. Kurdestany, K. Sheshadri, R. Pandit, “Bose-Hubbard models in confining potentials: Inhomogeneous mean-field theory”, Physical Review B 85, 214524 (2012).
* (46) Takashi Kimura, Shunji Tsuchiya, and Susumu Kurihara, “Possibility of a First-Order Superfluid Mott-Insulator Transition of Spinor Bosons in an Optical Lattice”, Phys. Rev. Lett. 94, 110403 (2005).
* (47) V.P. Maslov, “Zeroth-Order Phase Transitions”, Mathematical Notes 76, 697 (2004).
* (48) S. Gunasekaran, R. B. Mann and D. Kubiznak, JHEP 11, 110 (2012).
* (49) N. Altamirano, D. Kubiznak, and R. B. Mann, Phys. Rev. D. 88, 101502(R) (2013); N. Altamirano, D. Kubiznak, R. B. Mann and Z. Sherkatghanad, Class. Quant. Grav. 31, 042001 (2014); S. W. Wei, P. Cheng and Y. X. Liu, Phys. Rev. D 93, 084015 (2016).
* (50) D. Zou, Y. Liu, B. Wang, Phys. Rev. D. 90, 044063 (2014); R. A. Hennigar, W. G. Brenna and R. B. Mann, JHEP 1507, 077 (2015); M. B. Jahani Poshteh, B. Mirza, Z. Sherkatghanad, Phys. Rev. D 88, 024005 (2013).
* (51) R. A. Hennigar and R. B. Mann, Entropy 17, 8056 (2015).
* (52) N. Altamirano, D. Kubiznak, R. B. Mann and Z. Sherkatghanad, Galaxies 2, 89 (2014); D. Kubiznak and F. Simovic, Class. Quant. Grav. 33, 245001 (2016).
* (53) Amin Dehyadegari and Ahmad Sheykhi, Reentrant phase transition of Born-Infeld-AdS black holes, Phys. Rev. D 98, 024011 (2018).
* (54) N. J. Cerf and C. Adami, “Negative Entropy and Information in Quantum Mechanics”, Phys. Rev. Lett. 79, 5194 (1997).
* (55) Lidia del Rio, Johan Aberg, Renato Renner, Oscar Dahlsten and Vlatko Vedral “The thermodynamic meaning of negative entropy”, Nature 474, 61-63 (2011).
* (56) C. A. Chatzidimitriou-Dreismann, “Experimental Implications of Negative Quantum Conditional Entropy – H2 Mobility in Nanoporous Materials”, Applied Sciences 10, 8266 (2020) ; doi:10.3390/app10228266.
* (57) Raina J. Olsen, Matthew Beckner, Matthew B. Stone, Peter Pfeifer, Carlos Wexler, and Haskell Taub, “Quantum excitation spectrum of hydrogen adsorbed in nanoporous carbons observed by inelastic neutron scattering”, Carbon, 58, 46 (2013).
* (58) Samantha K. Callear, Anibal J. Ramirez-Cuesta, William I. F. David, Franck Millange, and Richard I. Walton, “High-resolution inelastic neutron scattering and neutron powder diffraction study of the adsorption of dihydrogen by the Cu(II) metal-organic framework material HKUST-1”, Chemical Physics 427, 9 (2013).
* (59) Y. J. Uemura, “Basic similarities among cuprate, bismuthate, organic, chevrel-phase and heavy-fermion superconductors shown by penetration-depth measurements” Phys. Rev. Lett. 66 2665 (1991) ; “Condensation, excitation, pairing, and superfluid density in high-Tc superconductors: the magnetic resonance mode as a roton analogue and a possible spin-mediated pairing”, J. Phys.: Condens. Matter 16, S4515 (2004).
* (60) N. W. Ashcroft and N. D. Mermin, Solid State Physics, Saunders College Publishing, Florida (1975).
* (61) A. Yamamoto, N. Takeshita, C. Terakura, and Y. Tokura, “High pressure effects revisited for the cuprate superconductor family with highest critical temperature”, Nat. Commun. 6, 8990 (2015).
* (62) N. B. Brookes, G. Ghiringhelli, O. Tjernberg, L. H. Tjeng, T. Mizokawa, T. W. Li, and A. A. Menovsky, “Detection of Zhang-Rice Singlets Using Spin-Polarized Photoemission”, Phys. Rev. Lett. 87, 237003 (2001).
* (63) B. P. Xie, K. Yang, D. W. Shen, J. F. Zhao, H. W. Ou, J. Wei, S. Y. Gu, M. Arita, S. Qiao, H. Namatame, M. Taniguchi, N. Kaneko, H. Eisaki, Z. Q. Yang, D.L. Feng, “High-energy scale revival and giant kink in the dispersion of a cuprate superconductor”, Phys. Rev. Lett. 98, 147001 (2007).
* (64) Q. Song, T.L. Yu, X. Lou, B.P. Xie, H.C. Xu, C.H.P. Wen, Q. Yao, S.Y. Zhang, X.T. Zhu, J.D. Guo, R. Peng and D.L. Feng, “Evidence of cooperative effect on the enhanced superconducting transition temperature at the FeSe/SrTiO3 interface”, Nat. Commun.10, 758 (2019), doi.org/10.1038/s41467-019-08560-z.
* (65) A. P. Drozdov, M. I. Eremets, I. A. Troyan, V. Ksenofontov and S. I. Shylin, “Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system”, Nature 525, 73 (2015).
* (66) A. P. Drozdov, P. P. Kong, V. S. Minkov, S. P. Besedin, M. A. Kuzovnikov, S. Mozaffari, L. Balicas, F. F. Balakirev, D. E. Graf, V. B. Prakapenka, E. Greenberg, D. A. Knyazev, M. Tkacz and M. I. Eremets, “Superconductivity at 250 K in lanthanum hydride under high pressures”, Nature 569, 528 (2019).
* (67) M. Somayazulu, M. Ahart, A. K. Mishra, Z. M. Geballe, M. Baldini, Y. Meng, V. V. Struzhkin, and R. J. Hemley, “Evidence for Superconductivity above 260 K in Lanthanum Superhydride at Megabar Pressures”, Phys. Rev. Lett. 122, 027001 (2019).
* (68) A. Bianconi and T. Jarlborg, “Superconductivity above the lowest Earth temperature in pressurized sulfur hydride”, EPL (Europhysics Letters) 112, 37001 (2015).
* (69) T. Jarlborg and A. Bianconi, “Breakdown of the Migdal approximation at Lifshitz transitions with giant zero-point motion in the H3S superconductor”, Sci. Rep. 6, 24816; doi: 10.1038/srep24816 (2016).
* (70) H. Liu , I. I. Naumov , R. Hoffmann , N. W. Ashcroft, and R. J. Hemley, “Potential high-Tc superconducting lanthanum and yttrium hydrides at high pressure”, PNAS 114, 6990 (2017).
* (71) I. Ferrier-Barbut, M. Delehaye, S. Laurent, A. T. Grier, M. Pierce, B. S. Rem, F. Chevy, C. Salomon, “A mixture of Bose and Fermi superfluids”, Science 345, 1035 (2014).
* (72) C. Chin, M. Bartenstein, A. Altmeyer, S. Riedl, S. Jochim, J. Hecker Denschlag, R. Grimm, “Observation of the Pairing Gap in a Strongly Interacting Fermi Gas”, Science 305, 1128 (2004).
* (73) Yonko T Millev and Dimo I Uzunov, “Weakly First-Order Transition in Unconventional Superconductors”, Phys. Lett. A 145, 287 (1990).
* (74) T. Hashimoto, Y. Ota, A. Tsuzuki, T. Nagashima, A. Fukushima, S. Kasahara , Y. Matsuda , K. Matsuura , Y. Mizukami , T. Shibauchi, Shik Shin, K. Okazaki, “Bose-Einstein condensation superconductivity induced by disappearance of the nematic state”, Sci. Adv. 2020; 6 : eabb9052.
* (75) Maciej Lewenstein, Anna Sanpera, and Veronica Ahufinger, “Ultracold Atoms in Optical Lattices: Simulating quantum many-body systems”, Oxford University Press (2012).
* (77) Satyaki Kundu, Tapas Bar, Rajesh Kumble Nayak, and Bhavtosh Bansal, “Critical Slowing Down at the Abrupt Mott Transition: When the First-Order Phase Transition Becomes Zeroth Order and Looks Like Second Order”, Phys. Rev.Lett. 124, 095703 (2020).
* (78) Elliot Snider, Nathan Dasenbrock-Gammon, Raymond McBride, Mathew Debessai, Hiranya Vindana, Kevin Vencatasamy, Keith V. Lawler, Ashkan Salamat and Ranga P. Dias, “Room-temperature superconductivity in a carbonaceous sulfur hydride”, Nature 586, 373 (2020).
* (79) Werner Krauth and Nandini Trivedi, “Mott and superfluid transitions in a strongly interacting lattice boson system.” EPL (Europhysics Letters) 14, 627 (1991).
11institutetext: Grupo de Astrofísica Molecular, Instituto de Física
Fundamental, CSIC, C/ Serrano 123, 28006 Madrid, Spain
11email<EMAIL_ADDRESS>22institutetext: Observatorio Astronómico
Nacional (IGN), C/ Alfonso XII 3, 28014 Madrid, Spain 33institutetext:
Observatorio de Yebes (IGN), Cerro de la Palera s/n, 19141 Yebes, Guadalajara,
Spain
# A study of C4H3N isomers in TMC-1: line by line detection of
HCCCH2CN††thanks: Based on observations with the 40-m radio telescope of the
National Geographic Institute of Spain (IGN) at Yebes Observatory (projects
19A003 and 20A014). Yebes Observatory thanks the ERC for funding support under
grant ERC-2013-Syg-610256-NANOCOSMOS.
N. Marcelino, B. Tercero, M. Agúndez, and J. Cernicharo
(Received ; accepted )
We present Yebes 40m telescope observations of the three most stable C4H3N
isomers towards the cyanopolyyne peak of TMC-1. We have detected 13
transitions from CH3C3N (A and E species), 16 lines from CH2CCHCN, and 27
lines ($a$-type and $b$-type) from HCCCH2CN. We thus provide a robust
confirmation of the detection of HCCCH2CN and CH2CCHCN in space. We have
constructed rotational diagrams for the three species, and obtained rotational
temperatures between $4-8$ K and similar column densities for the three
isomers, in the range $(1.5-3)\times 10^{12}$ cm-2. Our chemical model
provides abundances of the order of the observed ones, although it
overestimates the abundance of CH3CCCN and underestimates that of HCCCH2CN.
The similarity of the observed abundances of the three isomers suggests a
common origin, most probably involving reactions of the radical CN with the
unsaturated hydrocarbons methyl acetylene and allene. Studies of reaction
kinetics at low temperature and further observations of these molecules in
different astronomical sources are needed to draw a clear picture of the
chemistry of C4H3N isomers in space.
###### Key Words.:
Astrochemistry – ISM: abundances – ISM: clouds, TMC-1 – ISM: molecules – line:
identification
## 1 Introduction
Three C4H3N isomers have been detected in space to date. These are, in order
of increasing energy, methylcyanoacetylene (CH3C3N), cyanoallene (CH2CCHCN),
and propargyl cyanide (HCCCH2CN). Our knowledge of C4H3N isomers in the
interstellar medium is the result of a nice multidisciplinary story with
contributions from theoretical calculations, laboratory experiments, and
astronomical observations. The presence of cyanoallene in cold interstellar
clouds was predicted by Balucani et al. (2000, 2002) based on crossed
molecular beam experiments and _ab initio_ calculations which indicated that
the reaction of CN and CH3CCH would produce CH3C3N, already detected in TMC-1
(Broten et al., 1984), and CH2CCHCN in nearly equal amounts. Laboratory
experiments indeed showed that the reaction CN + CH3CCH is rapid at low
temperatures (Carty et al., 2001). These results motivated an astronomical
search for cyanoallene in TMC-1, which turned out to be successful using the
GBT (Lovas et al., 2006) and Effelsberg 100m (Chin et al., 2006) telescopes.
In their combined crossed beam and _ab initio_ study, Balucani et al. (2000,
2002) studied also the reaction between CN and CH2CCH2 (allene), a non polar
metastable isomer of CH3CCH which is thought to be also present in cold
interstellar clouds. These authors found that the reaction should be rapid at
low temperatures, something that was confirmed by Carty et al. (2001),
producing cyanoallene and the third C4H3N isomer, HCCCH2CN. This isomer was not detected in TMC-1 by Lovas et al. (2006), although it was later found toward this same source during a cm-wavelength line survey with the GBT (McGuire et al., 2020). The detection of propargyl cyanide in TMC-1 by these authors relied on four individual lines detected at a modest signal-to-noise ratio (SNR) and was supported by line stacking of 68 transitions.
Here we present an independent and robust detection of HCCCH2CN in TMC-1, with 10 lines detected with SNR above 10 plus 12 lines detected above 3$\sigma$, together with observations of the two other C4H3N isomers, CH3C3N and CH2CCHCN. The presence of the latter is confirmed by the detection of a significant number of rotational lines. The high sensitivity and number of lines detected allow us to derive precise abundances for the three isomers in a coherent and systematic way and to revisit the chemistry of C4H3N isomers in TMC-1.
## 2 Observations
The data presented here are part of a deep spectral line survey in the Q band toward TMC-1, performed with the Yebes 40 m radio telescope111http://rt40m.oan.es/rt40m_en.php (de Vicente et al., 2016), located at 990 m of altitude near Guadalajara (Spain). The observed position corresponds to the cyanopolyyne peak in TMC-1, at $\alpha_{J2000}=4^{\rm h}41^{\rm m}41.9^{\rm s}$ and $\delta_{J2000}=+25^{\circ}41^{\prime}27.0^{\prime\prime}$. We have covered the full Q band at the 40 m telescope, between 31.1 GHz and 50.4 GHz, using the recently installed NANOCOSMOS HEMT Q band receiver (Tercero et al., 2020b) and the fast Fourier transform spectrometers (FFTS), with 8$\times$2.5 GHz bands per linear polarization, which allow a simultaneous scan of an 18 GHz bandwidth at a spectral resolution of 38 kHz ($\sim$0.27 km s-1). We observed two setups at different central frequencies in order to fully cover the lower and upper frequencies allowed by the Q band receiver, and to check for spurious signals and other technical artifacts.
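As a quick consistency check on these numbers (a sketch; the quoted $\sim$0.27 km s-1 corresponds to the middle of the band):

```python
# Velocity width of a 38 kHz channel, Delta_v = c * (Delta_nu / nu),
# evaluated at the band edges and near the band center.
c = 2.99792458e5                           # speed of light in km/s
dnu = 38e3                                 # channel width in Hz
for nu in (31.1e9, 42.0e9, 50.4e9):        # Hz
    print(f"{nu/1e9:.1f} GHz: {c * dnu / nu:.3f} km/s")
# -> 0.366, 0.271, 0.226 km/s
```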
The observations were performed in several sessions, between November 2019 and
February 2020, using the frequency switching technique with a frequency throw
of 10 MHz. The intensity scale in the spectra obtained is T${}_{\rm A}^{*}$,
antenna temperature corrected for atmospheric absorption and spillover losses,
which was calibrated using two absorbers at different temperatures and the
atmospheric transmission model ATM (Cernicharo, 1985; Pardo et al., 2001).
Pointing and focus were checked every hour through pseudo-continuum
observations (see e.g. de Vicente et al. 2016; Tercero et al. 2020a) of the
SiO $J=1-0$, $v=1$ maser emission towards the O-rich evolved star IK Tau,
which is close to the target source. The pointing errors were always found
within 2-3′′. System temperatures were in the range 50-250 K depending on the
frequency, the particular weather conditions of each observing session (from 5
mm to 10 mm of precipitable water vapor), and the elevation of the source
(from 15∘ to 80∘). The final rms obtained is in the range 0.5-1 mK, rising up
to 3 mK at the highest frequencies. The main beam efficiency of the Yebes 40 m
telescope ranges from 0.6 at 32 GHz to 0.43 at 49 GHz, and the half power beam
width (HPBW) ranges from 55′′ at 32 GHz to 37′′ at 49 GHz. All the data were
reduced and analyzed using the GILDAS222http://www.iram.fr/IRAMFR/GILDAS/
software.
## 3 Results
Figure 1: Observed lines of HCCCH2CN ($a$-type) toward TMC-1 (CP). The vertical dashed green line marks a radial velocity of 5.7 km s-1.

Figure 2: Observed lines of HCCCH2CN ($b$-type) toward TMC-1 (CP). Blue arrows show the positions of the three strongest hyperfine components. The velocity axis refers to the frequency resulting from collapsing the hyperfine structure.
The high sensitivity of this line survey allowed the detection of HCCCH2CN towards TMC-1 through 17 $a$-type lines, up to quantum numbers $J=9-8$ and $K_{\rm a}=0,1,2$ ($E_{\rm u}\leq 13$ K), with 10 of them showing a SNR $>10$. In addition, we detected 10 $b$-type lines harbouring hyperfine structure. These lines are shown in Fig. 1 and Fig. 2 and are listed in Table 2. Line identification was performed using the MADEX catalogue (Cernicharo 2012, see Table 2), which also includes predictions for the hyperfine structure. This detection confirms the presence of this species in space, recently claimed for the first time in TMC-1 by McGuire et al. (2020) using the Green Bank Telescope (GBT). These authors presented a 5$\sigma$ signal (18$\sigma$ in the impulse response function) obtained by an intensity- and noise-weighted average (“stack”) of the data at the expected frequencies of the HCCCH2CN lines that could be present within the noise level. It is worth noting that our 40 m survey of TMC-1 in the Q band is complementary to that performed with the GBT between 8 GHz and 30 GHz. Although most of the individual lines of HCCCH2CN are below the detection limit of the GBT data, four of them are detected at the 1-3$\sigma$ level. Thanks to the high spectral resolution of those data (1.4 kHz), they distinguished three cloud components in the line profiles (see Fossé et al. 2001 for a detailed analysis of the velocity structure of this source).
In this work, a single Gaussian function was fitted to the HCCCH2CN line profiles to obtain the observed line parameters (see Table 2). We derived $V_{\rm LSR}=(5.70\pm 0.09)$ km s-1 and a line width ($\Delta$$v$, full width at half maximum) of $(0.66\pm 0.18)$ km s-1. The former is slightly different from the value $(5.83\pm 0.01)$ km s-1 obtained by Cernicharo et al. (2020a) from Gaussian fits to the 50 lines of HC5N and its 13C and 15N isotopologues detected in our line survey. Note that we have a larger uncertainty due to the lower number of transitions and the weakness of some of the lines as compared to HC5N, in particular the $b$-type transitions.
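A minimal sketch of this fitting step is shown below; the spectrum is synthetic, generated from the derived line parameters, and merely stands in for the actual survey data:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(v, T_peak, v_lsr, fwhm):
    """Single Gaussian line profile parameterized by its FWHM."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return T_peak * np.exp(-0.5 * ((v - v_lsr) / sigma) ** 2)

v = np.arange(0.0, 12.0, 0.27)             # km/s, ~38 kHz channels
rng = np.random.default_rng(0)
spec = gauss(v, 5e-3, 5.70, 0.66) + rng.normal(0.0, 7e-4, v.size)  # T_A* [K]

popt, pcov = curve_fit(gauss, v, spec, p0=[3e-3, 5.5, 0.5])
perr = np.sqrt(np.diag(pcov))
print(f"V_LSR = {popt[1]:.2f} +/- {perr[1]:.2f} km/s, "
      f"FWHM  = {popt[2]:.2f} +/- {perr[2]:.2f} km/s")
```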
We also detected the other two C4H3N isomers, CH2CCHCN and CH3CCCN, using frequencies from the MADEX catalogue (Cernicharo 2012, see Table 2). The 16 lines of CH2CCHCN detected in our line survey are shown in Fig. 3 and listed in Table 2. All of them are detected above the 10$\sigma$ level. This species was previously identified in TMC-1 through four lines between 20 GHz and 26 GHz (Lovas et al., 2006). Here we report the first detection of lines of CH2CCHCN above 30 GHz in TMC-1. Kaifu et al. (2004) did not detect lines above the noise limit at the CH2CCHCN frequencies in their line survey between 8.8 GHz and 50 GHz carried out with the Nobeyama 45 m telescope. As we mentioned in previous works (Cernicharo et al., 2020a, b, c; Marcelino et al., 2020), the sensitivity of our observations is a factor of 5-10 better than that of Kaifu et al. (2004) at the same frequencies. The derived $V_{\rm LSR}$ for the CH2CCHCN lines, from fitting a single Gaussian, is $(5.66\pm 0.03)$ km s-1, which is similar, within errors, to the one obtained for HCCCH2CN. The isomer CH3CCCN, a well-known species in TMC-1 (Broten et al., 1984; Kaifu et al., 2004), has also been identified in our line survey through 10 strong lines ($J_{\rm u}$ from 8 to 12 and $K=0,1$), plus five tentatively detected $K=2$ lines ($E_{\rm u}>29$ K) (see Fig. 4 and Table 2). These lines show a $V_{\rm LSR}$ of $(5.80\pm 0.02)$ km s-1, which matches that observed for HC5N.
We can estimate rotational temperatures ($T_{\rm rot}$) and molecular column
densities ($N$) for the detected species by constructing rotational diagrams
(see e.g. Goldsmith & Langer 1999). This analysis assumes the Rayleigh-Jeans
approximation, optically thin lines, and LTE conditions. The equation that
derives the total column density under these conditions can be re-arranged as
${\rm\ln}\left(\frac{8\pi k_{\rm B}\nu^{2}\int{T_{\rm MB}dv}}{hc^{3}A_{\rm
ul}g_{\rm u}b}\right)={\rm\ln}\left(\frac{N}{Q_{\rm rot}}\frac{T_{\rm
rot}-T_{\rm bg}}{T_{\rm rot}}\right)-\frac{E_{\rm u}}{k_{\rm B}T_{\rm rot}},$
(1)
where $g_{u}$ is the statistical weight in the upper level, $A_{\rm ul}$ is
the Einstein $A$-coefficient for spontaneous emission, $Q_{\rm rot}$ is the
rotational partition function which depends on $T_{\rm rot}$, $E_{\rm u}$ is
the upper level energy, $\nu$ is the frequency of the transition, $b$ is the
dilution factor, and $T_{\rm bg}$ is the cosmic microwave background radiation
temperature. We assume a source diameter of 80′′ (see Fossé et al. 2001). The
first term of Eq. (1), which depends only on spectroscopic and observational
line parameters, is plotted as a function of $E_{\rm u}$/$k_{\rm B}$ for the
different lines detected. Thus, $T_{\rm rot}$ and $N$ can be derived by
performing a linear least squares fit to the points (see Fig. 5).
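A minimal sketch of this procedure is given below. The four line entries are hypothetical placeholders (the actual inputs come from Table 2 and the MADEX catalogue), the partition function is a stub, and the dilution factor $b$ is computed for the assumed 80′′ source observed with a representative Q-band beam:

```python
import numpy as np

h, c, k_B = 6.62607e-27, 2.99792458e10, 1.380649e-16   # CGS units
T_bg = 2.73                                            # K

theta_s, theta_b = 80.0, 45.0          # source size and HPBW, arcsec (assumed)
b = theta_s**2 / (theta_s**2 + theta_b**2)             # dilution factor

# hypothetical lines: nu [Hz], W = int T_MB dv [K cm/s], A_ul [1/s], g_u, E_u/k [K]
nu  = np.array([3.3e10, 3.7e10, 4.1e10, 4.5e10])
W   = np.array([2.0e4, 2.4e4, 2.1e4, 1.4e4])
Aul = np.array([1.0e-6, 1.4e-6, 1.9e-6, 2.5e-6])
gu  = np.array([18.0, 20.0, 22.0, 24.0])
Eu  = np.array([4.0, 6.0, 8.5, 11.5])

# left-hand side of equation (1), fitted linearly against E_u/k_B
y = np.log(8.0 * np.pi * k_B * nu**2 * W / (h * c**3 * Aul * gu * b))
slope, intercept = np.polyfit(Eu, y, 1)

T_rot = -1.0 / slope
Q_rot = 30.0                           # placeholder partition function at T_rot
N = np.exp(intercept) * Q_rot * T_rot / (T_rot - T_bg)
print(f"T_rot = {T_rot:.1f} K,  N = {N:.2e} cm^-2")
```

With these placeholder numbers the fit returns values of the same order as Table 1, but the real analysis of course uses the measured line parameters and the proper partition functions.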
Results for $T_{\rm rot}$ and $N$ from the population diagram procedure are shown in Table 1 and Fig. 5. The uncertainties were calculated using the statistical errors given by the linear least squares fit for the slope and the intercept. The individual errors of the data points take into account the uncertainties in the determination of the observed line parameters (see Table 2). For HCCCH2CN ($a$-type transitions) and CH2CCHCN, different hyperfine components of the same $(J_{K_{\rm a},K_{\rm c}})_{\rm u}-(J_{K_{\rm a},K_{\rm c}})_{\rm l}$ transition are blended in a single line. Thus, to correctly determine $T_{\rm rot}$ and $N$, the Einstein $A$-coefficient for spontaneous emission and the statistical weight of each such transition were taken as the appropriately weighted average and sum over its hyperfine components, and the rotational partition function was calculated using these effective statistical weights. For CH3CCCN we built independent rotational diagrams for each symmetry state, $A$ and $E$.
We obtained rotational temperatures between $4-8$ K for the three isomers (see Table 1), indicating that they are subthermally excited, like most species in this region (see e.g. Cernicharo et al. 2020a, c; Marcelino et al. 2020). For the column density, we derived very similar values for the three isomers, in the range $(1.5-3)\times 10^{12}$ cm-2.
Table 1: Derived rotational temperatures ($T_{\rm rot}$) and column densities ($N$) for the C4H3N isomers towards TMC-1 (CP).

Species | $T_{\rm rot}$ (K) | $N$ (cm-2)
---|---|---
HCCCH2CN | $4\pm 1$ | $(2.8\pm 0.7)\times 10^{12}$
CH2CCHCN | $5.5\pm 0.3$ | $(2.7\pm 0.2)\times 10^{12}$
A-CH3CCCN | $6.7\pm 0.2$ | $(9.7\pm 0.3)\times 10^{11}$
E-CH3CCCN | $8.2\pm 0.6$ | $(7.7\pm 0.5)\times 10^{11}$
## 4 Discussion
The chemistry of C4H3N isomers in cold molecular clouds has been discussed by
Balucani et al. (2000) and more specifically by Balucani et al. (2002), based
on crossed molecular beam experiments and _ab initio_ calculations. In these
studies it was pointed out that reactions of the CN radical with methyl
acetylene and allene are barrierless and exothermic when producing CH3C3N and
CH2CCHCN, in the methyl acetylene reaction, and CH2CCHCN and HCCCH2CN, in the
reaction involving allene. Indeed, the reactions of CN with CH3CCH and CH2CCH2
were measured to be rapid at low temperatures (Carty et al., 2001). This
chemical scheme was implemented in a chemical model by Quan & Herbst (2007) to
explain the abundance of cyanoallene in TMC-1. Later on, Abeysekera et al.
(2015) measured the product branching ratios of the reaction between CN and
methyl acetylene at low temperature using a chirped-pulse uniform flow and
found that HC3N is the major product, while CH3C3N accounts for 22 % of the
products and CH2CCHCN is not formed. These results are in contrast with those
obtained from crossed molecular beam experiments (Huang et al., 1999; Balucani
et al., 2000, 2002), where CH2CCHCN is observed as a product of the CN + CH3CCH
reaction. Therefore, the most stable isomer, CH3C3N, can be formed in the
reaction of CN and methyl acetylene. The second most stable isomer, CH2CCHCN,
can be formed when CN reacts with CH2CCH2, and perhaps also with CH3CCH,
depending on whether one gives credit to the chirped-pulse uniform flow
experiment or to the crossed molecular beam ones. The least stable isomer,
HCCCH2CN, can only be formed in the reaction between CN and allene. These
neutral-neutral reactions involving CN are therefore likely routes to the
three C4H3N isomers in cold interstellar clouds like TMC-1, where CN, CH3CCH,
and probably CH2CCH2 (which is non-polar and thus cannot be detected at radio
wavelengths) are abundant. Moreover, the presence of HCCCH2CN (and perhaps
also CH2CCHCN) can be used as a proxy for the non-polar C3H4 isomer allene,
since HCCCH2CN is only formed from CH2CCH2 in the aforementioned reactions of
CN.
In the light of the recent discovery of HCCCH2CN in TMC-1 and the
observational study of the three C4H3N isomers presented here, we have carried
out chemical model calculations to review the chemistry of these species in
cold clouds and evaluate whether the mechanism proposed by Balucani et al.
(2002) is in agreement with observations. We adopt typical parameters of cold
dark clouds, i.e., a gas kinetic temperature of 10 K, a volume density of H
nuclei of $2\times 10^{4}$ cm-3, a visual extinction of 30 mag, a cosmic-ray
ionization rate of H2 of $1.3\times 10^{-17}$ s-1, and the so-called
"low-metal" elemental abundances (Agúndez & Wakelam, 2013). We use the chemical
network RATE12 from the UMIST database (McElroy et al., 2013), updated to
include the C4H3N isomers CH2CCHCN and HCCCH2CN. The reactions
$$\begin{array}{rlr}
\rm CN+CH_{3}CCH & \rightarrow\ \rm HCN+CH_{2}CCH, & \text{(2a)}\\
 & \rightarrow\ \rm HC_{3}N+CH_{3}, & \text{(2b)}\\
 & \rightarrow\ \rm CH_{3}C_{3}N+H, & \text{(2c)}\\
 & \rightarrow\ \rm CH_{2}CCHCN+H, & \text{(2d)}\\
\rm CN+CH_{2}CCH_{2} & \rightarrow\ \rm CH_{2}CCHCN+H, & \text{(3a)}\\
 & \rightarrow\ \rm HCCCH_{2}CN+H, & \text{(3b)}
\end{array}$$
are included with the total rate constants measured by Carty et al. (2001).
For the branching ratios of reaction (2) we use either the values measured in
the chirped-pulse uniform flow experiment by Abeysekera et al. (2015), namely
12 %, 66 %, 22 %, and 0 % for channels (a), (b), (c), and (d), respectively,
or the values suggested by crossed molecular beam experiments and quantum
chemical calculations (Balucani et al. 2000), namely 50 % each for channels
(c) and (d). For reaction (3) we adopt branching ratios of 90 % and 10 % for
channels (a) and (b), respectively, based on the quantum chemical calculations
by Balucani et al. (2002). The destruction processes of CH2CCHCN and HCCCH2CN
are assumed to be the same as those of CH3C3N, which are basically reactions
with abundant cations.
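As a minimal illustration of how these branching ratios enter the network, the channel-specific rate coefficients are simply the total rate constant scaled by the branching fractions; in the sketch below, `k_tot` is a placeholder standing in for the low-temperature rate constant of Carty et al. (2001), not a value quoted from that work:

```python
# Per-channel rate coefficients k_i = f_i * k_tot for reactions (2) and (3).
k_tot = 4.0e-10  # cm^3 s^-1; placeholder for the measured total rate constant

# Branching ratios of CN + CH3CCH from Abeysekera et al. (2015)
f2 = {"HCN + CH2CCH": 0.12, "HC3N + CH3": 0.66,
      "CH3C3N + H": 0.22, "CH2CCHCN + H": 0.00}
# Branching ratios of CN + CH2CCH2 from Balucani et al. (2002)
f3 = {"CH2CCHCN + H": 0.90, "HCCCH2CN + H": 0.10}

k2 = {channel: f * k_tot for channel, f in f2.items()}
k3 = {channel: f * k_tot for channel, f in f3.items()}
```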
The calculated abundances of the three C4H3N isomers are shown as a function
of time in Fig. 6. It is seen that the three isomers reach their maximum
abundance at early times, in the range $(1-4)\times 10^{5}$ yr, with CH3C3N
being the most abundant and HCCCH2CN being the least abundant. According to
the chemical model, the formation of CH3C3N occurs through two routes. The
first and major one involves the dissociative recombination of the precursor
ion CH3C3NH+ with electrons and is responsible for the larger calculated
abundance of CH3C3N compared to the other two isomers. A second and minor
route is provided by reaction (2c). Cyanoallene is formed through reaction
(3), with channel (2d) contributing at a similar level if it is assumed to be
open. Propargyl cyanide is exclusively formed through reaction
(3), with a lower abundance because it is formed with a branching ratio of
just 10 %. The impact of using the branching ratios for reaction (2) of
Balucani et al. (2000) or those of Abeysekera et al. (2015) is modest, with
the main effect being a change of less than a factor of two in the abundance
of CH2CCHCN (see Fig. 6).
The fact that the observed abundances of the three isomers are remarkably
similar provides clues on the underlying chemical processes at work. For
example, the route to CH3C3N from the precursor ion CH3C3NH+ is probably
overestimated in the chemical model, as indicated by the excessively large
abundance calculated for this species. It has become clear in recent years
that the dissociative recombination of polyatomic ions usually results in much
larger fragmentation than previously believed (Larsson et al., 2012), so it
would not be strange if CH3C3N were a minor product of the dissociative
recombination of CH3C3NH+. The low branching ratio adopted for HCCCH2CN
formation in reaction (3), based on the calculations by Balucani et al.
(2002), also seems to be in conflict with the observational finding of similar
abundances for CH2CCHCN and HCCCH2CN. It would be very interesting to measure
the product branching ratios for the reaction of CN with allene, as was done
for CN + CH3CCH (Abeysekera et al., 2015), to shed light on the formation
routes of these two metastable C4H3N isomers. This would also allow us to put
tight constraints on the abundance of allene in cold dense clouds.
In summary, the similar abundances observed for the three C4H3N isomers favor
a common origin through reactions (2) and (3), with similar branching ratios
in the latter reaction. If this scenario is correct, we can conclude that allene
is as abundant as methyl acetylene in TMC-1. This is in fact predicted by the
chemical model, where CH3CCH and CH2CCH2 are mostly formed during the
dissociative recombination of the C3H${}_{7}^{+}$ ion (Larsson et al., 2005),
with similar branching ratios assumed for the two C3H4 isomers.
In addition to the three C4H3N isomers and the well known species HC3N and
CH2CHCN, Balucani et al. (2000) predicted the presence of $c$-C6H5CN and the
C5H5N isomer CH2CC(CN)CH3 in cold interstellar clouds. It is worth noting that
all these species but CH2CC(CN)CH3 have been identified in TMC-1 (see McGuire
et al. 2018 for the detection of cyanobenzene) and are also present in our
survey. Another $-$CN species, cyanocyclopentadiene ($c$-C5H5CN), has been
recently detected in this source (McCarthy et al., 2020). A complete study of
the molecular rings $c$-C6H5CN and $c$-C5H5CN in our data will be published
elsewhere. We searched in our data for the two C5H5N isomers CH3CH2CCCN and
CH3CHCCHCN by performing a line stacking analysis (see, e.g., Cuadrado et al.
2016; Loomis et al. 2020). We added the spectra at the expected frequencies of
several lines of these species that could be present within the noise level.
More concretely, we considered $a$-type transitions sharing similar upper
level energies, up to 15 K, and similar Einstein coefficients. All spectra, in
the local standard of rest (LSR) velocity scale, were resampled to the same
velocity channel resolution before stacking. Figure 7 shows the spectra
obtained following this method. Whereas there is no evidence for the presence
of CH3CH2CCCN in our data, the stacked spectrum of CH3CHCCHCN shows a
2$\sigma$ signal at the systemic velocity of the source. An observational
effort at lower frequencies has to be undertaken to confirm the presence of
CH3CHCCHCN in space.
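The stacking procedure itself amounts to resampling each selected spectrum onto a common LSR velocity grid and averaging; the following sketch (our own naming, with synthetic noise-only data) illustrates an inverse-variance weighted version of this step:

```python
import numpy as np

def stack_spectra(velocities, intensities, rms, v_grid):
    """Resample each spectrum onto a common LSR velocity grid and
    average with inverse-variance (1/rms^2) weights."""
    num = np.zeros_like(v_grid)
    den = 0.0
    for v, t, sig in zip(velocities, intensities, rms):
        t_interp = np.interp(v_grid, v, t)  # resample to the common grid
        num += t_interp / sig**2
        den += 1.0 / sig**2
    return num / den

# Example with synthetic data: three noise-only spectra
v_grid = np.arange(-10.0, 20.0, 0.2)            # km/s
rng = np.random.default_rng(0)
vs = [np.arange(-12.0, 22.0, 0.3)] * 3
ts = [0.002 * rng.standard_normal(v.size) for v in vs]
stacked = stack_spectra(vs, ts, [0.002] * 3, v_grid)
```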
## 5 Conclusions
Using a very sensitive line survey of TMC-1 in the Q band we have detected
multiple transitions of the three C4H3N isomers CH3C3N, CH2CCHCN, and
HCCCH2CN. The presence of the latter in TMC-1 is supported by 27 observed
individual lines. We have constructed rotational diagrams for the three
species and obtained similar rotational temperatures and column densities for
the three isomers, in the range of $4-8$ K and $(1.5-3)\times 10^{12}$ cm-2,
respectively. The observed abundances of the three isomers in TMC-1 suggest a
similar chemical origin based on reactions of the radical CN with the isomers
CH3CCH and CH2CCH2. There are still uncertainties in the network of reactions
related to these species since our chemical model overestimates the abundance
of CH3C3N and underestimates the production of HCCCH2CN. Further studies of
these isomers in other sources could help in clarifying their chemical
formation pathways.
###### Acknowledgements.
We acknowledge funding support from the European Research Council (ERC Grant
610256: NANOCOSMOS). We also thank the Spanish MICIU for funding support under
grants AYA2016-75066-C2-1-P, PID2019-106110GB-I00, PID2019-107115GB-C21, and
PID2019-106235GB-I00. M.A. thanks MICIU for grant RyC-2014-16277.
## References
* Abeysekera et al. (2015) Abeysekera, C., Joalland, B., Ariyasingha, N., et al. 2015, The Journal of Physical Chemistry Letters, 6, 1599
* Agúndez & Wakelam (2013) Agúndez, M. & Wakelam, V. 2013, Chemical Reviews, 113, 8710
* Balucani et al. (2000) Balucani, N., Asvany, O., Huang, L. C. L., et al. 2000, ApJ, 545, 892
* Balucani et al. (2002) Balucani, N., Asvany, O., Kaiser, R. I., & Osamura, Y. 2002, Journal of Physical Chemistry A, 106, 4301
* Bester et al. (1983) Bester, M., Tanimoto, M., Vowinkel, B., Winnewisser, G., & Yamada, K. 1983, Zeitschrift Naturforschung Teil A, 38, 64
* Bester et al. (1984) Bester, M., Yamada, K., Winnewisser, G., et al. 1984, A&A, 137, L20
* Bouchy et al. (1973) Bouchy, A., Demaison, J., Roussy, G., & Barriol, J. 1973, Journal of Molecular Structure, 18, 211
* Broten et al. (1984) Broten, N. W., MacLeod, J. M., Avery, L. W., et al. 1984, ApJ, 276, L25
* Carty et al. (2001) Carty, D., Le Page, V., Sims, I. R., & Smith, I. W. M. 2001, Chemical Physics Letters, 344, 310
* Cernicharo (1985) Cernicharo, J. 1985, Internal IRAM Report (Granada: IRAM)
* Cernicharo (2012) Cernicharo, J. 2012, in EAS Publications Series, Vol. 58, EAS Publications Series, ed. C. Stehlé, C. Joblin, & L. d’Hendecourt, 251–261
* Cernicharo & Guelin (1987) Cernicharo, J. & Guelin, M. 1987, A&A, 176, 299
* Cernicharo et al. (2020a) Cernicharo, J., Marcelino, N., Agúndez, M., et al. 2020a, A&A, 642, L8
* Cernicharo et al. (2020b) Cernicharo, J., Marcelino, N., Agúndez, M., et al. 2020b, A&A, 642, L17
* Cernicharo et al. (2020c) Cernicharo, J., Marcelino, N., Pardo, J. R., et al. 2020c, A&A, 641, L9
* Chin et al. (2006) Chin, Y.-N., Kaiser, R. I., Lemme, C., & Henkel, C. 2006, in American Institute of Physics Conference Series, Vol. 855, Astrochemistry - From Laboratory Studies to Astronomical Observations, ed. R. I. Kaiser, P. Bernath, Y. Osamura, S. Petrie, & A. M. Mebel, 149–153
* Cuadrado et al. (2016) Cuadrado, S., Goicoechea, J. R., Roncero, O., et al. 2016, A&A, 596, L1
* de Vicente et al. (2016) de Vicente, P., Bujarrabal, V., Díaz-Pulido, A., et al. 2016, A&A, 589, A74
* Demaison et al. (1985) Demaison, J., Pohl, I., & Rudolph, H. D. 1985, Journal of Molecular Spectroscopy, 114, 210
* Fossé et al. (2001) Fossé, D., Cernicharo, J., Gerin, M., & Cox, P. 2001, ApJ, 552, 168
* Goldsmith & Langer (1999) Goldsmith, P. F. & Langer, W. D. 1999, ApJ, 517, 209
* Huang et al. (1999) Huang, L. C. L., Balucani, N., Lee, Y. T., Kaiser, R. I., & Osamura, Y. 1999, The Journal of Chemical Physics, 111, 2857
* Kaifu et al. (2004) Kaifu, N., Ohishi, M., Kawaguchi, K., et al. 2004, PASJ, 56, 69
* Larsson et al. (2005) Larsson, M., Ehlerding, A., Geppert, W. D., et al. 2005, J. Chem. Phys., 122, 156101
* Larsson et al. (2012) Larsson, M., Geppert, W. D., & Nyman, G. 2012, Reports on Progress in Physics, 75, 066901
* Loomis et al. (2020) Loomis, R. A., Burkhardt, A. M., Shingledecker, C. N., et al. 2020, arXiv e-prints, arXiv:2009.11900
* Lovas et al. (2006) Lovas, F. J., Remijan, A. J., Hollis, J. M., Jewell, P. R., & Snyder, L. E. 2006, ApJ, 637, L37
* Marcelino et al. (2020) Marcelino, N., Agúndez, M., Tercero, B., et al. 2020, A&A, 643, L6
* McCarthy et al. (2020) McCarthy, M. C., Lee, K. L. K., Loomis, R. A., et al. 2020, Nature Astronomy [arXiv:2009.13546]
* McElroy et al. (2013) McElroy, D., Walsh, C., Markwick, A. J., et al. 2013, A&A, 550, A36
* McGuire et al. (2018) McGuire, B. A., Burkhardt, A. M., Kalenskii, S., et al. 2018, Science, 359, 202
* McGuire et al. (2020) McGuire, B. A., Burkhardt, A. M., Loomis, R. A., et al. 2020, ApJ, 900, L10
* McNaughton et al. (1988) McNaughton, D., Romeril, N. G., Lappert, M. F., & Kroto, H. W. 1988, Journal of Molecular Spectroscopy, 132, 407
* Moïses et al. (1982) Moïses, A., Boucher, D., Burie, J., Demaison, J., & Dubrulle, A. 1982, Journal of Molecular Spectroscopy, 92, 497
* Pardo et al. (2001) Pardo, J. R., Cernicharo, J., & Serabyn, E. 2001, IEEE Transactions on Antennas and Propagation, 49, 1683
* Quan & Herbst (2007) Quan, D. & Herbst, E. 2007, A&A, 474, 521
* Schwahn et al. (1986) Schwahn, G., Schieder, R., Bester, M., & Winnewisser, G. 1986, Journal of Molecular Spectroscopy, 116, 263
* Tercero et al. (2020a) Tercero, B., Cernicharo, J., Cuadrado, S., de Vicente, P., & Guélin, M. 2020a, A&A, 636, L7
* Tercero et al. (2020b) Tercero, F., López-Pérez, J. A., Gallego, J. D., et al. 2020b, arXiv e-prints, arXiv:2010.16224
## Appendix A Additional figures and tables
Table 2: Observed lines of C4H3N isomers towards TMC-1 (CP).
Transition | Rest Freq. | $E_{\rm up}$ | $A_{ij}$ | $S_{ij}$ | $\int T_{\rm A}^{*}dv$ | $V_{\rm LSR}$ | $\Delta v$ | $T_{\rm A}^{*}$
---|---|---|---|---|---|---|---|---
$(J_{K_{\rm a},K_{\rm c}})_{\rm u}-(J_{K_{\rm a},K_{\rm c}})_{\rm l}$ | (MHz) | (K) | (10-6 s-1) | | (K km s-1) | (km s-1) | (km s-1) | (K)
HCCCH2CN, $a$-type, $\mu_{\rm a}=2.87$ D
$6_{1,6}-5_{1,5}$ | 31848.982(3) | 6.2 | 1.39 | 5.83 | 0.0093(10) | 5.83( 2) | 0.79( 6) | 0.0111( 8)
$6_{0,6}-5_{0,5}$ | 32722.702(3) | 5.5 | 1.55 | 5.99 | 0.0099( 5) | 5.67( 1) | 0.78( 3) | 0.0120( 4)
$6_{2,5}-5_{2,4}$ | 32876.187(3) | 8.8 | 1.40 | 5.33 | 0.0025( 6) | 5.51( 6) | 0.66(14) | 0.0036( 5)
$6_{2,4}-5_{2,3}$ | 33048.726(3) | 8.8 | 1.42 | 5.33 | 0.0016( 6) | 5.49(10) | 0.70(20) | 0.0021( 5)
$6_{1,5}-5_{1,4}$ | 33863.716(3) | 6.5 | 1.67 | 5.83 | 0.0072( 5) | 5.66( 2) | 0.79( 4) | 0.0085( 4)
$7_{1,7}-6_{1,6}$ | 37139.207(4) | 8.0 | 2.24 | 6.86 | 0.0081( 5) | 5.74( 2) | 0.69( 3) | 0.0110( 5)
$7_{0,7}-6_{0,6}$ | 38102.698(4) | 7.3 | 2.47 | 6.99 | 0.0110( 6) | 5.70( 1) | 0.74( 3) | 0.0140( 5)
$7_{2,6}-6_{2,5}$ | 38342.339(4) | 10.6 | 2.32 | 6.43 | 0.0016( 6) | 5.56( 9) | 0.57(18) | 0.0027( 7)
$7_{2,5}-6_{2,4}$ | 38616.702(4) | 10.7 | 2.37 | 6.43 | 0.0039( 5) | 5.64( 3) | 0.60( 5) | 0.0061( 6)
$7_{1,6}-6_{1,5}$ | 39486.580(4) | 8.4 | 2.70 | 6.86 | 0.0075( 5) | 5.66( 2) | 0.60( 3) | 0.0117( 6)
$8_{1,8}-7_{1,7}$ | 42421.779(4) | 10.0 | 3.39 | 7.87 | 0.0077( 8) | 5.74( 2) | 0.59( 5) | 0.0122( 9)
$8_{0,8}-7_{0,7}$ | 43450.742(4) | 9.4 | 3.70 | 7.99 | 0.0109(11) | 5.73( 2) | 0.73( 5) | 0.0140(10)
$8_{2,7}-7_{2,6}$ | 43802.419(4) | 12.7 | 3.55 | 7.50 | 0.0028( 8) | 5.69( 4) | 0.48(10) | 0.0056(10)
$8_{2,6}-7_{2,5}$ | 44210.195(4) | 12.8 | 3.66 | 7.50 | 0.0018(16) | 5.70( 5) | 0.35(31) | 0.0047(11)
$8_{1,7}-7_{1,6}$ | 45099.074(4) | 10.6 | 4.07 | 7.87 | 0.0125( 9) | 5.69( 1) | 0.61( 3) | 0.0192(10)
$9_{1,9}-8_{1,8}$ | 47696.032(5) | 12.3 | 4.86 | 8.89 | 0.0036(13) | 5.74( 6) | 0.50(13) | 0.0068(17)
$9_{0,9}-8_{0,8}$ | 48764.484(5) | 11.8 | 5.26 | 8.98 | 0.0078(16) | 5.76( 4) | 0.63(10) | 0.0117(14)
HCCCH2CN, $b$-type, $\mu_{\rm b}=2.19$ D
$3_{1,3}-2_{0,2},F_{\rm u}-F_{\rm l}=3-2$ | 32519.775(3) | 2.3 | 0.49 | 1.79 | 0.0031( 8) | 5.59( 9)∗ | 1.06(20) | 0.0028( 5)
$3_{1,3}-2_{0,2},F_{\rm u}-F_{\rm l}=2-1$ | 32519.815(3) | 2.3 | 0.46 | 1.21 |
$3_{1,3}-2_{0,2},F_{\rm u}-F_{\rm l}=4-3$ | 32519.916(3) | 2.3 | 0.55 | 2.58 | 0.0013( 4) | 5.73( 7) | 0.58(22) | 0.0021( 4)
$9_{0,9}-8_{1,8},F_{\rm u}-F_{\rm l}=8-7$ | 36933.586(4) | 11.8 | 0.74 | 4.44 | 0.0034(17) | 5.80(11)∗ | 1.16(31) | 0.0027( 6)
$9_{0,9}-8_{1,8},F_{\rm u}-F_{\rm l}=10-9$ | 36933.621(4) | 11.8 | 0.75 | 5.57 |
$9_{0,9}-8_{1,8},F_{\rm u}-F_{\rm l}=9-8$ | 36933.800(4) | 11.8 | 0.74 | 4.98 | 0.0020( 8) | 5.86(12) | 0.84(20) | 0.0023( 6)
$4_{1,4}-3_{0,3},F_{\rm u}-F_{\rm l}=4-3$ | 37340.269(4) | 3.4 | 0.77 | 2.38 | 0.0025( 4) | 5.78( 7) | 0.82(13) | 0.0029( 6)
$4_{1,4}-3_{0,3},F_{\rm u}-F_{\rm l}=3-2$ | 37340.455(4) | 3.4 | 0.75 | 1.81 | 0.0030( 3) | 5.78( 4)∗ | 0.60( 7) | 0.0047( 5)
$4_{1,4}-3_{0,3},F_{\rm u}-F_{\rm l}=5-4$ | 37340.473(4) | 3.4 | 0.82 | 3.10 |
$5_{1,5}-4_{0,4},F_{\rm u}-F_{\rm l}=5-4$ | 42010.978(4) | 4.6 | 1.12 | 2.96 | 0.0017( 5) | 5.85( 6) | 0.46(19) | 0.0035(10)
$5_{1,5}-4_{0,4},F_{\rm u}-F_{\rm l}=6-5$ | 42011.211(4) | 4.6 | 1.16 | 3.65 | 0.0028(13) | 5.80( 6)∗ | 0.54(15) | 0.0050(10)
$5_{1,5}-4_{0,4},F_{\rm u}-F_{\rm l}=4-3$ | 42011.215(4) | 4.6 | 1.10 | 2.40 |
$6_{1,6}-5_{0,5},F_{\rm u}-F_{\rm l}=6-5$ | 46545.801(5) | 6.2 | 1.55 | 3.57 | 0.0013( 5) | 5.63(11) | 0.48(20) | 0.0026(12)
$6_{1,6}-5_{0,5},F_{\rm u}-F_{\rm l}=7-6$ | 46546.045(5) | 6.2 | 1.59 | 4.24 | 0.0041( 7) | 5.69( 6)∗ | 0.54( 9) | 0.0071(11)
$6_{1,6}-5_{0,5},F_{\rm u}-F_{\rm l}=5-4$ | 46546.057(5) | 6.2 | 1.54 | 3.00 |
CH2CCHCN, $\mu_{\rm a}=4.07$ D
$6_{1,5}-5_{1,4}$ | 31615.627(5) | 6.4 | 2.73 | 5.83 | 0.0247(12) | 5.64( 1) | 0.80( 3) | 0.0289( 9)
$7_{1,7}-6_{1,6}$ | 35379.044(5) | 7.9 | 3.90 | 6.86 | 0.0213( 6) | 5.68( 1) | 0.77( 2) | 0.0259( 5)
$7_{0,7}-6_{0,6}$ | 36064.688(5) | 6.9 | 4.22 | 7.00 | 0.0253( 4) | 5.64( 1) | 0.69( 1) | 0.0347( 4)
$7_{2,6}-6_{2,5}$ | 36140.273(5) | 11.4 | 3.90 | 6.43 | 0.0086( 8) | 5.61( 3) | 0.90( 6) | 0.0090( 6)
$7_{2,5}-6_{2,4}$ | 36222.501(5) | 11.4 | 3.93 | 6.43 | 0.0095( 6) | 5.63( 2) | 0.90( 4) | 0.0099( 5)
$7_{1,6}-6_{1,5}$ | 36878.547(5) | 8.2 | 4.42 | 6.86 | 0.0226( 5) | 5.63( 1) | 0.70( 1) | 0.0300( 5)
$8_{1,8}-7_{1,7}$ | 40425.712(6) | 9.9 | 5.90 | 7.87 | 0.0198( 8) | 5.70( 1) | 0.60( 2) | 0.0311( 8)
$8_{0,8}-7_{0,7}$ | 41187.082(6) | 8.9 | 6.34 | 8.00 | 0.0245( 6) | 5.66( 1) | 0.59( 1) | 0.0388( 7)
$8_{2,7}-7_{2,6}$ | 41297.656(6) | 13.4 | 5.99 | 7.50 | 0.0088( 7) | 5.67( 2) | 0.65( 4) | 0.0128( 6)
$8_{2,6}-7_{2,5}$ | 41420.713(6) | 13.4 | 6.04 | 7.50 | 0.0084( 7) | 5.64( 2) | 0.65( 4) | 0.0121( 6)
$8_{1,7}-7_{1,6}$ | 42138.451(6) | 10.2 | 6.68 | 7.87 | 0.0194( 9) | 5.63( 1) | 0.52( 2) | 0.0348(10)
$9_{1,9}-8_{1,8}$ | 45469.519(6) | 12.0 | 8.48 | 8.89 | 0.0176( 9) | 5.69( 1) | 0.56( 2) | 0.0293(11)
$9_{0,9}-8_{0,8}$ | 46297.882(6) | 11.1 | 9.06 | 9.00 | 0.0205(11) | 5.67( 1) | 0.55( 2) | 0.0352(13)
$9_{2,8}-8_{2,7}$ | 46452.840(6) | 15.6 | 8.70 | 8.56 | 0.0097(10) | 5.74( 2) | 0.66( 5) | 0.0138(10)
$9_{2,7}-8_{2,6}$ | 46628.069(6) | 15.7 | 8.80 | 8.56 | 0.0083( 9) | 5.64( 2) | 0.60( 4) | 0.0129(11)
$9_{1,8}-8_{1,7}$ | 47394.848(6) | 12.5 | 9.60 | 8.89 | 0.0152(13) | 5.63( 2) | 0.62( 4) | 0.0232(13)
CH3CCCN, $\mu_{\rm a}=4.75$ D
E $8_{2}-7_{2}$ | 33050.3475(8) | 29.4 | 4.18 | 7.50 | 0.0044( 9) | 5.14( 9) | 1.41(16) | 0.0029( 5)
E $8_{1}-7_{1}$ | 33051.3033(9) | 6.9 | 4.39 | 7.88 | 0.0472( 8) | 5.79( 1) | 0.76( 1) | 0.0580( 6)
A $8_{0}-7_{0}$ | 33051.6219(9) | 7.1 | 4.46 | 8.00 | 0.0485( 8) | 5.80( 1) | 0.74( 1) | 0.0616( 6)
E $9_{2}-8_{2}$ | 37181.5838(9) | 31.2 | 6.08 | 8.56 | 0.0013( 7) | 5.77(10) | 0.64(24) | 0.0020( 7)
E $9_{1}-8_{1}$ | 37182.659(1) | 8.7 | 6.32 | 8.89 | 0.0421( 8) | 5.79( 1) | 0.68( 1) | 0.0585( 7)
A $9_{0}-8_{0}$ | 37183.017(1) | 8.9 | 6.40 | 9.00 | 0.0455( 8) | 5.79( 1) | 0.68( 1) | 0.0632( 7)
E $10_{2}-9_{2}$ | 41312.799(1) | 33.2 | 8.46 | 9.60 | 0.0025(11) | 5.42( 8) | 0.55(15) | 0.0031(10)
E $10_{1}-9_{1}$ | 41313.994(1) | 10.7 | 8.73 | 9.90 | 0.0372(64) | 5.81( 4) | 0.58(10) | 0.0604( 9)
A $10_{0}-9_{0}$ | 41314.393(1) | 10.9 | 8.81 | 10.0 | 0.0412(51) | 5.83( 3) | 0.57( 7) | 0.0674( 9)
E $11_{2}-10_{2}$ | 45443.993(1) | 35.4 | 11.4 | 10.6 | … | … | … | $\leq$0.0050(10)
E $11_{1}-10_{1}$ | 45445.307(1) | 12.9 | 11.7 | 10.9 | 0.0294(42) | 5.80( 3) | 0.57( 8) | 0.0483(12)
A $11_{0}-10_{0}$ | 45445.745(1) | 13.1 | 11.8 | 11.0 | 0.0321(42) | 5.82( 3) | 0.62( 8) | 0.0487(12)
E $12_{2}-11_{2}$ | 49575.162(1) | 37.7 | 14.9 | 11.7 | … | … | … | $\leq$0.0060(20)
E $12_{1}-11_{1}$ | 49576.596(1) | 15.3 | 15.2 | 11.9 | 0.0242(52) | 5.75( 5) | 0.69(14) | 0.0328(26)
A $12_{0}-11_{0}$ | 49577.073(1) | 15.5 | 15.4 | 12.0 | 0.0269(38) | 5.81( 4) | 0.64( 8) | 0.0393(26)
Notes. ∗ The LSR velocity corresponds to the strongest hyperfine transition.
For hyperfine components blended in a single line, the integrated intensity
refers to the blend and is listed on the first row of the pair. Numbers in
parentheses indicate the uncertainty in units of the last significant digits.
For the observational parameters we adopted the uncertainty of the Gaussian
fit provided by GILDAS. HCCCH2CN: Spectroscopic line parameters were obtained
using MADEX by fitting the rotational lines reported by Demaison et al. (1985)
and McNaughton et al. (1988). Dipole moments are from McNaughton et al.
(1988). CH2CCHCN: Spectroscopic line parameters were obtained using MADEX by
fitting the rotational lines reported by Bouchy et al. (1973) and Schwahn et
al. (1986). The dipole moment is from Bouchy et al. (1973). CH3CCCN:
Spectroscopic line parameters were obtained using MADEX by fitting the
rotational lines reported by Moïses et al. (1982) and Bester et al. (1983).
The rotational constant $A$ and the centrifugal distortion constant $D_{K}$
have been assumed to be the same as those of CH3CN. Some additional data have
been taken from the CDMS (https://cdms.astro.uni-koeln.de/). The dipole moment
is from Bester et al. (1984). Note that the E species lies 7.8 K above the A
species, and energies for the E species are referred to its lowest energy
level (1,1).
Figure 3: Observed lines of CH2CCHCN toward TMC-1 (CP). The vertical dashed green line marks a radial velocity of 5.7 km s-1.
Figure 4: Observed lines of CH3CCCN towards TMC-1 (CP). The dashed green line marks a radial velocity of 5.8 km s-1.
Figure 5: Rotational diagrams of the C4H3N isomers towards TMC-1 (CP). Derived values of the rotational temperature, $T_{\rm rot}$, column density, $N$, and their respective uncertainties are indicated for each molecule.
Figure 6: Calculated fractional abundances of the three C4H3N isomers as a function of time. Solid and dashed lines correspond to two models in which we use branching ratios for the CN + CH3CCH reaction from Abeysekera et al. (2015) and from Balucani et al. (2000), respectively (see text). The abundances observed in TMC-1 for the three C4H3N isomers (from Table 1, adopting an H2 column density of $10^{22}$ cm-2; Cernicharo & Guelin 1987) are shown as horizontal dotted lines.
Figure 7: Stacked spectra of CH3CH2CCCN and CH3CHCCHCN toward TMC-1.
J. Zeng: School of Computer and Information Engineering, Jiangxi Normal
University, Nanchang, China; Liu Bie Ju Centre for Mathematical Sciences, City
University of Hong Kong, Hong Kong. Email:<EMAIL_ADDRESS>
W. Yin: Department of Mathematics, University of California, Los Angeles, CA.
Email:<EMAIL_ADDRESS>
D.-X. Zhou: School of Data Science, Department of Mathematics, and Liu Bie Ju
Centre for Mathematical Sciences, City University of Hong Kong, Hong Kong.
Email:<EMAIL_ADDRESS>
# Moreau Envelope Augmented Lagrangian Method for Nonconvex Optimization with Linear Constraints

Acknowledgements: We thank Kaizhao Sun for discussions that helped us complete
this paper, and for presenting to us an additional approach to ensure
boundedness. The work of J. Zeng is partly supported by the National Natural
Science Foundation of China (No. 61977038) and the Thousand Talents Plan of
Jiangxi Province (No. jxsq2019201124). The work of D.-X. Zhou is partly
supported by the Research Grants Council of Hong Kong (No. CityU 11307319),
the Laboratory for AI-powered Financial Technologies, and the Hong Kong
Institute for Data Science.
Jinshan Zeng Wotao Yin Ding-Xuan Zhou
(Received: date / Accepted: date)
###### Abstract
The augmented Lagrangian method (ALM) is one of the most useful methods for
constrained optimization. Its convergence has been well established under
convexity or smoothness assumptions, or both. However,
ALM may experience oscillations and divergence when the underlying problem is
simultaneously nonconvex and nonsmooth. In this paper, we consider the
linearly constrained problem with a nonconvex (in particular, weakly convex)
and nonsmooth objective. We modify ALM to use a Moreau envelope of the
augmented Lagrangian and establish its convergence under conditions that are
weaker than those in the literature. We call it the Moreau envelope augmented
Lagrangian (MEAL) method. We also show that the iteration complexity of MEAL
is $o(\varepsilon^{-2})$ to yield an $\varepsilon$-accurate first-order
stationary point. We establish its whole sequence convergence (regardless of
the initial guess) and a rate when a Kurdyka-Łojasiewicz property is assumed.
Moreover, when the subproblem of MEAL has no closed-form solution and is
difficult to solve, we propose two practical variants of MEAL, an inexact
version called iMEAL with an approximate proximal update, and a linearized
version called LiMEAL for the constrained problem with a composite objective.
Their convergence is also established.
###### Keywords:
Nonconvex nonsmooth optimization · augmented Lagrangian method · Moreau envelope · proximal augmented Lagrangian method · Kurdyka-Łojasiewicz inequality
## 1 Introduction
In this paper, we consider the following optimization problem with linear
constraints
$\begin{array}[]{ll}\mathrm{minimize}_{x\in\mathbb{R}^{n}}&f(x)\\\
\mathrm{subject\ to}&Ax=b,\end{array}$ (1)
where $f:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is a proper, lower-
semicontinuous weakly convex function, which is possibly nonconvex and
nonsmooth, $A\in\mathbb{R}^{m\times n}$ and $b\in\mathbb{R}^{m}$ are some
given matrix and vector, respectively. A function $f$ is said to be weakly
convex with a modulus $\rho>0$ if $f(x)+\frac{\rho}{2}\|x\|^{2}$ is convex on
$\mathbb{R}^{n}$, where $\|\cdot\|$ is the Euclidean norm. The class of weakly
convex functions is broad Nurminskii73 , including all convex functions,
smooth but nonconvex functions with Lipschitz continuous gradients, and their
composite forms (say, $f(x)=h(x)+g(x)$ with both $h$ and $g$ weakly convex,
and $f(x)=g(h(x))$ with $g$ convex and Lipschitz continuous and $h$ a smooth
mapping with a Lipschitz Jacobian; see (Drusvyatskiy-Paquette19, Lemma 4.2)).
The augmented Lagrangian method (ALM) is a well-known algorithm for
constrained optimization, introduced by Hestenes Hestenes69 and Powell
Powell69 . ALM has been extensively studied and has a large body of literature
(Bertsekas73 ; Birgin10 ; Conn91 ; Conn96 ; Rockafellar73-ALM , just to name a
few), yet _no ALM algorithm can solve the underlying problem (1) without at
least one of the following assumptions_: convexity Bertsekas73 ; Bertsekas76 ;
Fernadez12 ; Polyak-Tretyakov73 ; Rockafellar73-ALM , or smoothness Andreani08
; Andreani10 ; Andreani19 ; Andreani18 ; Curtis15 , or solving nonconvex
subproblems to their global minima Birgin10 ; Birgin18 , or an auto-updated
penalty sequence staying bounded on the problem at hand Birgin20 ;
Grapiglia-Yuan19 . Indeed, without these assumptions, ALM may oscillate and
even diverge unboundedly on simple quadratic programs with weakly convex
objectives Wang19 ; Zhang-Luo18 . An example is given in Sec. 7.1 below.
At a high level, we introduce a Moreau-envelope modification of the ALM for
solving (1) and show the method can converge under weaker conditions. In
particular, convexity is relaxed to weak convexity; nonsmooth functions are
allowed; the subproblems can be solved inexactly to some extent; linearization
can be applied to the Lipschitz-differentiable function in the objective; and,
there is no assumption on the rank of $A$. On the other hand, we introduce two
alternative subgradient properties in Definition 1 below as our main
assumption. By also assuming either a bounded energy sequence or bounded
primal-dual sequence, we derive certain subsequence rates of convergence. We
introduce a novel way to establish those boundedness properties based on a
feasible coercivity assumption and a local-stability assumption on the
subproblem. Finally, with the additional assumption of Kurdyka-Łojasiewicz
(KŁ) inequality, we establish global convergence. Overall, this paper shows
that the Moreau envelope technique makes ALM applicable to more problems.
### 1.1 Proposed Algorithms
To present our algorithm, define the augmented Lagrangian:
${\cal
L}_{\beta}(x,\lambda):=f(x)+\langle\lambda,Ax-b\rangle+\frac{\beta}{2}\|Ax-b\|^{2},$
(2)
and the _Moreau envelope_ of ${\cal L}_{\beta}(x,\lambda)$:
$\phi_{\beta}(z,\lambda)=\min_{x}\left\\{{\cal
L}_{\beta}(x,\lambda)+\frac{1}{2\gamma}\|x-z\|^{2}\right\\},$ (3)
where $\lambda\in\mathbb{R}^{m}$ is a multiplier vector, $\beta>0$ is a
penalty parameter, and $\gamma>0$ is a proximal parameter. The Moreau envelope
applies to the primal variable $x$ for each fixed dual variable $\lambda$.
We introduce Moreau Envelope Augmented Lagrangian method (dubbed MEAL) as
follows: given an initialization $(z^{0},\lambda^{0})$, $\gamma>0$, a sequence
of penalty parameters $\\{\beta_{k}\\}$ and a step size $\eta\in(0,2)$, for
$k=0,1,\ldots,$ run
$\mathrm{(MEAL)}\quad\left\\{\begin{array}[]{l}z^{k+1}=z^{k}-\eta\gamma\nabla_{z}\phi_{\beta_{k}}(z^{k},\lambda^{k}),\\\
\lambda^{k+1}=\lambda^{k}+\beta_{k}\nabla_{\lambda}\phi_{\beta_{k}}(z^{k},\lambda^{k}).\end{array}\right.$
(4)
The penalty parameter $\beta_{k}$ can either vary or be fixed.
Introduce
$x^{k+1}=\mathrm{Prox}_{\gamma,{\cal
L}_{\beta_{k}}(\cdot,\lambda^{k})}(z^{k}):=\operatorname*{argmin}_{x}\left\\{{\cal
L}_{\beta_{k}}(x,\lambda^{k})+\frac{1}{2\gamma}\|x-z^{k}\|^{2}\right\\},\
\forall k\in\mathbb{N},$
which yields
$\nabla_{z}\phi_{\beta_{k}}(z^{k},\lambda^{k})=\gamma^{-1}(z^{k}-x^{k+1})$ and
$\nabla_{\lambda}\phi_{\beta_{k}}(z^{k},\lambda^{k})=Ax^{k+1}-b$. Then, MEAL
(4) is equivalent to:
$\mathrm{(MEAL\
Reformulated)}\quad\left\\{\begin{array}[]{l}x^{k+1}=\mathrm{Prox}_{\gamma,{\cal
L}_{\beta_{k}}(\cdot,\lambda^{k})}(z^{k}),\\\
z^{k+1}=z^{k}-\eta(z^{k}-x^{k+1}),\\\
\lambda^{k+1}=\lambda^{k}+\beta_{k}(Ax^{k+1}-b).\end{array}\right.$ (5)
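To make the reformulation concrete, the following is a minimal sketch of (5) for the particular instance $f(x)=\|x\|_{1}$ (convex, hence $\rho$-weakly convex for every $\rho>0$, so any $\gamma>0$ is admissible), where the strongly convex $x$-subproblem is solved approximately by inner proximal-gradient steps; parameter values are illustrative, not tuned:

```python
import numpy as np

def soft_threshold(v, t):
    # proximity operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def x_subproblem(z, lam, A, b, beta, gamma, n_inner=200):
    # Approximately compute Prox_{gamma, L_beta(., lam)}(z) for f = ||.||_1
    # via proximal-gradient (ISTA) steps on the smooth part of the objective.
    x = z.copy()
    L = beta * np.linalg.norm(A, 2) ** 2 + 1.0 / gamma  # Lipschitz constant
    for _ in range(n_inner):
        grad = A.T @ (lam + beta * (A @ x - b)) + (x - z) / gamma
        x = soft_threshold(x - grad / L, 1.0 / L)
    return x

def meal(A, b, gamma=0.5, beta=10.0, eta=1.5, n_iter=100):
    m, n = A.shape
    z, lam = np.zeros(n), np.zeros(m)
    for _ in range(n_iter):
        x = x_subproblem(z, lam, A, b, beta, gamma)  # x-update in (5)
        z = z - eta * (z - x)                        # gradient step on envelope
        lam = lam + beta * (A @ x - b)               # dual ascent
    return x, lam

# Tiny basis-pursuit-type instance with a sparse ground truth
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
b = A @ (np.eye(30)[0] * 2.0)
x, lam = meal(A, b)
```

Setting `eta = 1.0` in this sketch recovers proximal ALM as a special case, consistent with the discussion in Section 1.2.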
Next, we provide two practical variants of MEAL that do not require an
accurate computation of $\mathrm{Prox}_{\gamma,{\cal L}_{\beta}}$.
##### Inexact MEAL (iMEAL)
We call $x^{k+1}$ an $\epsilon_{k}$-accurate stationary point of the
$x$-subproblem in (5) if there exists
$s^{k}\in\partial_{x}{\cal
L}_{\beta_{k}}(x^{k+1},\lambda^{k})+\gamma^{-1}(x^{k+1}-z^{k})\quad\text{such
that}\quad\|s^{k}\|\leq\epsilon_{k}.$ (6)
iMEAL is described as follows: given an initialization $(z^{0},\lambda^{0})$,
$\gamma>0$, $\eta\in(0,2)$, and two positive sequences $\\{\epsilon_{k}\\}$
and $\\{\beta_{k}\\}$, for $k=0,1,\ldots,$ run
$\mathrm{(iMEAL)}\quad\left\\{\begin{array}[]{l}\mathrm{find\ an}\ x^{k+1}\
\mathrm{to\ satisfy}\ \eqref{iMealCond},\\\
z^{k+1}=z^{k}-\eta(z^{k}-x^{k+1}),\\\
\lambda^{k+1}=\lambda^{k}+\beta_{k}(Ax^{k+1}-b).\end{array}\right.$ (7)
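In practice, (6) can serve as the inner stopping test. As a sketch (again for $f=\|\cdot\|_{1}$, with our own naming), one may run proximal-gradient steps on the subproblem and stop once the prox-gradient residual falls below $\epsilon_{k}$; a standard argument bounds $\|s^{k}\|$ by twice the smooth-part Lipschitz constant times this residual, so the test below is a conservative certificate for (6):

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def imeal_x_update(z, lam, A, b, beta, gamma, eps_k, max_inner=1000):
    # Return an eps_k-accurate stationary point of the x-subproblem:
    # for a prox-gradient step with step 1/L one has ||s^k|| <= 2*L*||x_new - x||,
    # so the residual test below certifies criterion (6).
    x = z.copy()
    L = beta * np.linalg.norm(A, 2) ** 2 + 1.0 / gamma
    for _ in range(max_inner):
        grad = A.T @ (lam + beta * (A @ x - b)) + (x - z) / gamma
        x_new = soft_threshold(x - grad / L, 1.0 / L)
        if 2.0 * L * np.linalg.norm(x_new - x) <= eps_k:
            return x_new
        x = x_new
    return x
```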
##### Linearized MEAL (LiMEAL)
When problem (1) has the following form
$\begin{array}[]{ll}\mathop{\mathrm{minimize}}_{x\in\mathbb{R}^{n}}&f(x):=h(x)+g(x)\\\
\mathrm{subject\ to}&Ax=b,\end{array}$ (8)
where $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is Lipschitz-continuous
differentiable and $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ is weakly convex
and has an easy proximal operator (in particular, admitting a closed-form
solution) Hajinezhad-Hong19 ; Wang19 ; Xu-Yin-BCD13 ; Zeng-DGD18 , we shall
exploit the smoothness of $h$ by linearizing it via $\nabla h$. Write
$f^{k}(x):=h(x^{k})+\langle\nabla
h(x^{k}),x-x^{k}\rangle+g(x)$ and ${\cal
L}_{\beta,{f^{k}}}(x,\lambda):=f^{k}(x)+\langle\lambda,Ax-b\rangle+\frac{\beta}{2}\|Ax-b\|^{2}.$
We describe LiMEAL for (8) as: given $(z^{0},\lambda^{0})$, $\gamma>0$,
$\eta\in(0,2)$ and $\\{\beta_{k}\\}$, for $k=0,1,\ldots,$ run
$\mathrm{(LiMEAL)}\quad\left\\{\begin{array}[]{l}x^{k+1}=\mathrm{Prox}_{\gamma,{\cal
L}_{\beta_{k},{f^{k}}}(\cdot,\lambda^{k})}(z^{k}),\\\
z^{k+1}=z^{k}-\eta(z^{k}-x^{k+1}),\\\
\lambda^{k+1}=\lambda^{k}+\beta_{k}(Ax^{k+1}-b).\end{array}\right.$ (9)
Since one can choose to use $h$ or not in LiMEAL, LiMEAL is more general than
MEAL.
### 1.2 Relation to ALM and Proximal ALM
Like ALM, MEAL alternatively updates primal and dual variables; but unlike
ALM, MEAL applies the update to the Moreau envelope of augmented Lagrangian.
By Rockafellar-var97 , the Moreau envelope $\phi_{\beta_{k}}(z,\lambda^{k})$
provides a smooth approximation of ${\cal L}_{\beta_{k}}(x,\lambda^{k})$ from
below and shares the same minima. The smoothness of the Moreau envelope alleviates
the possible oscillation that arises when ALM is applied to certain nonconvex
optimization problems.
For the problems satisfying the conditions in this paper, ALM may require a
sequence of possibly unbounded $\\{\beta_{k}\\}$. When $\beta_{k}$ is large,
the ALM subproblem is ill-conditioned. Therefore, bounding $\beta_{k}$ is
practically desirable Birgin-book14 ; Conn91 . MEAL and its practical variants
can use a fixed penalty parameter under a novel subgradient assumption in
Definition 1 later.
Proximal ALM was introduced in Rockafellar76-PALM . Its variants were recently
studied in Hajinezhad-Hong19 ; Hong17-Prox-PDA ; Zhang-Luo20 ; Zhang-Luo18 .
These methods add a proximal term to the augmented Lagrangian. Under the
reformulation (5), proximal ALM Rockafellar76-PALM for problem (1) is a
special case of MEAL with the step size $\eta=1$. In Hong17-Prox-PDA , a
proximal primal-dual algorithm called Prox-PDA was proposed for problem (1).
Certain non-Euclidean matrix norms were adopted in Prox-PDA to guarantee the
strong convexity of the ALM subproblem. A proximal linearized version of Prox-
PDA for the composite optimization problem (8) was studied in Hajinezhad-
Hong19 . These methods are closely related to MEAL, but their convergence
conditions in the literature are stronger.
Recently, Zhang-Luo20 ; Zhang-Luo18 modified proximal inexact ALM for
linearly constrained problems with an additional bounded box constraint set or
polyhedral constraint set, denoted by ${\cal C}$. Our method is partially
motivated by their methods. Their problems are equivalent to the composite
optimization problems (8) with $g(x)=\iota_{\cal C}(x)$, where $\iota_{\cal
C}(x)=0$ when $x\in{\cal C}$ and $+\infty$ otherwise. In this setting, the
methods in Zhang-Luo20 ; Zhang-Luo18 can be regarded as prox-linear versions
of LiMEAL (9), that is, yielding $x^{k+1}$ via a prox-linear scheme Xu-Yin-
BCD13 instead of the minimization scheme as used in LiMEAL (9), together with
an additional dual step size and a sufficiently small primal step size in
Zhang-Luo20 ; Zhang-Luo18 . Specifically, in the case of $g(x)=\iota_{\cal
C}(x)$, the update of $x^{k+1}$ in the methods of Zhang-Luo20 ; Zhang-Luo18 is
given by
$\displaystyle x^{k+1}=\mathrm{Proj}_{\cal C}(x^{k}-s\nabla_{x}
K(x^{k},z^{k},\lambda^{k})),$
where $K(x,z^{k},\lambda^{k})={\cal
L}_{\beta_{k},f}(x,\lambda^{k})+\frac{1}{2\gamma}\|x-z^{k}\|^{2}$ and
$\mathrm{Proj}_{\cal C}(x)$ is the projection of $x$ onto ${\cal C}$. Besides
this difference, LiMEAL can handle proximal functions beyond the indicator
function and permits the wider step-size choice $\eta\in(0,2)$.
### 1.3 Other Related Literature
On convex and constrained problems, locally linear convergence (that is,
exponentially fast convergence to a local minimum from a sufficiently close
initial point) of ALM has been extensively studied
in the literature Bertsekas73 ; Bertsekas76 ; Bertsekas82 ; Conn00 ;
Fernadez12 ; Nocedal99 ; Polyak-Tretyakov73 , mainly under the second order
sufficient condition (SOSC) and constraint conditions such as the linear
independence constraint qualification (LICQ). Global convergence (i.e.,
convergence regardless of the initial guess) of ALM and its variants were
studied in Andreani07 ; Armand17 ; Birgin05 ; Birgin12 ; Birgin10 ; Conn91 ;
Conn96 ; Rockafellar73-ALM ; Tretykov73 , mainly under constraint
qualifications and assumed boundedness of nondecreasing penalty parameters. On
nonconvex and constrained problems, convergence of ALM was recently studied in
Andreani08 ; Andreani10 ; Andreani19 ; Andreani18 ; Birgin10 ; Birgin18 ;
Curtis15 , mainly under the following assumptions: solving nonconvex
subproblems to their approximate global minima or stationary points Birgin10 ;
Birgin18 , or boundedness of the nondecreasing penalty sequence Birgin20 ;
Grapiglia-Yuan19 . Most of them require Lipschitz differentiability of the
objective.
Convergence of proximal ALM and its variants was established under the
assumptions of either convexity in Rockafellar76-PALM or smoothness (in
particular, Lipschitz differentiability) in Hajinezhad-Hong19 ; Hong17-Prox-PDA
; Jiang19 ; Xie-Wright19 ; Zhang-Luo20 ; Zhang-Luo18 . Besides proximal ALM,
other related works for nonconvex and constrained problems include Bian15 ;
Haeser19 ; Nouiehed18 ; ONeill20 , which also assume smoothness of the
objective, plus either gradient or Hessian information.
### 1.4 Contribution and Novelty
MEAL, iMEAL and LiMEAL achieve the same order of iteration complexity
$o({\varepsilon^{-2}})$ to reach an $\varepsilon$-accurate first-order
stationary point, slightly better than those in the ALM literature Hajinezhad-
Hong19 ; Hong17-Prox-PDA ; Xie-Wright19 ; Zhang-Luo18 ; Zhang-Luo20 while
also requiring weaker conditions. Our methods have convergence guarantees for
a broader class of objective functions, for example, nonsmooth and nonconvex
functions like the smoothly clipped absolute deviation (SCAD) regularization
Fan-SCAD and minimax concave penalty (MCP) regularization Zhang-MCP , which
are underlying the applications of statistical learning and beyond Wang19 .
Note that we only assume the feasibility of $Ax=b$, which is weaker than the
commonly-used hypotheses such as: the strict complementarity condition in
Zhang-Luo18 , certain rank assumption (such as
$\mathrm{Im}(A)\subseteq\mathrm{Im}(B)$ when considering the two- (multi-)
block case $Ax+By=0$) in Wang19 , and the linear independence constrained
qualification (LICQ) in Bertsekas82 ; Nocedal99 (which implies the full-rank
assumption in the linear constraint case).
Our analysis is noticeably different from those in the literature
Rockafellar76-PALM ; Hajinezhad-Hong19 ; Hong17-Prox-PDA ; Jiang19 ; Zhang-
Luo18 ; Zhang-Luo20 ; Xie-Wright19 ; Wang19 . We base our analysis on new
potential functions. The Moreau envelope in the potential functions is
partially motivated by Davis-Drusvyatskiy19 . Our overall potential functions
are new and tailored for MEAL, iMEAL, and LiMEAL and include the augmented
Lagrangian with additional terms. The technique of analysis may have its own
value for further generalizing and improving ALM-type methods.
### 1.5 Notation and Organization
We let $\mathbb{R}$ and $\mathbb{N}$ denote the sets of real and natural
numbers, respectively. Given a matrix $A$, $\mathrm{Im}(A)$ denotes its image,
and $\tilde{\sigma}_{\min}(A^{T}A)$ denotes the smallest positive eigenvalue
of $A^{T}A$. $\|\cdot\|$ is the Euclidean norm for a vector. Given any two
nonnegative sequences $\\{\xi_{k}\\}$ and $\\{\zeta_{k}\\}$, we write
$\xi_{k}=o(\zeta_{k})$ if
$\lim_{k\rightarrow\infty}\frac{\xi_{k}}{\zeta_{k}}=0$, and $\xi_{k}={\cal
O}(\zeta_{k})$ if there exists a positive constant $c$ such that $\xi_{k}\leq
c\zeta_{k}$ for all sufficiently large $k$.
In the rest of this paper, Section 2 presents background and preliminary
techniques. Section 3 states convergence results of MEAL and iMEAL. Section 4
presents the results of LiMEAL. Section 5 includes main proofs. Section 6
provides sufficient conditions for certain boundedness assumptions in the
above results, along with comparisons with related work. Section 7 provides
some numerical experiments to demonstrate the effectiveness of the proposed
methods. We
conclude this paper in Section 8.
## 2 Background and Preliminaries
This paper uses extended-real-valued functions, for example,
$h:\mathbb{R}^{n}\to\mathbb{R}\cup\\{+\infty\\}$. Write the domain of $h$ as
$\mathrm{dom}(h):=\\{x\in\mathbb{R}^{n}:h(x)<+\infty\\}$ and its range as
$\mathrm{ran}(h):=\\{y:y=h(x),\forall x\in\mathrm{dom}(h)\\}$. For each
$x\in\mathrm{dom}(h)$, the Fréchet subdifferential of $h$ at $x$, written as
$\widehat{\partial}h(x)$, is the set of vectors $v\in\mathbb{R}^{n}$
satisfying
$\liminf_{u\neq x,u\rightarrow x}\ \frac{h(u)-h(x)-\langle
v,u-x\rangle}{\|x-u\|}\geq 0.$
When $x\notin\mathrm{dom}(h),$ we define $\widehat{\partial}h(x)=\emptyset.$
The _limiting-subdifferential_ (or simply _subdifferential_) of $h$
Mordukhovich-2006 at $x\in\mathrm{dom}(h)$ is defined as
$\partial h(x):=\\{v\in\mathbb{R}^{n}:\exists x^{t}\to x,\;h(x^{t})\to
h(x),\;\widehat{\partial}h(x^{t})\ni v^{t}\to v\\}.$ (10)
A necessary (but not sufficient) condition for $x\in\mathbb{R}^{n}$ to be a
minimizer of $h$ is $0\in\partial h(x)$. A point that satisfies this inclusion
is called limiting-critical or simply critical. The distance between a point
$x$ and a subset ${\cal S}$ of $\mathbb{R}^{n}$ is defined as
$\mathrm{dist}(x,{\cal S})=\inf_{u}\\{\|x-u\|:u\in{\cal S}\\}$.
### 2.1 Moreau Envelope
Given a function $h:\mathbb{R}^{n}\rightarrow\mathbb{R}$, define its Moreau
envelope Moreau65 ; Rockafellar-var97 :
${\cal
M}_{\gamma,h}(z)=\min_{x}\left\\{h(x)+\frac{1}{2\gamma}\|x-z\|^{2}\right\\},$
(11)
where $\gamma>0$ is a parameter. Define its associated proximity operator
$\mathrm{Prox}_{\gamma,h}(z)=\operatorname*{argmin}_{x}\left\\{h(x)+\frac{1}{2\gamma}\|x-z\|^{2}\right\\}.$
(12)
If $h$ is $\rho$-weakly convex and $\gamma\in(0,\rho^{-1})$, then
$\mathrm{Prox}_{\gamma,h}$ is monotone, single-valued, and Lipschitz, and
${\cal M}_{\gamma,h}$ is differentiable with
$\nabla{\cal
M}_{\gamma,h}(z)=\gamma^{-1}\left(z-\mathrm{Prox}_{\gamma,h}(z)\right)\in\partial
h(\mathrm{Prox}_{\gamma,h}(z));$ (13)
see (Rockafellar-var97, Proposition 13.37). From Drusvyatskiy18 ;
Drusvyatskiy-Paquette19 , we also have
$\displaystyle{\cal M}_{\gamma,h}(\mathrm{Prox}_{\gamma,h}(z))\leq h(z),$
$\displaystyle\|\mathrm{Prox}_{\gamma,h}(z)-z\|=\gamma\|\nabla{\cal
M}_{\gamma,h}(z)\|,$ $\displaystyle\mathrm{dist}(0,\partial
h(\mathrm{Prox}_{\gamma,h}(z)))\leq\|\nabla{\cal M}_{\gamma,h}(z)\|.$
The first relation above presents Moreau envelope as a smooth lower
approximation of $h$. By the second and third relations, small $\|\nabla{\cal
M}_{\gamma,h}(z)\|$ implies that $z$ is near its proximal point
$\mathrm{Prox}_{\gamma,h}(z)$ and $z$ is nearly stationary for $h$ Davis-
Drusvyatskiy19 . Therefore, $\|\nabla{\cal M}_{\gamma,h}(z)\|$ can be used as
a continuous stationarity measure. Hence, replacing the augmented Lagrangian
with its Moreau envelope not only generates a strongly convex subproblem but
also yields a stationarity measure.
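A one-dimensional example may help fix ideas: for $h=|\cdot|$, the proximity operator (12) is soft-thresholding and ${\cal M}_{\gamma,h}$ is the Huber function, whose gradient (13) is the clipped identity. A quick numerical check (ours) reads:

```python
import numpy as np

gamma = 1.0

def prox_abs(z, gamma):
    # Prox (12) of gamma*|.|: soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def moreau_abs(z, gamma):
    # Moreau envelope (11) of |.|: equals the Huber function
    x = prox_abs(z, gamma)
    return np.abs(x) + (x - z) ** 2 / (2 * gamma)

def grad_moreau_abs(z, gamma):
    # Gradient formula (13): (z - prox(z))/gamma = clip(z/gamma, -1, 1)
    return (z - prox_abs(z, gamma)) / gamma

z = np.linspace(-3, 3, 7)
assert np.allclose(grad_moreau_abs(z, gamma), np.clip(z / gamma, -1, 1))
print(moreau_abs(z, gamma))  # smooth lower approximation of |z|
```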
### 2.2 Implicit Regularity Properties
Let $h$ be a proper, lower semicontinuous, $\rho$-weakly convex function.
Given a $\gamma\in(0,\rho^{-1})$, define the generalized inverse mapping of
$\mathrm{Prox}_{\gamma,h}$:
$\displaystyle\mathrm{Prox}_{\gamma,h}^{-1}(x):=\\{w:\mathrm{Prox}_{\gamma,h}(w)=x\\},\quad\forall
x\in\mathrm{ran}(\mathrm{Prox}_{\gamma,h}).$ (14)
In the definition below, we introduce two important regularity properties.
###### Definition 1
Let $h$ be a proper, lower semicontinuous and $\rho$-weakly convex function.
1. (a)
We say $h$ satisfies the implicit Lipschitz subgradient property if for any
$\gamma\in(0,\rho^{-1})$, there exists $L>0$ (depending on $\gamma$) such that
for any $u,v\in\mathrm{ran}(\mathrm{Prox}_{\gamma,h})$,
$\|\nabla{\cal M}_{\gamma,h}(w)-\nabla{\cal M}_{\gamma,h}(w^{\prime})\|\leq
L\|u-v\|,\ \forall
w\in\mathrm{Prox}_{\gamma,h}^{-1}(u),w^{\prime}\in\mathrm{Prox}_{\gamma,h}^{-1}(v);$
2. (b)
We say $h$ satisfies the implicit bounded subgradient property if for any
$\gamma\in(0,\rho^{-1})$, there exists $\hat{L}>0$ (depending on $\gamma$)
such that for any $u\in\mathrm{ran}(\mathrm{Prox}_{\gamma,h})$,
$\|\nabla{\cal M}_{\gamma,h}(w)\|\leq\hat{L},\ \forall
w\in\mathrm{Prox}_{\gamma,h}^{-1}(u).$
Since $\nabla{\cal M}_{\gamma,h}(x)\in\partial h(\mathrm{Prox}_{\gamma,h}(x))$
for any $x\in\mathbb{R}^{n}$, we have $\nabla{\cal M}_{\gamma,h}(w)\in\partial
h(u),\forall u\in\mathrm{ran}(\mathrm{Prox}_{\gamma,h})$ and
$w\in\mathrm{Prox}_{\gamma,h}^{-1}(u)$. Hence, the implicit Lipschitz
subgradient and implicit bounded subgradient imply, respectively, the
Lipschitz continuity and boundedness only on the components of $\partial h$
that are Moreau envelope gradients, but not on other components of $\partial
h$. When $h$ is differentiable, implicit Lipschitz subgradient implies
Lipschitz gradient. Having implicit bounded subgradients is weaker than having
bounded $\partial h$, which is commonly assumed in the analysis of nonconvex
algorithms (cf. Davis-Drusvyatskiy19 ; Hajinezhad-Hong19 ; Zeng-DGD18 ).
Nonsmooth and nonconvex functions like the SCAD regularization and MCP
regularization which appear in statistical learning Wang19 , have implicit
bounded subgradients.
### 2.3 Kurdyka-Łojasiewicz Inequality
The Kurdyka-Łojasiewicz (KŁ) inequality Bolte-KL2007a ; Bolte-KL2007b ;
Kurdyka-KL1998 ; Lojasiewicz-KL1963 ; Lojasiewicz-KL1993 is a property that
leads to global convergence of nonconvex algorithms in the literature (see,
Attouch13 ; Bolte2014 ; Wang19 ; Xu-Yin-BCD13 ; Zeng-BCD19 ; Zeng-ADMM19 ).
The following definition of Kurdyka-Łojasiewicz property is adopted from
Bolte-KL2007a .
###### Definition 2
A function $h:\mathbb{R}^{n}\rightarrow\mathbb{R}\cup\\{+\infty\\}$ is said to
have the Kurdyka-Łojasiewicz property at $x^{*}\in\mathrm{dom}(\partial h)$ if
there exist a neighborhood ${\cal U}$ of $x^{*}$, a constant $\nu>0$, and a
continuous concave function $\varphi(s)=cs^{1-\theta}$ for some $c>0$ and
$\theta\in[0,1)$ such that the Kurdyka-Łojasiewicz inequality holds: for all
$x\in{\cal U}\cap\mathrm{dom}(\partial h)$ and $h(x^{*})<h(x)<h(x^{*})+\nu$,
$\varphi^{\prime}(h(x)-h(x^{*}))\cdot\mathrm{dist}(0,\partial h(x))\geq 1,$
(15)
(we use the conventions: $0^{0}=1,\infty/\infty=0/0=0$), where $\theta$ is
called the KŁ exponent of $h$ at $x^{*}$. Proper lower semicontinuous
functions satisfying the KŁ inequality at every point of
$\mathrm{dom}(\partial h)$ are called KŁ functions.
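As a simple illustration, $h(x)=x^{2}$ has the KŁ property at $x^{*}=0$ with $\varphi(s)=2s^{1/2}$, hence exponent $\theta=1/2$: for every $x\neq 0$, $\varphi^{\prime}(h(x)-h(x^{*}))\cdot\mathrm{dist}(0,\partial h(x))=(x^{2})^{-1/2}\cdot|2x|=2\geq 1$.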
This property was first introduced in Lojasiewicz-KL1993 on real analytic
functions Krantz2002-real-analytic for
$\theta\in\left[\tfrac{1}{2},1\right)$, was then extended to functions defined
on the o-minimal structure in Kurdyka-KL1998 , and was later extended to
nonsmooth subanalytic functions in Bolte-KL2007a . KŁ functions include real
analytic functions Krantz2002-real-analytic , semialgebraic functions Bochnak-
semialgebraic1998 , tame functions defined in some o-minimal structures
Kurdyka-KL1998 , continuous subanalytic functions Bolte-KL2007a , definable
functions Bolte-KL2007b , locally strongly convex functions Xu-Yin-BCD13 , as
well as many deep-learning training models Zeng-BCD19 ; Zeng-ADMM19 .
## 3 Convergence of MEAL
This section presents the convergence results of MEAL and iMEAL. We postpone
their proofs to Section 5.
### 3.1 Assumptions and Stationarity Measure
###### Assumption 1
The set ${\cal X}:=\\{x:Ax=b\\}$ is nonempty.
###### Assumption 2
The objective $f$ in problem (1) satisfies:
1. (a)
$f$ is proper lower semicontinuous and $\rho$-weakly convex; and for any
$\gamma\in(0,\rho^{-1})$, either (b) or (c):
2. (b)
$f$ satisfies the implicit Lipschitz subgradient property with a constant
$L_{f}>0$ (possibly depending on $\gamma$); or,
3. (c)
$f$ satisfies the implicit bounded subgradient property with a constant
$\hat{L}_{f}>0$ (possibly depending on $\gamma$).
We do not assume the following hypotheses: the strict complementarity
condition used in Zhang-Luo18 , any rank assumption (such as
$\mathrm{Im}(A)\subseteq\mathrm{Im}(B)$ when considering the two-
(multi-)block case $Ax+By=0$) used in Wang19 , the linear independence
constrained qualification (LICQ) used in Bertsekas82 ; Nocedal99 (implying
the full-rank assumption in the linear constraint case). Assumption 2 is mild
as discussed in Section 2.2.
According to (3) and the update (4) of MEAL, we have
$\displaystyle\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})=\left(\begin{array}[]{c}(\eta\gamma)^{-1}(z^{k}-z^{k+1})\\\
\beta_{k}^{-1}(\lambda^{k+1}-\lambda^{k})\end{array}\right)\in\left(\begin{array}[]{c}\partial
f(x^{k+1})+A^{T}\lambda^{k+1}\\\ Ax^{k+1}-b\end{array}\right).$ (20)
Let
$\displaystyle\xi_{\mathrm{meal}}^{k}:=\min_{0\leq t\leq
k}\|\nabla\phi_{\beta_{t}}(z^{t},\lambda^{t})\|,\ \forall k\in\mathbb{N}.$
(21)
Then according to (20), the bound $\xi_{\mathrm{meal}}^{k}\leq\varepsilon$
implies
$\displaystyle\min_{0\leq t\leq
k}\mathrm{dist}\left\\{0,\left(\begin{array}[]{c}\partial
f(x^{t+1})+A^{T}\lambda^{t+1}\\\
Ax^{t+1}-b\end{array}\right)\right\\}\leq\xi^{k}_{\mathrm{meal}}\leq\varepsilon,$
that is, MEAL achieves $\varepsilon$-accurate first-order stationarity for
problem (1) within $k$ iterations. Hence, $\xi_{\mathrm{meal}}^{k}$ is a valid
stationarity measure of MEAL. Define iteration complexity:
$\displaystyle T_{\varepsilon}=\inf\left\\{t\geq
1:\|\nabla\phi_{\beta_{t}}(z^{t},\lambda^{t})\|\leq\varepsilon\right\\}.$ (22)
Comparing $T_{\varepsilon}$ to the common iteration complexity
$\displaystyle\hat{T}_{\varepsilon}=\inf\left\\{t\geq
1:\mathrm{dist}(0,\partial f(x^{t})+A^{T}\lambda^{t})\leq\varepsilon\ \text{and}\
\|Ax^{t}-b\|\leq\varepsilon\right\\},$
we get $T_{\varepsilon}\geq\hat{T}_{\varepsilon}$.
If $f$ is differentiable, $\mathrm{dist}(0,\partial
f(x^{t})+A^{T}\lambda^{t})$ reduces to $\|\nabla f(x^{t})+A^{T}\lambda^{t}\|$.
### 3.2 Convergence Theorems of MEAL
We present the quantities used to state the convergence results of MEAL. Let
$\displaystyle{\cal P}_{\beta}(x,z,\lambda)={\cal
L}_{\beta}(x,\lambda)+\frac{1}{2\gamma}\|x-z\|^{2},$ (23)
for some $\beta,\gamma>0.$ Then according to (5), MEAL can be interpreted as a
primal-dual update with respect to ${\cal P}_{\beta_{k}}(x,z,\lambda)$ at the
$k$-th iteration, that is, updating $x^{k+1}$, $z^{k+1}$, and $\lambda^{k+1}$
by minimization, gradient descent, and gradient ascent respectively.
Based on (23), we introduce the following Lyapunov functions for MEAL:
$\displaystyle{\cal E}_{\mathrm{meal}}^{k}:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+2\alpha_{k}\|z^{k}-z^{k-1}\|^{2},\
\forall k\geq 1,$ (24)
associated with the implicit Lipschitz subgradient assumption and
$\displaystyle\tilde{\cal E}_{\mathrm{meal}}^{k}:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+3\alpha_{k}\|z^{k}-z^{k-1}\|^{2},\
\forall k\geq 1,$ (25)
associated with the implicit bounded subgradient assumption, where
$\displaystyle\alpha_{k}:=\frac{\beta_{k}+\beta_{k+1}+\gamma\eta(1-\eta/2)}{2c_{\gamma,A}\beta_{k}^{2}},\
\forall k\in\mathbb{N},$ (26)
and $c_{\gamma,A}:=\gamma^{2}\tilde{\sigma}_{\min}(A^{T}A)$. When $\beta$ is
fixed, we also fix
$\displaystyle\alpha:=\frac{2\beta+\gamma\eta(1-\eta/2)}{2c_{\gamma,A}\beta^{2}}.$
(27)
###### Theorem 3.1 (Iteration Complexity of MEAL)
Suppose that Assumptions 1 and 2(a) hold. Pick $\gamma\in(0,\rho^{-1})$ and
$\eta\in(0,2)$. Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated
by MEAL (5). The following claims hold:
1. (a)
Set $\beta$ sufficiently large such that in (27),
$\alpha<\min\left\\{\frac{1-\gamma\rho}{4\gamma(1+\gamma
L_{f})^{2}},\frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\\}$. Under Assumption
2(b), if $\\{{\cal E}_{\mathrm{meal}}^{k}\\}$ is lower bounded, then
$\xi^{k}_{\mathrm{meal}}=o(1/\sqrt{k})$ for $\xi_{\mathrm{meal}}^{k}$ in (21).
2. (b)
Pick any $K\geq 1$. Set $\\{\beta_{k}\\}$ so that in (26),
$\alpha_{k}\equiv\frac{\alpha^{*}}{K}$ for some positive constant
$\alpha^{*}\leq\min\left\\{\frac{1-\rho\gamma}{6\gamma},\frac{1}{12\gamma}\left(\frac{2}{\eta}-1\right)\right\\}$.
Under Assumption 2(c), if $\\{\tilde{\cal E}_{\mathrm{meal}}^{k}\\}$ is lower
bounded, then $\xi_{\mathrm{meal}}^{K}\leq\tilde{c}_{1}/\sqrt{K}$ for some
constant $\tilde{c}_{1}>0$.
Section 6.1 provides conditions sufficient for the lower-boundedness
assumptions. Let us interpret the theorem. To achieve an
$\varepsilon$-accurate stationary point, the iteration complexity of MEAL is
$o(\varepsilon^{-2})$ assuming the implicit Lipschitz subgradient property and
${\cal O}(\varepsilon^{-2})$ assuming the implicit bounded subgradient
property. Both iteration complexities are consistent with the existing results
of ${\cal O}(\varepsilon^{-2})$ in Hajinezhad-Hong19 ; Hong17-Prox-PDA ; Xie-
Wright19 ; Zhang-Luo20 . The established results of MEAL also hold for
proximal ALM by setting $\eta=1$. We note that it is not our goal to pursue
any better complexity (e.g., using momentum) in this paper.
###### Remark 1
Let $\bar{\alpha}:=\min\left\\{\frac{1-\gamma\rho}{4\gamma(1+\gamma
L_{f})^{2}},\frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\\}$. By (27), the
requirement $0<\alpha<\bar{\alpha}$ in Theorem 3.1(a) is met by setting
$\displaystyle\beta>\frac{1+\sqrt{1+\eta(2-\eta)\gamma
c_{\gamma,A}\bar{\alpha}}}{2c_{\gamma,A}\bar{\alpha}}.$ (28)
Similarly, the assumption $\alpha_{k}=\frac{\alpha^{*}}{K}$ in Theorem 3.1(b)
is met by setting
$\displaystyle\beta_{k}=\frac{K\left(1+\sqrt{1+\eta(2-\eta)\gamma
c_{\gamma,A}\alpha^{*}/K}\right)}{2c_{\gamma,A}\alpha^{*}},\ k=1,\ldots,K.$
(29)
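As a sanity check on how large (28) forces the penalty parameter to be, one can evaluate the threshold numerically; the constants below are arbitrary illustrative choices, not values used in our experiments:

```python
import numpy as np

# Illustrative constants (not from any experiment in this paper)
gamma, eta, rho, L_f = 0.5, 1.0, 1.0, 2.0
sigma_min = 1.0                     # smallest positive eigenvalue of A^T A
c = gamma**2 * sigma_min            # c_{gamma,A}

a_bar = min((1 - gamma * rho) / (4 * gamma * (1 + gamma * L_f) ** 2),
            (1 / (8 * gamma)) * (2 / eta - 1))
beta = (1 + np.sqrt(1 + eta * (2 - eta) * gamma * c * a_bar)) / (2 * c * a_bar)
print(a_bar, beta)  # beta is roughly 64 for these choices
```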
Next, we establish global convergence (whole sequence convergence regardless
of initial points) and its rate for MEAL under the KŁ inequality (Definition
2). Let $\hat{z}^{k}:=z^{k-1}$,
$y^{k}:=(x^{k},z^{k},\lambda^{k},\hat{z}^{k}),\ \forall k\geq 1,$
$y:=(x,z,\lambda,\hat{z})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{n},$
and
$\displaystyle{\cal P}_{\mathrm{meal}}(y):={\cal
P}_{\beta}(x,z,\lambda)+3\alpha\|z-\hat{z}\|^{2}$ (30)
where $\alpha$ is defined in (27).
###### Proposition 1 (Global convergence and rate of MEAL)
Suppose that the assumptions required for Theorem 3.1(a) hold and that
$\\{(x^{k},z^{k},\lambda^{k})\\}$ generated by MEAL (5) is bounded. If ${\cal
P}_{\mathrm{meal}}$ satisfies the KŁ property at some point
$y^{*}:=(x^{*},x^{*},\lambda^{*},x^{*})$ with an exponent of $\theta\in[0,1)$,
where $(x^{*},\lambda^{*})$ is a limit point of $\\{(x^{k},\lambda^{k})\\}$,
then
1. (a)
the whole sequence $\\{\hat{y}^{k}:=(x^{k},z^{k},\lambda^{k})\\}$ converges to
$\hat{y}^{*}:=(x^{*},x^{*},\lambda^{*})$; and
2. (b)
the following rate-of-convergence results hold: (1) if $\theta=0$, then
$\\{\hat{y}^{k}\\}$ converges within a finite number of iterations; (2) if
$\theta\in(0,\frac{1}{2}]$, then $\|\hat{y}^{k}-\hat{y}^{*}\|\leq c\tau^{k}$
for all $k\geq k_{0}$, for certain $k_{0}>0,c>0,\tau\in(0,1)$; and (3) if
$\theta\in(\frac{1}{2},1)$, then $\|\hat{y}^{k}-\hat{y}^{*}\|\leq
ck^{-\frac{1-\theta}{2\theta-1}}$ for all $k\geq k_{0}$, for certain
$k_{0}>0,c>0$.
In Proposition 1, the KŁ property of ${\cal P}_{\mathrm{meal}}$ defined in
(30) plays a central role in the establishment of global convergence of MEAL.
The KŁ exponent determines the convergence speed of MEAL; particularly, the
exponent $\theta=1/2$ implies linear convergence so it is most desirable.
Below we give some results on $\theta$, which are obtainable from (Shiota1997,
page 43), (Bolte-KL2007a, Theorem 3.1), (Zeng-BCD19, Lemma 5), and
(Li-Pong-KLexponent18, Theorem 3.6 and Corollary 5.2).
###### Proposition 2
The following claims hold:
1. (a)
If $f$ is subanalytic with a closed domain and continuous on its domain, then
${\cal P}_{\mathrm{meal}}$ defined in (30) is a KŁ function;
2. (b)
If ${\cal L}_{\beta}(x,\lambda)$ defined in (2) has the KŁ property at some
point $(x^{*},\lambda^{*})$ with exponent $\theta\in[1/2,1)$, then ${\cal
P}_{\mathrm{meal}}$ has the KŁ property at $(x^{*},x^{*},\lambda^{*},x^{*})$
with exponent $\theta$;
3. (c)
If $f$ has the following form:
$\displaystyle f(x)=\min_{1\leq i\leq
r}\left\\{\frac{1}{2}x^{T}M_{i}x+u_{i}^{T}x+c_{i}+P_{i}(x)\right\\},$ (31)
where $P_{i}$ are proper closed polyhedral functions, $M_{i}$ are symmetric
matrices of size $n$, $u_{i}\in\mathbb{R}^{n}$ and $c_{i}\in\mathbb{R}$ for
$i=1,\ldots,r$, then ${\cal L}_{\beta}$ is a KŁ function with an exponent of
$\theta=1/2$.
Claim (a) can be obtained as follows. The terms in ${\cal P}_{\mathrm{meal}}$
besides $f$ are polynomial functions, which are both real analytic and
semialgebraic Bochnak-semialgebraic1998 . Since $f$ is subanalytic with a
closed domain and continuous on its domain, by (Zeng-BCD19, Lemma 5), ${\cal
P}_{\mathrm{meal}}$ is also subanalytic with a closed domain and continuous on
its domain. By (Bolte-KL2007a, Theorem 3.1), ${\cal P}_{\mathrm{meal}}$ is a
KŁ function. Claim (b) can be verified by applying (Li-Pong-KLexponent18,
Theorem 3.6) to ${\cal P}_{\mathrm{meal}}$. Claim (c) can be established as
follows. The class of functions $f$ defined by (31) is weakly convex with
modulus $\rho=2\max_{1\leq i\leq r}\|M_{i}\|$. According to (Li-Pong-KLexponent18, Sec. 5.2), this class covers many nonconvex functions such as
SCAD Fan-SCAD and MCP Zhang-MCP in statistical learning. The function ${\cal
L}_{\beta}(x,\lambda)=\frac{\beta}{2}\|Ax+\beta^{-1}\lambda-b\|^{2}+(f(x)-\frac{1}{2\beta}\|\lambda\|^{2})$,
according to (Li-Pong-KLexponent18, Corollary 5.2), is a KŁ function with an
exponent of $1/2$. More results on KŁ functions with exponent $1/2$ can be
found in Li-Pong-KLexponent18 ; Yu-Li-Pong-KLexponent21 and the references
therein.
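As a concrete illustration of the class (31) appearing in claim (c), the following toy snippet (taking $P_{i}=0$ and data of our choosing) evaluates such a min-of-quadratics function:

```python
import numpy as np

def f31(x, Ms, us, cs):
    # f(x) = min_i { 0.5 x^T M_i x + u_i^T x + c_i }, taking P_i = 0 in (31).
    return min(0.5 * x @ M @ x + u @ x + c for M, u, c in zip(Ms, us, cs))

Ms = [np.eye(2), -0.5 * np.eye(2)]    # the M_i are symmetric but need not be PSD
us = [np.zeros(2), np.ones(2)]
cs = [0.0, 1.0]
print(f31(np.array([0.3, -0.2]), Ms, us, cs))
```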
### 3.3 Convergence of iMEAL
When considering iMEAL, the Lyapunov functions need to be slightly modified
into
$\displaystyle{\cal E}_{\mathrm{imeal}}^{k}:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+3\alpha_{k}\|z^{k}-z^{k-1}\|^{2},\
\forall k\geq 1,$ (32)
associated with the implicit Lipschitz subgradient assumption, and
$\displaystyle\tilde{\cal E}_{\mathrm{imeal}}^{k}:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+4\alpha_{k}\|z^{k}-z^{k-1}\|^{2},\
\forall k\geq 1,$ (33)
associated with the implicit bounded subgradient assumption, where
$\alpha_{k}$ is defined in (26).
###### Theorem 3.2 (Iteration Complexity of iMEAL)
Let Assumptions 1 and 2(a) hold, $\gamma\in(0,\rho^{-1})$, and $\eta\in(0,2)$.
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by iMEAL (7)
with $\sum_{k=0}^{\infty}\epsilon_{k}^{2}<\infty$. The following claims hold:
1. (a)
Set $\beta$ sufficiently large such that in (27),
$\alpha<\min\left\\{\frac{1-\gamma\rho}{6\gamma(1+\gamma
L_{f})^{2}},\frac{1}{12\gamma}(\frac{2}{\eta}-1)\right\\}$. Under Assumption
2(b), if $\\{{\cal E}_{\mathrm{imeal}}^{k}\\}$ is lower bounded, then
$\xi^{k}_{\mathrm{meal}}=o(1/\sqrt{k})$ (cf. (21)).
2. (b)
Pick $K\geq 1$. Set $\\{\beta_{k}\\}$ such that in (26),
$\alpha_{k}\equiv\frac{\hat{\alpha}^{*}}{K}$ for some positive constant
$\hat{\alpha}^{*}\leq\min\left\\{\frac{1-\rho\gamma}{8\gamma},\frac{1}{16\gamma}(\frac{2}{\eta}-1)\right\\}$.
Under Assumption 2(c), if $\\{\tilde{\cal E}_{\mathrm{imeal}}^{k}\\}$ is lower
bounded, then $\xi_{\mathrm{meal}}^{K}\leq\tilde{c}_{2}/\sqrt{K}$ for some
constant $\tilde{c}_{2}>0$.
By Theorem 3.2, the iteration complexity of iMEAL is the same as that of MEAL
and also consistent with that of inexact proximal ALM Xie-Wright19 (when the
stationary accuracy $\epsilon_{k}$ is square summable). Moreover, if the
condition on $\epsilon_{k}$ is strengthened to be
$\sum_{k=0}^{\infty}\epsilon_{k}<+\infty$ as required in the literature
Rockafellar76-PALM ; Wang19 , then, following a proof similar to that of
Proposition 1, global convergence and analogous rates also hold for iMEAL under
the assumptions required for Theorem 3.2(a) and the KŁ property.
## 4 Convergence of LiMEAL for Composite Objective
This section presents the convergence results of LiMEAL (9) for the
constrained problem with a composite objective (8). The proofs are postponed
to Section 5 below. Similar to Assumption 2, we make the following
assumptions.
###### Assumption 3
The objective $f(x)=h(x)+g(x)$ in problem (8) satisfies:
1. (a)
$h$ is differentiable and $\nabla h$ is Lipschitz continuous with a constant
$L_{h}>0$;
2. (b)
$g$ is proper lower-semicontinuous and $\rho_{g}$-weakly convex; and either
3. (c)
$g$ has the implicit Lipschitz subgradient property with a constant $L_{g}>0$;
or
4. (d)
$g$ has the implicit bounded subgradient property with a constant
$\hat{L}_{g}>0$.
In (c) and (d), $L_{g}$ and $\hat{L}_{g}$ may depend on $\gamma$.
By the update (9) of LiMEAL, some simple derivations show that
$\displaystyle x^{k+1}=\mathrm{Prox}_{\gamma,g}(z^{k}-\gamma(\nabla
h(x^{k})+A^{T}\lambda^{k+1}))$ (34)
and
$\displaystyle
g_{\mathrm{limeal}}^{k}:=\left(\begin{array}[]{c}\gamma^{-1}(z^{k}-x^{k+1})+(\nabla
h(x^{k+1})-\nabla h(x^{k}))\\\
\beta_{k}^{-1}(\lambda^{k+1}-\lambda^{k})\end{array}\right)\in\left(\begin{array}[]{c}\partial
f(x^{k+1})+A^{T}\lambda^{k+1}\\\ Ax^{k+1}-b\end{array}\right).$ (39)
The term $\gamma^{-1}(z^{k}-x^{k+1})$ plays the role of the prox-gradient
residual frequently used in the analysis of algorithms for unconstrained
composite optimization (e.g., Davis-Drusvyatskiy19 ). Thus, let
$\displaystyle\xi_{\mathrm{limeal}}^{k}:=\min_{0\leq t\leq
k}\|g_{\mathrm{limeal}}^{t}\|,\ \forall k\in\mathbb{N},$ (40)
which can be taken as an effective stationarity measure of LiMEAL for problem
(8).
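To make the update concrete, the following sketch implements one LiMEAL iteration in the form (34), together with the residual (39), for the illustrative choice $g=\mathrm{reg}\cdot\|\cdot\|_{1}$ (so $\mathrm{Prox}_{\gamma,g}$ is soft-thresholding); the inner proximal-gradient loop is a hypothetical stand-in for the exact subproblem solver in (9), and all names are ours, not from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    # Prox of t*||.||_1; a different g would swap in its own prox here.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def limeal_step(x, z, lam, A, b, grad_h, gamma, beta, eta, reg=0.1, inner=50):
    # One LiMEAL iteration in the form (34), with g = reg*||.||_1. The inner
    # proximal-gradient loop approximates the exact subproblem argmin in (9).
    g0 = grad_h(x)                                    # h linearized at x^k
    L = beta * np.linalg.norm(A, 2) ** 2 + 1.0 / gamma  # crude smoothness bound
    step = 1.0 / L
    x_new = x.copy()
    for _ in range(inner):
        lam_trial = lam + beta * (A @ x_new - b)
        smooth_grad = g0 + A.T @ lam_trial + (x_new - z) / gamma
        x_new = soft_threshold(x_new - step * smooth_grad, step * reg)
    lam_new = lam + beta * (A @ x_new - b)            # dual update
    z_new = z + eta * (x_new - z)                     # averaging update
    # Stationarity residual g_limeal^k from (39).
    g_res = np.concatenate([(z - x_new) / gamma + grad_h(x_new) - g0,
                            (lam_new - lam) / beta])
    return x_new, z_new, lam_new, np.linalg.norm(g_res)
```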
In the following, we present the iteration complexity of LiMEAL for problem
(8). Since the prox-linear scheme is adopted in the update of $x^{k+1}$ in
LiMEAL, as described in (9), the proximal term $\|x^{k}-x^{k-1}\|^{2}$ should
generally be included in the associated Lyapunov functions of LiMEAL, shown as
follows:
$\displaystyle{\cal E}^{k}_{\mathrm{limeal}}$ $\displaystyle:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+3\alpha_{k}(\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}+\|z^{k}-z^{k-1}\|^{2})$
(41)
associated with the implicit Lipschitz subgradient assumption, and
$\displaystyle\tilde{\cal E}_{\mathrm{limeal}}^{k}:={\cal
P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})+4{\alpha}_{k}(\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}+\|z^{k}-z^{k-1}\|^{2}),$
(42)
associated with the implicit bounded subgradient assumption, where
$\alpha_{k}$ is defined in (26).
The iteration complexity of MEAL can be similarly generalized to LiMEAL as
follows.
###### Theorem 4.1 (Iteration Complexity of LiMEAL)
Take Assumptions 1 and 3(a)-(b). Pick $\eta\in(0,2)$ and
$0<\gamma<\frac{2}{(\rho_{g}+L_{h})\left(1+\sqrt{1+\frac{2(2-\eta)\eta
L_{h}^{2}}{(\rho_{g}+L_{h})^{2}}}\right)}$. Let
$\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by LiMEAL (9). The
following claims hold:
1. (a)
Set $\beta$ sufficiently large such that
$\alpha<\min\left\\{\frac{1}{12\gamma}(\frac{2}{\eta}-1),\frac{1-\gamma(\rho_{g}+L_{h})-\eta(1-\eta/2)\gamma^{2}L_{h}^{2}}{6\gamma\left((1+\gamma
L_{g})^{2}+\gamma^{2}L_{h}^{2}\right)}\right\\}$. Under Assumption 3(c), if
$\\{{\cal E}^{k}_{\mathrm{limeal}}\\}$ is lower bounded, then
$\xi^{k}_{\mathrm{limeal}}=o(1/\sqrt{k})$.
2. (b)
Pick $K\geq 1$. Set $\\{\beta_{k}\\}$ such that
$\alpha_{k}\equiv\frac{\bar{\alpha}^{*}}{K}$ for some positive constant
$\bar{\alpha}^{*}\leq\min\left\\{\frac{1-\gamma(\rho_{g}+L_{h})-\eta(1-\eta/2)\gamma^{2}L_{h}^{2}}{8\gamma(1+\gamma^{2}L_{h}^{2})},\frac{1}{16\gamma}\left(\frac{2}{\eta}-1\right)\right\\}$. Under Assumption
3(d), if $\\{\tilde{\cal E}^{k}_{\mathrm{limeal}}\\}$ is lower bounded, then
$\xi_{\mathrm{limeal}}^{K}\leq\tilde{c}_{3}/\sqrt{K}$ for some constant
$\tilde{c}_{3}>0$.
Similar to the discussions following Theorem 3.1, to yield an
$\varepsilon$-accurate first-order stationary point, the iteration complexity
of LiMEAL is $o(\varepsilon^{-2})$ under the implicit Lipschitz subgradient
assumption and ${\cal O}(\varepsilon^{-2})$ under the implicit bounded
subgradient assumption, as demonstrated by Theorem 4.1. The conditions on
$\beta$ and $\beta_{k}$ in these two cases can be derived similarly to (28)
and (29), respectively.
In the following, we establish the global convergence and rates of LiMEAL
under assumptions required for Theorem 4.1(a) and the KŁ property.
Specifically, let $\hat{x}^{k}:=x^{k-1},\ \hat{z}^{k}:=z^{k-1},\
{y}^{k}:=(x^{k},z^{k},\lambda^{k},\hat{x}^{k},\hat{z}^{k}),\ \forall k\geq 1,$
${y}:=(x,z,\lambda,\hat{x},\hat{z})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{R}^{n},$
and
$\displaystyle{\cal P}_{\mathrm{limeal}}({y}):={\cal
P}_{\beta}(x,z,\lambda)+4\alpha\left(\|z-\hat{z}\|^{2}+\gamma^{2}L_{h}^{2}\|x-\hat{x}\|^{2}\right).$
(43)
###### Proposition 3 (Global convergence and rates of LiMEAL)
Suppose that Assumptions 1 and 3(a)-(c) hold and that the sequence
$\\{(x^{k},z^{k},\lambda^{k})\\}$ generated by LiMEAL (9) is bounded. If
$\gamma\in(0,\frac{1}{\rho_{g}+L_{h}})$, $\eta\in(0,2)$,
$0<\alpha<\min\left\\{\frac{1}{8\gamma}\left(\frac{2}{\eta}-1\right),\frac{1-\gamma(\rho_{g}+L_{h})}{8\gamma\left((1+\gamma
L_{g})^{2}+\gamma^{2}L_{h}^{2}\right)}\right\\}$, and ${\cal
P}_{\mathrm{limeal}}$ satisfies the KŁ property at some point
$y^{*}:=(x^{*},x^{*},\lambda^{*},x^{*},x^{*})$ with an exponent of
$\theta\in[0,1)$, where $(x^{*},\lambda^{*})$ is a limit point of
$\\{(x^{k},\lambda^{k})\\}$, then
1. (a)
the whole sequence $\\{\hat{y}^{k}:=(x^{k},z^{k},\lambda^{k})\\}$ converges to
$\hat{y}^{*}:=(x^{*},x^{*},\lambda^{*})$; and
2. (b)
all the rates of convergence results in Proposition 1(b) also hold for LiMEAL.
###### Remark 2
The results established in this section are more general than those in Zhang-Luo18 and are obtained under weaker assumptions on $h$ and for a more general
class of $g$. Specifically, as discussed in Section 1.2, the algorithm studied
in Zhang-Luo18 is a prox-linear version of LiMEAL with $g$ being the indicator
function of a box constraint set. In Zhang-Luo18 , global convergence and a
linear rate of proximal inexact ALM were proved for quadratic programming,
where the augmented Lagrangian satisfies the KŁ inequality with exponent
$1/2$. Besides, the strict complementarity condition required in Zhang-Luo18
is also removed in this paper for LiMEAL.
## 5 Main Proofs
In this section, we first prove some lemmas and then present the proofs of our
main convergence results.
### 5.1 Preliminary Lemmas
#### 5.1.1 Lemmas on Iteration Complexity and Global Convergence
The first lemma concerns the convergence speed of a nonnegative sequence
$\\{\xi_{k}\\}$ satisfying the following relation
$\displaystyle\tilde{\eta}\xi_{k}^{2}\leq({\cal E}_{k}-{\cal
E}_{k+1})+\tilde{\epsilon}_{k}^{2},\ \forall k\in\mathbb{N},$ (44)
where $\tilde{\eta}>0$, $\\{{\cal E}_{k}\\}$ and $\\{\tilde{\epsilon}_{k}\\}$
are two nonnegative sequences, and
$\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}<+\infty$.
###### Lemma 1
For any sequence $\\{\xi_{k}\\}$ satisfying (44),
$\tilde{\xi}_{k}:=\min_{1\leq t\leq k}\xi_{t}=o(1/\sqrt{k})$.
###### Proof
Summing (44) over $k$ from $1$ to $K$ and letting $K\rightarrow+\infty$ yields
$\displaystyle\sum_{k=1}^{\infty}\xi_{k}^{2}\leq\tilde{\eta}^{-1}\left({\cal
E}_{1}+\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}\right)<+\infty,$
which implies the desired convergence speed, since
$\frac{k}{2}\tilde{\xi}_{k}^{2}\leq\sum_{\frac{k}{2}\leq j\leq
k}{\xi}_{j}^{2}\rightarrow 0$ as $k\rightarrow\infty$, as proved in (Deng-parallelADMM17, Lemma 1.1).
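The mechanism behind Lemma 1 can also be checked numerically: square summability forces the running minimum to decay faster than $1/\sqrt{k}$. A small sketch, with an arbitrary square-summable $\xi_{k}$ of our choosing:

```python
import numpy as np

# Illustration of Lemma 1: if sum xi_k^2 < inf, then sqrt(k)*min_{t<=k} xi_t -> 0.
# Here xi_k = 1/(k*log(k+1)) is square-summable.
k = np.arange(1, 200001)
xi = 1.0 / (k * np.log(k + 1))
running_min = np.minimum.accumulate(xi)
print(np.sqrt(k[[99, 9999, 199999]]) * running_min[[99, 9999, 199999]])
# The printed values decrease toward 0, matching the o(1/sqrt(k)) claim.
```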
Next, we provide a lemma showing the convergence speed of a nonnegative
sequence $\\{\xi_{k}\\}$ satisfying the following relation instead of (44):
$\displaystyle\tilde{\eta}\xi_{k}^{2}\leq({\cal E}_{k}-{\cal
E}_{k+1})+\tilde{\epsilon}_{k}^{2}+{\alpha}_{k}\tilde{L},\ \forall
k\in\mathbb{N},$ (45)
where $\tilde{\eta}>0,$ $\tilde{L}>0$, $\\{{\cal E}_{k}\\}$,
$\\{{\alpha}_{k}\\}$ and $\\{\tilde{\epsilon}_{k}\\}$ are nonnegative
sequences, and $\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}<+\infty$.
###### Lemma 2
Pick $K\geq 1$. Let $\\{\xi_{k}\\}$ be a nonnegative sequence satisfying (45).
Set $\alpha_{k}\equiv\frac{\tilde{\alpha}}{K}$ for some $\tilde{\alpha}>0$.
Then $\tilde{\xi}_{K}:=\min_{1\leq k\leq K}\xi_{k}\leq\tilde{c}/\sqrt{K}$ for
some constant $\tilde{c}>0$.
###### Proof
Summing (45) over $k$ from $1$ to $K$ yields
$\displaystyle\sum_{k=1}^{K}\xi_{k}^{2}\leq\frac{{\cal
E}_{1}+\sum_{k=1}^{K}\tilde{\epsilon}_{k}^{2}+\tilde{L}\sum_{k=1}^{K}\alpha_{k}}{\tilde{\eta}}.$
From $\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}<+\infty$ and
$\sum_{k=1}^{K}{\alpha}_{k}=\tilde{\alpha}$, we get
$K\tilde{\xi}_{K}^{2}\leq\sum_{k=1}^{K}\xi_{k}^{2}\leq\frac{{\cal
E}_{1}+\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}+\tilde{L}\tilde{\alpha}}{\tilde{\eta}}<+\infty$.
The result follows with $\tilde{c}:=\sqrt{{\cal
E}_{1}+\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}^{2}+\tilde{L}\tilde{\alpha}}/\sqrt{{\tilde{\eta}}}$.
In both Lemmas 1 and 2, the nonnegativity assumption on the sequence $\\{{\cal
E}_{k}\\}$ can be relaxed to lower boundedness.
The following lemma presents the global convergence and rate of a sequence
generated by an algorithm for a nonconvex optimization problem, based on
the Kurdyka-Łojasiewicz inequality; the global convergence result is
from (Attouch13, Theorem 2.9), while the rate results are from (Attouch-Bolte09, Theorem 5).
###### Lemma 3 (Existing global convergence and rate)
Let ${\cal L}$ be a proper, lower semicontinuous function, and $\\{u^{k}\\}$
be a sequence that satisfies the following three conditions:
1. (P1)
(Sufficient decrease condition) there exists a constant $a_{1}>0$ such that
${\cal L}(u^{k+1})+a_{1}\|u^{k+1}-u^{k}\|^{2}\leq{\cal L}(u^{k}),\ \forall
k\in\mathbb{N};$
2. (P2)
(Bounded subgradient condition) for each $k\in\mathbb{N}$, there exists
$v^{k+1}\in\partial{\cal L}(u^{k+1})$ such that $\|v^{k+1}\|\leq
a_{2}\|u^{k+1}-u^{k}\|$ for some constant $a_{2}>0$;
3. (P3)
(Continuity condition) there exist a subsequence $\\{u^{k_{j}}\\}$ and
$\tilde{u}$ such that $u^{k_{j}}\rightarrow\tilde{u}$ and ${\cal
L}(u^{k_{j}})\rightarrow{\cal L}(\tilde{u})$ as $j\rightarrow\infty$.
If ${\cal L}$ satisfies the KŁ inequality at $\tilde{u}$ with an exponent of
$\theta$, then
1. (1)
$\\{u^{k}\\}$ converges to $\tilde{u}$; and
2. (2)
depending on $\theta$, (i) if $\theta=0$, then $\\{u^{k}\\}$ converges within
a finite number of iterations; (ii) if $\theta\in(0,\frac{1}{2}]$, then
$\|u^{k}-\tilde{u}\|\leq c\tau^{k}$ for all $k\geq k_{0}$, for certain
$k_{0}>0,c>0,\tau\in(0,1)$; and (iii) if $\theta\in(\frac{1}{2},1)$, then
$\|u^{k}-\tilde{u}\|\leq ck^{-\frac{1-\theta}{2\theta-1}}$ for all $k\geq
k_{0}$, for certain $k_{0}>0,c>0$.
#### 5.1.2 Lemmas on Controlling Dual Ascent by Primal Descent
In the following, we establish several lemmas showing that the dual ascent
quantities of the proposed algorithms can be controlled by the primal descent
quantities.
###### Lemma 4 (MEAL: controlling dual by primal)
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by MEAL (5).
Take $\gamma\in(0,\rho^{-1})$.
1. (a)
Under Assumptions 1, 2(a), and 2(b), we have for any $k\geq 1$,
$\displaystyle\|A^{T}(\lambda^{k+1}-\lambda^{k})\|\leq(L_{f}+\gamma^{-1})\|x^{k+1}-x^{k}\|+\gamma^{-1}\|z^{k}-z^{k-1}\|,$
(46) $\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
2c_{\gamma,A}^{-1}\left[(\gamma
L_{f}+1)^{2}\|x^{k+1}-x^{k}\|^{2}+\|z^{k}-z^{k-1}\|^{2}\right],$ (47)
where $c_{\gamma,A}=\gamma^{2}\tilde{\sigma}_{\min}(A^{T}A)$.
2. (b)
Alternatively, under Assumptions 1, 2(a), and 2(c), we have for any $k\geq 1$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
3c_{\gamma,A}^{-1}\left[4\gamma^{2}\hat{L}^{2}_{f}+\|x^{k+1}-x^{k}\|^{2}+\|z^{k}-z^{k-1}\|^{2}\right].$
(48)
###### Proof
The update (5) of $x^{k+1}$ implies
$\displaystyle
x^{k+1}=\operatorname*{argmin}_{x}\left\\{f(x)+\langle\lambda^{k},Ax-b\rangle+\frac{\beta_{k}}{2}\|Ax-b\|^{2}+\frac{1}{2\gamma}\|x-z^{k}\|^{2}\right\\}.$
Its optimality condition and the update (5) of $\lambda^{k+1}$ in MEAL
together give us
$\displaystyle 0\in\partial\left(f+\frac{1}{2\gamma}\|\cdot-(z^{k}-\gamma
A^{T}\lambda^{k+1})\|^{2}\right)(x^{k+1}).$ (49)
Let $w^{k+1}:=z^{k}-\gamma A^{T}\lambda^{k+1},\ \forall k\in\mathbb{N}.$ The
above inclusion implies
$\displaystyle x^{k+1}=\mathrm{Prox}_{\gamma,f}(w^{k+1}),$ (50)
and thus by (13),
$\displaystyle A^{T}\lambda^{k+1}$ $\displaystyle=-\nabla{\cal
M}_{\gamma,f}(w^{k+1})-\gamma^{-1}(x^{k+1}-z^{k}),$ (51)
which further implies
$\displaystyle\|A^{T}(\lambda^{k+1}-\lambda^{k})\|$
$\displaystyle=\|(\nabla{\cal M}_{\gamma,f}(w^{k+1})-\nabla{\cal
M}_{\gamma,f}(w^{k}))+\gamma^{-1}(x^{k+1}-x^{k})-\gamma^{-1}(z^{k}-z^{k-1})\|.$
(a) With Assumption 2(b), the above equality yields
$\displaystyle\|A^{T}(\lambda^{k+1}-\lambda^{k})\|\leq(L_{f}+\gamma^{-1})\|x^{k+1}-x^{k}\|+\gamma^{-1}\|z^{k}-z^{k-1}\|,$
which leads to (46). By Assumption 1 and the relation
$\lambda^{k+1}-\lambda^{k}=\beta_{k}(Ax^{k+1}-b)$,
$(\lambda^{k+1}-\lambda^{k})\in\mathrm{Im}(A)$. Thus, from the above
inequality, we deduce
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|\leq\tilde{\sigma}_{\min}^{-1/2}(A^{T}A)\left[(L_{f}+\gamma^{-1})\|x^{k+1}-x^{k}\|+\gamma^{-1}\|z^{k}-z^{k-1}\|\right],$
and, further by $(u+v)^{2}\leq 2(u^{2}+v^{2})$ for any $u,v\in\mathbb{R}$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
2\tilde{\sigma}_{\min}^{-1}(A^{T}A)\left[(L_{f}+\gamma^{-1})^{2}\|x^{k+1}-x^{k}\|^{2}+\gamma^{-2}\|z^{k}-z^{k-1}\|^{2}\right].$
(b) From Assumption 2(c), we have
$\displaystyle\|A^{T}(\lambda^{k+1}-\lambda^{k})\|\leq
2\hat{L}_{f}+\gamma^{-1}(\|x^{k+1}-x^{k}\|+\|z^{k}-z^{k-1}\|),$
which implies
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|\leq\tilde{\sigma}_{\min}^{-1/2}(A^{T}A)\left[2\hat{L}_{f}+\gamma^{-1}(\|x^{k+1}-x^{k}\|+\|z^{k}-z^{k-1}\|)\right],$
and further by $(a+c+d)^{2}\leq 3(a^{2}+c^{2}+d^{2})$ for any
$a,c,d\in\mathbb{R}$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
3\tilde{\sigma}_{\min}^{-1}(A^{T}A)\left[4\hat{L}^{2}_{f}+\gamma^{-2}(\|x^{k+1}-x^{k}\|^{2}+\|z^{k}-z^{k-1}\|^{2})\right].$
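The derivation above hinges on the Moreau-envelope gradient identity $\nabla{\cal M}_{\gamma,f}(w)=\gamma^{-1}(w-\mathrm{Prox}_{\gamma,f}(w))$ used in (51); as a sanity check, the following sketch verifies it by finite differences for the simple one-dimensional choice $f=|\cdot|$ (our example, not one from the paper):

```python
import numpy as np

# Check grad M_{gamma,f}(w) = (w - Prox_{gamma,f}(w))/gamma (cf. (13), used in
# (51)) for f = |.| in 1D: the prox is soft-thresholding and the envelope is
# the Huber function.
gamma = 0.5

def prox(w):
    return np.sign(w) * np.maximum(np.abs(w) - gamma, 0.0)

def envelope(w):
    p = prox(w)
    return np.abs(p) + (w - p) ** 2 / (2 * gamma)

for w in [-2.0, -0.3, 0.1, 1.7]:
    h = 1e-6
    fd = (envelope(w + h) - envelope(w - h)) / (2 * h)  # finite difference
    print(fd, (w - prox(w)) / gamma)                     # the two columns agree
```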
A similar lemma also holds for iMEAL, as follows.
###### Lemma 5 (iMEAL: controlling dual by primal)
Let $(x^{k},z^{k},\lambda^{k})$ be a sequence generated by iMEAL (7). Take
$\gamma\in(0,\rho^{-1})$.
1. (a)
Under Assumptions 1, 2(a), and 2(b), for any $k\geq 1$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
3c_{\gamma,A}^{-1}\left[(\gamma
L_{f}+1)^{2}\|x^{k+1}-x^{k}\|^{2}+\|z^{k}-z^{k-1}\|^{2}+\gamma^{2}(\epsilon_{k}+\epsilon_{k-1})^{2}\right].$
2. (b)
Alternatively, under Assumptions 1, 2(a), and 2(c), for any $k\geq 1$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}\leq
4c_{\gamma,A}^{-1}\left[4\gamma^{2}\hat{L}^{2}_{f}+\|x^{k+1}-x^{k}\|^{2}+\|z^{k}-z^{k-1}\|^{2}+\gamma^{2}(\epsilon_{k}+\epsilon_{k-1})^{2}\right].$
###### Proof
The proof is similar to that of Lemma 4, but with (49) being replaced by
$\displaystyle
0\in\partial\left(f+\frac{1}{2\gamma}\left\|\cdot-\Big{(}z^{k}-\gamma(A^{T}\lambda^{k+1}-s^{k})\Big{)}\right\|^{2}\right)(x^{k+1}),$
and thus $w^{k+1}:=z^{k}-\gamma(A^{T}\lambda^{k+1}-s^{k}).$
###### Lemma 6 (LiMEAL: controlling dual by primal)
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by LiMEAL (9).
Take $\gamma\in(0,\rho_{g}^{-1})$.
1. (a)
Under Assumptions 1, 3(a)-(b), and 3(c), for any $k\geq 1$,
$\displaystyle\|A^{T}(\lambda^{k+1}-\lambda^{k})\|$ (52)
$\displaystyle\leq(L_{g}+\gamma^{-1})\|x^{k+1}-x^{k}\|+L_{h}\|x^{k}-x^{k-1}\|+\gamma^{-1}\|z^{k}-z^{k-1}\|,$
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}$ (53) $\displaystyle\leq
3c_{\gamma,A}^{-1}\left[(\gamma
L_{g}+1)^{2}\|x^{k+1}-x^{k}\|^{2}+\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}+\|z^{k}-z^{k-1}\|^{2}\right].$
2. (b)
Alternatively, under Assumptions 1, 3(a)-(b), and 3(d), for any $k\geq 1$,
$\displaystyle\|\lambda^{k+1}-\lambda^{k}\|^{2}$ (54) $\displaystyle\leq
4c_{\gamma,A}^{-1}\left[4\gamma^{2}\hat{L}^{2}_{g}+\|x^{k+1}-x^{k}\|^{2}+\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}+\|z^{k}-z^{k-1}\|^{2}\right].$
###### Proof
The proof is also similar to that of Lemma 4, but (49) needs to be modified to
$\displaystyle
0\in\partial\left(g+\frac{1}{2\gamma}\|\cdot-\Big{(}z^{k}-\gamma(A^{T}\lambda^{k+1}+\nabla
h(x^{k}))\Big{)}\|^{2}\right)(x^{k+1}),$
and thus $w^{k+1}:=z^{k}-\gamma(A^{T}\lambda^{k+1}+\nabla h(x^{k})).$
#### 5.1.3 Lemmas on One-step Progress
Here, we provide several lemmas to characterize the progress achieved by a
single iteration of the proposed algorithms.
###### Lemma 7 (MEAL: one-step progress)
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by MEAL (5).
Take Assumption 2(a), $\gamma\in(0,\rho^{-1})$, and $\eta\in(0,2)$. Then for
any $k\in\mathbb{N}$,
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^{k}\|^{2}$
(55)
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}+\frac{1}{4}\gamma\eta(2-\eta)\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-\alpha_{k}c_{\gamma,A}\|\lambda^{k+1}-\lambda^{k}\|^{2},$
where $\alpha_{k}$ is presented in (26) and
$c_{\gamma,A}=\gamma^{2}\tilde{\sigma}_{\min}(A^{T}A)$.
###### Proof
By the update (5) of $x^{k+1}$ in MEAL, $x^{k+1}$ is obtained by minimizing the
strongly convex function ${\cal P}_{\beta_{k}}(x,z^{k},\lambda^{k})$, whose
modulus is at least $(\gamma^{-1}-\rho)$; hence we have
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k}}(x^{k+1},z^{k},\lambda^{k})\geq\frac{\gamma^{-1}-\rho}{2}\|x^{k+1}-x^{k}\|^{2}.$
(56)
Next, recall that in (5) the update $z^{k+1}=z^{k}+\eta(x^{k+1}-z^{k})$ implies
$\displaystyle 2x^{k+1}-z^{k}-z^{k+1}=(2\eta^{-1}-1)(z^{k+1}-z^{k}).$ (57)
So we have
$\displaystyle{\cal P}_{\beta_{k}}(x^{k+1},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k}}(x^{k+1},z^{k+1},\lambda^{k})=\frac{1}{2\gamma}(\|x^{k+1}-z^{k}\|^{2}-\|x^{k+1}-z^{k+1}\|^{2})$
$\displaystyle=\frac{1}{2\gamma}\langle
z^{k+1}-z^{k},2x^{k+1}-z^{k}-z^{k+1}\rangle=\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}.$
Moreover, by the update $\lambda^{k+1}=\lambda^{k}+\beta_{k}(Ax^{k+1}-b)$, we
have
$\displaystyle{\cal P}_{\beta_{k}}(x^{k+1},z^{k+1},\lambda^{k})-{\cal
P}_{\beta_{k}}(x^{k+1},z^{k+1},\lambda^{k+1})=-\beta_{k}^{-1}\|\lambda^{k+1}-\lambda^{k}\|^{2},$
and
$\displaystyle{\cal P}_{\beta_{k}}(x^{k+1},z^{k+1},\lambda^{k+1})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})=\frac{\beta_{k}-\beta_{k+1}}{2\beta_{k}^{2}}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
Combining the above four estimates yields
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})$ (58)
$\displaystyle\geq\frac{(1-\rho\gamma)}{2\gamma}\|x^{k+1}-x^{k}\|^{2}+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-\frac{\beta_{k}+\beta_{k+1}}{2\beta_{k}^{2}}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
Then, we establish (55) from (58). By the definition (20) of
$\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})$, we have
$\displaystyle\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}=(\eta\gamma)^{-2}\|z^{k}-z^{k+1}\|^{2}+\beta_{k}^{-2}\|\lambda^{k+1}-\lambda^{k}\|^{2},$
which implies
$\displaystyle(\eta\gamma)^{-2}\|z^{k}-z^{k+1}\|^{2}=\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-\beta_{k}^{-2}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
Substituting this into the above inequality yields
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}+\frac{1}{4}\gamma\eta(2-\eta)\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-\alpha_{k}c_{\gamma,A}\|\lambda^{k+1}-\lambda^{k}\|^{2},$
where
$\alpha_{k}=\frac{\beta_{k}+\beta_{k+1}+\gamma\eta(1-\eta/2)}{2c_{\gamma,A}\beta_{k}^{2}}$.
This finishes the proof.
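The inner-product step above is pure algebra; the following quick numerical check of the identity around (57), with random vectors of our choosing, may help the reader:

```python
import numpy as np

# Sanity check of the algebra around (57): with z_next = z + eta*(x - z),
# ||x-z||^2 - ||x-z_next||^2 equals (2/eta - 1)*||z_next - z||^2.
rng = np.random.default_rng(0)
x, z = rng.standard_normal(5), rng.standard_normal(5)
for eta in [0.3, 1.0, 1.7]:
    z_next = z + eta * (x - z)
    lhs = np.dot(x - z, x - z) - np.dot(x - z_next, x - z_next)
    rhs = (2.0 / eta - 1.0) * np.dot(z_next - z, z_next - z)
    print(np.isclose(lhs, rhs))  # True for each eta in (0, 2)
```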
Next, we provide a lemma for iMEAL (7).
###### Lemma 8 (iMEAL: one-step progress)
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by iMEAL (7).
Take Assumptions 2(a) and (b), $\gamma\in(0,\rho^{-1})$, and $\eta\in(0,2)$.
It holds that
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})$ (59)
$\displaystyle\geq\frac{(1-\gamma\rho)}{2\gamma}\|x^{k+1}-x^{k}\|^{2}+\langle
s^{k},x^{k}-x^{k+1}\rangle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}$
$\displaystyle+\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-\alpha_{k}c_{\gamma,A}\|\lambda^{k+1}-\lambda^{k}\|^{2},\
\forall k\in\mathbb{N}.$
###### Proof
The proof of this lemma is similar to that of Lemma 7 and uses the descent
quantity along the update of $x^{k+1}$. By the update (7) of $x^{k+1}$ in
iMEAL and noticing that ${\cal
L}_{\beta_{k}}(x,\lambda^{k})+\frac{\|x-z^{k}\|^{2}}{2\gamma}$ is strongly
convex with modulus at least $(\gamma^{-1}-\rho)$, we have
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})$
$\displaystyle\geq{\cal P}_{\beta_{k}}(x^{k+1},z^{k},\lambda^{k})+\langle
s^{k},x^{k}-x^{k+1}\rangle+\frac{\gamma^{-1}-\rho}{2}\|x^{k+1}-x^{k}\|^{2}.$
By replacing (56) in the proof of Lemma 7 with the above inequality and
following the remainder of its proof, we obtain the following inequality:
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{1-\gamma\rho}{2\gamma}\|x^{k+1}-x^{k}\|^{2}+\langle
s^{k},x^{k}-x^{k+1}\rangle$
$\displaystyle+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-\frac{\beta_{k}+\beta_{k+1}}{2\beta_{k}^{2}}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
We can establish (59) with a derivation similar to that in the proof of Lemma
7.
Also, we state a similar lemma for one-step progress of LiMEAL (9) as follows.
###### Lemma 9 (LiMEAL: one-step progress)
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be a sequence generated by LiMEAL (9).
Take Assumptions 3(a) and (b), $\gamma\in(0,\rho_{g}^{-1})$, and
$\eta\in(0,2)$. We have
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})$ (60)
$\displaystyle\geq\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-\frac{1}{4}\gamma(2-\eta)\eta
L_{h}^{2}\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}+\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}-\alpha_{k}c_{\gamma,A}\|\lambda^{k+1}-\lambda^{k}\|^{2},\
\forall k\in\mathbb{N}.$
###### Proof
The proof of this lemma is similar to that of Lemma 7. By the update (9) of
$x^{k+1}$ in LiMEAL, $x^{k+1}$ is obtained by minimizing the
$(\gamma^{-1}-\rho_{g})$-strongly convex function ${\cal
L}_{\beta_{k},f^{k}}(x,\lambda^{k})+\frac{\|x-z^{k}\|^{2}}{2\gamma}$, so
$\displaystyle{\cal
L}_{\beta_{k},f^{k}}(x^{k},\lambda^{k})+\frac{\|x^{k}-z^{k}\|^{2}}{2\gamma}\geq{\cal
L}_{\beta_{k},f^{k}}(x^{k+1},\lambda^{k})+\frac{\|x^{k+1}-z^{k}\|^{2}}{2\gamma}+\frac{\gamma^{-1}-\rho_{g}}{2}\|x^{k+1}-x^{k}\|^{2}.$
By definition, ${\cal L}_{\beta_{k},f^{k}}(x,\lambda)=h(x^{k})+\langle\nabla
h(x^{k}),x-x^{k}\rangle+g(x)+\langle\lambda,Ax-b\rangle+\frac{\beta_{k}}{2}\|Ax-b\|^{2}$
and ${\cal
P}_{\beta_{k}}(x,z,\lambda)=h(x)+g(x)+\langle\lambda,Ax-b\rangle+\frac{\beta_{k}}{2}\|Ax-b\|^{2}+\frac{\|x-z\|^{2}}{2\gamma}$,
so the above inequality implies
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})$
$\displaystyle\geq{\cal
P}_{\beta_{k}}(x^{k+1},z^{k},\lambda^{k})+\frac{\gamma^{-1}-\rho_{g}}{2}\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle-(h(x^{k+1})-h(x^{k})-\langle\nabla
h(x^{k}),x^{k+1}-x^{k}\rangle)$ $\displaystyle\geq{\cal
P}_{\beta_{k}}(x^{k+1},z^{k},\lambda^{k})+\frac{\gamma^{-1}-\rho_{g}-L_{h}}{2}\|x^{k+1}-x^{k}\|^{2},$
where the second inequality is due to the $L_{h}$-Lipschitz continuity of
$\nabla h$. By replacing (56) in the proof of Lemma 7 with the above
inequality and following the remainder of that proof, we obtain
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})$ (61)
$\displaystyle\geq\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}\|x^{k+1}-x^{k}\|^{2}+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-\frac{\beta_{k}+\beta_{k+1}}{2\beta_{k}^{2}}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
Next, based on the above inequality, we establish (60). By the definition (39)
of $g_{\mathrm{limeal}}^{k}$ and noticing that
$z^{k}-x^{k+1}=-\eta^{-1}(z^{k+1}-z^{k})$ by the update (9) of $z^{k+1}$, we
have
$\|g_{\mathrm{limeal}}^{k}\|^{2}\leq
2L_{h}^{2}\|x^{k+1}-x^{k}\|^{2}+2(\gamma\eta)^{-2}\|z^{k+1}-z^{k}\|^{2}+\beta_{k}^{-2}\|\lambda^{k+1}-\lambda^{k}\|^{2},$
which implies
$(\gamma\eta)^{-2}\|z^{k+1}-z^{k}\|^{2}\geq\frac{1}{2}\|g_{\mathrm{limeal}}^{k}\|^{2}-\frac{1}{2}\beta_{k}^{-2}\|\lambda^{k+1}-\lambda^{k}\|^{2}-L_{h}^{2}\|x^{k+1}-x^{k}\|^{2}.$
Substituting this inequality into (61) yields
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})$
$\displaystyle\geq\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-\frac{1}{4}\gamma(2-\eta)\eta
L_{h}^{2}\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}+\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}-\alpha_{k}c_{\gamma,A}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
This finishes the proof of this lemma.
### 5.2 Proofs for Convergence of MEAL
Based on the above lemmas, we give proofs of Theorem 3.1 and Proposition 1.
#### 5.2.1 Proof of Theorem 3.1
###### Proof
We first establish the $o(1/\sqrt{k})$ rate of convergence under the implicit
Lipschitz subgradient assumption (Assumption 2(b)) and then the convergence
rate result under the implicit bounded subgradient assumption (Assumption
2(c)).
(a) In the first case, $\beta_{k}=\beta$ and $\alpha_{k}=\alpha$. Substituting
(47) into (55) yields
$\displaystyle{\cal P}_{\beta}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta}(z^{k},\lambda^{k})\|^{2}$
$\displaystyle+\left(\frac{(1-\gamma\rho)}{2\gamma}-2\alpha(1+\gamma
L_{f})^{2}\right)\|x^{k+1}-x^{k}\|^{2}+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}$
$\displaystyle-2\alpha\|z^{k}-z^{k-1}\|^{2}.$
By the definition (24) of ${\cal E}_{\mathrm{meal}}^{k}$, the above inequality
implies
$\displaystyle{\cal E}_{\mathrm{meal}}^{k}-{\cal E}_{\mathrm{meal}}^{k+1}$
$\displaystyle\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta}(z^{k},\lambda^{k})\|^{2}+\left(\frac{1}{4\gamma}(\frac{2}{\eta}-1)-2\alpha\right)\|z^{k+1}-z^{k}\|^{2}$
$\displaystyle+\left(\frac{1-\gamma\rho}{2\gamma}-2\alpha(1+\gamma
L_{f})^{2}\right)\|x^{k+1}-x^{k}\|^{2}$ (62)
$\displaystyle\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta}(z^{k},\lambda^{k})\|^{2},$
where the second inequality holds due to the condition on $\alpha$. Thus,
claim (a) follows from the above inequality, Lemma 1 with
$\tilde{\epsilon}_{k}=0$ and the lower boundedness of $\\{{\cal
E}_{\mathrm{meal}}^{k}\\}$.
(b) Similarly, substituting (48) into (55) and using the definition (25) of
$\tilde{\cal E}_{\mathrm{meal}}^{k}$, we have
$\displaystyle\tilde{\cal E}_{\mathrm{meal}}^{k}-\tilde{\cal
E}_{\mathrm{meal}}^{k+1}\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-12\alpha_{k}\gamma^{2}\hat{L}_{f}^{2}$
$\displaystyle+\left(\frac{1-\gamma\rho}{2\gamma}-3\alpha_{k}\right)\|x^{k+1}-x^{k}\|^{2}+\left(\frac{1}{4\gamma}(\frac{2}{\eta}-1)-3\alpha_{k+1}\right)\|z^{k+1}-z^{k}\|^{2}.$
With $\alpha_{k}=\frac{\alpha^{*}}{K}$,
$\displaystyle\tilde{\cal E}_{\mathrm{meal}}^{k}-\tilde{\cal
E}_{\mathrm{meal}}^{k+1}\geq\frac{1}{2}\gamma(1-\eta/2)\eta\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-12\alpha_{k}\gamma^{2}\hat{L}_{f}^{2},$
which yields claim (b) by Lemma 2 with $\tilde{\epsilon}_{k}=0$ and the lower
boundedness of $\\{\tilde{\cal E}_{\mathrm{meal}}^{k}\\}$.
#### 5.2.2 Proof of Proposition 1
###### Proof
With Lemma 3, we only need to check that conditions $(P1)$-$(P3)$ hold for MEAL.
(a) Establishing $(P1)$: With $a:=\frac{\gamma\eta(2-\eta)}{4\beta}$, we have
$\frac{1+a}{\beta c_{\gamma,A}}=\alpha$ for $\alpha$ in (27). Substituting
(47) into (58) with fixed $\beta_{k}$ yields
$\displaystyle{\cal P}_{\beta}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})\geq(\frac{1-\rho\gamma}{2\gamma}-2\alpha(\gamma
L_{f}+1)^{2})\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-2\alpha\|z^{k}-z^{k-1}\|^{2}+a\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
By the definition (30) of ${\cal P}_{\mathrm{meal}}$ and the assumption on
$\alpha$, we deduce from the above inequality:
$\displaystyle{\cal P}_{\mathrm{meal}}(y^{k})-{\cal
P}_{\mathrm{meal}}(y^{k+1})\geq(\frac{1-\rho\gamma}{2\gamma}-2\alpha(\gamma
L_{f}+1)^{2})\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\left(\frac{1}{2\gamma}(\frac{2}{\eta}-1)-3\alpha\right)\|z^{k+1}-z^{k}\|^{2}+\alpha\|z^{k}-z^{k-1}\|^{2}+a\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|^{2}$
$\displaystyle\geq c_{1}\|y^{k+1}-y^{k}\|^{2},$ (63)
where $c_{1}:=\min\left\\{\frac{1-\rho\gamma}{2\gamma}-2\alpha(\gamma
L_{f}+1)^{2},\alpha,a\beta^{-1}\right\\}$ by
$\frac{1}{2\gamma}(\frac{2}{\eta}-1)-3\alpha\geq\alpha$. This yields $(P1)$
for MEAL.
(b) Establishing $(P2)$: Note that ${\cal
P}_{\mathrm{meal}}(y)=f(x)+\langle\lambda,Ax-b\rangle+\frac{\beta}{2}\|Ax-b\|^{2}+\frac{1}{2\gamma}\|x-z\|^{2}+3\alpha\|z-\hat{z}\|^{2}$.
The optimality condition from the update of $x^{k+1}$ in (5) is
$\displaystyle 0\in\partial
f(x^{k+1})+A^{T}\lambda^{k+1}+\gamma^{-1}(x^{k+1}-z^{k}),$
which implies
$\gamma^{-1}(z^{k}-z^{k+1})+A^{T}(\lambda^{k+1}-\lambda^{k})\in\partial_{x}{\cal
P}_{\mathrm{meal}}(y^{k+1}).$ From the update of $z^{k+1}$ in (5),
$z^{k+1}-x^{k+1}=-(1-\eta)\eta^{-1}(z^{k+1}-z^{k})$ and thus
$\displaystyle\partial_{z}{\cal
P}_{\mathrm{meal}}(y^{k+1})=\gamma^{-1}(z^{k+1}-x^{k+1})+6\alpha(z^{k+1}-z^{k})=\left(6\alpha-\frac{1-\eta}{\eta\gamma}\right)(z^{k+1}-z^{k}).$
The update of $\lambda^{k+1}$ in (5) yields $\partial_{\lambda}{\cal
P}_{\mathrm{meal}}(y^{k+1})=Ax^{k+1}-b=\beta^{-1}(\lambda^{k+1}-\lambda^{k}).$
Moreover, it is easy to show $\partial_{\hat{z}}{\cal
P}_{\mathrm{meal}}(y^{k+1})=6\alpha(z^{k}-z^{k+1}).$ Thus, let
$v^{k+1}:=\left(\begin{array}[]{c}\gamma^{-1}(z^{k}-z^{k+1})+A^{T}(\lambda^{k+1}-\lambda^{k})\\\
\left(6\alpha-\frac{1-\eta}{\eta\gamma}\right)(z^{k+1}-z^{k})\\\
\beta^{-1}(\lambda^{k+1}-\lambda^{k})\\\
6\alpha(z^{k}-z^{k+1})\end{array}\right),$
which obeys $v^{k+1}\in\partial{\cal P}_{\mathrm{meal}}(y^{k+1})$ and
$\displaystyle\|v^{k+1}\|$
$\displaystyle\leq\left(\gamma^{-1}+\left|6\alpha-\frac{1-\eta}{\eta\gamma}\right|+6\alpha\right)\|z^{k+1}-z^{k}\|+\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|+\|A^{T}(\lambda^{k+1}-\lambda^{k})\|$
$\displaystyle\leq\left(\gamma^{-1}+\left|6\alpha-\frac{1-\eta}{\eta\gamma}\right|+6\alpha\right)\|z^{k+1}-z^{k}\|+\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|$
$\displaystyle+(L_{f}+\gamma^{-1})\|x^{k+1}-x^{k}\|+\gamma^{-1}\|\hat{z}^{k+1}-\hat{z}^{k}\|,$
where the second inequality is due to (46). This yields $(P2)$ for MEAL.
(c) Establishing $(P3)$: $(P3)$ follows from the boundedness assumption of
$\\{y^{k}\\}$, and the convergence of $\\{{\cal P}_{\mathrm{meal}}(y^{k})\\}$
is implied by $(P1)$. This finishes the proof.
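To see $(P1)$ in action, the following toy run of MEAL (5) with the strongly convex choice $f(x)=\frac{1}{2}\|x-d\|^{2}$ (so the $x$-update has a closed form) monitors the potential (30); the data and parameter values are synthetic stand-ins of our choosing, with $\alpha$ computed via the explicit form of (27) recalled in Section 6.1.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 4
A = rng.standard_normal((m, n))
d = rng.standard_normal(n)
b = A @ rng.standard_normal(n)
gamma, beta, eta = 0.5, 100.0, 1.0
sigma_min = np.linalg.eigvalsh(A @ A.T)[0]   # smallest nonzero eig. of A^T A (a.s.)
alpha = (2 * beta + gamma * eta * (1 - eta / 2)) / (2 * gamma**2 * sigma_min * beta**2)

def potential(x, z, lam, z_hat):             # P_meal of (30) with this f
    r = A @ x - b
    return (0.5 * (x - d) @ (x - d) + lam @ r + 0.5 * beta * r @ r
            + (x - z) @ (x - z) / (2 * gamma) + 3 * alpha * (z - z_hat) @ (z - z_hat))

x, z, lam = np.zeros(n), np.zeros(n), np.zeros(m)
H = (1 + 1 / gamma) * np.eye(n) + beta * A.T @ A   # Hessian of the x-subproblem
vals = []
for _ in range(50):
    x = np.linalg.solve(H, d + z / gamma - A.T @ lam + beta * A.T @ b)
    lam = lam + beta * (A @ x - b)
    z, z_prev = z + eta * (x - z), z
    vals.append(potential(x, z, lam, z_prev))
print(np.all(np.diff(vals) <= 1e-9))         # expect True: the potential decreases
```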
### 5.3 Proof for Convergence of iMEAL
In this subsection, we present the proof of Theorem 3.2 for iMEAL (7).
###### Proof (of Theorem 3.2)
We first show the $o(1/\sqrt{k})$ rate of convergence under Assumption 2(b)
and then the convergence rate result under Assumption 2(c).
(a) In this case, we use a fixed $\beta_{k}=\beta$ and thus
$\alpha_{k}=\alpha$. Substituting the inequality in Lemma 5(a) into (59) in
Lemma 8 yields
$\displaystyle{\cal P}_{\beta}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta}(z^{k},\lambda^{k})\|^{2}$
(64) $\displaystyle+\left(\frac{1-\gamma\rho}{2\gamma}-3\alpha(1+\gamma
L_{f})^{2}\right)\|x^{k+1}-x^{k}\|^{2}+\langle
s^{k},x^{k}-x^{k+1}\rangle-3\alpha\gamma^{2}(\epsilon_{k}+\epsilon_{k-1})^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-3\alpha\|z^{k}-z^{k-1}\|^{2}.$
Let $\delta:=2\left(\frac{(1-\gamma\rho)}{2\gamma}-3\alpha(1+\gamma
L_{f})^{2}\right)$. By the assumption
$0<\alpha<\min\left\\{\frac{1-\gamma\rho}{6\gamma(1+\gamma
L_{f})^{2}},\frac{1}{12\gamma}(\frac{2}{\eta}-1)\right\\}$, we have $\delta>0$
and further
$\displaystyle\langle
s^{k},x^{k}-x^{k+1}\rangle\geq-\frac{\delta}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{1}{2\delta}\|s^{k}\|^{2}\geq-\frac{\delta}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{1}{2\delta}(\epsilon_{k}+\epsilon_{k-1})^{2}.$
Substituting this into (64) and noting the definition (32) of ${\cal
E}_{\mathrm{imeal}}^{k}$, we have
$\displaystyle{\cal E}_{\mathrm{imeal}}^{k}-{\cal
E}_{\mathrm{imeal}}^{k+1}\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta}(z^{k},\lambda^{k})\|^{2}-(3\alpha\gamma^{2}+\frac{1}{2\delta})(\epsilon_{k}+\epsilon_{k-1})^{2},$
which yields claim (a) by the assumption
$\sum_{k=1}^{\infty}(\epsilon_{k})^{2}<+\infty$ and Lemma 1.
(b) Then we establish claim (b) under Assumption 2(c). Substituting the
inequality in Lemma 5(b) into (59) in Lemma 8 yields
$\displaystyle{\cal P}_{\beta_{k}}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta_{k+1}}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{1}{2}\gamma\eta(1-\eta/2)\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}$
$\displaystyle+\left(\frac{(1-\gamma\rho)}{2\gamma}-4\alpha_{k}\right)\|x^{k+1}-x^{k}\|^{2}+\langle
s^{k},x^{k}-x^{k+1}\rangle-4\gamma^{2}\alpha_{k}(\epsilon_{k}+\epsilon_{k-1})^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-4\alpha_{k}\|z^{k}-z^{k-1}\|^{2}-16\alpha_{k}\gamma^{2}\hat{L}_{f}^{2}.$
(65)
Let
$\hat{\alpha}^{*}:=\min\left\\{\frac{1-\rho\gamma}{8\gamma},\frac{1}{16\gamma}(\frac{2}{\eta}-1)\right\\}$
and
$\tilde{\delta}:=2\left(\frac{(1-\gamma\rho)}{2\gamma}-4\hat{\alpha}^{*}\right)>0$.
We have
$\displaystyle\langle
s^{k},x^{k}-x^{k+1}\rangle\geq-\frac{\tilde{\delta}}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{1}{2\tilde{\delta}}\|s^{k}\|^{2}\geq-\frac{\tilde{\delta}}{2}\|x^{k+1}-x^{k}\|^{2}-\frac{1}{2\tilde{\delta}}(\epsilon_{k}+\epsilon_{k-1})^{2}.$
Substituting this into (65), and by the definition (33) of $\tilde{\cal
E}_{\mathrm{imeal}}^{k}$ and the setting of $\alpha_{k}$, we have
$\displaystyle\tilde{\cal E}_{\mathrm{imeal}}^{k}-\tilde{\cal
E}_{\mathrm{imeal}}^{k+1}$
$\displaystyle\geq\frac{1}{2}\gamma(1-\eta/2)\eta\|\nabla\phi_{\beta_{k}}(z^{k},\lambda^{k})\|^{2}-(4\alpha_{k}\gamma^{2}+\frac{1}{2\tilde{\delta}})(\epsilon_{k}+\epsilon_{k-1})^{2}-16\alpha_{k}\gamma^{2}\hat{L}_{f}^{2},$
which yields claim (b) by the assumption
$\sum_{k=1}^{\infty}(\epsilon_{k})^{2}<+\infty$ and Lemma 2.
### 5.4 Proofs for Convergence of LiMEAL
Now, we present the proofs of the main convergence theorems for LiMEAL (9).
#### 5.4.1 Proof of Theorem 4.1
###### Proof
We first establish claim (a) and then claim (b) under the associated
assumptions.
(a) In this case, a fixed $\beta_{k}$ is used. Substituting (53) into (60)
yields
$\displaystyle{\cal P}_{\beta}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})\geq\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}$
$\displaystyle+\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-\frac{1}{4}\gamma(2-\eta)\eta
L_{h}^{2}-3(1+\gamma L_{g})^{2}\alpha\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\frac{1}{4\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-3\alpha(\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}+\|z^{k}-z^{k-1}\|^{2}).$
By the definition (41) of ${\cal E}^{k}_{\mathrm{limeal}}$, the above
inequality implies
$\displaystyle{\cal E}_{\mathrm{limeal}}^{k}-{\cal
E}_{\mathrm{limeal}}^{k+1}\geq\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}+\left(\frac{1}{4\gamma}(\frac{2}{\eta}-1)-3\alpha\right)\|z^{k+1}-z^{k}\|^{2}$
(66)
$\displaystyle+\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-\frac{1}{4}\gamma(2-\eta)\eta
L_{h}^{2}-3\alpha\left((1+\gamma
L_{g})^{2}+\gamma^{2}L_{h}^{2}\right)\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle\geq\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2},$
where the second inequality holds under the conditions in Theorem 4.1(a). This
shows the claim (a) by Lemma 1 and the lower boundedness of $\\{{\cal
E}_{\mathrm{limeal}}^{k}\\}$.
(b) Similarly, substituting (54) into (60) and using the definitions of
${\alpha}_{k}$ in (26) and $\tilde{\cal E}^{k}_{\mathrm{limeal}}$ in (42), we
obtain
$\displaystyle\tilde{\cal E}_{\mathrm{limeal}}^{k}-\tilde{\cal
E}_{\mathrm{limeal}}^{k+1}$
$\displaystyle\geq\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}-16{\alpha}_{k}\gamma^{2}\hat{L}_{g}^{2}+\left(\frac{1}{4\gamma}(\frac{2}{\eta}-1)-4{\alpha}_{k+1}\right)\|z^{k+1}-z^{k}\|^{2}$
$\displaystyle+\left(\frac{(1-\gamma(\rho_{g}+L_{h}))}{2\gamma}-\frac{1}{4}\gamma(2-\eta)\eta
L_{h}^{2}-4\alpha_{k}-4\gamma^{2}L_{h}^{2}\alpha_{k+1}\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle\geq\frac{1}{4}\gamma(1-\eta/2)\eta\|g_{\mathrm{limeal}}^{k}\|^{2}-16{\alpha}_{k}\gamma^{2}\hat{L}_{g}^{2},$
where the second inequality is due to the settings of parameters presented in
Theorem 4.1(b). This inequality shows claim (b) by Lemma 2 and the lower
boundedness of $\\{\tilde{\cal E}_{\mathrm{limeal}}^{k}\\}$.
#### 5.4.2 Proof of Proposition 3
###### Proof
By Lemma 3, we only need to verify that conditions $(P1)$-$(P3)$ hold for LiMEAL.
(a) Establishing $(P1)$: Similar to the proof of Proposition 1, let
$a:=\frac{\gamma\eta(2-\eta)}{4\beta}$. Then $\frac{1+a}{\beta
c_{\gamma,A}}=\alpha$, where $\alpha$ is defined in (27). Substituting (53)
into (61) with fixed $\beta_{k}$ yields
$\displaystyle{\cal P}_{\beta}(x^{k},z^{k},\lambda^{k})-{\cal
P}_{\beta}(x^{k+1},z^{k+1},\lambda^{k+1})$
$\displaystyle\geq\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-3\alpha(1+\gamma
L_{g})^{2}\right)\|x^{k+1}-x^{k}\|^{2}-3\alpha\gamma^{2}L_{h}^{2}\|x^{k}-x^{k-1}\|^{2}$
$\displaystyle+\frac{1}{2\gamma}(\frac{2}{\eta}-1)\|z^{k+1}-z^{k}\|^{2}-3\alpha\|z^{k}-z^{k-1}\|^{2}+a\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|^{2}.$
By the definition (43) of ${\cal P}_{\mathrm{limeal}}$, the above inequality
implies
$\displaystyle{\cal P}_{\mathrm{limeal}}(y^{k})-{\cal
P}_{\mathrm{limeal}}(y^{k+1})$
$\displaystyle\geq\left(\frac{1-\gamma(\rho_{g}+L_{h})}{2\gamma}-4\alpha\left((1+\gamma
L_{g})^{2}+\gamma^{2}L_{h}^{2}\right)\right)\|x^{k+1}-x^{k}\|^{2}$
$\displaystyle+\left(\frac{1}{2\gamma}\left(\frac{2}{\eta}-1\right)-4\alpha\right)\|z^{k+1}-z^{k}\|^{2}+a\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|^{2}$
$\displaystyle+\alpha\left(\gamma^{2}L_{h}^{2}\|\hat{x}^{k+1}-\hat{x}^{k}\|^{2}+\|\hat{z}^{k+1}-\hat{z}^{k}\|^{2}\right),$
which, with the assumptions on the parameters, implies $(P1)$ for LiMEAL.
(b) Establishing $(P2)$: Note that ${\cal
P}_{\mathrm{limeal}}(y)=f(x)+\langle\lambda,Ax-b\rangle+\frac{\beta}{2}\|Ax-b\|^{2}+\frac{1}{2\gamma}\|x-z\|^{2}+4\alpha\gamma^{2}L_{h}^{2}\|x-\hat{x}\|^{2}+4\alpha\|z-\hat{z}\|^{2}$.
The update of $x^{k+1}$ in (9) has the optimality condition
$\displaystyle 0\in\partial g(x^{k+1})+\nabla
h(x^{k})+A^{T}\lambda^{k+1}+\gamma^{-1}(x^{k+1}-z^{k}),$
which implies
$\displaystyle(\nabla h(x^{k+1})-\nabla
h(x^{k}))+8\gamma^{2}L_{h}^{2}\alpha(x^{k+1}-x^{k})$
$\displaystyle+\gamma^{-1}(z^{k}-z^{k+1})+A^{T}(\lambda^{k+1}-\lambda^{k})\in\partial_{x}{\cal
P}_{\mathrm{limeal}}(y^{k+1}).$
The derivations for the other terms are straightforward and similar to those
in the proof of Proposition 1. We directly show the final estimate: for some
$v^{k+1}\in\partial{\cal P}_{\mathrm{limeal}}(y^{k+1})$,
$\displaystyle\|v^{k+1}\|$
$\displaystyle\leq\left(L_{h}+L_{g}+\gamma^{-1}+16\alpha\gamma^{2}L_{h}^{2}\right)\|x^{k+1}-x^{k}\|$
$\displaystyle+\left(\gamma^{-1}+\left|8\alpha-\frac{1-\eta}{\eta\gamma}\right|+8\alpha\right)\|z^{k+1}-z^{k}\|$
$\displaystyle+\beta^{-1}\|\lambda^{k+1}-\lambda^{k}\|+L_{h}\|\hat{x}^{k+1}-\hat{x}^{k}\|+\gamma^{-1}\|\hat{z}^{k+1}-\hat{z}^{k}\|,$
which yields $(P2)$ for LiMEAL.
(c) Establishing $(P3)$: $(P3)$ follows from the boundedness assumption of
$\\{y^{k}\\}$ and the convergence of $\\{{\cal P}_{\mathrm{limeal}}(y^{k})\\}$
by $(P1)$. This finishes the proof.
## 6 Discussions on Boundedness and Related Work
In this section, we discuss how to ensure boundedness of the generated
sequences and then compare our results with related work.
### 6.1 Discussions on Boundedness of Sequence
Theorem 3.1 imposes the lower boundedness of $\\{{\cal
E}_{\mathrm{meal}}^{k}\\}$, and Proposition 1 requires the boundedness of the
generated sequence $\\{(x^{k},z^{k},\lambda^{k})\\}$. In this section, we
provide some sufficient conditions to guarantee the former and the latter
boundedness conditions, respectively.
Besides the $\rho$-weak convexity of $f$ (implying that the curvature of $f$ is
lower bounded by $-\rho$), we impose coercivity on the constrained
problem (1) as follows.
###### Assumption 4 (Coercivity)
The minimal value $f^{*}:=\inf_{x\in{\cal X}}f(x)$ is finite (recall ${\cal
X}:=\\{x:Ax=b\\}$), and $f$ is coercive over the set ${\cal X}$, that is,
$f(x)\rightarrow\infty$ if $x\in{\cal X}$ and $\|x\|\rightarrow\infty$.
The coercivity assumption is a common condition used to obtain boundedness of
the sequence; it is used, for example, in (Wang19, Assumption A1) for the
nonconvex ADMM. Particularly, let $(x^{0},z^{0},\lambda^{0})$ be a finite
initial guess of MEAL and
$\displaystyle{\cal E}^{0}:={\cal E}_{\mathrm{meal}}^{1}<+\infty.$ (67)
By Assumption 4, if $x\in{\cal X}$ and $f(x)\leq{\cal E}^{0}$, then there
exists a positive constant ${\cal B}_{0}$ (possibly depending on ${\cal
E}^{0}$) such that $\|x\|\leq{\cal B}_{0}.$ Define another positive constant
as
$\displaystyle{\cal B}_{1}:={\cal B}_{0}+\sqrt{2\rho^{-1}\cdot\max\\{0,{\cal
E}^{0}-f^{*}\\}}.$ (68)
Given a $\gamma\in(0,1/\rho)$ and $z\in\mathbb{R}^{n}$ with $\|z\|\leq{\cal
B}_{1}$ and $u\in\mathrm{Im}(A)$, we define
$\displaystyle
x(u;z):=\operatorname*{argmin}_{\\{x:Ax=u\\}}\left\\{f(x)+\frac{1}{2\gamma}\|x-z\|^{2}\right\\}.$
(69)
Since $f$ is $\rho$-weakly convex by Assumption 2(a), for any
$\gamma\in(0,1/\rho)$ the function $f(x)+\frac{1}{2\gamma}\|x-z\|^{2}$ is
strongly convex with respect to $x$, and thus the above $x(u;z)$ is well
defined and unique for any given $z\in\mathbb{R}^{n}$ and
$u\in\mathrm{Im}(A)$. Motivated by (Boyd04, Ch. 5.6.3), we impose some local
stability on $x(u;z)$ defined in (69).
###### Assumption 5 (Local stability)
For any given $z\in\mathbb{R}^{n}$ with $\|z\|\leq{\cal B}_{1}$, there exist a
$\delta>0$ and a finite positive constant $\bar{M}$ (possibly depending on
$A$, ${\cal B}_{1}$ and $\delta$) such that
$\|x(u;z)-x(b;z)\|\leq\bar{M}\|u-b\|,\ \forall
u\in\mathrm{Im}(A)\cap\\{v:\|v-b\|\leq\delta\\}.$
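To illustrate (69) and the stability required by Assumption 5, the following toy sketch (with $f(x)=\frac{1}{2}\|x\|^{2}$, so $\rho=0$, and synthetic data of our choosing) solves the KKT system of the sub-minimization path and probes its Lipschitz behavior in $u$:

```python
import numpy as np

# Sub-minimization path (69) for f(x) = 0.5*||x||^2: the KKT system of
#   min f(x) + ||x - z||^2/(2*gamma)  s.t.  Ax = u
# is linear, and x(u; z) is Lipschitz in u, as Assumption 5 requires.
rng = np.random.default_rng(1)
m, n, gamma = 2, 4, 0.5
A, z = rng.standard_normal((m, n)), rng.standard_normal(n)

def x_of(u):
    # Stationarity: (1 + 1/gamma)*x + A^T nu = z/gamma, together with Ax = u.
    K = np.block([[(1 + 1 / gamma) * np.eye(n), A.T],
                  [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([z / gamma, u]))
    return sol[:n]

b = A @ rng.standard_normal(n)
du = 1e-3 * rng.standard_normal(m)
print(np.linalg.norm(x_of(b + du) - x_of(b)) / np.linalg.norm(du))  # bounded ratio
```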
This local stability assumption is related to the Lipschitz sub-minimization
path assumption suggested in (Wang19, Assumption A3). As
discussed in Wang19 , the Lipschitz sub-minimization path assumption relaxes
the more stringent full-rank assumption used in the literature (see the
discussions in (Wang19, Sections 2.2 and 4.1) and references therein). As
$\\{z\in\mathbb{R}^{n}:\|z\|\leq{\cal B}_{1}\\}$ is a compact set, $\bar{M}$
can be taken as the supremum of these stability constants over this compact
set. Based on Assumption 5, we have the following lemma.
###### Lemma 10
Let $\\{(x^{k},z^{k},\lambda^{k})\\}$ be the sequence generated by MEAL (5)
with fixed $\beta>0$ and $\eta>0$. If $\gamma\in(0,1/\rho)$,
$\|z^{k}\|\leq{\cal B}_{1}$ and $\|Ax^{k+1}-b\|\leq\delta$, there holds
$\|x^{k+1}-x(b;z^{k})\|\leq\bar{M}\|Ax^{k+1}-b\|,\ \forall k\in\mathbb{N}.$
###### Proof
Let $u^{k+1}=Ax^{k+1}$. By the update of $x^{k+1}$ in (5), there holds
${\cal P}_{\beta}(x^{k+1},z^{k},\lambda^{k})\leq{\cal
P}_{\beta}(x(u^{k+1};z^{k}),z^{k},\lambda^{k}).$
Noting that $Ax(u^{k+1};z^{k})=Ax^{k+1}$ due to its definition in (69), the
above inequality implies
$f(x^{k+1})+\frac{1}{2\gamma}\|x^{k+1}-z^{k}\|^{2}\leq
f(x(u^{k+1};z^{k}))+\frac{1}{2\gamma}\|x(u^{k+1};z^{k})-z^{k}\|^{2}.$
By the definition of $x(u^{k+1};z^{k})$ in (69) again and noting that
$Ax^{k+1}=u^{k+1}$, we have
$f(x^{k+1})+\frac{1}{2\gamma}\|x^{k+1}-z^{k}\|^{2}\geq
f(x(u^{k+1};z^{k}))+\frac{1}{2\gamma}\|x(u^{k+1};z^{k})-z^{k}\|^{2}.$
These two inequalities imply
$f(x^{k+1})+\frac{1}{2\gamma}\|x^{k+1}-z^{k}\|^{2}=f(x(u^{k+1};z^{k}))+\frac{1}{2\gamma}\|x(u^{k+1};z^{k})-z^{k}\|^{2},$
which yields
$x^{k+1}=x(u^{k+1};z^{k})=x(Ax^{k+1};z^{k})$
by the strong convexity of the function $f(x)+\frac{1}{2\gamma}\|x-z^{k}\|^{2}$
for any $\gamma\in(0,1/\rho)$, and thus the uniqueness of $x(u^{k+1};z^{k})$.
Then, by Assumption 5, we obtain the desired result.
Based on the above assumptions, we establish the lower boundedness of
$\\{{\cal E}_{\mathrm{meal}}^{k}\\}$ and the boundedness of
$\\{(x^{k},z^{k},\lambda^{k})\\}$ as follows.
###### Proposition 4
Let $\\{(x^{k},z^{k},\lambda^{k})\\}_{k\in\mathbb{N}}$ be a sequence generated
by MEAL (5) with a finite initial guess $(x^{0},z^{0},\lambda^{0})$ such that
$\|z^{0}\|\leq{\cal B}_{1}$, where ${\cal B}_{1}$ is defined in (68). Suppose
that Assumptions 1, 2(a)-(b) and 4 hold and further Assumption 5 holds with
some $0<\bar{M}<\frac{2}{\sqrt{\sigma_{\min}(A^{T}A)}}$. If
$\gamma\in(0,\rho^{-1})$, $\eta\in(0,2)$ and
$\beta>\max\left\\{\frac{1+\sqrt{1+\eta(2-\eta)\gamma
c_{\gamma,A}\alpha_{\max}}}{2c_{\gamma,A}\alpha_{\max}},\frac{a_{2}+\sqrt{a_{2}^{2}+4a_{1}a_{3}}}{2a_{1}}\right\\},$
where $\alpha_{\max}:=\min\left\\{\frac{1-\gamma\rho}{4\gamma(1+\gamma
L_{f})^{2}},\frac{1}{8\gamma}(\frac{2}{\eta}-1)\right\\}$,
$c_{\gamma,A}=\gamma^{2}\sigma_{\min}(A^{T}A)$,
$a_{1}=4-\bar{M}^{2}\sigma_{\min}(A^{T}A)$,
$a_{2}=4(\bar{L}+\gamma^{-1})\bar{M}^{2}-\gamma\eta(2-\eta)$,
$a_{3}=(1+\gamma\bar{L})\eta(2-\eta)\bar{M}^{2}$ and $\bar{L}=\rho+2L_{f}$,
then the following hold:
1. (a)
$\\{{\cal E}_{\mathrm{meal}}^{k}\\}$ is lower bounded;
2. (b)
$\\{(x^{k},z^{k})\\}$ is bounded; and
3. (c)
if further $\lambda^{0}\in\mathrm{Null}(A^{T})$ (the null space of $A^{T}$)
and $\|\nabla{\cal M}_{\gamma,f}(w^{1})\|$ is finite with $w^{1}=z^{0}-\gamma
A^{T}\lambda^{1}$, then $\\{\lambda^{k}\\}$ is bounded.
###### Proof
In order to prove this proposition, we first establish the following claim
for sufficiently large $k$:
Claim A: If $\|z^{k-1}\|\leq{\cal B}_{1}$ and $\|Ax^{k}-b\|\leq\delta$ for all
$k\geq k_{0}$ for some sufficiently large $k_{0}$, then ${\cal
E}_{\mathrm{meal}}^{k}\geq f^{*}$, $\|z^{k}\|\leq{\cal B}_{1}$ and
$\|x^{k}\|\leq{\cal B}_{2}$.
By Theorem 3.1(a), such a $k_{0}$ does exist, due to the lower boundedness of
$\\{{\cal E}_{\mathrm{meal}}^{k}\\}$ for all finite $k$ and thus
$\xi_{\mathrm{meal}}^{k}\leq\hat{c}/\sqrt{k}$ for some constant $\hat{c}>0$
(implying that $\|Ax^{k}-b\|$ is sufficiently small for sufficiently large
$k$).
Next, we show Claim A. By the definition (24) of ${\cal
E}_{\mathrm{meal}}^{k}$, we have
$\displaystyle{\cal E}_{\mathrm{meal}}^{k}$
$\displaystyle=f(x^{k})+\langle\lambda^{k},Ax^{k}-b\rangle+\frac{\beta}{2}\|Ax^{k}-b\|^{2}+\frac{1}{2\gamma}\|x^{k}-z^{k}\|^{2}+2\alpha\|z^{k}-z^{k-1}\|^{2}$
$\displaystyle=f(x^{k})+\langle
A^{T}\lambda^{k},x^{k}-\bar{x}^{k}\rangle+\frac{\beta}{2}\|Ax^{k}-b\|^{2}+\frac{1}{2\gamma}\|x^{k}-z^{k}\|^{2}+2\alpha\|z^{k}-z^{k-1}\|^{2},$
where
$\bar{x}^{k}:=x(b;z^{k-1})$
as defined in (69). Let $\bar{\lambda}^{k}$ be the optimal Lagrange multiplier
associated with $\bar{x}^{k}$, and let $\bar{w}^{k}=z^{k-1}-\gamma
A^{T}\bar{\lambda}^{k}$. Then we have
$\bar{x}^{k}=\mathrm{Prox}_{\gamma,f}(\bar{w}^{k}),$
and $\nabla{\cal M}_{\gamma,f}(\bar{w}^{k})\in\partial f(\bar{x}^{k})$. By
(51) in the proof of Lemma 4, we have
$\displaystyle A^{T}\lambda^{k}=-\nabla{\cal
M}_{\gamma,f}(w^{k})-\gamma^{-1}(x^{k}-z^{k-1}),$
and $\nabla{\cal M}_{\gamma,f}(w^{k})\in\partial f(x^{k})$, where
$w^{k}=z^{k-1}-\gamma A^{T}\lambda^{k}$. Substituting the above equation into
the previous equality yields
$\displaystyle{\cal E}_{\mathrm{meal}}^{k}$
$\displaystyle=f(x^{k})+\langle\nabla{\cal
M}_{\gamma,f}(w^{k}),\bar{x}^{k}-x^{k}\rangle+\frac{\beta}{2}\|Ax^{k}-b\|^{2}$
(70) $\displaystyle+\gamma^{-1}\langle
x^{k}-z^{k-1},\bar{x}^{k}-x^{k}\rangle+\frac{1}{2\gamma}\|x^{k}-z^{k}\|^{2}+2\alpha\|z^{k}-z^{k-1}\|^{2}.$
Noting that $\nabla{\cal M}_{\gamma,f}(\bar{w}^{k})\in\partial f(\bar{x}^{k})$
and by the $\rho$-weak convexity of $f$, we have
$f(x^{k})\geq f(\bar{x}^{k})+\langle\nabla{\cal
M}_{\gamma,f}(\bar{w}^{k}),x^{k}-\bar{x}^{k}\rangle-\frac{\rho}{2}\|x^{k}-\bar{x}^{k}\|^{2},$
which implies
$\displaystyle f(x^{k})+\langle\nabla{\cal
M}_{\gamma,f}(w^{k}),\bar{x}^{k}-x^{k}\rangle$ $\displaystyle\geq
f(\bar{x}^{k})-\frac{\rho}{2}\|\bar{x}^{k}-x^{k}\|^{2}-\langle\nabla{\cal
M}_{\gamma,f}(\bar{w}^{k})-\nabla{\cal
M}_{\gamma,f}(w^{k}),\bar{x}^{k}-x^{k}\rangle$ $\displaystyle\geq
f(\bar{x}^{k})-\frac{\rho}{2}\|\bar{x}^{k}-x^{k}\|^{2}-\|\nabla{\cal
M}_{\gamma,f}(\bar{w}^{k})-\nabla{\cal
M}_{\gamma,f}(w^{k})\|\cdot\|\bar{x}^{k}-x^{k}\|.$
By the implicit Lipschitz subgradient assumption (i.e., Assumption 2(b)) and
the definition $\bar{L}:=\rho+2L_{f}$, the above inequality yields
$\displaystyle f(x^{k})+\langle\nabla{\cal
M}_{\gamma,f}(w^{k}),\bar{x}^{k}-x^{k}\rangle\geq
f(\bar{x}^{k})-\frac{\bar{L}}{2}\|\bar{x}^{k}-x^{k}\|^{2}.$ (71)
Moreover, it is easy to show that
$\displaystyle\gamma^{-1}\langle
x^{k}-z^{k-1},\bar{x}^{k}-x^{k}\rangle+\frac{1}{2\gamma}\|x^{k}-z^{k}\|^{2}+2\alpha\|z^{k}-z^{k-1}\|^{2}$
(72)
$\displaystyle=\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}-\frac{1}{2\gamma}\|\bar{x}^{k}-x^{k}\|^{2}+\gamma^{-1}\langle
z^{k}-z^{k-1},\bar{x}^{k}-x^{k}\rangle+2\alpha\|z^{k}-z^{k-1}\|^{2}$
$\displaystyle=\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}-\left(\frac{1}{2\gamma}+\frac{1}{8\alpha\gamma^{2}}\right)\|\bar{x}^{k}-x^{k}\|^{2}+2\alpha\left\|(z^{k}-z^{k-1})+\frac{1}{4\alpha\gamma}(\bar{x}^{k}-x^{k})\right\|^{2}.$
Substituting (71)-(72) into (70) and by Lemma 10, we have
$\displaystyle{\cal E}_{\mathrm{meal}}^{k}$ $\displaystyle\geq
f(\bar{x}^{k})+\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}+2\alpha\left\|(z^{k}-z^{k-1})+\frac{1}{4\alpha\gamma}(\bar{x}^{k}-x^{k})\right\|^{2}$
$\displaystyle+\frac{1}{2}\left[\beta-\left(\frac{1}{4\alpha\gamma^{2}}+\bar{L}+\gamma^{-1}\right)\bar{M}^{2}\right]\|Ax^{k}-b\|^{2}$
$\displaystyle\geq
f(\bar{x}^{k})+\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}+2\alpha\left\|(z^{k}-z^{k-1})+\frac{1}{4\alpha\gamma}(\bar{x}^{k}-x^{k})\right\|^{2}$
(73) $\displaystyle\geq
f^{*}+\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}+2\alpha\left\|(z^{k}-z^{k-1})+\frac{1}{4\alpha\gamma}(\bar{x}^{k}-x^{k})\right\|^{2}$
(74) $\displaystyle>-\infty,$ (75)
where the second inequality follows from the definition of
$\alpha=\frac{2\beta+\gamma\eta(1-\eta/2)}{2\gamma^{2}\sigma_{\min}(A^{T}A)\beta^{2}}$
and the condition on $\beta$, the third inequality holds since
$\bar{x}^{k}:=x(b;z^{k-1})$ and thus $A\bar{x}^{k}=b$ and $f(\bar{x}^{k})\geq
f^{*}$, and the final inequality is due to Assumption 4. The above inequality
yields the lower boundedness of $\\{{\cal E}_{\mathrm{meal}}^{k}\\}$ in Claim
A. Thus, claim (a) of this proposition holds.
Then, we show the boundedness of $\\{(x^{k},z^{k})\\}$ in Claim A. By (73) and
the descent inequality (62) in Section 5.2.1, we have
$\displaystyle f(\bar{x}^{k})\leq{\cal E}^{0}:={\cal E}_{\mathrm{meal}}^{1},$
which implies $\|\bar{x}^{k}\|\leq{\cal B}_{0}$ by Assumption 4. By (74) and
the condition on $\gamma\in(0,\rho^{-1})$, we have
$f^{*}+\frac{\rho}{2}\|\bar{x}^{k}-z^{k}\|^{2}\leq
f^{*}+\frac{1}{2\gamma}\|\bar{x}^{k}-z^{k}\|^{2}\leq{\cal E}^{0},$ which
implies
$\displaystyle\|z^{k}\|\leq{\cal B}_{0}+\sqrt{2({\cal
E}^{0}-f^{*})/\rho}={\cal B}_{1}.$
By (74) again, we have
$\left\|(z^{k}-z^{k-1})+\frac{1}{4\alpha\gamma}(\bar{x}^{k}-x^{k})\right\|^{2}\leq\frac{{\cal
E}^{0}-f^{*}}{2\alpha},$ which, together with the established bounds
$\|z^{k-1}\|\leq{\cal B}_{1}$, $\|z^{k}\|\leq{\cal B}_{1}$ and
$\|\bar{x}^{k}\|\leq{\cal B}_{0}$, yields
$\displaystyle\|x^{k}\|\leq{\cal B}_{0}+4\alpha\gamma\left(2{\cal
B}_{1}+\sqrt{\frac{{\cal E}^{0}-f^{*}}{2\alpha}}\right)=:{\cal B}_{2}.$ (76)
Thus, we have shown Claim A. Recursively, we can show that $\\{x^{k}\\}$ and
$\\{z^{k}\\}$ are respectively bounded by ${\cal B}_{2}$ and ${\cal B}_{1}$
for any $k\geq 1$, that is, claim (b) in this proposition holds.
In the following, we show claim (c) of this proposition. By the update of
$\lambda^{k+1}$ in (5), it is easy to show
$\lambda^{k}=\lambda^{0}+\hat{\lambda}^{k}$, where
$\hat{\lambda}^{k}=\beta\sum_{t=1}^{k}(Ax^{t}-b)\in\mathrm{Im}(A)$ by
Assumption 1. Furthermore, by the assumption that
$\lambda^{0}\in\mathrm{Null}(A^{T})$, we have
$\displaystyle\langle\lambda^{0},\hat{\lambda}^{k}\rangle=0,\ \forall k\geq
1.$ (77)
By (51), for any $k\geq 1$, we have
$\displaystyle A^{T}\lambda^{k}=-(\nabla{\cal M}_{\gamma,f}(w^{k})-\nabla{\cal
M}_{\gamma,f}(w^{1}))-\nabla{\cal
M}_{\gamma,f}(w^{1})-\gamma^{-1}(x^{k}-z^{k-1}),$
where $w^{k}=z^{k-1}-\gamma A^{T}\lambda^{k}$. By Assumption 2(b) and the
boundedness of $\\{(x^{k},z^{k})\\}$ shown before, the above equation implies
$\displaystyle\|A^{T}{\lambda}^{k}\|$ $\displaystyle\leq
L_{f}\|x^{k}-x^{1}\|+\|\nabla{\cal
M}_{\gamma,f}(w^{1})\|+\gamma^{-1}\|x^{k}-z^{k-1}\|$
$\displaystyle\leq\gamma^{-1}{\cal B}_{1}+(2L_{f}+\gamma^{-1}){\cal
B}_{2}+\|\nabla{\cal M}_{\gamma,f}(w^{1})\|<+\infty.$
By the relation $\lambda^{k}=\lambda^{0}+\hat{\lambda}^{k}$ and (77), the
above inequality implies
$\displaystyle\|A^{T}\hat{\lambda}^{k}\|\leq\gamma^{-1}{\cal
B}_{1}+(2L_{f}+\gamma^{-1}){\cal B}_{2}+\|\nabla{\cal M}_{\gamma,f}(w^{1})\|.$
Since $\hat{\lambda}^{k}\in\mathrm{Im}(A)$, the above inequality implies
$\displaystyle\|\hat{\lambda}^{k}\|\leq\tilde{\sigma}_{\min}^{-1/2}(A^{T}A)\|A^{T}\hat{\lambda}^{k}\|\leq\tilde{\sigma}_{\min}^{-1/2}(A^{T}A)\left[\gamma^{-1}{\cal
B}_{1}+(2L_{f}+\gamma^{-1}){\cal B}_{2}+\|\nabla{\cal
M}_{\gamma,f}(w^{1})\|\right],$
which yields the boundedness of $\\{\lambda^{k}\\}$ by the triangle
inequality. This finishes the proof.
The proof idea for claim (c) of this proposition is motivated by the proof of
(Zhang-Luo18, Lemma 3.1). Based on Proposition 4, we have shown the lower
boundedness of the Lyapunov function sequence and the boundedness of the
sequence generated by MEAL. Following a similar analysis, we can obtain
similar boundedness results for both iMEAL and LiMEAL.
### 6.2 Discussions on Related Work
Compared to the closely related works Hajinezhad-Hong19 ; Hong17-Prox-PDA
; Jiang19 ; Rockafellar76-PALM ; Xie-Wright19 ; Zhang-Luo20 ; Zhang-Luo18 ,
this paper provides slightly stronger convergence results under weaker
conditions. Detailed discussions and comparisons with these works follow and
are summarized in Tables 1 and 2.
Table 1: Convergence results of our and related algorithms for problem (1)

Algorithm | MEAL (our) | iMEAL (our) | Prox-PDA Hong17-Prox-PDA | Prox-ALM Xie-Wright19
---|---|---|---|---
Assumption | $f$: weakly convex, imp-Lip or imp-bound | $f$: weakly convex, imp-Lip or imp-bound | $\nabla f$: Lipschitz | $\nabla f$: Lipschitz
Iteration complexity | imp-Lip: $o(\varepsilon^{-2})$; imp-bound: ${\cal O}(\varepsilon^{-2})$ | imp-Lip: $o(\varepsilon^{-2})$; imp-bound: ${\cal O}(\varepsilon^{-2})$ | ${\cal O}(\varepsilon^{-2})$ | ${\cal O}(\varepsilon^{-2})$
Global convergence | $\checkmark$ under KŁ | – | – | –

$\bullet$ imp-Lip: the implicit Lipschitz subgradient assumption 2(b);
$\bullet$ imp-bound: the implicit bounded subgradient assumption 2(c);
$\bullet$ Xie-Wright19 considers nonlinear equality constraints $c(x)=0$
where $\nabla c$ is Lipschitz and bounded.
Table 2: Convergence results of our and related algorithms for the composite optimization problem (8).

Algorithm | LiMEAL (our) | PProx-PDA Hajinezhad-Hong19 | Prox-iALM Zhang-Luo18 | S-prox-ALM Zhang-Luo20
---|---|---|---|---
Assumption | $\nabla h$: Lipschitz, $g$: weakly convex, imp-Lip or imp-bound | $\nabla h$: Lipschitz, $g$: convex, $\partial g$: bounded | $\nabla h$: Lipschitz, $g=\iota_{\cal C}(x)$, ${\cal C}$: box constraint | $\nabla h$: Lipschitz, $g=\iota_{\cal P}(x)$, ${\cal P}$: polyhedral set
Iteration complexity | imp-Lip: $o(\varepsilon^{-2})$; imp-bound: ${\cal O}(\varepsilon^{-2})$ | ${\cal O}(\varepsilon^{-2})$ | ${\cal O}(\varepsilon^{-2})$ | ${\cal O}(\varepsilon^{-2})$
Global convergence | $\checkmark$ under KŁ | – | $\checkmark$ for quadratic programming | –
When reduced to the case of linear constraints, the proximal ALM suggested in
Rockafellar76-PALM is a special case of MEAL with $\eta=1$, and the Lipschitz
continuity of a certain fundamental mapping at the origin (Rockafellar76-PALM,
p. 100) generally implies the KŁ property of the proximal augmented Lagrangian
with exponent $1/2$ at some stationary point; thus, the linear convergence
of the proximal ALM follows directly from Proposition 1(b). Moreover, the
proposed algorithms still work (in terms of convergence) for some constrained
problems with nonconvex objectives and a fixed penalty parameter.
In Hong17-Prox-PDA , a proximal primal-dual algorithm (named Prox-PDA) was
proposed for the linearly constrained problem (1) with $b=0$. Prox-PDA is
shown as follows:
$\displaystyle\text{(Prox-PDA)}\
\left\\{\begin{array}[]{l}x^{k+1}=\operatorname*{argmin}_{x\in\mathbb{R}^{n}}\
\left\\{f(x)+\langle\lambda^{k},Ax\rangle+\frac{\beta}{2}\|Ax\|^{2}+\frac{\beta}{2}\|x-x^{k}\|^{2}_{B^{T}B}\right\\},\\\
\lambda^{k+1}=\lambda^{k}+\beta Ax^{k+1},\end{array}\right.$
where $B$ is chosen such that $A^{T}A+B^{T}B\succeq\mathrm{I}_{n}$ (the
identity matrix of size $n$). To achieve a $\sqrt{\varepsilon}$-accurate
stationary point, the iteration complexity of Prox-PDA is ${\cal
O}(\varepsilon^{-1})$ under the Lipschitz differentiability of $f$ (that is,
$f$ is differentiable and has Lipschitz gradient) and the assumption that
there exists some $\underline{f}>-\infty$ and some $\delta>0$ such that
$f(x)+\frac{\delta}{2}\|Ax\|^{2}\geq\underline{f}$ for any
$x\in\mathbb{R}^{n}$. Such iteration complexity of Prox-PDA is consistent with
the order of ${\cal O}(\varepsilon^{-2})$ to achieve an $\varepsilon$-accurate
stationary point. On the one hand, if we take $B=\mathrm{I}_{n}$ in Prox-PDA,
then it reduces to MEAL with $\gamma=\beta^{-1}$ and $\eta=1$. On the other
hand, by our main Theorem 3.1(a), the $o(\varepsilon^{-2})$ iteration
complexity of MEAL is slightly better than that of Prox-PDA, and it holds
under weaker conditions (see Assumption 2(a)-(b)). Moreover, we establish the
global convergence and rate of MEAL under the KŁ inequality, while such a
global convergence result is missing (though obtainable) for Prox-PDA in
Hong17-Prox-PDA .
A prox-linear variant of Prox-PDA (there dubbed PProx-PDA) was proposed in the
recent paper Hajinezhad-Hong19 for the linearly constrained problem (8) with
a composite objective. Besides Lipschitz differentiability of $h$, the
nonsmooth function $g$ is assumed to be convex with bounded subgradients.
The assumptions used in Hajinezhad-Hong19 are stronger than ours in
Assumption 3(a), (b) and (d), while the resulting iteration complexity of
LiMEAL (Theorem 4.1(b)) is consistent with that of PProx-PDA in
(Hajinezhad-Hong19, Theorem 1). Moreover, we establish the global convergence
and rate of LiMEAL (Proposition 3), which are missing (though obtainable) for
PProx-PDA.
In Xie-Wright19 , an ${\cal O}(\varepsilon^{-2})$-iteration complexity of
proximal ALM was established for the constrained problem with nonlinear
equality constraints, under assumptions that the objective is differentiable
and its gradient is both Lipschitz continuous and bounded, and that the
Jacobian of the constraints is also Lipschitz continuous and bounded and
satisfies a full-rank property (see (Xie-Wright19, Assumption 1)). When their
setting is reduced to linear constraints, their iteration complexity is
slightly worse than ours and their assumptions are stronger (except, of
course, for the part on nonlinear constraints).
In Zhang-Luo18 , a closely related algorithm (the Proximal Inexact Augmented
Lagrangian Multiplier method, dubbed Prox-iALM) was introduced for the
following linearly constrained problem
$\displaystyle\min_{x\in\mathbb{R}^{n}}\ h(x)\quad\mathrm{subject\ to}\quad
Ax=b,\ x\in{\cal C},$
where ${\cal C}$ is a box constraint set. Subsequence convergence to a
stationary point was established under the following assumptions: (a) the
origin is in the relative interior of the set $\\{Ax-b:x\in{\cal C}\\}$; (b)
the strict complementarity condition Nocedal99 holds for the above
constrained problem; (c) $h$ is differentiable and has Lipschitz continuous
gradient. Moreover, the global convergence and linear rate of this algorithm
were established for quadratic programming, in which case the augmented
Lagrangian satisfies the KŁ inequality with exponent $1/2$, by noticing the
connection between the Luo-Tseng error bound and the KŁ inequality Li-Pong-
KLexponent18 . According to Theorem 4.1 and Proposition 3, the convergence
results established in this paper are more general and stronger than those in
Zhang-Luo18 , under weaker assumptions. In particular, besides the weaker
assumption on $h$, the strict complementarity condition (b) is also removed in
this paper for LiMEAL.
The algorithm studied in Zhang-Luo18 has recently been generalized to handle
the linearly constrained problem with a polyhedral constraint set in
Zhang-Luo20 (dubbed S-prox-ALM). Under the Lipschitz differentiability of the
objective, an iteration complexity of the order ${\cal O}(\varepsilon^{-2})$
was established in Zhang-Luo20 for the S-prox-ALM algorithm. Such iteration
complexity is consistent with that of LiMEAL shown in Theorem 4.1. Besides
these major differences between this paper and Zhang-Luo20 ; Zhang-Luo18 , the
step sizes $\eta$ are more flexible for both MEAL and LiMEAL (only requiring
$\eta\in(0,2)$), while the step sizes used in the algorithms in Zhang-Luo20 ;
Zhang-Luo18 must be sufficiently small to guarantee convergence. Meanwhile,
the Lyapunov function used in this paper is motivated by the Moreau envelope
of the augmented Lagrangian, which is very different from the Lyapunov
functions used in Zhang-Luo20 ; Zhang-Luo18 . Based on the defined Lyapunov
function, our analysis is much simpler than that in Zhang-Luo20 ;
Zhang-Luo18 .
## 7 Numerical Experiments
We use two experiments to demonstrate the effectiveness of the proposed
algorithms:
1. 1.
The first experiment is based on a nonconvex quadratic program on which ALM
with any bounded penalty parameter diverges (Wang19, Proposition 1) but
LiMEAL converges.
2. 2.
The second experiment borrows a general quadratic program from (Zhang-Luo18,
Sec. 6.2), on which LiMEAL outperforms the Prox-iALM suggested in Zhang-Luo18 .
The source codes can be accessed at https://github.com/JinshanZeng/MEAL.
### 7.1 ALM vs LiMEAL
Consider the following optimization problem from (Wang19, Proposition 1):
$\displaystyle\min_{x,y\in\mathbb{R}}\ x^{2}-y^{2},\quad\text{subject to}\
x=y,\ x\in[-1,1].$ (78)
ALM with any bounded penalty parameter $\beta$ diverges on this problem. By
Theorem 4.1 and Proposition 3, LiMEAL converges exponentially fast since its
augmented Lagrangian is a KŁ function with an exponent of $1/2$. For both ALM
and LiMEAL, we set the penalty parameter $\beta$ to 50. We set LiMEAL’s
proximal parameter $\gamma$ to $1/2$ and test three values of $\eta$:
$0.5,1,1.5$. The curves of the objective
$f(x^{k},y^{k})=(x^{k})^{2}-(y^{k})^{2}$, the constraint violation error
$|x^{k}-y^{k}|$, the multiplier sequences $\\{\lambda^{k}\\}$, and the norm of
the gradient of the Moreau envelope in (39), which is the stationarity
measure, are depicted in Fig. 1.
Observe that ALM diverges: its multiplier sequence $\\{\lambda^{k}\\}$
oscillates between two distinct values (Fig. 1 (a)) and the constraint
violation converges to a positive value (Fig. 1 (b)). Also observe that LiMEAL
converges exponentially fast (Fig. 1 (c)–(e)) and achieves the optimal
objective value of 0 in about 10 iterations (Fig. 1 (f)) for all $\eta$
values. This verifies Proposition 3.
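To make the oscillation concrete, note that for $\beta>2$ the joint augmented-Lagrangian subproblem of (78) can be minimized in closed form: substituting $u=x-y$ reduces it to the quadratic $(\beta/2-1)u^{2}+(2x+\lambda)u$ in $u$, followed by a one-dimensional search over $x\in[-1,1]$. This short calculation is ours, not from Wang19; the following minimal Python sketch (function name and iteration count are illustrative) reproduces the oscillating multiplier.

```python
import numpy as np

def alm_on_problem_78(beta=50.0, lam=0.0, iters=30):
    """ALM on min x^2 - y^2 s.t. x = y, x in [-1, 1] (problem (78)).
    For beta > 2, with u = x - y, the AL subproblem becomes
    (beta/2 - 1)*u^2 + (2x + lam)*u, minimized at u = -(2x + lam)/(beta - 2);
    the remaining minimization over x in [-1, 1] picks the endpoint
    maximizing |2x + lam|."""
    history = []
    for _ in range(iters):
        x = 1.0 if lam >= 0 else -1.0
        u = -(2.0 * x + lam) / (beta - 2.0)   # u = x - y at the subproblem minimum
        y = x - u
        lam += beta * (x - y)                 # multiplier update: lam + beta*u
        history.append((x, y, lam, abs(x - y)))
    return history

# lam quickly settles into oscillation between two values of opposite sign,
# while the constraint violation |x - y| stays bounded away from zero.
```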
Figure 1: ALM and LiMEAL applied to problem (78): ALM diverges while LiMEAL
quickly converges. Panels: (a) divergent $\\{\lambda^{k}\\}$ of ALM; (b)
constraint violation of ALM; (c) convergent $\\{\lambda^{k}\\}$ of LiMEAL; (d)
constraint violation of LiMEAL; (e) convergence rate of LiMEAL; (f) objective
sequence of LiMEAL.
### 7.2 Quadratic Programming
Consider the quadratic program with box constraints:
$\displaystyle\min_{x\in\mathbb{R}^{n}}\ \frac{1}{2}x^{T}Qx+r^{T}x\quad
s.t.\quad Ax=b,\ \ell_{i}\leq x_{i}\leq u_{i},\ i=1,\ldots,n,$ (79)
where $Q\in\mathbb{R}^{n\times n}$, $r\in\mathbb{R}^{n}$,
$A\in\mathbb{R}^{m\times n}$, $b\in\mathbb{R}^{m}$, and
$\ell_{i},u_{i}\in\mathbb{R}$, $i=1,\ldots,n$. Let ${\cal
C}:=\\{x:\ell_{i}\leq x_{i}\leq u_{i},i=1,\ldots,n\\}$. Applying LiMEAL
yields: initialize $(x^{0},z^{0},\lambda^{0})$, $\gamma>0$, $\eta\in(0,2)$ and
$\beta>0$, for $k=0,1,\ldots,$ run
$\mathrm{(LiMEAL)}\quad\left\\{\begin{array}[]{l}\tilde{x}^{k}=(\beta
A^{T}A+\gamma^{-1}{\bf I}_{n})^{-1}(\gamma^{-1}z^{k}+\beta A^{T}b-r-
Qx^{k}-A^{T}\lambda^{k}),\\\ x^{k+1}=\mathrm{Proj}_{\cal C}(\tilde{x}^{k}),\\\
z^{k+1}=z^{k}-\eta(z^{k}-x^{k+1}),\\\
\lambda^{k+1}=\lambda^{k}+\beta(Ax^{k+1}-b).\end{array}\right.$
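As a minimal sketch, the LiMEAL recursion above can be transcribed directly into Python; the initialization, iteration count, and helper name below are illustrative assumptions, and the default $\gamma=1/(2\|Q\|_{2})$ follows the experimental setting used later in this section.

```python
import numpy as np

def limeal_qp(Q, r, A, b, lo, hi, beta=50.0, gamma=None, eta=1.0, iters=500, seed=0):
    """Direct transcription of the displayed LiMEAL updates for problem (79)."""
    m, n = A.shape
    if gamma is None:
        gamma = 1.0 / (2.0 * np.linalg.norm(Q, 2))
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n); z = x.copy(); lam = np.zeros(m)
    M = beta * A.T @ A + (1.0 / gamma) * np.eye(n)   # fixed system matrix
    for _ in range(iters):
        rhs = z / gamma + beta * A.T @ b - r - Q @ x - A.T @ lam
        x_tilde = np.linalg.solve(M, rhs)
        x = np.clip(x_tilde, lo, hi)                 # projection onto the box C
        z = z - eta * (z - x)
        lam = lam + beta * (A @ x - b)
    return x, z, lam
```

Problem (78) fits the same template with $Q=\mathrm{diag}(2,-2)$, $r=0$, $A=[1,-1]$, $b=0$, and the box $[-1,1]\times\mathbb{R}$ (i.e. `lo=[-1,-np.inf]`, `hi=[1,np.inf]`).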
Applying Prox-iALM from (Zhang-Luo18, Algorithm 2.2) yields: initialize
$(x^{0},z^{0},\lambda^{0})$ and parameters $\beta,p,\alpha,s,\eta>0$; for
$k=0,1,\ldots,$ run
$\mathrm{(Prox-iALM)}\quad\left\\{\begin{array}[]{l}\bar{x}^{k}=(\beta
A^{T}A+p{\bf I}_{n})x^{k}+Qx^{k}+A^{T}\lambda^{k}-pz^{k}-(\beta A^{T}b-r),\\\
x^{k+1}=\mathrm{Proj}_{\cal C}(x^{k}-s\bar{x}^{k}),\\\
z^{k+1}=z^{k}-\eta(z^{k}-x^{k+1}),\\\
\lambda^{k+1}=\lambda^{k}+\beta(Ax^{k+1}-b).\end{array}\right.$
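For the comparison below, the Prox-iALM recursion admits an equally direct sketch; the defaults for $p$ and $s$ follow the settings quoted from (Zhang-Luo18, Sec. 6.2), while everything else is an illustrative assumption.

```python
import numpy as np

def prox_ialm_qp(Q, r, A, b, lo, hi, beta=50.0, p=None, s=None, eta=1.0,
                 iters=500, seed=0):
    """Direct transcription of the displayed Prox-iALM updates for problem (79)."""
    m, n = A.shape
    nQ, nA = np.linalg.norm(Q, 2), np.linalg.norm(A, 2)
    if p is None: p = 2.0 * nQ
    if s is None: s = 1.0 / (2.0 * (nQ + p + beta * nA ** 2))
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n); z = x.copy(); lam = np.zeros(m)
    for _ in range(iters):
        g = ((beta * A.T @ A + p * np.eye(n)) @ x + Q @ x + A.T @ lam
             - p * z - (beta * A.T @ b - r))     # the displayed bar-x gradient
        x = np.clip(x - s * g, lo, hi)           # projected gradient step
        z = z - eta * (z - x)
        lam = lam + beta * (A @ x - b)
    return x, z, lam
```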
When $\eta=1$, Prox-iALM reduces to Algorithm 2.1 in Zhang-Luo18 , which we
name iALM.
The experimental settings are similar to (Zhang-Luo18, Sec. 6.2): set
$m=5,n=20$, generate the entries of $Q$, $A$, $b$, and $\tilde{x}$ by sampling
from the uniform distribution, and set $b=A\tilde{x}$. For LiMEAL, we set
$\beta=50,\gamma=\frac{1}{2\|Q\|_{2}}$ and test three values of $\eta$:
$0.5,1,1.5$. For Prox-iALM, we use the parameter settings in
(Zhang-Luo18, Sec. 6.2):
$p=2\|Q\|_{2},\beta=50,\alpha=\frac{\beta}{4},s=\frac{1}{2(\|Q\|_{2}+p+\beta\|A\|_{2}^{2})}$.
Moreover, we test two values of $\eta$, $1$ and $0.5$, for Prox-iALM.
Prox-iALM with $\eta=1$ reduces to iALM. The curves of the objective sequence,
$\|Ax^{k}-b\|$, $\|x^{k+1}-z^{k}\|$ and the norm of the gradient of the Moreau
envelope are depicted in Fig. 2. We observe that LiMEAL converges faster than
both iALM and Prox-iALM. By Fig. 2(d), LiMEAL converges exponentially fast
for all three values of $\eta$. These results verify Proposition 3(b), since
the augmented Lagrangian of problem (79) is a KŁ function with an exponent of
$1/2$.
Figure 2: Performance of LiMEAL and Prox-iALM on the quadratic programming
problem (79). Panels: (a) objective sequence; (b) $\|Ax^{k}-b\|$; (c)
$\|x^{k+1}-z^{k}\|$; (d) convergence rates of LiMEAL.
## 8 Conclusion
This paper suggests a Moreau envelope augmented Lagrangian (MEAL) method for
linearly constrained weakly convex optimization problems. By leveraging the
implicit smoothing property of the Moreau envelope, the proposed MEAL
generalizes the ALM and the proximal ALM to the nonconvex and nonsmooth case.
To yield an $\varepsilon$-accurate first-order stationary point, the iteration
complexity of MEAL is $o(\varepsilon^{-2})$ under the implicit Lipschitz
subgradient assumption and ${\cal O}(\varepsilon^{-2})$ under the implicit
bounded subgradient assumption. The global convergence and rate of MEAL are
also established under the Kurdyka-Łojasiewicz inequality. Moreover, an
inexact variant (called iMEAL) and a prox-linear variant (called LiMEAL) for
the composite objective case are suggested and analyzed for different
practical settings. The convergence results established in this paper for MEAL
and its variants are generally stronger than the existing ones, yet hold under
weaker assumptions.
One future direction is to remove the implicit Lipschitz subgradient and
implicit bounded subgradient assumptions, which to some extent limit the
applicability of the suggested algorithms, though these two assumptions are
respectively weaker than the Lipschitz differentiability and bounded
subgradient assumptions commonly used in the literature. Another direction is
to generalize this work to problems with nonlinear constraints. A third
direction is to develop more practical variants of the proposed methods and
establish their convergence. One possible application of our study is the
robustness and convergence of stochastic gradient descent in training the
parameters of structured deep neural networks, such as deep convolutional
neural networks Zhou20 , where linear constraints can be used to impose
convolutional structures. We leave these for future work.
## References
* (1) Andreani, R., Birgin, E.G., Martinez, J.M., Schuverdt, M.L.: On augmented Lagrangian methods with general lower-level constraints. SIAM J. Optim. 18(4), 1286–1309 (2007)
* (2) Andreani, R., Birgin, E.G., Martinez, J.M., Schuverdt, M.L.: Augmented Lagrangian methods under the constant positive linear dependence constraint qualification. Math. Program. 111, 5–32 (2008)
* (3) Andreani, R., Birgin, E.G., Martinez, J.M., Schuverdt, M.L.: Second-order negative-curvature methods for box-constrained and general constrained optimization. Comput. Optim. Appl. 45(2), 209–236 (2010)
* (4) Andreani, R., Fazzio, N., Schuverdt, M.L., Secchin, L.: A sequential optimality condition related to the quasi-normality constraint qualification and its algorithmic consequences. SIAM J. Optim. 29(1), 743–766 (2019)
* (5) Andreani, R., Secchin, L., Silva, P.: Convergence properties of a second order augmented lagrangian method for mathematical programs with complementarity constraints. SIAM J. Optim. 28(3), 2574–2600 (2018)
* (6) Armand, P., Omheni, R.: A globally and quadratically convergent primal-dual augmented Lagrangian algorithm for equality constrained optimization. Optim. Methods Softw. 32(1), 1–21 (2017)
* (7) Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. 116, 5–16 (2009)
* (8) Attouch, H., Bolte, J., Svaiter, B.F.: Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 137, 91–219 (2013)
* (9) Bertsekas, D.P.: Convergence rate of penalty and multiplier methods. In: Proc. IEEE Conf. on Decision and Control, pp. 260–264. San Diego, California (1973)
* (10) Bertsekas, D.P.: On penalty and multiplier methods for constrained minimization. SIAM J. Control Optim. 14(2), 216–235 (1976)
* (11) Bertsekas, D.P.: Constrained Optimization and Lagrange Multiplier Methods. Academic Press, London (1982)
* (12) Bian, W., Chen, X., Ye, Y.: Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization. Math. Program. 149(1), 301–327 (2015)
* (13) Birgin, E.G., Castillo, R., Martinez, J.M.: Numerical comparison of augmented Lagrangian algorithms for nonconvex problems. Comput. Optim. Appl. 31, 31–56 (2005)
* (14) Birgin, E.G., Floudas, C.A., Martinez, J.M.: Global minimization using an augmented Lagrangian method with variable lower-level constraints. Math. Program. 125, 139–162 (2010)
* (15) Birgin, E.G., Floudas, C.A., Martinez, J.M.: The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems. Optim. Methods Softw. 27(6), 1001–1024 (2012)
* (16) Birgin, E.G., Haeser, G., Ramos, A.: Augmented Lagrangians with constrained subproblems and convergence to second-order stationary points. Comput. Optim. Appl. 69(1), 51–75 (2018)
* (17) Birgin, E.G., Martinez, J.M.: Practical Augmented Lagrangian Methods for Constrained Optimization, vol. 10. SIAM, Philadelphia (2014)
* (18) Birgin, E.G., Martinez, J.M.: Complexity and performance of an augmented Lagrangian algorithm. Optim. Methods Softw. (2020)
* (19) Bochnak, J., Coste, M., Roy, M.F.: Real algebraic geometry, vol. 36. Springer Science & Business Media, Berlin (1998)
* (20) Bolte, J., Daniilidis, A., Lewis, A.: The Łojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 17(4), 1205–1223 (2007)
* (21) Bolte, J., Daniilidis, A., Lewis, A., Shiota, M.: Clarke subgradients of stratifiable functions. SIAM J. Optim. 18(2), 556–572 (2007)
* (22) Bolte, J., Sabach, S., Teboulle, M.: Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 146(1), 459–494 (2014)
* (23) Boyd, S., Vandenberghe, L.: Convex Optimization. Cambridge University Press, Cambridge, UK (2004)
* (24) Conn, A.R., Gould, N.I.M., Sartenaer, A., Toint, P.L.: Convergence properties of an augmented Lagrangian algorithm for optimization with a combination of general equality and linear constraints. SIAM J. Optim. 6, 674–703 (1996)
* (25) Conn, A.R., Gould, N.I.M., Toint, P.L.: A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds. SIAM J. Numer. Anal. 28, 545–572 (1991)
* (26) Conn, A.R., Gould, N.I.M., Toint, P.L.: Trust-Region Methods. SIAM, Philadelphia (2000)
* (27) Curtis, F.E., Jiang, H., Robinson, D.P.: An adaptive augmented Lagrangian method for large-scale constrained optimization. Math. Program. 152(1), 201–245 (2015)
* (28) Davis, D., Drusvyatskiy, D.: Stochastic model-based minimization of weakly convex functions. SIAM J. Optim. 29(1), 207–239 (2019)
* (29) Deng, W., Lai, M.J., Peng, Z., Yin, W.: Parallel multi-block ADMM with $o(1/k)$ convergence. J. Sci. Comput. 71, 712–736 (2017)
* (30) Drusvyatskiy, D.: The proximal point method revisited. SIAG/OPT Views and News 26, 1–8 (2018)
* (31) Drusvyatskiy, D., Paquette, C.: Efficiency of minimizing compositions of convex functions and smooth maps. Math. Program. 178, 503–558 (2019)
* (32) Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96, 1348–1360 (2001)
* (33) Fernández, D., Solodov, M.V.: Local convergence of exact and inexact augmented Lagrangian methods under the second-order sufficient optimality condition. SIAM J. Optim. 22(2), 384–407 (2012)
* (34) Grapiglia, G.N., Yuan, Y.X.: On the complexity of an augmented Lagrangian method for nonconvex optimization. ArXiv e-prints (2019)
* (35) Haeser, G., Liu, H., Ye, Y.: Optimality condition and complexity analysis for linearly-constrained optimization without differentiability on the boundary. Math. Program. 178, 263–299 (2019)
* (36) Hajinezhad, D., Hong, M.: Perturbed proximal primal-dual algorithm for nonconvex nonsmooth optimization. Math. Program. 176, 207–245 (2019)
* (37) Hestenes, M.R.: Multiplier and gradient methods. J. Optim. Theory Appl. 4, 303–320 (1969)
* (38) Hong, M., Hajinezhad, D., Zhao, M.M.: Prox-PDA: The proximal primal-dual algorithm for fast distributed nonconvex optimization and learning over networks. In: Proc. of the 34th International Conference on Machine Learning (ICML), pp. 1529–1538. Sydney, Australia (2017)
* (39) Jiang, B., Lin, T., Ma, S., Zhang, S.: Structured nonconvex and nonsmooth optimization: algorithms and iteration complexity analysis. Comput. Optim. Appl. 72(1), 115–157 (2019)
* (40) Krantz, S., Parks, H.R.: A Primer of Real Analytic Functions (2nd Edition). Birkhauser, Basel, Switzerland (2002)
* (41) Kurdyka, K.: On gradients of functions definable in o-minimal structures. Annales de l’institut Fourier 48(3), 769–783 (1998)
* (42) Li, G., Pong, T.K.: Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods. Found. Comput. Math. 18, 1199–1232 (2018)
* (43) Łojasiewicz, S.: Une propriété topologique des sous-ensembles analytiques réels. In: Les Équations aux dérivées partielles. Éditions du centre National de la Recherche Scientifique, Paris pp. 87–89 (1963)
* (44) Łojasiewicz, S.: Sur la geometrie semi-et sous-analytique. Annales de l’institut Fourier 43(5), 1575–1595 (1993)
* (45) Mordukhovich, B.S.: Variational analysis and generalized differentiation I: Basic Theory. Springer-Verlag, New York (2006)
* (46) Moreau, J.: Proximité et dualité dans un espace hilbertien. Bull. Soc. Math. France 93, 273–299 (1965)
* (47) Nocedal, J., Wright, S.J.: Numerical Optimization. Springer-Verlag, New York (1999)
* (48) Nouiehed, M., Lee, J.D., Razaviyayn, M.: Convergence to second-order stationarity for constrained non-convex optimization. ArXiv e-prints (2018)
* (49) Nurminskii, E.A.: The quasigradient method for the solving of the nonlinear programming problems. Cybernetics 9, 145–150 (1973)
* (50) O’Neill, M., Wright, S.J.: A log-barrier Newton-CG method for bound constrained optimization with complexity guarantees. IMA J. Numer. Anal. 00, 1–38 (2020)
* (51) Polyak, B.T., Tretyakov, N.V.: The method of penalty bounds for constrained extremum problems. Zh. Vych Mat i Mat. Fiz, 13:34-46 = U.S.S.R. Computational Mathematics and Mathematical Physics 13, 42–58 (1973)
* (52) Powell, M.J.D.: A method for nonlinear constraints in minimization problems. in Optimization, R. Fletcher, ed. Academic Press, London pp. 283–298 (1969)
* (53) Rockafellar, R.T.: The multiplier method of Hestenes and Powell applied to convex programming. J. Optim. Theory Appl. 12, 555–562 (1973)
* (54) Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)
* (55) Rockafellar, R.T., Wets, R.J.B.: Variational Analysis. Springer-Verlag, New York (1997)
* (56) Shiota, M.: Geometry of Subanalytic and Semialgebraic Sets (Progress in Mathematics). Birkhauser, Basel, Switzerland (1997)
* (57) Tretykov, N.Y.: The method of penalty estimates of convex programming. Economics and Mathematical Methods (Russian) 9, 525–540 (1973)
* (58) Wang, Y., Yin, W., Zeng, J.: Global convergence of ADMM in nonconvex nonsmooth optimization. J. Sci. Comput. 78, 29–63 (2019)
* (59) Xie, Y., Wright, S.J.: Complexity of proximal augmented Lagrangian for nonconvex optimization with nonlinear equality constraints. ArXiv e-prints (2019)
* (60) Xu, Y., Yin, W.: A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM J. Imaging Sci. 6(3), 1758–1789 (2013)
* (61) Yu, P., Li, G., Pong, T.: Kurdyka-łojasiewicz exponent via inf-projection. Found. Comput. Math. (2021). DOI https://doi.org/10.1007/s10208-021-09528-6
* (62) Zeng, J., Lau, T.T.K., Lin, S.B., Yao, Y.: Global convergence of block coordinate descent in deep learning. In: Proceedings of the 36th International Conference on Machine Learning (ICML). Long Beach, California, PMLR 97 (2019)
* (63) Zeng, J., Lin, S.B., Yao, Y., Zhou, D.X.: On ADMM in deep learning: Convergence and saturation-avoidance. J Mach Learn Res 22(199), 1–67 (2021)
* (64) Zeng, J., Yin, W.: On nonconvex decentralized gradient descent. IEEE Trans. Signal Process. 66(11), 2834–2848 (2018)
* (65) Zhang, C.H.: Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 38(2), 894–942 (2010)
* (66) Zhang, J., Luo, Z.Q.: A global dual error bound and its application to the analysis of linearly constrained nonconvex optimization. ArXiv e-prints (2020)
* (67) Zhang, J., Luo, Z.Q.: A proximal alternating direction method of multiplier for linearly constrained nonconvex minimization. SIAM J. Optim. 30(3), 2272–2302 (2020)
* (68) Zhou, D.X.: Universality of deep convolutional neural networks. Appl. Comput. Harmonic Anal. 48, 787–794 (2020)
# Out-of-Distribution Generalization Analysis via Influence Function
Haotian Ye
Yuanpei College
Peking University
Beijing, China
<EMAIL_ADDRESS>
Chuanlong Xie
Huawei Noah’s Ark Lab
Hong Kong, China
<EMAIL_ADDRESS>
Yue Liu∗
Huawei Noah’s Ark Lab
Beijing, China
<EMAIL_ADDRESS>
Zhenguo Li
Huawei Noah’s Ark Lab
Hong Kong, China
<EMAIL_ADDRESS>
###### Abstract
The mismatch between training and target data is one major challenge for
current machine learning systems. When the training data is collected from
multiple domains and the target domains include all training domains together
with new, unseen domains, we face an Out-of-Distribution (OOD) generalization
problem that aims to find a model with the best OOD accuracy. One definition
of OOD accuracy is worst-domain accuracy. In general, the set of target
domains is unknown, and the worst target domain may be unobserved when the
number of observed domains is limited. In this paper, we show that the worst
accuracy over the observed domains may dramatically fail to identify the OOD
accuracy. To this end, we introduce the influence function, a classical tool
from robust statistics, into the OOD generalization problem and suggest the
variance of the influence function to monitor the stability of a model across
training domains. We show that the accuracy on test domains and the proposed
index together can help us discern whether OOD algorithms are needed and
whether a model achieves good OOD generalization.
## 1 Introduction
Most machine learning systems assume that training and test data are
independently and identically distributed, which does not always hold in
practice (Bengio et al. (2019)). Consequently, their performance often
degrades greatly when the test data come from a different domain
(distribution). A classical example is the problem of identifying cows and
camels (Beery et al. (2018)), where empirical risk minimization (ERM, Vapnik
(1992)) may classify images by background color instead of object shape. As a
result, when the test domain is “out-of-distribution” (OOD), e.g. when the
background color changes, performance drops significantly. OOD generalization
aims to obtain a predictor that is robust to this distribution shift.
Suppose that we have training data collected from $m$ domains:
$\displaystyle{\mathbb{S}}=\\{{\mathbb{S}}^{e}:e\in\mathcal{E}_{tr},|\mathcal{E}_{tr}|=m\\},\quad{\mathbb{S}}^{e}=\\{{\bm{z}}^{e}_{1},{\bm{z}}^{e}_{2},\ldots,{\bm{z}}^{e}_{n^{e}}\\}\,\,\text{with}\,\,{\bm{z}}^{e}_{i}\sim
P^{e},$ (1)
where $P^{e}$ is the distribution corresponding to domain $e$,
$\mathcal{E}_{tr}$ is the set of _all available domains, including validation
domains_ , and ${\bm{z}}^{e}_{i}$ is a data point. The OOD problem we
considered is to find a model $f_{\text{OOD}}$ such that
$\displaystyle
f_{\text{OOD}}=\operatorname*{arg\,min}_{f}\sup_{P^{e}\in\mathcal{E}_{all}}\ell(f,P^{e}),$
(2)
where $\mathcal{E}_{all}$ is the set of all target domains and $\ell(f,P^{e})$
is the expected loss of $f$ on the domain $P^{e}$. Recent algorithms address
this OOD problem by recovering invariant (causal) features and building the
optimal model on top of these features, such as Invariant Risk Minimization
(IRM, Arjovsky et al. (2019)), Risk Extrapolation (REx, Krueger et al.
(2020)), Group Distributionally Robust Optimization (gDRO, Sagawa et al.
(2019)) and Inter-domain Mixup (Mixup, Xu et al. (2020); Yan et al. (2020);
Wang et al. (2020)). Most works evaluate on Colored MNIST (see 5.1 for
details), where we can directly obtain the worst-domain accuracy over
$\mathcal{E}_{all}$. Gulrajani & Lopez-Paz (2020) assembles many algorithms
and multi-domain datasets, and finds that OOD algorithms cannot outperform ERM
in some domain generalization tasks, e.g. VLCS (Torralba & Efros (2011)) and
PACS (Li et al. (2017)). This is not surprising, since these tasks only
require high performance on certain domains, while an OOD algorithm is
expected to learn truly invariant features and be excellent on a large set of
target domains $\mathcal{E}_{all}$. This phenomenon is described as an
“accuracy-vs-invariance trade-off” in Akuzawa et al. (2019).
Two questions arise in the min-max problem (2). First, previous works assume
that there is sufficient diversity among the domains in $\mathcal{E}_{all}.$
Thus the supremum of $\ell(f,P^{e})$ may be much larger than the average,
which implies that ERM may fail to discover $f_{OOD}.$ But in reality, we do
not know whether this is true. If not, the distribution of $\ell(f,P^{e})$ is
concentrated around the expectation of $\ell(f,P^{e})$, and ERM is sufficient
to find an invariant model for $\mathcal{E}_{all}.$ Therefore, we call for a
method to judge whether an OOD algorithm is needed. Second, how do we judge a
model’s OOD performance? Traditionally, we consider test domains
$\mathcal{E}_{test}\subset\mathcal{E}_{tr}$ and use the worst-domain accuracy
over $\mathcal{E}_{test}$ (which we call the test accuracy) to approximate the
OOD accuracy. However, the test accuracy is a biased estimate of the OOD
accuracy unless $\mathcal{E}_{tr}$ is close to $\mathcal{E}_{all}$. More
seriously, it may be irrelevant or even _negatively correlated_ to the OOD
accuracy. This phenomenon is not uncommon, especially when there are features
that are spurious in $\mathcal{E}_{all}$ but show a strong correlation to the
target in $\mathcal{E}_{tr}$.
We give a toy example in Colored MNIST where the test accuracy fails to
approximate the OOD accuracy. For more details, please refer to Section 5.1
and Appendix A.4. We choose three domains from Colored MNIST and use cross-
validation (Gulrajani & Lopez-Paz (2020)) to select models, i.e. we take turns
selecting a domain $S\in\mathcal{E}_{tr}$ as the test domain, train on the
rest, and select the model with the maximum average test accuracy. Figure 1
shows the comparison between ERM and IRM. One can find that no matter which
domain serves as the test domain, the ERM model uniformly outperforms the IRM
model on the test domain. However, the IRM model achieves consistently better
OOD accuracy. The shortcomings of the test accuracy here are obvious,
regardless of whether cross-validation is used. In short, naive use of the
test accuracy may result in a non-OOD model.
Figure 1: Experiments in Colored MNIST showing that test accuracy is not
enough to reflect a model’s OOD accuracy. The top left panel shows the test
accuracy of ERM and IRM. The other three panels present the relationship
between test accuracy (x-axis) and OOD accuracy (y-axis) in three setups.
To address this obstacle, we hope to find a metric that correlates better with
a model’s OOD property, _even when $\mathcal{E}_{tr}$ is much smaller than
$\mathcal{E}_{all}$ and the “worst” domain remains unknown_. Without any
assumption on $\mathcal{E}_{all}$, this goal is unrealistic. Therefore, we
assume that features that are invariant across $\mathcal{E}_{tr}$ are also
invariant across $\mathcal{E}_{all}$. This assumption is necessary; otherwise,
the only thing we can do is collect more domains. We therefore need to focus
on what features the model has learnt. Specifically, we want to check whether
the model learns invariant features and avoids varying features.
The influence function (Cook & Weisberg (1980)) can serve this purpose. The
influence function was proposed to measure the parameter change when a data
point is removed or upweighted by a small perturbation (details in 3.2). When
modified to the domain level, it measures the influence of a domain, instead
of a data point, on the model. Note that we are not emulating the change of
the parameters when a domain is removed; instead, we care precisely about
upweighting the domain by $\delta\rightarrow 0^{+}$ (specified later). Based
on this, the variance of the influence function allows us to measure the OOD
property and overcome the obstacle.
##### Contributions
We summarize our contributions here: (i) We introduce the influence function
at the domain level and propose the index
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ (formula (6)) based on the
influence function of the model $f_{\bm{\theta}}$. Our index can measure the
OOD extent of the available domains, i.e. how different these domains
(distributions) are. This measurement provides a basis for deciding whether to
adopt an OOD algorithm and whether to collect more diverse domains. See
Section 4.1 and Section 5.1.1 for details. (ii) We point out that the proposed
index $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ can remedy the weakness of
the test accuracy. Specifically, under most OOD generalization problems, using
the test accuracy and our index together, we can discern the OOD property of a
model. See Section 4.2 for details. (iii) We propose to use only a small but
important part of the model to calculate the influence function. This
overcomes the huge computational cost of inverting the Hessian. It is not
merely for efficiency and accuracy of calculation: it coincides with our
understanding that only these parameters capture what features a model has
learnt (Section 4.3).
We organize our paper as follows: Section 2 reviews related works and Section
3 introduces the preliminaries of OOD methods and influence function. Section
4 presents our proposal and detailed analysis. Section 5 shows our
experiments. The conclusion is given in Section 6.
## 2 Related work
The mismatch between the development dataset and the target domain is one
major challenge in machine learning (Castro et al. (2020); Kuang et al.
(2020)). Many works assume that the ground truth can be represented by a
causal Directed Acyclic Graph (DAG), and they use the DAG structure to discuss
the worst-domain performance (Rojas-Carulla et al. (2018); Peters et al.
(2016); Subbaswamy et al. (2019); Bühlmann et al. (2020); Magliacane et al.
(2018)). All these works employ multiple-domain data and causal assumptions to
discover the parents of the target variable. Rojas-Carulla et al. (2018) and
Magliacane et al. (2018) also apply this idea to the Domain Generalization and
Multi-Task Learning settings. Starting from multiple-domain data rather than
model assumptions, Arjovsky et al. (2019) proposes Invariant Risk Minimization
(IRM) to extract causal (invariant) features and learn an invariant optimal
predictor on top of the causal features. It analyzes the generalization
properties of IRM from the view of sufficient dimension reduction (Cook
(2009); Cook et al. (2002)). Ahuja et al. (2020) considers IRM as finding the
Nash equilibrium of an ensemble game among several domains and develops a
simple training algorithm. Krueger et al. (2020) derives Risk Extrapolation
(REx) to extract invariant features and further derives a practical objective
function via variance penalization. Xie et al. (2020) employs a framework from
distributional robustness to interpret the benefit of REx compared to robust
optimization (Ben-Tal et al. (2009); Bagnell (2005)). Besides, Adversarial
Domain Adaptation (Li et al. (2018); Koyama & Yamaguchi (2020)) uses a
discriminator to look for features that are independent of domains and uses
these features for further prediction.
The influence function is a classic method from the robust statistics
literature (Robins et al. (2008; 2017); Van der Laan et al. (2003); Tsiatis
(2007)). It can be used to track the impact of a training sample on the
prediction. Koh & Liang (2017) proposes a second-order optimization technique
to approximate the influence function. They verify their method under
different assumptions on the empirical risk, ranging from strictly convex and
twice-differentiable to non-convex and non-differentiable losses. Koh et al.
(2019) also estimates the effect of removing a subgroup of training points via
the influence function. They find that the approximation computed by the
influence function is correlated with the actual effect. The influence
function has been used in many machine learning tasks. Cheng et al. (2019)
proposes an explanation method, Fast Influence Analysis, that applies the
influence function to latent factor models to address the lack of
interpretability of collaborative filtering approaches for recommender
systems. Cohen et al. (2020) uses the influence function to detect adversarial
attacks. Ting & Brochu (2018) proposes an asymptotically optimal sampling
method via an asymptotically linear estimator and the associated influence
function. Alaa & Van Der Schaar (2019) develops a model validation procedure
that estimates the estimation error of causal inference methods. Besides, Fang
et al. (2020) leverages the influence function to select a subset of normal
users who are influential to the recommendations.
## 3 Preliminaries
### 3.1 ERM, IRM and REx
In this section, we give some notations and introduce some recent OOD methods.
Recall the multiple domain setup (1) and OOD problem (2). For a domain $P^{e}$
and a hypothetical model $f$, the population loss is
$\ell(f,P^{e})=\mathbb{E}_{{\mathbf{z}}\sim P^{e}}[L(f,{\mathbf{z}})]$ where
$L(f,{\mathbf{z}})$ is the loss function on ${\mathbf{z}}$. The empirical
loss, which is the objective of ERM, is
$\ell(f,{\mathbb{S}})=(1/m)\sum_{e\in\mathcal{E}_{tr}}\ell(f,{\mathbb{S}}^{e})$
with $\ell(f,{\mathbb{S}}^{e})=(1/n^{e})\sum_{i=1}^{n^{e}}L(f,{\bm{z}}^{e}_{i}).$
Recent OOD methods propose some novel regularized objective functions in the
form:
$\mathcal{L}(f,{\mathbb{S}})=\ell(f,{\mathbb{S}})+\lambda R(f,{\mathbb{S}})$
(3)
to discover $f_{\text{OOD}}$ in (2). Here $R(f,{\mathbb{S}})$ is a
regularization term and $\lambda$ is the tuning parameter which controls the
degree of penalty. Note that ERM is a special case by setting $\lambda=0$. For
simplicity, we will use $\mathcal{L}(f,{\mathbb{S}})$ to represent the total
loss in case of no ambiguity. Arjovsky et al. (2019) focuses on the stability
of $f_{\text{OOD}}$ and considers the IRM regularization:
$R(f,{\mathbb{S}})=\sum_{e\in\mathcal{E}_{tr}}\|\nabla_{w}\ell\big{(}wf,{\mathbb{S}}^{e}\big{)}\big{|}_{w=1.0}\|^{2}$
(4)
where $w$ is a scalar, fixed “dummy” classifier. Arjovsky et al. (2019)
shows that the scalar fixed classifier $w$ is sufficient to monitor invariance
and corresponds to the idealistic IRM problem, which decomposes the entire
predictor into a data representation and one shared optimal top classifier for
all training domains. On the other hand, Krueger et al. (2020) encourages the
uniform performance of $f_{\text{OOD}}$ and proposes the V-REx penalty:
uniform performance of $f_{\text{OOD}}$ and proposes the V-REx penalty:
$\displaystyle
R(f,{\mathbb{S}})=\sum_{e\in\mathcal{E}_{tr}}(\ell(f,{\mathbb{S}}^{e})-\ell(f,{\mathbb{S}}))^{2}.$
Krueger et al. (2020) derives the invariant prediction from robustness to
spurious features and figures out that REx is more robust than group
distributional robustness (Sagawa et al. (2019)). In this work, we also
decompose the entire predictor into a feature extractor and a classifier on
the top of the learnt features. As we will see, different from Arjovsky et al.
(2019) and Krueger et al. (2020), we directly monitor the invariance of the
top model.
### 3.2 Influence function and group effect
Consider a parametric hypothesis $f=f_{\bm{\theta}}$ and the corresponding
solution:
$\hat{\bm{\theta}}=\operatorname*{arg\,min}_{\bm{\theta}}\mathcal{L}(f_{\bm{\theta}},{\mathbb{S}}).$
By a quadratic approximation of $\mathcal{L}(f_{\bm{\theta}},{\mathbb{S}})$
around $\hat{\bm{\theta}}$, the influence function takes the form
$\displaystyle\mathcal{IF}(\hat{\bm{\theta}},{\bm{z}})=-{\bm{H}}^{-1}_{\hat{\bm{\theta}}}\nabla_{\bm{\theta}}L(f_{\hat{\bm{\theta}}},{\bm{z}})\quad\text{with}\quad{\bm{H}}_{\hat{\bm{\theta}}}=\nabla^{2}_{\bm{\theta}}\mathcal{L}(f_{\hat{\bm{\theta}}},{\mathbb{S}}).$
When the sample size of ${\mathbb{S}}$ is sufficiently large, the parameter
change due to removing a data point ${\bm{z}}$ can be approximated by
$-\mathcal{IF}(\hat{\bm{\theta}},{\bm{z}})/\sum_{e\in\mathcal{E}_{tr}}|{\mathbb{S}}^{e}|$
without retraining the model. Here $|{\mathbb{S}}^{e}|=n^{e}$ stands for the
cardinality of the set ${\mathbb{S}}^{e}$. Furthermore, Koh et al. (2019) shows that the
influence function can also predict the effect of removing large groups of
training points (i.e. $\mathcal{Z}=\\{z_{1},...,z_{k}\\}$), even though such
removals cause significant changes in the model. The parameter change due to
removing the group can be approximated by
$\mathcal{IF}(\hat{\bm{\theta}},\mathcal{Z})=-{\bm{H}}^{-1}_{\hat{\bm{\theta}}}\nabla_{\bm{\theta}}\frac{1}{|\mathcal{Z}|}\sum_{z\in\mathcal{Z}}L(f_{\hat{\bm{\theta}}},z).$
Motivated by the work of Koh et al. (2019), we introduce the influence
function into the OOD problem to address our obstacles.
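To make the leave-one-out approximation concrete, the following minimal numpy sketch checks the formula above in the simplest setting of linear least squares, where both the influence approximation and the retrained solution are available in closed form; the helper name and the tiny damping term are our illustrative assumptions.

```python
import numpy as np

def loo_influence_check(X, y, idx, damping=1e-8):
    """Compare -IF(z)/n (influence approximation) with the actual parameter
    change from retraining without point idx, for the empirical squared loss
    (1/n) * sum_i 0.5 * (x_i @ theta - y_i)**2."""
    n, d = X.shape
    theta = np.linalg.lstsq(X, y, rcond=None)[0]
    H = X.T @ X / n + damping * np.eye(d)          # Hessian of the empirical loss
    g = X[idx] * (X[idx] @ theta - y[idx])         # gradient of L at the removed point
    approx = np.linalg.solve(H, g) / n             # = -IF(theta, z)/n
    mask = np.arange(n) != idx
    theta_loo = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    return approx, theta_loo - theta               # the two vectors should be close
```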
## 4 Methodology
### 4.1 Influence of domains
We decompose a parametric hypothesis $f_{\bm{\theta}}(x)$ into a top model $g$
and a feature extractor $\Phi$, i.e.
$f_{\bm{\theta}}(x)=g(\Phi({\bm{x}},{\bm{\beta}}),{\bm{\gamma}})$ and
${\bm{\theta}}=({\bm{\gamma}},{\bm{\beta}}).$ This decomposition coincides
with the common understanding of most DNNs, i.e. a DNN extracts features and
builds a top model on the extracted features. When upweighting a domain $e$
by a small perturbation $\delta$, we do not upweight the regularization term, i.e.
$\displaystyle\mathcal{L}_{+}({\bm{\theta}},\mathbb{S},\delta)=\mathcal{L}({\bm{\theta}},\mathbb{S})+\delta\cdot\ell(f,{\mathbb{S}}^{e}),$
since the stability across different domains, which is encouraged by the
regularization, should not depend on the sample size of a domain. For a
learnt model $f_{\hat{\bm{\theta}}}$, fixing the feature extractor $\Phi$,
i.e. fixing ${\bm{\beta}}=\hat{\bm{\beta}}$, the change of the top model $g$
caused by upweighting the domain is
$\mathcal{IF}(\hat{\bm{\gamma}},{\mathbb{S}}^{e}|\hat{\bm{\theta}}):=\lim_{\delta\rightarrow 0^{+}}\frac{\Delta{\bm{\gamma}}}{\delta}=-{\bm{H}}^{-1}_{\hat{\bm{\gamma}}}\nabla_{\bm{\gamma}}\ell(f_{\hat{\bm{\theta}}},{\mathbb{S}}^{e}),\quad
e\in\mathcal{E}_{tr}.$ (5)
Here
${\bm{H}}_{\hat{\bm{\gamma}}}=\nabla^{2}_{\hat{\bm{\gamma}}}\mathcal{L}(f_{\hat{\bm{\theta}}},{\mathbb{S}})$,
and we assume $\mathcal{L}$ is twice-differentiable in ${\bm{\gamma}}$. Please
see Appendix A.3 for the detailed derivation and why ${\bm{\beta}}$ should be
fixed. For a regularized method, e.g. IRM and REx, the influence of the
regularization term is reflected in ${\bm{H}}$ and in the learnt model
$f_{\hat{\bm{\theta}}}$. As mentioned above,
$\mathcal{IF}(\hat{\bm{\gamma}},{\mathbb{S}}^{e}|\hat{\bm{\theta}})$ measures
the change of the model caused by upweighting domain $e$. Therefore, if
$g(\Phi,\hat{\bm{\gamma}})$ is invariant across domains, the entire model
$f_{\hat{\bm{\theta}}}$ treats all domains equally. As a result, a small
perturbation on different domains should cause _the same model change_. This
leads to our proposal.
### 4.2 Proposed Index and its Utility
On the basis of the domain-level influence function
$\mathcal{IF}(\hat{\bm{\gamma}},{\mathbb{S}}^{e}|\hat{\bm{\theta}})$, we
propose our index to measure the fluctuation of the parameter change when
different domains are upweighted:
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}:=\ln\Big{(}\|\mathrm{Cov}_{e\in\mathcal{E}_{tr}}\big{(}\mathcal{IF}(\hat{\bm{\gamma}},{\mathbb{S}}^{e}|\hat{\bm{\theta}})\big{)}\|_{2}\Big{)}.$
(6)
Here $\|\cdot\|_{2}$ is the matrix 2-norm, i.e. the largest eigenvalue of the
(positive semi-definite) covariance matrix,
$\mathrm{Cov}_{e\in\mathcal{E}_{tr}}(\cdot)$ denotes the covariance matrix of
the domain-level influence functions over $\mathcal{E}_{tr}$, and $\ln(\cdot)$
is a nonlinear transformation that works well in practice.
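As a minimal sketch of how $\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ can be computed, consider the illustrative special case of a linear top model with squared loss and no regularizer, so that the per-domain gradients and the Hessian are available in closed form (the paper only requires twice-differentiability in ${\bm{\gamma}}$; the helper name and the small damping term are our assumptions):

```python
import numpy as np

def influence_index(feats, labels, gamma, damping=1e-6):
    """Index V of eq. (6) for a linear top model with squared loss.
    feats/labels: one array per training domain (features from the frozen Phi)."""
    d = len(gamma)
    grads, H = [], np.zeros((d, d))
    for Pe, ye in zip(feats, labels):
        resid = Pe @ gamma - ye
        grads.append(Pe.T @ resid / len(ye))          # grad_gamma of l(f, S^e)
        H += Pe.T @ Pe / len(ye)                      # per-domain Hessian
    H = H / len(feats) + damping * np.eye(d)          # Hessian of the averaged loss
    IF = np.stack([-np.linalg.solve(H, g) for g in grads])  # eq. (5), one row per domain
    C = np.cov(IF, rowvar=False)                      # covariance over domains
    return np.log(np.linalg.norm(np.atleast_2d(C), 2))      # ln of the spectral norm
```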
##### OOD Model
Under the OOD problem in (2), a good OOD model should (i) learn invariant and
useful features; (ii) avoid spurious and varying features. Learning useful and
invariant features means the model should have high accuracy over a set of
test domains $\mathcal{E}_{test}$, no matter which test domain it is. In turn,
high accuracy over $\mathcal{E}_{test}$ also means the model truly learns some
useful features for the test domains. However, this is not enough, since we do
not know whether the useful features are invariant features across
$\mathcal{E}_{all}$ or just spurious features on $\mathcal{E}_{test}.$ On the
other hand, avoiding varying features means that different domains are
_actually the same_ to the learnt model, so according to the arguments in
Section 4.1, $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ should be small.
Combining these, we derive our proposal: if a learnt model
$f_{\hat{\bm{\theta}}}$ manages to simultaneously achieve a small
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ and high accuracy over
$\mathcal{E}_{test}$, it should have good OOD accuracy. We prove our proposal
in a simple but illuminating case, and we conduct various experiments (Section
5) to support it. Several issues should be clarified. First, not all
OOD problems demand models to learn invariant features. For example, the set
of all target domains is small such that the varying features are always
strongly correlated to the labels, or the objective is the mean of the
accuracy over $\mathcal{E}_{all}$ rather than the worst-domain accuracy. But
in our setting, we regard the OOD problem in (2) as a bridge to causal
discovery. Thus the set of target domains is large, and such “weak” OOD
problems are out of our consideration. To a large extent, invariant features
are still the major target, and our proposal remains a good criterion for a
model’s OOD property. Second, we admit that there is a gap between being
stable on $\mathcal{E}_{tr}$ (small
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$) and avoiding all spurious
features on $\mathcal{E}_{all}$. However, to our knowledge, for features that
vary across $\mathcal{E}_{all}$ but are invariant across $\mathcal{E}_{tr}$,
demanding that a model avoid them is somewhat unrealistic. Therefore, we take
a step forward: we measure whether the learnt model successfully avoids
features that vary across $\mathcal{E}_{tr}$. We leave an index for varying
features over $\mathcal{E}_{all}$ to future work.
##### The Shuffle $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$
As mentioned above, a smaller metric $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$
means stronger stability across $\mathcal{E}_{tr}$, and hence should indicate
better OOD accuracy. However, the proposed metric depends on the dataset
${\mathbb{S}}$ and the learnt model $f_{\hat{\bm{\theta}}}.$ Therefore, there
is no uniform baseline to check whether the metric is “small” enough. To this
end, we propose a baseline value of the proposed metric obtained by shuffling
the multi-domain data. Consider pooling all data points in ${\mathbb{S}}$ and
randomly redistributing them into $m$ new synthetic domains
$\\{\tilde{\mathbb{S}}^{1},\tilde{\mathbb{S}}^{2},...,\tilde{\mathbb{S}}^{m}\\}:=\tilde{\mathbb{S}}$.
We compute _the shuffle version_ of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ for a learnt model
$f_{\hat{\bm{\theta}}}$ over the shuffled data $\tilde{\mathbb{S}}$:
$\tilde{\mathcal{V}}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}:=\ln\Big{(}\|\mathrm{Cov}_{e\in\mathcal{E}_{tr}}\big{(}\mathcal{IF}(\hat{\bm{\gamma}},\tilde{\mathbb{S}}^{e}|\hat{\bm{\theta}})\big{)}\|_{2}\Big{)}.$
(7)
We denote the standard and shuffle versions of the metric by
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ and
$\tilde{\mathcal{V}}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$, respectively. For
any algorithm that obtains relatively good test accuracy, if
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ is much larger than
$\tilde{\mathcal{V}}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$, then
$f_{\hat{\bm{\theta}}}$ has learnt features that vary across
$e\in\mathcal{E}_{tr}$ and cannot treat the domains in $\mathcal{E}_{tr}$
equally. This implies that $f_{\hat{\bm{\theta}}}$ may not be an invariant
predictor over $\mathcal{E}_{all}.$ Otherwise, if the two values are similar,
the model has avoided varying features in $\mathcal{E}_{tr}$ and may be
invariant across $\mathcal{E}_{tr}$: either the model captures the invariance
over the diverse domains, or the domains are not diverse at all. Note that
this process is suitable for any algorithm, hence providing a baseline to
judge whether $\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ is small.
Here we also obtain a method to judge whether an OOD algorithm is needed.
Consider $f_{\hat{\bm{\theta}}}$ learnt by ERM. If
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ is relatively larger than
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\theta}}}$, then ERM fails to avoid
varying features. In this case, one should consider an OOD algorithm to
achieve better OOD generalization. Otherwise, ERM is enough, and any attempt
to achieve better OOD accuracy should start with finding more domains instead
of using OOD algorithms. This coincides with the experiments in Gulrajani &
Lopez-Paz (2020) (Section 5.2). Our understanding is that the domains in
$\tilde{\mathbb{S}}$ are similar; therefore, the difference between the
shuffle and standard versions of the metric reflects how many varying features
a learnt model uses. We show how to use the two versions of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ in Section 5.1.1 and Section 5.2.
### 4.3 Influence Calculation
A natural question surrounds the influence function: how can the Hessian be
efficiently calculated and inverted? Koh & Liang (2017) suggests conjugate
gradient and stochastic estimation to solve the problem. However, when
$\hat{\bm{\theta}}$ is obtained by running SGD, it hardly arrives at the
global minimum. Adding a damping term (i.e. letting
$\hat{\bm{H}}_{\hat{\bm{\theta}}}={\bm{H}}_{\hat{\bm{\theta}}}+\lambda I$) can
moderately alleviate the problem by transforming it into a convex situation,
but for large neural networks with non-linear activation functions like ReLU,
this method may still work poorly, since the damping term required for the
transformation is so large that it significantly influences the performance.
Most importantly, the variation of the eigenvalues of the Hessian is huge,
making the convergence of the influence-function calculation quite slow and
inaccurate (Basu et al. (2020)).
In our metric, we circumvent the problem by excluding most parameters
${\bm{\beta}}$ and directly calculating the Hessian with respect to
${\bm{\gamma}}$ to obtain an accurate influence function. This modification
not only speeds up the calculation, but also coincides with our expectation
that an OOD algorithm learning invariant features _does not mean that_ the
influence function of _all_ parameters should be identical across domains. For
example, if $g(\Phi)$ is to extract the same features in different domains,
the influence function should differ on $\Phi(\cdot)$. Therefore, if we used
all parameters to calculate the influence, given that ${\bm{\gamma}}$ is
relatively insignificant in size compared with ${\bm{\beta}}$, the information
about learnt features provided by ${\bm{\gamma}}$ would be hard to capture. On
the contrary, considering only the influence of the top model manifests the
influence of different domains in terms of _features_, thus enabling us to
achieve our goal.
As our experiments show, after this modification the influence-function
calculation can be 2000 times faster, and the utility (correlation with the
OOD property) can be even higher. This is not surprising given the huge number
of parameters in the embedding model $\Phi(\cdot)$: they slow down the
calculation and overshadow the top model’s influence value.
## 5 Experiment
In this section, we experimentally show that: (1) A model
$f_{\hat{\bm{\theta}}}$ reaches a small
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ if it has a good OOD
property, while a non-OOD model does not. (2) The metric
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ provides additional information on
the stability of a learnt model, which overcomes the weakness of the test
accuracy. (3) The comparison of $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$
and $\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\theta}}}$ can check whether a
better OOD algorithm is needed.
We consider experiments on a Bayesian network, Colored MNIST and VLCS. The
synthetic data generated by the Bayesian network include domain-dependent
noise and fake associations between features and response. For Colored MNIST,
we already know that the digit is the causal feature and the color is
non-causal. The causal relationships help us determine the worst domain and
obtain the OOD accuracy. VLCS is a real dataset, on which we show the utility
of $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ step by step. Due to space
limitations, we defer the Bayesian-network experiments to the appendix.
Generally, cross-validation (Gulrajani & Lopez-Paz (2020)) is used to judge a
model’s OOD property. In the introduction, we have already shown that
leave-one-domain-out cross-validation may fail to discern OOD properties. We
also consider two other potential competitors: conditional mutual information
and the IRM penalty. The comparison between our metric and the two competitors
is postponed to the Appendix.
Figure 2: The index $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is highly
correlated with $x$. The plot contains 501 learnt ERM models with $x=2\times
10^{-4}i$, $i=0,1,...,500.$ The dashed line is the baseline value when the
difference between domains is eliminated by pooling and redistributing the
training data. The blue solid line is the linear regression of $x$ versus
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$.
### 5.1 Colored MNIST
Colored MNIST (Arjovsky et al. (2019)) introduces a synthetic binary
classification task. The images are colored according to their labels, making
color a spurious feature for predicting the label. Specifically, for a domain
$e$, we assign a preliminary binary label
$\tilde{y}=\mathbf{1}_{\text{digit}\leq 4}$ and randomly flip $\tilde{y}$
with $p=0.25$. Then, we color the image according to $\tilde{y}$ but with a
flip rate of $p^{e}$. Clearly, when $p^{e}<0.25$ or $p^{e}>0.75$, color is
more correlated with $\tilde{y}$ than the real digit. Therefore, the oracle
OOD model $f_{\text{OOD}}$ attains accuracy $0.75$ in all domains, while an
ERM model may attain high training accuracy and a low OOD property if $p^{e}$
in the training domains is too small or too large. Throughout the Colored
MNIST experiments, we use a three-layer MLP with ReLU activations and hidden
dimension 256. Although our MLP has relatively many parameters and is
non-convex due to the activation layers, thanks to the technique mentioned in
Section 4.3 the influence calculation is still fast and accurate: directly
calculating the influence once takes less than 2 seconds.
#### 5.1.1 Identify OOD Problem
In this section, we show that $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ can
discern whether the training domains are sufficiently diverse as mentioned in
Section 4.2. Assume $\mathcal{E}_{tr}$ has five training domains with
$p^{e}\in\\{0.2-2x,0.2-x,0.2,0.2+x,0.2+2x\\},$
where $x\in[0.0,0.1]$ is positively related to the diversity among the
training domains. If $x$ is zero, all data points are generated from the same
domain ($p^{e}=0.2$) and so the learning task on $\mathcal{E}_{tr}$ is not an
OOD problem. On the contrary, larger $x$ means that the training domains are
more diverse. We train 501 ERM models, one for each value of $x$. Given each
learnt model $f_{\hat{\bm{\theta}}}$ and the training data, we compute
$\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ and check the correlation
between $\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ and $x$. Figure 2
presents the results. Our index $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is
highly correlated with $x$: the Pearson coefficient is 0.9869 and the Spearman
coefficient is 0.9873. Also, the benchmark value of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ for a model learnt on identical
training domains ($\tilde{\mathbb{S}}$ in Section 4.2) can be derived from the
raw data by pooling and redistributing all data points; we mark it with the
black dashed line. If $\mathcal{V}_{\hat{\bm{\gamma}}|\hat{\bm{\theta}}}$ is
much higher than this benchmark, indicating that $x$ is not small, an OOD
algorithm should be considered when better OOD generalization is demanded.
Otherwise, the present algorithm (such as ERM) is sufficient. The results
coincide with our expectation that $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$
can discern whether $P^{e}$ differs across domains.
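The correlation check itself is straightforward; the following sketch reproduces the two coefficients from stored metric values (the file name and variable names are assumptions, not the authors' code):

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = 2e-4 * np.arange(501)               # diversity levels used above
v = np.load("v_gamma_given_theta.npy")  # assumed: 501 precomputed metric values

print("Pearson:  %.4f" % pearsonr(x, v)[0])
print("Spearman: %.4f" % spearmanr(x, v)[0])
```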
#### 5.1.2 Relationship between $\mathcal{V}$ and OOD Accuracy
In this section, we use an experiment to support our proposal in Section 4.2.
As previously proposed, if a model shows high test accuracy and small
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ simultaneously, it captures
invariant features and avoids varying features, and can thus be regarded as an
OOD model. In this experiment, we consider models with high test accuracy and
show that smaller $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ generally
corresponds to better OOD accuracy, which supports our proposal.
Figure 3: The relationship between $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$
and OOD accuracy for REx (left) and IRM (right) with
$\lambda\in\\{0,50,100,500,1000\\}.$ We train 400 models for each $\lambda$.
The OOD accuracy and $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ have high
Pearson coefficients: -0.9745 (top-left), -0.9761 (bottom-left), -0.8417
(top-right), -0.9476 (bottom-right). The coefficients are negative because
lower $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ indicates better OOD
properties.
Consider two setups: $p^{e}\in\\{0.0,0.1\\}$ and
$p^{e}\in\\{0.1,0.15,0.2,0.25,0.3\\}.$ We implement IRM and REx with different
penalty weights (note that ERM corresponds to $\lambda=0$) to check the
relationship between $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ and OOD
accuracy. For IRM and REx, we run $190$ epochs of pre-training with
$\lambda=1$ and use early stopping to prevent over-fitting. With this
technique, all models successfully achieve good test accuracy (within 0.1 of
the oracle accuracy) and meet our requirement. Figure 3 presents the results.
We can see that $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is highly
correlated with OOD accuracy for both IRM and REx, with the absolute value of
the Pearson coefficient never less than $0.8417$. Models learnt with larger
$\lambda$ exhibit better OOD properties, rely less on varying features, and
show smaller $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}.$ The results are
consistent with our proposal, except that when $\lambda$ is large in IRM,
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is slightly unstable. We have
examined this phenomenon carefully and found that it is caused by numerical
instability when inverting a Hessian with eigenvalues close to 0. Such
unstable inversion happens with low probability and can be addressed by
repeating the experiment once or twice.
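The instability can also be mitigated numerically: a common stabilization in influence-function implementations (e.g. the damping term discussed in Koh & Liang (2017)) is to add a small multiple of the identity before inverting. The authors' remedy above is simply to re-run the experiment, so the following is an alternative sketch, not their procedure:

```python
import numpy as np

def damped_influence(H, grad_e, eps=1e-3):
    # Solve (H + eps*I) x = grad_e instead of H x = grad_e, so that
    # eigenvalues of H near zero no longer blow up the solution.
    return np.linalg.solve(H + eps * np.eye(H.shape[0]), grad_e)
```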
### 5.2 Domain Generalization: VLCS
In this section, we apply the proposed metric to 4 algorithms: ERM, gDRO,
Mixup and IRM on the VLCS image dataset, which is widely used for domain
generalization. We emulate a real scenario with
$\mathcal{E}_{all}=\\{V,L,C,S\\}$ and
$\mathcal{E}_{tr}=\mathcal{E}_{all}\backslash\\{S\\}$. As in
Gulrajani & Lopez-Paz (2020), we use the “training-domain validation set”
method, i.e. we split off a validation set for each domain in
$\mathcal{E}_{tr}$, and the test accuracy is defined as the average accuracy
among the three validation sets. Note that our goal is to use the test
accuracy and $\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ to measure _OOD
generalization_, rather than to tune for SOTA performance on the unseen domain
$\\{S\\}$. Therefore, we do not apply any model selection method and just use
the default hyper-parameters in Gulrajani & Lopez-Paz (2020).
#### 5.2.1 Step 1: Test accuracy comparison
Table 1: Step 1: Test accuracy ($\%$)
Domain | C | L | V | Mean
---|---|---|---|---
ERM | 99.29 | 73.62 | 77.07 | 83.34
Mixup | 99.32 | 74.36 | 78.84 | 84.17
gDRO | 95.79 | 70.95 | 75.25 | 80.66
IRM | 49.44 | 44.76 | 41.17 | 45.12
For each algorithm, we run the plain training process 12 times and show the
average test accuracy in Table 1. Before calculating
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$, the learnt model should at least
attain a good test accuracy. Otherwise, there is no need to discuss its OOD
performance, since OOD accuracy is no larger than test accuracy. In the table,
the test accuracy of ERM, Mixup and gDRO is good, but that of IRM is not. In
this case, IRM is eliminated. If an algorithm fails to reach high test
accuracy, one should first adjust the hyper-parameters until a relatively high
test accuracy is observed.
#### 5.2.2 Step 2: shuffle and standard metric comparison
Now we are ready to check whether the learnt models are invariant across
$\mathcal{E}_{tr}$. As mentioned in Section 4.2, the difference between
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ represents how invariant a
model is across $\mathcal{E}_{tr}$. We calculate both values and report the
results in Figure 4. For ERM and Mixup, the two values are nearly the same. In
this case, we expect that ERM and Mixup models are invariant and should have
relatively high OOD accuracy, so no other algorithm is needed. For gDRO, we
can clearly see that $\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ is
uniformly smaller than $\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$. Therefore,
gDRO models do not treat different domains equally, and hence we predict that
their OOD accuracy will be relatively low. In this case, one who starts with
gDRO should turn to other algorithms if better OOD performance is demanded.
Note that, in the whole process, we know nothing about $\\{S\\}$, so the OOD
accuracy is unseen. However, from the above analysis, we know that (1) in this
setting, ERM and Mixup are better than gDRO; (2) one who uses gDRO can turn to
other algorithms (like Mixup) for better OOD performance; (3) one who uses ERM
should consider collecting more environments if they still want to improve OOD
performance. This completes the judgement using test accuracy and the proposed
metric.
Figure 4: The standard and shuffle versions of the metric, i.e.
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$, for ERM, Mixup and gDRO.
For each algorithm and each version of the metric, we run the experiment more
than 12 times to control statistical error. Similar values of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ indicate invariance across
$\mathcal{E}_{tr}$, which is the case for ERM and Mixup. For gDRO,
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ is clearly smaller.
#### 5.2.3 Step 3: OOD accuracy results (oracle)
Table 2: Step 3: OOD accuracy ($\%$)
| ERM | Mixup | gDRO | IRM
---|---|---|---|---
Mean | 62.76 | 63.91 | 60.17 | 31.33
Std | 1.16 | 1.57 | 2.56 | 13.44
In this step, we fortunately obtain $\mathcal{E}_{all}$ and can check whether
our judgement is reasonable. Normally, this step will not happen. We now show
the OOD accuracy of the four algorithms in Table 2. Consistent with our
judgement, ERM and Mixup models achieve a higher OOD accuracy than gDRO. The
performance of IRM (under these hyper-parameters) is lower than its test
accuracy. During the above process, we can also compare the metric across
models from the same algorithm but with different hyper-parameters (as in
Section 5.1.2). Besides, one may notice that even the highest OOD accuracy is
just $63.91\%$. That is to say, to obtain OOD accuracy larger than $70\%$, we
should consider collecting more environments. In Appendix A.6, we continue our
real scenario to see where our metric leads us when $\mathcal{E}_{tr}$ is
initially more diverse.
The full results on VLCS can also be found in the same appendix, together with
the comparison of the proposed metric with the IRM penalty in formula 4.
Besides, we show the comparison with conditional mutual information in
Appendix A.5. In summary, we use a realistic task to show how to judge the OOD
property of a learnt model using the proposed metric and test accuracy. The
judgement coincides well with the real OOD performance.
## 6 Conclusion
In this paper, we focus on two presently unsolved problems: how to discern the
OOD property of a set of domains, and how to discern that of learnt models. To
this end, we introduce influence functions into the OOD problem and propose a
metric to help solve these issues. Our metric can not only discern whether a
multi-domain problem is OOD, but can also judge a model’s OOD property when
combined with test accuracy. To make the calculation more meaningful, accurate
and efficient, we adapt the influence function to the domain level and propose
to use only the top model to calculate the influence. We prove the validity of
our method in simple cases, and it works well in experiments. We sincerely
hope that, with the help of this index, our understanding of OOD
generalization will become more precise and thorough.
## References
* Ahuja et al. (2020) Kartik Ahuja, Karthikeyan Shanmugam, Kush Varshney, and Amit Dhurandhar. Invariant risk minimization games. _arXiv preprint arXiv:2002.04692_ , 2020.
* Akuzawa et al. (2019) Kei Akuzawa, Yusuke Iwasawa, and Yutaka Matsuo. Domain generalization via invariant representation under domain-class dependency, 2019. URL https://openreview.net/forum?id=HJx38iC5KX.
* Alaa & Van Der Schaar (2019) Ahmed Alaa and Mihaela Van Der Schaar. Validating causal inference models via influence functions. volume 97 of _Proceedings of Machine Learning Research_ , pp. 191–201, Long Beach, California, USA, 09–15 Jun 2019. PMLR.
* Arjovsky et al. (2019) Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. _arXiv preprint arXiv:1907.02893_ , 2019.
* Bagnell (2005) J Andrew Bagnell. Robust supervised learning. In _Proceedings of the national conference on artificial intelligence_ , volume 20, pp. 714. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999, 2005.
* Basu et al. (2020) Samyadeep Basu, Philip Pope, and Soheil Feizi. Influence functions in deep learning are fragile. _arXiv preprint arXiv:2006.14651_ , 2020.
* Beery et al. (2018) Sara Beery, Grant Van Horn, and Pietro Perona. Recognition in terra incognita. In _Proceedings of the European Conference on Computer Vision (ECCV)_ , pp. 456–473, 2018.
* Ben-Tal et al. (2009) Aharon Ben-Tal, Laurent El Ghaoui, and Arkadi Nemirovski. _Robust optimization_ , volume 28. Princeton University Press, 2009.
* Bengio et al. (2019) Yoshua Bengio, Tristan Deleu, Nasim Rahaman, Nan Rosemary Ke, Sebastien Lachapelle, Olexa Bilaniuk, Anirudh Goyal, and Christopher Pal. A meta-transfer objective for learning to disentangle causal mechanisms. In _International Conference on Learning Representations_ , 2019.
* Bühlmann et al. (2020) Peter Bühlmann et al. Invariance, causality and robustness. _Statistical Science_ , 35(3):404–426, 2020.
* Castro et al. (2020) Daniel C Castro, Ian Walker, and Ben Glocker. Causality matters in medical imaging. _Nature Communications_ , 11(1):1–10, 2020.
* Cheng et al. (2019) Weiyu Cheng, Yanyan Shen, Linpeng Huang, and Yanmin Zhu. Incorporating interpretability into latent factor models via fast influence analysis. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pp. 885–893, 2019.
* Cohen et al. (2020) Gilad Cohen, Guillermo Sapiro, and Raja Giryes. Detecting adversarial samples using influence functions and nearest neighbors. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , June 2020.
* Cook (2009) R Dennis Cook. _Regression graphics: Ideas for studying regressions through graphics_ , volume 482. John Wiley & Sons, 2009.
* Cook & Weisberg (1980) R Dennis Cook and Sanford Weisberg. Characterizations of an empirical influence function for detecting influential cases in regression. _Technometrics_ , 22(4):495–508, 1980.
* Cook et al. (2002) R Dennis Cook, Bing Li, et al. Dimension reduction for conditional mean in regression. _The Annals of Statistics_ , 30(2):455–474, 2002\.
* Fang et al. (2020) Minghong Fang, Neil Zhenqiang Gong, and Jia Liu. Influence function based data poisoning attacks to top-n recommender systems. In _Proceedings of The Web Conference 2020_ , pp. 3019–3025, 2020\.
* Gulrajani & Lopez-Paz (2020) Ishaan Gulrajani and David Lopez-Paz. In search of lost domain generalization. _arXiv preprint arXiv:2007.01434_ , 2020.
* Koh & Liang (2017) Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. _arXiv preprint arXiv:1703.04730_ , 2017.
* Koh et al. (2019) Pang Wei W Koh, Kai-Siang Ang, Hubert Teo, and Percy S Liang. On the accuracy of influence functions for measuring group effects. In _Advances in Neural Information Processing Systems_ , pp. 5254–5264, 2019.
* Koyama & Yamaguchi (2020) Masanori Koyama and Shoichiro Yamaguchi. Out-of-distribution generalization with maximal invariant predictor. _arXiv preprint arXiv:2008.01883_ , 2020.
* Krueger et al. (2020) David Krueger, Ethan Caballero, Joern-Henrik Jacobsen, Amy Zhang, Jonathan Binas, Remi Le Priol, and Aaron Courville. Out-of-distribution generalization via risk extrapolation (rex). _arXiv preprint arXiv:2003.00688_ , 2020.
* Kuang et al. (2020) Kun Kuang, Ruoxuan Xiong, Peng Cui, Susan Athey, and Bo Li. Stable prediction with model misspecification and agnostic distribution shift. In _AAAI_ , pp. 4485–4492, 2020.
* Li et al. (2017) Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. Deeper, broader and artier domain generalization. In _Proceedings of the IEEE international conference on computer vision_ , pp. 5542–5550, 2017.
* Li et al. (2018) Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. Domain generalization with adversarial feature learning. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_ , pp. 5400–5409, 2018.
* Magliacane et al. (2018) Sara Magliacane, Thijs van Ommen, Tom Claassen, Stephan Bongers, Philip Versteeg, and Joris M Mooij. Domain adaptation by using causal inference to predict invariant conditional distributions. In _Advances in Neural Information Processing Systems_ , pp. 10846–10856, 2018.
* Mukherjee et al. (2020) Sudipto Mukherjee, Himanshu Asnani, and Sreeram Kannan. Ccmi: Classifier based conditional mutual information estimation. In _Uncertainty in Artificial Intelligence_ , pp. 1083–1093. PMLR, 2020.
* Peters et al. (2016) Jonas Peters, Peter Bühlmann, and Nicolai Meinshausen. Causal inference by using invariant prediction: identification and confidence intervals. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 5(78):947–1012, 2016.
* Robins et al. (2008) James Robins, Lingling Li, Eric Tchetgen, Aad van der Vaart, et al. Higher order influence functions and minimax estimation of nonlinear functionals. In _Probability and statistics: essays in honor of David A. Freedman_ , pp. 335–421. Institute of Mathematical Statistics, 2008.
* Robins et al. (2017) James M Robins, Lingling Li, Rajarshi Mukherjee, Eric Tchetgen Tchetgen, Aad van der Vaart, et al. Minimax estimation of a functional on a structured high-dimensional model. _The Annals of Statistics_ , 45(5):1951–1987, 2017.
* Rojas-Carulla et al. (2018) Mateo Rojas-Carulla, Bernhard Schölkopf, Richard Turner, and Jonas Peters. Invariant models for causal transfer learning. _The Journal of Machine Learning Research_ , 19(1):1309–1342, 2018.
* Sagawa et al. (2019) Shiori Sagawa, Pang Wei Koh, Tatsunori B Hashimoto, and Percy Liang. Distributionally robust neural networks. In _International Conference on Learning Representations_ , 2019.
* Sen et al. (2017) Rajat Sen, Ananda Theertha Suresh, Karthikeyan Shanmugam, Alexandros G Dimakis, and Sanjay Shakkottai. Model-powered conditional independence test. In _Advances in neural information processing systems_ , pp. 2951–2961, 2017.
* Subbaswamy et al. (2019) Adarsh Subbaswamy, Peter Schulam, and Suchi Saria. Preventing failures due to dataset shift: Learning predictive models that transport. In _The 22nd International Conference on Artificial Intelligence and Statistics_ , pp. 3118–3127. PMLR, 2019.
* Ting & Brochu (2018) Daniel Ting and Eric Brochu. Optimal subsampling with influence functions. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), _Advances in Neural Information Processing Systems 31_ , pp. 3650–3659. Curran Associates, Inc., 2018. URL http://papers.nips.cc/paper/7623-optimal-subsampling-with-influence-functions.pdf.
* Torralba & Efros (2011) Antonio Torralba and Alexei A Efros. Unbiased look at dataset bias. In _CVPR 2011_ , pp. 1521–1528. IEEE, 2011.
* Tsiatis (2007) Anastasios Tsiatis. _Semiparametric theory and missing data_. Springer Science & Business Media, 2007.
* Van der Laan et al. (2003) Mark J Van der Laan, MJ Laan, and James M Robins. _Unified methods for censored longitudinal data and causality_. Springer Science & Business Media, 2003.
* Van der Vaart (2000) Aad W Van der Vaart. _Asymptotic statistics_ , volume 3. Cambridge university press, 2000.
* Vapnik (1992) Vladimir Vapnik. Principles of risk minimization for learning theory. In _Advances in neural information processing systems_ , pp. 831–838, 1992.
* Wang et al. (2020) Yufei Wang, Haoliang Li, and Alex C Kot. Heterogeneous domain generalization via domain mixup. In _ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_ , pp. 3622–3626. IEEE, 2020.
* Wright (1921) Sewall Wright. Correlation and causation. _J. agric. Res._ , 20:557–580, 1921.
* Xie et al. (2020) Chuanlong Xie, Fei Chen, Yue Liu, and Zhenguo Li. Risk variance penalization: From distributional robustness to causality. _arXiv preprint arXiv:2006.07544_ , 2020.
* Xu et al. (2020) Minghao Xu, Jian Zhang, Bingbing Ni, Teng Li, Chengjie Wang, Qi Tian, and Wenjun Zhang. Adversarial domain adaptation with domain mixup. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 34, pp. 6502–6509, 2020.
* Yan et al. (2020) Shen Yan, Huan Song, Nanxiang Li, Lincan Zou, and Liu Ren. Improve unsupervised domain adaptation with mixup training. _arXiv preprint arXiv:2001.00677_ , 2020.
## Appendix A Appendix
### A.1 Simple Bayesian Network
In this section, we show that models with better OOD accuracy achieve smaller
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$. We assume the data is generated
from the following Bayesian network:
$\displaystyle{\mathbf{x}}_{1}\leftarrow\mathcal{N}(0,\sigma^{2}_{e}),\quad{\mathbf{y}}\leftarrow{\mathbf{x}}_{1}{\bm{W}}_{1\rightarrow{\mathbf{y}}}+\mathcal{N}(0,1),\quad{\mathbf{x}}_{2}\leftarrow{\mathbf{y}}{\bm{W}}_{{\mathbf{y}}\rightarrow
2}+\mathcal{N}(0,\sigma^{2}_{e}).$ (8)
where ${\mathbf{x}}_{1},{\mathbf{x}}_{2}\in{\mathbb{R}}^{5}$ are the features,
${\mathbf{y}}\in{\mathbb{R}}^{5}$ is the target vector, and
${\bm{W}}_{1\rightarrow{\mathbf{y}}}\in{\mathbb{R}}^{5\times 5}$ and
${\bm{W}}_{{\mathbf{y}}\rightarrow 2}\in\mathbb{R}^{5\times 5}$ are the
underlying parameters, which are invariant across domains. The variance
$\sigma^{2}_{e}$ of the Gaussian noise depends on the domain; for simplicity,
we identify a domain with $e=\sigma_{e}$. The goal is to linearly regress the
response ${\mathbf{y}}$ on the input vector $({\bm{x}}_{1},{\bm{x}}_{2})$, i.e.
$\hat{\bm{y}}={\bm{x}}_{1}\hat{\bm{W}}_{1}+{\bm{x}}_{2}\hat{\bm{W}}_{2}.$
According to the Bayesian network (8), ${\mathbf{x}}_{1}$ is the invariant
feature, while the correlation between ${\mathbf{x}}_{2}$ and ${\mathbf{y}}$
is spurious and unstable since $e=\sigma_{e}$ varies across domains. Clearly,
the model based only on ${\mathbf{x}}_{1}$ is an invariant model. Any
invariant estimator should achieve
$\hat{\bm{W}}_{1}\approx{\bm{W}}_{1\rightarrow{\mathbf{y}}}$ and
$\hat{\bm{W}}_{2}\approx{\bf 0}$.
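For reference, here is a minimal numpy sketch (names illustrative; not the authors' generator) of sampling from the Bayesian network (8) for the five training domains used below:

```python
import numpy as np

def sample_domain(sigma_e, n, W1y, Wy2, rng):
    # One domain of the Bayesian network (8): x1 -> y -> x2,
    # with domain-dependent noise std sigma_e on x1 and x2.
    x1 = rng.normal(0.0, sigma_e, size=(n, 5))
    y = x1 @ W1y + rng.normal(0.0, 1.0, size=(n, 5))
    x2 = y @ Wy2 + rng.normal(0.0, sigma_e, size=(n, 5))
    return np.hstack([x1, x2]), y

rng = np.random.default_rng(0)
W1y, Wy2 = rng.normal(size=(5, 5)), rng.normal(size=(5, 5))
domains = {e: sample_domain(e, 1000, W1y, Wy2, rng)
           for e in (0.2, 0.7, 1.2, 1.7, 2.2)}
```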
Table 3: Average parameter error $\|\hat{\bm{W}}-{\bm{W}}\|^{2}$ and the stability measurement $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ of 500 models from ERM, IRM and REx. Here, “Causal Error” denotes $\|\hat{\bm{W}}_{1}-{\bm{W}}_{1\rightarrow{\mathbf{y}}}\|^{2}$ and “Non-causal Error” denotes $\|\hat{\bm{W}}_{2}\|^{2}$.
Method | $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ | Causal Error | Non-causal Error
---|---|---|---
ERM | 15.844 | 0.582 | 0.581
IRM | 5.254 | 0.122 | 0.109
REx | 1.341 | 0.042 | 0.033
Now consider five training domains
$e\in\mathcal{E}_{tr}=\\{0.2,0.7,1.2,1.7,2.2\\}$, each containing 1000 data
points. We estimate three linear models using ERM, IRM and REx respectively
and record the parameter error as well as
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ (note that ${\bm{\gamma}}$ is
${\bm{\theta}}$ here). Table 3 presents the results over 500 repetitions. As
expected, IRM and REx learn more invariant relationships than ERM (smaller
causal error) and better avoid non-causal variables
($\hat{\bm{W}}_{2}\approx{\bf 0}$). Furthermore, the proposed measurement
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is highly related to invariance,
i.e. the model with better OOD properties achieves smaller
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$. These results coincide with our
understanding.
### A.2 Proof of an Example
In this section, we use a simple model to illustrate the validity of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ proposed in Section 4. Consider a
structural equation model (Wright (1921)):
$\displaystyle{\textnormal{x}}_{1}\sim
P_{x}^{e},\quad{\textnormal{y}}\leftarrow{\textnormal{x}}_{1}+\mathcal{N}(0,1),\quad{\textnormal{x}}_{2}\leftarrow{\textnormal{y}}+\mathcal{N}(0,\sigma_{e}^{2})$
where $P_{x}^{e}$ is a distribution with a finite second-order moment, i.e.
$\mathbb{E}{\textnormal{x}}_{1}^{2}<+\infty$, and $\sigma_{e}^{2}$ is the
variance of the noise term in ${\textnormal{x}}_{2}.$ Both $P_{x}^{e}$ and
$\sigma_{e}^{2}$ vary across domains. For simplicity, we assume there are
infinite training data points collected from two training domains
$\mathcal{E}_{tr}=\\{(P_{x}^{1},\sigma_{1}^{2}),(P_{x}^{2},\sigma_{2}^{2})\\}$.
Our goal is to predict y from
${\mathbf{x}}:=({\textnormal{x}}_{1},{\textnormal{x}}_{2})^{\top}$ using a
least-squares predictor
$\hat{y}={\bm{x}}^{\top}\hat{\bm{\beta}}:=x_{1}\hat{\beta}_{1}+x_{2}\hat{\beta}_{2}$.
Here we consider two algorithms: ERM and IRM with $\lambda\rightarrow+\infty$.
According to Arjovsky et al. (2019), using IRM we obtain
${\bm{\beta}}_{\text{IRM}}\rightarrow(1,0)^{\top}$.
Intuitively, ERM will exploit both ${\textnormal{x}}_{1}$ and
${\textnormal{x}}_{2}$, thus achieving a better regression model. However,
since the relationship between ${\textnormal{y}}$ and ${\textnormal{x}}_{2}$
varies across domains, our index will be large in this case. Conversely,
${\bm{\beta}}_{\text{IRM}}$ only uses the invariant feature
${\textnormal{x}}_{1}$, so
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}\rightarrow-\infty$. Note that we do
not have an embedding model here, so
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}=\mathcal{V}_{{\bm{\beta}}}$.
ERM. We denote
$\displaystyle\ell({\bm{\beta}})=\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\ell_{e}({\bm{\beta}})\quad\text{with}\quad\ell_{e}({\bm{\beta}})=\mathbb{E}_{e}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})^{2}.$
Note that in $\mathbb{E}_{e}$, ${\textnormal{x}}_{1}$ is sampled from
$P_{x}^{e}$. We then have
$\displaystyle\frac{\partial\ell({\bm{\beta}})}{\partial{\bm{\beta}}}=-\frac{2}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]=-\frac{2}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\left(\begin{array}[]{c}\mathbb{E}_{e}[{\textnormal{x}}_{1}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\\\
\mathbb{E}_{e}[{\textnormal{x}}_{2}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\\\
\end{array}\right)$
To proceed further, we denote
$\displaystyle\bar{d}=\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathbb{E}_{e}{\mathbf{x}}_{1}^{2},\quad
s=\sum_{e\in\mathcal{E}_{tr}}\sigma_{e}^{2}=\sigma_{1}^{2}+\sigma_{2}^{2}.$
By solving the following equations:
$\displaystyle\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathbb{E}_{e}[{\textnormal{x}}_{1}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]=\bar{d}(1-\beta_{1}-\beta_{2})=0$
and
$\displaystyle\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathbb{E}_{e}[{\textnormal{x}}_{2}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]=(\bar{d}+1)(1-\beta_{1}-\beta_{2})+\beta_{1}-\frac{s}{|\mathcal{E}_{tr}|}\beta_{2}=0$
we have $\hat{\bm{\beta}}=(\hat{\beta}_{1},\hat{\beta}_{2})^{\top}$ with
$\displaystyle\hat{\beta}_{1}=\frac{s}{s+2},\quad\hat{\beta}_{2}=\frac{2}{s+2}.$
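This closed form is easy to check by Monte Carlo; a sketch, taking $P_{x}^{e}$ to be standard normal (an assumption permitted by the finite-second-moment condition):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigmas = 10**6, (0.5, 1.5)     # two training domains (sigma_1, sigma_2)
X, Y = [], []
for s_e in sigmas:
    x1 = rng.normal(size=n)
    y = x1 + rng.normal(size=n)
    x2 = y + rng.normal(scale=s_e, size=n)
    X.append(np.column_stack([x1, x2]))
    Y.append(y)
X, Y = np.vstack(X), np.concatenate(Y)
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]   # pooled ERM solution
s = sum(v**2 for v in sigmas)
print(beta_hat, "vs closed form", (s / (s + 2), 2 / (s + 2)))
```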
Now we calculate our index. It is easy to see that
$\displaystyle\frac{\partial\ell_{e}({\bm{\beta}})}{\partial\beta_{1}}$
$\displaystyle=$
$\displaystyle-2\mathbb{E}_{e}[{\textnormal{x}}_{1}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]=-2\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}(1-\beta_{1}-\beta_{2})$
$\displaystyle\frac{\partial\ell_{e}({\bm{\beta}})}{\partial\beta_{2}}$
$\displaystyle=$
$\displaystyle-2\mathbb{E}_{e}[{\textnormal{x}}_{2}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]=-2[(\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}+1)(1-\beta_{1}-\beta_{2})+\beta_{1}-\sigma_{e}^{2}\beta_{2}].$
Therefore,
$\displaystyle\nabla\ell_{1}({\bm{\beta}})-\nabla\ell_{2}({\bm{\beta}})=\left(\begin{array}[]{c}0\\\
2\beta_{2}(\sigma_{1}^{2}-\sigma_{2}^{2})\end{array}\right)\quad\text{and}\quad\nabla\ell_{1}(\hat{\bm{\beta}})-\nabla\ell_{2}(\hat{\bm{\beta}})=\left(\begin{array}[]{c}0\\\
\frac{4(\sigma_{1}^{2}-\sigma_{2}^{2})}{s+2}\end{array}\right)$ (14)
On the other hand, calculating the Hessian gives
$\displaystyle{\bm{H}}_{\text{ERM}}=\left(\begin{array}[]{cc}2\bar{d}&2\bar{d}\\\
2\bar{d}&2\bar{d}+s+2\\\
\end{array}\right)\quad\text{and}\quad{\bm{H}}^{-1}=\frac{1}{2\bar{d}(s+2)}\left(\begin{array}[]{cc}2\bar{d}+s+2&-2\bar{d}\\\
-2\bar{d}&2\bar{d}\\\ \end{array}\right).$
Then we have (note that
$\mathcal{IF}(\hat{\bm{\beta}},{\mathbb{S}}^{e})={\bm{H}}^{-1}\nabla\ell_{e}(\hat{\bm{\beta}})$)
$\displaystyle\mathcal{V}_{\hat{\bm{\beta}}}$ $\displaystyle=$
$\displaystyle\ln(\|\mathrm{Cov}_{e\in\mathcal{E}}(\mathcal{IF}(\hat{\bm{\beta}},{\mathbb{S}}^{e}))\|_{2})$
$\displaystyle=$
$\displaystyle\ln(\frac{1}{4}\|(\mathcal{IF}_{1}-\mathcal{IF}_{2})(\mathcal{IF}_{1}-\mathcal{IF}_{2})^{\top}\|_{2})$
$\displaystyle=$
$\displaystyle\ln(\frac{1}{4}\|\mathcal{IF}_{1}-\mathcal{IF}_{2}\|^{2})$
$\displaystyle=$ $\displaystyle
2\ln(\frac{1}{2}\|{\bm{H}}^{-1}(\nabla\ell_{1}(\hat{\bm{\beta}})-\nabla\ell_{2}(\hat{\bm{\beta}}))\|)$
$\displaystyle=$ $\displaystyle
2\ln(\frac{1}{4\bar{d}(s+2)}\|\left(\begin{array}[]{cc}2\bar{d}+s+2&-2\bar{d}\\\
-2\bar{d}&2\bar{d}\\\ \end{array}\right)\left(\begin{array}[]{cc}0\\\
\frac{4(\sigma_{1}^{2}-\sigma_{2}^{2})}{s+2}\end{array}\right)\|)$
$\displaystyle=$ $\displaystyle
2\ln(\frac{2\sqrt{2}|\sigma_{1}^{2}-\sigma_{2}^{2}|}{(s+2)^{2}})$
where the third equality holds because the matrix has rank $1$. Clearly, when
$|\sigma_{1}^{2}-\sigma_{2}^{2}|\rightarrow 0$ (meaning the two domains become
identical), our index $\mathcal{V}_{\bm{\beta}}\rightarrow-\infty$. Otherwise,
given $\sigma_{1}\not=\sigma_{2}$, we have $\mathcal{V}_{\bm{\beta}}>-\infty$,
showing that ERM captures varying features.
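The same chain of equalities suggests how $\mathcal{V}_{\bm{\beta}}$ can be evaluated in general, given the Hessian and per-domain gradients at the optimum; a numpy sketch with illustrative names:

```python
import numpy as np

def v_metric(H, domain_grads):
    # V = ln || Cov_e( IF_e ) ||_2 with IF_e = H^{-1} grad_e.
    # bias=True uses the 1/|E| normalization, i.e. the factor 1/4
    # for two domains that appears in the derivation above.
    IF = np.array([np.linalg.solve(H, g) for g in domain_grads])
    C = np.cov(IF, rowvar=False, bias=True)
    return np.log(np.linalg.norm(C, ord=2))
```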
IRM. We now turn to the IRM model and show that
$\mathcal{V}_{\bm{\beta}}\rightarrow-\infty$ when $\lambda\rightarrow+\infty$,
thus proving that the IRM-learnt model $\hat{\bm{\beta}}_{\text{IRM}}$ does
achieve a smaller $\mathcal{V}_{\bm{\beta}}$ than the ERM solution
$\hat{\bm{\beta}}$. Under the IRM model with tuning parameter $\lambda$, we have
$\displaystyle\mathcal{L}({\bm{\beta}})=\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\mathbb{E}_{e}[({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})^{2}]+4\lambda\|\mathbb{E}_{e}[{\mathbf{x}}^{\top}{\bm{\beta}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\|^{2}.$
Then the gradient with respect to ${\bm{\beta}}$ is:
$\displaystyle\nabla\mathcal{L}({\bm{\beta}})$ $\displaystyle=$
$\displaystyle\frac{1}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\big{(}-2\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]+8\lambda\mathbb{E}_{e}[{\mathbf{x}}^{\top}{\bm{\beta}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-2{\mathbf{x}}^{\top}{\bm{\beta}})]\big{)},$
and the Hessian matrix
$\displaystyle{\bm{H}}$ $\displaystyle=$
$\displaystyle{\bm{H}}_{\text{ERM}}+\frac{8\lambda}{|\mathcal{E}|}\sum_{e\in\mathcal{E}}\big{(}\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-2{\mathbf{x}}^{\top}{\bm{\beta}})]\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-2{\mathbf{x}}^{\top}{\bm{\beta}})]^{\top}-2\mathbb{E}_{e}[{\mathbf{x}}^{\top}{\bm{\beta}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\mathbb{E}_{e}[{\mathbf{x}}{\mathbf{x}}^{\top}]\big{)}.$
(17)
Denote by ${\bm{\beta}}_{\lambda}$ the solution of the IRM algorithm on
$\mathcal{E}_{tr}$ when the penalty is $\lambda$. From Arjovsky et al. (2019)
we know that
${\bm{\beta}}_{\lambda}\rightarrow{\bm{\beta}}_{\text{IRM}}:=(1,0)^{\top}$. To
show
$\lim_{\lambda\rightarrow+\infty}\mathcal{V}_{{\bm{\beta}}_{\lambda}}=-\infty$,
we only need to show that
$\displaystyle\lim_{\lambda\rightarrow+\infty}{\bm{H}}^{-1}(\nabla\ell_{1}({\bm{\beta}}_{\lambda})-\nabla\ell_{2}({\bm{\beta}}_{\lambda}))=\mathbf{0}$
We prove this by showing that
$\displaystyle\lim_{\lambda\rightarrow+\infty}{\bm{H}}^{-1}(\lambda)=\mathbf{0}\quad\text{and}\quad\lim_{\lambda\rightarrow+\infty}\nabla\ell_{1}({\bm{\beta}}_{\lambda})-\nabla\ell_{2}({\bm{\beta}}_{\lambda})=\mathbf{0}$
(18)
simultaneously. We add $(\lambda)$ after ${\bm{H}}^{-1}$ to show that
${\bm{H}}^{-1}$ is a continuous function of $\lambda$. Rewrite ${\bm{H}}$ in
formula 17 as
$\displaystyle{\bm{H}}(\lambda,{\bm{\beta}}_{\lambda})={\bm{H}}_{\text{ERM}}+\lambda
F({\bm{\beta}}_{\lambda})$
where
$\displaystyle F({\bm{\beta}})$ $\displaystyle=$
$\displaystyle\frac{8}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\big{(}\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-2{\mathbf{x}}^{\top}{\bm{\beta}})]\mathbb{E}_{e}[{\mathbf{x}}({\textnormal{y}}-2{\mathbf{x}}^{\top}{\bm{\beta}})]^{\top}-2\mathbb{E}_{e}[{\mathbf{x}}^{\top}{\bm{\beta}}({\textnormal{y}}-{\mathbf{x}}^{\top}{\bm{\beta}})]\mathbb{E}_{e}[{\mathbf{x}}{\mathbf{x}}^{\top}]\big{)}$
$\displaystyle\lim_{{\bm{\beta}}_{\lambda}\rightarrow{\bm{\beta}}_{\text{IRM}}}F({\bm{\beta}}_{\lambda})$
$\displaystyle=$
$\displaystyle\frac{4}{|\mathcal{E}_{tr}|}\sum_{e\in\mathcal{E}_{tr}}\left(\begin{array}[]{c}-\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}\\\
1-\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}\end{array}\right)\left(\begin{array}[]{cc}-\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}&1-\mathbb{E}_{e}{\textnormal{x}}_{1}^{2}\end{array}\right)=F({\bm{\beta}}_{\text{IRM}})\text{
exists.}$
Obviously, $F({\bm{\beta}}_{\text{IRM}})$ is positive definite. Therefore, we
have
$\displaystyle\lim_{\lambda\rightarrow+\infty}{\bm{H}}(\lambda,{\bm{\beta}}_{\lambda})^{-1}$
$\displaystyle=$
$\displaystyle\lim_{\lambda\rightarrow+\infty}\lim_{{\bm{\beta}}_{\lambda}\rightarrow{\bm{\beta}}_{\text{IRM}}}[{\bm{H}}_{\text{ERM}}+\lambda
F({\bm{\beta}}_{\lambda})]^{-1}$ $\displaystyle=$
$\displaystyle\lim_{\lambda\rightarrow+\infty}[{\bm{H}}_{\text{ERM}}+\lambda
F({\bm{\beta}}_{\text{IRM}})]^{-1}$ $\displaystyle=$ $\displaystyle\mathbf{0}$
The first equality holds because
$\lim_{\lambda\rightarrow+\infty}F({\bm{\beta}}_{\lambda})=F({\bm{\beta}}_{\text{IRM}})$
exists and is not $\mathbf{0}$, and the last equality holds because the
eigenvalues of ${\bm{H}}$ go to $+\infty$ as $\lambda\rightarrow+\infty$.
Now consider
$\nabla\ell_{1}({\bm{\beta}}_{\lambda})-\nabla\ell_{2}({\bm{\beta}}_{\lambda})$.
According to formula 14, we have
$\displaystyle\lim_{\lambda\rightarrow+\infty}\nabla\ell_{1}({\bm{\beta}}_{\lambda})-\nabla\ell_{2}({\bm{\beta}}_{\lambda})$
$\displaystyle=$
$\displaystyle\lim_{{\bm{\beta}}_{\lambda}\rightarrow{\bm{\beta}}_{\text{IRM}}}\nabla\ell_{1}({\bm{\beta}}_{\lambda})-\nabla\ell_{2}({\bm{\beta}}_{\lambda})$
$\displaystyle=$
$\displaystyle\nabla\ell_{1}({\bm{\beta}}_{\text{IRM}})-\nabla\ell_{2}({\bm{\beta}}_{\text{IRM}})$
$\displaystyle=$ $\displaystyle\left(\begin{array}[]{c}0\\\
2\beta_{2}(\sigma_{1}^{2}-\sigma_{2}^{2})\end{array}\right)$ $\displaystyle=$
$\displaystyle\mathbf{0}$
Hence we finish the proof of formula 18, showing that
$\mathcal{V}_{\bm{\beta}}\rightarrow-\infty$ for IRM.
### A.3 Formula (5)
This section shows the derivation of expression (5). Recall the training
dataset $\mathbb{S}=\\{\mathbb{S}^{1},...,\mathbb{S}^{m}\\}$ and the objective
function
$\displaystyle\mathcal{L}(f,{\mathbb{S}})=\ell(f,{\mathbb{S}})+\lambda
R(f,{\mathbb{S}}),$
where the second term on the right-hand side is the regularization; for ERM,
the regularization term is zero. With the feature extractor (${\bm{\beta}}$)
fixed, we upweight a domain $\mathbb{S}^{e}$. The new objective function is
$\displaystyle\mathcal{L}_{+}({\bm{\theta}},\mathbb{S},\delta)=\mathcal{L}({\bm{\theta}},\mathbb{S})+\delta\cdot\ell({\bm{\theta}},{\mathbb{S}}^{e})$
Notice that when upweighting a domain, we only upweight the empirical loss on
that domain. Further, we denote
$\hat{\bm{\gamma}},\hat{\bm{\gamma}}_{+}$ as the optimal solutions before and
after upweighting a domain. It is easy to see that
$\|\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}}\|\rightarrow 0$ when
$\delta\rightarrow 0$. Following the derivation in Koh & Liang (2017), a
first-order Taylor expansion of
$\nabla_{\bm{\gamma}}\mathcal{L}_{+}({\bm{\theta}},\mathbb{S},\delta)$ with
respect to ${\bm{\gamma}}$ around $\hat{\bm{\gamma}}$ gives
$\displaystyle\mathbf{0}$ $\displaystyle=$
$\displaystyle\nabla_{\bm{\gamma}}[\mathcal{L}(\hat{\bm{\theta}}_{+},\mathbb{S})+\delta\ell(\hat{\bm{\theta}}_{+},\mathbb{S}^{e})]$
$\displaystyle=$
$\displaystyle\nabla_{\bm{\gamma}}(\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})+\delta\ell(\hat{\bm{\theta}},\mathbb{S}^{e}))+\nabla_{\bm{\gamma}}^{2}[\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})+\delta\ell(\hat{\bm{\theta}},\mathbb{S}^{e})](\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}})+o(\|\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}}\|)$
$\displaystyle=$
$\displaystyle\delta\nabla_{\bm{\gamma}}\ell(\hat{\bm{\theta}},\mathbb{S}^{e})+\nabla_{\bm{\gamma}}^{2}[\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})+\delta\ell(\hat{\bm{\theta}},\mathbb{S}^{e})](\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}})+o(\|\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}}\|)$
Assuming that
$\nabla_{\bm{\gamma}}^{2}[\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})+\delta\ell(\hat{\bm{\theta}},\mathbb{S}^{e})]$
is invertible, we have
$\displaystyle\frac{\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}}}{\delta}$
$\displaystyle=$
$\displaystyle-[\nabla_{\bm{\gamma}}^{2}(\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})+\delta\ell(\hat{\bm{\theta}},\mathbb{S}^{e}))]^{-1}\nabla_{\bm{\gamma}}\ell(\hat{\bm{\theta}},\mathbb{S}^{e})+o(1)$
$\displaystyle\lim_{\delta\rightarrow
0}\frac{\hat{\bm{\gamma}}_{+}-\hat{\bm{\gamma}}}{\delta}$ $\displaystyle=$
$\displaystyle-[\nabla_{\bm{\gamma}}^{2}\mathcal{L}(\hat{\bm{\theta}},\mathbb{S})]^{-1}\nabla_{\bm{\gamma}}\ell(\hat{\bm{\theta}},\mathbb{S}^{e})$
The sign is immaterial for the metric, since the influence enters
$\mathcal{V}$ only through a covariance and its norm; the main text therefore
drops it. Note that this derivation is not fully rigorous; please refer to
Van der Vaart (2000) for a more rigorous discussion of influence functions.
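The limit (including its sign) can be sanity-checked on a toy quadratic objective, where the upweighted minimizer is available in closed form; a sketch with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(1)
def spd(k):
    # Random symmetric positive-definite matrix.
    M = rng.normal(size=(k, k))
    return M @ M.T + np.eye(k)

A, Ae = spd(4), spd(4)                 # Hessians of L and of the domain loss l_e
b, be = rng.normal(size=4), rng.normal(size=4)

g_hat = np.linalg.solve(A, b)          # minimizer of L(g) = 0.5 g'Ag - b'g
delta = 1e-6                           # upweighted objective: L + delta * l_e
g_plus = np.linalg.solve(A + delta * Ae, b + delta * be)
fd = (g_plus - g_hat) / delta          # finite-difference derivative
grad_e = Ae @ g_hat - be               # gradient of l_e at g_hat
print(np.allclose(fd, -np.linalg.solve(A, grad_e), atol=1e-3))  # True
```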
The reason that ${\bm{\beta}}$ should be fixed is as follows. First, if
${\bm{\beta}}$ were allowed to vary, the change of ${\bm{\theta}}$ would
become:
$\displaystyle\left(\begin{matrix}H_{{\bm{\gamma}}{\bm{\gamma}}}&H_{{\bm{\gamma}}{\bm{\beta}}}\\\
H_{{\bm{\beta}}{\bm{\gamma}}}&H_{{\bm{\beta}}{\bm{\beta}}}\end{matrix}\right)^{-1}\left(\begin{matrix}\nabla_{{\bm{\gamma}}}l(\hat{\bm{\theta}},{\mathbb{S}}^{e})\\\
\nabla_{{\bm{\beta}}}l(\hat{\bm{\theta}},{\mathbb{S}}^{e})\end{matrix}\right).$
Hence the computational cost would be similar to calculating and inverting the
whole Hessian matrix. Most importantly, without fixing ${\bm{\beta}}$, the
change of ${\bm{\gamma}}$ can be uninformative. Say that, when upweighting
${\mathbb{S}}^{e}$, the use of a feature decreases. It is possible, however,
that the parameter in ${\bm{\gamma}}$ corresponding to the feature increases
while the corresponding part of ${\bm{\beta}}$ decreases by a larger amount.
In this case, the use of the feature decreases but ${\bm{\gamma}}$ increases,
so the change of ${\bm{\gamma}}$ calculated by the influence function may
provide no information about the use of a feature. Therefore, we argue that
fixing ${\bm{\beta}}$ is a “double-win” choice.
### A.4 Accuracy is not enough
In the introduction, we gave an example where test accuracy misleads us. In
this section, we first supplement some examples where test accuracy not only
misjudges different algorithms, but also misjudges the OOD property of models
learnt with different penalty weights within the same algorithm. After that,
we show the universality of these problems and explain why test accuracy
fails.
Figure 5: Experiments on Colored MNIST showing that test accuracy (x-axis)
cannot be used to judge models learnt with different penalty weights. Consider
two test domains with $p^{\text{test}}=0.2$ (top panels) and
$p^{\text{test}}=0.3$ (bottom panels). For each $\lambda$, we run IRM and REx
500 times. We can see that as $\lambda$ increases from $0$ to $1000$, the OOD
accuracy also increases, but test accuracy does not. When
$p^{\text{test}}=0.3$, their relationship becomes even more intricate.
Consider two training domains
$p^{e}\in\\{0.0,0.1\\},$
and a test domain with flip rate denoted by $p^{\text{test}}$. We implement
IRM and REx with penalty $\lambda\in\\{0,50,100,500,1000\\}$ to check the
relationship between test accuracy and OOD accuracy. The training process is
identical to the experiment in Section 5.1.2. As the results in Figure 5 show,
as the OOD property of the model gradually improves (caused by gradually
increasing $\lambda$), its relationship with test accuracy is either
completely (when $p^{\text{test}}$ is 0.2) or partly (when $p^{\text{test}}$
is 0.3) negatively correlated. This phenomenon reveals the weakness of test
accuracy. If one selects $\lambda$ by test accuracy when $p^{\text{test}}$ is
0.3, $\lambda=50$ may be the best choice, whether for IRM or REx. However, the
model learnt with $\lambda=50$ has OOD accuracy even _less than a random-guess
model_.
Whether test accuracy is positively correlated, negatively correlated, or
uncorrelated with a model’s OOD property mainly depends on the “distance”
between the test domain and the “worst” domain for the model. If test accuracy
happens to be the lowest among all domains, then OOD accuracy directly equals
test accuracy. In practice, however, their distance may be huge, and this is
precisely the difficulty of OOD generalization. For example, we have access to
images of cows in grasslands, woods and forests, but cows in the desert are
rare. In this case, the “worst” domain is certainly far from what we can get.
If we expect a model to capture the real features of cows, the model should
avoid any usage of background color. However, a model based on color will
perform consistently well (better than any OOD model) in grasslands, woods and
forests, since all of the available domains have generally green backgrounds.
In Colored MNIST, test accuracy fails in the same way.
Such situations are quite common. Generally, within the domains we have, there
may be features that are strongly correlated with the prediction target but
vary slightly across domains. These features are spurious, given that their
relationship with the target is significantly different in the other domains
to which we want to generalize. However, using these features in prediction
easily achieves high test accuracy. Consequently, it is extremely risky to
judge models merely by test accuracy.
### A.5 Conditional mutual information
A possible alternative to $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is
Conditional Mutual Information (CMI). For three continuous random variables
$X$, $Y$, $Z$, the CMI is defined as
$\displaystyle I(X;Y|Z)=\int\int\int
p(x,y,z)\log\frac{p(x,y,z)}{p(x,z)p(y|z)}dxdydz$ (21)
where $p(\cdot)$ is the probability density function. Consider
$I(e;y|\Phi(x))$ or $I(e;y|\hat{y})$, i.e. the mutual information between $e$
and the true label $y$, given the features or the prediction $\hat{y}$ of $x$.
The insight is that, if the model is invariant across different domains, then
$y$ should contain little information about $e$ given $\Phi(x)$. Otherwise, if
the prediction $\hat{y}$ is highly correlated with $e$, the mutual information
will be high.
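When $e$, $y$ and $\hat{y}$ are all discrete, as in Colored MNIST, the “true” CMI used below can be computed with a plug-in estimator over empirical frequencies; a sketch (illustrative, not the experiment code):

```python
import numpy as np
from collections import Counter

def discrete_cmi(e, y, yhat):
    # Plug-in estimate of I(e; y | yhat) for discrete sequences:
    # sum_z p(z) sum_{a,b} p(a,b|z) log[ p(a,b|z) / (p(a|z) p(b|z)) ].
    n = len(e)
    joint = Counter(zip(e, y, yhat))
    cmi = 0.0
    for z in set(yhat):
        idx = [i for i in range(n) if yhat[i] == z]
        L = len(idx)
        pe = Counter(e[i] for i in idx)   # counts of e within {yhat = z}
        py = Counter(y[i] for i in idx)   # counts of y within {yhat = z}
        for (a, b, c), cnt in joint.items():
            if c == z:
                cmi += (L / n) * (cnt / L) * np.log(cnt * L / (pe[a] * py[b]))
    return cmi
```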
Figure 6: Experiments on the relationship between OOD accuracy and CMI (true,
or estimated using the method in Sen et al. (2017)). Models are trained by REx
(left) and IRM (right) with $\lambda\in\\{0,10,100,1000\\}$. We train 50
models for each $\lambda$ and calculate the true CMI $I(e;y|\hat{y})$ or the
CCIT value. As analyzed in Appendix A.5, the true CMI is highly correlated
with OOD accuracy, with Pearson coefficients $-0.9923$ (left) and $-0.9858$
(right). However, the estimated value shows a completely different picture,
with Pearson coefficients $-0.0768$ (left) and $-0.1193$ (right).
This metric seems promising. However, the numerical estimation of CMI remains
a challenge, and previous works have devoted much effort to this problem,
including CCMI proposed in Mukherjee et al. (2020) and CCIT proposed in Sen et
al. (2017). In this part, we first calculate the true $I(e;y|\hat{y})$ in a
simple Colored MNIST experiment to show that, absent estimation problems, CMI
could be a viable metric for judging the OOD property of a learnt model, at
least in a simple, discrete task. We then run the code provided by Sen et al.
(2017) (https://github.com/rajatsen91/CCIT) to show that, even in this simple
task, the estimation of CMI can severely hamper its performance.
Specifically, the experimental setting is similar to that in Section 5.1.2,
with two OOD algorithms and the number of training domains in $\\{2,5\\}$. For
each algorithm, we consider the penalty weight $\lambda\in\\{0,10,100,1000\\}$,
run the algorithm 50 times, and record the OOD accuracy as well as the true
CMI value or CCIT value. The results are shown in Figure 6. We can see that
when the true CMI can be easily calculated, especially when the number of
domains is small and the task is discrete (not continuous), CMI is highly
correlated with OOD accuracy. However, in a regression task, or in a task
where directly calculating CMI becomes impractical, the estimation process may
severely destroy the correlation and may even result in an inverse
correlation. Therefore, we conclude that the difficulty of estimating CMI
limits its utility. We leave a fine-grained analysis of the relationship
between CMI, estimated CMI and OOD properties to future work.
### A.6 Results on VLCS
#### A.6.1 Continued Scenario
Table 4: Domain $C$ out: Test accuracy ($\%$)
Domain | L | S | V | Mean
---|---|---|---|---
ERM | 73.43 | 73.87 | 79.15 | 75.48
Mixup | 73.74 | 74.54 | 78.65 | 75.64
gDRO | 71.40 | 71.95 | 77.19 | 73.51
IRM | 49.61 | 38.64 | 45.35 | 44.53
This is a continuation of Section 5.2. In this task, $\mathcal{E}_{all}$
remains the four domains but $\mathcal{E}_{tr}=\\{L,S,V\\}$ (empirically we
find it more diverse). Similarly, we start with the test accuracy shown in
Table 4. In this step, the situation is the same, i.e. IRM should be
eliminated until proper hyper-parameters are found. In step 2, we show the
comparison between $\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ for the three algorithms in
Figure 7. As we can see, this time the two values are similar for all three
algorithms, including gDRO. This differs from the case where $S$ is unseen. We
therefore predict that all three algorithms should achieve high OOD accuracy.
In fact, if we act as the oracle and calculate their OOD performance, we find
that our judgement is close to reality: ERM, Mixup and gDRO achieve OOD
accuracies from $70.55\%$ to $72.87\%$. According to the confidence intervals,
their differences are not statistically significant. As for IRM, the OOD
accuracy is $38.64\%$. One who uses ERM, Mixup or gDRO should be satisfied
with this performance, since demanding more is somewhat impractical!
Figure 7: The standard and shuffle version of the metric, i.e.
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$ for ERM, Mixup and gDRO.
This time, all three algorithms show similar
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\beta}}}$ and
$\tilde{\mathcal{V}}_{{\bm{\gamma}}|{\bm{\beta}}}$.
#### A.6.2 Full results and comparison with IRM penalty
As mentioned in Section 5.2, we consider ERM, gDRO, Mixup and IRM on the VLCS
image dataset. We report the full results here and compare the performance of
our metric $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ with the IRM penalty in
formula 4. Throughout these experiments, $\mathcal{E}_{all}=\\{V,L,C,S\\}$. We
construct four experimental settings; in each setting, one domain is removed
and the rest constitute $\mathcal{E}_{tr}$. For each domain in
$\mathcal{E}_{tr}$, we split off a validation set, and test accuracy is the
average accuracy among the validation sets. The results are shown in Table 5.
First, our results coincide with Gulrajani & Lopez-Paz (2020) in that ERM
outperforms nearly all other algorithms: the OOD accuracy of ERM is either the
highest or only slightly lower than Mixup, and it has a relatively small
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$. Second, higher OOD accuracy
corresponds to lower $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$. In addition,
we notice that IRM has relatively low test accuracy and OOD accuracy. We
attribute this phenomenon to improper hyper-parameters in IRM, although we did
not change the default hyper-parameters in the code of Gulrajani & Lopez-Paz
(2020) (https://github.com/facebookresearch/DomainBed). Nevertheless, this
phenomenon provides a good example in which we can compare our metric with the
IRM penalty and discuss their advantages and disadvantages.
Table 5: Experiments on VLCS with 4 algorithms. OOD accuracy means the minimum
accuracy over $\mathcal{E}_{all}$. We use the training-domain validation
method mentioned in Gulrajani & Lopez-Paz (2020), so test accuracy is the
average accuracy over the three split validation sets. “Domain” indicates
which domain is excluded, i.e. which domain is in
$\mathcal{E}_{all}\backslash\mathcal{E}_{tr}$. In each setting, we run each
algorithm 12 times and report the mean and (std). Note that in a real
implementation, the IRM penalty can be negative.
OOD accuracy (%) | Test accuracy (%)
---|---
Domain | C | L | S | V | Domain | C | L | S | V
ERM | 72.54 | 61.48 | 62.76 | 65.59 | ERM | 75.48 | 84.55 | 83.33 | 81.49
| (2.62) | (2.31) | (1.16) | (2.27) | | (3.37) | (10.61) | (11.64) | (12.83)
Mixup | 72.87 | 62.10 | 63.91 | 63.81 | Mixup | 75.65 | 84.92 | 84.17 | 81.02
| (2.04) | (3.10) | (1.57) | (3.64) | | (2.80) | (10.54) | (11.07) | (13.52)
gDRO | 70.55 | 61.64 | 60.17 | 62.35 | gDRO | 73.51 | 82.82 | 80.66 | 80.03
| (1.91) | (3.92) | (2.56) | (2.11) | | (3.27) | (9.41) | (11.17) | (11.50)
IRM | 38.64 | 38.84 | 31.33 | 39.50 | IRM | 44.53 | 48.83 | 45.13 | 51.94
| (0.54) | (0.31) | (13.44) | (2.35) | | (4.94) | (10.53) | (16.53) | (11.66)
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ (our metric) | IRM penalty (e-4)
Domain | C | L | S | V | Domain | C | L | S | V
ERM | 2.0468 | 1.9084 | 1.8476 | 1.9811 | ERM | 1.78 | 1.48 | 1.43 | 1.63
| (0.3474) | (0.3231) | (0.2887) | (0.3955) | | (1.88) | (1.03) | (0.75) | (2.04)
Mixup | 2.6996 | 2.4417 | 2.5810 | 2.8780 | Mixup | 75.4 | 57.7 | 65.5 | 48.3
| (0.1926) | (0.2003) | (0.1492) | (0.2304) | | (32.6) | (26.4) | (37.2) | (30.6)
gDRO | 3.3371 | 4.8520 | 5.0915 | 5.1675 | gDRO | 9.42 | 2.13 | 2.6 | 1.94
| (0.1385) | (0.2515) | (0.278) | (0.3507) | | (10.1) | (3.41) | (2.46) | (4.37)
IRM | 8.1820 | 6.8329 | 7.6234 | 8.1288 | IRM | 2.59 | 0.96 | 0 | 2.71
| (0.9523) | (0.6646) | (0.6792) | (0.974) | | (3.31) | (3.31) | (4.77) | (9.2)
Although IRM can be a good OOD algorithm, using the IRM penalty as a metric to
judge the OOD property of a learnt model has many weaknesses, some of them
severe. First, in different tasks, the value of $\lambda$ needed to obtain an
OOD model may differ, as may other hyper-parameters such as “anneal_steps” in
the IRM code. Without an exhaustive search for proper hyper-parameter values,
IRM easily overfits the penalty term (which is the situation on VLCS). When
IRM overfits, the IRM penalty becomes quite small (higher $\lambda$ often
leads to a smaller penalty), but overfitting the penalty term certainly does
not yield good OOD accuracy. Therefore, the balance between loss and penalty
is important, and finding this balance is a model selection problem; Gulrajani
& Lopez-Paz (2020) argue that an OOD algorithm without model selection is not
complete. Whatever is used as the metric, it cannot be the IRM penalty, since
a quantity optimized during training cannot serve as the metric for selecting
training hyper-parameters.
Second, the IRM penalty is biased across different algorithms. In Table 5, the
IRM penalty of IRM is smaller than that of most algorithms. Besides, although
the OOD accuracy of Mixup is similar to that of ERM, its IRM penalty is
significantly higher. This is unsurprising, but it limits the usage of the IRM
penalty. As for our metric, we noted that a small
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ is better; however, the notion of
“smallness” is based on the relative values of the shuffle version and the
standard version of $\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$. As mentioned
in Section 5.2, when $\mathcal{E}_{all}\backslash\mathcal{E}_{tr}=\\{S\\}$,
the shuffle version is obviously smaller than the standard version for gDRO,
but for ERM and Mixup these values are relatively close or indistinguishable.
In this case, we know that gDRO captures fewer invariant features and is less
OOD-robust than the other two algorithms. Throughout the whole process, we can
thus circumvent the direct comparison of
$\mathcal{V}_{{\bm{\gamma}}|{\bm{\theta}}}$ across different algorithms, which
is quite important. In summary, the IRM penalty makes IRM a good algorithm,
but using it as a general metric of OOD performance is another matter
entirely.
Modelling and discretization of flow in porous media with thin, full-tensor
permeability inclusions
M. Starnoni1,2, I. Berre1, E. Keilegavlen1, & J.M. Nordbotten1
1Department of Mathematics, University of Bergen, Bergen, Norway
2Department of Environment, Land and Infrastructure Engineering, Politecnico
di Torino, Torino, Italy
## Abstract
When modelling fluid flow in fractured reservoirs, it is common to represent
the fractures as lower-dimensional inclusions embedded in the host medium.
Existing discretizations of flow in porous media with thin inclusions assume
that the principal directions of the inclusion permeability tensor are aligned
with the inclusion orientation. While this modelling assumption works well
with tensile fractures, it may fail in the context of faults, where the damage
zone surrounding the main slip surface may introduce anisotropy that is not
aligned with the main fault orientation. In this paper, we introduce a
generalized dimensional reduced model which preserves full-tensor permeability
effects also in the out-of-plane direction of the inclusion. The governing
equations of flow for the lower-dimensional objects are obtained through
vertical averaging. We present a framework for discretization of the resulting
mixed-dimensional problem, aimed at easy adaptation of existing simulation
tools. We give numerical examples that show the failure of existing
formulations when applied to anisotropic faulted porous media, and go on to
show the convergence of our method in both 2D and 3D.
Key points
* •
Existing local discretizations of flow in fractured porous media fail in
modelling out-of-plane anisotropic properties of thin inclusions
* •
We present a new framework for modelling and discretizing flow in porous media
with thin, full-tensor permeability inclusions
* •
We show convergence of our method in both 2D and 3D faulted porous media
Keywords discretization, faults, permeability, mixed-dimensional, flow, porous
media
## 1 Introduction
Modeling and simulation of flow in porous media with faults, fractures, and
other thin inclusions representing discontinuities is central to a wide range
of subsurface engineering applications, including geothermal energy
exploitation (Bödvarsson and Tsang,, 1982), shale gas extraction (Cao et al.,,
2016), carbon sequestration (Johnson et al.,, 2009), and energy storage
(Nagelhout and Roest,, 1997).
The inclusions are characterized by a high aspect ratio, and permeability
significantly different from that of the host medium; hence, they severely
affect flow patterns. This poses a challenge for traditional simulation
models, which are based on upscaling of fine-scale details into an equivalent
permeability (Oda,, 1985; Farmer,, 2002; Liu et al.,, 2016; Sævik et al.,,
2013). We instead focus on an alternative approach, which explicitly
represents the inclusions in the mathematical and simulation models and
thereby to a large degree avoids challenges relating to parameter upscaling.
To avoid elongated cells at the inclusion in the computational grid, it is
common to represent the inclusions as co-dimension one objects embedded in the
host medium (Boon et al.,, 2018; Nordbotten et al.,, 2019). The intersection
of inclusions further gives rise to line and point intersections of co-
dimension two and three. Each of these objects (matrix, inclusions, and
intersection points and lines) are represented as independent subdomains
separated by interfaces. We refer to this representation of the geometry as
mixed-dimensional.
Governing equations for fluid flow in lower-dimensional representation of the
inclusion can be derived by integration in the direction orthogonal to the
inclusion. This leads to a decomposition of the governing equations into an
in-plane component that represents flow within the inclusion, and an out-of-
plane component that couples flow between the inclusion and the host medium.
While the in-plane flow has been modeled with both linear and non-linear, as
well as both isotropic and non-isotropic flow models (Martin et al.,, 2005;
Reichenberger et al.,, 2006; Brenner et al.,, 2017, 2018), existing models for
the coupling term are limited by an assumption on orthogonal flow between
inclusion and host. Reduced order models for flow were also developed for
aquifers, leading to the same set of equations, see for instance Bear, (1979),
Yortsos, (1995), and Nordbotten and Celia, (2011). These existing models will
be denoted as “local” in the following, meaning that each partial differential
equation (PDE) contains only quantities associated with the subdomain where
the PDE is defined.
Local models generally work well when the inclusion is a joint (tensile
fracture). However, inclusions with a more complex geological history may have
significantly more complex flow properties in the out-of-plane direction. For
instance, the damage zone in the vicinity of faults may exhibit shear
fractures, slip surfaces, and/or deformation bands, as summarized in Fossen et
al., (2007). These features introduce secondary permeability anisotropy in the
damage zone as they tend to have preferred orientations, as shown by both
field studies (Fossen et al.,, 2005; Johansen and Fossen,, 2008) and core
analysis (Hesthammer et al.,, 2000). This leads to preferential flow
directions that are neither parallel nor orthogonal to the main plane. This
type of flow cannot be represented by existing models that employ dimension
reduction. To the Authors’ best knowledge, the only attempt to modeling faults
and their surrounding damage zones in a mixed-dimensional framework can be
found in Fumagalli and Scotti, (2019). However, they still apply local
formulations to model the damage zones as lower-dimensional objects which are
connected on one side to the fault and on the other side to the rock matrix,
hence conceptually seeing the whole fault zone as a multilayer object. An
alternative approach would be to implement the fault core as a
transmissibility multiplier and the damage zone by modifying the grid
permeability in the cells adjacent to the model faults, as illustrated in
Wilson et al., (2020). In the following, we will consistently refer to the
thin inclusions as faults, notwithstanding that all methods presented herein
can be applied to models of fractures and other thin inclusions, however, we
expect that the methods proposed are of more importance for faults.
The contribution of this paper is two-fold. First, we present a generalized
dimensionally reduced model that preserves full-tensor permeability effects
also in the out-of-plane direction of the fault. The resulting reduced
equations have a form similar to that of traditional models; however, the more
general coupling structure leads to additional terms in both the in-plane and
out-of-plane equations. These terms, as well as our novel formulation as a
whole, will be denoted as "semi-local" in the following, emphasizing the fact
that the new PDEs contain quantities that, while physically in the same
location, from a modeling perspective reside outside the subdomain where the
PDE is defined, specifically on the internal boundary between the subdomain
and its higher-dimensional neighbor.
Multiple discretization schemes have been proposed for the local
dimensionally-reduced models, including methods based on finite volumes
(Helmig et al., 1997; Karimi-Fard et al., 2003; Sandve et al., 2012), mixed
finite elements (Martin et al., 2005; Boon et al., 2018), virtual elements
(Fumagalli and Keilegavlen, 2019), and mimetic methods (Formaggia et al.,
2018). Comparison studies for these discretizations of flow in fractured
media can be found in Flemisch et al. (2018) and Berre et al. (2020) for 2D
and 3D flow, respectively. However, the additional terms arising in our
formulation bring the semi-local model outside the scope of previously
proposed discretization methods. The second contribution of the paper is
therefore the derivation of discretization schemes for semi-local models. We
achieve this in two stages: First, based on the unified framework for
discretization of mixed-dimensional problems with local interface laws
presented in Nordbotten et al. (2019), we establish conditions under which any
standard discretization scheme for fixed-dimensional problems can be extended
to mixed-dimensional problems with semi-local interface laws. Second, we
present a concrete discretization approach based on finite volume methods.
The paper is organized as follows. In Sec. 2, the mathematical model is
presented, first for a domain with a single fault, and then for a general
fault configuration. Thereafter, in Sec. 3, the unified discretization is
formulated. After presenting simulation results in Sec. 4, concluding remarks
are given in Sec. 5.
## 2 Flow modelling in faulted porous media
In this section, the mathematical model for flow in faulted porous media is
presented, first for a porous domain containing a single fault (Sections 2.1
and 2.2), and then for a general network of faults (Section 2.3). For the
general case, we also provide the weak formulation of the interface problem
(Sections 2.4-2.5), which will be useful from the perspective of
implementation.
Figure 1: Representation of the fault as a thin three-dimensional domain
$\Psi_{3}$ (left) and as a two-dimensional manifold $\Omega_{3}$ (right). The
boundary of $\Psi_{j}$ adjacent to $\Psi_{3}$ is denoted by
$\partial_{\Psi_{3}}\Psi_{j}$, for $j=1,2$, while ${\bm{n}}_{i}$ is the normal
vector always pointing outwards from $\Psi_{i}$, for $i=1,2,3$.
### 2.1 Domain with a single fault
We start by considering two three-dimensional porous media $\Psi_{1}$ and
$\Psi_{2}$, each of them with its Neumann and Dirichlet boundaries
$\partial_{N}$ and $\partial_{D}$, respectively. The two three-dimensional
domains are separated by a fault $\Psi_{3}$, which is a thin, almost two-
dimensional object of thickness $a$ (in the following $a$ will be denoted as
the aperture), but which is currently represented as three-dimensional. We
note that $\Psi_{3}$ need not be planar, nor need $a$ be constant. We
denote by $\partial_{\Psi_{3}}\Psi_{j}$, for $j=1,2$, the boundary of
$\Psi_{j}$ adjacent to $\Psi_{3}$. Furthermore, let ${\bm{n}}_{i}$ be the
normal vector which is always pointing outwards from $\Psi_{i}$. It thus
follows that ${\bm{n}}_{3}=-{\bm{n}}_{j}$ on $\partial_{\Psi_{3}}\Psi_{j}$. A
representation of the fault as a thin three-dimensional domain $\Psi_{3}$ is
illustrated in the left of Fig. 1. Darcy flow in the three-dimensional porous
medium is then governed by the following set of equations ($i=1,2,3$):
$\nabla\cdot{\bm{q}}_{i}+f_{i}=0$ on $\Psi_{i}$, (1)
${\bm{q}}_{i}=-{\bm{K}}_{i}\nabla p_{i}$ on $\Psi_{i}$, (2)
$\lambda_{3,j}={\bm{q}}_{3}\cdot{\bm{n}}_{3}=-{\bm{q}}_{j}\cdot{\bm{n}}_{j}=-\lambda_{j,3}$ $(j=1,2)$ on $\partial_{\Psi_{3}}\Psi_{j}$, (3)
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=g_{i}$ on $\partial_{N}\Psi_{i}$, (4)
$\text{tr }p_{i}=0$ on $\partial_{D}\Psi_{i}$. (5)
Here, $p$ is pressure, ${\bm{q}}$ is the Darcy flux, $f$ is a source, and
$\bm{K}$ is a second-order tensor representing the absolute permeability
divided by the fluid viscosity. Equation (1) represents mass conservation,
while equation (2) is Darcy's law. Equation (3) represents flux continuity
conditions on $\partial_{\Psi_{3}}\Psi_{j}$, where $\lambda_{3,j}$ represents
flow from $\Psi_{3}$ to $\Psi_{j}$; by flux continuity it follows that
$\lambda_{3,j}=-\lambda_{j,3}$. Finally, equations (4)-(5) are boundary
conditions on $\partial_{N}\Psi_{i}$ and $\partial_{D}\Psi_{i}$, respectively.
Figure 2: Illustration of possible structures of the permeability of a fault
embedded in a porous domain, indicated by the principal axes of the
permeability tensor: (a) orthogonal permeability, (b) homogeneous full-tensor
permeability structure, (c) different structure on each half of the fault.
Before deriving the governing equations for the lower-dimensional manifold, we
discuss the decomposition of the permeability tensor within the fault.
Existing local laws for faults as embedded thin inclusions assume that the
principal directions of the local permeability tensor are aligned with the
fault orientation, as illustrated in Fig. 2.a. Hence, more general
orientations of the principal permeability directions, shown in Fig. 2.b-2.c,
cannot be represented by existing models. To be concrete, we let the
permeability on $\Psi_{3}$ have the following decomposition in terms of a
coordinate system aligned with the fault orientation:
${\bm{K}}_{3}=\begin{bmatrix}{\bm{K}}_{3,\parallel}&{\bm{k}}_{3,t}\\ {\bm{k}}^{T}_{3,t}&k_{3,\bot}\end{bmatrix}$ (6)
Here, ${\bm{K}}_{3,\parallel}$ is a $2\times 2$ second-order tensor
representing the within-fault permeability and $k_{3,\bot}$ is a scalar
representing the normal permeability. The off-diagonal term ${\bm{k}}_{3,t}$
is a two-vector representing the symmetric off-diagonal components of
${\bm{K}}_{3}$; for local interface laws, these off-diagonal components are
assumed to be negligible, i.e. ${\bm{k}}_{3,t}=0$ (Nordbotten and Celia, 2011;
Berre et al., 2020). The inclusion of this anisotropic term leads to
significant complications in the modeling and discretization, and is the main
topic of this work. With this structure of the fault permeability, the Darcy
flux for the fault can be decomposed as
${\bm{q}}_{3}=[{\bm{q}}_{3,\parallel},q_{3,\bot}]$, where the 2-vector
tangential component ${\bm{q}}_{3,\parallel}$ and the scalar normal component
$q_{3,\bot}$ have the following form:
${\bm{q}}_{3,\parallel}=-{\bm{K}}_{3,\parallel}\nabla_{\parallel}p_{3}-{\bm{k}}_{3,t}\nabla_{\bot}p_{3},$ (7)
$q_{3,\bot}=-{\bm{k}}_{3,t}\cdot\nabla_{\parallel}p_{3}-k_{3,\bot}\nabla_{\bot}p_{3}.$ (8)
Here, $\nabla_{\parallel}$ and $\nabla_{\bot}=\dfrac{\partial}{\partial n}$
represent the in-plane and out-of-plane components of the gradient for the
fault, respectively.
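To make the block structure concrete, the following minimal NumPy sketch (our illustration, not part of any reference implementation; all names are ours) extracts the blocks of equation (6) from a $3\times 3$ fault permeability, with the fault normal taken as the last coordinate axis, and evaluates the flux components (7)-(8):

```python
import numpy as np

def decompose_permeability(K3):
    """Split a 3x3 fault permeability (normal = last axis) into the
    blocks of eq. (6): in-plane tensor, off-diagonal vector, normal scalar."""
    K_par = K3[:2, :2]   # K_{3,||}: 2x2 in-plane block
    k_t = K3[:2, 2]      # k_{3,t}: off-diagonal two-vector
    k_perp = K3[2, 2]    # k_{3,perp}: normal scalar
    return K_par, k_t, k_perp

def fault_flux(K3, grad_par, grad_perp):
    """Evaluate eqs. (7)-(8): in-plane and normal Darcy flux in the fault."""
    K_par, k_t, k_perp = decompose_permeability(K3)
    q_par = -K_par @ grad_par - k_t * grad_perp    # eq. (7)
    q_perp = -k_t @ grad_par - k_perp * grad_perp  # eq. (8)
    return q_par, q_perp

# Example: a full tensor as in Fig. 2b, with an in-plane gradient only.
K3 = np.array([[100.0, 0.0, 80.0],
               [0.0, 100.0, 0.0],
               [80.0, 0.0, 100.0]])
print(fault_flux(K3, grad_par=np.array([1.0, 0.0]), grad_perp=0.0))
# A nonzero k_t couples the in-plane gradient to a normal flux.
```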
Figure 3: Illustration of the mixed-dimensional geometry. $\Omega_{3}$ is
connected to the higher-dimensional neighbors $\Omega_{j}$ through the
interfaces $\Gamma_{j,3}$, for $j=1,2$. Note that $\Omega_{3}$,
$\Gamma_{j,3}$ and $\partial_{\Omega_{3}}\Omega_{j}$ all coincide in physical
space.
### 2.2 Model reduction
To proceed, we apply integration over the perpendicular direction to achieve a
dimension reduction of the fault. This replaces $\Psi_{3}$ with a lower-
dimensional domain $\Omega_{3}$ (see right of Fig. 1). Note that we use $\Psi$
to represent the equi-dimensional geometry, that is all $\Psi_{j}$ are 3D, and
$\Omega$ to denote the mixed-dimensional geometry. We also introduce two
interfaces $\Gamma_{j,3}$ on each side $j=1,2$ of $\Omega_{3}$, as illustrated
in Fig. 3. The interfaces physically represent the half zones between the
fault and the surrounding matrix on either side. In a mixed-dimensional
setting, they have no perpendicular extent, and serve as connectors between
two objects of different dimensions. Note that, due to the dimension reduction
of the model, $\Omega_{3}$, $\Gamma_{1,3}$, $\Gamma_{2,3}$,
$\partial_{\Omega_{3}}\Omega_{1}$ and $\partial_{\Omega_{3}}\Omega_{2}$ are
all coinciding in physical space. Furthermore, we define the integrated Darcy
flux ${\bm{q}}_{3}^{(2)}$ and the average pressure $p^{(2)}_{3}$, respectively
as
${\bm{q}}_{3}^{(2)}=\int_{-a/2}^{a/2}{\bm{q}}^{(3)}_{3,\parallel}dn,\quad\quad
p^{(2)}_{3}=\dfrac{1}{a}\int_{-a/2}^{a/2}p_{3}^{(3)}dn.$ (9)
Here, we use subscripts to index the domains, and superscripts (when necessary
for clarity) to emphasize the effective topological dimension of the domain,
e.g. $p_{3}^{(3)}$ and $p_{3}^{(2)}$ are the pressures within the fault in the
3D (on $\Psi_{3}$) and 2D (on $\Omega_{3}$) representations, respectively.
When passing to a mixed-dimensional representation of the geometry, i.e. when
integrating eqs. (1) and (7) along the perpendicular direction, the out-of-
plane component of the gradient is converted into a jump operator as follows:
$\int_{-a/2}^{a/2}\nabla_{\bot}p_{3}^{(3)}dn=(\text{tr }p_{1}-\text{tr
}p_{2}).$ (10)
The governing equations for the fault are then obtained from equations (1),
(7), (4) and (5) by integrating in the perpendicular direction. Moreover,
since the fault is assumed to be thin, we assume that the permeability is
constant across the perpendicular direction. Together with the definitions
above, this results in
$\nabla_{3}\cdot{\bm{q}}_{3}^{(2)}-(\lambda_{1,3}+\lambda_{2,3})+f_{3}^{(2)}=0$ on $\Omega_{3}$, (11)
${\bm{q}}_{3}^{(2)}=-a{\bm{K}}_{3,\parallel}\nabla_{3}p_{3}^{(2)}+{\bm{\mu}}_{1,3}+{\bm{\mu}}_{2,3}$ on $\Omega_{3}$, (12)
${\bm{q}}_{3}^{(2)}\cdot{\bm{n}}_{3}^{(2)}=g_{3}^{(2)}$ on $\partial_{N}\Omega_{3}$, (13)
$\text{tr }p_{3}^{(2)}=0$ on $\partial_{D}\Omega_{3}$, (14)
where we have also introduced the integrated source term and boundary flux
$f_{3}^{(2)}=\int_{-a/2}^{a/2}f_{3}^{(3)}dn,\quad\quad
g_{3}^{(2)}=\int_{-a/2}^{a/2}g_{3}^{(3)}dn.$ (15)
We emphasize that the differential operator $\nabla_{3}$ in eqs. (11)-(12)
operates on the manifold $\Omega_{3}$. Compared to traditional upscaled
models, see for instance Nordbotten et al. (2019), additional terms
${\bm{\mu}}_{j,3}$ appear in equation (12), analogous to the flux terms
$\lambda_{j,3}$ in equation (11). This two-vector term, which is not present
in previous work, represents the within-fault flux induced by pressure
differences between the fault and the surrounding matrix, and is defined for
either side of the fault as
${\bm{\mu}}_{j,3}=\epsilon_{j,3}{\bm{k}}_{3,t}(p_{3}^{(2)}-\text{tr }p_{j}),$
(16)
where the permutation variable $\epsilon_{j,3}$ is positive if the coordinate
systems of $\Omega_{3}$ and $\partial_{\Omega_{3}}\Omega_{j}$ coincide, and
negative otherwise.
To complete the model, we derive a constitutive law for $\lambda_{j,3}$. This
is obtained by integrating equation (8) in the perpendicular direction, that
is
$\int_{-a/2}^{a/2}q_{3,\bot}^{(3)}dn=-\int_{-a/2}^{a/2}{\bm{k}}_{3,t}\cdot\nabla_{\parallel}p_{3}^{(3)}dn-\int_{-a/2}^{a/2}k_{3,\bot}\nabla_{\bot}p_{3}^{(3)}dn.$
(17)
The left-hand side of equation (17) is approximated using the trapezoidal
rule, that is
$\int_{-a/2}^{a/2}q_{3,\bot}^{(3)}dn\approx\dfrac{a}{2}(\epsilon_{1,3}\lambda_{1,3}+\epsilon_{2,3}\lambda_{2,3}),$
(18)
where continuity of the flux across the boundary between the fault and the
surrounding matrix has been applied. The first term on the right-hand side of
equation (17) is approximated as
$\int_{-a/2}^{a/2}{\bm{k}}_{3,t}\cdot\nabla_{\parallel}p_{3}^{(3)}dn={\bm{k}}_{3,t}\cdot\int_{-a/2}^{a/2}\nabla_{\parallel}p_{3}^{(3)}dn\approx a{\bm{k}}_{3,t}\cdot\nabla_{3}p_{3}^{(2)}.$
(19)
Finally, the second term on the right-hand side of (17) is resolved using the
jump operator defined in equation (10) as follows:
$\int_{-a/2}^{a/2}k_{3,\bot}\nabla_{\bot}p_{3}^{(3)}dn=\epsilon_{1,3}k_{3,\bot}(p_{3}^{(2)}-\text{tr
}p_{1})+\epsilon_{2,3}k_{3,\bot}(p_{3}^{(2)}-\text{tr }p_{2}).$ (20)
By incorporating eqs. (18), (19) and (20) into equation (17), we identify the
flux $\lambda_{j,3}$ as having the following form:
$\lambda_{j,3}=-k_{3,\bot}\dfrac{2(p_{3}^{(2)}-\text{tr
}p_{j})}{a}-\epsilon_{j,3}{\bm{k}}_{3,t}\cdot\nabla_{3}p_{3}^{(2)}.$ (21)
Here, the first term on the right-hand side represents the local component of
the constitutive law, while the second part is the semi-local contribution
that induces a flux across $\Gamma_{j,3}$ due to the pressure gradient within
the lower-dimensional manifold $\Omega_{3}$.
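For illustration, the interface law (21) and the in-plane coupling term (16) translate directly into code; the following sketch is ours, with the sign convention carried by the permutation variable $\epsilon_{j,3}=\pm 1$:

```python
import numpy as np

def interface_flux(p3, tr_pj, grad3_p3, k_perp, k_t, a, eps_j):
    """Semi-local flux lambda_{j,3} across Gamma_{j,3}, eq. (21).
    First term: local (normal) contribution; second term: semi-local,
    driven by the in-plane pressure gradient within the fault."""
    return -k_perp * 2.0 * (p3 - tr_pj) / a - eps_j * np.dot(k_t, grad3_p3)

def inplane_coupling(p3, tr_pj, k_t, eps_j):
    """Extra in-plane flux mu_{j,3} appearing in eq. (12), defined in eq. (16)."""
    return eps_j * k_t * (p3 - tr_pj)
```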
Inspecting equations (16) and (21), we see that, in the reduced model, both
the normal permeability $k_{3,\bot}$ and the off-diagonal permeability
${\bm{k}}_{3,t}$ are naturally interpreted as properties of the interface
$\Gamma_{j,3}$. In what follows, we will thus generalize the model derived
above and index these quantities with the interface, i.e.
${\bm{k}}_{3,t}\rightarrow{\bm{k}}_{3,j,t}$ and $k_{3,\bot}\rightarrow
k_{3,j,\bot}$ are assigned independently to either side of the fault.
In summary, omitting superscripts for the sake of clarity, we can write the
mixed-dimensional equations (1)-(5), (11)-(14), (16), and (21) in a unified
way, that is, for $i\in\{1,2,3\}$:
$\nabla_{i}\cdot{\bm{q}}_{i}-\sum_{j\in{\hat{S}}_{i}}\lambda_{j,i}+f_{i}=0$ on $\Omega_{i}$, (22)
${\bm{q}}_{i}=-{\bm{\kappa}}_{i,\parallel}\nabla_{i}p_{i}+\sum_{j\in{\hat{S}}_{i}}\epsilon_{j,i}{\bm{\kappa}}_{i,j,t}(p_{i}-\text{tr }p_{j})$ on $\Omega_{i}$, (23)
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=\lambda_{i,3}$ $(i\neq 3)$ on $\partial_{\Omega_{3}}\Omega_{i}$, (24)
$\lambda_{j,3}=-\kappa_{3,j,\bot}(p_{3}-\text{tr }p_{j})-{\bm{\kappa}}_{3,j,t}\cdot\nabla_{3}p_{3}$ $(j=1,2)$ on $\Gamma_{j,3}$, (25)
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=g_{i}$ on $\partial_{N}\Omega_{i}$, (26)
$\text{tr }p_{i}=0$ on $\partial_{D}\Omega_{i}$, (27)
where ${\hat{S}}_{i}$ is the set of neighbors of $\Omega_{i}$ of higher
dimension, e.g. ${\hat{S}}_{3}=\{\Omega_{1},\Omega_{2}\}$. Equations (22)-(27)
are complemented with the natural convention that there is no four-dimensional
domain in the model, thus ${\hat{S}}_{i}=\emptyset$ for $i=1,2$; for these
three-dimensional domains one clearly also has $a=1$,
${\bm{K}}_{\parallel}=\bm{K}$ and $\nabla_{i}=\nabla$.
We remark that due to the model reduction, the within-fault permeability
${\bm{K}}_{3,\parallel}$ and the normal permeability $k_{3,j,\bot}$ scale with
the aperture $a$ and its inverse, respectively, while the off-diagonal
permeability ${\bm{k}}_{3,j,t}$ remains as in the equi-dimensional model. In
order to present equations (22)-(27) without reference to this small
parameter, these scalings have been incorporated directly into the material
constants. Thus, the mixed-dimensional permeability ${\bm{\kappa}}_{3}$ is
related to the equi-dimensional ${\bm{K}}_{3}$ as follows:
${\bm{\kappa}}_{3}=\begin{bmatrix}{\bm{\kappa}}_{3,\parallel}&{\bm{\kappa}}_{3,j,t}\\ {\bm{\kappa}}^{T}_{3,j,t}&\kappa_{3,j,\bot}\end{bmatrix}=\begin{bmatrix}a{\bm{K}}_{3,\parallel}&{\bm{k}}_{3,j,t}\\ {\bm{k}}^{T}_{3,j,t}&2a^{-1}k_{3,j,\bot}\end{bmatrix}.$ (28)
We point out that, when one reduces multiple dimensions at once, these
scalings get exponents corresponding to the number of dimensions below the
ambient dimension. We also emphasize that the normal and off-diagonal
permeabilities are in principle not a property of the fault itself, but
instead a property which belongs to the internal interface $\Gamma_{j,3}$
between the fault and either side of the higher-dimensional neighbors. This
represents an important extension of the existing local laws for fractured
porous media, making the model also applicable to faulted porous media, since
it allows for capturing the anisotropic character of the fault damage zone.
Moreover, since different values of ${\bm{k}}_{3,j,t}$ and $k_{3,j,\bot}$ can
be assigned to each side of the fault, our model can represent different
permeability structures on each side of the fault.
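A minimal sketch of the rescaling (28), assuming the equi-dimensional blocks of equation (6) are available as NumPy arrays (names ours):

```python
import numpy as np

def mixed_dimensional_permeability(K_par, k_t, k_perp, a):
    """Scale the equi-dimensional blocks into the mixed-dimensional
    coefficients of eq. (28): the in-plane block scales with the aperture a,
    the normal coefficient with 2/a, and the off-diagonal vector is unchanged."""
    kappa_par = a * K_par
    kappa_t = np.array(k_t, dtype=float)  # remains as in the equi-dim model
    kappa_perp = 2.0 * k_perp / a
    return kappa_par, kappa_t, kappa_perp
```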
A schematic illustration of the different quantities and their domain of
definition for the local and semi-local formulations is shown in Fig. 4.
Figure 4: Illustration of the quantities associated with the local and
semi-local formulations. In the semi-local formulation,
$\lambda_{j,3}\sim(\nabla_{3}p_{3},p_{j},p_{3})$ and
${\bm{q}}_{3}\sim(\nabla_{3}p_{3},p_{3},p_{1},p_{2})$, whereas in the local
formulation, $\lambda_{j,3}\sim(p_{j},p_{3})$ and
${\bm{q}}_{3}\sim\nabla_{3}p_{3}$.
### 2.3 Domain with a general network of faults
Following the theory by Boon et al. (2018), equations (22)-(27) can also be
generalized to fault intersections, both the one-dimensional (1D) line
intersections between two faults and the zero-dimensional (0D) point
intersections of three faults (see Fig. 5 for an illustration of the
mixed-dimensional geometry). To this end, we use subscripts to index each
domain (matrix, fault, or intersection) by number as in the previous section,
and let $I$ denote the index set of all domains. Superscripts for the
topological dimension associated with each individual domain will be
consistently omitted, keeping in mind that the dimension is always a property
of the domain, i.e. $d=d_{i}$. Hence, we can write for all $i\in I$ the
equations
$\nabla_{i}\cdot{\bm{q}}_{i}-\sum_{j\in{\hat{S}}_{i}}\lambda_{j,i}+f_{i}=0$ on $\Omega_{i}$, (29)
${\bm{q}}_{i}=-{\bm{\kappa}}_{i,\parallel}\nabla_{i}p_{i}+\sum_{j\in{\hat{S}}_{i}}\epsilon_{j,i}{\bm{\kappa}}_{i,j,t}(p_{i}-\text{tr }p_{j})$ on $\Omega_{i}$, (30)
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=\lambda_{i,j}$ $(j\in{\check{S}}_{i})$ on $\partial_{\Omega_{j}}\Omega_{i}$, (31)
$\lambda_{j,i}=-\kappa_{i,j,\bot}(p_{i}-\text{tr }p_{j})-\epsilon_{j,i}{\bm{\kappa}}_{i,j,t}\cdot\nabla_{i}p_{i}$ $(j\in{\hat{S}}_{i})$ on $\Gamma_{j,i}$, (32)
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=g_{i}$ on $\partial_{N}\Omega_{i}$, (33)
$\text{tr }p_{i}=0$ on $\partial_{D}\Omega_{i}$, (34)
where ${\check{S}}_{i}$ is the set of neighbors of $\Omega_{i}$ of lower
dimension, e.g. ${\check{S}}_{1}=\{\Omega_{2},\Omega_{3},\Omega_{4}\}$. It is
easy to show that as long as the mixed-dimensional permeabilities are
diagonally dominant in the sense of
$\kappa_{i,j,\bot}\det{\bm{\kappa}}_{i,\parallel}>{\bm{\kappa}}_{i,j,t}\cdot{\bm{\kappa}}_{i,j,t},$ (35)
then the coefficients are globally positive definite, and equations (29)-(34)
are well-posed as long as $\partial_{D}\Omega_{i}$ has non-zero measure for at
least one domain (Boon et al., 2020).
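Condition (35) is straightforward to verify interface by interface; a small sketch (ours):

```python
import numpy as np

def is_diagonally_dominant(kappa_par, kappa_t, kappa_perp):
    """Check the well-posedness condition (35):
    kappa_perp * det(kappa_par) > kappa_t . kappa_t."""
    return kappa_perp * np.linalg.det(kappa_par) > np.dot(kappa_t, kappa_t)
```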
Figure 5: Illustration of a 3D domain $\Omega_{1}$ containing three faults
$\Omega_{j}$ ($j=2,3,4$) with their three 1D line intersections $\Omega_{k}$
($k=5,6,7$) and one 0D point intersection $\Omega_{8}$.
### 2.4 Mixed-dimensional formulation of the fault-matrix flows
While equations (29)-(34) constitute a full semi-local model, they are stated
in a form which is not immediately amenable for discretization. This and the
following subsection explore the model in more detail, with the goal of
rewriting the equations in a form that can be handled by standard
discretization schemes with only minimal adaptations. A discretization
approach based on this reformulation is then given in Section 3.
In order to simplify the exposition, we will introduce a mixed-dimensional
notation following Nordbotten et al. (2019). In particular, we will denote
the collection of pressure functions as
$\mathfrak{p}=\left(p_{1},...,p_{|I|}\right)$, and similarly the collection of
all fluxes (both in domains and across boundaries) as
$\mathfrak{q}=\left({\bm{q}}_{1},...,{\bm{q}}_{|I|},\lambda_{1,1},...,\lambda_{|I|,|J|}\right)$.
It is sometimes convenient to refer explicitly to only the domain or boundary
fluxes, and we will therefore sometimes abuse notation and simply write
$\mathfrak{q}=(q,\lambda)$. We refer to these as mixed-dimensional functions,
and consistently denote them with calligraphic font. We adopt the natural
convention that when evaluating a mixed-dimensional function at a point, say
$x\in\Omega_{i}$, then we simply evaluate the function on that domain, so that
$\mathfrak{p}(x)=p_{i}(x)$. In a similar sense, we denote the disjoint union
of domains as
$\mathfrak{F}=\left(\amalg_{i}\Omega_{i}\right)\sqcup\left(\amalg_{j,i}\Gamma_{j,i}\right)$.
With this notion of mixed-dimensional functions, the extension of the
divergence and gradient operators to the mixed-dimensional setting is natural.
First, we extend the concept of continuous functions by requiring that, for
$\mathfrak{q}$ to be continuous, it must hold that
${\bm{q}}_{i}\cdot{\bm{n}}_{i}=\lambda_{i,j}$ on all $\Gamma_{j,i}$. Then, for
any point $x\in\Omega_{i}$ we define
$\left(\mathfrak{D}\cdot\mathfrak{q}\right)(x)=\left[\nabla_{i}\cdot{\bm{q}}_{i}-\sum_{j\in{\hat{S}}_{i}}\lambda_{j,i}\right]_{x}\quad\quad\text{and}\quad\quad\left(\mathbb{D}\mathfrak{p}\right)(x)=\left[\nabla_{i}p_{i}\right]_{x},$
(36)
while for any point on an interface $x\in\Gamma_{j,i}$ we define
$\left(\mathbb{D}\mathfrak{p}\right)(x)=\left[p_{i}-\text{tr
}p_{j}\right]_{x}.$ (37)
Now we can write equations (29)-(34) simply as:
$\mathfrak{D}\cdot\mathfrak{q}+\mathfrak{f}=0$ on $\mathfrak{F}$, (38)
$\mathfrak{q}=-\mathfrak{K}\mathbb{D}\mathfrak{p}$ on $\mathfrak{F}$, (39)
$\mathfrak{q}\cdot\mathfrak{n}=\mathfrak{g}$ on $\partial_{N}\mathfrak{F}$, (40)
$\text{tr }\mathfrak{p}=0$ on $\partial_{D}\mathfrak{F}$, (41)
where we have also introduced the collection of sources
$\mathfrak{f}=\left(f_{1},...,f_{|I|}\right)$ and the collection of boundary
fluxes $\mathfrak{g}=\left(g_{1},...,g_{|I|}\right)$. Here, the material
coefficients are now all part of the mixed-dimensional permeability
$\mathfrak{K}$, which is defined such that, for any mixed-dimensional
gradient $\mathfrak{u}=\mathbb{D}\mathfrak{p}=(u,\mu)$, it holds for any point
$x\in\Omega_{i}$ that:
$\left(\mathfrak{K}\mathfrak{u}\right)(x)={\bm{\kappa}}_{i,\parallel}u_{i}-\sum_{j\in{\hat{S}}_{i}}\epsilon_{j,i}{\bm{\kappa}}_{i,j,t}\mu_{j,i},$
(42)
while for any point on an interface $x\in\Gamma_{j,i}$, it holds that
$\left(\mathfrak{K}\mathfrak{u}\right)(x)=\kappa_{i,j,\bot}\mu_{j,i}+\epsilon_{j,i}{\bm{\kappa}}_{i,j,t}\cdot
u_{i}.$ (43)
It is then also sometimes convenient to write equation (39) in matrix form,
that is for $\mathfrak{q}=(q,\lambda)$ and
$\mathfrak{u}=\mathbb{D}\mathfrak{p}=(u,\mu)$, one has:
$\begin{Bmatrix}q\\ \lambda\end{Bmatrix}=-\begin{bmatrix}\mathfrak{K}_{\Omega\Omega}&\mathfrak{K}_{\Omega\Gamma}\\ \mathfrak{K}_{\Gamma\Omega}&\mathfrak{K}_{\Gamma\Gamma}\end{bmatrix}\begin{Bmatrix}u\\ \mu\end{Bmatrix}.$ (44)
Equation (44) highlights the contribution from the semi-local terms in the
mixed-dimensional version of Darcy’s law.
### 2.5 Weak formulation as an interface system
The semi-local terms in equations (29)-(34) lead to coupling terms between
domains that are local in physical space, but non-local in the mixed-
dimensional representation of the geometry. A critical example is the fault
and its sides, which, from the perspective of implementation, we would prefer
to interact only via the interfaces $\Gamma_{j,i}$, and not directly, as is
the case for the last term in equation (30).
Thus we are motivated to consider a reformulation of the governing equations
before considering numerical discretizations. We proceed by first performing
an LU decomposition of equation (44) as follows:
$\mathfrak{K}_{U}\begin{Bmatrix}q\\ \lambda\end{Bmatrix}=-\mathfrak{K}_{L}\begin{Bmatrix}u\\ \mu\end{Bmatrix},$ (45)
where $\mathfrak{K}_{U}$ and $\mathfrak{K}_{L}$ are defined, respectively, as:
$\mathfrak{K}_{U}=\begin{bmatrix}I&\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}\\ 0&I\end{bmatrix}\quad\text{and}\quad\mathfrak{K}_{L}=\begin{bmatrix}A_{\Omega}&0\\ \mathfrak{K}_{\Gamma\Omega}&\mathfrak{K}_{\Gamma\Gamma}\end{bmatrix},$ (46)
and $A_{\Omega}$ is the Schur complement defined as
$A_{\Omega}=\mathfrak{K}_{\Omega\Omega}-\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}\mathfrak{K}_{\Gamma\Omega}.$
(47)
Note that, since $\mathfrak{K}_{\Gamma\Gamma}$ consists only of scalar values
$(\kappa_{i,j,\bot})$, this reformulation only depends on the trivial
inversion of scalars.
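In a discrete setting the same observation makes the Schur complement (47) cheap to form: since $\mathfrak{K}_{\Gamma\Gamma}$ is represented by the scalars $\kappa_{i,j,\bot}$, its inverse is an elementwise division. A sketch under this assumption (names ours):

```python
import numpy as np

def schur_complement(K_oo, K_og, K_gg_diag, K_go):
    """Discrete analogue of eq. (47): A = K_oo - K_og diag(K_gg)^{-1} K_go.

    Shapes: K_oo (n, n), K_og (n, m), K_go (m, n), and K_gg_diag (m,) holding
    the scalars kappa_{i,j,perp}. Inverting K_gg amounts to dividing each row
    of K_go by the corresponding diagonal entry."""
    return K_oo - K_og @ (K_go / K_gg_diag[:, None])
```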
In the following it will be helpful to discuss the components of the
mixed-dimensional gradient and divergence, and we therefore additionally
define the "full jump" $\mathbb{d}\mathfrak{q}$ such that for any point
$x\in\Omega_{i}$ it holds that
$\left(\mathbb{d}\mathfrak{q}\right)(x)=\left[-\sum_{j\in{\hat{S}}_{i}}\lambda_{j,i}\right]_{x},$ (48)
while the "half jump" $\mathbb{d}^{\star}\mathfrak{p}$ is simply the
restriction of $\mathbb{D}\mathfrak{p}$ to $\Gamma_{j,i}$. We then write (with
the natural extension of $\nabla$ and $\nabla\cdot$):
$\mathfrak{D}\cdot\mathfrak{q}=\nabla\cdot q+\mathbb{d}\lambda\quad\text{and}\quad\mathbb{D}\mathfrak{p}=\left(\nabla p,\mathbb{d}^{\star}\mathfrak{p}\right).$ (49)
We now proceed by (formally) eliminating internal domain variables, in order
to obtain a problem posed only on interfaces. We note that equations (38) and
(39) can now be written as the first-order system:
$\mathfrak{D}\cdot\mathfrak{q}=\mathfrak{f},$ (50)
$\mathfrak{K}_{U}\mathfrak{q}=-\mathfrak{K}_{L}\mathbb{D}\mathfrak{p},$ (51)
where use of equation (45) has been made. By writing out equation (50) in
local notation for each $\Omega_{i}$ and by stating equation (51) explicitly
as two equations, we obtain the following set of equations:
$\nabla\cdot q=f-\mathbb{d}\lambda,$ (52)
$q+A_{\Omega}\nabla p=-\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}\lambda,$ (53)
$\lambda=-\left(\mathfrak{K}_{\Gamma\Omega}\nabla p+\mathfrak{K}_{\Gamma\Gamma}\mathbb{d}^{\star}\mathfrak{p}\right).$ (54)
This reveals that equations (52) and (53) form a locally well-posed system (of
standard Darcy type) on each $\Omega_{i}$, and we can therefore consider
$p=p(\lambda)$ for any given $\lambda$.
We formalize this concept by introducing the (continuous) solution operators
$\mathcal{S}_{\Omega_{i}}^{K}$ for the standard elliptic boundary value
problem on $\Omega_{i}$, defined as:
$\left(\upsilon,\nabla\upsilon,\text{tr }\upsilon,F\right)=\mathcal{S}_{\Omega_{i}}^{K}\left(f,\chi,b,\upsilon_{0}\right),$ (55)
where $\upsilon$ is the solution to
$\nabla\cdot\varphi=f-F$ on $\Omega_{i}$, (56)
$\varphi=-K\left(\nabla\upsilon+\chi\right)$ on $\Omega_{i}$, (57)
$\varphi\cdot n=b$ on $\partial\Omega_{i}\setminus\partial\Omega$, (58)
$\upsilon=0$ on $\partial\Omega_{i}\cap\partial\Omega$, (59)
$\dfrac{1}{|\Omega_{i}|}\int_{\Omega_{i}}\upsilon=\upsilon_{0}$ if $\partial\Omega_{i}\cap\partial\Omega=\emptyset$, (60)
where $\partial\Omega$ is the global boundary and
$F=\dfrac{1}{|\Omega_{i}|}\left(\int_{\Omega_{i}}f-\int_{\partial\Omega_{i}}b\right)$
if $\partial\Omega_{i}\cap\partial\Omega=\emptyset$, and zero otherwise. Note
that the mean-value constraint (60) and the compatibility source $F$ are only
needed for subdomains that do not touch the global boundary, for which the
problem would otherwise be a pure Neumann problem.
Using this solution operator, we see that the solution to equations (52) and
(53) can be stated as functions of $\lambda$ (and a set of numbers $p_{0}$
corresponding to the domains where
$\partial\Omega_{i}\cap\partial\Omega=\emptyset$) as:
$\left(p,\nabla p,\text{tr }p,F\right)_{\Omega_{i}}\left(\lambda,p_{0}\right)=\mathcal{S}_{\Omega_{i}}^{A_{i}}\left(f_{i}-\left(\mathbb{d}\lambda\right)_{i},A_{i}^{-1}\left(\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}\lambda\right)_{i},\lambda_{\check{I}_{i}},p_{0}\right).$ (61)
(61)
Inserting $p=p(\lambda,p_{0})$ etc. into equation (54), we have now
reformulated the fault-matrix problem into a pure interface problem. From the
perspective of implementation, we prefer to consider the interface problem in
the weak sense, and we therefore multiply by test functions $w$ and integrate
to obtain the problem: Find $\lambda\in L^{2}(\Gamma)$ such that, for all
$w\in L^{2}(\Gamma_{j,i})$,
$\left(\mathfrak{K}_{\Gamma\Gamma}^{-1}\lambda,w\right)_{\Gamma_{j,i}}+\left(\mathfrak{K}_{\Gamma\Gamma}^{-1}\mathfrak{K}_{\Gamma\Omega}\nabla
p(\lambda,p_{0}),w\right)_{\Gamma_{j,i}}+\left(\mathbb{d}^{\star}\mathfrak{p}(\lambda,p_{0}),w\right)_{\Gamma_{j,i}}=0$
(62)
and $F_{i}(\lambda,p_{0})=0$ if
$\partial\Omega_{i}\cap\partial\Omega=\emptyset$. We point out that the inner
products in equation (62) are formally bounded, since for $\lambda\in
L^{2}(\Gamma)$ we have $p_{i}\in H^{1}(\Omega_{i})$, and both
$\mathfrak{K}_{\Gamma\Gamma}^{-1}\mathfrak{K}_{\Gamma\Omega}\nabla p$ and
$\text{tr }p$ will lie in (at least) $L^{2}(\Gamma_{j,i})$.
Finally, we emphasize that equations (61)-(62) are attractive from the
perspective of implementation, since the inner products appearing are easy to
evaluate, and the solution operators $\mathcal{S}_{\Omega_{i}}^{A_{i}}$ can be
approximated by any standard method, as we will detail in the next section.
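As a preview, one possible (entirely schematic) realization treats the weak interface problem (62) as a linear operator acting on the mortar fluxes and hands it to a Krylov method; each operator application entails one subdomain solve per domain through $\mathcal{S}_{\Omega_{i}}^{A_{i}}$. The helper `apply_weak_form` is hypothetical, and the handling of the constants $p_{0}$ is omitted for brevity:

```python
from scipy.sparse.linalg import LinearOperator, gmres

def solve_interface_problem(apply_weak_form, rhs, n_mortar):
    """Iteratively solve the weak interface problem (62).

    apply_weak_form(lmbda) is assumed to evaluate the *linear* part of the
    left-hand side of (62): it calls the subdomain solution operators with
    homogeneous data to obtain p(lmbda), then assembles the three inner
    products; rhs collects the affine contributions from sources and
    boundary data."""
    op = LinearOperator((n_mortar, n_mortar), matvec=apply_weak_form)
    lmbda, info = gmres(op, rhs)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return lmbda
```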
## 3 Discretizations of flow for faulted porous media
The equations derived in Section 2.5, and in particular the interface problem
of equations (61)-(62), form the starting point for the discretization approach laid
out in this section. We present the general discretization framework in
Section 3.1, and discuss implementational aspects in Section 3.2.
### 3.1 Unified discretization
Equation (61) provides a solution operator for an arbitrary standard method
used to solve the elliptic boundary value problem (52)-(53) on $\Omega_{i}$.
To be concrete, we consider each domain $\Omega_{i}$ and its Neumann boundary
$\partial\Omega_{i}=\partial_{N}\Omega_{i}\cup\bigcup_{j\in\check{S}_{i}}\partial_{\Omega_{j}}\Omega_{i}$
as endowed with a numerical discretization. Then, the solution operator
$\mathcal{S}_{i}$ can be stated as
$\mathcal{S}_{i}:\left[N(\Omega_{i}),N^{d_{i}}(\Omega_{i}),N(\partial\Omega_{i})\right]\rightarrow\left[N(\Omega_{i}),N^{d_{i}}(\Omega_{i}),N(\partial\Omega_{i})\right],$ (63)
where $N(\Omega_{i})$, $N^{d_{i}}(\Omega_{i})$, and $N(\partial\Omega_{i})$
are the discrete representations of $L^{2}(\Omega_{i})$,
$\left(L^{2}(\Omega_{i})\right)^{d_{i}}$, and $L^{2}(\partial\Omega_{i})$,
respectively, and ${d_{i}}$ is the topological dimension of $\Omega_{i}$. In
particular, $\mathcal{S}_{i}$ takes as input sinks, vector sources, and
Neumann data and returns as output pressures, pressure gradients, and pressure
traces. Most discretization schemes for elliptic equations can provide such a
solution operator; we discuss the concrete implementation in the next
subsection.
To discretize the flux coupling term $\lambda_{j,i}$, we introduce a
mortar-like grid $\mathcal{T}_{j,i}$ on the interface $\Gamma_{j,i}$, on which
the boundary flux $\lambda_{j,i}$ will be defined. The flux variables are
represented as piecewise constants on the mortar grid $\mathcal{T}_{j,i}$,
thus $\lambda_{j,i}\in P_{0}(\mathcal{T}_{j,i})\subset L^{2}(\Gamma_{j,i})$.
In order to allow communication between subdomains, and thus explicitly
relate the degrees of freedom of the numerical methods $\mathcal{S}_{i}$ and
the mortar grids $\mathcal{T}_{j,i}$, we introduce projection operators,
namely $\Pi_{N(\Omega_{i})}$ and $\Pi_{L^{2}(\Omega_{i})}$. The former is the
compound operator projecting from the coupling variables on the mortar grids
to the subdomain degrees of freedom, that is
$\Pi_{N(\Omega_{i})}:\left[L^{2}(\Omega_{i}),\left(L^{2}(\Omega_{i})\right)^{d_{i}},L^{2}\left(\Omega_{\check{S}_{i}}\right),L^{2}(\partial\Omega_{i})\right]\rightarrow\left[N(\Omega_{i}),N^{d_{i}}(\Omega_{i}),N(\partial\Omega_{i})\right],$ (64)
while the latter conversely moves from the numerical variables to the coupling
variables, that is
$\Pi_{L^{2}(\Omega_{i})}:\left[N(\Omega_{i}),N^{d_{i}}(\Omega_{i}),N(\partial\Omega_{i})\right]\rightarrow\left[L^{2}(\Omega_{i}),\left(L^{2}(\Omega_{i})\right)^{d_{i}},L^{2}\left(\Omega_{\check{S}_{i}}\right),L^{2}(\partial\Omega_{i})\right].$ (65)
Now, following the variational formulation derived in Sec. 2.5, we exploit
equation (62) in order to provide a discretization-independent framework for
faulted porous media. This takes the form: for given numerical
discretizations $\mathcal{S}_{i}$, find $\lambda_{j,i}\in
P_{0}(\mathcal{T}_{j,i})$, for all $i\in I$ and $j\in\hat{S}_{i}$, such that
$\left(\mathbb{d}^{\star}\mathfrak{p},w_{j}\right)_{\Gamma_{j,i}}+\left(\mathfrak{K}^{-1}_{\Gamma\Gamma}\left(\lambda_{j,i}+\mathfrak{K}_{\Gamma\Omega}\cdot\nabla p\right),w_{j}\right)_{\Gamma_{j,i}}=0\quad\text{for all }w_{j}\in P_{0}(\mathcal{T}_{j,i}),$ (66)
subject to discrete constraints (for all $i\in I$):
$[p_{i},u_{i},t_{j}]=\Pi_{L^{2}\left(\Omega_{i}\right)}\mathcal{S}_{i}(\psi_{i}+a_{i},b_{i},c_{i}),$ (67)
$[a_{i},b_{i},c_{i}]=\Pi_{N(\Omega_{i})}\left[-\sum_{j\in\hat{S}_{i}}\lambda_{j,i},-\sum_{j\in\hat{S}_{i}}A_{i}^{-1}\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}\lambda_{j,i},\sum_{j\in\check{S}_{i}}\lambda_{i,j}\right],$ (68)
where the dummy variables $a_{i}$, $b_{i}$ and $c_{i}$ have the
interpretations of sources, forces, and fluxes due to interactions with other
domains, respectively. In contrast, the variables $p_{i}$, $u_{i}$, and
$t_{j}$ are the pressures, pressure gradients, and pressure traces after
projection onto the grids $\mathcal{T}_{j,i}$.
The interpretation of this scheme is as follows. Eq. (67) resolves the
internal differential equations in each subdomain, eq. (68) is the projection
of variables from the mortar grids to the numerical boundary (and source)
data, while equation (66) simply states that the flux $\lambda_{j,i}$ between
the fault and its surroundings should satisfy Darcy's law. In the following
section, we present the strategy for implementation of this approach and give
details for a specific numerical scheme.
### 3.2 MPFA discretization
It is of interest to consider the requirements put on the subdomain solution
operators $\mathcal{S}_{i}$ in some more detail. From the variational
formulations stated above, we see that for a discretization on a generic
subdomain $\Omega_{i}$ to interact with the interface $\Gamma_{j}$, we need to
provide operators which:
1. Handle Neumann boundary data of the form $\Pi_{N(\Omega_{i})}\lambda_{j}$, for all interfaces $\Gamma_{j}$ where $\Omega_{i}$ is the higher-dimensional neighbor.
2. Handle source terms $\Pi_{N(\Omega_{i})}\lambda_{j}$ from interfaces $\Gamma_{j}$ where $\Omega_{i}$ is the lower-dimensional neighbor.
3. Provide a discrete operator $\text{tr }p_{i}$ so that $\Pi_{L^{2}(\Omega_{i})}$ can project the pressure trace from $\partial_{j}\Omega_{i}$ to $\Gamma_{j}$ where $\Omega_{i}$ is the higher-dimensional neighbor.
4. Provide a pressure $p_{i}$ so that $\Pi_{L^{2}(\Omega_{i})}$ can project the pressure to all $\Gamma_{j}$ where $\Omega_{i}$ is the lower-dimensional neighbor.
5. Handle the divergence of vector source terms $\Pi_{N(\Omega_{i})}(\nabla\cdot{\bm{\mu}}_{j,i})$ from interfaces $\Gamma_{j}$ where $\Omega_{i}$ is the lower-dimensional neighbor.
6. Provide a pressure gradient $u_{i}$ so that $\Pi_{L^{2}(\Omega_{i})}$ can project the pressure gradient to all $\Gamma_{j}$ where $\Omega_{i}$ is the lower-dimensional neighbor.
The first four requirements are readily met by any discretization scheme for
elliptic equations. Specifically, we have based our solution operators on a
cell-centered finite volume method termed the multi-point flux approximation
(MPFA) (Aavatsmark, 2002; Nordbotten and Keilegavlen, 2020). Treatment of
vector source terms (item 5) is not as natural in primal discretization
schemes such as finite elements, but is easy to include in most flux-based
discretization methods such as, e.g., mixed finite elements. We have employed
the approach introduced in Starnoni et al. (2019), which treats the vector
source term as part of the discrete divergence operator, and thereby provides
an expression of the fluxes in terms of jumps in cell-center vector sources.
Finally, the pressure gradients are discretized as piecewise constants on
each cell from an interpolation of the fluxes on the cell faces (item 6).
We implemented our model in PorePy, an open-source software for simulation of
multiphysics processes in fractured porous media (Keilegavlen et al., 2021).
Figure 6: Illustration of a coupling between subdomains. $\Omega_{h}$ and
$\Omega_{l}$ are the higher and lower subdomains, respectively, $\Gamma_{j}$
is the interface between the two subdomains, $\partial_{j}\Omega_{h}$ is the
portion of the boundary of $\Omega_{h}$ as seen from $\Gamma_{j}$,
$\Pi_{N(\Omega_{k})}$ is the projection operator from coupling variables on
the mortar grid to each of the subdomain degrees of freedom ($k=h,l$), and
$\Pi_{L^{2}(\Omega_{k})}$ is the projection operator from numerical variables
to coupling variables.
To better understand the structure of the discrete coupling, it is instructive
to write out the coupled system for two subdomains $\Omega_{h}$ and
$\Omega_{l}$ separated by an interface $\Gamma_{j}$ (see Fig. 6). Let
$\overline{p}_{h}$ and $\overline{p}_{l}$ be the vectors of cell-center
pressures in $\Omega_{h}$ and $\Omega_{l}$, respectively, and let
$\overline{\lambda}_{j}$ be the vector of discrete mortar fluxes on
$\Gamma_{j}$. The discrete coupled system in the absence of external sources
can then be represented in the generic form
$\begin{bmatrix}A_{h}&0&G_{h}\Pi_{N(\Omega_{h})}\\ 0&A_{l}&B_{l}\Pi_{N(\Omega_{l})}+J_{l}\Pi_{N(\Omega_{l})}T_{j}\\ -\Pi_{L^{2}(\Omega_{h})}P_{h}&\Pi_{L^{2}(\Omega_{l})}P_{l}+T_{j}\Pi_{L^{2}(\Omega_{l})}R_{l}&D_{j}\end{bmatrix}\begin{bmatrix}\overline{p}_{h}\\ \overline{p}_{l}\\ \overline{\lambda}_{j}\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\end{bmatrix}.$ (69)
The first two rows of the system (69) represent the discretized differential
equations in each subdomain, while the third row is the discretized Darcy's
law in the direction perpendicular to the interface. Here, $A_{h}$ and
$A_{l}$ are the fixed-dimensional discretizations on the subdomains, $G_{h}$
is the discretization of Neumann boundary conditions on $\Omega_{h}$, $B_{l}$
is the discretization of source terms in $\Omega_{l}$, $J_{l}$ is the
discretization of the vector source term on $\Omega_{l}$, $T_{j}$ is the
discretized $\mathfrak{K}_{\Omega\Gamma}\mathfrak{K}_{\Gamma\Gamma}^{-1}$
product on $\Gamma_{j}$, and $\Pi_{N(\Omega_{h})}$ and $\Pi_{N(\Omega_{l})}$
are the projection operators from coupling variables on the mortar grid to
the respective subdomain degrees of freedom. Furthermore, $P_{h}$ provides a
discrete representation of the pressure trace operator on $\Omega_{h}$,
$P_{l}$ gives the pressure unknowns on $\Omega_{l}$, $R_{l}$ gives the
reconstruction of the pressure gradient on $\Omega_{l}$, and
$\Pi_{L^{2}(\Omega_{k})}$ is the projection operator from numerical variables
to coupling variables ($k=h,l$). Finally, $D_{j}$ is the discretized inverse
normal permeability on $\Gamma_{j}$.
We conclude with two remarks: first, there is no direct coupling between
$\Omega_{h}$ and $\Omega_{l}$; second, global boundary conditions are left
out of the system.
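Schematically, the block system (69) can be assembled with standard sparse tools once the individual blocks are available; the following sketch (ours, with all block matrices assumed given as scipy sparse matrices of compatible shapes) mirrors the three block rows:

```python
import scipy.sparse as sps

def assemble_coupled_system(A_h, A_l, G_h, B_l, J_l, T_j, D_j,
                            P_h, P_l, R_l, Pi_N_h, Pi_N_l,
                            Pi_L2_h, Pi_L2_l):
    """Assemble the 3x3 block matrix of eq. (69) for one interface.
    Row 1: discretized PDE in Omega_h, coupled through Neumann data.
    Row 2: discretized PDE in Omega_l, with source and vector-source terms.
    Row 3: discretized Darcy law across the interface Gamma_j."""
    return sps.bmat([
        [A_h, None, G_h @ Pi_N_h],
        [None, A_l, B_l @ Pi_N_l + J_l @ Pi_N_l @ T_j],
        [-Pi_L2_h @ P_h, Pi_L2_l @ P_l + T_j @ Pi_L2_l @ R_l, D_j],
    ], format="csr")

# The resulting matrix can be solved with, e.g., scipy.sparse.linalg.spsolve,
# given a right-hand side carrying global boundary conditions and sources.
```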
## 4 Numerical examples
We validate the semi-local model and our implementation by a suite of
numerical examples. First, we consider a case with a single fault, and show
how the semi-local model can capture the effects of anisotropic off-diagonal
permeabilities, while the local model fails to do so. Second, we probe the
robustness of our discretization on more complex geometries in 2D and 3D.
### 4.1 Comparison to the equi-dimensional model
In this first example, we compare our reduced model to an equi-dimensional
model. The aim is to highlight the enhanced modelling capabilities of our
formulation with respect to the standard local formulation. With reference to
this latter point, we present results of two test cases: the first one where
the fault has the same off-diagonal permeability on both sides (see Fig. 2.b),
and a second one where different permeability structures are assigned to each
side of the fault (see Fig. 2.c).
Figure 7: Setup of Case 1: the two 2D matrix subdomains $\Omega_{1}$ and
$\Omega_{2}$ are separated by the 1D fault $\Omega_{3}$; the head $h_{in}$ is
applied on the central portion of the bottom boundary and $h_{out}$ on the
outer portions of the top boundary.
#### 4.1.1 Case 1: homogeneous permeability
We consider a 2D square domain of side $L=1~m$ cut by a horizontal fault of
aperture $a=1~cm$ located in the middle of the domain. In the
mixed-dimensional setting we therefore have two 2D domains $\Omega_{1}$ and
$\Omega_{2}$ and one 1D fault $\Omega_{3}$, as illustrated in Fig. 7. The
hydraulic conductivity is homogeneous and isotropic for the 2D matrix, that is
${\bm{K}}_{j}=K_{j}\mathbf{I}$, with $j=1,2$, while for the fault we consider
the following equi-dimensional full tensor:
${\bm{K}}_{3}=\begin{bmatrix}K_{f,\parallel}&k_{f,t}\\ k_{f,t}&k_{f,\bot}\end{bmatrix}.$ (70)
For simplicity, we take $K_{1}=K_{2}=K_{m}$. Boundary conditions consist of an
applied difference in hydraulic head along the vertical direction and no-flow
conditions elsewhere. In particular, the inlet head $h_{in}$ is specified on
the portion of the bottom boundary where $0.25<x<0.75~m$, while the outlet
head $h_{out}$ is specified on the portions of the top boundary where
$x<0.25~m$ or $x>0.75~m$ (see Fig. 7). Data for the simulations are reported
in Table 1. As reference solution we consider the solution obtained with an
equi-dimensional model of $N=40k$ structured square cells (mesh size
$dx=5~mm$), where the fault is discretized with two rows of 200 elements
each. Then, for the reduced models, we consider triangular grids with
approximately $N=[40,160,700,3k,11k]$ cells (respectively
$N_{f}=[4,8,16,32,64]$ cells for the fault), and report the average $L^{2}$
error in pressure along the fault
$\varepsilon_{p}=\dfrac{\sqrt{\sum_{i}\Delta_{i}(p_{i}-p_{i,eq})^{2}}}{\sqrt{\sum_{i}\Delta_{i}p_{i,eq}^{2}}},$
(71)
where $\Delta_{i}$ is the size of the fault element in the reduced model, and
$p_{i,eq}$ is calculated from the equi-dimensional model as the mean value of
the two fault cells at each location $x_{i}$:
$p_{i,eq}(x_{i})=\dfrac{1}{2}\sum_{j=y_{1},y_{2}}p_{ij},$ (72)
where $y_{j}=L/2\pm dx/2$.
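For reference, the error measure (71)-(72) amounts to the following computation, sketched here with our own variable names and with the equi-dimensional pressures assumed to be sampled at the reduced-model cell locations:

```python
import numpy as np

def fault_pressure_error(delta, p_reduced, p_eq_upper, p_eq_lower):
    """Weighted relative L2 error of eq. (71), with the equi-dimensional
    reference taken as the cellwise mean of the two fault rows, eq. (72)."""
    p_eq = 0.5 * (p_eq_upper + p_eq_lower)
    num = np.sqrt(np.sum(delta * (p_reduced - p_eq) ** 2))
    den = np.sqrt(np.sum(delta * p_eq ** 2))
    return num / den
```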
Convergence results are shown in Fig. 8a. As the figure clearly shows, our
formulation exhibits approximately first-order convergence, while the local
formulation does not converge. This is due to the strong anisotropy of the
fault, which is not captured by the standard local formulation. As a result of
the anisotropy, the flow takes a preferential direction towards one of the two
outlets, thereby breaking the symmetry assumed by the local formulation. This
is better observed in Fig. 8b, showing the pressure distribution along the
fault for the three models. As Fig. 8b clearly shows, the semi-local and the
equi-dimensional models coincide, while the local formulation exhibits an
erroneous symmetric profile.
Table 1: Data for Case 1. Values of the fault hydraulic conductivity are given for the equi-dimensional model, i.e. before scaling.
Parameter | Description | Value
---|---|---
$K_{m}$ | Matrix hydraulic conductivity | $1~m/s$
$K_{f,\parallel}$ | Fault tangential hydraulic conductivity | $100~m/s$
$k_{f,\bot}$ | Fault normal hydraulic conductivity | $100~m/s$
$k_{f,t}$ | Fault off-diagonal hydraulic conductivity | $80~m/s$
$a$ | Fault aperture | $0.01~m$
$L$ | Side of the square domain | $1~m$
$h_{in}$ | Hydraulic head at the bottom boundary | $10~m$
$h_{out}$ | Hydraulic head at the top boundary | $1~m$
Figure 8: Case 1: (a) convergence of the average error in pressure within the
fault and (b) pressure distribution along the fault for the different methods.
#### 4.1.2 Case 2: dual permeability
As a further illustration of the enhanced modeling capabilities of the semi-
local model, we modify the setup used in the previous section to have
different permeability structures on the two sides of the fault. This is
relevant for modeling of geological faults, where the two sides of the fault
may undergo different damage processes. To that end, we divide the fault into
an upper and lower part (see Fig. 9) and assign different permeability
structures to the two sides, that is, for $j=1,2$:
${\bm{K}}_{3,j}=\begin{bmatrix}K_{f,\parallel}&k_{f,j,t}\\ k_{f,j,t}&k_{f,\bot}\end{bmatrix}.$ (73)
In particular, values of $K_{m}$, $K_{f,\parallel}$ and $k_{f,\bot}$ are the
same as those given in Table 1, while $k_{f,1,t}$ and $k_{f,2,t}$ take values
of $50$ and $80~{}m/s$, respectively. The aperture of the fault is set to
$a=2~{}cm$ and we use the same boundary conditions as in Case 1.
Convergence results for the local and semi-local models are shown in Figs.
10a-10b, with the reference solution again computed from an equi-dimensional
model with a grid of 40k cells. As in the previous case, the local model
fails to converge, while the semi-local model exhibits first-order convergence
up to the last refinement step. There, the mesh size is of the same order as
the fault aperture, so further error reduction cannot be expected due to the
modeling error of the dimension reduction.
Figure 9: Setup of Case 2: the fault is divided into an upper half with
permeability ${\bm{K}}_{3,1}$ and a lower half with permeability
${\bm{K}}_{3,2}$, each of thickness $1~cm$, embedded between the matrix
domains with permeabilities ${K}_{1}$ and ${K}_{2}$.
Figure 10: Case 2: (a) convergence of the average error in pressure within the
fault and (b) pressure distribution along the fault for the different methods.
### 4.2 Self-convergence
In this section, we test the robustness of the method on more challenging
fault configurations in 2D and 3D.
#### 4.2.1 2D case
We consider the same test case as Case 1 in Boon et al. (2018). The domain is
a unit square including a network of five faults (Fig. 11a). Of these five
faults, one cuts the square domain into two 2D subdomains, denoted
$\Omega_{1}$ and $\Omega_{2}$. The faults are numbered $j=3,\ldots,7$ and are
of two kinds: $\Omega_{3}$ and $\Omega_{4}$ are conductive, that is
$K_{3}=K_{4}=K_{f,1}$, while the other three are blocking, that is
$K_{5}=K_{6}=K_{7}=K_{f,2}$. The hydraulic conductivity is homogeneous and
isotropic for the 2D matrix, with $K_{1}=K_{2}=K_{m}$, while for the faults
we consider an equi-dimensional full tensor with $k_{j,t}=0.1K_{j,\parallel}$,
for $j=3,\ldots,7$. Boundary conditions consist of an applied difference in
hydraulic head along the vertical direction and no-flow conditions elsewhere.
Data for the simulations are reported in Table 2. As reference solution we
consider the solution obtained with approximately $N=133k$ cells for the 2D
domain and a total of $N_{f}=510$ cells for the faults. Then we consider
grids with approximately $N=[300,1k,4k,17k,67k]$ cells (respectively
$N_{f}=[26,48,93,183,363]$), and report the average $L^{2}$ error in pressure
along the faults.
The convergence results, shown in Fig. 11b, indicate a rate of at least first
order. The test thus confirms the performance of our method also in cases
involving faults that intersect and have low permeability. Both features are
highly relevant in a geologic setting, where faults may have complex geometry
and reduced permeability compared to the host rock.
Table 2: Data for the 2D self-convergence test. Values of the fault hydraulic conductivity are given for the equi-dimensional model, i.e. before scaling.
Parameter | Description | Value
---|---|---
$K_{m}$ | Matrix hydraulic conductivity | $1~m/s$
$K_{f,1,\parallel}$ | Tangential hydraulic conductivity, conductive faults | $100~m/s$
$k_{f,1,\bot}$ | Normal hydraulic conductivity, conductive faults | $100~m/s$
$k_{f,1,t}$ | Off-diagonal hydraulic conductivity, conductive faults | $10~m/s$
$K_{f,2,\parallel}$ | Tangential hydraulic conductivity, blocking faults | $0.01~m/s$
$k_{f,2,\bot}$ | Normal hydraulic conductivity, blocking faults | $0.01~m/s$
$k_{f,2,t}$ | Off-diagonal hydraulic conductivity, blocking faults | $0.001~m/s$
$a$ | Fault aperture | $0.01~m$
$h_{in}$ | Hydraulic head at the top boundary | $1~m$
$h_{out}$ | Hydraulic head at the bottom boundary | $0~m$
Figure 11: 2D self-convergence test: (a) mixed-dimensional geometry, with
matrix subdomains $\Omega_{1}$, $\Omega_{2}$ and faults
$\Omega_{3}$-$\Omega_{7}$, and (b) convergence of the average error in
pressure within the faults.
#### 4.2.2 3D case
As a final verification, we consider a 3D case with multiple intersecting
faults. The setup is based on Case 2 of the benchmark study described in
Berre et al. (2020). The domain is a unit cube including a network of 9
faults, whose intersections divide the cubic domain into several subdomains,
as illustrated in Fig. 12a. These 3D subdomains are grouped into two regions,
to which we assign different permeabilities $K_{m,1}$ and $K_{m,2}$, both
homogeneous and isotropic (see Berre et al. (2020) for a visualization of
these two regions). For the faults we consider full tensors with tangential
permeability ${\bm{K}}_{j,\parallel}=K_{f,\parallel}\mathbf{I}_{\parallel}$,
normal permeability $k_{j,\bot}=k_{f,\bot}$, and off-diagonal permeability
${\bm{k}}_{j,t}=k_{f,t}\mathbf{i}_{\parallel}$, with
$k_{f,t}=0.1K_{f,\parallel}$. Boundary conditions consist of an imposed
normal flux $q_{in}$ on the portion of the boundary where $x,y,z<0.25~m$ and
a constant hydraulic head $h_{out}$ on the portion of the boundary where
$x,y,z>0.875~m$. Data for the simulations are reported in Table 3. As
reference solution we consider the solution obtained with approximately
$N_{3}=85k$ cells for the 3D domain and a total of $N_{f}=8364$ cells for all
faults. Then we consider $N_{3}=[500,1k,2k,4k,10k,20k,40k]$ (respectively
$N_{f}=[148,282,384,814,1536,2298,3456]$) and report the average $L^{2}$
error in pressure along the faults.
Convergence results are shown in Fig. 12b, indicating first order convergence
on average. This confirms the consistency of our implementation also for 3D
problems with complex fault geometries.
Table 3: Data for the 3D self-convergence test. Values of the fault hydraulic conductivity are given for the equi-dimensional model, i.e. before scaling.
Parameter | Description | Value
---|---|---
$K_{m,1}$ | Matrix hydraulic conductivity, region 1 | $1~m/s$
$K_{m,2}$ | Matrix hydraulic conductivity, region 2 | $0.1~m/s$
$K_{f,\parallel}$ | Fault tangential hydraulic conductivity | $10^{4}~m/s$
$k_{f,\bot}$ | Fault normal hydraulic conductivity | $10^{4}~m/s$
$k_{f,t}$ | Fault off-diagonal hydraulic conductivity | $10^{3}~m/s$
$a$ | Fault aperture | $10^{-4}~m$
$q_{in}$ | Normal flux at the inflow boundary | $-1~m/s$
$h_{out}$ | Hydraulic head at the outflow boundary | $1~m$
Figure 12: 3D self-convergence test: (a) mixed-dimensional geometry and (b)
convergence of the average error in pressure within the faults.
## 5 Conclusions
We presented an improved framework for modelling and discretizing flow in
generally anisotropic porous media with thin inclusions, within the context of
mixed-dimensional partial differential equations. Our model considers a full
permeability tensor for the inclusions, resulting in additional terms in the
formulation compared to existing local models. We expect our model to be most
important for modeling of flow in faulted porous media; however, the methods
proposed herein can also be applied to models of fractures. Indeed, our
full-permeability model naturally reduces to the existing models of
fracture-matrix flow when the off-diagonal components of the inclusion
permeability tensor are set to zero.
We provided numerical examples showing convergence of the method for both 2D
and 3D faulted porous media. In particular, we provided numerical evidence
that, as opposed to existing local discretizations, our model is capable of
simulating the anisotropic behaviour of the fault damage zone.
We remark that, in the spirit of flux-mortar coupling schemes, our
formulation is independent of the discretization methods used to discretize
the flow equations in the porous matrix and the faults. However, we only
showed results obtained using a multi-point flux finite volume approach; the
formulation also applies to other discretization methods, e.g. mixed finite
elements.
## Acknowledgements
This work forms part of Norwegian Research Council project 250223. Data will
be made public on Zenodo at the time of publication.
## References
* Aavatsmark, (2002) Aavatsmark, I. (2002). An introduction to multipoint flux approximations for quadrilateral grids. Computational Geosciences, 6(3-4):405–432.
* Bear, (1979) Bear, J. (1979). Hydraulics of Groundwater. New York: McGraw-Hill Inc.
* Berre et al., (2020) Berre, I., Boon, W. M., Flemisch, B., Fumagalli, A., Gläser, D., Keilegavlen, E., Scotti, A., Stefansson, I., Tatomir, A., Brenner, K., et al. (2020). Verification benchmarks for single-phase flow in three-dimensional fractured porous media. Advances in Water Resources, 147:103759.
* Bödvarsson and Tsang, (1982) Bödvarsson, G. S. and Tsang, C. F. (1982). Injection and thermal breakthrough in fractured geothermal reservoirs. Journal of Geophysical Research: Solid Earth, 87(B2):1031–1048.
* Boon et al., (2020) Boon, W. M., Nordbotten, J. M., and Vatne, J. E. (2020). Functional analysis and exterior calculus on mixed-dimensional geometries. Annali di Matematica.
* Boon et al., (2018) Boon, W. M., Nordbotten, J. M., and Yotov, I. (2018). Robust discretization of flow in fractured porous media. SIAM Journal on Numerical Analysis, 56(4):2203–2233.
* Brenner et al., (2017) Brenner, K., Hennicker, J., Masson, R., and Samier, P. (2017). Gradient discretization of hybrid-dimensional darcy flow in fractured porous media with discontinuous pressures at matrix–fracture interfaces. IMA Journal of Numerical Analysis, 37(3):1551–1585.
* Brenner et al., (2018) Brenner, K., Hennicker, J., Masson, R., and Samier, P. (2018). Hybrid-dimensional modelling of two-phase flow through fractured porous media with enhanced matrix fracture transmission conditions. Journal of Computational Physics, 357:100–124.
* Cao et al., (2016) Cao, P., Liu, J., and Leong, Y.-K. (2016). A fully coupled multiscale shale deformation-gas transport model for the evaluation of shale gas extraction. Fuel, 178:103–117.
* Farmer, (2002) Farmer, C. (2002). Upscaling: a review. International journal for numerical methods in fluids, 40(1-2):63–78.
* Flemisch et al., (2018) Flemisch, B., Berre, I., Boon, W., Fumagalli, A., Schwenck, N., Scotti, A., Stefansson, I., and Tatomir, A. (2018). Benchmarks for single-phase flow in fractured porous media. Advances in Water Resources, 111:239–258.
* Formaggia et al., (2018) Formaggia, L., Scotti, A., and Sottocasa, F. (2018). Analysis of a mimetic finite difference approximation of flows in fractured porous media. ESAIM: Mathematical Modelling and Numerical Analysis, 52(2):595–630.
* Fossen et al., (2005) Fossen, H., Johansen, T. E. S., Hesthammer, J., and Rotevatn, A. (2005). Fault interaction in porous sandstone and implications for reservoir management; examples from southern Utah. AAPG Bulletin, 89(12):1593–1606.
* Fossen et al., (2007) Fossen, H., Schultz, R. A., Shipton, Z. K., and Mair, K. (2007). Deformation bands in sandstone: a review. Journal of the Geological Society, 164(4):755–769.
* Fumagalli and Keilegavlen, (2019) Fumagalli, A. and Keilegavlen, E. (2019). Dual virtual element methods for discrete fracture matrix models. Oil & Gas Science and Technology–Revue d’IFP Energies nouvelles, 74:41.
* Fumagalli and Scotti, (2019) Fumagalli, A. and Scotti, A. (2019). A multi-layer reduced model for flow in porous media with a fault and surrounding damage zones. arXiv preprint arXiv:1903.01117.
* Helmig et al., (1997) Helmig, R. et al. (1997). Multiphase flow and transport processes in the subsurface: a contribution to the modeling of hydrosystems. Springer-Verlag.
* Hesthammer et al., (2000) Hesthammer, J., Johansen, T., and Watts, L. (2000). Spatial relationships within fault damage zones in sandstone. Marine and Petroleum Geology, 17(8):873–893.
* Johansen and Fossen, (2008) Johansen, T. E. S. and Fossen, H. (2008). Internal geometry of fault damage zones in interbedded siliciclastic sediments. Geological Society, London, Special Publications, 299(1):35–56.
* Johnson et al., (2009) Johnson, S., Morris, J., et al. (2009). Hydraulic fracturing mechanisms in carbon sequestration applications. In 43rd US Rock Mechanics Symposium & 4th US-Canada Rock Mechanics Symposium. American Rock Mechanics Association.
* Karimi-Fard et al., (2003) Karimi-Fard, M., Durlofsky, L. J., Aziz, K., et al. (2003). An efficient discrete fracture model applicable for general purpose reservoir simulators. In SPE Reservoir Simulation Symposium. Society of Petroleum Engineers.
* Keilegavlen et al., (2021) Keilegavlen, E., Berge, R., Fumagalli, A., Starnoni, M., Stefansson, I., Varela, J., and Berre, I. (2021). Porepy: an open-source software for simulation of multiphysics processes in fractured porous media. Computational Geosciences, 25(1):243–265.
* Liu et al., (2016) Liu, R., Li, B., Jiang, Y., and Huang, N. (2016). Mathematical expressions for estimating equivalent permeability of rock fracture networks. Hydrogeology Journal, 24(7):1623–1649.
* Martin et al., (2005) Martin, V., Jaffré, J., and Roberts, J. E. (2005). Modeling fractures and barriers as interfaces for flow in porous media. SIAM Journal on Scientific Computing, 26(5):1667–1691.
* Nagelhout and Roest, (1997) Nagelhout, A. and Roest, J. (1997). Investigating fault slip in a model of an underground gas storage facility. International Journal of Rock Mechanics and Mining Sciences, 34(3-4):212–e1.
* Nordbotten et al., (2019) Nordbotten, J. M., Boon, W. M., Fumagalli, A., and Keilegavlen, E. (2019). Unified approach to discretization of flow in fractured porous media. Computational Geosciences, 23(2):225–237.
* Nordbotten and Celia, (2011) Nordbotten, J. M. and Celia, M. A. (2011). Geological storage of CO2: modeling approaches for large-scale simulation. John Wiley & Sons.
* Nordbotten and Keilegavlen, (2020) Nordbotten, J. M. and Keilegavlen, E. (2020). An introduction to multi-point flux (mpfa) and stress (mpsa) finite volume methods for thermo-poroelasticity. arXiv preprint arXiv:2001.01990.
* Oda, (1985) Oda, M. (1985). Permeability tensor for discontinuous rock masses. Geotechnique, 35(4):483–495.
* Reichenberger et al., (2006) Reichenberger, V., Jakobs, H., Bastian, P., and Helmig, R. (2006). A mixed-dimensional finite volume method for two-phase flow in fractured porous media. Advances in water resources, 29(7):1020–1036.
* Sævik et al., (2013) Sævik, P. N., Berre, I., Jakobsen, M., and Lien, M. (2013). A 3d computational study of effective medium methods applied to fractured media. Transport in porous media, 100(1):115–142.
* Sandve et al., (2012) Sandve, T. H., Berre, I., and Nordbotten, J. M. (2012). An efficient multi-point flux approximation method for discrete fracture–matrix simulations. Journal of Computational Physics, 231(9):3784–3800.
* Starnoni et al., (2019) Starnoni, M., Berre, I., Keilegavlen, E., and Nordbotten, J. (2019). Consistent mpfa discretization for flow in the presence of gravity. Water Resources Research.
* Wilson et al., (2020) Wilson, P., Smith, S., Povey, D., and Harris, S. (2020). Ranking and selecting fault models using flow-indicator fault properties and simple streamline simulations. Petroleum Geoscience.
* Yortsos, (1995) Yortsos, Y. C. (1995). A theoretical analysis of vertical flow equilibrium. Transport in Porous Media, 18(2):107–129.
|
# Variational quantum solver employing the PDS energy functional
Bo Peng<EMAIL_ADDRESS>Physical and Computational Science Division, Pacific
Northwest National Laboratory, Richland, Washington 99354, United States of
America Karol Kowalski<EMAIL_ADDRESS>Physical and Computational
Science Division, Pacific Northwest National Laboratory, Richland, Washington
99354, United States of America
###### Abstract
Recently, a new class of quantum algorithms based on the quantum computation
of the connected moment expansion has been reported for finding ground- and
excited-state energies. In particular, the Peeters-Devreese-Soldatov (PDS)
formulation has been found to be variational and to bear the potential for
combination with the existing variational quantum infrastructure. Here we show
that the PDS formulation can be considered as a new energy functional whose
gradient can be employed in a conventional variational quantum solver. In
comparison with the usual variational quantum eigensolver (VQE) and the
original static PDS approach, this new variational quantum solver offers an
effective way to navigate the dynamics away from local minima that refer to
different states, and to achieve high accuracy in finding the ground state and
its energy through the rotation of a trial wave function of modest quality,
thus improving the accuracy and efficiency of the quantum simulation. We
demonstrate the performance of the proposed variational quantum solver for toy
models, the H2 molecule, and the strongly correlated planar H4 system in some
challenging situations. In all the case studies, the proposed variational
quantum approach outperforms the usual VQE and static PDS calculations even at
the lowest order. We also discuss the limitations of the proposed approach and
its preliminary execution for a model Hamiltonian on a NISQ device.
## 1 Introduction
Quantum computing (QC) techniques attract much attention in many areas of
mathematics, physics, and chemistry by providing means to address otherwise
insurmountable computational barriers to simulating quantum systems on
classical computers.[47, 61, 53, 3, 2, 43] One of the focus areas for quantum computing
is quantum chemistry, where Hamiltonians can be effectively mapped into qubit
registers. In this area, several quantum computing algorithms, including
quantum phase estimation (QPE) [38, 15, 5, 12, 58, 69, 26, 52] and the variational
quantum eigensolver (VQE), [50, 44, 55, 60, 31, 32, 24, 16, 29] have been
extensively tested on benchmark systems corresponding to the description of
chemical reactions involving bond-forming and breaking processes, excited
states, and strongly correlated molecular systems. In more recent
applications, several groups reported quantum algorithms for imaginary time
evolution,[42, 46] quantum filter diagonalization,[48] quantum inverse
iteration algorithms,[36] and quantum power/moments methods.[59, 67] The main
thrust driving this field is the efficient encoding of the electron
correlation effects needed to describe molecular systems. Basic methodological
questions related to efficiently incorporating the large number of degrees of
freedom required to capture a subtle balance between static and dynamical
correlation effects still need to be appropriately addressed. A typical way of
addressing these challenges in VQE approaches is to incorporate more and more
parameters (usually corresponding to excitation amplitudes in a broad class of
unitary coupled-cluster methods [27, 4, 64, 35, 37, 17]). Unfortunately, this
brute-force approach quickly stumbles into insurmountable problems associated
with the resulting quantum-circuit complexity and with the numerical
optimization procedures performed on classical machines (the so-called barren
plateau problem reported in Refs. [45, 11, 68, 10, 51, 41, 66]).
In this paper, we propose a new solution to these problems. Instead of adding
more parameters to the trial wave function, we choose to optimize a new class
of energy functionals (or quasi-functionals, where the energy is obtained as
the solution of a simple equation) that already encompasses information about
high-order static and dynamical correlation effects. An ideal choice for such
a high-level functional is based on the Peeters, Devreese, and Soldatov (PDS)
formalism,[49, 62] where the variational energy is obtained as a solution of
simple equations expressed in terms of the Hamiltonian's moments, i.e., the
expectation values of powers of the Hamiltonian operator for the trial wave
function. In Ref. [34] we demonstrated that in such calculations a high level
of accuracy can be achieved even with very simple parametrizations of the
trial wave function (capturing only essential correlation effects) and
low-rank moments. We believe that merging the PDS formalism with a
quantum-gradient-based variational approach offers an interesting alternative
for bypassing the main problems associated with the excessive number of
amplitudes that need to be included to reach so-called chemical accuracy.
In the following sections we briefly introduce the PDS formalism and describe
how the PDS energy functional can be combined with minimization procedures
based on the quantum gradient approach [25, 57, 45, 42, 71] to produce a new
class of variational quantum solver (called PDS($K$)-VQS for short in the rest
of the paper) that targets the ground state and its energy in a
quantum-classical hybrid manner. Furthermore, we test its performance, in
particular that of the more affordable lower-order PDS($K$)-VQS ($K=2,3,4$)
approaches combined with trial wave functions expressed in low-depth quantum
circuits, at finding the ground state and its energy for Hamiltonians
describing toy models, the H2 molecule, and the strongly correlated planar H4
system, in some challenging situations where the barren plateau problem
precludes effective use of the standard VQE approach.
## 2 Method
### 2.1 PDS formalism
In this section we give a brief description of the PDS formalism. Detailed
discussions of the PDS methodology and the closely related connected moment
expansion (CMX) formalisms are given in the original work,[49, 62] in our
recent work,[34, 14] and in much earlier literature (see, for example,
Refs. [33, 54, 39, 65, 40, 18, 19]). The many-body techniques used in the
derivation of PDS expansions originate in the effort to provide upper bounds
for free energies and to provide alternative re-derivations of the
Bogolubov [8] and Feynman [20] inequalities. Since the Gibbs-Bogolubov
inequality reduces to the Rayleigh-Ritz variational principle in the
zero-temperature limit, these formulations can be directly applied to quantum
chemistry. Here we only provide an overview of the basic steps involved in the
derivation of the PDS formulation.
A starting point for the study of upper bounds on the exact ground-state
energy $E_{0}$ is the analysis of the function $\Gamma(t)$ (defined for a
trial wave function $|\phi\rangle$ having non-zero overlap with the
ground-state wave function)
$\Gamma(t)=\langle\phi|e^{-tH}|\phi\rangle\;,$ (1)
and its Laplace transform $f(s)$
$f(s)=\int_{0}^{+\infty}e^{-st}\Gamma(t)dt\;.$ (2)
It can be proved that, for a complex scalar $s$, the integral in Eq. (2)
exists if the real part of $s$ satisfies $\Re(s)>-E_{0}$. Under this
condition, for a Hamiltonian $H$ defined by discrete energy levels $E_{i}$ and
corresponding eigenvectors $|\Psi_{i}\rangle$ ($i=0,1,\ldots,M$)
$H=\sum_{i=0}^{M}E_{i}|\Psi_{i}\rangle\langle\Psi_{i}|\;,$ (3)
$f(s)$ takes the form
$f(s)=\sum_{i=0}^{M}\frac{\omega(E_{i})}{s+E_{i}}$ (4)
where $\omega(E_{i})=|\langle\Psi_{i}|\phi\rangle|^{2}$. The PDS formalism is
based on introducing parameters into expansion (4) using a simple identity
(with a real parameter $a$)
$\displaystyle\frac{1}{s+E_{n}}$ $\displaystyle=$
$\displaystyle\frac{1}{s+a}-\frac{E_{n}-a}{(s+a)^{2}}+\frac{(E_{n}-a)^{2}}{(s+E_{n})(s+a)^{2}}\;.$
(5)
When the above identity is applied for the first time to Eq. (4) (introducing
the first parameter $a_{1}$), one gets the following expression for the $f(s)$
function
$\displaystyle f(s)$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{M}\omega(E_{i})\left[\frac{1}{s+a_{1}}-\frac{E_{i}-a_{1}}{(s+a_{1})^{2}}+\frac{(E_{i}-a_{1})^{2}}{(s+E_{i})(s+a_{1})^{2}}\right]$
(6)
The transformation (6) can be repeated $K$ times (with each time introducing a
new parameter $a_{i}$, $i=1,\ldots,K$) to reformulate the $f(s)$ function as
$f(s)=R_{K}(s,a_{1},\ldots,a_{K})+W_{K}(s,a_{1},\ldots,a_{K})\;,$ (7)
where
$\displaystyle R_{K}(s,a_{1},\ldots,a_{K})$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{M}\left[\frac{\omega(E_{i})}{s+E_{i}}\prod_{j=1}^{K}\frac{(E_{i}-a_{j})^{2}}{(s+a_{j})^{2}}\right]\geq 0\quad(\text{if}\ \Re(s)>-E_{0}),$ (8) $\displaystyle
W_{K}(s,a_{1},\ldots,a_{K})$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{M}\left\\{\omega(E_{i})\sum_{j=1}^{K}\left[\Big{(}\frac{1}{s+a_{j}}-\frac{E_{i}-a_{j}}{(s+a_{j})^{2}}\Big{)}\prod_{n=1}^{j-1}\frac{(E_{i}-a_{n})^{2}}{(s+a_{n})^{2}}\right]\right\\}\;.$
(9)
The $K$-th order PDS formalism (PDS($K$) for short henceforth) then amounts to
determining the $K$ introduced real parameters $(a_{1},\ldots,a_{K})$ that
minimize the value of $R_{K}(s,a_{1},\ldots,a_{K})$. In this minimization the
necessary extremum conditions are given by the system of equations
$\frac{\partial R_{K}(s,a_{1},\ldots,a_{K})}{\partial a_{i}}=0,\quad(i=1,\ldots,K),$ (10)
which can be alternatively represented by the matrix system of equations for
an auxiliary vector $\mathbf{X}=(X_{1},\cdots,X_{K})^{T}$
$\mathbf{MX}=-\mathbf{Y}.$ (11)
Here, the matrix elements of $\bf M$ and vector $\bf Y$ are defined as the
expectation values of Hamiltonian powers (i.e. moments),
$M_{ij}=\langle\phi|H^{2K-i-j}|\phi\rangle$,
$Y_{i}=\langle\phi|H^{2K-i}|\phi\rangle$ ($i,j=1,\cdots,K$) (for simplicity,
we will use the notation $\langle
H^{n}\rangle\equiv\langle\phi|H^{n}|\phi\rangle$). It can be shown that the
optimal parameters in the PDS($K$) formalism,
$(a_{1}^{(K)},\ldots,a_{K}^{(K)})$, are the roots of the polynomial
$P_{K}(\mathcal{E})$,
$P_{K}(\mathcal{E})=\mathcal{E}^{K}+\sum_{i=1}^{K}X_{i}\mathcal{E}^{K-i},$
(12)
and these roots provide upper bounds for the exact ground and excited state
energies, e.g., for the ground state energy we have
$E_{0}\leq{\rm
min}(a_{1}^{(K)},\ldots,a_{K}^{(K)})\leq\langle\phi|H|\phi\rangle\;.$ (13)
Note that, as shown in Refs. [49, 62], the PDS formalism also applies to
Hamiltonians characterized by both discrete and continuous spectral
resolutions.
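To make the classical post-processing concrete, the following sketch (ours, assuming only numpy and a precomputed list of moments $\langle H^{n}\rangle$, $n=0,\ldots,2K-1$; the function name is an illustrative choice) assembles $\mathbf{M}$ and $\mathbf{Y}$, solves Eq. (11), and returns the smallest root of the polynomial (12) as the PDS($K$) upper bound of Eq. (13):

```python
import numpy as np

def pds_energy(moments, K):
    """PDS(K) upper bound on E_0 from the moments <H^n>, n = 0, ..., 2K-1.

    moments[n] = <phi|H^n|phi>, with moments[0] = 1 for a normalized
    trial state.
    """
    idx = range(1, K + 1)
    # M_ij = <H^(2K-i-j)> and Y_i = <H^(2K-i)>, as defined below Eq. (11)
    M = np.array([[moments[2 * K - i - j] for j in idx] for i in idx])
    Y = np.array([moments[2 * K - i] for i in idx])
    X = np.linalg.solve(M, -Y)              # Eq. (11): M X = -Y
    # P_K(E) = E^K + X_1 E^(K-1) + ... + X_K, Eq. (12); np.roots takes
    # coefficients from the highest power down; the roots are real in
    # exact arithmetic, and the smallest is the bound of Eq. (13)
    return np.roots(np.concatenate(([1.0], X))).real.min()
```

For $K=1$ this reduces, as it should, to the Rayleigh quotient: `pds_energy([1.0, e], 1)` simply returns `e`.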
### 2.2 PDS($K$)-VQS formalism
In the variational method, we approximate the quantum state using a
parametrized trial state $|\Psi\rangle\approx|\phi\rangle$. Using a quantum circuit, the
trial state can be prepared by applying a sequence of parametrized unitary
gates on the initial state $|0\rangle$,
$\displaystyle|\phi\rangle=|\phi(\vec{\theta})\rangle=\cdots
U_{k}(\theta_{k})\cdots U_{1}(\theta_{1})|0\rangle$ (14)
($\vec{\theta}=\\{\theta_{1},\cdots,\theta_{n}\\}$). Here $U_{k}(\theta_{k})$
is the $k$-th unitary single- or two-qubit gate that is controlled by
parameter $\theta_{k}$. The goal is to approach the ground-state energy of a
many-body Hamiltonian, $H$, by finding the values of these parameters,
$\vec{\theta}$, that minimize the expectation value of the Hamiltonian
$\displaystyle
E_{\min}=\min_{\vec{\theta}}\langle\phi(\vec{\theta})|H|\phi(\vec{\theta})\rangle.$
(15)
To do this, the conventional VQE starts by constructing the ansatz
$|\phi(\vec{\theta})\rangle$ and measuring the corresponding expectation value
of the Hamiltonian using a quantum computer, and then relies on a classical
optimization routine to obtain new $\vec{\theta}$. During the parameter
optimization (or dynamics), the set of parameters that is updated at the
$k$-th step ($k>1$) can be written as
$\displaystyle\vec{\theta}_{k}=\vec{\theta}_{k-1}-\eta\mathcal{R}^{-1}(\vec{\theta})\nabla\mathcal{E}(\vec{\theta}),$ (16)
where
$\nabla\mathcal{E}(\vec{\theta})=\partial\mathcal{E}/\partial\vec{\theta}$ is
the energy gradient vector, and $\eta$ is the step size (or learning rate).
$\mathcal{R}(\vec{\theta})$ is the Riemannian metric matrix at $\vec{\theta}$,
which is flexible enough to characterize singular points in the parameter
space and is essentially related to the indistinguishability of
$\mathcal{E}(\vec{\theta})$.[71] It is worth mentioning that Eq. (16)
originates from the natural gradient learning method in the general nonlinear
optimization framework, especially targeting machine learning problems.[1]
Here, the natural gradient is an optimizer that accounts for the geometric
structure of the parameter space. For a curved (or nonorthonormal) parameter
manifold that exhibits Riemannian character (e.g. in large neural networks),
the natural gradient learning method is often employed to avoid plateaus in
the parameter space.[42, 71, 63]
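As a minimal sketch of the update (16) (a sketch only: `grad` and `metric` stand for whatever gradient and metric evaluations one plugs in, and the pseudo-inverse is our illustrative stand-in for the SVD-regularized inversion discussed in Sec. 4):

```python
import numpy as np

def vqs_step(theta, grad, metric, eta=0.05):
    """One parameter update of Eq. (16): theta <- theta - eta R^{-1} grad.

    np.linalg.pinv stands in for R^{-1} so that a singular metric does
    not break the step.
    """
    return theta - eta * np.linalg.pinv(metric) @ grad
```

Setting `metric = np.eye(len(theta))` recovers ordinary gradient descent.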
Note that when the parameter space is a Euclidean space with an orthonormal
coordinate system, the Riemannian metric tensor reduces to the identity matrix
(see Tab. 1). In the VQE setting, one can define the Riemannian metric as the
quantum Fubini-Study metric, the quantum analog of the Fisher information
matrix in the classical natural gradient,[1] to measure the distance in the
space of pure quantum states. The quantum Fubini-Study metric describes the
curvature of the ansatz class rather than the learning landscape, but often
performs as well as Hessian-based methods (e.g. the BFGS optimizer, which
approximates the Hessian of the cost function using first-order gradients; see
Ref. [70] for a recent detailed discussion). There are also other options for
the Riemannian metric, including imaginary-time evolution (ITE) or even the
classical Fisher metric, which have been discussed in some recent
reports.[42, 71, 63] In Tab. 1, three commonly used flavors of the Riemannian
metric matrix $\mathcal{R}(\vec{\theta})$ are listed; they will be used in the
following case studies. Remarkably, as pointed out in Refs. [73, 42], the
difference between natural gradient descent (NGD) and ITE amounts to the
global phase, and if a time-dependent phase gate is introduced into the trial
state, the Riemannian metric employing NGD becomes equivalent to the metric
employing ITE.
Riemannian metric | $\mathcal{R}_{ij}(\vec{\theta})$
---|---
GD | $\delta_{ij}$
NGD | $\Re\Big{(}\frac{\partial\langle\phi(\vec{\theta})|}{\partial\theta_{i}}\frac{\partial|\phi(\vec{\theta})\rangle}{\partial\theta_{j}}\Big{)}-\frac{\partial\langle\phi(\vec{\theta})|}{\partial\theta_{i}}|\phi(\vec{\theta})\rangle\langle\phi(\vec{\theta})|\frac{\partial|\phi(\vec{\theta})\rangle}{\partial\theta_{j}}$
ITE | $\Re\Big{(}\frac{\partial\langle\phi(\vec{\theta})|}{\partial\theta_{i}}\frac{\partial|\phi(\vec{\theta})\rangle}{\partial\theta_{j}}\Big{)}$

Table 1: Three Riemannian metric forms, ordinary gradient descent (GD), natural gradient descent (NGD), and imaginary time evolution (ITE), exploited in the present study.

Figure 1: The workflow of the variational quantum solver employing the PDS energy functional.
To get the energy gradient in the PDS framework, we take the derivative with
respect to $\theta_{i}$ on both sides of Eq. (12); after reorganizing the
terms we can express the energy derivative as
$\displaystyle\frac{\partial\mathcal{E}}{\partial\theta_{i}}=\frac{-1}{K\mathcal{E}^{K-1}+\sum_{j=1}^{K-1}(K-j)X_{j}\mathcal{E}^{K-j-1}}\left(\mathcal{E}^{K-1},\,\cdots,\,1\right)\frac{\partial\mathbf{X}}{\partial\theta_{i}},$ (20)
where $\frac{\partial\mathbf{X}}{\partial\theta_{i}}$ is associated with the
$\theta_{i}$-derivative of Eq. (11),
$\displaystyle\mathbf{M}\frac{\partial\mathbf{X}}{\partial\theta_{i}}=-\frac{\partial\mathbf{Y}}{\partial\theta_{i}}-\frac{\partial\mathbf{M}}{\partial\theta_{i}}\mathbf{X},$
(21)
and can be obtained by solving Eq. (21) as a linear equation with $\partial
Y_{i}/\partial\theta_{k}=\partial\langle H^{2K-i}\rangle/\partial\theta_{k}$
and $\partial M_{ij}/\partial\theta_{k}=\partial\langle
H^{2K-i-j}\rangle/\partial\theta_{k}$. Fig. 1 summarizes the workflow of
PDS($K$)-VQS, where on the classical side the PDS($K$) module includes two
steps, (i) solving two consecutive linear problems to get $\mathbf{X}$ and
$\partial\mathbf{X}/\partial\theta_{i}$, and (ii) solving for roots of
polynomial (12) and computing Eq. (20). On the quantum side, in comparison
with the conventional VQE, the present PDS($K$)-VQS infrastructure relies on
quantum circuits to measure $\langle H^{n}\rangle$ and their
$\vec{\theta}$-derivatives.
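A compact sketch of the classical steps (i) and (ii) (again a sketch, under the assumption that for one parameter $\theta_{i}$ the moments and their derivatives have already been measured; all names are ours):

```python
import numpy as np

def pds_energy_and_gradient(moments, dmoments, K):
    """Classical PDS(K) step: energy from Eqs. (11)-(12) and its
    derivative w.r.t. one circuit parameter from Eqs. (20)-(21).

    moments[n]  = <H^n>,              n = 0, ..., 2K-1
    dmoments[n] = d<H^n>/d(theta_i)   (dmoments[0] = 0)
    """
    idx = range(1, K + 1)
    M  = np.array([[moments[2*K - i - j]  for j in idx] for i in idx])
    Y  = np.array([moments[2*K - i]  for i in idx])
    dM = np.array([[dmoments[2*K - i - j] for j in idx] for i in idx])
    dY = np.array([dmoments[2*K - i] for i in idx])
    X  = np.linalg.solve(M, -Y)                        # Eq. (11)
    dX = np.linalg.solve(M, -dY - dM @ X)              # Eq. (21)
    E  = np.roots(np.concatenate(([1.0], X))).real.min()   # Eq. (12)
    # Eq. (20): the denominator is the polynomial derivative P_K'(E)
    denom = K * E**(K - 1) + sum((K - j) * X[j - 1] * E**(K - j - 1)
                                 for j in range(1, K))
    dE = -np.array([E**(K - i) for i in idx]) @ dX / denom
    return E, dE
```

The resulting gradient then enters the update of Eq. (16) exactly as in the conventional VQE.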
In the present work, due to the relatively small system sizes, we directly
exploit the Hadamard test to compute the real part of $\langle H^{n}\rangle$
for Hamiltonians represented as a sum of Pauli strings. It is worth mentioning
that for typical molecular systems that can be represented by $N$ qubits, the
number of $\langle H^{n}\rangle$ measurements scales as
$\mathcal{O}(N^{4n})$, which nevertheless can be reduced once the Pauli
strings are multiplied out and their expectation values are re-used as
contributions to the higher order moments.[34] For example, as we will show
later for the H4 system, whose Hamiltonian comprises 184 Pauli strings, the
effective number of Pauli strings required for arbitrary $\langle
H^{n}\rangle$ ($n=2,3,4$) measurements drops from $184^{2}$, $184^{3}$, and
$184^{4}$ to 1774, 3702, and 4223, respectively, after the Pauli reduction,
and the 4223 strings do not change for higher moments $\langle H^{n}\rangle$
($n>4$). Similar findings have also been reported in Ref. [67], where, by
grouping the Pauli strings into tensor-product basis sets, the authors
examined the operator counts for $\langle H^{4}\rangle$ of the Heisenberg
model defined on different lattice geometries, for numbers of qubits ranging
from 2 up to 36, and found that the effective number of Pauli strings to be
measured drops by several orders of magnitude, with sub-linear scaling in the
number of qubits. For larger systems, the number of measurements can be further reduced
by introducing active spaces and local approximations. Alternatively, one can
approximate $\langle H^{n}\rangle$ by a linear combination of the time-
evolution operators as introduced in some recent reports.[59, 7] For the
estimation of $\partial\langle H^{n}\rangle/\partial\theta_{k}$, in the
present work we limit the $U_{k}(\theta_{k})$ exploited in the state
preparation to one-qubit rotations. Then, following Ref. [57], $\partial\langle
H^{n}\rangle/\partial\theta_{k}$ can be obtained by measuring $\langle
H^{n}\rangle$ twice using the same circuit but shifting $\theta_{k}$ by
$\pm\frac{\pi}{2}$ separately, i.e.
$\displaystyle\frac{\partial\langle
H^{n}\rangle_{(\cdots,\theta_{k},\cdots)}}{\partial\theta_{k}}=\frac{1}{2}\Big{(}\langle
H^{n}\rangle_{(\cdots,\theta_{k}+\frac{\pi}{2},\cdots)}-\langle
H^{n}\rangle_{(\cdots,\theta_{k}-\frac{\pi}{2},\cdots)}\Big{)}.$ (22)
If $\theta_{k}$ parametrizes more than one one-qubit rotation in the circuit,
then by the product rule $\partial\langle H^{n}\rangle/\partial\theta_{k}$
has contributions from all one-qubit $\theta_{k}$ rotations, each of which is
obtained by applying Eq. (22) to the corresponding rotation.
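In code, the shift rule (22) only needs a routine returning $\langle H^{n}\rangle$ for a given set of circuit parameters; on hardware this would be the Hadamard-test measurement described above, while here it is an abstract callable we name `expval` for illustration:

```python
import numpy as np

def dHn_dtheta(expval, theta, k):
    """Parameter-shift estimate of d<H^n>/d(theta_k), Eq. (22).

    expval(theta) must return <H^n> for circuit parameters theta.
    """
    theta = np.asarray(theta, dtype=float)
    shift = np.zeros_like(theta)
    shift[k] = np.pi / 2
    return 0.5 * (expval(theta + shift) - expval(theta - shift))
```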
## 3 Numerical examples
In this section, with several examples, we demonstrate how the PDS($K$)-VQS
performs in some challenging situations and how it differs from the
conventional VQE and static PDS($K$) expansions.
### 3.1 Toy Hamiltonians
We first test the PDS($K$)-VQS on two toy Hamiltonians
$\displaystyle H_{A}$ $\displaystyle=1.5I_{4\times 4}+0.5(I_{2\times 2}\otimes\sigma_{z}-2\sigma_{z}\otimes\sigma_{z})$ $\displaystyle=\begin{pmatrix}1&0&0&0\\0&2&0&0\\0&0&3&0\\0&0&0&0\end{pmatrix},$ (27)
$\displaystyle H_{B}$ $\displaystyle=I_{4\times 4}+0.5(I_{2\times 2}\otimes\sigma_{z}-\sigma_{z}\otimes\sigma_{z})$ $\displaystyle=\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&2&0\\0&0&0&0\end{pmatrix},$ (32)
with ansatze
$\displaystyle|\phi_{A}(\theta_{1},\theta_{2})\rangle=\tilde{R}_{Y}^{0,1}(\theta_{2})R_{X}^{0}(\theta_{1})|00\rangle,$
$\displaystyle|\phi_{B}(\theta_{1},\theta_{2})\rangle=\tilde{R}_{Y}^{0,1}(\theta_{2})R_{X}^{0}(\theta_{1})R_{X}^{1}(\theta_{1})|01\rangle,$
that have been exploited by McArdle et al. [42] to demonstrate the performance
of different Riemannian metrics in the conventional VQE approach for finding
the ground-state energy of the same Hamiltonians. Here,
$\tilde{R}_{Y}^{p,q}(\theta)$ is a controlled $Y$ rotation of $\theta$ with
control qubit $p$ and target qubit $q$, and $R_{X}^{p}(\theta)$ is a rotation
of $\theta$ on qubit $p$ around the $x$-axis. The rotation about the $j$-axis
is defined as $R_{\sigma_{j}}(\theta)=e^{-\text{i}\theta\sigma_{j}/2}$ with
$\sigma_{j}$ being one of the Pauli spin matrices.
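For readers who wish to reproduce the surfaces below, both Hamiltonians and the ansatz $|\phi_{A}\rangle$ fit in a few lines of numpy; this is a sketch under the assumption that qubit 0 is the leading tensor factor, and all helper names are ours:

```python
import numpy as np

I2 = np.eye(2)
sz = np.diag([1.0, -1.0])

H_A = 1.5 * np.eye(4) + 0.5 * (np.kron(I2, sz) - 2 * np.kron(sz, sz))
H_B = np.eye(4) + 0.5 * (np.kron(I2, sz) - np.kron(sz, sz))

def RX(t):   # rotation about x: exp(-i t sigma_x / 2)
    return np.array([[np.cos(t/2), -1j*np.sin(t/2)],
                     [-1j*np.sin(t/2), np.cos(t/2)]])

def RY(t):   # rotation about y: exp(-i t sigma_y / 2)
    return np.array([[np.cos(t/2), -np.sin(t/2)],
                     [np.sin(t/2),  np.cos(t/2)]])

def CRY_01(t):
    """Controlled-RY with control qubit 0, target qubit 1."""
    U = np.eye(4, dtype=complex)
    U[2:, 2:] = RY(t)    # act only when qubit 0 is |1> (indices 2, 3)
    return U

def phi_A(t1, t2):
    ket00 = np.array([1, 0, 0, 0], dtype=complex)
    return CRY_01(t2) @ np.kron(RX(t1), I2) @ ket00
```

Sweeping $(\theta_{1},\theta_{2})$ and evaluating either $\langle\phi|H_{A}|\phi\rangle$ or the PDS(2) bound built from the moments $\langle\phi|H_{A}^{n}|\phi\rangle$ should then reproduce the two kinds of surfaces compared in Fig. 2.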
Figure 2: Variational trajectories on the PDS(2) energy surface (left panels)
and original potential energy surface (right panels) discovering the ground
state energy of Hamiltonian, $H_{A}$, explored by gradient descent (top
panels) and natural gradient descent/imaginary time evolution (bottom panels).
On the background energy surfaces, the dark blue and white colors correspond
to the global maximum and minimum energies, respectively. The arrows indicate
the trajectories of the dynamics, and are colored green if the trajectory
converges to the ground state energy, and red otherwise. The step size
$\eta=0.05$ in all the calculations. Figure 3: Variational trajectories on the
PDS(2) energy surface (left panels) and original potential energy surfaces
(right panels) discovering the ground state energy of Hamiltonian, $H_{B}$,
explored by gradient descent (top panels), natural gradient descent (middle
panels), and variational imaginary time (bottom panels). On the background
energy surfaces, the dark blue and white colors correspond to the global
maximum and minimum energies, respectively. The arrows indicate the
trajectories of the methods, and are colored green if the trajectory converges
to the true ground state energy, and red otherwise. The step size $\eta=0.05$
in all the calculations.
Figs. 2 and 3 show the performance of the proposed PDS($K$)-VQS ($K=2$, i.e.
PDS(2)-VQS) and the conventional VQE approaches for finding the ground state
energy of the toy Hamiltonians. As can be seen, the ability of the VQE
navigation to avoid the local minima on the conventional PES depends on the
Riemannian metric exploited. For system A, in comparison to GD, the NGD (or
equivalently ITE in this case) is able to avoid the local minimum at
$(\theta_{1},\theta_{2})=(0,0)$. This is because the Riemannian metric,
$\displaystyle\mathcal{R}=\begin{pmatrix}\frac{1}{4}&0\\0&\frac{1}{4}\sin^{2}(\frac{\theta_{1}}{2})\end{pmatrix},$ (35)
used in the NGD/ITE correctly characterizes any rotation pair with
$\theta_{1}=0$ as a singular point (i.e. $\det|\mathcal{R}|=0$), such that
$\mathcal{R}^{-1}$ numerically navigates the dynamics (e.g. via singular value
decomposition) away from this local minimum once the trajectory gets close.
Therefore, if the metric is unable to characterize the local minima as
singular points, the VQE still gets trapped. This can be observed in the VQE
performance for system B, where both NGD and ITE fail to escape the local
minima, $(\theta_{1},\theta_{2})\sim(\pm\frac{3\pi}{8},0)$, in the dynamics,
due to the fact that these local minima are not singular points of
$\mathcal{R}$ in either NGD or ITE.
In contrast, the PDS(2)-VQS robustly converges to the true ground state for
both systems regardless of the employed Riemannian metric. The success of the
PDS($K$)-VQS in these toy examples can essentially be attributed to the fact
that, in contrast to the original PES, where the local minima correspond to
non-ground states, the entire PDS($K$) energy surface, except for the singular
areas (see the infinitesimal white strips in the left panels of Fig. 2 at
$\theta_{1}=0$) where the fidelity of the trial wave function w.r.t. the
target state is strictly zero, provides an upper-bound energy surface for the
same (ground) state. This state-specific nature means that the PDS($K$)-VQS
essentially explores ever lower upper bounds of the ground state at a given
PDS order, and therefore the dynamics will not be trapped at a location
associated with a different state. It is worth mentioning that a tighter upper
bound of the ground state energy can also be obtained from a static, and more
costly, higher order PDS($K$) standalone calculation, as demonstrated in our
previous work.[34] From this perspective, the PDS($K$)-VQS approach provides
an effective way to push the low order PDS($K$) results towards an accuracy
that would otherwise require higher-order and more expensive PDS($K$)
calculations. Besides, since the generalized variational principle applies in
the PDS framework,[49, 62] if other roots of Eq. (12) are of interest, the
PDS($K$)-VQS is also able to navigate the dynamics to give lower upper bounds
for excited states, as long as the fidelity of the trial wave function with
respect to the target state is non-zero.
### 3.2 $H_{2}$ and $H_{4}$ systems
We further employ the proposed PDS($K$)-VQS approach to find the ground state
energies of the H2 and H4 molecular systems. For the H2 molecule, we exploit
an effective Hamiltonian and an ansatz used by Yamamoto [71] and Bravyi et
al.,[9]
$\displaystyle H=0.4(\sigma_{z}\otimes
I+I\otimes\sigma_{z})+0.2\sigma_{x}\otimes\sigma_{x}$
$\displaystyle|\phi(\vec{\theta})\rangle=R_{Y}^{0}(2\theta_{3})R_{Y}^{1}(2\theta_{4})\tilde{U}_{N}^{0,1}R_{Y}^{0}(2\theta_{1})R_{Y}^{1}(2\theta_{2})|00\rangle$
where $\tilde{U}_{N}^{p,q}$ denotes the CNOT gate with control qubit $p$ and
target qubit $q$.
Figure 4: The computed ground state energy (top panels), energy deviation
w.r.t. exact energy (middle panels), and fidelity of the trial state (bottom
panels) of the H2 molecule during the iterations of the conventional VQE and PDS($K$)-VQS
($K=2,3,4$) infrastructures employing gradient descent (left panels) and
natural gradient descent/imaginary time evolution (right panels). The initial
rotation is given by $\vec{\theta}=(7\pi/32,\pi/2,0,0)$. The step size
$\eta=0.05$ in all the calculations.
Fig. 4 compares the VQE and PDS($K$)-VQS performance using the above-mentioned
ansatz to find the ground state energy of the H2 Hamiltonian. As can be seen,
starting from the given initial rotation, the VQE is unable to converge to the
ground state energy within 100 iterations, instead dropping to an excited
state energy ($-0.2$ a.u. in this case). In fact, it has been shown[71] that,
starting from the same initial rotation, the VQE needs to traverse a “plateau”
that resides at this energy value and spreads over $\sim$400 iterations before
reaching the ground state energy ($\sim$$-0.8$ a.u. in this case), regardless
of the employed Riemannian metric.
To achieve a higher level of accuracy (e.g. chemical accuracy,
$|E(\vec{\theta})-E_{\text{exact}}|<1.5\times 10^{-3}$ a.u.), the low order
PDS($K$)-VQS typically needs more iterations than the high order PDS($K$)-VQS.
As shown in the middle panels of Fig. 4, it takes the PDS(4)-VQS $<$10
iterations to converge to the ground state energy with an energy deviation
$<10^{-14}$ a.u. regardless of the employed Riemannian metric, while,
employing GD in the dynamics, it takes the PDS(2)/PDS(3)-VQS almost 100 steps
to bound the deviation below $10^{-3}$ a.u. Remarkably, the performance
improves when GD is replaced by NGD/ITE in the PDS(2)/PDS(3)-VQS dynamics. In
particular, within 80 iterations the PDS(3)-VQS employing NGD/ITE converges to
almost the same accuracy level as the PDS(4)-VQS.
On the other hand, the quality of the trial wave function improves more
significantly in the low order PDS($K$)-VQS dynamics than in the high order
PDS($K$)-VQS dynamics. For example, the fidelity of the trial wave function
w.r.t. the exact ground state gradually increases from almost zero to
$\sim$0.35 within 100 iterations using PDS(2)-VQS, regardless of the employed
Riemannian metric, a change significantly steeper than the almost flat curves
of PDS(3)/PDS(4)-VQS shown at the bottom of Fig. 4. However, in comparison to
GD, employing NGD/ITE in the PDS(3)/PDS(4)-VQS quickly improves the fidelity
of the trial wave function from $<$0.02 to 0.2$\sim$0.3 within 10 iterations.
It is worth mentioning that, since the fidelity of the trial wave function at
the initial rotation is almost zero, neither VQE nor the static PDS($K$)
($K=2,3,4$) calculations alone can identify the ground state energy in this
case, which makes the PDS($K$)-VQS a necessary and effective approach for
targeting the ground state energy and improving the trial wave function.
Nevertheless, the improvement of the trial wave function within the
PDS($K$)-VQS approach can be limited. This can be seen from the flat fidelity
curves of the trial state driven by the PDS(3/4)-VQS dynamics after the first
several iterations, shown at the bottom right of Fig. 4. This is because the
PDS($K$) formalism does not require the ansatz to closely approximate the
target state, yet it is still able to provide systematically improvable upper
bounds for the expectation value of the target state by exploring the Krylov
subspace. The benefit is a great simplification of the state preparation. The
limitation is also obvious: it can be challenging to further improve the
quality of the trial state within the PDS($K$)-VQS framework once the energy
is well converged, which would compromise the accuracy of property
calculations that usually require a sufficiently accurate description of the
target wave function.
Figure 5: The performance of VQE and PDS($K$)-VQS ($K=2,3,4$) employing
ordinary gradient descent (GD) to compute the ground state energy of a planar
H4 system with $R_{H-H}=2.0$ a.u. (Top left) The circuit used to generate the
ansatz with 16 rotation parameters that is inspired by the basis rotation
ansatz for a linear hydrogen chain in Ref. [2]. Here, we consider the planar
H4 system in 3-21G basis. The generated Hamiltonian acts on eight qubits, and
considers an active space of four electrons in eight spin-orbitals. With this
Hamiltonian, the ground state of the planar H4 system is a triplet state with
the exact energy $E_{\text{exact}}=-2.00591266$ a.u. (Bottom left) The
deviations of the VQE and PDS($K$)-VQS energies and (bottom right) the
fidelity change of the trial wave function w.r.t. true ground state during the
PDS($K$)-VQS calculations. The initial values of all the rotations are set to
0.001. The step size $\eta=1.0$ in all the calculations.
We also test the proposed PDS($K$)-VQS approach on a slightly larger system,
the planar H4 system, where an 8-qubit circuit with 16 rotation parameters,
shown at the top of Fig. 5, is employed to prepare the trial wave function for
finding the ground state energy. The state preparation circuit is inspired by
a similar circuit that has been reported to successfully prepare the
Hartree-Fock state of linear hydrogen chain systems.[2] For the planar H4
system, whose ground state is a triplet, the circuit with close-to-zero
initial rotations generates a trial state that is almost a singlet, which
makes the conventional VQE and the static PDS($K$) ($K=2,3,4$) simply fail. On
the other hand, as shown at the bottom of Fig. 5, the PDS($K$)-VQS ($K=2,3,4$)
approaches are capable of dealing with such a tough situation and again
outperform. As can be seen, within 200 iterations the PDS($K$)-VQS ($K=2,3,4$)
approaches converge to the ground state energy well below chemical accuracy
while improving the fidelity of the trial wave function to $>$0.96. It is
worth noting that even though the converged rotations obtained from the
PDS($K$)-VQS calculations generate a high-fidelity state, the expectation
value of the generated state is still $\sim 0.02$ a.u. above the exact energy,
and it then becomes challenging to further improve the fidelity with the same
circuit infrastructure by varying the rotations. Therefore, the circuit used
here might not be sufficient for preparing the true ground state in practice
if higher fidelity is desired. We intentionally employ this circuit to
artificially generate an extremely challenging case that exposes the
performance difference between the conventional VQE and PDS($K$)-VQS
approaches.
## 4 Discussion
As seen in Section 3, the PDS($K$)-VQS approach bears the potential of
speeding up the iterations in comparison with the conventional VQE approach.
However, it is worth noting that the measurement effort of evaluating the
$\langle H^{n}\rangle$'s ($n>1$) and their derivatives is usually greater than
that of $\langle H\rangle$ and its derivative, and the actual cost saving is
therefore reduced accordingly.
To take a closer look at the measurement of $\langle H^{n}\rangle$ (and its
impact on the total cost), we employ the following metric to estimate the
number of measurements, $M$:[23, 56, 69]
$\displaystyle M=\Bigg{(}\frac{\sum_{G}\sqrt{\sum_{i,j,\in
G}h_{i}h_{j}\text{cov}\big{(}P_{i},P_{j}\big{)}}}{\epsilon}\Bigg{)}^{2},$ (36)
where $\epsilon$ is the desired precision and $h_{i}$’s and $P_{i}$’s are the
coefficients and Pauli strings representing a moment (i.e.
$H^{n}=\sum_{i}h_{i}P_{i}$) and having been partitioned into certain groups,
$G$’s, in which simultaneous measurement can be performed.
$\text{cov}\big{(}P_{i},P_{j}\big{)}$ is the covariance between two Pauli
strings bounded by
$\displaystyle\text{cov}\big{(}P_{i},P_{j}\big{)}\leq|\sqrt{\text{var}(P_{i})\cdot\text{var}(P_{j})}|$
(37)
with the variance being computed from $\text{var}(P_{i})=1-\langle
P_{i}\rangle^{2}$. Here, we assume the covariances between different Pauli
strings to be zero for the brevity of the discussion. We can apply the above
metric to, for example, estimate the number of measurements of $H^{n}$
($n=1,2,3$) required by the PDS(2)-VQS calculation for the complete active
space (4 electrons, 4 spin-orbitals) of the planar H4 system. Given
$\epsilon\sim 0.5$ mHartree, since $H^{n}$ ($n=1,2,3$) can be generated from
at most $\sim 3700$ Pauli strings, the estimated number of measurements is
$\sim 4.8\times 10^{9}$, which is one order of magnitude higher
than that for $\langle H\rangle$ ($\sim 1.2\times 10^{8}$). Thus, given the
same trial state in this H4 case, if the number of conventional VQE iterations
is no more than one order of magnitude larger than that of the PDS(2)-VQS
iterations, VQE would outperform PDS(2)-VQS in terms of total number of
measurements, and PDS(2)-VQS outperforms otherwise. It is worth mentioning
that, during the PDS($K$)-VQS process for the ground state and energy, the
excited state energies can also be estimated directly from the higher roots of
the polynomial (12) without any additional measurement (although accurate
excited state energies would require higher order PDS($K$)-VQS calculations).
In contrast, the conventional VQE would need distinct trial states, and thus
different measurements, for targeting different states.
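Under the zero-covariance assumption made above, the inner double sum in Eq. (36) reduces to $\sum_{i}h_{i}^{2}\,\text{var}(P_{i})$ with $\text{var}(P_{i})=1-\langle P_{i}\rangle^{2}$, and the estimate becomes a one-liner; the grouping of the Pauli terms and the data layout below are our illustrative choices:

```python
import numpy as np

def n_measurements(groups, eps):
    """Estimate of M in Eq. (36) with cov(P_i, P_j) = 0 for i != j.

    groups: one list per simultaneously measurable group G, each a
            list of (h_i, <P_i>) pairs for the Pauli strings in G.
    eps:    desired precision.
    """
    total = sum(np.sqrt(sum(h**2 * (1 - p**2) for h, p in g))
                for g in groups)
    return (total / eps) ** 2
```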
Generally speaking, as long as the relatively large number of Pauli-string
measurements remains manageable, the PDS($K$)-VQS approach can potentially be
applied to target the exact solutions for system sizes that are not
classically tractable, in particular for systems whose true ground and excited
states we have little knowledge of, or which are challenging to obtain
classically. To reduce the measurement demand, a typical strategy is to
partition the Pauli strings (that contribute to the moments) into commuting
subsets that follow a certain rule, e.g. qubit-wise commutativity (QWC),[31,
44] general commutativity,[22, 72] unitary partitioning,[30] and/or Fermionic
basis rotation grouping,[28] to name a few. The applications of these
commuting rules to the single Hamiltonian have shown that, at the cost of
introducing additional one-/multi-qubit unitary transformations before the
measurement, the total number of required measurements can be significantly
reduced from $\mathcal{O}(N^{4})$ to $\mathcal{O}(N^{2\sim 3})$, or even
$\mathcal{O}(N)$ in simpler cases. For higher order moments, as mentioned in
the method section, an early study applying QWC bases to Heisenberg models
represented by up to 36 qubits exhibits a sub-linear scaling of the number of
measurements in the number of qubits (Ref. [67]), which leads us to expect
similar scaling behavior of the number of required measurements for evaluating
the moments of molecular systems. Besides exploiting the commutativity of
Pauli strings, other approaches, including the linear combination of unitaries
(LCU) technique,[13] direct block-encoding,[21, 6] and quantum power
methods,[59] might also be worth studying for reducing the number of
measurements at the cost of circuit depth. In light of this, we plan to
perform a comprehensive benchmark as follow-up work.
Since the PDS($K$)-VQS formalism involves solving linear systems of equations
and finding polynomial roots, there is a concern of numerical instability when
applying the PDS($K$)-VQS approach in optimization. Theoretically, the
numerical instability of the PDS($K$)-VQS approach might come from two
sources: (a) the singularity and ill-conditioning of the matrix $\mathbf{M}$
in Eq. (11), which may involve high order moments, and (b) the singularity of
the Riemannian metric $\mathcal{R}$ used in the dynamics (16). In particular,
the singularity of the matrix $\mathbf{M}$ can be easily observed if the trial
vector becomes very close to the exact wave function ($\det|\mathbf{M}|=0$ if
we replace the trial vector with the exact vector). Numerically, the
singularity problem can be avoided by adding a small positive number (e.g.
$10^{-6}$) to the eigenvalues of the matrix $\mathbf{M}$ or $\mathcal{R}$ via
singular value decomposition (SVD). However, it is worth noting that adding a
small perturbation to $\mathbf{M}$ might violate the variationality of the PDS
approach, and is not recommended if strict upper bounds to the true energy are
required. The ill-conditioning of the matrix $\mathbf{M}$ could occur in high
order PDS calculations, where high order moments can make the condition number
of $\mathbf{M}$ very large. Thus, from the practical point of view, due to the
potentially larger number of measurements and the ill-conditioning arising
from high Hamiltonian powers, lower order PDS($K$)-VQS approaches are usually
more feasible.
Figure 6: The performance of VQE and PDS(2)-VQS employing ordinary gradient
descent (GD) to compute the ground state energy of the four-site 2D Heisenberg
model. (Top right) The circuit employed to generate the trial vector, where
only the first RY rotation is treated as a variational parameter
$\theta$, and the other three rotations are fixed to $(0,3,3)$. Two initial
rotations $\theta_{0}=-2.0$ and $\theta_{0}=-3.0$ are chosen for performance
comparison. The exact ground state energy of the 2D Heisenberg model is
$E_{\text{exact}}=-3.6$ a.u. (Center) The VQE and PDS(2)-VQS energies and
(bottom) the corresponding fidelity changes of the trial vectors w.r.t. true
ground state in the first ten iterations in the conventional VQE and
PDS(2)-VQS noise-free calculations. The step size $\eta=1.0/\text{Iteration}$
in all the calculations. Figure 7: The computed ground state energy (top left)
and magnetization (top right) of the four-site 2D Heisenberg model and the
corresponding changes of the fidelity (bottom left) and variational parameter
$\theta$ (bottom right) in the first ten PDS($K$)-VQS ($K=2,3,4$) iterations
running on IBM Toronto quantum hardware. The physical setup, error sources,
and computed expectation values of Hamiltonian moments (up to $\langle
H^{7}\rangle$) and the associated standard deviations are shown in Fig. 8. In
all the calculations ordinary gradient descent (GD) is employed. The initial
rotation $\theta_{0}=-3.0$. The exact ground state energy and
magnetization of the 2D Heisenberg model are $E_{\text{exact}}=-3.6$ a.u. and
$\sum_{i}\langle\sigma_{z_{i}}\rangle=-4.0$ a.u., respectively. The step size
$\eta=1.0/\text{Iteration}$ in all the calculations.
Ultimately, one may be concerned about how the PDS($K$)-VQS applies to general
models and how it performs on real quantum hardware subject to device noise.
To address these concerns and explore the potential of the PDS($K$)-VQS
approach, we have started to launch PDS($K$)-VQS calculations for more general
Hamiltonians on both a simulator and real quantum hardware. Figs. 6 and 7
exhibit some preliminary results for a four-site 2D Heisenberg model with an
external magnetic field, $H=J\sum_{\langle
ij\rangle}\big{(}X_{i}X_{j}+Y_{i}Y_{j}+Z_{i}Z_{j}\big{)}+B\sum_{i}Z_{i}$ with
$J/B=0.1$. The simple circuit employed for the state preparation in both the
VQE and PDS($K$)-VQS simulations is shown in Fig. 6, where, for the brevity of
our discussion, we only treat one rotation in the state preparation as a
variational parameter and fix the other three rotations. As can be seen from
the noise-free simulations in Fig. 6, the PDS(2)-VQS results quickly converge
within five iterations, achieving $\sim 0.99$ fidelity, while the performance
of the VQE exhibits a strong dependence on the initial rotation (for
$\theta_{0}=-2.0$, the conventional VQE is able to converge in 10 iterations
with $\Delta E<0.05$ a.u. and fidelity $\sim 0.97$). When running the
PDS($K$)-VQS simulations for the same model on the IBM Toronto quantum
hardware, as shown in Fig. 7, the PDS(2/3)-VQS optimization curves slow down
significantly in comparison to the ideal curves and deviate from the exact
solutions due to the error of the real machine. However, if we increase the
PDS order and perform PDS(4)-VQS calculations, the accuracy of the results
systematically improves. For example, in the PDS(4)-VQS approach both the
computed ground state energy and the trial state (and thus the magnetization)
converge within 10 iterations to values very close to the exact solutions.
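The quoted exact values are easy to verify classically; the sketch below builds the four-site Hamiltonian (the 2$\times$2-plaquette bond list is our assumption, as the connectivity is not spelled out in the text) and confirms $E_{\text{exact}}=-3.6$ a.u. and $\sum_{i}\langle\sigma_{z_{i}}\rangle=-4.0$:

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])

def op(P, site, n=4):
    """Pauli P acting on `site` of an n-qubit register."""
    return reduce(np.kron, [P if q == site else I2 for q in range(n)])

J, B = 0.1, 1.0                                  # J/B = 0.1
bonds = [(0, 1), (1, 2), (2, 3), (3, 0)]         # assumed 2x2 plaquette
H = sum(J * op(P, i) @ op(P, j) for i, j in bonds for P in (X, Y, Z)) \
    + B * sum(op(Z, i) for i in range(4))

vals, vecs = np.linalg.eigh(H)
gs = vecs[:, 0]                                  # ground state
print(vals[0])                                   # -> -3.6
print(sum(gs.conj() @ op(Z, i) @ gs for i in range(4)).real)  # -> -4.0
```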
## 5 Conclusion
In summary, we propose a new variational quantum solver that employs the PDS
energy gradient. In comparison with the usual VQE, the PDS($K$)-VQS helps
identify an upper-bound energy surface for the ground state, and thus frees
the dynamics from being trapped at local minima that refer to non-ground
states. In comparison with the static PDS($K$) expansions, the PDS($K$)-VQS
guides the rotation of a trial wave function of modest quality, and is able to
achieve high accuracy at the expense of only low order PDS($K$) expansions. We
have demonstrated the capability of the PDS($K$)-VQS approach at finding the
ground state and its energy for toy models, the H2 molecule, and the strongly
correlated planar H4 system in some challenging situations. In all the case
studies, the PDS($K$)-VQS outperforms the standalone VQE and static PDS($K$)
calculations in terms of efficiency even at the lowest order. We also
discussed the limitations of the PDS($K$)-VQS approach at the current stage.
In particular, the PDS($K$)-VQS approach may suffer from a large number of
measurements for large systems, which can nevertheless be reduced, at the cost
of circuit depth, by combining it with measurement reduction methods. Finally,
we have started to launch PDS($K$)-VQS simulations for more general
Hamiltonians on IBM quantum hardware. Preliminary results for the Heisenberg
model indicate that the higher order PDS($K$)-VQS approaches exhibit better
noise resistance than the lower order ones. The discussed approach can be
extended to any variational formulation based on the utilization of $\langle
H^{n}\rangle$ moments (e.g. Krylov subspace algorithms).
Figure 8: (a) Quantum processor device map for ibmq_toronto showing the four
qubits (Qn, $n=0-3$) used in the present computation. (b) Average CNOT error,
1-qubit readout assignment error, and thermal relaxation time constant (T1)
and dephasing time constant (T2) in the four qubits used in the present
computation. (c) The expectation values of the Hamiltonian moments, $\langle
H^{n}\rangle$ ($n=1-7$), assembled from the measurements of the expectation
values of 21 QWC bases for four-site 2D Heisenberg model $H=J\sum_{\langle
ij\rangle}\big{(}X_{i}X_{j}+Y_{i}Y_{j}+Z_{i}Z_{j}\big{)}+B\sum_{i}Z_{i}$ with
$J/B=0.1$. The data points correspond to mean value from the calculations on
IBM Quantum processor ibmq_toronto with statistical error bars corresponding
to $5\times 8192$ shots (per point). The trial state is constructed using the
circuit given in Fig. 6 with initial rotation $\theta_{0}=-3.0$.
## 6 Acknowledgement
B. P. and K. K. were supported by the “Embedding QC into Many-body Frameworks
for Strongly Correlated Molecular and Materials Systems” project, which is
funded by the U.S. Department of Energy, Office of Science, Office of Basic
Energy Sciences (BES), the Division of Chemical Sciences, Geosciences, and
Biosciences. B. P. and K. K. acknowledge the use of the IBMQ for this work.
The views expressed are those of the authors and do not reflect the official
policy or position of IBM or the IBMQ team.
## 7 Data Availability
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* Amari [1998] S. Amari. Natural gradient works efficiently in learning. _Neural Computation_ , 10(2):251–276, 1998. doi: 10.1162/089976698300017746.
* Arute et al. [2020] Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Sergio Boixo, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Yu Chen, Zijun Chen, Benjamin Chiaro, Roberto Collins, William Courtney, Sean Demura, Andrew Dunsworth, Edward Farhi, Austin Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Rob Graff, Steve Habegger, Matthew P. Harrigan, Alan Ho, Sabrina Hong, Trent Huang, William J. Huggins, Lev Ioffe, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Seon Kim, Paul V. Klimov, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Mike Lindmark, Erik Lucero, Orion Martin, John M. Martinis, Jarrod R. McClean, Matt McEwen, Anthony Megrant, Xiao Mi, Masoud Mohseni, Wojciech Mruczkiewicz, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Hartmut Neven, Murphy Yuezhen Niu, Thomas E. O’Brien, Eric Ostby, Andre Petukhov, Harald Putterman, Chris Quintana, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Doug Strain, Kevin J. Sung, Marco Szalay, Tyler Y. Takeshita, Amit Vainsencher, Theodore White, Nathan Wiebe, Z. Jamie Yao, Ping Yeh, and Adam Zalcman. Hartree-fock on a superconducting qubit quantum computer. _Science_ , 369(6507):1084–1089, 2020. doi: 10.1126/science.abb9811.
* Babbush et al. [2018] Ryan Babbush, Nathan Wiebe, Jarrod McClean, James McClain, Hartmut Neven, and Garnet Kin-Lic Chan. Low-depth quantum simulation of materials. _Phys. Rev. X_ , 8:011044, 2018. doi: 10.1103/PhysRevX.8.011044.
* Bartlett et al. [1989] Rodney J. Bartlett, Stanisław A. Kucharski, and Jozef Noga. Alternative coupled-cluster ansätze II. the unitary coupled-cluster method. _Chem. Phys. Lett._ , 155(1):133–140, 1989. doi: 10.1016/s0009-2614(89)87372-5.
* Berry et al. [2007] Dominic W Berry, Graeme Ahokas, Richard Cleve, and Barry C Sanders. Efficient quantum algorithms for simulating sparse hamiltonians. _Comm. Math. Phys._ , 270(2):359–371, 2007. doi: 10.1007/s00220-006-0150-x.
* Berry et al. [2015] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating hamiltonian dynamics with a truncated taylor series. _Phys. Rev. Lett._ , 114:090502, 2015. doi: 10.1103/PhysRevLett.114.090502.
* Bespalova and Kyriienko [2020] Tatiana A. Bespalova and Oleksandr Kyriienko. Hamiltonian operator approximation for energy measurement and ground state preparation. _preprint_ , arXiv:2009.03351, 2020. URL https://arxiv.org/abs/2009.03351.
* Bogolubov [1947] N. Bogolubov. On the theory of superfluidity. _J. Phys._ , 11:23–32, 1947. doi: 10.1142/9789814612524_0001.
* Bravyi et al. [2017] Sergey Bravyi, Jay M. Gambetta, Antonio Mezzacapo, and Kristan Temme. Tapering off qubits to simulate fermionic hamiltonians. _preprint_ , arXiv:1701.08213, 2017. URL https://arxiv.org/abs/1701.08213.
* Cerezo and Coles [2021] M Cerezo and Patrick J Coles. Higher order derivatives of quantum neural networks with barren plateaus. _Quantum Sci. Technol._ , 6(3):035006, Jun 2021. doi: 10.1088/2058-9565/abf51a.
* Cerezo et al. [2021] M. Cerezo, Akira Sone, Tyler Volkoff, Lukasz Cincio, and Patrick J. Coles. Cost function dependent barren plateaus in shallow parametrized quantum circuits. _Nat. Commun._ , 12(1), 2021. doi: 10.1038/s41467-021-21728-w.
* Childs [2010] Andrew M Childs. On the relationship between continuous-and discrete-time quantum walk. _Comm. Math. Phys._ , 294(2):581–603, 2010. doi: 10.1007/s00220-009-0930-1.
* Childs and Wiebe [2012] Andrew M. Childs and Nathan Wiebe. Hamiltonian simulation using linear combinations of unitary operations. _Quantum Inf. Comput._ , 12(11-12):0901–0924, 2012. doi: 10.26421/qic12.11-12.
* Claudino et al. [2021] Daniel Claudino, Bo Peng, Nicholas P. Bauman, Karol Kowalski, and Travis S. Humble. Improving the accuracy and efficiency of quantum connected moments expansions. _Quantum Sci. Technol._ , accepted, 2021. doi: 10.1088/2058-9565/ac0292.
* Cleve et al. [1998] Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca. On quantum algorithms. _Proc. R. Soc. Lond. A_ , 454(1969):339–354, 1998\. doi: 10.1002/(SICI)1099-0526(199809/10)4:1<33::AID-CPLX10>3.0.CO;2-U.
* Colless et al. [2018] J. I. Colless, V. V. Ramasesh, D. Dahlen, M. S. Blok, M. E. Kimchi-Schwartz, J. R. McClean, J. Carter, W. A. de Jong, and I. Siddiqi. Computation of molecular spectra on a quantum processor with an error-resilient algorithm. _Phys. Rev. X_ , 8:011021, 2018. doi: 10.1103/PhysRevX.8.011021.
* Evangelista et al. [2019] Francesco A Evangelista, Garnet Kin-Lic Chan, and Gustavo E Scuseria. Exact parameterization of fermionic wave functions via unitary coupled cluster theory. _J. Chem. Phys._ , 151(24):244112, 2019. doi: 10.1063/1.5133059.
* Fessatidis et al. [2006] Vassilios Fessatidis, Jay D Mancini, Robert Murawski, and Samuel P Bowen. A generalized moments expansion. _Phys. Lett. A_ , 349(5):320–323, 2006. doi: 10.1016/j.physleta.2005.09.039.
* Fessatidis et al. [2010] Vassilios Fessatidis, Frank A Corvino, Jay D Mancini, Robert K Murawski, and John Mikalopas. Analytic properties of moments matrices. _Phys. Lett. A_ , 374(28):2890–2893, 2010. doi: 10.1016/j.physleta.2010.05.010.
* Feynman [1955] R. P. Feynman. Slow electrons in a polar crystal. _Phys. Rev._ , 97:660–665, 1955. doi: 10.1103/PhysRev.97.660.
* Gilyén et al. [2019] András Gilyén, Yuan Su, Guang Hao Low, and Nathan Wiebe. Quantum singular value transformation and beyond: Exponential improvements for quantum matrix arithmetics. In _Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing_ , STOC 2019, page 193–204, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367059. doi: 10.1145/3313276.3316366.
* Gokhale et al. [2020] Pranav Gokhale, Olivia Angiuli, Yongshan Ding, Kaiwen Gui, Teague Tomesh, Martin Suchara, Margaret Martonosi, and Frederic T. Chong. $o(n^{3})$ measurement cost for variational quantum eigensolver on molecular hamiltonians. _IEEE Trans. Qunatum Eng._ , 1:1–24, 2020. doi: 10.1109/TQE.2020.3035814.
* Gonthier et al. [2020] Jérôme F. Gonthier, Maxwell D. Radin, Corneliu Buda, Eric J. Doskocil, Clena M. Abuan, and Jhonathan Romero. Identifying challenges towards practical quantum advantage through resource estimation: the measurement roadblock in the variational quantum eigensolver. _preprint_ , arXiv:2012.04001, 2020. URL https://arxiv.org/abs/2012.04001.
* Grimsley et al. [2019] Harper R Grimsley, Sophia E Economou, Edwin Barnes, and Nicholas J Mayhall. An adaptive variational algorithm for exact molecular simulations on a quantum computer. _Nat. Commun._ , 10(1):1–9, 2019. doi: 10.1038/s41467-019-10988-2.
* Guerreschi and Smelyanskiy [2017] Gian Giacomo Guerreschi and Mikhail Smelyanskiy. Practical optimization for hybrid quantum-classical algorithms. _preprint_ , arXiv:1701.01450, 2017. URL https://arxiv.org/abs/1701.01450.
* Häner et al. [2016] T. Häner, D. S. Steiger, M. Smelyanskiy, and M. Troyer. High performance emulation of quantum circuits. In _SC ’16: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_ , pages 866–874, Nov 2016\. doi: 10.1109/SC.2016.73.
* Hoffmann and Simons [1988] Mark R Hoffmann and Jack Simons. A unitary multiconfigurational coupled-cluster method: Theory and applications. _J. Chem. Phys._ , 88(2):993–1002, 1988. doi: 10.1063/1.454125.
* Huggins et al. [2021] William J. Huggins, Jarrod R. McClean, Nicholas C. Rubin, Zhang Jiang, Nathan Wiebe, K. Birgitta Whaley, and Ryan Babbush. Efficient and noise resilient measurements for quantum chemistry on near-term quantum computers. _npj Quantum Inf._ , 7(1):23, 2021. doi: 10.1038/s41534-020-00341-7.
* Huggins et al. [2020] William James Huggins, Joonho Lee, Unpil Baek, Bryan O’Gorman, and K Birgitta Whaley. A non-orthogonal variational quantum eigensolver. _New J. Phys._ , 22:073009, 2020. doi: 10.1088/1367-2630/ab867b.
* Izmaylov et al. [2020] Artur F. Izmaylov, Tzu-Ching Yen, Robert A. Lang, and Vladyslav Verteletskyi. Unitary partitioning approach to the measurement problem in the variational quantum eigensolver method. _J. Chem. Theory Comput._ , 16(1):190–195, 2020\. doi: 10.1021/acs.jctc.9b00791.
* Kandala et al. [2017] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M. Chow, and Jay M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. _Nature_ , 549:242–246, 2017. doi: 10.1038/nature23879.
* Kandala et al. [2019] Abhinav Kandala, Kristan Temme, Antonio D Corcoles, Antonio Mezzacapo, Jerry M Chow, and Jay M Gambetta. Error mitigation extends the computational reach of a noisy quantum processor. _Nature_ , 567:491–495, 2019. doi: 10.1038/s41586-019-1040-7.
* Knowles [1987] Peter J Knowles. On the validity and applicability of the connected moments expansion. _Chem. Phys. Lett._ , 134(6):512–518, 1987. doi: 10.1016/0009-2614(87)87184-1.
* Kowalski and Peng [2020] Karol Kowalski and Bo Peng. Quantum simulations employing connected moments expansions. _J. Chem. Phys._ , 153(20):201102, 2020. doi: 10.1063/5.0030688.
* Kutzelnigg [1991] Werner Kutzelnigg. Error analysis and improvements of coupled-cluster theory. _Theor. Chim. Acta_ , 80(4-5):349–386, 1991. doi: 10.1007/BF01117418.
* Kyriienko [2020] Oleksandr Kyriienko. Quantum inverse iteration algorithm for programmable quantum simulators. _npj Quantum Inf._ , 6(1):1–8, 2020. doi: 10.1038/s41534-019-0239-7.
* Lee et al. [2019] Joonho Lee, William J. Huggins, Martin Head-Gordon, and K. Birgitta Whaley. Generalized unitary coupled cluster wave functions for quantum computation. _J. Chem. Theory Comput._ , 15(1):311–324, 2019\. doi: 10.1021/acs.jctc.8b01004.
* Luis and Peřina [1996] A Luis and J Peřina. Optimum phase-shift estimation and the quantum description of the phase difference. _Phys. Rev. A_ , 54(5):4564, 1996. doi: 10.1103/PhysRevA.54.4564.
* Mancini et al. [1994] Jay D Mancini, Yu Zhou, and Peter F Meier. Analytic properties of connected moments expansions. _Int. J. Quantum Chem._ , 50(2):101–107, 1994\. doi: 10.1002/qua.560500203.
* Mancini et al. [1995] Jay D Mancini, William J Massano, Janice D Prie, and Yu Zhuo. Avoidance of singularities in moments expansions: a numerical study. _Phys. Lett. A_ , 209(1-2):107–112, 1995. doi: 10.1016/0375-9601(95)00757-2.
* Marrero et al. [2020] Carlos Ortiz Marrero, Mária Kieferová, and Nathan Wiebe. Entanglement induced barren plateaus. _preprint_ , arXiv:2010.15968, 2020. URL https://arxiv.org/abs/2010.15968.
* McArdle et al. [2019] Sam McArdle, Tyson Jones, Suguru Endo, Ying Li, Simon C Benjamin, and Xiao Yuan. Variational ansatz-based quantum simulation of imaginary time evolution. _npj Quantum Inf._ , 5(1):1–6, 2019. doi: 10.1038/s41534-019-0187-2.
* McArdle et al. [2020] Sam McArdle, Suguru Endo, Alan Aspuru-Guzik, Simon C Benjamin, and Xiao Yuan. Quantum computational chemistry. _Rev. Mod. Phys._ , 92(1):015003, 2020. doi: 10.1103/RevModPhys.92.015003.
* McClean et al. [2016] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. _New J. Phys._ , 18(2):023023, 2016. doi: 10.1088/1367-2630/18/2/023023.
* McClean et al. [2018] Jarrod R McClean, Sergio Boixo, Vadim N Smelyanskiy, Ryan Babbush, and Hartmut Neven. Barren plateaus in quantum neural network training landscapes. _Nat. Commun._ , 9(1):1–6, 2018. doi: 10.1038/s41467-018-07090-4.
* Motta et al. [2020] Mario Motta, Chong Sun, Adrian TK Tan, Matthew J O’Rourke, Erika Ye, Austin J Minnich, Fernando GSL Brandão, and Garnet Kin-Lic Chan. Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution. _Nat. Phys._ , 16(2):205–210, 2020. doi: 10.1038/s41567-019-0704-4.
* Nielsen and Chuang [2011] Michael A. Nielsen and Isaac L. Chuang. _Quantum Computation and Quantum Information: 10th Anniversary Edition_. Cambridge University Press, New York, NY, USA, 10th edition, 2011. ISBN 1107002176, 9781107002173. doi: 10.1017/CBO9780511976667.
* Parrish and McMahon [2019] Robert M Parrish and Peter L McMahon. Quantum filter diagonalization: Quantum eigendecomposition without full quantum phase estimation. _preprint_ , arXiv:1909.08925, 2019. URL https://arxiv.org/abs/1909.08925.
* Peeters and Devreese [1984] François M Peeters and Jozef T Devreese. Upper bounds for the free energy. a generalisation of the bogolubov inequality and the feynman inequality. _J. Phys. A: Math. Gen._ , 17(3):625, 1984. doi: 10.1088/0305-4470/17/3/024.
* Peruzzo et al. [2014] Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O’brien. A variational eigenvalue solver on a photonic quantum processor. _Nat. Commun._ , 5:4213, 2014. doi: 10.1038/ncomms5213.
* Pesah et al. [2020] Arthur Pesah, M Cerezo, Samson Wang, Tyler Volkoff, Andrew T Sornborger, and Patrick J Coles. Absence of barren plateaus in quantum convolutional neural networks. _preprint_ , arXiv:2011.02966, 2020. URL https://arxiv.org/abs/2011.02966.
* Poulin et al. [2018] David Poulin, Alexei Kitaev, Damian S. Steiger, Matthew B. Hastings, and Matthias Troyer. Quantum algorithm for spectral measurement with a lower gate count. _Phys. Rev. Lett._ , 121:010501, 2018. doi: 10.1103/PhysRevLett.121.010501.
* Preskill [2018] John Preskill. Quantum computing in the nisq era and beyond. _Quantum_ , 2:79, 2018. doi: 10.22331/q-2018-08-06-79.
* Prie et al. [1994] Janice D Prie, D Schwall, Jay D Mancini, D Kraus, and William J Massano. On the relation between the connected-moments expansion and the lanczos variational scheme. _Nuov. Cim. D_ , 16(5):433–448, 1994. doi: 10.1007/BF02463732.
* Romero et al. [2018] Jonathan Romero, Ryan Babbush, Jarrod R McClean, Cornelius Hempel, Peter J Love, and Alán Aspuru-Guzik. Strategies for quantum computing molecular energies using the unitary coupled cluster ansatz. _Quantum Sci. Technol._ , 4(1):014008, 2018. doi: 10.1088/2058-9565/aad3e4.
* Rubin et al. [2018] Nicholas C Rubin, Ryan Babbush, and McClean Jarrod. Application of fermionic marginal constraints to hybrid quantum algorithms. _New J. Phys._ , 20:053020, 2018. doi: 10.1088/1367-2630/aab919.
* Schuld et al. [2019] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. _Phys. Rev. A_ , 99:032331, 2019. doi: 10.1103/PhysRevA.99.032331.
* Seeley et al. [2012] Jacob T. Seeley, Martin J. Richard, and Peter J. Love. The bravyi-kitaev transformation for quantum computation of electronic structure. _J. Chem. Phys._ , 137(22):224109, 2012. doi: 10.1063/1.4768229.
* Seki and Yunoki [2021] Kazuhiro Seki and Seiji Yunoki. Quantum power method by a superposition of time-evolved states. _PRX Quantum_ , 2:010333, 2021. doi: 10.1103/PRXQuantum.2.010333.
* Shen et al. [2017] Yangchao Shen, Xiang Zhang, Shuaining Zhang, Jing-Ning Zhang, Man-Hong Yung, and Kihwan Kim. Quantum implementation of the unitary coupled cluster for simulating molecular electronic structure. _Phys. Rev. A_ , 95:020501, 2017. doi: 10.1103/PhysRevA.95.020501.
* Shor [1999] Peter W Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. _SIAM Rev._ , 41(2):303–332, 1999. doi: 10.1137/S0036144598347011.
* Soldatov [1995] Andrey V Soldatov. Generalized variational principle in quantum mechanics. _Int. J. Mod. Phys. B_ , 9(22):2899–2936, 1995\. doi: 10.1142/S0217979295001087.
* Stokes et al. [2020] James Stokes, Josh Izaac, Nathan Killoran, and Giuseppe Carleo. Quantum Natural Gradient. _Quantum_ , 4:269, 2020. doi: 10.22331/q-2020-05-25-269.
* Taube and Bartlett [2006] Andrew G Taube and Rodney J Bartlett. New perspectives on unitary coupled-cluster theory. _Int. J. Quantum Chem._ , 106(15):3393–3401, 2006\. doi: 10.1002/qua.21198.
* Ullah [1995] Nazakat Ullah. Removal of the singularity in the moment-expansion formalism. _Phys. Rev. A_ , 51(3):1808, 1995. doi: 10.1103/PhysRevA.51.1808.
* Uvarov and Biamonte [2021] Alexey Uvarov and Jacob Biamonte. On barren plateaus and cost function locality in variational quantum algorithms. _J. Phys. A: Math. and Theo._ , 54:245301, 2021. doi: 10.1088/1751-8121/abfac7.
* Vallury et al. [2020] Harish J. Vallury, Michael A. Jones, Charles D. Hill, and Lloyd C. L. Hollenberg. Quantum computed moments correction to variational estimates. _Quantum_ , 4:373, 2020. doi: 10.22331/q-2020-12-15-373.
* Wang et al. [2020] Samson Wang, Enrico Fontana, Marco Cerezo, Kunal Sharma, Akira Sone, Lukasz Cincio, and Patrick J Coles. Noise-induced barren plateaus in variational quantum algorithms. _preprint_ , arXiv:2007.14384, 2020. URL https://arxiv.org/abs/2007.14384.
* Wecker et al. [2015] Dave Wecker, Matthew B. Hastings, and Matthias Troyer. Progress towards practical quantum variational algorithms. _Phys. Rev. A_ , 92:042303, 2015. doi: 10.1103/PhysRevA.92.042303.
* Wierichs et al. [2020] David Wierichs, Christian Gogolin, and Michael Kastoryano. Avoiding local minima in variational quantum eigensolvers with the natural gradient optimizer. _Phys. Rev. Research_ , 2:043246, 2020. doi: 10.1103/PhysRevResearch.2.043246.
* Yamamoto [2019] Naoki Yamamoto. On the natural gradient for variational quantum eigensolver. _preprint_ , arXiv:1909.05074, 2019. URL https://arxiv.org/abs/1909.05074.
* Yen et al. [2020] Tzu-Ching Yen, Vladyslav Verteletskyi, and Artur F. Izmaylov. Measuring all compatible operators in one series of single-qubit measurements using unitary transformations. _J. Chem. Theory Comput._ , 16(4):2400–2409, 2020\. doi: 10.1021/acs.jctc.0c00008.
* Yuan et al. [2019] Xiao Yuan, Suguru Endo, Qi Zhao, Ying Li, and Simon C. Benjamin. Theory of variational quantum simulation. _Quantum_ , 3:191, 2019. doi: 10.22331/q-2019-10-07-191.